AI

The future is bright, the future is Artificial Intelligence, but some are calling for regulation to curb potential misuse.

Artificial Intelligence (AI) is the development of computer systems able to perform tasks that usually require human intelligence, such as speech recognition, decision-making, or visual perception.

The simple fact is that AI outperforms humans at multiple tasks. Whether searching the web, translating between multiple languages at once, delivering medical diagnoses or even earning a PhD, AI proves more efficient.

Although there is little dispute among experts that AI's major merits should be encouraged through investment, training and education, the extent to which it should be regulated is widely debated.

Experts suggest that before imposing regulatory measures, authorities must consider that arbitrary regulation risks slowing down the progress of AI.

Multiple bodies have produced reports on the issue. In April 2016 the European Union (EU) released a publication documenting plans to regulate AI by 2018.

Aside from the predictable points regarding protection of personal information, the EU highlighted two common concerns: fairness and transparency.

Fairness and freedom from discrimination are hot topics in relation to AI. A senior research scientist at Google states: ‘A vetted methodology in machine learning for preventing discrimination based on sensitive attributes has been lacking.’

Controversies have included voice recognition software that failed to recognise women's voices, a crime prediction algorithm that targeted black neighbourhoods, and an online ad platform that was more likely to show highly paid executive jobs to men.

Like every child, machine learning algorithms pick up the biases of those who rear them; in other words, if the creators are biased, the software will be too.
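
To make the mechanism concrete, here is a minimal sketch in plain Python. The hiring records, group names and 50% threshold are all hypothetical toy assumptions; the point is only that a model fitted to biased historical decisions reproduces that bias at prediction time.

```python
# Minimal sketch with hypothetical toy data: a model fitted to biased
# historical hiring decisions reproduces that bias when predicting.
from collections import defaultdict

# Past decisions as (group, hired) pairs; the labels encode human bias.
history = ([("male", True)] * 80 + [("male", False)] * 20
           + [("female", True)] * 20 + [("female", False)] * 80)

# "Training": record each group's historical hire rate, standing in for
# what a real learner would absorb from any feature correlated with group.
totals, hires = defaultdict(int), defaultdict(int)
for group, hired in history:
    totals[group] += 1
    hires[group] += hired  # True counts as 1

def predict(group):
    # Predict "hire" when the group's historical rate exceeds 50%.
    return hires[group] / totals[group] > 0.5

print(predict("male"))    # True  -- the past bias is now the model
print(predict("female"))  # False
```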

A prime example of this in action was a beauty contest judged by AI: when entrants submitted images to be rated on attractiveness, the winners were overwhelmingly Caucasian.

One researcher explained that the discriminatory results arose because the image samples used to train the algorithms were not racially balanced.

Similarly, a language processing algorithm judged ‘white-sounding’ names more pleasant than others.

The general consensus is that the best way to tackle this issue is to insist that developers start from a more diverse dataset.

Creating such a shared, regulated database of samples would prevent any party from intentionally or unintentionally skewing the results.
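
A first step toward such a database could be a simple balance audit run before training. The sketch below is illustrative only: the sample records and the 50% dominance threshold are assumptions, not part of any actual proposal.

```python
# Hypothetical sketch: audit a training set's demographic balance
# before any model is trained on it.
from collections import Counter

# Toy records of (subject_group, attractiveness_label) -- assumed data.
training_samples = [
    ("caucasian", 1), ("caucasian", 1), ("caucasian", 0),
    ("asian", 1), ("african", 0),
]

counts = Counter(group for group, _ in training_samples)
total = sum(counts.values())

for group, n in counts.items():
    share = n / total
    print(f"{group}: {share:.0%}")
    if share > 0.5:  # dominance threshold chosen purely for illustration
        print(f"  warning: '{group}' dominates the sample")
```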

There is less clarity over a solution for the second concern, namely transparency.

A key reason developers are reluctant to accept such regulation is that the same is not asked of human decision-making.

For example, when a bank rejects a loan application, the individual has no authority to demand a detailed explanation, even though there is little doubt that bias enters the decision-making process.

One area where this debate continues is driverless, or autonomous, cars.

Google’s driverless car (courtesy of Wikimedia Commons)

Deaths in road traffic accidents are often attributed to human error, yet many instinctively demand an explanation whenever AI fails, despite the fact that these systems are not yet perfect.

More crucially, experts argue that attempts to produce a transparent explanation are bound to fail: the millions of variables involved mean that simple explanations for AI decisions are simply not feasible.
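
A back-of-the-envelope sketch shows the scale of the problem. The layer widths below describe a hypothetical, fairly small fully connected network, yet even it carries hundreds of thousands of weights, none of which maps to a human-readable reason.

```python
# Rough sketch: count the parameters of a small, hypothetical
# fully connected network to show why case-by-case explanations fail.
layers = [1024, 512, 256, 10]  # assumed layer widths

weights = sum(a * b for a, b in zip(layers, layers[1:]))
biases = sum(layers[1:])
print(f"parameters: {weights + biases:,}")  # 658,698 for this toy net
```

Production systems are larger still, often running to millions or billions of parameters, which is why many experts doubt that simple explanations will ever fall out of them.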

There may come a day when AI systems become intelligent enough to explain their own behaviour and amend their own errors; until then, many are urging that a regulatory compromise be found.