Simon Field, CTO of cloud computing company Snowflake, claims that despite the huge potential of AI, the issue of bias must be overcome first

AI bias can stand in the way of the advantages the new tech could bring (Credit: Gerd Leonhard/Flickr)


There has been an undoubted buzz about the potential of artificial intelligence and its impact on everyday society, but it seems that AI-powered projects are all too prone to failing the “ethics test”, according to Simon Field, CTO of Snowflake.


AI algorithms have amplified the social, ethnic and gender biases inherent in the data that informs them: biased data inevitably translates into wrong answers.

The responsibility lies with the data-driven organisations at the forefront of the AI revolution to develop models, and the ethics behind them, with transparency, equality and trust at their heart. These qualities are critical to delivering fair and sound AI-driven solutions.

According to PwC, by 2030 at least 5% of UK GDP will be generated as a result of AI, meaning a huge number of UK businesses will be integrating AI-based solutions on a mass scale over the next few years.

To ensure this is done correctly, models must be repeatedly tested and refined to combat any possible bias and to make sure the technology justifies the initial investment.
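As a minimal illustration of one such test, and not a prescription from the article, the Python sketch below compares a model's positive prediction rate across groups of a protected attribute (the column names are hypothetical); a large gap between groups is a common early warning sign that a model needs further work.

```python
import pandas as pd

def positive_rate_by_group(predictions: pd.Series, groups: pd.Series) -> pd.Series:
    """Share of positive predictions for each group of a protected attribute."""
    return predictions.groupby(groups).mean()

def demographic_parity_gap(predictions: pd.Series, groups: pd.Series) -> float:
    """Largest difference in positive prediction rates between any two groups."""
    rates = positive_rate_by_group(predictions, groups)
    return float(rates.max() - rates.min())

# Hypothetical example: model outputs and a 'gender' column from a test set.
df = pd.DataFrame({
    "prediction": [1, 0, 1, 1, 0, 0, 1, 0],
    "gender":     ["m", "m", "m", "m", "f", "f", "f", "f"],
})

gap = demographic_parity_gap(df["prediction"], df["gender"])
print(f"Demographic parity gap: {gap:.2f}")  # flag for review if above an agreed threshold
```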

Without this consideration, biased data will never be challenged.

A data set might be unrepresentative or outdated, or it may be affected by the conscious or unconscious biases of the people who selected it or general societal behaviours.


The origins of AI bias

When AI models are created, they reflect the data used to train them and are often ill-equipped to handle situations that differ from the norms in that data.

One possible reason for the inherent bias in some AI models is a lack of diversity within the teams that build them.

If a technology team consists predominantly of men, the models it builds may be biased towards men.

Simon Field, CTO of cloud-based data firm Snowflake

This was highlighted recently by YouGov, which found that close to two-thirds of women say voice assistants such as Siri and Alexa have difficulty responding to their commands. A likely explanation is that these systems are built predominantly by men and are therefore better tuned to recognise male voices.

Developers should be looking to diversify STEM research groups, particularly where data is involved, to ensure that the data fed into AI models reflects the wider population.

When these biases arise within AI, the errors can be harder to spot, given the complexity and speed of the algorithms and the fact that outsiders often lack the technical know-how to examine the logic.

Continually testing and altering models makes bias easier to spot, even for the untrained eye, and gradually removes it from AI models.


Overestimating the ability of AI

Humans tend to overestimate the capabilities of AI applications and rely too heavily on the technology, partly through a lack of understanding and partly through an instinct to hand over responsibility whenever something, or someone, apparently more knowledgeable enters the equation.

For example, many judges in the US now routinely get an AI recommendation before ruling on bail, punishment, and parole.

These recommendations are supposed to be strictly advisory, but the temptation to treat the computer’s clear answer as definitive can be strong.

Worse, some of the algorithms in use appear incapable of sifting out systemic racial bias, placing minority defendants at an unfair disadvantage.

Recently, facial recognition software was trialled by the Metropolitan Police to identify criminals in various areas of London.

Around 96% of the matches the software flagged were reportedly false positives, showing that, as well as bias, one of the main problems with AI technology is that it is still very much in its infancy.

Until more testing and accurate models are developed, it’s unlikely to be rolled out on a larger scale.


Accounting for data security when using AI

Any inaccuracies in AI decision-making will be particularly consequential in today's climate of heightened concern over data security and privacy.

Organisations must be extra cautious with new regulatory measures in place and must avoid any misuse of AI.

For example, GDPR bars the “processing of personal data revealing racial or ethnic origin, political opinions, religious and philosophical beliefs… or sexual orientation.”

Violators can be fined up to 4% of global revenue. The problem is serious enough that US market analysis firm IDC expects spending on AI governance and compliance staff to double in the next two years.

There is also the consideration that AI is still relatively new, and many people remain understandably sceptical of it.

Elon Musk sees Neuralink, his concept for a human-AI hybrid, as an answer to a lot of the world’s problems (Credit: YouTube)

Elon Musk famously described AI as humanity’s “biggest existential threat”.

In a recent consumer sentiment survey on technology and data privacy, 62% of respondents said they disagree with AI being used to make decisions in areas such as criminal justice and healthcare.


How to tackle AI bias?

From a business perspective, the benefits of AI are clear: companies that deploy AI applications can save time and money by reducing the need for manual processes and labour.

They can also address wider concerns by insisting on fairness and transparency from the start.

Most importantly, organisations must be vigilant about the quality of their data and establish rigorous data quality standards and controls for the data used to develop AI models.
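The article does not specify what such standards and controls look like in practice. As one hedged sketch, assuming a tabular dataset with a hypothetical 'ethnicity' column as the protected attribute, the checks below flag missing values, duplicate rows and under-represented groups before the data is used for training.

```python
import pandas as pd

def data_quality_report(df: pd.DataFrame, protected_column: str) -> dict:
    """Basic quality controls to run before a dataset is used to train a model."""
    return {
        "row_count": len(df),
        "duplicate_rows": int(df.duplicated().sum()),
        "missing_values_per_column": df.isna().sum().to_dict(),
        # Share of each group, to flag under-representation in the training data.
        "group_shares": df[protected_column].value_counts(normalize=True).to_dict(),
    }

# Hypothetical example with an 'ethnicity' column as the protected attribute.
sample = pd.DataFrame({
    "age": [34, 51, 29, None, 42],
    "ethnicity": ["a", "a", "a", "b", "a"],
})
print(data_quality_report(sample, "ethnicity"))
```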

This can be achieved by focusing on three main areas — data governance, making AI explainable and achieving full transparency.

In terms of data governance, every company needs a clear, transparent data policy, and the multi-user nature of the cloud presents a unique opportunity to expand the availability of owned data to power AI models in a more transparent way.

The ability to maintain a history and version control of the exact data used as input to each iteration and refinement of the model is essential for traceability and often mandatory in regulated industries.
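One lightweight way to keep such a history, sketched here with hypothetical file names rather than any particular tooling, is to record a content hash of the exact data file used for each model iteration in an append-only registry.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def dataset_fingerprint(path: Path) -> str:
    """Content hash of the training data file, so the exact input can be traced later."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def record_training_run(data_path: Path, model_version: str, registry: Path) -> None:
    """Append a traceability record linking a model iteration to its input data."""
    entry = {
        "model_version": model_version,
        "data_file": str(data_path),
        "data_sha256": dataset_fingerprint(data_path),
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
    with registry.open("a") as f:
        f.write(json.dumps(entry) + "\n")

# Hypothetical usage: log which data snapshot trained model v1.3.
# record_training_run(Path("training_data.csv"), "v1.3", Path("model_registry.jsonl"))
```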

Secondly, advanced machine learning algorithms, such as deep learning neural networks, are incredibly powerful but lack transparency, and thus carry regulatory, ethical and adoption risks.

We must develop and use AI explainability tools, and partner with risk and compliance professionals early in the development process, so that no models are deployed without proper vetting and approval.
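The article does not name specific explainability tools; one widely used, model-agnostic technique is permutation importance, illustrated below with scikit-learn on synthetic data to show how much each input feature drives a model's predictions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real dataset.
X, y = make_classification(n_samples=500, n_features=6, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Model-agnostic check: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```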

The final concern with AI models is that they are developed behind the thick walls of secrecy of a select few private companies.

A good example of a more transparent approach is OpenAI, co-founded by Elon Musk, an organisation that fosters openness of research and collaboration in the space, under the premise that the more data you have at your disposal to build your models, the fairer and more powerful those models will be.

The bottom line is that AI is, without doubt, going to be part of our future. Instead of shying away from the responsibility, companies need to take charge and ensure they are approaching its development responsibly, under a solid framework of ethics, data governance, and transparency.