The European Commission consulted independent experts to come up with a seven-point strategy for creating trustworthy AI
The European Commission has released its guidelines for creating trustworthy and ethical AI.
The executive arm of the European Union consulted an independent panel of experts to help draft the requirements, which aim to promote the development of human-centric AI systems.
It determined that trustworthy AI must have three components – it should be lawful, ethical and technically robust.
Andrus Ansip, European commissioner for the digital single market, said: “The ethical dimension of AI is not a luxury feature or an add-on – it is only with trust that our society can fully benefit from technologies.
“Ethical AI is a win-win proposition that can become a competitive advantage for Europe – being a leader of human-centric AI that people can trust.”
What are the European Commission’s seven guidelines for trustworthy AI?
The European Commission believes AI can benefit a wide range of sectors such as healthcare, energy production and business, but argues that any AI technology should meet seven essential standards to be deemed “trustworthy”.
- Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.
- Robustness and safety: Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.
- Privacy and data governance: Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.
- Transparency: The traceability of AI systems should be ensured.
- Diversity, non-discrimination and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.
- Societal and environmental well-being: AI systems should be used to foster positive social change and enhance sustainability and ecological responsibility.
- Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.
These seven requirements should be met throughout the development, deployment and use of AI systems, according to the European Commission.
Why does AI need to be trustworthy?
The new guidelines aim to allay growing public concerns over AI technology.
The European Tech Insights 2019 report revealed that European citizens were becoming increasingly anxious about the growing role of technology in society.
The use of AI systems to make important decisions has placed added pressure on the need for an ethical framework.
Speaking at an event attended by Compelo last month, Dr Adrian Weller, director for AI at The Alan Turing Institute, said: “As we move out of use cases in the consumer landscape that don’t have such significant impacts on our lives and role as citizens, to use cases in business that have significant implications and consequences, ethical considerations need to be made.”
These use cases include recruitment, where AI can be used to filter through CVs and shortlist candidates; the criminal justice system, where algorithms can advise on maximum sentences; and financial services, where data analysis is used to screen people for loans.
Concerns surrounding the use of AI algorithms for these tasks were highlighted in recent examples such as Amazon’s “sexist” recruitment AI, which taught itself to exclude CVs that mentioned the word “women”.
Similarly, data-powered algorithms in the US criminal justice system – where a tool known as COMPAS is used to estimate the likelihood of a defendant reoffending – have produced mixed results.
The European Commission hopes its seven guidelines for trustworthy AI will help prevent such issues arising in the future.
The UK government is also conducting its own investigation into the impact of AI bias in society ahead of introducing potential new regulation.