The new AI paper proposes rules for addressing future risks and opportunities, so that businesses have clarity on how they can build and use AI systems, while consumers know that the systems are safe and robust.


The British government has set out proposals for a new AI rulebook. (Credit: Computerizer from Pixabay)

The British government has put forward proposals for a new rulebook for artificial intelligence (AI) in the UK, aiming to unleash innovation and increase public trust in the technology.

According to the UK Department for Digital, Culture, Media & Sport (DCMS), the new plans for regulating the use of the technology will help develop consistent rules across sectors.

The paper's publication coincides with the introduction of the Data Protection and Digital Information Bill to Parliament. The bill aims to transform the country's data laws to ramp up innovation in AI and other technologies.

DCMS said that the new AI paper sets out the government's approach to regulating the technology in the country. It proposes rules for addressing future risks and opportunities, so that businesses have clarity on how they can build and use AI systems, while consumers know that the systems are safe and robust.

UK Digital Minister Damian Collins said: “We want to make sure the UK has the right rules to empower businesses and protect people as AI and the use of data keeps changing the ways we live and work.

“It is vital that our rules offer clarity to businesses, confidence to investors and boost public trust. Our flexible approach will help us shape the future of AI and cement our global position as a science and tech superpower.”

The department revealed that the approach is based on six key principles to be applied by regulators such as Ofcom, the Financial Conduct Authority, the Competition and Markets Authority, the Information Commissioner's Office, and the Medicines and Healthcare products Regulatory Agency.

The core principles include ensuring that AI is used safely, is technically secure, functions as designed, and is appropriately transparent and explainable. The remaining principles require developers and users to consider fairness, clarify routes to redress or contestability, and identify a legal person to be responsible for the technology.

Regulators will have flexibility in how they implement the principles in ways that best suit the use of AI in their respective sectors.

Last week, the UK Defence Science and Technology Laboratory (Dstl) and the Alan Turing Institute jointly created the Defence Centre for AI Research to study problems related to advancing AI capability.