There is a well-defined process for making sure a person is safe to drive on UK roads: learning the Highway Code, passing a theory test and then a practical exam. But how can we be sure an AI is safe to be embedded in an autonomous vehicle? Ben Peters, co-founder of British driverless car start-up FiveAI, discusses the challenges and potential solutions with Sam Forsdick

FiveAI driverless car

When motorists hear sirens behind them while driving, they know they must move out of the way – even mounting a pavement or veering on to the wrong side of the road if necessary – but would a driverless car do the same?

This is the type of question being asked by the UK’s Law Commission in a new consultation paper on regulating autonomous vehicles, which argues that breaking the law might sometimes be necessary for them to stay safe.

As autonomous technology continues to gather pace, it’s important that regulation advances at the same rate to keep industry safety standards high.

Ben Peters, co-founder of UK driverless car technology start-up FiveAI, is pleased the law reform body is “stepping up”, and says it adopted some of FiveAI’s ideas during the consultation period.

He says: “Autonomous vehicles are one of those technologies that, unlike social media or other internet-based technologies, has the potential to cause accidents and injure people.

“We need regulation in place before we even start to deploy these vehicles and then we need to iterate over time.”

But regulating autonomous vehicles is not as simple as teaching them the Highway Code.

“These are deeply complex systems and it’s always difficult to figure out how to regulate these systems without limiting the outcome,” Ben says.

“It is a fine line that these regulators have to walk to make sure they don’t stifle innovation but also to limit any potential downsides.”

 

The first step to regulating autonomous vehicles: Creating a ‘digital highway code’

One of the key conclusions from the paper was the need to develop a “digital highway code”.

The Law Commission found several challenges in translating the “analogue” legal rules of the Highway Code for machines, and highlighted the need for a much more precise digital version to govern the actions of highly automated vehicles.

Autonomous vehicles must first be tested in simulations before they can make it onto the roads (Credit: FiveAI)

Ben says: “The Highway Code is designed for humans and by the time anyone uses it they have built up at least 17 years of life experience and common sense.”

The commission found that under certain circumstances, autonomous vehicles would need to be able to use “common sense” and break the law in order to drive safely, including mounting the pavement, exceeding the speed limit and edging through pedestrians.

Ben says: “The classic example would be when a human driver pulls into a cycle lane or onto a pavement to give way to an ambulance.”

Programming the Highway Code into the AI systems as a simple set of rules, which self-driving cars must adhere to, could cause more problems than it solves.

To get around this, Ben explains that FiveAI programs these events as “costs” in its own technology, which must be weighed up against one another.

“You could say it’s a high-cost activity to drive on the pavement, but an even higher-cost activity not to give way to an ambulance – and an even higher-cost activity still to cause an accident,” he explains.

“Then, out of the options available, the vehicle would choose not to cause an accident as the primary option, but also to give way to ambulances by occupying the pavement when it’s safe to do so.”
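To make the idea concrete, here is a minimal sketch of how such cost-based manoeuvre selection might look in code, following Ben’s ambulance example. The candidate manoeuvres, cost values and weighting are illustrative assumptions, not FiveAI’s actual system or parameters.

```python
# Illustrative sketch of cost-based manoeuvre selection.
# Rule violations carry finite costs; worse outcomes carry far higher ones.
COSTS = {
    "mount_pavement": 10.0,        # breaks the Highway Code, but mildly
    "block_ambulance": 100.0,      # failing to give way is far worse
    "cause_accident": 10_000.0,    # effectively prohibitive
}

# Each candidate manoeuvre is scored by summing the costs it would incur.
candidates = {
    "stay_in_lane": ["block_ambulance"],
    "pull_onto_pavement": ["mount_pavement"],
    "swerve_into_traffic": ["cause_accident"],
}

def total_cost(violations):
    return sum(COSTS[v] for v in violations)

# Pick the manoeuvre with the lowest total cost.
best = min(candidates, key=lambda m: total_cost(candidates[m]))
print(best)  # -> "pull_onto_pavement": the cheapest rule-break on offer
```

Framing the rules as costs rather than hard constraints is what lets the vehicle trade a minor violation, such as mounting the pavement, against a far worse outcome like blocking an ambulance.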

This “digital highway code” would form the basis of any testing system for regulation.

 

Creating a driving test for driverless cars will help with regulating autonomous vehicles

To make sure an autonomous vehicle is “statistically safe” through real-world testing alone, manufacturers and developers would need to drive tens of billions of miles, Ben explains.

He adds: “No company can do this and you’d still have a risk of causing problems.

“The only safe way to do that is through simulation.”

Ben suggests a set of scenarios – a broad range of challenging driving conditions and events – should be used to test autonomous vehicles before they are certified.
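A hypothetical sketch of what such scenario-based certification testing could look like is below; the scenario names, pass threshold and simulator stub are assumptions made for illustration, not any regulator’s actual test suite.

```python
# Hypothetical sketch: run a driving policy through a library of simulated
# scenarios and require a pass rate above a regulator-set threshold.
import random

SCENARIOS = ["ambulance_approach", "child_runs_out", "icy_roundabout",
             "cyclist_undertaking", "fog_on_motorway"]

def simulate(policy, scenario, seed):
    """Stand-in for a real physics simulation; returns True if the run
    ends without a collision. Here a small random failure rate mimics
    rare edge cases."""
    random.seed(hash((scenario, seed)))
    return policy(scenario) and random.random() > 0.001

def certify(policy, runs_per_scenario=1000, threshold=0.995):
    for scenario in SCENARIOS:
        passes = sum(simulate(policy, scenario, s)
                     for s in range(runs_per_scenario))
        if passes / runs_per_scenario < threshold:
            return False  # fails certification on this scenario
    return True

# A trivial policy that attempts every scenario, for demonstration only.
print(certify(lambda scenario: True))  # -> True for this toy set-up
```

Simulation makes this tractable: each scenario can be replayed thousands of times with varied conditions, which no fleet of physical test vehicles could match.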

He stresses the need for an independent third-party regulator to set a “very high bar to safety”.

“The safety threshold needs to be sufficiently high to get public trust, and that should be the role of the regulator in that space,” Ben says.

He adds: “We really think these things should be as safe as human drivers and, over time, much safer.

“We’re not building cars that are smarter than humans – anyone can drive, it’s not the hardest thing a human does.

“The problem is that humans are very bad at paying attention – that is the root cause of more than 90% of accidents – and that’s where our system wins.”

 

The need for ethical AI in autonomous vehicles

Discussions of autonomous vehicles have strayed into the realm of ethics, with the scientific journal Nature publishing a paper titled “The Moral Machine experiment”.

Some 2.3 million people from across the world participated in a survey where they were given morally difficult driving scenarios.

One such question asked whether a driver should hit pedestrians on the road or swerve into a barrier, running the risk of killing themselves and the passengers on board.

The authors of the paper suggested that these were issues that autonomous vehicle AI could have to face.

Scenarios such as the one above were given to people across the world to determine what course of action a driverless car should take (Credit: Moral Machine)

When shown the results, German car maker Audi says the survey “could help prompt an important discussion about these issues”.

However, Ben suggests that driverless cars should never get themselves into a situation where the only available options would each result in fatalities.

He says: “We start with guiding principles, one of which is that all life is equal regardless of age, sex, gender – but also irrespective of whether the person is inside or outside the car.

“If the situation was that we were equally certain there were two people on the pavement and one person on the road, then under those conditions our vehicle would stay on the road because it would be acting to minimise the loss of human life.

“But the reality is that we design the system to never get into such a conundrum.”
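As a rough illustration of that guiding principle – and not FiveAI’s actual code – the decision Ben describes reduces to picking the path with the fewest expected fatalities when the certainties are equal. The values here are hypothetical.

```python
# Toy illustration: with equal certainty about each group, choose the
# path that minimises expected loss of human life.
expected_fatalities = {"stay_on_road": 1, "swerve_onto_pavement": 2}
choice = min(expected_fatalities, key=expected_fatalities.get)
print(choice)  # -> "stay_on_road"
```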

The AI systems are constantly looking for “safe escape routes” in case the road ahead becomes blocked or the situation suddenly becomes dangerous.

Ben adds: “If we don’t have good escape routes, we moderate the speed of the vehicle so the idea is that we should never find ourselves in such a situation.”
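A minimal sketch of that speed-moderation idea follows; the braking model and the escape-route check are hypothetical stand-ins for what would, in reality, be a full motion planner.

```python
# Illustrative sketch of the "safe escape route" idea: if no escape route
# exists at the current speed, lower the target speed until one does.

def has_safe_escape_route(speed_mph, free_road_ahead_m):
    """Stub: assume an escape exists if the stopping distance fits within
    the free road ahead, using a very rough Highway Code-style model
    (thinking distance ~0.3 m per mph, braking distance ~v^2/20 m)."""
    stopping_distance_m = speed_mph * 0.3 + (speed_mph ** 2) / 20
    return stopping_distance_m < free_road_ahead_m

def choose_speed(max_speed_mph, free_road_ahead_m):
    # Moderate speed downwards until a safe escape route exists.
    speed = max_speed_mph
    while speed > 0 and not has_safe_escape_route(speed, free_road_ahead_m):
        speed -= 1
    return speed

print(choose_speed(70, 50))  # with 50m of free road, the car slows to ~28mph
```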