Should autonomous cars think for themselves?

Connected autonomous vehicles are on the road to becoming a reality, with Tesla’s Autopilot already making headlines (for all the wrong reasons) and Ford aiming to be the first to have driverless vehicles – with no steering wheels or pedals – on highways by 2021.

But a number of barriers remain before autonomous vehicles can become mainstream, and they are not technical challenges. They are legal, ethical and political. Questions that need answering include:

· Who is responsible if two autonomous vehicles made by competing firms, each running a different operating system, collide?

· What are the ethical considerations if a smart vehicle has to choose between contributing to a life-threatening pile-up and saving its occupants by running over a pedestrian or small child? Who lives, who dies – and who is responsible?

· Who will take the blame if your in-car data gets hacked? You, the car maker, or the software developer?

· Who owns the data created by your vehicle: you? Transit authorities? Car manufacturers? And what about data privacy?

· If big-data insights gathered by combining data from multiple vehicles create a revenue stream, who owns that revenue?

Autonomous vehicles will clearly bring major benefits: reduced congestion and emissions, improved safety and a decrease in theft. But these problems will not be wiped out entirely: accidents will happen, there will still be emissions for the foreseeable future and crime is unavoidable. Responsibility must still be apportioned.

Steps to a legal framework

Some governments are taking steps to create a legal framework. In February 2016, the US Department of Transportation’s National Highway Traffic Safety Administration (NHTSA) said the AI in Google’s self-driving car should be considered the legal “driver” under certain federal standards.

This is very important, as it means the onus of responsibility in any accident shifts from the driver to the vehicle’s OS provider. However, in some US jurisdictions (Pittsburgh among them) the law says that while it’s OK to use an autonomous vehicle, there must still be someone with a driving license on board. And so the anomalies go on.

When looking at some of these scenarios, it’s important to note that not every autonomous car will behave in precisely the same way.

“Not all SDS (Self Driving Systems) will be created equal,” warned legal experts Shearman & Sterling in a client paper made available to the author. “There will be multiple providers, using different hardware and each making different decisions to shape the AI of autonomous driving features. For example, how “sensitive” or “aggressive” might the AI be, or what “ethical” choices might be programmed into it? The assignment of liability will involve many new questions such as these. Also, if every accident requires evaluation of any SDS involved, developers will need strategies to protect their intellectual property and trade secrets.”
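To make the quoted point concrete, here is a minimal sketch of how two providers might tune the same driving feature differently. Everything in it – the class, the parameter names and the values – is invented for illustration and does not come from any real SDS vendor.

```python
from dataclasses import dataclass

@dataclass
class SDSPolicy:
    """Hypothetical tuning knobs an SDS vendor might expose."""
    min_following_gap_s: float  # how "aggressive": a smaller gap is more assertive
    overtake_threshold: float   # confidence required before committing to an overtake
    ethics_mode: str            # e.g. "protect_occupants" vs "minimize_total_harm"

# Two notional vendors shipping the same feature with different settings:
vendor_a = SDSPolicy(min_following_gap_s=2.0, overtake_threshold=0.9,
                     ethics_mode="protect_occupants")
vendor_b = SDSPolicy(min_following_gap_s=1.2, overtake_threshold=0.7,
                     ethics_mode="minimize_total_harm")

def may_overtake(policy: SDSPolicy, confidence: float) -> bool:
    # The same sensor confidence yields different behaviour per vendor -
    # exactly the kind of divergence that complicates liability.
    return confidence >= policy.overtake_threshold

print(may_overtake(vendor_a, 0.8))  # False: vendor A is more conservative
print(may_overtake(vendor_b, 0.8))  # True: vendor B accepts more risk
```

If both cars meet the same situation and only one of them overtakes, any post-accident evaluation has to reconstruct which vendor’s choices were in play – the intellectual property problem Shearman & Sterling flag.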

Industry must work together

As their understanding of these less obvious complexities grows, stakeholders from traditional auto manufacturers and Silicon Valley alike are beginning to realize that they must cooperate to address these concerns.

There are attempts to implement comprehensive data privacy and cybersecurity agreements, and to develop market strategies that raise data privacy and cybersecurity to core status during product development.

In 2014, the Association of Global Automakers voluntarily adopted the Consumer Privacy Protection Principles for Vehicle Technologies and Services – and in 2015 automakers established the Automotive Information Sharing and Analysis Center (Auto-ISAC) to facilitate the exchange of cybersecurity threat information.

Because the legal framework surrounding this nascent industry is still developing, addressing such matters will become increasingly important – especially as manufacturers prepare to sell driverless cars to customers on the promise of convenience rather than on the challenges of undefined risk and responsibility.

Dilemmas

The challenge is the sheer scale of intangibles at stake. For example, if two autonomous vehicles have an unavoidable collision, what happens next? Ordinarily, most human drivers attempt to take control of their vehicle in order to avoid further injury. Would driverless cars react in the same way?

In theory, the connected vehicle would also prioritize the welfare of its occupant(s), but what happens if the best direction to take the car also happens to be towards a school bus? The AI may recognize the need to save other lives and instead choose a less optimal post-collision manoeuvre, potentially harming its own passengers. And how would the car’s intelligence know whether the bus was empty? Those questions formed part of a study by Azim Shariff of the Culture and Morality Lab at the University of Oregon last year. His conclusion? Ultimately each vehicle will prioritize the welfare of its occupant(s) above that of anyone else, because eventually every vehicle will be doing the same thing. In other words, these vehicles will be programmed to kill.
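That conclusion can be reduced to a toy decision rule. This is a thought experiment only: the function, the policy names and the casualty counts below are invented for illustration, not drawn from Shariff’s study or from any real vehicle.

```python
def choose_action(occupants: int, pedestrians: int, policy: str) -> str:
    """Toy model of the dilemma: 'stay' drives into the pile-up, risking
    the occupants; 'swerve' avoids it but risks those outside the car."""
    if policy == "utilitarian":
        # Sacrifice whichever group is smaller, whoever they are.
        return "stay" if occupants < pedestrians else "swerve"
    if policy == "protect_occupants":
        # Occupants always outrank everyone outside the vehicle.
        return "swerve"
    raise ValueError(f"unknown policy: {policy}")

# One passenger versus a school bus that may hold thirty children:
print(choose_action(1, 30, "utilitarian"))        # stay - the passenger is sacrificed
print(choose_action(1, 30, "protect_occupants"))  # swerve - the bus takes the hit
```

The article’s question – how would the car know the bus was empty? – shows why even a rule this simple is underdetermined: the inputs themselves are uncertain, which makes it an ethical problem as much as an engineering one.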

“We found that participants to six MTurk studies approved of utilitarian AVs (that sacrifice their passengers for the greater good), and would like others to buy them, but they would themselves prefer to ride in AVs that protect their passengers at all costs. They would disapprove of enforcing utilitarian AVs, and would be less willing to buy such a regulated AV. Accordingly, regulating for utilitarian algorithms may paradoxically increase casualties by postponing the adoption of a safer technology,” wrote Jean-François Bonnefon in The social dilemma of autonomous vehicles.

Look to the US

Ford has announced it intends to put the first driverless cars on roads by 2021, but the ethical and legal hurdles to achieving this are still enormous. In the US (ever a bellwether for such things) only 11 states have autonomous driving regulations in place. This suggests that, in the US alone, it will take several more years before a legal framework is put in place to support these vehicles, even as the ethical and technological challenges remain.

“While industry players have already developed or tested many of the technological building blocks, tough and tricky legal challenges remain, including new laws on accident liability, on where self-driving cars may operate, and on who may have a license,” said consulting firm A.T. Kearney, which expects autonomous vehicles to become a $560 billion industry – but not until 2036.

So, will we really be travelling around in autonomous vehicles by 2021? Maybe, but there will probably be compromises. New vehicles will be far more autonomous, but complete autonomy will remain a dream. The technology challenges may already have been resolved, but emerging ethical and legal issues will remain stumbling blocks for some time to come.


Jon Evans

Jon Evans is a highly experienced technology journalist and editor. He has been writing for a living since 1994. These days you might read his regular AppleHolic and opinion columns at Computerworld. Jon is also technology editor for men’s interest magazine Calibre Quarterly, and news editor for MacFormat magazine, the biggest UK Mac title. He’s particularly interested in the impact of technology on the creative spark at the heart of the human experience. In 2010 he won an American Society of Business Publication Editors (Azbee) Award for his work at Computerworld.