What happens when AI gets it wrong?

When machines make decisions that impact your life, the least you can expect is to be able to examine how those decisions were reached. “Explainable AI” (XAI) is artificial intelligence programmed to describe its purpose, rationale and decision-making process. It will be vital for ensuring algorithmic accountability, fairness and transparency, as well as the ethical use of data.

Post-augmentation AI

Most mass-market AI augments human activity. It might optimize an image, estimate the time to reach a destination or manage network bandwidth allocation. AI is also well established in the enterprise: a 2018 NewVantage Partners survey found that 97 percent of Fortune 1000 firms are investing in AI and big data, in sectors from retail to medicine and beyond.

In every industry, machine intelligence is making the transition from basic pattern matching to applied deep learning capable of analyzing complex scenarios and delivering solutions. The next generation of intelligent machines is expected to go further still, learning about the problems they are tasked with solving and then finding and implementing solutions to them.

Carnegie Mellon robotics lecturer Dean Pomerleau first encountered the problem while working to develop a self-driving car. The AI, which trained itself using a neural network that learned as it drove, began making errors, but it took extensive testing to figure out why. Intelligent machines process many items of data, make multiple value judgments and weigh different approaches as they work toward a decision. To do this, they assess vast numbers of variables across large volumes of data.

The problem is that most existing neural network technologies deliver results through a series of steps as complex as human thought, and far less transparent. When the machine gets something wrong, how can you see how it reached its decision?

We need to be able to trust the machines

Writing for Forbes, Bernard Marr points out, “To achieve its full potential, AI needs to be trusted – we need to know what it is doing with our data, why, and how it makes its decisions when it comes to issues that affect our lives.”

This is not merely an ethical problem. Under Europe’s General Data Protection Regulation (GDPR), Europeans are protected against machine-derived decisions that have a “legal or other significant” impact on their lives. Companies that use AI to make decisions affecting people’s lives are therefore accountable for those decisions.

The problem is that, when it is trained, AI reflects the inherent biases of the data sources used and of the people who selected them. This means intelligent machines can easily mirror and even compound human mistakes. There have already been instances in which self-learning systems have picked up racist language or developed other forms of bias.
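
To make the idea of “mirrored bias” concrete, here is a minimal, hedged sketch of one common check, the demographic parity gap: compare how often a model returns a favourable outcome for different groups. The function, decisions and group labels below are hypothetical illustrations, not taken from any system discussed in this article.

```python
# Hedged sketch: measuring one simple form of bias (the demographic parity gap).
# The decisions and group labels are hypothetical illustrations.

def demographic_parity_gap(decisions, groups):
    """Return (gap, per-group rates): gap is the largest difference in
    favourable-outcome rate between any two groups."""
    counts = {}
    for decision, group in zip(decisions, groups):
        total, positives = counts.get(group, (0, 0))
        counts[group] = (total + 1, positives + decision)

    rates = {group: positives / total for group, (total, positives) in counts.items()}
    return max(rates.values()) - min(rates.values()), rates


if __name__ == "__main__":
    # Toy example: the model approves group "A" far more often than group "B".
    decisions = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]   # 1 = favourable outcome
    groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

    gap, rates = demographic_parity_gap(decisions, groups)
    print("Approval rate per group:", rates)      # {'A': 0.8, 'B': 0.2}
    print(f"Demographic parity gap: {gap:.2f}")   # 0.60 - a gap this large flags possible bias
```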

With these very real fears in mind, the industry is looking for ways to ensure AI-based decision-making processes are transparent, so the decisions they reach can be verified. This means algorithmic transparency and measures to control any built-in bias in the machines themselves.

Making track records

The problem is the complexity of the AI. “These models are really complicated and you cannot look into how the output has been produced,” explains Ulf Schönenberg, Head of Data Science at The Unbelievable Machine Company, part of Orange Business.

This complexity means the criteria the machines use to reach decisions aren’t easy to see or understand, in part because the decision logic emerges from training rather than from rules a human wrote down and could read back.

There are implications to this. “Certainly, surgeons would not put you under the knife without knowing why a computer has suggested they do so. Also, top financial management would be very reluctant to base decisions on a machine without knowing on what grounds computer analysis is made,” Schönenberg says.

What happens when AI gets it wrong?

As AI becomes more deeply embedded in essential systems (health, transportation and more), the decisions these intelligent machines reach may have very serious consequences. It is therefore essential that transparent, understandable records of their decision-making processes are maintained.

Organizations including OpenAI and the Partnership on AI are working to evangelize AI and to ensure ethical practices in its development. There is also a commitment within the tech sector to develop Explainable AI, which provides general information about how an AI program reaches a decision by disclosing the following (a short illustrative sketch follows the list):

  • The program's strengths and weaknesses
  • The specific criteria the program uses to arrive at a decision
  • The reason the program makes a particular decision rather than an alternative
  • The level of trust that's appropriate for various types of decisions
  • The types of errors the program is prone to
  • How those errors can be corrected
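
None of these disclosures necessarily requires exotic technology. As a minimal sketch of the second and third points, the example below hand-rolls permutation importance, a model-agnostic way of reporting which inputs a model actually relied on: shuffle one input at a time and measure how much accuracy drops. The toy “credit” model, features and data are hypothetical illustrations, not a description of any product named in this article.

```python
import random

# Hedged sketch: a model-agnostic "which criteria mattered" report via
# permutation importance. Shuffle one input feature at a time and measure how
# much accuracy drops; the bigger the drop, the more the model relied on that
# feature. The toy model and data below are hypothetical illustrations.

def accuracy(model, rows, labels):
    return sum(model(row) == label for row, label in zip(rows, labels)) / len(rows)


def permutation_importance(model, rows, labels, n_features, n_repeats=10, seed=0):
    rng = random.Random(seed)
    baseline = accuracy(model, rows, labels)
    importances = []
    for f in range(n_features):
        drops = []
        for _ in range(n_repeats):
            values = [row[f] for row in rows]
            rng.shuffle(values)
            shuffled = [row[:f] + (v,) + row[f + 1:] for row, v in zip(rows, values)]
            drops.append(baseline - accuracy(model, shuffled, labels))
        importances.append(sum(drops) / n_repeats)
    return baseline, importances


if __name__ == "__main__":
    # Toy "credit" model: approves when income is high and ignores age entirely.
    model = lambda row: int(row[0] > 50)                 # row = (income, age)
    rows = [(30, 25), (80, 40), (20, 60), (90, 30), (55, 45), (45, 50)]
    labels = [0, 1, 0, 1, 1, 0]

    baseline, importances = permutation_importance(model, rows, labels, n_features=2)
    print(f"Baseline accuracy: {baseline:.2f}")
    print("Importance (income, age):", [round(i, 2) for i in importances])
    # Expect a clear drop for income and roughly zero for age.
```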

Blockchain may provide one way to open up the underlying decision-making process used by smart devices. At its simplest, this means the AI records each step it takes toward a decision on a blockchain, giving humans a tamper-evident trail they can audit to see where any errors crept into the logic chain.
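
The sketch below shows the general idea, assuming a simple hash-chained log rather than any particular blockchain platform: each decision record embeds the hash of the previous one, so altering any recorded step breaks the chain. The field names and decisions are hypothetical illustrations.

```python
import hashlib
import json
import time

# Hedged sketch of a tamper-evident decision log. Each record embeds the hash
# of the previous record, so rewriting any step breaks the chain. A real
# deployment would anchor these hashes to an actual distributed ledger.

class DecisionLog:
    def __init__(self):
        self.entries = []

    def record(self, step, details):
        previous_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {
            "step": step,
            "details": details,
            "timestamp": time.time(),
            "previous_hash": previous_hash,
        }
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append({**body, "hash": digest})

    def verify(self):
        """Recompute every hash; return True only if no entry was altered."""
        previous_hash = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if body["previous_hash"] != previous_hash:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["hash"]:
                return False
            previous_hash = entry["hash"]
        return True


if __name__ == "__main__":
    log = DecisionLog()
    log.record("input", {"income": 80, "age": 40})
    log.record("score", {"model": "credit-v1", "score": 0.82})
    log.record("decision", {"approved": True, "threshold": 0.7})

    print("Chain valid:", log.verify())           # True
    log.entries[1]["details"]["score"] = 0.99     # tamper with a recorded step
    print("After tampering:", log.verify())       # False
```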

“When we can look into the models and understand how they work, we can use the tool for real world problems. We simply want to know why the algorithms suggest a solution,” Schönenberg says.

To find out more about the potential of AI in your business, read our ebook, Artificial intelligence: what’s next?

Jon Evans

Jon Evans is a highly experienced technology journalist and editor who has been writing for a living since 1994. These days you can read his regular AppleHolic blog and opinion columns at Computerworld. Jon is also technology editor for the men's interest magazine Calibre Quarterly and news editor for MacFormat magazine, the biggest UK Mac title. He is particularly interested in the impact of technology on the creative spark at the heart of the human experience. In 2010 he won an American Society of Business Publication Editors (Azbee) Award for his work at Computerworld.