Who is in control of AI?


There are increasing calls for government oversight of artificial intelligence development.

Artificial intelligence (AI) promises to have a huge and positive impact on our world, but it also brings with it complex issues that society has never had to face before.

AI sounds alarm bells for some, who fear it poses a real threat to humanity: learning our worst traits, intensifying inequalities and enabling weapons of mass destruction. Others believe AI will take people’s jobs and discriminate against the vulnerable in society. Kevin Kelly, author and founding executive editor of Wired, believes these anxieties are deep-rooted because they link our intelligence to our identity, but that they can be overcome.

AI is undoubtedly set to offer solutions that will do much to change our lives for the better and grow economies, but how important is it that we have a so-called “human in command” approach to AI?

The European Economic and Social Committee (EESC) argues that policies need to be put in place for the development, deployment and use of AI to ensure that it works as a positive force for change. A recent own-initiative opinion by its workers’ group points out that disruptive technologies like AI can present complex societal challenges, and it has identified a number of key areas where AI raises such concerns. These include ethics, safety, transparency, labor, privacy and standards, education, access, laws and regulations, governance, democracy, warfare and “superintelligence”.

The EESC is adamant that these challenges can’t be left for the business community alone to tackle and is calling for European Union (EU) standards to be set. “We need a pan-European ethical code to ensure that AI systems remain compatible with the principles of human dignity, integrity, freedom and cultural and gender diversity, as well as with fundamental human rights - and we need labor strategies to retain or create jobs and ensure that workers keep autonomy and pleasure in their work,” explains EESC rapporteur Catelijne Muller.

Differences of opinion

The EESC isn’t the only one to call for AI regulation. Physicist Stephen Hawking and Microsoft founder Bill Gates have both aired their concerns about the risk AI could pose if not properly controlled.

In an address to US state governors at the National Governors’ Association conference, Tesla founder and SpaceX CEO Elon Musk called on the US government to regulate AI, saying it is “a fundamental existential risk for human civilization”. He believes that AI is one of the rare cases in technology where proactive legislation needs to be put in place. “By the time we’re reactive in AI regulation, it’s too late,” he says.

There are other industry leaders, however, who disagree. In a recent Facebook Live broadcast, Facebook chief executive Mark Zuckerberg said such doomsday scenarios are irresponsible: “In the next 5-10 years, AI is going to deliver so many improvements in the quality of our lives,” he says. “People who are arguing for slowing down the process of building AI, I just find that really questionable. I have a hard time wrapping my head around that.”

Professor Subbarao Kambhampati of Arizona State University, who specializes in AI in the context of human-machine collaboration, believes that such public spats between industry moguls do nothing to ignite trust in AI and simply make the public worry. He believes that AI will bring much good to the world, but that “we should remain vigilant of all the ramifications of this powerful technology, and work to mitigate unintended consequences”. This includes “addressing social and workforce displacement issues and establishing industry-wide best practices and ethics guidelines”.

Industry goes full throttle on AI

Despite the risks voiced from some corners, the AI market is growing at a phenomenal pace. Market research firm IDC estimates worldwide revenues for cognitive and AI systems will reach $12 billion this year, an increase of 59% over 2016.

The growing prominence of AI is allowing new players to venture into the market, offering niche application solutions. Companies are also consolidating to gain a competitive edge, according to Grand View Research. Microsoft, for example, acquired startup Maluuba for its AI expertise earlier this year. The software giant said Maluuba would play a part in its strategy to “democratize AI” and make it accessible to everyone.

AI will only get bigger

In the next few years we will see an explosion in the use of AI. At the same time, AI will become smarter, capable of learning from experience and, in some instances, of exercising some level of decision-making capability. Undoubtedly there will be changes in legislation to accommodate driverless automotive technologies, for example, but the key is not to stop AI advancement in its tracks.

From whichever angle you look, the general consensus seems to be that AI will have an enormous impact on our lives, and it is a technology that governments, industries and society need to be fully prepared for.

Read more about AI and the demystification of robots in Real Times