Everything you always wanted to know about AI (but were afraid to ask)

Automation is everywhere, but artificial intelligence (AI) takes this further and will have as great an impact on human society as the Internet – perhaps greater.

“AI is set to fundamentally change what we do, just as computers did when they were introduced into businesses,” said Orange Labs Research Director, Nicolas Demassieux.

What is AI, what do you need to know about it, and how was it used during the Wimbledon Tennis championships?

What is AI?

AI means machines that can think as well as – or better than – humans.

It is typically defined as the ability of a machine to perform human-like cognitive functions, such as perceiving, analyzing, learning and solving problems. AI technologies include robotics and autonomous vehicles, computer vision, language, virtual agents and machine learning.

Computers inside these machines reason and reach conclusions based on information they acquire. Professors John McCarthy and Marvin Minsky are credited with coining the term in the 1950s, describing AI as a task performed by a program or a machine that would require intelligence if a human were to do it.

This kind of intelligence can be built into almost any machine, from vending machines to car navigation systems – even passenger aircraft – but it is directed at specific tasks.

There are lots of different forms of AI, and new approaches are emerging. The three most common approaches are:

  • Pattern matching: Some intelligent machines are really just comparative data analytics systems that seek patterns
  • Neural networks: Others analyze events based on information they have been given, using neural networks to identify trends and mathematical algorithms to try to understand and predict events or responses
  • Deep learning: There are also deep-learning systems, where machines attempt to identify and act on patterns gleaned from different sets of data

What all these different approaches have in common is that they attempt to use sophisticated computing processes to understand and solve real world problems.
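
As a toy illustration of the neural-network approach above, here is a single artificial neuron (a perceptron) learning the logical AND pattern from examples. This is a sketch for illustration only, not how production systems are built; the learning rate and number of training passes are arbitrary choices.

```python
# A single artificial neuron learning the AND pattern from examples.
# It nudges its weights whenever its prediction is wrong -- the
# essence of "learning trends from data" described above.

training_data = [
    ((0, 0), 0),
    ((0, 1), 0),
    ((1, 0), 0),
    ((1, 1), 1),
]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

def predict(x):
    total = bias + sum(w * xi for w, xi in zip(weights, x))
    return 1 if total > 0 else 0

for epoch in range(20):  # a few passes over the data suffice here
    for x, target in training_data:
        error = target - predict(x)
        if error:
            weights = [w + learning_rate * error * xi for w, xi in zip(weights, x)]
            bias += learning_rate * error

print([predict(x) for x, _ in training_data])  # [0, 0, 0, 1]
```

The same error-correction loop, scaled up to millions of neurons and layered so that each layer learns from the one below it, is the basis of the deep-learning systems mentioned above.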

AI will usher in profound changes at almost every level of human communication, organization, distribution and manufacturing. Automated systems – intelligent machines – will monitor life signs and recommend appropriate actions at both ends of the life journey.

The history of AI

Humans have spent centuries thinking about smart machines.

The Ancient Greeks described Talos, the mythological bronze automaton that protected Crete (and featured in the 1963 classic Jason and the Argonauts). In Leviathan, Thomas Hobbes argued that “reason is nothing but reckoning,” while much of Isaac Asimov’s post-war science fiction focused on AI, robotics and the conflict between humans and intelligent machines.

The Greeks also thought about how we think. Aristotle developed the notion of the syllogism, a logical argument in which deductive reasoning is used to reach a conclusion based on multiple propositions, all of which are asserted as “true.”
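
Aristotle's classic syllogism – all men are mortal; Socrates is a man; therefore Socrates is mortal – can be sketched as a tiny rule engine that chains known facts through rules until no new conclusions appear. The fact and rule representations here are invented for this sketch.

```python
# A minimal sketch of syllogistic (deductive) reasoning:
# rules map a premise pattern to a conclusion, and we apply them
# to known facts until nothing new can be derived.

facts = {("Socrates", "is_a", "man")}
rules = [
    # "All men are mortal": anything that is_a man is_a mortal
    (("is_a", "man"), ("is_a", "mortal")),
]

def deduce(facts, rules):
    """Apply every rule to every fact until a fixed point is reached."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (subj, rel, obj) in list(derived):
            for (premise, conclusion) in rules:
                if (rel, obj) == premise:
                    new_fact = (subj,) + conclusion
                    if new_fact not in derived:
                        derived.add(new_fact)
                        changed = True
    return derived

all_facts = deduce(facts, rules)
print(("Socrates", "is_a", "mortal") in all_facts)  # True
```

Symbolic rule-chaining of this kind powered much of early AI; the statistical, data-driven approaches described above largely took over later.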

Today, a class of machine learning called Generative Adversarial Networks (more below) puts Aristotelian adversarial logic into the machine, where two AI models compete to generate new data – such as creating a fairly accurate rendition of a person’s face based solely on a voice recording and training data.

Today, AI is weaving itself into everything, from cybercrime and fraud prevention to road transport management systems, automated photo library creation and life-saving surgery.

Each time you speak with Siri, Bixby, Alexa or a chatbot, you interact with man-made smart machines. Most people aren’t aware that the U.S. Defense Advanced Research Projects Agency (DARPA) first funded work on smart personal assistants in 2003, long before consumer assistants shipped.

So, how do these exciting technologies work?

You can divide the topic into two broad categories: narrow AI and general AI.

What is narrow AI?

Narrow AI describes systems that learn or can be taught how to do specific tasks, such as language processing or making recommendations. They are purpose-built systems for particular problems, for example:

  • Motion detection in video surveillance systems
  • Maintenance and monitoring systems for industrial machinery
  • Calendar event organization
  • Chatbots handling customer service questions
  • Identification of items, such as tumors or animals, from images

These are the kinds of highly specific AI solutions we are most used to encountering today.
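
As a minimal illustration of the customer-service chatbot example above, here is a keyword-based intent matcher of the kind early bots used. The intents and keywords are invented for this sketch; real systems use trained language models rather than hand-written word lists.

```python
# A toy narrow-AI chatbot: it maps a customer message to a known
# intent by counting keyword matches. It is useful only for this
# one task -- which is exactly what makes it "narrow".

INTENTS = {
    "billing":  {"invoice", "bill", "charge", "payment", "refund"},
    "shipping": {"delivery", "shipping", "track", "package", "arrive"},
    "support":  {"broken", "error", "crash", "help", "fix"},
}

def classify(message):
    words = set(message.lower().split())
    # Score each intent by how many of its keywords appear.
    scores = {intent: len(words & keywords) for intent, keywords in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

print(classify("i need a refund for this charge"))  # billing
print(classify("hello there"))                      # unknown
```

Note how the system fails gracefully to "unknown" outside its narrow domain – it has no way to reason about anything its keyword lists don't cover.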

What is general AI?

Much AI research ultimately aims at general AI. This describes truly adaptable machine intelligence capable of learning a wide range of tasks, weighing many factors, or learning from experience to arrive at autonomous decisions.

It’s the kind of AI you might have encountered as Deep Thought in The Hitchhiker’s Guide to the Galaxy, or HAL in 2001: A Space Odyssey. It doesn’t exist (yet). Some think it may emerge by mid-century, while others doubt that self-awareness will ever arise in a machine.

Many experts believe that we’re a very long way from creating machines that can match or improve on the capabilities of the human mind. In other words, the AI we are working with today is not the scary super-machine of dystopian fiction – it is just a collection of slightly smarter machines. As Demassieux puts it: “AI that can replace human beings is largely out of reach today. There is still a long way to go: conceptual representations of the world will need to be developed, as well as machine reasoning and better decision-making, emotional and ethical capabilities.”

Neurala CEO Massimiliano Versace said: “Artificial intelligence is really taking the brain and trying to emulate it in software. The brain is more than just recognizing an object. It is thinking. It is perceiving. It is action. It is emotion.”

This is the first blog in a four-part series about how AI works, what data it needs and what happens when AI goes wrong. The other articles are: Food for thought: why AI needs good data, The secret life of algorithms, and From supercomputers to smartphones: where is AI today?

Jon Evans

Jon Evans is a highly experienced technology journalist and editor. He has been writing for a living since 1994. These days you might read his regular Computerworld AppleHolic and opinion columns. Jon is also technology editor for men's interest magazine Calibre Quarterly, and news editor for MacFormat magazine, the biggest UK Mac title. He's really interested in the impact of technology on the creative spark at the heart of the human experience. In 2010 he won an American Society of Business Publication Editors (Azbee) Award for his work at Computerworld.