Look, no hands!

Can speech, eye-tracking and ultrahaptics end the tyranny of touch?

Technology has come a long way since the 1970s Xerox Alto personal computer brought us the first desktop metaphor and mouse-driven graphical user interface (GUI). In 2007, the Apple iPhone took a leap forward with its multi-touch screen, letting users manipulate content directly with touch gestures such as pinching and rotating images.

We have grown accustomed to typing, clicking and pinching. But these methods for interfacing with technology aren’t very practical when you are speeding down a highway, running for a plane or preparing dinner.

The next big step in user interfaces is the idea of “Zero UI.” Andy Goodman, the Fjord designer who coined the phrase, describes Zero UI as “a paradigm where our movements, voice, glances, and even thoughts can all cause systems to respond to us through our environment.”

The goal of Zero UI is to remove the interface from the user’s field of vision as much as possible, making technology less intrusive. Technology must then learn our words, thoughts and gestures – instead of us having to learn its language of menus, options and tabs. This is particularly important for the Internet of Things.

Gartner predicts that 2 billion devices and IoT endpoints will be zero-touch by 2020. "Interactions will move away from touchscreens and will increasingly make use of voice, ambient technology, biometrics, movement and gestures," explains Annette Zimmermann, research director at Gartner. Here are six approaches to new interfaces:

Speech and AI

In 1995, IBM’s speech recognition error rate was 43 percent. Last year it was 6.9 percent; Microsoft claims 6.3 percent. IBM pegs the human error rate at around 4 percent, so the technology is not lagging far behind us. Advances in voice recognition and the inclusion of Artificial Intelligence (AI) bring us intelligent personal assistants such as Apple Siri, Amazon Alexa, Samsung Bixby and Google Home.
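Those percentages are word error rates (WER): the fraction of words a recognizer gets wrong, counted as substitutions, deletions and insertions against a reference transcript. Here is a minimal, self-contained sketch of the calculation using standard edit distance; the example sentences are illustrative, not from any benchmark.

```python
# Minimal sketch: word error rate (WER), the metric behind the
# percentages quoted above. WER = (substitutions + deletions +
# insertions) / number of reference words, computed here with a
# standard Levenshtein edit distance over word tokens.

def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# Illustrative only: one wrong word in a ten-word sentence = 10% WER.
ref = "the quick brown fox jumps over the lazy dog today"
hyp = "the quick brown fox jumps over the lazy cat today"
print(f"WER: {word_error_rate(ref, hyp):.1%}")  # WER: 10.0%
```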

Ultrahaptics

This uses ultrasound to create a haptic response directly on the user’s hands and is generating a lot of interest in the automotive industry. BMW has demoed HoloActive Touch, a touch-sensitive control panel that appears to float in mid-air. Bosch has an ultrahaptic in-car gesture control system: instead of touching a screen, the driver waves a hand and sensors pick up the movement. Using ultrasound, the driver can ‘feel’ the car’s controls.
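The core trick is phased-array focusing: each transducer in a grid fires with a phase offset chosen so its wave arrives at a focal point in step with all the others, producing a pressure spot the hand can feel. A toy sketch of that phase calculation follows; the array geometry and parameters are illustrative assumptions, not any vendor’s hardware.

```python
# Illustrative sketch of mid-air ultrasonic haptics: per-element phase
# offsets are chosen so every transducer's wave arrives in phase at a
# single focal point, where the combined acoustic pressure is strong
# enough to feel. Geometry and constants below are assumptions.
import math

SPEED_OF_SOUND = 343.0   # m/s in air
FREQUENCY = 40_000.0     # Hz, typical for ultrasonic haptic arrays
WAVELENGTH = SPEED_OF_SOUND / FREQUENCY

def focus_phases(transducers, focal_point):
    """Phase offset (radians) per transducer so waves align at focal_point."""
    phases = []
    for (x, y, z) in transducers:
        dist = math.dist((x, y, z), focal_point)
        # Advance each element by its travel distance to the focus,
        # expressed as a phase within one wave period.
        phases.append((2 * math.pi * dist / WAVELENGTH) % (2 * math.pi))
    return phases

# Hypothetical 4x4 grid of transducers, 1 cm pitch, focusing 15 cm above.
grid = [(i * 0.01, j * 0.01, 0.0) for i in range(4) for j in range(4)]
for p in focus_phases(grid, (0.015, 0.015, 0.15))[:4]:
    print(f"{p:.3f} rad")
```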

Gesture interfaces

Users control or interact with devices via gestures and motion detection. Thalmic Labs’ Myo gesture-control armband, for example, lets the user control devices such as smartphones without touching them. Zkoo, a gesture-control camera, lets users interact with a TV, Android set-top box, Windows PC, tablet or smartphone simply by waving.
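At its simplest, gesture control maps a trace of tracked positions to a discrete command. Shipping products use trained recognizers, but a toy threshold-based sketch shows the shape of the problem; the function name and thresholds here are hypothetical.

```python
# Toy sketch of threshold-based gesture detection: classify a swipe
# from a short trace of hand positions, the kind of signal a gesture
# camera or wearable reports. All names and thresholds are made up
# for illustration.

def classify_swipe(trace, min_dist=0.30):
    """trace: list of (x, y) hand positions in metres, oldest first."""
    if len(trace) < 2:
        return None
    dx = trace[-1][0] - trace[0][0]
    dy = trace[-1][1] - trace[0][1]
    if abs(dx) >= abs(dy):                 # dominant axis wins
        if dx > min_dist:
            return "swipe_right"
        if dx < -min_dist:
            return "swipe_left"
    else:
        if dy > min_dist:
            return "swipe_up"
        if dy < -min_dist:
            return "swipe_down"
    return None  # movement too small to count as a gesture

# A hand moving 40 cm to the right might map to, say, "next channel".
print(classify_swipe([(0.0, 0.0), (0.2, 0.02), (0.4, 0.05)]))  # swipe_right
```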

Immersive virtual reality

This puts the user in a fully immersive artificial environment and includes the much-anticipated Magic Leap, which promises to overlay 3D graphics onto the user’s real-world view via a headset. Nimesha Ranasinghe and his team at the National University of Singapore are working on Ambiotherm, which adds thermal and air-flow sensations to immersive VR.

Eye tracking

This allows the user to control a device with their eyes. The Samsung Galaxy S4 smartphone was a pathfinder, pausing video when the user looked away. Eye tracking combined with speech generation is used by companies such as Tobii Dynavox in devices for individuals with special needs. Fove has launched a VR headset that incorporates eye tracking for greater realism, reducing the need for head movement and the likelihood of VR sickness.
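The logic behind “pause when the viewer looks away” is mostly debouncing: a blink or a dropped frame should not stop playback. A small sketch of that behavior follows, assuming a hypothetical player class fed boolean gaze-on-screen samples from an eye-tracker SDK.

```python
# Sketch of Galaxy S4-style "pause when the viewer looks away":
# debounce gaze samples so a single blink or dropped frame doesn't
# pause the video. The class and frame counts are hypothetical; a real
# implementation would sit on top of an eye-tracker SDK.

LOOK_AWAY_FRAMES = 15  # ~0.5 s at 30 fps before we react

class GazePausePlayer:
    def __init__(self):
        self.playing = True
        self.away_count = 0

    def on_gaze_sample(self, on_screen: bool):
        if on_screen:
            self.away_count = 0
            if not self.playing:
                self.playing = True   # resume as soon as gaze returns
        else:
            self.away_count += 1
            if self.playing and self.away_count >= LOOK_AWAY_FRAMES:
                self.playing = False  # viewer has genuinely looked away

player = GazePausePlayer()
for sample in [True] * 10 + [False] * 20 + [True] * 5:
    player.on_gaze_sample(sample)
print("playing:", player.playing)  # True: gaze came back, video resumed
```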

Brain-computer interface

BCI will enable users to control devices with their thoughts. Facebook is working on a brain-computer interface that uses optical imaging to let users type with their thoughts. The technology could lead to important advances in healthcare: Switzerland's Wyss Center for Bio and Neuroengineering is developing a brain-computer interface to enable patients with amyotrophic lateral sclerosis (ALS), a progressive motor neuron disease, to communicate.
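Neither Facebook’s optical approach nor the Wyss Center’s system is public as code, but a common first step in many EEG-based BCIs is extracting band-power features that a classifier maps to commands. Here is a generic illustration on a synthetic signal, using Welch’s method from SciPy.

```python
# A common first step in EEG-based BCIs: extract band-power features
# (e.g. 8-12 Hz alpha) that a downstream classifier can map to
# commands. Generic illustration on a synthetic signal; this is not
# Facebook's optical-imaging approach or the Wyss Center's system.
import numpy as np
from scipy.signal import welch

FS = 256  # sample rate in Hz (assumed)

def band_power(signal, fs, low, high):
    """Average power in [low, high] Hz via Welch's PSD estimate."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= low) & (freqs <= high)
    return psd[mask].mean()

# Synthetic 4-second "EEG": a 10 Hz alpha rhythm buried in noise.
t = np.arange(0, 4, 1 / FS)
eeg = 2.0 * np.sin(2 * np.pi * 10 * t) + np.random.randn(t.size)

alpha = band_power(eeg, FS, 8, 12)
beta = band_power(eeg, FS, 13, 30)
print(f"alpha/beta power ratio: {alpha / beta:.1f}")  # clearly > 1 here
```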

IoT demands a new UI

The Internet of Things (IoT) will introduce us to a world where many devices have no screens at all. The big challenge for designers will be to create experiences built on Zero UI and invisible technology.

“If we crack this, we may be able to start designing elegant, frictionless services that push screens to the background and allow us to actually converse with the people sitting next to us,” adds Goodman. More social interaction and fewer blank faces consumed by screens has to be a good thing.

Find out more about how IoT and data analytics can help you shape innovative business models for the future and take advantage of the opportunities a truly connected world will bring.