Get a handle on gesture control

Author: Courtney Kennedy, Technology Solutions Marketing Manager at Farnell element14

28 November 2018


Today, almost everyone has a touchscreen mobile phone. This method of interacting with phones and other devices feels so natural that even very young children can pick up a modern phone and become quite proficient in a short time. But, as this piece explains, touch is still not suitable for every situation: it requires physical contact that is not possible in all environments and conditions, ruling out many applications. This is where touchless gesture control comes in.


Indeed, while there are several technologies in development that could enhance touch technology, or even replace it completely, the one that has shown the most potential is touchless gesture control. Humans live in a three-dimensional world, and at times, our interaction with our two-dimensional phone screens can seem flat – even with the haptic feedback that most contemporary phones now offer.

Touchless gesture control adds z-axis sensing in the free space around the sensor: motions are captured by the sensor, compared against a software library to interpret the action, and the resulting instruction is then sent to the system controller for processing.
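A minimal sketch of that pipeline is shown below, assuming a driver that delivers (x, y, z) position samples over the gesture window; the gesture names, actions and toy classifier are purely illustrative, not any vendor's actual algorithm.

```python
# Illustrative gesture pipeline: capture -> classify -> dispatch.
# All names are invented; a real sensor driver would supply the samples.

GESTURE_ACTIONS = {
    "swipe_left":  "PREVIOUS_PAGE",
    "swipe_right": "NEXT_PAGE",
    "swipe_up":    "SCROLL_UP",
    "swipe_down":  "SCROLL_DOWN",
}

def classify(samples):
    """Toy matcher: pick the gesture from the dominant axis of net movement."""
    dx = samples[-1][0] - samples[0][0]
    dy = samples[-1][1] - samples[0][1]
    if abs(dx) >= abs(dy):
        return "swipe_right" if dx > 0 else "swipe_left"
    return "swipe_up" if dy > 0 else "swipe_down"

def process(samples, controller):
    """Interpret the captured motion and forward the instruction."""
    gesture = classify(samples)
    controller.execute(GESTURE_ACTIONS[gesture])
```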

There are already examples of touchless gesture control on the market that are proving popular with users. Console peripherals, such as Microsoft's Kinect and the Nintendo Wii controllers, allow users to control actions on a television screen.

Recently, Samsung has led the way in the consumer electronics market by allowing users to control its televisions with gestures from across the room. Touchless gesture recognition has even made its way into the automotive sector: in its 2016 7 Series, for example, BMW incorporated the technology for a variety of simple control functions.

The two most popular sensors used in touchless gesture control applications are cameras and electric-field (e-field) sensors. Camera sensors are usually found in complex, higher-end applications, like the Microsoft Kinect; whereas e-field sensors are simpler and less expensive, making the technology ideal for a much wider range of applications.

The sensors work by detecting slight changes in an ultra-low-power electric field generated between two electrodes. When an object, such as the user's hand, interacts with the field, the resulting distortion is measured and compared with examples in the software library.
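As a loose illustration of that matching step, the sketch below compares a vector of per-electrode readings against stored gesture signatures using a simple distance measure. The template values are invented; production firmware uses far more sophisticated signal processing.

```python
import math

# Each stored gesture is a signature of per-electrode field readings;
# a live distortion is matched to the closest template within a
# tolerance. All values here are invented for illustration.
TEMPLATES = {
    "swipe_left":  [0.9, 0.6, 0.3, 0.1],
    "swipe_right": [0.1, 0.3, 0.6, 0.9],
}

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match(reading, tolerance=0.5):
    best = min(TEMPLATES, key=lambda name: distance(reading, TEMPLATES[name]))
    return best if distance(reading, TEMPLATES[best]) <= tolerance else None

print(match([0.8, 0.7, 0.2, 0.1]))  # -> swipe_left
```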

E-field sensors can also operate normally when placed behind non-conductive materials. Because no physical contact is required, the technology can be used in environments that are difficult for touch technology, or where the operator must wear gloves. Many touchless control applications have been designed to work behind a physical barrier, meaning the unit can be completely enclosed, providing further protection from the elements.

Design considerations for touchless control applications

Of course, any new technology brings new design considerations. Touchscreens offer a physical medium in which it is easier to decipher the user's true intentions, with the screen itself offering prompts and feedback that make the process easier still. Touchless sensing doesn't offer that luxury, so designers have to make firm choices at the start of the design process to compensate. Prime importance in these decisions must be given to making the experience feel comfortable and familiar to users.

Such decisions include how to identify the sensor’s location if it is behind a barrier, what gestures should be used and when, and how to communicate a successful reading. To solve these problems, most designers include a screen that can convey information and provide feedback to the user.

A conceptual model may prove helpful when developing software for touchless gesture control systems: designers map out the system and every option it must offer. Once the options are defined, it becomes clear which gestures need to be implemented and what information the user will need on each new screen.
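One simple way to capture such a model in code is a table listing every screen, the gestures valid on it, and the action each one triggers. The sketch below uses invented screen and gesture names; note how the same gesture can mean different things in different contexts.

```python
# Illustrative conceptual model: screens -> valid gestures -> actions.
# All names are invented; "swipe_right" means something different on
# each screen, so the UI must signal the current context to the user.
CONCEPTUAL_MODEL = {
    "home": {
        "swipe_right": "open_menu",
        "air_wheel":   "adjust_brightness",
    },
    "media_player": {
        "swipe_right": "next_track",
        "air_wheel":   "adjust_volume",
    },
}

def handle(screen, gesture):
    action = CONCEPTUAL_MODEL.get(screen, {}).get(gesture)
    return action if action else "ignored"  # invalid gestures are ignored
```

Keeping the model as plain data makes it easy to review with UX designers and to see at a glance which gestures, and which on-screen prompts, each screen requires.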


From this information, it should be possible to build a clean interface that is easily understood and feels natural to the user. Some techniques used for touch control are also helpful in touchless applications. When developing the on-screen interface, signifiers must reflect the context of the application; after all, two different programmes will often use the same gestures for different functions. The contextual nature of each gesture should therefore be shown on-screen to make operation easier and more natural.

The environment in which the design will operate is also important to the design of touchless control systems. Humans look for visual cues when operating a new piece of equipment. These cues, called affordances, help users orientate themselves and become familiar with the system: they show which actions are possible, making the process seem logical and familiar.

When we make a gesture, we expect a response, and we are often confused if our actions don't provoke one. Design engineers know this and often use haptic feedback to let users know an input is valid and has been accepted. Feedback is even more important for touchless gesture controllers, where there is no physical contact: it prevents the errors that come from repeated gestures, which happen when a user is unsure whether an input has been accepted.
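A minimal sketch of that idea follows, assuming a hypothetical ui object that draws or sounds the acknowledgement: accepted gestures trigger immediate feedback, and identical readings inside a short hold-off window are treated as unintended repeats. The 0.5-second window is an arbitrary example value.

```python
import time

HOLD_OFF_S = 0.5  # assumed hold-off window; tune per application

class Debouncer:
    """Acknowledge accepted gestures and suppress rapid repeats."""

    def __init__(self, ui):
        self.ui = ui               # hypothetical feedback object
        self.last_gesture = None
        self.last_time = 0.0

    def accept(self, gesture):
        now = time.monotonic()
        if gesture == self.last_gesture and now - self.last_time < HOLD_OFF_S:
            return False           # likely an unintended repeat; ignore it
        self.last_gesture, self.last_time = gesture, now
        self.ui.acknowledge(gesture)  # visual/audible confirmation
        return True
```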

Practical examples

Microchip is one company that provides a complete ecosystem for designers looking to build touchless gesture control applications. Under the company’s GestIC banner, the products are built around its MGC3X30 family of gesture controllers and Aurea GUI software.

The MGC3X30 gesture control chip offloads the gesture recognition functions, leaving the main system controller free of that overhead. The low-power products offer a detection range of up to 20cm and contain all the building blocks required to develop a single-chip input sensing system. To provide designers with an easy way to evaluate the technology, Microchip has also developed a variety of development boards. Farnell element14 currently stocks the Hillstar single-zone development kit and the Sabrewing dual-zone board.

There are several other hardware options available to designers, including the ADI ADUX1020-EVAL-SDP gesture and proximity sensor evaluation board. The kit provides users with a simple means of interfacing with the sensor (ADUX1020), collecting data from it and evaluating gesture recognition capabilities. It requires an evaluation tool that can be downloaded from ADI: a GUI providing low- and high-level configurability, real-time data analysis and User Datagram Protocol (UDP) transfer capability, so the evaluation board can easily interface with a PC.

Another example available from Farnell element14 is the Flick HAT for the Raspberry Pi. The add-on board uses Microchip's GestIC technology to give Raspberry Pi designers easy access to a powerful gesture control system.
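As a rough sketch of how such a board is typically driven, the snippet below follows the decorator style used in Pi Supply's published Flick examples; the module and callback names here are assumptions and should be checked against the library's current documentation.

```python
# Sketch only: module and decorator names follow the style of Pi
# Supply's Flick examples (the 'flicklib' Python module) but should
# be verified against the library's current documentation.
import signal
import flicklib

@flicklib.flick()
def on_flick(start, finish):
    # e.g. start='west', finish='east' for a left-to-right swipe
    print("Flick:", start, "->", finish)

@flicklib.move()
def on_move(x, y, z):
    # Hand position in the sensing volume above the board
    print(f"Hand at x={x:.2f} y={y:.2f} z={z:.2f}")

signal.pause()  # keep the script alive to receive callbacks
```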

Summary

Touchless gesture control is an exciting technology that can complement existing touch technology or be used to replace it completely. The technology opens up new applications and new ways to naturally interact with machinery.

Although development differs in some respects from touch-based applications, there are also many similarities: above all, the psychological techniques that help humans quickly gain familiarity with a technology and make it easy and natural to use. Touchless control technology is easily accessible, either through a tailored ecosystem or through an add-on for popular development boards.

