The changing face of the human interface

26 February 2010

Sensor technology now offers a greater opportunity for more sophisticated user interfaces, as expectations continue to rise. Mike Salas and Andres Marcos report.

Today’s tech-savvy consumers have high expectations for the products they use in their daily lives. In particular, they expect user interfaces that are intuitive yet sophisticated. The concept of the “human interface” is evolving as electronics manufacturers develop new products that offer more intuitive controls, such as touch-sensitive screens, infrared (IR) light sensors, proximity detection and accelerometers that react to motion. In isolation these are relatively rudimentary controls, but used together they are leagues ahead of the simple on/off switch and convey a level of ‘intelligence’ that helps users quickly assimilate the control mechanisms of new devices.

At its simplest, an interface provides bidirectional information, and in the case of a human interface that corresponds to an action and reaction, such as pressing a switch to turn on a light. Without that “closed loop” feedback in the form of seeing, hearing or feeling the result, the efficacy of an interface can break down. For this reason, the design of the human interface is now a crucial element in any product’s development.

The information going into a system to initiate a response varies greatly depending on the application. Figure 1 shows a simplified version of a typical human interface process. In this example an output is tied to a subsystem, and a number of conditions must be met before that output is valid. The trigger may be a single event, such as picking up a mobile phone, which could activate a number of subsystems: exiting sleep mode, turning on the screen’s backlight and unlocking the keypad.
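By way of illustration, that fan-out from a single event to several subsystems might be sketched in C as follows; the subsystem hooks (exit_sleep_mode, backlight_on and keypad_unlock) are hypothetical placeholders for hardware-specific routines rather than part of any particular product.

#include <stdbool.h>

/* Hypothetical subsystem hooks; the real implementations are hardware-specific. */
extern void exit_sleep_mode(void);
extern void backlight_on(void);
extern void keypad_unlock(void);

/* A single "handset picked up" event (for example from an accelerometer
   interrupt) fans out to several subsystems, as in Figure 1. */
void on_handset_pickup(bool picked_up)
{
    if (!picked_up)
        return;

    exit_sleep_mode();   /* wake the MCU and its peripherals     */
    backlight_on();      /* light the screen for the user        */
    keypad_unlock();     /* accept input now the phone is in use */
}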

The number of conditions inherent in almost any human interface introduces a level of complexity that can only realistically be tackled with a programmable device, such as a microcontroller (MCU). With an MCU, a software-configurable platform is not only simpler to design but also far more flexible than a hardwired alternative. As Figure 2 illustrates, an MCU can accommodate a large number of stimuli, which may operate independently or together in a web of interdependencies, such that only a specific combination of events will activate a subsystem within the larger environment. The flexibility offered by an MCU enables intuitive human interface designs to be developed using a few relatively simple sensors. To understand how such an interface works, let’s examine the individual elements present in the diagram in Figure 2.
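A minimal sketch of this kind of interdependency, assuming three hypothetical stimuli (motion, ambient darkness and ear proximity) gating a backlight subsystem, could take the following form:

#include <stdbool.h>

/* Sensor inputs, gathered elsewhere by the MCU's peripherals. */
struct stimuli {
    bool in_motion;      /* accelerometer reports the device is being handled */
    bool ambient_dark;   /* ambient light reading is below a threshold        */
    bool near_ear;       /* proximity sensor detects the user's ear           */
};

/* The backlight subsystem is enabled only for one specific combination of
   events: the device is being handled, the room is dark, and the handset is
   not held against the ear. Any other combination leaves it off. */
bool backlight_should_be_on(const struct stimuli *s)
{
    return s->in_motion && s->ambient_dark && !s->near_ear;
}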

The use of an ambient light sensor can significantly improve usability by detecting the level of ambient light available and activating subsystems, such as a backlight, only when necessary. A light sensor is effectively a photodiode: a semiconductor that, when subjected to a bias voltage, produces a current proportional to the incident light. Its transfer function can be either linear or logarithmic, depending on the sensor, but typically the current’s magnitude is converted to the digital domain using an analogue-to-digital converter (ADC) and formatted to represent the amount of light, or simply its presence.
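As a minimal sketch of that conversion, assuming a linear transfer function and purely illustrative calibration constants (the adc_read_light_channel routine stands in for the MCU’s ADC driver):

#include <stdint.h>
#include <stdbool.h>

extern uint16_t adc_read_light_channel(void);  /* hypothetical ADC read, 0..1023 */

#define DARK_THRESHOLD_COUNTS  80u   /* assumed calibration point               */
#define COUNTS_PER_LUX         0.5f  /* assumed linear transfer function        */

/* Convert a raw ADC reading into an approximate lux value (linear sensor). */
float ambient_light_lux(void)
{
    return (float)adc_read_light_channel() / COUNTS_PER_LUX;
}

/* Or simply report the presence or absence of light against a threshold. */
bool ambient_is_dark(void)
{
    return adc_read_light_channel() < DARK_THRESHOLD_COUNTS;
}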

The energy present in a light source isn’t confined to the visible part of the spectrum; an incandescent light, for instance, has significantly more energy at the infrared end of the spectrum than a fluorescent light. Some ambient light sensors take advantage of this by integrating two photodiodes in the same package, such as Silicon Labs’ Si1120 sensor. The two photodiodes respond at different wavelengths, one covering visible light (380 to 750nm) and the other infrared (750 to 2500nm). Comparing the two readings reveals the predominant light source, which may be used in conjunction with other inputs to enable various subsystems within an application, such as reacting to the balance between natural and artificial lighting.
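One possible way to compare the two readings is a simple fixed-point ratio test; the threshold below is illustrative rather than a calibrated figure for any particular sensor:

#include <stdint.h>

enum light_source { LIGHT_UNKNOWN, LIGHT_INCANDESCENT_OR_SUN, LIGHT_FLUORESCENT };

/* Classify the dominant light source from the two photodiode readings.
   Incandescent light and sunlight carry far more infrared than fluorescent
   light, so the IR-to-visible ratio separates the two. */
enum light_source classify_light(uint16_t visible_counts, uint16_t ir_counts)
{
    if (visible_counts == 0)
        return LIGHT_UNKNOWN;

    /* fixed-point ratio scaled by 100 to avoid floating point */
    uint32_t ratio_x100 = (100u * (uint32_t)ir_counts) / visible_counts;

    return (ratio_x100 > 60u) ? LIGHT_INCANDESCENT_OR_SUN : LIGHT_FLUORESCENT;
}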

Proximity sensing is a natural extension of the ambient light sensor; it operates predominantly in the infrared part of the spectrum and adds an IR source. The IR light is reflected back by any object in close proximity to the source and, therefore, the sensor. This technique is now used extensively in mobile phones to turn the screen off when the handset is held to the user’s ear, thereby saving power. By using several proximity sensors together, simple gestures can be discerned and used to “flip pages” on a screen or scroll down a page.
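A two-sensor swipe detector might be sketched along the following lines; the timeout housekeeping a real design would need to discard half-finished gestures is omitted for brevity:

#include <stdbool.h>

enum swipe { SWIPE_NONE, SWIPE_LEFT_TO_RIGHT, SWIPE_RIGHT_TO_LEFT };

/* Track which of two side-by-side proximity sensors trips first; the order
   gives the direction of a hand swept across them. Called periodically with
   the latest left/right detections. */
enum swipe detect_swipe(bool left_near, bool right_near)
{
    static bool left_seen_first = false, right_seen_first = false;

    if (left_near && !right_seen_first)
        left_seen_first = true;
    if (right_near && !left_seen_first)
        right_seen_first = true;

    /* The gesture completes when the second sensor is also triggered. */
    if (left_seen_first && right_near) {
        left_seen_first = right_seen_first = false;
        return SWIPE_LEFT_TO_RIGHT;
    }
    if (right_seen_first && left_near) {
        left_seen_first = right_seen_first = false;
        return SWIPE_RIGHT_TO_LEFT;
    }
    return SWIPE_NONE;
}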

Another technique that is gaining momentum within the embedded electronics industry is also a form of proximity detection: capacitive touch-sensing. This type of sensor uses a simple PCB track as one conductor and air as the dielectric. The second conductor, necessary to create a capacitor, is normally a finger brought into close proximity with the PCB. As the finger moves closer, the capacitance increases; although the change is very small, of the order of picofarads, it can be detected by an MCU through various methods and used as an input or stimulus.

The benefits of using capacitive touch-sensing technology are clear. It is very low cost (it can be constructed from just a PCB track) and, if designed correctly, very reliable. Its reliability hinges on the method used to detect the change in capacitance and how well that method is implemented in the MCU and supporting hardware. The intention is to create a sensor that offers a large change in capacitance; the bigger that change, the easier it is to detect the switch’s status. As the switch’s operation depends on the conductor formed by the PCB track, it is advisable to follow layout guidelines when implementing that track.

While it is beyond the scope of this article to cover all those guidelines, the dominant factors include positioning the capacitive sensor away from sources of noise, such as other conductive objects or ground planes. The track’s physical properties are also those of the capacitor’s conductor, so its shape and thickness play a role. It is also important to factor in the operating environment and its effects, such as humidity and temperature changes. In this respect, baseline measurements can be very effective: the untouched reading is taken as an average over a series of measurements, allowing slow environmental drift to be tracked out. Additional advice on implementing capacitive sensors is available in Silicon Labs application note AN447, available in PDF format at http://www.silabs.com/Support Documents/TechnicalDocs/AN447.pdf
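As a sketch of the baseline idea, assuming an illustrative touch threshold, a slowly tracking average can be combined with touch detection as follows:

#include <stdint.h>
#include <stdbool.h>

#define TOUCH_DELTA 40   /* counts above baseline treated as a touch (assumed) */

/* Slowly track the untouched reading so that humidity and temperature drift
   are absorbed into the baseline, while a genuine touch (a fast, large rise
   above the baseline) is still detected. */
bool touch_detected(uint16_t raw_counts)
{
    static int32_t baseline_x16 = -1;              /* baseline with 4 fractional bits */

    if (baseline_x16 < 0)
        baseline_x16 = (int32_t)raw_counts << 4;   /* initialise on first call */

    int32_t baseline = baseline_x16 >> 4;
    bool touched = (int32_t)raw_counts > baseline + TOUCH_DELTA;

    /* Only let the baseline drift while the pad is untouched (IIR average, alpha = 1/16). */
    if (!touched)
        baseline_x16 += (int32_t)raw_counts - baseline;

    return touched;
}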

Methods for detecting a change in capacitance include the use of a relaxation oscillator, successive approximation, charge transfer and resistor-capacitor (RC) charge timing. The first method can easily be implemented by an MCU with an integrated comparator. The comparator is used to create a “window” with upper and lower limits: as the capacitor’s voltage falls below the lower limit the circuit charges the capacitor, then discharges it once the upper limit is reached. The frequency of the oscillator is therefore determined by the capacitance, which is in turn set by the proximity of the second conductor (the finger).
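Assuming the oscillator’s output clocks a hardware counter, the MCU only needs to gate that counter for a fixed period; the hardware hooks and count thresholds below are hypothetical illustrations:

#include <stdint.h>
#include <stdbool.h>

/* Hypothetical hardware hooks: the comparator-based relaxation oscillator is
   assumed to clock a hardware counter, and a timer provides the gate period. */
extern void     osc_counter_reset(void);
extern uint16_t osc_counter_value(void);
extern void     wait_ms(uint16_t ms);

#define GATE_TIME_MS     10u
#define UNTOUCHED_COUNT  5000u   /* assumed count for the bare pad              */
#define TOUCH_MARGIN     500u    /* assumed drop in count caused by a finger    */

/* A finger adds capacitance, which slows the relaxation oscillator, so a
   touch shows up as a lower count over the fixed gate time. */
bool relaxation_touch_detected(void)
{
    osc_counter_reset();
    wait_ms(GATE_TIME_MS);
    return osc_counter_value() < (UNTOUCHED_COUNT - TOUCH_MARGIN);
}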

With successive approximation, two current DACs are used to drive two capacitors, the sensing element and a reference capacitor. The voltages developed across both capacitors – which are directly proportional to the current and inversely proportional to the capacitance – are compared against a voltage reference. Performing this process several times enables the current produced by the DACs to be adjusted each cycle, such that the ramp rate of the sensing element equals the ramp rate of the reference capacitor. This technique provides greater immunity against DC offsets and reduces the susceptibility to noise. Some MCUs now integrate a reference capacitor, DACs and control circuit, specifically for this application.
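The control loop amounts to a binary search on the sense-side DAC code; in this sketch the ramp comparison is hidden behind a hypothetical sense_ramps_faster routine, since the actual peripheral control varies between devices:

#include <stdint.h>
#include <stdbool.h>

/* Hypothetical hardware hook: charge both capacitors from their current DACs
   (reference DAC fixed, sense DAC set to 'sense_idac_code') and report whether
   the sensing element reached the comparator threshold before the reference. */
extern bool sense_ramps_faster(uint8_t sense_idac_code);

/* Successive approximation: binary-search the sense-side DAC code until the
   two ramp rates match. The resulting code is proportional to the sensing
   element's capacitance, so a finger shows up as a larger result. */
uint8_t measure_capacitance_code(void)
{
    uint8_t code = 0;

    for (uint8_t bit = 0x80; bit != 0; bit >>= 1) {
        code |= bit;                      /* try this bit set           */
        if (sense_ramps_faster(code))     /* too much current: back off */
            code &= (uint8_t)~bit;
    }
    return code;
}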

Other MCUs may only integrate a reference capacitor, which can be used to implement the third method: charge transfer. Here, the sensor is charged to a known voltage and its charge is then transferred to the integrated reference capacitor. This is repeated until the charge across the reference capacitor reaches a predetermined level. By evaluating the number of charge cycles needed to reach that threshold, the sensor’s capacitance can be determined. For MCUs that don’t offer an integrated capacitor, the charge timing method can be implemented by creating an RC network with the sensor and an external resistor. Evaluating the time taken to charge the sensor then determines its capacitance.
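The charge-transfer measurement reduces to a simple cycle counter; the three hardware hooks below are hypothetical stand-ins for the device-specific switch and comparator control:

#include <stdint.h>
#include <stdbool.h>

/* Hypothetical hardware hooks for one charge-transfer cycle: charge the sensor
   pad, dump its charge onto the reference capacitor, and test whether the
   reference capacitor has reached the threshold. */
extern void charge_sensor_and_transfer(void);
extern bool reference_cap_above_threshold(void);
extern void discharge_reference_cap(void);

/* Count how many transfer cycles are needed to bring the reference capacitor
   up to the threshold. A finger increases the sensor's capacitance, so more
   charge moves per cycle and fewer cycles are needed. */
uint16_t charge_transfer_cycles(void)
{
    uint16_t cycles = 0;

    discharge_reference_cap();
    while (!reference_cap_above_threshold()) {
        charge_sensor_and_transfer();
        cycles++;
    }
    return cycles;
}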

Capacitive sensors, like proximity sensors, can be used to create a simple but effective user interface. They provide an excellent replacement for mechanical buttons or resistive sliders, and since the only requirement is a PCB track, a variety of appealing sensors can be created, such as sliders, scroll wheels or rotary switches. As Figure 3 shows, a slider can be created from just two PCB tracks, while a wheel takes just four tracks. In this example, an MCU provides the obvious solution. Not only can it be programmed to control the capacitive sensing but it can also translate the stimulus into the most appropriate format, such as angular rotation, a logarithmic scale or a linearly changing control.
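For a two-track slider such as that shown in Figure 3, one common approach, sketched here with baseline-subtracted channel readings, is to interpolate the touch position from the ratio of the two signals:

#include <stdint.h>

/* Estimate finger position along a two-track slider from the baseline-
   subtracted readings of the two capacitive channels. The tracks are
   typically tapered so their signals vary in opposite directions along the
   slider; interpolating between them gives a position from 0 to 255. */
uint8_t slider_position(uint16_t delta_left, uint16_t delta_right)
{
    uint32_t total = (uint32_t)delta_left + delta_right;

    if (total == 0)
        return 0;   /* no touch detected */

    return (uint8_t)((255u * (uint32_t)delta_right) / total);
}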

Thanks to a growing number of simple yet effective sensors, the face of human interfaces is changing. As it does, end user experiences will improve and perhaps even surpass expectations. It is the responsibility and opportunity to meet or exceed those expectations that continues to drive innovation in the underlying silicon technologies for human interfaces.

Mike Salas is Director of Marketing MCUs, Silicon Laboratories, and Andres Marcos is Firmware Engineer, MCU Systems, Silicon Laboratories

