Graphics display-based HMIs: capabilities of the latest MCUs

Author: Justin Palmer, Director of Vertical Markets, Embedded, at Future Electronics

08 March 2018

Research shows that, over the next five years, few embedded designers will be immune to pressure to dramatically enhance the capabilities, mode of operation and appeal of the HMIs in their products. This piece explores some of the driving factors and the options available.

Although the move to create more graphical and touch-sensitive interfaces was largely initiated in devices such as smartphones and tablets, the demand for such rich user experiences has now expanded far beyond the consumer market. In fact, products for the industrial, automotive, medical, military and aerospace markets are now all facing the same requirements.

Several factors are driving this revolution in HMI design:

• Sensors, processors and wireless devices have become much better and much cheaper, greatly enhancing systems’ ability to measure and track their own operation.

• A generational shift has taken place in the user base, requiring manufacturers to meet millennials’ (rather than baby boomers’) expectations.

• Colour TFT displays now cost less than monochrome STN displays did just five years ago. Touchscreen overlays have also become better and cheaper, with capacitive touch-sensing technology now widely available – offering better, more interactive interfaces than older resistive technology options.

• Companies have realised the scope to improve efficiency and lower operating costs (by reducing staff training requirements and human error) when equipment has an easy-to-use, intuitive interface.

Figure 1. The Tesla Model S dashboard – a response to modern users’ preference for graphics-rich control interfaces | Credit: Steve Jurvetson, under Creative Commons 2.0 licence

In the past, redesigning an embedded product’s human machine interface (HMI) to feature more and better graphical content would have been out of the question for systems based on a microcontroller. There was a sharp divide between, on the one hand, embedded systems based on microprocessors (MPUs), with sophisticated graphics capability and a rich OS (such as Windows or Linux); and on the other, those based on microcontrollers (MCUs), often with no OS, and typically driving nothing more complex than a segment LCD.

The ground is shifting fast, however, and improving MCU capabilities give design engineers hope that they can stay ahead of customers’ changing expectations – without having to abandon their familiar and productive MCU platform. So how much scope are MCU manufacturers offering their users to dramatically improve the HMI’s functionality?

How and why the HMI is evolving

Before reviewing how system designers might implement an improved HMI, it is worth considering why and how the HMI needs to be improved. The fundamental underlying cause of the shift in HMI design is the development of improved semiconductor technology. Sensors, RF transceivers and microcontrollers have now become so powerful, and yet so cheap, that it is possible for OEMs to embed them in greater numbers – and in more devices – than ever before.

In factories, this enables automation systems to track every important parameter of both manufacturing equipment and manufactured products in real time, at any point in the production process. In medicine, it enables health professionals to remotely monitor a patient’s condition constantly, and to set alerts when critical thresholds are crossed.

The result is that vast amounts of data are being generated and transmitted to control units. As the IoT continues to gain traction, this data is increasingly being hosted in the cloud, where it may be aggregated and analysed – and the results displayed on any internet terminal anywhere. So the extent and type of data available to users is changing rapidly. At the same time, the makeup of the user base (in particular, the workforce) is changing, as baby boomers go into retirement, to be replaced by Generation Xers and millennials. These are largely digital natives, accustomed since childhood to interacting with computer interfaces (see Figure 1).

Naturally, the preferences and working style of millennials are different from those of baby boomers. Whereas baby boomers expected to be trained to implement a process, before being measured on their execution of it, millennials expect to intuitively understand a system, track it with real-time data, and then make their own decisions based on the data – rather than following a set process.

So now we have masses of data generated by sensors, the ability via the internet to share it in real time, and people with the native ability to process and use it. Clearly, simple segment LCDs and push-button inputs do not align with this mode of interacting with complex equipment.

Displays must present menus of data to users

The key factor is the availability of Big Data, and the extraordinary value which can be derived from its use. In fields as diverse as intensive medical care and predictive maintenance of machines, it is the patterns discoverable in multiple streams of data or multiple parameters that provide the most valuable insights. And the most efficient way for humans (especially millennials) to discover these patterns is visually: we learn far more about complex data sets from diagrams, graphs and charts than we do from long columns of raw numbers and text. Systems, therefore, require rich graphical display capabilities and support for intuitive touch-sensing interfaces.

The most sophisticated graphical systems, capable of handling HD video streams, will run on high-performance MPUs (such as the i.MX family from NXP Semiconductors, based on ARM Cortex-A processors), which operate within a Linux or Android environment. Such systems are complex and expensive in both software and hardware terms, and present considerable implementation challenges for users not versed in development on a rich OS.
