New dimensions of facial recognition

Author: Mark Patrick, Technical Marketing Manager at Mouser Electronics

03 December 2018


It was the iPhone X's introduction of Face ID, as a means both to unlock the device and to make payments, that grabbed the most attention. And as this piece discusses, while facial recognition technology itself may not be new, Apple's implementation of it has clearly been significant.


The roots of automated facial recognition

In common with AI (artificial intelligence) – another hot topic on recent tech trends lists – the roots of facial recognition technology can be traced back more than half a century. Computer scientists Woody Bledsoe, Helen Chan Wolf and Charles Bisson pioneered automated processes for it during the 1960s, funded by US intelligence agencies.

At this stage, human operators had to extract the coordinates of a set of facial features (such as the inside and outside corners of the eyes, the centre of the pupils, or the point of the widow's peak) from a photographic image. Using these coordinates, a list of 20 measurements (such as the width of the subject's mouth and eyes, and the distance from pupil to pupil) would then be computed and stored in a database.
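
To make this concrete, here is a minimal Python sketch of the kind of computation involved. The landmark names, coordinate values and particular measurements are illustrative, not Bledsoe's exact list of 20.

```python
import math

def distance(p1, p2):
    """Euclidean distance between two (x, y) landmark coordinates."""
    return math.hypot(p2[0] - p1[0], p2[1] - p1[1])

# Hand-extracted landmark coordinates (in pixels), as a human operator
# of the era would have recorded them -- the values here are made up.
landmarks = {
    "pupil_left":      (112, 140),
    "pupil_right":     (168, 141),
    "mouth_left":      (120, 210),
    "mouth_right":     (160, 211),
    "eye_outer_left":  (98,  139),
    "eye_outer_right": (182, 140),
}

# Three of the kinds of measurements that would be computed and stored;
# the historical system used a list of 20 such values per face.
measurements = {
    "pupil_to_pupil": distance(landmarks["pupil_left"], landmarks["pupil_right"]),
    "mouth_width":    distance(landmarks["mouth_left"], landmarks["mouth_right"]),
    "eye_span":       distance(landmarks["eye_outer_left"], landmarks["eye_outer_right"]),
}

for name, value in measurements.items():
    print(f"{name}: {value:.1f} px")
```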

Bledsoe and his team realised that this type of pattern matching was particularly challenging, because of huge variability in factors like head rotation and tilt, distance from the camera, lighting intensity and angle, facial expression, or even aging. To overcome this problem, each set of distances was normalised to represent the face in a frontal orientation. The program would first try to determine the tilt, lean and rotation of the subject, and then use these angles to calculate and undo the effect of these transformations on the computed distances.
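
By way of illustration, a minimal Python sketch of that idea, for in-plane rotation alone, might look like the following (the full program also had to estimate and reverse tilt and lean, which this sketch omits).

```python
import math

def undo_rotation(landmarks, left_pupil="pupil_left", right_pupil="pupil_right"):
    """Rotate all landmarks so the pupil-to-pupil line lies horizontal.

    This corrects in-plane rotation only; the historical program also
    estimated and reversed head tilt and lean.
    """
    (x1, y1) = landmarks[left_pupil]
    (x2, y2) = landmarks[right_pupil]
    angle = math.atan2(y2 - y1, x2 - x1)   # current rotation of the face
    cos_a, sin_a = math.cos(-angle), math.sin(-angle)
    return {
        name: (x * cos_a - y * sin_a, x * sin_a + y * cos_a)
        for name, (x, y) in landmarks.items()
    }
```

Distances measured on the corrected coordinates are then directly comparable between photographs taken at different angles.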

The other element that needed to be taken into account was the three-dimensional geometry of the head; but to overcome the absence of a physical subject, the team used a model of a ‘standard head’, derived from measurements on seven test subjects. The team’s approach helped achieve a breakthrough in the accuracy of its automated facial recognition technique versus previous methods, where variability had resulted in low success rates.

The technology has come a long way since these early efforts, with increases in computing speed and power, as well as improvements in imaging technology, steadily driving up accuracy levels. As the success rates of these systems have approached, or even exceeded, the positive identification rates of human operators, several high-profile implementations are now helping the technology make the leap to the mainstream.

Of course, security and law enforcement have always maintained an obvious interest in the technology – and it continues to be trialled across multiple geographies and applications (for instance, during the Notting Hill Carnival and the Remembrance Sunday ceremony at the Cenotaph in the UK). But, in addition to Apple's Face ID, two other deployments of this technology have helped bring it into mass consumer usage.

Applications of facial recognition


International travellers are now familiar with the automated border control (or ePassport) gates that have been introduced at many airports, in order to help cope with rising passenger volumes. For arriving passengers with biometric ‘chipped’ passports, the automated gates use a camera to capture an image of the traveller and facial recognition technology to compare this against the photograph stored on the RFID ‘chip’ in the passport.
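
In essence, the gate performs a one-to-one verification: does the live capture match the chip photo? The sketch below illustrates the shape of that check in Python; the cosine-similarity measure, the embedding vectors and the threshold are illustrative placeholders, not any real gate's implementation.

```python
import numpy as np

MATCH_THRESHOLD = 0.6  # acceptance threshold -- a tuning choice, not a real gate's value

def cosine_similarity(a, b):
    """Similarity between two face feature vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_traveller(live_embedding, chip_embedding):
    """One-to-one check: does the camera capture match the passport photo?

    In a real gate, each embedding would come from a face recognition
    model applied to the live camera frame and to the photograph read
    from the passport's RFID chip.
    """
    return cosine_similarity(live_embedding, chip_embedding) >= MATCH_THRESHOLD
```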

Standardised reference images (photos must meet stipulations on size, orientation and facial expression, as well as rules against eyewear and headwear) and controlled image capture (with rules on where and how to stand and look, plus consistent illumination) both help to keep failure rates low and throughput high. Some airlines are now also trialling the technology to authenticate passengers at the boarding gate.

Meanwhile, Facebook provides one of the most widely adopted implementations of facial recognition technology, in the US at least; in Europe, it has met with greater resistance from both regulators and consumers. The social media site has a database of billions of photographs of individuals and groups, uploaded by users and helpfully tagged to identify the names of the faces captured.

Powered by AI and machine learning technologies (including a deep learning neural network), its DeepFace technology uses an algorithm to calculate a unique number (or 'template') based on someone's facial features. It can then analyse uploaded photos to match faces within them to stored templates, prompting users to tag the pictures with identified matches – and informing the matched individuals that a picture of them has been uploaded.
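
Unlike the border gate's one-to-one check, tagging is a one-to-many search across stored templates. A hedged sketch of that lookup follows; the similarity measure and threshold are illustrative, since Facebook's actual descriptors and thresholds are not public.

```python
import numpy as np

def identify(template, database, threshold=0.75):
    """One-to-many lookup of the kind a photo-tagging system performs.

    `template` is the numeric descriptor computed from a detected face;
    `database` maps user names to their stored templates. Descriptor
    size and threshold are placeholders for illustration only.
    """
    best_name, best_score = None, threshold
    for name, stored in database.items():
        score = float(np.dot(template, stored) /
                      (np.linalg.norm(template) * np.linalg.norm(stored)))
        if score > best_score:
            best_name, best_score = name, score
    return best_name  # None if no stored template is close enough
```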

However, the implementation that could do most to normalise and popularise facial recognition technology, bringing it to mainstream consumer acceptance, may well prove to be Face ID. Some observers argue that Apple has a habit of adding future-facing features to its devices later than other manufacturers, while others maintain that it often manages to do so in a way that 'just works', before marketing them so effectively that they become must-have functionality.

While the technology behind Apple's Face ID may not be new, its implementation is slick, robust and compelling – and it is proving popular with early adopters, according to a recent report from Strategy Analytics. Some manufacturers' previous implementations have been easy to spoof with a photograph, but Face ID uses a suite of sensors, imaging technologies and AI to map and match your face in 3D.

The system is 'always on' and almost instantaneous, and its infrared flood illuminator means it will work even in the dark, or through sunglasses. Its infrared camera captures an image of your face, while a dot projector casts an array of 30,000 infrared dots onto it to create a highly accurate depth map. The handset then uses this to authenticate you against a representation of your face stored locally in a 'secure enclave' on the device's processor (which is not accessible by Apple).
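
As a toy illustration of the idea of matching a fresh depth capture against a stored one, the sketch below compares two normalised depth maps. Face ID itself feeds the infrared image and dot pattern through neural networks and compares learned mathematical representations, not raw depth values as here.

```python
import numpy as np

def depth_maps_match(live, enrolled, threshold=0.2):
    """Toy comparison of two face depth maps (2D numpy arrays).

    Each map is normalised to zero mean and unit variance, then the
    mean absolute difference is thresholded. This is only a stand-in
    for the idea of matching; it is not how Face ID actually works.
    """
    live = (live - live.mean()) / (live.std() + 1e-9)
    enrolled = (enrolled - enrolled.mean()) / (enrolled.std() + 1e-9)
    return float(np.mean(np.abs(live - enrolled))) < threshold
```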

The company claims that Face ID is so secure, there is only a one-in-a-million chance (versus one in 50,000 for its Touch ID fingerprint biometrics) that a random other person could unlock it. Not only does it check the user's attention before unlocking the phone (your eyes must be open and looking at the display to register a successful scan), but Apple also says it trained the feature against realistic-looking 3D masks, so that it cannot be tricked by them.
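
A little arithmetic puts those quoted error rates in perspective: the chance that at least one of n random strangers could falsely unlock a device with false acceptance rate FAR is 1 - (1 - FAR)^n.

```python
# Chance that at least one of n random strangers would falsely unlock
# the device, using the quoted false acceptance rates (FAR).
FACE_ID_FAR = 1 / 1_000_000
TOUCH_ID_FAR = 1 / 50_000

def false_unlock_probability(far, attempts):
    """Probability of at least one false acceptance in `attempts` tries."""
    return 1 - (1 - far) ** attempts

for n in (1, 100, 10_000):
    p_face = false_unlock_probability(FACE_ID_FAR, n)
    p_touch = false_unlock_probability(TOUCH_ID_FAR, n)
    print(f"{n:>6} strangers: Face ID {p_face:.2e}, Touch ID {p_touch:.2e}")
```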


In building Face ID, Apple analysed over a billion images for data about faces, using that data to train its neural network. The technology uses machine learning algorithms and a 'neural engine' to analyse and recognise your face, capturing additional information each time it is used – meaning its 'map' of your face improves, and changes with you (so it can cope with you growing a beard, changing your hair or make-up, or wearing a hat).
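
Apple has not published how this adaptation works. Purely as an illustration, one simple mechanism for a template that tracks gradual change is an exponential moving average over successful scans, as sketched below.

```python
import numpy as np

def update_template(stored, new_scan, alpha=0.05):
    """Blend a successful new scan into the stored face representation.

    An exponential moving average lets the template drift with gradual
    changes (a beard growing out, a new hairstyle) while staying
    anchored to the enrolled face. The blend weight `alpha` is
    illustrative, not Apple's actual mechanism.
    """
    return (1.0 - alpha) * np.asarray(stored) + alpha * np.asarray(new_scan)
```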

Where next?

Buoyed by Face ID and growing consumer acceptance, there has been an uptick in enterprises building facial recognition technology into products and applications. The facial recognition market is expected to grow by an average of 21.3% a year over the next four years, reaching $9.6 billion by 2022, according to Allied Market Research.

Plug-and-play hardware modules and software are making it easier for design engineers to incorporate facial recognition technology into their designs. The B5T HVC Face Detection Sensor Module from Omron Electronics is one example.

Contained within a compact 60mm x 40mm form factor, it is a fully integrated, plug-in human vision component (HVC) module featuring a camera (with long-distance and wide-angle camera head options) and an accompanying processor, along with UART (universal asynchronous receiver-transmitter) and USB interfaces, which are used to control the module and send data output to an external system. Based on Omron's OKAO technology, the B5T integrates image sensing algorithms that can identify faces with speed and accuracy.

The B5T can capture, detect and recognise a face from a distance of 1.3 metres in only 1.1 seconds, returning a confidence level with each result. Blink and gaze estimation take under one second, and the module can even estimate a subject's emotional mood, classifying it as one of five pre-programmed expressions.

The technology can detect a human body up to 2.8 metres away, and a hand at a distance of 1.5 metres. The detection angle of the module is specified as 49° horizontal and 37° vertical, with an input image resolution of 640 x 480 pixels.

Ten sensing functions are incorporated in-module for recognising non-verbal intentions, conditions and behaviour, including face, hand and body detection, alongside age, gender and expression estimation. These image analysis processes are handled within the module itself, allowing designers to add intelligence and functionality to a variety of sensing and IoT applications with minimal effort.
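
As a hedged sketch of how a host system might drive such a module over its UART interface, the Python snippet below (using the pyserial library) sends a command frame and reads back a response. The command bytes, response lengths, serial port name and baud rate are all placeholders; the real values are defined in Omron's B5T command specification.

```python
import serial  # pyserial: pip install pyserial

# Placeholder command frame -- the actual byte layout, command codes and
# response format are defined in Omron's B5T command specification.
CMD_EXECUTE_DETECTION = bytes([0xFE, 0x00, 0x00, 0x00])

# Serial port name and baud rate are also placeholders; check the
# module's documentation and your host system's device naming.
with serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=2.0) as port:
    port.write(CMD_EXECUTE_DETECTION)   # ask the module to run detection
    header = port.read(6)               # fixed-size response header (assumed)
    payload = port.read(32)             # detection results: faces, confidences, etc.
    print("raw response:", header.hex(), payload.hex())
```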

