Components of effective machine vision

Author: Arnaud Destruels, Visual Communications Product Marketing Manager at Sony Europe Image Sensing Solutions

12 December 2017

This article from Sony Europe Image Sensing Solutions discusses how factors such as takt time have become increasingly important in determining the overall throughput and efficiency of a production line.

This piece originally appeared in the December 2017 issue of Electronic Product Design & Test.

Machine vision has proved vital in driving up quality in many manufacturing processes by performing critical inspection in-line. But machine vision integrators need to ensure that inspection tasks do not add excess cost, either by flagging false positives or by slowing the production line excessively.

To support high throughput and a short takt time (the time available to process each unit if the line is to keep pace with demand), the ability of an image sensor to capture frames rapidly is one of many important factors. Lighting is a key element in determining the overall accuracy and repeatability of machine vision. The illumination of the area of interest needs to be high enough that the camera's exposure time can be as short as possible, which helps reduce the overall takt time. One way to relax the illumination requirement is to increase the light sensitivity of the image sensor itself, but this is only one aspect of how lighting performance needs to be optimised.
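As a rough illustration of how exposure time enters this budget, the Python sketch below checks whether an inspection step fits within a line's takt time. All of the figures are hypothetical and chosen purely for illustration:

    # All numbers are hypothetical, for illustration only.
    available_time_s = 8 * 60 * 60                    # one 8-hour shift
    units_demanded = 14400                            # units required per shift
    takt_time_s = available_time_s / units_demanded   # 2.0 s per unit

    exposure_s = 0.005         # 5 ms exposure per frame
    frames_per_unit = 4        # frames captured per unit
    processing_s = 0.8         # image processing time per unit
    inspection_s = frames_per_unit * exposure_s + processing_s

    print(f"takt: {takt_time_s:.2f} s, inspection: {inspection_s:.2f} s")
    assert inspection_s < takt_time_s, "inspection would pace the line"

The shorter the exposure, the more frames, or the more inspection steps, fit inside the same takt-time budget.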

The direction of the illumination is also important, to ensure there is high contrast between the features that need to be recognised and their background. Illumination uniformity matters just as much: it reduces the amount of post-processing needed before the image processing software can reliably detect each mounted component or product feature for alignment and quality checks.

It can often be difficult to achieve the required level of illumination balance across an entire image. Some parts of the product or subassembly to be imaged may lie in the shadow of larger components. And the high levels of lighting required to image some parts effectively, for instance because they have low contrast against the substrate, can result in glare in other parts of the image.

Another problem is vignetting, in which the illumination level captured by the sensor falls off towards the edges of the lens. The need to mount the lighting very close to the object under examination can also make consistent coverage difficult. In many cases, the result is that some parts of the image are subtly lighter than others.

Image processing algorithms can partially correct these problems; however, they increase the computational load and can make it harder for detection processes to determine whether an area of the image signals a defect or is simply an illumination artefact. The ability of CMOS imagers to deliver high frame rates and to control capture at the pixel level makes it possible to overcome these illumination-consistency problems at the sensor itself.

An increasingly common technique borrowed from photography is high dynamic range (HDR) capture, in which multiple shots are taken in rapid sequence, each with a different exposure time. This produces a set of images that, between them, capture areas that a single exposure would lose to clipped highlights or blocked-up shadows. The images are then combined into a composite that provides much higher bit depth than a single image. The extra dynamic range allows shading correction to be applied to parts of the image to make them easier to recognise, without the loss of effective bit depth that would be encountered with traditional single-exposure images.
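A minimal sketch of the combining step is shown below in Python with NumPy. It is not any particular sensor's implementation: it assumes a linear sensor response and uses a simple triangular weighting that trusts mid-range pixel values most, a common textbook choice:

    import numpy as np

    def fuse_exposures(frames, exposure_times):
        """Merge differently exposed 8-bit frames into one high-bit-depth image."""
        acc = np.zeros(frames[0].shape, dtype=np.float64)
        weight_sum = np.zeros_like(acc)
        for frame, t in zip(frames, exposure_times):
            f = frame.astype(np.float64) / 255.0
            # Trust mid-range pixels; clipped highlights and shadows get ~0 weight.
            w = 1.0 - 2.0 * np.abs(f - 0.5)
            acc += w * (f / t)          # normalise by exposure to estimate radiance
            weight_sum += w
        return acc / np.maximum(weight_sum, 1e-6)   # floating-point composite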

By calibrating image capture with a reference frame, the image sensor can further enhance the image by altering the effective brightness of pixels that lie in areas that are not lit as effectively as others. Ideally, a number of reference settings can be stored to accommodate the use of multiple light sources in successive captures, which may be used to highlight different parts of the object under inspection.
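Applied in software, such a calibration amounts to flat-field (shading) correction. The sketch below assumes a stored reference frame per light source, captured once against a uniform target; the function name and the per-source dictionary are illustrative, not a real sensor API:

    import numpy as np

    def shading_correct(raw, reference):
        """Flatten illumination using a stored flat-field reference frame."""
        ref = reference.astype(np.float64)
        gain = ref.mean() / np.maximum(ref, 1.0)   # boost under-lit regions
        corrected = raw.astype(np.float64) * gain
        return np.clip(corrected, 0, 255).astype(np.uint8)

    # references: one flat-field frame per light source, selected to match the
    # illumination used for each successive capture.
    # corrected = shading_correct(frame, references[light_source_id])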

Another advantage of combining multiple captures with in-sensor processing is an increase in overall image sharpness. Effects such as heat shimmer cause sharp lines and points to appear to move over time. This causes blurring on long exposures, or shifts in position from product to product on successive short exposures. The software may mistake the visual effect for the result of an out-of-tolerance manufacturing process, causing the product to be sent for time-consuming re-inspection, or worse, to be scrapped unnecessarily. Image averaging and correction techniques that compensate for heat shimmer allow such image defects to be removed quickly and effectively, so the software can focus on actual production problems.
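The averaging step itself can be as simple as the sketch below: a burst of short exposures of the momentarily static scene is averaged, so that shimmer displacements, which are random from frame to frame, cancel out instead of registering as edge shifts:

    import numpy as np

    def average_frames(frames):
        """Average a burst of short exposures to suppress heat shimmer.

        Shimmer displaces edges randomly per frame; the mean converges on
        the true edge position, and sensor noise also falls as sqrt(N).
        """
        stack = np.stack([f.astype(np.float64) for f in frames])
        return stack.mean(axis=0).astype(np.uint8)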

Look-up table (LUT) support and pixel defect correction provide further mechanisms to deal with lighting and image sensor issues. A LUT provides the ability to shift the gamma of the image, optimising the contrast within the image so that it makes full use of the bit resolution of the image sensor's output stream. Defect correction uses the pixels surrounding a failed pixel to substitute a plausible value, so that the image processing software does not mistake pixels stuck at one value for defects on the product being photographed.
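Both mechanisms are straightforward to express in code. The sketch below applies a gamma LUT by table lookup and patches known stuck pixels with the median of their 3x3 neighbourhood; real sensors do this in hardware against a factory defect map, and the gamma value and defect-map format here are illustrative:

    import numpy as np

    # Gamma LUT: a 256-entry table maps each 8-bit input code to an output
    # code, stretching contrast across the full output range.
    gamma = 0.6   # illustrative value
    lut = (255.0 * (np.arange(256) / 255.0) ** gamma).astype(np.uint8)

    def apply_lut(image, lut):
        return lut[image]   # per-pixel table lookup

    def correct_defects(image, defect_coords):
        """Replace known stuck pixels with the median of their neighbourhood."""
        fixed = image.copy()
        for y, x in defect_coords:
            patch = image[max(0, y - 1):y + 2, max(0, x - 1):x + 2]
            # Median of up to 9 values is robust to the one stuck pixel included.
            fixed[y, x] = np.median(patch)
        return fixed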

The ability to select specific regions of interest is a further factor in streamlining overall system performance. By programming the image sensor to send only portions of an image, the use of network bandwidth can be optimised. This, in turn, allows more imaging subsystems to be deployed to capture additional views of a product, or more sensors to be placed along the production line, so that the results of individual processes can be monitored more effectively.
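The saving scales directly with the area of the region of interest, as the back-of-envelope calculation below shows. The resolutions, bit depth and frame rate are hypothetical:

    # Hypothetical figures for a 12-bit sensor streaming at 30 frames/s.
    full_w, full_h = 4096, 3000
    roi_w, roi_h = 1024, 768
    bits_per_px, fps = 12, 30

    full_rate = full_w * full_h * bits_per_px * fps / 1e9   # ~4.42 Gbit/s
    roi_rate = roi_w * roi_h * bits_per_px * fps / 1e9      # ~0.28 Gbit/s
    print(f"full frame: {full_rate:.2f} Gbit/s, ROI: {roi_rate:.2f} Gbit/s")
    # The freed bandwidth can carry the streams of several more cameras.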

Shutter subsystem performance is also important. If an image sensor is to capture an accurate image of an object passing underneath it, any distortions that may be caused by that motion need to be minimised. The image sensor itself can introduce distortions if it is not designed for machine vision applications.

Many CMOS imagers designed for consumer applications try to maximise their frame rate by allowing each row to begin the next frame's exposure as soon as its readout completes. Each row is exposed for the same amount of time, but the exposure windows are staggered; as a result, the row at the top of the sensor captures its part of the image almost a full frame before the row at the bottom. This is problematic for fast-moving objects: as an object moves across the field of view, its top appears shifted back relative to its bottom, producing a clear spatial distortion.
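The size of this skew is easy to estimate: an object moving at speed v during a top-to-bottom readout span t_readout shifts by v x t_readout between the first and last rows. A quick check with hypothetical numbers:

    # Hypothetical numbers: 20 ms top-to-bottom readout, 0.5 m/s conveyor.
    v_m_per_s = 0.5
    t_readout_s = 0.020
    skew_m = v_m_per_s * t_readout_s
    print(f"top-to-bottom skew: {skew_m * 1000:.0f} mm")   # 10 mm
    # A 10 mm skew across a part is easily mistaken for a geometric defect.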

CMOS imagers designed specifically for machine vision applications instead use a global-shutter architecture, in which every pixel begins and ends its exposure at the same time. This prevents the skew that results from the conventional rolling-shutter design.

By bringing together a number of techniques and technologies, image sensors designed for machine vision can help integrators and end users build machine vision systems that continue to deliver improvements in throughput and takt time in industrial systems.

