Providing the Advanced Sensor Technology Needed for Cobot Operation

Author : Fabrizio Petris, Omron Electronic Components

12 January 2024

Collaborative robots (or cobots) are making their presence felt in a growing number of modern scenarios. Delivering continuous productivity, dexterity and precision, they can be deployed in all manner of situations where human decision-making and judgement are still required alongside them.

From assembly of circuit boards to healthcare applications, the ability for cobots to co-exist safely with humans and work effectively alongside them on the same fenceless factory floor is transforming the way we think about production and its possibilities for customisation. 

Their flexibility, allied with many other benefits, has led to rapid cobot adoption. The market for this technology was estimated at a total of $0.95 billion in 2023. However, this is just the beginning: it is forecast to reach an annual value of $2.41 billion by 2028, representing a 20.50% CAGR over the intervening years.

For cobots to function successfully with human coworkers on shared tasks, safety is paramount: injuries must be prevented and hazardous conditions stopped from arising. Numerous sensor devices are needed to ensure such safety. These can include proximity sensors for collision detection, force/torque sensors, speed monitoring to restrict movements that might otherwise be dangerous, as well as object detection systems. The latter are particularly important, as accurate object detection that places a cobot’s effectors in the correct position with high precision is vital for maintaining safe operation. It is likewise essential in the handling of goods, particularly packages that could leak hazardous substances if damaged. Accurate detection is also required so that the cobot can recognise the presence of human staff and adjust its motions to account for them.

Options for object detection
There are two major sensing technologies generally employed for detecting objects and humans within a production facility environment. They are:
  •  Ultrasonics - Although useful in some applications, this has several drawbacks to be aware of. Among these are limitations to range and accuracy (the latter degraded by soft materials, which absorb sound).
  •  Optical systems - These are often based on light detection and ranging (LiDAR) or laser detection and ranging (LADAR). They come in two forms, either 2D or 3D.

2D LiDAR suffers from the drawback that it can miss objects outside the scannable area of the beam. This is because it uses only a single beam of light, bounced off a single surface. For example, it can miss boxes or other items on the floor, as well as hazards such as steps. This can be problematic if the cobot is mounted on an automated guided vehicle (AGV) and needs to negotiate its way around an area, avoiding any obstacles present. By contrast, 3D LiDAR offers a greater detection envelope, using multiple beams of light simultaneously to create a 3D rendering of the surrounding scene. One of the downsides associated with 3D LiDAR is the higher cost involved.

ToF-based object detection
An alternative optical method that overcomes many of the disadvantages just outlined is time-of-flight (ToF). This is a range imaging technique that measures the distance to each point in the scene from the total round-trip time of a light signal, from emission to the detection of its reflection. As a scanner-less system, ToF can capture the entire scene with a single pulse of light from a laser diode or an LED emitter.
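
The underlying calculation is straightforward. As an illustrative sketch (not Omron's implementation), the distance follows from halving the round-trip path travelled at the speed of light:

```python
C = 299_792_458.0  # speed of light in a vacuum, m/s

def tof_distance(round_trip_s: float) -> float:
    """Distance to the target: the light covers the path
    out and back, so the one-way distance is half of c * t."""
    return C * round_trip_s / 2.0

# A reflection detected ~13.3 ns after emission corresponds to ~2 m.
two_metres = tof_distance(13.34e-9)
```

In practice, commercial ToF sensors usually infer this time indirectly (for instance from the phase shift of a modulated signal) rather than timing individual pulses, but the distance relationship is the same.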

The system is very compact, with the illuminating light/laser source placed close to the lens. This compares favourably with stereo vision systems, which need a certain minimum baseline. Another advantage over scanning systems is that there are no moving mechanical parts, so reliable long-term operation results. The computing power needed is also minimised: only simple algorithms are required to extract distance information from the ToF signals, with no need for the complex correlation algorithms that stereo vision relies on.
 
ToF cameras can measure the distances within a complete scene using just a single pulse, turning captured images into 3D renderings. With speeds of up to 20fps, they are highly suitable for real-time applications and for detailed tracking of object movements. A ToF-based approach will also outperform 2D and 3D cameras in situations where there is poor illumination, or where there are colour-related issues. It should be noted that 3D cameras can find it difficult to distinguish objects whose colours are very similar, such as a white object on a white background. In this case, using a conventional camera, the edges and shape of the object will be hard to determine. Conversely, because it uses near-infrared (NIR) frequencies, a ToF imaging system is not affected by ambient light conditions and can work very well even in strong direct sunlight.
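
Turning a captured depth frame into a 3D rendering amounts to back-projecting each pixel's distance through the camera model. The sketch below assumes a simple pinhole model with square pixels; the field-of-view value and function names are illustrative, not taken from any particular sensor's calibration data:

```python
import math

def depth_to_points(depth, width, height, fov_h_deg):
    """Back-project a row-major depth map (metres) into a list of
    (x, y, z) points using a pinhole camera model. Intrinsics are
    derived from an assumed horizontal field of view."""
    fx = (width / 2.0) / math.tan(math.radians(fov_h_deg) / 2.0)
    fy = fx  # assume square pixels
    cx, cy = width / 2.0, height / 2.0
    points = []
    for v in range(height):
        for u in range(width):
            z = depth[v * width + u]
            if z <= 0.0:
                continue  # no return / invalid pixel
            points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points
```

One such conversion per frame is all that is needed to feed a point cloud to downstream obstacle-avoidance or object-recognition logic.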

High-precision ToF sensing for cobot deployment
From the previous paragraphs, it is clear that ToF sensors offer a good balance between performance and cost. This makes them a more appealing option for robotics than 3D optical sensors. Omron’s B5L ToF sensor offers all the benefits previously outlined, as well as several others. It can provide real-time 3D sensing of the distance to humans or objects with a 320 x 240 pixel resolution.

The B5L has an ambient light immunity equivalent to 100,000 lux. This translates into stable detection performance that is free from saturation even in the brightest of application environments. Designed for measurement distances between 0.5 m and 4 m, the sensor has a detection resolution of 0.3° and a ±2% detection accuracy (equivalent to 4 cm or less at 2 m from the object). The device also outputs compensated signals to minimise the need for data processing by the robot’s on-board computing resource.
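
The quoted accuracy is a percentage of the measured distance, so the worst-case absolute error scales with range. A quick check of the arithmetic (function name is our own, purely illustrative):

```python
def worst_case_error_m(distance_m: float, accuracy_pct: float = 2.0) -> float:
    """Worst-case absolute error, in metres, for a percentage
    accuracy specification at a given measurement distance."""
    return distance_m * accuracy_pct / 100.0

# ±2% at 2 m works out to ±0.04 m, i.e. the quoted 4 cm;
# at the 4 m range limit it grows to ±0.08 m.
```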

An extensive lifespan of 5 years is assured, thanks to the proprietary circuit design utilised, along with its innovative heat emission configuration and the employment of long-lasting LEDs for the emission elements. Omron’s sophisticated optical technology also contributes to the accuracy and quality of the image detection and distance measuring. The lenses used are designed to correspond to the wavelength of the LED emitters, while the arrangement of emitters and receivers mitigates the effect of suspended dust particles.

The B5L also incorporates interference prevention, allowing up to 17 of these ToF sensor units to operate simultaneously. This is particularly beneficial when multiple cobots are using the devices for distance measurement. Another development that can be used with the B5L is skeleton detection software. Based upon defined points on the body, output data from the device can enable an estimation of the pose adopted by the human in question. Although not a safety function as such, it can be invaluable in determining whether the human worker is moving towards the cobot or away from it. Applied in conjunction with an AI package, cobots could use skeleton detection to learn the typical actions of the people they work with, contributing to more efficient collaboration by optimising the timing and speed of their movements.
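
The towards-or-away determination can be reduced to a trend over successive distance samples for a tracked person, for example the frame-by-frame distance of a skeleton's torso point. The sketch below is our own simplified illustration; the function name, threshold and labels are assumptions, not part of any Omron API:

```python
def approach_trend(distances, threshold_m=0.05):
    """Classify movement relative to the sensor from successive
    distance samples (metres), e.g. per-frame distances of a
    tracked skeleton point. Threshold suppresses jitter."""
    if len(distances) < 2:
        return "unknown"
    delta = distances[-1] - distances[0]
    if delta < -threshold_m:
        return "approaching"
    if delta > threshold_m:
        return "receding"
    return "stationary"
```

A cobot controller could use such a classification to pre-emptively slow its motions when a worker is approaching, rather than waiting for a hard safety limit to trip.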

Conclusion 
ToF sensors clearly offer an array of advantages over standard 3D optical sensors. As they are more widely incorporated into cobot systems, new levels of operational performance and accuracy will become attainable.

