
Sensor fusion gets robots roving around factories

For many decades, automation has been the key driving force behind increased productivity across many different industrial sectors. In the most efficient assembly lines, such as those perfected by automotive manufacturers, robotic machinery plays a crucial role in boosting throughput, reducing operational costs, and optimising the quality of the end product.

These robotic systems excel at performing specific tasks in a reliable and repeatable way, while continuing innovation has allowed them to tackle ever more sophisticated and intricate production processes. But for the most part they remain dumb machines: they act only in a pre-defined way, and they must be controlled by complex computer code.

Meanwhile, industrial manufacturers are seeking to introduce greater flexibility into their production facilities, in many cases to offer their customers greater choice. Again, the automotive sector offers a good example. While the traditional assembly line produces the same product over and over again, buyers of new cars expect to be offered an array of different options – ranging from the interior and exterior finishes through to the addition of novel technologies that are designed to improve comfort or performance.

This requires a more flexible manufacturing environment, which in turn demands robots that can move and perform simple tasks without human intervention. ‘Mobility and flexibility will be the next big step in manufacturing automation,’ said Bruno Adam of Omron, a leading developer of robotic systems for industrial applications.

Adam explained that most manufacturing processes are organised around fixed conveyors and robotic systems. To vary the specifications of the end product, human operators are typically needed to move product pieces from one assembly process to another. ‘Increasing flexibility requires more people to handle the work pieces and push them around, but this human intervention does not add much value,’ he said.

For that reason large manufacturing companies are keen to deploy mobile robots to transport inventory and product pieces around the factory floor. These autonomous mobile robots (AMRs) are designed to move and operate by themselves, which means that they must be able to perceive their surroundings and react to them. Visual information is crucial to aid navigation and avoid collisions, as well as to enable the robot to perform simple functions such as selecting and picking up the objects that need to be moved.

Initial deployments of these autonomous robots have focused on safety, which is most reliably achieved by fitting them with a row of laser scanners around 20cm off the ground. Such 2D lidar systems can detect the presence of people and other obstacles at distances of up to 20m, although most systems only sense objects in a single horizontal plane near floor level. Omron, which has already deployed more than 2,000 AMRs with customers in Europe, has added a vertical scanner to detect hazards above the ground, such as a raised pallet on a forklift truck.
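
To make the safety behaviour concrete, the sketch below shows the kind of check a protective-stop system might perform on a planar lidar scan. It is a minimal illustration in Python, not Omron's actual logic; the zone dimensions, function names, and scan format are all assumptions.

```python
# Illustrative sketch of a 2D lidar protective-stop check.
# Zone dimensions and scan format are assumptions, not a vendor's logic.
import math

def scan_to_points(ranges, angle_min, angle_increment):
    """Convert a planar scan (ranges in metres, angles in radians)
    into x/y points in the robot frame (x forward, y left)."""
    return [(r * math.cos(angle_min + i * angle_increment),
             r * math.sin(angle_min + i * angle_increment))
            for i, r in enumerate(ranges)]

def must_stop(ranges, angle_min, angle_increment,
              stop_distance=0.5, half_width=0.4):
    """Trigger a protective stop if any return falls inside a
    rectangular zone directly ahead of the robot."""
    for x, y in scan_to_points(ranges, angle_min, angle_increment):
        if 0.0 < x < stop_distance and abs(y) < half_width:
            return True
    return False
```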

The problem with these lidar systems is that they don’t know what the obstacle is, which means that they can’t make decisions about what to do next. ‘There will always be some configuration where we need more information than can be provided by a laser scanner,’ Adam said. ‘We are actively developing vision technology to collect more information about the environment and to improve navigation.’

Omron has the in-house expertise to develop its own vision systems, in part through the acquisition of Microscan – a company specialising in industrial barcode readers – in 2017. While safe and efficient navigation remains the primary motivation, Adam said that vision technology is also being developed to enable AMRs to perform simple operations, such as picking up and dropping off work pieces and inventory. This often requires more accurate positioning of the robot at the pick-up or drop-off point, particularly if items need to be taken to and from a moving conveyor. ‘We need vision systems for that final accuracy, particularly for applications where a robot arm is installed on top of the mobile platform to perform a specific function,’ commented Adam.

But it can be difficult for AMR developers to choose which vision technology to deploy. Conventional 2D cameras generally offer the cheapest solution, but 3D vision provides the depth perception a robot needs to work out, for example, whether to stop in front of an obstacle or change its direction. ‘3D vision in particular is a very dynamic market in terms of the available technologies,’ said Anatoly Sherman of sensor specialist Sick. ‘Each one has pros and cons, so we always have to start by talking to our customers about their specific requirements.’

Key considerations include the field of view, which must be large enough for the robot to see its surroundings in the desired direction while still capturing enough fine detail to enable accurate navigation and allow the robot to perform simple operations. Factors such as the reaction time, frame rate, and size of the robot are also important to consider, as well as whether the application requires an industrial design for 24/7 use.
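
The tension between a wide field of view and fine detail can be quantified: for a fixed pixel count, widening the field of view coarsens the angular resolution, which sets the smallest feature the sensor can resolve at a given distance. The sketch below illustrates the arithmetic with assumed, representative numbers.

```python
import math

def smallest_feature(fov_deg, pixel_columns, distance_m, min_pixels=3):
    """Smallest feature (in metres) that spans `min_pixels` pixels at a
    given distance, for a sensor spreading `fov_deg` of horizontal view
    across `pixel_columns` columns. All values here are illustrative."""
    rad_per_pixel = math.radians(fov_deg) / pixel_columns
    return min_pixels * distance_m * math.tan(rad_per_pixel)

# A 90-degree field of view over 1280 columns resolves features of
# roughly 1.8cm at 5m; doubling the field of view roughly doubles
# that minimum size.
print(smallest_feature(90, 1280, 5.0))  # ~0.018
```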

Robots moving around a manufacturing facility also need to contend with challenging light conditions, including dark areas, highly reflective surfaces, and rapid changes in brightness levels. ‘As an example, 3D time-of-flight systems are very good for resolving details up to five metres away, but any reflectors even 30m away can distort the data and they are not so good for imaging dark objects that are further away,’ continued Sherman. ‘Sometimes the customer would like to have everything in one package, but sometimes that just isn’t possible.’
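
One well-known mechanism behind the reflector problem Sherman describes is range aliasing: a phase-measuring time-of-flight camera has an unambiguous range set by its modulation frequency, and a strong return from beyond that range wraps around to a much shorter apparent distance. The sketch below illustrates the effect with assumed numbers; it is not a description of any particular Sick product.

```python
C = 299_792_458.0  # speed of light in m/s

def unambiguous_range(mod_freq_hz):
    """Maximum distance a phase-measuring ToF camera can report
    without ambiguity: c / (2 * f_mod)."""
    return C / (2.0 * mod_freq_hz)

def apparent_range(true_range_m, mod_freq_hz):
    """A strong return from beyond the unambiguous range wraps around."""
    return true_range_m % unambiguous_range(mod_freq_hz)

# At a 15MHz modulation frequency the unambiguous range is ~10m, so a
# retroreflector at 30m can alias to ~0m and masquerade as a nearby object.
print(unambiguous_range(15e6))     # ~9.99
print(apparent_range(30.0, 15e6))  # ~0.02
```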

Mark Davidson of DreamVu, an early-stage 3D vision company, agreed that no single technology can meet all the requirements for AMRs. ‘There’s no single sensor solution out there,’ he said. ‘The challenge is finding the perfect combination and making sure the different systems work together effectively.’

DreamVu’s omnidirectional optical sensor looks like an upside-down coffee filter, with a series of curved surfaces that capture light for each point in the field of view at multiple vantage points. Powerful algorithms convert the raw data into two RGB panoramas that offer a 360° view over distances of up to five metres, and from these images traditional stereo depth techniques allow distances to be calculated with an accuracy of one per cent for objects that are one metre away.

The PAL omnidirectional image sensor from DreamVu has a series of curved surfaces that capture light from all directions at multiple points on the sensor. This is converted into two RGB panoramic images from which distances can be calculated. Credit: DreamVu
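
The depth calculation DreamVu describes rests on classic stereo triangulation, where depth follows from the disparity between the two panoramas and the uncertainty grows with the square of distance; this is consistent with quoting accuracy at one metre rather than at the five-metre range limit. The sketch below shows the textbook relations with hypothetical camera parameters, not DreamVu's actual calibration.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic stereo triangulation: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def depth_uncertainty(focal_px, baseline_m, depth_m, disparity_err_px=0.25):
    """First-order error propagation: dZ = Z^2 * dd / (f * B).
    Depth error grows quadratically with distance."""
    return depth_m ** 2 * disparity_err_px / (focal_px * baseline_m)

# With an assumed 500px focal length and 5cm baseline, a quarter-pixel
# disparity error gives ~1cm (1%) error at 1m but ~25cm at 5m.
print(depth_uncertainty(500, 0.05, 1.0))  # 0.01
print(depth_uncertainty(500, 0.05, 5.0))  # 0.25
```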

‘Our technology gives AMR robot designers the largest field of view as well as accurate depth information to create the most complete map of the robot’s surroundings,’ said Davidson. But he cautioned that the optimal performance is achieved by combining this 3D information with a lidar system that can reliably provide accurate distances over a longer range. ‘That way we get the accuracy of lidar as well as the situational awareness of a camera,’ he said.
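
A minimal way to picture the lidar-plus-camera combination Davidson describes is a per-bearing merge: trust the lidar where it returns a valid long-range measurement, and fall back on camera depth to fill its blind spots. The sketch below is a deliberately simplified illustration; production fusion stacks weigh sensor noise models rather than simply switching between sources.

```python
def fuse_ranges(lidar_ranges, camera_depths, lidar_max=20.0, camera_max=5.0):
    """Merge two range arrays aligned by bearing (one value per angular
    bin, None where a sensor saw nothing). Ranges are in metres."""
    fused = []
    for lidar_r, cam_r in zip(lidar_ranges, camera_depths):
        if lidar_r is not None and 0.0 < lidar_r <= lidar_max:
            fused.append(lidar_r)   # lidar: accurate and long-range
        elif cam_r is not None and 0.0 < cam_r <= camera_max:
            fused.append(cam_r)     # camera: fills lidar blind spots
        else:
            fused.append(None)      # no reliable measurement
    return fused

# fuse_ranges([12.1, None, None], [4.8, 2.3, None]) -> [12.1, 2.3, None]
```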

Of course, the cameras and sensors are just the eyes of the AMR. Powerful data processing algorithms are needed for the robot to work out where it is, create a map of its surroundings, and follow a set of navigational rules. The ability to make autonomous decisions also requires the addition of machine learning algorithms that enable the robot to evaluate the best course of action.
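
The mapping step mentioned above can be illustrated with the simplest possible structure, an occupancy grid that accumulates evidence wherever range returns land. Real AMRs run full SLAM pipelines that also estimate the robot's own pose; this sketch assumes the pose is already known.

```python
import math

def update_grid(grid, pose, ranges, angle_min, angle_inc, cell=0.05):
    """Accumulate occupancy evidence in a sparse grid (a dict keyed by
    cell index) from one scan taken at a known pose (x, y, heading)."""
    x, y, heading = pose
    for i, r in enumerate(ranges):
        theta = heading + angle_min + i * angle_inc
        hit = (int((x + r * math.cos(theta)) // cell),
               int((y + r * math.sin(theta)) // cell))
        grid[hit] = grid.get(hit, 0) + 1  # more hits -> more likely occupied
    return grid

# Example: grid = update_grid({}, (0.0, 0.0, 0.0), [1.0, 1.0], -0.01, 0.02)
```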

Here, again, there are compromises to be made. ‘We need powerful computer processors to analyse the data and enable the robot to react quickly enough, but we also need to embed them on a mobile platform with limited access to electrical power,’ said Adam. ‘A bigger battery is needed to power a strong vision system with lots of onboard capabilities, but that places limitations on the size and agility of the robot.’

For sensor manufacturers such as Sick and DreamVu, there is a clear preference among AMR developers for cameras with embedded processors. ‘At the software level, customers are always keen to have a plug-and-play device,’ said Sherman. ‘They don’t care about the technology, they just want to know that the robot can navigate in a reliable way. That means we need to provide on-board software to evaluate the visual information and decide which route to take.’

Davidson agreed. ‘While the sensor itself becomes more expensive, customers want more of the workflow to be done within the camera,’ he said. ‘That reduces the workload for the host computer on the AMR, and requires less integration and less synchronisation between the different sensors that are deployed on the robot.’

But a central computing system is likely to be needed to support more advanced AI applications, particularly when fleets of AMRs are moving around a manufacturing facility at the same time. This will require significant computational resources to collect such large amounts of data, analyse them using sophisticated machine learning algorithms, and relay instructions to the robots quickly enough for them to react.

As well as powerful processing capabilities, such deployments demand a high-bandwidth wireless communications channel for sending and receiving information. ‘So far we have had to rely on WiFi, but that has limited bandwidth and supports only limited mobility,’ said Adam. ‘5G technology will be a real game-changer, because it will make it possible to send an order and receive a response much more quickly.’ As a result, Omron now has a partnership with Nokia to develop 5G technology specifically for mobile robots. ‘We already have a few pilot plants testing this technology, and it looks really promising,’ he added.

Omron currently sells around 70 per cent of its AMRs into the automotive sector, although during the pandemic interest has grown in applications such as food and commodities, as well as disinfection. But current deployments remain small in scale, with manufacturers still working out how they can derive most value from an emerging technology that remains relatively expensive.

AMR guided by a Sick 3D sensor. Credit: Sick

‘We are still in the innovation domain,’ said Adam. ‘Our customers know they need mobility in their factories, but they don’t yet know how it will work in practice. They need to develop a reference application they can rely on and that will inform future deployments.’

One early implementation in the food industry has been in the production of coffee capsules. On the assembly line, boxes are generally packed with a single flavour, but manufacturers can generate extra value by creating mixed collections of capsules. ‘An AMR is an ideal solution for these so-called smart-kitting applications, since it can transport capsules from different product lines to the packaging station,’ said Adam.

Another application within the automotive sector is the installation of the dashboard, which is an expensive part of inventory that also offers plenty of options for customisation. ‘Manufacturers can’t afford to have a huge inventory of dashboards, which means they must be produced on demand,’ said Adam. ‘Mobile robots are the right technology to facilitate that flexibility.’

But this is just the start. In the future AMRs could become just as ubiquitous in industrial environments as the fixed robotic machinery that has driven process innovation over the last 20 years. ‘Our customers in the automotive sector are already thinking one step ahead,’ noted Adam, who said that one key function for mobile robots will be transporting a car chassis to several different workstations where different options will be installed. ‘For electric vehicles in particular it’s very important to be able to move the car body from one location to another,’ he said. ‘They have different electrical connectors that in turn need different parts and pieces of equipment, and manufacturers want to use this extra mobility to install the connector, the engine and the battery, as well as the optional extras.’

Adam imagines that, once equipped with wheels and their own navigation systems, the cars of the future will be able to travel through the manufacturing facility by themselves – visiting different locations to collect the right finishing touches such as the windows, the dashboard, and the interior carpets and seats. ‘The car itself will become part of the process,’ he mused. ‘It will become a kind of robot.’


