
Vision of the future

Professor Rüdiger Dillmann and his team at the University of Karlsruhe in Germany have got a new friend. He’s been helping them out with odd jobs in the kitchen, fetching bowls and loading the dishwasher. You can even strike up a conversation with him, if you ask him the right questions.

Armar III is a robot. He’s a step ahead of contemporary humanoid robots because of an advanced vision system that provides him with the same kind of visual detail humans are used to. He’s one of many automated systems, used in everything from space exploration to food processing, striving towards the ‘holy grail’ of robotics: guiding a machine through a randomly arranged environment using nothing more than visual input.

Armar’s vision is advanced because he can focus his eyes on one particular job while still being aware of what’s going on in the room around him. It’s how humans manage to walk while carrying a drink without tripping up, and it’s what allows Armar to perform tasks in a more natural way. To achieve this, each of his eyes contains two DragonFly cameras from Point Grey Research: one providing fine focus on the task at hand, the other a wider picture of the surroundings.

The DragonFly cameras were chosen because they are small enough to fit two in each eye. ‘We wanted the robot to have human-like size and shape, but not many FireWire cameras are small enough to put two in the eye,’ says Tamim Asfour of the University of Karlsruhe. The FireWire interface transmits images at a rate that can keep up with the real-time motion of the robot, solving a problem common to many robotic systems integrated with vision.

It’s an innovative approach, but Armar is a long way from the sci-fi dream of robotic servants that would put an end to housework. He’s still somewhat clumsy and is only allowed to handle plastic cups at the moment. Asfour hopes to reduce both the size and weight of the next generation of robots. For now, Armar is proving a useful experiment in how artificial systems can learn from their environment.

A robotic system of more practical use has been ensuring the quality control of hamburger buns on the production line. In common with many robotic applications, this uses the latest in 3D technology to help a robotic arm pick the buns up, even when they are randomly assorted on a conveyor belt. It’s an improvement on previous systems, where the buns needed to be lined up neatly in rows for the robot to know where to grip.

This system, which includes IVC-3D cameras from Sick IVP, uses a method called laser triangulation to build an accurate picture of the bun. A line of laser light is projected across the conveyor belt. When it falls on a bun, the laser line appears distorted from the camera’s viewpoint, and image processing algorithms can infer 3D information about the bun from that distortion.
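The geometry behind this is simple enough to sketch. Below is a minimal Python illustration of the principle, not Sick’s actual processing: assuming the camera looks straight down at the belt while the laser sheet arrives at a known angle from vertical, a raised surface shifts the line sideways in the image, and that shift converts directly into height. All names and numbers here are illustrative.

```python
import numpy as np

def height_profile(baseline_px, line_px, mm_per_px, laser_angle_deg):
    """Convert the lateral shift of a laser line into a height profile.

    Assumes the camera looks straight down at the belt and the laser
    sheet strikes it at laser_angle_deg from vertical, so a surface
    raised by h shifts the line sideways by h * tan(angle).
    """
    shift_mm = (line_px - baseline_px) * mm_per_px  # shift vs. flat belt
    return shift_mm / np.tan(np.radians(laser_angle_deg))

# One scan line: the laser sits at column 240 on the empty belt and is
# displaced to these columns where it crosses the bun.
observed = np.array([240.0, 255.0, 268.0, 262.0, 241.0])
print(height_profile(240.0, observed, mm_per_px=0.2, laser_angle_deg=30.0))
```

Sweeping such scan lines along the belt as it moves builds up the full 3D profile of each bun.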



The IVC-3D from Sick IVP inspects buns by forming 3D profiles using laser triangulation.

This provides information about defects, so the system knows which buns are faulty. It also provides the coordinates of each bun, guiding a robotic arm to the correct position so a faulty bun can be picked up and placed in the rejects bin.

‘3D imaging is becoming more popular in picking applications,’ says Anders Murhed, manager of business development at Sick IVP. Because this kind of imaging does not depend on the contrast between an object and its background, and because it delivers detailed information about each object’s position, objects can still be found even when they are randomly assorted, piled on top of one another, or lying at any orientation.

This would be particularly useful in automotive applications, where lots of components may be placed in the same ‘bin’ for later use. Current image processing algorithms aren’t yet up to recognising so many different kinds of objects, but progress is being made. Kai-Udo Modrich at the Fraunhofer Institut für Produktionstechnik und Automatisierung has been working on a new algorithm that could allow this, which he presented at the International Robots and Vision conference in Chicago this year.

The method relies more heavily on the actual 3D image data than previous attempts, and is reportedly more accurate. The program tries to pick out the ‘primitives’ or simple 3D components, such as cylinders, that make up the shape of the object.  This is usually enough to determine the best way to grip a simple object. For more complex objects, it would compare this information with the original CAD drawings of the objects, to help identify them and determine the best way to handle them. 
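As a rough illustration of that last step for a simple object, the sketch below derives a two-finger grasp from an already-fitted cylinder primitive: the fingers close perpendicular to the axis, and the fitted radius sets the gripper opening. This is a generic Python example with invented numbers, not Modrich’s algorithm.

```python
import numpy as np

def grasp_for_cylinder(axis_point, axis_dir, radius_mm, clearance_mm=5.0):
    """Derive a simple two-finger grasp from a fitted cylinder primitive.

    axis_point, axis_dir : a point on the cylinder axis and its direction,
                           as produced by the primitive-fitting step
    radius_mm            : fitted cylinder radius
    """
    axis_dir = axis_dir / np.linalg.norm(axis_dir)
    # Any vector not parallel to the axis yields a perpendicular approach.
    helper = np.array([0.0, 0.0, 1.0])
    if abs(np.dot(helper, axis_dir)) > 0.9:
        helper = np.array([1.0, 0.0, 0.0])
    approach = np.cross(axis_dir, helper)
    approach /= np.linalg.norm(approach)
    return {
        'grasp_point': axis_point,                     # close fingers around the axis
        'approach': approach,                          # move in perpendicular to the axis
        'opening_mm': 2.0 * radius_mm + clearance_mm,  # finger spacing
    }

print(grasp_for_cylinder(np.array([410.0, 55.0, 120.0]),
                         np.array([0.0, 1.0, 0.0]), radius_mm=22.5))
```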

Previous attempts matched every point of the 3D image data to the CAD drawings of the object in different orientations, rather than singling out a few key features, putting extra strain on the processor. The new algorithm is already being used in some automotive applications.

It’s not just delicate operations on the factory floor that are benefiting from 3D vision for robotic applications. Vision Solutions International is currently developing a system to spray-paint camouflage on US military vehicles that have been damaged in battle, or that need to be redesigned for a different environment.

The system uses Matrox smart cameras to recognise what kind of model it is painting, including any accessories such as an extended cab or a soft top. The method uses a mixture of structured lighting and stereo vision to find the 3D coordinates of features on the vehicle by calculating the geometric relationships between the light source, the reflected light, and the camera. 
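The core calculation is classical triangulation: each observed feature defines a viewing ray, and the 3D position lies where rays from two viewpoints (two cameras, or a camera and the structured light source) come closest to meeting. A minimal Python sketch of that geometry follows; the routine is standard textbook material and the numbers are invented, so it should be read as an illustration rather than Vision Solutions’ actual code.

```python
import numpy as np

def triangulate(o1, d1, o2, d2):
    """Return the 3D point closest to two viewing rays.

    o1, o2 : ray origins (camera centres, or the light source)
    d1, d2 : ray directions towards the observed feature
    With calibrated geometry the rays nearly intersect at the feature;
    we return the midpoint of their closest approach.
    """
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    b = np.dot(d1, d2)
    r = o2 - o1
    # Parameters minimising the distance between the two rays.
    t1 = (np.dot(r, d1) - b * np.dot(r, d2)) / (1.0 - b * b)
    t2 = b * t1 - np.dot(r, d2)
    return 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))

# Feature seen from two calibrated cameras 300mm apart (units in mm).
print(triangulate(np.array([0.0, 0.0, 0.0]), np.array([0.1, 0.0, 1.0]),
                  np.array([300.0, 0.0, 0.0]), np.array([-0.2, 0.0, 1.0])))
```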

Most similar systems require three separate calibration steps: correcting for aberrations in the optics of the visual equipment; establishing the position of the light source with respect to the camera, which ensures that the geometric calculations are correct; and aligning the coordinate system of the robotic motion with that of the vision system.

Vision Solutions, however, has combined these into a single step that lasts just a couple of minutes. The cameras follow an LED target through a series of predetermined positions, from which the system calculates an accurate coordinate system for all future measurements.
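A standard way to recover such a transform from matched target positions is a least-squares rigid fit, often called the Kabsch or SVD method. The sketch below shows the idea in Python; it is a generic textbook routine offered as an assumption about how this class of calibration works, not Vision Solutions’ proprietary algorithm.

```python
import numpy as np

def fit_rigid_transform(cam_pts, robot_pts):
    """Least-squares rotation R and translation t with robot ~= R @ cam + t.

    cam_pts, robot_pts : (N, 3) arrays of the LED target as measured by
    the cameras and as positioned by the robot, in matching order.
    """
    cc, rc = cam_pts.mean(axis=0), robot_pts.mean(axis=0)
    H = (cam_pts - cc).T @ (robot_pts - rc)   # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, rc - R @ cc
```

Once R and t are known, every camera measurement can be mapped straight into robot coordinates for the spray head to follow.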

Like many robotic applications, this one relies on smart cameras, an approach Vision Solutions considers significant: ‘Smart cameras bring a couple of things to the party. They increase throughput, because you can parallelise the processing in each camera. They also improve the scalability of the system, which we hadn’t anticipated.’ The smart cameras are joined in a network, so they can be added or removed more easily to tailor the system to each application.

In addition to smart camera technology, the acceptance of Ethernet communication for vision systems has also increased the flexibility of robotic applications. Ethernet is encouraging machine vision suppliers to talk the same language as robotics manufacturers, which eases the integration of the two systems.

In the past, each robotics manufacturer would build their components around a certain brand of camera, meaning that customers rarely had a choice of which machine vision equipment would best suit their application.

‘Four years ago, we were hitting our heads against brick walls as customers could only use the machine vision the robotics manufacturers supplied,’ says Mark Williamson, director of Firstsight Vision.

Now, both robotics and machine vision organisations are developing open interface standards, meaning that the communication protocols don’t need to be programmed for each type of vision system. To further improve communication, Elau is even providing a control box with a segmented processor: one part for machine vision, and the other to control the robotic motion.
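No single wire format is implied here, but the flavour of such integration is easy to sketch: the vision system computes a pick point and pushes it to the robot controller over the network. The address, port and JSON message layout below are entirely illustrative assumptions, standing in for whatever open interface the two vendors agree on.

```python
import json
import socket

def send_pick(host, port, x, y, z, angle_deg):
    """Send one pick-point message to a robot controller over Ethernet.

    The message format is a made-up example; real installations follow
    the open interface standards the vendors have agreed on.
    """
    msg = json.dumps({'cmd': 'pick', 'x': x, 'y': y, 'z': z,
                      'angle': angle_deg}).encode() + b'\n'
    with socket.create_connection((host, port), timeout=2.0) as sock:
        sock.sendall(msg)

# e.g. send_pick('192.168.0.50', 5000, 412.0, 55.3, 120.0, 30.0)
```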

With all these advances in technology, it would be easy to believe that vision-guided robotics will soon be ubiquitous in industrial manufacturing. Jarrod Bichon, vice president of robot integrators RobotWorx, however, says that in many situations ‘our group recommends not to use vision’. Many industrial processes are simply too dirty, he says, risking damage to the cameras, and the lighting in most factory environments is too poor for reliable imaging.

As Armar III shows, robotics has improved enormously, but there is still a long way to go. What isn’t in doubt is that vision is proving more popular all the time, and remains a useful tool for many applications. ‘Vision does add intelligence to the robot,’ says Bichon. ‘In many ways vision technology is growing, and it’s being employed more and more on a daily basis.’


