
More than a machine

Robots crop up a lot in science fiction. Having your own personal robot to help you round the house could be what life’s like in the future. Of course, it’s when robots begin to think for themselves that things generally take a turn for the worse – need I remind you of The Terminator or Blade Runner or The Matrix? The word robot, from robota meaning serf labour in Czech, was first introduced by Czech writer Karel Capek in his play R.U.R. (Rossum’s Universal Robots), published in 1920. Science fiction writer Isaac Asimov is credited with coining the term robotics in his 1941 short story, Liar!. More recently, Pixar Animation’s WALL-E brought to the screen a loveable garbage compactor robot programmed to clean up the mountains of rubbish littering planet Earth. WALL-E is particularly expressive as a robot, albeit a fictional computer-generated one, and we, the audience, get a pretty good idea of his emotions throughout the film.

Meanwhile, back in the real world, researchers at Carnegie Mellon University, Pittsburgh, US, are conducting studies on how humans interact with a real robot platform, through their Snackbot robot. Snackbot is a social mobile robot designed to deliver (you guessed it) snacks to people around the university campus. It is a test bed for studying social interaction between robots and humans, with the robot holding dialogues and performing head and face gestures to elicit natural human interaction – instead of typing at a keyboard, one speaks to it.

The project is a collaboration between The Robotics Institute and the Human Computer Interaction Institute (HCII) and Design departments. Researchers at the latter (Jodi Forlizzi, HCII and Design; Min Kyung Lee, HCII; and Wayne Chung, Design) are interested in studying the human side – how people interact with the robot – while Dr Paul Rybski and his team at the Robotics Institute are interested in the technology development and understanding what it will take to make the robot fully autonomous. ‘That’s not an easy task at all,’ Rybski says, ‘and we have to have a lot of perceptual work and situation awareness – where is the robot in the environment, where are the people, is the robot speaking to a person, does it understand what was said and how that corresponds to the current dialogue question.’

The concept of Snackbot originated in 2007/08, with the first end-to-end system deployed in semi-autonomous trials in autumn 2009. These trials were designed to gather data on human responses, so a human operator supplied the robot with high-level controls. ‘The autonomy is still at an early stage of development and is an independent research project from the human subject experiments at this time,’ comments Rybski. ‘The navigation was fully autonomous, but the human interaction component was carried out manually,’ he says. By autumn 2010, Rybski’s team hopes to have an improved version that can go out and interact with people for further trials and studies.

Snackbot has a Point Grey Bumblebee2 stereo camera mounted in the head, which is used as a standard camera, but also to generate disparity data, providing the distance from the camera to each point in space. A Point Grey Dragonfly2 camera with a wide-angle lens giving a 190° field of view is also mounted in the top of the head to provide peripheral vision (a 360° field of view at the horizon).
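The article does not go into the stereo maths, but the principle behind the disparity data is simple: for a rectified stereo pair, depth is inversely proportional to disparity. The sketch below shows that relationship in Python; the focal length and baseline values are illustrative placeholders, not the Bumblebee2’s actual calibration.

```python
# Minimal sketch: recovering depth from stereo disparity.
# The focal length and baseline below are illustrative values,
# not the Bumblebee2's actual calibration.

import numpy as np

def disparity_to_depth(disparity, focal_length_px=800.0, baseline_m=0.12):
    """Convert a disparity map (in pixels) to a depth map (in metres).

    depth = focal_length * baseline / disparity, with zero-disparity
    pixels (no stereo match) mapped to infinity.
    """
    disparity = np.asarray(disparity, dtype=float)
    depth = np.full_like(disparity, np.inf)
    valid = disparity > 0
    depth[valid] = focal_length_px * baseline_m / disparity[valid]
    return depth

# Example: a pixel with 40 px disparity lies at 800 * 0.12 / 40 = 2.4 m.
print(disparity_to_depth(np.array([[40.0, 10.0, 0.0]])))
```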

The Bumblebee2 stereo camera is used for object detection and 3D object learning. Here, a laser from German company Sick is used to show the robot the position of the object. The robot can then drive up to the object – using the data from the stereo camera to gauge the distance – and circle it, learning the views from all sides. The robot would then recognise the object if it came across it in the environment. Through driving the robot around its environment, Snackbot learns a floor map of the area from the data it collects. During operation, it uses a stochastic state estimator called a particle filter to localise itself in the map, based on its sensor readings.
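The article names the estimator but not the implementation. For readers unfamiliar with particle filters, the sketch below shows the basic predict, update and resample cycle on a toy one-dimensional corridor; the motion noise, sensor model and landmark are hypothetical placeholders rather than anything from Snackbot.

```python
# Minimal particle-filter sketch for localisation on a known map.
# The motion noise, sensor model and landmark here are hypothetical
# placeholders, not Snackbot's actual implementation.

import numpy as np

rng = np.random.default_rng(0)

N = 500                                   # number of particles
particles = rng.uniform(0, 10, size=N)    # positions along a 10 m corridor
weights = np.ones(N) / N

def predict(particles, control, noise_std=0.1):
    """Move every particle by the commanded motion plus random noise."""
    return particles + control + rng.normal(0, noise_std, size=particles.shape)

def update(particles, weights, measurement, landmark=5.0, sensor_std=0.3):
    """Reweight particles by how well they explain a range reading
    to a known landmark."""
    expected = np.abs(landmark - particles)
    weights = weights * np.exp(-0.5 * ((measurement - expected) / sensor_std) ** 2)
    weights += 1e-300                      # guard against all-zero weights
    return weights / weights.sum()

def resample(particles, weights):
    """Draw a fresh particle set in proportion to the weights."""
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.ones(len(particles)) / len(particles)

# One cycle: the robot drives 0.5 m, then senses the landmark 3.2 m away.
particles = predict(particles, control=0.5)
weights = update(particles, weights, measurement=3.2)
particles, weights = resample(particles, weights)
print("estimated position:", np.average(particles, weights=weights))
```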

Other work involved teaching the robot how to recognise people. A point cloud of data from the stereo camera is used to localise the person’s arms, torso and head. A bank of different features – skin tone detection, colour histograms, the person’s height and other aspects of their size and shape in the image plane – is then fed to a learning system to generate a model that can be used to recognise the person at a later date. The concept of person recognition was taken a step further, with the same learning aspect used to recognise whether the person was paying attention to the robot through body pose. ‘This is quite a challenge, as the person could be anywhere in the robot’s field of view, with changing distances and orientations,’ comments Rybski.
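The general pattern described here – compute a feature vector per detection and hand it to a learning system – can be sketched as follows. The specific features, the synthetic training data and the random forest classifier are illustrative assumptions, not the CMU team’s actual pipeline.

```python
# Sketch of feature-based person recognition: build a feature vector per
# detection (skin tone score, colour histogram, apparent height, shape)
# and train a classifier on labelled examples. The features, data and
# classifier below are illustrative, not the CMU pipeline.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

def extract_features(detection):
    """Flatten a detection's cues into one feature vector."""
    return np.concatenate([
        [detection["skin_tone_score"]],
        detection["colour_histogram"],       # e.g. a 16-bin hue histogram
        [detection["height_px"]],
        [detection["aspect_ratio"]],
    ])

def fake_detection(person):
    """Synthetic stand-in for a labelled detection of a known person."""
    base = 0.3 if person == "alice" else 0.7
    return {
        "skin_tone_score": base + rng.normal(0, 0.05),
        "colour_histogram": rng.dirichlet(np.full(16, 2.0)),
        "height_px": (300 if person == "alice" else 340) + rng.normal(0, 10),
        "aspect_ratio": 0.4 + rng.normal(0, 0.02),
    }

people = ["alice", "bob"]
X = np.array([extract_features(fake_detection(p)) for p in people * 50])
y = np.array(people * 50)

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(model.predict([extract_features(fake_detection("bob"))]))
```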

Rybski remarks: ‘Integration for a real system is a huge challenge at all levels. Available computation, available network bandwidth, available throughput of the various data channels, all have to be taken into consideration.’ The vision software was originally designed to be standalone and to operate on log files, so compromises had to be made for it to run in real time: ‘There isn’t enough computation onboard the robot to have the luxury to operate at several seconds of computation per frame of video, for instance,’ he says.

Industrial robots

Robots used in academic circles are very different to those used in industry, although system engineers in both can be faced with integration challenges. Kyle Voosen, group manager for industrial measurement and control at National Instruments (NI), comments: ‘Controlling the robot is relatively easy to do; the difficult part is integrating all the other systems.’ He is speaking about a robotic cell, engineered by Italian company ImagingLab, for automated packaging on a cosmetics production line.

Carnegie Mellon University’s Snackbot.

The system was built for Vetraco, which produces assembly and packaging lines for many of the world’s cosmetics manufacturers. It is composed of two Denso robots with two vision systems, picking face powder brushes out of a part feeder and placing them in the correct packaging slots. The main purpose of the vision is to guide the robot, although it is also used to check the assembly of the final product.

NI’s LabView was used as the software programming environment, and drivers for Denso robots (built by ImagingLab) are now available, so that the robot becomes a slave to a single LabView program. ImagingLab has developed robotics libraries for Mitsubishi, Denso and Kuka robots using LabView and is in the process of developing libraries for other robot brands, including Stäubli and Epson. ‘The idea is to handle virtually any robot directly from the LabView environment,’ states Ignazio Piacentini, CEO of ImagingLab.

ImagingLab has developed other robotic cells using LabView, one of which is for the automated assembly of terminal blocks (used in electrical equipment to connect wiring). The system was composed of a rotating table containing the terminal block components (terminal blocks are made up of a plastic shell with metal contacts and springs), with five Mitsubishi Scara robots situated around it loading the parts.

There are three components to the machine: the vision system, the robot, and a flexible feeding platform called Anyfeed – produced by Swiss company FlexFactory. ‘To integrate the three components of this machine in a conventional way, i.e. separate programming environments for the robot, the vision system, and the feeder, would be much more difficult to achieve,’ explains Piacentini. ‘We are using LabView as a single programming environment, so each component becomes part of the larger machine. Therefore, a more efficient code can be written so that the three components (robot, vision and feeder) are more intimately connected.’

The vision system images a population of parts on the feeding platform and identifies those in the correct orientation for the robot to handle. It then passes the coordinates of each part to the robot. If no parts are in the correct position, the platform is shaken under computer control until more parts become available. The vision system also carries out a quality control inspection of the parts to ensure they are within specification and dimensionally accurate.
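As a rough illustration of the loop just described, the sketch below coordinates a vision system, feeder and robot from a single program. The Vision, Feeder and Robot interfaces and their method names are hypothetical; the real cell is programmed in LabView against ImagingLab’s drivers.

```python
# Sketch of the vision-guided feeding loop described above. The vision,
# feeder and robot interfaces and their method names are hypothetical
# placeholders, not the ImagingLab/LabView implementation.

def run_feeding_cycle(vision, feeder, robot, parts_needed, max_shakes=20):
    """Pick correctly oriented, in-spec parts until the batch is filled,
    shaking the feeder whenever no usable part is visible."""
    picked = 0
    shakes = 0
    while picked < parts_needed:
        candidates = vision.locate_parts()          # parts in a pickable orientation
        usable = [p for p in candidates if vision.passes_inspection(p)]
        if not usable:
            if shakes >= max_shakes:
                raise RuntimeError("feeder exhausted before batch was filled")
            feeder.shake()                          # redistribute / flip the parts
            shakes += 1
            continue
        part = usable[0]
        robot.pick(part.x, part.y, part.angle)      # coordinates from the vision system
        robot.place_in_slot(picked)
        picked += 1
    return picked
```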

A robotics cell for the assembly of terminal blocks. Image courtesy of ImagingLab.

According to Piacentini, the feeder is a key component of the machine. It allows parts to be shaken, advanced or brought back, iteratively, until a sufficient number of them are correctly positioned. ‘It’s not just pure pick-and-place,’ says Piacentini. ‘Imaging is used to monitor and control a population of parts that is in excess of the number needed for the product batch and out of a random population of parts the machine extracts the correct one. If the components are upside down, for instance, the feeder table can shake them and make them jump until statistically there are a sufficient number of correctly orientated parts.’

Both the cosmetics and the terminal block assembly systems had to be flexible – the terminal block assembly machine could produce more than 40 different models of terminal block from different components without reconfiguring the robots.

Voosen, at NI, feels that flexible manufacturing is one area where the use of industrial robots is growing.

Mark Williamson, sales and marketing director at Stemmer Imaging, identifies simplifying the conventional robotics setup (a camera connected to a vision system controller that talks to a robot controller) by removing the need for the vision controller as one way to make integrating vision and robots easier. Stemmer Imaging’s Vision for Robotics (V4R) allows a camera to be connected directly to Kuka robots via GigE Vision, with the vision algorithms run on the robot controller. However, V4R is tied to the Kuka controller and to very specific applications. Another option Williamson identifies is to use a smart camera, such as Dalsa’s Boa, which sends coordinates directly to the robot controller.
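To give a flavour of the smart-camera approach, the sketch below pushes pick coordinates from a camera to a robot controller over a plain TCP socket. The address, port and message format are invented for illustration; real systems use the camera and robot vendors’ own command interfaces or a fieldbus.

```python
# Illustrative only: a smart camera pushing pick coordinates to a robot
# controller over a TCP socket. The address, port and message format are
# invented for this sketch; real systems use the vendor's own protocol.

import socket

def send_pick_coordinates(x_mm, y_mm, angle_deg,
                          host="192.168.0.10", port=30000):
    """Format a pick result as a simple ASCII message and send it."""
    message = f"PICK,{x_mm:.2f},{y_mm:.2f},{angle_deg:.2f}\n"
    with socket.create_connection((host, port), timeout=2.0) as conn:
        conn.sendall(message.encode("ascii"))

# Example call (requires a controller listening at the address above):
# send_pick_coordinates(152.4, 87.1, 12.5)
```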

Robots for inspection

The two most common scenarios combining robots with vision are pick-and-place and inspection. According to Williamson, having the camera control what the robot does is typical of pick-and-place, in which the location and type of product are identified and coordinate instructions are sent to the robot. ‘For inspection tasks this would typically be reversed,’ he says. ‘The robot would control the vision system instructing the vision system when to inspect, and which inspection to run.’
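A minimal sketch of the inspection arrangement Williamson describes, in which the robot decides when the vision system inspects and which inspection recipe it runs. The class and method names are hypothetical, not a specific vendor API.

```python
# Minimal sketch of inspection mode: the robot drives the sequence and
# triggers the vision system, telling it which inspection recipe to run.
# Class and method names are hypothetical, not a specific vendor API.

class InspectionCell:
    def __init__(self, robot, vision):
        self.robot = robot
        self.vision = vision

    def inspect_part(self, poses_and_recipes):
        """Move the part through a list of (pose, recipe) pairs and collect
        a pass/fail result from the vision system at each one."""
        results = {}
        for pose, recipe in poses_and_recipes:
            self.robot.move_to(pose)                    # present the part to the camera
            results[recipe] = self.vision.run(recipe)   # robot chooses when and what to inspect
        return all(results.values()), results
```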

One example of using robotics for inspection is a machine built by Italian company SIR to inspect knives during mechanical resharpening. The robot positions the knife under the vision system in order to acquire its shape, and then each side of the blade is ground separately. The vision aspect of the robotics cell uses a Cognex VPM-8501 image acquisition card connected to a camera, with Cognex’s VisionPro software carrying out the image processing.

Johan Hallenberg, senior applications engineer at Cognex, says: ‘There needs to be a common communication protocol used by both the robot and the vision system.’ Cognex’s solution is Cognex Connect, a package of commonly used communication protocols that, Hallenberg says, greatly simplifies the set-up of communication between the camera and the robot. Cognex Connect is integrated into the company’s In-Sight Micro smart camera, which can be mounted on a robot arm.

Piacentini of ImagingLab says that, currently, only a small percentage of robots use vision. ‘Robotics driven by vision can carry out more complex tasks and there is no need to palletise the parts or to put them in a specific place, as they can be identified at random,’ he says. ‘There is plenty of room for advanced applications of small accurate robots, not only in automated assembly and packaging, but also in emerging areas, such as biomedical applications and food processing.’ With the addition of vision, and as the technology for integrating vision with robotics matures, the variety of applications that employ robotics will only increase.


