Robots with eyes
With demand for vision guidance from system integrators growing, and the price of component parts falling all the time, vision and robot integration has gained increasing acceptance within the machine vision industry. It provides accuracy, greater automation and increased efficiency, but at the cost of greater computing power, infrastructure changes and the need for training and specialist knowledge.
Robot integration involves combining machine vision with mechanical motion to provide automated systems. This can have many different applications, such as in agriculture and the automotive industry, with various levels of sophistication, from vision assembly and conveyor tracking, where vision simply aids the production line, to 3D and small-lot assembly, with tightly integrated, ‘near-human’ vision.
The simplest of these would be a single-axis robot with a single area-mount camera and no conveyor tracking. This kind of system could be applied in medicine and the automotive industry, for simple pick-and-place mechanisms, small-part palletising, and screw driving. Next up would be a system involving a four-axis robot, still with a single camera, with simple conveyor tracking and a vibration feeder. This would be used in electronics and telecommunications; possible applications include tracking pick-and-place mechanisms, flexible feeding and precision assembly.
The more complex applications have greater precision, and use multiple area-mount and line scan cameras (possibly with one mounted on a six-axis robot), together with multiple high-speed conveyors and multiple mechanisms. Due to the greater level of precision, these are suitable for use in food and pharmaceutical packaging applications, flexible part manufacturing, and multi-product assembly. They can give greater speed, which is necessary when producing lower-value parts.
There are many difficulties when developing such schemes – as Jan-Philippe de Broeck, of Adept Technology, explains: ‘Among them is the transformation of the vision coordinates to 3D world coordinates that the robot can use to pick parts, as well as the requirement for communication efficiency and synchronisation between the vision system and the robot. The physical link is important, but most critical is the software that must be able to reliably send multiple part locations (and vision inspection results) to the robot controller in a very short amount of time to enable the highest throughput. Information needs to be sent from the vision system to the robot controller so that appropriate movement instructions can be carried out in time and with great accuracy.’
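The transformation de Broeck mentions can be illustrated with a minimal 2D sketch. The rotation angle, scale and offset below are hypothetical placeholder values; a real cell would obtain them from calibration, and a full 3D system would use 4x4 homogeneous transforms.

```python
import math

def make_camera_to_robot(theta_deg, scale, tx, ty):
    """Build a 2D homogeneous transform mapping pixel coordinates to
    robot world coordinates: rotate by theta, scale from pixels to mm,
    then translate to where the image origin sits in the robot frame."""
    th = math.radians(theta_deg)
    c, s = math.cos(th), math.sin(th)
    return [[scale * c, -scale * s, tx],
            [scale * s,  scale * c, ty],
            [0.0,        0.0,       1.0]]

def pixel_to_world(T, u, v):
    """Map a part located at pixel (u, v) to robot (x, y) in mm."""
    x = T[0][0] * u + T[0][1] * v + T[0][2]
    y = T[1][0] * u + T[1][1] * v + T[1][2]
    return x, y

# Hypothetical calibration: axes aligned, 0.5 mm per pixel,
# image origin at (100, 200) mm in the robot frame.
T = make_camera_to_robot(0, 0.5, 100, 200)
x, y = pixel_to_world(T, 40, 60)   # part seen at pixel (40, 60)
```

With these numbers the part at pixel (40, 60) maps to (120, 230) mm, a pose the robot controller can move to; the communication problem de Broeck describes is then a matter of sending many such poses quickly and in sync with the line.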
The automotive industry is one of the key areas to benefit from robot integration. Bin picking is one such example, in which a 3D vision-guided robot uses a camera to identify randomly placed parts in a pick-up area. The robot then picks up a part and can transfer it to the next step on the production line. According to Adil Shafi of Shafi Inc, who has helped develop such a system, this level of sophistication ‘is the holy grail of robotics’.
Processes like bin picking are so precise, with accuracy of up to 0.1mm, that they practically eliminate the need for humans on the factory floor. Humans are the slowest part of the manufacturing process, so vision-guided robotics can speed up production significantly.
Rising labour costs add a further incentive. As Shafi says: ‘In America, the average labour costs are $20 an hour. Using vision and robotics, the cost is $3 to $5 an hour. This is very attractive in countries with high labour costs. It allows them to keep manufacturing in their own country, while still being able to compete in the world market. What’s more, with robots, there are no costs of worker injuries and there are health benefits.’
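Taking Shafi's hourly figures, a back-of-the-envelope payback calculation looks like this. The installed system cost and the annual hours are hypothetical assumptions for illustration only, not figures from the article.

```python
labour_rate = 20.0        # $/hour, US average (Shafi's figure)
robot_rate = 4.0          # $/hour, midpoint of Shafi's $3-5 range
system_cost = 250_000.0   # hypothetical installed cost of a vision-guided cell
hours_per_year = 2_000 * 2  # hypothetical: two shifts, 2,000 hours each

saving_per_year = (labour_rate - robot_rate) * hours_per_year
payback_years = system_cost / saving_per_year
```

Under these assumptions the cell saves $64,000 a year and pays for itself in roughly four years, before counting the injury and health savings Shafi mentions.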
An engine head is robotically removed from the shipping container and assembled to the block at Ford.
Ford uses vision-guided robots for high-precision tasks that are impossible for humans. For example, cylinder heads are highly sensitive and must be kept immaculately clean. In the past, high-cost precision-shipping containers had to be used while fitting the heads to the blocks. Now, precision automation makes this possible with much cheaper containers that can hold the parts in less precise positions. Ford also uses vision-automated vehicles to transport parts, and load and unload racks.
The use of vision-guided robotics is not confined to the automotive industry – you can even see an advanced application of robot integration in practice in the rose gardens of The Netherlands. Agrotechnology and Food Innovations is currently testing a fully automatic robotic rose harvester. The machine includes three robots and five cameras. It makes use of both 2D and 3D vision to locate the roses in eight-metre-long gutters, and to decide where best to cut each rose before actually doing the job. The third robot then retrieves the cut rose.
A rose worker in the Netherlands, using Agrotechnology and Food Innovations' robotic vision
Robot and vision integration is not just limited to vision-guided motion. Aptúra has developed a new method for quality control of sand cores in the engine block casting process, in which a robot transports the parts and presents them to a number of cameras that capture images of each part and check for abnormalities. According to David Dechow, of Aptúra Machine Vision Solutions: ‘The advantages of this are that it is fully automated, with limited human intervention. It does not necessarily reduce labour cost, but it does increase productivity, as it ensures error detection. If an error were to remain undetected the whole thing would have to be scrapped.’
Vision servoing is another technique, and is currently at the cutting edge of robot integration. ‘The idea is wonderful,’ says Dechow. ‘A camera watches and guides a robot to a moving object, without encoders or any other sensors.’ Although it is hoped that it will have many applications in the near future, it has not fully matured as a technology and, as such, is not widely in use yet.
Valerie Bolhouse, automation systems specialist at Ford, agrees. ‘Vision servoing would be the biggest thing yet to come, and we would see a lot of assembly applications opening up. At the moment it is very expensive to accelerate, then stop, then accelerate the assembly line while the heavy parts are fitted.’
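The idea Dechow and Bolhouse describe can be sketched as a proportional control loop on the image-space error: the camera observes where the tracked feature is relative to the target, and the robot is commanded a fraction of that error each cycle. This is a hypothetical illustration of the principle, not any vendor's implementation, and the gain and iteration count are arbitrary.

```python
def servo_step(target_px, feature_px, gain=0.4):
    """One vision servoing step: command motion proportional to the
    image-space error between target and observed feature."""
    ex = target_px[0] - feature_px[0]
    ey = target_px[1] - feature_px[1]
    return gain * ex, gain * ey   # commanded correction, in pixels here

# Simulated loop: the camera 'watches' the feature converge on the target,
# with no encoders or other sensors in the control path.
feature = (0.0, 0.0)
target = (100.0, 50.0)
for _ in range(20):
    dx, dy = servo_step(target, feature)
    feature = (feature[0] + dx, feature[1] + dy)  # robot moves, feature follows
```

Each cycle shrinks the error by the gain factor, so the feature settles on the target without the line ever needing to stop, which is why Bolhouse expects it to open up assembly applications with heavy parts.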
Although the development of such schemes has been blooming in recent years, this technology hasn’t always been accepted by the machine vision industry. ‘The development, and reliability, has increased dramatically in the last few years,’ says Shafi. ‘The ’90s were very disappointing. However, the internet has increased development by giving factory workers computer literacy, so training and support are much easier.’
Financial concerns are obviously a big factor. Integration has been estimated to account for 18 per cent of production costs, currently the largest single share. However, this will not always be the case: according to an AIA market study, the average system price is falling rapidly over time, meaning robot integration will become more affordable for smaller companies. With labour costs growing all the time, it is an increasingly appealing option.
According to David Dechow, the increase in computing power and the development of smart camera architectures, which cut component costs, have been big factors in its success. ‘And of course, successful applications created a growing recognition that use of machine vision can be successful.’ Smart cameras also facilitate robot integration by reducing the number of wires needed to output results, as the processing is built into the camera alongside the sensor.
Whatever the challenges, it seems inevitable that with the current trends of increased acceptance and decreased prices, vision-guided robots will have a much more prominent place on the factory floor in the near future.