
AI can see clearly now

An industrial robot is defined by the International Organization for Standardization as an automatically controlled, reprogrammable, multipurpose manipulator that is programmable in three or more axes. In practice, the tasks that industrial robots perform are usually highly repetitive, and they are often programmed using simple commands such as ‘move tool to position one, then move to position two, then activate tool’, and so on. Industrial robots were first developed for the automotive industry, which remains the largest application of robot technology. While machine vision has played an important role on robot-assisted production lines throughout its development, the ability to integrate vision into a robot system, and to have the robot respond to visual information, is a relatively recent development.
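
This ‘move, move, activate’ style of programming can be made concrete with a short sketch. It is illustrative only: the RobotArm class and its methods are hypothetical stand-ins for a real controller’s command set, not any vendor’s API.

```python
# A minimal sketch of the 'move, move, activate' style of industrial
# robot program described above. RobotArm and its methods are
# hypothetical, for illustration only.

class RobotArm:
    def move_to(self, position):
        print(f"moving tool to {position}")

    def activate_tool(self):
        print("activating tool")

arm = RobotArm()
position_one = (250.0, 100.0, 50.0)   # x, y, z in mm
position_two = (250.0, 300.0, 50.0)

arm.move_to(position_one)   # move tool to position one
arm.move_to(position_two)   # then move to position two
arm.activate_tool()         # then activate tool
```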

In a vision-enabled robot system, the data communicated between the robot and the vision system describes the objects on a manufacturing line, or the environment around the robot. Mike Badger, account manager at ABB Robotics, sums up the function: ‘the objective of vision systems is to be able to guide the robot to a particular target, but it is also often used to provide quality and inspection data.’ According to Badger, the most significant consideration when integrating the vision system with the robot is how easily the robot can make use of the data generated by the vision software. ‘A well-integrated robot and vision system can be used intuitively and easily,’ he says, ‘as it is not a complicated system that requires any special knowledge.’
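
One way to picture the data Badger describes is as a small structured message passed from the vision system to the robot controller. The sketch below assumes a JSON-over-Ethernet link; the field names are illustrative, not any vendor’s actual protocol.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical message a vision system might send to a robot controller.
# Field names are illustrative only, not ABB's or anyone's real protocol.

@dataclass
class VisionResult:
    object_count: int       # objects currently visible on the line
    object_type: str        # classification of the next target
    x_mm: float             # target position in the robot's frame
    y_mm: float
    angle_deg: float        # in-plane orientation of the target
    pass_inspection: bool   # quality/inspection verdict

msg = VisionResult(3, "bracket", 412.5, 88.0, 37.2, True)
payload = json.dumps(asdict(msg)).encode()  # bytes ready to send on a socket
```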

Of all the robot-vision tasks, bin-picking is considered something of an acid test. The task is difficult and complex because it requires the vision system to carry out a 3D analysis of the scene, and then to apply target-acquisition algorithms to determine an optimum unloading strategy for removing the required object from the bin. This kind of application is not yet in widespread industrial use, but it gives an interesting idea of where the goalposts currently stand. Some applications are already found in the foundry industry, according to Badger, where products are delivered in bins.
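
The target-acquisition step might be sketched as follows, assuming the 3D analysis has already produced a list of pick candidates; the scoring heuristic and field names are invented for illustration.

```python
# A sketch of the 'target acquisition' step in bin-picking: given 3D pick
# candidates from the vision system, choose the one a gripper can most
# easily remove. The scoring heuristic here is illustrative only.

def choose_pick(candidates):
    """candidates: list of dicts with 'z_mm' (height in the bin),
    'occlusion' (0..1, how buried the part is) and 'pose'."""
    def score(c):
        # Prefer parts near the top of the bin that are least occluded.
        return c["z_mm"] - 100.0 * c["occlusion"]
    reachable = [c for c in candidates if c["occlusion"] < 0.5]
    if not reachable:
        return None  # no safe pick: re-image the bin, or shake it
    return max(reachable, key=score)

best = choose_pick([
    {"z_mm": 180.0, "occlusion": 0.1, "pose": (10.0, 20.0, 180.0, 15.0)},
    {"z_mm": 220.0, "occlusion": 0.7, "pose": (40.0, 55.0, 220.0, 80.0)},
])
print(best)   # the shallow, unoccluded part wins
```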

Total accountability

Product inspection has always been important for quality control within manufacturing, and machine vision has played an increasingly important role in it. Traditionally, a small number of products is separated from the bulk of those produced, and this small, ostensibly representative sample is tested for conformity away from the main production line. Done correctly, this approach gives a high statistical probability that any one example of the final product will meet whatever standards have been applied to it. However, unless every product is checked individually, this probability cannot reach 100 per cent; faulty products will still sneak through the checking process.
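
To make the statistics concrete, consider a minimal worked example (the figures are illustrative, not drawn from the article):

```python
# Why sampling cannot guarantee zero escapes: if a process produces
# defects at rate p and only a sample of n parts per batch of N is
# inspected, some faulty parts ship uninspected.

p = 0.001      # 0.1 per cent defect rate (illustrative)
N = 10_000     # batch size
n = 100        # parts inspected per batch

uninspected_defects = p * (N - n)   # expected faulty parts that ship
print(f"expected escapes per batch: {uninspected_defects:.1f}")
# ~9.9 faulty parts per batch pass unchecked; only 100 per cent
# inspection (n == N) drives this to zero.
```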

Safety-critical applications typically demand a 100 per cent check rate, i.e. every component that exits the production line is checked. If these checks are performed manually, then the cost per component rises accordingly. When the number of components produced is small, and when the cost of those components is high, manual checks may remain a viable option – turbine blades for the aerospace industry, for example, are typically checked manually. The automotive industry, in contrast, requires far larger product runs, and it now often demands the same 100 per cent quality checks for some components, such as those found in a seatbelt mechanism.

Machine vision systems have become important to such quality control applications, but machine vision alone is not a complete solution. A machine vision system can accurately determine the size and shape of a component, and whether it falls within specified tolerances, but it is a robot that removes the faulty piece from the production line.

Take, for example, the TubeInspect machine vision system produced by Aicon, designed to check that tubular segments have been bent correctly during processing. The system was designed as a part-for-part replacement for conventional mechanical gauges. It consists of a cabinet containing either 10 or 16 high-resolution cameras, depending on the model, connected to a processor that builds a 3D model of the tube from all directions and compares it with existing CAD data to verify the accuracy of the component’s manufacture. During normal operation, a quality control engineer manually removes a small sample of the tubular components as they exit the machine that bends them into shape. The engineer places each sample into the machine and starts the test, which takes about five seconds to return one of three results: ‘green’ – the part is good; ‘red’ – the part is bad; or ‘yellow’ – the machine could not determine a result. Such checks would be performed at key times during a product run, such as when the machine starts up or at the end of a shift.
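
The verdict logic can be sketched as follows. The tolerance value, point format and comparison method are assumptions made for illustration; Aicon’s actual system compares a full 3D model against CAD data.

```python
import math

# A sketch of the green/red/yellow verdict described above: compare
# measured bend points against CAD nominals and classify by the worst
# deviation. Tolerance and point format are assumed, not Aicon's.

def inspect_tube(measured, nominal, tol_mm=0.5, confidence_ok=True):
    if not confidence_ok or len(measured) != len(nominal):
        return "yellow"   # measurement could not be completed
    worst = max(math.dist(m, c) for m, c in zip(measured, nominal))
    return "green" if worst <= tol_mm else "red"

cad = [(0.0, 0.0, 0.0), (120.0, 0.0, 0.0), (120.0, 80.0, 0.0)]
scan = [(0.1, 0.0, 0.1), (120.2, 0.1, 0.0), (120.1, 80.4, 0.1)]
print(inspect_tube(scan, cad))   # 'green': within 0.5 mm everywhere
```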

Aicon has begun to offer a robot-integrated version of this system for applications in which high throughput is important, such as when every component produced must be checked. A robot arm picks each part from a conveyor as it exits the bending machine, holds it in the TubeInspect cabinet, and then places it either back on the production line or in a reject bin, depending on the result of the inspection.

Guenter Suilmann, sales director at Aicon, explains that achieving good communication between the robot and the measuring system is a challenge. ‘The requirement was to have a good interface – an open interface too. We define an open interface as one that would enable a third-party company to adapt our machine to another manufacturing robot, should we sell only our machine with no robot.’ In practice, the TubeInspect vision system is connected to the robot controller by Ethernet or USB, either of which provides the required versatility. Suilmann states that Aicon’s development efforts are focussed on speed: ‘We are working on reducing the Takt time [the maximum time per unit allowed to produce a product in order to meet demand] of the system; producing a part with the bending machine takes 20 seconds, and therefore the inspection needs to be done in this time too,’ he says. Another challenge Suilmann notes is to make the system independent of the user for periods of up to five days. ‘In addition, it has to be as simple as possible to use, as the end user is not necessarily an expert on robotics,’ he adds.
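
Takt time itself is simple arithmetic: available production time divided by the demand it must satisfy. A minimal sketch, with illustrative demand figures chosen to reproduce the 20-second figure Suilmann quotes:

```python
# Takt time = available production time / customer demand.
# Demand figures below are illustrative, not from the article.

shift_seconds = 8 * 3600          # available production time per shift
demand_per_shift = 1440           # parts the customer needs per shift
takt = shift_seconds / demand_per_shift
print(f"takt time: {takt:.0f} s per part")   # 20 s here

inspection_cycle = 20             # seconds TubeInspect may spend per part
assert inspection_cycle <= takt, "inspection would be the bottleneck"
```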

Streamlining through combining

While this application achieves good co-operation between a robot and a vision system, the two components are still essentially separate. Stemmer Imaging has undertaken a project in conjunction with the Interstate College for Technology NTB in Buchs, Switzerland, and German robotics producer Kuka Roboter, with the aim of uniting the separate elements of vision system and robot controller. It’s hoped that this integration will streamline the programming process for the robot, making it more user-friendly to robotics engineers, who may not have any experience in programming a vision system.

Conventionally, the vision system of a vision-enabled robot has its own processing capabilities, often provided on board the camera by so-called smart cameras. In this case, a location algorithm is implemented on the smart camera, specific to whatever application the camera is being used for. One of the most common applications in industry is product location and orientation recognition. Mark Williamson, sales and marketing director at Stemmer Imaging, explains: ‘If there is, for example, a conveyor belt with products coming down it, there may be more than one type of product and it may be at a random orientation,’ he says. ‘The robot will then be required to manipulate each item in a manner dependent upon its type and orientation. With foodstuffs, for example, you might have lots of chicken bits coming down a conveyor belt. The robot has to be able to say “that’s a chicken breast... I need to pick that up and put it in this chicken breast pack” or “that drumstick needs to be placed in this pack in the opposite direction to the last one.”’
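
A generic version of such a location-and-orientation algorithm can be sketched with OpenCV, rather than with the proprietary tools a smart camera would run. It assumes a top-down greyscale image in which each product appears as a bright blob; type classification (breast versus drumstick) would follow, for instance by blob size or shape.

```python
import cv2

# A generic product location and orientation sketch using OpenCV
# contours -- not the actual algorithm any smart camera vendor ships.

def locate_products(gray):
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    results = []
    for c in contours:
        if cv2.contourArea(c) < 500:          # ignore specks
            continue
        (cx, cy), (w, h), angle = cv2.minAreaRect(c)
        results.append({"centre_px": (cx, cy),
                        "angle_deg": angle,
                        "size_px": (w, h)})   # size helps classify type
    return results

# usage: parts = locate_products(cv2.imread("belt.png", cv2.IMREAD_GRAYSCALE))
```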

In conventional configurations, a vision system would be programmed to accomplish the recognition side of the task, and it would then send the resulting data through some sort of interface to the robot control system, which would subsequently tell the robot something along the lines of ‘an object of type A is located at this coordinate, and at this orientation.’ This results in a requirement for two lots of coding, and also for two different types of expertise.

With hundreds of cameras installed on hundreds of robots, cabling becomes an important consideration. Image courtesy of Baumer Optronic.

Kuka has, in response to this problem of requiring such differing types of expertise to accomplish one task, developed a new robot-controlling processor, and Stemmer Imaging has integrated key parts of its Common Vision Blox (CVB) library of machine vision tools into the controller, so that the vision side can be programmed in the controller’s language rather than in a vision language. The integration is called V4R (Vision for Robots). Stemmer wrote various pieces of code, incorporating modules of the CVB library, which were integrated into the Kuka robot-controlling software. As well as porting CVB, a GUI was added to facilitate programming the vision system. The robot-controlling processor ships with two Gigabit Ethernet ports, so the V4R vision system is designed to work with GigE Vision cameras. Compared with the expense of smart cameras, the option for the integrator to use standard GigE Vision cameras represents a significant cost saving, while maintaining the high-speed interface necessary to ensure that the robot is not left idle while it receives its instructions.
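
V4R itself is programmed in Kuka’s robot script rather than Python, so the following only illustrates the single-environment idea it enables; every class and method name below is hypothetical.

```python
# Illustration only: vision and motion calls live in one program, with
# no separate vision system to interface. All names here are invented.

class Vision:
    def locate(self, model):
        """Grab a GigE Vision frame and locate a part (stubbed here)."""
        return {"x": 412.5, "y": 88.0, "angle": 37.2, "type": model}

class Motion:
    def move_to(self, x, y, angle):
        print(f"moving to ({x}, {y}) at {angle} deg")

    def grip(self):
        print("gripping part")

def run_pick_cycle(vision, motion):
    part = vision.locate(model="chicken_breast")
    if part is None:
        return                      # nothing found: wait for next frame
    # The result is already in robot coordinates: no second programming
    # environment, and no interface code between two separate systems.
    motion.move_to(part["x"], part["y"], part["angle"])
    motion.grip()

run_pick_cycle(Vision(), Motion())
```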

According to Williamson, setting up a robot-vision system using the V4R equipment entails simply plugging a GigE Vision camera into the robot controller, installing the V4R software on the controller, and then calibrating the vision system using the configuration interface which is built into the V4R software. The coding for both the robot and its vision system is subsequently carried out entirely in the Kuka robotic script.

The V4R system has already seen several applications, primarily robotic pick-and-place tasks in the automotive and electronics industries, but food and beverage applications are also on the rise. Williamson explains that the latter often present more of a challenge, as foodstuffs are more variable in shape and size, so the vision algorithms used are more advanced. Stemmer is working on porting additional CVB modules into V4R to make the system more versatile when dealing with unpredictable groups of objects.

Currently, Williamson says, the product is aimed at a very vertical market, and so the solution is designed to solve one problem only, namely pick-and-place. Further development, he says, could lead to other types of robotics with vision, perhaps featuring cameras mounted on the end of the robot, capable of some sort of inspection. ‘The pick and place is a very specific part of the industry, and while it probably represents the biggest market, it is a very specialised one,’ he says.

Power struggles

As mentioned, some more advanced vision-enabled robots have cameras mounted on the ends of their manipulators. Moving to an arrangement of this kind gives designers an additional concern: cabling. Robots, by their nature, move around a lot, so any cables leading to the end of the manipulator (that is, to a camera placed there) are subjected to unusual wear and tear, and are likely to fail frequently.

Engineers at robotics company Comau have been working to minimise the chance of system failure due to cabling by reducing the number of cables. The company’s RecogniSense system uses a single camera at the end of the manipulator to adjust, in real time, the coordinates at which the system expects to find an object towards the coordinates where the object actually is.
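
The correction can be pictured as follows, simplified here to a 2D offset with illustrative numbers; the real system works with full poses.

```python
import numpy as np

# Sketch of the real-time correction described above: the camera on the
# manipulator measures where a reference feature really is, and the
# taught coordinates are shifted by the difference before the move.

nominal_feature = np.array([412.5, 88.0])   # taught feature position, mm
seen_feature    = np.array([414.1, 86.7])   # position the camera measured
nominal_pick    = np.array([430.0, 95.0])   # taught grasp point, mm

offset = seen_feature - nominal_feature     # how far the part has shifted
corrected_pick = nominal_pick + offset      # grasp point, adjusted live
print(f"applying offset {offset} mm; picking at {corrected_pick}")
```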

Standard GigE Vision cameras require two cables (one for the camera’s power supply and a second for data), or even three if the application requires an external trigger. Comau reduced this by using a Power over Ethernet (PoE) solution provided by Baumer, and by writing elements of Baumer’s camera SDK into the RecogniSense software. When a production line may have hundreds of cameras operating on hundreds of machines, halving the number of cables equates to significant savings in terms of reduced downtime.


