Thanks for visiting Imaging and Machine Vision Europe.


Seen from all sides

Coupling vision with a robot can make for a flexible inspection system. Greg Blackman investigates some areas where robotic systems incorporating imaging are finding a use

Robotics and automotive production tend to go hand in hand. At the Automated Imaging Association’s (AIA) business conference at the beginning of the year, Ken Knight, executive director of North America and global manufacturing engineering at General Motors (GM), quoted a figure of upwards of 800 industrial robots at some of the company’s manufacturing plants. The automotive industry has now largely returned to profitability following the 2008/09 recession, with GM posting a profit of $4.7bn for 2010, after the company filed for bankruptcy in 2009.

Adding vision to a robot improves its flexibility, both in terms of enabling the robot to pick up randomly orientated objects in so-called bin picking and by providing a flexible inspection system for large and complex parts, whereby a camera is moved around the part by the robot.

Norwegian vision company Tordivel has developed a 3D robotic inspection system, designed to inspect suspension components in automotive production. ‘These parts are reasonably large and require 100 per cent inspection,’ notes Thor Vollset, CEO of Tordivel.

The system uses a combination of stereo vision and point cloud processing via Tordivel’s Scorpion Vision Software package to make the measurements. A 3D camera system, comprising three Allied Vision Technologies (AVT) cameras and a 9 x 9 Lasiris laser grid, is moved around the suspension part by a Toshiba Cartesian robot in an inspection cycle capturing 19 images. The inspection cycle measures the control arm and the bushings, the part of the suspension that dampens vibration.

The distance between bushings is recorded using stereo vision to an accuracy of 0.2mm. ‘The part cannot be verified using any conventional measurement methods,’ explains Vollset. ‘The current method for identifying whether suspension components are within specification is to physically see if the part fits a test rig. The imaging system provides a more accurate and more flexible measurement method and it allows the manufacturer to conduct 100 per cent inspection of the components.’
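As an illustration of the stereo measurement, the sketch below triangulates two bushing centres from a rectified stereo pair and checks the centre-to-centre distance against a tolerance. The focal length, baseline, pixel coordinates and nominal dimension are all invented for the example; the article does not give Tordivel’s actual calibration values.

```python
import math

# Hypothetical calibration values for a rectified stereo pair
FOCAL_PX = 2400.0      # focal length in pixels
BASELINE_MM = 120.0    # distance between the two camera centres

def triangulate(u, v, disparity):
    """Back-project a pixel with known disparity to camera coordinates (mm)."""
    z = FOCAL_PX * BASELINE_MM / disparity
    x = u * z / FOCAL_PX
    y = v * z / FOCAL_PX
    return (x, y, z)

def bushing_distance(px_a, px_b):
    """Distance between two bushing centres, each given as (u, v, disparity)."""
    a = triangulate(*px_a)
    b = triangulate(*px_b)
    return math.dist(a, b)

d = bushing_distance((-310.0, 12.0, 576.0), (295.0, 10.0, 570.0))
NOMINAL_MM, TOL_MM = 126.8, 0.2   # illustrative spec, not a real part dimension
print(f"measured {d:.3f} mm, pass={abs(d - NOMINAL_MM) <= TOL_MM}")
```

In a real system the pixel coordinates and disparities would come from feature detection in the calibrated camera images rather than being supplied by hand.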

Vollset adds that the robot system allows different product variants to be inspected – it’s just a case of reprogramming the system with the dimensions of the new part.

Angles of rotation between control arm and bushing are also verified. This is determined from point cloud data generated by projecting a laser grid pattern onto the part. 3D planes of the bushing and the control arm are measured and, as the 3D camera is moved, the system can measure the angle difference between them. ‘We’re measuring a surface that has no features, so we used the laser grid to image the plane and obtain an accurate angle measurement,’ explains Vollset.
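The angle measurement described here can be sketched as estimating a plane normal from laser-grid points on each surface and comparing the two normals. The sketch below uses just three sample points per plane; a real system would fit many grid points in a least-squares sense, and all the coordinates are invented.

```python
import math

def normal_from_points(p0, p1, p2):
    """Plane normal from three non-collinear laser-grid points (cross product)."""
    u = [b - a for a, b in zip(p0, p1)]
    v = [b - a for a, b in zip(p0, p2)]
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def angle_between_planes(n1, n2):
    """Dihedral angle in degrees between two planes, given their normals."""
    dot = sum(a*b for a, b in zip(n1, n2))
    mag = math.hypot(*n1) * math.hypot(*n2)
    return math.degrees(math.acos(min(1.0, abs(dot) / mag)))

# Invented laser-grid samples (mm) on the control arm and on the bushing face
arm = normal_from_points((0, 0, 0), (10, 0, 0), (0, 10, 0))          # flat plane
bushing = normal_from_points((0, 0, 5), (10, 0, 5.0), (0, 10, 6.75)) # tilted
print(f"angle difference: {angle_between_planes(arm, bushing):.2f} deg")
```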

The suspension components need to be imaged in 3D to achieve the accuracy required, according to Vollset. ‘3D imaging provides new opportunities for inspecting these types of large, complex parts,’ he says.

Another automotive application in which a camera is manipulated by a robot is tracking the gaps in car body panels using 3D laser triangulation. Here, the robot runs a 3D camera, which includes a laser projector, along the body panels to check that the gap is correct and that the two panels sit flush with one another.

‘The robot moves around as a human would with a handheld scanner along each joint,’ explains Mark Williamson, director of corporate market development at Stemmer Imaging.

With laser profiling, the gap and height tolerances are set up and the robot moves along the profile, measuring all the way. ‘It is effectively a line scan application, because it’s putting a laser stripe across the join and the system is continuously measuring the height of the two panels and the gap as the stripe moves along the join,’ comments Williamson. LMI’s Gocator, a 3D smart camera with a laser projector, is suitable for this type of inspection.
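A minimal sketch of that profile measurement, assuming one laser profile delivered as height samples across the join, with missing returns marking the gap. The sample values and pitch are invented for illustration and this is not LMI’s actual API.

```python
# One laser-triangulation profile across the panel join: height z (mm) sampled
# at 0.1 mm pitch along x; None marks points lost in the gap (no laser return).
PITCH_MM = 0.1
profile = [4.02]*40 + [None]*38 + [3.71]*40   # illustrative readings

def measure_join(z, pitch):
    """Return (gap width, panel height mismatch) for one profile."""
    missing = [i for i, v in enumerate(z) if v is None]
    gap = (missing[-1] - missing[0] + 1) * pitch
    left = [v for v in z[:missing[0]] if v is not None]
    right = [v for v in z[missing[-1] + 1:] if v is not None]
    flush = sum(left) / len(left) - sum(right) / len(right)
    return gap, flush

gap, flush = measure_join(profile, PITCH_MM)
print(f"gap {gap:.2f} mm, height mismatch {flush:.2f} mm")
```

As the robot moves the stripe along the join, the system would repeat this measurement for every profile and flag any that drift outside the gap and flush tolerances.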

‘You could build a vision system with fixed cameras at all points where you want to measure on the car body panels and end up with a 20- or 30-camera system,’ Williamson continues.

‘However, you then might want to change your system and build a different car with different body panels. To inspect this, all the cameras would have to be realigned before they can inspect it. A robotic system, however, can load different programs for different inspections. It provides a very flexible large-part inspection system – engines, door panels, etc – which would otherwise need multiple cameras to inspect fully.’

Dental inspection

Spanish 3D imaging company Aqsense has developed a laser triangulation system in which the camera and laser, mounted on a robot arm, are rotated on an axis around the part. The part is scanned by rotating the 3D camera around it, rather than by scanning in a linear motion. Aqsense’s SAL 3D imaging software library is used to analyse the resulting point cloud data.

The angular scanner is used as a measurement tool to inspect dental moulds at The Velasco Group, a group of dental companies based in São Paulo, Brazil. The system was installed to provide a cheaper and more compact scanning solution than linear robotic scanning devices.

Dr Carles Matabosch, operations manager at Aqsense, explains that a linear stage, on which a robot scans the part in a linear motion, achieves the linear movement by combining several joint rotations, which introduces ripple errors. A single rotational movement, on the other hand, reduces these inaccuracies, he says. The angular system is also mechanically cheaper than a linear device.

It was the cost of the system, as well as its accuracy, that attracted Leandro Velasco, director of The Velasco Group, to install the system to inspect dental moulds at its Cubo milling centre. ‘There are other laser scanners available,’ he comments, ‘but most of them use linear displacement. Linear displacement hardware is expensive, especially to produce an accurate system.’

The accuracy of the angular device is related to the accuracy of the motor driving the rotation. The scans of the dental moulds need to be accurate to around 20μm, according to Velasco.

The Cubo milling centre takes dental impressions from the patient and generates a test model. The final mould based on this test model is scanned with the angular 3D imaging system to ensure it is manufactured according to specification.

According to Matabosch, the system is especially suited to inspecting large parts that can’t be placed on a conveyor belt, or, for example, if the object has a very complex geometry making it difficult to position a camera to capture all the intricacies of the part. A robotic scanner would be able to move around the object to image the detail.

3D calibration

Aqsense has developed an Angular Calibration Tool (a calibration part with a known geometry) for its angular 3D scanner. ‘The camera-laser system has to be calibrated along with determining the rotation axis,’ explains Matabosch. ‘Typically, with a conveyor belt the motion is fixed in one direction, which is easy to determine. But in the case of the angular system, the axis of rotation has to be determined in addition to calibrating the camera and laser.’

The system extracts the parameters relating to the camera, the laser, and the movement of the platform from scanning the Angular Calibration Tool. ‘This calibration is done in order to obtain a cloud of points with real metric measures and no perspective distortion,’ states Matabosch.
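One way to picture what that calibration buys: once the rotation axis is known, each profile captured at a given platform angle can be rotated about that axis into a common metric frame, building up the point cloud. The sketch below applies Rodrigues’ rotation formula to do this; the axis, step angle and profile points are invented, and this is not SAL 3D’s actual interface.

```python
import math

def rotate_about_axis(p, axis_point, axis_dir, theta):
    """Rotate point p by theta radians about an arbitrary axis in space,
    given a point on the axis and its direction (Rodrigues' formula)."""
    norm = math.hypot(*axis_dir)
    k = [c / norm for c in axis_dir]                 # unit axis direction
    v = [a - b for a, b in zip(p, axis_point)]       # point relative to axis
    kv = sum(a*b for a, b in zip(k, v))
    cross = [k[1]*v[2] - k[2]*v[1],
             k[2]*v[0] - k[0]*v[2],
             k[0]*v[1] - k[1]*v[0]]
    c, s = math.cos(theta), math.sin(theta)
    rot = [v[i]*c + cross[i]*s + k[i]*kv*(1 - c) for i in range(3)]
    return [r + b for r, b in zip(rot, axis_point)]

# Merge profiles captured at successive platform angles into one cloud
axis_point, axis_dir = (50.0, 0.0, 0.0), (0.0, 0.0, 1.0)   # from calibration
scans = [[(60.0, 0.0, 10.0)], [(60.0, 0.0, 12.0)]]         # toy profiles (mm)
cloud = []
for step, profile in enumerate(scans):
    theta = math.radians(10.0 * step)    # assumed 10 degrees per scan step
    cloud += [rotate_about_axis(p, axis_point, axis_dir, theta) for p in profile]
print(cloud)
```

Any error in the estimated axis shows up directly as distortion in the merged cloud, which is why the Angular Calibration Tool’s known geometry is used to pin it down.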

A vehicle suspension part is inspected in 3D to ensure correct dimensions. Image courtesy of Tordivel

According to Arnaud Lina, MIL processing group manager at Matrox Imaging, a robot-vision system could be calibrated either by absolute positioning based on visual information, so that the robot knows where it is positioned in xyz space at all times, or by visual servoing, whereby the robot converges to the desired position through analysing the effect of small iterative displacements on a camera’s field of view.

Visual servoing is used when the accuracy of the robot is unreliable. Lina explains: ‘Generally, a robotic movement is repeatable but not very accurate – even when calibrated, you can’t rely on the absolute position of the robot. Therefore, by moving it iteratively the robot will gradually converge on the desired position using visual feedback.’
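The iterative convergence Lina describes can be sketched as a simple proportional control loop: measure the target’s offset in the camera view, command a fraction of that offset as a move, and repeat until the offset falls within tolerance. The gain, tolerance and 2D coordinates below are invented for illustration.

```python
# Minimal visual-servoing sketch: the robot does not trust its absolute
# position; it repeatedly measures the target's offset in the image and
# commands a small corrective move until the offset is within tolerance.

def observed_error(robot_xy, target_xy):
    """Stand-in for the vision system: target offset seen by the camera."""
    return (target_xy[0] - robot_xy[0], target_xy[1] - robot_xy[1])

def servo_to(target_xy, start_xy, gain=0.5, tol=0.01, max_steps=50):
    pos = list(start_xy)
    for step in range(max_steps):
        ex, ey = observed_error(pos, target_xy)
        if max(abs(ex), abs(ey)) < tol:
            return pos, step
        pos[0] += gain * ex      # small iterative displacement
        pos[1] += gain * ey
    return pos, max_steps

pos, steps = servo_to(target_xy=(120.0, 35.0), start_xy=(100.0, 30.0))
print(f"converged to {pos} in {steps} iterations")
```

With a gain below one, the loop converges even if each commanded move is executed imprecisely, which is exactly why servoing suits robots whose absolute positioning cannot be relied on.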

For absolute positioning, Lina notes that calibrating a camera in 3D is well-known, but what’s difficult to determine is the relationship between the robot and the camera at the terminal joint. ‘There are a lot of unknown variables that have to be solved. This calibration can be performed globally by taking multiple views of a calibration grid as the robot and camera moves, so that for each view, the robot’s last joint position is known. In this way, all the parameters can be determined,’ says Lina. The Matrox Imaging Library (MIL) provides 3D calibration tools.
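The multi-view calibration Lina outlines is the classic hand-eye problem: if A is a relative robot motion between two views and B the corresponding relative camera motion, the fixed flange-to-camera transform X satisfies AX = XB. The sketch below verifies that constraint with simulated 2D rigid transforms (all poses invented); real systems solve for X from several such motion pairs.

```python
import math

def T(theta_deg, tx, ty):
    """2D rigid transform as a 3x3 homogeneous matrix."""
    c, s = math.cos(math.radians(theta_deg)), math.sin(math.radians(theta_deg))
    return [[c, -s, tx], [s, c, ty], [0.0, 0.0, 1.0]]

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def inv(t):
    """Inverse of a rigid transform: transpose rotation, rotate translation."""
    r = [[t[0][0], t[1][0]], [t[0][1], t[1][1]]]
    tx = -(r[0][0]*t[0][2] + r[0][1]*t[1][2])
    ty = -(r[1][0]*t[0][2] + r[1][1]*t[1][2])
    return [[r[0][0], r[0][1], tx], [r[1][0], r[1][1], ty], [0.0, 0.0, 1.0]]

# Unknown-but-fixed flange->camera transform X, and a fixed grid pose G
X = T(5.0, 30.0, -10.0)
G = T(0.0, 500.0, 200.0)        # calibration grid in robot-base frame

# Two views: flange poses A_i are known from the robot; the camera observes
# the grid, giving camera->grid poses B_i = inv(A_i * X) * G.
A1, A2 = T(0.0, 400.0, 100.0), T(30.0, 450.0, 150.0)
B1, B2 = mul(inv(mul(A1, X)), G), mul(inv(mul(A2, X)), G)

# Relative motions satisfy the hand-eye constraint A * X = X * B
A = mul(inv(A1), A2)
B = mul(B1, inv(B2))
lhs, rhs = mul(A, X), mul(X, B)
print(all(abs(lhs[i][j] - rhs[i][j]) < 1e-6 for i in range(3) for j in range(3)))
```

Each additional robot-camera motion pair adds constraints of this form, which is why taking multiple views of the grid lets all the unknown parameters be determined at once.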

For random bin picking, the system needs to know where the objects are positioned in relation to one another and so is calibrated with absolute positioning. How it grabs the part, though, is more to do with how tolerant the gripper is than with the accuracy of the robot position. ‘It’s a combination of the two things, but the robot arm doesn’t need to be exactly positioned to grab the part,’ notes Lina.

However, for inspection of parts, the camera often has to be positioned very accurately over the object in order to get an accurate read on its dimensions. Visual servoing is therefore used in these scenarios, to converge to a position where the subsequent measurements, typically in 2D, will be valid.

One field in which visual servoing is used is assembly, such as robotic positioning of a windshield during automotive manufacture. Instead of determining exact coordinates for fitting the windshield, the arm will roughly place the windshield close to the car body and then, with small iterative movements, converge to the correct position. ‘There is no perfect calibration; it is achieved by an iterative approach,’ comments Lina.

Lina adds that in terms of assembly, how the part is picked up might differ. ‘It’s not enough to know where the gripper is in space, but you have to know how it is holding the part. In this case, the camera needs to see the final destination as well as the object itself. This is where visual servoing is important.’

Imaging in 3D has enabled many robotic applications, although much of the vision processing is carried out in 2D. ‘Extracting 3D data from 2D images with a proper 3D calibration is often better than scanning techniques that generate a 3D point cloud,’ comments Vollset of Tordivel.

‘You have to choose the method that is applicable for the application,’ he adds. Either way, the flexibility of a robotic inspection system or vision-controlled robot is a big advantage for manufacturers.
