3D in profile

Greg Blackman looks at 3D imaging technology and the areas where it is used

Imaging in 3D, which we humans might take for granted, is still relatively new in machine vision. Acquiring depth information adds complexity to a vision set-up, both on the image capture side and the processing, but it does open up areas that would otherwise be too difficult to engineer relying solely on 2D vision. In fact, 3D imaging is currently considered one of the major growth sectors in machine vision.

The technology surrounding 3D vision is varied – there are numerous ways to generate depth information – and the type employed really depends on how accurate you want to be – are we talking height data at ±50mm or ±0.1mm? Calculating a real-world measurement such as the volume of an object typically requires a higher degree of accuracy than guiding a robot, say, in bin-picking.

Four main technologies are relevant to 3D imaging: laser triangulation, imaging with structured light, time-of-flight, and stereovision. The first two are the more accurate, and laser triangulation in particular is a well-established approach in industry for scanning an object and generating a height profile accurate enough to take meaningful measurements. Stereovision and time-of-flight are less accurate, but stereovision at least can be used for tasks such as robot guidance.
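The geometry behind laser triangulation can be sketched in a few lines: a raised surface shifts the projected laser line sideways in the image, and that shift converts back to height through the laser angle. The pixel size, magnification and angle below are illustrative values, not the parameters of any product mentioned here.

```python
import math

def height_from_displacement(pixel_shift, pixel_size_mm, magnification, laser_angle_deg):
    """Convert the lateral shift of a laser line on the sensor into object height.

    With the camera viewing the surface head-on and the laser projected at
    laser_angle_deg from vertical, a surface raised by h shifts the line
    laterally by h * tan(angle) in object space; invert that relation.
    """
    shift_mm = pixel_shift * pixel_size_mm / magnification  # shift in object space
    return shift_mm / math.tan(math.radians(laser_angle_deg))

# A 12-pixel shift with 5 um pixels, 0.1x magnification, 30-degree laser angle
# corresponds to roughly 1 mm of height:
h = height_from_displacement(12, 0.005, 0.1, 30)
```

The same relation explains the temperature sensitivity Williamson describes: a drift in the laser angle or the optics changes the mapping from pixel shift to height, which is why calibrated systems must hold it stable.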

But installing a 3D imaging system can still pose major challenges for system integrators. ‘The big issue with 3D historically has been, firstly, setting it up; secondly, calibrating it to get accurate height measurements; and thirdly, dealing with temperature drift,’ states Mark Williamson, director of corporate market development at Stemmer Imaging, adding that a slight difference in temperature will affect the laser line or the lens, which can change the calibrated height information over time. Measuring discrete parts in an environment with large temperature variations makes engineering an accurate 3D vision solution more difficult.

‘We’ve had a lot of interest in 3D imaging, but to get accurate 3D data with a calibrated system has been quite difficult,’ Williamson continues. ‘The cost of implementing a calibrated 3D system has traditionally been high.’

A system comprising discrete components, all of which have to be calibrated, tested and validated, is costly. Stemmer Imaging supplies LMI’s Gocator smart 3D camera, which has an integrated laser line and is calibrated across a temperature range, and which Williamson says has lowered the cost associated with laser triangulation. Stemmer also provides Automation Technologies’ laser triangulation product line for higher-speed and higher-resolution applications, although it requires a separate laser and calibration.

An excellent vintage

Some tasks can only really be undertaken with 3D imaging. Spanish company Baixcat Visión has developed a system using Aqsense’s (Girona, Spain) SAL3D library for 3D inspection of wine bottle corks. High-quality bottles of wine require an air-tight seal and the system was employed to inspect corks for cracks and holes that would let air into the bottle.

The 3D system scans the cork with a laser and creates a depth map of the surface. Mercè Bruned, of Baixcat Visión, explains that there is a lot of variety in the colour of cork stoppers and numerous irregularities – some of which are only surface marks and have little importance, while others can run much deeper.

‘2D machine vision cannot measure depth,’ she says. Other 2D vision systems designed to evaluate the quality of the cork are based on bringing out any cracks or holes with illumination and then using algorithms to identify the fissures as dark areas in the image. However, Bruned says these systems are not robust enough: ‘It is necessary to use lighting tricks and to implement algorithms based on deductions to figure out whether a mark is deep or only a spot of colour. Even using artificial intelligence systems to optimise these deductions, 2D systems are not robust enough. 3D systems can directly measure depth.’ She adds that the advantages of working in 3D are that there is no need for complex lighting systems and no possibility of a dark mark being confused with a deep one. ‘With 3D we know the depth; with 2D we were inferring the depth from the light intensity or from the colour.’ The analysis of depth maps, she says, is simpler and much more robust than the analysis of greyscale or colour images.
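The robustness Bruned describes comes from the fact that, with a depth map, separating a deep crack from a surface mark is a direct comparison rather than an inference from brightness. A minimal sketch of that test, with made-up depth values rather than real cork data:

```python
def find_deep_defects(depth_map, reference_mm, tolerance_mm):
    """Flag pixels lying deeper than the tolerance below the reference surface.

    depth_map is a 2D list of surface heights in mm. A 2D system would have
    to infer these from pixel brightness; here the check is arithmetic.
    Returns (x, y, depth_below_reference) for each flagged pixel.
    """
    defects = []
    for y, row in enumerate(depth_map):
        for x, d in enumerate(row):
            if reference_mm - d > tolerance_mm:  # deeper than allowed
                defects.append((x, y, reference_mm - d))
    return defects

surface = [
    [10.0, 10.0, 10.0],
    [10.0,  7.5, 10.0],  # a pit 2.5 mm deep: a real defect
    [10.0, 10.0,  9.9],  # a shallow surface mark, within tolerance
]
pits = find_deep_defects(surface, reference_mm=10.0, tolerance_mm=1.0)
```

Note how the dark-but-shallow mark at the bottom right is ignored without any lighting tricks, which is exactly the confusion a 2D system has to reason its way around.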

Dr Carles Matabosch, operations manager at Aqsense, says that, for this application, it is not essential to calibrate the system in 3D as only the laser profile needs to be extracted. ‘You don’t need to determine the size of the hole; you only need to determine if there are holes or not,’ he notes.

Calculating the volume of food can only be achieved with 3D imaging. This factory is using laser triangulation to ensure all biscuits are the same dimensions. Image courtesy of Multipix Imaging.

SAL3D has also been used to calculate the volume of the inside of coconut shells. A Catalan company, which produces ice cream served in the shell of a coconut, is using the imaging system to sort shell halves based on volume to determine the size of the ice cream portion required to fill them. ‘Calibration in this instance is vital in order to get metric measurements,’ states Dr Matabosch. Aqsense supplies a standard calibration tool of known dimensions to calibrate the system.

Analysing the height profile of the coconut shell using 3D software tools is also important when making measurements, as Dr Matabosch explains: ‘Identifying a hole in a plane, for example, can be achieved by scanning the plane with a laser and exporting the information to image processing software. Within the library, 2D tools can be used to identify the hole. However, the volume of the cylinder cannot be measured only with 2D tools – you need the real 3D data to do that.’
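Once the depth map is metrically calibrated, a cavity volume like the coconut shell’s reduces to summing, over the pixels inside the cavity, the depth below the rim multiplied by the real-world area each pixel covers. The sketch below uses invented numbers purely to show the principle:

```python
def volume_from_depth_map(depth_map, pixel_area_mm2, rim_height_mm):
    """Estimate cavity volume from a calibrated depth map.

    Each pixel covers pixel_area_mm2 of real surface; the cavity volume is
    the sum of (rim - depth) * area over pixels below the rim. This is why
    Dr Matabosch stresses calibration: without metric pixel sizes and
    heights the sum is meaningless.
    """
    volume = 0.0
    for row in depth_map:
        for d in row:
            if d < rim_height_mm:  # pixel lies inside the cavity
                volume += (rim_height_mm - d) * pixel_area_mm2
    return volume  # in cubic mm

cavity = [
    [5.0, 3.0, 5.0],
    [3.0, 1.0, 3.0],
    [5.0, 3.0, 5.0],
]
v = volume_from_depth_map(cavity, pixel_area_mm2=0.25, rim_height_mm=5.0)
```

This is also the point of Dr Matabosch’s cylinder example: 2D tools can find the hole’s outline, but only the real height data lets you integrate it into a volume.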

Aqsense is currently developing a system to determine the internal and external dimensions of a metal tube and if the cut end is perpendicular. ‘You can’t do this with 2D; you need to reconstruct the tube in 3D and make measurements from that,’ Dr Matabosch adds.

Taking the biscuit

The food industry is one area where 3D imaging has really been successful. In food manufacture there is often the need to standardise the size and shape of produce or to portion it equally, all of which require depth information to calculate volume. One example is a biscuit manufacturer that has automated the size and weight control of its biscuits during production using 3D laser scanning. The aim is to ensure all the biscuits – a sandwich construction made up of a top and bottom layer with a fondant filling – are the same height and weight for efficient packaging and to reduce waste. Each biscuit layer is measured to determine how much filling is required to give the correct overall height.

Three Basler Scout cameras grab images of the biscuits as they move down a conveyor belt. Every minute, 120 columns of 30 biscuits each are inspected – 3,600 biscuits in total. A fourth Basler camera captures images of the laser line, giving a height profile of the food. Halcon imaging software from MVTec then analyses the length, width, and height of each biscuit to an accuracy of ±0.17mm per measurement.
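The filling calculation itself is simple once the layer heights are known: the fondant must make up the difference between the measured layers and the target height. The 12mm target and layer heights below are illustrative values, not the manufacturer’s figures:

```python
def filling_height(target_mm, top_mm, bottom_mm):
    """Fondant thickness needed so top + filling + bottom reaches the target
    overall height. Layer heights come from the laser-line profile."""
    filling = target_mm - top_mm - bottom_mm
    if filling < 0:
        raise ValueError("layers already exceed the target height")
    return filling

# Layers measured at 4.10 mm and 4.30 mm against a 12.00 mm overall target:
needed = filling_height(12.00, 4.10, 4.30)

# Throughput quoted for the line: 120 columns a minute of 30 biscuits each
rate = 120 * 30
```

At that rate each biscuit gets well under 20 milliseconds of inspection time, which is why the profile extraction and measurement have to run in-line rather than offline.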

Imaging components for the system, including the Halcon software, were supplied by UK distributor Multipix Imaging. According to the firm’s Julie Busby, this application would not be possible using 2D imaging tools. She says that 3D vision is making a great impact in areas like baking, where the evenness of the produce can be inspected and combined with colour information to assess the quality of baked goods.

Time-of-flight and pattern projection

While laser triangulation is a well-established method for generating 3D profiles, there are other ways of acquiring height data. Stereovision is typically used in robot guidance, and software packages such as Scorpion Vision from Tordivel and Halcon from MVTec provide algorithms for converting images from two or more cameras into 3D data. There is also time-of-flight imaging, whereby the camera recreates a scene in 3D by firing a laser and measuring the time it takes for the light to return from an object.
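The round-trip principle behind time-of-flight reduces to a one-line calculation: light travels out to the object and back, so the distance is half the round-trip time multiplied by the speed of light. The timings below are illustrative:

```python
C_MM_PER_NS = 299.792458  # speed of light in mm per nanosecond

def tof_distance_mm(round_trip_ns):
    """Distance from a time-of-flight measurement: the light travels out
    and back, so divide the round trip by two."""
    return round_trip_ns * C_MM_PER_NS / 2.0

# A 10 ns round trip corresponds to roughly 1.5 m:
d = tof_distance_mm(10.0)

# Just 0.1 ns of timing jitter shifts the reading by about 15 mm,
# which is the centimetre-scale error discussed below:
err = tof_distance_mm(0.1)
```

The tiny timings involved are the root of the accuracy problem: sub-millimetre precision would require resolving picoseconds, so practical cameras trade accuracy for speed.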

‘The main issue with time-of-flight imaging is that it’s not very accurate,’ comments Williamson of Stemmer Imaging. ‘The cameras are quite low on resolution, horizontally and vertically (approximately 300 x 300 pixels), and the effect of noise or jitter on depth information can be quite significant. It could be several centimetres out.’

However, he does feel the technique has a place in industry. ‘Time-of-flight imaging will give a fast, immediate reading of the approximate size of a package, for example. This is an emerging technology; at the moment it’s mainly universities using it for things like human interaction-type studies, similar to the images generated from Microsoft’s Kinect gaming system. The technology is good at detecting an object, but not at a high degree of accuracy.’

Another 3D imaging technique uses a single camera combined with a projected pattern. The camera registers distortions in the pattern, which are translated into 3D data. Unlike time-of-flight, imaging with structured light can be very accurate – VRmagic has developed a structured light system, which Stemmer supplies, that can achieve micrometre resolutions in a small field of view. The system shifts the projection pattern four, eight or 16 times, depending on the required accuracy, grabbing an image at each step and identifying distortions in the pattern to calculate depth. The disadvantage is that the product has to remain stationary while the four, eight or 16 images are captured.
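A common way to turn those shifted images into depth is N-step phase shifting: each pixel sees a sinusoidal intensity as the fringe pattern steps past it, and the phase of that sinusoid encodes the surface height. The sketch below recovers the phase at a single pixel from synthetic samples; it illustrates the general technique, not necessarily VRmagic’s implementation.

```python
import math

def phase_from_shifts(intensities):
    """Recover the projected-fringe phase at one pixel from N equally
    shifted images (N-step phase shifting, N >= 3).

    Each sample is I_i = A + B*cos(phase + 2*pi*i/N). Correlating the
    samples against sine and cosine isolates the phase; mapping phase to
    height then requires the system calibration, which is omitted here.
    """
    n = len(intensities)
    s = sum(I * math.sin(2 * math.pi * i / n) for i, I in enumerate(intensities))
    c = sum(I * math.cos(2 * math.pi * i / n) for i, I in enumerate(intensities))
    return math.atan2(-s, c)

# Four synthetic samples generated with a known true phase of 0.7 rad:
true_phase = 0.7
samples = [100 + 50 * math.cos(true_phase + 2 * math.pi * i / 4) for i in range(4)]
recovered = phase_from_shifts(samples)
```

Because every shifted image contributes to the phase at each pixel, the object must not move between exposures – which is exactly the stationary-product limitation noted above.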

According to Williamson, these systems are costly: ‘Both time-of-flight cameras and the pattern projection systems are reasonably expensive – in the single-digit thousands of pounds range.’

As a much lower-cost solution, MVTec has integrated Microsoft’s Kinect sensor into its Halcon software. The Kinect sensor is based on structured light (the IR depth sensor operates at 640 x 480 pixels at 30fps) and is a crude version of the concept employed by VRmagic. It also incorporates a colour sensor (1,280 x 1,024 pixels at 15fps or 640 x 480 pixels at 30fps).

‘That’s as cheap as you’re going to get for a 3D camera sensor,’ says Williamson, ‘but it’s not accurate and robust, so not suited to industrial machine vision. It depends on what you want to do with the system – if you want to get accurate 3D measurements then you could pay £6,000 to £8,000. If you are not interested in accuracy but just want to get a rough idea of height or depth, then a Kinect sensor at £150 is pretty cheap. However, this is not a machine vision system.’ He adds that if time-of-flight cameras came down in price they could become quite interesting for industrial applications.

As in most cases, it comes down to the application. Laser triangulation remains a robust and accurate solution for moving objects, but the other techniques each have their place elsewhere.