
Bridging the gap

Three-dimensional vision systems have been a familiar sight in the industrial inspection market for more than a decade. During that time, various techniques have been refined and employed for tasks ranging from robot guidance to the inspection of electronic components. Traditionally, 3D imaging of this type has required a complex set-up and a great deal of image processing; however, a wave of all-in-one systems that include a pattern projector and image processing algorithms is beginning to emerge.

Fredrik Nilsson, manager of the 3D vision product unit at Sick, believes that embedded 3D vision is now allowing a broad user group to take advantage of its benefits. In addition to the obvious advantages of 3D vision – such as height position and volume measurements – Nilsson highlighted the fact that 3D also introduces invariance to the contrast and colour of objects, as well as scale invariance (at least in calibrated 3D). In many cases, this enables more robust and reliable solutions. ‘Three-dimensional smart cameras, with their stand-alone operation and compact housings, bridge the gap to more complex PC-based solutions,’ he said. ‘This then lowers the barrier for many users to embrace 3D vision solutions.’

More than 10 years ago, Sick became the first company to offer an embedded 3D smart camera, the IVC-3D. The company now also provides its Ranger and Ruler cameras for high-speed 3D laser triangulation, and Nilsson thinks that the impact of 3D technology within industrial inspection is only just becoming apparent. Terry Arden, CEO of LMI Technologies, agreed that the technology is gaining momentum: ‘For the longest time, 3D was only found in simple single point displacement applications embedded in closed-loop control systems. As 2D camera chips went digital and lasers got cheaper, line profilers then appeared on the scene to offer contour-based feature measurement.’

The problem, Arden added, was that these line profilers were difficult to use, requiring a PC or dedicated external controllers, as well as system integrator effort to deploy. LMI’s solution was to create web-based user interfaces with effective 3D visualisation graphics and point-and-click simplicity. The company added built-in measurement tools, communication protocols to factory networks, and statistical reporting in its Gocator series of 3D smart sensors.

Arden added that advances in fast, high-sensitivity CMOS camera chips, high-density double data rate (DDR) memories, digital mirror devices for projection, and low-power gigahertz microprocessors have driven the price of 3D technology down to a level that now effectively competes with the price of 2D smart cameras. While this has certainly led to greater uptake of the technology, the critical factor is the 3D smart camera’s ease of use. ‘The industrial markets can now scan and measure all aspects of a part, including both shape and contrast, and a new generation of process engineers will begin leveraging 3D cameras in all areas of production,’ said Arden. ‘This drives traditional factories to adopt a digital process, where 3D CAD models are the input to processes including part stamping, casting, moulding, printing, welding, and gluing. The 3D sensor provides 100 per cent verification in these applications, delivering parts that fit together perfectly, eliminating waste and shortening build times. Add robots to the equation and you have full factory automation reaching the goals of “Manufacturing 4.0”.’

Another company making a name for itself in this market is Canon, which launched a 3D vision system for industrial robotics earlier in the year. The company’s 3D machine vision systems use computer-generated images to learn automatically how to identify parts: the user simply captures images of the parts in five different randomly assembled configurations, which enables data to be registered quickly.

Tetsuhisa Tagawa, senior director of European Semiconductor Equipment Operation at Canon Europe, explained that, by using a new approach that matches CAD data not only with distance measurement data but also with greyscale images, Canon’s 3D machine vision systems are able to recognise a wide range of parts, including components with curved surfaces, components with few distinguishing features, and those with complex shapes. Demand for these systems is rising.

According to market data based on Canon’s own research, in 2014 global sales of 2D and 3D machine vision systems for use with industrial robots totalled approximately 2.75 billion yen. Within this segment, sales of 3D machine vision systems have been expanding rapidly. The market for 3D machine vision systems is expected to grow significantly in the future, driven by strong demand for automation of production lines used by manufacturers in a variety of industries, including automotive and automotive component production.

LMI’s Arden agreed that demand for 3D is increasing, attributing this to the fact that, in many industries, it is the only effective way to measure key features needed to ensure quality control – for example, black tyres with black lettering. Aside from the obvious advantage of acquiring shape information, he said, 3D can mix both 2D texture and shape in a highly accurate calibrated dataset to drive more complete inspection outcomes.

A broader impact

So what application areas will this technology open up, and why? ‘For inspection, LMI predicts existing applications will change from using laser triangulation to using structured light technologies with no motion control, full field 3D scans, and no laser safety,’ Arden said. ‘Outside inspection, we see a general awareness and demand developing for creating 3D models that enable many downstream markets, such as reverse engineering and design, additive manufacturing processes such as 3D printing, customised retail products (shoe inserts, hearing aids, medical prosthetics, reading glasses, clothing), and complete part verification.’

Nilsson from Sick added that, besides the growth of 3D vision in robot applications such as bin picking and quality inspection of large parts, there will be increased use in consumer goods and food inspection. ‘The degree of manual inspection is quite high today and there is a need to improve the quality and efficiency of the production line,’ he said. ‘With a price level closer to traditional 2D solutions and with the benefits of 3D – the possibility of measuring volume in food applications, for example – we will see more 3D smart solutions in these segments.’
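
As a concrete illustration of the volume measurement Nilsson mentions, the short sketch below estimates the volume of a product on a conveyor by summing a calibrated height map over the pixel footprint. It is a generic, hedged example rather than Sick’s method, and the pixel pitch values are assumed.

```python
# Minimal sketch: estimating product volume from a calibrated 3D height map.
# Illustrative only; the pixel pitches and the height map are assumed inputs.
import numpy as np

def volume_from_height_map(height_mm, px_pitch_x_mm=0.2, px_pitch_y_mm=0.2):
    """Approximate the volume (in cubic mm): each pixel contributes
    its height multiplied by the pixel footprint area."""
    heights = np.clip(height_mm, 0.0, None)   # ignore points below the belt plane
    return float(np.sum(heights) * px_pitch_x_mm * px_pitch_y_mm)

# Example: a flat 50 x 100 pixel object, 20 mm tall, on a 0.2 mm grid
height_map = np.zeros((200, 200))
height_map[50:100, 50:150] = 20.0
print(volume_from_height_map(height_map))     # ~4000 mm^3
```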

In terms of the technology itself, Nilsson believes that laser triangulation will remain dominant in production lines with inherent motion, but that, besides this, there are several interesting 3D area scan technologies that will have a big impact on the use of 3D for machine vision tasks. ‘For quality inspection, fringe projection is really promising, and for real-time presence detection, time-of-flight imagers with even higher resolution are also very interesting,’ he commented.
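
To make the laser triangulation principle concrete, the sketch below converts the detected laser-line position in each image column into a height profile. It is a simplified orthographic model under an assumed geometry (a vertical laser sheet viewed by a camera at an angle), not any vendor’s implementation, and the angle, pixel pitch and magnification are assumed values.

```python
# Minimal sketch of sheet-of-light laser triangulation under an assumed geometry:
# a vertical laser sheet viewed by a camera at angle theta. All scale factors are
# assumed calibration values, not a specific product's parameters.
import numpy as np

def profile_from_image(image, ref_row, theta_deg=30.0, pixel_pitch_mm=0.005,
                       magnification=0.1):
    """Return one height profile (mm per column) from a single laser-line image."""
    # Locate the laser line: the brightest row in every column (a real system
    # would use sub-pixel peak detection, e.g. a centre-of-gravity fit).
    peak_rows = np.argmax(image, axis=0).astype(np.float64)

    # Shift of the line on the sensor, converted to millimetres in the image plane
    shift_mm = (ref_row - peak_rows) * pixel_pitch_mm

    # Orthographic approximation: sensor shift ~ magnification * height * sin(theta)
    return shift_mm / (magnification * np.sin(np.radians(theta_deg)))
```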

Arden also predicted that structured LED light solutions (fringe projection) will dominate in the future. A fringe projection system can generate 3D point clouds from a single acquisition of the entire part, providing very accurate results. ‘You do not need an expensive motion system that is typical in laser line profilers,’ he commented. ‘As technology continues to offer higher resolution cameras with very fast frame rates, the cycle times for inspection will decrease and structured light smart sensors will take over where lasers once were common.’
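
The advantage Arden describes comes from projecting full-field patterns onto a stationary part instead of sweeping a laser line past it. As a rough illustration, the sketch below shows the standard four-step phase-shifting calculation used in many fringe projection systems; it is a generic textbook example rather than LMI’s method, and the phase-to-height factor and the simple row-wise unwrapping are stand-in assumptions.

```python
# Minimal sketch of four-step phase-shifting fringe projection; a generic textbook
# example, not any vendor's method. i0..i3 are images of the same static scene under
# sinusoidal fringe patterns shifted by 0, 90, 180 and 270 degrees.
import numpy as np

def wrapped_phase(i0, i1, i2, i3):
    # With I_n = A + B*cos(phi + n*pi/2):  I3 - I1 = 2B*sin(phi),  I0 - I2 = 2B*cos(phi)
    return np.arctan2(i3.astype(float) - i1, i0.astype(float) - i2)

def height_map(phase, reference_phase, phase_to_height_mm=0.05):
    # Height is proportional to the phase difference against a flat reference plane.
    # A real system performs robust 2D phase unwrapping; row-wise np.unwrap is a
    # simplistic stand-in, and the mm-per-radian factor is an assumed calibration value.
    dphi = np.unwrap(phase - reference_phase, axis=1)
    return phase_to_height_mm * dphi
```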

Over at Canon, the focus is primarily on product assembly, a key area of industrial production that still requires significant human intervention. The time needed for part recognition – and hence for part provision in automated assembly – is dramatically reduced thanks to the software developed for the Canon 3D machine vision system.

One of the complexities of imaging in 3D is setting up the system to acquire high quality images. On the software side, a simple user interface can help, while on the hardware side, Canon’s Tagawa said that systems now have to be almost maintenance-free, for instance by using heat sinks instead of fan cooling technology.

LMI’s Arden added that technology typically follows a trend of decreasing in size while increasing in speed – and that leads to the issue of heat. ‘With heat comes movement and that leads to measurement error,’ he explained. ‘Managing heat dissipation and movement with passive cooling is a key challenge in 3D smart camera design. Another challenge is providing the increased resolution (down to the sub-micron level) necessary for today’s electronic assemblies that go into wearable products or medical devices.’ He added that engineering a 3D smart camera requires talented resources supporting electronic and optical design, materials and mechanical design, as well as firmware and software development with deep knowledge in 3D processing.

The vision landscape

Given the level of complexity and potential held within 3D vision, it would be tempting to think that it could eventually become a natural replacement for 2D vision inspection. Not so, said Nilsson: ‘[3D and 2D technologies] will co-exist side by side. Some applications will need the flexibility and power that PC-based systems offer, whereas some are more price-sensitive or need a lower deployment effort. For some applications both 2D and 3D can be the solution.’

‘At LMI, we don’t see 3D competing with 2D,’ agreed Arden. ‘The real opportunity is in combining 3D and 2D to build better inspection outcomes. Our product strategy blends 2D and 3D to create powerful inspection solutions for the future. The question is, why would you want to marginalise a proven technology like the 2D machine vision ecosystem that’s been developed and refined over the last 30 years? The answer is, 2D is simply too valuable to dismiss.’




MVTec Software’s Halcon machine vision software suite includes a feature called ‘3D scene flow’, which enables users to detect and track the 3D motion of objects. It is well suited to manufacturing environments involving human-machine interaction, as the software feature can help predict the movements of humans, vehicles, workpieces, and other objects. This can be used, for example, to avoid collisions.

The 3D scene flow feature works by using two consecutive stereo image pairs to determine 3D motion data that is either uncalibrated (relative) or calibrated to real-world coordinates. In the uncalibrated mode, 3D scene flow provides relative motion information in the form of optical flow data (the 2D shift of features between consecutive images), plus the change in disparity of features between the two consecutive stereo image pairs.

Calibrated 3D scene flow provides 3D motion data by detecting and tracking the 3D position changes of surface points on a 3D object model found in consecutive point cloud scenes. In this case, the point clouds are reconstructed from the stereo image pairs.
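
To make the two modes more tangible, the following sketch reconstructs the idea in Python with OpenCV; it is illustrative only and not MVTec’s Halcon implementation. It computes disparity maps for two rectified stereo pairs, dense optical flow between the left images, and the disparity change at corresponding pixels, then uses assumed calibration values (focal length and baseline) to turn that change into a metric depth change.

```python
# Illustrative sketch of the scene flow idea using OpenCV, not MVTec's Halcon
# implementation. Inputs are rectified 8-bit greyscale stereo pairs at times t0 and
# t1; the focal length (pixels) and baseline (metres) are assumed calibration values.
import cv2
import numpy as np

def scene_flow_sketch(left_t0, right_t0, left_t1, right_t1,
                      f_px=1200.0, baseline_m=0.10):
    # Disparity maps for both time steps (SGBM returns fixed-point values scaled by 16)
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
    disp_t0 = sgbm.compute(left_t0, right_t0).astype(np.float32) / 16.0
    disp_t1 = sgbm.compute(left_t1, right_t1).astype(np.float32) / 16.0

    # Dense optical flow in the left camera: the 2D shift of features between frames
    flow = cv2.calcOpticalFlowFarneback(left_t0, left_t1, None,
                                        0.5, 3, 21, 3, 5, 1.2, 0)

    # Warp the t1 disparity map back to t0 pixel positions so the disparity change
    # is measured between corresponding features, as in the uncalibrated mode
    h, w = left_t0.shape
    gx, gy = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    disp_t1_at_t0 = cv2.remap(disp_t1, gx + flow[..., 0], gy + flow[..., 1],
                              cv2.INTER_LINEAR)
    disparity_change = disp_t1_at_t0 - disp_t0

    # With calibration, disparity maps to depth (Z = f * B / d), so the disparity
    # change becomes a metric motion component along the optical axis
    valid = (disp_t0 > 0) & (disp_t1_at_t0 > 0)
    dz = np.zeros_like(disp_t0)
    dz[valid] = f_px * baseline_m * (1.0 / disp_t1_at_t0[valid] - 1.0 / disp_t0[valid])
    return flow, disparity_change, dz
```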

The software functionality can improve human-machine interaction in a number of industries, according to MVTec. For example, manufacturers of autonomous robots can use the feature to guide robots or autonomous vehicles safely. If a collision with a human worker is imminent, only the robot or vehicle concerned is stopped, not the whole production line, which saves on production costs and downtime.
