
Another dimension

Lid integrity can be identified easily with 3D imaging, shown here with Sick's TriSpector 3D vision solution.

Think of 3D vision and there is probably a list of classic application examples that spring to mind – robot bin picking might be one. Thanks to systems that are now easier to install, set up, calibrate, and get meaningful data from – as well as lower in cost – this list is broadening to include areas that might otherwise be thought of as covered by 2D inspection; tasks like checking for the presence or absence of caps on bottles, for instance.

Imaging in 2D still has its place in factories and 3D is not going to take over from 2D inspection any time soon, but the line between applications where 3D data is a definite requirement and those where 2D is best suited is blurring. Vendors like Sick and LMI are selling all-in-one smart 3D cameras – in fact, the number of vendors supplying the huge array of 3D imaging technologies has grown considerably over the last two decades – making 3D inspection more accessible for integration into production lines.

‘People are now realising the benefits of 3D vision and the usage of the height information rather than a contrast image of an item,’ observed Fredrik Nilsson, head of product management, 3D vision at Sick. ‘In many cases, it [3D] makes the inspection more reliable, especially when the colour or the contrast of the item changes, as 3D is more or less immune to these variations. Things can actually get easier in 3D compared to 2D, and more reliable – not always, but for some applications and more than people first think of.’

Nilsson believes that as product lot sizes get smaller and more varied, 3D vision will become indispensable as an inspection technique. He gave the example of bottle inspection, where there might be different coloured caps or different heights of bottles. ‘In 2D, it can be difficult to locate the different caps in the image depending on whether the background is the same colour as the cap or not. In 3D, the machine will always find the cap, regardless of colour,’ he said.

If the factory wants to change the print on the caps, engineers might have to adjust a 2D system because the contrast becomes poor, whereas a 3D camera would still locate the cap. Likewise, height changes – caused by the bottles getting closer to or further from the camera – can make inspection difficult in 2D. In 3D, provided the camera is calibrated, the measured size stays the same regardless of the object's distance from the lens.
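To make the idea concrete, here is a minimal sketch of a purely geometric cap check on a calibrated height map, assuming a single bottle format; all dimensions and function names are hypothetical and do not come from Sick. A multi-format line would key the thresholds to the measured bottle height instead.

```python
import numpy as np

def locate_cap(height_mm, bottle_top_mm=240.0, cap_rise_mm=8.0, min_area_px=150):
    """Find a cap in a calibrated height map (values in mm above the conveyor).

    The check is purely geometric: a cap shows up as a region standing
    roughly cap_rise_mm above the bare bottle top, whatever its colour,
    print or contrast. All dimensions here are illustrative.
    """
    mask = height_mm > bottle_top_mm + 0.5 * cap_rise_mm  # clearly above an uncapped bottle
    if np.count_nonzero(mask) < min_area_px:
        return None                                        # cap missing
    ys, xs = np.nonzero(mask)
    return float(xs.mean()), float(ys.mean())              # cap centroid in pixel coordinates
```

Because the height map is calibrated, the same threshold holds whether the bottle sits closer to or further from the camera.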

‘3D vision is the perfect tool to support Industry 4.0 when it comes to mass production of small lot sizes,’ Nilsson stated.

Terry Arden, CEO of LMI Technologies, goes a step further than Nilsson in describing what he feels will lead to the proliferation of 3D technology, saying: ‘The further use of 3D will only happen when 2D is fully integrated into the same sensor design.

‘The need for 2D is equally important to inspect surface markings and not just shape,’ he added. ‘Tomorrow’s 3D sensors must offer a hybrid solution offering both high resolution 2D and 3D in one package. Today’s smart 3D sensors achieve 2D by using reflected light from the laser. Tomorrow’s sensors will use dedicated light sources to optimise 2D quality.’

The throughput bottleneck

One of the areas holding back 3D vision is frame rate, or lack thereof. Advances in CMOS chips have increased the speeds 3D cameras can reach, while imagers now offer multiple high-speed LVDS camera channels to offload pixel data to downstream hardware. It means 3D cameras can achieve hundreds of frames per second at megapixel data densities but, despite this, Arden said, ‘camera chips are still too slow for many of today’s applications’ – by a factor of 10. ‘Instead of 100Hz, 3D sensors need to run at 1,000Hz and even much faster,’ he stated.
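As a rough illustration of why kilohertz rates come up so quickly in line scanning, the required profile rate is simply the transport speed divided by the desired profile spacing along the direction of travel. The numbers below are hypothetical examples, not figures from LMI.

```python
def required_profile_rate_hz(line_speed_mm_s: float, profile_spacing_mm: float) -> float:
    """Profiles per second needed so consecutive laser-line scans are no
    further apart than the desired spacing along the transport direction."""
    return line_speed_mm_s / profile_spacing_mm

# Illustrative numbers: a 1 m/s conveyor scanned at 1 mm spacing already needs
# 1,000 profiles/s, and 0.2 mm spacing pushes that to 5,000 profiles/s.
print(required_profile_rate_hz(1000.0, 1.0))   # 1000.0
print(required_profile_rate_hz(1000.0, 0.2))   # 5000.0
```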

‘Today’s camera chips don’t go fast enough to meet many high-throughput production lines,’ he said. As a result, many solutions require several sensors staggered along the production line to capture and stitch data from objects moving at speed. This means the sensors have to be aligned to a common coordinate system, such that the images are stitched into a final high-fidelity 3D point cloud ready for inspection. Arden noted that easy-to-use multi-sensor alignment and stitching is needed to support demanding uses of 3D sensors.
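The core of that alignment step is a rigid transform per sensor into a shared coordinate frame, after which the clouds can be merged. A minimal sketch is below, assuming each sensor's rotation and translation have already been found by a calibration routine – which is the part vendors are working to simplify.

```python
import numpy as np

def to_common_frame(points_xyz: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Map one sensor's point cloud (N x 3) into the shared coordinate system
    using that sensor's calibrated rotation R (3 x 3) and translation t (3,)."""
    return points_xyz @ R.T + t

def stitch(clouds, transforms):
    """Concatenate the aligned clouds into one point cloud ready for inspection."""
    return np.vstack([to_common_frame(c, R, t) for c, (R, t) in zip(clouds, transforms)])
```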

Compact 3D sensors from Automation Technology offering a range of resolutions up to 12 megapixels. (Credit: Automation Technology)

There are 3D cameras available operating at kilohertz frame rates – Automation Technology will present a 3D sensor at the Vision trade fair in Stuttgart in November with a frame rate of up to 200kHz, or 200,000 profiles per second. Not all applications will need this sort of speed, but even so, Nilsson at Sick feels there is demand to move towards 5-10kHz while covering a larger height range.

‘In many cases, especially in laser triangulation, there’s a trade-off between the height range covered and the achievable speed,’ Nilsson commented. The more sensor rows included, the larger the height range that can be inspected, but the slower the frame rate. ‘Getting the balance right is really difficult for many customers; they want to cover a large height range with high accuracy but also at a high enough speed – a 5-10kHz inspection rate is not unusual. Today, this can be hard to achieve and here we need to help them,’ he said.
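That trade-off can be sketched with back-of-the-envelope arithmetic: the number of sensor rows read out per profile sets both the height range and the maximum profile rate. The readout rate and mm-per-row figures below are invented for illustration and depend entirely on the sensor and the triangulation geometry.

```python
def triangulation_tradeoff(rows_in_roi: int,
                           row_readout_hz: float = 2.0e5,
                           mm_per_row: float = 0.1):
    """Return (height range in mm, max profile rate in Hz) for a given number
    of sensor rows read out per profile.

    More rows -> larger height range but fewer profiles per second."""
    height_range_mm = rows_in_roi * mm_per_row
    max_profile_rate_hz = row_readout_hz / rows_in_roi
    return height_range_mm, max_profile_rate_hz

# Halving the region of interest doubles the profile rate but halves the height range:
print(triangulation_tradeoff(200))   # (20.0, 1000.0)
print(triangulation_tradeoff(100))   # (10.0, 2000.0)
```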

Electronic component inspection is one example where there is a greater need for more accurate inspection as components get smaller, while at the same time the throughput has to increase, Nilsson noted.

Nilsson remarked that there isn’t a standard imager available offering full-frame imaging with good x-resolution at 5-10kHz frame rates, nor are there any at extreme speeds like 50kHz or 75kHz with a reasonable height range.

‘Higher 3D speed enables both better product and production quality, because the sensor can inspect in finer detail at an increased production rate,’ Nilsson commented.

‘The other side of it is that engineers want more exact measurements,’ commented Tobias Kröger, responsible for marketing at Automation Technology. The company has released a 12 megapixel laser triangulation camera suitable for inspecting products like semiconductor chips or ball grid arrays in electronic circuits. These types of application would otherwise need multiple one megapixel cameras in order to reach the accuracy required for quality control.
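The gain is easy to see from the sampling arithmetic: across the same field of view, lateral resolution scales with the number of pixel columns, so one high-resolution camera can do the work of several low-resolution ones. The field-of-view and column counts below are hypothetical examples, not Automation Technology specifications.

```python
def lateral_resolution_um(fov_width_mm: float, columns: int) -> float:
    """Sampling step along the laser line: field-of-view width divided by pixel columns."""
    return fov_width_mm / columns * 1000.0

# Illustrative only: over a 50 mm wide field of view, a ~1-megapixel sensor
# (1,280 columns) samples at ~39 µm per pixel, while a ~12-megapixel sensor
# (4,096 columns) samples at ~12 µm.
print(round(lateral_resolution_um(50.0, 1280), 1))   # 39.1
print(round(lateral_resolution_um(50.0, 4096), 1))   # 12.2
```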

Freeze frame 3D

The dominant 3D technology for measuring moving parts is laser triangulation, while stationary parts use other techniques like stereo fringe projection. ‘The emergence of 3D sensors based on pattern projection and time-of-flight technology, which can be used in a stationary way, represent interesting alternatives [to triangulation] moving forward,’ commented Pierantonio Boriero, product line manager at vision equipment provider, Matrox Imaging.

Time-of-flight (ToF) is gaining traction in industrial settings, with high resolution cameras now available from Odos Imaging, among other companies. Basler has released a ToF camera that the firm is positioning as a ‘mainstream 3D vision product’, according to Jana Bartels, product platform manager, 3D at Basler.

‘At the moment, 3D systems are typically quite expensive,’ she said, ‘but most customers… don’t need extremely high performance; they need good performance at a reasonable price, and there’s really a gap in the market at the moment.’ Basler’s ToF camera is tested to industrial standards and priced at €2,000.

‘Time-of-flight imaging is suited to factory applications including bin picking and robot guidance, both of which will be used in future smart factories,’ Bartels said. ‘Robots for factory automation would definitely need 3D information. I think there will be many more applications for this technology within the next two years.’

Time-of-flight also has the potential for guiding autonomous vehicles around factories, such as carts transporting components. ‘For Industry 4.0 to become a reality, factories want to automate everything; engineers want machines that can be guided like humans and for that they will need 3D information,’ Bartels observed.

Time-of-flight is well suited to robot guidance because of its large working range – Basler’s ToF camera, for example, operates from zero to 13.325 metres, enough to cover both robot and autonomous vehicle guidance.
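For context, the working range follows directly from how ToF cameras measure: distance is half the round-trip path of the emitted light, and for phase-measuring devices the reading repeats every half modulation wavelength, which caps the unambiguous range. The modulation frequency below is an assumption chosen purely to illustrate the scale of such a limit, not a Basler specification.

```python
C = 299_792_458.0  # speed of light in m/s

def tof_distance_m(round_trip_time_s: float) -> float:
    """Pulsed ToF: light travels out and back, so distance is half the round trip."""
    return C * round_trip_time_s / 2.0

def unambiguous_range_m(modulation_hz: float) -> float:
    """Phase-measuring ToF: distance readings repeat every half modulation
    wavelength, limiting the usable working range."""
    return C / (2.0 * modulation_hz)

# Purely illustrative: modulation at about 11.25 MHz gives an unambiguous
# range of roughly 13.3 m, the same order as the range quoted above.
print(round(unambiguous_range_m(11.25e6), 2))   # 13.32
```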

There are other 3D imaging methods suitable for robot guidance – Matrix Vision’s perception camera MvBlueSirius, for instance, is suitable for pick-and-place tasks, operating via stereovision. ‘Time-of-flight is a good fit for lots of applications in the factory, logistics and medical sectors at the moment, but all the other 3D technologies have their applications,’ Bartels noted. Time-of-flight has difficulties with ambient light and working outdoors, for example, and also imaging shiny objects because of reflected light. ‘You cannot say that it’s just one technology in 3D imaging; you choose the 3D technology that fits best for a specific approach.’

Cost matters

As with many things, the adoption of 3D vision might come down to cost. ‘Integrated 3D sensors that output the easier-to-work-with depth map or point cloud data are, in many instances, still too expensive,’ commented Boriero at Matrox Imaging. ‘Building a 3D scanning setup using general discrete components like 2D cameras and light generators requires a good software toolset, technical knowhow and, in general, experience.

‘Bringing down the cost of 3D sensors and simplifying the process of building a 3D scanning setup from discrete components are both needed to increase adoption,’ he continued. ‘The development and use of a standard for transmitting 3D data from sensor to computer – which is well underway – will be welcomed by all involved. Finally, more education and training, as well as integrators prepared to invest their time and effort to take it on, will further the use of 3D vision.’

It’s not completely clear-cut whether 3D vision is more expensive than 2D setups. Nilsson at Sick argued: ‘2D cameras are still less expensive than 3D, but if you look at the complete solution after adding a light source, a lens, the installation time, and maintenance, actually 3D might not be more expensive in the end. It might be more expensive to invest in 3D vision technology, but looking at the complete solution cost I would claim that, in many cases, 3D vision can be even less expensive than 2D imaging.’

 

One of the ways to reduce the cost of a 3D system is to use stationary-based 3D inspection. Arden at LMI said: ‘The future holds the promise of eliminating high-speed motion systems using laser triangulation, and converting a motion-based application into a stationary application using pick-and-place robot automation and area-based 3D such as fringe projection. The advantage of stationary-based inspection is to eliminate the cost and errors from motion systems and simplify overall measurement setup.’


