
Delivering 3D vision's potential

Imaging and Machine Vision Europe gathered a panel of experts to discuss uptake of 3D vision in robot automation in its latest webinar. Greg Blackman reports on what was said

'Our largest competitor of vision, in general, is no vision,' summed up Peter Soetens, CEO of Pickit, when asked about how often 3D vision is used for robot guidance and robot automation.

Pickit makes 3D vision solutions for robotics – Soetens listed three areas where 3D vision is opening up robot applications: bin picking, which, he said, is impossible to solve without 3D vision; depalletising, where items are unloaded from a pallet with a robot; and 3D part localisation, where vision is used to find a large part in space for the robot to pick it up and perform tasks.

But, as Soetens alluded to, vision isn't used as often as it might be, because it is often considered complex, costly, and beyond the expertise of engineers working on automation solutions, especially those with limited resources or budget. Oliver Selby, robotics business development manager at Fanuc UK, estimated that 15 to 30 per cent of the robots Fanuc UK sells include a vision system, either 2D or 3D. Fanuc has its own range of integrated vision systems, but also offers solutions with third-party suppliers.

Some automation solutions don't need vision to be effective, but Kamel Saidi, leader of the Sensing and Perception Systems Group at the National Institute of Standards and Technology (NIST), believes 3D vision can open up many more applications, especially for small- and medium-sized enterprises (SMEs). He said the benefit of 3D vision for SMEs, and for manufacturing in general, is that it reduces the need for the mechanical conveyance systems, jigs, and fixtures that would otherwise be required to automate a process. Building a solution with a lot of mechanical fixtures has its own complexities, and if the manufacturer is producing low volumes of parts – and many different parts from month to month – then it can't invest in a lot of infrastructure to automate the process.

Saidi said: '3D vision is an enabler for robot automation. When a robot starts to understand its surroundings in 3D – not just in 2D – we believe that it opens up many possibilities to interact more intelligently with the physical world.' The robot doesn't need as many mechanical fixtures to operate.

Working with 3D vision requires its own levels of expertise, however. Reducing the barrier to entry for 3D vision is something Saidi and his team at NIST are working on by developing standards for industrial 3D vision systems.

'We at NIST feel that standards are the building blocks of many successful applications of technology, because they help people understand how well the technologies work,' he said.

He added that NIST has 'talked to a lot of manufacturers or integrators who want to use 3D machine vision, and who haven't had their expectations met', because the terminology behind 3D cameras isn't defined well enough. Saidi said that terms such as resolution or depth error have to be defined, and the methods for measuring them must also be developed and become standard across industry.

A 3D camera might not perform as well as advertised because the parts the robot is asked to handle keep changing, or the environment – the lighting – changes, for instance.

'We feel that standards will help bring a common language, a way to talk about all of these things so that people understand each other better,' Saidi said. 'We really think that's a starting point for helping technology become more prevalent.'

Selby agreed, saying that smaller manufacturers looking for robotics and automation tend to try to find a one-solution-fits-all approach, because it gives them the best return on investment. 'As soon as you look at feeding components to an automation cell, the cost in fixturing components to get them accurate enough to pick and place becomes prohibitive,' he said. '3D vision allows us to essentially not have to fixture those components and it becomes flexible.'

Selby at Fanuc UK has been working on pick-and-place of battery components for battery packing cells in the automotive sector, as well as on applications in food, pharmaceuticals, and the assembly of electronic components. In typical food or assembly applications, he said Fanuc aims to pick at 60 picks per minute, which might equate to a frame rate of around 10 to 15fps, depending on the application and on factors such as the size of the field of view.
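As a rough illustration of why the frame rate is tied to the field of view – a minimal sketch with made-up numbers, not Fanuc's method – the snippet below estimates the rate a camera watching a moving conveyor would need so that every part passing through appears in at least one frame.

# A minimal sketch, assuming a camera watching a moving conveyor.
# All numbers are illustrative assumptions, not figures from the webinar.

def required_fps(belt_speed_mm_s: float, fov_length_mm: float,
                 frame_overlap: float = 0.5) -> float:
    """Frame rate needed so every part on the belt is seen in at least one
    frame, given the field-of-view length along the belt and the desired
    overlap between consecutive frames."""
    advance_per_frame_mm = fov_length_mm * (1.0 - frame_overlap)
    return belt_speed_mm_s / advance_per_frame_mm

# Example: a belt moving at 500mm/s, imaged with a 200mm-long field of
# view and 50 per cent overlap, needs about 5fps; halving the field of
# view or doubling the belt speed doubles the required rate.
print(f"{required_fps(500, 200):.1f} fps")

In practice the cell geometry, the part size and the number of frames needed per pick all feed into that figure, which is why Selby gave a range rather than a single number.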

Mark Robson, senior research engineer at the Manufacturing Technology Centre in the UK, noted that integration of robots and vision is still a key challenge for improving uptake of automation. He believes there are opportunities for vision companies to form closer relationships with robotics integrators and other automation integrators. The Manufacturing Technology Centre is compiling a list of integrators in the UK – so far it runs to 600 firms that sell or maintain industrial automation systems. 'There's a huge pool of people with automation knowledge who don't necessarily have that vision knowledge where there's opportunities for partnerships,' he said.

The Manufacturing Technology Centre, which takes ideas from academia and translates them into practical applications for industry, has worked on depalletising applications and, most recently, has been looking at deep learning-based methods for speeding up pick-and-place.

'The challenge that we see with quite a lot of pick-and-place applications isn't really around the vision; the vision is good enough and fast enough. Even the robots are fast enough,' Robson said. One remaining challenge, he said, is robot path planning. 'If your task isn't structured enough and there are several options for how the robot has to move, then figuring out the best motion plan for more complicated tasks is a challenge that we're looking at.'

Deep learning is being investigated at the Manufacturing Technology Centre for robot assembly, where the robot has to find the object and pick it up accurately enough to assemble it.

Robson also believes there are quite a lot of applications for 3D vision and robotics, in combination with deep learning, in agriculture and food handling.

Make it easy to use

The key to increasing adoption of 3D vision for robotics, according to Saidi, is to make the technology as simple to use as possible. 'Unless SMEs understand what to expect from a technology it's going to be hard to get them to use it,' he said. 'Because even people with experience in machine vision have had lots of issues in terms of expectations – they think a technology is going to do something for them and it ends up not working the way they expected. It gets shelved or not used, and it's very frustrating for them. If SMEs go through that, it will completely put them off.

'First, it needs to be very simple to use,' he continued. 'More importantly, the performance, how well these things work, needs to be well understood, so that when they [the customer] ask for something they know what to ask for, and when they get it they know what to expect.'

In order to help the users and integrators of these systems, NIST is working on best practices for 3D vision solutions for particular applications. Saidi and his team are speaking to industry experts and consolidating the information into something that everyone can use – users, integrators, and manufacturers. He said he welcomes participation from the industry.

Selby said that any reputable company developing automation solutions should be able to offer trials and demonstrations of equipment to a reasonably high level, so that customer expectations are met prior to sale, or through a period of testing and proof of concept.

Soetens said that a lot of people are yet to buy their first 3D vision system. 'What we see is that a lot of users are making the same mistakes regarding designing the gripper or setting up their cell,' he said. This led Pickit to conclude that it has to help customers think about the gripper and the robot alongside the vision – as more of a complete solution.

'We see companies partnering – gripper companies, vision companies, robot companies. Everyone is looking for this magical mix of the perfect combination for a given application,' he said.

'The industry would prefer to have a reliable and tested set of components that work well together instead of having all the freedom of choice but all the freedom to make a mistake,' Soetens continued. He believes that over the next two years there will be much more integration between the gripper, the software, the camera and the robot.

Selby added that education at an early stage – at university and schools – is important to further the use of robotics. Fanuc globally has education products for robotics that include vision. 'It's important to us that engineers of the future are able to take those robotic products and implement vision systems on them,' he concluded.

Write for us

Share your experience developing and deploying 3D vision and robot systems. Please get in touch: greg.blackman@europascience.com.
