Making machine vision add up

Greg Blackman tries to put a figure on the cost of vision technology

How much does machine vision cost? It's a question any engineer designing an automation solution has to answer: there is the cost of components, of designing and building the system, and of maintenance, with machine vision accounting for a percentage of the total.

But does it actually matter what a vision system costs? In some ways, what an integrator or equipment provider quotes for a vision system is not that important.

What matters is return on investment, the metric on which any good automation or vision project should be based. If adding a vision system to a manufacturing line improves productivity, reduces waste or eliminates errors, it will pay for itself over time.
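As a rough illustration of that payback logic – with entirely hypothetical figures, since the article quotes none – the calculation an engineer might run looks like this:

```python
# Hypothetical payback calculation for a vision system. The cost and
# savings figures below are illustrative only, not from the article.
def payback_months(system_cost: float, monthly_savings: float) -> int:
    """Months until cumulative savings cover the up-front cost."""
    if monthly_savings <= 0:
        raise ValueError("monthly savings must be positive")
    months = 0
    recovered = 0.0
    while recovered < system_cost:
        recovered += monthly_savings
        months += 1
    return months

# e.g. a $50,000 vision system that saves $4,000/month in scrap and rework
print(payback_months(50_000, 4_000))  # -> 13
```

The same arithmetic underpins the article's point: if the payback period is short relative to the line's lifetime, the quoted price of the vision system matters far less than the return it generates.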

In pharmaceutical production, for example, vision is such an important part of the manufacturing line that it's practically legislated, according to Simon Beveridge, managing director of UK-based vision integrator Siga Vision. Or, at least, the manufacturer has to have an operating method that captures certain data, which is best achieved using vision. So, here, the cost of vision is largely irrelevant, because without it there's no production line. (Read more about how Siga Vision has been working through the pandemic here.)

Not all manufacturing requires vision to the same extent as pharmaceutical production. Where it’s a little less clear cut on the necessity of vision technology, the starting point should be the application, and whether vision can be used to address the problem.

David Dechow, principal vision systems architect at US integrator Integro Technologies, explained that the difference between a successful and an unsuccessful vision installation comes down to technical knowledge and diligence in engineering, as well as making sure the system is specified correctly for the application; no one wants a system that doesn't work. However, it might be that the vision works, just not for the task for which it was built.

‘As an integrator, or even as a proponent of vision technology, we want you to do it right first time,’ Dechow said, ‘and that requires some knowledge and diligence in engineering. It’s a function of engineering – with something as known as machine vision, we can make sure this happens first time.’

He added: ‘Machine vision systems can vary widely in price, but it’s in comparison to the task they have to perform.’

Nevertheless, cost does play a role in the adoption of machine vision. During AIA’s online Vision Show this summer, Mark Lewandowski, global engineer at Procter and Gamble (P&G), said vision systems add capability, but as an enabling technology they shouldn’t cost as much as the rest of the system.

P&G has thousands of vision systems deployed across the company, he said, and it now uses 3D vision more readily as the technology has become more accessible. Lewandowski added that the company needs complex 3D imaging methods to design innovative production processes, but when the vision system costs as much as the robot, it becomes unaffordable to deploy across its factories.

During the AIA event, Paul Thomas at P&G described a robot bin picking system the team built using an Intel Realsense depth sensor, a relatively inexpensive stereo camera. He said the vision system and accessories should not exceed $10,000 in cost, not including the robot. He was targeting 45 picks per minute, with 1mm placement accuracy. Maintenance needed to be minimal because some of the sites where the robot would be deployed have little technical expertise.

The team wrote in-house deep learning algorithms for the bin picking solution and ran them on an Nvidia EGX platform. The objects the robot was handling often changed and were not geometrically feature-rich, so could look symmetrical. Colours could be uniform, which made it difficult to find edges, and the bins could also be a similar colour to the objects. The team wanted an oriented pick from a single image in order to reach a speed of 45 picks per minute, so using multiple cameras or stopping the robot wasn't going to be viable.
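The 45 picks-per-minute target explains why a single image per pick was non-negotiable. A quick back-of-envelope cycle budget – the stage timings below are assumptions for illustration, as the presentation gave only the overall rate – shows how little time each pick leaves:

```python
# Back-of-envelope cycle budget for a 45 picks/min bin-picking target.
# Stage times are hypothetical; the article states only the rate target.
TARGET_PICKS_PER_MIN = 45
cycle_s = 60.0 / TARGET_PICKS_PER_MIN  # ~1.33 s available per pick

# Assumed split of one cycle: a single depth-image capture and one
# deep-learning pose estimate must fit alongside the robot's motion.
# A second exposure, or stopping the robot to re-image, would not fit.
budget_s = {
    "image_capture": 0.05,   # one depth frame
    "pose_inference": 0.25,  # deep-learning oriented-pick estimate
    "robot_motion": 1.00,    # move, grasp, place
}

used_s = sum(budget_s.values())
assert used_s <= cycle_s
print(f"cycle: {cycle_s:.2f} s, used: {used_s:.2f} s, "
      f"margin: {cycle_s - used_s:.2f} s")
```

Even with optimistic motion times, the margin is a few hundredths of a second, which is why multiple cameras or a stop-and-reimage step were ruled out.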

Thomas said the use of deep learning was paramount to P&G’s success. The team created 3D scans of all the products the robot would work with, to train the deep learning algorithm.

Lewandowski had a wish list for the vision industry. First, the use of standard building blocks – hardware, software, networks – to connect together without having to reprogram or create a customised solution for each application. This would help cut the cost of engineering, as well as meaning systems can be deployed at scale.

He would also like to see systems that can be redeployed easily for other tasks, to improve their value. Ease of use is another desire to open up the use of vision to more small- and medium-sized companies that might not have technical support.

Support itself can also be an issue, he said, in terms of whether a problem is to do with the robot, vision or another part of the system. He asked: ‘How do we get a truly integrated solution?’

Plug-and-play of all vision components with automation and robotics is another wish list item, as well as the ability to use and integrate more open source code.

The bin picking solution that Thomas developed at P&G was successful thanks to its Realsense camera and its deep learning algorithm. As for the cost of developing it – which wasn't the subject of Thomas' presentation – it is difficult to put a figure on, because where does one draw the line when doing the accounting? Does the cost of the project include all the software development for 3D analysis and deep learning, for instance? P&G is a large company with a big R&D department that can benefit from economies of scale gained by working on related applications.

Dechow, at Integro Technologies, said that ‘companies without the resources of large players need to have a reliable and robust vision system that can be implemented with reasonable engineering skills’ – without needing a team of R&D scientists skilled in 3D and deep learning algorithms.

‘In this context, commercial vision systems are a real bargain,’ he said. ‘Thousands upon thousands of man hours go into the development of components and associated software, to make the product suitable for general purpose industrial automation. This is a value well worth the cost.’

Dechow also believes that overarching standardisation would be more of a hindrance to the vision sector than a help: ‘The machine vision industry, like any other, thrives and grows based upon competitive products. I would hate to see some standardised, plain vanilla component or software specification where every product did the same thing with the same algorithms and same outcome.

‘It could potentially stifle the industry and market for this technology.’

The question that Dechow says people should ask when it comes to thinking about the cost of technology is: ‘How much should an enabling technology cost if it provides the required automation result and return on investment?

‘What’s important is that industry needs automation, and it needs it now more than ever,’ he concluded.

Additional reporting by Matthew Dale
