Exploiting embedded vision at EMVA meeting
Greg Blackman reports on the discussion around embedded vision at the European Machine Vision Association’s business conference in Copenhagen, Denmark in mid-May
Leaders at the European Machine Vision Association’s business conference agreed that the new wave of embedded processing boards is likely to benefit everyone in the machine vision industry, although traditional industrial inspection firms will not be among the early adopters.
A panel of machine vision experts at the conference in Copenhagen discussed the topic of embedded vision and how it could impact the industrial vision sector. The event took place from 16 to 18 May; the EMVA alongside Messe Stuttgart will also hold an embedded vision conference in Stuttgart later in the year, from 24 to 25 October.
Embedded vision in its current form generally refers to compute boards connected to off-the-shelf sensors to create imaging sub-systems.
‘The thing that embedded vision is really bringing is an economic shift for what is feasible and at what price,’ commented Arwyn Roberts, CTO of UK machine builder Surface Inspection.
‘Embedded vision is just a new paradigm of architecture for machine vision or computer vision technologies,’ he said. ‘The application is the same for us; just, for us, the selection of components is changing. We might architect our machines in different ways using the best available technology at the best available prices.’
He went on to say that traditional machine vision companies are not going to be early adopters of this technology. ‘We are conservative, we need to be,’ he said. ‘My customers expect to be able to get service on a machine they bought 15 years ago.’
Most of the work on embedded vision comes from outside traditional machine vision, from areas like autonomous driving, which is a much larger market than factory automation. The machine vision industry stands to gain from all this development work, as it has with image sensor technology, which has seen huge investment from mobile phone manufacturers.
The question, therefore, is how the machine vision sector can exploit these new embedded computing platforms.
The other concern raised during the discussion was whether large firms like Intel or Amazon would take market share in industrial imaging. The panel’s view was that machine vision is too small a market for these companies to target directly. However, an audience member who had attended Intel’s partner event in Dublin in April said that Intel had named the Internet of Things as an area it expects to grow by around 10 per cent over the next three years, and that, for a company of Intel’s size, 10 per cent amounts to a $20 billion gain.
‘The message was: we [Intel] want to grow by $20 billion; you, partners, can grow by $60 billion,’ the audience member reported. ‘I don’t know how to judge it. I don’t know how much [is] bullshit, but maybe we should be aware of the bullshit.’
Intel’s interest in Internet of Things is in providing edge computing platforms, the sort of devices a vision sensor could be connected to.
Mark Williamson, director of corporate marketing at Stemmer Imaging, remarked that Intel had taken a licence on GenICam, the generic camera interface standard. Stemmer Imaging and Framos both sell Intel’s RealSense 3D camera, for which Stemmer has written a GenICam driver. ‘Intel were very interested [in the driver],’ Williamson said. ‘They [Intel] wanted to partner with us [Stemmer]; they’ve now licensed GenICam – what’s going to happen?’
Intel’s sales figures are impressive, but the general consensus was that the smaller machine vision market will largely benefit from embedded components. ‘In the traditional machine vision market, there are benefits if the pricing is right, and the volumes and the drivers and the software,’ Williamson commented. ‘It’s a win-win.’
Colin Pearce, CEO of Active Silicon, added that, while there will be more applications of vision technology, embedded vision can also be disruptive, potentially for frame grabber and cabling manufacturers. Half of Active Silicon’s revenue is from frame grabbers, the other half from embedded systems. One of its customers, Pearce said, has changed its traditional plate glass inspection machine from racks of line scan cameras with Camera Link interfaces to a system of intelligent cameras connected via Ethernet. He said that a lot of Active Silicon’s investment is now on the embedded system side of the business.
‘High-end [systems] will survive,’ commented Dr Olaf Munkelt, managing director of MVTec Software. ‘There are many high-end applications that will need frame grabbers. The change is that what was considered high-end 10 years ago can be engineered with embedded vision in two or three years.’
Embedded vision also opens up new applications outside of traditional machine vision, which Dr Andreas Franz, CEO of Framos, said are worth looking at. Framos launched a range of embedded sensor modules and processing board adapters at the 2018 Vision show in Stuttgart.
Returning to the question of how machine vision can exploit embedded processing technologies, Roberts of Surface Inspection noted that machine vision has benefited in the past from Moore’s law. ‘We need to choose the technologies to standardise for the applications that work in this industry,’ he said. ‘They won’t all survive. Certain things like USB 3 or Arm or Nvidia GPUs, they are presenting enough longevity and enough added-value to be worth migrating into our standard toolkits.’
Stemmer Imaging has written a GenICam driver for Intel's RealSense 3D camera
Williamson added: ‘You’ve got all these different boards coming out; one of the big challenges is device drivers and software to get these cameras to work. They have to be very specific to each board which means there’s a lot of development work needed. Maybe one day we’ll have GenICam for embedded vision which means they will be plug-and-play. But, I think we’re quite a way before it’s a plug-and-play environment.’
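The plug-and-play goal Williamson describes is essentially one of abstraction: application code targets a generic camera interface, while board-specific details stay hidden behind it. A minimal, hypothetical sketch in Python of that idea (the class and feature names here are illustrative only, not the real GenICam/GenApi API):

```python
# Hypothetical sketch: why a generic camera interface enables plug-and-play.
# Application code depends only on the abstract class, not on any board.
from abc import ABC, abstractmethod


class GenericCamera(ABC):
    """Abstract camera whose features are addressed by name, GenICam-style."""

    @abstractmethod
    def set_feature(self, name: str, value) -> None: ...

    @abstractmethod
    def grab_frame(self) -> bytes: ...


class BoardXCamera(GenericCamera):
    """Board-specific driver; in practice this would talk to real hardware."""

    def __init__(self):
        self._features = {}

    def set_feature(self, name, value):
        self._features[name] = value   # would write the board's registers

    def grab_frame(self):
        return b"\x00" * (640 * 480)   # would DMA a frame from the sensor


def inspect(camera: GenericCamera) -> bytes:
    """Application code: portable across any conforming camera/board."""
    camera.set_feature("ExposureTime", 10000)
    return camera.grab_frame()


frame = inspect(BoardXCamera())
print(len(frame))  # 307200
```

Swapping in a different embedded board would then mean writing one new driver class, not reworking the application, which is the per-board development effort Williamson says the industry currently faces.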
There was a presentation during the business conference about processors for intelligent vision devices from Dr Ren Wu, the founder of NovuMind, which makes computer chips designed to run artificial intelligence algorithms.
The firm has been working with NCS in Singapore to deploy its chip inside NCS security cameras for smart image processing onboard the camera. For these sorts of edge computing applications, power consumption is important; NovuMind’s NovuTensor chip can provide 15 tera-ops per second of computation while consuming only 5W of power. Wu said its next chip, set for release in Q1 of 2020, would consume 3W of power while running 37 tera-ops per second of computation, and would cost $50. He said this chip would be able to process 45.8fps at HD resolution with AI inference, compared to an Nvidia TX2 board costing $399 that he said would run at 1.41fps at HD resolution.
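Taken at face value, the figures Wu quoted imply a sizeable efficiency gap. A quick back-of-the-envelope comparison, using only the numbers above (these are the quoted claims, not independent benchmarks):

```python
# Back-of-the-envelope comparison of the compute figures quoted by NovuMind.
chips = {
    "NovuTensor (current)": {"tops": 15, "watts": 5},
    "NovuMind next-gen":    {"tops": 37, "watts": 3},
}

for name, spec in chips.items():
    efficiency = spec["tops"] / spec["watts"]  # tera-ops per second per watt
    print(f"{name}: {efficiency:.1f} TOPS/W")
# NovuTensor (current): 3.0 TOPS/W
# NovuMind next-gen: 12.3 TOPS/W

# Quoted HD-resolution AI inference throughput per dollar:
novumind_fps_per_dollar = 45.8 / 50   # next-gen chip at $50
tx2_fps_per_dollar = 1.41 / 399       # Nvidia TX2 at $399
print(f"Throughput per dollar: {novumind_fps_per_dollar / tx2_fps_per_dollar:.0f}x")
# Throughput per dollar: 259x
```

By these claimed figures, the next-generation chip would be roughly four times more power-efficient than the current NovuTensor, and over two orders of magnitude better than the TX2 on inference throughput per dollar.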
The business conference also featured presentations ranging from robotics, multispectral imaging and new illumination methods to the vision systems deployed in slaughterhouses and the machine vision market in China.
Dr Johannes Meyer, a PhD graduate from the Karlsruhe Institute of Technology, was presented with the EMVA’s Young Professional Award for his work on light field methods for inspecting transparent objects.
The next EMVA business conference will take place from 25 to 27 June 2020 in Sofia, Bulgaria.