Embedded vision: plug-and-smile!


The panel discussion at Embedded World in June. Credit: VDMA and Framos

Anne Wendel, VDMA Machine Vision, reports on what was said during a vision panel discussion at Embedded World earlier in the year

Embedded vision is not only a hot topic in the machine vision industry; computer vision is also one of the most promising technologies in the embedded community. At the Embedded World trade fair in Nuremberg in June 2022, it was clear that embedded vision has become an indispensable part of numerous applications and will remain a core part of the embedded community's future.

Gion-Pitchen Gross from Allied Vision, Jan Jongboom from Edge Impulse, Olaf Munkelt from MVTec Software, Jan-Erik Schmitt from Vision Components and Frederik Schönebeck from Framos discussed the topic of embedded vision integration and ‘towards plug-and-play’ as part of a VDMA-organised panel discussion at the event.

Embedded vision generally has the reputation of being relatively complicated technology; plug-and-play seemed unachievable in this field for a long time, and in fact the technology was often only suitable for experts due to a lack of tools.

The technical developments have meanwhile simplified and accelerated its use considerably. Jan-Erik Schmitt sees a similar development here as with deep learning: ‘A lot has changed in the past few years. There are tools that are now much easier to understand. The tools for realising embedded vision systems are also developing further because hardware and software are constantly becoming more powerful. This generates new ideas about where the technology can be used. Also, with the many embedded vision applications that have emerged over the past 20 years, there are more and more people who are interested in this topic and want to dive further into the technology.’

In addition, until a few years ago, mathematicians, physicists or engineers were required to create vision applications, and it was only through hands-on work with the technology that they could gain the necessary expertise. ‘Today, young people leave universities and are already vision experts because they have already dealt with this topic during their studies,’ Schmitt said. ‘Therefore, there are many more machine vision experts today than in the past.’ Schmitt therefore described the current state of the technology more as ‘plug-and-smile’.

According to Gion-Pitchen Gross, another factor that simplifies embedded vision is the use of open-source software: ‘Much of it is very easily available today and can be adapted to the specific application with little effort. This trend is still relatively young, but manufacturers such as Allied Vision now make their drivers available as open source so that users can adapt them to their use case and use prepared samples. This also makes it easier for users to realise their applications.’

Boost from AI

Artificial intelligence methods are currently being used in almost all technical fields. This is also a clear trend in the field of embedded vision, as Jan Jongboom explained: ‘From the user's point of view, AI makes it easier to develop systems that generalise well. The use of transfer learning, in which a model pre-trained on a large number of images from other applications serves as the basis for training, with only a relatively small number of new images from the specific use case added, reduces the effort for users enormously.

‘This revolutionises the way machine vision systems are programmed and dramatically lowers the hurdle for developing practical models,’ he added. ‘Just a few years ago, it was necessary to collect and train an extremely large number of images.’
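The transfer-learning idea Jongboom describes can be illustrated with a deliberately tiny, hypothetical sketch (not any panellist's actual tooling): a frozen ‘pretrained’ feature extractor stands in for a backbone trained elsewhere on many images, and only a small classifier head is trained on a handful of application-specific samples.

```python
import math

def pretrained_features(x):
    # Stand-in for a frozen backbone (in practice: a network trained
    # on a large, generic image dataset). Its weights are NOT updated.
    return [x, x * x]

def train_head(samples, labels, lr=0.1, epochs=200):
    # Train only the small head (two weights plus a bias) with
    # logistic-regression gradient descent on the new, small dataset.
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            f = pretrained_features(x)
            z = w[0] * f[0] + w[1] * f[1] + b
            p = 1.0 / (1.0 + math.exp(-z))  # sigmoid
            err = p - y
            w[0] -= lr * err * f[0]
            w[1] -= lr * err * f[1]
            b -= lr * err
    return w, b

def predict(w, b, x):
    f = pretrained_features(x)
    return 1 if w[0] * f[0] + w[1] * f[1] + b > 0 else 0

# Only four task-specific samples are needed, because the frozen
# backbone already supplies a useful representation.
w, b = train_head([-2.0, -1.0, 1.0, 2.0], [0, 0, 1, 1])
```

The point of the sketch is the division of labour: the expensive learning happened once, elsewhere; the user's effort shrinks to labelling a few images for the head.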

However, there is also a negative aspect to AI methods, Dr Olaf Munkelt pointed out, which is one of trust. ‘AI systems are very powerful, but they are not very good at explaining why they made a decision,’ he said. ‘We spend quite a lot of time building trust among customers in these kinds of algorithms, because people won't use them if they don't have that trust.’ This is especially true for applications that require very precise results, Munkelt said.

Nevertheless, the positives outweigh the drawbacks, Munkelt felt, as AI accelerators, among other things, help to simplify the use of embedded vision. MVTec has provided an abstraction layer in its software for this purpose, which makes it easier for developers to target accelerators through frameworks such as TensorFlow, OpenVINO and other products. ‘Users really appreciate this because it means they no longer have to worry about coding all the bits and bytes,’ Munkelt said.
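The general pattern behind such an abstraction layer can be sketched as follows. This is a hypothetical illustration, not MVTec's actual API: application code talks to one interface, while backend classes hide the details of a specific framework or accelerator (all names here are invented).

```python
class InferenceBackend:
    # Common interface the application programs against.
    def infer(self, image):
        raise NotImplementedError

class CpuBackend(InferenceBackend):
    def infer(self, image):
        # Placeholder: a real backend would run the model on the CPU.
        return {"backend": "cpu", "mean": sum(image) / len(image)}

class AcceleratorBackend(InferenceBackend):
    def infer(self, image):
        # Placeholder: a real backend would dispatch to accelerator
        # hardware via a framework such as TensorFlow or OpenVINO.
        return {"backend": "accelerator", "mean": sum(image) / len(image)}

def make_backend(name):
    # Selecting different hardware changes only this one call site;
    # the rest of the application stays untouched.
    backends = {"cpu": CpuBackend, "accelerator": AcceleratorBackend}
    return backends[name]()

result = make_backend("cpu").infer([0.1, 0.5, 0.9])
```

Swapping `"cpu"` for `"accelerator"` retargets the same application code, which is the convenience Munkelt describes: developers stop coding ‘all the bits and bytes’ of each device.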

Supply chain problems

Like all other areas of the electronics industry, the embedded vision scene is also affected by delivery shortages and supply chain problems. Delivery times for many hardware components have increased significantly and, depending on the component, can be half a year or even longer. In addition, there is a huge order backlog that many companies have to work off first. Munkelt is convinced that the current supply problem will not be resolved within a year, but will burden the entire industry for a longer period.

Despite these current limitations, the experts agree that the industrial use of embedded vision will benefit greatly, with a certain time lag, from technological developments in the consumer sector and the increasing computing power of embedded systems. For these reasons, the outlook remains very positive for devices equipped with embedded vision, such as mobile barcode or data code scanners, drones, self-driving vehicles, delivery robots, autonomous transportation systems and many other applications.

Vision Components’ Power SoM is an FPGA-based hardware accelerator. Credit: Vision Components

01 June 2022
