
Power at the edge

The Embedded World trade fair in Nürnberg went ahead in February, at a time when coronavirus concerns had begun to curb travel but had not yet stopped it entirely. The panel discussion on embedded vision, organised by Messe Nürnberg and the machine vision working group of the VDMA, brought together six experts to discuss the current status of the technology and the opportunities it presents.

Today’s computing platforms, such as Arm, x86, GPUs, FPGAs and multi-core processors, are capable of processing the large amounts of data generated by cameras. With these chips, much of the processing that previously had to take place on dedicated PCs or in the cloud can now be moved to the edge – that is, close to the sensor. This enables embedded vision to serve an increasingly diverse range of applications, from inspection and factory automation tasks to self-driving vehicles, driver assistance systems, drones, retail, security, biometrics, medical imaging, augmented reality and networked objects.

There are distinct opportunities here, and Jason Carlson, CEO of embedded computing firm Congatec, said during the panel discussion that adopters of machine vision are, in some cases, seeing a return on investment in less than a year. This, in his opinion, is helping drive the rapid adoption of embedded vision.

However, the unanimous opinion of the panel was that the complexity behind bringing the different components of an embedded vision system together – comprising technologies and companies from a number of different markets – is currently one of the main barriers to its uptake in industry.

‘At the end of the day, our customers want solutions,’ said Carlson. ‘What is really going to speed up the adoption is our ability to work together to bring all this – CPUs, GPUs, cameras, sensors, interfaces and software – together, because that’s what they want.’

Christopher Scheubel, executive director of software firm Cubemos, a Framos spin-off, said: ‘I would also say that the largest issues that we face right now are to do with interoperability. The CPUs, GPUs, ASICs… it all has to click together. This is not always the case. So if we had interoperability between those different systems, that would make our lives easier and also enhance the uptake of embedded vision.’

Gion-Pitschen Gross, product manager for embedded vision at Allied Vision, agreed: ‘Embedded vision systems require so many different skills to build – on the camera side, imaging side, and software level – it’s quite hard to build an embedded vision solution. To accelerate the growth of it, it needs to be made as easy as possible. I think making it easier through standards is something that could accelerate it considerably.’

Generating a standard that would enable image sensors to be plugged into the numerous types of available processor would be quite a challenge, according to Jan-Erik Schmitt, VP of sales at Vision Components, as it would involve working with each of the different processor manufacturers, such as Nvidia and Qualcomm. He did note, however, that over the past year or two Vision Components has seen growing adoption of the MIPI interface, which originates in the consumer market. ‘We see this coming more and more into industrial applications,’ Schmitt said. The MIPI Camera Serial Interface 2 (MIPI CSI-2) is currently the most widely used camera interface in the mobile device market.
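On an embedded Linux board, a MIPI CSI-2 camera typically shows up as a standard V4L2 video device, so frames can be grabbed with generic tools. The snippet below is a minimal sketch using OpenCV; the device index and resolution are assumptions that depend on the board and sensor driver.

```python
# Minimal sketch: grab frames from a MIPI CSI-2 camera exposed as a V4L2 device.
# Device index (0) and resolution are assumptions; they depend on board and driver.
import cv2

cap = cv2.VideoCapture(0, cv2.CAP_V4L2)          # open /dev/video0 via V4L2
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)

ok, frame = cap.read()                           # frame is a NumPy array (BGR)
if ok:
    print('Captured frame with shape', frame.shape)
cap.release()
```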

Bengt Abel, project leader for technology and innovations at Still, a supplier of forklift trucks that uses embedded vision for its robotics and assistance systems, was also on the panel. ‘In logistics we need standards in the interfaces, to get the information out over all systems – our fleet management system, warehouse management system and so on. But there are no standards,’ he said. ‘We have standards for image transportation, but not for object recognition interfaces. In robotics we are doing this – we just have one interface for everything we see – but we don’t see this at the moment for embedded vision systems.’

‘What Mr Abel said is completely right, we need standardisation,’ added Scheubel of Cubemos. ‘[When building embedded vision systems] there needs to be an embedded processor with an operating system, then we want to run AI networks, which need to be on either the processor or some special dedicated hardware. Then we’ve got the imaging module… we have to extract the image from the sensor, calculate a nice image, give it over to the processor, then to the hardware accelerator. So it’s quite a [difficult] process, and if we had some standardisation it would be really beneficial.’

Autonomous driving uses embedded computing to process streams from multiple sensors

One audience member was also interested in whether the GenICam standard – which aims to provide a generic programming interface for cameras and other devices – could be applied to embedded vision, or whether a new standard was needed. Gross remarked that Allied Vision would indeed like to see the standard brought to embedded vision, and that embedded vision would certainly benefit from GenICam. He explained, however, that while a standardisation effort is under way, there is still a way to go before it can be implemented.
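In practice, GenICam-style access from application code goes through a GenTL producer supplied by the camera vendor. As a rough sketch of what that looks like, the example below uses the open-source Harvester Python library; the producer path is a placeholder and exact method names vary between Harvester releases.

```python
# Rough sketch of GenICam-based acquisition via a GenTL producer,
# using the open-source Harvester library (method names vary by release).
from harvesters.core import Harvester

h = Harvester()
h.add_file('/path/to/vendor_producer.cti')    # placeholder: vendor's GenTL producer
h.update()                                    # enumerate connected devices

ia = h.create_image_acquirer(0)               # first camera in the device list
ia.start_acquisition()
with ia.fetch_buffer() as buffer:
    component = buffer.payload.components[0]  # raw image data plus width/height
    print(component.width, component.height)
ia.stop_acquisition()
ia.destroy()
h.reset()
```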

Plug and play

A number of vision companies have now brought out kits in an effort to make embedded vision more accessible. These include components such as computing boards, sensors, optics and software. One such firm is Alrad Instruments. Julian Parfitt, Alrad’s director of sales, told Imaging and Machine Vision Europe: ‘The issue was that there were cameras and GPU boards out there, but no one was bringing them together in a complete solution. Our kits therefore contain all the hardware and software necessary to create a single or multi-camera system.’

Alrad’s kits bring cameras from The Imaging Source together with Nvidia’s Jetson Nano, Jetson TX2 and Jetson AGX Xavier GPU platforms for deep learning and AI, in addition to the necessary peripherals and a cooling system. The kits come with image libraries already set up, and Nvidia software that enables basic face, object and character recognition. They work with a range of compact board cameras equipped with MIPI and FPD-Link interfaces, offering mono and colour imaging via new sensors from Sony and On Semiconductor. Some of the board cameras are also housed and IP67 rated, making them dust-tight and water-resistant.
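For the Jetson platforms in these kits, Nvidia’s open-source jetson-inference library is one common way to run the kind of out-of-the-box object detection described above. The snippet below is an illustrative sketch rather than the kits’ bundled software; the model name and camera URI are assumptions.

```python
# Illustrative sketch of object detection on a Jetson board using Nvidia's
# open-source jetson-inference library (not necessarily the kits' own software).
import jetson.inference
import jetson.utils

net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)  # pretrained model
camera = jetson.utils.videoSource("csi://0")       # MIPI CSI camera (assumption)
display = jetson.utils.videoOutput("display://0")

while display.IsStreaming():
    img = camera.Capture()
    detections = net.Detect(img)                   # inference runs on the GPU
    display.Render(img)
    display.SetStatus("{} objects detected".format(len(detections)))
```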

‘These kits significantly reduce the barrier for those looking to get involved with embedded vision, deep learning and AI,’ remarked Parfitt. ‘In developing the kits our target market was initially universities and research institutes that were either using single-board computers or starting to use a variety of GPUs designed for PC computer games in their imaging systems. This solution has not only turned out to be valuable for such users, but also for large company R&D teams in a wide range of application areas.’

The inclusion of Nvidia’s GPUs in the development kits is down to their ability to perform large amounts of parallel processing, Parfitt explained. ‘A lot of the tasks that weren’t able to be done in the past were really down to the speed and amount of processing required, and a CPU system struggles when you’re working with higher resolution cameras,’ he said. ‘In the imaging world we want high resolution, which results in far more pixels that need processing.’
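As a rough illustration of that point, the sketch below times the same Gaussian blur on a high-resolution image on the CPU (SciPy) and on the GPU (CuPy); the image size is an assumption and actual speed-ups depend heavily on the hardware.

```python
# Rough illustration of CPU vs GPU throughput on a high-resolution image.
# Speed-ups vary widely with hardware; the ~20-megapixel size is an assumption.
import time
import numpy as np
from scipy import ndimage
import cupy as cp
from cupyx.scipy import ndimage as cp_ndimage

img = np.random.rand(3648, 5472).astype(np.float32)   # ~20-megapixel test image

t0 = time.time()
cpu_out = ndimage.gaussian_filter(img, sigma=3)        # single-threaded CPU filter
print('CPU blur: %.3f s' % (time.time() - t0))

img_gpu = cp.asarray(img)                              # copy the image to GPU memory
t0 = time.time()
gpu_out = cp_ndimage.gaussian_filter(img_gpu, sigma=3) # same filter, parallel on the GPU
cp.cuda.Stream.null.synchronize()                      # wait for the GPU to finish
print('GPU blur: %.3f s' % (time.time() - t0))
```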

Alrad Instruments is not the only firm offering kits for embedded vision. Basler, for example, offers three kinds of kit: a plug-and-play evaluation kit with its Dart USB camera module; development kits that also feature Dart camera modules; and add-on camera kits that enable the operation of Basler cameras with evaluation boards from NXP Semiconductors. In addition to the camera module and lens, the kits contain all the required cables along with Basler’s Pylon camera software suite.

Alrad Instruments' embedded vision development kit

Phytec also offers a range of embedded vision development kits, featuring NXP’s i.MX 6 and i.MX 8 CPUs. ‘These microprocessors include quite powerful Arm Cortex cores, GPU units and peripheral components to build cost-efficient solutions,’ Martin Klahr, Phytec’s head of embedded imaging, told Imaging and Machine Vision Europe. ‘They also include one or more interfaces to camera sensors, which reduces the cost of design – and also power consumption – of an embedded vision system.’

He added that a new, particularly interesting member of the i.MX 8 family, the i.MX 8M Plus, has just been announced by NXP and will be available in the autumn. ‘This microprocessor will include a dedicated image signal processor, which allows it to carry out various image pre-processing functions,’ he said. ‘Also, the i.MX 8M Plus includes an AI co-processor, which will enable many interesting vision applications.’ Phytec has been selected to take part in the NXP alpha programme, which means it will launch a development kit with this CPU in the future.

Artificial intelligence

AI was also on the agenda for the panel discussion at Embedded World this year, with the experts explaining that a whole range of AI capability can now run at the edge.

The training of such AI models does not happen at the edge, however. This process is far too compute-intensive and requires expensive hardware capable of performing large amounts of parallel processing. For this reason, training neural networks is done either in the cloud or on-premises in server racks. Once trained, however, AI can be deployed at the edge without expensive hardware.
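A common pattern that follows from this split is to train on servers and then export the trained network into a portable format that edge runtimes can optimise for the target device. A minimal sketch, assuming a PyTorch model and an ONNX export (the model and input size are placeholders):

```python
# Sketch: train in the cloud or on a server, then export the trained network to
# ONNX so an edge runtime (e.g. TensorRT, OpenVINO) can optimise it for the device.
# The model and input size below are placeholders for illustration.
import torch
import torchvision

model = torchvision.models.mobilenet_v2(pretrained=True)  # stands in for a trained model
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)                  # example input shape
torch.onnx.export(
    model, dummy_input, "model.onnx",
    input_names=["image"], output_names=["scores"],
    opset_version=11,
)
print("Exported model.onnx for deployment at the edge")
```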

An important point raised by the panel was that AI shouldn’t be used as a standalone tool.

‘It’s always embedded in another application,’ said Scheubel at Cubemos. ‘If we apply AI for a computer vision task – e.g. a detection [or] classification task – we always combine it with classical computer vision to reach high levels of accuracy, otherwise this wouldn’t be possible. We always embed it into a software environment, which is then applicable for the customer where it can reliably solve their problem.’
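As a hedged illustration of that combination (not Cubemos’s actual pipeline), the sketch below lets a neural network locate a region of interest through OpenCV’s DNN module and then applies classical thresholding and contour analysis inside it; the model file and the detector’s output layout are assumptions.

```python
# Illustrative sketch (not Cubemos's pipeline): a neural network finds a region
# of interest, then classical computer vision measures within it.
import cv2

net = cv2.dnn.readNetFromONNX("detector.onnx")      # placeholder model file
image = cv2.imread("part.png")

blob = cv2.dnn.blobFromImage(image, scalefactor=1 / 255.0, size=(300, 300))
net.setInput(blob)
output = net.forward()                               # assumed to yield one box [x, y, w, h]

x, y, w, h = output[0].astype(int)                   # assumption about the output layout
roi = cv2.cvtColor(image[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)

# Classical steps: Otsu threshold and contour analysis inside the detected region.
_, mask = cv2.threshold(roi, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print("Found", len(contours), "contours inside the detected region")
```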

Scheubel was asked by an audience member how his firm addresses the problem of data labelling, as this often proves to be an expensive headache when starting out with AI. 

He agreed that this is indeed the case, especially if images have to be segmented, for example in skeleton tracking where several points have to be marked on each person. 

So far Cubemos has experimented with a number of different data labelling strategies. Having its own engineers carry out the task proved to be too expensive, while outsourcing it cheaply led to a drop in quality – often third-party services pay staff per picture labelled, so there is a tendency for high throughput rather than quality. The firm now works with staff who are paid for the time required to label the images, which gives higher quality. 

One panel member remarked that five years from now, having data labelled reliably in an inexpensive region could become a more viable option, as by then creating a high-quality AI model will likely be much easier.

Gross at Allied Vision noted that users were facing limitations when training AI models with high-resolution images. ‘What we have learned in talking to our customers is that everyone wants to have a high-resolution camera, for example from 12 to 20 megapixels,’ he said. 

‘But then in terms of AI, when faced with resolutions like this, they say that they can never train [using] this. They say this because a lot of processing power is needed. Currently this is still a challenge. They want to have a high resolution, but the models are trained with very low resolution images.’ 

He concluded by saying that while processing power still has a long way to go before it reaches the performance required to train AI using high-resolution images, he is sure this will be the case one day.
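The underlying arithmetic explains the gap: a roughly 20-megapixel frame contains about 75 times as many pixels as the 512 x 512 inputs many networks are trained on, which is why frames are typically downscaled (or tiled) before training. A minimal sketch, with the sizes as assumptions:

```python
# Minimal sketch of why high-resolution frames are downscaled before training.
# The ~20-megapixel frame and 512 x 512 network input are illustrative assumptions.
import numpy as np
import cv2

frame = np.zeros((3648, 5472, 3), dtype=np.uint8)   # ~20-megapixel camera frame
net_input = cv2.resize(frame, (512, 512))           # typical training resolution

full_pixels = frame.shape[0] * frame.shape[1]
small_pixels = net_input.shape[0] * net_input.shape[1]
print("Downscaling factor: %.0fx fewer pixels" % (full_pixels / small_pixels))
```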



Perspective from Anne Wendel, director of VDMA Machine Vision

It was much quieter at this year’s Embedded World in Nürnberg: concern about the coronavirus led to numerous cancellations by exhibitors and a significantly lower number of visitors than in previous years. Nevertheless, interest in the panel discussion, ‘Embedded vision everywhere’, was high as six experts discussed the current state of the technology and future developments.

The main driver for the success of embedded vision technology is the consumer sector. Above all, the development of smartphones and the cameras and processors used in them have laid the technical foundation for the rapidly increasing number of applications in many different industries.

In addition, investment in embedded vision systems pays for itself very quickly, often after just a few months, which also contributes to the great success of the technology. Both the consumer sector and industry benefit equally from this development.

Thanks to the performance of embedded vision solutions and their low price, the number of applications in many different sectors, within and outside of manufacturing, has increased significantly. Police body cameras that blur faces in real time for data protection, and vision-guided AGVs, were two examples given of where embedded vision is being used.

Bengt Abel, project leader for technology and innovation at Still, a logistics company belonging to the Kion Group, said that embedded vision is an important enabler for transparent supply and production chains. ‘I see enormous future potential for the traditional industrial sectors,’ he said. ‘There is a particularly large application potential for embedded vision in intralogistics, in the transport of goods to production facilities.’ Still is currently evaluating people recognition and goods tracking systems.

According to the panellists, the success story of embedded vision will not lead to traditional PC-based image processing solutions losing their benefits. In certain cases, a combination of embedded and PC-based vision can also provide optimum solutions. The experts noted that it does not pose a problem that Linux is usually used as the operating system for embedded vision, while Windows is often used in traditional image processing. Gion-Pitschen Gross, product manager at Allied Vision, said: ‘In Linux, you can limit yourself to exactly those parts of the operating system that you really need. This allows Linux systems to be kept simple and with a small footprint. Linux therefore gives users more freedom to adapt the systems exactly to customer requirements than is possible with Windows.’

Given the technology, price development and broad range of potential applications, embedded vision currently has great momentum, which, in the view of the panellists, will lead to many exciting developments in the next one to three years.

Abel said: ‘Users do not want to develop their own embedded vision system for every task at great expense. The ideal solution would be a kind of App Store, in which solutions for specific tasks, such as tools for edge detection, are available in a simple way.’ Whether this hope will ever become reality, only time will tell.
