
Edge imaging driven by AI, say Embedded World panel

Greg Blackman reports from the Embedded World show, where industry experts gave insights into vision processing at the edge

Discussions around embedded vision tend to turn to AI at some point, and yesterday's panel session during the Embedded World digital show was no different.

Customers often want to add AI functionality to a vision system, explained Arndt Bake, chief marketing officer at Basler, during the discussion. However, AI inference doesn't run well on a standard PC, so customers are looking for hardware on which it does run well, he said – namely embedded architectures.

Bake was joined on the panel by Olaf Munkelt, managing director of MVTec Software; Fredrik Nilsson, head of the machine vision business unit at Sick; and Austin Ashe, head of strategic partners and channels, AIoT devices at Amazon Web Services (AWS). The session was organised by VDMA Machine Vision along with NürnbergMesse.

Bake added that 'customers are struggling somewhat with the stretch of changing from the classic architecture to the new [embedded] architecture. We need to enable customers to take small steps... to migrate from one technology base to the other.'

Basler is partnering with AWS to provide vision solutions using AWS's new service, Lookout for Vision. The service allows customers to train machine learning models in the cloud that can then be deployed on edge devices, such as camera boards from Basler. Lookout for Vision is designed to make it easier to use machine learning, with the service targeting manufacturing and the industrial internet of things.
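To give a sense of how such a cloud-trained model is consumed, the snippet below is a minimal sketch that sends a single image to a Lookout for Vision model using the boto3 lookoutvision client. The project name, model version and image file are hypothetical, and in practice the model must first be trained and started (or packaged for edge deployment) before this call will succeed.

```python
# Minimal sketch: querying an Amazon Lookout for Vision model with boto3.
# Project name, model version and image path are hypothetical placeholders.
import boto3

client = boto3.client("lookoutvision", region_name="eu-central-1")

with open("inspection_frame.jpg", "rb") as image:
    response = client.detect_anomalies(
        ProjectName="camera-board-inspection",  # hypothetical project
        ModelVersion="1",
        Body=image.read(),
        ContentType="image/jpeg",
    )

result = response["DetectAnomalyResult"]
print("Anomalous:", result["IsAnomalous"], "confidence:", result["Confidence"])
```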

Ashe commented during the panel discussion that AWS is trying to lower the barrier to entry for customers wanting to use embedded vision for the first time, or those wanting to expand and scale it. He cited a figure that 75 per cent of businesses plan to move from pilot to full operational implementations of embedded systems over the next two to five years.

'We are positioning ourselves to orchestrate the edge and the cloud in a unique way,' he said, adding that edge processing is important when it comes to latency, bandwidth, cost of sending data, and security and privacy.

The cloud comes into play for things like monitoring devices, and also updating them. 'When you think about managing embedded vision systems at scale, there is an elegance that comes with being able to take a [neural network] model, train it in the cloud and deploy it over the air to all of the machines that need it,' Ashe said.

He added that the adoption of 5G 'creates a whole new opportunity for cloud and edge to have closer interoperability and more edge-to-cloud use cases to be delivered.'

In terms of using AI, Munkelt and Nilsson agreed that AI adds value and opens up new possibilities for using vision, but that it has to be easier to use. 'We have to enable customers of embedded vision to quickly get to the point where they see an added value [for using AI],' Munkelt said, adding that this has to happen at all stages of the workflow, from data labelling and data management through to processing.

Nilsson noted that AI is good for solving tasks that are difficult to solve with conventional rule-based image processing. It's also good for companies that want to use vision but don't have a lot of expertise in image processing – in tuning algorithms, for instance.

He added that both deep learning and conventional image processing will have a place in vision engineering, and that hybrid solutions will also become more common – for instance, segmenting objects using deep learning and then applying measurement tools with a rule-based algorithm. Some of Sick's smart cameras now run neural networks, and the company provides deep learning software.
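As a rough illustration of the hybrid pattern Nilsson describes, the sketch below assumes a neural network has already produced a binary segmentation mask (a synthetic stand-in is generated here) and then applies conventional rule-based tools – contour extraction and a minimum-area rectangle – to take measurements. The calibration factor mm_per_pixel and all names are illustrative, not drawn from any vendor's toolkit.

```python
# Sketch of the hybrid approach: a neural network produces a segmentation
# mask, then conventional rule-based geometry tools take the measurements.
import cv2
import numpy as np

def measure_objects(mask: np.ndarray, mm_per_pixel: float = 0.1):
    """Measure each segmented object with classic contour tools."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    measurements = []
    for contour in contours:
        # Fit a rotated bounding box and convert to physical units.
        (cx, cy), (w, h), angle = cv2.minAreaRect(contour)
        measurements.append({
            "centre_px": (cx, cy),
            "size_mm": (w * mm_per_pixel, h * mm_per_pixel),
            "angle_deg": angle,
        })
    return measurements

# Stand-in for a network's output: one filled rectangular blob.
mask = np.zeros((480, 640), dtype=np.uint8)
cv2.rectangle(mask, (200, 150), (400, 300), 255, thickness=-1)
print(measure_objects(mask))
```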

The race for AI accelerators

Munkelt said that there's currently a race to develop AI accelerator hardware, which is going to influence edge processing. He said there are many start-ups providing really interesting hardware, which can perform 10 or 20 times better than existing GPU hardware from established vendors.

'Speed for processing image data is super important, and will be important in the future,' Munkelt remarked. 'Everyone in our vision community is looking at these AI accelerators because they can provide a big benefit.' MVTec's image processing libraries include neural network-based approaches.

Later on during the Embedded World conference, Jeff Bier, president of the Edge AI and Vision Alliance and of BDTI, gave a presentation in which he noted that we are now entering the golden era of embedded AI processors – chips that, he said, could achieve 10 to 100 times, or even 200 times, improvements in efficiency when running neural networks. These levels of performance are not obtained through improvements in fabrication, but through improvements in architecture – domain-specific architectures.

Bier said that close to 100 companies, from start-ups to the largest established chip makers, are developing and offering these kinds of processors specialised for deep learning and visual AI. 'This is very different from what the industry looked like a few years ago,' he said.

During the panel discussion Bake said that one big benefit of embedded vision is that it brings down the cost of the components needed to build a vision system, and, as a result, opens up vision processing to many new application areas. Bake mentioned medical devices, lab automation and intelligent traffic systems as areas where he is seeing embedded vision used more often.

He added, however, that one of the barriers to entry is complexity, in terms of the different types of hardware available – GPUs, SoCs, ISPs, special AI processors – and mapping software effectively to that hardware. 'We see a lot of attempts from companies trying to bring the pieces together and make it easier for the customers. The easier it's going to get, the higher the adoption rate and the wider the usage of that technology,' he said.
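One common way to tame that hardware diversity is a runtime that maps the same trained model onto whichever accelerator is present. The sketch below uses ONNX Runtime's execution providers to illustrate the idea – the model file, input shape and provider preferences are hypothetical, and this is one portability layer among several rather than anything the panellists named.

```python
# Sketch: one portability layer across heterogeneous accelerators.
# ONNX Runtime runs the same model on the first preferred execution
# provider that this installation actually supports.
import numpy as np
import onnxruntime as ort

preferred = [
    "TensorrtExecutionProvider",  # NVIDIA GPU via TensorRT, if present
    "CUDAExecutionProvider",      # plain CUDA GPU
    "CPUExecutionProvider",       # always-available fallback
]
available = [p for p in preferred if p in ort.get_available_providers()]

session = ort.InferenceSession(
    "defect_classifier.onnx",     # hypothetical exported model
    providers=available,
)

frame = np.random.rand(1, 3, 224, 224).astype(np.float32)  # dummy input
input_name = session.get_inputs()[0].name
scores = session.run(None, {input_name: frame})[0]
print("Predicted class:", int(scores.argmax()))
```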

Ashe concluded that the next few years will bring a proliferation of vision devices thanks to the lower cost of CPUs and GPUs. There's going to be 'a more unified approach for the platform providers and the edge providers to come together, and what's going to unite them are the applications,' he said. 'The three have to work in unison to provide this seamless ease-of-use experience to quickly and easily address these use cases, and it needs to work almost as efficiently as the app store on our smartphones.'

The developers building the applications will be interconnected with hardware vendors and platform vendors. 'It's already happening,' he said. 'There's development happening at the edge and in the cloud, and it's all coming together.'

--

Perspective from Anne Wendel, director of VDMA Machine Vision

Embedded vision was, once again, a major trend topic at the Embedded World digital trade fair, which took place in March. VDMA Machine Vision organised a panel discussion, there was a dedicated embedded vision track and vision featured in many keynotes. So, be it in factories, agriculture, or our households and everyday lives, many future applications will be based on embedded vision.

The drivers for the success of embedded vision are cheaper, smaller, more powerful and more energy-efficient processors. Arndt Bake, CMO of Basler, said during the VDMA MV panel discussion that embedded vision devices will continue to get smaller, with smartphones serving as a size benchmark for camera manufacturers.

Deep learning has become crucial to building embedded vision solutions in a more efficient way, thereby reducing time-to-market. However, despite the rise of AI, a key factor for any vision application remains the quality of the image. Fredrik Nilsson, head of the machine vision business unit at Sick, noted that a vision task should start with the customer's needs, and, based on these, the appropriate vision components – camera, lighting, etc – should be selected to get a good image. Nilsson and Olaf Munkelt, managing director of MVTec Software, added that, in many cases, traditional rule-based algorithm approaches will not be replaced by deep learning. Hybrid solutions will also play an important role in the future.

Austin Ashe of Amazon Web Services said that orchestration of cloud and edge computing, as well as big data management, will play an important role in embedded vision. He mentioned monitoring devices – or fleets of devices – running real-time alerts, updating devices, training models and analysing data, all taking place in the cloud. In many industrial environments, however, two critical questions remain for any cloud-based application: data security and data ownership.

PC-based image processing solutions still have their place in an industrial setting. Embedded technology is used more when the task is cost-sensitive and where larger numbers of vision devices are required.

That said, new players are now enriching the embedded vision ecosystem: those developing AI-accelerating hardware, board manufacturers, platform providers, development consultancies focusing on embedded vision solutions and software firms. One thing is for sure: we will see more smart machines and devices that are able to see and understand in the near future.
