
Need for speed

Keely Portway on how vision application designers can use embedded technology to reduce complexity and time-to-market

Advances in embedded computing have been transforming how imaging devices are deployed, thanks to lower development and deployment costs than more traditional machine vision systems. This has led to more use cases, with applications in industries such as aerospace, automotive, augmented reality (AR), pharmaceutical, consumer electronics, defence, security and even retail.

The technology behind embedded computing has been around for some time. The first ‘smart cameras’ emerged from research institutions in the 1980s. When they reached the commercial market, most embedded products were custom solutions ideally suited to high-volume manufacturing.

Alexis Teissié, product marketing manager at Lucid Vision Labs, explained: ‘For many years, the option was to buy new, more powerful x86 CPU processing in PC-based systems. The way to go if you needed faster processing and higher bandwidth was upgrading the PC architecture, which was very flexible.’

The benefit here, explained Teissié, was that this could be adapted to a variety of configurations, both simple and high-end. ‘Instead of having a central processing system, there was a shift towards moving the processing closer to the acquisition side, closer to the edge,’ he said. ‘There was also the evolution of the graphics processing unit (GPU), which was well suited to a lot of vision processing tasks. Moving to GPU and edge analytics was one of the big paradigm shifts compared with traditional machine vision. Being able to run artificial intelligence (AI) on-camera is another motivation, because an optimised AI network doesn’t need high-end processing hardware.’
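To illustrate why an optimised network can make do with modest edge hardware, the sketch below contrasts a floating-point layer with an int8-quantised equivalent: storing weights and activations as 8-bit integers and accumulating in 32-bit integers cuts memory and compute dramatically. It is a minimal, self-contained C++ illustration, assuming a made-up three-weight layer and scale factor; it is not taken from any particular camera, framework or toolchain.

```cpp
// Illustrative sketch only: the idea behind running a quantised ("optimised")
// network on modest edge hardware. The layer, values and scale are hypothetical.
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <vector>

// Full-precision layer: multiply-accumulate in 32-bit float.
float dot_f32(const std::vector<float>& a, const std::vector<float>& b) {
    float acc = 0.0f;
    for (std::size_t i = 0; i < a.size(); ++i) acc += a[i] * b[i];
    return acc;
}

// The same layer after quantisation: weights and activations stored as int8
// and accumulated in int32, which is far cheaper on an embedded CPU, GPU or
// FPGA than 32-bit floating point.
float dot_int8(const std::vector<int8_t>& a, const std::vector<int8_t>& b,
               float scale_a, float scale_b) {
    int32_t acc = 0;
    for (std::size_t i = 0; i < a.size(); ++i)
        acc += int32_t{a[i]} * int32_t{b[i]};
    return static_cast<float>(acc) * scale_a * scale_b;  // de-quantise
}

int main() {
    // Hypothetical weights and input for a single neuron.
    std::vector<float> w = {0.50f, -0.25f, 0.75f};
    std::vector<float> x = {1.00f,  1.50f, -1.00f};

    // Hypothetical symmetric quantisation with a scale of 1/64.
    const float scale = 1.0f / 64.0f;
    std::vector<int8_t> wq, xq;
    for (float v : w) wq.push_back(static_cast<int8_t>(v / scale));
    for (float v : x) xq.push_back(static_cast<int8_t>(v / scale));

    std::cout << "float32 result: " << dot_f32(w, x) << '\n';                 // -0.625
    std::cout << "int8 result:    " << dot_int8(wq, xq, scale, scale) << '\n'; // -0.625
    return 0;
}
```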

Smaller and easier

The evolution of embedded tech has also driven a need for systems to be smaller and easier to integrate. ‘Systems started to become less enclosed,’ said Teissié, ‘so designers would not have to deal with cabling, for example – and the camera and processing would be nearby.’

However, as with all developing technologies, embedded tech does not always come without its challenges. With progress towards miniaturisation and edge processing, application designers found that they needed to work through several time-consuming steps to reach a finished product. Advances in modules that can connect directly to embedded boards have helped to alleviate some of these problems for designers, allowing more freedom to create an embedded vision system without having to design everything from a standing start. But the next challenge for vision application designers was architecture limitations, as Teissié explained. ‘For example,’ he said, ‘it can be difficult to deal with multiple cameras, because there is no standardised connection. So they would have to design carrier boards or interface boards. Then there is the industrialisation part, which is how to produce at scale with something that is robust and reliable.’

This move from concept to system production is a particular challenge. Teissié continued: ‘It’s very easy to get an off-the-shelf embedded development kit and use it to get something working. The question is around reliability. Can it be produced for many years? How sustainable is the lifecycle? Some of the chips have a long lifecycle, whereas the boards and the development kits are refreshed every year-and-a-half to two years. In the industrial space, long-term availability is key. So the designers have to do it themselves, managing obsolescence and updating their systems. They have to make sure that these pass all testing and certification, are reliable and can withstand harsh environments. They have to maintain all of this over the lifetime of the product and ultimately, that is a big investment. The alternative is that they select a platform from a manufacturer that can commit to supporting their business for the long term.’

Simplification solutions

In the latter case, vision application designers are looking to manufacturers for solutions that simplify these stages. For Lucid, this has involved a collaboration with AMD Xilinx, leveraging the Zynq UltraScale+ multiprocessor system-on-a-chip (MPSoC) to provide a solution for customers facing these challenges.

Zynq devices are designed to provide 64-bit Arm processor scalability, combining real-time control with soft and hard engines. They are built on a common platform that pairs real-time processors with programmable logic. Lucid has integrated the Zynq chipset into its latest development, the Triton Edge camera. Teissié revealed: ‘We have a strong partnership with AMD Xilinx and are leveraging the development framework, as it can adapt to various customers – from the application specialist to the embedded software engineer, all the way to the hardware developer dealing with the field-programmable gate array (FPGA). The Triton Edge is an expandable platform that designers can get running very quickly using the off-the-shelf tool we have built in with the Zynq interface.’
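As a purely structural sketch, assuming hypothetical class names rather than the Lucid or AMD Xilinx APIs, the C++ below shows one way an application can be layered so that the same pipeline stage runs either as software on the Arm cores or as an accelerated block in the programmable logic, which is the kind of flexibility such a platform is aimed at.

```cpp
// Structural sketch only (not the Lucid or AMD Xilinx APIs): the application
// programs against one interface, and the CPU or FPGA-backed stage can be
// swapped without touching the rest of the pipeline.
#include <cstdint>
#include <memory>
#include <vector>

struct Frame {
    int width = 0;
    int height = 0;
    std::vector<uint8_t> pixels;  // 8-bit mono for simplicity
};

// The application code depends only on this interface.
class PreprocessStage {
public:
    virtual ~PreprocessStage() = default;
    virtual void run(Frame& frame) = 0;
};

// Reference implementation on the Arm cores: a simple global threshold.
class CpuThreshold : public PreprocessStage {
public:
    explicit CpuThreshold(uint8_t level) : level_(level) {}
    void run(Frame& frame) override {
        for (auto& p : frame.pixels) p = (p >= level_) ? 255 : 0;
    }
private:
    uint8_t level_;
};

// Placeholder for an FPGA-backed stage: in a real design this class would
// hand the frame to logic in the programmable fabric and collect the result,
// but the application-side call looks identical.
class FpgaThreshold : public PreprocessStage {
public:
    explicit FpgaThreshold(uint8_t level) : fallback_(level) {}
    void run(Frame& frame) override { fallback_.run(frame); }  // stand-in
private:
    CpuThreshold fallback_;
};

int main() {
    Frame f{4, 1, {10, 100, 200, 30}};
    std::unique_ptr<PreprocessStage> stage =
        std::make_unique<FpgaThreshold>(128);  // or CpuThreshold, unchanged elsewhere
    stage->run(f);                             // pixels become {0, 0, 255, 0}
    return 0;
}
```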

The camera is designed to help vision application designers by avoiding the hardware validation required to qualify a product for challenging environments – with IP67 protection, it is certified against physical shocks and vibration, offers industrial EMC immunity and operates at ambient temperatures from -20°C to +55°C. Lucid and AMD Xilinx also manage the miniaturisation process before the camera reaches the designer – the Triton Edge measures a compact 29 x 44 x 45mm. High-speed video direct memory access (AXI VDMA) is provided between the on-camera image signal processor, the user-programmable FPGA and the on-board RAM, while the Arm cores use their own direct memory access (DMA) engine, freeing the processors from managing data transfers. VDMA and DMA also help reduce system bottlenecks, frame buffer overhead and memory access latency, so that designers can focus on running the vision processing efficiently.
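The value of handing transfers to dedicated engines can be illustrated with a generic ping-pong (double-buffered) scheme: while one frame buffer is being filled, the processor works on the other, so the processing loop never pays for data movement. The C++ sketch below is conceptual only, with a stand-in ‘transfer engine’ thread in place of real DMA or VDMA hardware; it uses no vendor API, and a real system would block on an interrupt rather than spin-wait.

```cpp
// Conceptual sketch only (no vendor DMA or VDMA API): ping-pong frame buffers
// let a transfer engine fill one buffer while the processor works on the other.
#include <algorithm>
#include <array>
#include <atomic>
#include <cstdint>
#include <numeric>
#include <thread>
#include <vector>

constexpr std::size_t kFrameBytes = 640 * 480;   // hypothetical frame size
constexpr int kFrames = 100;

std::array<std::vector<uint8_t>, 2> buffers = {
    std::vector<uint8_t>(kFrameBytes), std::vector<uint8_t>(kFrameBytes)};
std::atomic<bool> filled[2] = {{false}, {false}};

// Stands in for the DMA/VDMA engine: it owns data movement entirely.
void transfer_engine() {
    for (int n = 0; n < kFrames; ++n) {
        int i = n & 1;                                      // ping-pong index
        while (filled[i]) { /* wait until the processor releases the buffer */ }
        std::fill(buffers[i].begin(), buffers[i].end(), uint8_t(n));  // "capture"
        filled[i] = true;                                   // hand the frame over
    }
}

// The processing loop only ever sees completed frames and never copies data.
void process_frames() {
    for (int n = 0; n < kFrames; ++n) {
        int i = n & 1;
        while (!filled[i]) { /* wait for the next captured frame */ }
        volatile uint64_t sum = std::accumulate(            // placeholder "vision" work
            buffers[i].begin(), buffers[i].end(), uint64_t{0});
        (void)sum;
        filled[i] = false;                                  // release the buffer
    }
}

int main() {
    std::thread dma(transfer_engine);
    process_frames();
    dma.join();
    return 0;
}
```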

‘The embedded FPGA is really the uniqueness of this camera: part of the FPGA is open for the customer,’ said Teissié. ‘The FPGA is optimised for low-level or parallel processing tasks. It could be accelerating an AI engine, or a more standard computer vision type of processing running on the FPGA of the camera.’
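To give a flavour of the low-level, parallel processing that suits programmable logic, the sketch below writes a 3x3 box filter in the line-buffer, sliding-window style that maps naturally onto FPGA fabric, where the nine window taps can be summed in a single clock cycle. It is plain, illustrative C++, not code for the Triton Edge toolchain; a real on-camera design would express the same structure through the vendor’s FPGA flow.

```cpp
// Hedged sketch: a 3x3 box filter in the line-buffer / sliding-window style
// that maps well onto FPGA logic. Plain C++ for illustration only.
#include <cstdint>
#include <vector>

std::vector<uint8_t> box3x3(const std::vector<uint8_t>& in, int w, int h) {
    std::vector<uint8_t> out(in.size(), 0);
    // Two line buffers hold the previous rows, as on-chip block RAM would in
    // the fabric, so each input pixel is read from memory only once.
    std::vector<uint8_t> line0(w, 0), line1(w, 0);
    uint8_t window[3][3] = {};

    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            uint8_t px = in[y * w + x];

            // Shift the 3x3 window left and append a new column built from
            // the two line buffers plus the incoming pixel.
            for (int r = 0; r < 3; ++r)
                for (int c = 0; c < 2; ++c) window[r][c] = window[r][c + 1];
            window[0][2] = line0[x];
            window[1][2] = line1[x];
            window[2][2] = px;

            // Update the line buffers for the next row.
            line0[x] = line1[x];
            line1[x] = px;

            if (y >= 2 && x >= 2) {
                // In hardware these nine additions happen in parallel.
                uint16_t sum = 0;
                for (int r = 0; r < 3; ++r)
                    for (int c = 0; c < 3; ++c) sum += window[r][c];
                out[(y - 1) * w + (x - 1)] = static_cast<uint8_t>(sum / 9);
            }
        }
    }
    return out;
}

int main() {
    const int w = 8, h = 8;
    std::vector<uint8_t> img(w * h, 0);
    img[3 * w + 3] = 90;                          // a single bright pixel
    std::vector<uint8_t> smoothed = box3x3(img, w, h);
    // The bright pixel is spread as 90 / 9 = 10 over its 3x3 neighbourhood.
    return smoothed[3 * w + 3] == 10 ? 0 : 1;
}
```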

In the future, Teissié predicts that major advancements and new use cases will come from designers customising this tech for their own requirements. ‘You really can customise these systems,’ he said. ‘It’s at a low level as well, so we have no intention of becoming a solution provider ourselves – however, we are working with a variety of partners that can offer this type of solution and we are eager to see how people use it. We are already seeing many communities and open-source resources with lots of information sharing – but these advancements are not really on the hardware side, more in analytics, AI or deep learning processing. We are looking forward to seeing what comes next.’

Find out more about how Lucid’s Triton Edge camera helps vision application designers reduce their time-to-market while integrating their own IP into a compact vision system by downloading the latest white paper.
