Bedding down with embedded vision

With recent product releases offering GPU support, more vision engineers are starting to design embedded systems. Greg Blackman reports on the rise of embedded vision

Processing power in computer chips has grown exponentially and looks set to continue doing so. Machine vision engineers are beginning to tap into this abundance of computing power with embedded systems, in which a digital signal processor (DSP) or a graphics processing unit (GPU) carries out some of the image processing on board.
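To make the idea of onboard GPU processing concrete, the short Python sketch below pushes a single frame through a Gaussian filter on the graphics card using OpenCV’s CUDA module. This is a minimal illustration rather than code from any of the products mentioned here; it assumes an OpenCV build compiled with the CUDA modules, and the input file frame.png is a placeholder.

import cv2

# Load a frame on the host (CPU) side; 'frame.png' is a placeholder input
img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

# Upload the frame to GPU memory (requires an OpenCV build with CUDA support)
gpu_frame = cv2.cuda_GpuMat()
gpu_frame.upload(img)

# Create a Gaussian filter that executes on the GPU, then apply it to the frame
gauss = cv2.cuda.createGaussianFilter(cv2.CV_8UC1, cv2.CV_8UC1, (5, 5), 1.5)
gpu_blurred = gauss.apply(gpu_frame)

# Download the result back to host memory for display or further processing
blurred = gpu_blurred.download()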

Recently, BitFlow released a software extension that adds support for Nvidia’s GPUDirect for Video technology to its frame grabbers, and Andor Technology has introduced its GPU Express software library to ease GPU-accelerated image processing.

The trend towards embedded vision systems was evident to a certain extent at the Vision show in Stuttgart at the end of last year, with companies such as Vision Components launching systems incorporating FPGAs for onboard processing. FPGA provider Xilinx exhibited at the show for the first time, displaying its Zynq platform, which pairs an ARM processor with programmable logic and is well suited to vision applications.

Speaking to Imaging and Machine Vision Europe, Dr Vassilis Tsagaris, CEO and co-founder of Irida Labs based in Platani-Patras, Greece, commented: ‘Algorithms that used to run on a PC can now be embedded to achieve the performance needed.’

Tsagaris gave a presentation on embedded vision at the EMVA business conference in June. At the same event, Jochem Herrmann noted in his update on vision standards that the Future Standards Forum was setting up a study group to examine whether current machine vision standards can meet the needs of embedded vision.

So can all image processing be embedded? Tsagaris said not all, but that a lot of algorithms could be. ‘Deciding what you want to put on an FPGA and what to put on a GPU or a DSP is hard, and that’s where the magic of embedded vision engineers comes into play,’ he said.

Irida’s product portfolio includes modules for video enhancement, video stabilisation, face recognition, and single-frame super-resolution. The company is working on the embedded vision aspects of a project called Whiter, a collaborative programme to design a robotic cell for solar cell fabrication, as well as the Borealis project to build a machine for making complex 3D metal parts.

Tsagaris commented that one of the drivers for using embedded vision is to reduce the time-to-market of a vision system. With some basic functions running on board the vision chip, a system can be built and deployed much faster. ‘The customers want a complete solution, a system that delivers not just images but usable information and data right away,’ he said.

He also noted that there are now systems-on-chip (SoCs) designed specifically for embedded vision: Movidius offers its Vision Processing Unit (VPU), a chip dedicated to vision tasks such as people detection and object recognition. Application-specific processors from Cadence and Synopsys are also available, and Freescale offers an automotive vision processor. These are ‘companies that have not traditionally been in the vision market, but they see a trend in providing DSPs specifically for vision,’ Tsagaris said.

‘There is a convergence in the vision market between companies coming from hardware and camera markets and consumer electronics markets,’ he said, referring to two areas that were rarely linked in the past. Vision subsystems that incorporate a camera and lens, as well as embedded algorithms, are now in demand.

Tsagaris noted that power consumption will, to a certain extent, determine the type of processor used: ‘In most applications, one of the key requirements that drives the decision for hardware is power consumption. FPGAs, GPUs and DSPs are all available, but I think in most cases a DSP provides low power consumption, so if this is a strict requirement then the choice is biased towards DSPs.’

There is a wide choice of processors, depending on the application and the skills of the engineers. ‘A GPU with OpenCV support or standard language support can be a very good target,’ Tsagaris commented. ‘You might need a different software engineering skill set for working with DSPs, but this is not rocket science,’ he added. ‘Also a lot of DSPs support libraries like OpenCV, which is not industry standard, but is a good way to start building prototypes.’
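As a concrete example of the kind of prototyping Tsagaris describes, the Python sketch below runs OpenCV’s stock HOG-based people detector over a single image. It is a minimal illustration, not code from Irida Labs or any vendor mentioned here; the input file frame.png is a placeholder.

import cv2

# OpenCV ships a HOG descriptor with a pre-trained linear SVM for people detection
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

# 'frame.png' stands in for a frame captured from the camera
img = cv2.imread("frame.png")

# Slide the detector across the image; returns bounding boxes and confidence scores
rects, weights = hog.detectMultiScale(img, winStride=(8, 8))

# Draw each detection and save the annotated image
for (x, y, w, h) in rects:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detections.png", img)

Once a prototype like this works on a PC, moving it to a DSP or SoC that ships an OpenCV port is largely a matter of recompiling and optimising the hot spots, which is what makes the library attractive at the prototyping stage.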

Related articles:

Embedded vision highlighted at Vision 2014

Further information:

Irida Labs
