Reusing code in embedded vision applications

Migrating software code from a PC to an embedded system to run vision applications presents various challenges. Following a presentation he gave at Embedded World in Nuremberg, Germany last week, Frank Karstens, field application engineer at Basler, gives his advice on the task

A growing number of companies active in the machine vision field are beginning to recognise the benefits of an embedded approach. Most traditional machine vision systems are based on a classic PC setup: camera, cable and, in most cases, a Windows PC. Embedded vision systems consume less power, need less space and ultimately help the user to significantly reduce the cost of the system.

Nevertheless, there are a few challenges that need to be overcome to transfer existing software code from a classic PC setup to an embedded target: different operating systems (Windows versus hardware-specific Linux), different processor architectures (x86 versus Arm) and different camera interfaces (GigE versus MIPI CSI-2, for example).

Well-defined standards with well-defined interfaces help to bridge these differences. For a classic PC-based setup the industry has already found an answer in the GenICam standard, which was established in 2003.

GenICam – Generic Interface for Cameras – standardises camera configuration and image data transfer, and provides software developers with standard APIs. Reference implementations of GenICam exist for various operating systems and processor architectures. On top of that, there are also camera manufacturer-specific, GenICam-based SDKs available, which make the camera APIs easier to use. The broader the choice of operating systems, processor architectures and camera interface technologies an SDK supports, the greater the flexibility the user has when moving from one technology to another, and the easier it is to port existing code to a new target.

In typical PC-based machine vision applications, GenICam provides a stable interface that abstracts camera- and interface-specific details and allows plug-and-play functionality. Code written for a GigE camera from manufacturer A can be reused for a USB 3.0 camera from manufacturer B with only minor modifications.
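This kind of reuse can be illustrated with a small sketch. The `Node` and `NodeMap` classes below are hypothetical stand-ins for a GenICam node map, but the feature names follow the real Standard Features Naming Convention (SFNC) – `ExposureTime`, `PixelFormat` – which is what lets one configuration routine drive cameras from different vendors over different interfaces:

```python
# Hypothetical sketch of GenICam-style configuration. The node map and
# node classes are illustrative stand-ins, but the feature names follow
# the real GenICam Standard Features Naming Convention (SFNC).

class Node:
    """One camera feature exposed through the node map."""
    def __init__(self, value):
        self.value = value

    def set_value(self, value):
        self.value = value

    def get_value(self):
        return self.value


class NodeMap:
    """Maps SFNC feature names to nodes, as a GenICam XML description would."""
    def __init__(self, features):
        self._nodes = {name: Node(v) for name, v in features.items()}

    def get_node(self, name):
        return self._nodes[name]


def configure(nodemap):
    # This function knows nothing about GigE, USB 3.0 or the vendor:
    # it only addresses features by their standard SFNC names.
    nodemap.get_node("ExposureTime").set_value(10000.0)  # microseconds
    nodemap.get_node("PixelFormat").set_value("Mono8")


# Two cameras from "different vendors" over different interfaces expose
# the same SFNC features, so configure() is reused unchanged.
gige_cam = NodeMap({"ExposureTime": 5000.0, "PixelFormat": "BayerRG8"})
usb3_cam = NodeMap({"ExposureTime": 2000.0, "PixelFormat": "Mono12"})

for cam in (gige_cam, usb3_cam):
    configure(cam)
```

The point of the sketch is that `configure()` never touches transport-specific code; only the node-map instance it receives differs between cameras.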

Basler's Dart board camera with MIPI development kit

For embedded vision, however, things are different. There are a number of variables that need to be considered. The vision sensor is not necessarily a camera; it could be a camera module or even a bare CMOS sensor. In addition, there is an even greater choice of processing platforms than in the machine vision world. The different classic CPU architectures – such as x86, ARM, MIPS or PowerPC – have to compete with FPGA, GPU, DSP-based approaches and so on. Moving from one sensor or camera to another, or changing the processing system architecture, quite likely requires the software developer to rewrite significant parts of the vision software.

The possible challenges that may come up when migrating code from a non-embedded setup to an embedded one depend on the selected camera interface. The code for an embedded system with GigE or USB 3.0 is actually not that different from the code of a non-embedded approach. If the camera interfacing code for the non-embedded system was written for a GenICam-compliant API, and as long as the targeted processing platform provides these interfaces, the existing code can be reused without any modification when ported to an embedded target.
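Why such code ports cleanly can be sketched as follows. The class names here are illustrative (loosely modelled on a GenTL-style stream abstraction, not a real SDK): the acquisition loop is written against an abstract stream interface, so swapping the transport – GigE on a PC, USB 3.0 on an Arm board – does not touch the application code at all:

```python
# Illustrative sketch: the grab loop depends only on an abstract stream
# interface, so it is identical on x86/Windows and Arm/Linux targets.
# All class names are hypothetical, loosely modelled on GenTL concepts.

from abc import ABC, abstractmethod


class StreamGrabber(ABC):
    """Minimal stream abstraction hiding the camera interface."""
    @abstractmethod
    def retrieve_buffer(self) -> bytes: ...


class GigEStream(StreamGrabber):
    def retrieve_buffer(self) -> bytes:
        return b"\x01" * 16  # stand-in for a frame received over Ethernet


class Usb3Stream(StreamGrabber):
    def retrieve_buffer(self) -> bytes:
        return b"\x02" * 16  # stand-in for a frame received over USB 3.0


def grab_frames(stream: StreamGrabber, count: int) -> list:
    # Application code: reused without modification when the target
    # platform provides the same stream interface.
    return [stream.retrieve_buffer() for _ in range(count)]


frames = grab_frames(GigEStream(), 3)
frames += grab_frames(Usb3Stream(), 3)  # no changes to grab_frames()
```

Only the object passed into `grab_frames()` changes between targets; the loop itself is the code that survives the port.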

If the engineer wants to use the MIPI CSI-2 interface, this is not the case. In 2003, vendors of mobile devices and components formed the Mobile Industry Processor Interface (MIPI) Alliance, as it became clear that standards for connecting peripherals, including all kinds of sensors and displays, were required to speed up development and product release cycles. The CSI-2 specification – Camera Serial Interface, second generation – is today’s number one standard for connecting vision sensors or camera modules to mobile processors or SoCs.

MIPI CSI-2 is set to become the most important camera interface for embedded machine vision applications. However, the lack of a standardised API like GenICam makes it difficult to reuse code that was written for GenICam-compliant camera hardware.

Some camera vendors are now starting to put a MIPI CSI-2 driver and software stack under the hood of their existing vendor-specific, GenICam-based, camera SDK, which abstracts CSI-2 specifics. From the point of view of the user, such an API would look exactly like any other camera API from the same vendor and thus makes migration from another camera interface technology very easy. However, until the MIPI Alliance integrates GenICam into the CSI specification, this vendor-specific approach will remain a proprietary solution for specific camera/SoC combinations.

For the software developer this means finding a camera SDK that offers the broadest support for both non-embedded and embedded processing platforms, operating systems and interface technologies, including MIPI CSI-2. Having one unified camera API makes it possible to reuse significant amounts of existing code, and gives the user more flexibility to move from one technology to another and to port existing code to the new target.
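The "CSI-2 under the hood" approach described above can be sketched in a few lines. Everything here is hypothetical – the backend classes and transport names are not from any real SDK – but it shows the design choice: the application opens a camera through one unified API and never sees whether frames arrive over GigE or over CSI-2 (where a real SDK would wrap a platform driver such as a V4L2 device on the embedded target):

```python
# Hypothetical sketch of a vendor SDK putting a MIPI CSI-2 driver under
# the hood of its unified camera API. Backend classes and transport
# names are illustrative, not taken from a real SDK.

class GigEBackend:
    def read_frame(self):
        return {"source": "GigE", "data": b"\x00" * 8}


class Csi2Backend:
    # In a real SDK this would wrap the SoC's CSI-2 driver stack
    # (e.g. a V4L2 device on embedded Linux); here it fabricates a frame.
    def read_frame(self):
        return {"source": "CSI-2", "data": b"\x00" * 8}


class Camera:
    """Unified camera API: one class, interchangeable transports."""
    _backends = {"gige": GigEBackend, "csi2": Csi2Backend}

    def __init__(self, transport: str):
        self._backend = self._backends[transport]()

    def grab(self):
        return self._backend.read_frame()


# The same application code runs on the PC (GigE) and on the SoC (CSI-2);
# only the transport string, e.g. read from a config file, differs.
pc_frame = Camera("gige").grab()
soc_frame = Camera("csi2").grab()
```

As the article notes, until GenICam is integrated into the CSI specification, this abstraction only holds for the specific camera/SoC combinations a given vendor chooses to support.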
