NEWS

GenICam model recommended for future embedded standard

The Embedded Vision Study Group (EVSG) has recommended the GenICam and OPC UA specifications as the first methods of choice for standardisation of embedded vision. EVSG is led by VDMA Machine Vision under the G3 Future Standards Forum. Dr Klaus-Henning Noffz, CEO of Silicon Software, is in charge of standardisation issues on the board of VDMA Machine Vision.

In an update on the software API for communicating with embedded components, the working group suggested an expansion of the GenICam Standard Features Naming Convention (SFNC) as a suitable model for standardisation.

EVSG has already recommended introducing GenICam into a new, yet-to-be-created OPC UA companion specification for machine vision that would cover the integration of embedded systems into automation or processing environments. At the 2016 Automatica event, VDMA Machine Vision and the OPC Foundation signed a memorandum of understanding to formulate an OPC UA machine vision companion specification.

The group has yet to make a recommendation for a standard concerning modular construction and compatibility of systems using sensor boards and a processor unit or system-on-chip (SoC).

By expanding the GenICam naming convention to cover the software API, consistent description models for image data, such as bounding boxes, regions of interest or centre of gravity, could be defined, and the manufacturer-specific XML descriptions for processor modules could be integrated to produce a fixed syntax and uniform semantics.
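To make the idea of a uniform syntax concrete, the sketch below shows how an SFNC-style result feature, here a detected object's bounding box, might be described in GenApi-flavoured XML and read back programmatically. The element and feature names are invented for illustration; the actual SFNC extension is still under discussion and may look quite different.

```python
import xml.etree.ElementTree as ET

# Hypothetical GenApi-style description of a result feature produced by a
# preprocessing module. Feature names (ResultBoundingBox, OffsetX, ...) are
# illustrative only -- they are not taken from a published specification.
DESCRIPTION = """\
<RegisterDescription ModelName="ExampleModule">
  <Category Name="ResultBoundingBox">
    <Integer Name="OffsetX"><Value>128</Value><Unit>pixel</Unit></Integer>
    <Integer Name="OffsetY"><Value>64</Value><Unit>pixel</Unit></Integer>
    <Integer Name="Width"><Value>320</Value><Unit>pixel</Unit></Integer>
    <Integer Name="Height"><Value>240</Value><Unit>pixel</Unit></Integer>
  </Category>
</RegisterDescription>
"""

def read_features(xml_text):
    """Parse the feature tree into a plain dict of name -> (value, unit)."""
    root = ET.fromstring(xml_text)
    features = {}
    for node in root.iter("Integer"):
        name = node.get("Name")
        value = int(node.findtext("Value"))
        unit = node.findtext("Unit")
        features[name] = (value, unit)
    return features

print(read_features(DESCRIPTION))
```

Because both the names and the units are carried in the description itself, any host could interpret the result without manufacturer-specific knowledge, which is the point of a common naming convention.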

Two options are currently under evaluation for integrating the XML descriptions. In the first approach, the XML descriptions and the parameter trees of the camera and applications are combined. In the second approach, called GenTP, which is still being researched, the processor modules are individually recognised, addressed and configured by a PC host, and the XML files are read out separately.

Embedded vision devices exhibit particular characteristics that make an optimised interplay of electronic hardware and intelligent software a prerequisite for using them in image processing systems. They consist of arbitrary combinations of components, such as FPGAs, ARM CPUs and GPUs, that preprocess images internally. As a result, inconsistent data formats occur: RAW images, centre of gravity (Vector2), label (string), time stamp (date), event and encrypted data, to name a few examples.
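The variety of result types listed above can be made concrete with a small sketch. The container and type names below are assumptions chosen for illustration; they model how a generic result stream might carry pixel data alongside non-image results such as a centre of gravity or a label.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Union

@dataclass
class Vector2:
    """A 2D vector, e.g. a centre of gravity in pixel coordinates."""
    x: float
    y: float

@dataclass
class ResultChunk:
    """One typed result emitted by an embedded preprocessing step (hypothetical)."""
    name: str
    # RAW image bytes, a vector, a string label, or a time stamp
    payload: Union[bytes, Vector2, str, datetime]

# A mixed stream of the data formats the article mentions
stream = [
    ResultChunk("raw_frame", b"\x00" * 16),
    ResultChunk("centre_of_gravity", Vector2(412.5, 239.8)),
    ResultChunk("label", "part_ok"),
    ResultChunk("time_stamp", datetime(2017, 1, 1, 12, 0)),
]

payload_types = {type(chunk.payload).__name__ for chunk in stream}
print(payload_types)
```

Without an agreed generic description of such structures and their semantics, every consumer of the stream would need manufacturer-specific code to interpret each payload.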

Image preprocessing can take place in several steps, so communication between the processor modules must be precisely aligned. The modules therefore require a uniform description of their inputs and outputs and, moreover, must be easy to recognise, address and configure. In addition to image data, other formats such as objects, blobs and complex results can arise. This variety of data requires expanded generic description models of data formats and structures, as well as their semantic information.
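One benefit of a uniform description of module inputs and outputs is that a processing chain can be validated mechanically before any image flows. The sketch below is an assumed, simplified model: each module declares the data formats it consumes and produces, and assembling an incompatible chain fails immediately.

```python
from dataclasses import dataclass

@dataclass
class Module:
    """Simplified, hypothetical descriptor of a preprocessing module."""
    name: str
    inputs: set    # data formats the module consumes
    outputs: set   # data formats the module produces

def validate_chain(modules):
    """Check that each module's declared inputs are produced upstream."""
    available = {"RAW"}  # the sensor delivers raw frames
    for module in modules:
        missing = module.inputs - available
        if missing:
            raise ValueError(f"{module.name}: no upstream producer for {missing}")
        available |= module.outputs
    return available

# A valid two-step chain: debayering followed by blob analysis
chain = [
    Module("debayer", {"RAW"}, {"RGB"}),
    Module("blob_finder", {"RGB"}, {"Blob", "Vector2"}),
]
print(validate_chain(chain))
```

If the `blob_finder` module were placed first, `validate_chain` would raise an error because no upstream module produces `RGB`, which is the kind of interoperability check a standardised description would make possible across manufacturers.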

A solution should therefore centre on the description, parameterisation, control and synchronisation of the entire system. A further aspect concerns the security mechanisms that must be addressed, such as data encryption and IP protection. An expanded GenICam standard for the requirements of embedded systems should harmonise this diversity of components and data, with an emphasis on support for generic data formats and processor modules.

In developing a software standard, the EVSG working group drew on the very similar requirements of 3D line scan cameras with regard to image preprocessing and processor modules. The range of processor modules from various manufacturers calls for a generic description of their capabilities, consistent input and output formats for data transport, and uniform data formats and structures with their semantics, such as units of measurement, to guarantee interoperability of the modules. Output data is envisioned to be treated as objects rather than pixel-based regions, so that dynamic object sizes, lists, stream combinations and metadata can be accommodated. The complex processing-node topology of the processor modules should thus be overcome.

Further information:

The Embedded Vision Study Group report can be downloaded by registered members from the VDMA Machine Vision website (http://ibv.vdma.org) or may be requested from the VDMA, see 'Embedded Vision: which standards are necessary to prepare the sector for the future?'.

Related articles:

Embedded vision to transform industrial imaging? - Greg Blackman at the Vision trade fair in Stuttgart finds that embedded image processing looks set to transform the vision industry

Connecting vision to factories of the future - Anne Wendel, director of VDMA Machine Vision, discusses some of the latest standard initiatives the group is investigating to cater for the requirements of future smart factories
