Setting standards


The number of machine vision standards available has grown considerably since Camera Link was introduced 13 years ago. We asked members of the various standards committees for an update

The future of standard development

By Jochem Herrmann, chief scientist at Adimec, member of the EMVA executive committee and co-chairman of the Future Standards Forum

Standardisation is important for any industry and perhaps even more so for the relatively complex and distributed machine vision industry.

Twenty years separate the first de-facto digital interface standards, supported by only a few manufacturers, from the latest truly global machine vision standards, and in that time the industry has learned to manage the development and dissemination of standards more efficiently. This article looks back on how the industry made this transition and previews how standards will be developed more effectively in the future.

The early days of interface standards

In the early days of machine vision, life was easy. The interface between cameras and frame grabbers was analogue, governed by TV standards like CCIR or RS-170. The first digital interfaces became available in the 1990s. Because many manufacturers designed their own digital interface, the result was a proliferation of mutually incompatible interfaces – a great time for companies producing breakout boxes and cable assemblies, but bad news for customers of machine vision components.

The idea for Camera Link was born in the late 1990s – the first digital interface for the machine vision market supported by a large group of camera and frame grabber manufacturers. Standardisation was hosted by the AIA. Even today, Camera Link still plays an important role.

Because Camera Link could not be used in systems that needed long cable lengths, and required the use of a frame grabber, the GigE Vision standard was released. GigE Vision allowed for cables of over 100 metres, but at the time the speed was limited to 1Gb/s, which was enough for many low- and mid-end cameras.

But the fast pace of technology development resulted in new demands from the market on the one hand, and more technology options on the other.

Ever-growing speeds of new cameras and frame grabbers required an interface standard that could operate beyond the maximum speed of Camera Link. As often happens, two solutions were developed to address the same problem, which led to two new world standards: CoaXPress and Camera Link HS. Though both standards have their own features and benefits, there is considerable overlap between them. Given the small size of the machine vision market, multiple competing standards created a waste of valuable resources and caused confusion for customers.

On the software side, the GenICam and IIDC2 standards were developed. Both offer the software developer a standardised interface for setting up camera modes and acquiring images independent of the type of camera and frame grabber (if used). This makes it much easier for the user to write portable software – that is, software that can run with cameras and frame grabbers from different manufacturers.

Apart from interface and software standards, there were also initiatives from the industry to develop standards for lens mounts and illumination for use with machine vision.

The need for global cooperation

With so many new standards in development, the three most important machine vision associations – the AIA, JIIA and EMVA, based in the United States, Japan and Europe respectively – agreed to cooperate on the development of machine vision standards by signing the G3 agreement. Working together makes perfect sense given that machine vision is a truly global market and most vendors sell their products worldwide.

Cooperation not only prevents counter-productive competition between standards, but also allows for global access to standards, faster development and more re-use, resulting in better standards and faster acceptance by the market. This is beneficial for manufacturers and users of machine vision components alike.

Though the implementation of the G3 agreement came too late to prevent the side-by-side development of CoaXPress and Camera Link HS, the development of the new interface standard USB3 Vision was a global activity with more companies cooperating, more re-use, and a faster time to market than ever before.

Looking back, we can conclude that the industry truly learned from the past!

But successful cooperation on the development of standards is not enough, and several things can still be improved. A limitation of the G3 agreement was that it originally focused on the work done after the decision to develop a new standard had been made.

It could therefore not prevent two groups of companies starting work on competing standards, which is exactly what happened with Camera Link HS and CoaXPress. When both groups learned of each other’s existence, it was too late to change course. What we were still lacking was a planning aspect to standardisation.

For this reason, G3 started the Future Standards Forum (FSF), ‘a forum under G3 for the exchange of information about standards and technologies for the machine vision industry’. The FSF is proactive and operates globally for the benefits of all standards.

Generally speaking, the FSF does the following:

  • Investigates opportunities offered by new technologies (technology push) and identifies future challenges (which will result in market pull);
  • Provides recommendations for new standards and evolution of existing standards taking into account industry trends, global trends and user requirements;
  • Promotes the re-use and harmonisation of existing standards in order to minimise overlap between standards and prevent double work; and
  • Actively seeks collaboration with standard bodies outside the machine vision market in order to share ideas and investigate which standards can be re-used.

Two FSF working groups have been formed so far: the first to prepare a roadmap for digital interface standards, and the second to make recommendations about lighting standards.

The first action of the digital interface working group is a comparison of all interface standards for machine vision. This will result in a brochure that will be published by G3 later this year, targeting users of machine vision components. The goal is to help machine vision users with a comparison of all interface standards in order to be able to select the best interface option(s) for their application. Because the chairmen of all G3 standards are members of this FSF working group, an independent and unbiased comparison is guaranteed.

The next step will be the development of the interface standards roadmap. Initially this will show all existing interface standards, including developments that are already ongoing.

Starting from that roadmap, the working group will make proposals on how the standards can develop in the future based on market needs and technology options. This will probably be the most difficult and exciting part of the work. Not only is it difficult to predict the future, but we will also have tough discussions about which standards should be developed further (and in which direction), and which should be classified as ‘maintenance only’.

We are, however, confident that these discussions will lead to recommendations that are good for the industry as a whole. Users will benefit from the fact that there is a better view on standards roadmaps (important for planning), better standards, less confusion, and a broader product offering; manufacturers will benefit because of a faster market acceptance, more re-use, and hopefully fewer standards to be supported by their products – all resulting in greater profits.

But what about…

One of the concerns we hear most often is that the FSF will limit competition, which is bad for innovation. We honestly believe that this is not the case. Firstly, we shift the competition between standards to a much earlier moment in time.

People who are interested in standards development will share their ideas in the FSF working groups and have in-depth discussions with their peers. Good ideas will arise much more often through upfront discussion and cooperation – the best standards proposal will still win, but at a much earlier moment and at a lower cost for everybody.

The machine vision industry is already more than 25 years old and has constantly been learning. The successful precompetitive cooperation in G3 and the FSF shows that the industry is maturing. Better planning of future standards development, fewer but better standards, and global cooperation is not only beneficial for our customers, but for everybody earning their money in the industry.

USB3 Vision

By Eric Gross, National Instruments and Chair of the USB3 Vision committee

The volume of industrial cameras being embedded into systems has been shifting away from interfaces that require expensive dedicated frame grabbers towards consumer buses like FireWire, Ethernet and USB, which enable vision to be added to a greater number of systems. The latest step forward is the recent release of the USB3 Vision standard, which has a number of benefits for vision applications, especially over the USB 2.0 interface. Even though billions of USB 2.0 ports ship each year, the bus was never universally adopted as a camera interface. While its bandwidth limitations surely decreased the potential adoption rate, one of the main reasons for USB 2.0's limited success in vision applications is the lack of a standard.

Standards-based camera interfaces such as Camera Link, FireWire and GigE Vision have seen a lot of success in the machine vision market, and USB3 Vision should see similar results, potentially faster. Since both USB3 Vision and GigE Vision are GenICam based, migration between the two standards is very smooth. USB3 Vision will unseat GigE Vision in certain applications, but the pros and cons of each interface show that USB3 Vision isn't a replacement for GigE Vision, but rather a complement to it.

With the GenICam basis, however, system designers can choose the interface best suited for their application using criteria such as bandwidth and cable length without any software work, because the camera experience is identical between the two technologies.

USB3 Vision also inherits a lot of benefits from being based on USB 3.0. With bandwidth of more than 400 MB/s, it is 10 times faster than USB 2.0 and can compete with many machine vision standards on speed alone.

USB3 Vision is positioned to ramp up quickly. Camera manufacturers have been highlighting their USB 3.0 offering at recent trade shows and the major players are on board. It was also recently announced that the USB 3.0 Promoter Group plans to double the USB 3.0 speed to 10 Gb/s in an update later this year. USB3 Vision should fuel even more systems to take advantage of vision to make their devices and machines smarter.

CoaXPress

By Colin Pearce, Active Silicon, part of the CoaXPress committee

Two years on from becoming a standard (CoaXPress was ratified in March 2011), the number of companies adopting CoaXPress continues to grow. At Vision 2012 in Stuttgart, Germany, 12 camera manufacturers and four frame grabber manufacturers were demonstrating CoaXPress products. There are also many new adopters in industries outside machine vision, as evidenced by EqcoLogic's data on the number of companies purchasing its transceiver chips.

CoaXPress is a high-speed digital imaging transmission standard hosted by JIIA (Japan Industrial Imaging Association). It provides very high-speed data transmission over relatively long cable lengths with power and bi-directional communication. Because of this, the standard has generated interest not only in machine vision but also in a variety of other imaging-related industries where the simplicity of coaxial cable and high-speed transmission provides significant benefits.

Now, two years after the standard was ratified, there is an emerging pattern of application areas in the adoption of CoaXPress. These can be broadly categorised into three major areas. Firstly, super-high-speed, vision-based inspection systems (typical speeds of 25Gb/s), where the speed of inspection is critical in providing a competitive product. Typical applications include flat panel, smartphone, PCB and semiconductor inspection. These markets, predominantly in the Far East, require high-speed vision-based inspection for quality control and efficient manufacturing. The competitive nature of these industries often results in promising new technology being adopted rapidly to gain an advantage.

Secondly, there are high-speed imaging applications (6-12Gb/s) where CoaXPress offers not only the convenience of coaxial cable but also much faster speeds compared with more traditional standards such as Camera Link. Applications here include medical imaging in diagnosis and treatment, as well as high-speed scientific imaging in the life sciences.

The third area is in applications where medium speed (3Gb/s) over long cable lengths and a single coax provides significant benefits. Typical applications here include high-end surveillance, where single or multiple video channels can be streamed over a single coax with power and bi-directional control.

In terms of progress of the standard itself, v1.1 was released in March 2013. This version included many minor improvements in the documentation to aid implementation, but also added the optional high-speed uplink and the DIN 1.0/2.3 connector as an alternative to the established BNC. This connector may be used on its own, but the real space-saving benefits come from an innovative multi-way version developed primarily by Components Express, working within the JIIA's CoaXPress task force for connectors.

Future enhancements to the standard, currently under discussion, include a faster maximum link speed (10Gb/s per link, up from the existing 6.25Gb/s); forward error correction to provide a degree of immunity against channel errors; support for compressed data, metadata and time-stamping; and a number of minor enhancements to maintain compatibility with GenICam as that standard moves forward.

  

Camera Link HS

By Mike Miethig, Teledyne Dalsa and chair of Camera Link HS committee 


CLHS system overview

The Camera Link HS (CLHS) standard is designed to meet machine vision needs. It features data rates from 300 to 16,000MB/s, parallel data processing, fibre-optic-enabled distances greater than 1,000m, and 3.125 and 10Gb/s lane speeds, with single-bit error immunity ensuring reliable data and control messages. The pulse message is low latency (about 150ns delay) and has a peak-to-peak jitter of 6.4ns; pulse message frequencies of several megahertz are possible. Sixteen bidirectional GPIOs are supported, offering low latency and jitter of about 300ns. So, is all this capability hard to implement? Absolutely not.

Version 1.1 of the CLHS IP core was released in March 2013 and simplified CLHS core instantiation over version 1.0 by using inferred RAM instead of FPGA vendor specific blocks. The core consumes a small portion of modern FPGAs.

The IP is available from the AIA (the IP team consists of associates from Matrox, Mikrotron, PCO, Siso and Teledyne Dalsa), and includes both frame grabber and camera unencrypted VHDL code, a frame grabber reference design capable of reading seven lanes of the C2 cable, along with a comprehensive test bench and user guide.

ProDrive purchased the core and within one week was able to have the CLHS code instantiated in their camera. Eric Jansen of ProDrive said: ‘ProDrive chose CLHS because it offered single cable high video bandwidth, high uplink bandwidths, and small triggering latencies needed by our products. We haven’t had any issues in using the core and it really helped us adopt CLHS.’

Hooking the core up is easy (see figure). All message types use the same parallel interfaces for transmit and receive functions. The CLHS IP core is responsible for forming the packets and prioritising the messages according to the CLHS specification. The unencrypted simulation test bench is very comprehensive as it includes regression tests. The regression tests are run when design improvements are made or when new features are added, ensuring that previously working functions are not inadvertently broken.

The sophistication of the test bench is an opportunity for FPGA designers to learn test bench automation techniques that can be applied to other projects. The test bench design effort is a major reason why the IP core team is confident that developers using the CLHS IP core will be able to connect with products from different vendors with first time success.

CLHS continues to add capability for the most demanding applications. The committee is developing a methodology that expands single region of interest (ROI) to multiple ROI that can change frame by frame, with a goal to support camera-determined ROI. The test benches will ensure that any capability that is added will not break the working designs. CLHS offers the most robust, low-cost, long-service life data transmission technology, and the most comprehensive feature set. At the same time developers can concentrate on product features rather than protocol through the use of the IP cores.

EMVA 1288

By Bernd Jähne, chair of EMVA 1288 committee

Choosing a suitable camera for a given machine vision application can be a challenging task. The data sheets provided by the manufacturers are difficult to compare. Frequently, vital pieces of information are not available and the user is forced to conduct a costly comparative test that still fails to deliver all the relevant camera parameters. This is where the EMVA 1288 standard comes in. It creates transparency by defining reliable and exact measurement procedures and data presentation guidelines, it is globally accepted, and several companies offer measuring services and measuring equipment.

The EMVA 1288 standard was developed by a working group of more than 20 leading manufacturers, vision users and research institutes within the European Machine Vision Association (EMVA). The quality and the parameters of a camera, not including the lens, can be described by objective criteria.

EMVA 1288 allows a true datasheet-based comparison of different products: the specifications are defined, as are the measuring methods. By publishing EMVA 1288 data sheets, manufacturers can clearly communicate the performance and quality of their products. The basic parameters provided by the EMVA 1288 standard can be reduced to four categories.

Firstly, sensitivity: the brightness of a camera image does not adequately describe its quality. The relevant parameter is the ratio between signal and noise (SNR). For a camera with linear characteristics, the SNR is determined only by the quantum efficiency and the standard deviation of the dark signal noise. For good sensitivity at low irradiation, low dark noise is the most essential parameter; for a high SNR at high irradiation, it is also important that the sensor can collect as many charge units as possible.
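
As an illustration of that linear model, the following Python sketch computes the SNR from the quantum efficiency and the dark noise alone. It is a simplification (quantisation noise is ignored, and the function name and figures are ours, not from the standard):

```python
import math

def emva1288_snr(photons, quantum_efficiency, dark_noise_e):
    """SNR of an idealised linear camera in the spirit of the
    EMVA 1288 model (quantisation noise ignored for simplicity).

    photons            -- mean number of photons hitting the pixel
    quantum_efficiency -- fraction of photons converted to electrons (0..1)
    dark_noise_e       -- standard deviation of the dark noise, in electrons
    """
    signal = quantum_efficiency * photons  # mean electrons collected
    # Total noise: dark noise plus photon shot noise (shot variance = signal)
    noise = math.sqrt(dark_noise_e ** 2 + signal)
    return signal / noise

# At low light the dark noise dominates; at high light shot noise does,
# so the SNR grows roughly with the square root of the signal.
low = emva1288_snr(100, 0.5, 10.0)
high = emva1288_snr(100000, 0.5, 10.0)
```

This also shows why low dark noise matters most at low irradiation: in the dim case above, the dark term dominates the denominator, while in the bright case the shot noise of the signal itself takes over.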

The second is linearity: a number of applications require a good linear relationship between the intensity of illumination and the digital grey value. Thirdly, dark current: in an image sensor a signal is generated that does not depend on the illumination intensity but is created purely by thermal effects or leakage. This determines the maximum useful exposure time of a camera.
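
The link between dark current and maximum useful exposure can be turned into a rough rule of thumb. The sketch below is illustrative only – the function name, the 10 per cent budget and the figures are ours, not an EMVA 1288 formula:

```python
def max_exposure_s(dark_current_e_per_s, full_well_e, budget=0.1):
    """Longest exposure before thermally generated charge consumes a
    given fraction (`budget`, e.g. 10%) of the pixel's full-well
    capacity.  Illustrative rule of thumb, not from the standard.

    dark_current_e_per_s -- dark current in electrons per second
    full_well_e          -- full-well capacity in electrons
    """
    return budget * full_well_e / dark_current_e_per_s

# A sensor with 10 e-/s dark current and a 20,000 e- full well can
# expose for minutes before dark charge becomes significant.
t_max = max_exposure_s(10, 20000)
```

Doubling the dark current halves the useful exposure time, which is why dark current is the limiting parameter for long-exposure applications.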

Finally, homogeneity: image quality is heavily influenced by the type and intensity of the variations from pixel to pixel (fixed pattern noise). Two measures are taken into consideration: the dark signal non-uniformity (DSNU), which describes the spatial variations in the dark image, and the photo response non-uniformity (PRNU), which is the spatial variation of the sensitivity.
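
The two measures can be sketched in a few lines of Python. This is a simplification – the real EMVA 1288 procedure averages many frames first to remove temporal noise, and the helper names are ours:

```python
import statistics

def dsnu(dark_image):
    """Dark signal non-uniformity: spatial standard deviation of a
    (temporally averaged) dark image, here in grey values."""
    pixels = [p for row in dark_image for p in row]
    return statistics.pstdev(pixels)

def prnu(bright_image, dark_image):
    """Photo response non-uniformity: spatial variation of the
    dark-corrected response, relative to its mean."""
    diff = [b - d for rb, rd in zip(bright_image, dark_image)
                  for b, d in zip(rb, rd)]
    return statistics.pstdev(diff) / statistics.mean(diff)

# A perfectly uniform sensor has DSNU of zero; real sensors do not.
dark = [[2, 2], [2, 2]]
bright = [[102, 98], [100, 100]]
```

Subtracting the dark image before computing PRNU matters: it separates the sensitivity variation from the dark-signal variation, so the two numbers describe independent effects.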

GigE Vision

By Eric Carey, Teledyne Dalsa and Chair of GigE Vision committee

GigE Vision is a camera interface standard based on Ethernet technology. It provides two protocols layered on top of the Internet Protocol (IP): one to control a device and one to stream images from a device. Since all modern PCs have Ethernet connectivity, a GigE Vision camera does not need a frame grabber to capture images; it can be connected directly to any Ethernet port. With a Gigabit Ethernet connection, currently the most typical configuration, it is possible to stream images at up to 115 MB/s over a copper cable of 100 metres. Fibre optic is natively supported by Ethernet, and therefore by GigE Vision. Because it is built on top of Ethernet, GigE Vision directly benefits from any improvements made to Ethernet technology.
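
The bandwidth figure can be approximated by accounting for per-packet protocol overhead on the gigabit link. The sketch below uses typical header sizes rather than values taken from the specification, and real-world throughput is further reduced by driver and resend overhead:

```python
def gige_payload_rate(link_bps=1_000_000_000, packet_payload=8192):
    """Rough upper bound on usable image bandwidth of a GigE link,
    in bytes per second.  Assumes each packet carries `packet_payload`
    bytes of pixel data plus typical overhead: 14 B Ethernet header,
    4 B FCS, 20 B IP, 8 B UDP, 8 B streaming-protocol header, 8 B
    preamble and 12 B inter-frame gap.  Figures are illustrative.
    """
    overhead = 14 + 4 + 20 + 8 + 8 + 8 + 12
    wire_bytes = packet_payload + overhead
    packets_per_s = link_bps / 8 / wire_bytes
    return packets_per_s * packet_payload  # pixel bytes per second

# With 8 KB jumbo frames this gives roughly 124 MB/s, a little above
# the ~115 MB/s typically achieved in practice.
rate = gige_payload_rate()
```

The same arithmetic shows why jumbo frames help: with standard 1,500-byte payloads the fixed per-packet overhead eats a noticeably larger share of the link.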

GigE Vision was originally released by the AIA, and is now at version 2.0, which adds:

  • Formal support for 10GigE: this enables faster image transmission, up to about 900 MB/s, using 10 Gigabit Ethernet technology;
  • Support for link aggregation: multiple cables can now be paired to augment the bandwidth. For a two-cable configuration, throughput of about 230 MB/s can be achieved. Up to four cables can be combined in this manner;
  • Multi-camera synchronisation: this is realised using the IEEE 1588 Precision Time Protocol. A common time base is broadcast on the network and can be used by GigE Vision devices to synchronise specific actions, such as an image acquisition trigger command; and
  • Support for image compression (JPEG, JPEG 2000 and H.264): on top of uncompressed images, GigE Vision now supports the transmission of compressed data. This can be used to minimise the bandwidth consumed during image streaming, and is especially useful when multiple cameras are combined on the same gigabit link through an Ethernet switch.

Of the above, support for 10GigE coupled with faster CMOS sensors will certainly open many new opportunities for GigE Vision-based systems. A few companies have recently introduced 10GigE cameras and Camera Link to GigE Vision (10GigE) converter boxes, but this speed grade poses power consumption challenges that must be carefully managed by camera manufacturers.

The upcoming update to the GigE Vision standard is planned to improve the mechanical specification of the locking connector, add a few new pixel formats and look at offering better support for data coming out of 3D cameras. This should be available sometime in 2014.

GenICam

By Dr Fritz Dierks, Basler and Chair of GenICam committee

GenICam is the heart of all modern interface standards for machine vision cameras, such as GigE Vision, Camera Link, CoaXPress, Camera Link HS, and the latest newcomer, USB3 Vision. GenICam standardises camera features and how to access them by software whereas the interface standards define how to shift data between the camera and the host. Since the GenICam layer is the same for all interfaces, it becomes very easy to change cameras or interface – or even run mixed systems. GenICam comes with a reference implementation which is available for many platforms.

The key idea of GenICam is to separate the transport layer (TL) from the SDK and make it agnostic of camera features (see figure). The TL consists of the physical interface and the driver on the host side and is responsible for enumerating cameras, giving access to camera registers, delivering video stream data, and raising asynchronous events. The TL deals only with data transport and is not aware of camera functionality such as gain or exposure.

The camera functionality is the domain of GenICam. Instead of defining a fixed register layout, GenICam standardises the format of a camera description file which lists all camera features and how they are mapped to camera registers.

This scheme leaves a lot of room for competition because the implementation details of the features are up to the camera maker and custom features can be made easily available by just adding them to the camera description file.

The interpretation of the camera description file is done within the SDK which typically uses the GenICam reference implementation as the engine under the hood. The necessary TL driver running the interface protocol can either be part of that SDK or purchased from a third party and accessed via the GenTL standard interface for camera drivers.
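
The description-file idea can be illustrated with a toy node map. This is a hypothetical Python sketch – a real GenICam camera description file is an XML document and the feature names and register addresses below are invented – but the separation it demonstrates is the same: the transport layer only reads and writes registers, while the description table translates feature names into register accesses:

```python
# Hypothetical feature table standing in for a camera description
# file; the names, addresses and lengths are invented.
FEATURES = {
    "ExposureTime": {"address": 0x0100, "length": 4},
    "Gain":         {"address": 0x0104, "length": 4},
}

class Camera:
    """Stand-in for a transport layer: it only reads and writes
    registers and knows nothing about camera features."""
    def __init__(self):
        self.registers = {}

    def write(self, address, value):
        self.registers[address] = value

    def read(self, address):
        return self.registers.get(address, 0)

def set_feature(camera, name, value):
    """SDK-side helper: translate a feature name into a register
    write via the description table, the way a node map does."""
    camera.write(FEATURES[name]["address"], value)

def get_feature(camera, name):
    return camera.read(FEATURES[name]["address"])
```

Swapping in a different camera then only means loading a different description table; application code calling `set_feature` stays unchanged, which is exactly the portability GenICam provides.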

GenICam is hosted by the European Machine Vision Association (EMVA). The standard committee meets twice a year with 25 out of approximately 120 member companies having attended the last meeting in Seoul in April this year. The standard is constantly enhanced and adapted to the industry’s needs, with new use cases and camera features added and improvements made to the reference implementation. Currently the next major release, v3.0, is under preparation. This will once again improve the loading time and memory footprint, making GenICam ready even for deep embedded systems.

 
