Jochem Herrmann, EMVA president and chief scientist at Adimec, on standardising for embedded vision, which could potentially infiltrate manufacturing in a big way
There is no question that the common global standardisation activities of the last decade have been a major contributor to customer acceptance of machine vision technology. With embedded vision on the verge of being applied in new applications, and possibly also replacing vision systems in existing ones, this new era of machine vision technology presents different challenges for future standardisation work, a topic currently being evaluated thoroughly within the Future Standards Forum (FSF) of the global G3 vision group.
Concentrating on only a limited number of machine vision interface standards over the last 10 years has not only prevented duplicate development of standards, but has also brought faster acceptance of vision technology. It has meant that the standards used today cover all present needs in the industry while focusing on specific strengths, from high speed to long cable length or the ability to use several cameras. In addition, the price and performance of machine vision has constantly improved, together with the portability of application software. What the industry has achieved in terms of standardisation is therefore a success story; and it has to be pointed out that the G3, the FSF and the global vision associations only built the framework for the hard work done by employees of vision companies within the various standards working groups. Yet, although low cost and small size have always been important factors in machine vision development, the next era of embedded vision systems will have very different expectations in that respect.
Business case for embedded vision
This next embedded era has already begun and will accelerate with the further implementation of Industry 4.0. Factories of the future will have small, flexible and networked production cells, which will require a reduction in the cost and size of vision equipment. Integration plays an important role, and that can only be achieved with vision systems that are no longer standalone entities but are integrated into the system architecture of the factory. These are the needs embedded vision must meet.
In addition, there is a lot of innovation in embedded vision at the moment driven by consumer electronics – the integrated or, in other words, embedded camera functions in smartphones make that quite obvious. Vision in consumer devices combines high speed, low power, small size and low cost. This will have a strong impact on machine vision solutions: PCs will increasingly be replaced by processing platforms with embedded vision processors. These will be highly integrated, small, cost-effective and powerful solutions that have the most relevant I/O onboard. Last but not least, the interface between the camera module and the embedded processor will be as simple as possible, and this is where standardisation comes in.
Machine vision is a low-volume market where standards are crucial for acceptance of the technology. In the future, embedded processing will lead the machine vision sector into larger volumes by enabling smaller and easier-to-use systems. In the factories of the future, where it is all about networks of small, easy-to-program production cells, size and cost are much more important than in classical machine vision systems. So the question will be: how much cost will standards add to the bill of materials of next-generation embedded vision systems? To date, there is no definite answer.
Evaluating possible standards
Within the FSF there are several options for suitable standards for embedded systems under evaluation, and part of this evaluation process is that the group looks at what exactly should be standardised and what not. This is why the EMVA has started a working group and invited all G3 members to specify the needs and layout of a future interface between the camera module and the embedded processor module for video data, control and power supply.
The group’s initial thoughts are in the direction of creating what is currently called an industrial camera serial interface (CSI), which might use a MIPI standard – but also future standards better adapted to the needs of the machine vision industry. The MIPI Alliance's standards are aimed at consumer electronics, mainly mobile and mobile-influenced products, with specific use cases in mind whose needs are not congruent with those of the machine vision industry. Firstly, MIPI is fast, but not at the high speeds industry needs. Secondly, cable length is limited to centimetres, which is often not usable in an industrial environment, and the standard specifies nothing about connectors or power. And thirdly, the MIPI standard is not particularly suited to connecting to FPGAs, because in its world FPGAs don't exist.
The need for higher speed and, at the same time, limiting the number of I/O pins on CMOS image sensors will make image sensor suppliers move to fast serial interfaces on their products. Sony is at the forefront here, with the introduction of its proprietary interface standard SLVS-EC on the latest generation of fast CMOS sensors. The good thing about that standard is that it connects quite well to modern FPGAs that have fast serial I/O. It can potentially run much faster than MIPI and can operate over a few metres of cable, which makes the standard a good fit for the requirements of embedded vision systems. The downside is that SLVS-EC is still a Sony proprietary standard, which is why EMVA and the Japan Industrial Imaging Association are in discussion with Sony to evaluate if it can become an open industry standard.
Other standards focusing on embedded
Regarding other standards, an important initiative under the FSF is OPC UA, which is led by the VDMA. The goal is to standardise the hardware and software connection between a machine vision system and its process environment. The standard thus operates at a higher level, answering the question of how to standardise the connection between a machine vision system and the factory floor.
And then there is GenICam. To date, the beauty of GenICam has been that one can use different cameras and various interface standards, such as GigE Vision, USB3 Vision or CoaXPress, while the engineers who write software still have a unified way of controlling the camera and acquiring images. Embedded systems will change that, because the architecture GenICam 3.0 is based on assumes there is some intelligence in the camera. In future, the simplest camera modules will be no more than a sensor plus some kind of simple interface; the processing that used to take place in the camera will move to the embedded processing module. A future version 4.0 of GenICam aims to support both classic machine vision systems and solutions based on embedded systems, making it easy to migrate to embedded systems.
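The unified-control idea behind GenICam can be sketched in miniature. This is not the real GenApi API – every class, feature name and register address below is hypothetical – but it illustrates how one description-driven facade lets identical application code drive cameras on different transports:

```python
# Illustrative sketch only (hypothetical names, not the GenApi API):
# application code addresses features by name, regardless of transport.

class MockTransport:
    """Stands in for a GigE Vision, USB3 Vision or CoaXPress transport layer."""
    def __init__(self, name):
        self.name = name
        self.registers = {}

    def write(self, address, value):
        self.registers[address] = value

    def read(self, address):
        return self.registers.get(address, 0)


class Camera:
    """GenICam-style facade: a per-camera description maps each
    feature name to a transport-specific register address."""
    def __init__(self, transport, feature_map):
        self.transport = transport
        self.feature_map = feature_map  # in real GenICam, derived from the camera's XML file

    def set_feature(self, name, value):
        self.transport.write(self.feature_map[name], value)

    def get_feature(self, name):
        return self.transport.read(self.feature_map[name])


def configure(cam):
    # Application code is identical for every transport.
    cam.set_feature("ExposureTime", 5000)
    cam.set_feature("Gain", 2)


gige_cam = Camera(MockTransport("GigE Vision"),
                  {"ExposureTime": 0x1000, "Gain": 0x1004})
cxp_cam = Camera(MockTransport("CoaXPress"),
                 {"ExposureTime": 0x2000, "Gain": 0x2010})
for cam in (gige_cam, cxp_cam):
    configure(cam)
    print(cam.transport.name, cam.get_feature("ExposureTime"))
```

In actual GenICam, the mapping is built from an XML description delivered with the camera, which is what lets software remain portable across vendors and interfaces.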
The above points are certainly not a definitive list of all topics that need to be addressed in order to prepare for the embedded vision era. More aspects will emerge during future meetings and discussions. One of these topics, for instance, should be lens mount interfaces: when everything shrinks and goes embedded, are C-mount and CS-mount lenses still the right way to go? These standards have been around for a very long time and will most likely limit a system's ability to become smaller. The Embedded Vision Europe conference in Stuttgart, organised jointly by the Vision trade fair and the EMVA, certainly marks an important milestone and an arena to discuss open questions and work on roadmaps for the various standardisation initiatives.
The machine vision industry is getting ready for the next level of acceptance of machine vision technology. The FSF and events like the Embedded Vision Europe conference prove the whole industry is working to achieve that. The machine vision sector has to be aware that things around us are speeding up rather quickly.
About the author
Jochem Herrmann was elected EMVA president in 2015. He has been involved in machine vision standardisation for many years and is co-chair of the Future Standards Forum, which is hosted by the G3. He is co-founder and chief scientist of the Dutch camera manufacturer Adimec.
By Dr Reinhard Heister, responsible for standards development at VDMA Robotics and Automation
The VDMA Machine Vision group is currently involved in two standardisation projects: OPC UA for machine vision and the VDI/VDE/VDMA 2632 series of standards. Both activities have been introduced to the G3 international vision standardisation community.
OPC UA for machine vision
The Open Platform Communications Unified Architecture (OPC UA) is a vendor- and platform-independent machine-to-machine communication technology. It is officially recommended by the German Industry 4.0 initiative and other international consortia, such as the Industrial Internet Consortium (IIC), for implementing Industry 4.0.
The specification of OPC UA can be divided into two areas: the basis specification and companion specifications. The basis specification describes how data can be transferred in an OPC UA manner, while the companion specifications describe what information and data are transferred. The OPC Foundation is responsible for the development of the basis specification. Sector-specific companion specifications are developed within working groups, usually organised by trade associations such as the VDMA, one of the key players in the Industry 4.0 initiative.
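The division of labour between the two areas can be pictured with a toy model in plain Python – these are hypothetical classes, not the real OPC UA node types: a generic node structure plays the role of the basis specification (how anything is exposed), while a domain-specific builder plays the role of a companion specification (what a machine vision system must expose).

```python
# Illustrative sketch (hypothetical names): basis spec = generic address
# space of browsable nodes; companion spec = the domain vocabulary built
# on top of it.

class Node:
    """Generic OPC UA-style node: the basis specification's concern."""
    def __init__(self, browse_name, value=None):
        self.browse_name = browse_name
        self.value = value
        self.children = {}

    def add(self, child):
        self.children[child.browse_name] = child
        return child


def build_vision_model(root):
    """A companion specification pins down which nodes exist and
    what they mean for machine vision."""
    system = root.add(Node("VisionSystem"))
    system.add(Node("State", "Idle"))
    system.add(Node("ActiveRecipe", "recipe-42"))
    return system


root = Node("Objects")
vision = build_vision_model(root)
# Any generic client can browse the tree; the companion spec supplies meaning.
print(vision.children["State"].value)  # Idle
```

The point of the split is exactly this separation: generic OPC UA tooling can read any server, and the companion specification guarantees that a "VisionSystem" looks the same regardless of vendor.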
At Automatica 2016 in Munich, a memorandum of understanding was signed between VDMA Machine Vision and the OPC Foundation with the intention to develop an OPC UA companion specification for machine vision. The VDMA Machine Vision and OPC Foundation joint working group, VDMA OPC Vision Initiative, officially met for the first time in spring 2017.
So far, five working group meetings have taken place; the group is very active and running well. After clarifying the underlying definition of a machine vision system, the scope and use cases were defined.
The OPC UA companion specification aims at a straightforward integration of machine vision systems into production control and IT systems. The scope is not only to complement or substitute existing interfaces between a machine vision system and its process environment with OPC UA, but also to create hitherto non-existent horizontal and vertical integration capabilities for communicating relevant data to other authorised process participants, for example right up to the enterprise IT level. The idea is that the OPC UA vision interface will exchange information between a machine vision system and another machine vision system, a machine PLC, a line PLC, or any software system at the control device level accessing the machine vision system.
Because of the interface's complexity, the working group adopted two coexisting approaches to face the challenge. The first is to partition the overall companion specification into several parts; the second is to model the information in a machine-vision-skill-oriented manner. Part one therefore represents the lowest common denominator of information and functions that every type of machine vision system needs for integration into a production environment; for example, functions for triggering or managing recipes are included. Subsequent parts will extend the basic machine vision description and cover the use cases. The VDMA OPC Vision Initiative intends to publish the first release candidate at the Automatica 2018 trade show. There is still a long way to go, but given the dedication of the working group members, it is a realistic goal.
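As a rough sketch of that partitioning, part one's lowest common denominator could be thought of as a minimal interface that every system implements, covering the triggering and recipe management mentioned above. All class and method names here are hypothetical illustrations, not taken from the draft specification:

```python
# Illustrative sketch (hypothetical names, not the draft companion spec):
# a "part one" contract that every machine vision system implements.

from abc import ABC, abstractmethod


class VisionSystemPart1(ABC):
    """Lowest common denominator: functions every system needs
    for integration into a production environment."""

    @abstractmethod
    def trigger(self):
        """Start one acquisition/inspection cycle."""

    @abstractmethod
    def load_recipe(self, recipe_id):
        """Activate a stored inspection recipe."""

    @abstractmethod
    def active_recipe(self):
        """Report which recipe is currently active."""


class SmartCameraCell(VisionSystemPart1):
    """One concrete system; an embedded cell would implement the same
    part-one interface, so PLCs and IT systems can treat both uniformly."""
    def __init__(self):
        self._recipe = None
        self.triggers = 0

    def trigger(self):
        self.triggers += 1

    def load_recipe(self, recipe_id):
        self._recipe = recipe_id

    def active_recipe(self):
        return self._recipe


cell = SmartCameraCell()
cell.load_recipe("bottle-inspection")
cell.trigger()
print(cell.active_recipe(), cell.triggers)  # bottle-inspection 1
```

Subsequent parts would then extend such a base contract with skill-specific capabilities, which mirrors how the working group plans to layer the specification.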
VDI/VDE/VDMA 2632 series
The VDI/VDE/VDMA 2632 series of standards structures the communication between supplier and user. The working group is hosted by the VDI organisation and is promoted and supported by the VDMA. The standards help to avoid misunderstandings and to handle projects efficiently and successfully. Part one describes basics, terms and definitions; part two is a guideline for preparing a requirement specification and system specification; part three specifies acceptance testing of classifying machine vision systems. Parts one and two have already been released. Part two is already a G3 vision standard and is available in German, English and Chinese. The draft version of part three is under review, taking in various comments from the international vision community. A release version will be available soon, in German and English.
Input and active participation from the international machine vision community in both standardisation projects is very welcome and needed. Developing standards requires the expertise and acceptance of many stakeholders. Those interested in learning more or becoming involved should contact VDMA Machine Vision, www.vdma.com/vision.
Dr Reinhard Heister joined the VDMA Machine Vision team in September 2016