Building a twin

Greg Blackman explores the efforts underway to improve connectivity in factories

Germany’s mechanical engineering, electrical and digital industry associations, the VDMA, ZVEI, and Bitkom, along with 20 partner companies, formed the Industrial Digital Twin Association (IDTA) in September. The aim is to advance Industry 4.0 through the development of an open source ‘digital twin’, which will function as an interface between physical industrial products and the digital aspects of Industry 4.0 applications.

The fourth industrial revolution, smart manufacturing and the Industrial Internet of Things are all terms for the push to use digital technologies and connectivity to a much greater extent inside and between factories. The IDTA plans to combine the various parallel development strategies in Industry 4.0 – work in vision, robotics and other domains – into one globally viable open source solution.

Along with the VDMA, ZVEI, and Bitkom, Pepperl+Fuchs, Bosch, SAP, Siemens and Volkswagen are among the founding companies. Dr Matthias Bölke, from Schneider Electric, was elected chairman, and Dr Horst Heinol-Heikkinen, CEO of vision firm Asentics, deputy chairman.

‘The IDTA was founded to work on achieving interoperability between machines on a common platform,’ Heinol-Heikkinen told Imaging and Machine Vision Europe. ‘We’re approaching this from the top down, with the platform itself rather than the individual pieces of equipment. We’re creating interoperable digital twins of a vision system or a robot, and understanding how those two pieces of equipment behave and interact on the common digital platform.’

He said that some larger companies are working independently on improving how machines talk to each other inside their factories, but that, in the majority of cases, Industry 4.0 is in its infancy.

Heinol-Heikkinen is also chairman of the VDMA OPC Vision group, a joint standardisation initiative led by the VDMA and the OPC Foundation that aims to include machine vision in the industrial interoperability standard Open Platform Communications Unified Architecture (OPC UA). The group released part one of the machine vision companion specification to OPC UA last year, and has now developed a hardware demonstrator that includes a practical implementation of the work.

OPC Machine Vision part one describes an abstraction of the generic image processing system, i.e. a representation of a digital twin of the system. It handles the administration of recipes, configurations and results in a standardised way, while the contents remain manufacturer-specific and are treated as a black box. The demonstrator establishes an infrastructure layer to simplify integration of image processing systems into higher-level IT production systems, such as a PLC, Scada, MES, ERP or the cloud. It demonstrates the generalised control of a vision system and abstracts the necessary behaviour via the concept of a state machine.
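
The state machine idea is easiest to see in code. Below is a minimal sketch, in Python, of how such a generalised vision-system abstraction might look; the state names and methods are invented for illustration, not taken from the actual companion specification, which defines its own state machine and treats recipe contents as a vendor-specific black box.

```python
from enum import Enum, auto


class VisionState(Enum):
    """Hypothetical states for a generalised vision system; the real
    OPC Machine Vision specification defines its own state machine."""
    PREOPERATIONAL = auto()
    OPERATIONAL = auto()
    ERROR = auto()


class VisionSystem:
    """Digital-twin abstraction: higher-level systems see only states,
    recipes and results, never the vendor-specific internals."""

    def __init__(self):
        self.state = VisionState.PREOPERATIONAL
        self.recipes = {}      # recipe id -> opaque, vendor-specific blob
        self.last_result = None

    def add_recipe(self, recipe_id, blob):
        # Only the administration of recipes is standardised;
        # the contents stay a black box.
        self.recipes[recipe_id] = blob

    def start(self, recipe_id):
        if self.state is not VisionState.PREOPERATIONAL:
            raise RuntimeError("can only start from PREOPERATIONAL")
        self.state = VisionState.OPERATIONAL
        try:
            self.last_result = self._execute(self.recipes[recipe_id])
        except Exception:
            self.state = VisionState.ERROR
            raise
        self.state = VisionState.PREOPERATIONAL
        return self.last_result

    def _execute(self, blob):
        # Vendor-specific image processing would happen here.
        return {"decision": "PASS"}
```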

The information model specified in OPC Machine Vision part one is designed to reduce the implementation time of vision systems, which Heinol-Heikkinen said is ‘one of the biggest pain points in daily automation’.

The group is now working on part two, which covers how to reach other devices and exchange high-level data that is also of use to other automation equipment – for example, data from a vision system switching between tasks, such as inspecting different products. The switching itself is straightforward; the idea is to transfer data to another device during or after each task, so the receiving device can understand, and perhaps act on, the information. This is the starting point of getting machines to talk to each other, Heinol-Heikkinen said.

Today, the traditional way of taking data from a camera is via a PLC. The aim of OPC UA Machine Vision part two is for vision data to reach other IT levels directly.

‘OPC UA is not only an interface. It is a technology we are using to describe a digital twin,’ said Heinol-Heikkinen. 

‘Everything introduced into the network, including a vision system, is described using a standardised information model – a companion specification – which tells the user the type of data recorded by the vision system and how to provide this information to the rest of the factory, or outside the factory. It’s like a language. Making sure the language is the same, regardless of the content. It’s totally different to a standard interface.’

The information model is designed to be applicable for all vision devices, from small sensors to large vision systems. Asentics vision products, for example, have the OPC UA companion specification built in, to ease connectivity.
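
As a rough illustration of what building the companion specification into a device might look like, the sketch below uses the open-source python-opcua package to publish a vision result under a standardised-looking node structure that any OPC UA client can browse. The endpoint, namespace URI and node names are invented for the example and do not reproduce the actual companion specification.

```python
# Minimal sketch using the open-source python-opcua package
# (pip install opcua). Namespace and node names are invented for
# illustration; the real companion specification defines its own model.
import time
from opcua import Server

server = Server()
server.set_endpoint("opc.tcp://0.0.0.0:4840/vision/")   # hypothetical endpoint
idx = server.register_namespace("http://example.org/vision")  # hypothetical URI

objects = server.get_objects_node()
vision = objects.add_object(idx, "VisionSystem")
state = vision.add_variable(idx, "State", "Preoperational")
result = vision.add_variable(idx, "LastResult", "")
state.set_writable()
result.set_writable()

server.start()
try:
    # Any OPC UA client on the network can now browse the same model,
    # regardless of which vendor built the underlying vision system.
    for _ in range(10):
        result.set_value("PASS")  # stand-in for a real inspection result
        time.sleep(1)
finally:
    server.stop()
```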

As long as machines and devices are represented by an interoperable digital twin behaving within an Industry 4.0 ecosystem, Heinol-Heikkinen said, it doesn’t matter how machines are physically connected, whether reaching another IT system inside the factory or an external partner in the cloud. ‘Once you have the interoperable digital twin information model, you know how to connect to other IT levels,’ he said. ‘This then becomes just a matter of security, how to connect securely with the outside world.

‘We want to connect to different machines and different IT levels, and reach interoperability between machines. How to reach that is not easy. Machine vision, robots and other factory machines have traditionally developed their own ways of connecting in isolation. These different domains are not connected and not talking to each other. Even though they use OPC UA, there’s still work to be done to reach interoperability.’

Working at the edge

‘Moving towards reliable Industry 4.0 on manufacturing sites requires a combination of factors to work in parallel,’ Jonathan Hou, CTO at video interface firm Pleora Technologies, said. He listed three key things: improving the backbone network and the communication infrastructure; more automation by using edge devices; and improving the accuracy of vision and other sensors using AI running on edge devices.

Working at ‘the edge’ involves running any computer processing onboard the device, rather than sending information to a separate PC. Edge devices include embedded camera boards or smart cameras. The idea is that these devices can be positioned throughout the factory to feed in data about the production process. One of the advantages of edge processing for IoT, Hou said, is the ability to add intelligence at the right point in a system, where it can make a decision and pass on results to the next step in the process. In this way, factories can speed up production and lower costs by reducing the need for central processing.
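
A sketch of that pattern is below, with stand-in functions for capture and inference (the names are invented here): the heavy image data stays on the device, and only the small decision travels to the next step.

```python
# Sketch of an edge-processing loop: inference runs on the device and
# only lightweight decisions are passed downstream. capture_frame()
# and run_model() are stand-ins for real camera and inference APIs.
import json
import queue

downstream = queue.Queue()  # stand-in for the next step in the process

def capture_frame():
    return b"\x00" * (1920 * 1080)  # placeholder image data

def run_model(frame):
    return {"defect": False, "confidence": 0.98}  # placeholder inference

def edge_loop(num_frames):
    for _ in range(num_frames):
        frame = capture_frame()               # megabytes stay local
        decision = run_model(frame)           # intelligence at the edge
        downstream.put(json.dumps(decision))  # only a few bytes move on

edge_loop(10)
print(downstream.qsize(), "decisions forwarded, no images sent")
```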

‘In order to connect machines so that edge devices can talk to each other, networks start to become an important part, especially networking video data and interconnecting various protocols,’ Hou said. ‘One major challenge today is the numerous standard protocols across different devices.’

He suggested GigE Vision as one standard that can be used to transmit video data reliably. ‘Manufacturers need to look at how to start bridging networked video like GigE Vision with other network protocols, such as OPC UA and MQTT, that are more commonly used for machine-to-machine communication in the factory to drive full “lights out” automation.’
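
What such a bridge might look like in practice is sketched below with the open-source paho-mqtt client (1.x API); the GigE Vision acquisition is reduced to a stub, and the broker address and topic are invented for the example.

```python
# Sketch of bridging vision output onto a machine-to-machine protocol,
# using the open-source paho-mqtt package (pip install paho-mqtt).
# The GigE Vision side is reduced to a stub; broker and topic are
# invented for illustration.
import json
import paho.mqtt.client as mqtt

def grab_inspection_result():
    # Stand-in for acquiring and analysing a GigE Vision frame.
    return {"station": "cam01", "decision": "PASS"}

client = mqtt.Client()
client.connect("broker.example.org", 1883)  # hypothetical broker
client.loop_start()

result = grab_inspection_result()
# Publish only the lightweight result, not the raw video stream.
client.publish("factory/line1/inspection", json.dumps(result), qos=1)

client.loop_stop()
client.disconnect()
```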

At the moment manufacturing infrastructure is primarily a closed network, he said, with proprietary protocols or gated infrastructure requiring licensed protocols from vendors. These approaches make it difficult to scale a system, or lock the end-user into a specific vendor. Hou is an advocate for Ethernet, which he said ‘is common across manufacturing sites and provides a reliable, secure, and time-sensitive approach to unify processes and equipment across a network’.

In terms of vision, GigE Vision transmits data reliably, with low latency, and in real time using the IEEE 1588 Precision Time Protocol. ‘There’s now an opportunity to extend that networking expertise into other areas of the manufacturing floor as end-users seek ways to unify all systems,’ Hou said.

Pleora is deploying a hybrid approach for its clients in manufacturing through its AI Gateway product. The device allows manufacturers to upgrade PC-based infrastructure to one where a machine learning plug-in can be added to existing edge devices, without disrupting the infrastructure or end-use processes. It means factories can retain existing computer vision algorithms and overlay AI on top. One potential advantage is to reduce false positives during quality control checks, saving products that would otherwise be wasted.
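
One way to picture the overlay is sketched below: the classical algorithm stays in place, and an ML stage only re-checks the parts it flags, overruling likely false positives. All function names are invented stand-ins, not Pleora’s actual AI Gateway API.

```python
# Sketch of overlaying AI on an existing inspection pipeline: the
# classical algorithm is untouched, and a model only re-checks its
# positives to filter false alarms. All functions are stand-ins.

def classical_inspection(frame):
    # Existing, trusted computer vision algorithm (unchanged).
    return {"defect": True, "region": (120, 80, 32, 32)}

def ml_plugin(frame, region):
    # Added ML stage; returns confidence that the flagged defect is real.
    return 0.35  # placeholder score

def inspect(frame, threshold=0.5):
    result = classical_inspection(frame)
    if result["defect"]:
        # Only consult the model when the classical stage flags a part,
        # so good parts flow through exactly as before.
        if ml_plugin(frame, result["region"]) < threshold:
            result["defect"] = False  # overruled: likely a false positive
    return result

print(inspect(frame=b""))  # -> defect overruled, part not scrapped
```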

Overlaying AI on existing inspection processes ‘provides an easier way for large enterprises to adopt AI to improve existing systems and re-use existing infrastructure, including cameras and PCs that are already in place,’ Hou said.

Hou emphasised edge devices because the cloud is not suitable for high-speed, real-time vision processing, where latency is critical: it is not feasible to send video data to the cloud and get a decision back without some kind of lag, he said.

Where the cloud can play a role in Industry 4.0 is in standardising processes across different manufacturing sites, to bring consistency to the production environment. A multinational will often have different equipment or different inspection tolerances depending on the factory, which leads to variability in product quality.

‘A manufacturer might use the cloud to store profiles and train AI models that would be consistent, training neural networks on a global set of data from all factories,’ Hou explained. ‘The model can then be deployed consistently across different sites. Building an AI infrastructure and trying to improve that AI over time requires a lot of data, so training the model in the cloud is a good way to do it.’
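
In outline, that workflow might look like the sketch below: pool labelled data from every site, train one model in the cloud, then ship the same model to each factory. Everything here is a stand-in; no particular training framework or cloud service is implied.

```python
# Sketch of the train-in-the-cloud, deploy-to-the-edge pattern the
# article describes. All functions and data are placeholders.

def collect_site_data(sites):
    # Pool labelled inspection samples from every factory.
    return [sample for site in sites for sample in site["samples"]]

def train_global_model(dataset):
    # Train one neural network on the global dataset (placeholder).
    return {"weights": "trained-on-%d-samples" % len(dataset)}

def deploy(model, sites):
    # Push the same model to every site for consistent inspection.
    for site in sites:
        site["model"] = model

sites = [
    {"name": "plant-a", "samples": [1, 2, 3]},
    {"name": "plant-b", "samples": [4, 5]},
]
model = train_global_model(collect_site_data(sites))
deploy(model, sites)
print([s["model"]["weights"] for s in sites])  # same model everywhere
```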

The cloud can also be used to manage edge device settings, as well as for analysing the output from edge devices. ‘It’s okay to be catching defects, but ultimately the manufacturer wants to understand root causes and why it is getting these defects,’ Hou said.

Future factories

When is the work by the VDMA OPC Vision group, and now the IDTA, going to come to fruition? Heinol-Heikkinen said: ‘It’s hard to say how long this will take to implement in the real world, but the pace of work is double, or even triple, what it was when we started five years ago, because the community is much more powerful.’

Work began on the OPC UA companion specification for machine vision five years ago in a cluster with 10 vision companies. Now, the founding members of the IDTA include companies like Bosch, Siemens, Volkswagen and Kuka. ‘It has gained momentum, with more companies working on Industry 4.0,’ Heinol-Heikkinen concluded.
