
Connecting the dots

Getting a networked vision system to work should simply be a case of connecting all the devices via a piece of cable and plugging that into a computer, right? Wrong.

The world of setting up a networked vision system is full of quirks and misconceptions that can be difficult to work around, including colliding packets of information, latency, jitter and getting everything to work in an efficient manner.

The term ‘networked vision system’ itself is pretty vague and could conjure up a range of ideas on what this actually is: from a simple system consisting of a sensor, camera and image processor to a more complex system comprising many sensors, cameras and other components, linked to a variety of processors. And as a system becomes increasingly complex, the range and number of problems to get the thing to work also increase. For now, let’s consider a system composed of many cameras, each linking back to one processor.

A user has to bear in mind many different factors when picking a networked vision system, as Michael Gibbons, product marketing manager at Point Grey Research, explains: ‘There are several things to consider when looking for a multi-camera system, including total throughput, quality of service, latency, and overall system complexity and cost.’

Most systems are linked together using Gigabit Ethernet technologies to transmit the Ethernet frames. Gibbons adds: ‘Gigabit Ethernet’s maximum data rate is 125MB/s. However, GigE lacks some basic quality of service (QoS) mechanisms, which requires vision system designers to understand and manage low-level GigE Vision and Ethernet parameters such as packet size and burst rate. Failure to correctly set these parameters can result in lost data due to packet collisions.’
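Gibbons’ 125MB/s figure sets a hard budget for the whole link, and dividing it by each camera’s data rate gives a first estimate of how many cameras a single GigE connection can carry. A back-of-envelope Python sketch (the camera resolution and frame rate are hypothetical, and real-world throughput is lower once Ethernet, IP and UDP framing overhead is subtracted):

```python
# Rough bandwidth budget for cameras sharing one GigE link.
# GigE's theoretical maximum payload rate is about 125 MB/s;
# protocol overhead reduces what is actually usable.

GIGE_MAX_BYTES_PER_S = 125_000_000  # 1 Gbit/s expressed in bytes/s

def camera_bandwidth(width, height, bytes_per_pixel, fps):
    """Raw image payload a single camera generates, in bytes per second."""
    return width * height * bytes_per_pixel * fps

# Example: VGA 8-bit mono cameras at 30 fps (hypothetical system)
per_camera = camera_bandwidth(640, 480, 1, 30)
max_cameras = GIGE_MAX_BYTES_PER_S // per_camera

print(f"Each camera: {per_camera / 1e6:.1f} MB/s")
print(f"Cameras that fit on one GigE link: {max_cameras}")
```

Doubling the resolution or frame rate halves the camera count, which is why total throughput tops Gibbons’ list of considerations.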

GigE Vision is an interface standard for high-performance industrial cameras, developed by a group of about 50 companies. The standard aims to unify the protocols used by machine vision industrial cameras and to let third-party organisations develop compatible software and hardware. Burst rate is the highest speed at which data can be transferred from the camera to the connecting computer.

A packet collision can occur when two or more cameras try to transmit a packet of information across the network at the same time. When a packet collision occurs, the packets are either discarded or sent back to their originating cameras and then retransmitted in a timed sequence to avoid further collision.
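The retransmit-in-a-timed-sequence behaviour described above is, on classic shared-medium Ethernet, the truncated binary exponential backoff algorithm (modern switched, full-duplex GigE largely avoids true CSMA/CD collisions; congestion instead shows up as packets dropped in switch buffers). A minimal, purely illustrative sketch:

```python
import random

def backoff_slots(collision_count, rng=None):
    """Truncated binary exponential backoff, as in classic half-duplex
    Ethernet: after the n-th collision a station waits a random number
    of slot times drawn from [0, 2**min(n, 10) - 1], and gives up
    after 16 attempts."""
    if collision_count > 16:
        raise RuntimeError("excessive collisions - frame dropped")
    rng = rng or random
    return rng.randrange(2 ** min(collision_count, 10))

# The waiting window doubles with each successive collision:
for n in (1, 2, 3, 4):
    print(f"after collision {n}: wait 0..{2 ** min(n, 10) - 1} slots")
```

The doubling window spreads competing senders apart in time, at the cost of growing, unpredictable delay for the colliding packets.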

Packet collisions seem to be quite a problem when setting up a multi-camera system. Dwayne Crawford, product manager at Matrox Imaging, says: ‘Multi-camera systems typically introduce collision domains in the transmission of the data from camera to host, which may impact the performance of the system.’

The packet problem

Packet collisions aside, there are a whole host of other problems when trying to move large and multiple packets of data around a system.

Large packets of data increase the probability of packet corruption on the network. As packet size grows, the time taken to transmit the packet over the network increases. Increasing the time the packet spends whizzing around the wire results in an increase in system latency and exposure to external noise, ultimately leading to data corruption and loss.

To make matters worse, increased wire time also introduces further competition for network bandwidth when multiple devices share the network. This competition results in longer hold-off periods to access the media and increased jitter (an unwanted variation of one or more characteristics in the signal). Since larger packets increase transmission latency, packets that contain time-sensitive information such as a software trigger to a camera for firing a strobe, initiating exposure, or activating an ejector become less deterministic. When critical events such as triggers are lost, a camera or ejector may operate at the wrong time and give false positive or false negative results.
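The link between packet size and wire time is simple arithmetic: at 1Gbit/s each byte takes 8ns to serialise, so a bigger packet occupies the wire for longer and makes competing, time-sensitive packets wait. A quick illustration:

```python
# Serialisation ("wire") time of a single packet on a 1 Gbit/s link.
# Larger packets hold the medium longer, increasing hold-off time
# and jitter for any time-sensitive traffic sharing the link.

LINK_BITS_PER_S = 1_000_000_000  # Gigabit Ethernet

def wire_time_us(packet_bytes):
    """Time to clock one packet onto the wire, in microseconds."""
    return packet_bytes * 8 / LINK_BITS_PER_S * 1e6

for size in (64, 1500, 9000):  # minimum frame, standard MTU, jumbo frame
    print(f"{size:>5} bytes -> {wire_time_us(size):.2f} us on the wire")
```

A 9,000-byte jumbo frame occupies the wire for roughly 72µs, so a trigger packet queued behind just a handful of them can already be delayed by a significant fraction of a millisecond.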

But there are solutions out there. The Matrox Solios GigE NIC supports variable packet sizes that let users optimise the application latency. In addition, the Matrox Solios GigE NIC provides on-board dedicated inputs and outputs (I/O) that are independent of the Ethernet network to ensure critical software trigger events are reliably communicated and protected from the latencies and jitter of packet-based protocols.

End of the line

The processor at the end of the network trying to understand all the information it receives from a string of cameras can also face issues, as Gibbons adds: ‘Another challenge that designers face is managing the increased CPU load that occurs as more cameras are added; in some cases there can be an almost linear increase. While CPU load can be minimised by a GigE Vision frame grabber that pre-processes the packet data and enables DMA (Direct Memory Access) into the system, these types of solution increase cost and integration complexity.’

This is a problem also noted by Crawford: ‘In addition, when the amount of data reaching the host starts to increase in magnitude, the “protocol loading” issue starts to become more apparent.’

This so-called ‘protocol loading’ problem results from trying to manage Ethernet packets on the processor, or CPU. The packet-based nature of Ethernet generates excessive processor interrupts, which adversely affects the processing tasks, causing an increase in processing latency and jitter.

The GigE Vision protocol generates an interrupt on the processor each time a packet is received, but large images generate hundreds or even thousands of interrupts per second and cause significant performance losses due to task switching (where an operating environment switches from one program to another without losing its spot in the first program).
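The interrupt count is easy to estimate: one image is split into as many packets as its payload requires, and each packet can raise an interrupt. A sketch with hypothetical figures (a real driver may coalesce interrupts, which is exactly the kind of mitigation frame grabbers and tuned NICs provide):

```python
import math

# Rough count of GigE Vision stream packets (potentially one interrupt
# each, without interrupt coalescing) generated by a single camera.
IMAGE_BYTES = 1280 * 1024     # 1.3 MP, 8-bit mono (hypothetical camera)
FPS = 30
PAYLOAD_PER_PACKET = 1400     # roughly the image payload of a 1500-byte MTU

packets_per_image = math.ceil(IMAGE_BYTES / PAYLOAD_PER_PACKET)
interrupts_per_second = packets_per_image * FPS

print(f"{packets_per_image} packets per image")
print(f"~{interrupts_per_second} interrupts per second per camera")
```

Tens of thousands of interrupts per second from a single modest camera, before any image processing has been done, is why the per-packet cost dominates protocol loading.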

Switches and software

Multiple cameras may be operated through a single network adaptor using a Gigabit Ethernet switch, or may be operated on multiple network cards. Mark Williamson, sales and marketing director for Firstsight Vision, says: ‘We need to consider the topology of the system to ensure the switches, software and so on can support the topology and there are no bandwidth bottlenecks. GigE Vision switches can vary from a few hundred Euros to a few thousand, and you get what you pay for. The more complex the configuration, the more care is needed.’

But there are solutions out there for users that need complex camera systems, as Williamson adds: ‘For example if we have a network with many cameras on the same network the standard GigE Vision device discovery mechanism can become extremely slow and unusable. The software implementation needs to have a methodology to deal with this. Our Common Vision Blox software has some advanced network discovery features added as we have recently implemented a system with more than 240 cameras on one network. So complex systems can be done.’

No limit?

So are there limits to the numbers of cameras a user can put on a network? This seems to be a question for the software engineers, as Williamson explains: ‘In theory, no. But most GigE Vision software developers have not considered or tested a very large number of cameras. At Firstsight Vision we have successfully connected more than 240 cameras, but we did need to ensure our software implementation could deal with this number.’

Matrox’s Crawford believes there is a limit and adds: ‘It is very important to understand the network topology used for the camera site.’

When setting up a multi-camera system the user must also make sure that the information gained from the cameras is processed effectively, and this can be done in a number of ways. ‘The data still needs to be processed by the PC,’ says Crawford, ‘as it has been with protocols such as IEEE 1394, Camera Link and even analogue with tools such as the MIL library. Performance on the other hand will depend on the availability of the PC’s resources, which is where GigE differs significantly from protocols such as Camera Link where the image is passed “reconstructed” from the frame grabber to the host memory. GigE Vision differs in that the image is passed as “packets” to the host, which then has to spend resources on re-assembly of the image. These are resources not being used to process the images effectively.’

One way to up the efficiency is by subdividing the network into subnets to create a break in a single network address range. These subnets can be assigned to different tasks and are defined by their IP addresses. They can break, for example, one massive network into many smaller and more useful independent networks, which minimises traffic and improves throughput and efficiency on each subnet.
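Carving one large address range into task-specific subnets can be sketched with the Python standard library’s ipaddress module (the addresses and task names here are illustrative, not a recommended plan):

```python
import ipaddress

# One large factory-wide range, subdivided into four /24 subnets
# that can each be assigned to a different task.
plant = ipaddress.ip_network("192.168.0.0/22")
subnets = list(plant.subnets(new_prefix=24))

tasks = ("line-1 cameras", "line-2 cameras", "processing hosts", "spare")
for net, task in zip(subnets, tasks):
    # num_addresses includes the network and broadcast addresses
    print(f"{net}  ->  {task}  ({net.num_addresses - 2} usable hosts)")
```

Traffic within one subnet never competes with traffic in another, which is where the throughput and efficiency gains come from.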

Further to this, Williamson says: ‘GigE Vision provides the capability to support a number of checks to ensure images are delivered successfully to the computer. Features such as image time stamp, image check sums and packet resends ensure the receiving computers get good images or know when an image is corrupt. Not all cameras and GigE Vision software implementations provide these hooks, but well-designed products do.’

Williamson adds: ‘The image processing application on the host requires GigE Vision compliant acquisition software. Most vision software products have this built in so it’s just a case of plugging the camera in. If you are developing an application from scratch, most of the camera manufacturers provide a free GigE Vision API. However these are usually limited to supporting just that manufacturer’s cameras.’

‘You can alternatively use an independent GigE Vision API, such as is available with Common Vision Blox, where you can support any camera manufacturer. We provide this free if the customer buys the camera from us, or at a small charge if it’s a third-party camera,’ Williamson adds.


There is a perception, with the arrival of GigE and the layman’s interpretation of that term, that to set up a networked vision system the user just needs to plug a camera into an existing computer network and can then have an unlimited number of cameras feeding back into a server.

But this is not strictly correct, as Crawford says: ‘There are, in fact, very real limits to the number of cameras that can be supported on a system. There are limits for addressing and limits within the subnets. The system’s bandwidth is always an issue – the bandwidth will limit the frame rate and/or image size, especially in machine vision where frames are grabbed continuously. Adding GigE Vision in a non-dedicated, non-point-to-point network may also introduce many other issues such as latency, jitter and lost or corrupted packets.’

Point Grey’s Gibbons agrees that things are not as clear cut as they first appear, and adds: ‘The vision system designer needs to be aware of some of the challenges that crop up as more GigE Vision cameras are added, including increased CPU load, quality of service and latency issues, and a need for the designer to understand and manage low-level GigE Vision and Ethernet parameters such as packet size and burst rate.’

And there are many reasons why things are not as clear cut as you might assume. ‘Not all GigE switches are capable of supporting all the possible demands of a GigE Vision solution,’ says Williamson. ‘For efficiency, GigE Vision uses jumbo packets and some switches do not support this. If you want to send the data from the camera to multiple computers (a term called multi-cast), even more switches cannot deal with the performance.’

Jumbo packets, or jumbo frames as they are sometimes known, extend packet sizes to around 9,000 bytes. Williamson continues: ‘At Firstsight we have validated switches for different uses. In addition we have invested in network test equipment so we can validate a network to be capable of running GigE Vision in the required configuration.’
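The efficiency gain from jumbo frames is straightforward to quantify: fewer, larger packets per image mean less per-packet protocol overhead and fewer interrupts on the host. A sketch with hypothetical figures:

```python
import math

IMAGE_BYTES = 1280 * 1024   # hypothetical 1.3 MP, 8-bit mono image

def packets_per_image(payload_bytes):
    """Packets needed to carry one image at a given per-packet payload."""
    return math.ceil(IMAGE_BYTES / payload_bytes)

standard = packets_per_image(1400)   # ~standard 1500-byte MTU payload
jumbo = packets_per_image(8900)      # ~9,000-byte jumbo-frame payload

print(f"standard MTU: {standard} packets per image")
print(f"jumbo frames: {jumbo} packets per image "
      f"({standard / jumbo:.1f}x fewer)")
```

Roughly a six-fold reduction in packet count, which is why a switch that silently drops jumbo frames can cripple an otherwise well-designed system.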

Multi-casting is an important problem to address too, where multiple computers are hooked up to the network. Williamson adds: ‘Some GigE Cameras do not support multi-cast and the majority of GigE Vision software implementations do not support this either. Multi-cast means multiple computers can get the image feed from a camera, i.e. one to many. This is a standard network concept that may not be supported in the camera and software.’

Cabling considerations

But what about the cables linking the vision system together? Do they differ from a standard PC network? It seems they do, as Crawford explains: ‘Higher performance systems on the factory floor have a number of different requirements, from specialised cabling such as shielded and/or high-flex cable.’

And a user must make sure they are keeping up with the minimum equipment standards too, as Crawford adds: ‘Cat 5e cables are fine for standard Gigabit Ethernet connections, but moving to 10GigE will require alternative technologies. In higher performance systems, protocol loading becomes an issue.’

Williamson adds: ‘Cat 5e is the minimum for GigE Vision. Cat 6 is preferred as it has better screening for use in industry. Lower grade cable means shorter cable lengths and more network errors so people should not skimp on this.’

Ethernet is also bringing advantages to multi-camera systems’ cabling too, as Williamson says: ‘One of the key advantages of using Ethernet is the long cable lengths between the cameras and the computer. This enables the computers to be kept in an air-conditioned server room rather than the dusty factory floor environment.’

But separating the cameras and computers throws up problems too, as Williamson says: ‘The issue in making the PCs remote to the inspection point is the real-time sequencing and triggering of the inspection task. To aid this, Ethernet-based timing controllers and lighting controllers have been developed, such as the CC320 from Gardasoft, which enables real-time triggering and sequencing of reject gates based on encoders on the production line. Previously this was all managed by I/O on the processing PC, but with long cable distances this has become less viable.’

‘To aid this further, current discussions in the GigE Vision standards committee are looking at supporting this type of functionality in future versions of the GigE Vision standard, as well as network-synchronised camera triggering,’ Williamson adds.

So with all these pitfalls and problems when setting up and operating a networked vision system, should users call in the experts rather than doing a DIY job? It all depends on how complex a system you want, as Williamson explains: ‘For simple point to point implementations the GigE Vision software developers have tried to make the system as simple as possible so users do not need networking experience.’

Crawford agrees: ‘Being an Ethernet-based technology in all but a simple point-to-point connection, a network specialist is typically required to correctly address the devices on the network to ensure proper isolation of data and resolvable addressing.’
