
Code trackers



Rob Coppinger finds that vision systems are now required to track smaller and smaller markings that whizz past on ever-faster conveyors

As with any application of vision technology, resolution is everything. That applies just as much to code reading for track and trace purposes as to any other use, even though a simple barcode or QR code may appear much less complex than the target objects in other markets.

The trend towards traceability has provided a rich vein of applications for the vision industry, from individual parts to pharmaceutical bottles to boxes containing finished product. In each case, the item being tracked carries an individual, unique identifying mark. That mark can be a 1D barcode, as found on any supermarket product’s packaging; a 2D square code, also known as a Data Matrix Code (DMC), of the kind favoured for mobile phone-related promotions; or a human-readable marking, which, as the name suggests, people can read directly. While most markings can be printed directly onto the item to be tracked, there is also a marking technique called dot peening, which makes small indentations in the surface of a metal part; the DMC is formed from these indentations.

The challenge for tracking is that, over time, as parts and products travel through the production system or wider supply chain, the markings can become degraded through damage or dirt. If a marking becomes illegible and is misread, traceability is undermined. While lasers have been used to read 1D barcodes, the advance of vision-based readers has made that active solution less competitive, because laser scanners struggle with damaged and degraded markings in the industrial environment. Lasers generally require markings in good condition, with a contrast of around 80 per cent between foreground and background, whereas vision-based image interpretation software can cope with a much wider range of conditions. Vision systems are also more rugged: as solid-state technology, they last longer than the more complicated laser scanners with their many components.
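
The 80 per cent contrast figure can be made concrete with a simple measurement. A minimal sketch, assuming NumPy and a crude fixed-threshold split of foreground and background pixels (both are illustrative choices, not methods from the article):

```python
import numpy as np

def symbol_contrast(image, threshold=128):
    """Estimate print contrast of a code image as the normalised
    difference between mean background and mean foreground intensity."""
    img = np.asarray(image, dtype=float)
    dark = img[img < threshold]    # candidate bars/modules
    light = img[img >= threshold]  # candidate background
    if dark.size == 0 or light.size == 0:
        return 0.0
    return (light.mean() - dark.mean()) / 255.0

# A crisp label: near-black bars on a near-white background
crisp = np.array([[250, 10, 250, 10], [250, 10, 250, 10]])
# A degraded label: faded ink on a dirty background
faded = np.array([[180, 90, 180, 90], [180, 90, 180, 90]])

print(round(symbol_contrast(crisp), 2))  # 0.94 -> comfortably laser-readable
print(round(symbol_contrast(faded), 2))  # 0.35 -> vision-software territory
```

A real reader would segment foreground and background far more carefully, but the principle is the same: below roughly 0.8 the laser scanner gives up and the image-interpretation software earns its keep.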

And with vision systems now offering 10 megapixels (and more than 15 megapixels expected within a few years), the Gigabit Ethernet interface camera, with its CCD or CMOS sensor, has few rivals. Cognex UK and Ireland district sales manager Leigh Jordan expects CCD and CMOS to remain the leading sensor technologies for some time to come.

Microscan’s machine vision product manager Jonathan Ludlow agrees. He also expects the distinction between CCD and CMOS to diminish in future. ‘In a few years CMOS could equal CCD in quality,’ he says. CCD has been the more expensive of the two, but has provided better-quality images; CMOS, however, is improving, and Ludlow expects both the quality and the cost differences to fade.

Usually, if a marking is degraded, or if a product is not orientated correctly within the reader’s field of view, no code will be detected. To cope with industrial environments, where many marked products are conveyed but not controlled tightly enough to be orientated correctly, multiple cameras are one solution. But, as with any technology, greater capability carries a cost, and more capable vision systems are more expensive.

German company Seidenader Vision, which manufactures optical inspection systems for the pharmaceutical industry, has produced its SV360 vision module, which reads 2D codes by capturing 360° images of products such as bottles. Its six cameras, with integrated LED flashes, are positioned at angular intervals of 60°. When a product passes the inspection module, six individual photos are taken automatically, and the image data is sent to a PC for analysis to identify and read the product’s DMC. In one deployment, whose contract value was not disclosed, an SV360 module was used at a pharmaceutical company to read markings on bottles; its PC was supplied by embedded computer solutions specialist Kontron. The SV360 can handle a throughput of 400 containers, including bottles, per minute.
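
The quoted throughput of 400 containers per minute implies a tight time budget for the six-camera capture and PC analysis. A back-of-envelope sketch; the serial-processing split at the end is an illustrative assumption, not a vendor figure:

```python
def per_item_budget_ms(items_per_minute):
    """Time available per item, in milliseconds, before the next
    one arrives at the inspection station."""
    return 60_000 / items_per_minute

budget = per_item_budget_ms(400)
print(round(budget))      # 150 ms per container
# Six flash exposures are captured in that window, and the PC must
# also decode all six views before the next container arrives.
print(round(budget / 6))  # 25 ms per view, if decoded serially
```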

However, as many cameras as this are not always necessary. Markings can be read when they are almost perpendicular to the camera, as Jordan explains: ‘They can read the code just shy of the vertical; they can handle that level of perspective distortion.’ He adds: ‘[Better] software has allowed a big leap in readability.’

Reading more than one marking at a time is also possible with the double-digit megapixel cameras. They deliver greater image accuracy within the same field of view and an ability to cope with greater depths of field.
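
The gain from more pixels can be sized with simple arithmetic: what matters for decoding is how many sensor pixels land on each code module (cell or bar) across the field of view. A rough sketch, in which the sensor width, field of view, module size and the two-pixels-per-module decode minimum are all illustrative assumptions rather than figures from the article:

```python
def pixels_per_module(sensor_px_width, fov_mm, module_mm):
    """Number of sensor pixels covering one code module for a
    camera imaging a field of view fov_mm wide."""
    mm_per_pixel = fov_mm / sensor_px_width
    return module_mm / mm_per_pixel

# A 10-megapixel camera roughly 3840 px across, imaging a
# 400 mm wide conveyor, looking for 0.3 mm Data Matrix modules
px = pixels_per_module(sensor_px_width=3840, fov_mm=400, module_mm=0.3)
print(round(px, 2))  # 2.88 pixels per module
print(px >= 2.0)     # True -> meets a typical decode minimum
```

The same arithmetic explains the pull towards higher resolutions: halving the module size, or doubling the field of view to cover several markings at once, demands a proportional increase in pixel count.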

Liquid lenses get exposure

Depth of field can present a technical challenge on rapid production lines. One solution is to equip cameras with a liquid lens. A liquid lens consists of water and oil, and applying a current changes its focal length. Microscan recently introduced its own product incorporating a liquid lens, called AutoVision. As well as a liquid lens for autofocus, it has aperture control to change the exposure when more light is needed. Higher light levels may be needed when products are travelling along the conveyor faster, since a fast exposure is aided by more light, or where complex small codes or human-readable markings require greater contrast for the camera to distinguish their elements.
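
The link between conveyor speed, exposure and lighting can be quantified: during the exposure, the product must not move further than the smallest feature the camera has to resolve, or the code blurs. A sketch, assuming a one-module blur budget (a common rule of thumb, not a figure from the article):

```python
def max_exposure_us(conveyor_m_per_min, max_blur_mm):
    """Longest exposure, in microseconds, that keeps motion blur
    below max_blur_mm at the given conveyor speed."""
    speed_mm_per_us = conveyor_m_per_min * 1000 / 60 / 1_000_000
    return max_blur_mm / speed_mm_per_us

# A 70 m/min line, with blur held under one 0.3 mm code module
t = max_exposure_us(70, 0.3)
print(round(t))  # 257 microseconds
```

Sub-millisecond exposures of this order are why bright LED illumination, or a wider aperture, becomes essential as line speeds rise.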

Stemmer Imaging’s group manager Chris Pitt explains that liquid lenses are necessary when simple factors such as different-sized boxes on a conveyor and the inconsistent location of the barcode on the box can mean a lot of variation in depth of field. ‘While you can get cameras with auto focus, they are slower and will limit the speed of the products [on the conveyor]’, says Pitt.

The wider field of view possible with many-megapixel cameras means entire objects can be imaged for code detection. Microscan’s Ludlow explains that, with higher resolutions, small codes can be found easily when searching over a wide area. For example, a whole motherboard can be imaged at a glance and its 1D or 2D code located. Another example he gives is a tyre, where the entire tyre can be viewed at once as it is rotated.

Siemens has readers that can read 1D and 2D codes as well as human-readable markings. They achieve this with optical character recognition (OCR): the capability to identify human-readable characters, most often in the Roman alphabet. Siemens explains that OCR markings are more difficult for electronic systems to read than 1D or DMC markings; letters such as O and C can be confused where markings are degraded or obscured. Siemens created its Text Genius algorithm to enable its image interpretation to cope with these cases. According to Stemmer’s Pitt, OCR can be performed at high speed, but it requires software development, and the machine vision system ‘has to be taught to recognise’ symbols and letters that can be very close together.
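
Misreads that slip through, whether from OCR confusion or a damaged barcode, can often be caught downstream because most numeric codes carry a check digit. A minimal sketch of the standard GS1 mod-10 check used on EAN/UPC/GTIN barcodes (the sample number is a widely published example, not one from the article):

```python
def gs1_check_digit(payload: str) -> int:
    """GS1 mod-10 check digit (EAN/UPC/GTIN): weight 3 on the
    rightmost payload digit, alternating 3, 1 moving left."""
    total = sum(int(d) * (3 if i % 2 == 0 else 1)
                for i, d in enumerate(reversed(payload)))
    return (10 - total % 10) % 10

def verify_gtin(code: str) -> bool:
    """True if the code's final digit matches its computed check digit."""
    return gs1_check_digit(code[:-1]) == int(code[-1])

print(verify_gtin("4006381333931"))  # True  -> read is self-consistent
print(verify_gtin("4006381383931"))  # False -> a single misread digit is caught
```

This catches any single-digit misread, though not every transposition; 2D symbologies such as Data Matrix go much further, with Reed-Solomon error correction built into the code itself.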

Whatever the type of object or code, lighting can be a key factor if the throughput is high. High throughput can mean hundreds of metres of conveyor per minute. A conveyor running at 70 metres per minute can see more than 300 products whizz past a reader in 60 seconds: five products per second. As such, high light levels are needed to allow the camera to capture the code image with a very short exposure time.

Vision integrators face many challenges when retrofitting any kind of vision system, and code reading systems are no different. Robert Pounder, Olmec UK technical director, says that many production lines were designed long before machine vision became an option. ‘We have to design the reader station around the production line,’ he says. ‘But even with these challenges, we can still retrofit a code reader station to a conveyor line running at 300 or even 600 metres per minute.’

Regardless of the type of code, traceability will not work if tracking is impaired by a marking that cannot be read accurately. While dot peening may remain a niche where active systems hold an advantage, the overwhelming majority of printed markings will be tracked using passive vision systems.
