Advances in algorithms


Greg Blackman investigates some of the advancements in image processing technology

One of the obstacles encountered when machine vision was first carving out a niche for itself as an inspection tool – and for that matter, as Pierantonio Boriero, product line manager at Matrox Imaging, notes when dealing with a customer new to machine vision today – is that defects easily identifiable by a human eye actually require extremely complex algorithms for a vision system to identify. On the other hand, there are areas where machine vision can pick out a defect that would otherwise be difficult to find manually – finding a needle in a haystack is the analogy Boriero uses to illustrate his point.

The very nature of machine vision implies some kind of image processing is involved. ‘In the infancy of machine vision, software was written on a platform or application basis,’ comments Boriero. ‘With each new hardware platform or new application field the software was rewritten from scratch.’ This can be an expensive proposition and therefore imaging software programs were developed that could be reused for different applications and easily adapted to new platforms.

The Matrox Imaging Library (MIL) has been available since 1993 and provides a software framework that allows easy migration from one hardware platform to the next. Boriero says that the development of MIL was prompted by, firstly, the need to provide a consistent software interface with each new generation of hardware – and secondly, the realisation that a lot of imaging applications fundamentally require the same set of software tools.

In the majority of imaging applications, the first step is to locate the object of interest – blob analysis is one basic object locator tool. Once the object has been located, a quality assessment is made. This can include: ‘is the object present?’ or any number of measurements (analysis of colour, texture, or geometry of the part). Software libraries (MIL from Matrox Imaging, Common Vision Blox from Stemmer Imaging, Halcon from MVTec Software, Sapera from Dalsa, to name a few) provide end users with standard tools for carrying out these sorts of tasks.
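The locate-then-measure flow can be sketched in a few lines of generic Python. This is a minimal, library-free illustration of blob analysis (not the API of MIL, CVB, Halcon or Sapera): label the connected foreground regions of a binary image by flood fill, then use the largest blob's area and centroid for a simple presence check.

```python
from collections import deque

def find_blobs(binary):
    """Label connected white regions (4-connectivity) in a binary image,
    given as a list of rows of 0/1 values; return area and centroid per blob."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                # Flood fill to collect every pixel of this blob
                queue, pixels = deque([(y, x)]), []
                seen[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                area = len(pixels)
                cy = sum(p[0] for p in pixels) / area
                cx = sum(p[1] for p in pixels) / area
                blobs.append({"area": area, "centroid": (cy, cx)})
    return blobs

image = [[0, 1, 1, 0, 0],
         [0, 1, 1, 0, 0],
         [0, 0, 0, 0, 1],
         [0, 0, 0, 1, 1]]
blobs = find_blobs(image)
# Step 1: locate the object of interest (here, the largest blob);
# step 2: a quality assessment, e.g. a minimum-size presence check.
largest = max(blobs, key=lambda b: b["area"])
present = largest["area"] >= 4
```

A commercial library wraps the same two steps behind calibrated, sub-pixel tools, but the locate-then-assess structure is the same.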

How much control the end user requires over algorithm implementation can vary, with some simply interested in the result (pass or fail), some wanting quantification of that result (what is considered a pass and what is considered a border-line pass, etc), others wanting to control how the results are obtained, and others wanting influence over how the processing is carried out. ‘There are different levels of user expertise and each would be comfortable with a certain layer of abstraction without necessarily needing to know the layers below this,’ explains Inder Kohli, product manager at Dalsa. ‘Software permits that.’

Flexibility is one of the big advantages of programming in software over hardware. Programming in hardware is generally fixed and the user is stuck with the abstraction that the hardware designer provides, whereas software allows different levels of abstraction without having to change the underlying algorithm.

The need for speed

According to Boriero, one of the drivers in terms of software development is to optimise the tools to make them run faster. ‘Improving processing speeds is tightly coupled with enhancements in hardware technology,’ he says. This includes taking advantage of multicore CPUs to parallelise the execution of software tools, as well as using GPUs, which historically were specifically designed for generating computer graphics, but can also be used for image processing.
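As a rough illustration of the multicore idea (not how MIL or any GPU backend is actually implemented), an image can be split into horizontal strips that are processed concurrently; commercial libraries do the same with native threads and SIMD.

```python
from concurrent.futures import ThreadPoolExecutor

def threshold_strip(strip, level):
    """Binarise one horizontal strip of the image."""
    return [[1 if px >= level else 0 for px in row] for row in strip]

def threshold_parallel(image, level, workers=4):
    """Split the image into horizontal strips and threshold them concurrently.
    Illustrative only: in CPython, threads mainly pay off when the per-strip
    work releases the GIL (e.g. in a C extension); optimised libraries use
    native threads and vector instructions for the same strip-wise split."""
    n = max(1, len(image) // workers)
    strips = [image[i:i + n] for i in range(0, len(image), n)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(lambda s: threshold_strip(s, level), strips)
    return [row for strip in results for row in strip]

image = [[10, 200, 30], [250, 40, 180], [90, 130, 220], [5, 255, 60]]
binary = threshold_parallel(image, 128)
```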

A common use of imaging software is to provide part geometry checks. Image courtesy of Matrox Imaging.

Kohli notes that, due to cameras and sensors becoming faster, the need to process the data at a faster rate inevitably follows. In order to optimise image processing further, pre-processing functionality can often be offloaded from software into hardware, such as an FPGA, or into a GPU.

A field-programmable gate array (FPGA) is a hardware platform that can be reprogrammed using software. Outsourcing certain preprocessing tasks to an FPGA unburdens the CPU, freeing it up to do other image processing work. Silicon Software, a manufacturer of hardware and software products, specialises in image processing using FPGA technology. Its VisualApplets is a hardware programming tool for FPGAs optimised for image processing algorithms and applications. It contains more than 200 operators in 14 libraries, including classic pre-processing functions such as image corrections (shading, spatial, flat-field, pattern noise, dead pixel cancellation, etc) and image enhancements (gamma, contrast, brightness, bit depth conversion) that are ideal to offload to the FPGA, as well as segmentation and classification functionality.
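Two of the pre-processing steps named above can be sketched in plain Python. The arithmetic below is the standard textbook form of flat-field and gamma correction, not Silicon Software's FPGA implementation, which would apply one such operation per pixel per clock cycle.

```python
def flatfield_correct(image, dark, flat):
    """Flat-field correction: subtract the fixed-pattern offset (dark frame)
    and divide out per-pixel gain variation (flat frame), rescaled so the
    output sits at the flat frame's mean level."""
    h, w = len(image), len(image[0])
    mean_flat = sum(flat[y][x] - dark[y][x] for y in range(h) for x in range(w)) / (h * w)
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            gain = flat[y][x] - dark[y][x]
            val = (image[y][x] - dark[y][x]) * mean_flat / gain if gain else 0
            row.append(min(255, max(0, round(val))))
        out.append(row)
    return out

def gamma_correct(image, gamma):
    """Per-pixel gamma via a 256-entry lookup table, as an FPGA would do it."""
    lut = [round(255 * (i / 255) ** gamma) for i in range(256)]
    return [[lut[px] for px in row] for row in image]

# Toy 2x2 frames: after correction the vignetted image becomes uniform.
corrected = flatfield_correct([[110, 60], [60, 110]],
                              [[10, 10], [10, 10]],      # dark frame
                              [[210, 110], [110, 210]])  # flat frame
brightened = gamma_correct(corrected, 0.5)
```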

There are limitations to implementing complete applications on an FPGA. Some algorithms are not suited to running on FPGAs, as Michael Noffz, head of marketing at Silicon Software, explains: ‘FPGAs are highly parallelised and require a constant flow of data processing to work effectively.’ Therefore, a typical machine vision system will use an FPGA to carry out preprocessing tasks, while post-processing, in which the system will often have to wait for the image data, is carried out serially on a CPU.

Noffz identifies four application areas where hardware-based image processing is advantageous. Firstly, those with high bandwidth, such as high-speed or high-resolution applications – FPGAs can carry out image corrections and enhancements allowing the CPU to focus on image analysis tasks. Merging several images into one is also an often-required task that can be carried out on an FPGA. Secondly, low latency applications, such as those found in pick-and-place applications and robot vision in general, can use an FPGA processor to reduce the information from the image to coordinates, for example.

Thirdly, applications requiring data reduction, such as 3D imaging, can use the hardware platform to great effect. Laser triangulation images are scaled down to focus on the laser line and the reduction in bandwidth can be in factors of hundreds (an image with a height of 256 pixels will be reduced to one line). Noffz comments that FPGAs are ideally suited to calculating the best-fitting coordinate with high algorithmic quality. Finally, outsourcing of image processing to hardware can improve the stability of the overall system.
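The laser-line reduction can be illustrated with a hedged sketch: an intensity-weighted centre of mass is one common way to collapse each column of a triangulation frame to a single sub-pixel coordinate, though it is not necessarily the peak estimator Silicon Software uses.

```python
def laser_line(image, threshold=50):
    """Reduce a laser-triangulation frame to one profile line: for each
    column, estimate the laser peak row with sub-pixel precision using an
    intensity-weighted centre of mass over pixels above the threshold."""
    h, w = len(image), len(image[0])
    profile = []
    for x in range(w):
        num = den = 0
        for y in range(h):
            v = image[y][x]
            if v >= threshold:
                num += y * v
                den += v
        profile.append(num / den if den else None)  # None: no laser in this column
    return profile

# A 4-row toy frame collapses to a single line of peak positions
# (bandwidth reduced by 4 here; by 256 for the frames mentioned in the text).
frame = [[0,   0],
         [100, 0],
         [200, 0],
         [100, 90]]
print(laser_line(frame))
```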

Silicon Software’s VisualApplets is a hardware programming tool for FPGAs optimised for image processing.

Seeing in 3D

Kohli of Dalsa remarks that as software performance has increased along with computing power, there are certain application areas – colour processing and 3D vision are two examples – that have become more cost-effective.

Most major imaging software libraries contain modules for 3D imaging – Dr Olaf Munkelt, managing director at MVTec Software, singles out 3D vision as being ‘one of the main future challenges in machine vision’. Spanish company Aqsense has developed a software library for 3D machine vision based around the concept of a point cloud – a calibrated set of data coordinates (x, y and z) representing the position of an object in space. The latest version of the company’s SAL3D (3D Shape Analysis Library) software is shortlisted for this year’s Vision Award, an annual award recognising innovation in machine vision presented at the Vision Show in Stuttgart.

‘SAL3D is targeted at 3D inspection and factory automation,’ explains Josep Forest, technical director at Aqsense. Both of these applications use dense point clouds describing the 3D features of an object, which the software has to process quickly and accurately. Inspection tasks include looking for surface defects, such as scratches, but also performing a dimensional inspection of the part.

A 2D image consists of a matrix of pixel values, whether greyscale or RGB. A 3D point cloud has no such representation; instead of adjoining pixels there are simply coordinates without an immediate neighbour. Not all of the volume occupied by the point cloud is defined, i.e. there are only values where points exist. Typically, 3D point cloud analysis is considerably more computationally complex than 2D image processing – roughly by a power of three, since the data spans a volume rather than a plane.
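The cost difference can be made concrete with a toy sketch: a pixel's neighbour is one index away, but a point's nearest neighbour has to be searched for. The brute-force search below is O(n) per query; a production library such as SAL3D would use spatial indexes (k-d trees, voxel grids) to bring that down to roughly O(log n).

```python
import math

def nearest_neighbour(cloud, p):
    """Brute-force nearest neighbour in a point cloud: every query must scan
    all points, O(n), whereas a 2D pixel's neighbours are reached by simple
    index arithmetic in O(1)."""
    return min(cloud, key=lambda q: math.dist(p, q))

cloud = [(0.0, 0.0, 0.0), (1.0, 0.2, 0.1), (5.0, 5.0, 5.0)]
print(nearest_neighbour(cloud, (0.9, 0.0, 0.0)))
```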

SAL3D has two parts: data acquisition of point clouds and their subsequent processing (although the software can also process data from other acquisition systems). ‘The pose of a point cloud consisting of one million points can be data mined in approximately 100ms [with SAL3D],’ states Forest. ‘From the dense point clouds, the software can perform a dimensional check of the part.’ The dimensional information, together with the pose of the part, can be fed into a robotic system for pick-and-place applications.

Forest says that in this way small parts can be inspected, but also multiple cameras can be combined to image larger objects in high resolution and with the same accuracy. ‘The software can still perform dimensional checks, 3D surface inspection, and feed the information into a robot system, even with several million points.’

A robust solution

No matter what the software package, ease-of-use and robustness are two qualities that software manufacturers aim to build into their systems. Boriero of Matrox states: ‘Software manufacturers want to provide tools that work “out of the box” and that are easily configurable to handle variations in a manufacturing process.’

The vision system must be able to handle variations in the process and still deliver accurate results. According to Munkelt from MVTec, a robust system can still deliver accurate results even though the images captured undergo changes over time, such as varying illumination or changing contrast. MVTec’s products Halcon and ActivVisionTools are designed to be robust and are able to cope with changing system and process requirements on-the-fly.

Bruno Menard, image processing software group leader at Dalsa, also points out that ‘a robust software platform is one that is stable in time’. Software should be engineered to run on new generations of hardware without extensive reconfiguration, thereby ensuring a longer lifespan for the software.

While specialised applications will require specialised software tools, according to Kohli of Dalsa ease-of-use will continue to be a big driver in terms of software development. ‘Developers want to deploy applications faster and to be more productive. They also want tools that allow them to be more effective and to code for problems in their domain, rather than coding for problems that are supplied in off-the-shelf software libraries.’

Adding intelligence

Image processing software is a tool that is set up to run algorithms depending upon measurement parameters. ‘Most vision software, including the majority of Common Vision Blox (CVB), isn’t intelligent,’ says Mark Williamson, sales and marketing director at Stemmer Imaging, providers of CVB. ‘Intelligent image processing, on the other hand, includes software with the capability to learn what is and what isn’t acceptable based on examples provided. This allows problems to be solved and opens up application areas that conventional machine vision software tools wouldn’t be able to handle easily.’ Minos and Manto, part of CVB, both contain a degree of intelligence in how they operate.

A conventional pattern matching tool compares a model product with images of actual products and determines how well the two match. ‘The problem occurs when trying to differentiate between two very similar items, which conventional software struggles to achieve reliably,’ explains Williamson.

Minos is a decision tree recognition engine that uses negative instances to distinguish between two similar items. Negative instances allow the programmer to specify not only what the object is, i.e. the object’s pattern, but also what it’s not, i.e. the similar object. The program looks at all the images of type A and all the images of type B and builds a decision tree to classify the features of each item.
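A toy version of this idea – a one-node decision tree (a stump) trained on invented feature values, not anything from Minos itself – shows how negative instances steer the choice of the discriminating feature: the split that best separates type A from type B is kept.

```python
def train_stump(positives, negatives):
    """Train a one-node decision tree: try a threshold on each feature and
    keep the (feature, threshold) pair that best separates the positive
    examples (object A) from the negative instances (the similar object B)."""
    best = None
    n_features = len(positives[0])
    for f in range(n_features):
        values = sorted(v[f] for v in positives + negatives)
        for i in range(len(values) - 1):
            t = (values[i] + values[i + 1]) / 2
            correct = sum(v[f] > t for v in positives) + sum(v[f] <= t for v in negatives)
            acc = correct / (len(positives) + len(negatives))
            acc = max(acc, 1 - acc)  # allow either side of the split
            if best is None or acc > best[2]:
                best = (f, t, acc)
    return best

# Hypothetical feature vectors (say, edge count and mean intensity) for two
# visually similar parts; feature 1 is the one that tells them apart.
type_a = [(12, 0.80), (13, 0.82), (12, 0.79)]
type_b = [(12, 0.61), (13, 0.63), (12, 0.60)]
feature, threshold, accuracy = train_stump(type_a, type_b)
```

A full decision-tree engine repeats this split recursively over many features; the point here is only that the negative examples are what make the right feature stand out.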

A typical OCR tool, for example, requires a sharp, constant edge around the character. Text printed on varying backgrounds or substrates would interfere with a classic OCR tool. Through exposure to images of the character on different backgrounds, Minos will learn what features are important (the character, which remains constant throughout), and what features aren’t (the background).

Manto is a newer module, introduced within the last few years, that uses support vector machines, a machine learning approach to pattern recognition. It specialises in learning patterns that can change from image to image, such as with organic materials like foodstuffs, grain on wood, or human faces.

Designing vision systems

Various industries use virtual engineering design tools to simulate and analyse designs. SensorDesk, an engineering design software company based in New Jersey, US, has developed a software design tool for the machine vision industry. Its Vision System Designer software, shortlisted for the Vision Award, allows a designer to model common components such as cameras, lenses and lighting, annotate the object to be inspected according to the application requirements, and validate them.

The software includes internal models for lighting, lenses and cameras, which can be configured with respect to the target object so that an approximate image can be generated in real time. However, the resulting simulated image is not where the strengths of the software lie, as Matthias Voigt, president of SensorDesk, explains: ‘An image doesn’t tell the user how good the vision system is; it gives intuitive visual feedback. What is important are the vision system performance characteristics, such as whether lighting level is sufficient, whether contrast and depth of field are correct, the occurrence or absence of specular reflections, variations of resolution due to displacement, etc.’

Voigt states that in machine design, factory automation, and robotics, engineers have conceptual tools as do the workflow designers, but when it comes to inspection, there is no conceptual design tool. The Vision System Designer software allows inspection system engineers to detail their requirements and to show what the system will look like to other engineers in the team. On the other side, a small company with limited time and laboratory resources for vision system design can use the software to conceptualise the inspection system without having to order parts.

Vision System Designer is a conceptual tool for modelling cameras, lenses and lighting.