
Soft progress


Stephen Mounsey looks at trends in the development of image processing software and libraries, and finds that usability is becoming increasingly important here

The difference between simple digital photography and a true imaging or machine vision implementation lies in what the user does with the image. Using software to extract useful data from an image is fundamental to the imaging and machine vision industry, and it should therefore come as no surprise that some of the most exciting developments within vision fall within the scope of intelligent image processing.

The most advanced imaging system in the world would be of little commercial interest to customers if it needed a full-time post-doc researcher to calibrate its processing software. As such, the developers of imaging systems are investing their efforts into making sure that their products are easy to use. The Matrox Imaging Library (MIL) is a collection of image processing algorithms packaged as a series of tools, available to higher-level programs as required. The MIL and other libraries like it are becoming more useful through improved accessibility, as Pierantonio Boriero, product line manager at Matrox, explains: ‘The imaging library is the foundation [of the image processing solution], and the Matrox Design Assistant sits on top of this. Tools and algorithms that are available at the MIL level are also available at the Design Assistant level, the only difference between the two being that to use the tool at the MIL level, you must be able to write traditional programming code in C, C++, C#, or Visual Basic, whereas Design Assistant gives access to the tools through a flowchart that you construct.’ This, he says, makes the process of designing and optimising an image processing system accessible to users without dedicated programming expertise. ‘In terms of the intelligence of the software, both approaches have access to the same tools.’

The tools on offer within the MIL include the pattern recognition, feature extraction and analysis, edge-finding, and colour analysis tools that users would expect from a library such as this, and Boriero readily points out that libraries supplied by other imaging suppliers offer similar tools. ‘A lot of these tools, to be honest with you, are quite consistent at the higher level from vendor to vendor, but the way they differ is through implementations… the way in which the user interacts with the tool, and how easy it is to make use of the tool.’ Machine vision is, he adds, a very broad field, covering a wide range of applications, but software tools must nonetheless be easy to use. ‘Making them easy to use essentially means minimising the number of dials and switches you need to flip – and that’s the challenge. Today’s customers are pressed in terms of time to get a solution up-and-running as quickly as possible, and they don’t necessarily have the time to fiddle with all of the parameters.’ According to Boriero, many customers expect their image processing solution to work out of the box for their specific case. ‘In certain cases this works, but unfortunately sometimes you’ve got to roll up your sleeves and get turning some dials and flipping some switches.’
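Matrox's actual MIL calls are not reproduced here, but the sort of edge-finding tool Boriero describes can be sketched in miniature. The following is an illustrative pure-NumPy Sobel gradient, a stand-in for what a library-level edge tool computes internally, not Matrox code:

```python
import numpy as np

def sobel_edges(image: np.ndarray) -> np.ndarray:
    """Per-pixel gradient magnitude of a 2D greyscale image using
    the classic 3x3 Sobel kernels (valid region only, so the output
    is two rows and two columns smaller than the input)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = image.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = image[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)   # horizontal gradient
            gy[i, j] = np.sum(patch * ky)   # vertical gradient
    return np.hypot(gx, gy)

# A synthetic image with a vertical step edge: the detector responds
# strongly along the edge and stays at zero in the flat regions.
img = np.zeros((8, 8))
img[:, 4:] = 255.0
edges = sobel_edges(img)
```

A commercial library wraps this kind of kernel in an optimised, parameterised tool; the point of the sketch is simply what the ‘dials and switches’ ultimately control.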

A point cloud (left) is one method used by software packages to reconstruct an object in 3D (right), in this case through the Matrox Imaging Library

Improving in real time

A flowchart-based approach to constructing an imaging algorithm is one way to make the process easier for a non-programmer. Elsewhere, techniques of real-time visualisation have been used to create what amounts to rapid prototyping for image processing software. Yves Daoust is CEO and founder of Vision for Vision, a Belgian company offering this kind of intelligent image processing software. The company’s Vis+ prototyping environment has been shortlisted for the Vision Award, to be presented at Vision 2010 in Stuttgart. ‘We provide our customers with a highly interactive prototyping workbench. The focus is on the interactivity, in the sense that they get instant feedback on the image processing operations that they apply.’ When designing an image processing solution, says Daoust, users must combine a number of processing steps, each of which has its own intermediate results, and each of which has its own tuneable parameters. In traditional solutions, this processing is achieved step-by-step, often as the result of a script, and it is not easy to see the effect of each individual step or of individual parameter settings. ‘In our environment, whatever processing steps you include or whatever parameter setting you choose, you see the results instantly. Whatever parameter you change, the results are displayed immediately. This leads to more insight into what you’re doing, into the performance of the chain of operations, and into the exact effect of the various parameters,’ he says.

Ultimately, this kind of approach can allow image processing solutions to be developed very quickly. The workbench software can be interfaced with machine vision cameras, and the user can experiment with the devices while observing the output as live video. ‘That tells you a lot,’ says Daoust. ‘We can see a lot of things just by looking at real-time processing, and we can put probes wherever you want within the various processing steps, so as to see the results of intermediate steps as well. When we’re looking at the outputs of an experiment as a live data stream, we are in effect doing 20 experiments per second, whereas if it was a point-and-click application, it would take you one minute to do one experiment. Also, with this approach you can easily test the limit conditions of what you’re looking at. For example, if you are examining a certain component, you can move it around [in front of the camera] until it reaches the edges of the image, and you will immediately see how the algorithm is coping; it will start to fail as the objects get too close to the edges.’
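As a rough illustration of the probes-at-every-step idea (the function names here are invented for the sketch and are not part of Vis+), a processing chain can be modelled as a list of named stages whose intermediate results are all retained, so that changing a parameter simply re-runs the chain and every stage can be inspected:

```python
import numpy as np

def run_pipeline(image, stages):
    """Run a chain of (name, function) stages over an image, keeping
    every intermediate result so each step can be 'probed'."""
    probes = {"input": image}
    current = image
    for name, fn in stages:
        current = fn(current)
        probes[name] = current
    return probes

def box_blur(img):
    # 3x3 mean filter via edge padding and shifted sums
    p = np.pad(img, 1, mode="edge")
    return sum(p[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def make_threshold(t):
    # Parameterised binarisation stage: tweak t, re-run, observe
    return lambda img: (img > t).astype(np.uint8)

noisy = np.array([[0, 0, 200]] * 3, dtype=float)
for t in (50, 150):  # two 'experiments' with different settings
    probes = run_pipeline(noisy, [("blur", box_blur),
                                  ("binary", make_threshold(t))])
```

In an interactive workbench the re-run happens on every frame of live video; here the same principle is reduced to re-executing a pure function chain.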

Vision for Vision is still working to enrich the library of functions, Daoust says, as there are many areas of image processing into which the product could be extended. The company is also improving the scripting concept: ‘Currently the scripts that we can generate and handle are linear, with no decisions or loops. In the future we will turn this simple scripting mechanism into a fully fledged programming language. I am sure that customers will require that.’

Software for the third dimension

Image processing algorithms work on matrices of data, with each value corresponding to a point on the 2D image. When making the shift to 3D processing, these algorithms are no longer sufficient, and specialised 3D libraries must be used. Josep Forest is technical director at Aqsense, a Spanish company specialising in this kind of 3D image processing; its 3D shape analysis library is another entry for the Vision Award. ‘It’s purely 3D, in that we don’t deal with images, but solely with point clouds,’ he explains. ‘An image, for example, is a collection of matrix values. They could be greyscale values for a monochrome image or they could be values in the red, green, or blue planes, but images are essentially always composed of matrices of values.’ This, he says, is important, because of the way in which neighbours are preserved between points in the real world and points in the matrix, making it easier to perform operations on the data.

In the 3D scheme, however, this is more difficult to achieve. ‘Point clouds don’t necessarily keep neighbourhood unless we find some way to do that. Also, when we represent a point cloud we cannot treat it as if it was an image. We have to represent it on the screen and give it different movements, such as rotating or translating in three axes. Additionally, point clouds are always expressed in floating point units. There is some relationship between 2D and 3D image processing of course, but in essence the data treatment for 3D structures is very different, and has far higher CPU costs,’ says Forest. Nonetheless, he explains that approaches adapted from the 2D space, such as maintaining the neighbourhood information of data within the library, have allowed applications to be sped up in terms of processing time.
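Forest's description of a point cloud as a flat list of floating-point triplets, rotated and translated in three axes, can be sketched with NumPy (an illustrative example, not Aqsense's library):

```python
import numpy as np

def rigid_transform(cloud, rotation, translation):
    """Apply a 3x3 rotation followed by a translation to an N x 3
    point cloud (stored in float32, as the article notes)."""
    return cloud @ rotation.T + translation

def rot_z(theta):
    """Rotation matrix for an angle theta about the z axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], dtype=np.float32)

# Three points on the coordinate axes, rotated 90 degrees about z
# and lifted 5 units along z.
cloud = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=np.float32)
moved = rigid_transform(cloud, rot_z(np.pi / 2),
                        np.array([0, 0, 5], dtype=np.float32))
```

Unlike a 2D image, there is no grid here: a point's neighbours must be found (or stored) explicitly, which is exactly the neighbourhood bookkeeping Forest refers to.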

Processing time is of key importance in all of the industrial applications in which Aqsense has installed its software so far. In the food industry, for example, a system based on the 3D shape processing library has been used to measure the volume of pieces of bacon, cheese, ham, sausages, and other foodstuffs. Forest describes the way in which these vision systems are able to measure the volume of a piece of food to a very high accuracy. The food can then be cut into slices or chunks of a certain weight, to a very close approximation, thereby eliminating the need to weigh each individual piece of food before packaging it. This, he says, saves a lot of time in food-producing operations.
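A minimal sketch of the volume-and-portioning idea, assuming the 3D scan has been reduced to a height map over a conveyor (the function names are invented for illustration, and density is taken as uniform so that equal volume implies equal weight):

```python
import numpy as np

def volume_from_heightmap(heights, pixel_area):
    """Approximate object volume by summing scanned heights over the
    sensor grid (each column of material is height x pixel footprint)."""
    return float(np.sum(heights) * pixel_area)

def cut_position(heights, pixel_area, target_volume):
    """Index of the first row (along the conveyor axis) at which the
    accumulated volume reaches the target portion."""
    per_row = np.sum(heights, axis=1) * pixel_area
    cumulative = np.cumsum(per_row)
    return int(np.searchsorted(cumulative, target_volume))

# Hypothetical 4 x 3 scan with 1 mm^2 pixels and a uniform 2 mm height.
scan = np.full((4, 3), 2.0)
total = volume_from_heightmap(scan, 1.0)        # 24 mm^3
half_cut = cut_position(scan, 1.0, total / 2)   # row index for a half portion
```

A production system works from far denser laser-triangulation profiles, but the accounting is the same: integrate height over area, then cut where the running total hits the target.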

In the automotive area, and in the industrial sector in general, Aqsense has provided tools for dimensional tests, which are used to check 100 per cent of the parts produced by a production line. Laser scanning or other scanning techniques produce an accurate scan of each part. The point cloud data resulting from this scan is then aligned against point clouds in the software’s library that correspond to the required shape of the part, and the system then reports on overall deviations of the part. The time taken to achieve this is impressively short: ‘Typically, it will take from 100 to 200ms to align two million to three million points,’ says Forest.
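The article does not say which algorithm Aqsense uses for this alignment; one standard way to do a best-fit rigid alignment when point correspondences are known is the Kabsch/SVD method, sketched here in NumPy as an illustration of the scan-to-model comparison described above:

```python
import numpy as np

def kabsch_align(scan, reference):
    """Rigidly align `scan` onto `reference` (corresponding N x 3
    points) with the Kabsch/SVD method, and report the per-point
    deviations that remain after the best-fit alignment."""
    sc, rc = scan.mean(axis=0), reference.mean(axis=0)
    H = (scan - sc).T @ (reference - rc)        # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    aligned = (scan - sc) @ R.T + rc
    return aligned, np.linalg.norm(aligned - reference, axis=1)

# Demo: a 'scanned' part that is a rotated, shifted copy of the
# reference shape should align with near-zero deviations.
reference = np.array([[0, 0, 0], [1, 0, 0], [0, 2, 0],
                      [0, 0, 3], [1, 1, 1]], dtype=float)
c, s = np.cos(0.7), np.sin(0.7)
rot = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
scan = reference @ rot.T + np.array([4.0, -2.0, 1.0])
aligned, deviations = kabsch_align(scan, reference)
```

In practice the correspondences between a raw scan and a reference model are not known in advance, so industrial systems iterate this kind of step (for example via ICP); the 100-200ms figures Forest quotes refer to Aqsense's own optimised implementation, not this sketch.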

A colour recognition component within MIL analyses an image of colourful sweets

Ease of use is as vital in the 3D arena as it is in 2D, and Aqsense is working hard to keep its tools usable: ‘We strive to offer very easy-to-use tools, because we understand that these applications are in 3D, and that even though 3D has been prevalent in some areas of the industry for many years, such as reverse engineering or metrology, it is quite a new thing to the machine vision market. In addition to the support, we also provide training and consultancy services. We strive to always do our best to provide very easy concepts, so that the system integrator does not have to be a rocket scientist to make a program. We’re now providing a C++ interface, along with selling our own library.’

Beyond this, Forest says, the company aims to be compatible with as many other products as possible, sub-licensing the library to Stemmer Imaging, for integration into the Common Vision Blox product, and providing interfaces for other libraries, such as LabView, Halcon, and MIL. ‘We want to be compatible with anybody, while focusing only on 3D,’ he adds.

Enabling hardware

Daoust, from Vision for Vision, points out that the underlying factor allowing the whole industry to develop intelligent tools is the availability of inexpensive, high performance computer hardware, particularly processors. ‘When it comes to camera technology, [progress is] about higher resolutions, but enabling these higher resolutions is the fact that we have access to computer processors that can execute high-level algorithms in a practical amount of time. It’s not that the approaches are necessarily brand new – some of the techniques have been around for a long time – but now we have the muscle to actually perform these operations in a practical amount of time, at typical production rates, as opposed to having high-tech algorithms that take ten minutes to run; that would be kind of useless on a production line.’

New approaches to image processing have, Matrox’s Boriero says, opened up new applications, and allowed more people to tackle those applications. These users may not have the know-how to develop applications from scratch, but thanks to the availability of highly capable packaged tools, they don’t need it.
