The development of image processing algorithms, such as those for pattern matching, has been one of the major advances in machine vision over the last 30 years. When machine vision was still in its infancy, it was very much a laboratory science, with precise control over lighting and the environment necessary for the techniques to work. The development of robust algorithms means changes in ambient lighting are no longer such a problem, and it has moved what we now think of as machine vision out of the lab and onto the factory floor.
Making machine vision more robust through image processing tools still holds true today and, as Pierantonio Boriero, product line manager at Matrox Imaging, comments, a good imaging software package should enable the user to simplify the mechanical setup of the system. ‘It will allow the engineer to relax some of the mechanical constraints in system design,’ he says. ‘If you look at the evolution of pattern recognition technology, the earlier implementations required a lot of mechanical fixturing to present the part to the vision system in a certain orientation. But with the development of geometric pattern recognition, for example, where you’re able to find a pattern at any angle, you’re able to remove that mechanical constraint from the system.’ The same applies to illumination: new image processing tools mean the system can deal with more complex images and less uniform lighting.
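The idea of finding a pattern regardless of how the part is presented can be illustrated with a toy sketch. Commercial geometric pattern recognition matches edge geometry at arbitrary angles and scales; the minimal version below, written purely for illustration, only tries the four 90-degree rotations of a small binary template:

```python
# Toy sketch of rotation-tolerant template matching on a binary image.
# Real geometric pattern recognition matches at arbitrary angles with
# sub-pixel accuracy; here we only try the four 90-degree rotations,
# purely to illustrate the principle.

def rotate90(t):
    """Rotate a 2D list 90 degrees clockwise."""
    return [list(row) for row in zip(*t[::-1])]

def find_pattern(image, template):
    """Return (row, col, angle) of the first exact match, or None."""
    candidates = []
    t = template
    for angle in (0, 90, 180, 270):
        candidates.append((angle, t))
        t = rotate90(t)
    h, w = len(image), len(image[0])
    for angle, t in candidates:
        th, tw = len(t), len(t[0])
        for r in range(h - th + 1):
            for c in range(w - tw + 1):
                if all(image[r + i][c + j] == t[i][j]
                       for i in range(th) for j in range(tw)):
                    return (r, c, angle)
    return None

image = [
    [0, 0, 0, 0],
    [0, 0, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
template = [[1, 1],
            [0, 1]]   # an "L" shape
print(find_pattern(image, template))  # → (1, 1, 90)
```

The template is found even though the part appears rotated, which is exactly the mechanical constraint the quote describes removing.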
While the user can develop image processing tools in-house, commercial imaging libraries, such as the Matrox Imaging Library (MIL), Common Vision Blox from Stemmer Imaging, Halcon from MVTec, or Scorpion Vision from Tordivel, to name a few, provide a reasonably comprehensive set of software tools for those developing vision systems. ‘The benefit of an image processing library is that most of the programming has already been done,’ states Boriero. ‘But moreover, you’re benefitting from the fact that there are many other users of the tool and the improvements and developments catering to their needs as well. If you’re looking at a library that’s been around for a while, you’re working with a tool that you know has been successfully deployed for many years in the field.’
There are other arguments Dr Wolfgang Eckstein, managing director at MVTec Software, gives for choosing a standard product: ‘Generally, long-term maintenance of the software is not a primary consideration for in-house developments – software tools that have been developed in-house might not be able to run on 64-bit machines, for example.’ Upgrading to 64-bit processors can require a substantial rewrite of the code if the software was developed in-house. By comparison, Halcon has offered a 64-bit version of its imaging library for eight years.
Time-to-market is also an important factor for OEMs developing systems for new application areas. ‘Writing and developing the algorithms takes time – you need algorithms to begin with to allow you to work on the machine,’ comments Dr Eckstein. A standard imaging library, however, provides prototyping environments for feasibility studies, which for Halcon is HDevelop. A similar problem exists for integrators, which need to design and develop systems quickly for their customers.
A further problem with in-house development, Eckstein adds, is maintaining the algorithms when software engineers leave the company. Stability and reliability are important, and both are provided by a standard imaging library.
‘The user expects a fully equipped toolbox from an image processing library,’ states Boriero. ‘In addition, they want the peace of mind to know that if application requirements evolve or if they have to tackle new applications that they’ll have the tools to address these changes. They might not use every aspect of the toolkit, but they need to know they’re there if they need them.’
Ease of use
Leaving aside all these arguments for purchasing a commercial imaging library, a big attraction of the standard imaging library is that the user doesn’t necessarily need to do any programming. ‘The majority of users are not experts in vision and need an imaging library that’s intuitive enough to use without having an academic degree in vision,’ comments Arnaud Lina, manager of the processing team for analysis tools at Matrox Imaging. ‘The complexity and number of parameters involved in optimising a particular solution must be kept to a minimum. This is something that vision library manufacturers are working on; the pure black box with one button to press is not yet a reality.’
Software environments like Matrox’s Design Assistant allow the engineer to program a vision system using a graphical interface. The engineer builds algorithms using building blocks; the steps are configured and linked graphically without having to write any code.
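The flowchart-style approach can be sketched in code as a pipeline of configured, reusable steps chained together. The step names and this tiny framework are invented for illustration; they are not Design Assistant’s actual building blocks:

```python
# Illustrative sketch of a building-block pipeline: each block is a
# configured step, and the "program" is just the ordered list of steps.
# These blocks and the framework are hypothetical, for illustration only.

def threshold(level):
    """Block: binarise a greyscale image at the given level."""
    def step(image):
        return [[1 if p >= level else 0 for p in row] for row in image]
    return step

def invert():
    """Block: swap foreground and background in a binary image."""
    def step(image):
        return [[1 - p for p in row] for row in image]
    return step

def count_foreground():
    """Block: reduce a binary image to its foreground pixel count."""
    def step(image):
        return sum(sum(row) for row in image)
    return step

def run_pipeline(image, steps):
    """Apply each configured block in order, like a flowchart."""
    data = image
    for step in steps:
        data = step(data)
    return data

grey = [[10, 200], [220, 30]]
result = run_pipeline(grey, [threshold(128), count_foreground()])
print(result)  # number of bright pixels → 2
```

A graphical tool presents the same composition visually: the engineer drags blocks onto a canvas and wires them together instead of writing the list of steps.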
Deformable 3D matching is used in bin picking applications, with the parts recognised irrespective of angle or orientation. Image courtesy of MVTec.
‘Simplifying the image processing concepts – masking all the image processing, signal processing, and computer vision concepts – is not easy to do,’ states Lina. ‘We’ve worked hard to put a layer in between the user and the underlying mathematics and make the parameters intuitive. We’ve added a layer of heuristics or combinations of mathematics to translate the maths to visual concepts. Some concepts are very easy to map – a scale is a scale, a translation is a translation, a rotation is a rotation. But how smooth a filter is, or how much a filter will denoise an image, is more complex.’ He adds that some parameters are too complex to control manually, so the software is programmed to automatically determine a value based on the image rather than have the user set the parameter.
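A classic example of a parameter derived from the image itself, rather than set by the user, is Otsu’s method for choosing a binarisation threshold. The sketch below is a textbook pure-Python version, not any vendor’s implementation:

```python
# Otsu's method: the software picks a binarisation threshold from the
# image's grey-level histogram by maximising the between-class variance,
# so the user never has to set the threshold by hand.

def otsu_threshold(pixels):
    """Return the 8-bit threshold that maximises between-class variance."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * hist[i] for i in range(256))
    sum_bg = 0.0
    w_bg = 0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_bg += hist[t]                 # pixels at or below t = background
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Two well-separated grey-level populations: the chosen threshold
# falls between them without any manual tuning.
dark = [20] * 50 + [30] * 50
bright = [200] * 50 + [210] * 50
t = otsu_threshold(dark + bright)
assert 30 <= t < 200
```

The same idea, applied to far more complex parameters, is what lets a library hide a filter’s mathematics behind a sensible automatic default.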
Making imaging software easier to use is something Dr Eckstein also feels is important. ‘In general, the algorithms are getting more and more complicated,’ he says. ‘There are more parameters to consider – even the relatively straightforward task of pattern matching requires the user to specify multiple parameters, such as rotation angle, scaling, invariance to perspective distortion, etc. This is difficult for many customers.’
Halcon provides algorithms in which parameters can be trained automatically using sample images. ‘You increase robustness and speed of image processing just by showing example images,’ Dr Eckstein says. Data code reading, OCR, and defect classification are areas benefitting from automatic parameterisation within the software.
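The principle of training by example can be sketched simply: instead of the user typing a tolerance, the software derives an acceptance band from a handful of known-good sample measurements. The routine below is an invented illustration of that principle, not Halcon’s training mechanism:

```python
# Hedged sketch of "training by example": derive an acceptance band for
# a measured feature (here, a blob area in pixels) from known-good
# samples, rather than asking the user for explicit limits.
import statistics

def train_acceptance_band(good_samples, k=3.0):
    """Return (low, high) limits as mean +/- k standard deviations."""
    mean = statistics.mean(good_samples)
    sd = statistics.stdev(good_samples)
    return mean - k * sd, mean + k * sd

# Areas measured on six known-good parts (hypothetical values).
areas_from_good_parts = [1012, 998, 1005, 1001, 994, 1008]
low, high = train_acceptance_band(areas_from_good_parts)

def inspect(area):
    """Pass a part if its measured area lies inside the trained band."""
    return low <= area <= high

print(inspect(1003))  # in-band part
print(inspect(1250))  # clearly out of band
```

Showing the system more good examples tightens or widens the band automatically, which is the convenience Dr Eckstein describes for end users and integrators.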
Dr Eckstein also recognises that an end user might want to be able to tune parameters themselves and considers both manual and automatic parameterisation important. ‘Some parameters are known and can be specified manually,’ he says. ‘However, automatically training the system is more convenient for certain tasks. It also depends on the customer: end users and integrators prefer automatic training, while an OEM wants to get the best performance and robustness and so they might fine-tune their parameters.’
There are also different software environments available, depending on the knowledge of the engineer. Generally, companies with more standard tasks, such as OCR, barcode reading, or blob analysis, would use a configuration tool. An OEM, on the other hand, typically needs more flexibility and is generally more experienced in programming.
Colour and 3D imaging are two areas Boriero is seeing a lot of interest in. He also adds code reading – barcodes or data matrix codes – to that list because of underlying trends for traceability in manufacturing. Dr Eckstein also pinpoints 3D imaging as a major growth area: ‘There is a wide range of 3D sensors available and 3D imaging provides new inspection possibilities. Consider the developments in 2D image processing over the last 30 years; a huge range of algorithms have been developed in this space, such as blob analysis, texture analysis, and template matching, to name a few. The same level of development has to take place for 3D.’
From the customer’s perspective, the software tools available for 3D image processing are similar to those for 2D – object alignment, for instance, or object identification and classification in the image work along similar principles whether in 2D or 3D. However, the algorithms for these processes are much more complex in 3D than in 2D because of the extra dimension. Dr Eckstein cites a 3D pattern matching algorithm as comprising around 300,000 lines of code. ‘Even ignoring testing and specification, this requires lots of manpower to develop,’ he says. ‘Much of the increased sophistication in 3D algorithms goes into keeping execution times similar to those of their 2D counterparts. Besides this, there is a lot of new mathematics required, because the internal algorithms differ from those in 2D – not everything that works in 2D can be converted to 3D.’
Boriero states: ‘Users can get overly excited about new tools in 3D imaging, for example, sometimes to a point where they lose sight of solving the application, which can often be resolved using traditional 2D tools. But there’s still momentum for 3D and colour imaging.’
Colour and 3D imaging might be growth areas, but Lina states that they are still marginal in terms of volume. Marc Damhaut, CEO of machine vision company Euresys, reports a similar experience: ‘Surprisingly, the most popular, i.e. most used, features of Open eVision are still the first libraries that we have developed: sub-pixel measurement, blob analysis and pattern matching. They are appropriate for most of today’s applications.’ Open eVision is Euresys’ imaging library; it is suitable for most applications, although it has had particular success, according to Damhaut, in semiconductor inspection, as well as LED and solar cell inspection.
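Blob analysis, one of the classic tools Damhaut mentions, is at heart connected-component labelling. The bare-bones sketch below labels 4-connected foreground regions and reports each blob’s area and centroid; commercial blob tools add many more measurements (perimeter, moments, sub-pixel edges):

```python
# Minimal blob analysis: label 4-connected foreground regions in a
# binary image and report each blob's area and centroid. This is a
# bare-bones sketch of the core labelling step, not a production tool.
from collections import deque

def blob_analysis(image):
    """Return a list of dicts {label, area, centroid} for each blob."""
    h, w = len(image), len(image[0])
    labels = [[0] * w for _ in range(h)]
    blobs, next_label = [], 1
    for r in range(h):
        for c in range(w):
            if image[r][c] and not labels[r][c]:
                # Flood-fill this blob with breadth-first search.
                queue = deque([(r, c)])
                labels[r][c] = next_label
                cells = []
                while queue:
                    y, x = queue.popleft()
                    cells.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and image[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = next_label
                            queue.append((ny, nx))
                area = len(cells)
                cy = sum(y for y, _ in cells) / area
                cx = sum(x for _, x in cells) / area
                blobs.append({"label": next_label, "area": area,
                              "centroid": (cy, cx)})
                next_label += 1
    return blobs

img = [
    [1, 1, 0, 0],
    [1, 0, 0, 1],
    [0, 0, 1, 1],
]
for b in blob_analysis(img):
    print(b)
```

Simple as it is, this kind of measurement — how many objects, how big, where — still covers a large share of real inspection tasks, which is Damhaut’s point.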
‘Most of the applications can be solved with simple, classical tools,’ Lina adds. ‘The issue with selecting an imaging library is not in terms of high-end algorithms for most people; it’s in terms of compatibility, long-term support, ease of use, and ease of deployment,’ he says, adding that there are still incremental developments taking place on classical algorithms to enhance and optimise them.