
More than the eye can see

Think you’ve got perfect vision? Take a look at Figure 1 and think again. Like most people, you will probably see one block of grey as darker than the other, when in reality the two are exactly the same shade. The effect is known as White’s Illusion, and it is one of many that plague human vision.

Figure 1: White’s Illusion. The two grey boxes are in fact the same shade.

The chances are that you’ve never noticed these errors affecting your daily life, because they are inherent in the way our eyes have developed to find useful information from often ambiguous visual data. For example, an object may appear light or dark either because of the shade of its colour, or because it is in bright light or shadow, and as babies we had to learn, through trial and error, how to disentangle this information and make correct predictions about the world around us.

Most of the time we get it right, but sometimes, if a scene doesn’t match with our previous experiences, we make mistakes, as with White’s Illusion. You may hope that machine vision could overcome these problems, but Beau Lotto at University College London, UK, has recently shown that these errors can occur in any system that tries to emulate human vision.

This raises the tricky question: should machine vision developers try to iron out these problems, at the risk of needlessly complicating the image processing algorithms, or would it be sufficient to leave them as they are? Many companies are working to improve on nature, but in some cases this would be pointless, and possibly detrimental to the system’s performance. In fact, some systems actively try to recreate the ambiguities of human vision.

An example of the latter case would be the inspection of display screens during production, where a defect is only important if it is noticeable to a human being. It is important for manufacturers to detect a high proportion of the faulty products, but obviously they don’t want to dispose of displays with faults that are only visible to a machine. To solve this problem, Radiant Imaging has developed a system that ‘sees’ the display as a human would do.

The defects could include a ‘dead’ pixel that emits no signal, but faults can also take the form of more subtle variations in pixel output that are invisible in isolation, yet can ruin image quality when they appear in certain patterns. Both the colour and the distribution of these faulty pixels affect their visibility to humans: we are more likely to detect faulty pixels arranged along horizontal and vertical lines than along diagonals, and we are more sensitive to variations in green than in blue.

Radiant Imaging’s solution is evident in both the hardware and the image processing software. Firstly, the CCD sensor in the camera views the display through filters that try to emulate the same spectral response as our eyes, which are better at detecting certain wavelengths than others. This removes unnecessary data that would have no impact on the visibility of defects, but could increase the load of the image processing.

‘Our cameras apply this weighting [of spectral data] as they take the image, which means you capture a lot less data up-front,’ says Hubert Kostal, the vice president of sales and marketing at Radiant Imaging. ‘It’s faster, more efficient and less expensive.’
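The weighting Kostal describes can be mimicked in software. The sketch below assumes the standard Rec. 709 luminance coefficients as a stand-in for Radiant’s in-camera filters (an assumption; the real filters match the eye’s full spectral response), collapsing three colour channels to a single perceptual-brightness channel:

```python
import numpy as np

# Rec. 709 luminance weights: the eye is most sensitive to green,
# least to blue. An assumed stand-in for the camera's filters.
LUMA_WEIGHTS = np.array([0.2126, 0.7152, 0.0722])

def perceptual_weighting(rgb_image):
    """Collapse an H x W x 3 RGB array to one luminance channel,
    discarding colour detail a human observer would barely perceive --
    a software analogue of applying the weighting in hardware."""
    return rgb_image @ LUMA_WEIGHTS

# A pure-green pixel contributes far more perceived brightness
# than a pure-blue pixel of the same intensity.
frame = np.array([[[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]])
luma = perceptual_weighting(frame)
```

Doing this in hardware, as Radiant does, means the weighted image is all that is ever read off the sensor, which is where the speed and storage savings come from.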

Once they have been collected, the data are converted into a matrix with each pixel assigned a number that represents the intensity of its output. The software then analyses the matrix to find patterns in the variation of the display that could represent a visible defect to humans. ‘Tremura looks at the geometry of the display image, using studies on the human detection of “just-noticeable defects”,’ says Kostal.
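As a rough illustration of the idea (not Radiant’s actual algorithm), the sketch below flags pixels whose output deviates from the display’s mean, then scores row- and column-aligned clusters more heavily, since humans detect defects on horizontal and vertical lines more readily than on diagonals:

```python
import numpy as np

def flag_visible_defects(intensity, threshold=0.2):
    """Flag pixels deviating from the display mean by more than
    `threshold`, and count those sharing a row or column with another
    defect -- the patterns most likely to be 'just noticeable'.
    Illustrative only; the threshold and scoring are assumptions."""
    deviation = np.abs(intensity - intensity.mean())
    defects = deviation > threshold
    rows, cols = np.nonzero(defects)
    row_counts = np.bincount(rows, minlength=intensity.shape[0])
    col_counts = np.bincount(cols, minlength=intensity.shape[1])
    aligned = sum(1 for r, c in zip(rows, cols)
                  if row_counts[r] > 1 or col_counts[c] > 1)
    return defects.sum(), aligned

display = np.full((8, 8), 0.5)
display[2, 1] = display[2, 5] = 1.0   # two bright defects on one row
display[6, 3] = 1.0                   # one isolated defect
total, aligned = flag_visible_defects(display)
```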

The display screen is then graded based on the number and severity of these defects, and the software decides whether it has passed or failed. The advantages of using an automated system are manifold: ‘Currently, most judgments are made by human observers, but human perception varies over time. Camera systems can be calibrated to give repeatable data,’ says Kostal. ‘Humans can’t capture the full image for later analysis, but our system can store records of individual displays.’

This method of manually programming a system can emulate human vision to a certain degree, but many machine vision suppliers now wish to take it a step further, with solutions that actively learn to see in the same way as human babies. This provides a more general solution that can be more easily customised to different problems, without the need for programming expertise.

Firstsight Vision’s Manto software does this on the basis of a series of example images of good and faulty products. From these examples the software learns to identify the important features of a defective product, which it then applies during inspection. As you would expect, the system’s accuracy improves with the amount of training it has received.
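A toy version of this example-based learning is sketched below: a nearest-centroid classifier over two crude texture features stands in for Manto’s far richer learned features (every feature and name here is illustrative, not Firstsight Vision’s method):

```python
import numpy as np

def texture_features(img):
    """Crude texture descriptor: overall brightness plus local
    contrast. Real learning systems extract far richer features."""
    return np.array([img.mean(), np.abs(np.diff(img, axis=1)).mean()])

def train(good_images, faulty_images):
    """Learn one feature centroid per class from labelled examples --
    a minimal stand-in for training on a catalogue of images."""
    return {"good": np.mean([texture_features(i) for i in good_images], axis=0),
            "faulty": np.mean([texture_features(i) for i in faulty_images], axis=0)}

def classify(model, img):
    """Assign the class whose centroid is nearest in feature space."""
    f = texture_features(img)
    return min(model, key=lambda label: np.linalg.norm(f - model[label]))

# Synthetic 'wood': low-contrast patches are good, blotchy ones faulty.
rng = np.random.default_rng(0)
smooth = [rng.normal(0.5, 0.01, (16, 16)) for _ in range(10)]
blotchy = [rng.normal(0.5, 0.2, (16, 16)) for _ in range(10)]
model = train(smooth, blotchy)
```

The appeal of this approach is exactly what Williamson describes: nobody has to write down what a defect looks like, because the catalogue of examples defines it implicitly.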

Mark Williamson, director of Firstsight Vision, explains how the system is used to inspect wood for blemishes: ‘The texture of wood is very random, so defects are very difficult to define. However, provided you can create a catalogue [of example images], the software can define its own features.

‘A human expert would know what features to look for, and Manto allows us to teach this expertise into a machine vision system. It’s allowing us to move into areas that wouldn’t have been possible before.’

The Tremura system from Radiant Imaging uses filters to ‘see’ a defect as a human would.

The Flawscan system from i2S Linescan also makes use of artificial intelligence to inspect coated glass products for scratches, impurities and an uneven spread in the coating. The technique of copying human neurology has now been tried and tested by many companies, and suppliers are now working to improve their systems with additional capabilities. In particular, Adrien Poly, the business division manager for i2S, emphasised that these surface inspection systems now need to provide more detailed information about the production line than a simple pass or fail reading.

‘It’s not only important to detect defects, but also to tell the user what kind of defect it is,’ he says. ‘It’s a major trend to put the final usage of a product as the highest priority… We provide tools to help customers understand the cause of defects. The systems must go beyond quality control to become a production optimisation tool.’

For this, i2S’s software provides statistics about any trends in the quality of products over time, and the system can connect to other instruments to store all the information from the production process in a single database.
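The trend statistics involved can be as simple as a rolling mean of per-batch defect counts. The hypothetical sketch below shows how such a series reveals gradual drift that a per-unit pass/fail reading would hide:

```python
import numpy as np

def defect_rate_trend(defect_counts, window=5):
    """Rolling mean of per-batch defect counts: the kind of trend
    statistic that turns pass/fail inspection into a production
    optimisation tool by exposing gradual drift."""
    kernel = np.ones(window) / window
    return np.convolve(defect_counts, kernel, mode="valid")

# Hypothetical defect counts per batch: stable at first, then slowly
# rising, as might happen when a coating nozzle starts to clog.
counts = np.array([1, 0, 2, 1, 1, 1, 2, 3, 4, 6, 8])
trend = defect_rate_trend(counts)
```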

Radiant Imaging’s Hubert Kostal agrees that the systems need to provide added value to really become an accepted tool in an industry: ‘The quality improvement for a company must be sufficiently great that it forces all the competition into using the system too. Once enough people use it, then it will become a standard.’

With the current rate of progress, that acceptance looks likely to come sooner rather than later. This may well prove useful: UCL’s Beau Lotto and other researchers around the world are concluding that it is no coincidence that advanced machine vision systems see the same illusions as we do. To some extent, the illusions may be an inevitable consequence of any robust vision system, in which case we might as well make the best possible use of them.

REFERENCE

(i) PLoS Computational Biology, doi:10.1371/journal.pcbi.0030180.



If anything was evident in 2007, it was the trend for advanced 3D inspection systems, often developed by recently founded start-up companies. Three entries for the Vision Award at last year’s Vision Show in Stuttgart reported innovations in this area, with the entry from in-situ eventually scooping the prize (see page 6).

The entry from OBE Ohnmacht & Baumgärtner makes use of the ‘shape from shading’ technique to overcome the problems of imaging the shiny surfaces found in the automotive industry. Reflective surfaces create distortions in an image that make accurate measurement difficult, and in the past complex and expensive illumination systems have been needed to compensate.

The trevista system tackles this with a structured, diffuse lighting system encased in a dome. A camera takes four images from different angles to give a topographic representation of the component, which is then analysed by image processing software. OBE claims the method has the accuracy of a 3D technique but runs at the speed of 2D inspection, making it ideal for use on the production line.
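OBE has not published the details of its algorithm, but the classic formulation in this family of techniques is Lambertian photometric stereo: given images under known illumination directions, per-pixel surface normals fall out of a least-squares solve. A minimal sketch, with assumed light directions (trevista’s actual geometry and processing are proprietary):

```python
import numpy as np

# Four assumed illumination directions (unit vectors) around a dome.
L = np.array([[ 0.5,  0.0, 0.866],
              [-0.5,  0.0, 0.866],
              [ 0.0,  0.5, 0.866],
              [ 0.0, -0.5, 0.866]])

def surface_normals(images):
    """Lambertian photometric stereo: intensity I = L @ (rho * n), so a
    per-pixel least-squares solve recovers albedo rho and normal n
    from the four differently shaded images."""
    h, w = images[0].shape
    I = np.stack([im.ravel() for im in images])   # 4 x (h*w)
    G, *_ = np.linalg.lstsq(L, I, rcond=None)     # 3 x (h*w), G = rho*n
    rho = np.linalg.norm(G, axis=0)
    n = G / np.maximum(rho, 1e-12)
    return n.T.reshape(h, w, 3), rho.reshape(h, w)

# Sanity check: a flat surface facing the camera (n = [0, 0, 1])
# should be recovered from synthetic images I = L . n.
true_n = np.array([0.0, 0.0, 1.0])
imgs = [np.full((2, 2), L[i] @ true_n) for i in range(4)]
normals, albedo = surface_normals(imgs)
```

The normals can then be integrated into a height map, which is why the method delivers 3D-like topography from a handful of fast 2D exposures.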

The trevista system from OBE Ohnmacht & Baumgärtner uses a ‘shape from shading’ technique to detect defects on shiny components, such as this piston.

Using a different technique, but with a similar goal in mind, Aqsense has developed a 3D inspection system that compares 3D constructions of the components to their original CAD drawings to highlight any differences.

To align the 3D scan of the part to the correct CAD design, the software recognises important features of the part’s shape, which reduces the amount of image processing.

‘Complicated objects are actually easier to process as there are more features to align,’ says Josep Forest, technical director of Aqsense. The method also takes advantage of recent developments in computer processing, by spreading the task of comparing the two 3D models across many different processors, each performing their jobs simultaneously.

This reduces the time it takes to perform the image analysis. ‘At the moment we’re not aware of another product that can perform the process at this speed,’ says Forest. ‘With dual-core processing at 1.8GHz and with 2GB of RAM, we can align two surfaces with a million points in just 300ms, which is effectively real-time output.’
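The divide-and-conquer idea can be sketched as follows: split the scanned point cloud into chunks and measure each chunk’s deviation from the reference model concurrently. Brute-force distances and a thread pool are used here for clarity; Aqsense’s actual pipeline (CAD alignment, spatial indexing, multi-processor dispatch) is far more sophisticated:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def chunk_deviation(scan_chunk, cad_points):
    """Distance from each scanned point to its nearest reference point
    (brute force for clarity; production systems use spatial trees)."""
    d = np.linalg.norm(scan_chunk[:, None, :] - cad_points[None, :, :], axis=2)
    return d.min(axis=1)

def deviation_map(scan, cad_points, n_chunks=4):
    """Split the scan into chunks and compare each against the CAD
    model concurrently, then reassemble the per-point deviations."""
    chunks = np.array_split(scan, n_chunks)
    with ThreadPoolExecutor() as pool:
        results = pool.map(chunk_deviation, chunks, [cad_points] * n_chunks)
    return np.concatenate(list(results))

# Reference model: a 5x5x5 grid of points. The scan matches it
# exactly except for one point displaced 0.4 units -- a simulated dent.
xx, yy, zz = np.mgrid[0:5, 0:5, 0:5]
cad = np.stack([xx, yy, zz], axis=-1).reshape(-1, 3).astype(float)
scan = cad.copy()
scan[42, 2] += 0.4
dev = deviation_map(scan, cad)
```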

A more traditional method of gaining data about the 3D shape of an object is the triangulation technique employed by the Ranger with MultiScan scanner from Sick. The device, which is also frequently used in wood inspection, scans a laser beam across the surface, and from the way this light scatters it is possible to create a height profile of the object.

Manufacturers now demand faster inspection rates than ever before, and the Ranger scanner can scan material moving past at more than 200 metres per minute. The scanner matches this 3D information to a greyscale image of the surface, which can be used to differentiate dirt from knots in the wood and to grade its quality. This information allows manufacturers to decide the optimum positions to cut and shape the wood for their products.
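The geometry behind laser triangulation is simple enough to sketch: the laser line’s displacement on the sensor, scaled by the camera calibration and divided by the tangent of the laser-to-camera angle, gives the surface height. The values below are assumed calibration figures for illustration, not Sick’s calibrated model:

```python
import numpy as np

def height_profile(image, laser_angle_deg=45.0, mm_per_pixel=0.1,
                   reference_row=0):
    """Laser triangulation sketch: for each sensor column, find the row
    where the laser line is brightest; its displacement from the
    reference row converts to height via the laser/camera angle."""
    peak_rows = image.argmax(axis=0)                  # laser line per column
    displacement_mm = (peak_rows - reference_row) * mm_per_pixel
    return displacement_mm / np.tan(np.radians(laser_angle_deg))

# Synthetic sensor frame: the laser line sits at row 0 on the flat
# surface, but shifts to row 10 over a raised feature (a knot, say)
# in the middle columns -- 1mm high at this assumed calibration.
frame = np.zeros((32, 12))
frame[0, :] = 1.0
frame[0, 4:8] = 0.0
frame[10, 4:8] = 1.0
heights = height_profile(frame)
```

Repeating this for every laser sweep as the material moves past builds up the full height map of the surface.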
