
The smartphone generation

The mobile phone camera market is based almost entirely on CMOS sensors, and the large consumer market has pushed CMOS technology to its limits, including the development of advanced, low-noise pixels. However, in the more specialised field of machine vision, both CCD (charge-coupled device) and CMOS (complementary metal oxide semiconductor) sensors offer advantages, and the technical requirements for machine vision are more specific and more difficult to meet than those for consumer imagers.

The latest technology draws on improvements that have evolved across both consumer and specialised applications – combining CMOS readout with a CCD type of pixel is one such example. Then there is backside illumination technology, which is now seen in consumer image sensors but hasn’t yet made its way into sensors for machine vision. Industrial cameras, while following trends in consumer image sensor technology in some respects, such as smaller pixel sizes, still retain their own specific set of imaging requirements.


Global shutters and high speeds

The first requirement of machine vision is a high image capture rate for inspecting products and measuring dimensions. ‘The faster it goes, the lower the cost,’ said Albert Theuwissen, founder of Harvest Imaging in Belgium.

CMOS sensors hold the advantage in frame rate and in the ability to do electronic processing on the chip itself but, compared with CCD, they tend to produce noisier, lower-quality images, particularly in low light.

‘The mobile phone typically uses rolling shutter pixels which are less complex than global shutter pixels,’ explained Lou Hermans, co-founder of Cmosis image sensors in Antwerp, Belgium. Ninety-nine per cent of imagers for the consumer industry today use rolling shutters, which read pixels row by row, scanning across the entire frame.

‘This is fine for still photos, but for moving scenes you get odd effects,’ added Piet De Moor, programme manager at Imec nano-electronics research centre in Belgium.

Because the rows are exposed at slightly different times, a rolling shutter distorts the image when capturing very fast-moving objects. In contrast, CCD uses a global shutter pixel, which exposes the entire frame in the same time window and produces a higher-quality image.

For the rapid imaging required in machine vision, the fundamental solution is a global shutter pixel with a memory element in each pixel. That memory consumes space inside the pixel and is therefore not compatible with the very small pixels used in mobile phones. ‘Global shutters now have pixel sizes of 5 to 10µm; consumer pixels are 1.5µm or below,’ added De Moor.
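The geometric difference between the two readout schemes can be illustrated with a toy simulation: a hypothetical one-pixel-wide bar moving across a small frame (all names and numbers here are illustrative, not drawn from any real sensor):

```python
import numpy as np

def scene(t, n_rows=8, n_cols=32, speed=2):
    """A one-pixel-wide vertical bar at column (speed * t) % n_cols."""
    img = np.zeros((n_rows, n_cols), dtype=int)
    img[:, (speed * t) % n_cols] = 1
    return img

def global_shutter(t0):
    # All rows exposed in the same time window: an undistorted snapshot.
    return scene(t0)

def rolling_shutter(t0, n_rows=8):
    # Each row is read one time step after the previous one, so a moving
    # bar lands in a different column on each row (the familiar skew).
    return np.array([scene(t0 + r, n_rows=n_rows)[r] for r in range(n_rows)])

print(np.argmax(global_shutter(0), axis=1))   # bar in the same column on every row
print(np.argmax(rolling_shutter(0), axis=1))  # column advances row by row
```

The bar stays vertical in the global-shutter snapshot but comes out slanted under the rolling shutter, which is exactly the distortion that matters when measuring fast-moving parts on a production line.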

While the first CMOS sensors had rolling shutters, global shutters are now being integrated to provide CCD image quality with high CMOS frame rates. At Cmosis, the CMV (Cmosis Machine Vision) family comprises high-speed, global-shutter CMOS sensors with up to 20 megapixel resolution and an active sensor area of 32.8 x 24.6mm. The sensor has a programmable on-board sequencer and integrated analogue-to-digital converters on the chip, which make it easy to build into a camera.

‘In the last five years, the ability to combine global shutter pixels with a reasonable pixel size has made global shutter image sensors possible,’ said Hermans. There is interest in future machine vision products to use smaller pixels, and mobile phone technology helps to achieve that, he added, referring to the ability to shrink pixel size without compromising on performance, yet offering higher resolution at a reasonable price.

‘Global shutter versus rolling shutter is a CMOS statement,’ summarised DeLuca. ‘When you talk about CCD devices, the type of device that we make for machine vision is the interline transfer CCD which has an electronic shutter in it as well.’ People talk about global shutter being important – but CCDs have this problem solved. Combining it with a high frame rate and a high rate of data readout is critical for making the most of available technology.

Data readout

‘With past CCD sensors, we had a few hundred frames per second. But now, CMOS can produce several thousand frames per second and allows electronics to be done on the chip itself,’ said Theuwissen. This means that data from various columns on the image sensor can be pre-processed, making it possible to read high data volumes at high speeds.

‘For consumer-type applications [such as mobile phones], almost everything has been taken over by CMOS,’ said De Moor. ‘But if you look at the much smaller market for scientific or industrial inspection, they are still mostly using CCD because of the very good performance.’ ‘CCD still brings best image quality, as measured in dynamic range and image uniformity across image array,’ added DeLuca.

As explained by DeLuca, each pixel in a CCD sensor is devoted entirely to light capture. A photon hits the light-sensitive silicon and generates an electron. Once the exposure is complete, all the charge is transferred in a single electronic pulse to the adjacent light-protected area, then read out and converted to an analogue signal, leading to uniform image production.

CMOS is different in that it has an output amplifier associated with each pixel or group of pixels. The charge is moved into the amplifier and converted to a voltage, which is read off-chip as an analogue signal. The advantage is that it is much easier to move voltage than charge, which, along with parallel conversion across pixels, allows the high total bandwidth necessary for higher speed. Further, a CMOS sensor also includes amplifiers and noise correction, so the chip outputs digital bits. The disadvantage is that, because each location has a different amplifier, the conversion of charge to voltage may not be completely uniform across an image. The other on-chip functions also mean that not all of each pixel is devoted to light capture.

To combine the best of both CCD and CMOS, Imec has set up a process flow that combines CMOS readout with a CCD type of pixel. ‘Some things can still be done better in CCD than in CMOS, but we want to benefit from having the readout on the chip,’ said De Moor. This enables high-speed imaging: the CCD element inside each pixel acts as a memory element, so images captured at extremely high speed can be stored inside the pixel itself.

In another ‘co-design’ of CMOS and CCD, Time Delay Integration (TDI) uses a scanning camera. ‘Think of a belt with products moving by,’ said De Moor. A single linear array of pixels, read at regular time intervals, can capture a complete image of each item passing by. Using several lines of pixels improves the signal-to-noise ratio: the images sampled as the object moves along are added together. ‘This is nice in CCD: charges are moved from pixel to pixel just following the line on the belt,’ explained De Moor. Combining CCD with CMOS readout and electronics is a key element in terms of speed and power consumption for industrial inspection, and new demonstrator chips are showing promising results.
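The signal-to-noise benefit of adding up several line samples can be sketched numerically: if N TDI stages each see the same signal but independent read noise, the signal grows by N while the noise grows only by √N. A toy model, with entirely made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical line of the object passing under the camera.
pattern = np.array([0., 0., 5., 5., 0., 0., 0., 0.])
n_stages = 16   # number of TDI lines (pixel rows)
sigma = 1.0     # read noise per exposure, in the same arbitrary units

# Each stage sees the same line (belt motion synchronised with the
# charge transfer), plus its own independent noise.
samples = pattern + rng.normal(0.0, sigma, size=(n_stages, pattern.size))

single = samples[0]        # one exposure
tdi = samples.sum(axis=0)  # charges accumulated across all stages

# Signal grows by n_stages, noise only by sqrt(n_stages),
# so SNR improves by sqrt(n_stages): a factor of 4 here.
snr_single = pattern.max() / sigma
snr_tdi = (pattern.max() * n_stages) / (sigma * np.sqrt(n_stages))
print(snr_tdi / snr_single)
```

In a CCD TDI sensor this summation happens in the charge domain, as De Moor describes, by shifting charge from row to row in step with the belt.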


Filters and intensities

Another important feature that might make its way from mobile phone to machine vision is backside illumination for CMOS (see panel). A key feature of Android smartphones and Apple’s iPhone 4, this arrangement of imaging elements improves low-light performance by increasing the amount of light captured, thus increasing overall light sensitivity.

‘In many applications, you need high accuracy, which requires a light source with a very short wavelength,’ said Theuwissen. In the extreme case, ultraviolet light gives shorter wavelengths and higher-accuracy measurements, and near-UV imaging is critical in some semiconductor inspection markets. But in traditional frontside-illuminated devices, with the lens at the front, wiring in the middle and photodetectors at the back, the wiring layers absorb UV light before it reaches the photodiode. The solution is backside illumination, which is widely used in the consumer market, in both CMOS and CCD, but not yet in machine vision.

Another addition suitable for industrial inspection relates to optical filtering. Imec has developed band pass filters – hyperspectral filters – which are deposited directly on top of the pixels as an extension of the imager process. A very narrow band pass selection per row allows hyperspectral imaging, which looks at intensity as a function of wavelength in the visible range.

‘The RGB type of old filters gave colour images. But one step ahead is that you have different pixels being sensitive in a specific wavelength band in the visible domain,’ explained De Moor. Combining information from all the different wavelength bands with spatial information allows an image to be captured not in RGB but across many wavelengths. This provides a large amount of data and the potential for material analysis and product-quality evaluation. The miniaturised Imec hyperspectral filter technology will trigger applications in industrial and agricultural inspection, for example, but ‘may even end up in mobile phones. Envisage conceptually a part of the standard imager in a mobile phone, allowing you to screen tomatoes in the shop to check whether they’re ok or not!’ De Moor remarked.
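The tomato example hints at how such per-band data might be used: compare a pixel’s measured spectrum against reference spectra for known materials. A minimal sketch with entirely invented band values and class names:

```python
import numpy as np

# Hypothetical narrow-band intensities, one value per filter band.
bands_nm = np.array([450, 500, 550, 600, 650, 700])
ripe = np.array([0.1, 0.1, 0.2, 0.5, 0.9, 0.8])    # strong red reflectance
unripe = np.array([0.1, 0.3, 0.8, 0.4, 0.2, 0.1])  # strong green reflectance

def classify(spectrum, references):
    """Return the name of the nearest reference spectrum (Euclidean distance)."""
    names = list(references)
    distances = [np.linalg.norm(spectrum - references[n]) for n in names]
    return names[int(np.argmin(distances))]

refs = {"ripe": ripe, "unripe": unripe}
measured = np.array([0.1, 0.12, 0.25, 0.45, 0.85, 0.75])
print(classify(measured, refs))
```

Real hyperspectral classification uses far more bands and more robust spectral metrics, but the principle – matching measured intensity-versus-wavelength curves against known signatures – is the same.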


Predictions for future pixels

Machine vision applications need to provide the opportunity to make decisions quickly and accurately. ‘The problem is, “quick” means different things; it could be more frames per second, or it could mean more discrete images of an object so as to construct a 3D map; sometimes it means that data needs to be very clean so there’s no image processing that needs to be done,’ said DeLuca. Different clients want different things, and which sensor is best for the job remains a case-specific question.

Truesense Imaging provides a portfolio of sensors available to camera manufacturers. For any manufacturer-designed camera, any sensor can be plugged in and made to work simply by changing a few lines of code; the new CMOS sensors are all pin-and-package compatible. And unlike mobile phones and many consumer-driven markets, machine vision imaging tends to work predominantly in monochrome because it requires less data. But sometimes colour is important for product scanning, such as identifying individually coloured wires for a robot to solder correctly, so Truesense’s devices also offer different colour configurations.

Theuwissen predicted that future pixels will serve multiple purposes. Information from pixels is normally used to construct an image, but the same pixels can also perform other measurements. Major companies already make image sensors in which the 2D array includes designated focus pixels; these constantly check whether an image is in focus, but can no longer contribute to the image itself. CMOS technology can correct defective lines and columns. ‘But if you can correct all these pixels, you can also correct pixels that were used for something else,’ explained Theuwissen. Powerful CMOS processing means that the pixels used for focusing can then be corrected within the image.

De Moor added that pixel architectures based on avalanche photodiodes – photodetectors that provide internal signal gain and are sensitive to single photons – would allow high-speed industrial inspection in low-light conditions.

Hermans sees CMOS technology moving to smaller pixels and global shutter pixels in more classical applications on the consumer side.

And DeLuca sees CMOS and CCD as ‘complementary, existing side by side for the foreseeable future’. New technologies with high frame rates are enabling new markets that could not be reached efficiently using CCD. There is opportunity for the market to grow, but CCD continues to work very well too.

Historically, image sensor development was not aimed at the consumer. But a large consumer market has pushed sensor technology to its limits, including the development of advanced, low-noise pixels in CMOS. ‘In that sense, the know-how developed for consumer imagers and the availability of image fabrication processes will help to serve the industrial inspection imager market,’ said De Moor.

The technical requirements for industrial imagers are more specific and more difficult than for consumer imagers. In that sense, it is not copy and paste, but rather the need to develop specific technologies for specific situations that will drive any imaging application.



One of the technology trends that could find its way from consumer camera sensors to the machine vision and scientific imaging world is backside illumination.

Sony Exmor R CMOS sensors are one such example using the technology, which minimises noise and improves sensitivity with small pixel sizes. The Exmor R sensors are found in Sony’s Cyber-shot digital cameras and Handycam camcorders, but the same sensor technology is also being integrated into industrial and scientific cameras from Point Grey.

An Exmor R device is based on a backside illuminated design, which flips the silicon wafer so that light can reach the photodiode without passing through the wiring layer. A backside illuminated design, as opposed to a frontside illuminated approach, is typically used with extremely small pixels to address limited performance due to a small light collection area.

Point Grey’s Flea3 USB 3.0, 8.8 megapixel, rolling shutter model uses Sony Exmor R technology for improved sensitivity and dynamic range.
