Taking charge: the big sensor debate
Ten or so years ago, CCD image sensors were considered superior to CMOS for any imaging task where image quality mattered. Now, driven by the huge investment in CMOS sensors for mobile phone cameras, CMOS technology is catching up with CCD in terms of image quality, and according to some commentators has already caught up, to the point where they predict machine vision will be dominated by CMOS cameras in the not-so-distant future. The debate over the relative merits of CMOS and CCD is a complex one, though, and CCD technology is by no means standing still. There is much talk of CMOS taking market share from CCD but, depending on who you talk to, CCD remains an important image sensor technology in higher-end applications like machine vision.
The market for CMOS image sensors was $6.6 billion in 2012 and is predicted to reach $11 billion in 2017, according to a recent report by the French consulting company Yole Développement. Mobile handsets accounted for around 65 per cent of total CMOS sensor shipments in 2011, the report states, and it is this market from which much of the development in sensor fabrication technology derives. The report also expects the automotive market for CMOS sensors to reach $400 million in 2017, with cameras built into cars for greater safety and driver assistance. This is much more of an industrial market, requiring high-performance sensors with features such as global shutter, high dynamic range, and low-light sensitivity.
In industrial imaging there are basically three categories of silicon imagers: area scan, line scan, and time delay and integration (TDI) imagers. Dr Nixon O, technical director of imaging science and technology at Teledyne Dalsa, sets out the divisions between CCD and CMOS in these categories: ‘In machine vision, the majority of area scan imagers are CMOS. For line scan imagers, I speculate that the market is roughly evenly split between CMOS and CCD. TDIs are almost exclusively CCDs.’ Teledyne Dalsa designs and manufactures CCD and CMOS image sensors.
A difference in architecture
As a generalisation, CMOS devices perform better at higher frame rates for a given resolution than CCDs, while CCDs provide better pixel uniformity across an image. The architecture of a CCD is such that the pixels are read through a single amplifier, so the conversion of electrons into a voltage happens in one place for many pixels; on a CMOS device that conversion happens at each pixel. As a result, high-speed CMOS imagers can be designed to have much lower noise than high-speed CCDs, which, for machine vision, is a big advantage, according to Lou Hermans, COO at CMOS sensor manufacturer Cmosis.
Hermans states: ‘CMOS sensors are taking over from CCDs in terms of industrial imaging.’ He gives two reasons, one of them being the demand for higher frame rates in the machine vision industry, where CMOS performs better. The other is the advances in fabrication, which is now such that, in his opinion, CMOS image quality equals that of CCDs, while providing all the other advantages of CMOS, namely being able to integrate peripheral electronics on the chip itself. ‘By integrating the electronics onboard the chip, you can build smaller, lighter cameras,’ he says, which is important when integrating vision into tight spaces on a production line.
Typically, the only electronics integrated on a CCD is an analogue output amplifier. A CCD needs a separate controller chip and separate A/D converters, whereas on a CMOS image sensor the A/D converter and full control of the image sensor is integrated on the chip. ‘You have more components with a CCD, and once you’re targeting the frame rates that the machine vision industry is asking for today, the performance drops as well,’ says Hermans.
Cmosis sensors have a global shutter, similar to an interline CCD sensor, which is crucial for machine vision to freeze moving objects. The Cmosis design allows the imager to combine correlated double sampling with a global shutter, which lowers the noise. Guy Meynants, CTO at the company, admits that global shutter CMOS sensors are still a bit behind CCDs in terms of noise. ‘If you need a global shutter in combination with very low noise and you don’t have any speed constraints, then a CCD sensor is still superior to CMOS,’ he comments.
In machine vision, the majority of area scan imagers are CMOS, such as this sensor from Teledyne Dalsa. Credit: Teledyne Dalsa
The disadvantage of the CMOS architecture is that, because the conversion of electrons into a voltage happens at each individual pixel, every conversion is slightly different. The result is background noise, or non-uniformity, from pixel to pixel that manifests itself in the image.
There are a number of correction algorithms that can be applied to make the image from a CMOS device more consistent but, as Michael DeLuca, marketing manager at Truesense Imaging, points out, the advantage of CCD is that it doesn’t have that problem in its native state. The quality of the image can therefore be very high, which can be important in machine vision where throughput is often critical. ‘A high-quality CCD device doesn’t require this type of software overhead to pre-process the image, meaning image analysis can be faster,’ he says.
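The correction DeLuca refers to is typically a two-point, per-pixel calibration: subtract a dark reference frame to remove each pixel's fixed offset, then scale by a gain map derived from a flat-field frame. The sketch below, in Python with NumPy, uses simulated reference frames and is not any particular vendor's calibration routine:

```python
import numpy as np

def flat_field_correct(raw, dark, flat):
    """Two-point non-uniformity correction for a CMOS frame.

    raw  -- the frame to correct
    dark -- reference frame captured in darkness (per-pixel offset)
    flat -- reference frame of a uniformly lit scene (per-pixel gain)
    """
    gain = np.mean(flat - dark) / (flat - dark)  # normalised per-pixel gain map
    return (raw - dark) * gain

# Simulate a small sensor with per-pixel offset and gain variation
rng = np.random.default_rng(0)
offset = rng.uniform(0.0, 5.0, (4, 4))   # dark fixed-pattern noise
gain = rng.uniform(0.9, 1.1, (4, 4))     # photo-response non-uniformity
scene = np.full((4, 4), 100.0)           # uniformly lit scene

raw = scene * gain + offset              # what the sensor actually reports
dark = offset.copy()                     # dark reference frame
flat = 100.0 * gain + offset             # flat-field reference frame

corrected = flat_field_correct(raw, dark, flat)
# corrected is now uniform across all pixels, close to the true scene value
```

In practice the reference frames would be averages of many captures to suppress temporal noise, and this pre-processing step is exactly the overhead a uniform CCD avoids.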
At the recent Vision show in Stuttgart in November, Truesense Imaging, formerly the Image Sensor Solutions (ISS) division of Eastman Kodak Company, launched its first CMOS sensor, the KAC-12040, targeted at industrial applications, but it also launched two high-performance CCD sensors for the same market. DeLuca explains the CCDs are ideal for tasks requiring high image quality under demanding conditions where lighting is not controlled, such as imaging outdoors. ‘In situations like that, there are still a lot of really good advantages that CCD offers,’ he says. ‘For us, bringing a CMOS device to the market was not a situation of saying "CCD no longer has a place"; on the contrary, we feel CCD has a very important place, but at the same time we feel CMOS has an important place. We really view these as complementary technologies.’
CCDs still improving
While most commentators agree that CMOS is getting very close to the image quality provided by CCDs, improvements are still being made to CCD technology. ‘Both sensor technologies are improving,’ states Mike Gibbons, director of sales and marketing at Point Grey, ‘with CMOS image quality getting better and CCD frame rates getting faster.’ Point Grey’s CCD cameras are largely based on CCDs from Sony, while its CMOS models incorporate sensors from Aptina, Sony, On Semiconductor and e2v.
Gibbons says that a lot of the new CCD image sensors from Sony have a quad-tap architecture, meaning the sensor is split into four quadrants. Instead of transferring one large image from the sensor, quad-tap architectures allow four images to be transferred simultaneously, which means the sensors can run at faster frame rates. ‘If costs can be reduced for CCD technology and frame rates continue to increase, then why buy CMOS over CCD?’ he asks.
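The frame-rate benefit of multiple taps is simple to quantify: with each tap clocking pixels out in parallel, the achievable frame rate scales with the number of taps for a given per-tap pixel clock. An illustrative calculation, using hypothetical sensor figures rather than any specific Sony device:

```python
def max_frame_rate(width, height, pixel_clock_hz, taps):
    """Upper bound on frame rate when `taps` outputs read the sensor in
    parallel, each transferring `pixel_clock_hz` pixels per second."""
    return taps * pixel_clock_hz / (width * height)

# Hypothetical 2-megapixel sensor with a 40 MHz pixel clock per tap
single = max_frame_rate(1600, 1200, 40e6, taps=1)  # ~20.8 fps
quad = max_frame_rate(1600, 1200, 40e6, taps=4)    # ~83.3 fps
```

Real sensors fall short of this bound because of line and frame overheads, but the fourfold scaling is the reason quad-tap CCDs can approach CMOS frame rates.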
New developments aside, CCD sensors have definite advantages over CMOS in two areas: TDI imagers and near-infrared sensitivity. Time delay and integration (TDI) imagers are useful when the light signal is weak. They operate by summing the signal from a number of pixel lines, effectively adding together multiple snapshots of a scene to generate a stronger signal. CCD and CMOS TDIs sum these snapshots differently: CCDs combine signal charges, while CMOS combines voltage signals. The summing operation is noiseless in CCDs but not in CMOS, and as the number of rows increases, the accumulated noise from the summing operation becomes such that CCD TDIs outperform even the most advanced CMOS equivalents.
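The effect of the extra summing noise can be seen with a simple model: shot noise grows with the total collected signal in both cases, but read noise is added once in a CCD-style charge-summing TDI and at every stage in a voltage-summing CMOS-style TDI. The figures below are illustrative, not measurements from any specific device:

```python
import math

def tdi_snr(signal_per_line, stages, read_noise, per_stage_read):
    """Illustrative SNR model for an N-stage TDI imager.

    signal_per_line -- mean photoelectrons collected per line
    stages          -- number of TDI lines summed
    read_noise      -- read noise in electrons (rms)
    per_stage_read  -- True for voltage summing (CMOS-style), where read
                       noise is added at every stage; False for charge
                       summing (CCD-style), added once at readout
    """
    total_signal = signal_per_line * stages
    shot_var = total_signal  # Poisson shot noise: variance equals signal
    read_var = read_noise ** 2 * (stages if per_stage_read else 1)
    return total_signal / math.sqrt(shot_var + read_var)

# Weak signal: 5 e-/line, 10 e- read noise, 128 TDI stages
ccd_snr = tdi_snr(5, 128, 10, per_stage_read=False)   # read noise added once
cmos_snr = tdi_snr(5, 128, 10, per_stage_read=True)   # added 128 times
```

With these numbers the charge-summing imager comes out several times better, and the gap widens as more stages are added, which is why TDIs remain almost exclusively CCD.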
The other area where a CCD imager is preferable to CMOS is in imaging in the near-infrared (NIR). Dr Nixon O at Teledyne Dalsa comments: ‘CCDs that are specifically designed to be highly sensitive in the near infrared are much more sensitive than CMOS imagers.’
The reason, as Dr O explains, is that most CMOS imager fabrication processes are tuned for high-volume applications that only image in the visible. NIR sensitivity is all to do with the substrate thickness, or the epitaxial layer thickness, of the sensor. Increasing this to improve NIR sensitivity in CMOS sensors degrades the spatial resolution of the imager, whereas it is easier to preserve the ability of a CCD sensor to resolve fine spatial features while fabricating it with thicker epitaxial layers. In some near infrared CCDs, according to Dr O, the epi layer is more than 100μm thick, compared to the 5-10μm thick epi layer in most CMOS imagers.
The cost of the two sensor technologies is difficult to determine accurately. Dr O comments that it is generally cheaper to develop a custom CCD than it is to develop a custom CMOS imager, because CMOS uses more expensive deep submicron masks and requires more circuitry. That said, CMOS imagers can potentially make use of larger economies of scale and will therefore have lower unit cost.
CCD and CMOS technologies continue to be fine-tuned. Credit: Shcherbakov Sergii/Shutterstock
Dr O also makes the point that security of supply is important: even when one technology offers a better value proposition, it may be wiser to choose the company best able to produce the imager over the long term, whether that’s CMOS or CCD.
From a camera manufacturer’s perspective, Point Grey offers both a 1.3-megapixel CCD GigE camera and a 1.3-megapixel CMOS USB 3.0 camera, which Gibbons says are around the same price.
The future of CMOS vs CCD
In general, most machine vision applications are moving to using CMOS sensors, while more scientific applications and those requiring very high image quality without the fast frame rates will continue to use CCD. Dr O comments: ‘I expect that most high volume area and line scan imager applications will eventually migrate to CMOS. For custom, lower volume, more specialised applications, the value proposition can still favour a custom CCD.’
He adds: ‘In the near future, TDIs will continue to be primarily CCDs. In the more distant future, TDIs may evolve into a hybrid, incorporating elements from CCD and CMOS.’
Hermans of Cmosis agrees: ‘There will always be niche markets for CCDs in scientific applications like astronomy where CCD might offer specific advantages. In the long term, for machine vision, everything will become CMOS.’ He adds, however, that the product lifetime in machine vision is long, so it might take five or ten years to make that transition.
In terms of fabrication, both CCD and CMOS technologies are continually being refined and fine-tuned. Meynants at Cmosis suggests that the need for higher resolution and smaller pixels in machine vision sensors could result in a switch to different CMOS processing technologies. Once again, developments in this area come from companies fabricating sensors for mobile phone cameras, which can have pixels as small as 1μm. By comparison, Cmosis’ 12-megapixel CMV12000 sensor has 5.5μm pixels, although in industrial cameras pixel sizes tend to be larger to give better performance.
On top of that, Meynants says if frame rates remain constant and resolution increases, the total data rate of the sensor increases, which requires faster A/D converters, faster readouts, and faster circuits. ‘That’s the other challenge: increased readout speed.’
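The readout challenge Meynants describes is easy to put in numbers: the raw data rate the sensor's readout chain must sustain is simply resolution times frame rate times bit depth. A quick illustration with hypothetical figures:

```python
def sensor_data_rate_gbps(width, height, fps, bits_per_pixel):
    """Raw data rate a sensor's readout chain must sustain, in Gbit/s."""
    return width * height * fps * bits_per_pixel / 1e9

# Hypothetical 12-megapixel sensor read out at 60 fps with 10-bit pixels
rate = sensor_data_rate_gbps(4096, 3072, 60, 10)  # ~7.55 Gbit/s
```

Doubling the resolution at a constant frame rate doubles this figure, which is what drives the need for faster A/D converters and readout circuits.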
According to the market report from Yole Développement, one of the next technological breakthroughs will likely come from Sony, which has developed the first stacked CMOS sensor architecture for the consumer market. Stacking pixels on the signal processing circuit rather than next to each other will optimise the manufacturing process of each circuit and provide sensors with greater sensitivity, faster readout, and much higher signal processing integration.
At a much more experimental level, researchers at the Fraunhofer Institute for Microelectronic Circuits and Systems (IMS) have introduced an ultrasensitive CMOS image sensor based on single-photon avalanche photodiodes (SPADs). The sensor was developed as part of the MiSPiA project consortium. Its pixel structure can count individual photons within a few picoseconds, making it a thousand times faster than comparable models.
It is difficult to say how quickly or to what degree the respective sensor technologies of CMOS and CCD will wax and wane. Improvements in sensor technology will take their cue from volume consumer applications like mobile phone cameras, but the higher-end machine vision market will have its own specific requirements when it comes to image quality and performance, and there may well be a place for both.
Choosing between CCD and CMOS
Most machine vision camera manufacturers supply both CMOS and CCD models, and from an end-user perspective the difficulty lies in deciding between them. ‘In our industry there are a lot of "ice cube" size cameras; there are a lot of cameras with the same form factors and specs but with different image sensors,’ states Mike Gibbons, director of sales and marketing at camera manufacturer Point Grey. ‘One might have a 1.3-megapixel e2v CMOS sensor, another might have a 1.3-megapixel On Semiconductor CMOS sensor, or a 1.3-megapixel Sony CCD. The difficult part for the end-user is how to differentiate between them.’
This is where the EMVA 1288 standard from the European Machine Vision Association comes into play. The standard aims to provide a means of comparing imaging performance of cameras from different vendors using a standard test methodology. ‘EMVA 1288 helps the customer decide which camera to choose if, for instance, low read noise is important for the application,’ says Gibbons. ‘Point Grey determines a lot of its imaging performance measurements using EMVA 1288 methodology, and that’s really going to become very important in the future.’
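One of the core EMVA 1288 measurements is the photon-transfer method: temporal noise variance is plotted against mean signal, and a straight-line fit yields the overall system gain and the dark (read) noise, so cameras can be compared in electrons rather than vendor-specific digital numbers. A simplified sketch using synthetic data, not a full implementation of the standard:

```python
import numpy as np

def photon_transfer(mean_signal, temporal_variance):
    """Estimate system gain and read noise via the photon-transfer method.

    In the shot-noise-limited regime the temporal variance (in DN^2)
    grows linearly with the mean signal (in DN):
        var = K * mean + var_dark,
    where K is the system gain in DN per electron.  The read noise in
    electrons is then sqrt(var_dark) / K.
    """
    K, var_dark = np.polyfit(mean_signal, temporal_variance, 1)
    read_noise_e = np.sqrt(var_dark) / K
    return K, read_noise_e

# Synthetic measurements from a hypothetical camera with
# gain K = 0.5 DN/e- and read noise = 8 e-  (var_dark = (0.5*8)^2 = 16 DN^2)
mean = np.array([100.0, 400.0, 1600.0, 6400.0])
var = 0.5 * mean + 16.0
K, noise_e = photon_transfer(mean, var)
```

In a real EMVA 1288 measurement the mean and variance pairs come from pairs of frames captured at a series of exposure levels, and the standard also specifies how to report quantum efficiency, dynamic range, and non-uniformity.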