
Keeping up with fast frame rates

High-speed imaging has come a long way in recent years. The fastest cameras now boast frame rates of several million frames per second, and high-speed capabilities are even being added to mobile phones, as witnessed by last month’s release of the iPhone 6, which can capture 240 frames per second. But although newer CMOS technologies, more sophisticated shuttering mechanisms and laser illumination systems have all enabled higher speeds at improved image quality, users of high-speed cameras are still demanding greater sensitivity and higher resolution.

One of the most common uses of high-speed imaging is slow-motion analysis, whereby a high-speed event is recorded using a high-frame-rate camera and played back at a slower pace to obtain additional information about movement. It is used in numerous applications, from biologists studying high-speed animal motions such as frogs jumping or birds flying, to athletes evaluating their performance in order to improve their technique.

For applications where abnormalities need to be detected immediately, however, machine vision is used to inspect fast-moving objects in real time. Although the technology employed is similar, software is used to analyse the images in real time, as opposed to examining them afterwards.

The speed of cameras has increased considerably over recent years, and applications are requiring ever higher speeds. ‘We are always looking at higher speeds,’ said Andrew Bridges, director of sales and marketing at Photron. ‘For example, there is a lot of interest these days in the glass they use for cell phones. But just seeing the way that the pieces shatter after the glass has broken is not particularly fast − what a lot of people are interested in now is the way the actual crack moves through the glass. That is typically moving at the speed of sound, so you need 100, 200 or 500 thousand frames per second to see how the crack propagates and to reveal weaknesses in the sheet of glass.’
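
To put those numbers in perspective, a back-of-the-envelope sketch (using an assumed crack speed of around 1,500m/s, a typical ballpark for glass rather than a figure quoted by Photron) shows how far a crack front travels between consecutive frames at different frame rates:

```python
# Back-of-the-envelope: distance a crack front travels between frames.
# The 1,500 m/s crack speed is an assumed ballpark for glass, not a
# figure quoted in the article.
CRACK_SPEED_M_S = 1_500

for fps in (1_000, 100_000, 200_000, 500_000):
    advance_mm = CRACK_SPEED_M_S / fps * 1_000  # mm travelled per frame
    print(f"{fps:>9,} fps -> crack advances ~{advance_mm:.2f} mm per frame")
```

At a few hundred thousand frames per second the crack front moves only a few millimetres between frames, which is what makes frame-by-frame analysis of the fracture possible.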

An important component that has allowed such high speeds is the sensor, according to Bridges: ‘The heart of any high-speed camera is very much the sensor. The sensors are unique to us − we spend half a million to a million dollars having sensors designed.’ Compared with conventional video cameras that record at around 25 to 50 frames per second, high-speed cameras can nowadays boast incredibly high frame rates; Photron’s latest high-speed camera, the FastCam SA-Z, can operate beyond two million frames per second at reduced resolution. It is the design of the sensor, Bridges pointed out, that is central to coping with the huge quantities of data that come with these frame rates. ‘The great secret behind [the sensors] is that they have so many channels − we can have 256, for example − trying to stream the data away from the sensor.’
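
A rough calculation illustrates why so many parallel readout channels are needed; the resolution, bit depth and frame rate below are illustrative assumptions, with only the 256-channel figure coming from Photron:

```python
# Hedged sketch of the off-sensor data rate a high-speed sensor must sustain.
# Resolution, bit depth and frame rate are assumptions for illustration;
# only the 256-channel count is mentioned in the article.
pixels = 1024 * 1024        # assumed ~1 Mpx sensor
bit_depth = 12              # assumed bits per pixel
fps = 20_000                # assumed frame rate
channels = 256              # channel count mentioned by Photron

total_gbps = pixels * bit_depth * fps / 1e9
print(f"Total off-sensor data rate: ~{total_gbps:.0f} Gb/s")
print(f"Per channel ({channels} channels): ~{total_gbps / channels * 1000:.0f} Mb/s")
```

Splitting a stream of this order across hundreds of channels keeps the rate per channel manageable for the readout electronics.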

Improvements in CMOS technology over the last couple of years have led to an industry shift from CCD towards CMOS sensors. Previously, CMOS was rarely the sensor of choice for camera manufacturers because of limitations brought about by its design. ‘It was always said that if you wanted a sensitive sensor, you needed to take a CCD sensor. This was because the whole pixel area can be used to capture light, and the processing for the pixel is outside,’ explained Susanne Rehrl, sales manager at Mikrotron. ‘In the CMOS sensor, you used to have all of the electronics to process the signal inside the pixel, which meant that only half of the pixel area was light-sensitive. With newer CMOS sensors, the light-sensitive area becomes bigger because the electronics in the pixel shrink.’

CMOS sensors offer a variety of advantages over CCD technology. Apart from being easier to manufacture and therefore more readily available, they offer more freedom in design for camera manufacturers. ‘Everybody wants to use CMOS because they are much more flexible in control, electronic design − more things are possible with these sensors,’ said Rehrl.

CMOS sensors also avoid undesirable effects that CCD sensors often exhibit when capturing bright light sources, such as the sun or car headlights. ‘CCD technology suffered from blooming or smearing, where if you had a very bright spot − it could be just a reflection from a piece of glass, for example − it streaked vertically down the image. This is because the pixel, when it is overexposed, would overflow to neighbouring pixels and cause this streaking in the image,’ described Bridges. ‘CMOS works the other way, so it cannot overexpose the neighbouring pixels, and it avoids the phenomena of smearing or blooming that the CCD technology suffered from.’

The type of electronic shuttering mechanism is also important for ensuring image quality when capturing fast-moving objects. With a rolling shutter, the pixels are exposed one row at a time, starting from the top of the imager. ‘If you use a rolling shutter, the exposure starts at the upper part of the image and scrolls down over the sensor,’ explained Rehrl. ‘If you imagine a fast-moving car, the exposure starts and the car would be in one position, and a little bit later, when the exposure starts for the bottom line, it will shift a little bit.’ This causes strange effects in the final image, Photron’s Bridges pointed out: ‘You get these rather strange phenomena, where for instance if you have a ruler or a straight edge and you’re moving it very fast it appears to be bent, because it has moved from one line to the next. The line could be as little as a millisecond later, but if the object is moving fast enough, then there is definitely a difference.’
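
The size of the effect is easy to estimate. The sketch below assumes an illustrative line readout time, row count and object speed − none of which are figures from the article − and works out the apparent shift between the top and bottom of the frame:

```python
# Minimal sketch of rolling-shutter skew. All values are illustrative
# assumptions, not specifications from any camera mentioned in the article.
line_time_s = 10e-6      # assumed delay between the start of successive rows
rows = 1024              # assumed number of sensor rows
speed_m_s = 30           # assumed object speed (roughly a car at 110 km/h)

# By the time the last row starts exposing, the object has moved:
skew_m = speed_m_s * line_time_s * (rows - 1)
print(f"Apparent shift between top and bottom of frame: ~{skew_m * 100:.1f} cm")
```

Even a modest per-row delay accumulates over the full height of the sensor, which is why straight edges appear bent.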

With what is known as a global shuttering mechanism, the entire imager is reset before integration to remove any residual signal in the sensor wells. ‘It is very important that we use a global shutter with a high-speed camera, where all of the pixels are exposed at the same time,’ noted Bridges. ‘All of our high-speed cameras have a shutter that can be set independent of the frame rate.’

Although CMOS technology now offers improved sensitivity, higher speeds mean shorter exposure times, so, as speeds keep increasing, light sensitivity needs to improve with them. ‘We need to increase the light intensity. When you’re running at 10,000 frames per second you only have 0.1 milliseconds of shutter time to actually capture any light,’ Bridges explained. ‘So it is very important that the camera is very light-sensitive, so you need less additional lighting.’
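
The underlying relationship is simply that the exposure can never be longer than the frame period, i.e. exposure ≤ 1/frame rate, as the short sketch below illustrates:

```python
# The exposure (shutter) time cannot exceed the frame period:
# max_exposure = 1 / frame_rate.
for fps in (50, 10_000, 100_000, 2_000_000):
    max_exposure_us = 1e6 / fps  # microseconds of light available per frame
    print(f"{fps:>9,} fps -> at most {max_exposure_us:,.2f} µs per frame")
```

At 10,000fps the budget is 100µs (0.1 milliseconds), and at two million frames per second it shrinks to half a microsecond, which is why sensitivity and lighting matter so much at the top end.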

When additional lighting is required, particularly in applications where there is limited light, such as in military situations, the recent advances in the speed and power of lasers have made them an ideal choice of light source for high-speed applications.

Released in September by Specialised Imaging and Cavitar, the SI-LUX640 is a laser illumination system based on pulsed high-power laser diode technology. It consists of a class 4 laser and high-speed electronics, and can create short nanosecond pulses with a typical optical output power of 400W at a 640nm wavelength. The pulses are of a higher power than other light sources, such as LEDs, which helps to provide frozen images of transient events lasting just a few hundred nanoseconds. It can be fibre-coupled for use in hard-to-access places, and the low-coherence laser light output means it can be interfaced to almost any high-speed camera system.
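
A short pulse freezes motion because the blur is roughly the object speed multiplied by the effective exposure, which the pulse sets when it is much shorter than the camera’s shutter. The speeds and pulse width in the sketch below are illustrative assumptions, not SI-LUX640 specifications:

```python
# Hedged sketch: motion blur is roughly object speed x effective exposure.
# A pulse much shorter than the electronic shutter sets that exposure.
# All values are assumptions for illustration.
speed_m_s = 500            # assumed object speed, e.g. a fast fragment
pulse_s = 100e-9           # assumed ~100 ns laser pulse width
shutter_s = 1e-6           # assumed 1 µs electronic shutter for comparison

print(f"Blur with 1 µs shutter alone:  {speed_m_s * shutter_s * 1e6:.0f} µm")
print(f"Blur with 100 ns laser pulse:  {speed_m_s * pulse_s * 1e6:.0f} µm")
```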

However, for a growing number of users who require the highest speeds, black and white camera systems are becoming a more attractive option as they offer greater sensitivity. ‘Recently we have seen an increase in black and white, or monochrome, systems versus colour,’ stated Bridges. ‘The main reason for that is that colour systems need more light − all high-speed cameras start as black and white, and then we put a colour filter over the monochrome sensor to give it the red, green and blue components. But, that absorbs more light. Typically, a monochrome system is two or three times more light-sensitive than the colour equivalent. So, if people are pushing the envelope for speed and they are looking for serious analysis, as opposed to pretty images for PR purposes, monochrome typically is going to be the best seller. We sell about 70 per cent monochrome systems.’

Indeed, when it comes to applications that require speeds in the range of tens of thousands of frames per second or more, sacrificing colour will not make a massive difference to the final image quality, as the resolution decreases when the frame rate goes up. ‘In the last five years we have gone from being able to do megapixel resolution at 5,000fps to 20,000fps. But when you get up beyond the megapixel range there are not so many applications where the images are going to be acceptable for PR purposes,’ said Photron’s Bridges. For users of high-speed cameras, it is about getting the right balance between the speed, resolution and cost for the application in hand. ‘When we get to two million frames per second the resolution is around 128 pixels wide by eight high. But there are applications where that is acceptable, and then this becomes a fairly low-cost solution,’ Bridges added. ‘There are other systems out there that can do higher resolutions at those speeds but they cost around $250,000.’

The CoaXPress interface standard introduced in 2011 was significant for reaching higher resolutions at higher speeds, as it increased download speeds to 6.25Gb/s per cable − or 25Gb/s over four aggregated cables − for video, images and data. This has been particularly important for machine vision applications where the data transfer needs to be instant. ‘Now we can achieve higher resolution together with a higher frame rate because of the new interface standard,’ said Mikrotron’s Rehrl. ‘This is more for the machine vision cameras, because all images, all the captured data, has to be transferred immediately. You need an interface that is fast enough to transfer all of this data.’
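
As a rough guide to what those link rates mean in practice, the sketch below estimates how many CXP-6 cables a given image stream would need; the resolution, bit depth, frame rate and the assumption of roughly 80 per cent usable payload after 8b/10b encoding are illustrative, not Mikrotron figures:

```python
# Hedged sketch: how many CoaXPress CXP-6 links a stream might need.
# Assumes the 6.25 Gb/s CXP-6 line rate with ~80% usable for image payload
# after 8b/10b encoding; camera parameters are illustrative assumptions.
import math

pixels = 1920 * 1080     # assumed resolution
bits_per_pixel = 8       # assumed 8-bit mono output
fps = 1_000              # assumed frame rate

payload_gbps = pixels * bits_per_pixel * fps / 1e9
usable_per_link_gbps = 6.25 * 0.8
links = math.ceil(payload_gbps / usable_per_link_gbps)
print(f"Payload: {payload_gbps:.1f} Gb/s -> roughly {links} CXP-6 link(s) needed")
```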

Fast track to the future

For high-speed technologies to become even faster and deliver better-quality images, light sensitivity and resolution are the factors that camera manufacturers will continue to improve, while weighing cost against performance. ‘I think the resolution and the light sensitivity are going to be the areas for development in the future, and of course cost,’ said Bridges.

According to Mikrotron’s Rehrl, for machine vision applications the demand for improvements to other camera features will take precedence over the demand for even higher speeds: ‘I think now there are always demands for higher speed, but I think this will lower a little bit, and other factors like sensitivity, controlling options, distance − there are other features that will become more important,’ she explained. ‘This is because if you see production in a factory, there is a limit; nothing is produced in 500 pieces per second.’ But again, the requirements will always vary depending on the application. ‘The newer applications, such as laser triangulation, need high speeds because you need more images per piece. For this, the desire for more speed is there and will rise in the future, but I think there is a limit somewhere,’ Rehrl concluded.



The high temperatures reached during industrial welding make infrared sensing, in particular in the MWIR band, a powerful tool for monitoring such processes. Now, with fast-response IR sensors, in-line monitoring can provide real-time information about the quality of the welding process. Matthew Ashton, product manager at Pacer, explained: ‘Laser or spot welding in the automotive industry are techniques where the final results are directly related to the process dynamics (cooling and heating rates, density of spatters, power variations, pores, etc). All the important events occur over timescales of a few milliseconds.’

New Infrared Technologies has developed an IR system, supplied by Pacer, which provides, in uncooled operation, high-speed detection capabilities ranging from 100 to 10,000 frames per second. This is ideal for in-line production monitoring of welding processes.

Active thermography is another technique operating in the infrared that uses high-speed cameras. It is a method that’s progressed beyond R&D and is now also used for industrial applications. ‘Only recently with the arrival of high-performance indium antimonide arrays have pixel counts grown to allow meaningful image dimensions,’ said Ian Johnstone, sales and marketing director at Armstrong Optical.

The basis of the technique involves using a light source, such as a lamp or a pulsed laser, to create a pulse of heat that diffuses through the structure, which is then recorded with a high-speed camera. Gas turbine blades are commonly tested in this way to detect faults in the blades, such as cracks or inclusions, which alter the way the heat flows. ‘Image acquisition by the camera is synchronised with the pulses to generate a ‘phase map’ and an ‘amplitude map’ with the defects standing out from the homogeneous background,’ explained Johnstone.
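
As an illustration of the principle − and not Armstrong Optical’s actual processing chain − the sketch below shows one common way to derive amplitude and phase maps, by correlating each pixel’s time series against a reference at the excitation frequency:

```python
# Minimal sketch of lock-in processing for active thermography (illustrative,
# not the workflow of any specific system mentioned in the article).
import numpy as np

def lockin_maps(frames: np.ndarray, fps: float, excitation_hz: float):
    """frames: (num_frames, height, width) stack of IR images synchronised
    with a periodic excitation at excitation_hz."""
    n = frames.shape[0]
    t = np.arange(n) / fps
    # Complex reference at the excitation frequency (lock-in detection).
    ref = np.exp(-2j * np.pi * excitation_hz * t)
    # Correlate every pixel's time series with the reference.
    analytic = np.tensordot(ref, frames, axes=(0, 0)) * 2 / n
    amplitude = np.abs(analytic)    # strength of the thermal response
    phase = np.angle(analytic)      # delay of the response: defects stand out
    return amplitude, phase

# Example with synthetic data: 200 frames at 100 fps, 1 Hz excitation.
rng = np.random.default_rng(0)
frames = rng.normal(size=(200, 64, 64))
amp, ph = lockin_maps(frames, fps=100.0, excitation_hz=1.0)
print(amp.shape, ph.shape)
```

Defects such as cracks or inclusions alter the local heat flow and so show up as deviations in the phase and amplitude maps against an otherwise homogeneous background.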

The heat flow through turbine blades is extremely fast due to the material from which they are manufactured, such as metal or ceramic. There are various options of high-speed cameras for this application, depending on the frame-rate and resolution that the user requires. ‘For frame rates up to 60Hz, uncooled detector thermography cameras such as the VarioCam HD camera with a 1,024 x 768 pixel microbolometer sensor array can be used,’ explained Johnstone. ‘Faster frame rates (up to kilohertz rates with reduced frame sizes) can be achieved with cooled detector thermography cameras such as the ImageIR 9300 with a 1,280 x 1,024 pixel indium antimonide sensor array. Cooled indium antimonide arrays are photon detectors and can have microsecond integration times, thereby allowing kilohertz frame rates.’
