Freeze frame

While imaging at high speed used to require a dedicated camera, now, with advances in CMOS technology, machine vision in general is getting faster, as Greg Blackman discovers

CMOS image sensors are gradually taking over from CCDs in machine vision, thanks to improvements in performance. One of the big advantages of the latest wave of CMOS sensors is that they operate at faster frame rates, which ultimately means machine vision is speeding up.

‘In the next year, with new ON Semiconductor and Sony [CMOS] sensors being released, general machine vision will run so much faster. Now that it’s all moving to CMOS, we’re seeing a dramatic increase in speed,’ commented Mark Williamson, director of corporate market development at Stemmer Imaging.

High-speed imaging means different things to different people, and there are specialised imaging setups for freezing the trajectory of bullets and the like that run at thousands or millions of frames per second. More general high-speed imaging – the type used in machine vision for instance – might cover hundreds of frames per second, which is the speed range Williamson is referring to with the new CMOS technology.

‘We’ve always had CMOS for high-speed, but whereas three years ago you’d have a very specialist sensor and a camera that would cost £15,000, now there are cameras for £1,000 that are small and offer similar performance,’ he said.

One of the nice things about the new CMOS sensors, Williamson noted, is that they can be windowed down to smaller regions of interest, both vertically and horizontally, to give faster frame rates, as well as offering multiple regions of interest. This makes them very scalable. Stemmer Imaging supplies the JAI Go 5 megapixel camera, for example, which can run very quickly when windowed down to 1 megapixel. The company has also supplied just this sort of high-speed, scalable camera technology for the Bloodhound SSC land speed record project (see ‘Test driving the 1,000mph car’ on page 24).

‘Whereas in the past cameras tended to be very specific with regards to the frame rate, now you can choose to have a system that is a lot more scalable depending on what you want to do with it,’ he said. ‘There’s been this massive change between having a camera that just does 1,000 fps at 640 x 480 pixels, to cameras that might do 200 fps at 5 megapixels, but will do 2,000 fps at less than 1 megapixel.’
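Williamson’s scaling numbers can be sanity-checked with a quick sketch. As a rough rule – and it is only a rough rule, since fixed overheads vary by sensor – readout time on these CMOS devices scales with the number of rows read out, so halving the vertical region of interest roughly doubles the frame rate. The sensor geometry and figures below are illustrative, not taken from any particular camera’s datasheet:

```python
# Rough sketch: for many CMOS sensors the readout time scales with the
# number of rows read out, so the frame rate rises roughly in proportion
# as the vertical region of interest shrinks. Illustrative numbers only.

def est_frame_rate(full_fps, full_rows, roi_rows):
    """Estimate the achievable frame rate for a vertical ROI, assuming
    readout time is proportional to rows (fixed overheads ignored)."""
    return full_fps * full_rows / roi_rows

# A hypothetical 5-megapixel sensor (2560 x 2048) running 200 fps full frame:
full_fps, full_rows = 200, 2048
for roi_rows in (2048, 1024, 512, 205):
    print(roi_rows, "rows ->", round(est_frame_rate(full_fps, full_rows, roi_rows)), "fps")
```

With a hypothetical 2560 x 205-pixel window – just over half a megapixel – the estimate lands near the 2,000 fps figure Williamson mentions.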

Putting this in context, testing the aerodynamics of car or aircraft components, or even cyclists, with particle image velocimetry (PIV) in a wind tunnel needs a high-speed camera, and new CMOS cameras are making this process much more cost-effective.

Dr Reinert Müller, managing director of the Fibus Research Institute for Image Analysis, Environmental Control and Flow Mechanics in Hamburg, commented: ‘Oftentimes in wind tunnels, very cost-intensive high-speed cameras are installed that reach high image rates over 1,000 fps at full resolution, but they put out such an enormous data volume that their interface can’t transfer it in real time to the host computer.’

The Fibus institute develops measurement methods and software solutions to accelerate wind tunnel experiments, and for PIV studies one model it has used is the Bonito CL-400 camera from Allied Vision. The Bonito has a 4 megapixel global shutter CMOS sensor running at 400 images per second at full resolution – relatively slow for PIV analysis, where capturing images of airflow and turbulence highlighted by atomised particles in the air needs as fast an image rate as possible. However, the camera can be windowed down to a region of interest to reach 1,000 fps. Where this camera saves the customer money, though, is in being able to transfer and process the images in real time – it uses a double 10-tap Camera Link Full+ interface to do this – rather than storing them onboard the camera and then transferring them after the test, which can take several minutes. ‘The use time saved on the wind tunnel and the well-paid engineers’ wait time is worth a lot of hard cash,’ Müller said. ‘Add to that the fact that the Bonito is markedly more affordable in comparison to the usual high-speed cameras.’
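The saving Müller describes is easy to put numbers on. The sketch below is back-of-envelope arithmetic, assuming 8-bit pixels and a usable Gigabit Ethernet download rate of around 100MB/s for a record-then-download alternative; neither figure comes from a datasheet:

```python
# Illustrative arithmetic: why real-time transfer saves wind-tunnel time
# compared with cameras that record onboard and download afterwards.

MPIX = 4_000_000          # 4 megapixel frames
FPS = 400                 # full-resolution frame rate
BYTES_PER_PIX = 1         # assuming 8-bit mono output

rate = MPIX * FPS * BYTES_PER_PIX        # bytes per second off the sensor
print(rate / 1e9, "GB/s sustained")      # 1.6 GB/s

# A 60-second run recorded onboard holds ~96 GB; pulling that out
# afterwards over a ~100 MB/s Gigabit Ethernet link would take:
run_s = 60
download_s = rate * run_s / 0.1e9
print(round(download_s / 60), "minutes of dead tunnel time")
```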

The Bonito camera can also generate two consecutive images for PIV analysis separated by only 550ns. It does this by eliminating the electronic shutter and using a laser light pulse synchronised with the camera. The short time lag between the two images means the particles’ path and velocity in the air are detected very precisely.
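The underlying PIV arithmetic is simple: velocity is the particle displacement between the two frames divided by the inter-frame time. The 550ns figure is the frame separation quoted above; the displacement and optical magnification below are hypothetical values chosen for illustration:

```python
# Minimal sketch of the PIV principle: a particle's velocity follows from
# its displacement between two frames divided by the inter-frame time.

dt = 550e-9        # s between the two exposures (figure quoted in the text)
dx_pixels = 2      # measured particle shift in pixels (hypothetical)
m = 10e-6          # metres per pixel in the object plane (hypothetical)

velocity = dx_pixels * m / dt
print(round(velocity, 1), "m/s")   # a plausible wind-tunnel flow speed
```

In practice the displacement is recovered by cross-correlating small interrogation windows between the two frames rather than tracking single particles, but the velocity calculation is the same.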

The bike and clothing worn by German cyclist André Greipel in this summer’s Tour de France were designed using particle image velocimetry in a wind tunnel at the Technical University of Dresden. Greipel’s bike, made by Ridley Bikes, and his helmet from Lazer Sport were tested with a PIV system from Intelligent Laser Applications using an Edge 5.5 camera from PCO. The analysis was used to optimise the performance of the bike and clothing.

Burst of speed

New CMOS sensors can output hundreds of frames per second, and yet Teledyne Dalsa has added burst mode functionality to some of its Gigabit Ethernet cameras to increase the speed further. The function captures a series of frames in quick succession. The camera then recovers by sending these images to the host while the next object on the conveyor presents itself. It’s not continuous high-speed imaging, but a burst of images.

‘The burst mode is very popular in the electronic packaging sector, where the requirements for 100 per cent inspection are becoming more stringent and the defect size is becoming smaller,’ commented Yvon Bouchard, technology director, Asia Pacific at Teledyne Dalsa.

The other area where the burst mode might be used is 3D imaging, added Bouchard, specifically using a single camera with multiple exposures under different structured lighting. ‘Here, there is a big requirement to be able to acquire multiple frames of the same target in rapid succession. The average rate doesn’t have to be that high, but the burst capability has to be high,’ he said.

Teledyne Dalsa’s burst function exploits the fact that inspection tasks usually need a lot of data in a short timeframe with some time to recover in between. ‘A lot of industrial inspection systems are based around this,’ Bouchard said.

The burst mode can also keep the price of the system manageable. ‘This burst capability was unheard of even five years ago and now it’s becoming more common,’ Bouchard said. ‘It opens up new possibilities.’ Combined with Teledyne Dalsa’s Turbo GigE capability, average GigE bandwidth can reach 150MB/s to 200MB/s.
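The trade-off Bouchard describes can be sketched numerically: the camera fills onboard memory at sensor speed during the burst, then needs enough dead time to drain it over the GigE link before the next part arrives. The burst length and frame size here are hypothetical; the link rate is the conservative end of the Turbo GigE range quoted above:

```python
# Back-of-envelope sketch of burst-mode timing: capture a short burst
# into onboard memory at sensor speed, then drain it to the host at
# GigE rates during the gap before the next part. Illustrative numbers.

frames = 16                    # frames in one burst (hypothetical)
frame_bytes = 2_000_000        # ~2 MB per frame (hypothetical, ~2 MP at 8-bit)
link_rate = 150e6              # bytes/s, lower end of the Turbo GigE figure

burst_bytes = frames * frame_bytes
drain_s = burst_bytes / link_rate
print(round(drain_s, 3), "s needed to send the burst to the host")
```

So long as parts arrive no faster than one every couple of hundred milliseconds in this example, the average link rate never limits the peak capture rate.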

Getting the data out

Data bandwidth can become a limiting factor as speed increases, which is why very fast cameras – operating at thousands of frames per second – tend to store the images onboard the camera and then download them after the event has occurred.

An interface like CoaXPress, which has a total bandwidth of 25Gb/s using four cables, is one option for high-speed imaging – Mikrotron’s EoSens CXP camera combined with Active Silicon’s FireBird frame grabber, for example, can record at 500 fps and 4-megapixel resolution with a 2GB/s acquisition speed. The FireBird card uses an eight-lane Gen2 PCI Express interface to handle the bandwidth, and consumes no CPU cycles, since all data is transferred to the PC’s memory in hardware using Active Silicon’s ActiveDMA engine.
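A quick consistency check on those figures, assuming 8-bit pixels and ignoring link-level protocol overhead:

```python
# Checking the quoted numbers: 500 fps at 4 megapixels against
# CoaXPress's 25 Gb/s aggregate across four links.

fps, mpix = 500, 4_000_000
camera_rate = fps * mpix          # bytes/s at 8 bits per pixel
cxp_rate = 25e9 / 8               # 25 Gb/s converted to bytes/s

print(camera_rate / 1e9, "GB/s needed")      # matches the 2 GB/s in the text
print(cxp_rate / 1e9, "GB/s raw capacity")   # ~3.1 GB/s, so it fits
```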

Some cameras now have enough computing power for processing to occur inside the camera at the maximum speed of the sensor, sending only the interesting data back to the PC. With these cameras triggering is less of an issue, because they can image continuously at high speed and only record and process the frames of interest. ‘You’re using the sensor as a part detector, which simplifies the overall system – you don’t need any extra hardware,’ explained Bouchard.

Illumination also has to be considered; LED strobes can now emit much more intense pulses of light, which – combined with sensitivity improvements in modern image sensors – makes for better solutions. Bouchard said: ‘The sensor has to be able to accept short light pulses and have a short integration time with low noise. In the past few years Sony has raised the bar on everyone that develops sensors in terms of sensor performance, and dynamic range especially is just outstanding. It’s probably at least an order of magnitude improvement in sensor performance within the last three to four years.’

Finnish company Cavitar provides laser illumination for high-speed events like monitoring welding or imaging ballistics. ‘Fast-moving objects, especially if they are small, demand powerful and intense illumination to avoid motion blur,’ said Erkki Lassila, vice president of solutions at Cavitar. The other advantage of using laser light is that thermal background radiation can be filtered out when imaging hot processes like welding – Cavitar’s lasers have a bandwidth of a few nanometres, so a narrowband filter will block all other wavelengths.

‘The camera exposure time can vary from 100ns for specialised cameras to 1µs or more for the majority of high-speed cameras,’ explained Lassila. ‘If an object is moving very quickly, an exposure time of 1µs can be too long to obtain a sharp image. If this is the case, a strong laser pulse can provide sufficient energy to illuminate the object and freeze motion.’
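Lassila’s point reduces to one line of arithmetic: motion blur is simply target speed multiplied by the exposure (or light-pulse) time. Taking the 750m/s bullet speed quoted later in the article as the target:

```python
# Motion blur scales linearly with the exposure or light-pulse time:
# blur = speed x exposure time.

def blur_mm(speed_mps, exposure_s):
    """Smear length in millimetres for a target moving at speed_mps."""
    return speed_mps * exposure_s * 1e3   # metres -> millimetres

bullet = 750  # m/s, the bullet speed quoted in the article
print(round(blur_mm(bullet, 1e-6), 4), "mm with a 1 microsecond exposure")
print(round(blur_mm(bullet, 10e-9), 6), "mm with a 10 ns laser pulse")
```

A 0.75mm smear swamps fine detail on a bullet-sized object, while a 10ns pulse keeps the blur to a few micrometres – well below a pixel in most setups.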

Cavitar’s lasers can generate 10ns pulses, which reduces motion blur significantly. They can pulse at megahertz repetition rates, or even run continuously for a limited duration.

‘The laser makes the difference in some cases as to whether the object can be visualised at all,’ commented Lassila, adding that it can also improve the image quality significantly.

Odos Imaging, a company better known for 3D time-of-flight (ToF) cameras, has employed its ToF technology to image some very fast events, including capturing an image of a bullet at an effective frame rate of 5 million frames per second.

The company has recently released a high-speed camera, the SE-1000, running at 450fps at a full frame resolution of 1,280 x 1,024 pixels, and at up to 17,000fps as the region of interest is narrowed.

The fact that the camera is based on the company’s ToF sensor has resulted in a cost-effective high-speed imaging system, according to Ritchie Logan at Odos Imaging. ‘As a time-of-flight camera, the internal timing requirements are incredibly tight,’ he said. ‘So, our high-speed camera, which is built using the same hardware [as the ToF system], has some extraordinarily good timing and triggering capabilities.’

The camera has 2GB of onboard storage – around 1.4 seconds of recording at 450fps – which means triggers have to be used effectively. Helpfully, the camera has a latency of less than 850ns from receiving a trigger.

‘We can easily capture a bullet travelling at 750m/s as a result of our timing and triggering capabilities,’ continued Logan.

A base frame rate of 450fps is not fast enough to capture images of bullets, but the laser illumination, normally part of the ToF light source, has a pulse duration of just 200ns, which replaces the requirement for a fast shutter – the effective exposure is 200ns, which equates to five million frames per second.

To capture an image of the bullet, Odos Imaging set up a foil just outside the field of view. The bullet passed through the foil and generated a trigger signal for the camera, which then opened the shutter and started an internal sequencer. One hundred microseconds later, the sequencer pulsed the first illumination module, followed by the second after a further 100µs. ‘That allowed us to get a single composite image, with the bullet positioned in two places – with a highly accurate time interval between,’ commented Logan.
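The ballistic numbers hang together. A quick check, using the 750m/s bullet speed, the 200ns pulse width and the 100µs pulse interval quoted in the article:

```python
# Checking the ballistic figures: a 200 ns light pulse acts as the
# shutter, and the two illumination pulses fire 100 us apart.

bullet = 750       # m/s, bullet speed as quoted
pulse = 200e-9     # s, illumination pulse width
gap = 100e-6       # s between the two pulses

print(round(1 / pulse))               # effective 'frame rate': 5,000,000/s
print(round(bullet * pulse * 1e3, 3)) # blur during one pulse: 0.15 mm
print(round(bullet * gap * 1e3, 1))   # spacing of the two bullet images: 75 mm
```

Dividing that 75mm separation by the 100µs interval recovers the 750m/s velocity, which is exactly the highly accurate time-interval measurement Logan describes.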

‘The SE-1000 is exceptional in that it has a fully programmable pulse generator or sequencer, which is what allowed us to control exactly when the illumination pulses were generated,’ Logan added. ‘To do this with another camera, you’d have to use an external pulse generator, but our approach is totally integrated. External pulse generators are pretty expensive.’

Odos Imaging has provided equipment for applications in ballistics and shot peening, where shot is imaged as it is emitted from a nozzle. For imaging even faster, transient events, equipment like streak and framing cameras starts to be used, such as systems from Specialised Imaging, which has supplied one to the Leibniz Institute for Plasma Science and Technology for scientists studying the mechanism of plasma breakdown. Specialised Imaging’s SIM framing camera can capture images at one billion frames per second, with gating down to 3ns.

A billion frames per second is at the upper range of image capture. In terms of the more general machine vision tasks, however, advances in CMOS sensor technology are unlocking the potential for imaging at fast frame rates just about everywhere.

To receive the latest issue of Imaging and Machine Vision Europe, subscribe for free here. 

Test driving the 1,000mph car

The car aiming to beat the world land speed record, Bloodhound SSC, was unveiled to the public at an event in London’s Canary Wharf on 24 September. Stemmer Imaging is providing high-speed cameras both onboard the vehicle and for use in external testing.

The 13.5m streamliner, which uses jet and rocket motors to produce approximately 135,000 thrust horsepower, is more than nine times more powerful than all the cars in Formula 1 combined. The current land speed record stands at 763mph, but the Bloodhound SSC team are aiming to reach 1,000mph when they attempt to break the record next year in South Africa. Later this year, the team will start UK testing.

The car has 24 cameras onboard, all provided by Stemmer Imaging. In addition, the team conducted tests offline with a high-speed camera from Optronis, also supplied by Stemmer Imaging, to gather information about how the vehicle’s rocket plume will ignite.

‘That’s very important,’ commented Mark Williamson, director of corporate market development at Stemmer Imaging. ‘The way the rocket ignites gives a clear indication of how the propulsion unit and engine management system is operating.’ How the rocket ignites will give different inertias and different forces, so it’s vital the team understand this. The Optronis camera can reach 5,000 fps.

The onboard cameras range in speed, but the camera monitoring the wheel-to-ground interface has one of the faster frame rates, according to Williamson. This will look at the vibration during the initial runs, to make sure there are no oscillations or instability.

When the vehicle starts to do its land speed record runs, there will always be three critical cameras constantly recording. Then the team can choose to record 12 cameras out of the 24 at any one time.

‘Because the car aims to reach 1,000 mph, the imaging needs to be fast, otherwise you’re just going to get unusable image data,’ said Williamson. The cameras will run up to 500 fps, with windowing to a certain region of interest, but – with up to 12 cameras running – it’s quite an impressive system, Williamson commented.

‘Most high-speed systems are just one camera,’ he said, ‘but these are all high-speed cameras all synchronised together, all time stamped back into the engine management system, so that when anything happens they can directly cross correlate recorded events with other sensors in the engine management system.’

The system uses highly optimised graphics processors to do the image compression, with an option to move to an FPGA if the bandwidth exceeds a certain level.

