
Faster than the eye can see

What do parachutes and golf have in common? Both are benefiting from the application of high-speed vision technology. The details of a golfer’s swing, and the deployment of the parachute from a NASA spacecraft on re-entry, happen too fast for the human eye to see, but high-speed imaging can, in effect, slow down time, resolving events that may be over before they’re noticeable – but which are too important to miss.

High-speed cameras can be thought of as still cameras which take hundreds, thousands, or even millions of individual still images each second, according to Rick Robinson, vice president of Vision Research, a company in the Ametek group which specialises in developing high-speed cameras. ‘The main thing that makes a high-speed camera a high-speed camera is the internal technology,’ he says. ‘The sensor must be capable of taking many pictures per second and sending each picture to the camera electronics quickly. This usually implies a tremendous amount of parallelism in the design. The camera’s electronics must also be able to ingest this data very quickly, do whatever manipulation is required to each image (black balance, bad pixel correction, etc) and then store it in memory. The camera must have large amounts of very fast memory to store the images, and it also needs a way to offload that memory, do colour interpolation for a colour camera, etc. Often, these tasks are all done in hardware using dedicated electronics and FPGAs.’
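As a rough software illustration of the per-frame manipulation Robinson describes, the sketch below applies a black-balance subtraction and a simple bad-pixel correction to a raw frame. In a real high-speed camera these steps run in dedicated electronics and FPGAs; the neighbour-averaging correction shown here is an assumed strategy, not Vision Research’s actual algorithm.

```python
import numpy as np

def correct_frame(raw, black_frame, bad_pixel_mask):
    """Apply per-frame corrections before storing a frame to memory."""
    # Black balance: subtract the sensor's dark-level reference frame.
    frame = raw.astype(np.int32) - black_frame
    np.clip(frame, 0, None, out=frame)

    # Bad-pixel correction: replace known-defective pixels with the
    # mean of their horizontal neighbours (an assumed, simple strategy).
    left = np.roll(frame, 1, axis=1)
    right = np.roll(frame, -1, axis=1)
    frame[bad_pixel_mask] = (left[bad_pixel_mask] + right[bad_pixel_mask]) // 2
    return frame.astype(np.uint16)

# Example: a 128x128-pixel, 12-bit frame with two known bad pixels.
rng = np.random.default_rng(0)
raw = rng.integers(0, 4096, size=(128, 128), dtype=np.uint16)
black = np.full((128, 128), 64, dtype=np.int32)  # dark-level offset
bad = np.zeros((128, 128), dtype=bool)
bad[10, 20] = bad[77, 5] = True
clean = correct_frame(raw, black, bad)
```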

Balancing quality and quantity

The fastest dedicated high-speed cameras are used primarily as data acquisition tools, able to collect many images over a relatively short period of time for analysis after the event. Vision Research’s cameras have recently been used for observation of lightning strikes, optimisation of parachute deployment in NASA re-entry vehicles, and analysis of the failure of building materials in an explosive blast.

The fastest cameras are capable of frame-rates in excess of one million frames per second, but Robinson says that frame-rate alone is not the measure of the performance of a camera, noting that few customers require such high frame-rates: ‘Sometimes we’re tempted to talk about, and brag about one million frames per second, and people say “well, why would we ever need that?” Well, they might not, but cameras that can do one million frames per second will also give you the highest resolution at a fixed speed, or the highest speed at a fixed resolution.’

For this reason, high-speed camera manufacturers quote the performance of their devices in terms of the bandwidth of which they’re capable. The high-end cameras on offer from Vision Research can record more than seven gigapixels per second, which could mean that a one-megapixel image is recorded seven thousand times per second, or that a seven-thousand-pixel image is recorded a million times per second.
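The trade-off reduces to simple arithmetic: at a fixed pixel throughput, resolution and frame-rate are inversely proportional. A minimal sketch, using the seven-gigapixel-per-second figure quoted above:

```python
# At a fixed pixel throughput, resolution and frame-rate trade off
# directly: halving one doubles the ceiling on the other.
THROUGHPUT_PX_PER_S = 7_000_000_000  # the 7Gpx/s quoted above

def max_frame_rate(resolution_px):
    """Highest sustainable frame-rate at a given resolution."""
    return THROUGHPUT_PX_PER_S / resolution_px

print(max_frame_rate(1_000_000))  # 1Mpx image    -> 7,000 fps
print(max_frame_rate(7_000))      # 7,000px image -> 1,000,000 fps
```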

The pixels-per-second unit contrasts with the bits-per-second more commonly quoted when discussing camera bandwidth, but Robinson explains that this is because the internal electronics of the camera are essentially agnostic to the bit depth of each pixel. ‘In our cameras, a pixel could be made up of 8, 10, or 12 bits, and we even have one camera that can do 14 bits per pixel… If I have a seven gigapixels per second camera, that’s actually the throughput of the sensor,’ he says.
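Put another way, the quoted pixel throughput only becomes a bit-rate once a bit depth is fixed. A small illustrative calculation, stepping through the bit depths Robinson mentions:

```python
# The sensor throughput is quoted in pixels per second; the raw
# bit-rate the electronics must handle follows once a bit depth
# is chosen.
def data_rate_gbit_s(pixels_per_s, bits_per_pixel):
    return pixels_per_s * bits_per_pixel / 1e9

for depth in (8, 10, 12, 14):  # the bit depths Robinson mentions
    print(f"{depth} bits/px -> {data_rate_gbit_s(7e9, depth):.0f} Gbit/s")
# 7Gpx/s at 12 bits/px, for example, is 84 Gbit/s of raw sensor data.
```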

In order to achieve pixel throughputs as high as seven billion per second, camera manufacturers such as Vision Research must optimise every element of the camera for speed. Robinson explains that this optimisation starts with the sensor element itself: ‘Typically, the CMOS sensors in a standard camera only have one or two ports out of the sensor,’ he says; ‘image data is serialised and transmitted over one or two ports.’ In specialised high-throughput sensor designs, however, the support electronics reads data from the sensor via 64, 128, or up to 256 ports, each of which has a dedicated analogue-to-digital converter (A/D). This parallelism adds to the cost of the cameras: ‘Because each of those 256 A/Ds needs to support a camera that’s running at a million frames per second or more, they tend to be the very latest, state of the art, and very expensive,’ says Robinson, adding that each A/D can cost up to $100. The A/Ds also run hot, so ventilation and cooling are a further requirement, and a further expense.
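The effect of this parallelism on readout time can be modelled crudely by dividing the frame’s pixels among the ports and letting every A/D digitise its share simultaneously. The per-A/D conversion rate below is an illustrative assumption, not a figure from Vision Research:

```python
# Toy model: the frame's pixels are divided among N readout ports,
# each with its own A/D, so all ports digitise their share at once.
def readout_time_us(width, height, adc_msps, n_ports):
    """Time to digitise one frame, in microseconds.

    adc_msps: per-A/D conversion rate in megasamples per second
    (i.e. pixels per microsecond); an illustrative figure.
    """
    pixels_per_port = (width * height) / n_ports
    return pixels_per_port / adc_msps

# A 1Mpx frame at 40 Msps per A/D: ~26ms (about 38 fps) from one port...
print(readout_time_us(1024, 1024, 40, n_ports=1))    # ≈ 26,214 µs
# ...but ~102µs (nearly 10,000 fps) when 256 A/Ds work in parallel.
print(readout_time_us(1024, 1024, 40, n_ports=256))  # ≈ 102 µs
```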

The third factor

Robinson describes a third factor that must be considered when optimising a camera for speed: ‘If we’re running at a million frames per second, we have, therefore, less than one microsecond to collect photons into each photon site. We therefore need a sensor that has very high quantum efficiency, and one which has a very high fill factor.’ Part of the square area of each pixel, he explains, is taken up by microelectronics such as the transistors that turn on, turn off, and reset the pixel. High-speed camera designers favour smaller and fewer transistors to maximise the fill factor of each photosite, a design approach Robinson describes as unique to high-speed imaging. ‘Each individual pixel cell is designed for high speed, and the cells are put together and multiplexed off of the chip in a way that is also optimised for high speed,’ he says.
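A rough photon-budget calculation shows why fill factor and quantum efficiency dominate at these exposure times. Every number below is an illustrative assumption:

```python
# Photon budget for a sub-microsecond exposure: the collected signal
# scales with the photosensitive area (pixel area x fill factor),
# the quantum efficiency, and the exposure time.
def electrons_collected(flux_per_um2_per_us, pixel_pitch_um,
                        fill_factor, quantum_efficiency, exposure_us):
    photosensitive_area_um2 = (pixel_pitch_um ** 2) * fill_factor
    photons = flux_per_um2_per_us * photosensitive_area_um2 * exposure_us
    return photons * quantum_efficiency

# A large 28µm pixel, 60% fill factor, 50% QE, 1µs exposure:
print(electrons_collected(10, 28, 0.6, 0.5, 1.0))  # ≈ 2,352 electrons
# Halving the fill factor halves the signal; hence the design pressure
# towards smaller and fewer per-pixel transistors.
```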

‘One of the reasons that these cameras are expensive is that they could actually contain all of the electronics of a normal camera but duplicated 100 times, or even 200 times inside the camera body. The design has to be done clear through the camera, all the way from the sensor to the point at which you’re going to get the data out of the camera,’ he says.

Capturing the event

Seven billion pixels per second is too much data to transmit in real time over standard camera interfaces, and so very high-speed cameras make use of fast on-board memory. Vision Research uses high-speed DRAM (dynamic random-access memory), several gigabytes of which can be built into the camera. However, an 8GB memory cartridge may equate to only around one second of recording capacity at the required frame-rate and resolution. ‘Depending upon what your event is,’ observes Robinson, ‘that may not leave you a lot of leeway. Most people, when they’re using a [standard] camera, push a button to start it and push a button to stop it, and if you’re using a still camera, you just press a button and it captures an image at that moment in time. On a high-speed camera, you really need this third method of triggering, which is to take continuous recordings and then tell the camera where inside that continuous recording you want the triggering to occur.’
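The one-second figure follows directly from the arithmetic: record time is memory capacity divided by the incoming data rate. A quick sketch, with illustrative frame size, bit depth, and frame-rate:

```python
# Record time is simply memory capacity divided by the incoming
# data rate of the frames being stored.
def record_time_s(memory_gb, width, height, bits_per_px, fps):
    bytes_per_frame = width * height * bits_per_px / 8
    return memory_gb * 1e9 / (bytes_per_frame * fps)

# 1Mpx frames at 12 bits/px and 7,000fps fill 8GB in under a second:
print(record_time_s(8, 1000, 1000, 12, 7000))  # ≈ 0.76 s
```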

This approach is known as a circular buffer (also called a ring buffer by some manufacturers). In circular buffer mode, the camera continuously overwrites its memory so that it always holds the most recent images – around one second’s worth, depending on the capacity of the memory and the resolution and frame-rate in use. ‘The trigger tells the camera which bit of memory to stop at,’ says Robinson. ‘You could put the trigger at the end of the buffer to tell it to stop recording and save the images that you’ve recorded over the previous second.’ Alternatively, he explains, if the capture is triggered by an operator responding to something seen or heard, the trigger point is placed in the middle of the buffer. ‘That way, when I see the mousetrap, or hear the gunshot and trigger manually, even though there’s a human latency, half of my movie will be images that were stored prior to the trigger, and the other half will be images that were stored after the trigger, and you can be reasonably certain that the event occurs somewhere in the middle of that,’ says Robinson. Electronic triggers are also used in some circumstances to achieve a near-instantaneous response.
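A minimal software model of this triggering scheme is sketched below: frames stream continuously into a fixed-size buffer, and the trigger decides how many further frames are kept, so the saved clip spans both sides of the trigger point. The class and its interface are hypothetical, for illustration only:

```python
from collections import deque

class CircularRecorder:
    """Minimal sketch of circular-buffer triggering.

    Frames are recorded continuously into a fixed-size buffer; the
    trigger decides how many further frames to keep, so the saved
    clip contains pre-trigger as well as post-trigger images.
    """
    def __init__(self, capacity_frames, trigger_position=0.5):
        # trigger_position=0.5 puts the trigger mid-buffer, so half
        # the saved movie precedes the (possibly late) human trigger.
        self.buffer = deque(maxlen=capacity_frames)
        self.post_trigger = int(capacity_frames * (1 - trigger_position))
        self.remaining = None  # None until the trigger fires

    def record(self, frame):
        self.buffer.append(frame)  # oldest frame silently overwritten
        if self.remaining is not None:
            self.remaining -= 1
        return self.remaining == 0  # True when the clip is complete

    def trigger(self):
        self.remaining = self.post_trigger

# Usage: record frame IDs, trigger "manually" at frame 600.
rec = CircularRecorder(capacity_frames=1000)
for i in range(10_000):
    if i == 600:
        rec.trigger()
    if rec.record(i):
        break
print(rec.buffer[0], rec.buffer[-1])  # clip spans frames 100..1099
```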

Bandwidth, cabling and other options

To get the data out, some manufacturers, Vision Research included, abandon industry standards in favour of proprietary high-speed data links. Robinson explains that his company’s solution is based on customised I/O over optical fibre, which can transfer up to one gigapixel per second. Such approaches face a drawback, however, in that they must be paired with a computer capable of ingesting the data, and also with a high-speed RAID array able to store the several gigabytes per second that the system outputs.

In the machine vision industry in particular, high-speed video is more commonly limited to the bandwidth of a GigE Vision cable. ‘There’s a collision occurring between two distinctly different market segments, in my opinion,’ says Robinson. He believes that high-speed machine vision cameras, characterised by their streaming connection to a computer, are beginning to offer an alternative to the specialised high-speed cameras.

Jean-Philippe Roman, from Allied Vision Technologies (AVT), explains that the cameras his company produces do not have the multi-gigapixel-per-second bandwidth of the very high-end cameras, but they are still considered high-speed by machine vision standards. AVT cameras interface via GigE Vision, which he says gives them a transfer rate of around 100MB per second.

In order to increase the resolution and/or frame-rate obtainable from its cameras, AVT has found several ways of increasing the usable bandwidth of the interface, without moving away from the versatile GigE Vision standard. Firstly, Roman explains, when using a GigE Vision interface, one would usually assume a 100MB per second transfer rate, but this can be increased by doing away with superfluous data: ‘When transmitting data from the camera to the PC, you do not transmit pure image data; there is always a little bit of overhead, which amounts to a transmission code,’ he says. ‘We try to optimise this overhead, and we reduce it to the minimum in order to have a little bit more bandwidth available for the image data itself.’ In practice, this reduction can amount to an increase of 20 or 25 per cent, to achieve 120 or even 125MB/s over gigabit Ethernet.
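The gain Roman describes can be approximated with simple packet arithmetic: each packet carries protocol headers alongside the image payload, so enlarging the payload raises the usable fraction of the gigabit line rate. The packet and overhead sizes below are illustrative assumptions, not AVT’s actual figures:

```python
# Every packet carries protocol headers alongside image payload;
# enlarging the payload raises the usable share of the line rate.
LINE_RATE_MB_S = 125  # gigabit Ethernet: 1 Gbit/s = 125 MB/s

def effective_bandwidth_mb_s(payload_bytes, overhead_bytes):
    usable_fraction = payload_bytes / (payload_bytes + overhead_bytes)
    return LINE_RATE_MB_S * usable_fraction

# Illustrative sizes: ~1.4kB standard payloads versus ~8.9kB
# jumbo-frame payloads, with ~80 bytes of overhead per packet.
print(effective_bandwidth_mb_s(1400, 80))  # ≈ 118 MB/s
print(effective_bandwidth_mb_s(8900, 80))  # ≈ 124 MB/s
```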

In addition to cutting overheads in this way, one of AVT’s most recent product releases – the Prosilica GX Series camera – makes use of link aggregation, a well-established standard in IT networking by which two parallel cable connections are combined so as to appear as one. The camera features two GigE Vision ports, which are treated by the computer as a single link capable of 240MB/s transfers, and therefore of commensurately higher frame-rates and/or resolutions.
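What the doubled bandwidth buys can be estimated in the same way; the resolution and bit depth below are illustrative:

```python
# Frame-rate attainable at a given resolution and link bandwidth.
def max_fps(bandwidth_mb_s, width, height, bits_per_px=8):
    bytes_per_frame = width * height * bits_per_px / 8
    return bandwidth_mb_s * 1e6 / bytes_per_frame

print(max_fps(120, 1920, 1080))  # single GigE link: ≈ 58 fps
print(max_fps(240, 1920, 1080))  # aggregated pair:  ≈ 116 fps
```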

According to Roman, the dual GigE Vision camera is too recent an introduction on the market to have found many applications just yet. He says, however, that AVT expects the device to be well suited to traffic-monitoring applications. ‘Here you need a certain level of detail, for example in speed enforcement, in order to recognise a licence plate. Increasingly there are more sophisticated cameras that try to scan several lanes with a single camera, which requires that each lane still has sufficient resolution. Therefore, we expect that this camera would be adapted to this kind of traffic application, because it can offer a higher frame-rate with a high level of resolution,’ he says.

The company has, however, already found an application for its existing high-speed cameras in the form of a dedicated golf-coaching system. The Swing&See system uses a camera running at 200fps – slow compared to some of the dedicated high-speed systems, but still faster than many machine vision systems. ‘With 200fps you can already have a quite precise visualisation of movement,’ says Roman. ‘In the golf application you are able to split the motion frame by frame and perform visual analysis very reliably.’

As high-speed cameras become more reliable, inexpensive, and versatile, customers and integrators have much to look forward to.


