
MIT researchers develop trillion-frame-per-second camera

MIT researchers have created an imaging system that can acquire visual data at a rate of one trillion exposures per second. That’s fast enough to produce a slow-motion video of a burst of light travelling the length of a one-litre bottle, bouncing off the cap and reflecting back to the bottle’s bottom.

The system relies on a streak camera, the aperture of which is a narrow slit. Photons enter the camera through the slit and strike a photocathode, which converts them into electrons; these then pass through an electric field that deflects them in a direction perpendicular to the slit. Because the electric field is changing very rapidly, it deflects late-arriving electrons more than it does early-arriving ones.

The image produced by the camera is thus two-dimensional, but only one of the dimensions — the one corresponding to the direction of the slit — is spatial. The other dimension, corresponding to the degree of deflection, is time. The image thus represents the time of arrival of photons passing through a one-dimensional slice of space.
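To make the space-versus-time mapping concrete, here is a minimal Python sketch of the geometry just described. Everything in it (the streak_image function, the sweep window, the detector resolution) is a hypothetical illustration rather than the MIT system's actual implementation: arrival time is converted linearly into vertical deflection, so the detector accumulates a 2D histogram with one spatial axis and one temporal axis.

```python
import numpy as np

# Minimal sketch of the streak-camera mapping: all parameters here
# (sweep window, detector resolution) are assumptions for illustration.
SWEEP_WINDOW_NS = 1.0        # assumed duration of one linear sweep
N_SPACE, N_TIME = 672, 512   # assumed detector resolution

def streak_image(x_positions, arrival_times_ns):
    """Bin photons into a 2D image: columns index position along the slit,
    rows index deflection, which the linear sweep makes proportional to time."""
    image = np.zeros((N_TIME, N_SPACE))
    for x, t in zip(x_positions, arrival_times_ns):
        col = int(x * (N_SPACE - 1))                   # x normalised to [0, 1]
        row = int(t / SWEEP_WINDOW_NS * (N_TIME - 1))  # later arrival -> larger deflection
        if 0 <= row < N_TIME and 0 <= col < N_SPACE:
            image[row, col] += 1
    return image

# Example: a pulse that reaches larger x later traces a diagonal streak.
x = np.linspace(0.0, 1.0, 1000)
t_ns = 0.2 + 0.6 * x          # arrival time grows linearly with slit position
img = streak_image(x, t_ns)
```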

To produce their super-slow-mo videos, Media Lab professor Ramesh Raskar, postdoc Andreas Velten, and Moungi Bawendi, the Lester Wolfe Professor of Chemistry, must perform the same experiment — such as passing a light pulse through a bottle — over and over, continually repositioning the streak camera to gradually build up a two-dimensional image. It takes only a nanosecond — a billionth of a second — for light to scatter through a bottle, but it takes about an hour to collect all the data necessary for the final video.
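The scan-and-stack procedure can likewise be sketched in a few lines. The parameters below are invented for illustration (the acquire_line stand-in, the cube dimensions, and the Poisson noise are all assumptions, not the team's setup): each repetition of the experiment contributes one streak image covering a single line of the scene, and stacking the scan lines yields a data cube from which any single time slice is one frame of the final video.

```python
import numpy as np

# Hypothetical illustration of the scan-and-stack acquisition: the dimensions
# and the acquire_line stand-in are invented, not the MIT system's parameters.
N_Y, N_TIME, N_X = 48, 128, 168   # scan lines x time bins x slit pixels (reduced sizes)

def acquire_line(y_index):
    """Stand-in for one repetition of the experiment at scan line y_index.
    A real system would fire the laser pulse and read out the streak camera."""
    rng = np.random.default_rng(y_index)
    return rng.poisson(1.0, size=(N_TIME, N_X)).astype(float)

# One repeated experiment per scan line; stacking gives a (y, t, x) data cube.
cube = np.stack([acquire_line(y) for y in range(N_Y)], axis=0)

# A single frame of the slow-motion video is the scene at one instant:
frame = cube[:, 100, :]   # shape (N_Y, N_X), the 100th time bin across all lines
```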

Because the ultrafast-imaging system requires multiple passes to produce its videos, it can’t record events that aren’t exactly repeatable. Any practical applications will probably involve cases where the way in which light scatters — or bounces around as it strikes different surfaces — is itself a source of useful information. Those cases may, however, include analyses of the physical structure of both manufactured materials and biological tissues — 'like ultrasound with light,' commented Raskar.
