
Computer vision for seeing around corners presented at CVPR

Researchers at the Computer Vision and Pattern Recognition (CVPR) conference in Long Beach, California, have presented two computational methods that give cameras the ability to see around corners.

A team from the University of Montreal, Princeton University and Algolux was able to reconstruct high-quality images of traffic signs and other 3D objects captured with smartphone or vehicle cameras.

Meanwhile, researchers from Carnegie Mellon University, the University of Toronto and University College London showed a non-line-of-sight (NLOS) imaging technique able to compute millimetre- and micrometre-scale shapes of curved objects.

Non-line-of-sight imaging aims to recover occluded objects by analysing their indirect reflections on visible scene surfaces.

The Carnegie Mellon University work was supported by the Defense Advanced Research Projects Agency’s Reveal programme, which is developing NLOS capabilities. The research received a best paper award at the CVPR conference, which ran from 16 to 20 June.

‘Other NLOS researchers have already demonstrated NLOS imaging systems that can understand room-size scenes, or even extract information using only naturally occurring light,’ Ioannis Gkioulekas, an assistant professor in Carnegie Mellon’s Robotics Institute, said. ‘We’re doing something that’s complementary to those approaches – enabling NLOS systems to capture fine detail over a small area.’

The Carnegie Mellon researchers used an ultrafast laser to bounce light off a wall to illuminate a hidden object. By knowing when the laser fired pulses of light, the researchers could calculate the time the light took to reflect off the object, bounce off the wall on its return trip and reach a sensor.
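To illustrate the basic arithmetic behind such a time-of-flight measurement, the sketch below converts the measured delay into a distance estimate for the hidden object. It is a minimal illustration of the general principle only; the function name, parameters and example numbers are assumptions for demonstration and are not drawn from the Carnegie Mellon paper.

```python
# Minimal sketch of the time-of-flight arithmetic described above.
# All names and numbers are illustrative assumptions, not values from the paper.

C = 299_792_458.0  # speed of light in metres per second


def hidden_object_distance(time_of_flight_s: float,
                           laser_to_wall_m: float,
                           wall_to_sensor_m: float) -> float:
    """Estimate the one-way distance from the wall point to the hidden object.

    The measured time covers the full path:
    laser -> wall -> hidden object -> wall -> sensor.
    Subtracting the known laser-to-wall and wall-to-sensor legs leaves the
    round trip between the wall and the hidden object.
    """
    total_path_m = C * time_of_flight_s
    round_trip_m = total_path_m - laser_to_wall_m - wall_to_sensor_m
    return round_trip_m / 2.0


# Example: a photon returning about 10 ns after the pulse, with 1 m legs to
# and from the wall, implies the hidden object is roughly 0.5 m from the wall.
print(hidden_object_distance(10e-9, 1.0, 1.0))
```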

Previous attempts to reconstruct an image of the object from these time-of-flight calculations have depended on the brightness of the reflections. In this study, Gkioulekas said, the researchers developed a new method based purely on the geometry of the object, which in turn enabled them to create an algorithm for measuring its curvature.

To test the technique on glass and plastic objects, the researchers used an imaging system that is effectively a lidar capable of sensing single particles of light. They also combined the technique with optical coherence tomography to reconstruct images of a US quarter.

In addition to seeing around corners, the technique proved effective in seeing through diffusing filters, such as thick paper.

So far, the technique has been demonstrated only at short distances of a metre at most.

The University of Montreal-led team achieved its steady-state non-line-of-sight imaging using conventional CMOS camera sensors and a change in illumination method: a small modification to a car’s headlights or a smartphone’s flash.

The research opens a path to practical implementation, said research partner Algolux, which provides embedded software. Algolux believes the technology can strengthen the ability of autonomous vehicles to navigate difficult road scenarios, even when the view is blocked by obstructions or other vehicles.

Other potential uses include increased security for video surveillance, as well as use cases for smartphones, augmented reality, and medical imaging.
