
Computer vision for seeing around corners presented at CVPR


Reconstruction of traffic signs with high-resolution colour non-line-of-sight imaging using conventional CMOS camera sensors

Researchers at the Computer Vision and Pattern Recognition (CVPR) conference in Long Beach, California, have presented two different computational methods that give cameras the ability to see around corners.

A team from the University of Montreal, Princeton University and Algolux were able to reconstruct high-quality images of traffic signs and other 3D objects taken by smartphone or vehicle cameras.

Meanwhile, researchers from Carnegie Mellon University, the University of Toronto and University College London showed a non-line-of-sight (NLOS) imaging technique able to compute millimetre- and micrometre-scale shapes of curved objects.

Non-line-of-sight imaging aims to recover occluded objects by analysing their indirect reflections on visible scene surfaces.

The Carnegie Mellon University work was supported by the Defense Advanced Research Projects Agency’s Reveal programme, which is developing NLOS capabilities. The research received a best paper award at the CVPR conference, which ran from 16 to 20 June.

‘Other NLOS researchers have already demonstrated NLOS imaging systems that can understand room-size scenes, or even extract information using only naturally occurring light,’ Ioannis Gkioulekas, an assistant professor in Carnegie Mellon’s Robotics Institute, said. ‘We’re doing something that’s complementary to those approaches – enabling NLOS systems to capture fine detail over a small area.’

The Carnegie researchers used an ultrafast laser to bounce light off a wall to illuminate a hidden object. By knowing when the laser fired pulses of light, the researchers could calculate the time the light took to reflect off the object, bounce off the wall on its return trip and reach a sensor.
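The core timing relationship behind this is simple: the total path length travelled by each detected photon is the speed of light multiplied by the elapsed time between the pulse firing and the photon arriving at the sensor. The following is an illustrative sketch of that calculation only, not the researchers' actual code; the function name and timings are invented for the example.

```python
# Speed of light in a vacuum, metres per second
C = 299_792_458.0

def total_path_length(emit_time_s: float, detect_time_s: float) -> float:
    """Total distance travelled by a photon: laser -> wall -> hidden
    object -> wall -> sensor, recovered from pulse timing alone."""
    return C * (detect_time_s - emit_time_s)

# A photon detected 10 nanoseconds after the pulse fired has travelled
# roughly 3 metres along its full round-trip path.
round_trip_m = total_path_length(0.0, 10e-9)
print(f"{round_trip_m:.3f} m")
```

Recovering the hidden object's position from this single number still requires untangling the wall-to-object legs of the path, which is where the geometric reconstruction described below comes in.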

Previous attempts to use these time-of-flight calculations to reconstruct an image of the object have depended on the brightness of the reflections. But in this study, Gkioulekas said the researchers developed a new method based purely on the geometry of the object, which in turn enabled them to create an algorithm for measuring its curvature.

The researchers used an imaging system that is effectively a lidar capable of sensing single particles of light to test the technique on glass and plastic objects. They also combined this technique with optical coherence tomography to reconstruct images of a US quarter.

In addition to seeing around corners, the technique proved effective in seeing through diffusing filters, such as thick paper.

The technique has thus far been demonstrated only at short distances – a metre at most.

The University of Montreal team achieved its steady-state non-line-of-sight imaging using conventional CMOS camera sensors and a modified illumination method – a small change to a car’s headlights or a smartphone’s flash.

The research opens a path to practical implementation, said research partner Algolux, which provides embedded software. Algolux believes this technology can strengthen the ability of autonomous vehicles to navigate in difficult road scenarios even when the view is blocked by obstructions or vehicles.

Other potential uses include increased security for video surveillance, as well as use cases for smartphones, augmented reality, and medical imaging.
