Researchers use computer modelling to make 3D camera more efficient

A prototype 3D camera that is more robust to bright light conditions has been engineered by researchers at Carnegie Mellon University (CMU) and the University of Toronto. The technology involves synchronising the camera with the pattern projector so that only laser light from a certain plane is imaged by the camera.

The researchers created a mathematical model to help program 3D imaging systems so that the camera and its light source work together efficiently, eliminating extraneous light that would otherwise wash out the signals needed to detect a scene’s contours.

The mathematical framework can compute energy-efficient codes that optimise the amount of energy that reaches the camera.

‘We have a way of choosing the light rays we want to capture and only those rays,’ said Srinivasa Narasimhan, CMU associate professor of robotics. ‘We don’t need new image-processing algorithms and we don’t need extra processing to eliminate the noise, because we don’t collect the noise. This is all done by the sensor.’

One prototype based on this model synchronises a laser projector with a rolling-shutter camera so that the camera detects light only from points being illuminated by the laser as it scans across the scene.

This not only makes it possible for the camera to work under extremely bright light or in the presence of strongly reflected or diffuse light – it can capture the shape of a light bulb that has been switched on, for instance, and see through smoke – but also makes it extremely energy efficient. This combination of features could make the imaging technology suitable for many applications, including medical imaging, inspection of shiny parts, and sensing for robots used to explore the moon and planets. It could also be readily incorporated into smartphones.

The researchers presented their findings at Siggraph 2015, the International Conference on Computer Graphics and Interactive Techniques, in Los Angeles.

Depth cameras work by projecting a pattern of dots or lines over a scene. By measuring how these patterns are deformed, or how long light takes to reflect back to the camera, the system can calculate the 3D contours of the scene.
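To illustrate the pattern-deformation approach (a generic structured-light sketch, not the researchers' own pipeline, with invented numbers): the projector and camera are separated by a baseline, and the sideways shift, or disparity, of a projected dot in the image encodes how far away the surface is.

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Triangulate depth in a simple structured-light setup.

    disparity_px: horizontal shift (pixels) of a projected dot between
                  its expected and observed image position
    focal_px:     camera focal length, expressed in pixels
    baseline_m:   projector-to-camera separation in metres
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Hypothetical numbers: a dot shifted by 20 px, a 600 px focal length,
# and a 7.5 cm baseline give a depth of 2.25 m.
z = depth_from_disparity(20, 600, 0.075)
```

Nearby surfaces produce large disparities and distant ones small disparities, which is why faint, washed-out patterns make the dots (and hence the depth) undetectable.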

The problem is that some of these devices use compact projectors that operate at low power, so their faint patterns are washed out and undetectable when the camera captures ambient light from a scene. But as a projector scans a laser across the scene, the spots illuminated by the laser beam are brighter, if only briefly, noted Kyros Kutulakos, University of Toronto professor of computer science.

‘Even though we’re not sending a huge amount of photons, at short time scales, we’re sending a lot more energy to that spot than the energy sent by the sun,’ he said. The trick is to be able to record only the light from that spot as it is illuminated, rather than try to pick out the spot from the entire bright scene.

In the prototype using a rolling-shutter camera, this is accomplished by synchronising the projector so that as the laser scans a particular plane, the camera accepts light only from that plane.
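The benefit of that synchronisation can be sketched with a toy model (invented numbers, not the prototype's actual parameters): a synchronised row collects ambient light only during the single time slot in which its plane is lit by the laser, whereas an unsynchronised sensor integrates ambient light over the whole sweep.

```python
ROWS = 1000  # hypothetical number of scan planes (and sensor rows) per sweep

def collected(laser, ambient_per_slot, synced):
    """Light one sensor row records during a full laser sweep.

    The laser illuminates this row's plane during exactly one time slot.
    A synchronised rolling shutter exposes the row only in that slot;
    an unsynchronised sensor stays exposed for all ROWS slots, so it
    accumulates ambient light from every one of them.
    """
    open_slots = 1 if synced else ROWS
    return laser + ambient_per_slot * open_slots

# A weak laser (10 units) against bright ambient light (100 units/slot):
with_sync = collected(laser=10, ambient_per_slot=100, synced=True)     # 110
without   = collected(laser=10, ambient_per_slot=100, synced=False)    # 100010
```

In this sketch the synchronised row sees a signal-to-background ratio of 10:100, while the unsynchronised one sees 10:100000 – the faint pattern is simply drowned out, which matches the article's point that the sensor itself rejects the noise rather than leaving it to image processing.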

In addition to enabling the use of Kinect-like devices to play videogames outdoors, the new approach could be used for medical imaging, such as imaging skin structures that would otherwise be obscured by light diffusing into the skin. Likewise, the system can see through smoke despite the light scattering that usually makes it impenetrable to cameras. Manufacturers could also use the system to look for anomalies in shiny or mirrored components.

Further information:

Carnegie Mellon University
