Researchers' LED control algorithm simplifies 3D imaging

Researchers at the University of Strathclyde have demonstrated a method for capturing 3D images with any camera simply by controlling the illumination.

It is hoped the approach could open up new ways for robots to sense their surroundings, as well as enabling better indoor surveillance through 3D imaging.

In a paper published in The Optical Society journal Optics Express, the researchers demonstrate that 3D optical imaging can be performed with a mobile phone and LEDs without requiring any complex manual processes to synchronise the camera with the lighting.

Depth information can be acquired using a method called photometric stereo imaging, in which a single camera is combined with illumination from multiple directions, traditionally four light sources.
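
In the classical Lambertian formulation of photometric stereo, each pixel's brightness under a known light direction is proportional to the dot product of that direction with the surface normal, scaled by the albedo, so three or more images allow the normal to be recovered by least squares. The sketch below illustrates this standard reconstruction step; the array shapes and function name are illustrative and not taken from the paper.

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Recover per-pixel surface normals and albedo from >= 3 images.

    images:     array of shape (k, h, w), one grayscale image per light source
    light_dirs: array of shape (k, 3), unit vectors pointing towards each light
    """
    k, h, w = images.shape
    I = images.reshape(k, -1)                            # (k, h*w) intensities
    # Lambertian model: I = L @ G, where G = albedo * normal for each pixel
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)   # (3, h*w)
    albedo = np.linalg.norm(G, axis=0)                   # (h*w,)
    normals = G / np.maximum(albedo, 1e-8)               # unit surface normals
    return normals.T.reshape(h, w, 3), albedo.reshape(h, w)
```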

In the new work, the researchers show that 3D images can also be reconstructed when objects are illuminated from the top down but imaged from the side. This setup allows overhead room lighting to be used for illumination.

In work supported under the UK’s EPSRC Quantic research programme, the scientists developed algorithms that modulate each LED in a unique way. Each modulation pattern acts like a fingerprint, allowing the camera to determine which LED produced which image and so facilitating the 3D reconstruction.
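
The article does not spell out the modulation scheme itself, but one common way to realise such a fingerprint is to drive each LED with a distinct on/off code over a block of video frames and then demultiplex one image per LED by solving a linear system. The sketch below is an illustrative assumption along those lines, not the authors' implementation; the code matrix and frame count are invented for the example.

```python
import numpy as np

# Assumed example: 4 LEDs, each driven with a distinct 8-frame on/off pattern.
CODES = np.array([
    [1, 0, 1, 0, 1, 0, 1, 0],
    [1, 1, 0, 0, 1, 1, 0, 0],
    [1, 1, 1, 1, 0, 0, 0, 0],
    [1, 0, 0, 1, 0, 1, 1, 0],
], dtype=float)

def demultiplex(frames):
    """Recover one image per LED from a block of code-modulated video frames.

    frames: array of shape (8, h, w), captured while the LEDs run their codes.
    Each frame is modelled as the sum of the per-LED images whose code is 'on'.
    """
    f, h, w = frames.shape
    F = frames.reshape(f, -1)                              # (frames, pixels)
    per_led, *_ = np.linalg.lstsq(CODES.T, F, rcond=None)  # (4, pixels)
    return per_led.reshape(len(CODES), h, w)
```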

The new modulation approach also carries its own clock signal so that the image acquisition can be self-synchronised with the LEDs by simply using the camera to passively detect the LED clock signal.
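
As an illustration of the idea rather than the authors' specific method, a periodic clock component embedded in the LED drive signal could be recovered passively from the frame-to-frame brightness of the video, for example by locating the dominant peak in its spectrum. The function name, frame-rate parameter and frequency-peak approach below are assumptions.

```python
import numpy as np

def estimate_led_clock(frames, fps):
    """Estimate the dominant LED modulation frequency (Hz) from video frames.

    frames: array of shape (n, h, w) of passively captured video frames
    fps:    camera frame rate in Hz
    """
    brightness = frames.mean(axis=(1, 2))         # mean intensity per frame
    brightness = brightness - brightness.mean()   # remove the DC component
    spectrum = np.abs(np.fft.rfft(brightness))
    freqs = np.fft.rfftfreq(len(brightness), d=1.0 / fps)
    return freqs[np.argmax(spectrum[1:]) + 1]     # skip the zero-frequency bin
```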

'We wanted to make photometric stereo imaging more easily deployable by removing the link between the light sources and the camera,' said Emma Le Francois, a doctoral student in the research group led by Martin Dawson, Johannes Herrnsdorf and Michael Strain at the University of Strathclyde in the UK. 'To our knowledge, we are the first to demonstrate a top-down illumination system with a side image acquisition where the modulation of the light is self-synchronised with the camera.'

To demonstrate the new approach, the researchers used their modulation scheme with a photometric stereo setup based on commercially available LEDs. An Arduino board provided the electronic control for the LEDs. Images were captured using the high-speed video mode of a smartphone. They imaged a 48mm-tall figurine that they 3D printed with a matte material to avoid any shiny surfaces that might complicate imaging.

After identifying the best position for the LEDs and the smartphone, the researchers achieved a reconstruction error of 2.6mm for the figurine when imaged from 42cm away. This level of error shows that the reconstruction quality was comparable to that of other photometric stereo imaging approaches. They were also able to reconstruct images of a moving object and showed that the method is not affected by ambient light.

In the current system, the image reconstruction takes a few minutes on a laptop. To make the system practical, the researchers are working to decrease the computational time to just a few seconds by incorporating a neural network that would learn to reconstruct the shape of the object from the raw image data.