Researchers investigate novel way of creating 3D footage

Films made in 3D could soon pack more of a punch thanks to work being carried out by a researcher at De Montfort University (DMU), Leicester. Dr Cristian Serdean is exploring an alternative way of creating high-quality 3D footage from 2D stereoscopic images. Stereoscopic images are created by filming two sets of footage of the same subject from slightly different angles, corresponding to the viewer's left and right eyes.

The two-year project, funded by a £182,693 grant under the Engineering and Physical Sciences Research Council's First Grant Scheme, will look at how to improve the complex process of extracting depth information from 2D stereoscopic video frames to produce 3D film.

Traditionally, the 3D effect is achieved by shooting stereoscopic images and then merging them for display. The resulting film is seen in 3D with the aid of special glasses designed to pass the correct image to each eye, which the brain then combines into a perception of depth.

This method is often inefficient, expensive and inconvenient: two sets of footage have to be stored and transmitted, and the viewer has to wear special glasses.

Serdean hopes to perfect a different way of representing 3D data, built from a single set of footage containing the 2D view plus information about the depth of each pixel in the scene. This 3D data can then be viewed on autostereoscopic displays, which allow people to see the 3D effect without special glasses.
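
This kind of representation is often called '2D-plus-depth'. As a rough illustration of what that data looks like, the sketch below pairs an ordinary colour frame with a same-sized depth map; the array shapes and variable names are assumptions for the example, not details of the project.

```python
import numpy as np

# A minimal sketch of a '2D-plus-depth' frame: one ordinary colour image
# plus a per-pixel depth map of the same size. Sizes and names are
# illustrative only.
height, width = 1080, 1920

colour_frame = np.zeros((height, width, 3), dtype=np.uint8)  # the 2D view (RGB)
depth_map = np.zeros((height, width), dtype=np.float32)      # depth of each pixel, e.g. in metres

# An autostereoscopic display (or the renderer feeding it) can synthesise
# the left- and right-eye views it needs from this single frame + depth pair.
```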

Pixels are first turned into frequency coefficients using a mathematical function known as a transform. The coefficients are then used to find corresponding points between the two sets of footage in order to estimate the correct depth for each pixel.
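A hedged sketch of that idea follows: each block of the left image is turned into coefficients (a plain 2D FFT stands in here for whatever transform the project uses), and the best-matching block along the same row of the right image gives that block's disparity, from which depth can later be estimated. The function names, block size and search range are assumptions made for the illustration.

```python
import numpy as np

def block_coefficients(block):
    """Turn a block of pixels into frequency coefficients (FFT as a stand-in)."""
    return np.fft.fft2(block)

def match_block(left, right, row, col, block=8, search=16):
    """Find the horizontal offset whose block in `right` best matches the
    block at (row, col) in `left`, comparing coefficients rather than pixels."""
    ref = block_coefficients(left[row:row + block, col:col + block])
    best_offset, best_error = 0, np.inf
    for d in range(search + 1):          # candidate disparities, in pixels
        if col - d < 0:                  # stay inside the right image
            break
        cand = block_coefficients(right[row:row + block, col - d:col - d + block])
        error = np.sum(np.abs(ref - cand) ** 2)
        if error < best_error:
            best_offset, best_error = d, error
    return best_offset                   # larger disparity = point closer to the camera
```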

Serdean will look at whether a particular type of mathematical transform, known as a multiwavelet, can find the correspondence points between the two sets of footage more accurately.

Serdean said: 'Traditional mathematical transforms used in 2D to 3D processing do not retain information about the pixels' relationship in space, meaning that when the coefficients are displayed as an image, they no longer bear any resemblance to the original picture.

'This can be a significant disadvantage that multiresolution transforms such as the wavelets and the much newer and under-researched multiwavelets can overcome.'

Wavelets have been used successfully in stereo imaging for a number of years, but they still have limitations. Multiwavelets are more versatile, offering localisation in both frequency and space while also addressing some of the drawbacks of wavelets.
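
As a rough, hedged illustration of that point, the sketch below applies one level of a Haar wavelet decomposition (a simple stand-in for the wavelets and multiwavelets the project studies, not the transform Serdean will use): every coefficient is computed from a small neighbourhood of pixels, so the low-pass subband remains a recognisable, half-size copy of the picture.

```python
import numpy as np

def haar_level(image):
    """One level of a 2D Haar-style decomposition (image dimensions must be even)."""
    a = image[0::2, 0::2].astype(float)   # top-left pixel of each 2x2 block
    b = image[0::2, 1::2].astype(float)   # top-right
    c = image[1::2, 0::2].astype(float)   # bottom-left
    d = image[1::2, 1::2].astype(float)   # bottom-right
    ll = (a + b + c + d) / 4.0            # local averages: a half-size copy of the image
    lh = (a + b - c - d) / 4.0            # detail coefficients (vertical differences)
    hl = (a - b + c - d) / 4.0            # detail coefficients (horizontal differences)
    hh = (a - b - c + d) / 4.0            # diagonal detail
    return ll, lh, hl, hh

# Because `ll` still looks like the original picture, correspondence search
# can be carried out directly on the coefficients, coarse to fine.
```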

Serdean added: 'Finding correspondence points accurately is a critical stage of 2D to 3D conversion and it's by far the most difficult part of this process. One point from the left image will have a corresponding point in the right image, but due to the slightly different angles at which the two images were captured, the location of this point will be slightly displaced compared with the location in the left image.

'If we can find the accurate location of the corresponding point in the right image, then using the distance between the camera and the scene and the distance between the two corresponding points in the two images, we can calculate the depth for that point via triangulation.'
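
A hedged sketch of that final step: for rectified stereo cameras the standard relation is depth = focal length × baseline ÷ disparity, where the baseline is the separation between the two camera positions and the disparity is the displacement between the two corresponding points. The focal length, baseline and disparity values below are made up purely to show the arithmetic.

```python
def depth_from_disparity(disparity_px, focal_length_px=1200.0, baseline_m=0.065):
    """Depth (in metres) of a scene point from its disparity (in pixels)."""
    if disparity_px <= 0:
        return float("inf")            # zero disparity = a point at infinity
    return focal_length_px * baseline_m / disparity_px

# Example: a correspondence displaced by 26 pixels, with cameras 6.5 cm apart
# and a 1200-pixel focal length, lies roughly 3 metres from the cameras.
print(depth_from_disparity(26))        # -> 3.0
```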

Identifying these stereo correspondence points more accurately would be a significant step forward in stereo imaging, leading to higher-quality 3D footage and to algorithms and processing tools able to work accurately with minimal human input.
