Motion reconstruction technique developed using hundreds of cameras


Videos of sporting events might one day be reconstructed from hundreds of clips taken by spectators, thanks to new research by a team at Carnegie Mellon University. The scientists have developed techniques for combining the views of 480 video cameras mounted in a two-storey geodesic dome to perform large-scale 3D motion reconstruction.

Though the research was performed in a specialised video laboratory, Yaser Sheikh, an assistant research professor of robotics who led the research team, said the techniques might eventually be applied to large-scale reconstructions of sporting events or performances captured by hundreds of spectator cameras.

Images from large numbers of cameras, such as smartphones, have already been used to create 3D reconstructions of static scenes, but 3D motion reconstruction at such a large scale has so far not been possible.

The Carnegie Mellon camera system can track 100,000 points at a time. The difficulty lay in determining which of the hundreds of cameras could actually see each of those points, and in selecting only those views for the reconstruction.

‘At some point, extra camera views just become noise,’ said Hanbyul Joo, a PhD student in the Robotics Institute. ‘To fully leverage hundreds of cameras, we need to figure out which cameras can see each target point at any given time.’

The research team developed a technique for estimating visibility that uses motion as a cue. In contrast to motion capture systems that use balls or other markers, the researchers used established techniques for automatically identifying and tracking points based on appearance features – in this case, distinctive patterns. For each point, the system then seeks to determine which cameras see motion that is consistent with that point.

For instance, if a point on a person’s chest is being tracked and most cameras show that point moving to the right, a camera that picks up motion in the opposite direction is probably seeing a person or object between the target and the camera. Alternatively, it may indicate that the person has turned, so the chest is no longer visible to that camera. In either case, the system knows that camera cannot see the target point and that its video feed is not useful for 3D reconstruction involving that point.
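The idea above can be sketched in code. The following is a simplified, hypothetical illustration, not the authors’ actual algorithm: for one tracked point, the estimated 3D motion is projected (linearly, ignoring perspective effects) into each camera, and cameras whose observed 2D flow disagrees in direction with the prediction are treated as not seeing the point. The function name, thresholds, and data layout are all assumptions made for the sketch.

```python
import numpy as np

def visible_cameras(point_motion_3d, cam_projections, observed_flows,
                    cos_thresh=0.5):
    """Decide which cameras likely see a tracked 3D point, using motion
    agreement as the cue (illustrative sketch, not the published method).

    point_motion_3d : (3,) estimated 3D displacement of the point
    cam_projections : list of (3, 4) camera projection matrices
    observed_flows  : list of (2,) 2D optical-flow vectors measured at the
                      point's projection in each camera
    """
    visible = []
    for i, (P, flow) in enumerate(zip(cam_projections, observed_flows)):
        # Linearised projection: the top-left 3x3 block of P maps a small
        # 3D displacement to an approximate 2D image displacement.
        predicted = P[:2, :3] @ point_motion_3d

        n_pred = np.linalg.norm(predicted)
        n_obs = np.linalg.norm(flow)
        if n_pred < 1e-9 or n_obs < 1e-9:
            continue  # no usable motion cue in this camera

        # If the observed flow points a different way than predicted, an
        # occluder is likely in view (or the surface has turned away), so
        # this camera is excluded for this point.
        cos_sim = float(predicted @ flow) / (n_pred * n_obs)
        if cos_sim > cos_thresh:
            visible.append(i)
    return visible
```

With two identical cameras, one reporting flow that matches the point’s motion and one reporting opposite flow, only the first is kept for reconstruction.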

In the Panoptic Studio, the researchers have 480 video cameras, plus an additional 30 high-definition video cameras, arrayed all around and halfway up the walls of a geodesic dome that can easily accommodate 10 people.

Such a dense array of cameras enables the researchers to perform 3D motion reconstructions not previously possible. These include 3D reconstructions of a person tossing confetti into the air, with each piece of paper tracked until it reaches the floor. In another case, confetti was fed into a fan, enabling a motion capture of the airflow. ‘You couldn’t put markers on the paper without changing the flow,’ Joo explained.

The findings were presented at the Computer Vision and Pattern Recognition conference, 24-27 June, in Columbus, Ohio.

--

Further information:

Video of the 3D reconstructions and links to the team’s research paper: http://www.cs.cmu.edu/~hanbyulj/14/visibility.html

