BMW makes car body inspections with gestures

Researchers at the Fraunhofer Institute for Optronics, System Technologies and Image Exploitation (IOSB) in Karlsruhe have engineered an intelligent gesture control system on behalf of BMW. The system is based on 3D data and is designed to let BMW staff document defects on car body parts simply by pointing at them.

The system means BMW employees no longer have to note defects and input them into a PC; they simply point at the affected area on the part and the system automatically records the defect.

The gesture control system works by reconstructing the test environment in 3D using a standard PC and two Microsoft Kinect devices, each combining a colour camera with a 3D depth sensor. The researchers developed algorithms to fuse multiple 2D and 3D images together and adapted them to BMW’s requirements.
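Fraunhofer has not published its fusion algorithms, but the core step of merging depth images from two calibrated Kinects into one point cloud can be sketched roughly as follows. The intrinsics (fx, fy, cx, cy) and the extrinsic rotation R and translation t between the cameras are hypothetical placeholders that would come from a calibration step; this is an illustrative sketch, not IOSB’s implementation.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (metres) into a 3D point cloud
    in the camera's own frame, using the pinhole camera model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop pixels with no depth reading

def fuse_clouds(points_a, points_b, R, t):
    """Transform camera B's points into camera A's frame with the
    extrinsic rotation R (3x3) and translation t (3,), then merge."""
    return np.vstack([points_a, points_b @ R.T + t])
```

In practice the depth frames would come from a Kinect driver, and R and t from a stereo calibration of the two sensors.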

The reconstruction includes tracking the inspector in 3D in real time, as well as the object with which he is working. Alexander Schick, scientist at IOSB, explained: ‘What does the inspector look like? Where is he situated? How does he move? What is he doing? Where is the object? – all of these data are required so that the pointing gesture can properly link to the bumper [or part under inspection].’
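IOSB has not said how a pointing gesture is mapped to a position on the part, but a common technique is to cast a ray through two tracked arm joints and pick the object point nearest to that ray. The sketch below assumes the elbow and hand positions and the part’s point cloud are already delivered by the 3D tracking; the 5 cm tolerance is an arbitrary illustrative value.

```python
import numpy as np

def pointing_target(hand, elbow, object_points, max_dist=0.05):
    """Cast a ray from the elbow through the hand and return the
    object point closest to that ray, or None if no point lies
    within max_dist metres of it."""
    direction = hand - elbow
    direction = direction / np.linalg.norm(direction)
    rel = object_points - hand            # vectors from hand to each point
    along = rel @ direction               # distance along the pointing ray
    ahead = along > 0                     # keep only points in front of the hand
    perp = rel[ahead] - np.outer(along[ahead], direction)
    dists = np.linalg.norm(perp, axis=1)  # perpendicular distance to the ray
    if dists.size == 0 or dists.min() > max_dist:
        return None
    return object_points[ahead][np.argmin(dists)]  # defect location on the part
```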

Schick described how the inspection process used to be carried out: ‘Previously, the inspector had to note all defects that were detected, leave his workstation, go to the PC terminal, operate multiple input screens and then label the position of the defect and the defect type. That approach is laborious, time-intensive and prone to error.’

The gesture control system, by contrast, allows the employee to remain at his workstation and interact directly with the test object. ‘If the bumper is fine, then he swipes over it from left to right. In the event of damage, he points to the location of the defect,’ said Schick.
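The article does not describe how the two gestures are told apart; one plausible heuristic over a short window of tracked hand positions is to separate a broad horizontal sweep from a hand held still in a pointing pose. The thresholds below are illustrative guesses, not BMW’s values.

```python
import numpy as np

def classify_gesture(hand_track, sweep_min=0.4, hold_max=0.03):
    """Crudely classify a window of hand positions (N x 3, metres):
    a long, mostly horizontal motion counts as a 'swipe' (part OK);
    a nearly stationary hand counts as a 'point' (defect)."""
    dx, dy, _ = hand_track[-1] - hand_track[0]
    spread = np.linalg.norm(hand_track - hand_track.mean(axis=0), axis=1).max()
    if abs(dx) > sweep_min and abs(dx) > 2 * abs(dy):
        return "swipe"   # left-to-right pass over the part
    if spread < hold_max:
        return "point"   # hand held at the defect location
    return "unknown"
```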

Fraunhofer has also built a Smart Control Room, where people can interact with the room itself and use pointing gestures to operate remote displays. The room recognises what actions are taking place at that moment and offers the appropriate information and tools. ‘Since gesture detection does not depend on display screens, we can implement applications that use no monitors, like the gesture interaction here with real objects,’ said Schick.
