Painful loss in the human vs computer-vision battle


A computer-vision system has been found to be 30 per cent better than humans at identifying whether an expression of pain is real or faked. The technology could be used to uncover pain fabrication in security, law, and medicine.

Human observers were asked to judge the veracity of pain expressions from individuals who were either undergoing cold pressor tests -- in which the subject's hand is immersed in ice water for a period of time -- or had been told to fake painful expressions. In total, 205 human observers were asked to decide whether the pain was real. The cold pressor test is a common method of measuring a subject's pain tolerance.

The research was published in the most recent edition of Current Biology. The authors were: Dr Marian Bartlett, research professor, Institute for Neural Computation, University of California, San Diego; Dr Gwen Littlewort, co-director of the institute's Machine Perception Laboratory; Dr Mark Frank, professor of communication, University at Buffalo; and Dr Kang Lee, Dr Eric Jackman Institute of Child Study, University of Toronto.

The researchers employed the Computer Expression Recognition Toolbox (CERT), an end-to-end system for fully automated facial-expression recognition that operates in real time. Developed by Bartlett, Littlewort, Frank and others, it was used to assess the accuracy of machine versus human vision.
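The idea behind this kind of system can be illustrated with a toy sketch. This is not the CERT toolbox itself: it assumes per-frame "mouth-opening" intensities have already been extracted by a face tracker (a hypothetical input), and uses a single hand-picked cue inspired by the finding that faked pain tends to show more regular, deliberate facial dynamics than genuine pain.

```python
# Illustrative sketch only -- not the actual CERT toolbox. Assumes we already
# have per-frame "mouth-opening" intensities (0..1) from a face tracker, and
# classifies on one cue: faked pain tends to show more regular, deliberate
# mouth-opening dynamics than genuine pain.
from statistics import pstdev

def classify_expression(mouth_opening, threshold=0.05):
    """Label a sequence of per-frame mouth-opening intensities.

    Low frame-to-frame variability suggests the regular, deliberate
    dynamics of a faked expression; high variability suggests genuine pain.
    The threshold is an arbitrary illustrative value.
    """
    deltas = [abs(b - a) for a, b in zip(mouth_opening, mouth_opening[1:])]
    return "genuine" if pstdev(deltas) > threshold else "faked"

# Hypothetical traces: an irregular (genuine) and a very regular (faked) one.
genuine_trace = [0.1, 0.7, 0.2, 0.9, 0.1, 0.8, 0.3]
faked_trace = [0.2, 0.4, 0.6, 0.8, 0.6, 0.4, 0.2]

print(classify_expression(genuine_trace))  # -> genuine
print(classify_expression(faked_trace))    # -> faked
```

A real system would, of course, combine many facial action-unit signals and a trained classifier rather than one thresholded feature.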

The research found that, even after training, the observers were only 55 per cent accurate at deciding whether a facial expression was real or fake; before training, they were less successful than if the decisions had been left to chance. The vision system, on the other hand, made the correct decision 85 per cent of the time.
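The headline "30 per cent better" figure can be read as the gap, in percentage points, between the two accuracy rates reported above:

```python
# Sanity-check of the reported figures (illustrative arithmetic only).
human_accuracy = 0.55    # trained human observers
machine_accuracy = 0.85  # computer-vision system

# Gap in percentage points between machine and human accuracy.
gap_points = round((machine_accuracy - human_accuracy) * 100)
print(gap_points)  # -> 30
```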

Bartlett commented: ‘[The system] managed to detect distinctive, dynamic features of facial expressions that people missed. Human observers just aren’t very good at telling real from faked expressions of pain.’ She went on to say that the system ‘can be applied to detect states in which the human face may provide important clues as to health, physiology, emotion or thought, such as drivers’ expressions of sleepiness, students’ expressions of attention and comprehension of lectures, or responses to treatment of affective disorders.’
