
Deep learning helps technicians analyse medical images


A trial using deep learning algorithms has shown that artificial intelligence has the potential to assist technicians and detect human errors in medical image handling.

System-on-chip manufacturer Socionext and Japanese AI software company Soinn presented results from the project at Medtec Japan, held in Tokyo from 19-21 April.

In the trial, Socionext extracted biometric data and delivered it to Soinn’s Artificial Brain, which learned to read subcutaneous fat thickness from abdominal ultrasound images. Soinn’s estimates were then compared with readings made by ultrasound technicians.

Soinn’s Artificial Brain read fat tissue thickness accurately, within a five per cent margin of error, for 80 per cent of the data. For some images, however, there were noticeable differences between the human readings and Soinn’s estimates.
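
The article does not describe how this agreement figure was calculated. Purely as an illustration, the short Python sketch below counts an estimate as accurate when it falls within five per cent of the technician’s reading; the function name and sample values are hypothetical, not data from the trial.

def share_within_margin(ai_estimates, technician_readings, margin=0.05):
    # Fraction of AI estimates that fall within `margin` (relative error)
    # of the corresponding technician reading.
    pairs = list(zip(ai_estimates, technician_readings))
    within = sum(1 for ai, human in pairs if abs(ai - human) <= margin * human)
    return within / len(pairs)

# Hypothetical subcutaneous fat thickness readings in millimetres
ai_mm = [12.1, 8.4, 15.0, 9.9]
technician_mm = [12.0, 8.0, 15.2, 11.5]
print(f"{share_within_margin(ai_mm, technician_mm):.0%} within a 5% margin of error")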

A review of these cases confirmed that human error, including mistakes in numerical input, was a common occurrence in the readings made by humans. Based on the findings, the companies believe AI has the potential to assist technicians in reading images and to detect human errors in medical image handling.

Deep learning, which is attracting attention in fields ranging from medical imaging to driverless cars, is generally thought to require hundreds of thousands of images for training. In contrast, Soinn needed only about 700 images.

