Deep learning algorithms improve tumour detection in medical images

Researchers from the Fraunhofer Institute for Medical Image Computing (MEVIS) in Bremen, Germany, have developed software that uses deep learning to facilitate the detection of tumours in images acquired over the course of cancer treatment. The package will be demonstrated in Chicago at the world’s largest radiology meeting, RSNA, from 27 November to 2 December.

CT and MRI scans are often performed to determine whether a tumour has shrunk during the course of a cancer treatment. In most cases, tumour progression is evaluated only visually, which means new tumours are often overlooked. The new program, trained beforehand on large data sets to recognise common features and patterns, can automatically identify and highlight changes in tumour images, bringing to physicians’ attention findings that might previously have gone unnoticed.

‘Our program package increases confidence during tumour measurement and follow-up,’ explained Mark Schenk from Fraunhofer MEVIS. ‘The software can, for example, determine how the volume of a tumour changes over time and supports the detection of new tumours.’ 
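As a rough illustration of the volume measurement described in the quote above, the sketch below estimates tumour volume from a binary segmentation mask and reports the change between two visits. It is not the Fraunhofer MEVIS implementation; the use of the nibabel library, the file names and the availability of per-visit masks are assumptions made purely for the example.

```python
# Minimal sketch (not the Fraunhofer MEVIS implementation): estimate tumour
# volume from a binary segmentation mask and report the change between two
# follow-up scans. File names and per-visit masks are illustrative assumptions.
import numpy as np
import nibabel as nib  # common open-source library for reading NIfTI images


def tumour_volume_ml(mask_path: str) -> float:
    """Return tumour volume in millilitres from a binary NIfTI mask."""
    img = nib.load(mask_path)
    mask = np.asarray(img.dataobj) > 0                       # voxels labelled tumour
    voxel_volume_mm3 = np.prod(img.header.get_zooms()[:3])   # voxel size in mm
    return mask.sum() * voxel_volume_mm3 / 1000.0            # 1 ml = 1000 mm^3


baseline = tumour_volume_ml("visit1_tumour_mask.nii.gz")     # hypothetical files
follow_up = tumour_volume_ml("visit2_tumour_mask.nii.gz")
change_pct = 100.0 * (follow_up - baseline) / baseline
print(f"Tumour volume: {baseline:.1f} ml -> {follow_up:.1f} ml ({change_pct:+.1f}%)")
```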

Existing computer programs look for clearly defined image features, such as certain grey values designated by experts to mark organ outlines. ‘However, this can often lead to errors,’ commented Fraunhofer researcher Markus Harz. ‘The [existing] software assigns areas to the liver that do not belong to the organ.’ Physicians must then correct these errors before continuing, a process that can be quite time-consuming.

The newly developed software was trained with CT liver images from a total of 149 patients; results showed that the more data the program analysed, the better it could automatically identify liver contours. According to Fraunhofer MEVIS, the package’s deep learning approach ‘reaches far beyond existing approaches’ and promises improved results that will save physicians valuable time.
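The article does not describe the network Fraunhofer MEVIS uses, but learning liver contours from CT data is typically framed as per-pixel segmentation. The sketch below shows that general idea with a deliberately small PyTorch network and synthetic stand-in data; the architecture, tensor shapes and training settings are illustrative assumptions, not the MEVIS design.

```python
# Minimal sketch, not the MEVIS software: train a small fully convolutional
# network to predict liver masks from CT slices. Dataset, shapes and model
# are stand-ins chosen only to keep the example self-contained and runnable.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-in for a real CT dataset: 32 single-channel 128x128 slices with
# binary liver masks. A real pipeline would load DICOM/NIfTI volumes instead.
slices = torch.randn(32, 1, 128, 128)
masks = (torch.rand(32, 1, 128, 128) > 0.5).float()
loader = DataLoader(TensorDataset(slices, masks), batch_size=4, shuffle=True)

# Deliberately small encoder; production systems typically use U-Net-style
# models with downsampling, upsampling and skip connections.
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 1),  # per-pixel liver/background logit
)

optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(5):
    for x, y in loader:
        optimiser.zero_grad()
        loss = loss_fn(model(x), y)  # compare predicted mask logits to labels
        loss.backward()
        optimiser.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```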

A further application of the deep learning approach is image registration, in which the software aligns images from different patient visits so that physicians can compare them with ease. Machine learning can aid the particularly difficult task of locating bone metastases in the torso, in which the hip bones, ribs, and spine are visible. Currently, these metastases are often overlooked because of time constraints in clinical practice. Deep learning methods can help discover metastases reliably and thus improve treatment outcomes.
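Registration itself is a well-established building block of such follow-up comparisons. The hedged sketch below uses the open-source SimpleITK toolkit (not the MEVIS software) to rigidly align a follow-up CT to a baseline scan so the two visits can be compared voxel by voxel; the file names are placeholders.

```python
# Minimal sketch using the open-source SimpleITK toolkit (not the MEVIS
# software): rigidly align a follow-up CT to a baseline CT. File names are
# placeholders chosen for illustration only.
import SimpleITK as sitk

fixed = sitk.ReadImage("visit1_ct.nii.gz", sitk.sitkFloat32)   # baseline scan
moving = sitk.ReadImage("visit2_ct.nii.gz", sitk.sitkFloat32)  # follow-up scan

registration = sitk.ImageRegistrationMethod()
registration.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
registration.SetOptimizerAsRegularStepGradientDescent(
    learningRate=1.0, minStep=1e-4, numberOfIterations=200)
registration.SetInitialTransform(
    sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY))
registration.SetInterpolator(sitk.sitkLinear)

transform = registration.Execute(fixed, moving)

# Resample the follow-up scan into the baseline's coordinate system so that
# corresponding anatomy lands on the same voxels for side-by-side reading.
aligned = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0,
                        moving.GetPixelID())
sitk.WriteImage(aligned, "visit2_ct_aligned.nii.gz")
```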
