Augmented reality aids brain surgery

A neurosurgeon in the US has used augmented reality to overlay scans onto live images during brain surgery.

Dr Joshua Bederson, chairman of neurosurgery at Mount Sinai Health System, used Leica Microsystems’ CaptiView tool for the first time to treat a patient with a brain aneurysm, a dilated blood vessel caused by a weakness in the vessel wall.

CaptiView overlays images from image-guided surgery (IGS) software, such as brain scans, onto the live feed seen through the microscope. Surgeons would usually view these images on a separate monitor. With the images as an overlay in the microscope eyepieces, the surgeon can stay focused on their patient.
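Leica has not published how CaptiView composites the two image streams, but the general idea of overlaying a semi-transparent scan onto a live video frame can be sketched with simple alpha blending. All names and the blend factor below are illustrative assumptions, not details of the actual product:

```python
import numpy as np

def blend_overlay(live_frame, scan_overlay, alpha=0.35):
    """Composite a semi-transparent scan image onto a live video frame.

    Hypothetical sketch: weights each pixel of the scan by `alpha`
    and the live feed by (1 - alpha), as in basic alpha blending.
    """
    live = live_frame.astype(np.float32)
    scan = scan_overlay.astype(np.float32)
    out = (1.0 - alpha) * live + alpha * scan
    return np.clip(out, 0, 255).astype(np.uint8)

# Stand-in data: a uniform grey "live feed" and a black "scan"
frame = np.full((4, 4, 3), 200, dtype=np.uint8)
scan = np.zeros((4, 4, 3), dtype=np.uint8)
print(blend_overlay(frame, scan, alpha=0.5)[0, 0])  # → [100 100 100]
```

A real system would also need to register the scan to the patient's anatomy before blending; the sketch covers only the compositing step.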

Aneurysms may rupture at any time because of the weak spot in the blood vessel wall. This poses a considerable risk to the patient – according to the American Brain Aneurysm Foundation, 40 per cent of patients with ruptured brain aneurysms do not survive and, of the survivors, two thirds suffer permanent neurological damage.

The current craze of Pokémon Go uses similar augmented reality technology to combine virtual- and real-world images, while augmented reality is also being developed for car drivers. CaptiView technology links the IGS software and the microscope, allowing surgeons to perform procedures while looking solely through the eyepieces, helping them maintain concentration.

Further information:

Leica Microsystems
