Under the microscope

‘The observation of specimens using microscopy is experiencing a renaissance,’ says Dr Winfried Busch, product manager for life sciences at Olympus Life and Materials Sciences Europa. While scientists conventionally borrowed vision technology from industrial applications, suppliers are finally realising that vision in the laboratory has vastly different requirements: it must cope with low and varying light levels and the high level of detail that needs to be observed.

The conditions of living samples also need to be catered for, something that Olympus specialises in. Its imaging systems are integrated into incubators to keep CO2 and temperature levels stable for cell cultures and tissues. The approach has proved so successful that the study of whole, living animals, ranging in size from fruit fly embryos to rats, is increasingly popular. This obviously requires a bigger field of view, and since such animals are often used to study the effects of drugs, the systems also have to cater for longer periods of observation.



A rat's kidney, taken by Dr Winfried Busch of Olympus Life and Materials Sciences Europa



In addition, automation is proving increasingly important, allowing scientists to ‘move away from mundane tasks to what they do best,’ in the words of Greg Hollows, the vision integration partners coordinator for Edmund Optics. Many experiments take hours to run; now automated systems can carry them out, rather than graduate students having to work unsociable hours to make the observations.

Automation also allows scientists to perform many more experiments than would otherwise be possible using microarrays – plates of possibly thousands of wells, each containing either living cells and tissue or a chemical reaction to be observed. This not only speeds up the scientific process, it allows permutations of different factors to be tested that would have been practically impossible using manual methods alone. The wells are tiny, so the development of motion control – down to nanometre precision – has been important.

An increase in resolution has been instrumental to this success – even though the cost is still greater than most laboratories would find comfortable. With greater resolution, more of the wells can be observed at the required detail at any one time, providing faster throughput. An increase in processing speed means that even these large images can be analysed in a tenth of the time it would have taken previously.

‘We were leveraging technology from other industries, but now it has moved above and beyond that,’ says Hollows. ‘It has been pushed in new directions. We are almost getting to the point where we can’t push it any further, but pharmaceutical companies have the money to help move products forward.’



Magnified image of crystals, captured by Edmund Optics' instruments


It is obviously essential that the camera can be integrated easily with the microscope – both in its physical design and in its functionality. An important feature for any camera used in these applications is back-focus adjustment, which allows the camera to stay in focus with the microscope as it zooms in and out on the sample.

However, some laboratories just cannot afford the high-resolution equipment, in which case they would need to take more images using magnification lenses to achieve the required detail, and then piece them together.

Firstsight Vision has tried to combat the high price of megapixel cameras by releasing the uEye LE series of non-industrial cameras, which offer the high resolution and other capabilities needed for microscopy, but few of the features required for machine vision, such as instant image capture. In addition, the cameras are attractive to look at – not a consideration in the rugged conditions of the factory.

In fluorescence experiments, camera sensitivity is an even greater issue than resolution, as the system must detect the notoriously low levels of light emitted by fluorescent dyes when they are excited by external radiation. The Toshiba IK-1000ME, distributed in the UK by Firstsight Vision, addresses this with an electron multiplying CCD (EMCCD) detector that provides colour sensitivity at light levels that were previously impossible to detect. Colour sensitivity is especially important in biofluorescence, to distinguish the wavelengths emitted by the different types of DNA and proteins.

In addition to detecting low levels of light, the scientist also wants to be able to differentiate between the different shades. Kyle Voosen, vision product manager for National Instruments, says there is a need for ‘deep pixels’, which carry more data about the intensity of the light. For example, one camera may only be able to differentiate between 256 levels of brightness (8 bits per pixel), whereas another may identify more than 65,000 shades of grey (16 bits).
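As a rough sketch of why this bit depth matters for faint fluorescence signals (an illustration in NumPy only, not the output of any particular camera), two intensities that collapse onto the same value at 8 bits can remain distinguishable at 16 bits:

import numpy as np

# Illustrative only: the same two relative intensities quantised at two bit depths.
# An 8-bit sensor resolves 256 grey levels (0-255); a 16-bit "deep pixel" sensor
# resolves 65,536, so two very similar faint signals need not merge into one value.
relative_intensity = np.array([0.5000, 0.5001])   # two nearly identical signals

as_8bit = np.round(relative_intensity * 255).astype(np.uint8)
as_16bit = np.round(relative_intensity * 65535).astype(np.uint16)

print(as_8bit)    # [128 128]      -> indistinguishable at 8 bits
print(as_16bit)   # [32768 32774]  -> still separated at 16 bits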

It’s not just the cameras that need to cater for this – the software needs to be able to separate the background noise from the light produced by the fluorescence. Suppose there are 256 shades of grey. In the past, a piece of software might have formed a binary image from the data, returning a value of one if a pixel’s brightness was above a threshold of, say, 100, and zero otherwise.
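A minimal sketch of that fixed-threshold approach, assuming an 8-bit greyscale image held in a NumPy array (the function name and values here are illustrative, not any vendor’s API):

import numpy as np

# Global (fixed) thresholding: one cut-off applied to every pixel in the image.
def global_threshold(image: np.ndarray, threshold: int = 100) -> np.ndarray:
    """Return a binary mask: 1 where a pixel is brighter than the threshold."""
    return (image > threshold).astype(np.uint8)

# Tiny synthetic "fluorescence" image: a bright spot on a dim background.
image = np.full((5, 5), 40, dtype=np.uint8)
image[2, 2] = 180
print(global_threshold(image, threshold=100))   # 1 only at the bright spot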

However, this crude method isn’t well suited to microscopy, because the illumination can vary significantly depending on the position of the plate with respect to the microscope. To combat this, LabVIEW includes a function called dynamic thresholding, which automatically varies the threshold depending on the position of the pixel relative to the illumination, giving a far more accurate picture.
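The general idea can be sketched as a local (adaptive) threshold – this is not NI’s implementation, just an illustration using SciPy, with the function name, window size and offset chosen for the example – in which each pixel is compared against the mean brightness of its own neighbourhood rather than a single global cut-off:

import numpy as np
from scipy.ndimage import uniform_filter

# Local (adaptive) thresholding: uneven illumination across the plate no longer
# pushes whole regions above or below one fixed threshold, because every pixel
# is judged against its own surroundings.
def local_threshold(image: np.ndarray, window: int = 31, offset: float = 10.0) -> np.ndarray:
    """Return a binary mask: 1 where a pixel exceeds its local mean by `offset`."""
    local_mean = uniform_filter(image.astype(float), size=window)
    return (image.astype(float) > local_mean + offset).astype(np.uint8)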

‘Back when I was at school, it took hours to program this out, both in terms of my time and the processing time, but it has really come into its own over the past two years,’ says Voosen. ‘It is not a new algorithm, but now it is a lot easier for PCs to do in a reasonable amount of time.’

This method is particularly useful for simple image analysis such as counting cells, a typical application of fluorescence. This can give information about the growth of cells and how they are responding to their environment – useful in fields such as drug discovery and fertiliser development.

The software obviously needs to be able to identify the separate cells rather than just a large mass, which can be tricky given the different sizes and shapes of cells, and the fact that they may lie in multiple layers. This process of separation is called segmentation, and LabVIEW includes the watershed algorithm for the purpose. The method treats the shade of grey in each pixel as a kind of contour map, whose hills and valleys separate sections of the image – providing segmentation of the cells.
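A hedged sketch of marker-based watershed segmentation of touching cells – using scikit-image and SciPy rather than the LabVIEW function, with `binary` assumed to be a boolean cell mask produced by a thresholding step and `segment_cells` a name chosen for the example:

import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed
from skimage.feature import peak_local_max

def segment_cells(binary: np.ndarray) -> np.ndarray:
    """Label touching cells in a boolean mask; returns an integer label image."""
    # Distance from each cell pixel to the background: cell centres become peaks.
    distance = ndi.distance_transform_edt(binary)
    # One seed per local peak, i.e. roughly one marker per presumed cell.
    coords = peak_local_max(distance, min_distance=5, labels=binary)
    seeds = np.zeros(distance.shape, dtype=bool)
    seeds[tuple(coords.T)] = True
    markers, _ = ndi.label(seeds)
    # Flood the inverted distance map; the "valleys" between peaks become boundaries.
    return watershed(-distance, markers, mask=binary)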

Voosen believes ease of use is even more important in the laboratory than on the factory floor. ‘A user can have an expensive camera, with precision motion equipment, and efficient, advanced algorithms, but it must be easy to use. A scientist should not have to be a computer scientist to be able to use it.’ LabVIEW includes various ‘assistants’ that work from one of the scientist’s own images to generate the necessary acquisition and analysis code, limiting the amount of programming required.

However, according to Greg Hollows from Edmund Optics, it is the laboratories that must take ultimate responsibility for how successful these systems are, by knowing the requirements of their applications before they purchase the equipment. ‘They need to identify what they really need, and not just what they want. We can manipulate the laws of physics, but we cannot break them.’



The use of vision in laboratories is not limited to life sciences research: it is becoming increasingly popular in the materials sciences too. An innovative X-ray imaging method, computed tomography (CT), is being used by the Fraunhofer Institut für Produktionstechnik und Automatisierung IPA in metrology labs for its industrial partners.

The Fraunhofer team, led by Dr Kai-Udo Modrich, has developed a method of CT that creates a 3D X-ray image of an object, with each voxel (3D pixel) containing a shade of grey that gives the density of the object at that point. To achieve this, the object is rotated and 800 2D X-ray pictures are taken; reconstruction software then pieces the separate images together.
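The principle can be illustrated – as a sketch only, standing in for the Fraunhofer pipeline rather than reproducing it – on a single 2D slice with scikit-image’s Radon transform tools, where each column of the sinogram plays the role of one projection taken as the object rotates, and filtered back-projection recovers a map whose grey values correspond to density:

import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

slice_2d = shepp_logan_phantom()                        # synthetic test object
angles = np.linspace(0.0, 180.0, 800, endpoint=False)   # 800 projection angles
sinogram = radon(slice_2d, theta=angles)                # simulate the projections
reconstruction = iradon(sinogram, theta=angles)         # filtered back-projection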

The image can be analysed to give dimensional measurements (to an accuracy of 10μm) and to detect faults within the material. However, its most exciting application is in prototyping and simulation, reducing the time for a product to move from the drawing board to the end user.

Once a physical rapid prototype has been created, CT is used to scan the prototype into a computer and create a virtual prototype. The design can then be modified and simulations can be run on the model before a final design is decided on.
