
Visualising the future of research

The scope of scientific research has become more ambitious over the last decade than ever before. Whereas it took researchers 13 years during the 1990s and early 2000s to sequence one whole human genome, scientists taking part in the 1000 Genomes Project plan to sequence the genomes of 1,000 people from across the world in just two years, to pinpoint more accurately the variations that can lead to disease.

Outside of genetics, recent improvements in computing technology now allow scientists to model the behaviour of cars, the human heart and even the world’s climate to an unprecedented level of accuracy. It would be easy to question where an established technology like computer imaging fits in alongside these new techniques.

The answer, of course, is that it is more important than ever before to be able to mine data from experiments to an increasing – and some would say escalating – level of detail. The computer simulations are only as accurate as the data that feeds them, and modern genetics research captures a volume of data that would simply be impossible to record manually. It may have once been possible for a scientist to record measurements by hand into their paper notebook, but the modern researcher needs to find more than the human eye can see, and for this they are calling on vision technology to feed their work.

In the life sciences, imaging systems, such as those supplied by Syngene, are used to determine which genes and proteins are present in a sample. The analysis is so complicated that only a computer could successfully transform the data, acquired by a camera, into useful information, which can then be used to provide a DNA fingerprint, or to find the effects of a drug on the production of a certain protein.


A 2D gel captured by the Dyversity imaging system from Syngene. Each dot represents a different protein found in a sample.

The process, called gel electrophoresis, follows a number of steps that can take days to perform. First, a gel is prepared in a glass box and allowed to set with a row of small wells that will later hold the samples. Once the wells have been filled with the sample, an electric field is applied to the gel, which encourages molecules within the sample to move from one side of the sheet to the other.

The gel contains many cross-linked polymers, which act as a barrier to the different molecules within the sample. Roughly speaking, the bigger a molecule is, the harder it will be for it to pass through these cross-linked polymers, so the heavier molecules are left behind while the smaller ones shoot ahead. Once the electric field is turned off, the gel will contain a number of different ‘finishing lines’ that mark where the particular chemicals ended up within the plate of gel, separating the different components of the sample.

The result is a one-dimensional array that can be used to identify the different sections of DNA present in the mixture. ‘The patterns are like a barcode, with the different lines representing different bits of the genome,’ says Dr Paru Oatey, applications support manager from Syngene.
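As a rough illustration of the principle, the distance a band migrates is approximately linear in the logarithm of the fragment’s size over a useful range, so a ‘ladder’ of known fragment sizes run alongside the samples can be used to estimate the size of an unknown band. The sketch below shows the idea in Python; the ladder sizes and migration distances are invented for the example and are not taken from any real gel.

```python
import numpy as np

# Hypothetical calibration ladder: known fragment sizes (base pairs) and
# the distances (mm) they migrated in the gel. Values are illustrative only.
ladder_bp = np.array([1000, 750, 500, 250, 100])
ladder_mm = np.array([12.0, 16.5, 22.0, 30.5, 41.0])

# Over a useful range, migration distance is roughly linear in log10(size),
# so fit a straight line: log10(bp) = a * distance + b
a, b = np.polyfit(ladder_mm, np.log10(ladder_bp), 1)

def estimate_size(distance_mm):
    """Estimate fragment size (bp) from migration distance using the fit."""
    return 10 ** (a * distance_mm + b)

# Estimate the size of an unknown band that migrated 25 mm.
print(f"Estimated size: {estimate_size(25.0):.0f} bp")
```

Commercial analysis packages calibrate more carefully than this, but the sketch shows why the band positions alone can act as the ‘barcode’ Dr Oatey describes.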

The separation of different proteins is slightly more complex. Whereas the mass can act as a signature for different molecules of DNA, it is possible to have many proteins with the same mass. To solve this problem, researchers separate the molecules by charge along one axis, and then by mass along the other axis, to give a 2D array of dots rather than lines. The result is still a unique pattern that can be used to help to identify the different proteins.

Once these patterns have been created, they must be treated with a dye to make them visible to the imaging system. These dyes may be evident in the visible spectrum, where they show up as silver or blue lines or spots, in which case the researcher would backlight the gel to highlight the patterns and capture the result on a high-resolution (4- to 6-megapixel) camera. Alternatively, the dye may need to be excited first with UV or blue light, in which case a filter must be employed to separate the output signal from the original light source.
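Once the gel has been imaged, the first task for the software is to locate the spots against the background. The snippet below is a deliberately simplified sketch of that step, using a synthetic image and generic scientific Python tools; it is not the algorithm used in any commercial package, and the spot positions and threshold are assumptions made for the example.

```python
import numpy as np
from scipy import ndimage

# Build a small synthetic '2D gel': a noisy background with a few bright spots.
rng = np.random.default_rng(0)
image = rng.normal(10, 2, size=(256, 256))           # camera-like noise floor
yy, xx = np.mgrid[0:256, 0:256]
for cy, cx in [(60, 80), (120, 200), (180, 50)]:     # illustrative spot centres
    image += 50 * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 4 ** 2))

# Smooth slightly, then threshold: anything well above background is a spot.
smoothed = ndimage.gaussian_filter(image, sigma=2)
mask = smoothed > smoothed.mean() + 3 * smoothed.std()

# Label connected regions and report centre-of-mass and integrated intensity.
labels, n_spots = ndimage.label(mask)
centres = ndimage.center_of_mass(smoothed, labels, range(1, n_spots + 1))
volumes = ndimage.sum(smoothed, labels, range(1, n_spots + 1))

for (cy, cx), vol in zip(centres, volumes):
    print(f"spot at ({cy:.0f}, {cx:.0f}), integrated intensity {vol:.0f}")
```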

The images, which may contain up to 1,000 spots, must then be compared to the results from a ‘model’ sample. For example, if a biopsy had been taken from a liver cancer, the scientists might compare the resulting pattern to one from a healthy specimen. As Dr Oatey explains: ‘The software would make comparisons between two or more conditions to find what the proteins are like and whether they have been altered by looking at the different positions of the spots in the gel. The proteins may not be present, or they may be over-expressed. It allows scientists to identify proteins of interest that may be a target for drugs.’
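In greatly simplified form, the comparison Dr Oatey describes amounts to matching spots between the two gels and asking how much each one’s intensity has changed. The sketch below assumes the spots have already been detected and matched; the intensity values and the two-fold threshold are illustrative choices, not part of any commercial software.

```python
import numpy as np

# Hypothetical integrated intensities ('spot volumes') for spots that have
# already been matched between a healthy gel and a diseased gel.
spots = {
    "spot_01": (1500.0, 1480.0),   # (healthy, diseased) - essentially unchanged
    "spot_02": (800.0, 3900.0),    # much stronger in the diseased sample
    "spot_03": (2200.0, 150.0),    # much weaker in the diseased sample
}

FOLD_CHANGE_THRESHOLD = 2.0  # flag anything that changes by two-fold or more

for name, (healthy, diseased) in spots.items():
    log2_ratio = np.log2(diseased / healthy)
    if abs(log2_ratio) >= np.log2(FOLD_CHANGE_THRESHOLD):
        direction = "over-expressed" if log2_ratio > 0 else "under-expressed"
        print(f"{name}: {direction} (log2 ratio {log2_ratio:+.2f}), candidate of interest")
    else:
        print(f"{name}: no substantial change (log2 ratio {log2_ratio:+.2f})")
```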

To put a specific name to the protein of interest, the researcher would need to cut it from the gel and run a mass spectrometry test for further clarification. The biggest difficulty of gel image analysis lies in the fact that it is often not possible to run the experiment in exactly the same way twice, so the same sample may give a seemingly different pattern to be analysed. The analysis software must see through these differences to find the underlying information, which is no mean feat.

‘Many of the variables are not totally reproducible,’ explains Oatey. ‘The user would run replicates of samples from both a healthy sample and the patient. The software must take the replicates to perform a pattern analysis and find an average pattern. The different patterns (from the two biopsies) are then compared.

‘We have several algorithms which do a background subtraction and filtering. Following that, the images are aligned to make sure the common patterns are aligned to highlight the differences quickly. It is important to optimise the images – for example, if the gel has been torn it would require a manual alignment. Then the software looks at the various characteristics of the spots to define the boundaries between the marks. After this, the spots are marked to see the differences between the normal and the diseased biopsy.


The Dyversity 2D gel imaging system from Syngene.

‘Users used to consider image analysis as the bottleneck to the whole process. Some users would spend days in front of a computer optimising and changing the analysis to find the best results. We like to think we are faster than that – with our system, it shouldn’t take more than an hour.’
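The steps Oatey describes – background subtraction, filtering and aligning the images before the spots are detected – are standard image-processing operations, and a generic, deliberately simplified version of them can be sketched as follows. This is not Syngene’s algorithm: the morphological background estimate and the phase-correlation alignment are common stand-ins chosen purely for illustration.

```python
import numpy as np
from scipy import ndimage

def subtract_background(image, size=25):
    """Estimate a slowly varying background with a morphological opening
    (a 'rolling-ball'-like operation) and subtract it."""
    background = ndimage.grey_opening(image, size=(size, size))
    return image - background

def denoise(image, sigma=1.5):
    """Light Gaussian smoothing to suppress camera noise."""
    return ndimage.gaussian_filter(image, sigma=sigma)

def estimate_shift(reference, moving):
    """Estimate the (row, col) shift to apply to 'moving' so it overlays
    'reference', using phase correlation."""
    cross_power = np.fft.fft2(reference) * np.conj(np.fft.fft2(moving))
    cross_power /= np.abs(cross_power) + 1e-12
    correlation = np.abs(np.fft.ifft2(cross_power))
    peak = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Convert the peak location to a signed shift.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, correlation.shape))

def align(reference, moving):
    """Shift 'moving' so its common pattern lines up with 'reference'."""
    return ndimage.shift(moving, estimate_shift(reference, moving))

def preprocess_pair(reference, moving):
    """Background-subtract, filter and align a pair of gel images."""
    ref = denoise(subtract_background(reference))
    mov = denoise(subtract_background(moving))
    return ref, align(ref, mov)
```

A pair of gel images would be passed through preprocess_pair() before the spot detection and comparison steps sketched earlier; real packages add spot-boundary modelling and the manual alignment Oatey mentions for torn gels.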

According to Oatey, a shift from genetics research to proteomics research is changing the requirements for imaging systems in these situations.

‘People believed that once we had sequenced the entire genome we would be able to cure the world of everything. But DNA can be transcribed into several forms of RNA, which in turn can be translated into many different proteins. The emphasis is now on protein research, which is producing better information… Once we entered proteomics, we needed more sensitive cameras with high resolutions to see a better picture of the proteins being separated.’

Outside of genomics and proteomics, the wider life sciences are undoubtedly a growing application area for imaging equipment. ‘We are seeing a big push of scientific imaging into the life sciences to speed up environmental monitoring and drug discovery,’ says Mark Riches, managing director of Invisible Vision, a company that specialises in scientific imaging. ‘The most obvious application would be the imaging of cells down a microscope.’

This application, which may seem like a basic task for the biologist, presents its own challenges for the imaging equipment. A bright light source, usually required to capture a good image, would literally cook the cell being studied, so it is necessary to use highly sensitive cameras that can operate at low light levels. In addition, the cell’s tiniest movements will be magnified many times, making them appear much faster than they really are, so it is necessary to use a camera that can capture images at a very fast rate, without blur.

It is a common problem throughout scientific research, and not just the life sciences. Engineers need to study fuel injection in microscopic detail to find the most efficient way of feeding a car’s engine, or the spray of ink in an inkjet printer. The solution is to choose a camera with a short exposure time, to reduce the blur, and a fast frame rate, to allow the user to capture many images of the object throughout its journey.
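The trade-off can be made concrete with a little arithmetic: the blur, measured in pixels, is roughly the object’s apparent speed at the sensor multiplied by the exposure time and divided by the pixel size. The numbers in the sketch below are purely illustrative.

```python
# Back-of-envelope motion-blur estimate; all numbers are illustrative.
object_speed = 1.0e-3       # object speed at the sample, m/s (1 mm/s)
magnification = 40.0        # optical magnification onto the sensor
pixel_size = 6.5e-6         # sensor pixel pitch, m
exposure_time = 1.0e-3      # exposure time, s (1 ms)

# Apparent speed at the sensor and the resulting blur during one exposure.
sensor_speed = object_speed * magnification               # m/s at the sensor
blur_pixels = sensor_speed * exposure_time / pixel_size

# Exposure needed to keep the blur below one pixel.
max_exposure = pixel_size / sensor_speed

print(f"Blur at {exposure_time * 1e3:.1f} ms exposure: {blur_pixels:.1f} pixels")
print(f"Exposure for under 1 pixel of blur: {max_exposure * 1e6:.0f} microseconds")
```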

Ultimately, this may just be part of a bigger trend that is changing the perceived value of images in scientific research. Whereas it may once have been enough to provide a pretty picture of an experimental result, the value of an image now lies in the data it can provide: ‘Huge research is now going into computer models, but they must be confirmed with experiments. The big push is to get real numbers to feed back into the models.

‘There is the need for more information from 3D imaging from multiple perspectives. Most companies can’t afford this at the moment, so there is currently a push to lower the cost of 3D imaging.’

It is likely that this will rely on both better software to mine the data from the raw image and better equipment to capture more data. To achieve these goals, it may in fact be necessary to draw on a new type of technology altogether.

‘There will be many applications for scientific imaging, but not necessarily with the conventional technology we’re used to,’ predicts Riches. Ultrasound and MRI scanning are prime examples of this kind of innovation, and it is likely that a similar, though currently undiscovered, technology will be necessary to meet the requirements of the future. For now, however, it seems we will need to rely on improving the current technology to keep up with the latest scientific research, by providing greater sensitivities, higher resolutions and faster image capture.


