
Automated classification of crystallisation images

Posted: 23 May 2007

The results of many crystallisation experiments must be examined in order to identify conditions that will produce diffraction quality crystals. The use of robots has increased the number of experiments performed in most laboratories and, in structural genomics centres, tens of thousands of experiments can be performed every day. As each experiment must be assessed regularly over a period of time, visual inspection is becoming increasingly impractical and automated imaging systems are now available to record the results; the aim of this research is the development of software to analyse and classify images from crystallisation trials.

Structural genomics aims to implement high-throughput technologies in order to speed up molecular structure determination; however, the process of obtaining diffraction quality crystals continues to present a major bottleneck. Currently there is no a priori method to determine the optimum crystallisation strategy for a particular protein, and the process remains highly empirical. Several important variables, which often interact, such as protein concentration, precipitant, pH and temperature, must be tested in combination. Although various sampling techniques have been proposed to reduce the number of possible combinations, the screening of conditions is a largely trial-and-error process.

The automation of nano-volume crystallisation has vastly increased the number of experiments performed in most laboratories and, in high-throughput mode, robotic systems can successfully perform tens of thousands of experiments a day1. The results of these experiments must be assessed repeatedly over a period of time and incorporated into optimisation protocols. Visual inspection of the results is becoming increasingly impractical and the development of tools to automate analysis is essential. Integrated storage and imaging systems allow the results of crystallisation trials to be recorded at regular intervals, enabling crystallographers to view their experiments on a computer screen rather than through a microscope. However, evaluating the many images produced is a monotonous task, with the majority of experiments yielding negative results, and software to automatically analyse the images is now being developed by a number of research groups2-8.

In the software developed here, ALICE (Analysis of Images from Crystallisation Experiments), identification of crystals is the primary objective but, as reliable classification of other experimental outcomes provides information for subsequent trials, the aim is to provide a score for each image reflecting how close the experimental conditions are to those required for crystal growth. This allows the images to be examined in order of merit and as soon as some high-scoring conditions are confirmed, no further images need be considered.

Currently, individual objects are identified within the crystallisation drop and used to classify a crystallisation image into a number of categories2,3. However, this method fails to take into account the relationships between objects or their spatial arrangement; higher classification rates can therefore be achieved by combining this approach with the use of textural information obtained from the crystallisation drop as a whole. Fourier analysis and wavelet analysis have been used widely in image analysis and both have been applied to the classification of crystallisation images. Pan et al.4 use features based on texture and the Gabor wavelet decomposition to classify individual blocks within the crystallisation image, while Bern et al.5 make use of Fourier transforms. Here, features extracted from crystallisation images using both Fourier and wavelet methods are used to provide classification variables, and the results are combined with those from the object-based approach.

Imaging system

ALICE is being developed in collaboration with the Oxford Protein Production Facility (OPPF) at the University of Oxford, where it is now used routinely to annotate images. Many of the images shown were supplied by the OPPF, where crystallisation experiments are performed in 96-well Greiner plates (micro-titre format) and the images are taken using an automated Oasis 1700 imaging system (Veeco, Cambridge, UK). Native images are 1024 x 1024 x 8 bit bitmap (BMP) images (~1 Megabyte in size, corresponding to a pixel width of about 3 µm), which are cropped to the size of the well to give images of 700 x 750 pixels.

Wavelet analysis

The wavelet transform decomposes an image into different levels of detail and is therefore well suited to texture analysis; here we use the multi-resolution analysis of Mallat9 with the simplest wavelet function, due to Haar10. Each level of the transform applies a low-pass filter, giving a smoothed approximation, and a high-pass filter, giving the high frequency details, in both the x (horizontal) and y (vertical) directions. Thus, for a single level transform, we obtain four sub-images, as shown in Figure 1.

The vertical details in the image at this scale are seen when the high-pass filter is applied along x and the low-pass filter along y, and vice versa for the horizontal details. Diagonal details are shown in the sub-image obtained by applying the high-pass filter in both directions. The process can be repeated by applying the filters to the smooth approximation, i.e. the sub-image obtained by applying the low-pass filter in both directions. At level k of the transform, each sub-image contains N/2^(2k) pixels, where N is the number of pixels in the original image.
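
As a rough illustration, this decomposition can be reproduced with the PyWavelets library; this is a minimal sketch with a random stand-in image, not the ALICE implementation, and the sub-image sizes simply follow the N/2^(2k) rule above.

```python
import numpy as np
import pywt

image = np.random.rand(1024, 1024)  # stand-in for a crystallisation image

# One level of the 2-D Haar transform gives the smooth approximation and
# the horizontal, vertical and diagonal detail sub-images of Figure 1.
approx, (horiz, vert, diag) = pywt.dwt2(image, "haar")

# Repeating the transform on the approximation gives further levels; at
# level k each detail sub-image contains N / 2^(2k) pixels.
coeffs = pywt.wavedec2(image, "haar", level=4)
for k, details in enumerate(reversed(coeffs[1:]), start=1):
    print(f"level {k}: detail sub-images of shape {details[0].shape}")
```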

Object-based classification

To speed up processing, the crystallisation drop is identified using the combined horizontal and vertical sub-images from a 4-level wavelet transform. This gives an image of approximately 2,000 pixels (in comparison to the 700 x 750 pixels in the original image), which can be used to reduce the image to the size of the drop and create a mask on a coarse grid, as shown in Figure 2.
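
A minimal sketch of this step is shown below; the combination rule (sum of absolute coefficients) and the mean-based threshold are assumptions, as the text does not specify them.

```python
import numpy as np
import pywt
from scipy import ndimage

image = np.random.rand(700, 750)        # stand-in for a cropped well image

coeffs = pywt.wavedec2(image, "haar", level=4)
h4, v4, _ = coeffs[1]                   # level-4 detail sub-images (~44 x 47)

# Combine the horizontal and vertical details (assumed: sum of magnitudes)
# and threshold to locate the drop on the coarse grid.
combined = np.abs(h4) + np.abs(v4)
mask = combined > combined.mean()       # assumed threshold
mask = ndimage.binary_fill_holes(mask)  # fill the drop interior
```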

The gradient of the resulting image is calculated and a threshold, determined by the intensity statistics, is applied to the gradient magnitudes to define potential object pixels. Objects are defined as connected sets of pixels above the threshold, after dilation and erosion operations are performed to prevent twisted boundaries (Figure 3).
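
The sketch below illustrates this object-pixel extraction with NumPy and SciPy; the mean-plus-two-sigma threshold is a stand-in for the intensity-statistics criterion, which is not specified in the text.

```python
import numpy as np
from scipy import ndimage

drop = np.random.rand(700, 750)        # stand-in for the masked drop image

# Gradient magnitude of the drop image.
gy, gx = np.gradient(drop)
grad = np.hypot(gx, gy)

# Threshold on the gradient magnitudes (assumed form: mean + 2 sigma).
objects = grad > grad.mean() + 2.0 * grad.std()

# Dilation followed by erosion (morphological closing) prevents twisted
# boundaries before connected sets of pixels are labelled as objects.
closed = ndimage.binary_closing(objects, structure=np.ones((3, 3)))
labels, n_objects = ndimage.label(closed)
print(f"{n_objects} candidate objects")
```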

Each object is evaluated separately in terms of characteristic features that can be quantified. For example, boundary-related variables include measures of curvature and the length of straight sections. Ordered patterns in the gradient direction within an object indicate the presence of regular objects, and various shape descriptors and statistical measures provide information about other types of object2. The feature vector obtained from 14 such variables is used to classify individual objects and an overall score is given to the image based on the object scores.
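
For illustration, a few shape descriptors of this kind can be computed per object with scikit-image's regionprops; these are examples of quantifiable features, not the 14 variables actually used in ALICE.

```python
import numpy as np
from skimage import measure

# 'labels' as produced by the connected-component step above.
labels = measure.label(np.random.rand(700, 750) > 0.999)

for region in measure.regionprops(labels)[:5]:
    print(region.label,
          region.area,          # object size in pixels
          region.eccentricity,  # elongation of the fitted ellipse
          region.solidity,      # area / convex hull area
          region.perimeter)     # boundary length
```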

Drop-based classification

The coarse mask is sufficient for the method involving classification of individual objects, as the drop edge and objects due to splatter outside the drop can be identified as such and ignored. However, a better mask is required for both Fourier and wavelet analysis, as the edge of the drop in particular would give rise to large wavelet coefficients. The mask is therefore refined by dilation until pixels identified as objects are reached; as objects outside the drop would prevent the mask from reaching the drop edge, this is performed after small objects beyond a certain distance from the centre of gravity of all object pixels have been deleted (Figure 4).
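
The refinement loop might look like the following sketch, in which the mask grows by repeated dilation but never into object pixels, so a closed ring of edge pixels halts the growth; the iteration cap is an assumption.

```python
import numpy as np
from scipy import ndimage

def refine_mask(coarse_mask, object_pixels, max_iter=500):
    """Grow the drop mask by dilation until object pixels (the drop edge)
    are reached; small objects outside the drop are assumed to have been
    deleted already, as described in the text."""
    mask = coarse_mask.copy()
    for _ in range(max_iter):
        grown = ndimage.binary_dilation(mask)
        new = grown & ~mask & ~object_pixels  # do not grow into objects
        if not new.any():
            break
        mask |= new
    return mask
```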

Wavelet analysis

Mallat9 considered the distribution of wavelet coefficients in relation to the error on a reconstructed image and showed that the histograms could be modelled by functions of the form

h(x) = K e^(−(|x|/α)^β)

where the parameters α and β can be determined from the first and second moments of the histogram. We have found that images from crystallisation experiments can also be modelled by these generalised Gaussian functions, and that differences between classes are reflected in the values of α and β10. Furthermore, we can use the lack of fit to provide further discriminatory variables that can be exploited for classification. A further variable, calculated for each wavelet sub-image, is the information entropy given by

H = −Σi p(xi) log p(xi)

where the probability, p(xi), is determined from the histogram of wavelet coefficients. These variables are calculated for each of the three detail sub-images on each of three wavelet levels.
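
A minimal sketch of the moment-based fit and the entropy descriptor follows; the moment-ratio estimator is a standard method for generalised Gaussian parameters, though the exact fitting procedure used in ALICE may differ, and the 256-bin histogram is an assumption.

```python
import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

def fit_ggd(coeffs):
    """Estimate scale alpha and shape beta from the first and second
    moments of the coefficient distribution (moment matching)."""
    m1 = np.abs(coeffs).mean()
    m2 = (coeffs ** 2).mean()
    ratio = m1 / np.sqrt(m2)
    # Solve Gamma(2/b) / sqrt(Gamma(1/b) Gamma(3/b)) = ratio for beta.
    f = lambda b: gamma(2 / b) / np.sqrt(gamma(1 / b) * gamma(3 / b)) - ratio
    beta = brentq(f, 0.15, 10.0)
    alpha = m1 * gamma(1 / beta) / gamma(2 / beta)
    return alpha, beta

def entropy(coeffs, bins=256):
    """Information entropy H = -sum p(x_i) log p(x_i) from the histogram."""
    hist, _ = np.histogram(coeffs, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return -np.sum(p * np.log(p))

coeffs = np.random.laplace(size=10_000)  # stand-in for a detail sub-image
print(fit_ggd(coeffs), entropy(coeffs))
```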

As well as the first-order statistics determined from each detail sub-image, we also calculate second-order statistics from joint probability distributions. The decay of the wavelet coefficients across the levels of the transform can be used to characterise different types of edges. Sharp changes such as crystal edges give rise to large wavelet coefficients across the scales, whereas smoother changes in grey-scale, due to shadows for example, produce wavelet coefficients that change gradually with subsequent levels of the transform. This information can be extracted by considering the correlation between corresponding wavelet coefficients on different levels of the transform. In order to increase the information content and provide some level of invariance to orientation, the three detail sub-images on each level are combined in the correlation matrices. Statistical measures are then calculated from the joint probability distributions obtained from the first and second levels, and from the second and third levels, of the transform. Further pair-wise combinations were considered but did not improve the classification results.
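
The sketch below pairs each coefficient with its "parent" on the next level (upsampling the coarser sub-image by two) and combines the three orientations before computing a correlation; the use of coefficient magnitudes and a simple correlation coefficient are assumptions standing in for the joint-probability measures described above.

```python
import numpy as np
import pywt

image = np.random.rand(1024, 1024)
coeffs = pywt.wavedec2(image, "haar", level=3)
details = list(reversed(coeffs[1:]))   # details[0] = level 1, etc.

def level_pair_correlation(fine, coarse):
    """Correlate coefficient magnitudes with their parents one level up,
    combining the three detail orientations."""
    child = np.concatenate([np.abs(d).ravel() for d in fine])
    parent = np.concatenate(
        [np.kron(np.abs(d), np.ones((2, 2))).ravel() for d in coarse])
    return np.corrcoef(child, parent)[0, 1]

print(level_pair_correlation(details[0], details[1]))  # levels 1 and 2
print(level_pair_correlation(details[1], details[2]))  # levels 2 and 3
```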

Fourier analysis

After normalising the image to account for varying intensities caused by drop curvature, we follow the approach described by Gonzalez and Woods11 for extracting classification variables from the Fourier transform (FT). This method involves determining the intensity of the power spectrum,

S(r,ϑ) = |F(r,ϑ)|²/N²

where F(r,ϑ) is the FT of the re-scaled image in polar coordinates, N is the number of pixels in the image, and r is measured from the zero spatial frequency point. By summing over angles and over radii, we can obtain a more global description in terms of the functions S(r) and S(ϑ), where

S(r) = Σϑ S(r,ϑ)

and

S(ϑ) = Σr S(r,ϑ)

For photographs of natural scenes, van der Schaaf and van Hateren12 found that the intensity of the radial power spectrum tends to depend on the inverse square of the spatial frequency, r. Similarly, we found that, for most crystallisation images, spectral power and spatial frequency are related by a power law, i.e. S(r) = Ar^α. We found that S(r) varied significantly between images at low spatial frequency but changed relatively little at high spatial frequency, and we therefore model S(r) separately for two regions determined by the radii13. In addition, we calculate the root-mean-square (r.m.s.) contrast, c_rms, of the input image,

c_rms = (1/x̄) √( Σi (xi − x̄)²/N )

where xi are the pixel intensities and x̄ is their mean, and the log-contrast, c_l, calculated by summing the logarithm of the power spectrum.
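
A minimal sketch of these descriptors is given below; the one-pixel radial bins, one-degree angular bins, low-frequency fitting range and exact spectrum normalisation are all assumptions.

```python
import numpy as np

drop = np.random.rand(700, 750)          # stand-in for the masked drop image
centred = drop - drop.mean()

# Power spectrum with the zero spatial frequency shifted to the centre.
F = np.fft.fftshift(np.fft.fft2(centred))
S = np.abs(F) ** 2 / centred.size ** 2

ny, nx = S.shape
y, x = np.indices(S.shape)
r = np.hypot(y - ny // 2, x - nx // 2).astype(int)
theta = np.degrees(np.arctan2(y - ny // 2, x - nx // 2)).astype(int) % 180

# S(r): sum over angles; S(theta): sum over radii (one-degree bins assumed).
S_r = np.bincount(r.ravel(), weights=S.ravel())
S_theta = np.bincount(theta.ravel(), weights=S.ravel())

# Power-law fit S(r) = A r^alpha by least squares in log-log space over an
# assumed low-frequency range.
rr = np.arange(2, 100)
alpha, logA = np.polyfit(np.log(rr), np.log(S_r[rr]), 1)
A = np.exp(logA)

# r.m.s. contrast: standard deviation of intensities over their mean.
c_rms = drop.std() / drop.mean()
print(alpha, A, c_rms)
```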

Following Papoulis14, the power spectrum, S, is separated into a product of a deterministic modulation function, M, and a probabilistic residue, ε, i.e. S = Mε. Although the residue is 'probabilistic' in nature, deviations from the model vary and much information remains in ε; statistical values determined from this function therefore provide useful descriptors.
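
Continuing the sketch above, the residue can be obtained by dividing the radial spectrum by the fitted power-law model; the particular moments computed here are illustrative, as the text does not list the statistics used.

```python
import numpy as np

def residue_stats(S_r, A, alpha, r_min=2, r_max=100):
    """Moments of the multiplicative residue eps = S / M for the fitted
    model M(r) = A r^alpha (illustrative choice of statistics)."""
    r = np.arange(r_min, r_max)
    eps = S_r[r] / (A * r ** float(alpha))
    skew = ((eps - eps.mean()) ** 3).mean() / eps.std() ** 3
    return eps.mean(), eps.std(), skew
```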

Large values of S(ϑ) at a range of radial distances can indicate the presence of crystals (Figure 4), and angular descriptors provide further information for classification. For example, a measure of the variation in intensity with ϑ is given by the standard deviation of the parameters A and α fitted for different angles, and high skewness suggests a few deviant peaks rather than many smaller fluctuations. Thus the parameters of the fitted models, together with information about the deviation from the model and further statistical values, provide a series of extracted features that can be used to classify the images.

Results and discussion

Supervised learning algorithms are trained to associate a certain output with particular inputs, so that the vector of values obtained from the classification variables, or feature vector, can be used to assign an object to a particular class. Training requires a set of input vectors that have been pre-classified by eye and an independent test set for validation to avoid problems with over-fitting. For each method, descriptors were calculated for a training set of 700 images, classified by eye into seven categories (see Figure 5), and used to train a learning vector quantisation (LVQ) neural network15.
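
In the spirit of the LVQ_PAK package cited above15, an LVQ1 training loop can be sketched as follows; the initialisation, learning-rate schedule and number of codebook vectors per class are assumptions.

```python
import numpy as np

def train_lvq(X, y, codes_per_class=10, epochs=20, lr=0.05, seed=0):
    """LVQ1: move the winning codebook vector towards a training sample of
    the same class and away from a sample of a different class."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    W, Wy = [], []
    # Initialise codebook vectors from random training samples per class.
    for c in np.unique(y):
        idx = rng.choice(np.where(y == c)[0], codes_per_class, replace=True)
        W.append(X[idx]); Wy += [c] * codes_per_class
    W, Wy = np.vstack(W), np.array(Wy)

    for epoch in range(epochs):
        rate = lr * (1 - epoch / epochs)                  # decaying rate
        for i in rng.permutation(len(X)):
            j = np.argmin(((W - X[i]) ** 2).sum(axis=1))  # winning vector
            step = rate if Wy[j] == y[i] else -rate       # attract / repel
            W[j] += step * (X[i] - W[j])
    return W, Wy

# X: (n_images, n_features) array of descriptors; y: classes 0-6 (Figure 5).
```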

It should be noted that the number of classes is somewhat arbitrary, as crystallisation experiments produce a continuum of results rather than discrete outcomes. Whatever the number, there will be some overlap between neighbouring classes, as demonstrated in studies to assess the reliability of human classification (unpublished work). We found that a 7-class system gave an average agreement rate of ~70% for 16 crystallographers but that, in most cases where the scores do not agree, they differ by only one class. Allowing "agreement" to tolerate a one-class difference gave an average agreement rate of ~94%, and allowing a two-class difference gave almost total agreement (~99%).

The resulting LVQ code vectors were then used to classify an independent test set consisting of 150 images from each of the seven classes; the results are given in Table 1.

The diagonal entries give the percentage of exact matches with the scores given when the images were classified by eye. However, misclassifications between neighbouring classes need not necessarily be considered incorrect if the continuous nature of the experiment and the diversity in human classification are taken into account, and bold type in the tables indicates acceptable classifications. More serious misclassifications appear further away from the diagonal, with images classified too low above the diagonal and images classified too high below the diagonal.

The use of parameters derived from frequency space or wavelet methods leads to a higher proportion of false negatives in comparison to the object-based method. This is not unexpected, as the majority of the crystallisation drop can be empty in the case of single crystals, particularly those growing at the edge of the drop. In such cases, classification based on individual objects is likely to perform better. However, many images have features that occur throughout the drop and are better suited to texture-based methods. As the methods exploit complementary features, their combination leads to improved classification rates, although the best way to combine the results is not obvious. Table 2 shows the results when the object-based method is combined with each of the texture-based methods. Here the final score for the image is taken as the maximum score obtained by either method individually. Rather than determining the class for each object or image from a single winning vector, the LVQ can be used to provide a probability for every class, calculated from the classes of the top ten vectors weighted according to the distance measure. A better way to combine the results using these probabilities is being sought, and will ideally also allow the class probabilities of one object to influence those of another.
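
A sketch of the distance-weighted class probabilities described above is given below; the inverse-distance weighting is an assumed form, as the text specifies only that the top ten vectors are weighted according to the distance measure.

```python
import numpy as np

def class_probabilities(x, W, Wy, n_classes=7, k=10):
    """Class probabilities from the k nearest codebook vectors, weighted
    by (assumed) inverse distance and normalised to sum to one."""
    d = np.sqrt(((W - x) ** 2).sum(axis=1))
    nearest = np.argsort(d)[:k]
    weights = 1.0 / (d[nearest] + 1e-12)
    probs = np.zeros(n_classes)
    for j, w in zip(nearest, weights):
        probs[int(Wy[j])] += w        # classes assumed to be integers 0-6
    return probs / probs.sum()
```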

The aim is not to replace human classification but to reduce the number of experiments that need to be examined by eye. Automated scoring allows the images to be examined in order of merit and, when high-scoring conditions are confirmed, no further images need be considered. The potential for the development of optimisation procedures is also important; not only will many promising initial conditions be identified but failed experiments will also be recorded, allowing the possibility for automated screening protocols to be developed via data-mining techniques.

Table 1

Table 2

Figure 1: A single-level wavelet decomposition of the image in (a) is shown in (b). The four sub-images show the smooth approximation (top left), obtained by applying the low-pass filter in both x and y directions; the vertical details (top right), obtained by applying the high-pass filter along x and the low-pass filter along y; the horizontal details (bottom left), obtained by applying the low-pass filter along x and the high-pass filter along y; and the diagonal details (bottom right), obtained by applying the high-pass filter along both x and y.

Figure 2: The image obtained by combining the three detail sub-images from the fourth level of the wavelet transform is shown in (a) for the image in Figure 1. The coarse mask obtained from this, applied to the image, is shown in (b).

Figure 3: The pixels identified as object pixels for the image in Figure 1 are shown in black.

 Figure 4: An image with the refined mask is shown in (a) with its Fourier transform showing the "spoke" due to the crystal in (b).

Figure 5: Example images from the seven categories used in classification: (a) Empty drops score 0; (b) drops with denatured protein or other unwanted outcomes score 1; (c) precipitate scores 2; (d) outcomes better than precipitate but non-crystalline score 3; (e) micro-crystals score 4; (f) crystal clusters score 5; (g) single crystals score 6.

References

  1. Mayo, C.J., Diprose, J.M., Walter, T.S., Berry, I.M., Wilson, J., Owens, R.J., Jones, E.Y., Harlos, K., Stuart, D.I. and Esnouf, R.M. (2005). Structure, 13(2), 175-182.
  2. Wilson, J. (2002). Acta Cryst., D58, part 11, 1907-1914.
  3. Wilson, J. (2004). Cryst. Rev., 10, 73-84.
  4. Pan, S., Shavit, G., Penas-Centeno, M., Dong-Hui, X., Shapiro, L., Ladner, R., Riskin, E., Hol, W. and Meldrum, D. (2006). Acta Cryst., D62, 271-279.
  5. Bern, M., Goldberg, D., Stevens, R.C. and Kuhn, P. (2004). J. Appl. Cryst., 37, 279-287.
  6. Spraggon, G., Lesley, S.A., Kreusch, A. and Priestle, J.P. (2002). Acta Cryst., D58, 1915-1923.
  7. Saitoh, K., Kawabata, K., Asama, H., Mishima, T., Sugahara, M. and Miyano, M. (2005). Acta Cryst., D61, 873-880.
  8. Cumbaa, C.A., Lauricella, A., Fehrman, N., Veatch, C., Collins, R., Luft, J., DeTitta, G. and Jurisica, I. (2003). Acta Cryst., D59, 1619-1627.
  9. Mallat, S. (1987). Proc. IEEE Workshop Comput. Vision, Miami, FL.
  10. Watts, D., Cowtan, K. and Wilson, J. In preparation.
  11. Gonzalez, R.C. and Woods, R.E. (2002). In Digital Image Processing, Second Edition. Prentice Hall.
  12. van der Schaaf, A. and van Hateren, J.H. (1996). Vision Research, 36(17), 2759-2770.
  13. Walker, C.G., Foadi, J. and Wilson, J. J. Appl. Cryst. To appear.
  14. Papoulis, A. (1965). In Probability, Random Variables, and Stochastic Processes. McGraw-Hill, New York.
  15. Kohonen, T., Kangas, J., Laaksonen, J. and Torkkola, K. (1992). Neural Networks, 1, 725-730. The LVQ_PAK program can be downloaded from http://www.cis.hut.fi/research/som-research/nnrc-programs.shtml.

Julie Wilson

Julie Wilson is a Royal Society University Research Fellow and moved to theoretical crystallography after completing a PhD in pure mathematics. Her research into the use of wavelet transforms and image processing techniques for phasing low-resolution electron density maps led to her current interest in the analysis of images from crystallisation experiments. She is now developing software to automatically evaluate these images in collaboration with the Oxford Protein Production Facility and, through involvement in EU Framework projects, with laboratories throughout Europe. Dr Wilson is also involved in data analysis projects in collaboration with the Central Science Laboratory (CSL). Analytical chemistry provides metabolomics data from a range of techniques, including Nuclear Magnetic Resonance (NMR), Capillary Electrophoresis and Mass Spectrometry (MS). The aim is to link the statistics to the biochemistry, which provides the perfect opportunity for a joint mathematics/chemistry approach.