Role of AI in optimizing the whole-slide image acquisition workflow

By Dr Maria Ada Prusicki

Advances in whole-slide scanning technology have enabled the automation of microscopy. Researchers can run image acquisition overnight and check the results remotely, minimizing the need for supervision and optimizing their use of time. Nonetheless, a few steps still require close human supervision, especially sample identification and scan area definition, where classic algorithms cannot cope with the intrinsic variability of biological samples. Deep learning approaches tackle this issue, enabling accurate sample detection and leading to sample-specific solutions.

Background

Whole-slide imaging (WSI) has become firmly established as a practical microscopy method in recent years across many scientific fields – including clinical research, drug development, pathology and neurobiology, among others – as demonstrated by studies such as those by Song et al. [1] and Burn et al. [2].

The core concept of WSI is the acquisition of large scan areas – ideally the whole slide – in a digital format. The main advantages of WSI include enabling researchers to easily visualize, analyse, share and, not least of all, save the digitized information of unique biological samples without needing to physically store large volumes of material or, for example, send slides overseas to collaborators.

Slide scanners are now fully integrated into the daily workflow of branches of biology where microscopy has always been the main methodology, such as pathology. Moreover, they are also entering fields that were traditionally based on other techniques and are now shifting toward a more visual approach, such as spatial genomics and transcriptomics.

As the use of slide scanners spreads, the interest in automating the WSI workflow increases. Automation offers the potential to not only increase throughput, but also optimize time management. The development of new technologies in this direction frees the scientific staff from cumbersome and repetitive tasks, enabling them to focus on results analysis and higher-end activities. Moreover, automation of a scientific workflow is reflected in higher precision and reproducibility, which are pillars of scientific research.

Microscopy workflow steps and advances in automation

The workflow of a microscopist can be largely summarized as:
• position the slides on the microscope stage;
• find the specimen;
• set up the acquisition (i.e. the observation method, e.g. transmitted light path, magnification, exposure time);
• select the correct focal plane;
• acquire the image;
• save the data;
• process the files; and
• analyse/annotate/share the results.

On the one hand, some of these steps have been successfully automated with new mechanical technology and software controls. For example, slides are now loaded onto the microscope stage autonomously by robotic arms handling trays or slide cassettes. The image acquisition step can also be performed largely automatically, as imaging settings for a group of slides can be adjusted in advance and applied later, allowing unsupervised overnight experiments. Moreover, images can be saved and uploaded to databases automatically.

On the other hand, the automation of tasks such as sample detection and focal plane adjustment is lagging behind, and these steps still depend strongly on human supervision. A possible solution for the focal plane issue, as well as its limitations, has been extensively discussed elsewhere [3]. Here, though, we will focus on the issue of sample detection.

A classic algorithm for sample detection separates the specimen from its background based on pixel intensity and colour saturation. The algorithm identifies a potential sample among the pixels that are neither too bright (background) nor too dark (dust/coverslip borders). In addition, filters on the sample size and colour can be used to refine the detection.
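
As an illustration only, the sketch below outlines such an intensity- and saturation-based detection in Python with NumPy and scikit-image; the threshold values and the minimum object size are arbitrary placeholders, not the parameters of any particular slide scanner.

# Minimal sketch of classic threshold-based sample detection.
# All thresholds and the size filter are illustrative placeholders.
import numpy as np
from skimage import color, morphology

def detect_sample(rgb_overview: np.ndarray,
                  bright_bg: float = 0.90,    # brighter than this -> background
                  dark_debris: float = 0.15,  # darker than this -> dust/coverslip edges
                  min_saturation: float = 0.05,
                  min_area_px: int = 5000) -> np.ndarray:
    """Return a boolean sample mask for an RGB overview image."""
    hsv = color.rgb2hsv(rgb_overview)
    value, saturation = hsv[..., 2], hsv[..., 1]

    # Candidate sample pixels: neither too bright nor too dark, with some colour.
    candidate = (value < bright_bg) & (value > dark_debris) & (saturation > min_saturation)

    # Size filter: remove small specks (dust, debris) and fill small holes.
    mask = morphology.remove_small_objects(candidate, min_size=min_area_px)
    mask = morphology.remove_small_holes(mask, area_threshold=min_area_px)
    return mask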

This conventional sample detection method is used in many slide scanners, and it works very accurately for well-stained samples that yield high-contrast images, such as samples prepared for brightfield microscopy and stained with H&E (Fig. 1a) or with immunostaining mediated by horseradish peroxidase or alkaline phosphatase.

Unfortunately, biological samples present intrinsic variability. They have different morphologies and thicknesses, and the stain does not penetrate every tissue uniformly, leading to irregularly coloured or faint samples. Moreover, specimens stained with fluorescent labels are often difficult to visualize when observed with a transmitted light path. All these factors can reduce the efficacy of the sample detection algorithm and lead to incorrect results. Figure 1 shows an example of a detection algorithm that performs well on a highly contrasted specimen (Fig. 1a). The same algorithm applied to a sample with non-uniform staining and varying transparency levels results in an inaccurate sample mask that fails to detect faint areas of the tissue (Fig. 1b).

When automatic sample detection fails, there are a few possible paths to follow, including the following two options, each with significant practical drawbacks.
• Enlarge the scan area, choosing, for example, to include the complete slide in the scan. Consequently, the image acquisition will require more time and the data will include more non-informative pixels. Larger file sizes will increase the complexity of downstream steps, such as image processing and analysis. Moreover, this might increase the risk of acquiring out-of-focus images, as the system will search for an in-focus specimen where there is none.
• Forego automation and adjust the scan area manually.


Figure 1. Two examples of biological tissues stained for brightfield microscopy, scanned by WSI
The sample mask, in green, shows the result of automated sample detection. (a) Sections of rat brain stained with H&E. The staining is homogeneous, and all the sections are easily recognized both by the classic automated sample detection algorithm and by the neural network (NN; second and third columns). (b) Section of immunohistochemically (IHC) stained human tissue. The stain is irregularly distributed over the sample. A classic detection algorithm is not able to identify the fainter areas, whereas the NN detection leads to a highly precise result.

An AI-based solution to WSI automation

A third way to automate sample detection is now emerging – a method based on deep learning. Deep learning (DL) is a branch of artificial intelligence (AI) that aims to mimic the structure of human neuronal networks. A DL system is presented with annotated input data that highlight a specific feature. For example, if a scientist is interested in counting cells undergoing mitosis, the DL algorithm should be presented with numerous images in which such cells are labelled. The system then uses these labelled data to ‘learn’ and generate a predictive model, also called a neural network (NN). The NN can then be applied to new images and is able to extract from them the defined feature, in our example, the dividing cells.
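
As a purely illustrative sketch of how such a network might be trained, the following PyTorch code fits a very small segmentation model to hypothetical (image, mask) pairs in which sample pixels are labelled 1 and background 0; the tiny architecture, the dataset interface and all hyperparameters are assumptions, not the setup used for the networks described in this article.

# Minimal PyTorch sketch: training a small segmentation network on labelled
# overview images (sample = 1, background = 0). The architecture, dataset
# interface and hyperparameters are illustrative assumptions only.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Dataset

class TinySegNet(nn.Module):
    """A deliberately small stand-in for a real segmentation architecture."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),  # per-pixel logit: sample vs background
        )

    def forward(self, x):
        return self.features(x)

def train(dataset: Dataset, epochs: int = 10, lr: float = 1e-3) -> nn.Module:
    """dataset must yield (image, mask) pairs: 3xHxW float images and 1xHxW binary masks."""
    model = TinySegNet()
    optimiser = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    loader = DataLoader(dataset, batch_size=4, shuffle=True)
    for _ in range(epochs):
        for image, mask in loader:
            optimiser.zero_grad()
            loss = loss_fn(model(image), mask.float())
            loss.backward()
            optimiser.step()
    return model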

DL has already been applied extensively to image analysis, taking over digital pathology tasks such as cancer identification [4]. Applying this technology to WSI sample detection during image acquisition drastically improves automation. If the NN is properly trained, the workflow can be adjusted to identify the exact samples of interest with very high precision. Figure 1 shows an example of an NN designed to identify faint samples, compared with the classic sample detection algorithm based on intensity thresholds. Although the difference in detection is minimal for the brain tissue, detection of the second tissue sample is noticeably more precise with the NN. With the help of a trained NN, unsupervised sample detection can potentially be extended to any type of sample, automating the entire acquisition workflow and removing the need for manual adjustments.
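
At acquisition time, the trained network would be applied to the low-magnification overview image to produce the sample mask that defines the scan area. A minimal sketch, assuming a fully convolutional model such as the one above and an arbitrary probability threshold of 0.5:

# Sketch: running a trained segmentation network on a low-magnification
# overview to obtain the sample mask used to define the scan area.
# The probability threshold is an illustrative assumption.
import numpy as np
import torch

@torch.no_grad()
def predict_sample_mask(model: torch.nn.Module,
                        overview_rgb: np.ndarray,  # H x W x 3, float values in [0, 1]
                        threshold: float = 0.5) -> np.ndarray:
    model.eval()
    # Convert the HWC image to the NCHW tensor layout expected by the network.
    x = torch.from_numpy(overview_rgb).permute(2, 0, 1).unsqueeze(0).float()
    probabilities = torch.sigmoid(model(x))[0, 0].numpy()
    return probabilities > threshold  # boolean sample mask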

DL technology is highly flexible and can be adapted to the demands of specific working groups and specific sample preparations, as it depends on the training data presented. However, this approach is not limited to distinguishing sample from non-sample areas. In a procedure called selective detection, DL technology can also be used to scan specific regions within a tissue even when the complete tissue is stained, which greatly expands the application possibilities. For example, as shown in Figure 2, one such NN is trained to selectively detect pancreatic islets within the entire tissue section. This offers a significant improvement over conventional segmentation algorithms based on intensity alone, as an NN can be trained to detect structures based on morphology, even if those structures have intensities similar to the tissue background.
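
To illustrate how a selective-detection mask could be turned into regions for high-magnification scanning, the sketch below derives one bounding box per detected islet with scikit-image; the margin and the use of pixel coordinates are assumptions, and a real system would map such boxes onto stage coordinates for the ×40 scan.

# Sketch: converting a selective-detection mask (e.g. detected pancreatic
# islets) into rectangular regions to scan at higher magnification.
# The margin and pixel-coordinate handling are illustrative assumptions.
import numpy as np
from skimage import measure

def mask_to_scan_regions(mask: np.ndarray, margin_px: int = 50):
    """Return (min_row, min_col, max_row, max_col) boxes, one per detected object."""
    labelled = measure.label(mask)
    regions = []
    for props in measure.regionprops(labelled):
        min_r, min_c, max_r, max_c = props.bbox
        regions.append((
            max(min_r - margin_px, 0),
            max(min_c - margin_px, 0),
            min(max_r + margin_px, mask.shape[0]),
            min(max_c + margin_px, mask.shape[1]),
        ))
    return regions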

In conclusion, AI is a powerful technology that expands the automation capability of current WSI workflows. It reduces human effort and makes image acquisition easier and more convenient for a wide variety of samples.

Acknowledgment

The human tissue section in Figure 1 is courtesy of Dr Silvia Ferro, DMV, Department of Comparative Biomedicine and Food Science, University of Padova, Italy.

The pancreas tissue section in Figure 2 is courtesy of Univ. Prof. Dr Simone E. Baltrusch, Institute for Medical Biochemistry and Molecular Biology, Rostock University Medical Center, Rostock, Germany.


Figure 2. An example of selective detection performed using an NN trained to identify pancreatic islets (PIs)
(a) Overview of a rat pancreas section stained with a fluorescent label (Alexa 594, in yellow) at ×4 magnification. Seven PIs are identified within the tissue section and highlighted by the green sample mask. (b) and (c) Images of two of the PIs at higher magnification. The PIs are distinguished from the rest of the tissue only morphologically, as the staining covers the complete section homogeneously (b). (d) The two final scans (DAPI in blue and Alexa 594 in yellow) superimposed on the overview. Only the two regions covering the PIs have been scanned at higher magnification (×40).

All images have been acquired with the SLIDEVIEW™ VS200 digital slide scanner.

The author

Maria Ada Prusicki PhD
Evident Scientific GmbH, Münster, 48149 Germany

E-mail: maria.prusicki@evidentscientific.com

References

1. Song Y, Wang L, Li J et al. The expression of semaphorin 7A in human periapical lesions. J Endod 2021;47(10):1631–1639.
2. Burn OK, Farrand K, Pritchard T et al. Glycolipid-peptide conjugate vaccines elicit CD8+ T-cell responses and prevent breast cancer metastasis. Clin Transl Immunology 2022;11(7):e1401 (https://doi.org/10.1002/cti2.1401).
3. Bian Z, Guo C, Jiang S et al. Autofocusing technologies for whole slide imaging and automated microscopy. J Biophotonics 2020;13(12):e202000227.
4. Cong L, Feng W, Yao Z et al. Deep learning model as a new trend in computer-aided diagnosis of tumor pathology for lung cancer. J Cancer 2020;11(12):3615–3622 (https://doi.org/10.7150/jca.43268).