Thousands of strain-specific whole-genome sequences are now available for a wide range of pathogenic bacteria. Machine learning approaches can now use these data to predict the results of antimicrobial susceptibility tests from sequence alone. Recent studies have demonstrated the ability to predict minimum inhibitory concentrations with accuracies up to 95 %. Employing these tools to prioritize antibiotic treatment could improve patient outcomes and help to avert the antibiotic resistance crisis.
by Dr Jonathan M. Monk
Importance of antimicrobial resistance (AMR) prediction
Today over 700 000 people die of antibiotic-resistant infections per year [1]. Frighteningly, it has been estimated that this number could rise to 10 million deaths per year if nothing is done to stop the increase and spread of antibiotic-resistant bacteria [2]. To help combat this threat it is critical to limit the use of ineffective antibiotics and to prescribe the appropriate antimicrobial therapy to patients as quickly as possible. Although antimicrobial susceptibility testing is now routine in microbiology laboratories, it often takes too long to impact clinical diagnosis.
New tools that rapidly predict antibiotic resistance could improve antibiotic stewardship; stewardship programmes, when effectively implemented, have led to reductions in levels of resistant bacteria in hospitals [3]. Accurately diagnosing antibiotic-resistant bacteria would thus avoid the evolutionary pressures that accelerate resistance and would aid antibiotic stewardship approaches. This could enable physicians to select the optimal antibiotic regimen to cure a patient, rather than enhancing a given strain’s resistance. Whole-genome sequencing may offer this possibility.
The genomics revolution has made available thousands of strain-specific whole-genome sequences (WGS) for a range of pathogenic bacteria. For example the Pathosystems Resource Integration Center (PATRIC) [the all-bacterial Bioinformatics Resource Center (BRC) funded by the National Institute of Allergy and Infectious Diseases (NIAID)] currently contains over 15 000 Escherichia genomes, more than 14 000 Staphylococcus genomes and nearly 11 000 Mycobacteria genomes [4]. Increasingly, these genomes are coupled with clinical metadata, including minimum inhibitory concentration (MIC) values for various antibiotics.
This large-scale coupling of resistance data with strain-specific genome sequences enables machine learning and other big-data science approaches to study and predict antibiotic resistance. For example, it is now possible to apply case-control studies whereby a group of strains that exhibit a biological phenotype (e.g. antibiotic resistance) is compared to a group of strains that do not. Machine learning techniques can be used to identify biomarkers (e.g. presence/absence of genes or mutations) that are predictive of a given phenotype. These biomarkers can then serve as a basis for diagnostic tests.
Here we discuss recent literature using machine learning approaches to predict antibiotic resistance and highlight considerations required for their application.
Introduction to machine learning approaches for WGS-based prediction of AMR
Setting up a machine learning problem involves breaking data into two groups (Fig. 1a):
(1) The y-array containing genomes or samples (m rows) matched with the phenotype to be predicted. In the case of AMR prediction, the phenotype could be binary (e.g. ‘resistant’ versus ‘susceptible’) or the actual experimentally measured MIC. Predicting MICs is often preferable owing to changing breakpoints used to define resistance. For example, a given strain with a gentamicin MIC of 8 µg/mL may previously have been classified as resistant, but new CLSI 2017 guidelines specify that gentamicin resistance requires a MIC above 16 µg/mL. Such changes can lead to inconsistent AMR annotations that confound binary predictions.
(2) The X-matrix containing the samples (m rows) and their associated features (n columns) that will be used to make a prediction. Features range from those that are completely knowledge-based, such as the presence of genes known to confer antibiotic resistance (e.g. a beta-lactamase), to those that require no previous knowledge, such as the presence of short (~10 bp) segments of DNA on the chromosome (Fig. 1b). Each feature type has distinct benefits and drawbacks, as illustrated by the recent studies described below.
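The arrangement above can be sketched with toy data. All gene names, strains and values here are hypothetical, chosen purely to illustrate the shape of the X-matrix and y-array:

```python
# Hypothetical toy data: 4 strains (m rows) x 3 gene-presence features (n columns).
# Gene names and values are illustrative only, not drawn from any real dataset.
feature_names = ["blaTEM-1", "aac(3)-II", "gyrA_S83L"]
X = [
    [1, 0, 0],  # strain 1 carries a beta-lactamase gene
    [1, 1, 0],  # strain 2
    [0, 0, 1],  # strain 3 has a gyrase mutation
    [0, 0, 0],  # strain 4 has none of the tracked determinants
]
# y may hold binary labels ("R"/"S") or measured MICs (ug/mL); predicting MICs
# sidesteps shifting clinical breakpoints.
y_binary = ["R", "R", "R", "S"]
y_mic = [32.0, 64.0, 8.0, 0.5]

m, n = len(X), len(X[0])
assert m == len(y_binary) == len(y_mic)  # one phenotype per genome
```

Any of the feature types discussed below simply changes what the n columns of X represent; the y-array is unchanged.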
Selecting appropriate features for AMR prediction
Knowledge-based features
Knowledge-based features can be obtained by mapping a genome of interest using curated databases of gene products that have already been demonstrated to confer antibiotic resistance. As of February 2019, the Comprehensive Antibiotic Resistance Database (CARD) houses 2553 reference sequences and 1216 SNPs demonstrated to confer resistance in 79 different pathogens [5]. These approaches are akin to laboratory tools that offer PCR-based identification of AMR determinants, such as BioFire. Models trained using previously annotated features often have good accuracies and are easier to interpret because of the accumulated knowledge present in such predictions [6, 7].
However, despite their high accuracy, these tools are limited because they often require many rounds of multiple sequence alignment, which can become computationally expensive at large scale. Also, reliance on known AMR determinants may cause such algorithms to miss newly evolved resistance mechanisms. A useful machine learning approach should be capable of analysing future outbreaks and identifying new mechanisms of resistance, rather than being limited to past knowledge.
Gene- and allele-based features
An approach that balances these two extremes involves assembling features by annotating the genome for known protein coding genes, but keeping the feature types agnostic, for example by including genes with functions ranging from cell replication, to cell wall synthesis to metabolism. This approach has the advantage of not requiring known determinants of antimicrobial resistance but does still require annotated genomes, potentially biasing results by annotation methods.
Recent studies have used this approach to predict antibiotic resistance in E. coli with accuracies above 90 % [8]. Importantly, this approach identified features that outperformed genes established in the literature. Such an approach can go even deeper by breaking the genes down into their constituent alleles to account for potential mutations in each coding sequence. Another study took this approach to examine 1595 strains of M. tuberculosis and identified 33 known AMR-conferring genes and 24 potentially novel resistance-conferring genes [9]. Thus, methods that draw features from across the genome, rather than restricting them to those with previous knowledge, can accurately predict AMR and identify novel mechanisms of resistance, making them extensible to mutations and mechanisms that may emerge in the future.
Kmer feature selection
A contrasting approach that can identify new mechanisms of resistance and requires no a priori knowledge involves breaking each genome into short (~10 bp) segments of DNA, known as ‘kmers’, and using these as features. All genomes in the collection are divided into kmers, and the presence of each distinct kmer becomes a column in the X-matrix. This kmer-based approach thus contrasts with knowledge-based methods that rely on a database of curated genes and mutations previously shown to confer antibiotic resistance.
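The kmer decomposition described above is straightforward to implement. The following is a minimal sketch using toy sequences (real genomes run to millions of bases, and production tools use specialized counters):

```python
def kmers(sequence, k=10):
    """Return the set of length-k substrings (kmers) in a DNA sequence."""
    return {sequence[i:i + k] for i in range(len(sequence) - k + 1)}

# Toy genomes for illustration only.
genomes = {
    "strain_A": "ATGGCGTTTAACGGCGTTTAA",
    "strain_B": "ATGGCGAAATAACGGCGTTTAA",
}
per_strain = {name: kmers(seq) for name, seq in genomes.items()}

# The union of all observed kmers defines the feature columns; each genome's
# row records presence (1) or absence (0) of each kmer.
vocabulary = sorted(set.union(*per_strain.values()))
X = [[1 if kmer in per_strain[name] else 0 for kmer in vocabulary]
     for name in genomes]
```

Note that with k=10 a bacterial genome yields on the order of millions of candidate features, which is why these models are powerful but hard to interpret.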
Studies of A. baumannii, S. aureus, S. pneumoniae, K. pneumoniae and collections of over 5000 Salmonella genomes have demonstrated the ability to predict MICs with average accuracies above 90 % within ±1 twofold dilution step [10–12]. Unfortunately, this high accuracy and ability to predict new mechanisms of resistance has the trade-off of being difficult to interpret. For example, a model may link predicted resistance to segments of the genome that have no annotated function or known biological role.
Building and evaluating a machine learning model
Once the features for a model have been selected it is time to apply a machine learning algorithm to the data. Several such algorithms exist, each with benefits and drawbacks related to accuracy and interpretability [13]. Unfortunately, often the more accurate models are difficult to interpret whereas more intelligible models have worse predictive capabilities. In healthcare applications it is vital for the treating physician to be able to understand, validate and trust a model, and thus relying on easier to interpret methods like a decision tree or simple logistic regressors may be best.
When evaluating a machine learning model it is imperative to question how the model was trained. A major pitfall for machine learning approaches is their tendency to ‘overfit’ datasets. For example, a model evaluated on the same data it was trained on could simply ‘remember’ that data and correctly predict any point in the training set, yet perform poorly on new data. Robust machine learning models avoid such overfitting by splitting the data into non-overlapping sets where ~80 % of the data is used for training and ~20 % of the data is used for testing (Fig. 2a). This splitting process should be random and be performed several times to assess the overall accuracy and sensitivity of a model, thereby limiting overfitting and ensuring that predictions remain generalizable and robust.
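The repeated random 80/20 splitting described above can be sketched as follows (a minimal standard-library version; libraries such as scikit-learn provide equivalent utilities):

```python
import random

def repeated_holdout(n_samples, train_frac=0.8, repeats=5, seed=0):
    """Yield (train_idx, test_idx) pairs from repeated random 80/20 splits."""
    rng = random.Random(seed)
    indices = list(range(n_samples))
    cut = int(train_frac * n_samples)
    for _ in range(repeats):
        rng.shuffle(indices)
        yield indices[:cut], indices[cut:]

# Accuracy should be reported on the held-out ~20 %, never the training set,
# and averaged over several random splits to limit overfitting.
splits = list(repeated_holdout(100))
train, test = splits[0]
assert len(train) == 80 and len(test) == 20
assert not set(train) & set(test)  # non-overlapping sets
```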
Once a model’s ability to predict new data is established it is finally possible to evaluate the model’s predictive performance (Fig. 2b). Thus far, we have described previous studies in terms of correct predictions and accuracies. However, it is often more important to evaluate cases where a model fails. Requirements for AMR diagnostic devices are strict, and devices typically describe their utility in terms of error rate. Major errors (MEs) occur when susceptible genomes are incorrectly predicted to have resistant MICs. The opposite case, in which resistant genomes are incorrectly assigned susceptible MICs, is termed a very major error (VME). US Food and Drug Administration (FDA) standards for automated systems recommend a ME rate ≤3 %. A recent study of over 5000 Salmonella genomes used kmers to train a model that demonstrated MIC predictions for 15 antibiotics with ME rates in this range [10]. The FDA standards for VME rates indicate that the lower 95 % confidence limit should be ≤1.5 % and the upper limit ≤7.5 %. Models for seven of the 15 antibiotics in the same study had acceptable VME rates based on this requirement. Thus, such an approach could make acceptable predictions for diagnostic applications.
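The ME and VME definitions above reduce to simple rate calculations. A minimal sketch, using hypothetical predictions rather than data from any cited study:

```python
def error_rates(y_true, y_pred):
    """Compute ME and VME rates as defined for AMR diagnostics.

    ME  = susceptible isolates wrongly called resistant / all susceptible.
    VME = resistant isolates wrongly called susceptible / all resistant.
    """
    sus = [p for t, p in zip(y_true, y_pred) if t == "S"]
    res = [p for t, p in zip(y_true, y_pred) if t == "R"]
    me = sus.count("R") / len(sus)
    vme = res.count("S") / len(res)
    return me, vme

# Hypothetical results for 10 isolates (illustrative only).
truth = ["S", "S", "S", "S", "S", "R", "R", "R", "R", "R"]
pred  = ["S", "S", "S", "S", "R", "R", "R", "R", "R", "S"]
me, vme = error_rates(truth, pred)  # me = 0.2, vme = 0.2 here
```

Because VMEs risk prescribing an ineffective antibiotic, the acceptable VME rate is held to a much tighter bound than the ME rate.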
Summary and outlook
In summary, options for WGS-based predictions of antimicrobial susceptibility testing are becoming a reality. This brief summary limits the scope to tools and methods to predict antibiotic susceptibility from WGS. However, in the future it may be possible to combine genomic features with information from the patient, like age, gender, comorbidities, etc. Furthermore, rather than predicting only antibiotic susceptibility it would be possible to train an algorithm to predict patient outcome and adjust treatment regimens to improve patient care [14].
Such approaches are sorely needed because despite improvements in antibiotic use, the Centers for Disease Control and Prevention (CDC) estimates that approximately 50 % of antibiotics are still prescribed unnecessarily in the US at a yearly cost of $1.1 billion [15], and the annual impact of resistant infections in the US is estimated to be $20 billion in excess healthcare costs and 8 million additional days patients stay in the hospital [16]. Significant improvements in patient outcome have been observed when reducing the time to treatment with optimal antibiotic therapy [17, 18]. Rapid identification and targeted treatment of pathogenic bacteria using tools assisted by the algorithms presented here would enable precision medicine for pathogens that would lower the incidence of antibiotic resistance, improve patient health, and lead to decreased hospital costs.
Figure 1. How to set up a whole-genome sequence (WGS)-based machine learning problem for antimicrobial resistance (AMR) prediction. (a) Samples (m=rows) with sequenced genomes and known phenotypes of interest [‘susceptible’ vs ‘resistant’ phenotypes or minimum inhibitory concentration (MIC) value] are used to train a machine learning model. All values to be predicted are placed into the ‘y’ array. The ‘features’ used to train a model form the columns of the X-matrix. (b) For WGS-based antimicrobial-susceptibility-test prediction possible feature types include: (1) known antibiotic resistance conferring genes or mutations, (2) annotated protein coding genes (independent of known functions) and even (3) the presence of short fragments of DNA sequence on the chromosome known as ‘kmers’. These different feature types have a trade-off between ease of interpretation (easiest for previously identified features) and ability to detect novel AMR determinants (best for short sequence fragments).
Figure 2. Evaluating predictions from a machine learning model. (a) A machine learning model that is ‘overfit’ is inflexible to new data. To ensure a model is robust enough to predict new samples, all models should be cross-validated. This process involves randomly splitting the whole dataset into training (~80 % of samples) and testing (~20 %) sets. The sets should be shuffled multiple times to check model accuracy across different samples and features. (b) The results of running the model on testing sets can then be compared for each randomly sampled set (different colored lines). The model’s performance is compared by calculating the area under the curve (AUC) on a plot of the true positive rate vs the false positive rate often called a receiver operating characteristic (ROC) curve. Model accuracy can be calculated from the number of true positive (TP) [model predictions, resistant (R); experimental result, R] and true negative (TN) predictions divided by the total number of predictions. However, it is often more important to gauge how a model fails: for example a false positive ‘major error’ [model prediction, R; experimental result, susceptible (S)] may lead to incorrectly withholding an effective antibiotic. Even worse, false negative ‘very major error’ predictions (model prediction, S; experimental result, R) could lead to prescribing an antibiotic that is ineffective.
References
1. The dangers of hubris on human health. Global Risks – Reports. World Economic Forum 2013 (http://reports.weforum.org/global-risks-2013/risk-case-1/the-dangers-of-hubris-on-human-health/).
2. O’Neill J. Antimicrobial resistance: tackling a crisis for the health and wealth of nations. Rev Antimicrob Resist 2014; 20: 1–16.
3. Carling P, Fung T, Killion A, Terrin N, Barza M. Favorable impact of a multidisciplinary antibiotic management program conducted during 7 years. Infect Control Hosp Epidemiol 2003; 24(9): 699–706.
4. Wattam AR, Abraham D, Dalay O, Disz TL, Driscoll T, Gabbard JL, Gillespie JJ, Gough R, Hix D, Kenyon R, Machi D, Mao C, Nordberg EK, Olson R, Overbeek R, Pusch GD, Shukla M, Schulman J, Stevens RL, Sullivan DE, Vonstein V, Warren A, Will R, Wilson MJ, Yoo HS, Zhang C, Zhang Y, Sobral BW. PATRIC, the bacterial bioinformatics database and analysis resource. Nucleic Acids Res 2014; 42(Database issue): D581–591.
5. Jia B, Raphenya AR, Alcock B, Waglechner N, Guo P, Tsang KK, Lago BA, Dave BM, Pereira S, Sharma AN, Doshi S, Courtot M, Lo R, Williams LE, Frye JG, Elsayegh T, Sardar D, Westman EL, Pawlowski AC, Johnson TA, Brinkman FS, Wright GD, McArthur AG. CARD 2017: expansion and model-centric curation of the comprehensive antibiotic resistance database. Nucleic Acids Res 2017; 45(D1): D566–573.
6. Jeukens J, Freschi L, Kukavica-Ibrulj I, Emond-Rheault J-G, Tucker NP, Levesque RC. Genomics of antibiotic-resistance prediction in Pseudomonas aeruginosa. Ann N Y Acad Sci 2019; 1435(1): 5–17 (First published 2017 Online: https://nyaspubs.onlinelibrary.wiley.com/doi/abs/10.1111/nyas.13358).
7. Bradley P, Gordon NC, Walker TM, Dunn L, Heys S, Huang B, Earle S, Pankhurst LJ, Anson L, de Cesare M, Piazza P, Votintseva AA, Golubchik T, Wilson DJ, Wyllie DH, Diel R, Niemann S, Feuerriegel S, Kohl TA, Ismail N, Omar SV, Smith EG, Buck D, McVean G, et al. Rapid antibiotic-resistance predictions from genome sequence data for Staphylococcus aureus and Mycobacterium tuberculosis. Nat Commun 2015; 6: 10063.
8. Her H-L, Wu Y-W. A pan-genome-based machine learning approach for predicting antimicrobial resistance activities of the Escherichia coli strains. Bioinformatics 2018; 34(13): i89–i95.
9. Kavvas ES, Catoiu E, Mih N, Yurkovich JT, Seif Y, Dillon N, Heckmann D, Anand A, Yang L, Nizet V, Monk JM, Palsson BO. Machine learning and structural analysis of Mycobacterium tuberculosis pan-genome identifies genetic signatures of antibiotic resistance. Nat Commun 2018; 9(1): 4306.
10. Nguyen M, Long SW, McDermott PF, Olsen RJ, Olson R, Stevens RL, Tyson GH, Zhao S, Davis JJ. Using machine learning to predict antimicrobial MICs and associated genomic features for nontyphoidal Salmonella. J Clin Microbiol 2019; 57(2): pii: e01260-18 (http://dx.doi.org/10.1128/JCM.01260-18).
11. Davis JJ, Boisvert S, Brettin T, Kenyon RW, Mao C, Olson R, Overbeek R, Santerre J, Shukla M, Wattam AR, Will R, Xia F, Stevens R. Antimicrobial resistance prediction in PATRIC and RAST. Sci Rep 2016; 6: 27930.
12. Nguyen M, Brettin T, Long SW, Musser JM, Olsen RJ, Olson R, Shukla M, Stevens RL, Xia F, Yoo H, Davis JJ. Developing an in silico minimum inhibitory concentration panel test for Klebsiella pneumoniae. Sci Rep 2018; 8(1): 421.
13. Deo RC. Machine learning in medicine. Circulation 2015; 132(20): 1920–1930.
14. Kachroo P, Eraso JM, Beres SB, Olsen RJ, Zhu L, Nasser W, Bernard PE, Cantu CC, Saavedra MO, Arredondo MJ, Strope B, Do H, Kumaraswami M, Vuopio J, Gröndahl-Yli-Hannuksela K, Kristinsson KG, Gottfredsson M, Pesonen M, Pensar J, Davenport ER, Clark AG, Corander J, Caugant DA, Gaini S. Integrated analysis of population genomics, transcriptomics and virulence provides novel insights into Streptococcus pyogenes pathogenesis. Nat Genet 2019; 51(3): 548–559 (http://dx.doi.org/10.1038/s41588-018-0343-1).
15. Antibiotic resistance threats in the United States, 2013. US Centers for Disease Control and Prevention 2013 (https://www.cdc.gov/drugresistance/pdf/ar-threats-2013-508.pdf).
16. Fair RJ, Tor Y. Antibiotics and bacterial resistance in the 21st century. Perspect Medicin Chem 2014; 6: 25–64.
17. Kumar A, Roberts D, Wood KE, Light B, Parrillo JE, Sharma S, Suppes R, Feinstein D, Zanotti S, Taiberg L, Gurka D, Kumar A, Cheang M. Duration of hypotension before initiation of effective antimicrobial therapy is the critical determinant of survival in human septic shock. Crit Care Med 2006; 34(6): 1589–1596.
18. Palmer HR, Palavecino EL, Johnson JW, Ohl CA, Williamson JC. Clinical and microbiological implications of time-to-positivity of blood cultures in patients with Gram-negative bacilli bacteremia. Eur J Clin Microbiol Infect Dis 2013; 32(7): 955–959.
The author
Jonathan M. Monk PhD
Department of Bioengineering, UC San Diego, San Diego, California, USA
E-mail: jmonk@ucsd.edu
Detection and typing of HPV for cervical cancer prevention according to Meijer criteria
HPV testing is a linchpin of cervical cancer prevention, providing an effective alternative to the long-standing Pap test. The HPV analysis should encompass all relevant anogenital HPV types and differentiate between high-risk types, which can induce cancer, and low-risk types, which cause benign genital warts. It is also critical to identify multiple and persistent infections, since these are associated with a high tumour risk. The EUROArray HPV provides fast and reliable detection and typing of all 30 relevant anogenital high-risk and low-risk HPV types in one reaction and meets the international criteria for HPV screening defined by Meijer et al. The test is simple to perform and includes fully automated data evaluation and documentation.
by Dr Jacqueline Gosink
Cervical cancer
Cervical cancer is the third most frequent cancer in women worldwide. For example, in Germany there are approximately 4600 new cases each year, even though many women attend cancer screening. Tumours of the cervix are caused by human papillomaviruses (HPV), which are spread by sexual contact. The immune system usually eliminates the HPV within a few months. However, if an HPV infection persists over a longer period of time, this can cause changes in cervical cells, depending on the HPV type, which may subsequently lead to cancer. The cellular changes are histologically classified as grade 1, 2 or 3 cervical intraepithelial neoplasia (CIN). Mild cases (CIN 1) often clear without any treatment. Moderate and severe cases (CIN 2 and CIN 3) are usually treated to prevent development of cervical cancer.
High- and low-risk HPV types
There are over 200 types of HPV, of which 30 can cause infections in the genital area. These are divided into low-risk and high-risk types. Low-risk types cause warts in the genital area or slight tissue changes. High-risk types are significantly more aggressive. A persistent infection with a high-risk HPV type can cause tissue changes that significantly increase the risk of a tumour. The most common high-risk types are 16 and 18, which are responsible for about 70% of cervical cancers and precancers. Other high-risk types are 26, 31, 33, 35, 39, 45, 51, 52, 53, 56, 58, 59, 66, 68, 73 and 82. Low-risk types are 6, 11, 40, 42, 43, 44, 54, 61, 70, 72, 81 and 89.
HPV vaccination protects against infection with the high-risk types 16 and 18 and the low-risk types 6 and 11. Depending on the vaccine preparation used, protection against five additional high-risk types (31, 33, 45, 52, 58) is possible. Vaccinated persons can still become infected with other types. It is therefore important to participate in cancer screening even after vaccination.
HPV infection and cancer risk
Persistent infections with a single high-risk type are associated with a clearly increased tumour risk. In a study panel (n=40), all patients with a persistent infection with a single high-risk HPV type developed CIN 2 or worse within seven years (Figure 1) (1). Furthermore, simultaneous infections with different high-risk types are highly likely to lead to malignant cytological changes in the cervical mucosa and therefore present a greater cancer risk for patients. Sequential infections with different high-risk HPV types, in contrast, do not increase the risk of cervical cancer. It is therefore important to differentiate persistent from transient infections and multiple from single infections. This is only possible with tests that are able to subtype the different HPV types.
Cervical cancer screening
Cervical cancer screening is traditionally based on the Papanicolaou or Pap test, which detects morphological cell changes in cervical smear samples. However, several countries have already established or are currently switching to high-risk HPV testing as the first-line screening method, as evidence mounts that it is more effective and efficient for the prevention of invasive cervical cancer and mortality than the Pap test. Several randomized trials have shown that the cumulative incidence of cervical cancer five years after a negative HPV test is lower than the incidence three years after a normal cytology result.
Molecular biological testing allows early identification of an HPV infection, even before dysplasia is visible in the mucosa. Subtyping tests reveal at the same time whether the infection is due to low-risk or high-risk HPV and exactly which HPV types are present. Patients who have simultaneous infections with different high-risk types or a persistent infection with the same high-risk HPV type can be monitored more frequently to ensure timely treatment to minimize the risk of cervical cancer. A negative result excludes an HPV infection and thus the risk of developing cervical cancer with high probability.
Detection of the viral oncogenes E6/E7
A prerequisite for the development of carcinoma is the integration of the HPV genome into the DNA of the epithelial cells. The proportion of infected cells containing integrated viral DNA increases as the infection progresses. During the integration into human DNA, particular regions of the HPV genome (generally the E1, E2, L1 and L2 genes) are split. Test systems that detect these genes are therefore unreliable. For example, when test systems based on the L1 gene are used to detect HPV types 16 and 18, between 8% and 28% of high-grade dysplasia cases can be overlooked (2). In contrast, testing for essential viral markers such as the oncogenes E6 and E7 allows all HPV infections to be reliably detected, since these genes are essential for malignant transformation of the host cells and they remain intact even after integration. Detection of variable sequences within these genes allows the different HPV types to be differentiated.
Meijer HPV test criteria
In 2009, an international team of experts proposed criteria for the requirements and validation of HPV tests for primary cervical cancer screening, known as the Meijer criteria (3, 4). To support the clinical performance evaluation of HPV subtyping tests, the VALGENT (VALidation of HPV GENotyping Tests) protocol was subsequently established. The key issue for HPV testing in cervical screening is to detect high-risk HPV infections that are associated with or develop into ≥CIN 2 and to differentiate them from transient high-risk HPV infections. HPV tests should provide high clinical sensitivity for detection of cervical precancer and cancer, and at the same time high clinical specificity to limit unnecessary procedures and follow-up of HPV-positive women.
The validation guidelines for HPV tests encompass clinical sensitivity (criterion 1), clinical specificity (criterion 2), and intra-laboratory and inter-laboratory reproducibility (criterion 3). Candidate assays are validated by comparative analysis with fully clinically and epidemiologically validated reference HPV tests, such as the Hybrid Capture 2 (hc2) assay (QIAGEN) or the GP5+6+ PCR-EIA using samples from women aged 30 or older. One of the criteria stipulates that the sensitivity for ≥CIN 2 of the candidate assay should amount to ≥90% of the sensitivity of the hc2 assay; the specificity for ≥CIN 2 should reach ≥98% of the specificity of the hc2 assay.
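The non-inferiority arithmetic in criteria 1 and 2 can be made explicit. A minimal sketch with illustrative numbers (not measurements from any study; criterion 3, reproducibility, is assessed separately):

```python
def meets_meijer_accuracy(sens, spec, hc2_sens, hc2_spec):
    """Check Meijer criteria 1 and 2: the candidate assay's sensitivity for
    >=CIN 2 must reach >=90 % of the hc2 reference assay's sensitivity, and
    its specificity must reach >=98 % of the hc2 assay's specificity."""
    return sens >= 0.90 * hc2_sens and spec >= 0.98 * hc2_spec

# Illustrative values only: a candidate at 93 % relative sensitivity passes,
# one at 85 % fails criterion 1.
assert meets_meijer_accuracy(sens=0.93, spec=0.99, hc2_sens=1.00, hc2_spec=0.99)
assert not meets_meijer_accuracy(sens=0.85, spec=0.99, hc2_sens=1.00, hc2_spec=0.99)
```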
EUROArray HPV
One test that fulfils the criteria of the Meijer protocol is the EUROArray HPV from EUROIMMUN. This multiplex PCR-based assay provides detection and typing of all 30 genitally relevant HPV types in one reaction. The individual typing enables differentiation of high- and low-risk infections, as well as identification of multiple infections. The precise HPV genotyping also allows differentiation between new and persistent infections when determinations are performed over a time course, e.g. two analyses at a time interval of 12 to 18 months. The test is based on detection of E6/E7 DNA, ensuring highest sensitivity even in infections where the viral genome has already integrated into the DNA of the host epithelial cells.
The EUROArray procedure is easy to perform and does not require expertise in molecular biology. DNA isolated from patient cervical smear samples is analysed using multiplex PCR and a microarray biochip slide containing DNA probes corresponding to each HPV type (Figure 2). Results are evaluated and interpreted fully automatically using the user-friendly EUROArrayScan software. A detailed result report is produced for each patient and all data is documented and archived. Integrated controls such as DNA positive control and cross-contamination control ensure high result security. Meticulously designed primers and ready-to-use PCR components further contribute to the reliability of the analysis. The entire procedure is IVD validated and CE registered.
Fulfilment of Meijer criteria
The EUROArray HPV was evaluated alongside other HPV tests (5, 6) using cervical specimens from 404 women undergoing follow-up of high-grade cytological abnormality. The HPV tests were used to detect high-risk HPV genotypes and predict histologically confirmed ≥CIN 2 in these patients. There was excellent agreement between the EUROArray HPV and all other HPV tests. The authors concluded that the EUROArray HPV fulfils the first Meijer criterion of ≥90% of the clinical sensitivity of hc2 for detection of ≥CIN 2. Moreover, the genotyping for 30 individual types would also allow the EUROArray HPV to be used in epidemiological and surveillance applications.
In a further study (7) the analytical and clinical performance of the EUROArray HPV was evaluated using a total of 1300 consecutive and 300 cytologically abnormal cervical samples (Tables 1A and 1B). The relative sensitivity of the EUROArray HPV with respect to the hc2 assay was 93% for ≥CIN 2. This value was further increased to 98% using an optimized cut-off for HPV16, which has now been incorporated into the test evaluation. The relative specificity of the EUROArray HPV for ≤CIN 1 with respect to the hc2 was 100%. Finally, the EUROArray reported excellent intra- and inter-assay reproducibility. Thus, the EUROArray HPV fulfilled all of the Meijer criteria for use in cervical cancer screening.
Conclusions
The EUROArray HPV is ideally suited for HPV genotyping in primary cervical cancer screening programmes. It has been shown to be non-inferior to the hc2 comparator test for both sensitivity and specificity, as stipulated in the Meijer criteria and validated using the VALGENT framework. The EUROArray HPV is currently the only commercially available test that enables genotyping of all 30 anogenital HPV types on the basis of the E6/E7 oncogenes. It is speculated that in the future the use of HPV genotyping in cervical cancer might be extended to testing for cure and further stratification of disease risk. HPV are also associated with some other types of cancer, such as anal cancer, head and neck cancers, vulvar and vaginal cancers, and penile cancer. HPV testing is also important in the diagnosis of these cancers.
References
1. Elfgren et al. Am J Obstet Gynecol (2016), 7:11-22
2. Tjalma et al. Eur J Obstet Gynecol Reprod Biol (2013), 170(1): 45-46
3. Meijer et al. Int J Cancer (2009), 124: 516-520
4. Arbyn et al. Clin Microbiol Infect (2015), 21:817-826
5. Cornall et al. Eur J Clin Microbiol Infect Dis (2016), 35(6): 1033-1036
6. Cornall et al. Papillomavirus Research (2017), 4: 79-84
7. Viti et al. J Clin Virol (2018), 108: 38-42
The author
Jacqueline Gosink, PhD
EUROIMMUN AG
Seekamp 31
23560 Lubeck
Germany
Implementing digital blood cell analysis technology in a distributed laboratory network
The recent introduction of the CellaVision DC-1 makes it possible for small labs to implement the same digital methodology for performing blood cell differentials that is commonly used by large laboratory organizations. CellaVision recently teamed up with Alberta Public Laboratories (APL) to conduct an in-situ product evaluation assessing the utility and impact of CellaVision DC-1 in a distributed laboratory network. APL is a leading medical diagnostic laboratory serving a large catchment of Southern Alberta, Canada. CLI talked to Dr Etienne Mahe, consultant pathologist at APL, who shares here his experience of this technology.
1. Could you briefly describe your laboratory setting and specific requirements regarding hematology testing?
Laboratory testing in Southern Alberta (and in many other jurisdictions elsewhere in the world) can easily be summarized as a “hub-and-spoke” model. We have a large central high-throughput laboratory to which geographically dispersed small referral laboratories or collection sites send specimens.
Since many of these smaller sites are at substantial distances from the central referral laboratory, the strategy in hematology has been to situate low-complexity, low-throughput analysers at the spoke sites and reserve the high-throughput, high-complexity infrastructure for the hub labs. In the case of peripheral smear review, CBC data are generated at the peripheral sites on low-complexity, low-throughput analysers, but slides (as required) are referred to the hub for additional review and interpretation. In cases requiring pathologist review, delays of up to several days are possible, with significant attendant potential for delayed patient care.
2. In your view, what are the most interesting characteristics of the CellaVision DC-1 analyser and the main advantages of its technology?
The CellaVision suite has provided our hub labs in Southern Alberta with step-change improvements in efficiency for high-throughput hematology testing. We employ CellaVision integrated analysers in our hub labs to perform nearly all peripheral smear manual differential and morphology review activities. We also use the CellaVision body fluid analysis features to assist with review and interpretation of most fluid specimens. The networking capabilities of the CellaVision suite have allowed for seamless data exchange between our network of hospital-based hub labs. The CellaVision suite has also allowed for improved training and quality control workflows.
The CellaVision DC-1, designed to address the digital hematology and pathology needs of lower-throughput laboratories, raised significant interest for us as a means to improve our spoke-to-hub workflows. In particular, while our performance parameters for basic CBC resulting are reasonable, we currently experience substantial delays in the morphological review of peripheral smears by hub technologists and pathologists owing to transportation delays from spoke centers. The CellaVision DC-1 presents the opportunity for real-time digital interpretation of peripheral smears originating from spoke sites by expert hub lab staff, entirely eliminating the need for slide transport.
3. What was the aim of the product evaluation carried out by your laboratory network? Could you briefly explain the methodology employed?
When presented with the opportunity to test the CellaVision DC-1 instrument, we immediately wanted to verify that the theoretical turn-around-time benefit could be realized in our lab system. We obtained research ethics and institutional approval to perform a prospective study of turn-around times (from specimen collection at spoke sites to expert review by the hub lab), comparing a CellaVision DC-1-assisted workflow with the current standard of care. We also compared the reported morphology-review results from the CellaVision DC-1-assisted workflow with those from the standard-of-care workflow. In addition, we undertook a comparison with historical turn-around-time data in order to estimate the volume of cases (and hence the length of the study) required for a reasonable comparison.
4. What were the results of the evaluation and did they meet your expectations?
Since we hope to publish our results in the future, I won’t divulge them in their totality as yet, except to say that we identified statistically significant improvements in all parameters assessed, including turn-around times, without evidence of any discordance in the quality of morphologic assessment. While we were not at all surprised to see a statistically significant difference between the workflows, we were impressed by the degree to which these improvements in turn-around time were realized, which we anticipate will mean a clinically significant improvement for labs facing similar workflow hurdles.
5. Can you tell us anything more about your experience of this technology and do you have any particular advice or recommendation for labs interested in its implementation?
We have been working with the CellaVision suite of technologies for several years and have incorporated it into the vast majority of our routine hematology workflows. Several years ago, as part of a small implementation project, I asked a number of our technologist super-users to provide their feedback on instrument usage and software usability. The feedback was overwhelmingly positive.
As part of our current work, we are also hoping to provide more tangible data relating to CellaVision software usability. More specifically, we have undertaken several exercises across a broad cadre of technical staff to identify how much more time-efficient technologist classification using CellaVision software is compared with manual morphology assessment. As with our turn-around-time results, we will soon be reporting a significant advantage for a CellaVision-based workflow.
For laboratories and laboratory networks thinking of implementing CellaVision-enabled technologies, it is important to first understand the nature of your laboratory structure and its hematology workflows. For single high-throughput lab sites, CellaVision offers a number of solutions geared to high-volume needs. Now, as we have seen, CellaVision also offers solutions for smaller low-throughput labs, especially labs frequently faced with the challenges of material referrals.
6. What do you see as the next step for your organization?
While our data support improvements in time-based metrics using the CellaVision DC-1 in our distributed laboratory network, we are hoping next to make an economic argument to support the integration of the CellaVision suite of technologies across our hub-and-spoke network. More specifically, we are hoping to liaise with local health economics experts to demonstrate that improvements in turn-around times (and the commensurate assumed cost reductions when materials transportation is not required) justify the necessary infrastructure investments, as well as to determine where such investments should be made across our network.
Dr Etienne Mahe is Clinical Assistant Professor, Department of Pathology & Laboratory Medicine, University of Calgary, and Consultant Pathologist, Division of Hematology, South Sector, Alberta Public Laboratories
Using whole-genome sequencing to predict antimicrobial resistance
Thousands of strain-specific whole-genome sequences are now available for a wide range of pathogenic bacteria. Using these data, approaches based on machine learning can now be used to predict the results of antimicrobial susceptibility tests from sequence alone. Recent studies have demonstrated the ability to predict minimum inhibitory concentrations with accuracies up to 95 %. Employing these tools to prioritize antibiotic treatment could improve patient outcomes and help to avoid the antibiotic resistance crisis.
by Dr Jonathan M. Monk
Importance of antimicrobial resistance (AMR) prediction
Today over 700 000 people die of antibiotic-resistant infections per year [1]. Frighteningly, it has been estimated that this number could rise to 10 million deaths per year if nothing is done to stop the increase and spread of antibiotic-resistant bacteria [2]. To help combat this threat, it is critical to limit the use of ineffective antibiotics and to prescribe the appropriate antimicrobial therapy to patients as quickly as possible. Although antimicrobial susceptibility testing is now routine in microbiology laboratories, it often takes too long to impact clinical diagnosis.
New tools that rapidly predict antibiotic resistance could improve antibiotic stewardship; such stewardship programmes, when effectively implemented, have led to reductions in levels of resistant bacteria in hospitals [3]. Accurately diagnosing antibiotic-resistant bacteria would avoid the evolutionary pressures that accelerate resistance and would aid antibiotic stewardship approaches, enabling physicians to select the optimal antibiotic regimen to cure a patient rather than one that enhances a given strain’s resistance. Whole-genome sequencing may offer this possibility.
The genomics revolution has made available thousands of strain-specific whole-genome sequences (WGS) for a range of pathogenic bacteria. For example the Pathosystems Resource Integration Center (PATRIC) [the all-bacterial Bioinformatics Resource Center (BRC) funded by the National Institute of Allergy and Infectious Diseases (NIAID)] currently contains over 15 000 Escherichia genomes, more than 14 000 Staphylococcus genomes and nearly 11 000 Mycobacteria genomes [4]. Increasingly, these genomes are coupled with clinical metadata, including minimum inhibitory concentration (MIC) values for various antibiotics.
This large-scale coupling of resistance data with strain-specific genome sequences enables machine learning and other big-data science approaches to study and predict antibiotic resistance. For example, it is now possible to apply case-control studies whereby a group of strains that exhibit a biological phenotype (e.g. antibiotic resistance) is compared to a group of strains that do not. Machine learning techniques can be used to identify biomarkers (e.g. presence/absence of genes or mutations) that are predictive of a given phenotype. These biomarkers can then serve as a basis for diagnostic tests.
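The case-control comparison described above can be sketched as a toy frequency contrast (all strain data below are invented for illustration; a real analysis would apply a proper statistical test and correct for population structure):

```python
# Toy case-control sketch: for each candidate feature (here, gene presence),
# compare its frequency in resistant versus susceptible strains.
# All strain/gene data are invented for illustration.
resistant_strains = [{"geneA", "geneB"}, {"geneA"}, {"geneA", "geneC"}]
susceptible_strains = [{"geneB"}, {"geneC"}, {"geneB", "geneC"}]

def frequency(feature, group):
    """Fraction of strains in the group carrying the feature."""
    return sum(feature in strain for strain in group) / len(group)

# Features enriched in the resistant group are candidate biomarkers.
for gene in ["geneA", "geneB", "geneC"]:
    diff = frequency(gene, resistant_strains) - frequency(gene, susceptible_strains)
    print(gene, round(diff, 2))
```

Here the invented geneA, present in every resistant strain and in no susceptible one, would emerge as the strongest candidate biomarker.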
Here we discuss recent literature using machine learning approaches to predict antibiotic resistance and highlight considerations required for their application.
Introduction to machine learning approaches for WGS-based prediction of AMR
Setting up a machine learning problem involves breaking data into two groups (Fig. 1a):
(1) The y-array, containing genomes or samples (m rows) matched with the phenotype to be predicted. In the case of AMR prediction, the phenotype could be binary (e.g. ‘resistant’ versus ‘susceptible’) or the actual experimentally measured MIC. Predicting MICs is often preferable owing to changing breakpoints used to define resistance. For example, a given strain with a gentamicin MIC of 8 µg/mL may have previously been classified as resistant, but CLSI 2017 guidelines specify that gentamicin resistance requires a MIC above 16 µg/mL. This can lead to inconsistent AMR annotations that confound binary predictions.
(2) The X-matrix, containing the samples (m rows) and their associated features (n columns) that will be used to make a prediction. Features range from those that are completely knowledge-based, such as the presence of genes known to confer antibiotic resistance (e.g. a beta-lactamase), to those that require no previous knowledge, such as the presence of short (~10 bp) segments of DNA on the chromosome (Fig. 1b). These feature types have distinct benefits and drawbacks, and have been used in several recent studies described below.
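The y-array/X-matrix setup above can be sketched as follows (a minimal illustration using the standard library only; the strains, MICs and feature values are invented):

```python
import math

# Hypothetical toy data: four genomes with measured gentamicin MICs (µg/mL)
# and a presence/absence feature matrix (rows = genomes, columns = features).
genomes = ["strain_A", "strain_B", "strain_C", "strain_D"]
y = [1.0, 8.0, 32.0, 0.5]        # y-array: one MIC per genome (m rows)
X = [[1, 0, 1],                  # X-matrix: m rows x n feature columns
     [1, 1, 0],
     [0, 1, 1],
     [1, 0, 0]]

# MICs are conventionally modelled on a log2 scale of twofold dilutions.
y_log2 = [math.log2(mic) for mic in y]

# A binary label can instead be derived from a breakpoint, but it inherits
# the breakpoint's instability: here resistance is MIC > 16 µg/mL (the CLSI
# 2017 gentamicin breakpoint cited in the text).
resistant = [mic > 16 for mic in y]
print(resistant)   # [False, False, True, False]
```

A model trained against `y_log2` predicts the MIC itself, while one trained against `resistant` makes the binary call; the text explains why the former is often preferable.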
Selecting appropriate features for AMR prediction
Knowledge-based features
Knowledge-based features can be obtained by mapping a genome of interest against curated databases of gene products that have already been demonstrated to confer antibiotic resistance. As of February 2019, the Comprehensive Antibiotic Resistance Database (CARD) houses 2553 reference sequences and 1216 SNPs demonstrated to confer resistance in 79 different pathogens [5]. These approaches are akin to laboratory tools that offer PCR-based identification of AMR determinants, such as BioFire. Models trained using previously annotated features often have good accuracies and are easier to interpret because of the accumulated knowledge behind such predictions [6, 7].
However, despite their high accuracy, these tools are limited because they often require many rounds of multiple sequence alignment, which can become computationally expensive at large scale. Also, reliance on known AMR determinants may cause such algorithms to miss newly evolved resistance mechanisms. A useful machine learning approach should be capable of analysing future outbreaks and identifying new mechanisms of resistance, rather than being limited to past knowledge.
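As a minimal sketch of knowledge-based feature construction (the gene names and genome annotations below are hypothetical; a real pipeline would map assemblies against CARD or a similar curated database):

```python
# Hypothetical subset of known AMR determinants; a real list would come
# from a curated database such as CARD.
KNOWN_AMR_GENES = ["blaTEM-1", "aac(3)-II", "gyrA_S83L"]

# Invented per-genome annotations (gene content) for two strains.
genome_annotations = {
    "strain_A": {"blaTEM-1", "gyrA_S83L"},
    "strain_B": {"aac(3)-II"},
}

def knowledge_features(annotations):
    """Return one presence/absence row per genome, ordered by KNOWN_AMR_GENES."""
    return {genome: [int(gene in genes) for gene in KNOWN_AMR_GENES]
            for genome, genes in annotations.items()}

rows = knowledge_features(genome_annotations)
print(rows["strain_A"])   # [1, 0, 1]
```

Each row then becomes one line of the X-matrix; interpretability comes for free because every column is a named, literature-backed resistance determinant.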
Gene- and allele-based features
An approach that balances these two extremes involves assembling features by annotating the genome for known protein coding genes, but keeping the feature types agnostic, for example by including genes with functions ranging from cell replication to cell wall synthesis to metabolism. This approach has the advantage of not requiring known determinants of antimicrobial resistance, but it does still require annotated genomes, potentially biasing results by annotation method.
Recent studies have used this approach to predict antibiotic resistance in E. coli with accuracies above 90 % [8]. Importantly, this approach identified features that outperformed genes established in the literature. Such an approach can go even deeper by breaking the genes down into their constituent alleles to account for potential mutations in each coding sequence. Another study took this approach to examine 1595 strains of M. tuberculosis and identified 33 known AMR-conferring genes and 24 potentially novel resistance-conferring genes [9]. Thus, methods that rely on many features extracted from the genome, rather than restricting them to those with previous knowledge, can accurately predict AMR and identify novel mechanisms of resistance, making them extensible to mutations and mechanisms that may emerge in the future.
Kmer feature selection
A contrasting approach, which can identify new mechanisms of resistance and requires no a priori knowledge, involves breaking up a genome into short (~10 bp) segments of DNA, known as ‘kmers’, and using these as features. All genomes in the collection can be divided into kmers that are then added to the X-matrix, where the presence of a specific kmer becomes a feature. This kmer-based approach thus contrasts with knowledge-based methods, which rely on a database of curated genes and mutations previously shown to confer antibiotic resistance.
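A minimal kmer extraction might look like the following (toy sequence; real genomes yield millions of distinct kmers, each of which becomes one presence/absence or count column of the X-matrix):

```python
from collections import Counter

def kmer_features(sequence, k=10):
    """Count all overlapping kmers of length k in a DNA sequence."""
    seq = sequence.upper()
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

# Toy 14 bp sequence with k=4 for readability; the text's ~10 bp kmers
# work identically.
counts = kmer_features("ATGCGTACGTTAGC", k=4)
print(counts["GCGT"])   # 1
```

No annotation step is needed, which is exactly why this representation can capture resistance mechanisms that no database yet describes.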
Studies of A. baumannii, S. aureus, S. pneumoniae, K. pneumoniae and collections of over 5000 Salmonella genomes have demonstrated the ability to predict MICs with average accuracies above 90 % within ±1 twofold dilution step [10–12]. Unfortunately, this high accuracy and ability to predict new mechanisms of resistance come with the trade-off of being difficult to interpret. For example, a model may imply a strong relationship between predicted resistance and segments of the genome with no annotated function or known link to a biological process.
Building and evaluating a machine learning model
Once the features for a model have been selected, it is time to apply a machine learning algorithm to the data. Several such algorithms exist, each with benefits and drawbacks related to accuracy and interpretability [13]. Unfortunately, the more accurate models are often difficult to interpret, whereas more intelligible models have worse predictive capabilities. In healthcare applications it is vital for the treating physician to be able to understand, validate and trust a model, and thus relying on easier-to-interpret methods such as decision trees or simple logistic regression may be best.
When evaluating a machine learning model it is imperative to question how the model was trained. A major pitfall for machine learning approaches is their tendency to ‘overfit’ datasets. For example, a model evaluated on the same data used to train it could simply ‘remember’ that data and correctly predict any point in the training set, yet perform poorly on new data. Robust machine learning workflows avoid such overfitting by splitting the data into non-overlapping sets, where ~80 % of the data is used for training and ~20 % for testing (Fig. 2a). This splitting process should be random and be performed several times to assess the overall accuracy and sensitivity of a model, thereby limiting overfitting and ensuring that predictions remain generalizable and robust.
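The repeated random 80/20 splitting described above can be sketched as follows (a simplified illustration, not a full cross-validation framework):

```python
import random

def repeated_splits(n_samples, n_repeats=5, test_frac=0.2, seed=0):
    """Yield (train, test) index lists from repeated random 80/20 splits."""
    rng = random.Random(seed)
    indices = list(range(n_samples))
    n_test = max(1, int(round(test_frac * n_samples)))
    for _ in range(n_repeats):
        rng.shuffle(indices)
        yield sorted(indices[n_test:]), sorted(indices[:n_test])

# Accuracy is assessed only on the held-out test set of each round, so a
# model that merely memorizes its training data is caught.
for train, test in repeated_splits(10, n_repeats=2):
    assert not set(train) & set(test)          # non-overlapping sets
    assert len(train) + len(test) == 10
```

Averaging the test-set accuracy over the repeats gives the overall estimate of how well the model generalizes.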
Once a model’s ability to predict new data is established, it is finally possible to evaluate its predictive performance (Fig. 2b). Thus far, we have described previous studies in terms of correct predictions and accuracies. However, it is often more important to evaluate the cases where a model fails. Requirements for AMR diagnostic devices are strict, and devices typically describe their utility in terms of error rates. Major errors (MEs) occur when susceptible genomes are incorrectly predicted to have resistant MICs. The opposite case, when resistant genomes are incorrectly assigned susceptible MICs, is termed a very major error (VME). US Food and Drug Administration (FDA) standards for automated systems recommend an ME rate ≤3 %. A recent study of over 5000 Salmonella genomes used kmers to train a model that demonstrated MIC predictions for 15 antibiotics with ME rates in this range [10]. The FDA standards for VME rates indicate that the lower 95 % confidence limit should be ≤1.5 % and the upper limit ≤7.5 %. Models for seven of the 15 antibiotics in the same study had acceptable VME rates based on this requirement. Thus, such an approach could make acceptable predictions for diagnostic applications.
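Given paired predicted and observed categories, the ME and VME rates defined above can be computed as follows (an illustrative sketch with invented calls):

```python
def error_rates(predicted, observed):
    """Compute major error (ME) and very major error (VME) rates.

    predicted/observed are lists of 'R' (resistant) or 'S' (susceptible).
    ME:  observed S, predicted R (falsely calling resistance).
    VME: observed R, predicted S (missing true resistance).
    """
    pairs = list(zip(predicted, observed))
    n_sus = sum(1 for _, o in pairs if o == "S")
    n_res = sum(1 for _, o in pairs if o == "R")
    me = sum(1 for p, o in pairs if o == "S" and p == "R") / n_sus
    vme = sum(1 for p, o in pairs if o == "R" and p == "S") / n_res
    return me, vme

# Invented four-isolate example: one ME and one VME out of two of each class.
me, vme = error_rates(["R", "S", "S", "R"], ["S", "S", "R", "R"])
print(me, vme)   # 0.5 0.5
```

Note that ME and VME are normalized by the number of truly susceptible and truly resistant isolates respectively, which is why both matter independently of overall accuracy.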
Summary and outlook
In summary, WGS-based prediction of antimicrobial susceptibility is becoming a reality. This brief summary has limited its scope to tools and methods that predict antibiotic susceptibility from WGS. In the future, however, it may be possible to combine genomic features with patient information such as age, gender and comorbidities. Furthermore, rather than predicting only antibiotic susceptibility, it may be possible to train an algorithm to predict patient outcome and adjust treatment regimens to improve patient care [14].
Such approaches are sorely needed because, despite improvements in antibiotic use, the Centers for Disease Control and Prevention (CDC) estimates that approximately 50 % of antibiotics are still prescribed unnecessarily in the US at a yearly cost of $1.1 billion [15], and the annual impact of resistant infections in the US is estimated at $20 billion in excess healthcare costs and 8 million additional hospital days [16]. Significant improvements in patient outcome have been observed when the time to optimal antibiotic therapy is reduced [17, 18]. Rapid identification and targeted treatment of pathogenic bacteria using tools assisted by the algorithms presented here would enable precision medicine for pathogens, lowering the incidence of antibiotic resistance, improving patient health and decreasing hospital costs.
Figure 1. How to set up a whole-genome sequence (WGS)-based machine learning problem for antimicrobial resistance (AMR) prediction. (a) Samples (m=rows) with sequenced genomes and known phenotypes of interest [‘susceptible’ vs ‘resistant’ phenotypes or minimum inhibitory concentration (MIC) value] are used to train a machine learning model. All values to be predicted are placed into the ‘y’ array. The ‘features’ used to train a model form the columns of the X-matrix. (b) For WGS-based antimicrobial-susceptibility-test prediction possible feature types include: (1) known antibiotic resistance conferring genes or mutations, (2) annotated protein coding genes (independent of known functions) and even (3) the presence of short fragments of DNA sequence on the chromosome known as ‘kmers’. These different feature types have a trade-off between ease of interpretation (easiest for previously identified features) and ability to detect novel AMR determinants (best for short sequence fragments).
Figure 2. Evaluating predictions from a machine learning model. (a) A machine learning model that is ‘overfit’ is inflexible to new data. To ensure a model is robust enough to predict new samples, all models should be cross-validated. This process involves randomly splitting the whole dataset into training (~80 % of samples) and testing (~20 %) sets. The sets should be shuffled multiple times to check model accuracy across different samples and features. (b) The results of running the model on testing sets can then be compared for each randomly sampled set (different colored lines). The model’s performance is compared by calculating the area under the curve (AUC) on a plot of the true positive rate vs the false positive rate, often called a receiver operating characteristic (ROC) curve. Model accuracy can be calculated from the number of true positive (TP) [model prediction, resistant (R); experimental result, R] and true negative (TN) predictions divided by the total number of predictions. However, it is often more important to gauge how a model fails: for example a false positive ‘major error’ [model prediction, R; experimental result, susceptible (S)] may lead to incorrectly withholding an effective antibiotic. Even worse, false negative ‘very major error’ predictions (model prediction, S; experimental result, R) could lead to prescribing an antibiotic that is ineffective.
References
1. The dangers of hubris on human health. Global Risks – Reports. World Economic Forum 2013 (http://reports.weforum.org/global-risks-2013/risk-case-1/the-dangers-of-hubris-on-human-health/).
2. O’Neill J. Antimicrobial resistance: tackling a crisis for the health and wealth of nations. Rev Antimicrob Resist 2014; 20: 1–16.
3. Carling P, Fung T, Killion A, Terrin N, Barza M. Favorable impact of a multidisciplinary antibiotic management program conducted during 7 years. Infect Control Hosp Epidemiol 2003; 24(9): 699–706.
4. Wattam AR, Abraham D, Dalay O, Disz TL, Driscoll T, Gabbard JL, Gillespie JJ, Gough R, Hix D, Kenyon R, Machi D, Mao C, Nordberg EK, Olson R, Overbeek R, Pusch GD, Shukla M, Schulman J, Stevens RL, Sullivan DE, Vonstein V, Warren A, Will R, Wilson MJ, Yoo HS, Zhang C, Zhang Y, Sobral BW. PATRIC, the bacterial bioinformatics database and analysis resource. Nucleic Acids Res 2014; 42(Database issue): D581–591.
5. Jia B, Raphenya AR, Alcock B, Waglechner N, Guo P, Tsang KK, Lago BA, Dave BM, Pereira S, Sharma AN, Doshi S, Courtot M, Lo R, Williams LE, Frye JG, Elsayegh T, Sardar D, Westman EL, Pawlowski AC, Johnson TA, Brinkman FS, Wright GD, McArthur AG. CARD 2017: expansion and model-centric curation of the comprehensive antibiotic resistance database. Nucleic Acids Res 2017; 45(D1): D566–573.
6. Jeukens J, Freschi L, Kukavica-Ibrulj I, Emond-Rheault J-G, Tucker NP, Levesque RC. Genomics of antibiotic-resistance prediction in Pseudomonas aeruginosa. Ann N Y Acad Sci 2019; 1435(1): 5–17 (First published 2017 Online: https://nyaspubs.onlinelibrary.wiley.com/doi/abs/10.1111/nyas.13358).
7. Bradley P, Gordon NC, Walker TM, Dunn L, Heys S, Huang B, Earle S, Pankhurst LJ, Anson L, de Cesare M, Piazza P, Votintseva AA, Golubchik T, Wilson DJ, Wyllie DH, Diel R, Niemann S, Feuerriegel S, Kohl TA, Ismail N, Omar SV, Smith EG, Buck D, McVean G, et al. Rapid antibiotic-resistance predictions from genome sequence data for Staphylococcus aureus and Mycobacterium tuberculosis. Nat Commun 2015; 6: 10063.
8. Her H-L, Wu Y-W. A pan-genome-based machine learning approach for predicting antimicrobial resistance activities of the Escherichia coli strains. Bioinformatics 2018; 34(13): i89–i95.
9. Kavvas ES, Catoiu E, Mih N, Yurkovich JT, Seif Y, Dillon N, Heckmann D, Anand A, Yang L, Nizet V, Monk JM, Palsson BO. Machine learning and structural analysis of Mycobacterium tuberculosis pan-genome identifies genetic signatures of antibiotic resistance. Nat Commun 2018; 9(1): 4306.
10. Nguyen M, Long SW, McDermott PF, Olsen RJ, Olson R, Stevens RL, Tyson GH, Zhao S, Davis JJ. Using machine learning to predict antimicrobial MICs and associated genomic features for nontyphoidal Salmonella. J Clin Microbiol 2019; 57(2): pii: e01260-18 (http://dx.doi.org/10.1128/JCM.01260-18).
11. Davis JJ, Boisvert S, Brettin T, Kenyon RW, Mao C, Olson R, Overbeek R, Santerre J, Shukla M, Wattam AR, Will R, Xia F, Stevens R. Antimicrobial resistance prediction in PATRIC and RAST. Sci Rep 2016; 6: 27930.
12. Nguyen M, Brettin T, Long SW, Musser JM, Olsen RJ, Olson R, Shukla M, Stevens RL, Xia F, Yoo H, Davis JJ. Developing an in silico minimum inhibitory concentration panel test for Klebsiella pneumoniae. Sci Rep 2018; 8(1): 421.
13. Deo RC. Machine learning in medicine. Circulation 2015; 132(20): 1920–1930.
14. Kachroo P, Eraso JM, Beres SB, Olsen RJ, Zhu L, Nasser W, Bernard PE, Cantu CC, Saavedra MO, Arredondo MJ, Strope B, Do H, Kumaraswami M, Vuopio J, Gröndahl-Yli-Hannuksela K, Kristinsson KG, Gottfredsson M, Pesonen M, Pensar J, Davenport ER, Clark AG, Corander J, Caugant DA, Gaini S. Integrated analysis of population genomics, transcriptomics and virulence provides novel insights into Streptococcus pyogenes pathogenesis. Nat Genet 2019; 51(3): 548–559 (http://dx.doi.org/10.1038/s41588-018-0343-1).
15. Antibiotic resistance threats in the United States, 2013. US Centers for Disease Control and Prevention 2013 (https://www.cdc.gov/drugresistance/pdf/ar-threats-2013-508.pdf).
16. Fair RJ, Tor Y. Antibiotics and bacterial resistance in the 21st century. Perspect Medicin Chem 2014; 6: 25–64.
17. Kumar A, Roberts D, Wood KE, Light B, Parrillo JE, Sharma S, Suppes R, Feinstein D, Zanotti S, Taiberg L, Gurka D, Kumar A, Cheang M. Duration of hypotension before initiation of effective antimicrobial therapy is the critical determinant of survival in human septic shock. Crit Care Med 2006; 34(6): 1589–1596.
18. Palmer HR, Palavecino EL, Johnson JW, Ohl CA, Williamson JC. Clinical and microbiological implications of time-to-positivity of blood cultures in patients with Gram-negative bacilli bacteremia. Eur J Clin Microbiol Infect Dis 2013; 32(7): 955–959.
The author
Jonathan M. Monk, PhD
Department of Bioengineering, UC San Diego, San Diego, California, USA
E-mail: jmonk@ucsd.edu
Activated partial thromboplastin time assay
The activated partial thromboplastin time coagulation assay is one of the most frequently performed tests in hematology and has a variety of uses in clinical practice. Accurate interpretation of the test depends on both clinical context (i.e. why the test was ordered) and an understanding of each laboratory’s normal reference range and assay sensitivity regarding detection of factor deficiencies, (unfractionated) heparin therapy and lupus anticoagulant.
by Dr Julianne Falconer and Dr Emmanuel J. Favaloro
Introduction
The activated partial thromboplastin time (APTT) assay is a commonly requested coagulation test, perhaps second only to the prothrombin time (PT)/international normalized ratio (INR), which is used to monitor vitamin K antagonist (VKA) therapy such as warfarin. The APTT test assesses the intrinsic pathway of coagulation and has a variety of clinical uses; it is primarily used to screen for hemostasis issues, factor deficiencies or lupus anticoagulant (LA), or to monitor unfractionated heparin (UFH) therapy dosing. The test is sensitive to, but not specific for, detection of these abnormalities or influences. APTT prolongation may also be seen in liver disease, disseminated intravascular coagulation (DIC) and in the presence of factor inhibitors. Interpretation of an APTT result, be it normal or prolonged, depends on both the clinical context and the characteristics of the reagents and the assay as performed on a particular instrument. The establishment of normal reference intervals (NRIs) and assessment of the assay’s sensitivity to heparin, LA and clotting factors are important to provide accurate information for clinical interpretation [1].
Uses of the APTT assay
The APTT test is a global assay that measures the time to fibrin clot formation via the contact factor (‘intrinsic’) pathway (Fig. 1). The APTT test is usually performed on fully automated platforms, and involves activation of coagulation within the test (plasma) sample by the addition of specific reagents (containing phospholipids, contact factor activator and calcium chloride). The type of contact factor activator, and the type and concentration of phospholipid, used in the APTT reagent affects the sensitivity of the assay to, and thus its prolongation by, factor deficiencies, as well as to the presence of UFH and LA [1, 2].
The APTT is commonly used to monitor anticoagulation therapy using UFH (Table 1). It may also be prolonged, however, in the presence of VKAs including warfarin, as well as direct oral anticoagulants (DOACs) such as dabigatran (a direct thrombin inhibitor) and rivaroxaban (an anti-FXa inhibitor). The APTT is generally less sensitive to, but may still be slightly prolonged by, anticoagulation with low molecular weight heparin (LMWH) and with apixaban, another DOAC (anti-FXa inhibitor).
In the absence of anticoagulation therapy, an ‘isolated’ prolonged APTT may indicate a clinically important factor deficiency, for example as a screen for hemophilia A (FVIII deficiency), hemophilia B (FIX deficiency) or hemophilia C (FXI deficiency), or even von Willebrand disease (VWD; which may be associated with loss of FVIII) [1]. An ‘isolated’ prolonged APTT, however, could instead reflect a clinically unimportant factor deficiency, such as FXII or another contact factor deficiency. Other explanations for an ‘isolated’ prolonged APTT include a factor inhibitor or LA. Despite causing prolongation of the APTT in vitro, LA may be associated clinically with an increased risk of thrombosis rather than bleeding. A prolonged APTT may be accompanied by a prolonged PT in the context of liver disease, DIC or fibrinogen (or other ‘common pathway’) deficiencies. Clinical context, therefore, must form the basis for accurate interpretation of the APTT, be it normal or prolonged, and together with other routine coagulation studies is essential to guide further investigations (Fig. 2).
A large number of commercial APTT reagents are now available, with wide variation in the type of contact factor activator and in the phospholipid source and concentration used. This results in variation in sensitivity to all the typical influences, in turn causing substantial variation in NRIs between APTT reagents and requiring the establishment and verification of NRIs based on both the reagent and the instrument in use. A lack of awareness of variation in APTT reagent sensitivity, in the context of the clinical picture, can lead to flawed clinical interpretation of results.
Establishment and verification of NRIs
A minimum of 20 normal individuals may be sufficient to establish a NRI for PT and APTT, according to guidance documents provided by the Clinical and Laboratory Standards Institute (CLSI) [3, 4]. However, a larger number of normal individuals is recommended to establish an initial NRI, following which a smaller sample of normal individuals may be used for future verification purposes [1].
As an example, Figure 3 shows an initial (historical) NRI estimation for APTT testing using a dataset of nearly 80 normal individuals. This included one outlier sample result (Fig. 3a), which was removed to produce the cleaner dataset used to generate the subsequent NRI. A statistical normality test showed the distribution to be near Gaussian, allowing parametric statistical assessment. For APTT testing, the NRI would aim to capture the central 95 % of values, approximating a mean ± 2 standard deviation (SD) assessment (Fig. 3b). Logarithmic transformation can instead be used to normalize test data when it is non-parametric and fits a log distribution (e.g. Fig. 3c).
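A mean ± 2 SD NRI estimate of this kind can be computed as follows (the donor values below are invented for illustration):

```python
import statistics

# Hypothetical APTT results (seconds) from normal donors, after outlier
# removal and a normality check, as described in the text.
aptt = [28.1, 30.4, 29.2, 31.0, 27.8, 30.9, 29.5, 28.7, 30.1, 29.9]

mean = statistics.mean(aptt)
sd = statistics.stdev(aptt)
lower, upper = mean - 2 * sd, mean + 2 * sd   # central ~95 % interval
print(f"NRI: {lower:.1f}-{upper:.1f} s")
```

For log-distributed data, the same calculation would be applied to the log-transformed values and the limits back-transformed.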
If an NRI has been previously established by the laboratory, or by the manufacturer of the APTT reagent for a specific reagent/instrument combination, the laboratory can use a process of transference to verify that the ‘established’ NRI is fit for purpose. This is done by confirming that the large majority of results from a small set of normal donors fall within the established NRI (e.g. >18 of a set of 20 normal samples). Samples from normal individuals, or a dataset of normal patient test results, may likewise be used to assess a new lot of reagent and establish whether an existing NRI can be maintained when changing reagent lots.
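The transference check is simple to express in code; the donor results below are hypothetical:

```python
def verify_nri(sample_results, nri_low, nri_high, min_within=19):
    """Transference check: an established NRI is verified as fit for
    purpose if at least `min_within` normal-donor results fall inside
    the interval (e.g. >18 of a set of 20, per the text)."""
    within = sum(nri_low <= r <= nri_high for r in sample_results)
    return within, within >= min_within

# Illustrative: 20 normal-donor APTT results (sec) checked against a 27-38 s NRI
donors = [29.1, 31.4, 33.0, 28.5, 35.2, 30.9, 32.1, 34.8, 27.6, 36.3,
          30.0, 31.8, 33.7, 29.9, 28.2, 35.9, 32.5, 30.4, 34.1, 39.2]
count, ok = verify_nri(donors, 27.0, 38.0)  # one result (39.2) falls outside
```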
Factor (deficiency) sensitivity
Factor sensitivity of an APTT assay (representing a specific reagent/instrument combination) can be assessed in a number of ways. One method involves serial dilution of in-house or commercially derived normal plasma into single-factor-deficient plasma, to generate a series of aliquots with decreasing factor levels. These samples are then tested by APTT and for factor level. The APTT reagent is regarded as sensitive to the factor level at which the APTT crosses the upper limit of the NRI.
A more accurate process, although difficult to perform outside a hemophilia centre, is to establish APTT values from actual patients with various known factor levels [1, 2] (e.g. Fig. 4).
As a general guide, if the APTT is used for screening factor deficiencies, then the patient APTT value should be above the NRI when their factor level is below around 30–40 U/dL for FVIII, FIX, and FXI.
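Given a dilution-series dataset, the factor level at which the APTT crosses the upper NRI limit can be estimated by interpolation; the FVIII levels and APTT values below are hypothetical:

```python
import numpy as np

def factor_sensitivity(factor_levels, aptt_values, upper_nri):
    """Estimate the factor level (U/dL) at which the APTT crosses the
    upper NRI limit, by linear interpolation of a dilution-series
    dataset. Assumes the APTT rises as the factor level falls."""
    lv = np.asarray(factor_levels, dtype=float)
    ap = np.asarray(aptt_values, dtype=float)
    order = np.argsort(ap)                # sort by increasing APTT
    return float(np.interp(upper_nri, ap[order], lv[order]))

# Illustrative dilution series: FVIII level (U/dL) vs measured APTT (sec)
levels = [100, 50, 40, 30, 20, 10, 5, 1]
aptts  = [31.0, 33.5, 35.0, 37.0, 40.0, 46.0, 55.0, 75.0]
cutoff = factor_sensitivity(levels, aptts, upper_nri=38.0)
```

For these hypothetical data the APTT exceeds a 38-sec upper limit once FVIII falls below roughly 27 U/dL, consistent with the 30–40 U/dL screening guide above.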
Sensitivity of APTT to UFH
Despite the changing landscape of anticoagulation therapy with the addition of direct anti-Xa inhibitors (rivaroxaban and apixaban) and a direct thrombin inhibitor (dabigatran) [5, 6], both LMWH and UFH continue to be frequently used in clinical practice. In turn, the APTT remains the generally preferred method for UFH monitoring over anti-FXa, given the wide availability and relatively low cost of the assay. However, unlike the calibrated anti-FXa assay, APTT results are subject to variation between different instruments, whether based on optical or mechanical clot detection [7], between different APTT reagents (including between lots of the same reagent type) and between the algorithms instruments use for raw data processing. This poses a substantial problem for the historical recommendation to maintain patients on UFH at 1.5–2.5 times the ‘normal reference value’ (which was based on limited evidence [8]). Therapeutic ranges should therefore be defined with specific reference to the instrument/reagent combination used locally [9].
One ‘spiking method’ involves preparing samples containing known quantities of UFH diluted into normal pool plasma, which are then tested by APTT and anti-FXa methods, allowing an estimation of the APTT therapeutic interval [1]. However, variation in certain components of patient plasma, as well as the non-physiologically processed nature of the UFH used, can affect the interpretation of data obtained with this method. A better approach involves ex vivo assessment of plasma obtained from patients on UFH therapy, testing these samples by both APTT and anti-FXa, and then establishing the UFH therapeutic range for APTT that matches the therapeutic range for anti-FXa (e.g. 0.3–0.7 U/mL). It is important to recognize that the individual APTT response to UFH is affected by many influences, including (but not limited to): antithrombin level; high or low levels of coagulation factors and proteins, such as von Willebrand factor or proteins released from endothelial cells or platelets, that compete with antithrombin for heparin binding; increased FVIII levels in the acute phase response; reduced FXII; and the presence of LA.
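The ex vivo mapping can be sketched as a regression of paired results; the paired anti-FXa/APTT values below are hypothetical, and real-world data (e.g. Fig. 4) are considerably noisier:

```python
import numpy as np

def aptt_therapeutic_range(anti_xa, aptt, lo=0.3, hi=0.7):
    """Map an anti-FXa therapeutic range (U/mL) onto APTT (sec) by linear
    regression of paired ex vivo results from patients on UFH."""
    slope, intercept = np.polyfit(anti_xa, aptt, 1)
    return intercept + slope * lo, intercept + slope * hi

# Illustrative paired results (anti-FXa U/mL, APTT sec) from UFH patients
xa    = [0.10, 0.20, 0.30, 0.40, 0.50, 0.60, 0.70, 0.80]
aptts = [38.0, 46.0, 55.0, 63.0, 72.0, 80.0, 89.0, 97.0]
low_s, high_s = aptt_therapeutic_range(xa, aptts)
```

For these hypothetical data, anti-FXa 0.3–0.7 U/mL maps to an APTT range of roughly 55–89 sec; the range is only valid for the local instrument/reagent combination.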
To obtain a cleaner data set to establish UFH therapeutic ranges, the following steps can be undertaken during sample collection and processing [1].
• Ensure baseline PT, APTT and INR testing prior to commencement of UFH are within their NRIs.
• Exclude underfilled samples, and samples with visible hemolysis or likely platelet activation (with release of the heparin-neutralizing platelet factor 4; PF4).
• Exclude samples containing LMWH or other anticoagulants (e.g. VKAs, DOACs).
• Adhere to manufacturer guidelines with regards to the window from time of blood collection to testing.
• Double centrifuge samples when freezing them for batch testing (to remove residual platelets, which release PF4 and phospholipids on thawing).
• Accumulate data over a suitable time period to account for day-to-day test result variability.
• Aim for 30 or more data points.
• Appropriately dilute samples with anti-Xa activity above the test’s linearity limit.
• Remove data points reflecting ‘gross’ outliers.
LA sensitivity
Given that the APTT is a phospholipid-dependent assay, the test may be prolonged in the presence of LA. However, differences in phospholipid type and concentration between APTT reagents account for the wide variation seen in the degree of APTT prolongation, including that due to LA. The LA sensitivity of a particular APTT reagent can be assessed by testing samples containing LA with each reagent, for example by comparing mean clotting times. A reagent’s LA sensitivity also has bearing on the use of the APTT to monitor UFH, and must inform any algorithm for further investigation of unexpectedly prolonged APTTs.
In one empirical method, testing with an LA-sensitive assay (e.g. the dilute Russell viper venom time; dRVVT) is first used to assemble a set of LA-positive samples of various ‘strengths’. Different APTT reagents are then used to test these samples, and the data for each sample can be plotted against the upper reference limit of the APTT for each reagent [1]. The ratio of the clotting time of each LA-positive sample to the mean normal APTT (derived from normal plasma samples) is calculated, and the median of these ratios allows the different reagents to be ranked according to LA sensitivity. It then becomes clear which APTT reagents are most (versus least) sensitive to LA, and these can be selected according to laboratory needs. For example, a laboratory may prefer a relatively LA-‘insensitive’ APTT reagent, combined with good FVIII/FIX/FXI and UFH sensitivity, if a general-purpose APTT screening reagent is desired (i.e. a hospital laboratory monitoring UFH, but wishing to avoid LA detection in asymptomatic patients). Alternatively, a laboratory may select a pair of LA-sensitive and LA-insensitive APTT reagents if it wishes to assess for LA in symptomatic (thrombosis and/or pregnancy morbidity) patients.
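A minimal sketch of this median-ratio ranking, with hypothetical reagents, clotting times and mean normal APTTs:

```python
import statistics

def rank_la_sensitivity(reagent_results, mean_normal_aptt):
    """Rank APTT reagents by LA sensitivity: for each reagent, compute
    the ratio of each LA-positive sample's clotting time to that
    reagent's mean normal APTT, then rank reagents by the median ratio
    (higher median ratio = more LA sensitive)."""
    medians = {}
    for reagent, clot_times in reagent_results.items():
        ratios = [t / mean_normal_aptt[reagent] for t in clot_times]
        medians[reagent] = statistics.median(ratios)
    return sorted(medians.items(), key=lambda kv: kv[1], reverse=True)

# Illustrative clotting times (sec) for the same LA-positive sample set
results = {
    'reagent_A': [45.0, 52.0, 60.0, 70.0],   # hypothetical, LA sensitive
    'reagent_B': [36.0, 38.0, 41.0, 44.0],   # hypothetical, LA insensitive
}
normals = {'reagent_A': 32.0, 'reagent_B': 33.0}
ranking = rank_la_sensitivity(results, normals)  # most LA sensitive first
```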
Conclusion
Interpretation of a normal or a prolonged APTT must take into account both the clinical context, including the presence of anticoagulant therapy, and the methods and reagents used by the laboratory. The sensitivity of a particular APTT reagent for detecting UFH therapy, LA and factor deficiencies has significant bearing on diagnostic assessment and therapy monitoring, and thus represents essential knowledge for laboratory and clinical staff alike.
Figure 1. The activated partial thromboplastin time (APTT) assay measures the clot time to formation of fibrin via the contact factor pathway and is dependent on contact factors (FXII and above), and then FXI, FIX, FVIII, FX, FV, and FII. The APTT is also affected by vitamin K antagonists (VKAs; ‘W’), but more importantly is used to monitor unfractionated heparin (UFH; ‘H’) therapy and also to assess for potential hemophilia (FVIII, FIX or FXI deficiency). The APTT is also sensitive to the presence of other anticoagulants, including direct oral anticoagulants (DOACs) such as dabigatran (‘D’) and rivaroxaban (‘R’), and potentially also apixaban (‘A’) for some reagents. The APTT may also be utilized as part of a panel of tests to help assess for lupus anticoagulant (LA). (Modified from Favaloro EJ, et al. How to optimize activated partial thromboplastin time (APTT) testing: solutions to establishing and verifying normal reference intervals and assessing APTT reagents for sensitivity to heparin, lupus anticoagulant, and clotting factors. Semin Thromb Hemost 2019; 45: 22–35 [1].)
Figure 2. An algorithm that provides one recommended approach for the follow-up of an abnormal APTT. Always exclude an anticoagulant effect first – there is no point investigating a prolonged APTT associated with anticoagulant use. Then consider the patient’s history, or the clinical reason for the test order, both of which assist in terms of follow-up approach. APTT, activated partial thromboplastin time; FBC/CBC, full blood count (UK/Australia)/complete blood count (USA); DIC, disseminated intravascular coagulation; DOAC, direct oral anticoagulant; EDTA, ethylenediaminetetraacetic acid; F, factor; LA, lupus anticoagulant; PT, prothrombin time. (Modified from Favaloro EJ, et al. How to optimize activated partial thromboplastin time (APTT) testing: solutions to establishing and verifying normal reference intervals and assessing APTT reagents for sensitivity to heparin, lupus anticoagulant, and clotting factors. Semin Thromb Hemost 2019; 45: 22–35 [1].)
Table 1. The APTT test. A multipurpose and sensitive assay, but not specific for any individual parameter. List is not meant to be all inclusive.
DOACs, direct oral anticoagulants; VWD, von Willebrand disease.
*PT should also be prolonged if APTT is prolonged in the indicated setting.
(Modified from Favaloro EJ, et al. How to optimize activated partial thromboplastin time (APTT) testing: solutions to establishing and verifying normal reference intervals and assessing APTT reagents for sensitivity to heparin, lupus anticoagulant, and clotting factors. Semin Thromb Hemost 2019; 45: 22–35 [1].)
Figure 3. Historical data from our laboratory to illustrate the process of deriving a normal reference interval (NRI) for the APTT, using nearly 80 normal individual plasma samples. (a) APTT of all samples tested shown as a dot plot; one clear outlier shown as a red asterisk. (b) Data cleaned of outliers [i.e. in this case the single red asterisk sample in (a)]. (c) NRI estimate as mean ± 2 standard deviations (SDs) to provide approximate 95 % coverage. Bar graphs of parametric data processing and log transformed data processing shown. The NRI for this data set approximates 27–38 sec. (Modified from Favaloro EJ, et al. How to optimize activated partial thromboplastin time (APTT) testing: solutions to establishing and verifying normal reference intervals and assessing APTT reagents for sensitivity to heparin, lupus anticoagulant, and clotting factors. Semin Thromb Hemost 2019; 45: 22–35 [1].)
Figure 4. Ex vivo heparin versus APTT evaluation. (a) Samples from all patients identified to be on heparin (as identified by our laboratory information system) and for which an APTT was performed at the time of evaluation are also tested for anti-FXa level. The APTT therapeutic range is that corresponding to a heparin level of 0.3–0.7 U/mL by anti-Xa. However, many data points in this figure do not reflect UFH alone. Some points may instead reflect low molecular weight heparin (e.g. likely to be the sample yielding an anti-Xa value close to 0.7 U/mL but with normal APTT) or alternatively UFH co-incident to FXII deficiency or LA, or else patients potentially transitioning from UFH to VKAs. These data points can be removed to yield a ‘cleaner’ data set, as shown in (b). (Modified from Favaloro EJ, et al. How to optimize activated partial thromboplastin time (APTT) testing: solutions to establishing and verifying normal reference intervals and assessing APTT reagents for sensitivity to heparin, lupus anticoagulant, and clotting factors. Semin Thromb Hemost 2019; 45: 22–35 [1].)
Disclaimer: The views expressed in this paper are those of the authors, and are not necessarily those of NSW Health Pathology.
References
1. Favaloro EJ, Kershaw G, Mohammed S, Lippi G. How to optimize activated partial thromboplastin time (APTT) testing: solutions to establishing and verifying normal reference intervals and assessing APTT reagents for sensitivity to heparin, lupus anticoagulant, and clotting factors. Semin Thromb Hemost 2019; 45: 22–35.
2. Kershaw G. Performance of activated partial thromboplastin time (APTT): determining reagent sensitivity to factor deficiencies, heparin, and lupus anticoagulants. Methods Mol Biol 2017; 1646: 75–83.
3. Defining, establishing, and verifying reference intervals in the clinical laboratory; proposed guideline—third edition. CLSI document C28–P3. Clinical and Laboratory Standards Institute (CLSI) 2008.
4. One-Stage Prothrombin time (PT) test and activated partial thromboplastin time (APTT) test; approved guideline—second edition. CLSI document H47-A2. CLSI 2008.
5. Favaloro EJ, McCaughan GJ, Mohammed S, Pasalic L. Anticoagulation therapy in Australia. Ann Blood 2018; 3: 48.
6. Lippi G, Mattiuzzi C, Adcock D, Favaloro EJ. Oral anticoagulants around the world: an updated state-of the art analysis. Ann Blood 2018; 3: 49.
7. Favaloro EJ, Lippi G. Recent advances in mainstream hemostasis diagnostics and coagulation testing. Semin Thromb Hemost. 2019; 45(3): 228–246.
8. Baluwala I, Favaloro EJ, Pasalic L. Therapeutic monitoring of unfractionated heparin – trials and tribulations. Expert Rev Hematol 2017; 10(7): 595–605.
9. Marlar RA, Clement B, Gausman J. Activated partial thromboplastin time monitoring of unfractionated heparin therapy: issues and recommendations. Semin Thromb Hemost 2017; 43(3): 253–260.
The authors
Julianne Falconer1 MBBS and Emmanuel J. Favaloro*1,2 PhD, FFSc (RCPA)
1Haematology, Institute of Clinical Pathology and Medical Research (ICPMR), NSW Health Pathology, Westmead Hospital, NSW, Australia.
2Sydney Centres for Thrombosis and Hemostasis, Westmead Hospital
*Corresponding author
E-mail: Emmanuel.Favaloro@health.nsw.gov.au
Novel strategies for clinical coagulation diagnostics and therapy monitoring
Clinical coagulation assays are an important part of anticoagulation measurement and monitoring. Despite the rise of promising new technologies, traditional coagulation assays have remained largely unchanged for decades. Here we discuss the application of microfluidics and nanotechnology to clinical coagulation diagnostics and anticoagulation therapy monitoring.
by Dr Francesco Padovani and Prof. Martin Hegner
Introduction
Fast, accurate and reliable determination of multiple coagulation parameters is crucial for a correct diagnosis of blood coagulation disorders. The two most common coagulation assays performed regularly in hospital environments are prothrombin time (PT) and activated partial thromboplastin time (aPTT). These two assays measure the time required for the onset of fibrinogen proteolysis, which is followed by the formation of a fibrin network [1]. The measurement is usually performed by detecting increased impedance or turbidity. Upon determination of an abnormal coagulation time, further testing is required (e.g. one-stage clotting assays or chromogenic substrate assays). Despite their extreme usefulness, these assays are not factor specific and are sensitive only when factor activity is below 50 %. Additionally, fibrinolysis, crosslinking, clot strength and initial blood plasma viscosity (important mechanical parameters related to coagulation) are not measured, and the assays do not evaluate or monitor acute bleeding or thrombosis risk. These drawbacks demand the development and standardization of novel strategies that can improve the clinical diagnosis process. Global hemostasis assays such as thromboelastography (TEG), thrombin generation, and overall hemostasis potential are promising technologies that, despite being around for decades, are not routinely used by hematologists. These assays are based on bench-top devices and require dedicated clinical laboratories and qualified personnel. Novel strategies based on microfluidics and nanotechnology may enable point-of-care testing (with potential for self-testing), self-monitoring and a great reduction in the sample volume needed [2].
Anticoagulation monitoring and measurement
Accurate, reliable and frequent measurement and monitoring of anticoagulant therapies such as warfarin or heparin is vital to their effectiveness. When control is poor, patients experience more complications, such as joint pain, bleeding and strokes [3]. The gold standards for assessing the level of anticoagulation control are the percentage of time in therapeutic range (TTR) and the international normalized ratio (INR); both rely on standardization of the patient’s PT against an international standard. TTR is usually calculated with the method of Rosendaal, which employs linear interpolation to assign an INR value to each day between successive observed INR values [4]. Patients undergoing anticoagulation therapy therefore have to assess their coagulation parameters frequently. Systematic reviews have shown that self-testing and self-management are an effective and safe intervention [5]. Self-testing devices should be simple to use, provide fast and analytically accurate results, require a minimal amount of sample and, ideally, be portable.
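The Rosendaal interpolation described above can be sketched directly; the measurement days, INR values and 2.0–3.0 target range below are illustrative assumptions:

```python
def rosendaal_ttr(days, inrs, low=2.0, high=3.0):
    """Percent time in therapeutic range (TTR) by the Rosendaal method:
    linearly interpolate an INR value for every day between successive
    measurements, then count the fraction of days inside [low, high].
    `days` are measurement days (ascending integers), `inrs` the
    observed INR values on those days."""
    in_range = total = 0
    for (d0, i0), (d1, i1) in zip(zip(days, inrs), zip(days[1:], inrs[1:])):
        for d in range(d0, d1):                # each day in the interval
            frac = (d - d0) / (d1 - d0)
            inr = i0 + frac * (i1 - i0)        # interpolated daily INR
            total += 1
            if low <= inr <= high:
                in_range += 1
    return 100.0 * in_range / total

# Illustrative INR measurements on days 0, 10, 20 and 30
ttr = rosendaal_ttr([0, 10, 20, 30], [1.5, 2.5, 3.5, 2.5])
```

For these hypothetical measurements, 16 of 30 interpolated days fall within the 2.0–3.0 range, giving a TTR of about 53 %.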
Novel strategies exploiting microfluidics and nanotechnology
Novel approaches employing microfluidics and nanotechnology have been developed in recent years. The main advantages of these techniques are high sensitivity and great potential for miniaturization and point-of-care testing. Some studies proposed the use of the quartz crystal microbalance (QCM) to measure the viscoelastic properties of blood plasma clot formation [6–9]. A QCM consists of a quartz crystal resonator whose resonant frequency depends on the mass adsorbed onto the sensor and on the viscoelastic properties of the surrounding fluid. These studies showed performance superior to conventional TEG and required relatively small sample volumes. However, deconvolution of nonspecific protein adsorption from the liquid’s viscoelastic properties is very complex, hindering the ability to accurately measure clot strength development during coagulation. Other studies employed surface plasmon resonance (SPR) detection, a popular technology in the field of biomarker detection. A polarized light beam hits a glass/liquid interface, generating an electromagnetic field that extends beyond the glass; if a thin metal film is applied between the glass and the liquid, surface plasmons are excited. The reflected light is collected by a sensor, and upon receptor/target recognition the reflectivity curve shifts [10]. Extrapolation of viscoelastic parameters is not feasible, and to the best of our knowledge only PT has been measured with this technology [11]. Our laboratory has exploited nanomechanical resonators to quantify coagulation parameters. The resonators are arrays of microcantilevers (beams clamped at one end) that oscillate at high speed. When they are immersed in a fluid, its viscosity and density can be measured in real time by tracking the quality factor and resonant frequency of the oscillation [12].
By combining microfluidic technology (ensuring uniform mixing of coagulation reagents) with a high degree of automation and accurate extraction of results, nanoresonators have demonstrated a strong ability to measure clinically relevant coagulation parameters [13]. Along with PT and aPTT, other parameters are measured within the same test run, such as initial plasma viscosity, clot strength (final viscosity), and initial and final coagulation rates. For example, patients with severe hemophilia showed low initial plasma viscosity, low clot strength (bleeding) and low coagulation rates. Mixing hemophilia patients’ plasma with 30 % normal control improved the coagulation rates and clot strength, but did not completely restore them, indicating the degree of severity (Fig. 1). To detect deficiencies of specific factors, an immunoassay can be integrated in situ, allowing diagnosis of factor deficiency within a single test run. Furthermore, the diagnostic array can be reused repeatedly after regeneration in a cleaning solution [13]. The same microcantilever technology was applied to measure fibrinolysis in real time; it is well known that impaired function of the fibrinolytic system increases the risk of thrombosis [14]. By pre-mixing a patient’s blood plasma with tissue plasminogen activator and performing a PT (or aPTT) assay, the PT (or aPTT) and the subsequent induced fibrinolysis can be measured. Parameters such as starting clot strength, final dissolved clot strength and 50 % lysis time (Fig. 2) provide useful information for assessing the patient’s thrombotic risk. Finally, anticoagulation treatment was assessed with low and high concentrations of heparin mixed with normal control plasma (Fig. 3). Potentially, a patient on anticoagulation treatment could self-monitor their status and self-manage their therapy according to the results.
For example, the final clot strength could indicate bleeding risk, and the therapy could be adjusted to suit the needs of the specific patient (personalized medicine). All these measurements were performed with a low sample volume (<20 µL) and a high degree of automation (reducing operator intervention and complexity).
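As an illustrative sketch (not the authors' actual analysis code), parameters of the kind described above can be extracted from a clot-strength (viscosity) time series; the synthetic curve below is an assumption:

```python
import numpy as np

def lysis_parameters(t, clot_strength):
    """From a clot-strength (viscosity) time series during tPA-assisted
    fibrinolysis, extract: peak clot strength, final dissolved strength,
    and the 50 % lysis time (time after the peak at which strength has
    fallen halfway from peak to final)."""
    t = np.asarray(t, dtype=float)
    c = np.asarray(clot_strength, dtype=float)
    i_peak = int(np.argmax(c))
    peak, final = c[i_peak], c[-1]
    half = final + 0.5 * (peak - final)
    tail_t, tail_c = t[i_peak:], c[i_peak:]
    # interpolate the time at which strength crosses the half level
    # (tail reversed so the strength axis is ascending for np.interp)
    t50 = float(np.interp(half, tail_c[::-1], tail_t[::-1]))
    return peak, final, t50

# Synthetic curve: strength rises after coagulation is triggered,
# then tPA activity lyses the clot back toward a low final value
t = np.arange(0, 40, 1.0)                              # minutes
c = np.where(t < 5, 1 + t, 6 * np.exp(-(t - 5) / 10))  # arbitrary units
peak, final, t50 = lysis_parameters(t, c)
```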
Summary
Anticoagulation measurement and monitoring employ assays that have gone largely unchanged for decades. New technologies such as microfluidics and nanotechnology carry great potential for integration with standard clinical assays, and global hemostasis assays could pave the way for improvement of current clinical coagulation diagnostics. Miniaturization, personalized medicine, point-of-care testing, automation, self-testing and self-monitoring are all promising approaches that could overcome the current drawbacks of gold-standard coagulation measurements. However, all of these strategies require further standardization and more clinical studies to assess and exploit their potential.
Figure 1. Representation of the suspended microresonators oscillating at high speeds (approx. 300 kHz) and microfluidics set-up. Clot strength (viscosity) curves over time for normal control samples, mild hemophilia and severe hemophilia patients’ plasma during activated partial thromboplastin time (aPTT) assays performed with nanoresonators. The array of sensors is first immersed in human blood plasma (green area) and then, at time 0 s, coagulation is triggered with the specific reagents (orange area). Final clot strength, coagulation rates and aPTT values are dependent on the degree of severity. (Padovani F, Duffy J, Hegner M. Nanomechanical clinical coagulation diagnostics and monitoring of therapies. Nanoscale 2017; 9(45): 17939–17947 [13] – Reproduced by permission of The Royal Society of Chemistry.)
Figure 2. Clot strength developing over time for tissue plasminogen activator (tPA) assisted fibrinolysis. Normal control plasma was mixed with a 350 ng/ml tPA solution. After the measurement of the plasma viscosity, the coagulation is triggered at time 0 s with PT reagents. As soon as the coagulation is triggered, the clot strength increases, but at the same time the activity of tPA starts to lyse the fibrin network. After approx. 32 min, the clot is completely dissolved and the final strength is lower than the starting plasma viscosity. This difference is due to the fibrin breakage into soft fibrin particles that have no viscosity. Some of the parameters that can be extracted are PT (see zoom plot), starting clot strength (C+B), final dissolved clot strength (C), and time (50 % Ly) required to reach half-clot strength (50 % B). (Padovani F, Duffy J, Hegner M. Nanomechanical clinical coagulation diagnostics and monitoring of therapies. Nanoscale 2017; 9(45): 17939–17947 [13] – Reproduced by permission of The Royal Society of Chemistry.)
Figure 3. Effects of heparin on the clot strength development during an aPTT test. After measurement of plasma viscosity, coagulation is triggered at time 0 s with aPTT reagents. Higher concentrations of heparin cause a more prolonged aPTT but the final clot strength is always in the normal range. (Padovani F, Duffy J, Hegner M. Nanomechanical clinical coagulation diagnostics and monitoring of therapies. Nanoscale 2017; 9(45): 17939–17947 [13] – Reproduced by permission of The Royal Society of Chemistry.)
References
1. McPherson RA, Pincus MR. Henry’s clinical diagnosis and management by laboratory methods, 23rd edn (E-book). Elsevier Health Sciences 2017.
2. Al-Samkari H, Croteau SE. Shifting landscape of hemophilia therapy: implications for current clinical laboratory coagulation assays. Am J Hematol 2018; 93(8): 1082–1090.
3. Connolly S, Pogue J, Eikelboom J, Flaker G, Commerford P, Franzosi MG, Healey JS, Yusuf S; ACTIVE W Investigators. Benefit of oral anticoagulant over antiplatelet therapy in AF depends on the quality of the INR control achieved as measured by time in therapeutic range. Circulation 2008; 118: 2029–2037.
4. Razouki Z, Burgess JF Jr, Ozonoff A, Zhao S, Berlowitz D, Rose AJ. Improving anticoagulation measurement: novel warfarin composite measure. Circ Cardiovasc Qual Outcomes 2015; 8(6): 600–607.
5. Heneghan C, Ward A, Perera R, Self-Monitoring Trialist Collaboration, Bankhead C, Fuller A, Stevens R, Bradford K, Tyndel S, Alonso-Coello P, et al. Self-monitoring of oral anticoagulation: systematic review and meta-analysis of individual patient data. Lancet 2012; 379(9813): 322–334.
6. Lakshmanan RS, Efremov V, O’Donnell JS, Killard AJ. Measurement of the viscoelastic properties of blood plasma clot formation in response to tissue factor concentration-dependent activation. Anal Bioanal Chem 2016; 408(24): 6581–6588.
7. Lakshmanan RS, Efremov V, Cullen S, Byrne B, Killard AJ. Monitoring the effects of fibrinogen concentration on blood coagulation using quartz crystal microbalance (QCM) and its comparison with thromboelastography. SPIE Microtechnologies 2013, Genoble, France. Conference paper in Proc SPIE 8765, Bio-MEMS and Medical Microdevices 2013.
8. Bandey HL, Cernosek RW, Lee WE 3rd, Ondrovic LE. Blood rheological characterization using the thickness-shear mode resonator. Biosens Bioelectron 2004; 19(12): 1657–1665.
9. Hussain M. Prothrombin time (PT) for human plasma on QCM-D platform: a better alternative to ‘gold standard’. UK J Pharm Biosci 2015; 3(6): 1–8 (DOI: http://dx.doi.org/10.20510/ukjpb/3/i6/87830).
10. Hansson KM, Tengvall P, Lundström I, Rånby M, Lindahl TL. Surface plasmon resonance and free oscillation rheometry in combination: a useful approach for studies on haemostasis and interactions between whole blood and artificial surfaces. Biosens Bioelectron 2002; 17(9): 747–759.
11. Hansson KM, Vikinge TP, Rånby M, Tengvall P, Lundström I, Johansen K, Lindahl TL. Surface plasmon resonance (SPR) analysis of coagulation in whole blood with application in prothrombin time assay. Biosens Bioelectron 1999; 14(8–9): 671–682.
12. Padovani F, Duffy J, Hegner M. Microrheological coagulation assay exploiting micromechanical resonators. Anal Chem 2016; 89(1): 751–758.
13. Padovani F, Duffy J, Hegner M. Nanomechanical clinical coagulation diagnostics and monitoring of therapies. Nanoscale 2017; 9(45): 17939–17947.
14. Meltzer ME, Doggen CJ, de Groot PG, Rosendaal FR, Lisman T. The impact of the fibrinolytic system on the risk of venous and arterial thrombosis. Semin Thromb Hemost 2009; 35(05): 468–477.
The authors
Francesco Padovani PhD and Martin Hegner*PhD
Centre for Research on Adaptive Nanostructures and Nanodevices (CRANN), School of Physics, Trinity College Dublin, Dublin, Ireland
*Corresponding author
E-mail: hegnerm@tcd.ie
Aqueous humour biomarkers for retinoblastoma, a pediatric ocular malignancy
For decades, attempts to biopsy or obtain fluid from eyes with retinoblastoma were contraindicated; however, recent changes in the management of retinoblastoma have allowed safe sampling of the aqueous humour (AH) during therapy. Use of the AH as a liquid biopsy enables tumour biomarker analysis in these eyes; this has the potential to dramatically alter the management of this pediatric cancer.
by Dr Benjamin K. Ghiam, Dr Liya Xu and Dr Jesse L. Berry
Introduction
Retinoblastoma (Rb) is the most common intraocular cancer in children, comprising 4 % of all pediatric malignancies [1, 2]. This potentially fatal malignancy often goes undiagnosed until the tumour is advanced and has damaged intraocular structures. Survival rates for Rb exceed 90 % in developed countries, though a critical, and often challenging, focus of Rb therapy is globe and vision preservation [3]. Throughout decades of ocular medicine and surgery, any attempt to biopsy these tumours, or even to obtain fluid from Rb eyes, was strictly contraindicated because of the risk of tumour seeding and dissemination. Thus, much of the diagnosis and management of Rb depends on information gathered by the ophthalmologist through careful eye examination, without histopathologic evidence.
In 2012, Munier et al. described a safety-enhanced protocol for intravitreal chemotherapy injections in the eyes of patients with Rb; this protocol requires an initial paracentesis [4]. As described by the authors of the study, a volume of 0.1 ml of aqueous fluid is aspirated to induce transient hypotony before the intravitreal injection as a safety measure to prevent reflux to the injection site. This protocol for intravitreal injection of chemotherapy has now been widely adopted worldwide and the risk of extraocular spread is considered extremely low (zero reported cases with the safety-enhanced procedure) [5]. This demonstrated safety record paved the way for aqueous humour (AH) extraction in eyes with Rb undergoing active therapy.
AH is the clear intraocular fluid produced by the ciliary processes that fills the front part of the eye (anterior chamber). The AH functions to maintain intraocular pressure, provide nutrients to the cornea, and remove waste products. It has also been shown to be a rich source of information for intraocular disease, including Rb [6]. Researchers have long sought to evaluate AH for the presence of biomarkers which may correlate with features of intraocular disease and provide diagnostic and prognostic value. However, before 2017, any evaluation of the AH was only done on eyes after enucleation. Now that the AH can be safely extracted during therapy, we hypothesized that previous evaluations of AH biomarkers (post-enucleation) may now be clinically applicable for the diagnosis, prognosis and/or management of Rb. This article excerpts our recently published systematic review, titled “Aqueous Humor Biomarkers for Retinoblastoma, a review” in the journal Translational Vision Science and Technology [7].
Lactate dehydrogenase
Lactate dehydrogenase (LDH) is an enzyme found in nearly all cells that acts as a regulator of metabolism; it has been used clinically as a non-specific marker found within body fluids in various pathological conditions, including malignant tumours.
In the early 1970s, Dias et al. examined LDH levels in the AH from enucleated Rb eyes [8]. Early reports demonstrated a significant increase in LDH levels within the AH of enucleated eyes with Rb compared to patients without Rb, such that levels >1000 U/L strongly support the diagnosis of Rb (Table 1). Multiple studies of LDH levels in the AH from enucleated eyes, conducted between 1971 and 2008, found that LDH levels were significantly elevated compared to controls, and more elevated in advanced eyes with delayed diagnosis; however, these levels did not correlate with other clinical features or outcomes. Elevated AH LDH has also been described in patients with other ocular conditions, including primary open-angle glaucoma and Coats’ disease. Although LDH was the first described marker of tumour activity in the AH, its lack of specificity and of correlation with patient or tumour features limits its clinical use, and owing to this lack of correlation the research was previously abandoned.
Enolase/neuron-specific enolase
Neuron-specific enolase (NSE) is an isoenzyme of the glycolytic enzyme enolase; it is highly specific for neurons and peripheral neuroendocrine cells. Increased body fluid levels of NSE occur with malignant proliferation and thus have been of value in the diagnosis and characterization of neuroendocrine tumours, including small cell lung cancer and retinoblastoma [9].
Evaluation of the isoenzyme patterns of enolase in the AH of enucleated Rb eyes demonstrated that NSE levels were elevated in the AH of Rb eyes, whereas enolase was not detectable in the AH from controls (Table 1) [10–12]. Elevated NSE levels correlated significantly with inflammation and tumour invasion into the anterior chamber [13]. NSE levels did not correlate with histological tumour parameters (tumour necrosis, calcification, optic nerve/choroidal invasion) or with clinicopathological parameters (sex, enucleation age, presentation age, family history, previous treatment, and metastatic disease). Moreover, NSE levels were found to be within the control range in children more than 5 years after active therapy [14], suggesting that NSE may be used clinically to indicate remission status. Although obtaining serial AH NSE measurements may have a significant role in determining tumour status in Rb patients in the future, additional evidence is required to substantiate the clinical use of this tumour marker.
Survivin and transforming growth factor beta-1
Survivin is a protein that inhibits apoptosis. Elevated survivin levels are found in many human neoplasms, including Rb, and survivin has garnered significant interest as a diagnostic marker and is used as a prognostic factor in several cancers, including lung and colorectal cancers [15, 16].
Survivin expression in the AH from enucleated eyes of children with Rb was found to be significantly elevated compared to patients with non-malignant ophthalmic disease, such as congenital cataracts and glaucoma [17, 18]. AH survivin levels correlated with tumour stage and with histopathological post-laminar optic nerve involvement.
Transforming growth factor beta-1 (TGF-β1) expression in the AH of enucleated Rb eyes was associated with poor differentiation of the tumour [17]. The authors demonstrated high sensitivity and specificity of these AH proteins which makes them promising markers for Rb, particularly of more aggressive pathologic features.
Uric acid and xanthine
During cell turnover, nucleic acids and nucleotides are degraded into xanthine and uric acid. Elevated serum uric acid levels have been associated with many malignancies, as well as with rapid cell destruction, such as that following treatment with chemotherapy or radiation.
Elevated concentrations of uric acid and xanthines were found in the AH of children with Rb compared with control eyes (Table 1) [19]. Elevated levels of xanthine and uric acid in the AH may support the diagnosis of Rb in children suspected of having the disease; however, further studies are necessary to establish optimal cut-offs, explore clinicopathological correlations, and compare levels in Rb to those in lesions simulating Rb (Coats’ disease and persistent fetal vasculature).
Protein content
Normally, the AH is virtually protein-free to ensure a clear optical medium between the cornea and the lens. An increase in globulin content and an albumin/globulin ratio <1 have been found in enucleated eyes with Rb [20]. Concentrations of interleukin (IL)-6, IL-7, IL-8, interferon gamma (IFN-γ), placental growth factor 1 (PlGF-1), vascular endothelial growth factor A (VEGF-A), beta-nerve growth factor (β-NGF), hepatocyte growth factor (HGF), epidermal growth factor (EGF) and fibroblast growth factor 2 (FGF-2) were significantly higher in the AH of patients with Rb than in the control group [21]. Additionally, significantly decreased protein concentration was demonstrated in Rb eyes that were treated with selective intra-arterial chemotherapy (melphalan injection into the ophthalmic artery) and subsequently enucleated after attempts at salvage, compared to primarily enucleated eyes [22].
Nucleic acids
Recent studies from Berry et al. demonstrated the presence of tumour-derived nucleic acids (DNA, RNA, miRNA) in the AH of Rb eyes [23]. Because of this, the authors suggest that the AH may be a rich source of tumour DNA and, thus, could be used as a liquid biopsy in children with Rb, without the need for enucleation. A subsequent analysis by Berry et al. in 2018 showed that evaluation of cell-free DNA (cfDNA) in the AH for chromosomal alterations has potential prognostic value as an indicator of aggressive disease [24]. Specifically, the odds of an eye failing therapy and requiring enucleation owing to persistent and/or progressive cancer activity were significantly increased if gain of chromosome 6p was found in the AH cfDNA. Further research is required before this can be applied clinically; however, it holds potential as a prognostic biomarker for Rb.
Conclusion
Despite significant investigation into tumour biomarkers for Rb spanning more than four decades, there are currently no active uses for the AH in a clinical setting. Diagnosis is made on the basis of examination and ancillary imaging findings without a biopsy, and molecular tumour markers are presently not used for diagnosis, prognosis, or to monitor therapeutic response. This is due in large part to the contraindication to biopsy in Rb; previously, neither tumour tissue nor AH or other ocular fluids could be evaluated outside of specimens from enucleated eyes, which clearly limited the ability to correlate these markers with meaningful clinical outcomes. However, with recent advances in local therapy for Rb, paracentesis with extraction of the AH has now been shown to be safe in eyes undergoing active treatment. This opens the door for an AH liquid biopsy, and thus there is renewed interest in these potential disease biomarkers.
References
1. Shields, JA. Management and prognosis of retinoblastoma. In: Intraocular tumors: a text and atlas, pp377–391. WB Saunders 1992. ISBN 978-0721642680.
2. Shields JA, Shields CL. Intraocular tumors: an atlas and textbook, p574. Lippincott Williams & Wilkins 2008. ASIN B00XWR8WM6.
3. Pavan-Langston D. Manual of ocular diagnosis and therapy, p533. Lippincott Williams & Wilkins 2008. ISBN 978-0781765121.
4. Munier FL, Soliman S, Moulin AP, et al. Profiling safety of intravitreal injections for retinoblastoma using an anti-reflux procedure and sterilisation of the needle track. Br J Ophthalmol 2012; 96(8): 1084–1087.
5. Smith SJ, Smith BD, Mohney BG. Ocular side effects following intravitreal injection therapy for retinoblastoma: a systematic review. Br J Ophthalmol 2013; 98(3): 292–297.
6. Macknight AD, McLaughlin CW, Peart D, et al. Formation of the aqueous humor. Clin Exp Pharmacol Physiol 2000; 27(1-2): 100–106.
7. Ghiam BK, Xu L, Berry JL. Aqueous humor markers in retinoblastoma, a review. Transl Vis Sci Technol 2019; 8(2): 13.
8. Dias PL, Shanmuganathan SS, Rajaratnam M. Lactic dehydrogenase activity of aqueous humour in retinoblastoma. Br J Ophthalmol 1971; 55(2): 130–132.
9. Kivelä T. Neuron-specific enolase in retinoblastoma. Acta Ophthalmol 2009; 64(1): 19–25.
10. Wu Z, Yang H, Pan S, et al. Electrophoretic determination of aqueous and serum neuron-specific enolase in the diagnosis of retinoblastoma. Yan Ke Xue Bao 1997; 13(1): 12–16.
11. Shine BS, Hungerford J, Vaghela B, et al. Electrophoretic assessment of aqueous and serum neurone-specific enolase in retinoblastoma and ocular malignant melanoma. Br J Ophthalmol 1990; 74(7): 427–430.
12. Nakajima T, Kato K, Kaneko A, et al. High concentrations of enolase, alpha- and gamma-subunits, in the aqueous humor in cases of retinoblastoma. Am J Ophthalmol 1986; 101(1): 102–106.
13. Abramson DH, Greenfield DS, Ellsworth RM, et al. Neuron-specific enolase and retinoblastoma. Clinicopathologic correlations. Retina 1989; 9(2): 148–152.
14. Comoy E, Roussat B, Henry I, et al. Neuron-specific enolase in the aqueous humor. Its significance in the differential diagnosis of retinoblastoma. Ophtalmologie 1990; 4(3): 233–235 [in French].
15. Andersen MH, Svane IM, Becker JC, et al. The universal character of the tumor-associated antigen survivin. Clin Cancer Res 2007; 13(20): 5991–5994.
16. Rohayem J, Diestelkoetter P, Weigle B, et al. Antibody response to the tumor-associated inhibitor of apoptosis protein survivin in cancer patients. Cancer Res 2000; 60(7): 1815–1817.
17. Shehata HH, Abou Ghalia AH, Elsayed EK, et al. Clinical significance of high levels of survivin and transforming growth factor beta-1 proteins in aqueous humor and serum of retinoblastoma patients. J AAPOS 2016; 20(5): 444.e1–444.e9.
18. Shehata HH, Abou Ghalia AH, Elsayed EK, et al. Detection of survivin protein in aqueous humor and serum of retinoblastoma patients and its clinical significance. Clin Biochem 2010; 43(4-5): 362–366.
19. Mendelsohn ME, Abramson DH, Senft S, et al. Uric acid in the aqueous humor and tears of retinoblastoma patients. J AAPOS 1998; 2(6): 369–371.
20. Dias PL. Postinflammatory and malignant protein patterns in aqueous humour. Br J Ophthalmol 1979; 63(3): 161–164.
21. Cheng Y, Zheng S, Pan C-T, et al. Analysis of aqueous humor concentrations of cytokines in retinoblastoma. PLoS One 2017; 12(5): e0177337.
22. Hadjistilianou T, Giglioni S, Micheli L, et al. Analysis of aqueous humour proteins in patients with retinoblastoma. Clin Experiment Ophthalmol 2012; 40(1): e8–15.
23. Berry JL, Xu L, Murphree AL, et al. Potential of aqueous humor as a surrogate tumor biopsy for retinoblastoma. JAMA Ophthalmol 2017; 135(11): 1221–1230.
24. Berry JL, Xu L, Kooi I, et al. Genomic analysis of aqueous humor cell-free DNA in retinoblastoma predicts eye salvage: the surrogate tumor biopsy for retinoblastoma. Mol Cancer Res 2018; 16: 1701–1712.
25. Kabak J, Romano PE. Aqueous humour lactic dehydrogenase isoenzymes in retinoblastoma. Br J Ophthalmol 1975; 59(5): 268–269.
26. Piro PA Jr, Abramson DH, Ellsworth RM, et al. Aqueous humor lactate dehydrogenase in retinoblastoma patients. Clinicopathologic correlations. Arch Ophthalmol 1978; 96(10): 1823–1825.
27. Abramson DH, Piro PA, Ellsworth RM, et al. Lactate dehydrogenase levels and isozyme patterns. Measurements in the aqueous humor and serum of retinoblastoma patients. Arch Ophthalmol 1979; 97(5): 870–871.
28. Dias PL. Correlation of aqueous humour lactic acid dehydrogenase activity with intraocular pathology. Br J Ophthalmol 1979; 63(8): 574–577.
29. Dias PL. Prognostic significance of aqueous humour lactic dehydrogenase activity. Br J Ophthalmol 1979; 63(8): 571–573.
30. Das A, Roy IS, Maitra TK. Lactate dehydrogenase level and protein pattern in the aqueous humour of patients with retinoblastoma. Can J Ophthalmol 1983; 18(7): 337–339.
31. Dias PL. Electrolyte imbalances in the aqueous humour in retinoblastoma. Br J Ophthalmol 1985; 69(6): 462–463.
32. Dayal Y, Goyal JL, Jaffery NF, et al. Lactate dehydrogenase levels in aqueous humor and serum in retinoblastoma. Jpn J Ophthalmol 1985; 29(4): 417–422.
33. Singh R, Kaurya OP, Shukla PK, et al. Lactate dehydrogenase (LDH) isoenzymes patterns in ocular tumours. Indian J Ophthalmol 1991; 39(2): 44–47.
34. Mukhopadhyay S, Ghosh S, Biswas PN, et al. A cross-sectional study on aqueous humour lactate dehydrogenase level in retinoblastoma. J Indian Med Assoc 2008; 106(2): 99–100.
35. Cheng Y, Meng Q, Huang L, et al. iTRAQ-based quantitative proteomic analysis and bioinformatics study of proteins in retinoblastoma. Oncol Lett 2017; 14(6): 8084–8091.
36. Cheng Y, Zheng S, Pan C-T, et al. Analysis of aqueous humor concentrations of cytokines in retinoblastoma. PLoS One 2017; 12(5): e0177337.
The authors
Benjamin K. Ghiam1 MD, Liya Xu2 PhD, Jesse L. Berry3,4 MD*
1Oakland University, William Beaumont School of Medicine, Rochester, MI, USA
2Department of Biological Sciences, Dornsife College of Letters, Arts, and Sciences, University of Southern California, Los Angeles, CA, USA
3The Vision Center at Children’s Hospital Los Angeles, Los Angeles, CA, USA
4USC Roski Eye Institute, Keck School of Medicine of USC, University of Southern California (USC), Los Angeles, CA, USA
*Corresponding author
E-mail: Jesse.Berry@med.usc.edu