Using whole-genome sequencing to predict antimicrobial resistance
Thousands of strain-specific whole-genome sequences are now available for a wide range of pathogenic bacteria. Using these data, approaches based on machine learning can now be used to predict the results of antimicrobial susceptibility tests from sequence alone. Recent studies have demonstrated the ability to predict minimum inhibitory concentrations with accuracies up to 95 %. Employing these tools to prioritize antibiotic treatment could improve patient outcomes and help to avoid the antibiotic resistance crisis.
by Dr Jonathan M. Monk
Importance of antimicrobial resistance (AMR) prediction
Today over 700 000 people die of antibiotic-resistant infections per year [1]. Frighteningly, it has been estimated that this number could rise to 10 million deaths per year if nothing is done to stop the increase and spread of antibiotic-resistant bacteria [2]. To help combat this threat it is critical to limit the use of ineffective antibiotics and to prescribe the appropriate antimicrobial therapy to patients as quickly as possible. Although antimicrobial susceptibility testing is now routine in microbiology laboratories, this testing often takes too long to influence initial treatment decisions.
New tools that rapidly predict antibiotic resistance could improve antibiotic stewardship; stewardship programmes, when effectively implemented, have already led to reductions in levels of resistant bacteria in hospitals [3]. Accurately diagnosing antibiotic-resistant bacteria would reduce the evolutionary pressures that accelerate resistance and would aid antibiotic stewardship approaches. This could enable physicians to select the optimal antibiotic regimen to cure a patient, rather than one that further selects for resistance in a given strain. Whole-genome sequencing may offer this possibility.
The genomics revolution has made available thousands of strain-specific whole-genome sequences (WGS) for a range of pathogenic bacteria. For example, the Pathosystems Resource Integration Center (PATRIC) [the all-bacterial Bioinformatics Resource Center (BRC) funded by the National Institute of Allergy and Infectious Diseases (NIAID)] currently contains over 15 000 Escherichia genomes, more than 14 000 Staphylococcus genomes and nearly 11 000 Mycobacterium genomes [4]. Increasingly, these genomes are coupled with clinical metadata, including minimum inhibitory concentration (MIC) values for various antibiotics.
This large-scale coupling of resistance data with strain-specific genome sequences enables machine learning and other big-data approaches to study and predict antibiotic resistance. For example, it is now possible to perform case-control studies in which a group of strains that exhibit a biological phenotype (e.g. antibiotic resistance) is compared with a group of strains that do not. Machine learning techniques can then be used to identify biomarkers (e.g. presence/absence of genes or mutations) that are predictive of a given phenotype, and these biomarkers can serve as the basis for diagnostic tests.
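To make the case-control idea concrete, the short sketch below tests a single candidate biomarker for association with resistance using Fisher's exact test. It is only an illustration: the file names, gene column and table layout are hypothetical and are not taken from any of the studies cited here.

```python
# Minimal sketch of a case-control association test for one candidate biomarker.
# All inputs are hypothetical: a strain x gene presence/absence table (0/1) and a
# binary resistance label per strain; file and column names are placeholders.
import pandas as pd
from scipy.stats import fisher_exact

genes = pd.read_csv("gene_presence_absence.csv", index_col=0)      # strains x genes
labels = pd.read_csv("phenotypes.csv", index_col=0)["resistant"]   # 1 = resistant, 0 = susceptible

def test_gene(gene_name):
    """Build a 2x2 table of gene presence vs resistance and test for association."""
    present = genes[gene_name] == 1
    resistant = labels == 1
    table = [
        [int((present & resistant).sum()), int((present & ~resistant).sum())],
        [int((~present & resistant).sum()), int((~present & ~resistant).sum())],
    ]
    odds_ratio, p_value = fisher_exact(table)
    return odds_ratio, p_value

print(test_gene("blaCTX-M-15"))  # hypothetical beta-lactamase gene column
```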
Here we discuss recent literature using machine learning approaches to predict antibiotic resistance and highlight considerations required for their application.
Introduction to machine learning approaches for WGS-based prediction of AMR
Setting up a machine learning problem involves breaking data into two groups (Fig. 1a):
(1) The y-array containing the genomes or samples (m rows) matched with the phenotype to be predicted. In the case of AMR prediction, the phenotype could be binary, e.g. ‘resistant’ versus ‘susceptible’, or the actual experimentally measured MIC. Predicting MICs is often preferable owing to the changing breakpoints used to define resistance. For example, a given strain with a MIC of 8 µg/mL gentamicin may have previously been classified as resistant, but new CLSI 2017 guidelines specify that gentamicin resistance requires a MIC above 16 µg/mL. This can lead to inconsistent AMR annotations that confound binary predictions.
(2) The X-matrix containing the samples (m rows) and their associated features (n columns) that will be used to make a prediction. Features range from those that are completely knowledge-based, such as the presence of genes known to confer antibiotic resistance (e.g. a beta-lactamase), to those that require no previous knowledge, such as the presence of short (~10 bp) segments of DNA on the chromosome (Fig. 1b). Each feature type has its own benefits and drawbacks, and each has been used in the recent studies described below; a minimal example of this layout is sketched after this list.
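The minimal layout sketch mentioned above is shown here. The strain names, feature columns and MIC values are hypothetical placeholders, intended only to illustrate how the m × n X-matrix and the y-array line up.

```python
# Minimal sketch of the X-matrix / y-array layout: m genomes (rows) by n features
# (columns), paired with a phenotype to predict. All values are hypothetical.
import pandas as pd

X = pd.DataFrame(
    {"blaTEM-1": [1, 0, 1], "gyrA_S83L": [0, 1, 1], "aac(3)-IV": [0, 0, 1]},
    index=["strain_A", "strain_B", "strain_C"],
)

# The y-array can hold a binary call or the measured MIC (here, gentamicin in µg/mL)
y_binary = pd.Series(["resistant", "susceptible", "resistant"], index=X.index)
y_mic = pd.Series([32.0, 0.5, 16.0], index=X.index, name="gentamicin_MIC")

print(X.shape, y_mic.shape)  # (3, 3) (3,)
```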
Selecting appropriate features for AMR prediction
Knowledge-based features
Knowledge-based features can be obtained by mapping a genome of interest against curated databases of gene products that have already been demonstrated to confer antibiotic resistance. As of February 2019, the Comprehensive Antibiotic Resistance Database (CARD) houses 2553 reference sequences and 1216 SNPs demonstrated to confer resistance in 79 different pathogens [5]. These approaches are akin to laboratory tools that offer PCR-based identification of AMR determinants, such as BioFire. Models trained on previously annotated features often have good accuracies and are easier to interpret because of the accumulated knowledge behind each feature [6, 7].
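As an illustration, the sketch below builds a knowledge-based feature matrix by marking each genome for the presence of genes from a curated reference list. Both the reference list and the per-genome annotations are hypothetical stand-ins for what a CARD-style lookup would provide.

```python
# Minimal sketch of knowledge-based features: presence/absence of genes taken from a
# curated AMR reference list. Gene names and per-genome annotations are hypothetical;
# in practice they would come from aligning each genome against a database such as CARD.
import pandas as pd

known_amr_genes = ["blaTEM-1", "blaCTX-M-15", "aac(3)-IV", "tetA", "vanA"]

genome_annotations = {
    "strain_A": {"blaTEM-1", "tetA", "recA", "gyrA"},
    "strain_B": {"gyrA", "recA"},
    "strain_C": {"blaCTX-M-15", "aac(3)-IV", "gyrA"},
}

X_knowledge = pd.DataFrame(
    {gene: [int(gene in annotations) for annotations in genome_annotations.values()]
     for gene in known_amr_genes},
    index=list(genome_annotations.keys()),
)
print(X_knowledge)
```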
However, despite their high accuracy, these tools are limited because they often require many rounds of multiple sequence alignment, which can become computationally expensive at large scale. Also, reliance on known AMR determinants may cause such algorithms to miss newly evolved resistance mechanisms. A useful machine learning approach should be capable of analysing future outbreaks and identifying new mechanisms of resistance, rather than being limited to past knowledge.
Gene- and allele-based features
An approach that balances these two extremes involves assembling features by annotating the genome for known protein-coding genes while keeping the feature set agnostic to function, for example by including genes involved in cell replication, cell wall synthesis and metabolism alike. This approach has the advantage of not requiring known determinants of antimicrobial resistance, but it does still require annotated genomes, potentially biasing results towards the annotation methods used.
Recent studies have used this approach to predict antibiotic resistance in E. coli with accuracies above 90 % [8]. Importantly, this approach identified features that outperformed genes already established in the literature. Such an approach can go even deeper by breaking the genes down into their constituent alleles to account for potential mutations in each coding sequence. Another study took this approach to examine 1595 strains of M. tuberculosis and identified 33 known AMR-conferring genes as well as 24 potentially novel resistance-conferring genes [9]. Thus, methods that draw on many features extracted from the genome, rather than restricting them to those with previous knowledge, can accurately predict AMR and identify novel mechanisms of resistance, making them extensible to mutations and mechanisms that may emerge in the future.
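A rough sketch of such function-agnostic gene/allele features is given below. The coding sequences are hypothetical, and alleles are distinguished here simply by hashing each gene's sequence, which is one possible convention rather than the method used in the cited studies.

```python
# Minimal sketch of function-agnostic gene/allele features: every annotated coding
# sequence contributes a feature, and distinct sequences of the same gene are treated
# as separate alleles (labelled here by a short hash). All sequences are hypothetical.
import hashlib
import pandas as pd

annotated_genomes = {
    "strain_A": {"gyrA": "ATGAGCGACCTT", "acrB": "ATGCCTAATTGG"},
    "strain_B": {"gyrA": "ATGAGCGATCTT", "acrB": "ATGCCTAATTGG"},
}

def allele_id(gene, seq):
    """Label an allele by gene name plus a short hash of its sequence."""
    return f"{gene}_{hashlib.md5(seq.encode()).hexdigest()[:6]}"

rows = {
    genome: {allele_id(gene, seq): 1 for gene, seq in cds.items()}
    for genome, cds in annotated_genomes.items()
}
X_alleles = pd.DataFrame.from_dict(rows, orient="index").fillna(0).astype(int)
print(X_alleles)  # gyrA splits into two allele columns, acrB stays as one
```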
Kmer-based features
A contrasting approach that can identify new mechanisms of resistance and requires no a priori knowledge involves breaking up a genome into short (~10 bp) segments of DNA, known as ‘kmers’, and using their presence as features. All genomes in the collection are divided into kmers, which are then added to the X-matrix, where the presence of a specific kmer becomes a feature. This kmer-based approach thus contrasts with knowledge-based methods for AMR prediction that rely on a database of curated genes and mutations previously shown to confer antibiotic resistance.
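A toy sketch of this k-mer decomposition is shown below; the sequences are short hypothetical fragments, and real applications work with whole genomes and far larger k-mer sets.

```python
# Minimal sketch of k-mer features: slide a 10 bp window over each genome and record
# which k-mers are present. The sequences here are short hypothetical fragments.
import pandas as pd

def kmer_set(sequence, k=10):
    """Return the set of k-mers present in a DNA sequence."""
    return {sequence[i:i + k] for i in range(len(sequence) - k + 1)}

genomes = {
    "strain_A": "ATGCGTACGTTAGCATGCGT",
    "strain_B": "ATGCGTACGTAAGCATGCGA",
}

kmer_sets = {name: kmer_set(seq) for name, seq in genomes.items()}
all_kmers = sorted(set().union(*kmer_sets.values()))

X_kmers = pd.DataFrame(
    [[int(km in kmer_sets[name]) for km in all_kmers] for name in kmer_sets],
    index=list(kmer_sets.keys()),
    columns=all_kmers,
)
print(X_kmers.shape)  # (2, number of distinct 10-mers across both genomes)
```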
Studies of A. baumannii, S. aureus, S. pneumoniae, K. pneumoniae and collections of over 5000 Salmonella genomes have demonstrated the ability to predict MICs with average accuracies above 90 % within ±1 twofold dilution step [10–12]. Unfortunately, this high accuracy and the ability to detect new mechanisms of resistance come at the cost of interpretability. For example, a model may imply a strong relationship between predicted resistance and segments of the genome that have no annotated function or known link to biological processes.
Building and evaluating a machine learning model
Once the features for a model have been selected, it is time to apply a machine learning algorithm to the data. Several such algorithms exist, each with benefits and drawbacks related to accuracy and interpretability [13]. Unfortunately, the more accurate models are often difficult to interpret, whereas more intelligible models tend to have weaker predictive capabilities. In healthcare applications it is vital for the treating physician to be able to understand, validate and trust a model, so relying on easier-to-interpret methods such as decision trees or simple logistic regression may be best.
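To illustrate why such models are easy to inspect, the sketch below fits a logistic regression to a toy gene presence/absence matrix and prints each feature's coefficient; the data and feature names are hypothetical.

```python
# Minimal sketch of an interpretable classifier: logistic regression on a toy gene
# presence/absence X-matrix with binary resistance labels. All data are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

X = pd.DataFrame(
    [[1, 0, 0], [1, 1, 0], [0, 0, 1], [0, 1, 1], [1, 0, 1], [0, 0, 0]],
    columns=["blaTEM-1", "gyrA_S83L", "tetA"],
)
y = np.array([1, 1, 0, 0, 1, 0])  # 1 = resistant, 0 = susceptible

model = LogisticRegression(max_iter=1000).fit(X, y)

# Each coefficient shows how strongly a feature pushes the prediction towards
# resistance, which keeps the model inspectable by the treating physician.
for feature, coef in zip(X.columns, model.coef_[0]):
    print(f"{feature}: {coef:+.2f}")
```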
When evaluating a machine learning model it is imperative to question how the model was trained. A major pitfall for machine learning approaches is their tendency to ‘overfit’ datasets. For example, a model evaluated on the same data used to train it could simply ‘remember’ that data and correctly predict any point in the training set, yet perform poorly on new data because it is too closely tailored to what it has already seen. Robust machine learning workflows limit such overfitting by splitting the data into non-overlapping sets, where ~80 % of the data is used for training and ~20 % is used for testing (Fig. 2a). This splitting should be random and repeated several times to assess the overall accuracy and sensitivity of a model, thereby ensuring that predictions remain generalizable and robust.
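A sketch of this repeated, randomly shuffled 80/20 splitting is shown below using scikit-learn; the data are randomly generated stand-ins for a real genome-by-feature matrix.

```python
# Minimal sketch of repeated random ~80/20 train/test splits to check that accuracy
# holds on held-out genomes. The X-matrix and labels are random stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import ShuffleSplit, cross_val_score

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(100, 20))   # 100 hypothetical genomes x 20 binary features
y = rng.integers(0, 2, size=100)         # hypothetical resistant (1) / susceptible (0) labels

splitter = ShuffleSplit(n_splits=10, test_size=0.2, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=splitter)

print(f"accuracy: {scores.mean():.2f} +/- {scores.std():.2f} over {len(scores)} random splits")
```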
Once a model’s ability to predict new data is established, it is finally possible to evaluate its predictive performance (Fig. 2b). Thus far, we have described previous studies in terms of correct predictions and accuracies. However, it is often more important to evaluate the cases where a model fails, because requirements for AMR diagnostic devices are strict and such devices typically describe their utility in terms of error rates. Major errors (MEs) occur when susceptible genomes are incorrectly predicted to have resistant MICs. The opposite case, in which resistant genomes are incorrectly assigned susceptible MICs, is termed a very major error (VME). US Food and Drug Administration (FDA) standards for automated systems recommend an ME rate ≤3 %. A recent study of over 5000 Salmonella genomes used kmers to train a model whose MIC predictions for 15 antibiotics had ME rates in this range [10]. The FDA standards for VME rates indicate that the lower 95 % confidence limit should be ≤1.5 % and the upper limit ≤7.5 %. Models for seven of the 15 antibiotics in the same study had acceptable VME rates by this requirement. Thus, for a subset of antibiotics, such an approach already makes predictions acceptable for diagnostic applications.
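The error rates discussed above can be computed directly from paired predicted and measured calls, as in the hypothetical sketch below (the values are illustrative, not taken from the cited study).

```python
# Minimal sketch of major-error (ME) and very-major-error (VME) rates from hypothetical
# predicted vs measured categorical calls (R = resistant, S = susceptible).
import numpy as np

measured = np.array(["S", "S", "R", "R", "S", "R", "S", "S", "R", "S"])
predicted = np.array(["S", "R", "R", "S", "S", "R", "S", "S", "R", "S"])

# ME: susceptible isolates incorrectly called resistant (denominator = susceptible isolates)
me_rate = np.mean(predicted[measured == "S"] == "R")
# VME: resistant isolates incorrectly called susceptible (denominator = resistant isolates)
vme_rate = np.mean(predicted[measured == "R"] == "S")

print(f"ME rate: {me_rate:.1%}  VME rate: {vme_rate:.1%}")
```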
Summary and outlook
In summary, WGS-based prediction of antimicrobial susceptibility is becoming a reality. This brief overview has limited its scope to tools and methods that predict antibiotic susceptibility from WGS. In the future, however, it may be possible to combine genomic features with information about the patient, such as age, sex and comorbidities. Furthermore, rather than predicting only antibiotic susceptibility, it may be possible to train an algorithm to predict patient outcome and adjust treatment regimens to improve patient care [14].
Such approaches are sorely needed. Despite improvements in antibiotic use, the US Centers for Disease Control and Prevention (CDC) estimates that approximately 50 % of antibiotics are still prescribed unnecessarily in the US, at a yearly cost of $1.1 billion [15], and the annual impact of resistant infections in the US is estimated at $20 billion in excess healthcare costs and 8 million additional hospital days [16]. Significant improvements in patient outcome have been observed when the time to optimal antibiotic therapy is reduced [17, 18]. Rapid identification and targeted treatment of pathogenic bacteria using tools built on the algorithms presented here would enable precision medicine for pathogens, lowering the incidence of antibiotic resistance, improving patient health and decreasing hospital costs.
Figure 1. How to set up a whole-genome sequence (WGS)-based machine learning problem for antimicrobial resistance (AMR) prediction. (a) Samples (m=rows) with sequenced genomes and known phenotypes of interest [‘susceptible’ vs ‘resistant’ phenotypes or minimum inhibitory concentration (MIC) value] are used to train a machine learning model. All values to be predicted are placed into the ‘y’ array. The ‘features’ used to train a model form the columns of the X-matrix. (b) For WGS-based antimicrobial-susceptibility-test prediction, possible feature types include: (1) known antibiotic-resistance-conferring genes or mutations, (2) annotated protein-coding genes (independent of known functions) and even (3) the presence of short fragments of DNA sequence on the chromosome, known as ‘kmers’. These different feature types have a trade-off between ease of interpretation (easiest for previously identified features) and ability to detect novel AMR determinants (best for short sequence fragments).
Figure 2. Evaluating predictions from a machine learning model. (a) A machine learning model that is ‘overfit’ generalizes poorly to new data. To ensure a model is robust enough to predict new samples, all models should be cross-validated. This process involves randomly splitting the whole dataset into training (~80 % of samples) and testing (~20 %) sets. The sets should be shuffled multiple times to check model accuracy across different samples and features. (b) The results of running the model on the testing sets can then be compared for each randomly sampled set (different colored lines). Model performance is compared by calculating the area under the curve (AUC) on a plot of the true positive rate versus the false positive rate, often called a receiver operating characteristic (ROC) curve. Model accuracy can be calculated from the number of true positive (TP) [model prediction, resistant (R); experimental result, R] and true negative (TN) predictions divided by the total number of predictions. However, it is often more important to gauge how a model fails: for example, a false positive ‘major error’ [model prediction, R; experimental result, susceptible (S)] may lead to incorrectly withholding an effective antibiotic. Even worse, a false negative ‘very major error’ (model prediction, S; experimental result, R) could lead to prescribing an antibiotic that is ineffective.
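For reference, the accuracy and ROC/AUC quantities described in Figure 2 can be computed as in the hypothetical sketch below, where the predicted probabilities stand in for a model's output on a held-out test set.

```python
# Minimal sketch of the accuracy and ROC/AUC calculations described in Figure 2.
# The labels and predicted probabilities are hypothetical test-set values.
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score, roc_curve

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                    # 1 = resistant, 0 = susceptible
y_prob = np.array([0.9, 0.2, 0.7, 0.4, 0.1, 0.35, 0.8, 0.6])   # model probabilities of resistance

y_pred = (y_prob >= 0.5).astype(int)
print("accuracy:", accuracy_score(y_true, y_pred))   # (TP + TN) / total predictions
print("AUC:", roc_auc_score(y_true, y_prob))         # area under the ROC curve

fpr, tpr, thresholds = roc_curve(y_true, y_prob)     # points for plotting the ROC curve
```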
References
1. The dangers of hubris on human health. Global Risks – Reports. World Economic Forum 2013 (http://reports.weforum.org/global-risks-2013/risk-case-1/the-dangers-of-hubris-on-human-health/).
2. O’Neill J. Antimicrobial resistance: tackling a crisis for the health and wealth of nations. Rev Antimicrob Resist 2014; 20: 1–16.
3. Carling P, Fung T, Killion A, Terrin N, Barza M. Favorable impact of a multidisciplinary antibiotic management program conducted during 7 years. Infect Control Hosp Epidemiol 2003; 24(9): 699–706.
4. Wattam AR, Abraham D, Dalay O, Disz TL, Driscoll T, Gabbard JL, Gillespie JJ, Gough R, Hix D, Kenyon R, Machi D, Mao C, Nordberg EK, Olson R, Overbeek R, Pusch GD, Shukla M, Schulman J, Stevens RL, Sullivan DE, Vonstein V, Warren A, Will R, Wilson MJ, Yoo HS, Zhang C, Zhang Y, Sobral BW. PATRIC, the bacterial bioinformatics database and analysis resource. Nucleic Acids Res 2014; 42(Database issue): D581–591.
5. Jia B, Raphenya AR, Alcock B, Waglechner N, Guo P, Tsang KK, Lago BA, Dave BM, Pereira S, Sharma AN, Doshi S, Courtot M, Lo R, Williams LE, Frye JG, Elsayegh T, Sardar D, Westman EL, Pawlowski AC, Johnson TA, Brinkman FS, Wright GD, McArthur AG. CARD 2017: expansion and model-centric curation of the comprehensive antibiotic resistance database. Nucleic Acids Res 2017; 45(D1): D566–573.
6. Jeukens J, Freschi L, Kukavica-Ibrulj I, Emond-Rheault J-G, Tucker NP, Levesque RC. Genomics of antibiotic-resistance prediction in Pseudomonas aeruginosa. Ann N Y Acad Sci 2019; 1435(1): 5–17 (First published 2017 Online: https://nyaspubs.onlinelibrary.wiley.com/doi/abs/10.1111/nyas.13358).
7. Bradley P, Gordon NC, Walker TM, Dunn L, Heys S, Huang B, Earle S, Pankhurst LJ, Anson L, de Cesare M, Piazza P, Votintseva AA, Golubchik T, Wilson DJ, Wyllie DH, Diel R, Niemann S, Feuerriegel S, Kohl TA, Ismail N, Omar SV, Smith EG, Buck D, McVean G, et al. Rapid antibiotic-resistance predictions from genome sequence data for Staphylococcus aureus and Mycobacterium tuberculosis. Nat Commun 2015; 6: 10063.
8. Her H-L, Wu Y-W. A pan-genome-based machine learning approach for predicting antimicrobial resistance activities of the Escherichia coli strains. Bioinformatics 2018; 34(13): i89–i95.
9. Kavvas ES, Catoiu E, Mih N, Yurkovich JT, Seif Y, Dillon N, Heckmann D, Anand A, Yang L, Nizet V, Monk JM, Palsson BO. Machine learning and structural analysis of Mycobacterium tuberculosis pan-genome identifies genetic signatures of antibiotic resistance. Nat Commun 2018; 9(1): 4306.
10. Nguyen M, Long SW, McDermott PF, Olsen RJ, Olson R, Stevens RL, Tyson GH, Zhao S, Davis JJ. Using machine learning to predict antimicrobial MICs and associated genomic features for nontyphoidal Salmonella. J Clin Microbiol 2019; 57(2): pii: e01260-18 (http://dx.doi.org/10.1128/JCM.01260-18).
11. Davis JJ, Boisvert S, Brettin T, Kenyon RW, Mao C, Olson R, Overbeek R, Santerre J, Shukla M, Wattam AR, Will R, Xia F, Stevens R. Antimicrobial resistance prediction in PATRIC and RAST. Sci Rep 2016; 6: 27930.
12. Nguyen M, Brettin T, Long SW, Musser JM, Olsen RJ, Olson R, Shukla M, Stevens RL, Xia F, Yoo H, Davis JJ. Developing an in silico minimum inhibitory concentration panel test for Klebsiella pneumoniae. Sci Rep 2018; 8(1): 421.
13. Deo RC. Machine learning in medicine. Circulation 2015; 132(20): 1920–1930.
14. Kachroo P, Eraso JM, Beres SB, Olsen RJ, Zhu L, Nasser W, Bernard PE, Cantu CC, Saavedra MO, Arredondo MJ, Strope B, Do H, Kumaraswami M, Vuopio J, Gröndahl-Yli-Hannuksela K, Kristinsson KG, Gottfredsson M, Pesonen M, Pensar J, Davenport ER, Clark AG, Corander J, Caugant DA, Gaini S. Integrated analysis of population genomics, transcriptomics and virulence provides novel insights into Streptococcus pyogenes pathogenesis. Nat Genet 2019; 51(3): 548–559 (http://dx.doi.org/10.1038/s41588-018-0343-1).
15. Antibiotic resistance threats in the United States, 2013. US Centers for Disease Control and Prevention 2013 (https://www.cdc.gov/drugresistance/pdf/ar-threats-2013-508.pdf).
16. Fair RJ, Tor Y. Antibiotics and bacterial resistance in the 21st century. Perspect Medicin Chem 2014; 6: 25–64.
17. Kumar A, Roberts D, Wood KE, Light B, Parrillo JE, Sharma S, Suppes R, Feinstein D, Zanotti S, Taiberg L, Gurka D, Kumar A, Cheang M. Duration of hypotension before initiation of effective antimicrobial therapy is the critical determinant of survival in human septic shock. Crit Care Med 2006; 34(6): 1589–1596.
18. Palmer HR, Palavecino EL, Johnson JW, Ohl CA, Williamson JC. Clinical and microbiological implications of time-to-positivity of blood cultures in patients with Gram-negative bacilli bacteremia. Eur J Clin Microbiol Infect Dis 2013; 32(7): 955–959.
The author
Jonathan M. Monk PhD
Department of Bioengineering, UC San Diego, San Diego, California, USA
E-mail: jmonk@ucsd.edu
Molecular Diagnostics Literature Review
Genome-wide analysis of circulating cell-free DNA copy number detects active melanoma and predicts survival
Silva S, Danson S, Teare D, Taylor F, Bradford J, et al. Clin Chem 2018; 64(9): 1338–1346.
BACKGROUND: A substantial number of melanoma patients develop local or metastatic recurrence, and early detection of these is vital to maximize benefit from new therapies such as inhibitors of BRAF and MEK, or immune checkpoints. This study explored the use of novel DNA copy-number profiles in circulating cell-free DNA (cfDNA) as a potential biomarker of active disease and survival.
PATIENTS AND METHODS: Melanoma patients were recruited from oncology and dermatology clinics in Sheffield, UK, and cfDNA was isolated from stored blood plasma. Using low-coverage whole-genome sequencing, we created copy-number profiles from cfDNA from 83 melanoma patients, 44 of whom had active disease. We used scoring algorithms to summarize copy-number aberrations and investigated their utility in multivariable logistic and Cox regression analyses.
RESULTS: The copy-number aberration score (CNAS) was a good discriminator of active disease (odds ratio, 3.1; 95 % CI, 1.5–6.2; P=0.002), and CNAS above or below the 75th percentile remained a significant discriminator in multivariable analysis for active disease (P=0.019, with area under ROC curve of 0.90). Additionally, mortality was higher in those with CNASs above the 75th percentile than in those with lower scores (HR, 3.4; 95 % CI, 1.5–7.9; P=0.005), adjusting for stage of disease, disease status (active or resected), BRAF status, and cfDNA concentration.
CONCLUSIONS: This study demonstrates the potential of a de novo approach utilizing copy-number profiling of cfDNA as a biomarker of active disease and survival in melanoma. Longitudinal analysis of copy-number profiles as an early marker of relapsed disease is warranted.
Microbiological diagnostics of bloodstream infections in Europe – An ESGBIES survey
Idelevich EA, Seifert H, Sundqvist M, Scudeller L, Amit S, et al. Clin Microbiol Infect 2019; doi: 10.1016/j.cmi.2019.03.024 [Epub ahead of print].
OBJECTIVES: High-quality diagnosis of bloodstream infections (BSIs) is important for successful patient management. As knowledge on current practices of microbiological BSI diagnostics is limited, this project aimed to assess its current state in European microbiological laboratories.
METHODS: We performed an online questionnaire-based cross-sectional survey comprising 34 questions on practices of microbiological BSI diagnostics. The ESCMID Study Group for Bloodstream Infections, Endocarditis and Sepsis (ESGBIES) was the primary platform to engage national coordinators who recruited laboratories within their countries.
RESULTS: Responses were received from 209 laboratories in 25 European countries. While 32.5 % (68/209) of laboratories only used the classical processing of positive blood cultures (BCs), two-thirds applied rapid technologies. Of the laboratories that provided data for the respective question, 42.2 % (78/185) were able to start incubating blood cultures in automated BC incubators around the clock, and only 13 % (25/192) had established a 24-hour service to start immediate processing of positive BCs. Only 4.7 % (9/190) of laboratories validated and transmitted the results of identification and antimicrobial susceptibility testing (AST) of BC pathogens to clinicians 24 hours/day. MALDI-TOF MS from shortly incubated sub-cultures on solid media was the most commonly used approach to rapid pathogen identification from positive BCs, and direct disk diffusion was the most common rapid AST method from positive BCs.
CONCLUSIONS: Laboratories have started to implement novel technologies for rapid identification and AST for positive BCs. However, progress is severely compromised by limited operating hours such that current practice of BC diagnostics in Europe complies only partly with the requirements for optimal BSI management.
An integrated next-generation sequencing system for analyzing DNA mutations, gene fusions, and RNA expression in lung cancer
Haynes BC, Blidner RA, Cardwell RD, Zeigler R, Gokul S, et al. Transl Oncol 2019; 12(6): 836–845.
We developed and characterized a next-generation sequencing (NGS) technology for streamlined analysis of DNA and RNA using low-input, low-quality cancer specimens. A single-workflow, targeted NGS panel for non-small cell lung cancer (NSCLC) was designed covering 135 RNA and 55 DNA disease-relevant targets. This multiomic panel was used to assess 219 formalin-fixed paraffin-embedded NSCLC surgical resections and core needle biopsies. Mutations and expression phenotypes were identified consistent with previous large-scale genomic studies, including mutually exclusive DNA and RNA oncogenic driver events. Evaluation of a second cohort of low cell count fine-needle aspirate smears from the BATTLE-2 trial yielded 97 % agreement with an independent, validated NGS panel that was used with matched surgical specimens. Collectively, our data indicate that broad, clinically actionable insights that previously required independent assays, workflows, and analyses to assess both DNA and RNA can be conjoined in a first-tier, highly multiplexed NGS test, thereby providing faster, simpler, and more economical results.
Molecular diagnosis of asparagine synthetase (ASNS) deficiency in two Indian families and literature review of 29 ASNS deficient cases
Devi ARR, Naushad SM. Gene 2019; doi: 10.1016/j.gene.2019.04.024 [Epub ahead of print].
In the current study, we report three cases of asparagine synthetase (ASNS) deficiency from two consanguineous families. Family 1 had two early neonatal deaths due to a novel mutation in the ASNS gene, c.788C > T (p.S263F); both children presented with microcephaly and one of them had severe intracranial hemorrhage. The proband from the second family was homozygous for c.146G > A (p.R49Q) and manifested myoclonic seizures, developmental delay, coarse hair and diffuse cortical atrophy. Molecular docking studies of both mutations revealed alteration of the ligand binding site. To date, 26 mutations have been reported in the ASNS gene in 29 affected children, indicating a high degree of genetic heterogeneity and high mortality. Although asparagine depletion is not of diagnostic utility, a multiple linear regression model suggested that asparagine levels vary by as much as 20.6 % based on glutamine and aspartate levels, and that ASNS deficiency results in depletion of arginine synthesis. ASNS deficiency should be suspected in any neonate with microcephaly and epileptic encephalopathy.
Placental growth factor testing reduces time to diagnosis for women with suspected pre-eclampsia
Pre-eclampsia is a condition that affects approximately 2–8 % of pregnancies worldwide and, although the cause is not fully understood, is thought to be due to poor function of the placenta. Early signs that create suspicion of pre-eclampsia in the mother typically include hypertension, proteinuria and edema (particularly of the pitting type) of the ankles. Symptoms of more severe pre-eclampsia can include pulmonary edema, headaches, visual disturbance, epigastric/right upper quadrant abdominal pain and vomiting, before the development of seizures (eclampsia). Effects on the fetus include fetal growth restriction.
Left untreated, pre-eclampsia is associated with a high risk of adverse outcome for both the mother and fetus. The only treatment is delivery of the baby and the placenta. Diagnosis of pre-eclampsia is challenging because of the vagueness of the symptoms, but it becomes suspected with new-onset hypertension after 20 weeks’ gestation. The management of women presenting with pre-eclampsia from 37 weeks of gestation is through planned delivery. However, the management of patients with suspected pre-eclampsia earlier in pregnancy involves careful surveillance, and therefore increased use of health resources, balancing the risk to maternal health against the risk of preterm delivery for the fetus. Angiogenic factors, such as vascular endothelial growth factor (VEGF), placental growth factor (PlGF) and soluble fms-like tyrosine kinase-1 (sFlt-1), have shown potential for the diagnosis of pre-eclampsia in cohort studies.
Recently, however, the results of a study using a stepped-wedge cluster-randomized controlled trial measuring PlGF levels alongside the use of a clinical management algorithm have been published in The Lancet (Duhig KE, et al. Lancet 2019; pii: S0140-6736(18)33212-4). The study involved more than 1000 women with suspected pre-eclampsia, who were divided into two groups in which the PlGF levels were either made known (revealed PlGF) or not (concealed PlGF). The results showed that in the revealed PlGF group the time to diagnosis fell from 4.1 to 1.9 days compared with the concealed PlGF group, and that serious maternal complications fell from 5.3 % (24 of 447 women) to 4 % (22 of 573 women). The findings from the study have resulted in NHS England deciding to make the test more widely available, allowing more timely patient management and more appropriate use of resources in high-risk women, so helping to avoid life-threatening complications for both mother and baby as well as providing reassurance when pre-eclampsia is ruled out.
Detection and typing of HPV for cervical cancer prevention according to Meijer criteria
HPV testing is a linchpin of cervical cancer prevention, providing an effective alternative to the long-standing Pap test. The HPV analysis should encompass all relevant anogenital HPV types and differentiate between high-risk types, which can induce cancer, and low-risk types, which cause benign genital warts. It is also critical to identify multiple and persistent infections, since these are associated with a high tumour risk. The EUROArray HPV provides fast and reliable detection and typing of all 30 relevant anogenital high-risk and low-risk HPV types in one reaction and meets the international criteria for HPV screening defined by Meijer et al. The test is simple to perform and includes fully automated data evaluation and documentation.
by Dr Jacqueline Gosink
Cervical cancer
Cervical cancer is the third most frequent cancer in women worldwide. For example, in Germany there are approximately 4600 new cases each year, even though many women attend cancer screening. Tumours of the cervix are caused by human papillomaviruses (HPV), which are spread by sexual contact. The immune system usually eliminates the HPV within a few months. However, if an HPV infection persists over a longer period of time, it can cause changes in cervical cells, depending on the HPV type, which may subsequently lead to cancer. The cellular changes are histologically classified as grade 1, 2 or 3 cervical intraepithelial neoplasia (CIN). Mild cases (CIN 1) often clear without any treatment. Moderate and severe cases (CIN 2 and CIN 3) are usually treated to prevent development of cervical cancer.
High- and low-risk HPV types
There are over 200 types of HPV, of which 30 can cause infections in the genital area. These are divided into low-risk and high-risk types. Low-risk types cause warts in the genital area or slight tissue changes. High-risk types are significantly more aggressive. A persistent infection with a high-risk HPV type can cause tissue changes that significantly increase the risk of a tumour. The most common high-risk types are 16 and 18, which are responsible for about 70% of cervical cancers and precancers. Other high-risk types are 26, 31, 33, 35, 39, 45, 51, 52, 53, 56, 58, 59, 66, 68, 73 and 82. Low-risk types are 6, 11, 40, 42, 43, 44, 54, 61, 70, 72, 81 and 89.
HPV vaccination protects against infection with the high-risk types 16 and 18 and the low-risk types 6 and 11. Depending on the vaccine preparation used, protection against five additional high-risk types (31, 33, 45, 52, 58) is possible. Vaccinated persons can still become infected with other types. It is therefore important to participate in cancer screening even after vaccination.
HPV infection and cancer risk
Persistent infections with a single high-risk type are associated with a clearly increased tumour risk. In a study panel (n=40), all patients with a persistent infection with a single high-risk HPV type developed CIN 2 or worse within seven years (Figure 1) (1). Furthermore, simultaneous infections with different high-risk types lead with high probability to malignant cytological changes in the cervical mucosa and therefore present a greater cancer risk for patients. Sequential infections with different high-risk HPV types, in contrast, do not increase the risk of cervical cancer. It is therefore important to differentiate persistent from transient infections and multiple from single infections. This is only possible with tests that are able to subtype the different HPV types.
Cervical cancer screening
Cervical cancer screening is traditionally based on the Papanicolaou or Pap test, which detects morphological cell changes in cervical smear samples. However, several countries have already established or are currently switching to high-risk HPV testing as the first-line screening method, as evidence mounts that it is more effective and efficient for the prevention of invasive cervical cancer and mortality than the Pap test. Several randomized trials have shown that the cumulative incidence of cervical cancer five years after a negative HPV test is lower than the incidence three years after a normal cytology result.
Molecular biological testing allows early identification of an HPV infection, even before dysplasia is visible in the mucosa. Subtyping tests reveal at the same time whether the infection is due to low-risk or high-risk HPV and exactly which HPV types are present. Patients who have simultaneous infections with different high-risk types or a persistent infection with the same high-risk HPV type can be monitored more frequently to ensure timely treatment to minimize the risk of cervical cancer. A negative result excludes an HPV infection and thus, with high probability, the risk of developing cervical cancer.
Detection of the viral oncogenes E6/E7
A prerequisite for the development of carcinoma is the integration of the HPV genome into the DNA of the epithelial cells. The proportion of infected cells containing integrated viral DNA increases as the infection progresses. During the integration into human DNA, particular regions of the HPV genome (generally the E1, E2, L1 and L2 genes) are disrupted. Test systems that detect these genes are therefore unreliable. For example, when test systems based on the L1 gene are used to detect HPV types 16 and 18, between 8% and 28% of high-grade dysplasia cases can be overlooked (2). In contrast, testing for essential viral markers such as the oncogenes E6 and E7 allows all HPV infections to be reliably detected, since these genes are essential for malignant transformation of the host cells and remain intact even after integration. Detection of variable sequences within these genes allows the different HPV types to be differentiated.
Meijer HPV test criteria
In 2009, an international team of experts proposed criteria for the requirements and validation of HPV tests for primary cervical cancer screening, known as the Meijer criteria (3, 4). To support the clinical performance evaluation of HPV subtyping tests, the VALGENT (VALidation of HPV GENotyping Tests) protocol was subsequently established. The key issue for HPV testing in cervical screening is to detect high-risk HPV infections that are associated with or develop into ≥CIN 2 and to differentiate them from transient high-risk HPV infections. HPV tests should provide high clinical sensitivity for detection of cervical precancer and cancer, and at the same time high clinical specificity to limit unnecessary procedures and follow-up of HPV-positive women.
The validation guidelines for HPV tests encompass clinical sensitivity (criterion 1), clinical specificity (criterion 2), and intra-laboratory and inter-laboratory reproducibility (criterion 3). Candidate assays are validated by comparative analysis with fully clinically and epidemiologically validated reference HPV tests, such as the Hybrid Capture 2 (hc2) assay (QIAGEN) or the GP5+6+ PCR-EIA using samples from women aged 30 or older. One of the criteria stipulates that the sensitivity for ≥CIN 2 of the candidate assay should amount to ≥90% of the sensitivity of the hc2 assay; the specificity for ≥CIN 2 should reach ≥98% of the specificity of the hc2 assay.
EUROArray HPV
One test that fulfils the criteria of the Meijer protocol is the EUROArray HPV from EUROIMMUN. This multiplex PCR-based assay provides detection and typing of all 30 genitally relevant HPV types in one reaction. The individual typing enables differentiation of high- and low-risk infections, as well as identification of multiple infections. The precise HPV genotyping also allows differentiation between new and persistent infections when determinations are performed over a time course, e.g. two analyses at a time interval of 12 to 18 months. The test is based on detection of E6/E7 DNA, ensuring highest sensitivity even in infections where the viral genome has already integrated into the DNA of the host epithelial cells.
The EUROArray procedure is easy to perform and does not require expertise in molecular biology. DNA isolated from patient cervical smear samples is analysed using multiplex PCR and a microarray biochip slide containing DNA probes corresponding to each HPV type (Figure 2). Results are evaluated and interpreted fully automatically using the user-friendly EUROArrayScan software. A detailed result report is produced for each patient and all data is documented and archived. Integrated controls such as DNA positive control and cross-contamination control ensure high result security. Meticulously designed primers and ready-to-use PCR components further contribute to the reliability of the analysis. The entire procedure is IVD validated and CE registered.
Fulfilment of Meijer criteria
The EUROArray HPV was evaluated alongside other HPV tests (5, 6) using cervical specimens from 404 women undergoing follow-up of high-grade cytological abnormality. The HPV tests were used to detect high-risk HPV genotypes and predict histologically confirmed ≥CIN 2 in these patients. There was excellent agreement between the EUROArray HPV and all other HPV tests. The authors concluded that the EUROArray HPV fulfils the first Meijer criterion of ≥90% of the clinical sensitivity of hc2 for detection of ≥CIN 2. Moreover, the genotyping for 30 individual types would also allow the EUROArray HPV to be used in epidemiological and surveillance applications.
In a further study (7) the analytical and clinical performance of the EUROArray HPV was evaluated using a total of 1300 consecutive and 300 cytologically abnormal cervical samples (Tables 1A and 1B). The relative sensitivity of the EUROArray HPV with respect to the hc2 assay was 93% for ≥CIN 2. This value was further increased to 98% using an optimized cut-off for HPV16, which has now been incorporated into the test evaluation. The relative specificity of the EUROArray HPV for ≤CIN 1 with respect to the hc2 was 100%. Finally, the EUROArray HPV showed excellent intra- and inter-assay reproducibility. Thus, the EUROArray HPV fulfilled all of the Meijer criteria for use in cervical cancer screening.
Conclusions
The EUROArray HPV is ideally suited for HPV genotyping in primary cervical cancer screening programmes. It has been shown to be non-inferior to the hc2 comparator test for both sensitivity and specificity, as stipulated in the Meijer criteria and validated using the VALGENT framework. The EUROArray HPV is currently the only commercially available test that enables genotyping of all 30 anogenital HPV types on the basis of the E6/E7 oncogenes. It is speculated that in the future the use of HPV genotyping in cervical cancer might be extended to testing for cure and to further stratification of disease risk. HPV are also associated with several other types of cancer, such as anal cancer, head and neck cancers, vulvar and vaginal cancers, and penile cancer. HPV testing is also important in the diagnosis of these cancers.
References
1. Elfgren et al. Am J Obstet Gynecol (2016), 7: 11-22
2. Tjalma et al. Eur J Obstet Gynecol Reprod Biol (2013), 170(1): 45-46
3. Meijer et al. Int J Cancer (2009), 124: 516-520
4. Arbyn et al. Clin Microbiol Infect (2015), 21:817-826
5. Cornall et al. Eur J Clin Microbiol Infect Dis (2016), 35(6): 1033-1036
6. Cornall et al. Papillomavirus Research (2017), 4: 79-84
7. Viti et al. J Clin Virol (2018), 108: 38-42
The author
Jacqueline Gosink, PhD
EUROIMMUN AG
Seekamp 31
23560 Lubeck
Germany
Implementing digital blood cell analysis technology in a distributed laboratory network
The recent introduction of the CellaVision DC-1 makes it possible for small labs to implement the same digital methodology for performing blood cell differentials that is commonly used by large laboratory organizations. CellaVision recently teamed up with Alberta Public Laboratories (APL) to conduct an in-situ product evaluation assessing the utility and impact of CellaVision DC-1 in a distributed laboratory network. APL is a leading medical diagnostic laboratory serving a large catchment of Southern Alberta, Canada. CLI talked to Dr Etienne Mahe, consultant pathologist at APL, who shares here his experience of this technology.
1. Could you briefly describe your laboratory setting and specific requirements regarding hematology testing?
Laboratory testing in Southern Alberta (and in many other jurisdictions elsewhere in the world) can easily be summarized as a “hub-and-spoke” model. We have a large central high-throughput laboratory to which geographically dispersed small referral laboratories or collection sites send specimens.
Since many of these smaller sites are at substantial distances from the central referral laboratory, the strategy in hematology has been to situate low-complexity low-throughput analysers at the spoke sites and reserve the high-throughput high-complexity infrastructure for the hub labs. In the case of peripheral smear review, CBC data are generated at the peripheral sites on low-complexity low-throughput analysers, but slides (as required) are referred to the hub for additional review and interpretation. In cases requiring pathologist review, delays of up to several days are possible, with the attendant potential for delayed patient care.
2. In your view, what are the most interesting characteristics of the Cellavision DC-1 analyser and the main advantages of its technology?
The Cellavision suite has provided our hub labs in Southern Alberta with major improvements in efficiency for high-throughput hematology testing. We employ Cellavision integrated analysers in our hub labs to perform nearly all peripheral smear manual differential and morphology review activities. We also use the Cellavision body fluid analysis features to assist with review and interpretation of most fluid specimens. The networking capabilities of the Cellavision suite have allowed for seamless data exchange between our network of hospital-based hub labs. The Cellavision suite has also allowed for improved training and quality control workflows.
The Cellavision DC-1, designed to better address the digital hematology and pathology needs of lower throughput laboratories, raised significant interest for us as a means to better improve our spoke-to-hub workflows. In particular, while our performance parameters for basic CBC resulting are reasonable, we currently experience heavy delays in the morphological review of peripheral smears by hub technologists and pathologists by virtue of transportation delays from spoke centers. The Cellavision DC-1 presents the opportunity for real-time digital interpretation of peripheral smears originating from spoke sites by expert hub lab staff, entirely negating the need for slide transport.
3. What was the aim of the product evaluation carried out by your laboratory network? Could you briefly explain the methodology employed?
When presented with the opportunity to test the Cellavision DC-1 instrument, we immediately wanted to prove that the theoretical turn-around-time benefit could be realized in our lab system. We obtained research ethics and institutional approval to perform a prospective study of turn-around times (from specimen collection at spoke sites to expert review by the hub lab), comparing a Cellavision DC-1 assisted workflow with the current standard of care. We also compared the reported results of morphology review between the Cellavision DC-1 assisted workflow and the standard-of-care workflow. In addition, we undertook a comparison with historical turn-around-time data in order to estimate the volume of cases (and hence the length of the study) required for a reasonable comparison.
4. What were the results of the evaluation and did they meet your expectations?
Since we hope to publish our results in the future, I won’t divulge them in their totality as yet, except to say that we identified statistically significant improvements in all parameters assessed, including turn-around times, without evidence of any discordance in the quality of morphologic assessment. While we were not at all surprised to see a statistically significant difference between the workflows, we were impressed by the degree to which these improvements in turn-around time were realized, which we anticipate will mean a clinically significant improvement for labs facing similar workflow hurdles.
5. Can you tell us anything more about your experience of this technology and do you have any particular advice or recommendation for labs interested in its implementation?
We have been working with the Cellavision suite of technologies for several years and have incorporated it into the vast majority of our routine hematology workflows. Several years ago, as part of a small implementation project, I asked a number of our technologist super users to provide their feedback on instrument usage and software usability. By far and away, the feedback was positive.
As part of our current work, we are also hoping to provide more tangible data relating to Cellavision software usability. More specifically, we have undertaken several exercises across a broad cadre of technical staff to identify how much more time-efficient the process of technologist classification using Cellavision software is compared with manual morphology assessment. As with our turn-around-time results, we will soon be reporting a significant advantage to a Cellavision-based workflow.
For laboratories and laboratory networks thinking of implementing Cellavision enabled technologies, it is important to first understand the nature of your laboratory structure and its hematology workflows. For single lab sites with high-throughput, Cellavision offers a number of solutions geared to high-volume needs. Now, as we have seen, Cellavision also offers solutions for smaller low-throughput labs, especially labs frequently faced with the challenges of material referrals.
6. What do you see as the next step for your organization?
While our data support improvements in time-based metrics using the Cellavision DC-1 in our distributed laboratory network, we are hoping next to make an economic argument to support the integration of the Cellavision suite of technologies across our hub-and-spoke network. More specifically, we are hoping to liaise with local health economics experts to prove that improvements in turn-around times (and the commensurate assumed cost reductions if materials transportation is not required) support the necessary investments in infrastructure required, as well as where such investments should be made across our network.
Dr Etienne Mahe is Clinical Assistant Professor, Department of Pathology & Laboratory Medicine, University of Calgary, and Consultant Pathologist, Division of Hematology, South Sector, Alberta Public Laboratories
Using whole-genome sequencing to predict antimicrobial resistance
, /in Featured Articles /by 3wmediaThousands of strain-specific whole-genome sequences are now available for a wide range of pathogenic bacteria. Using these data, approaches based on machine learning can now be used to predict the results of antimicrobial susceptibility tests from sequence alone. Recent studies have demonstrated the ability to predict minimum inhibitory concentrations with accuracies up to 95 %. Employing these tools to prioritize antibiotic treatment could improve patient outcomes and help to avoid the antibiotic resistance crisis.
by Dr Jonathan M. Monk
Importance of antimicrobial resistance (AMR) prediction
Today over 700 000 people die of antibiotic resistant infections per year [1]. Frighteningly, it has been estimated that this number could rise to 10 million deaths per year if nothing is done to stop the increase and spread of antibiotic resistant bacteria [2]. To help combat this threat it is critical to limit the use of ineffective antibiotics and to prescribe the appropriate antimicrobial therapy to patients as quickly as possible. Although antimicrobial susceptibility testing is now routine in microbiology laboratories, this testing often takes too long to impact clinical diagnosis.
New tools that rapidly predict antibiotic resistance could improve antibiotic stewardship and, when effectively implemented, have led to reductions in levels of resistant bacteria in hospitals [3]. Thus, accurately diagnosing antibiotic resistant bacteria would avoid the evolutionary pressures that accelerate resistance and would aid antibiotic stewardship approaches. This could enable physicians to select the optimal antibiotic regimen to cure a patient, rather than enhancing a given strain’s resistance. Whole-genome sequencing may offer this possibility.
The genomics revolution has made available thousands of strain-specific whole-genome sequences (WGS) for a range of pathogenic bacteria. For example the Pathosystems Resource Integration Center (PATRIC) [the all-bacterial Bioinformatics Resource Center (BRC) funded by the National Institute of Allergy and Infectious Diseases (NIAID)] currently contains over 15 000 Escherichia genomes, more than 14 000 Staphylococcus genomes and nearly 11 000 Mycobacteria genomes [4]. Increasingly, these genomes are coupled with clinical metadata, including minimum inhibitory concentration (MIC) values for various antibiotics.
This large-scale coupling of resistance data with strain-specific genome sequences enables machine learning and other big-data science approaches to study and predict antibiotic resistance. For example, it is now possible to apply case-control studies whereby a group of strains that exhibit a biological phenotype (e.g. antibiotic resistance) is compared to a group of strains that do not. Machine learning techniques can be used to identify biomarkers (e.g. presence/absence of genes or mutations) that are predictive of a given phenotype. These biomarkers can then serve as a basis for diagnostic tests.
Here we discuss recent literature using machine learning approaches to predict antibiotic resistance and highlight considerations required for their application.
Introduction to machine learning approaches for WGS-based prediction of AMR
Setting up a machine learning problem involves breaking data into two groups (Fig. 1a):
(1) The y-array containing genomes or samples (m rows) matched with the phenotype to be predicted. In the case of AMR prediction, a phenotype could be binary e.g. ‘resistant’ versus ‘susceptible’ or the actual experimentally measured MIC. Predicting MICs is often preferable owing to changing breakpoints used to define resistance. For example, a given strain with a MIC of 8 µg/mL gentamicin may have previously been classified as resistant, but new CLSI 2017 guidelines specify that gentamicin resistance requires a MIC above 16 µg/mL. This can lead to inconsistent AMR annotations that can confound binary predictions.
(2) The X-matrix containing the samples (m rows) and their associated features (n columns) that will be used to make a prediction. Features range from those that are completely knowledge-based, such as the presence of genes known to confer antibiotic resistance (e.g. a beta-lactamase) to those that require no previous knowledge such as the presence of short (~10 bp) segments of DNA on the chromosome (Fig. 1b). These features have unique benefits and drawbacks that have been used in several recent studies described below.
Selecting appropriate features for AMR prediction
Knowledge-based features
Knowledge-based features can be obtained by mapping a genome of interest using curated databases of gene products that have already been demonstrated to confer antibiotic resistance. As of February 2019, the Comprehensive Antibiotic Resistance Database (CARD) houses 2553 reference sequences and 1216 SNPs demonstrated to confer resistances for 79 different pathogens [5]. These approaches are akin to laboratory tools that offer PCR-based identification of AMR determinants, such as BioFire. Models trained using previously annotated features often have good accuracies and are easier to interpret because of the accumulated knowledge present in such predictions [6, 7].
However, despite their high accuracy, these tools are limited because they often require many rounds of multiple sequence alignment, which can become computationally expensive at large scale. Also, reliance on known AMR determinants may cause such algorithms to miss newly evolved resistance mechanisms. A useful machine learning approach should be capable of analysing future outbreaks and identifying new mechanisms of resistance, rather than being limited to past knowledge.
Gene- and allele-based features
An approach that balances these two extremes involves assembling features by annotating the genome for known protein coding genes, but keeping the feature types agnostic, for example by including genes with functions ranging from cell replication, to cell wall synthesis to metabolism. This approach has the advantage of not requiring known determinants of antimicrobial resistance but does still require annotated genomes, potentially biasing results by annotation methods.
Recent studies have used this approach to predict antibiotic resistance in E. coli with accuracies above 90 % [8]. Importantly, this approach identified features that outperformed genes established in the literature. Such an approach can go even deeper by breaking the genes down into their constituent alleles to account for potential mutations in each coding sequence. Another study took this approach to examine 1595 strains of M. tuberculosis and identified 33 known AMR-conferring genes and 24 new potentially novel antibiotic resistance conferring genes [9]. Thus, methods that rely on several features extracted from the genome, rather than restricting them to those with previous knowledge, can be used to accurately predict AMR and identify novel mechanisms of resistance making them extensible to mutations and mechanisms of resistance that may emerge in the future.
Kmer feature selection
A contrasting approach that can identify new mechanisms of resistance and requires no a priori knowledge involves breaking up a genome into short (~10 bp) long segments of DNA and using these to create ‘features’ from short segments of DNA on the genome, known as ‘kmers’. All genomes in the collection can be divided into kmers that are then added to the X-matrix where presence of a specific kmer becomes a feature. Thus, this kmer-based approach contrasts with knowledge-based methods to predict AMR that rely on a database of curated genes and mutations previously shown to confer antibiotic resistance.
Studies of A. baumannii, S. aureus, S. pneumoniae, K. pneumoniae and collections of over 5000 Salmonella genomes have demonstrated ability to predict MIC with an average accuracies above 90 % within +/−1 twofold dilution step [10–12]. Unfortunately, this high accuracy and ability to predict new mechanisms of resistance has the trade-off of being difficult to interpret. For example, a model may imply a strong relationship between predicted resistance and segments of the genome without annotated functions or to biological processes.
Building and evaluating a machine learning model
Once the features for a model have been selected it is time to apply a machine learning algorithm to the data. Several such algorithms exist, each with benefits and drawbacks related to accuracy and interpretability [13]. Unfortunately, often the more accurate models are difficult to interpret whereas more intelligible models have worse predictive capabilities. In healthcare applications it is vital for the treating physician to be able to understand, validate and trust a model, and thus relying on easier to interpret methods like a decision tree or simple logistic regressors may be best.
When evaluating a machine learning model it is imperative to question how the model was trained. A major pitfall for machine learning approaches is their tendency to ‘overfit’ a dataset. For example, a model evaluated on the same data it was trained on can simply ‘remember’ those data and correctly predict any point in the training set, yet perform poorly on new, unseen data. Robust machine learning workflows guard against such overfitting by splitting the data into non-overlapping sets, where ~80 % of the data is used for training and ~20 % is held out for testing (Fig. 2a). This splitting should be random and repeated several times to assess the overall accuracy and sensitivity of a model, thereby limiting overfitting and ensuring that predictions remain generalizable and robust.
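A minimal sketch of this repeated 80/20 split using scikit-learn, with an interpretable logistic regression classifier; the feature matrix and labels below are random placeholders standing in for WGS-derived features and resistant/susceptible calls.

```python
# Minimal sketch: repeated random 80/20 train/test splits with scikit-learn.
# X and y are placeholder data standing in for WGS features and binary phenotypes.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import ShuffleSplit, cross_val_score

X = np.random.randint(0, 2, size=(100, 500))   # placeholder binary features
y = np.random.randint(0, 2, size=100)          # placeholder resistant/susceptible labels

model = LogisticRegression(max_iter=1000)      # an easily interpreted classifier
splitter = ShuffleSplit(n_splits=10, test_size=0.2, random_state=0)

scores = cross_val_score(model, X, y, cv=splitter)
print(f"mean accuracy over 10 random splits: {scores.mean():.2f} +/- {scores.std():.2f}")
```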
Once a model’s ability to predict new data is established, it is possible to evaluate its predictive performance (Fig. 2b). Thus far we have described previous studies in terms of correct predictions and accuracies; however, it is often more important to evaluate cases where a model fails. Requirements for AMR diagnostic devices are strict, and such devices typically describe their utility in terms of error rates. Major errors (MEs) occur when susceptible genomes are incorrectly predicted to have resistant MICs. The opposite case, in which resistant genomes are incorrectly assigned susceptible MICs, is termed a very major error (VME). US Food and Drug Administration (FDA) standards for automated systems recommend an ME rate ≤3 %. A recent study of over 5000 Salmonella genomes used kmers to train a model whose MIC predictions for 15 antibiotics had ME rates in this range [10]. The FDA standards for VME rates indicate that the lower 95 % confidence limit should be ≤1.5 % and the upper limit ≤7.5 %. Models for seven of the 15 antibiotics in the same study had acceptable VME rates based on this requirement. Thus, for a subset of antibiotics, such an approach could already make acceptable predictions for diagnostic applications.
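As a sketch of how ME and VME rates can be tallied from binary resistant/susceptible calls (the labels below are illustrative; a real evaluation would use predictions from held-out test sets):

```python
# Minimal sketch: counting major errors (ME) and very major errors (VME).
# ME rate is taken over susceptible isolates, VME rate over resistant isolates.
def error_rates(y_true, y_pred):
    me = sum(t == "S" and p == "R" for t, p in zip(y_true, y_pred))   # major errors
    vme = sum(t == "R" and p == "S" for t, p in zip(y_true, y_pred))  # very major errors
    n_susceptible = sum(t == "S" for t in y_true)
    n_resistant = sum(t == "R" for t in y_true)
    return me / n_susceptible, vme / n_resistant

y_true = ["R", "R", "S", "S", "S", "R"]   # illustrative experimental results
y_pred = ["R", "S", "S", "R", "S", "R"]   # illustrative model predictions
me_rate, vme_rate = error_rates(y_true, y_pred)
print(f"ME rate: {me_rate:.1%}, VME rate: {vme_rate:.1%}")
```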
Summary and outlook
In summary, WGS-based prediction of antimicrobial susceptibility is becoming a reality. This brief review has limited its scope to tools and methods that predict antibiotic susceptibility from WGS. In the future, however, it may be possible to combine genomic features with patient information such as age, gender and comorbidities. Furthermore, rather than predicting only antibiotic susceptibility, it may be possible to train algorithms to predict patient outcome and to adjust treatment regimens to improve patient care [14].
Such approaches are sorely needed. Despite improvements in antibiotic use, the Centers for Disease Control and Prevention (CDC) estimates that approximately 50 % of antibiotics are still prescribed unnecessarily in the US, at a yearly cost of $1.1 billion [15], and the annual impact of resistant infections in the US is estimated at $20 billion in excess healthcare costs and 8 million additional hospital days [16]. Significant improvements in patient outcome have been observed when the time to optimal antibiotic therapy is reduced [17, 18]. Rapid identification and targeted treatment of pathogenic bacteria, assisted by the algorithms presented here, would enable precision medicine for pathogens, lowering the incidence of antibiotic resistance, improving patient health and decreasing hospital costs.
Figure 1. How to set up a whole-genome sequence (WGS)-based machine learning problem for antimicrobial resistance (AMR) prediction. (a) Samples (m=rows) with sequenced genomes and known phenotypes of interest [‘susceptible’ vs ‘resistant’ phenotypes or minimum inhibitory concentration (MIC) value] are used to train a machine learning model. All values to be predicted are placed into the ‘y’ array. The ‘features’ used to train a model form the columns of the X-matrix. (b) For WGS-based antimicrobial-susceptibility-test prediction possible feature types include: (1) known antibiotic resistance conferring genes or mutations, (2) annotated protein coding genes (independent of known functions) and even (3) the presence of short fragments of DNA sequence on the chromosome known as ‘kmers’. These different feature types have a trade-off between ease of interpretation (easiest for previously identified features) and ability to detect novel AMR determinants (best for short sequence fragments).
Figure 2. Evaluating predictions from a machine learning model. (a) A machine learning model that is ‘overfit’ generalizes poorly to new data. To ensure a model is robust enough to predict new samples, all models should be cross-validated. This process involves randomly splitting the whole dataset into training (~80 % of samples) and testing (~20 %) sets. The sets should be shuffled multiple times to check model accuracy across different samples and features. (b) The results of running the model on the testing sets can then be compared for each randomly sampled set (different colored lines). Model performance is compared by calculating the area under the curve (AUC) on a plot of the true positive rate vs the false positive rate, often called a receiver operating characteristic (ROC) curve. Model accuracy can be calculated from the number of true positive (TP) [model prediction, resistant (R); experimental result, R] and true negative (TN) predictions divided by the total number of predictions. However, it is often more important to gauge how a model fails: for example, a false positive ‘major error’ [model prediction, R; experimental result, susceptible (S)] may lead to incorrectly withholding an effective antibiotic. Even worse, a false negative ‘very major error’ prediction (model prediction, S; experimental result, R) could lead to prescribing an antibiotic that is ineffective.
References
1. The dangers of hubris on human health. Global Risks – Reports. World Economic Forum 2013 (http://reports.weforum.org/global-risks-2013/risk-case-1/the-dangers-of-hubris-on-human-health/).
2. O’Neill J. Antimicrobial resistance: tackling a crisis for the health and wealth of nations. Rev Antimicrob Resist 2014; 20: 1–16.
3. Carling P, Fung T, Killion A, Terrin N, Barza M. Favorable impact of a multidisciplinary antibiotic management program conducted during 7 years. Infect Control Hosp Epidemiol 2003; 24(9): 699–706.
4. Wattam AR, Abraham D, Dalay O, Disz TL, Driscoll T, Gabbard JL, Gillespie JJ, Gough R, Hix D, Kenyon R, Machi D, Mao C, Nordberg EK, Olson R, Overbeek R, Pusch GD, Shukla M, Schulman J, Stevens RL, Sullivan DE, Vonstein V, Warren A, Will R, Wilson MJ, Yoo HS, Zhang C, Zhang Y, Sobral BW. PATRIC, the bacterial bioinformatics database and analysis resource. Nucleic Acids Res 2014; 42(Database issue): D581–591.
5. Jia B, Raphenya AR, Alcock B, Waglechner N, Guo P, Tsang KK, Guo P, Tsang KK, Lago BA, Dave BM, Pereira S, Sharma AN, Doshi S, Courtot M, Lo R, Williams LE, Frye JG, Elsayegh T, Sardar D, Westman EL, Pawlowski AC, Johnson TA, Brinkman FS, Wright GD, McArthur AG. CARD 2017: expansion and model-centric curation of the comprehensive antibiotic resistance database. Nucleic Acids Res 2017; 45(D1): D566–573.
6. Jeukens J, Freschi L, Kukavica-Ibrulj I, Emond-Rheault J-G, Tucker NP, Levesque RC. Genomics of antibiotic-resistance prediction in Pseudomonas aeruginosa. Ann N Y Acad Sci 2019; 1435(1): 5–17 (First published 2017 Online: https://nyaspubs.onlinelibrary.wiley.com/doi/abs/10.1111/nyas.13358).
7. Bradley P, Gordon NC, Walker TM, Dunn L, Heys S, Huang B, Earle S, Pankhurst LJ, Anson L, de Cesare M, Piazza P, Votintseva AA, Golubchik T, Wilson DJ, Wyllie DH, Diel R, Niemann S, Feuerriegel S, Kohl TA, Ismail N, Omar SV, Smith EG, Buck D, McVean G, et al. Rapid antibiotic-resistance predictions from genome sequence data for Staphylococcus aureus and Mycobacterium tuberculosis. Nat Commun 2015; 6: 10063.
8. Her H-L, Wu Y-W. A pan-genome-based machine learning approach for predicting antimicrobial resistance activities of the Escherichia coli strains. Bioinformatics 2018; 34(13): i89–i95.
9. Kavvas ES, Catoiu E, Mih N, Yurkovich JT, Seif Y, Dillon N, Heckmann D, Anand A, Yang L, Nizet V, Monk JM, Palsson BO. Machine learning and structural analysis of Mycobacterium tuberculosis pan-genome identifies genetic signatures of antibiotic resistance. Nat Commun 2018; 9(1): 4306.
10. Nguyen M, Long SW, McDermott PF, Olsen RJ, Olson R, Stevens RL, Tyson GH, Zhao S, Davis JJ. Using machine learning to predict antimicrobial MICs and associated genomic features for nontyphoidal Salmonella. J Clin Microbiol 2019; 57(2): pii: e01260-18 (http://dx.doi.org/10.1128/JCM.01260-18).
11. Davis JJ, Boisvert S, Brettin T, Kenyon RW, Mao C, Olson R, Overbeek R, Santerre J, Shukla M, Wattam AR, Will R, Xia F, Stevens R. Antimicrobial resistance prediction in PATRIC and RAST. Sci Rep 2016; 6: 27930.
12. Nguyen M, Brettin T, Long SW, Musser JM, Olsen RJ, Olson R, Shukla M, Stevens RL, Xia F, Yoo H, Davis JJ. Developing an in silico minimum inhibitory concentration panel test for Klebsiella pneumoniae. Sci Rep 2018; 8(1): 421.
13. Deo RC. Machine learning in medicine. Circulation 2015; 132(20): 1920–1930.
14. Kachroo P, Eraso JM, Beres SB, Olsen RJ, Zhu L, Nasser W, Bernard PE, Cantu CC, Saavedra MO, Arredondo MJ, Strope B, Do H, Kumaraswami M, Vuopio J, Gröndahl-Yli-Hannuksela K, Kristinsson KG, Gottfredsson M, Pesonen M, Pensar J, Davenport ER, Clark AG, Corander J, Caugant DA, Gaini S. Integrated analysis of population genomics, transcriptomics and virulence provides novel insights into Streptococcus pyogenes pathogenesis. Nat Genet 2019; 51(3): 548–559 (http://dx.doi.org/10.1038/s41588-018-0343-1).
15. Antibiotic resistance threats in the United States, 2013. US Centers for Disease Control and Prevention 2013 (https://www.cdc.gov/drugresistance/pdf/ar-threats-2013-508.pdf).
16. Fair RJ, Tor Y. Antibiotics and bacterial resistance in the 21st century. Perspect Medicin Chem 2014; 6: 25–64.
17. Kumar A, Roberts D, Wood KE, Light B, Parrillo JE, Sharma S, Suppes R, Feinstein D, Zanotti S, Taiberg L, Gurka D, Kumar A, Cheang M. Duration of hypotension before initiation of effective antimicrobial therapy is the critical determinant of survival in human septic shock. Crit Care Med 2006; 34(6): 1589–1596.
18. Palmer HR, Palavecino EL, Johnson JW, Ohl CA, Williamson JC. Clinical and microbiological implications of time-to-positivity of blood cultures in patients with Gram-negative bacilli bacteremia. Eur J Clin Microbiol Infect Dis 2013; 32(7): 955–959.
The author
Jonathan M. Monk PhD
Department of Bioengineering, UC San Diego, San Diego, California, USA
E-mail: jmonk@ucsd.edu
Activated partial thromboplastin time assay
The activated partial thromboplastin time coagulation assay is one of the most frequently performed tests in hematology, and has a variety of uses in clinical practice. Accurate interpretation of the test depends on both the clinical context (i.e. why the test was ordered) and an understanding of each laboratory’s normal reference range and assay sensitivity for detection of factor deficiencies, (unfractionated) heparin therapy and lupus anticoagulant.
by Dr Julianne Falconer and Dr Emmanuel J. Favaloro
Introduction
The activated partial thromboplastin time (APTT) assay is a commonly requested coagulation test, perhaps second only to the prothrombin time (PT)/international normalized ratio (INR), as used to monitor vitamin K antagonist (VKA) therapy such as warfarin. The APTT test assesses the intrinsic pathway of coagulation and has a variety of clinical uses; however, it is primarily used to screen for hemostasis issues, factor deficiencies, lupus anticoagulant (LA) or to monitor unfractionated heparin (UFH) therapy dosing. The test is sensitive to, but not specific for, detection of these abnormalities or influences. APTT prolongation may also be seen in liver disease, disseminated intravascular coagulation (DIC) and in the presence of factor inhibitors. Interpretation of an APTT result, be it normal or prolonged, is dependent on both the clinical context and the characteristics of the reagents and the assay as performed on particular instruments. The establishment of normal reference intervals (NRIs) and assessment of the assay in terms of its sensitivity to heparin, LA and clotting factors are important to provide accurate information for clinical interpretation [1].
Uses of the APTT assay
The APTT test is a global assay that measures the time to fibrin clot formation via the contact factor (‘intrinsic’) pathway (Fig. 1). The APTT test is usually performed on fully automated platforms, and involves activation of coagulation within the test (plasma) sample by the addition of specific reagents (containing phospholipids, contact factor activator and calcium chloride). The type of contact factor activator, and the type and concentration of phospholipid, used in the APTT reagent affects the sensitivity of the assay to, and thus its prolongation by, factor deficiencies, as well as to the presence of UFH and LA [1, 2].
The APTT is commonly used to monitor anticoagulation therapy with UFH (Table 1). It may also be prolonged, however, in the presence of VKAs, including warfarin, as well as direct oral anticoagulants (DOACs) such as dabigatran (a direct thrombin inhibitor) and rivaroxaban (an anti-FXa inhibitor). The APTT is generally less sensitive to, but may still be slightly prolonged by, anticoagulation with low molecular weight heparin (LMWH) and with apixaban, another DOAC (anti-FXa inhibitor).
In the absence of anticoagulation therapy, an ‘isolated’ prolonged APTT may indicate a clinically important factor deficiency, for example as a screen for hemophilia A (FVIII deficiency), hemophilia B (FIX deficiency) or hemophilia C (FXI deficiency), or even von Willebrand disease (VWD; which may be associated with loss of FVIII) [1]. An ‘isolated’ prolonged APTT could instead, however, be indicative of a clinically unimportant factor deficiency, such as FXII or another contact factor deficiency. Other explanations for an ‘isolated’ prolonged APTT include a factor inhibitor or LA. Despite causing prolongation of the APTT in vitro, LA may be associated clinically with an increased risk of thrombosis rather than bleeding. A prolonged APTT may be accompanied by a prolonged PT in the context of liver disease, DIC, or fibrinogen (or other ‘common pathway’ factor) deficiency/ies. Clinical context, therefore, must form the basis for accurate interpretation of the APTT, be it normal or prolonged, and together with other routine coagulation studies is essential to guide further investigations (Fig. 2).
A large number of commercial APTT reagents are now available, with wide variation in the type of contact factor activator and in the phospholipid source and concentration used. This results in variation in sensitivity to all of the typical influences, and thus also in substantial variation in NRIs between APTT reagents, requiring the establishment and verification of NRIs based on the specific reagent and instrument in use. Lack of awareness of variation in APTT reagent sensitivity, in the context of the clinical picture, can lead to flawed clinical interpretation of results.
Establishment and verification of NRIs
A minimum of 20 normal individuals may be sufficient to establish a NRI for PT and APTT, according to guidance documents provided by the Clinical and Laboratory Standards Institute (CLSI) [3, 4]. However, a larger number of normal individuals is recommended to establish an initial NRI, following which a smaller sample of normal individuals may be used for future verification purposes [1].
As an example, Figure 3 shows an initial (historical) NRI estimation for APTT testing using a dataset of nearly 80 normal individuals. This included one outlier result (Fig. 3a), which was removed to give the cleaner dataset used to derive the subsequent NRI. A statistical normality test showed the distribution to be near Gaussian, allowing parametric statistical assessment. For APTT testing, the NRI aims to capture the central 95 % of values, approximated by the mean ± 2 standard deviations (SD) (Fig. 3b). Logarithmic transformation can instead be used to normalize test data when they are non-parametric but fit a log distribution (e.g. Fig. 3c).
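A minimal sketch of this calculation, assuming illustrative APTT values; real NRI derivation would follow the CLSI guidance cited above.

```python
# Minimal sketch: deriving an APTT NRI as mean +/- 2 SD, with an optional log
# transform for non-Gaussian data. The APTT values below are illustrative only.
import numpy as np
from scipy import stats

aptt = np.array([29.1, 31.4, 33.0, 30.2, 35.6, 28.7, 32.3, 34.1])  # seconds

# Check approximate normality before using a parametric interval
_, p_value = stats.shapiro(aptt)

if p_value > 0.05:
    lower = aptt.mean() - 2 * aptt.std(ddof=1)
    upper = aptt.mean() + 2 * aptt.std(ddof=1)
else:
    # Log-transform, compute the interval, then back-transform
    log_aptt = np.log(aptt)
    lo = log_aptt.mean() - 2 * log_aptt.std(ddof=1)
    hi = log_aptt.mean() + 2 * log_aptt.std(ddof=1)
    lower, upper = np.exp(lo), np.exp(hi)

print(f"NRI: {lower:.1f}-{upper:.1f} s")
```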
If a NRI has been previously established by the laboratory or by the manufacturer of the APTT reagent using a specific reagent/instrument combination, the laboratory could use a process of transference to verify the ‘established’ NRI as fit for purpose. This may be done by establishing that a majority of samples in a small set of normal donors give values within the established NRI (e.g. >18 out of a set of 20 normal samples). Samples obtained from normal individuals or a dataset of normal patient test results may be used to assess a new lot of reagent to establish whether an existing NRI can be maintained when changing reagent lots.
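A minimal sketch of the transference check, assuming an illustrative NRI and 20 hypothetical normal-donor results:

```python
# Minimal sketch: verifying an established NRI by transference, counting how many
# of 20 normal-donor APTT results fall inside the NRI. Values are illustrative.
nri_low, nri_high = 27.0, 38.0   # established NRI in seconds
verification_samples = [28.5, 31.0, 36.2, 29.9, 33.4, 30.8, 37.1, 32.5,
                        34.0, 29.1, 31.9, 35.5, 30.2, 33.8, 28.9, 36.8,
                        32.1, 30.6, 34.7, 39.5]  # 20 normal donors

within = sum(nri_low <= v <= nri_high for v in verification_samples)
accept_nri = within > 18   # >18 of 20 within the NRI, as described above
print(f"{within}/20 within NRI; transference accepted: {accept_nri}")
```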
Factor (deficiency) sensitivity
Factor sensitivity of an APTT assay (i.e. a specific reagent/instrument combination) can be assessed in a number of ways. One method involves serial dilution of either in-house or commercially derived normal plasma into single-factor-deficient plasma, in order to generate a series of aliquots with decreasing factor levels. These samples are then tested by APTT and for factor level. The APTT reagent is regarded as sensitive down to the factor level at which the APTT reaches the upper limit of the NRI.
A more accurate process, though particularly difficult to perform outside of a hemophilia centre, is to establish APTT values from true patients with various known factor levels [1, 2] (e.g. Fig. 4).
As a general guide, if the APTT is used for screening factor deficiencies, then the patient APTT value should be above the NRI when their factor level is below around 30–40 U/dL for FVIII, FIX, and FXI.
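A minimal sketch of estimating the factor level at which the APTT crosses the upper NRI limit from such a dilution series; the factor levels, APTT values and NRI limit are illustrative only.

```python
# Minimal sketch: estimating the factor level at which the APTT crosses the upper
# NRI limit, from a dilution series of normal plasma into factor-deficient plasma.
# The factor levels, APTT values and NRI limit below are illustrative only.
import numpy as np

factor_level = np.array([100, 50, 40, 30, 20, 10])      # e.g. FVIII, U/dL
aptt = np.array([30.0, 33.0, 37.0, 39.5, 44.0, 52.0])   # seconds (rises as factor falls)
upper_nri = 38.0                                        # upper limit of the NRI, seconds

# Interpolate the factor level at which the APTT equals the upper NRI limit
# (np.interp requires the x-values, here APTT, to be increasing)
sensitivity_limit = np.interp(upper_nri, aptt, factor_level)
print(f"Reagent sensitive down to ~{sensitivity_limit:.0f} U/dL")
```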
Sensitivity of APTT to UFH
Despite the changing landscape of anticoagulation therapy with the addition of direct anti-Xa inhibitors (rivaroxaban and apixaban) and a direct thrombin inhibitor (dabigatran) [5, 6], both LMWH and UFH continue to be frequently used in clinical practice. In turn, the APTT continues to be a generally preferred method for UFH monitoring over anti-FXa, given the wide availability and relatively low cost of the assay. However, unlike the calibrated anti-FXa assay, APTT results are subject to variation between different instruments (be they based on optical or mechanical clot detection methods [7]), different APTT reagents (including variation between lots of the same reagent type) and the algorithms used on instruments for raw data processing. This poses a substantial problem with regard to historical recommendations to maintain patients on UFH at 1.5–2.5 times the ‘normal reference value’ (recommendations based on limited evidence [8]). Therapeutic ranges should therefore be defined with specific reference to the instrument/reagent combination used locally [9].
One ‘spiking method’ involves preparing samples containing known quantities of UFH diluted into normal pool plasma, which are then tested by APTT and anti-FXa methods, allowing an estimation of the APTT therapeutic interval [1]. However, variation in certain components of patient plasma, as well as the non-physiologically processed nature of the UFH used, can affect the interpretation of data obtained with this method. A better method involves ex vivo assessment of plasma obtained from patients on UFH therapy, with samples tested for both APTT and anti-FXa, and a UFH therapeutic range for the APTT then established to match the therapeutic range for anti-FXa (e.g. 0.3–0.7 U/mL). It is important to recognize that an individual’s APTT response to UFH is affected by many influences, including (but not limited to): antithrombin level; high or low levels of coagulation factors and proteins, such as von Willebrand factor or proteins released from endothelial cells or platelets, that compete with antithrombin for heparin binding; increased FVIII levels in the acute phase response; reduced FXII; and the presence of LA.
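A minimal sketch of reading an APTT therapeutic range off paired ex vivo APTT and anti-FXa results using a simple linear fit; the paired values are illustrative, and a real dataset would first be cleaned as described below.

```python
# Minimal sketch: deriving an APTT therapeutic range for UFH from paired ex vivo
# APTT and anti-FXa results, by reading off the APTT values corresponding to an
# anti-FXa range of 0.3-0.7 U/mL. The paired values below are illustrative only.
import numpy as np

anti_xa = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8])  # U/mL
aptt = np.array([35, 45, 55, 65, 74, 83, 92, 100])            # seconds

slope, intercept = np.polyfit(anti_xa, aptt, 1)   # simple linear fit
aptt_low = slope * 0.3 + intercept
aptt_high = slope * 0.7 + intercept
print(f"APTT therapeutic range ~ {aptt_low:.0f}-{aptt_high:.0f} s")
```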
To obtain a cleaner data set to establish UFH therapeutic ranges, the following steps can be undertaken during sample collection and processing [1].
• Ensure baseline PT, APTT and INR testing prior to commencement of UFH are within their NRIs.
• Exclude underfilled samples and samples with visible hemolysis or likely platelet activation, with release of the heparin-neutralizing platelet factor 4 (PF4).
• Exclude samples containing LMWH or other anticoagulants (e.g. VKAs, DOACs).
• Adhere to manufacturer guidelines with regards to the window from time of blood collection to testing.
• Double centrifuge samples when freezing them for batch testing (to remove residual platelets, which release PF4 and phospholipids on thawing).
• Accumulate data over a suitable time period to account for day-to-day test result variability.
• Aim for 30 or more data points.
• Appropriately dilute samples with anti-Xa activity above the test’s linearity limit.
• Remove data points reflecting ‘gross’ outliers.
LA sensitivity
The LA sensitivity of a particular APTT reagent can be assessed by testing a panel of LA-containing samples with different reagents, for example by comparing the mean clotting times obtained with each reagent.
Given that the APTT is a phospholipid-dependent assay, the test may be susceptible to prolongation in the presence of LA. However, differences in the phospholipid type and concentration between APTT reagents account for wide variation seen in the degree of prolongation of APTT, including due to LA. The LA sensitivity of the APTT reagent also has bearing on the use of APTT to monitor UFH and must inform the establishment of an algorithm to further investigate unexpectedly prolonged APTTs.
In one empirical method, an LA-sensitive assay (e.g. the dilute Russell viper venom time; dRVVT) is first used to assemble a set of LA-positive samples of various ‘strengths’. Different APTT reagents can then be used to test these samples, and the data for each sample can be plotted against the upper reference limit of the APTT for each reagent [1]. The ratio of the clotting time of each LA-positive sample (of varying strength) to the mean normal APTT derived from normal plasma samples is calculated, and the median of these ratios allows the different reagents to be ranked according to LA sensitivity. It then becomes clear which APTT reagents are most (versus least) sensitive to LA, and these can be selected according to the laboratory’s needs. For example, a laboratory may prefer an APTT reagent that is relatively LA ‘insensitive’, combined with good factor VIII/IX/XI and UFH sensitivity, if a general-purpose APTT screening reagent is required (e.g. a hospital laboratory monitoring UFH that wishes to avoid detecting LA in asymptomatic patients). Alternatively, a laboratory may select a pair of LA-sensitive and LA-insensitive APTT reagents if it wishes to assess for LA in symptomatic (thrombosis and/or pregnancy morbidity) patients.
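A minimal sketch of this ranking, assuming hypothetical reagents, mean normal APTTs and LA-positive clotting times:

```python
# Minimal sketch: ranking APTT reagents by LA sensitivity using the median ratio
# of LA-positive sample clotting times to the mean normal APTT for each reagent.
# Reagent names and clotting times are illustrative.
import statistics

mean_normal_aptt = {"reagent_A": 32.0, "reagent_B": 30.5}   # seconds
la_samples = {                                              # LA-positive samples, seconds
    "reagent_A": [45.0, 52.0, 61.0, 70.0],
    "reagent_B": [36.0, 39.0, 44.0, 50.0],
}

la_ratio = {
    reagent: statistics.median(ct / mean_normal_aptt[reagent] for ct in times)
    for reagent, times in la_samples.items()
}
# Higher median ratio = more LA sensitive; here reagent_A would rank above reagent_B
ranked = sorted(la_ratio, key=la_ratio.get, reverse=True)
print(ranked)
```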
Conclusion
Interpretation of a normal or a prolonged APTT must take into account both clinical context, including presence of anticoagulant therapy, as well as the methods and reagents used by the laboratory. The sensitivity of a particular APTT reagent to detect UFH therapy, LA and factor deficiencies has significant bearing on diagnostic assessment and therapy monitoring, and thus reflects essential knowledge for laboratory and clinical staff alike.
Figure 1. The activated partial thromboplastin time (APTT) assay measures the clot time to formation of fibrin via the contact factor pathway and is dependent on contact factors (FXII and above), and then FXI, FIX, FVIII, FX, FV, and FII. The APTT is also affected by vitamin K antagonists (VKAs; ‘W’), but more importantly is used to monitor unfractionated heparin (UFH; ‘H’) therapy and also to assess for potential hemophilia (FVIII, FIX or FXI deficiency). The APTT is also sensitive to the presence of other anticoagulants, including direct oral anticoagulants (DOACs) such as dabigatran (‘D’) and rivaroxaban (‘R’), and potentially also apixaban (‘A’) for some reagents. The APTT may also be utilized as part of a panel of tests to help assess for lupus anticoagulant (LA). (Modified from Favaloro EJ, et al. How to optimize activated partial thromboplastin time (APTT) testing: solutions to establishing and verifying normal reference intervals and assessing APTT reagents for sensitivity to heparin, lupus anticoagulant, and clotting factors. Semin Thromb Hemost 2019; 45: 22–35 [1].)
Figure 2. An algorithm that provides one recommended approach for the follow-up of an abnormal APTT. Always exclude an anticoagulant effect first – there is no point investigating a prolonged APTT associated with anticoagulant use. Then consider the patient’s history, or the clinical reason for the test order, both of which assist in terms of follow-up approach. APTT, activated partial thromboplastin time; FBC/CBC, full blood count (UK/Australia)/complete blood count (USA); DIC, disseminated intravascular coagulation; DOAC, direct oral anticoagulant; EDTA, ethylenediaminetetraacetic acid; F, factor; LA, lupus anticoagulant; PT, prothrombin time. (Modified from Favaloro EJ, et al. How to optimize activated partial thromboplastin time (APTT) testing: solutions to establishing and verifying normal reference intervals and assessing APTT reagents for sensitivity to heparin, lupus anticoagulant, and clotting factors. Semin Thromb Hemost 2019; 45: 22–35 [1].)
Table 1. The APTT test. A multipurpose and sensitive assay, but not specific for any individual parameter. List is not meant to be all inclusive.
DOACs, direct oral anticoagulants; VWD, von Willebrand disease.
*PT should also be prolonged if APTT is prolonged in the indicated setting.
(Modified from Favaloro EJ, et al. How to optimize activated partial thromboplastin time (APTT) testing: solutions to establishing and verifying normal reference intervals and assessing APTT reagents for sensitivity to heparin, lupus anticoagulant, and clotting factors. Semin Thromb Hemost 2019; 45: 22–35 [1].)
Figure 3. Historical data from our laboratory to illustrate the process of deriving a normal reference interval (NRI) for the APTT, using nearly 80 normal individual plasma samples. (a) APTT of all samples tested, shown as a dot plot; one clear outlier is shown as a red asterisk. (b) Data cleaned of outliers [i.e. in this case the single red asterisk sample in (a)]. (c) NRI estimate as mean ± 2 standard deviations (SDs) to provide approximate 95 % coverage. Bar graphs of parametric data processing and log-transformed data processing are shown. The NRI for this dataset approximates 27–38 sec. (Modified from Favaloro EJ, et al. How to optimize activated partial thromboplastin time (APTT) testing: solutions to establishing and verifying normal reference intervals and assessing APTT reagents for sensitivity to heparin, lupus anticoagulant, and clotting factors. Semin Thromb Hemost 2019; 45: 22–35 [1].)
Figure 4. Ex vivo heparin versus APTT evaluation. (a) Samples from all patients identified to be on heparin (as identified by our laboratory information system) and for which an APTT was performed at the time of evaluation are also tested for anti-FXa level. The APTT therapeutic range is that corresponding to a heparin level of 0.3–0.7 U/mL by anti-Xa. However, many data points in this figure do not reflect UFH alone. Some points may instead reflect low molecular weight heparin (e.g. likely to be the sample yielding an anti-Xa value close to 0.7 U/mL but with normal APTT) or alternatively UFH co-incident to FXII deficiency or LA, or else patients potentially transitioning from UFH to VKAs. These data points can be removed to yield a ‘cleaner’ data set, as shown in (b). (Modified from Favaloro EJ, et al. How to optimize activated partial thromboplastin time (APTT) testing: solutions to establishing and verifying normal reference intervals and assessing APTT reagents for sensitivity to heparin, lupus anticoagulant, and clotting factors. Semin Thromb Hemost 2019; 45: 22–35 [1].)
Disclaimer: The views expressed in this paper are those of the authors, and are not necessarily those of NSW Health Pathology.
References
1. Favaloro EJ, Kershaw G, Mohammed S, Lippi G. How to optimize activated partial thromboplastin time (APTT) testing: solutions to establishing and verifying normal reference intervals and assessing APTT reagents for sensitivity to heparin, lupus anticoagulant, and clotting factors. Semin Thromb Hemost 2019; 45: 22–35.
2. Kershaw G. Performance of activated partial thromboplastin time (APTT): determining reagent sensitivity to factor deficiencies, heparin, and lupus anticoagulants. Methods Mol Biol 2017; 1646: 75–83.
3. Defining, establishing, and verifying reference intervals in the clinical laboratory; proposed guideline—third edition. CLSI document C28–P3. Clinical and Laboratory Standards Institute (CLSI) 2008.
4. One-Stage Prothrombin time (PT) test and activated partial thromboplastin time (APTT) test; approved guideline—second edition. CLSI document H47-A2. CLSI 2008.
5. Favaloro EJ, McCaughan GJ, Mohammed S, Pasalic L. Anticoagulation therapy in Australia. Ann Blood 2018; 3: 48.
6. Lippi G, Mattiuzzi C, Adcock D, Favaloro EJ. Oral anticoagulants around the world: an updated state-of the art analysis. Ann Blood 2018; 3: 49.
7. Favaloro EJ, Lippi G. Recent advances in mainstream hemostasis diagnostics and coagulation testing. Semin Thromb Hemost. 2019; 45(3): 228–246.
8. Baluwala I, Favaloro EJ, Pasalic L. Therapeutic monitoring of unfractionated heparin – trials and tribulations. Expert Rev Hematol 2017; 10(7): 595–605.
9. Marlar RA, Clement B, Gausman J. Activated partial thromboplastin time monitoring of unfractionated heparin therapy: issues and recommendations. Semin Thromb Hemost 2017; 43(3): 253–260.
The authors
Julianne Falconer1 MBBS and Emmanuel J. Favaloro*1,2 PhD, FFSc (RCPA)
1Haematology, Institute of Clinical Pathology and Medical Research (ICPMR), NSW Health Pathology, Westmead Hospital, NSW, Australia.
2Sydney Centres for Thrombosis and Hemostasis, Westmead Hospital
*Corresponding author
E-mail: Emmanuel.Favaloro@health.nsw.gov.au