
An electronic nose for asthma diagnosis

An electronic nose consists of an array of chemical sensors for the detection of volatile organic compounds and an algorithm for pattern recognition. Breath analysis with an electronic nose has a high diagnostic performance for atopic asthma that can be increased when combined with measurement of fractional exhaled nitric oxide.

by Dr Paolo Montuschi

Several volatile organic compounds (VOCs) have been identified in the exhaled breath of healthy subjects and patients with respiratory disease by gas-chromatography/mass spectrometry (GC/MS) [1]. An electronic nose (e-nose) is an artificial system that generally consists of an array of chemical sensors for volatile detection and an algorithm for pattern recognition [2]. Several types of e-noses are available. An e-nose has been used for distinguishing between asthmatic and healthy subjects [3,4], between patients with asthma of different severity [3], between patients with lung cancer and healthy subjects [5], between patients with lung cancer and COPD [6], and between patients with asthma and COPD [7].
We compared the diagnostic performance of an e-nose with that of fractional exhaled nitric oxide (FENO), an independent method for assessing airway inflammation, and lung function testing in patients with asthma. We also investigated whether an e-nose could discriminate between asthmatic and healthy subjects, and sought to establish the best sampling protocol (alveolar air vs oro-pharyngeal/airway air) for e-nose analysis. The results presented here are from a previously published study [4].

Methods
Study subjects
Twenty-four healthy subjects and 27 Caucasian patients with intermittent or mild persistent atopic asthma were studied [Table 1]. All asthmatic patients had a physician-based diagnosis of asthma, and the diagnosis and classification of asthma were based on clinical history, examination and pulmonary function parameters according to current guidelines [8]. Patients had intermittent asthma with symptoms occurring twice a week or less often (step 1) or mild persistent asthma with symptoms occurring more often than twice a week (step 2), a forced expiratory volume in one second (FEV1) of 80% or more of the predicted value, and positive skin prick tests. Asthma patients were not taking any regular medication, but used inhaled short-acting β2-agonists as needed for symptom relief. Healthy subjects had no history of asthma or atopy, had negative skin prick tests and had normal spirometry.

All subjects were never-smokers, had no upper respiratory tract infections in the previous 3 weeks, and were not being treated with corticosteroids or anti-inflammatory drugs for asthma in the previous 4 weeks.

Study design
The study was cross-sectional. Subjects attended on one occasion for clinical examination, FENO measurement, e-nose analysis, lung function tests and skin prick testing. Informed consent was obtained from all participants. The study was approved by the Ethics Committee of the Catholic University of the Sacred Heart, Rome, Italy.

Pulmonary function
Spirometry was performed with a Pony FX spirometer (Cosmed, Rome, Italy) and the best of three consecutive manoeuvres was chosen.

Exhaled nitric oxide measurement

FENO was measured with the NIOX system (Aerocrine, Stockholm, Sweden) with a single breath on-line method at constant flow of 50 ml/sec according to American Thoracic Society guidelines [9].

Collection of exhaled breath
No food or drinks were allowed for at least 2 hours prior to breath sampling. Two procedures for collecting exhaled breath were followed to study the differences between total exhaled breath and alveolar breath [4]. Subjects were asked to inhale to total lung capacity and to exhale into a mouthpiece connected to a Tedlar bag through a three-way valve [3]. In the first sampling procedure, the first 150 ml of exhaled breath, considered dead space volume, were collected into a separate Tedlar bag and discarded [Fig. 1a]. The remaining exhaled breath, principally derived from the alveolar compartment, was collected and immediately analysed with the e-nose [4]. In the second sampling procedure, total exhaled breath was collected [Fig. 1b] [4].

Electronic nose
A prototype e-nose (Libranose, University of Rome Tor Vergata, Italy), consisting of an array of eight quartz microbalance gas sensors coated with molecular films of metallo-porphyrins, was used [4]. E-nose responses are expressed as frequency changes for each sensor [Fig. 2] and then analysed by pattern recognition algorithms [2]. Results were automatically adjusted for ambient VOCs, which were subtracted from the measurements.

Skin testing
Atopy was assessed by skin prick tests for common aeroallergens (Stallergenes, Antony, France).

Multivariate data analysis
A feed-forward neural network was used to classify the e-nose, FENO and spirometry data. A feed-forward neural network, a biologically inspired classification model, is formed by a number of processing units (neurons) organised in layers. The datasets were divided into a training set and a testing set: the first 27 measurements collected were used for training and the remaining 24 for testing.
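The workflow just described can be sketched in code. The following is an illustrative toy example only, not the study's actual network, data or architecture: it trains a small one-hidden-layer feed-forward network by gradient descent on synthetic 8-sensor readings, splitting the first 27 samples off for training and keeping the remaining 24 for testing, as in the study.

```python
import math
import random

random.seed(0)

# Synthetic stand-in for e-nose data: 8 sensor frequency shifts per subject.
# Class 0 = "healthy", class 1 = "asthma"; the class offset is purely illustrative.
def make_subject(label):
    base = 0.0 if label == 0 else 1.0
    return [base + random.gauss(0, 0.5) for _ in range(8)], label

data = [make_subject(i % 2) for i in range(51)]
train, test = data[:27], data[27:]   # first 27 for training, remaining 24 for testing

H, D = 4, 8  # hidden units, input sensors
w1 = [[random.gauss(0, 0.5) for _ in range(D)] for _ in range(H)]
w2 = [random.gauss(0, 0.5) for _ in range(H)]

def sigmoid(z):
    z = max(-60.0, min(60.0, z))  # clamp to avoid overflow in exp()
    return 1.0 / (1.0 + math.exp(-z))

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in w1]
    return h, sigmoid(sum(w * hi for w, hi in zip(w2, h)))

# Plain stochastic gradient descent on squared error (backpropagation).
lr = 0.5
for _ in range(200):
    for x, y in train:
        h, out = forward(x)
        d_out = (out - y) * out * (1 - out)
        for j in range(H):
            d_h = d_out * w2[j] * h[j] * (1 - h[j])
            w2[j] -= lr * d_out * h[j]
            for i in range(D):
                w1[j][i] -= lr * d_h * x[i]

correct = sum((forward(x)[1] > 0.5) == bool(y) for x, y in test)
print(f"test accuracy: {correct}/{len(test)}")
```

Evaluating only on data held out from training, as here, is what gives the reported diagnostic performances their "stringent quality control" character.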

Statistical analysis
FENO values were expressed as medians and interquartile ranges (25th and 75th percentiles), whereas spirometry values were expressed as mean ± SEM. The unpaired t-test and the Mann–Whitney U test were used to compare groups for normally distributed and non-normally distributed data, respectively. Correlation was expressed as a Pearson coefficient, and significance was defined as P<0.05.

Results
Electronic nose
The best results were obtained when e-nose analysis was performed on alveolar air rather than total exhaled breath [Table 2]. Diagnostic performance was determined as the number of correct identifications of asthma diagnosis in the test dataset. The combination of e-nose analysis of alveolar air and FENO had the highest diagnostic performance for asthma (95.8%). The e-nose alone (87.5%) had a discriminating capacity higher than that of FENO (79.2%), spirometry (70.8%), the combination of FENO and spirometry (83.3%), and the combination of e-nose analysis of total exhaled breath and FENO (83.3%) [Fig. 3].
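The percentages quoted are consistent with whole-number counts of correct classifications out of the 24 test-set measurements; a quick check:

```python
# Each reported diagnostic performance maps onto a count of correct
# classifications among the 24 test measurements.
for correct in (23, 21, 20, 19, 17):
    print(f"{correct}/24 = {correct / 24:.1%}")
# 23/24 = 95.8%, 21/24 = 87.5%, 20/24 = 83.3%, 19/24 = 79.2%, 17/24 = 70.8%
```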

Exhaled nitric oxide
Median FENO values were higher in asthmatic patients than in healthy subjects [37.6 (26.0–61.5) ppb vs 13.4 (10.0–19.9) ppb, respectively; P<0.0001].
Lung function tests

Both study groups had normal FEV1 values [Table 1]. Asthmatic patients had lower absolute (P = 0.032) and percentage of predicted FEV1 values (P = 0.004) than healthy subjects [Table 1]. Asthmatic patients had lower absolute (P = 0.003) and percentage of predicted forced expiratory flow between 25% and 75% of forced vital capacity (FEF25%–75%) (P = 0.002) than healthy subjects [Table 2].

Correlation between electronic nose, FENO and lung function tests
E-nose, FENO and lung function testing data were not correlated in either asthma or healthy control group.

Discussion
The original aspects of our study are:
1) the comparison between an e-nose and FENO, in addition to spirometry;
2) the comparison between total and alveolar exhaled air;
3) the analysis of data based on a neural network that included a training and a test analysis performed in two separate datasets for stringent quality control.

Our study indicates that an e-nose might be useful for asthma diagnosis, particularly in combination with FENO. Spirometry had the lowest diagnostic performance, consistent with the well-maintained lung function of patients with intermittent and mild persistent asthma. Our study confirms that FENO has a good diagnostic performance for asthma and suggests the possibility of combining different non-invasive techniques to achieve a greater diagnostic performance for asthma.

However, large, adequately powered studies are required to establish the diagnostic performance of the e-nose, FENO and lung function testing in asthma patients. Ascertaining whether an e-nose could be used for screening asthmatic patients requires large prospective studies. Also, the e-nose is not suitable for identifying and quantifying individual breath VOCs, for which GC/MS is required.

Asthma is principally characterized by airway inflammation. It may therefore seem surprising that the best results with the e-nose were obtained when collecting alveolar air rather than total exhaled breath, which includes exhaled breath from the airways. This might reflect the contribution of oro-pharyngeal air, which might introduce confounding factors that make e-nose analysis less reflective of what occurs within the respiratory system [10]. Moreover, the results of e-nose analysis of alveolar air could partially reflect the production of VOCs within the peripheral airways (mixed airways/alveolar air) owing to significant inter-individual variability in dead space volume.

The lack of correlation between the e-nose results and those from FENO might indicate that these techniques reflect different aspects of airway inflammation. Formal studies to ascertain whether the e-nose could be used for assessing and monitoring airway inflammation in asthmatic patients are warranted. The e-nose is not suitable for ascertaining the cellular source of breath VOCs. Persistent airway inflammation can modify the metabolic pathways in patients with asthma. As patients included in our study were not on regular anti-inflammatory drugs for asthma, we were unable to assess the effect of pharmacological treatment on breath VOCs, which requires controlled studies. Likewise, the effect of atopy on e-nose classification of asthma patients has to be addressed in future studies.

Validation of the classification model is essential. In our study, two different datasets for training and testing, obtained in different periods of time, were used. In this way, the predictive capacity of the classification model is more representative of a real-life situation.

E-nose analysis is a non-invasive technique that is potentially applicable to respiratory medicine. Several methodological issues, including optimisation and standardisation of sample collection, transfer and storage of samples, use of calibration VOC mixtures, and qualitative and quantitative GC/MS analysis, have to be addressed.

In conclusion, an e-nose discriminates between asthmatic and healthy subjects, and its use in combination with FENO increases its discriminatory ability. Large studies are required to establish the asthma diagnostic performance of the e-nose. Whether this integrated non-invasive approach will translate into earlier asthma diagnosis has still to be clarified.

Abbreviations
FEF25%–75%, forced expiratory flow at 25% to 75% of forced vital capacity; FENO, fractional exhaled nitric oxide; FEV1, forced expiratory volume in one second; FVC, forced vital capacity; GC/MS, gas chromatography/mass spectrometry; PEF, peak expiratory flow; VOC, volatile organic compound.

Acknowledgements
This study was supported by Merck Sharp and Dohme, and the Catholic University of the Sacred Heart.

References
1. Phillips M, Herrera J, et al. Variation in volatile organic compounds in the breath of normal humans. J Chromatogr B Biomed Sci Appl 1999; 729: 75–88.
2. Montuschi P, Mores N, et al. The electronic nose in respiratory medicine. Respiration (DOI: 10.1159/000340044, in press).
3. Dragonieri S, et al. An electronic nose in the discrimination of patients with asthma and controls. J Allergy Clin Immunol 2007; 120: 856–862.
4. Montuschi P, et al. Diagnostic performance of an electronic nose, fractional exhaled nitric oxide and lung function testing in asthma. Chest 2010; 137: 790–796.
5. Machado R, et al. Detection of lung cancer by sensor array analyses of exhaled breath. Am J Respir Crit Care Med 2005; 171: 1286–1291.
6. Dragonieri S, et al. An electronic nose in the discrimination of patients with non-small cell lung cancer and COPD. Lung Cancer. 2009; 64: 166–170.
7. Fens N, et al: Exhaled breath profiling enables discrimination of chronic obstructive pulmonary disease and asthma. Am J Respir Crit Care Med 2009; 180: 1076–1082.
8. National Asthma Education and Prevention Program: Expert panel report III. Guidelines for the diagnosis and management of asthma. Bethesda, MD: National Heart, Lung, and Blood Institute, 2007; 1–61 (NIH publication no. 08-5847). Available at: www.nhlbi.nih.gov.
9. Recommendations for standardized procedures for the on-line and off-line measurement of exhaled lower respiratory nitric oxide and nasal nitric oxide in adults and children-1999: official statement of the American Thoracic Society 1999. Am J Respir Crit Care Med 1999; 160: 2104–2117.
10. van den Velde S, et al. Differences between alveolar air and mouth air. Anal Chem 2007; 79: 3425–3429.

The author
Paolo Montuschi, MD
Department of Pharmacology, Faculty of Medicine
Catholic University of the Sacred Heart
Largo F. Vito 1, 00168 Rome, Italy
E-mail: pmontuschi@rm.unicatt.it


Warfarin: a case for pharmacogenomics testing

The ability to use genetic information to inform clinical decision making has emerged as a new tool in clinical practice, with noteworthy examples across many areas of medicine. Cardiology and anticoagulation in particular have led the way in the translation of genetic findings into actionable clinical recommendations, spurred by the FDA's addition of genetically guided dosing to the drug label for warfarin. This review covers the pharmacogenomics of warfarin therapy.

by Dr Minoli Perera

Pharmacogenomics is primarily aimed at identifying genetic variation that influences inter-individual differences in drug response. The guiding principle is “the right drug, at the right dose for the right person”. Its application promises to enable targeted drug administration, improve therapeutic outcomes, and inform drug development. Pharmacogenomic insights have also improved our understanding of the underlying pathways and mechanisms behind adverse drug reactions. Such adverse reactions account for approximately 100 000 deaths per year in the US, and markedly increase healthcare costs. Advances made over the last 30 years in molecular biology, molecular medicine, and genomics have had a major impact on the development of pharmacogenomics.
Currently, a variety of approaches are used to identify genetic variants associated with drug response. Commonly used strategies include the candidate gene approach and genome-wide association studies (GWAS). Candidate gene studies investigate single nucleotide polymorphisms (SNPs) that are correlated with drug response; they are usually restricted to genes or SNPs that have been shown to be involved in the pathway of drug action or drug clearance. Genome-wide association studies genotype up to 5 million SNPs spaced throughout the genome to identify, in an unbiased fashion, genetic variants associated with the drug phenotype. Each of these methods has advantages and disadvantages. Genome-wide studies comprehensively cover the entire genome, but their power to detect moderate associations is greatly limited by the multiple testing burden, which requires correction for false-positive associations. The candidate gene approach narrows the focus to a few important SNPs and therefore has higher power, but may miss the real causative SNP and requires a priori knowledge for the selection of SNPs/genes to study.
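The power trade-off can be made concrete with a Bonferroni correction, which divides the desired study-wide false-positive rate by the number of independent tests. The SNP counts below are illustrative round numbers, not figures from any particular study:

```python
# Bonferroni-corrected per-test significance thresholds (illustrative counts).
alpha = 0.05                 # desired study-wide false-positive rate
n_snps_gwas = 1_000_000      # roughly the conventional number of independent
                             # common variants assumed for a GWAS
n_snps_candidate = 20        # a focused candidate-gene panel

gwas_threshold = alpha / n_snps_gwas            # ~5e-8, the usual
                                                # "genome-wide significance"
candidate_threshold = alpha / n_snps_candidate  # 0.0025: far easier to reach,
                                                # hence the higher power
print(gwas_threshold, candidate_threshold)
```

The five-orders-of-magnitude gap between the two thresholds is precisely the "multiple testing burden" that limits GWAS power for moderate associations.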

Both these methods have yielded important clinical findings that can immediately be used to reduce the incidence of adverse effects (many of which have been added to drug labels). Notable examples such as the SLCO1B1*5 polymorphism (associated with myopathy during statin use) and CYP2C19*2 (associated with clopidogrel non-response) have shown clinically meaningful outcomes related to genetic variants. However, the translation of these findings into practice has been slow, leaving many clinicians wondering about the utility of this new technology.

The “test case” for pharmacogenetics was thought to be pharmacogenetically guided warfarin dosing. In this review we will cover the genetic polymorphisms affecting warfarin dose requirements and the currently available diagnostic tests, as a case study for the implementation of pharmacogenetics in the clinic.

Genes that affect warfarin dose
Numerous studies, predominantly conducted in Caucasian and Asian populations, demonstrate that the CYP2C9 and VKORC1 genotypes contribute significantly to warfarin dose variability [1–3]. The role of these gene products can be seen in Figure 1. The CYP2C9 enzyme metabolises the more active S-enantiomer of warfarin to inactive 7-hydroxywarfarin. Warfarin inhibits the enzyme VKOR (encoded by the gene VKORC1), preventing the conversion of vitamin K epoxide to its reduced form, which is necessary for activation of the clotting factors II, VII, IX and X. Thus, SNPs in the CYP2C9 gene affect warfarin pharmacokinetics, whereas variation in the VKORC1 gene affects warfarin pharmacodynamics.

The most extensively studied CYP2C9 variants are the CYP2C9*2 and CYP2C9*3 alleles, which lead to significant reductions in CYP2C9 activity. Compared with people without these alleles, carriers of the *2 or *3 alleles have S-warfarin clearance reduced to between 40% and 90% of normal levels. As a result, significantly lower doses are usually needed in individuals with a CYP2C9*2 or CYP2C9*3 allele. The CYP2C9 genotype is also implicated in the risk of bleeding during warfarin therapy, especially during the warfarin initiation period [4].

The VKORC1 genotype was originally recognised for causing warfarin resistance through mutations in the gene-coding region. More recently, common VKORC1 SNPs occurring in gene-regulatory regions and underlying usual warfarin dose variability were discovered [3]. Two such SNPs, one in the promoter region (-1639G>A) and one in intron 1 (1173C>T), show the strongest association and possible functional effects [5]. Thus, the majority of warfarin pharmacogenetic studies have focused on one of these two SNPs, which are in strong linkage disequilibrium across populations (meaning they are inherited together and strongly associated with each other). This means that only one of these SNPs needs to be taken into account for pharmacogenetic dosing of warfarin. Most investigators have chosen VKORC1 -1639G>A as the predictive SNP in warfarin pharmacogenetic studies. This SNP explains approximately 20–28% of the overall variability in dose requirements in Caucasians, but only 5–7% of the variability in African–Americans, mainly due to the difference in allele frequency between populations [6, 7]. Unlike CYP2C9, the VKORC1 genotype does not appear to affect the risk of bleeding with warfarin treatment [4].

These findings have been confirmed in several genome-wide association studies (GWAS) in Caucasian and Asian individuals, showing that the VKORC1 -1639G>A, CYP2C9*2 and CYP2C9*3 polymorphisms are the primary genetic determinants of warfarin dose requirements in these populations. The combination of VKORC1 -1639G>A, CYP2C9 (*2 and *3) and clinical factors (e.g. age, sex, weight and amiodarone use) explains approximately 55% of the total variance in warfarin maintenance dose in Caucasians, but only about 25% among African–Americans. With the exception of the CYP4F2 genotype, found in a GWAS of Swedish patients [8], no other genetic variant has met genome-wide significance for association with warfarin dose requirements. Both genetic and non-genetic variables have been included in dosing algorithms that can be used to predict dose, such as WarfarinDosing.org.
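Dosing algorithms of this kind are essentially regression models on the square root of the weekly dose, combining allele counts with clinical covariates. The sketch below shows the general shape only; every coefficient is invented for illustration and none are the published IWPC or Gage weights:

```python
# Illustrative genotype-plus-clinical warfarin dosing model. The structure
# (a linear model on the square root of weekly dose) mirrors published
# algorithms, but all coefficients here are made up for demonstration.
def predict_weekly_dose_mg(age_years, vkorc1_a_alleles, cyp2c9_star2,
                           cyp2c9_star3, weight_kg, on_amiodarone):
    sqrt_dose = 7.0                          # hypothetical intercept
    sqrt_dose -= 0.25 * (age_years // 10)    # dose falls with age
    sqrt_dose -= 0.85 * vkorc1_a_alleles     # VKORC1 -1639 A allele count (0-2)
    sqrt_dose -= 0.50 * cyp2c9_star2         # CYP2C9*2 allele count
    sqrt_dose -= 0.90 * cyp2c9_star3         # CYP2C9*3 allele count
    sqrt_dose += 0.01 * weight_kg            # dose rises with body weight
    sqrt_dose -= 0.60 * (1 if on_amiodarone else 0)  # interacting drug
    return max(sqrt_dose, 0.0) ** 2

# A VKORC1 AA, CYP2C9 *1/*3 patient gets a lower predicted dose than an
# otherwise identical wild-type patient.
wild_type = predict_weekly_dose_mg(50, 0, 0, 0, 70, False)
variant = predict_weekly_dose_mg(50, 2, 0, 1, 70, False)
print(f"wild type: {wild_type:.1f} mg/week, variant: {variant:.1f} mg/week")
```

Modelling the square root of the dose keeps predictions non-negative and reflects the right-skewed distribution of maintenance doses; the real algorithms add further covariates (e.g. race, enzyme inducers) with validated weights.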

Warfarin genetic testing and guidelines
Insurance companies consider genetic testing for variants in CYP2C19 related to clopidogrel response and for the HLA-B*1502 allele for prediction of adverse effects related to carbamazepine as “medically necessary” and may therefore cover the cost of these tests. The same does not hold for warfarin testing, which is considered investigational and will only be covered in the context of a clinical trial.

In 2007, the US Food and Drug Administration modified the package insert for warfarin to include information on the relationship of safe and effective dosage to SNPs in CYP2C9 and VKORC1, including a table of recommended doses for each genotype combination [Table 1]. These recommendations give the range of doses that should be considered when dosing a patient who is a carrier of any of the tested SNPs. However, there is increasing evidence that alleles beyond CYP2C9*2 and *3 and VKORC1 -1639G/A may play a role in warfarin dose response. These SNPs are not included in the FDA dose recommendations, and not all tests cover all of these additional variants.

Currently, four warfarin pharmacogenetic tests are available as in vitro diagnostic devices [Table 2]. All of these tests genotype three loci: CYP2C9*2, CYP2C9*3 and either VKORC1 -1639G/A or 1173C/T (which give equivalent information because of the aforementioned linkage disequilibrium in all populations), with some including other known genetic variants that are associated with warfarin dose. All of the tests can be completed within 8 hours, including DNA extraction, with the fastest providing genotype results in less than 2 hours.

The Clinical Pharmacogenetics Implementation Consortium (CPIC) recently published guidelines on how to interpret and apply genetic test results to adjust warfarin doses [9]. These guidelines do not address when to order a genetic test, but rather how to dose warfarin when genetic test results are available. The guidelines strongly support the use of genetic information to guide warfarin dosing when genotype is known and recommend using either the International Warfarin Pharmacogenetics Consortium (IWPC) or Gage algorithm to do so.

Although the availability of FDA-cleared devices for warfarin pharmacogenetic testing makes genotype-guided warfarin initiation possible, several barriers to clinical adoption remain. First, many medical centres do not have warfarin pharmacogenetic testing available. In a recent survey, only 20% of hospitals in North America had testing available on site, suggesting that the majority rely on outside commercial clinical laboratories. This outsourcing may make genotype-guided warfarin initiation impractical because of turnaround times of 3–7 days. Second, no professional organisation endorses warfarin pharmacogenetic testing in its guidelines because of the lack of clinical utility data. Inclusion of a testing recommendation in professional guidelines has been identified as a factor influencing reimbursement of new technology. As such, the Centers for Medicare and Medicaid Services (CMS) and many commercial insurance plans generally do not reimburse the cost of testing ($300–500). Because of these barriers, warfarin pharmacogenetic testing is performed mainly for research purposes and for patients willing to pay the cost.

Future perspective and conclusions
There are substantial and convincing data supporting the clinical and analytic validity of warfarin pharmacogenetics. The CYP2C9 and VKORC1 genes are the primary determinants of warfarin dose requirements, and several FDA-cleared tests are available for CYP2C9 and VKORC1 genotyping. However, genotype-guided warfarin dosing has not yet become a reality in most medical centres despite the wealth of data supporting genetic influences on warfarin dose requirements. Many clinicians and third-party payers are awaiting evidence of clinical utility and cost-effectiveness before adopting genetic testing for anticoagulation management in the clinic setting. Results from ongoing clinical trials (such as the NIH-sponsored COAG trial) are expected to address these issues and will likely determine the course of genotype-guided anticoagulant therapy. Whether pharmacogenetics will have a role in treatment with newer anticoagulant agents has yet to be determined. However, the pharmacogenetics of these anticoagulants could be of great importance given the lack of routine monitoring parameters for these agents.

References
1. Wadelius M, et al. Blood 2009; 113: 784–792.
2. Klein TE, et al. N Engl J Med 2009; 360: 753–764.
3. Rieder MJ, et al. N Engl J Med 2005; 352: 2285–2293.
4. Limdi NA, et al. Clinical pharmacology and therapeutics 2008; 83: 312–321.
5. Wang D, et al. Blood 2008; 112: 1013–1021.
6. Cavallari LH, et al. Clin Pharmacol Ther 2010; 87: 459–464.
7. Perera MA, et al. Clin Pharmacol Ther 2011; 89: 408–415.
8. Takeuchi F, et al. PLoS Genet 2009; 5: e1000433.
9. Johnson JA, et al. Clinical pharmacology and therapeutics 2011; 90: 625–629.
10. Coumadin package insert. 2007. (Accessed October, 2007, at http://www.bms.com/cgi-bin/anybin.pl?sql=PI_SEQ=91.)

The author
Minoli A Perera, PharmD., PhD
Knapp Center for Biomedical Discovery
Room 3220B, University of Chicago, 900 E.
57th Street, Chicago, IL 60637, USA
E-mail: mperera@bsd.uchicago.edu


Managing chronic disease: do clinical labs hold the key?

Chronic diseases are placing an increasingly heavy burden on the healthcare systems of both developed and emerging countries. Together with renewed prevention strategies based on systematic and coordinated approaches, clinical laboratories will have an essential role to play with the advent of new biomarkers and the development of e-health systems.

Chronic diseases are acknowledged to be one of the biggest challenges for healthcare systems. Traditionally, chronic diseases were non-communicable. According to World Health Organization (WHO) data [1], they comprise four major groups – cardiovascular diseases, cancers, chronic respiratory diseases and diabetes – as well as some neuropsychiatric disorders and arthritis. More recently, an increase in survival rates for infectious and genetic diseases has led to expanding the definition to certain communicable diseases (such as HIV/AIDS) as well as genetic disorders like cystic fibrosis.

Attention to chronic diseases has been growing, largely due to three factors:
1. Ageing populations.
2. Early detection, or ‘secondary prevention’.
3. E-health – the possibility offered by sophisticated at-home monitoring and timely treatment.

Ageing populations
The elderly are far more susceptible to chronic disease. In the US, some 10% of the beneficiaries of Medicare, almost all with chronic disease, account for three-quarters of its budget. [2] Per capita spending is 3–10 times higher for older adults with chronic diseases than for those without. [3] In Europe, the EU Council has noted the “enormous burden” posed by chronic diseases and has warned that the next decade (2011–2020) will see this grow further due to ageing populations. [4]

Early detection
The early detection of chronic disease has been revolutionized by virtue of innovative and ever-faster diagnostic techniques in clinical laboratories. Clinical laboratories have, for some years, taken the lead in reducing the gap between the evolution of a chronic disease and interventional treatment, both at home and in the hospital.

In 2007, a report by the influential Milken Institute think-tank made a powerful argument to include prevention and early detection, rather than treatment alone, in the US debate on funding healthcare. The Milken report was titled ‘An Unhealthy America: The Economic Burden of Chronic Disease’. [5] It was one of the most ambitious attempts to quantify the reduction in case burden that could be achieved by such strategic reorientation: a drop by as many as 40 million cases of chronic diseases in the year 2023, in the US alone. At the time of the report’s launch, former US Surgeon General Richard Carmona noted the biggest problem with the present healthcare system was that it waited for people to get sick and then treated them at high cost.
   
The story is similar in Europe. Though EU-wide statistics do not yet exist, in the UK, half of hospital bed day use is accounted for by only 2.7% of all medical conditions, most of which are chronic diseases. [6] The EU Commission has called for technology-driven strategies to permit both early detection and timely monitoring of chronic disease – and do this in the context of healthy ageing.

As in the US, much European thinking about managing the burden of chronic disease involves e-Health, especially in the context of structured programmes of home care for patients. In January 2007, a major EU Commission study called “Healthy Ageing: Keystone for a Sustainable Europe” [7] approvingly highlighted a Swedish program called ‘Preventive Home Visits’ as leading to both a decrease in GP visits and lower mortality. It called for promoting and using such best-of-class practices across the EU.

E-health and clinical laboratories
All such plans essentially consist of the remote acquisition of patient data by lower-skilled and mobile personnel. The data are transferred in real or near-real time for remote interpretation at a clinical laboratory, followed by consultation with a physician (e.g. to modify dosage or change medicines) or transfer of the patient to a hospital for intervention.

The role of the clinical laboratory in e-Health is already advanced in telepathology. Though some telepathology efforts have aimed at remote manipulation of diagnostic equipment, the more proven approach has been to transmit images from a slide. Such systems have been in use since the mid-1990s, especially in sparsely populated areas such as parts of Canada and the north-western US, and in Norway and Sweden. France’s RESINTEL was, however, one of the first systems to establish that telepathology was at least as reliable as a physical slide examination, in a transatlantic pilot project. [8]

The largest application for telepathology has so far been in cytology. Nevertheless, microbiologists have been remotely interpreting gram stains, and hematologists have reported success with blood films.

Biomarkers: promises and challenges
The next frontier is likely to be biomarkers – pre-symptomatic signals of early disease states, detectable in blood/serum. In 2011, an article by 61 healthcare experts from Europe, the US, Brazil, Russia, India, China and some other countries called for a systemic approach to combat chronic disease, with a roadmap “for predictive, preventive, personalized and participatory (P4) medicine.” [9] The core of the proposal is to systematically identify biomarkers, which would then (progressively) be used to chart out a matrix of co-morbidities, disease severity and progression – including the critical trigger signals which predict the occurrence of abrupt transitions in the stages of a chronic disease.
   
The authors of the above paper cite an in-depth study on the clinical impact of telemedicine in four major chronic diseases – diabetes, asthma, heart failure and hypertension, [10] and propose that continuous monitoring of individual clinical histories and their development would be a key source of primary data, to build up a robust and extensive knowledge management infrastructure.

The role of clinical laboratories in much of the above system – from biomarker discovery to the monitoring of patients – is evident. At the moment, tests on the bulk of approved biomarkers (such as Oncotype DX and Trofile) are conducted in large reference laboratories. However, a great deal of research is also being directed at tests for use at home or at the point of care; for example, CRP (C-reactive protein) and the hormone procalcitonin are biomarkers which differentiate between bacterial and viral pneumonia in less than an hour, reducing the precautionary use of antibiotics.

Nevertheless, there is still some way to go before biomarkers and systemic/personal approaches to medication and treatment of chronic disease become commonplace. Most barriers are regulatory [see box on this page], and are a consequence of the relative novelty of biomarkers – and their potentially sweeping impact.

In the light of this, the challenge for clinical laboratories will be to develop acceptable technical standards for the use of biomarkers, jointly with regulators and manufacturers. Clearly, given the massive challenge posed by chronic diseases in the decades ahead, any serious solution will have to involve a combination of biomarker-based personalized medicine, at-home care and clinical laboratories.

References
1. http://www.who.int/nmh/Actionplan-PC-NCD-2008.pdf
2. Berk ML, Monheit AC. The Concentration of Health Expenditures: An Update. Health Affairs 1992; 11(4): 145–149.
3. Fishman P, et al. Chronic Care Costs in Managed Care. Health Affairs 1997; 16 (3): 239–247.
4. http://www.consilium.europa.eu/uedocs/cms_Data/docs/pressdata/en/lsa/118282.pdf
5. http://www.milkeninstitute.org/healthreform/pdf/AnUnhealthyAmericaExecSumm.pdf
6. Chronic Disease Management – a compendium of information. UK Department of Health, May 2004.
7. http://ec.europa.eu/health/archive/ph_information/indicators/docs/healthy_ageing_en.pdf
8. http://pubmedcentralcanada.ca/pmcc/articles/PMC2579163/pdf/procascamc00009-0625.pdf
9. http://genomemedicine.com/content/3/7/43#B46
10. Pare G, Moqadem K, Pineau G, St-Hilaire C. Clinical effects of home telemonitoring in the context of diabetes, asthma, heart failure and hypertension: a systematic review. J Med Internet Res 2010; 12: e21.
11. http://ec.europa.eu/research/health/pdf/biomarkers-for-patient-stratification_en.pdf
12. http://www.phgfoundation.org/file/3998/

Scientific literature: autoimmunity

There are many peer-reviewed papers covering autoimmunity, and it is frequently difficult for healthcare professionals to keep up with the literature. As a special service to our readers, CLI presents a few key literature abstracts from the clinical and scientific literature chosen by our editorial board as being particularly worthy of attention.

Unraveling multiple MHC gene associations with systemic lupus erythematosus: model choice indicates a role for HLA alleles and non-HLA genes in Europeans

Morris DL et al. Am J Hum Genet. 2012; doi: 10.1016/j.ajhg.2012.08.026.

In order to determine the association with both SNPs and classical human-leukocyte-antigen (HLA) alleles, a meta-analysis of the major-histocompatibility-complex (MHC) region in systemic lupus erythematosus (SLE) was performed. Results from six studies and well-known out-of-study control data sets were combined, providing 3,701 independent SLE cases and 12,110 independent controls of European ancestry. The study used genotypes for 7,199 SNPs within the MHC region and for classical HLA alleles (typed and imputed). The results from conditional analysis and model choice with the use of the Bayesian information criterion showed that the best model for SLE association includes both classical loci (HLA-DRB1*03:01, HLA-DRB1*08:01, and HLA-DQA1*01:02) and two SNPs, rs8192591 (in class III and upstream of NOTCH4) and rs2246618 (MICB in class I). The authors’ approach was to perform a stepwise search from multiple baseline models deduced from a priori evidence on HLA-DRB1 lupus-associated alleles, a stepwise regression on SNPs alone, and a stepwise regression on HLA alleles. This enabled them to identify a model that was a much better fit to the data than one identified by simple stepwise regression either on SNPs alone (Bayes factor (BF) > 50) or on classical HLA alleles alone (BF > 1,000).


Cellular targeting in autoimmunity

Rogers JL et al. Curr Allergy Asthma Rep. 2012; doi: 10.1007/s11882-012-0307-y.

Many biologic agents that were first approved for the treatment of malignancies are now being actively investigated and used in a variety of autoimmune diseases such as rheumatoid arthritis (RA), antineutrophil cytoplasmic antibody (ANCA)-associated vasculitis, systemic lupus erythematosus (SLE), and Sjogren’s syndrome. The relatively recent advance of selective immune targeting has significantly changed the management of autoimmune disorders and in part can be attributed to the progress made in understanding effector cell function and their signalling pathways. This review discusses the recent FDA-approved biologic therapies that directly target immune cells as well as the most promising investigational drugs affecting immune cell function and signalling for the treatment of autoimmune disease.

Mechanisms of premature atherosclerosis in rheumatoid arthritis and lupus

Kahlenberg JM & Kaplan MJ. Annu Rev Med. 2012; doi: 10.1146/annurev-med-060911-090007.

Rheumatoid arthritis (RA) and systemic lupus erythematosus (SLE), the two most common systemic autoimmune disorders, have both unique and overlapping manifestations. One feature they share is a significantly enhanced risk of atherosclerotic cardiovascular (CV) disease that significantly contributes to morbidity and mortality. The primary mechanisms that drive CV damage in these diseases remain to be fully characterized, but recent discoveries indicate that distinct inflammatory pathways and immune dysregulation characteristic of RA and SLE are likely to play prominent roles. This review focuses on analysing the major mechanisms and pathways that are potentially implicated in the acceleration of atherothrombosis and CV risk in SLE and RA, as well as in the identification of putative preventive strategies that may mitigate vascular complications in systemic autoimmunity.

The role of epigenetic mechanisms and processes in autoimmune disorders

Greer JM & McCombe PA. Biologics 2012; 6: 307–327.

The lack of complete concordance of autoimmune disease in identical twins suggests that non-genetic factors play a major role in determining disease susceptibility. This review considers how epigenetic mechanisms could affect the immune system and effector mechanisms in autoimmunity and/or the target organ of autoimmunity and thus affect the development of autoimmune diseases. The authors also discuss the types of stimuli that lead to epigenetic modifications and how these relate to the epidemiology of autoimmune diseases and the biological pathways operative in different autoimmune diseases. Increasing our knowledge of these epigenetic mechanisms and processes will increase the prospects for controlling or preventing autoimmune diseases in the future through the use of drugs that target the epigenetic pathways.


Autoantibody diagnostics in glomerulonephritis

The determination of autoantibodies is an important component in the diagnosis and differentiation of glomerular disease. Key analyses include antibodies against phospholipase A2 receptors (anti-PLA2R), the glomerular basement membrane (anti-GBM), neutrophil granulocyte cytoplasm (ANCA), double-stranded DNA (anti-dsDNA) and nucleosomes (ANuA). With these tests autoimmune reactions can be identified as causative factors of renal disease.

by Dr Jacqueline Gosink

Glomerulonephritis (GN) is an inflammation of the blood-filtering structures of the kidneys (glomeruli) which can lead to kidney failure if left untreated. The disease is associated with the symptom complexes nephritic syndrome and nephrotic syndrome. Nephritic syndrome is characterised by hematuria, mild to moderate proteinuria and hypertension and is observed in diseases such as post-infectious GN, lupus nephritis, rapidly progressive GN and IgA nephropathy. Nephrotic syndrome combines heavy proteinuria, hypoalbuminemia, hyperlipidemia and edema and is typical of membranous GN, minimal change GN and focal segmental glomerulosclerosis.

Because of the wide range of potential causes, the diagnosis of GN can be difficult. The diagnostic process is based on clinical examination, biopsy, and laboratory tests on urine and blood. The serological analysis of specific autoantibodies allows autoimmune forms of GN to be identified and distinguished from nephropathies of other origins, for example hereditary conditions, infections, drug intoxication, electrolyte or acid-base disturbances, diabetes and hypertension.

Autoantibodies in GN may be directed against specific renal targets, such as PLA2R or the GBM, resulting in diseases that predominantly injure the kidneys. Or they may be non-organ-specific, for example ANCA, anti-dsDNA or ANuA. Non-organ-specific autoantibodies cause damage to a wide variety of organs. Thus, GN may represent just one manifestation of a complex systemic autoimmune disease, for example systemic lupus erythematosus (SLE) or ANCA-associated vasculitis (AAV).

Anti-PLA2R antibodies
Autoantibodies against PLA2R are a new and highly specific marker for primary membranous glomerulonephritis (MGN), also known as idiopathic membranous nephropathy. Primary MGN is a chronic inflammatory autoimmune disease of the glomeruli and is one of the leading causes of nephrotic syndrome in adults. It is distinguished from secondary MGN, which is triggered by an underlying disease such as a malignant tumour, an infection, drug intoxication or another autoimmune disease such as SLE. Primary MGN accounts for 70-80% of cases of MGN, while the secondary form comprises around 20-30%. Clinical differentiation of the two forms is crucial since primary MGN is treated with immunosuppressants, whereas therapy for secondary MGN focuses on the causal disease.

The immune reactions leading to primary MGN, which were first described in 2009 [1], stem from autoantibodies binding to PLA2R (a transmembrane glycoprotein, [Figure 1]) on the surface of the podocytes [Figure 2]. The M-type PLA2R has been identified as the major target antigen of the autoantibodies. The antigen-antibody complexes are deposited in the GBM, triggering complement activation with overproduction of collagen IV and laminin. This damages the podocytes, resulting in protein entering the primary urine. With increasing proteinuria there is a higher long-term risk of kidney failure with major morbidity and mortality, especially from thromboembolic and cardiovascular complications.

Primary MGN is diagnosed by kidney puncture followed by histological examination or electron microscopy of the tissue to detect immunoglobulin-containing deposits in the GBM. Serological determination of anti-PLA2R antibodies supports the diagnostic procedure and has the advantage of being less time-consuming and less stressful for patients. Anti-PLA2R antibody analysis is, moreover, suitable for monitoring the activity of primary MGN and the response to therapy.

Until recently there was no reliable test to detect anti-PLA2R antibodies. A new recombinant-cell anti-PLA2R indirect immunofluorescence test (IIFT) developed to address this deficit has rapidly established itself as the gold standard for the serological diagnosis of primary MGN. The assay utilizes transfected human cells expressing recombinant PLA2R as the antigenic substrate [Figure 3] to provide monospecific antibody detection [2, 3]. The sensitivity of the test for primary MGN amounts to around 50-80%, depending on cohort characteristics such as disease activity or therapy status. In a retrospective clinical study [2] the Anti-PLA2R IIFT demonstrated a sensitivity of 52% in a cohort of 100 patients with biopsy-proven primary MGN and a specificity of 100% with respect to control subjects. In the first prospective study [4] the sensitivity amounted to 82% in patients with biopsy-proven MGN where no secondary cause could be found. An ELISA based on purified recombinant PLA2R has also been developed. It demonstrates >98% correlation with the IIFT and is particularly useful for quantification of antibody levels in therapy monitoring.

Anti-GBM antibodies
Autoantibodies against GBM are a highly specific and sensitive marker for Goodpasture’s syndrome, a rare, but potentially fatal autoimmune disease which is characterized by rapidly progressive GN and lung haemosiderosis. Diagnosis of this disease is challenging because of the speed of progression to organ failure and the initially unspecific symptoms. Serological parameters such as anti-GBM play a crucial role in obtaining an early diagnosis.

The primary target antigen of anti-GBM antibodies is the NC1 domain of the α3 chain of type IV collagen. The antibodies target the alveolar basement membrane or the GBM. They are detected in more than 60% of patients without lung involvement and in over 90% of patients with lung involvement. Clinical progression of the disease correlates with antibody concentration, with high-titre circulating anti-GBM antibodies indicating an unfavourable prognosis.

Anti-GBM antibodies can be detected serologically by IIFT using sections of primate kidney as the antigenic substrate. Inclusion of a second substrate comprising microdots of purified GBM allows results to be confirmed at a glance. The substrates are positioned side by side as BIOCHIP Mosaics in the test fields of a microscope slide [Figure 4] and incubated in parallel. Further substrates for differential diagnostics, for example HEp-2 cells, granulocytes or other microdot substrates, can also be included in the BIOCHIP Mosaics, yielding a detailed patient antibody profile following a single incubation. Serum anti-GBM antibodies can alternatively be detected or confirmed quantitatively using the Anti-GBM ELISA.

ANCA
ANCA determination is a well-established tool for serological diagnosis and differentiation of different types of AAV, which often present as a rapidly progressive GN among other symptoms. The most important ANCA parameters include antibodies against proteinase 3, which are sensitive and specific markers for Wegener’s granulomatosis, and antibodies against myeloperoxidase (MPO), which occur in microscopic polyangiitis and other forms of AAV.

The standard method for detecting ANCA is IIFT using granulocytes to identify the typical staining patterns of anti-PR3 antibodies (cytoplasmic, cANCA) and anti-MPO antibodies (perinuclear, pANCA). BIOCHIP Mosaics are particularly useful for this application as they allow different substrates to be combined and analysed in parallel [Figure 5]. Recently, several new substrates have been developed to improve the ease and reliability of ANCA analysis still further. HEp-2 cells coated with granulocytes allow immediate differentiation between ANCA and anti-nuclear antibodies, while BIOCHIPs containing microdots of purified MPO or PR3 enable monospecific antibody characterization at the same time as the ANCA screening [5, 6].

Monospecific enzyme immunoassays such as ELISA or immunoblot are used to characterize the specificity of the target antigen. A recent major advance in ANCA ELISA is the development of a novel PR3 diagnostic antigen comprising an optimized mixture of native human (hn) PR3 and designer recombinant PR3 expressed authentically in human cells (hr). An ELISA based on this combined antigen provides unsurpassed sensitivity for the detection of anti-PR3 antibodies – 14% higher than even a capture ELISA [7]. The Anti-PR3-hn-hr ELISA thus enhances ANCA diagnostics and is also suitable for long-term evaluation of patients.

Anti-dsDNA and anti-nucleosome antibodies
Anti-dsDNA and ANuA are among the immunological parameters used to diagnose SLE, which counts nephritis among its many and variable manifestations. These two markers provide the highest specificity and sensitivity in the serological diagnosis of SLE.

Anti-dsDNA antibodies are found in 60-90% of patients and represent the most established marker for SLE. A recently developed ELISA provides an exceptionally high sensitivity and specificity for detection of these antibodies owing to the use of a novel coating technology based on highly adhesive nucleosomes. The unspecific reactions that typically occur with traditionally used coating materials are thus avoided, and the clear presentation of the major DNA epitopes ensures a remarkably high sensitivity. In a published clinical comparison study using a large cohort of patients with SLE and other diseases [8], the Anti-dsDNA-NcX ELISA demonstrated the highest sensitivity for SLE (60.8%), exceeding that of conventional ELISA (35.4%), Crithidia luciliae IIFT (27.4%) and even Farr-RIA (53.1%) [Figure 6].

ANuA  [Figure 7] are specific for SLE and are a prognostic indicator for SLE with renal involvement. The frequency of ANuA is especially high in severe cases requiring transplantation (79%), compared to less severe lupus nephritis (18%) and SLE without nephritis (9%) [9]. The relevance of ANuA is, however, highly dependent on the assay used to detect them. If insufficiently purified nucleosomes are used in ELISA, then sera from patients with scleroderma or other diseases also frequently react, resulting in an unacceptably low specificity. The 2nd generation Anti-Nucleosome ELISA, in contrast, is based on a patented preparation of highly purified mononucleosomes, which are free of contaminating histone H1, non-histone proteins such as Scl-70, and chromatin DNA fragments. This ELISA provides an SLE specificity of close to 100% and a sensitivity of around 54%. Significantly, with this highly specific test ANuA have been shown to be present in 16-18% of SLE sera that are negative for anti-dsDNA antibodies [Table 1] [10, 11]. Thus, the determination of ANuA substantially enriches the serological diagnosis of SLE. When both ANuA and anti-dsDNA antibodies are analysed in parallel as first-line serological tests, the detection rate for SLE can be increased to 87%.

Conclusions
Recent developments in autoantibody diagnostics for nephrology include the groundbreaking anti-PLA2R IIFT for identifying primary MGN, as well as considerable improvements in the sensitivity, specificity and convenience of tests for ANCA, anti-GBM, anti-dsDNA and ANuA. These advances have boosted the ease, reliability and relevance of autoantibody testing, aiding the diagnosis of autoimmune forms of GN, especially in their early stages. This is crucial for implementing interventional therapy early and preventing progression of the nephropathy to a fatal end stage.

References
1. Beck et al. N Engl J Med 2009; 361: 11–21.
2. Hoxha et al. Nephrol Dial Transplant 2011; 26(8): 2526–2532.
3. Debiec et al. Nat Rev Nephrol 2011; 7(9): 496–498.
4. Hoxha et al. Kidney Int 2012; 82: 797–804.
5. Buschtez et al. Z Rheumatol 2007; 66: 43, 10942-10.
6. Damoiseaux et al. J Immunol Methods 2009; 348: 67–73.
7. Damoiseaux J, et al. Ann Rheum Dis 2009; 68: 228–233.
8. Biesen et al. Lupus 2008; 17(5): 506–507.
9. Stinton et al. Lupus 2007; 15: 394–400.
10. Suer et al. J Autoimmun 2004; 22: 325–334.
11. Schluter et al. J Lab Med 2002; 26: 516–517.

The author
Jacqueline Gosink, PhD
Euroimmun AG
Luebeck, Germany


ANCA-IIF: Still the screening method of choice

by Dr Petraki Munujos

Systemic vasculitides are a group of idiopathic inflammatory clinical syndromes usually classified by the size of the vessels affected. Among them, the small-vessel vasculitides show clear associations with the presence in patients’ sera of antibodies directed against cytoplasmic antigens of neutrophils (ANCA).


Anti-TNF-α levels and Anti-TNF-α antibodies in inflammatory bowel disease

The introduction of infliximab and adalimumab, monoclonal antibodies against TNF-α, to induce and maintain clinical remission in patients with moderate to severe inflammatory bowel disease (IBD) has opened new perspectives in managing these disorders [1]. However, about a third of all IBD patients treated with TNF-α antibodies in clinical studies exhibited no primary response (primary treatment failures), and up to 40% of patients who did show a primary response exhibit decreasing efficacy with increasing duration of therapy (secondary treatment failures*) and need multiple dose adjustments to re-induce or maintain clinical response.

by Dr J. Stein

Clinically important factors predicting treatment response include short disease duration, a predominantly inflammatory disease course, disease involving the colon, non-smoker status and moderate to severe disease activity (overview in Yanai and Hanauer [2]). The development of antibodies against infliximab (IFX) – anti-drug antibodies (ADA) [3], which occur especially in patients undergoing episodic administration of IFX (36–61% of cases) [4] – was initially proposed as the primary cause of this phenomenon. It is now considered increasingly questionable that ADA alone are responsible for therapy failure in patients treated with IFX. In fact, multiple studies have reported IFX trough levels that were either undetectable or very low despite the absence of ADA [5-7], which points to other factors that may influence the pharmacokinetics of these agents. A retrospective analysis of the ACT1 and ACT2 studies found a correlation with serum albumin concentrations [8]: albumin concentrations < 3 g/dl correlated with a significantly poorer initial response (primary non-responders).

Several strategies can be pursued in cases of loss of response: dose escalation (increasing the dose or shortening the dosing interval), switching to another anti-TNF-α drug, or changing to other immunosuppressive drugs. The decision as to which is the best option for the management of these patients remains largely empirical. Data from studies suggest that measurement of anti-TNF-α trough levels and ADAs could be useful for therapeutic drug monitoring in IBD patients as part of an individualized therapy. Figures 1a and 1b summarize an algorithm for the management of TNF-α antibody therapy based on currently available data. Methods used to determine anti-TNF-α drug and ADA concentrations are mainly based on enzyme-linked immunosorbent assay (ELISA) and radioimmunoassay (RIA), or less frequently EMSA (electrophoretic mobility shift assay).
Compared with the more complex RIA- or EMSA-based detection methods, the most commonly used and generally easy-to-perform enzyme immunoassays (EIA) have some limitations that are important for the timing of measurement. Since anti-TNF-α drugs bind ADAs to form immune complexes, complexed ADAs cannot be detected by ELISA, so circulating drug can mask the presence of ADAs. When ADAs are negative, it is therefore important to know the level of the anti-TNF-α drug: if anti-TNF-α serum levels are undetectable, the ADA result is a true negative; but if anti-TNF-α levels are detectable and ADA levels are negative, the result is considered inconclusive, because it might be either a true negative or a false negative caused by antibodies bound to the drug. Anti-TNF-α drug concentrations should therefore be determined when drug levels are expected to be lowest, i.e. just before the next administration of the drug (trough level), and antibody titres should be measured at the same time to enable interpretation. When an EIA is used, the optimum trough level is > 4–5 μg/ml [5,8], compared with cut-off values of > 1 μg/ml with RIA [9,10].
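The interpretation rules above form a small decision table. As a rough sketch (the function name and result labels are ours, not from any assay manufacturer), the joint reading of a trough-level and ADA result with a drug-sensitive ELISA could look like this:

```python
# Illustrative sketch of the joint interpretation of an anti-TNF-α
# trough level and an ADA result with a drug-sensitive ELISA, as
# described above. Names and labels are illustrative only.
def interpret_elisa(drug_detectable: bool, ada_positive: bool) -> str:
    if ada_positive:
        # ADAs were measurable despite any circulating drug.
        return "ADA positive"
    if not drug_detectable:
        # No circulating drug to mask ADAs, so a negative result is reliable.
        return "true ADA negative"
    # Circulating drug may have complexed (and hidden) ADAs.
    return "inconclusive"

print(interpret_elisa(drug_detectable=True, ada_positive=False))   # inconclusive
print(interpret_elisa(drug_detectable=False, ada_positive=False))  # true ADA negative
```

In practice such logic would sit alongside the clinical algorithm of Figures 1a and 1b rather than replace it.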

* Defined as recurrence following initially effective remission maintenance with TNF-α antibodies.

References

1. Chaparro M, et al. Aliment Pharmacol Ther 2012; 35: 971–986.
2. Yanai H, et al. Am J Gastroenterol. 2011; 106: 685–98.
3. Baert F, et al. N Engl J Med. 2003; 348(7): 601–608.
4. Cassinotti A, et al. Pract Gastroenterol. 2010; 34:11–20.
5. Maser EA, et al. Clin Gastroenterol Hepatol. 2006; 4:1248–1254.
6. St Clair EW, et al. Arthritis Rheum. 2002; 46: 1451–9.
7. Fasanmade AA, et al. Int J Clin Pharmacol Ther. 2010; 48: 297–308.
8. Seow CH, et al. Gut. 2010; 59: 49–54.
9. Steenholdt C, et al. Scand J Gastroenterol. 2011; 46: 310–318.
10. Bendtzen K, et al. Scand J Gastroenterol. 2009; 44: 774–781.

The author
J. Stein, MD, PhD
Crohn Colitis Centre
Frankfurt, Germany


The role of βHB in diagnosing ketosis in diabetic patients

β-Hydroxybutyrate (BHB) is the main ketone body produced during ketosis, including diabetic ketosis. This article demonstrates how quantitative measurement of plasma/serum BHB levels gives a direct indication of blood ketone levels and provides a more accurate method of diagnosing and managing ketosis than traditional nitroprusside-based urine dipstick testing. Rapid identification of ketosis through BHB testing can improve clinical management and patient care in diabetes.

by Dr Cormac Kilty and Al Blanco

What is Ketosis?
Ketosis occurs when the body begins to break down its stored fats in response to a low supply of energy (glucose) to produce ketone bodies. These water soluble by-products of fatty acid metabolism are then used by the body as alternative energy sources to reduce tissue demand for glucose.

Ketone bodies are always present in the blood and are normally broken down into carbon dioxide and water. However, ketone build-up in the blood (ketonemia) can result from both physiological and pathological causes. Physiological ketosis, leading to a mild to moderate build-up, can result from prolonged exercise, fasting or a high-fat diet. If the cause is pathological, then ultimately the excessive build-up of ketones causes an acid/base imbalance known as ketoacidosis.  Pathological causes of ketosis include: diabetes mellitus, alcoholism, glycogen storage disease, alkalosis, ingestion of isopropyl alcohol, and salicylate poisoning. If not diagnosed and treated, ketoacidosis is potentially fatal.

The ketone bodies produced during ketosis in the liver are β-hydroxybutyrate (BHB), acetoacetate (AcAc) and acetone; BHB, which is metabolized from AcAc, is the predominant ketone body (78%) [Figure 1]. The ketone body ratio, i.e. the ratio of BHB to AcAc, is approximately 1:1 in healthy people, but it can rise to nearly 6:1 after prolonged fasting and even 10:1 in cases of acute pathological ketosis [1].
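A back-of-the-envelope calculation (purely illustrative, and ignoring acetone) shows why this shifting ratio matters for tests that detect only AcAc: the share of BHB + AcAc visible to such a test shrinks sharply as the ratio rises.

```python
# Illustrative arithmetic: fraction of (BHB + AcAc) that an
# AcAc-only test would detect at the BHB:AcAc ratios quoted above.
def acac_fraction(bhb_to_acac):
    """Fraction of BHB + AcAc accounted for by AcAc at a given BHB:AcAc ratio."""
    return 1.0 / (bhb_to_acac + 1.0)

for label, ratio in [("healthy (1:1)", 1),
                     ("prolonged fasting (6:1)", 6),
                     ("acute pathological ketosis (10:1)", 10)]:
    print(f"{label}: AcAc is {acac_fraction(ratio):.0%} of BHB + AcAc")
```

At a 1:1 ratio an AcAc-only test sees half of the BHB + AcAc pool; at 10:1 it sees under a tenth.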

Diabetic Ketoacidosis
Pathological ketosis most commonly arises due to diabetes mellitus (DM), a metabolic disease resulting in chronically high blood sugar. This occurs due to glucose under-utilisation and over-production in response to either: 1) an inability to produce and secrete insulin (Type 1 diabetes), or 2) insulin resistance (Type 2 diabetes) [2].

Diabetic ketoacidosis (DKA) is a life-threatening complication of untreated or poorly managed diabetes that is most typically seen in Type 1 diabetes; in these cases the lack of insulin prevents the body from utilizing glucose for energy. Insulin acts on cell receptors to assist with glucose absorption, so in its absence cells are unable to take in, and subsequently metabolize, glucose. When the body senses that glucose is not readily available, fat is broken down instead and DKA occurs. Furthermore, blood glucose levels rise (usually above 300 mg/dL) because the liver over-produces glucose to try to compensate; this additional glucose also cannot be metabolized without insulin [Figure 2], resulting in hyperglycemia. Although DKA is more common in patients with Type 1 diabetes, patients with Type 2 diabetes are also at risk of developing it during catabolic stress in the setting of trauma, surgery or infection [5]. There is also a subset of ketosis-prone patients with Type 2 diabetes, who present with transient and severe beta-cell dysfunction and a variable clinical course.

Rapid diagnosis of DKA is essential because a delay in starting insulin treatment is associated with increased morbidity and mortality [4]. Before insulin treatment became available, DKA was the leading cause of death among Type 1 diabetics. Even now the mortality rate is still 5 to 10% in developed countries, and DKA remains the leading cause of death in children and young adults [5].

Testing for DKA
Like glucose, ketones can be tested or monitored in either urine or blood. Historically, DKA has been identified using a colorimetric, semi-quantitative method for detection of ketones in urine: nitroprusside turns purple in the presence of acetoacetate [Figure 3]. Although simple and rapid, the dipstick nitroprusside test has several limitations, primarily that it measures only acetoacetate. False positives can result from interference by drugs such as L-Dopa, captopril and other ACE inhibitors. False negatives can also occur because nitroprusside does not detect BHB, the predominant ketone in DKA (>0.27 mmol/L is abnormal). Consequently, tests that recognize only AcAc will underestimate the total ketone body concentration [6]. Furthermore, monitoring AcAc levels by nitroprusside testing during DKA treatment can be misleading: with insulin treatment a patient in DKA converts BHB to AcAc and acetone, so nitroprusside tests react more strongly than before treatment even though the ketoacidosis is actually improving, because the fall in AcAc lags behind the improvement in ketoacidosis. By monitoring BHB levels instead, clinicians can assess the patient’s direct response to DKA treatment and ascertain immediately when the ketoacidosis is resolved.

There are several methods of testing for BHB in blood, plasma or serum, including gas chromatography and capillary electrophoresis. These methodologies are specific, but they are complex procedures that are not practical for every hospital laboratory or clinic, and their turnaround time can be far longer than the one to two minutes of the nitroprusside method.

Rapid β-Hydroxybutyrate measurement

An enzymatic assay is also available for direct quantitative measurement of BHB in blood. It is rapid, shows minimal cross-reactivity with interfering substances and can be performed either on automated laboratory instrumentation (for plasma/serum) or on whole blood samples using point-of-care devices. An example of this assay is the β-Hydroxybutyrate LiquiColor® Reagent System (Stanbio, Boerne, TX, USA). Figure 4 details the enzymatic reaction, which gives a purple colour proportional to the concentration of BHB; normal levels are 0–0.3 mmol/L, ketosis is >0.3 mmol/L and possible ketoacidosis is >5 mmol/L.
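These cut-offs amount to a simple three-way classification. A minimal sketch follows (the cut-offs are those quoted in the text; the function name and labels are illustrative, and any real result needs clinical context):

```python
# Three-way interpretation of a quantitative blood BHB result
# (mmol/L), using the cut-offs quoted in the text.
def interpret_bhb(bhb_mmol_l):
    if bhb_mmol_l > 5.0:
        return "possible ketoacidosis"
    if bhb_mmol_l > 0.3:
        return "ketosis"
    return "normal"

print(interpret_bhb(0.2))  # normal
print(interpret_bhb(1.4))  # ketosis
print(interpret_bhb(6.8))  # possible ketoacidosis
```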

Recent prospective studies have shown that blood BHB enzymatic testing has a far superior specificity in comparison to the nitroprusside urine test [7].  One such study prospectively screened for DKA in emergency department (ED) patients who had a blood sugar of >250 mg/dL, regardless of the reason for the ED visit. Both a urine dipstick and a point of care capillary BHB test were performed, with both tests displaying an acceptable sensitivity of at least 98%. However, the BHB was markedly more specific at 78.6%, in comparison to the urine dipstick (35.1% specificity) [7].  The American Diabetes Association discourages the use of urine nitroprusside testing and instead recommends quantitative serum BHB testing for diagnosing and monitoring ketoacidosis [8]. Furthermore, the Association recommends that blood ketone determinations that rely on the nitroprusside reaction should only be used as an adjunct to diagnose DKA and should not be used to monitor DKA treatment due to the lag in decrease in AcAc after resolution of ketoacidosis. In contrast, specific measurement of BHB in blood can be used for diagnosis and monitoring of DKA [9, 10].

Clinical advantages of BHB
Because it measures the main ketone produced during ketosis (78%), BHB testing is rapid and more specific than urine nitroprusside testing, and it can be used to identify ketosis in multiple settings. BHB in serum and plasma can be used to diagnose and monitor the status and severity of diabetes mellitus, alcoholism and starvation-induced ketosis. It may also have potential application in diagnosing and monitoring glycogen storage disease, high-fat/low-carbohydrate diets, ingestion of isopropyl alcohol and salicylate poisoning.

During ketosis, BHB levels increase more than the levels of acetone and acetoacetate, clearly indicating the patient’s trend in metabolic status. Consequently, quantitative, objective BHB results provide a better tool for determining and monitoring ketosis than qualitative nitroprusside testing that detects only 22% of ketones present during ketosis.

BHB testing gives the earliest detection of clinically significant ketosis, enabling clinicians to diagnose DKA with confidence on the basis of quantifiable results. Rapid identification of ketosis through BHB testing can improve clinical management and patient care [4]. Early detection could enable shorter triage times and faster treatment of patients, which could in turn lead to improved clinical outcomes, greater Emergency Department efficiency and decreased turnaround times [11]. Unnecessary admissions could also be avoided through faster and more accurate assessment of patients for ketosis and ketoacidosis, particularly within the Emergency Department, which may also yield important cost savings.

Acknowledgement

The authors thank Dr. James H. Nichols, Ph.D., DABCC, FACB, Professor of Pathology, Tufts University School of Medicine and Medical Director for Clinical Chemistry at Baystate Health in Springfield, MA. This manuscript is based on a presentation given by Dr. Nichols at the July, 2012 American Association for Clinical Chemistry meeting in Los Angeles.

References

1. Laffel L. Diabetes/Metabolism Research and Reviews 1999; 15: 412–426.
2. Gardner DG, Shoback D (eds). Greenspan’s Basic & Clinical Endocrinology, 9th edn. New York: McGraw-Hill Medical 2011; Chapter 17.
3. Kitabchi AE, et al. Diabetes Care 2009; 32(7): 1335–1343.
4. Singh RK, et al. Diabet Med 1997; 14: 482–486.
5. Felner E, et al. Pediatrics 2001; 108: 735–740.
6. Sacks DB, et al. Diabetes Care 2004; 34: e61–e99.
7. Arora S, et al. Diabetes Care 2011; 34(4): 852–854.
8. American Diabetes Association. Diabetes Care 2010; 33(Suppl 1): S62–S69.
9. Sacks DB, et al. Diabetes Care 2011; 34: 1419–1423.
10. Savage MW, et al. Diabetic Medicine 2011; 28(5): 508–515.
11. Foreback C. Clinical effectiveness of beta-hydroxybutyrate assays in a clinical decision unit. White Paper, Henry Ford Hospital, Detroit, MI, 1998.

The authors
Dr Cormac Kilty
EKF Diagnostics Holdings plc, UK
Tel. +44 (0)2920 710 570
E-mail: cormackilty@ekfdiagnostics.com

Al Blanco
Stanbio Laboratory (An EKF Diagnostics company), USA
Tel. +1 (0)830 249 0772
