Nucleic acids, which are among the best signatures of disease and pathogens, have traditionally been measured in centralised screening facilities using expensive instruments. Such tests are seldom available on point-of-care (POC) testing platforms. Advancements in simple microfluidics, cellphones and low-cost devices, isothermal and other novel amplification techniques, and reagent stabilisation approaches are now making it possible to bring some of the assays to POCs. This article highlights selected advancements in this area.
by Dr Robert Stedtfeld, Maggie Kronlein and Professor Syed Hashsham
Why point-of-care diagnostics?
Point-of-care diagnostics (POCs) bring selected capabilities of centralised screening to thousands of primary health care centres, hospitals, and clinics. Quick turnaround time, enhanced access to specialised testing for physicians and patients, sample-in-result-out capability, simplicity, ruggedness and lower cost are among the leading reasons for the emergence of POCs. Another advantage of POCs is their flexibility to be adapted for assays that have received less attention and are therefore often “home brewed”, meaning an analyst develops them within the screening facility for routine patient care. The societal benefit–cost ratio of POCs may often exceed that of traditional approaches by 10- to 100-fold. However, POCs must deliver the same quality of test results as existing centralised screening, which is well established and has a performance record and analytical expertise ensuring reliability. POCs are emerging and, therefore, for successful integration into the overall healthcare system, they must provide an advantage over the existing system of sample transport to a centralised location followed by analysis and reporting. Besides answering why they are better than existing approaches, POCs must also face validation and deployment challenges.
On the positive side, POCs are expected to face lower financial and acceptance barriers than more expensive traditional approaches because of the general need to lower the cost of diagnostics. In 2011, the global in vitro testing market was $47.6 billion and is projected to reach $126.9 billion by 2022 (http://www.visiongain.com/). At present POCs constitute approximately one third of the total market – distributed among cardiac markers (31%), HbA1c (21%), cholesterol/lipids (16%), fecal occult blood (14%), cancer markers (8%), drug abuse (4%), and pregnancy (4%). Market forces critically determine the pace of technical development and deployment of POCs. Consider, for example, the global market for blood sugar testing (examples of genetic assays on POCs are non-existent), estimated to reach $18 billion by 2015, versus the market for the alternative test, A1c, which was only $272 million in 2012. Even though A1c testing is now indispensable in managing diabetes, it has not received the priority it deserves because of its much lower testing frequency and therefore smaller market. Lowering the cost further makes its deployment and diffusion even more challenging. Thus POCs must tackle the inherent bottleneck in their business model, i.e. how to succeed with an emerging or new technology, priced to be low cost, but without access to a large market and high sales volumes – at least initially.
One option is to use the existing network of cellphones as one component of the POCs. Diagnostic tools based on cellphones and mobile devices have the potential to significantly reduce the economic burden and play an important role in providing universal healthcare. By 2015 the number of smartphone users is expected to reach 1.4 billion, and at least 500 million of them will have used health-related applications (mHealth) in some form. Currently, more than 17,000 mHealth apps are available on various platforms. However, their ability to carry out genetic assays is yet to be harnessed. Of the more than 2,500 genetic assays available, perhaps none are available on a mobile platform (GeneTests: www.genetests.org/). The coming decade is predicted to merge genomics, microfluidics and miniaturisation, and to multiply their impact many-fold by leveraging cellphone networks and resources. Such platforms may also make it possible to establish an open source model for assays that are commercially not viable due to very low volumes.
A key question, and the focus of this article, is whether genetic assays that are currently possible only in centralised screening facilities can be carried out on POC platforms. We believe that through a combination of emerging molecular techniques, low-cost simple microfluidic systems, and some additional developments in detection systems and information transfer, it is possible to carry out genetic assays including mutation detection on POCs within the next 5 years, and possibly sequencing within a decade.
Existing POC-adaptable genetic technologies
Nucleic acid-based amplification techniques remain the most widely used analytical approach for genetic diagnostics. However, integrated systems capable of reliable detection with the sensitivity and specificity required for clinical applications are still scarce. In centralised screening facilities, quantitative polymerase chain reaction (qPCR) is the workhorse for genetic analyses. Compared to qPCR, isothermal amplification strategies have been recognised as a promising alternative, especially for POCs, because of the complexity of establishing temperature cycling and detection systems in POC devices. The advantages of isothermal amplification include high amplification yields (in some instances allowing a positive reaction to be observed with the naked eye), savings in power consumption because temperature cycling is not needed, and a short time to a positive amplification (as low as 5 minutes for larger copy numbers). Many isothermal techniques have been developed [1] including: loop-mediated isothermal amplification (LAMP), recombinase polymerase amplification (RPA), nucleic acid sequence-based amplification (NASBA), smart amplification process (SmartAmp), rolling circle amplification (RCA), multiple displacement amplification (MDA), helicase-dependent amplification (tHDA), strand displacement amplification (SDA), isothermal and chimeric primer-initiated amplification (ICAN), cross-priming amplification (CPA), single primer isothermal amplification (SPIA), self-sustained sequence replication reaction (3SR), transcription mediated amplification (TMA), genome exponential amplification reaction (GEAR) and exponential amplification reaction (EXPAR).
The benefits of one isothermal technique over another will depend on the application of interest. Techniques requiring a large number of enzymes, or that are carried out at low temperature, may be less amenable to POCs than those that require a single enzyme. More than one enzyme will, in general, increase the cost, rigour and complexity of the amplification reaction in a POC. While a larger number of primer sets will increase specificity, it will also make the design of primers targeting a certain phylogenetic group or divergent functional gene more difficult, if not impossible. This is because of the need for multiple target-specific regions, each separated from the others by a certain distance (number of bases), and the increased complexity of incorporating degenerate bases into multiple primer sequences within an assay. Isothermal assay enzymes that work at low temperature (less than 40°C) may be at a disadvantage in hot and warm climatic conditions. However, an isothermal amplification strategy that directly incorporates primers/probes designed for previously validated qPCR assays, uses a single enzyme, can be performed at higher temperatures, and allows for accurate quantification would greatly increase the attraction of isothermal amplification, ushering in a new era of point-of-care genetic diagnostics. The cost associated with licensing an amplification technique will also dictate whether it can be used for POC applications, specifically in low-resource settings.
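To illustrate the spacing constraint mentioned above, the following sketch checks whether candidate primer-binding regions on a target sequence fall within allowed distances of one another. The region coordinates and distance limits are hypothetical illustration values, not validated design rules for any particular assay chemistry.

```python
# Minimal sketch: checking spacing between primer-binding regions for a
# multi-primer isothermal assay (e.g. a LAMP-style design). All coordinates
# and distance limits below are hypothetical illustration values.

# Candidate regions as (start, end) positions on the target sequence, 5'->3'
regions = {
    "F3":  (0, 20),
    "F2":  (40, 60),
    "F1c": (100, 120),
    "B1c": (160, 180),
    "B2":  (220, 240),
    "B3":  (260, 280),
}

# Allowed gaps (in bases) between the end of one region and the start of the next
constraints = [
    ("F3", "F2", 0, 60),
    ("F2", "F1c", 20, 80),
    ("B1c", "B2", 20, 80),
    ("B2", "B3", 0, 60),
]

def check_spacing(regions, constraints):
    """Return (pair, gap, ok) for each spacing constraint."""
    results = []
    for a, b, min_gap, max_gap in constraints:
        gap = regions[b][0] - regions[a][1]
        results.append(((a, b), gap, min_gap <= gap <= max_gap))
    return results

for pair, gap, ok in check_spacing(regions, constraints):
    print(f"{pair[0]}-{pair[1]}: gap={gap} bases -> {'OK' if ok else 'violates constraint'}")
```

A design tool would apply such checks across every candidate primer set, which is why adding degenerate bases or extra primer regions quickly multiplies the search space.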
Existing POC platforms for genetic analysis
Multiple platforms have been developed for POC genetic testing with an emphasis on reduced cost and size, and on throughput, accuracy and simplicity. Table 1 is a non-exhaustive list illustrating some of the capabilities. Ideally, POCs should simplify genetic analysis by accepting crude or unprocessed samples. All of the listed qPCR platforms automatically perform sample processing (cell lysis and DNA purification) directly within the cartridge into which the sample is dispensed. Compared to qPCR POCs, isothermal assay POCs have not focused as much on sample processing. There are two reasons for this. First, isothermal assays are generally less influenced by sample inhibitors and may not even require sample processing in certain cases. Second, development of POCs based on isothermal assays has lagged because the assays themselves are relatively new to diagnostic applications.
Development of isothermal genetic POC devices, however, is relatively easy compared to qPCR devices, because isothermal genetic POCs can utilise components that are inexpensive, smaller and consume less power. Use of such components is possible due to the high product yields of isothermal amplification techniques. LAMP, for example, produces 10 µg of DNA in a 25 µl volume compared to 0.2 µg in PCR. This high yield can be quantified with less sophisticated optics than those used in qPCR devices. The Gene-Z platform [Figure 1], for example, uses an array of individually controlled low-power light-emitting diodes for excitation and optical fibres (one for each reaction well) for channelling light from each well to a single photodiode detector for real-time measurement [2].
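As a rough illustration of this single-detector architecture, the sketch below time-multiplexes an array of excitation LEDs and records one photodiode reading per reaction well in each measurement pass. The driver functions `set_led` and `read_photodiode` are hypothetical placeholders, not the actual Gene-Z firmware interface, and the timing values are assumptions.

```python
import time

# Hypothetical hardware stubs; a real device would talk to LED drivers and
# an ADC attached to the single shared photodiode.
def set_led(well_index, on):
    pass  # placeholder: switch the excitation LED for this well on or off

def read_photodiode():
    return 0.0  # placeholder: return the photodiode signal (arbitrary units)

def measure_cycle(n_wells, settle_s=0.05):
    """One real-time measurement pass: excite each well in turn, read the
    shared detector, and return a list of per-well readings."""
    readings = []
    for well in range(n_wells):
        set_led(well, True)
        time.sleep(settle_s)          # let the optical signal settle
        readings.append(read_photodiode())
        set_led(well, False)
    return readings

# Example: sample 64 wells once per pass during an isothermal run
# curves = [measure_cycle(64) for _ in range(30)]
```

The appeal of this layout for a POC device is that one inexpensive detector serves every well, at the cost of reading wells sequentially rather than in parallel.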
Although POCs are generally considered single-assay devices, multiplexing of targets (e.g. in co-infections) and analysing a given pathogen in greater depth (e.g. methicillin-resistant Staphylococcus aureus, or HIV genotyping) is becoming absolutely critical. Genetic analysis is expected to allow resolution of genotype better than that possible by immunoassays. Use of simple but powerful microfluidic chips (e.g. those used with Gene-Z or GenePOC) instead of conventional Eppendorf tubes can be advantageous in terms of cost and power of analysis. Such microfluidic chips are steadily changing in shape, form and material, and are bound to become simpler, better and more accessible. An example is the paper-based diagnostics platform developed by Whitesides’ group [3]. Miniaturisation obviously leads to significant reagent cost savings provided it does not run into detection-limit issues. Multiplexed detection also simplifies the analysis, since manual dispensing into individual reaction tubes is not required. For example, the chip used with Gene-Z does not require external active elements for sealing, pumping, or distributing samples into individual reaction wells, eliminating the potential for contamination between chips or of the device.
Type of genetic assays on POCs
So what types of genetic assays are more likely to move to POCs first? For regions with excellent centralised screening, it may be those assays where getting the results quickly using POCs saves lives or has tangible long-term benefits, e.g. quickly determining an infection and its antibiotic resistance. The leading example of this is MRSA, for which resistance has continuously increased over the past few decades. It is now known that patients are more likely to acquire MRSA in wards where the organism is screened by culturing than where rapid molecular techniques are used. In such cases, detection of antibiotic resistance genes using a panel of genetic assays on POCs would minimise the practice of administering broad-spectrum antibiotics simply because results are not available soon enough.
In limited-resource settings, the examples of genetic testing by POCs are literally endless – TB, malaria, dengue fever, HIV, flu, fungal infections and so on. This is because very little or no centralised screening occurs in such scenarios. The ability to measure dengue virus, for example, in 1–4 µl of blood could provide better tools for the 2.5 billion people who are at risk of infection and the 50–100 million people who contract it every year. Similarly, multidrug-resistant and extensively drug-resistant TB is a global concern due to the high cost of treatment. At present, large numbers of mutations cannot be measured simultaneously using POCs. However, apart from the fact that isothermal mutation assays are fewer and the success rate for primer development is much lower than for signature marker probe/primer-based assays, there are no technical barriers. The availability of a simple isothermal mutation assay would go a long way towards making many genotyping-based diagnostics available on POCs.
In the long run, POCs may even be used to detect and quantify genetic markers associated with non-infectious diseases, such as cancer, and for selected assays focusing on human genetics. Globally, cancer is responsible for 7.6 million deaths (2008 data), a figure projected to rise to 13.1 million by 2030. Simple and quantitative tools capable of measuring a panel of markers may play an additional role – they may help collect data related to potentially useful but untested markers. Both PCR and isothermal-based assays are amenable to this application using circulating tumour cells, circulating free DNA/RNA, gene mutations, and microRNA panels. Currently utilised methods of cancer detection are highly invasive and time consuming. Minimally invasive methods on POCs may significantly increase the deployment of such capabilities.
Why do we need wireless connectivity for POCs?
With POCs comes the question of connectivity: is it a must or merely good to have? We envision that it is important to have, although a less capable form of the device could be deployed without connectivity. Wireless connectivity via cellular phones has many advantages. Paramount among them is access to the physician and/or nurse for expert input and support. Technical advantages include automated data transfer, increased efficiency in reporting, time savings, and lower equipment costs, because the phone can supply the touch-screen user interface and the computational power needed for data analysis.
The use of cellphones is an obvious possibility due to their ubiquitous availability and the vast network of mobile services. “There are 7 billion people on Earth; 5.1 billion own a cellphone; 4.2 billion own a toothbrush” (Mobile Marketing Association Asia, 2011). By 2015 it is estimated that one third of cellphone users will have used a mobile health solution in some form. However, POC genetic diagnostics and mobile networking have not yet crossed paths. Some gene analysers (e.g. Gene-Z, Hunter) already have network-enabled wireless connectivity to bridge this gap. More work is needed, however. One critical element is that transfer of data, including through wireless modes, must meet the requirements of the Health Insurance Portability and Accountability Act of 1996 (HIPAA) Privacy and Security Rules set by the U.S. Department of Health and Human Services. FDA clearance standards and specifications are still evolving in this area [4].
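As a minimal sketch of how a result payload might be protected before wireless transfer, the example below encrypts a de-identified test result with a symmetric key using the Python cryptography package. Key management, authentication and audit logging, all of which HIPAA compliance also demands, are deliberately omitted, and the payload fields are hypothetical.

```python
import json
from cryptography.fernet import Fernet

# In practice the key would be provisioned and stored securely on both ends,
# never generated ad hoc like this.
key = Fernet.generate_key()
cipher = Fernet(key)

# Hypothetical, de-identified result payload from a POC genetic assay
payload = {
    "device_id": "POC-0001",
    "assay": "mecA",              # example resistance gene target
    "result": "positive",
    "time_to_positive_min": 18,
}

token = cipher.encrypt(json.dumps(payload).encode("utf-8"))
# token is an opaque ciphertext suitable for transmission over the cellular network
print(cipher.decrypt(token).decode("utf-8"))  # receiving side restores the JSON
```

Encryption in transit is only one piece of the regulatory picture; access control and audit trails on both the device and the receiving server would be needed as well.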
Impacts of the resulting products and devices are expected on both communicable and non-communicable diseases. Qualcomm Life provides a platform (2Net) that could be used for many different applications. According to them, “The 2Net platform is designed as an end-to-end, technology-agnostic cloud-based service that interconnects medical devices so that information is easily accessible by device users and their healthcare providers and caregivers” (http://www.qualcommlife.com/). Although the famous medical scanner, or Tricorder of Star Trek fame, is not yet possible, the recently announced $10 million X Prize, sponsored by Qualcomm, for developing a Tricorder that can diagnose a set of 15 diseases without the intervention of a physician and weighs less than 2.3 kg suggests it is not too far from reality. In ten years, we should expect nothing less than a POC platform capable of sequencing-based diagnostics with an assay cost of less than a dollar.
References
1. Craw P, Balachandran W. Isothermal nucleic acid amplification technologies for point-of-care diagnostics: a critical review. Lab Chip 2012; 12: 2469–2486.
2. Stedtfeld RD, Tourlousse DM, Seyrig G, Stedtfeld TM, Kronlein M, Price S, Ahmad F, Erdogan G, Tiedje JM, Hashsham SA. Gene-Z: a device for point of care genetic testing using a smartphone. Lab Chip 2012; 12: 1454–1462.
3. Martinez AW, Phillips ST, Whitesides GM, Carrilho E. Diagnostics for the developing world: microfluidic paper-based analytical devices. Anal Chem 2010; 82: 3–10.
4. Draft Guidance for Industry and Food and Drug Administration Staff – Mobile Medical Applications. July 21, 2011. www.fda.gov/medicaldevices/deviceregulationandguidance/guidancedocuments/ucm263280.htm.
The authors
Robert Stedtfeld PhD, Maggie Kronlein and
Syed Hashsham, PhD*
Civil and Environmental Engineering
1449 Engineering Research Court Rm A127
Michigan State University
East Lansing, MI 48824, USA
*Corresponding author:
E-mail: hashsham@egr.msu.edu
Pharmacogenetics and pharmacogenomics: moving towards personalized medicine
Genetic polymorphisms are well recognized as one of the main causes of variation in personal drug response. Pharmacogenetics investigates the role of polymorphisms in the individual response to pharmacological treatments in order to design specific genetic tests, which can be performed before drug administration to optimize drug response and reduce adverse events.
by Dr Francesca Marini and Professor Maria Luisa Brandi
Personalized medicine based on genetics
The complete sequencing of the human genome in 2001, by the Human Genome Project, has opened the new era of personalized medicine based on genetics. Polymorphic variations are suspected to cover at least 20% of the entire human genome. An average of about 6 million single nucleotide polymorphisms (SNPs) and other sequence variations (i.e. copy number variations, CNVs) are estimated to exist between any two randomly selected human individuals. Advancements in understanding of variations in the human genome and rapid improvements in high-throughput genotyping technology have made it feasible to study most of the human genetic diversity in relation to phenotypes. Today, the challenge for genomic medicine is contextualizing the myriad of genomic variations in terms of their functional consequences for disease predisposition and for different responses to medications.
The ability to predict the outcome of drug therapies, by a simple analysis of common variants in the genotype, is today one of the main challenges for individualised medicine. Pharmacogenetics and its whole-genome application, pharmacogenomics, involve the utilization of individual genetic data to predict a person's response to drug treatment with respect to both beneficial and adverse effects.
They currently represent one of the disciplines most actively pursued by basic and clinical research. Pharmacogenetics examines single-gene and/or single-polymorphism influences on drug response in terms of drug absorption and disposition (pharmacokinetics) or drug action (pharmacodynamics), including polymorphic variations in genes encoding drug-metabolizing enzymes, drug transporters, drug receptors and drug biological targets. Pharmacogenomics studies alterations in gene and protein expression that may be correlated with pharmacological function and therapeutic response, encompassing factors beyond those that are inherited, such as epigenetics (pharmacoepigenomics).
One of the main goals of pharmacogenetics and pharmacogenomics is the identification of genetic biomarkers that allow the recognition, in advance, of patients who will not respond to a therapy, or who will be at risk of developing adverse reactions, in order to design specific pre-prescription genetic tests. A biomarker is most commonly a genetic variant, but can also include functional deficiencies, expression changes, chromosomal abnormalities, epigenetic variants, etc. A necessary step for the application of pharmacogenetic results in clinical practice is the validation of biomarkers, a process that requires several stages: 1) the correct design of prospective association studies and setting of all experimental conditions to increase the sensitivity, reliability and specificity of the assay; 2) replication of results in different, independent studies; 3) biomarker characterization, through evaluation of the variability of a particular biomarker in different human populations to determine ethnic differences, relevant interactions and potential confounders; and 4) expression and functional studies, to establish the possibility of a causal relationship between a candidate biomarker and the response to a drug.
Pharmacogenetic data on more than 110 commonly used drugs and over 35 genes are currently listed in the Food and Drug Administration (FDA)-approved “Table of Pharmacogenomic Biomarkers in Drug Labels” (http://www.fda.gov/drugs/scienceresearch/researchareas/pharmacogenetics/ucm083378.htm), and, for many of them, the list includes specific clinical actions to be taken based on genetic information. These specific tests are currently used in clinical practice, mostly in oncology, psychiatry, neurology and cardiovascular disease. The first clinical application of a pharmacogenetic test was approved by the FDA in January 2005: the AmpliChip CYP450 test, which includes genetic variants of the CYP2C19 and CYP2D6 genes (two drug-metabolising P450 cytochromes responsible for the most frequent variations in the phase I metabolism of approximately 80% of all currently prescribed drugs). In June 2007, the FDA released an online “Guidance on pharmacogenetic tests and genetic tests for heritable markers” (available at http://www.fda.gov/MedicalDevices/DeviceRegulationandGuidance/GuidanceDocuments/ucm077862.htm), which presents general guidelines for the rapid transfer of experimental results to clinical practice and for the correct performance and data handling of pharmacogenetic screening. The application of rapid, simple and non-invasive pharmacogenetic tests, which can be easily performed on a blood sample and do not need to be repeated during the patient’s lifetime, would help clinicians tailor the best therapy for each patient, reducing adverse events and maximizing positive effects. Results from pharmacogenetic tests would allow clinicians to adjust dosages, choose between similar drugs or offer an alternative therapy, if available, before the administration of each treatment. Data obtained from pharmacogenetic tests should become part of the patient's medical records, with access protected by medical privacy laws, and available, before drug administration, to clinicians granted official permission by the patient.
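To illustrate how genotype information from such a test can be turned into an actionable category, the sketch below sums per-allele activity values for a CYP2D6 diplotype and maps the total to a metaboliser class. Both the allele activity values and the class cut-offs are simplified illustrative assumptions, not the curated values in published dosing guidelines.

```python
# Simplified sketch: CYP2D6 diplotype -> metaboliser phenotype.
# Allele activity values and cut-offs below are illustrative assumptions only;
# clinical use requires the reference tables in published dosing guidelines.

allele_activity = {
    "*1": 1.0,    # assumed normal-function allele
    "*2": 1.0,
    "*10": 0.5,   # assumed decreased-function allele
    "*4": 0.0,    # assumed non-functional allele
    "*5": 0.0,    # gene deletion
}

def metaboliser_phenotype(allele_a, allele_b):
    score = allele_activity[allele_a] + allele_activity[allele_b]
    if score == 0:
        return "poor metaboliser"
    elif score <= 1.0:
        return "intermediate metaboliser"
    elif score <= 2.25:
        return "normal metaboliser"
    else:
        return "ultrarapid metaboliser"

print(metaboliser_phenotype("*1", "*4"))   # intermediate metaboliser (under these assumptions)
print(metaboliser_phenotype("*4", "*5"))   # poor metaboliser
```

The clinical action attached to each class (dose adjustment, alternative drug) would then come from the drug label or guideline, not from the genotype alone.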
The accuracy of pharmacogenetic testing and the correct management and interpretation of the results will become crucial factors in determining the benefits and/or risks for patients. Also, all the new technologies, including the development of pharmacogenetic diagnostic tools, will require a high level of expertise to be appropriately applied. Several studies have documented the lack of knowledge and confidence of primary care physicians in the field of genetic tests, with only 4% of general practitioners in the US and only 21% in the UK feeling confident and sufficiently prepared for counselling patients regarding genetic tests [1, 2]. Specific training programmes about pharmacogenetic testing for medical geneticists and health care professionals are strongly recommended and they should encompass clinical genetics, genetic counselling, knowledge of inherited and ancillary genetic data management and legal protection.
Pharmacogenetics and osteoporosis: state of the art and translation into clinical practice
Osteoporosis is the most common metabolic bone disorder of the elderly, affecting both sexes (with a higher prevalence in women) and all ethnicities, and is characterized by low bone mass and bone microarchitectural deterioration, with a consequent increase in bone fragility and susceptibility to spontaneous fracture. Today it is well known that, although osteoporosis is a multifactorial complex disorder, genetic factors exert a key role in the acquisition of personal peak bone mass, in the determination of microarchitectural bone structure, and in the regulation of bone metabolism. Numerous effective anti-fracture treatments, acting on bone cells to restore normal bone turnover, are available today: hormone replacement therapy (HRT), selective estrogen receptor modulators (SERMs), bisphosphonates, calcitonin, parathyroid hormone (PTH), teriparatide, strontium ranelate, and the anti-RANKL monoclonal antibody denosumab, administered alone or in combination with supplements of vitamin D and calcium. Response to all of these drugs is variable among treated patients, both in terms of efficacy [evaluated as bone mineral density (BMD) gain, reduction of bone turnover, and reduction of fracture risk] and of adverse reactions. In the last two decades, some pharmacogenetic studies on anti-osteoporotic drugs have been performed, but their number is still very limited and no conclusive results are available yet.
The main characteristics and results of these studies have been summarized in some recent reviews [3–5]. Results, replicated in at least two different unrelated studies, seem to indicate that:
These preliminary data appear to be promising, but they surely need to be extended and validated before any application to clinical practice. Association studies on the pharmacogenetics of osteoporosis need to be confirmed in larger cohorts, different ethnic populations and multicentre studies, preferentially from prospective controlled clinical trials, including analysis of genetic variations in genes encoding drug transporters, drug receptors, drug-metabolizing enzymes and drug molecular targets. Moreover, the single-gene approach should be integrated with multi-candidate gene and genome-wide association studies on large cohorts to also identify unsuspected candidate genes and polymorphisms. Subsequently, data obtained from genetic studies should be complemented and validated using gene expression and proteomic analyses and by performing specific functional in vitro and in vivo studies. Also, the effects of epigenetic mechanisms (i.e. histone modifications, cytosine methylation in gene promoters and microRNAs) on the regulation of expression of genes encoding drug-metabolizing enzymes, transporters, receptors and targets should be taken into account and investigated.
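As a small illustration of the kind of single-polymorphism association analysis such studies perform, the sketch below computes an odds ratio and a Fisher exact p-value for responder status by carrier state from a 2×2 table; the counts are invented for illustration only.

```python
from scipy.stats import fisher_exact

# Hypothetical counts from a pharmacogenetic association study:
# rows = carriers / non-carriers of a candidate variant,
# columns = responders / non-responders to an anti-osteoporotic drug.
table = [[30, 20],   # carriers:     30 responders, 20 non-responders
         [60, 15]]   # non-carriers: 60 responders, 15 non-responders

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```

Replication in independent cohorts, as emphasized above, means repeating exactly this kind of test on new data and checking that the direction and magnitude of the effect persist.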
References
1. Burke W, Emery J. Nat Rev Genet 2002; 3(7): 561–566.
2. Suchard MA, Yudkin P, Sinsheimer JS, Fowler GH. Br J Gen Pract 1999; 49(438): 45–46.
3. Marini F, Brandi ML. Expert Rev Endocrinol Metab 2010; 5(6): 905–910.
4. Marini F, Brandi ML. Curr Osteoporos Rep 2012; 10(3): 221–227.
5. Marini F, Brandi ML. J Pharmacogenom Pharmacoproteomics 2012; 3(3): 109.
The authors
Francesca Marini PhD and Maria Luisa Brandi MD, PhD
Metabolic Bone Unit, Department of Internal Medicine, University of Florence, Florence, 50139, Italy.
E-mail: m.brandi@dmi.unifi.it
DIY diagnostics – to do or not to do
Much like do-it-yourself (DIY) hardware stores, DIY or at-home diagnostic test kits possess both benefits and drawbacks. Making a decision is tricky. It may become even more so as a host of new kits arrive on the market, some of which are aimed at potentially deadly diseases like cancers.
The growth in the DIY kit market is driven by a combination of several factors:
Many kit developers are beginning to see easier opportunities in the developing world, especially in large emerging countries such as Brazil, India and China. All of these have a rising number of affluent consumers, accompanied by lifestyle changes which heighten the risk of diseases such as diabetes or AIDS. At the same time, medical regulations are more lax than in the West; for example, kits may be packaged differently, without the visible labels which warn that a specific test is not (yet) approved by health regulators.
What is common to both the West and large emerging markets, as far as DIY test kits go, is the Internet. Not only does the Net allow consumers to become aware of new tests, but it also provides them a channel for access to vendors, credit card payments and delivery by mail order. As a follow-on, some DIY kit producers are working to provide encrypted transfer of data and access to the test results, again over the Internet.
No one doubts the utility of DIY kits in areas such as ovulation and pregnancy testing. Most physicians also agree that the monitoring of chronic diseases is far better served by emerging DIY diagnostic technologies. For example, a relatively new test for patients taking the anticoagulant warfarin does away with the need for weekly visits to a physician – to ensure that their blood is neither so thick as to cause a stroke, nor so thin as to be life-threatening in the case of a wound or high blood pressure. This is also the case for at-home diabetes tests, which permit day-to-day modifications in insulin intake. Blood pressure too, it is now accepted, needs to be monitored throughout the day to give a true reading, rather than once at a doctor’s.
However, there are several areas where healthcare professionals are apprehensive about the growth in DIY tests, and are likely to remain so for some time. This is mainly because even state-of-the-art DIY technology has an approximately 10% risk of error. While the psychological impact of a false positive – which has a similar error level to false negatives in most DIY tests – can be serious, a false negative on a major allergy, urinary tract or yeast infection, or for that matter, HIV, would be devastating.
Neonatal screening for lysosomal storage disorders
The interest in newborn screening for lysosomal storage disorders (LSDs) has increased significantly due to newly developed enzyme replacement therapies, the need for early diagnosis, and advances in technical developments. However, testing for lysosomal storage disorders in newborn screening (NBS) raises many challenges for primary health care and its providers. The high frequency of late-onset mutations makes lysosomal storage disorders a broad health problem beyond childhood, as well as a challenge for diagnosis and therapy.
by Professor David C. Kasper
Clinical Background
Lysosomal storage disorders (LSDs) may be attractive candidates for newborn screening (NBS). These disorders result in the accumulation of macromolecular substrates that would normally be degraded by enzymes involved in lysosomal metabolism [1]. Although individual LSDs are rare, their combined incidence has been estimated at 1 per 7,700 live births for Caucasians [2]. LSDs have a progressive course, and can present at any age, affecting any number of tissues and organ systems [3]. In most cases, treatment is directed toward symptomatic care of secondary complications. The development of novel diagnostic techniques has been encouraged by the availability of treatment strategies including enzyme replacement, stem cell transplantation and substrate reduction, although limitations of these therapies still exist [4]. Nonetheless, early diagnosis is essential for optimal treatment, which supports adding LSDs to the NBS panel. However, current experience of nationwide screening for LSDs is still limited.
Laboratory diagnostics
Increased technological capacity means that expanded NBS programmes can now identify a broader range of conditions where early detection and pre-symptomatic treatment result in clinical benefit. The technology for simultaneous screening of several LSD-related enzyme activities from essentially a single blood sample was initially complicated, time-consuming and laborious, but new protocols and technologies are now available that allow a simplified screening procedure. For future implementation of high-throughput LSD assays in routine clinical diagnostics, sample handling and mass spectrometric analysis have to be simplified; specifically, sample pre-treatment, speed of analysis and detection must become more integrated [5]. In this context it is also mandatory to achieve high laboratory standards in terms of technical proficiency and reproducibility of results. Quality control materials for this purpose are available from the Newborn Screening Quality Assurance Program at the Centers for Disease Control and Prevention (CDC, Atlanta, GA, USA) [6].
Protocols for analysing lysosomal enzyme activities evolve continuously. In addition to fluorescent methods using, for example, 4-methylumbelliferone, efforts have been made to use tandem mass spectrometry (MS/MS), particularly for high-throughput analysis in routine newborn screening laboratories. MS/MS procedures have been refined and optimised, but the complexity of sample preparation prior to mass spectrometry remains. Drawbacks of these protocols were the need for liquid-liquid extraction (LLE) or solid phase extraction (SPE), and the handling of hazardous organic compounds such as ethyl acetate. Novel approaches such as online multi-dimensional chromatography prior to flow injection analysis simplify sample introduction and increase the speed of analysis. Our research group previously reported the use of TurboFlow (turbulent flow chromatography) for online sample clean-up to remove matrix interferences such as salts, proteins and detergents in the analysis of lysosomal enzyme activities in dried blood spot samples [7]. Subsequently, the purified analytes of interest, separated from potential matrix interferences, were transferred from the TurboFlow column to an analytical column for ultra-high-performance liquid chromatography (UHPLC) separation prior to MS/MS analysis, in order to separate enzymatic products from residual substrate. This simplified protocol has recently been evaluated in a comprehensive pilot screening of more than 8,500 newborns to demonstrate its technical feasibility and robustness [8]. Moreover, the incubation time was reduced dramatically from 12–16 h to 3 h [9]. Novel buffer systems for the combined incubation of six or even nine enzymes simultaneously are on the horizon, including substrates for mucopolysaccharidosis types II, IVA and VI [10]. These new buffer systems might allow the incubation of several enzymes in one reaction vial, and help to reduce costs for personnel, consumables and reagents. We conclude that multiplex MS/MS screening assays are reliable for nationwide LSD NBS, and for selective metabolite screening in high-risk populations.
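To illustrate how enzyme activity is typically derived from such an MS/MS measurement, the sketch below converts a product-to-internal-standard ion ratio into an activity per litre of blood per hour. The parameter values (internal standard amount, blood volume in the dried blood spot punch, incubation time) are assumed example values, not the published parameters of any specific assay.

```python
# Sketch: deriving lysosomal enzyme activity from an MS/MS readout.
# All parameter values are illustrative assumptions.

def enzyme_activity_umol_per_h_per_L(product_is_ratio,
                                     internal_standard_nmol=5.0,
                                     incubation_h=3.0,
                                     blood_volume_uL=3.1):
    """Activity = product formed / (incubation time * blood volume).

    product_is_ratio: measured product / internal-standard ion intensity ratio
    internal_standard_nmol: amount of internal standard added per well (assumed)
    incubation_h: incubation time in hours (assumed)
    blood_volume_uL: blood volume contained in the dried blood spot punch (assumed)
    """
    product_nmol = product_is_ratio * internal_standard_nmol
    blood_volume_L = blood_volume_uL * 1e-6
    activity_nmol_per_h_per_L = product_nmol / (incubation_h * blood_volume_L)
    return activity_nmol_per_h_per_L / 1000.0  # nmol -> umol

# Example: a product/IS ratio of 0.02 under these assumptions
print(f"{enzyme_activity_umol_per_h_per_L(0.02):.1f} umol/h/L")
```

Screening decisions are then made by comparing such activities against population reference ranges, with low activity triggering confirmatory testing.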
In our experience of comparing biochemical with genetic data from affected patients, we did not observe any correlation between mutation type and the lack of enzyme activity measured biochemically by MS/MS, nor could the type of mutation be estimated from the level of decreased enzyme activity. However, it is mandatory to confirm biochemically suspected cases by genetic mutation analysis.
The nationwide LSD screening experience
Nationwide screening for LSDs marks the beginning of screening for a new category of disorders that will confront us with challenging topics in NBS. Currently, routine newborn screening for LSDs has been introduced for Pompe disease in Taiwan and for Krabbe disease in the State of New York. The Austrian Newborn Screening Center and others, for example in Washington State and Italy, have successfully started pilot studies using multiplexed MS/MS screening assays.
We report the results of a comprehensive pilot screening of ~35,000 newborns for four LSDs using a multiplex MS/MS-based assay including genetic mutation analysis [11]. Our results revealed a surprisingly high number of enzyme deficiencies in a predominantly Caucasian population in a Central European country. The results ultimately confirmed 15 newborns with at least one mutation together with diminished lysosomal enzyme activity, demonstrating a high overall incidence of 1 : 2315 in the Austrian population. Frequency, positive predictive value and technical practicability make nationwide NBS for LSDs technically feasible. In our screening, the positive cases were predominantly Fabry disease, with an incidence of late-onset Fabry disease of 1 : 4100 in the Austrian population. Fabry disease is found among all ethnic, racial, and demographic groups and is not restricted to a specific ethnic background. Our results are consistent with those from Spada et al., who reported a high incidence of 1 : 3100 for the late-onset and 1 : 37 000 for the classic phenotype [12]. Furthermore, several studies have shown that patients with renal insufficiency, cerebral infarctions, or left ventricular hypertrophy of unknown aetiology might suffer from Fabry disease [13]. We conclude that a putative NBS may be beneficial for identifying severe clinical cases, but has the drawback of also detecting mild forms, late-onset variants and asymptomatic cases.
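The incidence figures quoted above follow directly from the case counts. The short sketch below reproduces the 1-in-N calculation and adds an exact (Clopper-Pearson) 95% confidence interval on the underlying proportion, assuming a denominator of roughly 34,700 screened newborns as an approximation of the "~35,000" stated in the text.

```python
from scipy.stats import beta

def incidence_with_ci(cases, screened, conf=0.95):
    """Return (1-in-N incidence, exact binomial CI on the proportion)."""
    p = cases / screened
    alpha = 1 - conf
    lower = beta.ppf(alpha / 2, cases, screened - cases + 1) if cases > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, cases + 1, screened - cases)
    return 1 / p, (lower, upper)

# Assumed denominator of ~34,700 newborns (the article states ~35,000)
one_in_n, (lo, hi) = incidence_with_ci(15, 34700)
print(f"overall incidence ~ 1 : {one_in_n:.0f}")          # roughly 1 : 2300
print(f"95% CI on the proportion: {lo:.5f} to {hi:.5f}")
```

The width of this interval for only 15 cases is a reminder of why confirmation in larger and independent screening cohorts matters before incidence estimates are generalised.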
Future perspectives
The high incidence of the late-onset phenotypes in Fabry, Gaucher and Pompe disease raises the question of when genetic screening for these diseases should be undertaken: in the neonatal period or in early adulthood. Clearly, early detection, genetic counselling, and therapeutic intervention are beneficial for the classic phenotypes, but the timing of screening for the late-onset variants of Fabry and other treatable diseases may raise concerns. A recent study revealed that long-term treatment led to substantial and sustained clinical benefits; however, advanced cardiac and renal disease cannot be reversed later on, making early diagnosis crucial. NBS is less controversial for infantile Pompe disease. In Taiwan, the first prospective Pompe screening programme, which included initiation of treatment before the onset of obvious symptoms and significant irreversible muscle damage, clearly demonstrated the benefit for infants. For neuronopathic LSDs such as Gaucher type II and Niemann-Pick A, the central nervous system cannot be treated by enzyme replacement therapies, which highlights the importance of consented genotyping and phenotype prediction after biochemical first-line screening. Apart from the potential clinical benefit for patients, NBS for LSDs can provide reproductive risk information for parents and future adults. This situation is common in screening for metabolic disorders, as they are inherited predominantly in a recessive manner.
In conclusion, our study shows that Pompe, Gaucher and Fabry are frequent disorders with great public health implications. Even though the American College of Medical Genetics (ACMG) ranked LSDs as low priority in 2006, two LSDs, Pompe and Krabbe, were subsequently nominated for consideration by the federal advisory committee. Currently, three states have initiated NBS for LSDs and three others have passed legislation [14]. LSDs belong to a new category of disorders for which population-based screening assays exist, and new high-throughput screening assays and novel treatment strategies are on the horizon for many others. Challenges for the future include the implementation of LSDs in routine NBS, dealing with the identification of late-onset phenotypes, and optimal therapy schemes, potentially including cost-intensive enzyme replacement therapies.
References
1. Wenger DA, Coppola S, and Liu SL. Insights into the diagnosis and treatment of lysosomal storage diseases. Arch Neurol 2003; 60(3): 322–328.
2. Ranierri E, et al. Pilot neonatal screening program for lysosomal storage disorders, using lamp-1. Southeast Asian J Trop Med Public Health 1999; 30(Suppl 2): 111–113.
3. Beck M. Variable clinical presentation in lysosomal storage disorders. J Inherit Metab Dis 2001; 24(Suppl 2): 47–51; discussion 45–46.
4. Beck M. Therapy for lysosomal storage disorders. IUBMB Life 2010; 62(1): 33–40.
5. Annesley T, et al. Mass spectrometry in the clinical laboratory: how have we done, and where do we need to be? Clin Chem 2009; 55(6): 1236–1239.
6. De Jesus VR, et al. Development and evaluation of quality control dried blood spot materials in newborn screening for lysosomal storage disorders. Clin Chem 2009; 55(1): 158–64.
7. Kasper DC, et al. The application of multiplexed, multi-dimensional ultra-high-performance liquid chromatography/tandem mass spectrometry to the high-throughput screening of lysosomal storage disorders in newborn dried bloodspots. Rapid Commun Mass Spectrom 2010; 24(7): 986–994.
8. Metz TF, et al. Simplified newborn screening protocol for lysosomal storage disorders. Clin Chem 2011; 57(9): 1286–1294.
9. Mechtler TP, et al. Short-incubation mass spectrometry assay for lysosomal storage disorders in newborn and high-risk population screening. J Chromatogr B 2012; in press.
10. Gelb MH, and Scott CR. Screening for three lysosomal storage diseases in a NBS laboratory and the potential to expand to a nine-plex assay. APHL Newborn Screening and Genetics Testing Symposium San Diego, CA, USA; 7–10 November, 2011.
11. Mechtler TP, et al. Neonatal screening for lysosomal storage disorders: feasibility and incidence from a nationwide study in Austria. Lancet 2012; 379(9813): 335–341.
12. Spada M, et al. High incidence of later-onset fabry disease revealed by newborn screening. Am J Hum Genet 2006; 79(1): 31–40.
13. Monserrat L, et al. Prevalence of fabry disease in a cohort of 508 unrelated patients with hypertrophic cardiomyopathy. J Am Coll Cardiol 2007; 50(25): 2399–2403.
14. Zhou H, Fernhoff P, and Vogt RF. Newborn bloodspot screening for lysosomal storage disorders. J Pediatr 2011; 159: 7–13.
The author
David Kasper, PhD
Department of Pediatrics and Adolescent Medicine, Medical University of Vienna, Währinger Gürtel 18–20, A-1090 Vienna, Austria
e-mail: david.kasper@meduniwien.ac.at
Benefits of polychromatic flow cytometry in a clinical setting
Increasingly sophisticated instruments and an expanding range of fluorochromes are making it possible to detect an increasing number of markers on a single cell. These advances are encouraging the wider adoption of polychromatic flow cytometry (PFC). This review looks at the benefits of PFC in clinical laboratories, and how to deal with the associated challenges.
by Sandy Smith and Professor William Sewell
Flow cytometry is a valuable tool in today’s diagnostic pathology laboratories [1]. The main strengths of flow cytometry are the ability to detect and characterise abnormal populations, the capacity to assess several markers simultaneously on a single cell, and the relative speed with which results can be produced. In recent years, there has been a progressive introduction into clinical laboratories of polychromatic flow cytometry (PFC), using instruments that detect 5–10 markers simultaneously. This paper will focus on how increasing the number of colours can affect a clinical flow cytometry laboratory.
The advantages of PFC
Arguably the biggest impact of increasing colours is the exponential increase in the amount of information obtained from paucicellular samples, such as cerebrospinal fluid (CSF). Often the entire sample needs to be committed to a single tube to obtain enough events. Studies have shown that flow cytometry improves the detection rate of CSF involvement by haematopoietic neoplasms [2]. With low cell numbers, background events become a significant proportion of total events, so having sufficient colours available to include a nuclear stain can be very useful for distinguishing true cells from debris.

Another major benefit of increased colours is in the analysis of complex populations [3]. Light chain expression is the key to demonstrating monoclonality in B cell populations, so the more markers in the light chain tube, the better the sensitivity. The availability of more markers increases the ability to separate populations and analyse them independently. T cell phenotyping is significantly more complex than for B cells [4], and PFC can improve the effectiveness of panels investigating T cell disorders. However, specificity becomes an issue since there are often many T cell subsets in reactive samples. CD7 negativity is used to identify some T-NHL cells; however, CD7-negative populations are commonly found in normal samples. False negativity can be reduced by appropriate selection of clones and fluorochromes, and we have found that switching CD7 from FITC to APC has reduced the number of dim-negative populations [Fig. 1]. However, T cell malignancies are relatively rare and thus would rarely justify an instrument upgrade alone.

PFC can aid in the detection of minimal residual disease (MRD) populations by allowing the inclusion of more markers to identify targeted populations. In recent years, MRD detection has benefitted from developments in instrumentation that improve consistency in settings over different collection time points, and from improved computers and software packages that allow fast analysis of >500,000 events. As these technologies are more widely adopted, the benefits of PFC will have a greater impact on MRD detection.

PFC has made panel construction both easier and harder. Using more colours means fewer decisions when assigning markers to tubes, but this will be limited by the range of conjugates for rarer fluorochromes, and complicated by compensation and spreading. Sorting out compensation for overlapping fluorochrome emissions does become more complex with more fluorochromes, but is reduced when they are excited by different lasers. With advances in software, compensation can be managed with automated matrices and manual optimisation by experienced users. Although there is an increased range of fluorochromes, it is helpful to use one fluorochrome per channel (i.e. always FITC or always AF488) to avoid generating and maintaining too many separate settings files. The spreading effect is the expansion either side of the zero point of an axis due to the bright positive intensity of a second fluorochrome [Fig. 2]. This phenomenon is unique to instruments producing digital data, and can be managed by arranging mutually exclusive combinations on affected fluorochromes [5].
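As a minimal sketch of what the automated compensation step is doing mathematically, the example below applies the inverse of a spillover matrix to raw detector signals to recover compensated intensities for a three-colour case; the spillover values and event data are invented for illustration.

```python
import numpy as np

# Hypothetical spillover matrix S (rows = fluorochrome, columns = detector):
# S[i, j] is the fraction of fluorochrome i's signal seen in detector j.
# Diagonal entries are 1 (each dye's primary detector).
S = np.array([
    [1.00, 0.15, 0.02],   # e.g. dye 1 spilling into the second and third detectors
    [0.05, 1.00, 0.10],
    [0.00, 0.03, 1.00],
])

# Raw per-event detector readings (events x detectors), invented values
raw = np.array([
    [1200.0, 300.0, 60.0],
    [100.0, 900.0, 150.0],
])

# Under this convention, observed = true @ S, so compensated = observed @ inv(S)
compensated = raw @ np.linalg.inv(S)
print(np.round(compensated, 1))
```

This also makes the spreading effect easier to picture: subtracting a large, noisy spillover contribution widens the distribution of the compensated values around zero, which is why mutually exclusive marker combinations are preferred on badly affected fluorochrome pairs.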
Quality control
Increasing the number of colours does not increase the number of QC procedures, however these can become more complex as there are more things that could go wrong. No matter how many colours are used, any lab will still need daily bead calibration to ensure consistent instrument operation, plus a biological control to ensure appropriate assay and acquisition set up. Upgraded instruments will have more detectors, lasers and fluorochromes to check, therefore a greater knowledge base is required to troubleshoot problems. Labs with the expertise to resolve technical issues in-house will have less instrument downtime. For biological controls, a very effective form of QC is to utilise internal controls, which are negatively stained cells in the same sample. These are independent of the number and type of fluorochromes used and are especially useful in high throughput labs.
Data handling
As the number of colours increases, the information becomes harder to express in traditional graphic form [6]. Standard graphs are two-dimensional; gates can be combined in Boolean formulas, but each region is still adjusted in two dimensions. The number of graphs required to display each marker against each other marker increases. Careful planning should enable each lymphocyte marker to be shown only once for each tube in panels targeting lymphocyte lineage neoplasms, reducing the time taken to review data. In myeloid panels the emphasis is on tracking development pathways, so some markers are required to help track multiple pathways. For example, CD33 is useful for blasts, and for both monocytic and granulocytic development. Another strategy to clarify data is to use colour schemes to track cells on different 2D plots from one tube; these schemes can then be applied to all tubes in the panel to help tie the information together. Traditionally, analysis software has been provided by the cytometer manufacturer. However, the increased complexity of analysis in PFC means that specialist software companies are playing a greater role. For the next stage of software development, many of these companies are developing complex algorithms to define clusters of cells in multi-dimensional space in a way that the traditional approach of sequential gating cannot. However, the main issue seems to be the expression of the data in a user-friendly way, so that subtle populations can be visualised in a persuasive fashion.
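To make the Boolean gating idea concrete, the sketch below combines two rectangular 2D gates into a single derived population using boolean masks over per-event data; the marker names, gate boundaries and simulated data are illustrative assumptions, not a recommended gating strategy.

```python
import numpy as np

# Invented per-event intensities for three markers (events x markers)
rng = np.random.default_rng(0)
events = rng.lognormal(mean=5, sigma=1, size=(10000, 3))
CD19, CD5, KAPPA = 0, 1, 2   # column indices for the hypothetical markers

def rect_gate(data, x_col, y_col, x_range, y_range):
    """Boolean mask for events inside a rectangular 2D region."""
    x, y = data[:, x_col], data[:, y_col]
    return (x_range[0] <= x) & (x <= x_range[1]) & (y_range[0] <= y) & (y <= y_range[1])

# Two 2D gates combined with a Boolean AND: CD19+CD5+ events that are also kappa-bright
b_cells  = rect_gate(events, CD19, CD5,   (200, 1e5), (150, 1e5))
kappa_hi = rect_gate(events, CD19, KAPPA, (200, 1e5), (400, 1e5))
target   = b_cells & kappa_hi

print(f"{target.sum()} of {len(events)} events fall in the combined gate")
```

Each individual gate is still drawn and adjusted in two dimensions; only the Boolean combination spans more than two markers, which is exactly the limitation the clustering algorithms mentioned above aim to overcome.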
Technology development
In recent years, there have been efforts to standardise antibody panels. Increasing colours can make the choice of which markers to combine in the same tube easier, and allows ‘backbone’ markers to be included. Backbone markers are markers used in every tube of a panel to allow more specific gating across tubes; an example of B cell panel markers in multiple tubes is shown in Table 1. Various international groups have recommended approaches to standardisation [7]. However, adoption has been slow, likely for practical reasons. Increasing colours increases information, but also the complexity of analysis and the range of technical issues, so staff need greater knowledge and experience. This issue is worthy of its own paper and so is not discussed in depth here; major issues are listed in Table 2. Labs tend to use reagents recommended by their instrument manufacturer, which makes technical support easier. The appropriate number of colours and the most suitable instrumentation for each laboratory are very site specific, which decreases the capacity for standardisation. It is desirable, and indeed more practical, to standardise user expertise; the implementation of the International Cytometry Certification Examination is a significant first step.
Fluorochrome availability and cost
The average number of colours used in the clinical world depends on both suppliers and labs. It requires a critical mass of usage by laboratories to make a larger range of fluorochromes and conjugates commercially viable. As the increased range is more widely adopted, experience increases and more suppliers take on larger ranges, so prices may be reduced, both of which encourage more laboratories to upgrade their systems, and so on. This cycle relies on both commercial investment in new technologies and laboratories investing resources to trial and optimise these technologies. In labs this tends to rely on individuals being personally motivated and supported by the lab, which is difficult in the current economic climate. One solution is for multiple sites to pool resources, so that one centre investigates and implements options which can then be adopted and optimised by all. Also, research groups will concentrate resources on creating single panels to glean maximum information from samples. Here, the more unusual fluorochromes and instruments can be tested and optimised, and these experiences passed on to clinical users.
Conclusion
Practicalities and cost effectiveness will always play a part in the future directions of clinical flow cytometry labs. There are many benefits to increasing the colour capabilities of clinical labs. More information can be taken from each assay tube, improving sensitivity for abnormal populations in a normal or reactive background and in the analysis of paucicellular specimens. Workflow can also improve with fewer tubes to run. More colours will potentially lead to more technical issues and require more resources for trialling and validation; ultimately the availability of resources will dictate the appropriate number of colours for each laboratory. Labs should regularly assess how many colours would be of benefit to them, and how many colours they can handle. These developments will continue to enhance the contribution of flow cytometry to laboratory diagnosis.
References
1. Craig FE, Foon KA. Blood 2008; 111: 3941–3967.
2. de Graaf MT, de Jongste AH, et al. Cytometry B Clin Cytom 2011; 80: 271–281.
3. Sewell WA, Smith SA. Pathology 2011; 43: 580–591.
4. Tembhare P, Yuan CM, Xi L, et al. Am J Clin Pathol 2011; 135: 890–900.
5. Roederer M. Cytometry 2001; 45: 194–205.
6. Mahnke YD, Roederer M. Clin Lab Med 2007; 27: 469–485, v.
7. Davis BH, Holden JT, et al. Cytometry B Clin Cytom 2007; 72(S 1): S5–13.
The authors
Sandy ABC Smith1 MSc, and William A Sewell1,2,3 MBBS, PhD
1 Immunology Department, SydPath, St Vincent’s Pathology, St Vincent’s Hospital Sydney, Victoria St, Darlinghurst, NSW 2010, Australia.
2 St Vincent’s Clinical School, University of NSW, NSW 2052, Australia.
3 Garvan Institute of Medical Research, Victoria St, Darlinghurst, NSW 2010, Australia
Progress in the management of prostate cancer
Although globally prostate cancer (PCa) is the second most common cancer in men after lung cancer, and around one in six men in the West will eventually be diagnosed with the disease, the majority of these patients will die of unrelated causes. Thus PCa management should ideally not only involve diagnosis and provision of the most appropriate therapy, but also a decision as to whether any treatment is actually necessary.
Traditional screening based on an elevated level (above 4 ng/mL) of the far-from-specific marker prostate-specific antigen (PSA) to diagnose PCa has led to over-diagnosis, unnecessary biopsies and over-treatment. It has also led to PCa cases with PSA levels below the cut-off value remaining undetected. The phi test, available in Europe since 2010 and very recently approved by the FDA, was developed to improve prostate cancer management. Intended for use in men with a PSA level in the range of 4–10 ng/mL, the test combines measurements of total PSA, free PSA and an isoform of free PSA, namely [-2]pro-PSA, to determine the probability of prostate cancer. The test does help to discriminate between PCa and benign disease, and reduces the number of negative prostate biopsies.
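The phi value itself is generally described as a simple combination of the three measurements, ([-2]proPSA / free PSA) × √(total PSA). The short sketch below computes it for an invented set of assay values; the example numbers are illustrative assumptions, not clinical cut-offs, and the unit convention should be checked against the assay documentation.

```python
import math

def phi(p2psa_pg_mL, free_psa_ng_mL, total_psa_ng_mL):
    """Prostate Health Index, commonly given as ([-2]proPSA / fPSA) * sqrt(tPSA).

    As usually described, [-2]proPSA is entered in pg/mL and free/total PSA in
    ng/mL; verify this convention against the assay documentation before use.
    """
    return (p2psa_pg_mL / free_psa_ng_mL) * math.sqrt(total_psa_ng_mL)

# Invented example values for a man with total PSA in the 4-10 ng/mL range
print(f"phi = {phi(p2psa_pg_mL=15.0, free_psa_ng_mL=1.0, total_psa_ng_mL=6.0):.1f}")
```

Higher phi values correspond to a higher calculated probability of prostate cancer, which is how the index helps triage which men in the 4–10 ng/mL range proceed to biopsy.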
However many elderly men diagnosed with a tumour confined to the prostate gland that would not have affected their survival are still undergoing aggressive and unnecessary therapy; the majority of these patients suffer from erectile dysfunction after treatment and many have urinary leakage and intestinal problems. There is still a major need for accurate, preferably blood tests to determine which elderly patients with PCa currently confined to the prostate gland are likely to suffer eventually from life-threatening metastatic prostate cancer.
Two papers published in the October issue of The Lancet Oncology give cause for optimism, however. It was found that whole blood gene profiling in men with metastatic castration-resistant prostate cancer (defined as disease progression despite androgen deprivation therapy) was able to stratify patients into two distinct prognostic groups. In addition, the European Medicines Agency (EMA) is about to approve a new drug, enzalutamide, taken orally once a day, for the treatment of metastatic castration-resistant prostate cancer. Data from Phase III clinical trials show that as well as extending life, the drug also improves quality of life by reducing pain and increasing appetite and energy levels.
Hopefully at least some of the unnecessary suffering resulting from PCa management will soon be alleviated.
Point-of-care platforms, genetic assays and wireless connectivity
, /in Featured Articles /by 3wmediaNucleic acids, which are among the best signatures of disease and pathogens, have traditionally been measured in centralised screening facilities using expensive instruments. Such tests are seldom available on point-of-care (POC) testing platforms. Advancements in simple microfluidics, cellphones and low-cost devices, isothermal and other novel amplification techniques, and reagent stabilisation approaches are now making it possible to bring some of the assays to POCs. This article highlights selected advancements in this area.
by Dr Robert Stedtfeld, Maggie Kronlein and Professor Syed Hashsham
Why point-of-care diagnostics?
Point-of-care diagnostics (POCs) bring selected capabilities of centralised screening to thousands of primary health care centres, hospitals, and clinics. Quick turnaround time, enhanced access to specialised testing by the physicians and patients, sample-in-result-out capability, simplicity, ruggedness and lower cost are among the leading reasons for the emergence of POCs. Another advantage of POCs is its flexibility to be adopted for assays that have received less attention and therefore are often “home brewed”, meaning an analyst develops it within the screening facility for routine patient care. The societal benefit–cost analysis of POCs may often exceed the traditional approaches by 10- to 100-fold. However, POCs must deliver the same quality of test results that is available with the existing centralised screening. Centralised screening is well established, has a performance record and analytical expertise ensuring reliability. POCs are emerging and, therefore, for successful integration into the overall healthcare system, POCs must provide an advantage over the existing system consisting of sample transport to a centralised location followed by analysis and reporting. Besides answering why POCs are better than the existing approaches, they must face validation and deployment challenges.
On the positive side, POCs are expected to have lower financial and acceptance barriers compared to what is faced by more expensive traditional approaches because of the need for lowering the cost of diagnostics in general. In 2011, the global in vitro testing market was $47.6 billion and projected to be $126.9 billion by 2022 (http://www.visiongain.com/). At present POCs constitute approximately one third of the total market – distributed in cardiac markers (31%), HbA1c (21%), cholesterol/lipids (16%), fecal occult blood (14%), cancer markers 98%), drug abuse (4%), and pregnancy (4%). Market forces critically determine the pace of technical development and deployment of POCs. Consider, for example, the global market for blood sugar testing (examples for genetic assays on POCs are non-existent) that is estimated to be $18 billion by 2015 and the alternative test, A1c that is only $272 million in 2012. Even though, A1c testing is now indispensable in managing diabetes, it has not received the priority it deserves due to much lower frequency of testing and therefore smaller market. Lowering the cost further, makes its deployment and diffusion even more challenging. Thus POCs must tackle the inherent bottleneck in their business model, i.e. how to succeed with an emerging or new technology, priced to be low cost, but without the access to market and high sales volumes – at least initially.
One option is to use the existing network of cellphones as one component of the POCs. Diagnostic tools based on cellphones and mobile devices have the potential to significantly reduce the economic burden and play an important role in providing universal healthcare. By 2015 the number of cellphone users will reach 1.4 billion and at least 500 million of them will have used health related applications (mHealth) in some form. Currently, more than 17,000 mHealth apps are available on various platforms. However, their ability to carry out genetic assays is yet to be harnessed. Out of the more than 2,500 genetic assays available, perhaps none are available on a mobile platform (GeneTests: www.genetests.org/). The coming decade is predicted to merge genomics, microfluidics and miniaturisation and multiply its impact many-fold by leveraging the resources and cellphone networks. Such platforms may allow the possibility of establishing an open source model for assays that are commercially not viable due to very low volumes.
A key question, and the focus of this article, is: can genetic assays that are currently possible only in centralised screening facilities be carried out on POC platforms? We believe that, through a combination of emerging molecular techniques, low-cost simple microfluidic systems, and some additional developments in detection systems and information transfer, it is possible to carry out genetic assays, including mutation detection, on POCs within the next 5 years, and possibly sequencing within a decade.
Existing POC-adaptable genetic technologies
Nucleic acid-based amplification techniques remain the most widely used analytical approach for genetic diagnostics. However, integrated systems capable of reliable detection with the sensitivity and specificity required for clinical applications are still scarce. In centralised screening facilities, quantitative polymerase chain reaction (qPCR) is the workhorse for genetic analyses. Compared to qPCR, isothermal amplification strategies have been recognised as a promising alternative, especially for POCs, because they avoid the complexity of implementing temperature cycling and the associated detection systems in POC devices. The advantages of isothermal amplification include high amplification yields (in some instances allowing a positive reaction to be observed with the naked eye), savings in power consumption because no temperature cycling is needed, and a short time to a positive amplification (as little as 5 minutes for larger copy numbers). Many isothermal techniques have been developed [1], including: loop-mediated isothermal amplification (LAMP), recombinase polymerase amplification (RPA), nucleic acid sequence-based amplification (NASBA), smart amplification process (SmartAmp), rolling circle amplification (RCA), multiple displacement amplification (MDA), helicase-dependent amplification (HDA), strand displacement amplification (SDA), isothermal and chimeric primer-initiated amplification (ICAN), cross-priming amplification (CPA), single primer isothermal amplification (SPIA), self-sustained sequence replication reaction (3SR), transcription mediated amplification (TMA), genome exponential amplification reaction (GEAR) and exponential amplification reaction (EXPAR).
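To put the "time to a positive amplification" figures in perspective, the short sketch below assumes idealised exponential amplification with a fixed doubling time and estimates how long a given number of starting copies would take to reach a detectable amount. The doubling time and detection threshold are illustrative assumptions, not measured values for any particular chemistry.

```python
# Illustrative only: time to detection under idealised exponential amplification.
# The doubling time and detection threshold are assumed values, not measurements.
import math

def minutes_to_detection(start_copies, doubling_time_min=0.75, detect_copies=1e9):
    """Solve start_copies * 2**(t / doubling_time_min) >= detect_copies for t."""
    doublings_needed = math.log2(detect_copies / start_copies)
    return doublings_needed * doubling_time_min

print(round(minutes_to_detection(1e6), 1))  # high starting copy number: ~7.5 min
print(round(minutes_to_detection(10), 1))   # low starting copy number: ~19.9 min
```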
The benefits of one isothermal technique over another will depend on the application of interest. Techniques requiring a large number of enzymes, or that are carried out at low temperature, may be less amenable to POCs than those that require a single enzyme. More than one enzyme will, in general, increase the cost, rigor and complexity of the amplification reaction in a POC. While a larger number of primer sets will increase specificity, it will also make the design of primers to target a certain phylogenetic group or divergent functional gene more difficult, if not impossible. This is because multiple target-specific regions are needed, each separated from the next by a certain distance (number of bases), and because incorporating degenerate bases into multiple primer sequences within an assay adds further complexity. Isothermal assay enzymes that work at low temperature (less than 40°C) may be at a disadvantage in hot and warm climatic conditions. However, an isothermal amplification strategy that directly incorporates primers/probes designed for previously validated qPCR assays, uses a single enzyme, can be performed at higher temperatures, and allows accurate quantification would greatly increase the attraction of isothermal amplification, ushering in a new era of point-of-care genetic diagnostics. The cost associated with licensing an amplification technique will also dictate whether it can be used for POC applications, specifically in low-resource settings.
Existing POC platforms for genetic analysis
Multiple platforms have been developed for POC genetic testing, with an emphasis on reduced cost and size together with simplicity, throughput and accuracy. Table 1 is a non-exhaustive list illustrating some of these capabilities. Ideally, POCs should simplify genetic analysis by accepting crude or unprocessed samples. All of the listed qPCR platforms automatically perform sample processing (cell lysis and DNA purification) directly within the cartridge into which the sample is dispensed. Compared to qPCR POCs, isothermal assay POCs have not focused as much on sample processing, for two reasons. First, isothermal assays are generally less affected by sample inhibitors and, in certain cases, may not require sample processing at all. Second, development of POCs based on isothermal assays has lagged because the assays themselves are relatively new to diagnostic applications.
Development of isothermal genetic POC devices, however, is relatively easy compared with qPCR devices, because isothermal genetic POCs can use components that are inexpensive, smaller and consume less power. Use of such components is possible due to the high product yields of isothermal amplification techniques. LAMP, for example, produces 10 µg of DNA in a 25 µl volume compared to 0.2 µg in PCR. This high yield can be quantified with less sophisticated optics than those used in qPCR devices. The Gene-Z platform [figure 1], for example, uses an array of individually controlled low-power light-emitting diodes for excitation and optical fibres (one for each reaction well) to channel light from each well to a single photodiode detector for real-time measurement [2].
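As an illustration of how little computation such a detector needs, the sketch below derives a time-to-positive from a series of photodiode readings. The one-minute sampling interval, the baseline-plus-three-standard-deviations threshold rule and all names are our own assumptions for this example and are not taken from the Gene-Z firmware.

```python
# Illustrative only: extracting a time-to-positive (Tt) from a real-time
# amplification curve, assuming one photodiode reading per well per minute.
from statistics import mean, stdev

def time_to_positive(readings, sample_interval_min=1.0, baseline_points=5):
    """Return the first time (minutes) the signal exceeds baseline + 3 SD,
    or None if the well never turns positive."""
    baseline = readings[:baseline_points]
    threshold = mean(baseline) + 3 * stdev(baseline)
    for i, value in enumerate(readings):
        if i >= baseline_points and value > threshold:
            return i * sample_interval_min
    return None

# A synthetic trace that starts to rise after a few minutes of flat baseline
curve = [100, 101, 99, 102, 100, 101, 103, 110, 140, 220, 400, 700, 1100, 1500]
print(time_to_positive(curve))  # prints 7.0 for this synthetic trace
```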
Although POCs are generally considered single-assay devices, multiplexing of targets (e.g. in co-infections) and analysing a given pathogen in greater depth (e.g. methicillin-resistant Staphylococcus aureus, or HIV genotyping) is becoming absolutely critical. Genetic analysis is expected to allow better resolution of genotype than is possible with immunoassays. Use of simple but powerful microfluidic chips (e.g. those used with Gene-Z or GenePOC) instead of conventional Eppendorf tubes can be advantageous in terms of cost and analytical power. Such microfluidic chips are steadily changing in shape, form and material and are bound to become simpler, better and more accessible. An example is the paper-based diagnostics platform developed by Whitesides' group [3]. Miniaturisation obviously leads to significant reagent cost savings, provided it does not run into detection-limit issues. Multiplexed detection also simplifies the analysis, since manual dispensing into individual reaction tubes is not required. For example, the chip used with Gene-Z does not require external active elements for sealing, pumping or distributing samples into individual reaction wells, eliminating the potential for contamination between chips or of the device.
Type of genetic assays on POCs
So what types of genetic assays are most likely to move to POCs first? For regions with excellent centralised screening, it may be those assays where getting the results quickly using POCs saves lives or has tangible long-term benefits, e.g. quickly determining the infection and its antibiotic resistance. The leading example is MRSA, for which resistance has continuously increased over the past few decades. It is now known that patients are more likely to acquire MRSA in wards where the organism is screened by culturing rather than by rapid molecular techniques. In such cases, detection of antibiotic resistance genes using a panel of genetic assays on POCs would minimise the practice of administering broad-spectrum antibiotics simply because results are not available soon enough.
In limited-resource settings, the examples of genetic testing by POCs are virtually endless – TB, malaria, dengue fever, HIV, flu, fungal infections and so on – because very little or no centralised screening occurs in such scenarios. The ability to measure dengue virus, for example, in 1–4 µl of blood could provide better tools for the 2.5 billion people who are at risk of infection and the 50–100 million people who contract it every year. Similarly, multidrug-resistant and extensively drug-resistant TB is a global concern due to the high cost of treatment. At present, large numbers of mutations cannot be measured simultaneously using POCs. However, apart from the fact that isothermal mutation assays are fewer and their primer design success rate is much lower than that of signature-marker probe/primer-based assays, there are no technical barriers. The availability of a simple isothermal mutation assay would go a long way towards making many genotyping-based diagnostics available on POCs.
In the long run, POCs may even be used to detect and quantify genetic markers associated with non-infectious diseases, such as cancer, and for selected assays focusing on human genetics. Globally, cancer is responsible for 7.6 million deaths (2008 data), projected to rise to 13.1 million by 2030. Simple and quantitative tools capable of measuring a panel of markers may play an additional role – they may help collect data on potentially useful but untested markers. Both PCR- and isothermal-based assays are amenable to this application using circulating tumour cells, circulating free DNA/RNA, gene mutations, and microRNA panels. Currently utilised methods of cancer detection are highly invasive and time-consuming. Minimally invasive methods on POCs may significantly increase the deployment of such capabilities.
Why do we need wireless connectivity for POCs?
With POCs comes the question of connectivity: is it a must-have or merely good to have? We see it as important to have, although a less capable form of the device may be deployed without connectivity. Wireless connectivity via cellular phones has many advantages. Paramount among them is access to the physician and/or nurse for expert input and support. Technical advantages include automated data transfer, more efficient reporting, time savings, and lower equipment costs, because the touch-screen user interface and the computational power needed for data analysis can be offloaded to the phone.
The use of cellphones is an obvious possibility due to their ubiquitous availability and the vast network of mobile services. “There are 7 billion people on Earth; 5.1 billion own a cellphone; 4.2 billion own a toothbrush” (Mobile Marketing Association Asia, 2011). By 2015 it is estimated that one third of cellphone users will have used a mobile health solution in some form. However, POC genetic diagnostics and mobile networking have not yet crossed paths. Some gene analysers (e.g. Gene-Z, Hunter) already have network-enabled wireless connectivity to bridge this gap. More work is needed, however. One critical element is that the transfer of data, including by wireless means, must meet the requirements of the Health Insurance Portability and Accountability Act of 1996 (HIPAA) Privacy and Security Rules set by the U.S. Department of Health and Human Services. FDA clearance standards and specifications are still evolving for this area [4].
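To give a sense of what the software side of such connectivity might involve, the sketch below pushes a de-identified result from a POC device to a reporting service over HTTPS. The endpoint, token and payload fields are invented for this example; a real deployment would additionally need HIPAA-grade access control, audit logging and encryption at rest.

```python
# Illustrative only: reporting a de-identified assay result over HTTPS.
# The URL, token and payload fields are hypothetical, not a real service API.
import requests

def report_result(device_id, assay, time_to_positive_min, api_token):
    payload = {
        "device_id": device_id,   # instrument identifier, not patient identity
        "assay": assay,           # e.g. the name of a LAMP assay
        "tt_minutes": time_to_positive_min,
        "result": "positive" if time_to_positive_min is not None else "negative",
    }
    resp = requests.post(
        "https://example.org/api/poc-results",             # hypothetical endpoint
        json=payload,
        headers={"Authorization": f"Bearer {api_token}"},  # transport secured by TLS
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()
```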
Impacts of the resulting products and devices are expected on both communicable and non-communicable diseases. Qualcomm Life provides a platform (2Net) that could be used for many different applications. According to the company, “The 2Net platform is designed as an end-to-end, technology-agnostic cloud-based service that interconnects medical devices so that information is easily accessible by device users and their healthcare providers and caregivers” (http://www.qualcommlife.com/). Although the famous medical scanner, the Tricorder of Star Trek fame, is not yet possible, the recently announced $10 million prize from the X PRIZE Foundation, sponsored by Qualcomm Life, for developing a Tricorder that can diagnose a set of 15 diseases without the intervention of a physician and weighs less than 2.3 kg is not too far from reality. In ten years, we should expect nothing less than a POC platform capable of sequencing-based diagnostics at an assay cost of less than a dollar.
References
1. Craw P, Balachandran W. Isothermal nucleic acid amplification technologies for point-of-care diagnostics: a critical review. Lab Chip 2012; 12: 2469–2486.
2. Stedtfeld RD, Tourlousse DM, Seyrig G, Stedtfeld TM, Kronlein M, Price S, Ahmad F, Erdogan G, Tiedje JM, Hashsham SA. Gene-Z: a device for point of care genetic testing using a smartphone. Lab Chip 2012; 12: 1454–1462.
3. Martinez AW, Phillips ST, Whitesides GM, Carrilho E. Diagnostics for the developing world: microfluidic paper-based analytical devices. Anal Chem 2010; 82: 3–10.
4. Draft Guidance for Industry and Food and Drug Administration Staff – Mobile Medical Applications. July 21, 2011. www.fda.gov/medicaldevices/deviceregulationandguidance/guidancedocuments/ucm263280.htm.
The authors
Robert Stedtfeld PhD, Maggie Kronlein and
Syed Hashsham, PhD*
Civil and Environmental Engineering
1449 Engineering Research Court Rm A127
Michigan State University
East Lansing, MI 48824, USA
*Corresponding author:
E-mail: hashsham@egr.msu.edu
An electronic nose for asthma diagnosis
An electronic nose consists of an array of chemical sensors for the detection of volatile organic compounds and an algorithm for pattern recognition. Breath analysis with an electronic nose has a high diagnostic performance for atopic asthma that can be increased when combined with measurement of fractional exhaled nitric oxide.
by Dr Paolo Montuschi
Several volatile organic compounds (VOCs) have been identified in exhaled breath in healthy subjects and patients with respiratory disease by gas-chromatography/mass spectrometry (GC/MS) [1]. An electronic nose (e-nose) is an artificial system that generally consists of an array of chemical sensors for volatile detection and an algorithm for pattern recognition [2]. Several types of e-noses are available. An e-nose has been used for distinguishing between asthmatic and healthy subjects [3,4], between patients with asthma of different severity [3], between patients with lung cancer and healthy subjects [5], between patients with lung cancer and COPD [6], and between patients with asthma and COPD [7].
We compared the diagnostic performance of an e-nose with fractional exhaled nitric oxide (FENO), an independent method for assessing airway inflammation, and with lung function testing in patients with asthma. We also investigated whether an e-nose could discriminate between asthmatic and healthy subjects and sought to establish the best sampling protocol (alveolar air vs oro-pharyngeal/airway air) for e-nose analysis. The results presented here are from a previously published study [4].
Methods
Study subjects
Twenty-four healthy subjects and 27 Caucasian patients with intermittent or mild persistent atopic asthma were studied [Table 1]. All asthmatic patients had a physician-based diagnosis of asthma, and the diagnosis and classification of asthma was based on clinical history, examination and pulmonary function parameters according to current guidelines [8]. Patients had intermittent asthma with symptoms equal to or less often than twice a week (step 1) or mild persistent asthma with symptoms more often than twice a week (step 2), forced expiratory volume in one second (FEV1) of 80% or greater of predicted value, and positive skin prick tests. Asthma patients were not taking any regular medication, but used inhaled short-acting β2-agonists as needed for symptom relief. Healthy subjects had no history of asthma and atopy, had negative skin prick tests and normal spirometry.
All subjects were never-smokers, had no upper respiratory tract infections in the previous 3 weeks, and were not being treated with corticosteroids or anti-inflammatory drugs for asthma in the previous 4 weeks.
Study design
The study was cross-sectional. Subjects attended on one occasion for clinical examination, FENO measurement, e-nose analysis, lung function tests, and skin prick testing. Informed consent was obtained from patients. The study was approved by the Ethics Committee of the Catholic University of the Sacred Heart, Rome, Italy.
Pulmonary function
Spirometry was performed with a Pony FX spirometer (Cosmed, Rome, Italy) and the best of three consecutive manoeuvres chosen.
Exhaled nitric oxide measurement
FENO was measured with the NIOX system (Aerocrine, Stockholm, Sweden) with a single breath on-line method at constant flow of 50 ml/sec according to American Thoracic Society guidelines [9].
Collection of exhaled breath
No food or drinks were allowed for at least 2 hours prior to breath sampling. Two procedures for collecting exhaled breath were followed to study the differences between total exhaled breath and alveolar breath [4]. Subjects were asked to inhale to total lung capacity and to exhale into a mouthpiece connected to a Tedlar bag through a three-way valve [3]. In the first sampling procedure, the first 150 ml, considered dead space volume, were collected into a separate Tedlar bag and discarded [Fig. 1a]. The remaining exhaled breath, derived principally from the alveolar compartment, was collected and immediately analysed with the e-nose [4]. In the second sampling procedure, total exhaled breath was collected [Fig. 1b] [4].
Electronic nose
A prototype e-nose (Libranose, University of Rome Tor Vergata, Italy), consisting of an array of eight quartz microbalance gas sensors coated with molecular films of metallo-porphyrins, was used [4]. E-nose responses are expressed as frequency changes for each sensor [Fig. 2] and then analysed by pattern recognition algorithms [2]. Results were automatically adjusted by subtracting ambient VOCs from the measurements.
Skin testing
Atopy was assessed by skin prick tests for common aeroallergens (Stallergenes, Antony, France).
Multivariate data analysis
A feed-forward neural network was used to classify the e-nose, FENO and spirometry data. A feed-forward neural network, a biologically inspired classification model, is formed by a number of processing units (neurons) organised in layers. The datasets were divided into a training set and a testing set: the first 27 measurements collected were used for training and the remaining 24 for testing.
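For readers who want to experiment with a comparable pipeline, the sketch below trains a small feed-forward network on eight-sensor responses using the same 27/24 split. The library, network architecture and the synthetic placeholder data are our own choices for illustration, not those of the analysis used in the study.

```python
# Illustrative only: a feed-forward network classifying eight-sensor e-nose
# responses as asthma (1) vs healthy (0), first 27 samples for training and
# the remaining 24 for testing. Data here are synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(51, 8))                  # placeholder frequency shifts per breath sample
y = (X[:, :4].sum(axis=1) > 0).astype(int)    # placeholder labels with a learnable pattern

X_train, y_train, X_test, y_test = X[:27], y[:27], X[27:], y[27:]

scaler = StandardScaler().fit(X_train)
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(scaler.transform(X_train), y_train)

accuracy = clf.score(scaler.transform(X_test), y_test)
print(f"Correct identifications in the test set: {accuracy:.1%}")
```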
Statistical analysis
FENO values were expressed as medians and interquartile ranges (25th and 75th percentiles), whereas spirometry values were expressed as mean ±SEM. Unpaired t-test and Mann–Whitney U test were used for comparing groups for normally distributed and nonparametric data, respectively. Correlation was expressed as a Pearson coefficient and significance defined as a value of P<0.05.
Results
Electronic nose
The best results were obtained when e-nose analysis was performed on alveolar air as opposed to total exhaled breath [Table 2]. Diagnostic performance was determined as the number of correct identifications of asthma in the test dataset. The combination of e-nose analysis of alveolar air and FENO had the highest diagnostic performance for asthma (95.8%). The e-nose alone (87.5%) had a discriminating capacity higher than that of FENO (79.2%), spirometry (70.8%), the combination of FENO and spirometry (83.3%), and the combination of e-nose analysis of total exhaled breath and FENO (83.3%) [Fig. 3].
Exhaled nitric oxide
Median FENO values were higher in asthmatic patients than in healthy subjects [37.6 (26.0–61.5) ppb vs 13.4 (10.0–19.9) ppb, P<0.0001].
Lung function tests
Both study groups had normal FEV1 values [Table 1]. Asthmatic patients had lower absolute (P = 0.032) and percentage of predicted FEV1 values (P = 0.004) than healthy subjects [Table 1]. Asthmatic patients had lower absolute (P = 0.003) and percentage of predicted forced expiratory flow between 25% and 75% of forced vital capacity (FEF25%–75%) (P = 0.002) than healthy subjects [Table 2].
Correlation between electronic nose, FeNO, and lung function tests
E-nose, FENO and lung function testing data were not correlated in either asthma or healthy control group.
Discussion
The original aspects of our study are:
1) the comparison between an e-nose and FENO, in addition to spirometry;
2) the comparison between total and alveolar exhaled air;
3) the analysis of data based on a neural network that included a training and a test analysis performed in two separate datasets for stringent quality control.
Our study indicates that an e-nose might be useful for asthma diagnosis, particularly in combination with FENO. Spirometry had the lowest diagnostic performance, consistent with the well-maintained lung function of patients with intermittent and mild persistent asthma. Our study confirms that FENO has a good diagnostic performance for asthma and suggests the possibility of combining different non-invasive techniques to achieve a greater diagnostic performance for asthma.
However, large, adequately powered studies are required to establish the diagnostic performance of the e-nose, FENO and lung function testing in asthma patients. Ascertaining whether an e-nose could be used for screening of asthmatic patients requires large prospective studies. Also, the e-nose is not suitable for identifying and quantifying individual VOCs in breath, for which GC/MS is required.
Asthma is principally characterised by airway inflammation. It may therefore seem surprising that the best results with the e-nose were obtained when collecting alveolar air rather than total exhaled breath, which includes exhaled breath from the airways. This might reflect the contribution of oro-pharyngeal air, which might introduce confounding factors that make e-nose analysis less reflective of what occurs within the respiratory system [10]. Moreover, the results of e-nose analysis of alveolar air could partially reflect the production of VOCs within the peripheral airways (mixed airways/alveolar air) owing to significant inter-individual variability in dead space volume.
The lack of correlation between the e-nose results and FENO might indicate that these techniques reflect different aspects of airway inflammation. Formal studies to ascertain whether the e-nose could be used for assessing and monitoring airway inflammation in asthmatic patients are warranted. The e-nose is not suitable for ascertaining the cellular source of breath VOCs. Persistent airway inflammation can modify metabolic pathways in patients with asthma. As the patients included in our study were not on regular anti-inflammatory drugs for asthma, we were unable to assess the effect of pharmacological treatment on breath VOCs, which requires controlled studies. Likewise, the effect of atopy on e-nose classification of asthma patients has to be addressed in future studies.
Validation of the classification model is essential. In our study, two different datasets for training and testing, obtained in different periods of time, were used. This way, the predictive capacity of the classification model is more suitable for a real life situation.
E-nose analysis is a non-invasive technique that is potentially applicable to respiratory medicine. Several methodological issues, including optimisation and standardisation of sample collection, transfer and storage of samples, use of calibration VOC mixtures, and qualitative and quantitative GC/MS analysis, still have to be addressed.
In conclusion, an e-nose discriminates between asthmatic and healthy subjects, and its use in combination with FENO increases its discriminatory ability. Large studies are required to establish the asthma diagnostic performance of the e-nose. Whether this integrated non-invasive approach will translate into earlier asthma diagnosis has still to be clarified.
Abbreviations
FEF25%–75%, forced expiratory flow at 25% to 75% of forced vital capacity; FeNO, fractional exhaled nitric oxide; FEV1, forced expiratory volume in one second; FVC, forced vital capacity; GC/MS, gas chromatography/mass spectrometry; PEF, peak expiratory flow; VOC, volatile organic compound.
Acknowledgements
This study was supported by Merck Sharp and Dohme, and the Catholic University of the Sacred Heart.
References
1. Phillips M, Herrera J, et al. Variation in volatile organic compounds in the breath of normal humans. J Chromatogr B Biomed Sci Appl 1999; 729: 75–88.
2. Montuschi P, Mores N, et al. The electronic nose in respiratory medicine. Respiration (DOI: 10.1159/000340044, in press).
3. Dragonieri S, et al. An electronic nose in the discrimination of patients with asthma and controls. J Allergy Clin Immunol 2007; 120: 856–862.
4. Montuschi P, et al. Diagnostic performance of an electronic nose, fractional exhaled nitric oxide and lung function testing in asthma. Chest 2010; 137: 790–796.
5. Machado R, et al. Detection of lung cancer by sensor array analyses of exhaled breath. Am J Respir Crit Care Med 2005; 171: 1286–1291.
6. Dragonieri S, et al. An electronic nose in the discrimination of patients with non-small cell lung cancer and COPD. Lung Cancer. 2009; 64: 166–170.
7. Fens N, et al. Exhaled breath profiling enables discrimination of chronic obstructive pulmonary disease and asthma. Am J Respir Crit Care Med 2009; 180: 1076–1082.
8. National Asthma Education and Prevention Program: Expert panel report III. Guidelines for the diagnosis and management of asthma. MD, Bethesda: National Heart, Lung, and Blood Institute, 2007; 1–61 (NIH publication no. 08-5847). Available at: www.nhlbi.nih.gov.
9. Recommendations for standardized procedures for the on-line and off-line measurement of exhaled lower respiratory nitric oxide and nasal nitric oxide in adults and children-1999: official statement of the American Thoracic Society 1999. Am J Respir Crit Care Med 1999; 160: 2104–2117.
10. van den Velde S, et al. Differences between alveolar air and mouth air. Anal Chem 2007; 79: 3425–3429.
The author
Paolo Montuschi, MD
Department of Pharmacology, Faculty of Medicine
Catholic University of the Sacred Heart
Largo F. Vito 1, 00168 Rome, Italy
E-mail: pmontuschi@rm.unicatt.it
Warfarin: a case for pharmacogenomics testing
The ability to use genetic information to inform clinical decision making has emerged as a new tool in clinical practice, with noteworthy examples across many areas of medicine. Cardiology and anticoagulation in particular have led the way in the translation of genetic findings into actionable clinical recommendations, spurred by the FDA's addition of genetically guided dosing to the drug label for warfarin. This review covers the pharmacogenomics of warfarin therapy.
by Dr Minoli Perera
Pharmacogenomics is primarily aimed at identifying genetic variation that influences inter-individual differences in drug response. The guiding principle is “the right drug, at the right dose for the right person”. Its application promises to enable targeted drug administration, improve therapeutic outcomes, and inform drug development. Pharmacogenomic insights have also improved our understanding of the underlying pathways and mechanisms behind adverse drug reactions. Such adverse reactions account for approximately 100 000 deaths per year in the US, and markedly increase healthcare costs. Advances made over the last 30 years in molecular biology, molecular medicine, and genomics have had a major impact on the development of pharmacogenomics.
Currently a variety of approaches are used to identify genetic variants associated with drug response. Commonly used strategies include the candidate gene approach and genome-wide association studies (GWAS). Candidate gene studies investigate single nucleotide polymorphisms (SNPs) that are correlated with drug response; these studies are usually restricted to genes or SNPs that have been shown to be involved in the pathway of drug action or drug clearance. Genome-wide association studies genotype up to 5 million SNPs spaced throughout the genome to identify, in an unbiased fashion, variants associated with the drug phenotype. Each of these methods has advantages and disadvantages. Genome-wide studies comprehensively cover the entire genome, but their power to detect moderate associations is greatly limited by the multiple-testing burden, i.e. the correction required to control false-positive associations. The candidate gene approach narrows the focus to a few important SNPs and therefore has higher power, but may miss the real causative SNP, and requires a priori knowledge for the selection of SNPs/genes to study.
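The multiple-testing burden mentioned above can be made concrete with a simple Bonferroni calculation; the test counts below are illustrative round numbers rather than those of any particular study.

```python
# Illustrative only: how the per-SNP significance threshold shrinks as the
# number of tests grows. Dividing a 0.05 family-wise error rate across about
# one million independent tests gives the conventional genome-wide threshold
# of 5e-8, whereas a small candidate-gene panel needs far less correction.
alpha = 0.05

for n_tests in (20, 1_000_000, 5_000_000):   # candidate-gene panel vs GWAS panels
    print(f"{n_tests:>9} tests -> per-SNP threshold {alpha / n_tests:.2e}")
```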
Both these methods have yielded important clinical findings that can immediately be used to reduce the incidence of adverse effects (many of which have been added to drug labels). Notable examples such as the SLCO1B1*5 polymorphism (associated with myopathy with statin use) and CYP2C19*2 (associated with clopidogrel non-response) have shown clinically meaningful outcomes related to genetic variants. However, the translation of these findings has been slower in coming and has many clinicians wondering about the utility of this new technology.
The “test case” for pharmacogenetics was thought to be pharmacogenetically guided warfarin dosing. In this review we cover the genetic polymorphisms affecting warfarin dose requirements and the currently available diagnostic tests, as a case study for the implementation of pharmacogenetics in the clinic.
Genes that affect warfarin dose
Numerous studies, predominantly conducted in Caucasian and Asian populations, demonstrate that the CYP2C9 and VKORC1 genotypes contribute significantly to warfarin dose variability [1–3]. The role of these gene products can be seen in Figure 1. The CYP2C9 enzyme metabolises the more active S-enantiomer of warfarin to inactive 7-hydroxywarfarin. Warfarin inhibits the enzyme VKOR (encoded by the gene VKORC1), preventing conversion of vitamin K epoxide to its reduced form, which is necessary for activation of the clotting factors II, VII, IX and X. Thus, SNPs in the CYP2C9 gene affect warfarin pharmacokinetics, whereas variation in the VKORC1 gene affects warfarin pharmacodynamics.
The most extensively studied CYP2C9 variants are the CYP2C9*2 and CYP2C9*3 alleles, which lead to significant reductions in CYP2C9 activity. Compared with people without these variants, carriers of the *2 or *3 alleles have S-warfarin clearance reduced to between 40 and 90% of normal levels. As a result, significantly lower doses are usually needed in individuals with a CYP2C9*2 or CYP2C9*3 allele. The CYP2C9 genotype is also implicated in the risk of bleeding during warfarin therapy, especially during the warfarin initiation period [4].
The VKORC1 genotype was originally recognised for causing warfarin resistance through mutations in the gene's coding region. More recently, common VKORC1 SNPs occurring in gene-regulatory regions and underlying usual warfarin dose variability were discovered [3]. Two such SNPs, one in the promoter region (-1639G>A) and one in intron 1 (1173C>T), show the strongest association and possible functional effects [5]. Thus, the majority of warfarin pharmacogenetic studies have focused on one of these two SNPs, which are in strong linkage disequilibrium across populations (meaning they are inherited together and strongly associated with each other), so only one of them needs to be taken into account for pharmacogenetic dosing of warfarin. Most investigators chose VKORC1 -1639G>A as the predictive SNP in warfarin pharmacogenetic studies. This SNP explains approximately 20–28% of the overall variability in dose requirements in Caucasians, but only 5–7% of the variability in African–Americans, mainly due to the difference in allele frequency between populations [6, 7]. Unlike CYP2C9, the VKORC1 genotype does not appear to affect the risk of bleeding with warfarin treatment [4].
These findings have been confirmed in several genome-wide association studies in Caucasian and Asian individuals, showing that the VKORC1 -1639G>A, CYP2C9*2 and CYP2C9*3 polymorphisms are the primary genetic determinants of warfarin dose requirements in these populations. The combination of VKORC1 -1639G>A, CYP2C9 (*2 and *3) and clinical factors (e.g. age, sex, weight and amiodarone use) explains approximately 55% of the total variance in warfarin maintenance dose in Caucasians, but only about 25% among African–Americans. With the exception of the CYP4F2 genotype, found in a GWAS of Swedish patients [8], no other genetic variant has met genome-wide significance for association with warfarin dose requirements. Both genetic and non-genetic variables have been included in dosing algorithms that can be used to predict dose, such as WarfarinDosing.org.
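To show what such a dosing algorithm looks like in practice, the sketch below reproduces the general form of a pharmacogenetic model such as the IWPC algorithm, i.e. a linear equation on the square root of the weekly dose. The coefficients are placeholders chosen only to illustrate the structure; they are not the published values, and this function must never be used for actual dosing (validated calculators are available at WarfarinDosing.org).

```python
# Illustrative only: the *form* of a pharmacogenetic warfarin dosing algorithm,
# which predicts sqrt(weekly dose in mg) from genotype and clinical covariates.
# All coefficients are placeholders, NOT the published IWPC values; do not use
# this for patient care.
def predicted_weekly_dose_mg(age_decades, height_cm, weight_kg,
                             vkorc1_minus1639_a_alleles, cyp2c9_star2_alleles,
                             cyp2c9_star3_alleles, takes_amiodarone):
    sqrt_dose = (
        5.6                                      # intercept (placeholder)
        + 0.009 * height_cm                      # dose rises with body size
        + 0.013 * weight_kg
        - 0.26 * age_decades                     # dose falls with age
        - 0.85 * vkorc1_minus1639_a_alleles      # 0, 1 or 2 copies of -1639A
        - 0.52 * cyp2c9_star2_alleles            # reduced-function CYP2C9 alleles
        - 0.94 * cyp2c9_star3_alleles
        - 0.55 * int(takes_amiodarone)           # interacting co-medication
    )
    return max(sqrt_dose, 0.0) ** 2

# A 65-year-old, 170 cm, 70 kg VKORC1 -1639 G/A heterozygote with one CYP2C9*3 allele:
print(round(predicted_weekly_dose_mg(6.5, 170, 70, 1, 0, 1, False), 1))  # ~20.8 mg/week
```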
Warfarin genetic testing and guidelines
Insurance companies consider genetic testing for CYP2C19 variants related to clopidogrel response and for the HLA-B*1502 allele, which predicts adverse effects of carbamazepine, to be “medically necessary” and may therefore cover the cost of these tests. The same does not hold for warfarin testing, which is considered investigational and will only be covered in the context of a clinical trial.
In 2007, the US Food and Drug Administration modified the package insert for warfarin to include information on the relationship of safe and effective dosage to SNPs in CYP2C9 and VKORC1, including a table of recommended doses for each genotype combination [Table 1]. These recommendations give the range of doses that should be considered when dosing a patient who carries any of the tested SNPs. However, there is increasing evidence that additional alleles beyond CYP2C9*2 and *3 and VKORC1 -1639G/A may play a role in warfarin dose response. These SNPs are not included in the FDA dose recommendations, and not all tests cover these additional variants.
Currently, four warfarin pharmacogenetic tests are available as in vitro diagnostic devices [Table 2]. All of these tests genotype three loci: CYP2C9*2, CYP2C9*3 and one VKORC1 SNP, either -1639G/A or 1173C/T (both of which give equivalent information because of the aforementioned linkage disequilibrium in all populations), with some including other known genetic variants associated with warfarin dose. All of the tests can be completed within 8 hours, including DNA extraction, with the fastest providing genotype results in less than 2 hours.
The Clinical Pharmacogenetics Implementation Consortium (CPIC) recently published guidelines on how to interpret and apply genetic test results to adjust warfarin doses [9]. These guidelines do not address when to order a genetic test, but rather how to dose warfarin when genetic test results are available. The guidelines strongly support the use of genetic information to guide warfarin dosing when genotype is known and recommend using either the International Warfarin Pharmacogenetics Consortium (IWPC) or Gage algorithm to do so.
Although the availability of FDA-cleared devices for warfarin pharmacogenetic testing makes genotype-guided warfarin initiation possible, several barriers to clinical adoption remain. First, many medical centres do not have warfarin pharmacogenetic testing available: in a recent survey, only 20% of hospitals in North America had testing available on site, suggesting that the majority rely on outside commercial clinical laboratories. This outsourcing may make genotype-guided warfarin initiation impractical because of a 3–7 day turnaround time. Second, no professional organisation endorses warfarin pharmacogenetic testing in its guidelines, because of the lack of clinical utility data. Inclusion of a testing recommendation in professional guidelines has been identified as a factor influencing reimbursement of new technology. As such, the Centers for Medicare and Medicaid Services (CMS) and many commercial insurance plans generally do not reimburse the cost of testing ($300–500). Because of these barriers, warfarin pharmacogenetic testing is performed mainly for research purposes and for patients willing to pay the cost.
Future perspective and conclusions
There are substantial and convincing data supporting the clinical and analytical validity of warfarin pharmacogenetics. The CYP2C9 and VKORC1 genes are the primary determinants of warfarin dose requirements, and several FDA-cleared tests are available for CYP2C9 and VKORC1 genotyping. However, genotype-guided warfarin dosing has not yet become a reality in most medical centres, despite the wealth of data supporting genetic influences on warfarin dose requirements. Many clinicians and third-party payers are awaiting evidence of clinical utility and cost-effectiveness before adopting genetic testing for anticoagulation management in the clinic setting. Results from ongoing clinical trials (such as the NIH-sponsored COAG trial) are expected to address these issues and will likely determine the course of genotype-guided anticoagulant therapy. Whether pharmacogenetics will have a role in treatment with newer anticoagulant agents has yet to be determined. However, the pharmacogenetics of these anticoagulants could be of great importance given the lack of routine monitoring parameters for these agents.
References
1. Wadelius M, et al. Blood 2009; 113: 784–792.
2. Klein TE, et al. N Engl J Med 2009; 360: 753–764.
3. Rieder MJ, et al. N Engl J Med 2005; 352: 2285–2293.
4. Limdi NA, et al. Clin Pharmacol Ther 2008; 83: 312–321.
5. Wang D, et al. Blood 2008; 112: 1013–1021.
6. Cavallari LH, et al. Clin Pharmacol Ther 2010; 87: 459–464.
7. Perera MA, et al. Clin Pharmacol Ther 2011; 89: 408–415.
8. Takeuchi F, et al. PLoS Genet 2009; 5: e1000433.
9. Johnson JA, et al. Clin Pharmacol Ther 2011; 90: 625–629.
10. Coumadin package insert. 2007. (Accessed October, 2007, at http://www.bms.com/cgi-bin/anybin.pl?sql=PI_SEQ=91.)
The author
Minoli A Perera, PharmD., PhD
Knapp Center for Biomedical Discovery
Room 3220B, University of Chicago, 900 E.
57th Street, Chicago, IL 60637, USA
E-mail: mperera@bsd.uchicago.edu
Managing chronic disease: do clinical labs hold the key?
Chronic diseases are placing an increasingly heavy burden on the healthcare systems of both developed and emerging countries. Together with renewed prevention strategies based on systematic and coordinated approaches, clinical laboratories will have an essential role to play with the advent of new biomarkers and the development of e-health systems.
Chronic diseases are acknowledged to be one of the biggest challenges for healthcare systems. Traditionally, chronic diseases were non-communicable: according to World Health Organization (WHO) data [1], they consist of four major groups – cardiovascular diseases, cancers, chronic respiratory diseases and diabetes – as well as some neuropsychiatric disorders and arthritis. More recently, an increase in survival rates for infectious and genetic diseases has led to expanding the definition to certain communicable diseases (such as HIV/AIDS) as well as genetic disorders like cystic fibrosis.
Attention to chronic diseases has been growing, largely due to three factors:
1. Ageing populations.
2. Early detection, or ‘secondary prevention’.
3. E-health – the possibility offered by sophisticated at-home monitoring and timely treatment.
Ageing populations
The elderly are far more susceptible to chronic disease. In the US, some 10% of Medicare beneficiaries, almost all with chronic disease, account for three-quarters of its budget. [2] Per capita spending is 3–10 times higher for older adults with chronic diseases than for those without. [3] In Europe, the EU Council has noted the “enormous burden” posed by chronic diseases and also warned that the next decade (2011–2020) will see this grow further due to an ageing population. [4]
Early detection
The early detection of chronic disease has been revolutionized by virtue of innovative and ever-faster diagnostic techniques in clinical laboratories. Clinical laboratories have, for some years, taken the lead in reducing the gap between the evolution of a chronic disease and interventional treatment, both at home and in the hospital.
In 2007, a report by the influential Milken Institute think-tank made a powerful argument for including prevention and early detection, rather than treatment alone, in the US debate on funding healthcare. The report, titled ‘An Unhealthy America: The Economic Burden of Chronic Disease’ [5], was one of the most ambitious attempts to quantify the reduction in case burden that could be achieved by such strategic reorientation: a drop of as many as 40 million cases of chronic disease in the year 2023, in the US alone. At the time of the report’s launch, former US Surgeon General Richard Carmona noted that the biggest problem with the present healthcare system was that it waited for people to get sick and then treated them at high cost.
The story is similar in Europe. Though EU-wide statistics do not yet exist, in the UK, half of hospital bed day use is accounted for by only 2.7% of all medical conditions, most of which are chronic diseases. [6] The EU Commission has called for technology-driven strategies to permit both early detection and timely monitoring of chronic disease – and do this in the context of healthy ageing.
As in the US, much European thinking about managing the burden of chronic disease involves e-Health, especially in the context of structured programmes of home care for patients. In January 2007, a major EU Commission study called “Healthy Ageing: Keystone for a Sustainable Europe” [7] approvingly highlighted a Swedish program called ‘Preventive Home Visits’ as leading to both a decrease in GP visits and lower mortality. It called for promoting and using such best-of-class practices across the EU.
E-health and clinical laboratories
All such plans essentially consist of remote acquisition of patient data using lower-skilled and mobile personnel. The data are transferred in real or near-real time for remote interpretation at a clinical laboratory, followed by consultation with a physician (e.g. to modify dosage or change medicines) or transfer of the patient for intervention at a hospital.
The role of the clinical laboratory in e-Health is already advanced in telepathology. Though some telepathology efforts have aimed at remote manipulation of diagnostic equipment, the more proven approach has been to transmit images from a slide. Such systems have been in use since the mid-1990s, especially in sparsely populated areas such as parts of Canada and the north-western US, and in Norway and Sweden. France’s RESINTEL was, however, one of the first systems to establish that telepathology was at least as reliable as a physical slide examination, in a transatlantic pilot project. [8]
The largest application for telepathology has so far been in cytology. Nevertheless, microbiologists have been remotely interpreting gram stains, and hematologists have reported success with blood films.
Biomarkers: promises and challenges
The next frontier is likely to be biomarkers – pre-symptomatic signals of early disease states, detectable in blood/serum. In 2011, an article by 61 healthcare experts from Europe, the US, Brazil, Russia, India, China and some other countries called for a systemic approach to combat chronic disease, with a roadmap “for predictive, preventive, personalized and participatory (P4) medicine.” [9] The core of the proposal is to systematically identify biomarkers, which would then (progressively) be used to chart out a matrix of co-morbidities, disease severity and progression – including the critical trigger signals which predict the occurrence of abrupt transitions in the stages of a chronic disease.
The authors of the above paper cite an in-depth study on the clinical impact of telemedicine in four major chronic diseases – diabetes, asthma, heart failure and hypertension, [10] and propose that continuous monitoring of individual clinical histories and their development would be a key source of primary data, to build up a robust and extensive knowledge management infrastructure.
The role of clinical laboratories in much of the above system – from biomarker discovery to the monitoring of patients – is evident. At the moment, tests on the bulk of approved biomarkers (such as Oncotype DX and Trofile) are conducted in large reference laboratories. However, a great deal of research is also being directed at tests for use at home or at the point of care; for example, CRP (C-reactive protein) and procalcitonin are biomarkers that differentiate between bacterial and viral pneumonia in less than an hour and reduce the use of precautionary antibiotics.
Nevertheless, there is still some way to go before biomarkers and systemic/personal approaches to medication and treatment of chronic disease become commonplace. Most barriers are regulatory, and are a consequence of the relative novelty of biomarkers – and their potentially sweeping impact.
In the light of this, the challenge for clinical laboratories will be to develop acceptable technical standards for the use of biomarkers, jointly with regulators and manufacturers. Clearly, given the massive challenge posed by chronic diseases in the decades ahead, any serious solution will have to involve a combination of biomarker-based personalized medicine, at-home care and clinical laboratories.
References
1. http://www.who.int/nmh/Actionplan-PC-NCD-2008.pdf
2. Berk ML, Monheit AC. The Concentration of Health Expenditures: An Update. HealthAffairs 1992; 11 (4): 145–149.
3. Fishman P, et al. Chronic Care Costs in Managed Care. Health Affairs 1997; 16 (3): 239–247.
4. http://www.consilium.europa.eu/uedocs/cms_Data/docs/pressdata/en/lsa/118282.pdf
5. http://www.milkeninstitute.org/healthreform/pdf/AnUnhealthyAmericaExecSumm.pdf
6. Chronic Disease management – a compendium of information, UK Department of Health, May 2004
7. http://ec.europa.eu/health/archive/ph_information/indicators/docs/healthy_ageing_en.pdf
8. http://pubmedcentralcanada.ca/pmcc/articles/PMC2579163/pdf/procascamc00009-0625.pdf
9. http://genomemedicine.com/content/3/7/43#B46
10. Pare G, Moqadem K, Pineau G, St-Hilaire C. Clinical effects of home telemonitoring in the context of diabetes, asthma, heart failure and hypertension: a systematic review. J Med Internet Res 2010; 12: e21.
11. http://ec.europa.eu/research/health/pdf/biomarkers-for-patient-stratification_en.pdf
12. http://www.phgfoundation.org/file/3998/
Scientific literature: autoimmunity
There are many peer-reviewed papers covering autoimmunity, and it is frequently difficult for healthcare professionals to keep up with the literature. As a special service to our readers, CLI presents a few key literature abstracts from the clinical and scientific literature chosen by our editorial board as being particularly worthy of attention.
Unraveling multiple MHC gene associations with systemic lupus erythematosus: model choice indicates a role for HLA alleles and non-HLA genes in Europeans
Morris DL et al. Am J Hum Genet. 2012; doi: 10.1016/j.ajhg.2012.08.026.
In order to determine the association with both SNPs and classical human-leukocyte-antigen (HLA) alleles, a meta-analysis of the major-histocompatibility-complex (MHC) region in systemic lupus erythematosus (SLE) was performed. Results from six studies and well-known out-of-study control data sets were combined, providing 3701 independent SLE cases and 12 110 independent controls of European ancestry. The study used genotypes for 7199 SNPs within the MHC region and for classical HLA alleles (typed and imputed). The results from conditional analysis and model choice with the use of the Bayesian information criterion showed that the best model for SLE association includes both classical loci (HLA-DRB1*03:01, HLA-DRB1*08:01, and HLA-DQA1*01:02) and two SNPs, rs8192591 (in class III and upstream of NOTCH4) and rs2246618 (MICB in class I). The authors’ approach was to perform a stepwise search from multiple baseline models deduced from a priori evidence on HLA-DRB1 lupus-associated alleles, a stepwise regression on SNPs alone, and a stepwise regression on HLA alleles. This enabled them to identify a model that was a much better fit to the data than one identified by simple stepwise regression either on SNPs alone [Bayes factor (BF) > 50] or on classical HLA alleles alone (BF > 1,000).
Cellular targeting in autoimmunity
Rogers JL et al. Curr Allergy Asthma Rep. 2012; doi: 10.1007/s11882-012-0307-y.
Many biologic agents that were first approved for the treatment of malignancies are now being actively investigated and used in a variety of autoimmune diseases such as rheumatoid arthritis (RA), antineutrophil cytoplasmic antibody (ANCA)-associated vasculitis, systemic lupus erythematosus (SLE), and Sjogren’s syndrome. The relatively recent advance of selective immune targeting has significantly changed the management of autoimmune disorders and in part can be attributed to the progress made in understanding effector cell function and their signalling pathways. This review discusses the recent FDA-approved biologic therapies that directly target immune cells as well as the most promising investigational drugs affecting immune cell function and signalling for the treatment of autoimmune disease.
Mechanisms of premature atherosclerosis in rheumatoid arthritis and lupus
Kahlenberg JM & Kaplan MJ. Annu Rev Med. 2012; doi: 10.1146/annurev-med-060911-090007.
Rheumatoid arthritis (RA) and systemic lupus erythematosus (SLE), the two most common systemic autoimmune disorders, have both unique and overlapping manifestations. One feature they share is a significantly enhanced risk of atherosclerotic cardiovascular (CV) disease that significantly contributes to morbidity and mortality. The primary mechanisms that drive CV damage in these diseases remain to be fully characterized, but recent discoveries indicate that distinct inflammatory pathways and immune dysregulation characteristic of RA and SLE are likely to play prominent roles. This review focuses on analysing the major mechanisms and pathways that are potentially implicated in the acceleration of atherothrombosis and CV risk in SLE and RA, as well as in the identification of putative preventive strategies that may mitigate vascular complications in systemic autoimmunity.
The role of epigenetic mechanisms and processes in autoimmune disorders
Greer JM & McCombe PA. Biologics 2012; 6: 307–27.
The lack of complete concordance of autoimmune disease in identical twins suggests that non-genetic factors play a major role in determining disease susceptibility. This review considers how epigenetic mechanisms could affect the immune system and effector mechanisms in autoimmunity and/or the target organ of autoimmunity and thus affect the development of autoimmune diseases. The authors also discuss the types of stimuli that lead to epigenetic modifications and how these relate to the epidemiology of autoimmune diseases and the biological pathways operative in different autoimmune diseases. Increasing our knowledge of these epigenetic mechanisms and processes will increase the prospects for controlling or preventing autoimmune diseases in the future through the use of drugs that target the epigenetic pathways.