Nucleic acids, which are among the best signatures of disease and pathogens, have traditionally been measured in centralised screening facilities using expensive instruments. Such tests are seldom available on point-of-care (POC) testing platforms. Advancements in simple microfluidics, cellphones and low-cost devices, isothermal and other novel amplification techniques, and reagent stabilisation approaches are now making it possible to bring some of the assays to POCs. This article highlights selected advancements in this area.
by Dr Robert Stedtfeld, Maggie Kronlein and Professor Syed Hashsham
Why point-of-care diagnostics?
Point-of-care diagnostics (POCs) bring selected capabilities of centralised screening to thousands of primary health care centres, hospitals and clinics. Quick turnaround time, enhanced access to specialised testing for physicians and patients, sample-in-result-out capability, simplicity, ruggedness and lower cost are among the leading reasons for the emergence of POCs. Another advantage of POCs is their flexibility to be adapted for assays that have received less attention and are therefore often “home brewed”, meaning that an analyst develops them within the screening facility for routine patient care. The societal benefit-to-cost ratio of POCs may often exceed that of traditional approaches by 10- to 100-fold. However, POCs must deliver the same quality of test results that is available from existing centralised screening. Centralised screening is well established, with a performance record and analytical expertise that ensure reliability. POCs are emerging and, for successful integration into the overall healthcare system, must therefore offer an advantage over the existing system of sample transport to a centralised location followed by analysis and reporting. Besides demonstrating why they are better than existing approaches, POCs must also face validation and deployment challenges.
On the positive side, POCs are expected to face lower financial and acceptance barriers than more expensive traditional approaches because of the general need to lower the cost of diagnostics. In 2011, the global in vitro testing market was $47.6 billion and is projected to reach $126.9 billion by 2022 (http://www.visiongain.com/). At present POCs constitute approximately one third of the total market, distributed among cardiac markers (31%), HbA1c (21%), cholesterol/lipids (16%), fecal occult blood (14%), cancer markers (8%), drugs of abuse (4%), and pregnancy (4%). Market forces critically determine the pace of technical development and deployment of POCs. Consider, for example, the global market for blood sugar testing (examples of genetic assays on POCs are non-existent), estimated to reach $18 billion by 2015, versus the alternative test, A1c, whose market was only $272 million in 2012. Even though A1c testing is now indispensable in managing diabetes, it has not received the priority it deserves because its much lower testing frequency translates into a smaller market. Lowering the cost further makes deployment and diffusion even more challenging. Thus, POCs must tackle the inherent bottleneck in their business model, i.e. how to succeed with an emerging or new technology, priced to be low cost, but without access to markets and high sales volumes, at least initially.
One option is to use the existing network of cellphones as one component of POCs. Diagnostic tools based on cellphones and mobile devices have the potential to significantly reduce the economic burden and to play an important role in providing universal healthcare. By 2015 the number of smartphone users is expected to reach 1.4 billion, and at least 500 million of them will have used health-related applications (mHealth) in some form. Currently, more than 17,000 mHealth apps are available on various platforms. However, their ability to carry out genetic assays is yet to be harnessed. Of the more than 2,500 genetic assays available, perhaps none is available on a mobile platform (GeneTests: www.genetests.org/). The coming decade is predicted to merge genomics, microfluidics and miniaturisation, and to multiply their impact many-fold by leveraging existing resources and cellphone networks. Such platforms may also make it possible to establish an open-source model for assays that are commercially not viable due to very low volumes.
A key question, and the focus of this article, is whether genetic assays that are currently possible only in centralised screening facilities can be carried out on POC platforms. We believe that, through a combination of emerging molecular techniques, low-cost simple microfluidic systems, and some additional developments in detection systems and information transfer, it will be possible to carry out genetic assays, including mutation detection, on POCs within the next 5 years, and possibly sequencing within a decade.
Existing POC-adaptable genetic technologies
Nucleic acid amplification remains the most widely used analytical technique for genetic diagnostics. However, integrated systems capable of reliable detection with the sensitivity and specificity required for clinical applications are still scarce. In centralised screening facilities, quantitative polymerase chain reaction (qPCR) is the workhorse for genetic analyses. Compared to qPCR, isothermal amplification strategies have been recognised as a promising alternative, especially for POCs, because of the complexity of establishing temperature cycling and detection systems in POC devices. The advantages of isothermal amplification include high amplification yields (in some instances allowing a positive reaction to be observed with the naked eye), savings in power consumption because no temperature cycling is needed, and short times to positive amplification (as low as 5 minutes for larger copy numbers). Many isothermal techniques have been developed [1], including: loop-mediated isothermal amplification (LAMP), recombinase polymerase amplification (RPA), nucleic acid sequence-based amplification (NASBA), smart amplification process (SmartAmp), rolling circle amplification (RCA), multiple displacement amplification (MDA), helicase-dependent amplification (HDA), strand displacement amplification (SDA), isothermal and chimeric primer-initiated amplification (ICAN), cross-priming amplification (CPA), single primer isothermal amplification (SPIA), self-sustained sequence replication reaction (3SR), transcription-mediated amplification (TMA), genome exponential amplification reaction (GEAR) and exponential amplification reaction (EXPAR).
The benefits of one isothermal technique over another depend on the application of interest. Techniques requiring a large number of enzymes, or that are carried out at low temperature, may be less amenable to POCs than those requiring a single enzyme. More than one enzyme will, in general, increase the cost, rigor and complexity of the amplification reaction in a POC. While a larger number of primer sets increases specificity, it also makes the design of primers targeting a given phylogenetic group or divergent functional gene more difficult, if not impossible. This is because of the need for multiple target-specific regions, each a certain distance (in bases) from the next, and the increased complexity of incorporating degenerate bases into multiple primer sequences within an assay (see the sketch below). Isothermal assay enzymes that work at low temperature (less than 40°C) may be at a disadvantage in hot and warm climates. Conversely, an isothermal amplification strategy that directly incorporates primers/probes designed for previously validated qPCR assays, uses a single enzyme, can be performed at higher temperatures and allows accurate quantification would greatly increase the attraction of isothermal amplification, ushering in a new era of point-of-care genetic diagnostics. The cost of licensing an amplification technique will also dictate whether it can be used for POC applications, specifically in low-resource settings.
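As an illustration of these design constraints, the short sketch below checks two of the properties discussed above: how degenerate bases multiply the effective primer pool, and whether candidate target regions sit at acceptable distances from one another. The spacing limits and example sequences are invented for illustration and are not validated design rules.

```python
# Illustrative primer-design checks; limits and sequences are hypothetical.
IUPAC_FOLD = {"R": 2, "Y": 2, "S": 2, "W": 2, "K": 2, "M": 2,
              "B": 3, "D": 3, "H": 3, "V": 3, "N": 4}

def degeneracy(primer: str) -> int:
    """Number of distinct sequences encoded by a degenerate primer."""
    n = 1
    for base in primer.upper():
        n *= IUPAC_FOLD.get(base, 1)
    return n

def spacing_ok(regions, min_gap=0, max_gap=60):
    """Check that consecutive target regions (start, end) fall within allowed gaps."""
    return all(min_gap <= s2 - e1 <= max_gap
               for (s1, e1), (s2, e2) in zip(regions, regions[1:]))

print(degeneracy("ACGTRNCT"))                      # R x N = 2 x 4 = 8 variants
print(spacing_ok([(0, 20), (35, 55), (70, 90)]))   # True: gaps of 15 bases
```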
Existing POC platforms for genetic analysis
Multiple platforms have been developed for POC genetic testing with an emphasis on reduced cost and size, and improved throughput, accuracy and simplicity. Table 1 is a non-exhaustive list illustrating some of the capabilities. Ideally, POCs should simplify genetic analysis by accepting crude or unprocessed samples. All of the listed qPCR platforms automatically perform sample processing (cell lysis and DNA purification) directly within the cartridge into which the sample is dispensed. Compared to qPCR POCs, isothermal assay POCs have not focused as much on sample processing, for two reasons. First, isothermal assays are generally less influenced by sample inhibitors and in certain cases may not require sample processing at all. Second, development of POCs based on isothermal assays has lagged because the assays themselves are relatively new to diagnostic applications.
Development of isothermal genetic POC devices is, however, relatively easy compared to qPCR devices, because isothermal genetic POCs can use components that are inexpensive, small and low-power. Use of such components is possible due to the high product yields of isothermal amplification techniques. LAMP, for example, produces 10 µg of DNA in a 25 µl volume, compared to 0.2 µg for PCR. This high yield can be quantified with far less sophisticated optics than those used in qPCR devices. The Gene-Z platform [Figure 1], for example, uses an array of individually controlled low-power light-emitting diodes for excitation and optical fibres (one per reaction well) to channel the light from each well to a single photodiode detector for real-time measurement [2].
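A quick back-of-the-envelope check, using only the yield figures quoted above, makes the optics argument concrete:

```python
# Product concentrations implied by the yields quoted above (illustrative arithmetic).
lamp_ug, pcr_ug, volume_ul = 10.0, 0.2, 25.0

print(f"LAMP: {lamp_ug / volume_ul:.3f} ug/ul")   # 0.400 ug/ul
print(f"PCR:  {pcr_ug / volume_ul:.3f} ug/ul")    # 0.008 ug/ul
print(f"Ratio: {lamp_ug / pcr_ug:.0f}x")          # 50x more product
```

A roughly 50-fold excess of product is what allows a simple LED/photodiode pair to stand in for the more sensitive optics of a benchtop qPCR instrument.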
Although POCs are generally considered single-assay devices, multiplexing of targets (e.g. in co-infections) and analysing a given pathogen in greater depth (e.g. methicillin-resistant Staphylococcus aureus, or HIV genotyping) is becoming absolutely critical. Genetic analysis is expected to allow better resolution of genotype than is possible with immunoassays. Use of simple but powerful microfluidic chips (e.g. those used with Gene-Z or GenePOC) instead of conventional Eppendorf tubes can be advantageous in terms of cost and power of analysis. Such microfluidic chips are continually changing in shape, form and material, and are bound to become simpler, better and more accessible. An example is the paper-based diagnostics platform developed by Whitesides’ group [3]. Miniaturisation obviously leads to significant reagent cost savings, provided it does not run into detection-limit issues. Multiplexed detection also simplifies the analysis, since manual dispensing into individual reaction tubes is not required. For example, the chip used with Gene-Z requires no external active elements for sealing, pumping or distributing samples into individual reaction wells, eliminating the potential for contamination between chips or of the device.
Types of genetic assays on POCs
So what types of genetic assays are likely to move to POCs first? For regions with excellent centralised screening, it may be those assays where getting results quickly via POCs saves lives or has tangible long-term benefits, e.g. rapidly determining an infection and its antibiotic resistance. The leading example is MRSA, for which resistance has continuously increased over the past few decades. It is now known that patients are more likely to acquire MRSA in wards where the organism is screened by culturing rather than by rapid molecular techniques. In such cases, detection of antibiotic resistance genes using a panel of genetic assays on POCs would minimise the practice of administering broad-spectrum antibiotics simply because specific results are not available soon enough.
In limited-resource settings, the examples of genetic testing by POCs are virtually endless: TB, malaria, dengue fever, HIV, flu, fungal infections and so on. This is because very little or no centralised screening occurs in such settings. The ability to measure dengue virus, for example, in 1–4 µl of blood could provide better tools for the 2.5 billion people at risk of infection and the 50–100 million people who contract it every year. Similarly, multidrug-resistant and extensively drug-resistant TB is a global concern due to the high cost of treatment. At present, large numbers of mutations cannot be measured simultaneously using POCs. However, apart from the fact that isothermal mutation assays are fewer and that the success rate for primer development is much lower than for signature-marker probe/primer-based assays, there are no technical barriers. The availability of a simple isothermal mutation assay would go a long way towards making many genotyping-based diagnostics available on POCs.
In the long run, POCs may even be used to detect and quantify genetic markers associated with non-infectious diseases, such as cancer, and for selected assays focusing on human genetics. Globally, cancer is responsible for 7.6 million deaths (2008 data), projected to rise to 13.1 million by 2030. Simple and quantitative tools capable of measuring a panel of markers may play an additional role: they may help collect data on potentially useful but untested markers. Both PCR- and isothermal-based assays are amenable to this application using circulating tumour cells, circulating free DNA/RNA, gene mutations and microRNA panels. Currently used methods of cancer detection are highly invasive and time consuming. Minimally invasive methods on POCs may significantly increase the deployment of such capabilities.
Why do we need wireless connectivity for POCs?
With POCs comes the question of connectivity. Is it a must, or merely good to have? We envision that it is important to have, although a less capable device could still be deployed without connectivity. Wireless connectivity via cellular phones has many advantages. Paramount among them is access to the physician and/or nurse for expert input and support. Technical advantages include automated data transfer, increased efficiency in reporting, time savings, and lower equipment costs, since the touch-screen user interface and the computational power needed for data analysis can be offloaded to the phone.
The use of cellphones is an obvious possibility given their ubiquitous availability and the vast network of mobile services: “There are 7 billion people on Earth; 5.1 billion own a cellphone; 4.2 billion own a toothbrush” (Mobile Marketing Association Asia, 2011). By 2015, an estimated one third of cellphone users will have used a mobile health solution in some form. However, POC genetic diagnostics and mobile networking have not yet crossed paths. Some gene analysers (e.g. Gene-Z, Hunter) already have network-enabled wireless connectivity to bridge this gap, but more work is needed. One critical element is that data transfer, including in wireless mode, must meet the requirements of the Health Insurance Portability and Accountability Act of 1996 (HIPAA) Privacy and Security Rules set by the U.S. Department of Health and Human Services. FDA clearance standards and specifications for this area are still evolving [4].
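As a purely hypothetical sketch of such connectivity, the snippet below posts a de-identified assay result to a placeholder server over TLS using Python’s requests library. The endpoint, token and payload fields are invented, and meeting HIPAA requirements would additionally involve audit trails, access controls and appropriate agreements; this is not a compliance recipe.

```python
# Hypothetical upload of a de-identified POC result; URL, token and fields are invented.
import requests

payload = {
    "device_id": "genez-0042",       # device, not patient, identifier
    "assay": "mecA",                 # target gene for the test performed
    "result": "positive",
    "time_to_positive_min": 18.5,
}
response = requests.post(
    "https://example-lab-portal.org/api/v1/results",     # placeholder URL
    json=payload,
    headers={"Authorization": "Bearer <access-token>"},  # placeholder token
    timeout=10,      # fail fast on a poor cellular link
    verify=True,     # enforce TLS certificate validation
)
response.raise_for_status()   # surface any server-side rejection
```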
Impacts of the resulting products and devices are expected on both communicable and non-communicable diseases. Qualcomm Life provides a platform (2Net) that could be used for many different applications. According to the company, “The 2Net platform is designed as an end-to-end, technology-agnostic cloud-based service that interconnects medical devices so that information is easily accessible by device users and their healthcare providers and caregivers” (http://www.qualcommlife.com/). Although the famous medical scanner, the Tricorder of Star Trek fame, is not yet possible, the recently announced $10 million prize from the X PRIZE Foundation, sponsored by Qualcomm Life, for developing a Tricorder that can diagnose a set of 15 diseases without physician intervention and weighs less than 2.3 kg, is not too far from reality. In ten years, we should expect nothing less than a POC platform capable of sequencing-based diagnostics at an assay cost of less than a dollar.
References
1. Craw P, Balachandran W. Isothermal nucleic acid amplification technologies for point-of-care diagnostics: a critical review. Lab Chip 2012; 12: 2469–2486.
2. Stedtfeld RD, Tourlousse DM, Seyrig G, Stedtfeld TM, Kronlein M, Price S, Ahmad F, Erdogan G, Tiedje JM, Hashsham SA. Gene-Z: a device for point of care genetic testing using a smartphone. Lab Chip 2012; 12: 1454–1462.
3. Martinez AW, Phillips ST, Whitesides GM, Carrilho E. Diagnostics for the developing world: microfluidic paper-based analytical devices. Anal Chem 2010; 82: 3–10.
4. Draft Guidance for Industry and Food and Drug Administration Staff – Mobile Medical Applications. July 21, 2011. www.fda.gov/medicaldevices/deviceregulationandguidance/guidancedocuments/ucm263280.htm.
The authors
Robert Stedtfeld PhD, Maggie Kronlein and
Syed Hashsham, PhD*
Civil and Environmental Engineering
1449 Engineering Research Court Rm A127
Michigan State University
East Lansing, MI 48824, USA
*Corresponding author:
E-mail: hashsham@egr.msu.edu
Heart fatty acid binding protein and troponin: a match made in heaven?
Plasma levels of heart-type fatty acid binding protein (H-FABP) have been shown to rise early after the onset of acute myocardial infarction (AMI). Recent evidence suggests combining H-FABP with troponin gives superior diagnostic accuracy compared to the alternative ‘early markers’ of myocardial necrosis, creatine kinase-MB (CK-MB) and myoglobin. However, using a single measurement at the time of presentation to the Emergency Department (ED), H-FABP is unlikely to have sufficient sensitivity to safely ‘rule out’ AMI, even when combined with a standard troponin assay. With the advent of high sensitivity troponin assays which have higher diagnostic sensitivity at the time of presentation, it is possible that H-FABP could be combined with levels of high sensitivity troponin and potentially with other clinical information to enable safe ‘rule out’ of AMI using a single blood test at the time of presentation. Further work in this area is needed.
by Dr Richard Body
Background
Suspected cardiac chest pain accounts for approximately one quarter of acute medical admissions, although only a minority of the patients admitted will ultimately be diagnosed with an acute coronary syndrome [1]. Meanwhile, up to 2% of patients with acute myocardial infarction (AMI) have that diagnosis missed and are inadvertently discharged, leading to a worse prognosis [2]. There is therefore tremendous potential to reduce unnecessary hospital admissions in this patient group, although advances in diagnostic technology are clearly necessary in order to do so.
High sensitivity troponin
Cardiac troponins are regulatory proteins contained within the myofibrillar apparatus of cardiac myocytes. They are released into the bloodstream following myocardial necrosis, and their detection allows highly sensitive and specific diagnosis of AMI. Indeed, the detection of a rise and/or fall of cardiac troponin in serum or plasma is integral to the diagnosis of AMI. With the advent of high sensitivity troponin (hs-cTn) assays, which have greater analytical and diagnostic sensitivity than standard assays, it is tempting to believe that the hunt for an ‘early rule out’ strategy for acute coronary syndromes is over. Standard troponin assays lack the diagnostic sensitivity to enable safe exclusion of acute myocardial infarction (AMI) when measured at the time of presentation. This creates a period of ‘troponin blindness’, when patients with AMI still have low circulating troponin levels prior to the development of a late troponin rise. Hs-cTn assays have been shown to improve diagnostic sensitivity at the time of initial presentation to the Emergency Department (ED). While this reduces the magnitude of the ‘troponin blindness’ problem, it does not overcome it completely: even hs-cTn assays fail to identify approximately 10% of patients with AMI at the time of presentation [3, 4]. With hs-cTn assays it may be possible to reduce the time taken to confidently ‘rule out’ AMI with serial sampling from 6–9 hours after arrival (or 10–12 hours from symptom onset) to as little as 3 hours after arrival [4, 5]. This approach still needs to be validated against a hs-cTn reference standard, however, and there are a few other reasons to be cautious. The Siemens troponin I Ultra assay (a sensitive assay but not high sensitivity), which had a diagnostic sensitivity of 100% at 3 hours after presentation in Keller et al.’s original study (evaluated against the reference standard of testing 6 hours after arrival), actually had a sensitivity of only 94.5% at 6 to 12 hours from symptom onset [4]. Further, high sensitivity troponin T (hs-cTnT) has been shown to have a sensitivity of only 92.2% when measured 2 hours after presentation, which is still some way from a satisfactory rule out strategy [6]. Using the new Abbott Architect high sensitivity troponin I assay, sensitivity for AMI is 98.2% (with 95% confidence intervals extending down to 96.9%), again using a standard troponin assay as the reference standard [5]. Even if we accept that no rule out strategy will be 100% sensitive and consider this 3-hour troponin to be a satisfactory rule out strategy, that still means an anxious wait for patients and would still, in health systems like the United Kingdom’s, necessitate admission to an inpatient ward for investigation.
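The confidence intervals quoted above follow directly from binomial statistics, and it is worth seeing how wide they remain in realistically sized cohorts. The sketch below computes a Wilson 95% interval from raw counts; the counts are hypothetical, chosen only to illustrate the calculation.

```python
# Wilson score interval for a diagnostic sensitivity; counts are hypothetical.
from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

detected, ami_cases = 271, 276       # e.g. 271 of 276 AMIs detected (invented counts)
lo, hi = wilson_ci(detected, ami_cases)
print(f"sensitivity {detected / ami_cases:.1%}, 95% CI {lo:.1%}-{hi:.1%}")
# -> sensitivity 98.2%, 95% CI 95.8%-99.2%
```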
Interest in ‘early markers’ of myocardial necrosis
There has been interest in the role of ‘early markers’ of myocardial necrosis for many years. As troponin is predominantly an intracellular constituent and levels do not peak until 12 to 24 hours after the onset of infarction [7], many have investigated biomarkers whose release kinetics suggest they may enable earlier identification of AMI. Thus, measurement of creatine kinase-MB (CK-MB) and myoglobin levels in combination with troponin was shown to improve early diagnosis of AMI as early as 2001 [8]. More recently, the ASPECT study from 14 countries in the Asia-Pacific region examined the value of CK-MB, myoglobin and troponin I (using assays from Alere, San Diego, CA, USA) measured at presentation and 120 minutes later in patients with a Thrombolysis In Myocardial Infarction (TIMI) score of 0 (of a possible 7). The authors found that 9.8% of patients could be discharged using this strategy, with a 0.9% incidence of adverse cardiac events within 30 days [9]. Around the same time, the Randomised Assessment Using Panel Assay of Cardiac Biomarkers (RATPAC) study demonstrated that serial evaluation of CK-MB, myoglobin and troponin I over 90 minutes increased the proportion of patients successfully discharged from the ED, although at the cost of increased use of coronary care resources, perhaps as a function of the lack of specificity of myoglobin and CK-MB. The strategy was found not to be cost effective [10].
Heart-type fatty acid binding protein
Heart-type fatty acid-binding protein (H-FABP) is a cytosolic protein that is abundantly expressed in cardiac myocytes, where it facilitates intracellular fatty acid transport [11]. Plasma H-FABP levels rise early after the onset of AMI. McCann et al. evaluated H-FABP (Hycult Biotechnology ELISA) and troponin T (cTnT; Roche Elecsys, 4th generation) in 415 patients admitted to an acute cardiology unit on suspicion of an acute coronary syndrome. They demonstrated that H-FABP had superior sensitivity to troponin in patients who presented early (<4 h) after symptom onset [Figure 1] [12]. A meta-analysis of 16 studies including 3,709 patients with suspected AMI demonstrated a pooled sensitivity of 84% [95% confidence interval (CI) 76–90%] and a pooled specificity of 84% (95% CI 76–89%), although there was significant heterogeneity between studies [13]. It is clear that measurement of H-FABP alone cannot enable safe ‘rule out’ of AMI. Combining H-FABP with troponin will, however, yield a higher diagnostic sensitivity. Body et al. [14] demonstrated that the combination of H-FABP and troponin I offers both superior sensitivity and superior specificity to the combination of CK-MB, myoglobin and troponin I [Figure 2].
A systematic review by Carroll et al. demonstrated that, across 4 studies, the combination of H-FABP and troponin had an overall sensitivity of between 76% and 97% [15]. Two of these studies did, however, use insensitive troponin assays with diagnostic sensitivities of 42% and 55% respectively. The use of more sensitive troponin assays may be expected to yield higher diagnostic performance. Indeed, in the study by Body et al., the sensitivity of the combination of H-FABP and troponin increased from 82% to 87% when a more sensitive troponin assay was used [14, 16]. If only low-risk patients (identified using the modified Goldman risk stratification tool) who had normal H-FABP and normal cTnT were considered for early discharge, a sensitivity and negative predictive value of 99% could be achieved, although this strategy may have a specificity as low as 19%, meaning that only a minority of patients would be eligible for early discharge while 1% of AMIs would still be missed [16].
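That a strategy with 99% sensitivity but only 19% specificity can still deliver a 99% negative predictive value is a direct consequence of Bayes’ rule. The sketch below reproduces the arithmetic; the assumed prevalence (about 20% of suspected cases proving to be AMI) is our illustrative assumption, not a figure from the studies cited.

```python
# Negative predictive value from sensitivity, specificity and an assumed prevalence.
def npv(sens: float, spec: float, prev: float) -> float:
    """NPV via Bayes' rule: P(no disease | negative test)."""
    true_neg = spec * (1 - prev)     # correctly reassured patients
    false_neg = (1 - sens) * prev    # missed AMIs
    return true_neg / (true_neg + false_neg)

print(f"NPV = {npv(sens=0.99, spec=0.19, prev=0.20):.3f}")   # NPV = 0.987
```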
H-FABP and high sensitivity troponin
It is clear that neither H-FABP nor troponin (even using a high sensitivity assay) can be used to safely exclude a diagnosis of AMI when measured at the time of presentation to the ED. The combination of H-FABP and standard troponin assays improves overall diagnostic sensitivity but is still unable to ‘rule out’ this important diagnosis. By combining H-FABP with high sensitivity troponin assays, it may be possible to further increase sensitivity and thus achieve an effective early rule out strategy. Evidence in this area is still limited. However, Aldous et al. did evaluate the combination of H-FABP (Hycult Biotech) and hs-cTnT in a cohort of 384 patients presenting to the ED with suspected acute coronary syndromes. This combination had a sensitivity of 90.0% for AMI and a specificity of 73.5%. Notably, the sensitivity of the H-FABP assay alone was particularly low in this study (50.0%), which may be a function of the high diagnostic cut-off employed (60 ng/ml) when compared to the cut-off employed by McCann et al. using the same assay (5 ng/ml) [12, 17]. Using this high diagnostic cut-off, however, the combination of H-FABP and hs-cTnT measured at the time of presentation may help to ‘rule in’ the diagnosis of AMI, with a specificity of 99.4% (95% CI 97.9–99.9%) [17].
Inoue et al. also evaluated both hs-cTnT and H-FABP (DS Pharma Biomedical, Osaka) in 432 ED patients with suspected acute coronary syndromes. In this study, H-FABP had a similar area under the receiver operating characteristic (ROC) curve (AUC) to hs-cTnT (0.83 versus 0.82), although hs-cTnT had a higher sensitivity at the diagnostic cut-off (87.9% vs. 78.5%) [18]. The authors do not report the diagnostic value of the combination of both biomarkers. Meanwhile, in 1,818 patients with suspected acute coronary syndromes, Keller et al. reported that H-FABP had an AUC of 0.89, which rose to 0.97 when combined with high sensitivity troponin I (Abbott Architect STAT high sensitive troponin) [5]. This implies that the combination has high diagnostic accuracy, although the sensitivity and negative predictive value of the strategy were not reported.
H-FABP and prognosis
H-FABP levels may also have prognostic value in patients with suspected acute coronary syndromes. Viswanathan et al. studied 1,080 consecutive patients presenting with suspected acute coronary syndromes [19]. They measured both H-FABP (Randox Evidence Biochip) and troponin I using a sensitive assay (Siemens Advia troponin I Ultra) and followed patients for a median of 18 months. H-FABP predicted death or AMI during follow-up, even in troponin-negative patients and after adjustment for age and serum creatinine. For predicting death or AMI, H-FABP had an AUC of 0.79 (95% CI 0.74–0.84) compared to 0.77 (95% CI 0.72–0.82) for troponin I.
Future work
Further work is still needed to determine whether the combination of H-FABP and high sensitivity troponin will enable safe rule out of acute coronary syndromes in the ED. Combination with other clinical information available from risk stratification tools (such as the modified Goldman or TIMI scores) or the ECG may further increase sensitivity, enabling AMI to be safely excluded in a proportion of patients presenting to the ED. Further, with the increase in false positive results given by high sensitivity troponin assays, H-FABP may help to ‘rule in’ the diagnosis of AMI in patients with troponin elevations at the time of presentation, before the results of serial testing are available. This will facilitate early treatment and triage to an appropriate level of care in the hospital, while avoiding the risks of unnecessary treatment for those patients with false positive elevations.
Conclusions
H-FABP is a promising biomarker for use in patients with suspected acute coronary syndromes. Used alone or in combination with a standard troponin assay, sensitivity will be insufficient to safely ‘rule out’ AMI. Further work is needed to determine whether combination with a high sensitivity assay can enable safe ‘rule out’ for a proportion of patients, and to evaluate whether H-FABP may have a role in the differentiation between ‘true positive’ and ‘false positive’ troponin elevations at the time of initial presentation.
References
1. Goodacre S, et al. The health care burden of acute chest pain. Heart 2005; 91: 229–230.
2. Pope JH, et al. Missed diagnoses of acute cardiac ischaemia in the Emergency Department. N Engl J Med 2000; 342: 1163–1170.
3. Reichlin T, et al. Early diagnosis of myocardial infarction with sensitive cardiac troponin assays. N Engl J Med 2009; 361: 858–867.
4. Keller T, et al. Sensitive troponin I assay in early diagnosis of acute myocardial infarction. N Engl J Med 2009; 361(9): 868–877.
5. Keller T, et al. Serial changes in highly sensitive troponin I assay and early diagnosis of myocardial infarction. JAMA 2011; 306(24): 2684–2693.
6. Aldous SJ, et al. Diagnostic and prognostic utility of early measurement with high-sensitivity troponin T assay in patients presenting with chest pain. CMAJ 2012; 184: E260-E268.
7. Tucker JF, et al. Early diagnostic efficiency of cardiac troponin I and troponin T for acute myocardial infarction. Acad Emerg Med 1997; 4(1): 13–21.
8. McCord J, et al. Ninety-minute exclusion of acute myocardial infarction by use of quantitative point-of-care testing of myoglobin and troponin I. Circulation 2001; 104: 1483–1488.
9. Than M, et al. A 2-h diagnostic protocol to assess patients with chest pain symptoms in the Asia-Pacific region (ASPECT): a prospective observational validation study. Lancet 2011; 377(9771): 1077–1084.
10. Fitzgerald P, et al, on behalf of the RATPAC investigators. Cost-effectiveness of point-of-care biomarker assessment for suspected myocardial infarction: The RATPAC trial (Randomised Assessment of Treatment Using Panel Assay of Cardiac markers). Acad Emerg Med 2011; 18(5): 488–495.
11. Schaap FG, et al. Impaired Long-Chain Fatty Acid Utilization by Cardiac Myocytes Isolated From Mice Lacking the Heart-Type Fatty Acid Binding Protein Gene. Circ Res 1999; 85(4): 329–337.
12. McCann C, et al. Novel biomarkers in early diagnosis of acute myocardial infarction compared with cardiac troponin T. Eur Heart J 2008; 29(23): 2843–2850.
13. Bruins Slot MH, et al. Heart-type fatty acid-binding protein in the early diagnosis of acute myocardial infarction: a systematic review and meta-analysis. Heart 2010; 96(24): 1957–1963.
14. Body R, et al. A FABP-ulous ‘rule out’ strategy? Heart fatty acid binding protein and troponin for rapid exclusion of acute myocardial infarction. Resuscitation 2011; 82(8): 1041–1046.
15. Carroll C, et al. Heart-type fatty acid binding protein as an early marker for myocardial infarction: systematic review and meta-analysis. Emerg Med J 2012.
16. Body R, et al. Reply to Letter: Still FABP-ulous even with a more sensitive troponin assay. Resuscitation 2012; 83(2): e29–e30.
17. Aldous S, et al. Heart fatty acid binding protein and myoglobin do not improve early rule out of acute myocardial infarction when highly sensitive troponin assays are used. Resuscitation 2012; 83(2): e27–e28.
18. Inoue K, et al. Heart fatty acid-binding protein offers similar diagnostic performance to high-sensitivity troponin T in Emergency Room patients presenting with chest pain. Circ J 2011; 75: 2813–2820.
19. Viswanathan K, et al. Heart-Type Fatty Acid-Binding Protein Predicts Long-Term Mortality and Re-Infarction in Consecutive Patients With Suspected Acute Coronary Syndrome Who Are Troponin-Negative. J Am Coll Cardiol 2010; 55(23): 2590–2598.
The author
Richard Body, MB ChB MRCSEd(A&E) FCEM PhD
Emergency Department,
Manchester Royal Infirmary,
Oxford Road, Manchester, M13 9WL, UK
e-mail: richard.body@manchester.ac.uk
Variability of the response to clopidogrel: mechanisms, availability of testing, and relation to clinical outcomes
Interindividual variability in the response to clopidogrel has been shown to be related to clinical ischemic outcomes. Although testing of platelet function or genetic profile is recommended to evaluate the response to clopidogrel, standardized testing and definitive antiplatelet therapy after testing need to be established.
by Yusuke Yamaguchi and Professor Mitsuru Murata
Clinical background
Platelet activation and aggregation play a pivotal role in arterial thrombus formation; therefore, antiplatelet therapy to inhibit platelet function is considered effective for preventing and treating atherothrombosis. The combination of aspirin and clopidogrel has been shown to be more effective than aspirin alone in improving clinical ischemic outcomes in patients with coronary artery disease (CAD). This dual antiplatelet therapy contributes substantially to preventing cardiovascular events in patients with acute coronary syndrome (ACS) or undergoing percutaneous coronary intervention (PCI). Current guidelines recommend aspirin and clopidogrel for these patients; however, some patients still develop cardiovascular events despite dual therapy. It has been shown over the last decade that responsiveness to clopidogrel varies widely among individuals and that a suboptimal response to clopidogrel is a risk factor for cardiovascular events. The interindividual variability in the effect of clopidogrel is due to multiple factors [Table 1].
Effects of CYP2C19 on clopidogrel
Clopidogrel, a second generation thienopyridine, is an inactive prodrug that requires a 2-step metabolic conversion to an active metabolite. This active metabolite inhibits adenosine diphosphate (ADP)-induced platelet aggregation by selectively and irreversibly binding P2Y12 receptors on the platelet membrane. Several isoforms of cytochrome P450 (CYP), including CYP2C19, CYP3A4, CYP1A2, CYP2B6, and CYP2C9, have been shown to be involved in the metabolic pathway. Of these enzymes, CYP2C19 is considered to be the main determinant of clopidogrel metabolism that produces the active form.
It is known that CYP2C19 has numerous single nucleotide polymorphisms (SNPs), of which CYP2C19*2 (681G>A, located in exon 5) has been studied extensively and shown to be associated with a loss of function of the enzyme. CYP2C19*2 clearly associates with both the pharmacokinetics (i.e., area under the concentration curve and maximal plasma concentration of clopidogrel active metabolite) and the pharmacodynamics (i.e., inhibition of ADP-induced platelet aggregation) of clopidogrel. CYP2C19*2 is detected more frequently in Asians than in Caucasians, with approximately 40–50% and 30% having at least one CYP2C19*2 allele, respectively. In addition to CYP2C19*2, CYP2C19*3, *4, *5, *6, *7, and *8 have been identified as loss-of-function alleles.
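Assuming Hardy-Weinberg equilibrium (a simplification, since real populations can deviate from it), the carrier percentages quoted above imply the approximate allele frequencies computed below.

```python
# Allele frequency implied by a carrier fraction under Hardy-Weinberg equilibrium.
from math import sqrt

def allele_freq(carrier_fraction: float) -> float:
    """Solve 1 - (1 - q)^2 = carrier_fraction for the allele frequency q."""
    return 1 - sqrt(1 - carrier_fraction)

for label, carriers in [("Asians (~45% carriers)", 0.45),
                        ("Caucasians (~30% carriers)", 0.30)]:
    print(f"{label}: q ~ {allele_freq(carriers):.2f}")   # ~0.26 and ~0.16
```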
Methods to evaluate the effect of clopidogrel on platelet inhibition
Different laboratory tests [Table 2] can be used to assess platelet function in patients treated with clopidogrel. ADP-induced platelet aggregation in platelet-rich plasma measured by light transmission aggregometry is used most commonly, and numerous published studies have measured platelet function with this method. The majority of these studies measured platelet function as the maximal platelet aggregation rate induced by 5, 10, or 20 µmol/l ADP. The platelet aggregation rate 5–8 min after the addition of ADP has also been reported. The POPULAR study [1] of clopidogrel-treated patients following elective PCI showed that cut-off values of 42.9% maximal aggregation induced by 5 µmol/l ADP, or 64.5% induced by 20 µmol/l ADP, predicted the 1-year composite of mortality, myocardial infarction (MI), stent thrombosis and stroke.
The VerifyNow P2Y12 test (Accumetrics Inc, San Diego, CA) has been developed as a point-of-care device to quickly and accurately assess platelet function. This whole-blood, light transmission-based optical assay measures the light transmittance of ADP-induced platelet aggregation in a cartridge containing fibrinogen-coated beads and specifically evaluates P2Y12 receptor inhibition. Results are reported as P2Y12 reaction units (PRU), with a lower PRU value indicating greater P2Y12 inhibition. A meta-analysis of individual patient data from six observational studies [2] found that a PRU value of 230 at the time of PCI is the best cut-off for predicting cardiovascular events over 1 year, including death, MI and stent thrombosis, in patients with stable CAD or non-ST-elevation ACS undergoing PCI.
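Cut-offs such as 230 PRU are typically chosen by maximising Youden’s index (sensitivity + specificity - 1) over candidate thresholds. The sketch below illustrates the procedure on invented PRU values; the published cut-off was of course derived from time-to-event outcomes in far larger pooled cohorts.

```python
# Cut-off selection via Youden's index on hypothetical PRU values.
def best_cutoff(values, events):
    """Return the threshold maximising Youden's J; values >= threshold test positive."""
    best_j, best_t = -1.0, None
    for t in sorted(set(values)):
        tp = sum(v >= t and e for v, e in zip(values, events))
        fn = sum(v < t and e for v, e in zip(values, events))
        tn = sum(v < t and not e for v, e in zip(values, events))
        fp = sum(v >= t and not e for v, e in zip(values, events))
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        if sens + spec - 1 > best_j:
            best_j, best_t = sens + spec - 1, t
    return best_t, best_j

pru = [140, 180, 210, 235, 250, 270, 300, 190, 260, 310]   # invented PRU values
event = [0, 0, 0, 1, 1, 0, 1, 0, 1, 1]                     # 1 = cardiovascular event
print(best_cutoff(pru, event))   # (235, 0.8) on this toy data
```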
The effect of clopidogrel on platelet function can also be evaluated by detecting vasodilator-stimulated phosphoprotein (VASP). VASP is phosphorylated by the cyclic adenosine monophosphate (cAMP)-dependent protein kinase in the adenylate cyclase cascade downstream of the P2Y12 receptor. By binding to the P2Y12 receptor and suppressing this cascade, ADP increases VASP dephosphorylation, whereas inhibition of the receptor by the clopidogrel active metabolite increases VASP phosphorylation. The test measures VASP phosphorylation in a flow cytometric assay, with the result expressed as the platelet reactivity index (PRI), which reflects the ratio of phosphorylated to dephosphorylated VASP. A lower PRI value reflects higher P2Y12 inhibition.
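In the form most often cited for this assay, the PRI is calculated from the mean fluorescence intensity (MFI) of phosphorylated VASP after stimulation with PGE1 alone versus PGE1 plus ADP; exact correction terms vary by kit, so the sketch below should be read as the general shape of the calculation rather than a kit-specific protocol.

```python
# PRI from phosphorylated-VASP fluorescence; values below are illustrative.
def platelet_reactivity_index(mfi_pge1: float, mfi_pge1_adp: float) -> float:
    """PRI (%) = (MFI_PGE1 - MFI_PGE1+ADP) / MFI_PGE1 * 100."""
    return (mfi_pge1 - mfi_pge1_adp) / mfi_pge1 * 100

# Strong P2Y12 inhibition keeps VASP phosphorylated even with ADP present,
# so the two MFI values stay close and the PRI is low.
print(platelet_reactivity_index(mfi_pge1=120.0, mfi_pge1_adp=95.0))   # ~20.8
```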
Clinical utility of laboratory testing
Numerous studies, including our meta-analysis [3], have reported that patients with a suboptimal response to antiplatelet therapy experience more cardiovascular events [Figure 1A], and data have accumulated on platelet function testing to establish a reliable cut-off value for clinical risk. However, it remains unclear how to monitor suboptimal responses in daily clinical practice, owing to the lack of a standardised method to measure platelet function and interpret the results. Furthermore, there is no guideline for alternative treatment strategies to the “one-size-fits-all” 75 mg/day clopidogrel regimen, because conclusive evidence from large-scale randomised trials that personalised antiplatelet therapy improves patient outcomes has not been established. However, a meta-analysis [4] recently evaluated the clinical efficacy and safety of intensified antiplatelet therapy, involving reloading clopidogrel, using glycoprotein IIb/IIIa inhibitors periprocedurally during PCI, increasing the maintenance dose of clopidogrel, or switching to prasugrel. Although there were several limitations, this meta-analysis showed that intensified antiplatelet therapy reduces cardiovascular death and stent thrombosis without increasing major bleeding.
Meanwhile, CYP2C19 genotype does not always predict cardiovascular events, although it is a major determinant of suboptimal response to clopidogrel. To date, many large-scale clinical trials, including the recent Genotype Information and Functional Testing (GIFT) trial [5], have investigated the association of CYP2C19 genotype with cardiovascular events, but their results have been inconsistent. Indeed, in our meta-analysis we found heterogeneity in the odds ratio for cardiovascular events between carriers and non-carriers of the CYP2C19*2 allele [Figure 1B]. Considering that CYP2C19*2 contributes only about 5% of the variability in response to clopidogrel [6], many other genetic factors may contribute to the variability. Therefore, genetic testing that includes additional factors, such as SNPs in other CYPs or in ABCB1 (encoding P-glycoprotein), would be expected to improve identification of patients with a suboptimal response.
Current status and future prospects
In 2009, the U.S. Food and Drug Administration (FDA) released a black box warning that significant attention needs to be paid to clopidogrel pharmacogenomics. Similarly, the American and European guidelines published in 2011 gave a Class IIb recommendation for testing of platelet function or genetic profile in patients treated with clopidogrel and for consideration of the use of an alternate P2Y12 inhibitor in patients with inadequate platelet inhibition.
The primary goal of testing platelet function and genetic profile is to identify patients with a suboptimal response to antiplatelet therapy and to provide them with tailored therapy that improves clinical ischemic outcomes without an associated bleeding risk. Although these laboratory tests can predict outcomes, personalised antiplatelet therapy on the basis of these tests has not been established in the guidelines. Currently, several clinical trials are ongoing to evaluate the effect of personalised antiplatelet therapy guided by laboratory tests. These trials will hopefully provide the data needed to establish guidelines, to allow clinicians to select laboratory tests properly, and to plan personalised antiplatelet therapy in patients with a suboptimal response.
References
1. Breet NJ, van Werkum JW, Bouman HJ, Kelder JC, Ruven HJ, Bal ET, et al. JAMA 2010; 303: 754–762.
2. Brar SS, ten Berg J, Marcucci R, Price MJ, Valgimigli M, Kim HS, et al. J Am Coll Cardiol 2011; 58: 1945–1954.
3. Yamaguchi Y, Abe T, Sato Y, Matsubara Y, Moriki T, Murata M. Platelets. Epub 2012 Jul 3, doi: 10.3109/09537104.2012.700969
4. Aradi D, Komócsi A, Price MJ, Cuisset T, Ari H, Hazarbasanov D, et al. Int J Cardiol. Epub 2012 Jun 15, doi: 10.1016/j.ijcard.2012.05.100
5. Price MJ, Murray SS, Angiolillo DJ, Lillie E, Smith EN, Tisch RL, et al. J Am Coll Cardiol 2012; 59: 1928–1937.
6. Hochholzer W, Trenk D, Fromm MF, Valina CM, Stratz C, Bestehorn HP, et al. J Am Coll Cardiol 2010; 55: 2427–2434.
The authors
Yusuke Yamaguchi and Mitsuru Murata MD, PhD
Dept of Laboratory Medicine, Keio University School of Medicine,
35 Shinanomachi, Shinjuku-ku, Tokyo, 160-8582, Japan
E-mail: yusukeyamaguchi@z8.keio.jp
Biomarkers of vascular calcification in patients with impaired kidney function
Chronic renal failure is a disease with a high and increasing prevalence. Currently about 10% of the population of Europe and North America are affected. The disease is associated with a high morbidity and mortality mainly attributed to cardiovascular diseases. In fact patients with more advanced stages of chronic renal failure have a greater risk of dying due to cardiovascular disease than of renal failure itself. Approximately 50% of these patients die from cardiovascular complications.
by Professor Berthold Hocher
Accelerated vascular calcification (VC) is but one of the important mechanisms of cardiovascular disease in dialysis patients. In the setting of end-stage renal disease (ESRD), VC is more severe and develops in both the intima and the media of blood vessels. VC is an active and regulated process mediated by vascular smooth muscle cells [Figure 1], which undergo a phenotypic change to osteoblasts or chondrocytes and, in turn, release promoters of VC and apoptosis. VC is markedly up-regulated in dialysis patients, and this may be explained by the up-regulation of promoters of VC such as hyperphosphatemia, hypercalcemia, cholesterol and hyperleptinemia, and by the down-regulation of inhibitors of VC such as matrix Gla protein and fetuin-A [1].
Fetuin-A
Fetuin-A is a 62-kilodalton glycoprotein belonging to the cystatin superfamily of proteins. In humans, the 349-amino acid protein, as secreted from the liver, consists of two chains, a heavy and a light chain, joined by a connecting segment and linked by disulfide bonds. The N-terminus of the heavy chain consists of two cystatin domains, D1 and D2; the acidic amino acids in the D1 domain appear to account for fetuin’s ability to inhibit precipitation of calcium and phosphorus. Indeed, fetuin-A accounts for up to one half of the in vitro capacity of serum to prevent the precipitation of calcium and phosphorus. It is now recognised that fetuin-A actively regulates the cell-mediated process of osteogenesis in the vessel wall: it inhibits mineralisation in a concentration-dependent manner and enhances the phagocytosis of apoptotic bodies by vascular smooth muscle cells, limiting their ability to nucleate calcium phosphate. Finally, fetuin-A is an antagonist of bone morphogenetic protein-2, a promoter of VC in vascular cells.
A number of studies have demonstrated an association between serum fetuin-A levels and all-cause mortality in dialysis patients. The association of low fetuin-A levels with mortality was confirmed by a clinical trial of 664 hemodialysis (HD) and 323 peritoneal dialysis (PD) patients followed for a median of 2.8 years. In this study, an increase in serum fetuin-A of 0.1 g per litre corresponded to a 9% lower risk of death, and the predictive value of fetuin-A for death was independent of serum C-reactive protein (CRP) levels. At the same time, in a multivariate analysis of biomarkers for the prediction of mortality in dialysis patients in which CRP was entered, fetuin-A lost its predictive value. The latter fact suggests that further investigation of the role of fetuin-A in dialysis patients is needed to fully elucidate the pathomechanisms lowering serum fetuin-A levels in ESRD [1].
Fibroblast growth factor 23 (FGF-23)
FGF-23 is a hormone secreted by osteoblasts. It plays a role in the regulation of phosphorus and in the metabolism of vitamin D. Depletion of FGF-23 causes hyperphosphatemia, up-regulation of 1,25-dihydroxyvitamin D, ectopic calcification and early death. FGF-23 is involved in the physiological maintenance of normal serum phosphate levels in the setting of variable dietary phosphorus intake. In the setting of impaired or reduced nephron mass, normal serum phosphate levels are maintained in part by a reactive increase in FGF-23, which promotes excretion of phosphate via the remaining nephrons and decreases the absorption of dietary phosphorus by inhibiting the synthesis of 1,25-dihydroxyvitamin D. As chronic kidney disease (CKD) progresses, this compensation is eventually overwhelmed, leading to hyperphosphatemia, ectopic calcification and premature death. It has previously been reported that increased serum phosphate levels and decreased 1,25-dihydroxyvitamin D levels are associated with increased mortality.
In a recent study by Gutiérrez et al., multivariable adjusted analyses showed that serum phosphate levels above 5.5 mg/dl and increased FGF-23 were associated with a 20% increase in mortality risk, suggesting that hyperphosphatemia and increased FGF-23 are sensitive biomarkers for assessing the risk of death [reviewed in 1].
Receptor activator of NF-κB ligand–osteoprotegerin system
Osteoblasts regulate differentiation and activation of osteoclasts under conditions of normal bone turnover. Osteoblasts synthesise and secrete a protein called receptor activator of NF-κB ligand (RANKL). RANKL binds to its receptor on pre-osteoclasts and thus regulates bone turnover. Osteoprotegerin (OPG), also secreted by osteoblasts, modulates the effects of RANKL by acting as a decoy receptor, thereby blocking osteoclast differentiation. These two key players are also involved in the transformation of vascular smooth muscle cells into bone-forming cells in blood vessels under the conditions of chronic renal failure.
Several studies indicate a role for OPG in the pathogenesis of cardiovascular diseases in uremic and also non-uremic patients, and the OPG/RANKL system plays a key role in the pathogenesis of endothelial dysfunction. Tseng et al. suggest that an imbalance between bone-forming and bone-degrading hormones may play a key role in the pathogenesis of vascular calcification. High OPG might indicate a reduced degradation capacity of calcified arteries. These authors suggest that induction of RANKL in the vessel wall might overcome this problem and thus offer new therapeutic options for vascular calcification. However, this hypothesis certainly requires further investigation [2–4].
Vitamin D
Vitamin D is a multifunctional hormone that can affect many essential biological functions, ranging from immune regulation to mineral ion metabolism. A close association between altered vitamin D activity and vascular calcification has been reported in various human diseases, including atherosclerosis, osteoporosis and CKD. Experimental studies have shown that excessive vitamin D activity can induce vascular calcification, and that such vascular pathology can be reversed by reducing vitamin D activity. The human relevance of these experimental studies is not clear, as vitamin D toxicity is relatively rare in the general population. In contrast, in experimental uremic models low levels of vitamin D were shown to be associated with extensive vascular calcification, a phenomenon very similar to the vascular pathology seen in patients with CKD. The current treatment approach of providing vitamin D analogues to patients with CKD often poses a dilemma, as studies have linked vitamin D treatment to subsequent vascular calcification. In any case, close monitoring of vitamin D status in patients with CKD is indicated to ensure that these patients have vitamin D levels associated with the best likelihood of survival [5, 6].
Osteopontin
Osteopontin (OPN) was initially identified in osteoblasts as a mineralisation-modulating matrix protein. More recently, OPN has been studied as a multifunctional protein that is up-regulated in a variety of acute and chronic inflammatory conditions, such as wound healing, fibrosis, autoimmune disease and atherosclerosis. OPN is highly expressed at sites of atherosclerotic plaque, especially in association with macrophages and foam cells. In the context of atherosclerosis, OPN is generally regarded as a pro-inflammatory and pro-atherogenic molecule. In VC, which is closely related to chronic and active inflammation, OPN acts as a negative regulator: it is an inhibitor of calcification and an active inducer of decalcification. However, OPN expression and its regulatory molecular mechanisms during the process of vascular calcification remain elusive. Therefore, further research on the role of OPN in diseases associated with VC is needed to identify potential OPN-related therapeutic targets [7].
References
1. Chaykovska L, Tsuprykov O, Hocher B. Biomarkers for the prediction of mortality and morbidity in patients with renal replacement therapy. Clin Lab 2011; 57(7–8): 455–467.
2. Shin JY, Shin YG, Chung CH. Elevated serum osteoprotegerin levels are associated with vascular endothelial dysfunction in type 2 diabetes. Diabetes Care 2006; 29(7): 1664–1666.
3. Tseng W, Graham LS, Geng Y, Reddy A, Lu J, Effros RB, et al. PKA-induced receptor activator of NF-kappaB ligand (RANKL) expression in vascular cells mediates osteoclastogenesis but not matrix calcification. J Biol Chem 2010; 285(39): 29925–29931.
4. Ozkok A, Caliskan Y, Sakaci T, Erten G, Karahan G, Ozel A, et al. Osteoprotegerin/RANKL axis and progression of coronary artery calcification in hemodialysis patients. Clin J Am Soc Nephrol 2012; 7(6): 965–973.
5. Lieb W, Gona P, Larson MG, Massaro JM, Lipinska I, Keaney JF Jr, et al. Biomarkers of the osteoprotegerin pathway: clinical correlates, subclinical disease, incident cardiovascular disease, and mortality. Arterioscler Thromb Vasc Biol 2010; 30(9): 1849–1854.
6. Ellam TJ, Chico TJ. Phosphate: the new cholesterol? The role of the phosphate axis in non-uremic vascular disease. Atherosclerosis 2012; 220(2): 310–318.
7. Ketteler M, Rothe H, Krüger T, Biggar PH, Schlieper G. Mechanisms and treatment of extraosseous calcification in chronic kidney disease. Nat Rev Nephrol 2011; 7(9): 509–516.
The author
Berthold Hocher, M.D., Ph.D.
Institute of Nutritional Science, University of Potsdam,
D-14558 Nuthetal-Potsdam, Germany
E-mail: hocher@uni-potsdam.de
www.uni-potsdam.de/eem
Molecular allergology offers new opportunities – for the lab and the clinician
Molecular allergology enables quantification of IgE antibodies to single allergen protein components at the molecular level. This helps the clinician establish the cause of allergic sensitisation, evaluate the risk for severe allergic reactions and improve patient management. New tests and technologies enable the laboratory to assist in an efficient manner.
by Dr Magnus Borres
Pharmacogenetics and pharmacogenomics: moving towards personalized medicine
Genetic polymorphisms are well recognized as one of the main causes of variation in personal drug response. Pharmacogenetics investigates the role of polymorphisms in the individual response to pharmacological treatments in order to design specific genetic tests, which can be performed before drug administration to optimize drug response and reduce adverse events.
by Dr Francesca Marini and Professor Maria Luisa Brandi
Personalized medicine based on genetics
The complete sequencing of the human genome in 2001, by the Human Genome Project, opened a new era of personalized medicine based on genetics. Polymorphic variations are suspected to cover at least 20% of the entire human genome. An average of about 6 million single nucleotide polymorphisms (SNPs) and other sequence variations (i.e. copy number variations, CNVs) are estimated to exist between any two randomly selected human individuals. Advancements in the understanding of variation in the human genome and rapid improvements in high-throughput genotyping technology have made it feasible to study most of human genetic diversity in relation to phenotypes. Today, the challenge for genomic medicine is contextualizing the myriad genomic variations in terms of their functional consequences for disease predisposition and for differences in response to medications.
The ability to predict the outcome of drug therapies by a simple analysis of common variants in the genotype is today one of the main challenges for individualized medicine. Pharmacogenetics and its whole-genome application, pharmacogenomics, use individual genetic data to predict the individual response to drug treatment with respect to both beneficial and adverse effects.
They currently represent one of the disciplines most actively pursued by basic and clinical research. Pharmacogenetics examines the influence of single genes and/or single polymorphisms on drug response, in terms of drug absorption and disposition (pharmacokinetics) or drug action (pharmacodynamics), including polymorphic variations in genes encoding drug-metabolizing enzymes, drug transporters, drug receptors and drug biological targets. Pharmacogenomics studies alterations in gene and protein expression that may be correlated with pharmacological function and therapeutic response, encompassing factors beyond those that are inherited, such as epigenetics (pharmacoepigenomics).
One of the main goals of pharmacogenetics and pharmacogenomics is the identification of genetic biomarkers that allow recognition, in advance, of patients who will not respond to a therapy, or who will be at risk of developing adverse reactions, in order to design specific pre-prescription genetic tests. A biomarker is most commonly a genetic variant, but can also include functional deficiencies, expression changes, chromosomal abnormalities, epigenetic variants, etc. A necessary step for the application of pharmacogenetic results in clinical practice is the validation of biomarkers, a process that requires several stages: 1) correct design of prospective association studies and setting of all experimental conditions to increase the sensitivity, reliability and specificity of the assay; 2) replication of results in different, independent studies; 3) biomarker characterization, through evaluation of the variability of a particular biomarker in different human populations to determine ethnic differences, relevant interactions and potential confounders; and 4) expression and functional studies, to establish the possibility of a causal relationship between a candidate biomarker and the response to a drug.
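At its simplest, stage 1 of this validation process reduces to a 2×2 table of variant carriers versus drug response. As a minimal sketch with hypothetical counts, the snippet below computes the association as an odds ratio with a Woolf (log-normal) 95% confidence interval.

```python
# Odds ratio with Woolf 95% CI from a 2x2 association table; counts are hypothetical.
from math import exp, log, sqrt

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """a = carrier/non-responder, b = carrier/responder,
    c = non-carrier/non-responder, d = non-carrier/responder."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # standard error of log(OR)
    return or_, exp(log(or_) - z * se), exp(log(or_) + z * se)

print(odds_ratio_ci(a=30, b=70, c=15, d=135))   # OR ~3.86, 95% CI ~1.95-7.64
```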
Pharmacogenetic data on more than 110 commonly used drugs and over 35 genes are currently included in the Food and Drug Administration (FDA)-approved “Table of Pharmacogenomic Biomarkers in Drug Labels” (http://www.fda.gov/drugs/scienceresearch/researchareas/pharmacogenetics/ucm083378.htm) and, for many of them, the list includes specific clinical actions to be taken based on genetic information. These specific tests are currently used in clinical practice, mostly in oncology, psychiatry, neurology and cardiovascular disease. The first clinical application of a pharmacogenetic test was approved by the FDA in January 2005: the AmpliChip CYP450 test, which includes genetic variants of the CYP2C19 and CYP2D6 genes (two drug-metabolising P450 cytochromes responsible for the most frequent variations in phase I metabolism of approximately 80% of all drugs prescribed today). In June 2007, the FDA released an online “Guidance on pharmacogenetic tests and genetic tests for heritable markers” (available at http://www.fda.gov/MedicalDevices/DeviceRegulationandGuidance/GuidanceDocuments/ucm077862.htm), which presents general guidelines for the rapid transfer of experimental results to clinical practice and for the correct performance and data handling of pharmacogenetic screening. The application of rapid, simple and non-invasive pharmacogenetic tests, which can be easily performed on a blood sample and do not need to be repeated during the patient’s lifetime, would help clinicians to tailor the best therapy for each patient, reducing adverse events and maximizing positive effects. Results from pharmacogenetic tests would allow clinicians to adjust dosages, choose between similar drugs or offer an alternative therapy, if available, before the administration of each treatment. Data obtained from pharmacogenetic tests should become part of the patient’s medical records, with access protected by medical privacy laws, and available to clinicians before drug administration once the patient has granted official permission.
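To illustrate how such genotype information might be translated into a clinically actionable phenotype call, the following minimal Python sketch assigns a CYP2D6 metabolizer phenotype from two star-allele calls using simplified, CPIC-style activity scores. The allele values and cut-offs shown are illustrative simplifications, not a validated clinical decision rule.

```python
# Illustrative sketch: CYP2D6 genotype -> metabolizer phenotype via
# activity scores (values simplified from CPIC conventions; NOT a
# validated clinical rule).

# Activity values for a few common star alleles (illustrative subset).
ALLELE_ACTIVITY = {
    "*1": 1.0,    # normal function
    "*2": 1.0,    # normal function
    "*10": 0.25,  # decreased function
    "*17": 0.5,   # decreased function
    "*4": 0.0,    # no function
    "*5": 0.0,    # gene deletion, no function
}

def cyp2d6_phenotype(allele1: str, allele2: str) -> str:
    """Sum the two allele activity values and bin into a phenotype."""
    score = ALLELE_ACTIVITY[allele1] + ALLELE_ACTIVITY[allele2]
    if score == 0:
        return "poor metabolizer"
    if score < 1.25:
        return "intermediate metabolizer"
    if score <= 2.25:
        return "normal metabolizer"
    return "ultrarapid metabolizer"

print(cyp2d6_phenotype("*1", "*4"))  # -> intermediate metabolizer
print(cyp2d6_phenotype("*4", "*5"))  # -> poor metabolizer
```

In practice, logic of this kind would sit behind a validated laboratory information system and be interpreted alongside the relevant drug-label guidance, not run as a stand-alone script.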
The accuracy of pharmacogenetic testing and the correct management and interpretation of the results will become crucial factors in determining the benefits and/or risks for patients. All of these new technologies, including pharmacogenetic diagnostic tools, will also require a high level of expertise to be applied appropriately. Several studies have documented a lack of knowledge and confidence among primary care physicians in the field of genetic testing, with only 4% of general practitioners in the US and only 21% in the UK feeling confident and sufficiently prepared to counsel patients regarding genetic tests [1, 2]. Specific training programmes in pharmacogenetic testing for medical geneticists and health care professionals are strongly recommended; they should encompass clinical genetics, genetic counselling, the management of inherited and ancillary genetic data, and legal protection.
Pharmacogenetics and osteoporosis: state of the art and translation into clinical practice
Osteoporosis is the most common metabolic bone disorder of the elderly, affecting both sexes (with a higher prevalence in women) and all ethnicities, and is characterized by low bone mass and bone microarchitectural deterioration, with a consequent increase in bone fragility and susceptibility to spontaneous fracture. Although osteoporosis is a multifactorial complex disorder, it is now well established that genetic factors exert a key role in the acquisition of peak bone mass, in the determination of microarchitectural bone structure, and in the regulation of bone metabolism. Numerous effective anti-fracture treatments, acting on bone cells to restore normal bone turnover, are available today: hormone replacement therapy (HRT), selective estrogen receptor modulators (SERMs), bisphosphonates, calcitonin, parathyroid hormone (PTH), teriparatide, strontium ranelate, and the anti-RANKL monoclonal antibody denosumab, administered alone or in combination with supplements of vitamin D and calcium. Response to all of these drugs varies among treated patients, both in terms of efficacy [evaluated as bone mineral density (BMD) gain, reduction of bone turnover and reduction of fracture risk] and of adverse reactions. In the last two decades, some pharmacogenetic studies on anti-osteoporotic drugs have been performed, but their number is still very limited and no conclusive results are available yet.
The main characteristics and results of these studies have been summarized in some recent reviews [3–5], and several findings have been replicated in at least two independent studies.
These preliminary data appear to be promising, but they certainly need to be extended and validated before any application to clinical practice. Association studies on the pharmacogenetics of osteoporosis need to be confirmed in larger cohorts, different ethnic populations and multicentre studies, preferably within prospective controlled clinical trials, and should include analysis of variations in genes encoding drug transporters, drug receptors, drug-metabolizing enzymes and drug molecular targets. Moreover, the single-gene approach should be integrated with multi-candidate-gene and genome-wide association studies on large cohorts, to identify unsuspected candidate genes and polymorphisms as well. Subsequently, data obtained from genetic studies should be corroborated and validated using gene expression and proteomic analyses and by performing specific functional in vitro and in vivo studies. The effects of epigenetic mechanisms (i.e. histone modifications, cytosine methylation in gene promoters and microRNAs) on the regulation of expression of genes encoding drug-metabolizing enzymes, transporters, receptors and targets should also be taken into account and investigated.
References
1. Burke W, Emery J. Nat Rev Genet 2002; 3(7): 561–566.
2. Suchard MA, Yudkin P, Sinsheimer JS, Fowler GH. Br J Gen Pract 1999; 49(438): 45–46.
3. Marini F, Brandi ML. Expert Rev Endocrinol Metab 2010; 5(6): 905–910.
4. Marini F, Brandi ML. Curr Osteoporos Rep 2012; 10(3): 221–227.
5. Marini F, Brandi ML. J Pharmacogenom Pharmacoproteomics 2012; 3(3): 109.
The authors
Francesca Marini PhD and Maria Luisa Brandi MD, PhD
Metabolic Bone Unit, Department of Internal Medicine, University of Florence, Florence, 50139, Italy.
E-mail: m.brandi@dmi.unifi.it
DIY diagnostics – to do or not to do
Much like do-it-yourself (DIY) hardware stores, DIY or at-home diagnostic test kits possess both benefits and drawbacks. Making a decision is tricky. It may become even more so as a host of new kits arrive on the market, some of which are aimed at potentially deadly diseases like cancers.
The growth in the DIY kit market is driven by a combination of several factors.
Many kit developers are beginning to see easier opportunities in the developing world, especially in large emerging countries such as Brazil, India and China. All of these have a rising number of affluent consumers, accompanied by lifestyle changes that heighten the risk of diseases such as diabetes or AIDS. At the same time, medical regulations are more lax than in the West; for example, kits may be packaged differently, without the visible labels that warn that a specific test is not (yet) approved by health regulators.
What is common to both the West and large emerging markets, as far as DIY test kits go, is the Internet. Not only does the Net allow consumers to become aware of new tests, but it also provides them with a channel for access to vendors, credit card payment and mail-order delivery. As a follow-on, some DIY kit producers are working to provide encrypted transfer of data and access to the test results, again over the Internet.
No one doubts the utility of DIY kits in areas such as ovulation and pregnancy testing. Most physicians also agree that the monitoring of chronic diseases is far better served by emerging DIY diagnostic technologies. For example, a relatively new test for patients taking the anticoagulant warfarin does away with the need for weekly visits to a physician to ensure that their blood is neither so prone to clotting as to cause a stroke, nor so thin as to be life-threatening in the event of a wound or high blood pressure. The same is true of at-home diabetes tests, which permit day-to-day adjustment of insulin intake. Blood pressure too, it is now accepted, needs to be monitored throughout the day to give a true reading, rather than once at a doctor’s office.
However, there are several areas where healthcare professionals are apprehensive about the growth in DIY tests, and are likely to remain so for some time. This is mainly because even state-of-the-art DIY technology carries an error rate of roughly 10%. While the psychological impact of a false positive – which occurs at a similar rate to false negatives in most DIY tests – can be serious, a false negative for a major allergy, a urinary tract or yeast infection or, for that matter, HIV, would be devastating.
Neonatal screening for lysosomal storage disorders
The interest in newborn screening for lysosomal storage disorders (LSDs) has increased significantly due to newly developed enzyme replacement therapies, the need for early diagnosis, and advances in technical developments. However, testing for lysosomal storage disorders in newborn screening (NBS) raises many challenges for primary health care and its providers. The high frequency of late-onset mutations makes lysosomal storage disorders a broad health problem beyond childhood, as well as a challenge for diagnosis and therapy.
by Professor David C. Kasper
Clinical background
Lysosomal storage disorders (LSDs) may be attractive candidates for newborn screening (NBS). These disorders result in the accumulation of macromolecular substrates that would normally be degraded by enzymes involved in lysosomal metabolism [1]. Although individual LSDs are rare, their combined incidence has been estimated at 1 per 7,700 live births for Caucasians [2]. LSDs have a progressive course, and can present at any age, affecting any number of tissues and organ systems [3]. In most cases, treatment is directed toward symptomatic care of secondary complications. The development of novel diagnostic techniques has been strengthened by the availability of treatment strategies, including enzyme replacement, stem cell transplantation and substrate reduction, although these therapies still have limitations [4]. Nonetheless, early diagnosis is essential for optimal treatment, which supports adding LSDs to the NBS panel. However, current experience of nationwide screening for LSDs is still limited.
Laboratory diagnostics
The increased technological capacity means that expanded NBS programmes can now identify a broader range of conditions in which early detection and pre-symptomatic treatment result in clinical benefit. The technology for simultaneous screening of several LSD-related enzyme activities from essentially a single blood sample was initially complicated, time-consuming and laborious, but new protocols and technologies are now available that allow a simplified screening procedure. For future implementation of high-throughput LSD assays in routine clinical diagnostics, sample handling and mass spectrometric analysis have to be simplified; specifically, sample pre-treatment, speed of analysis and detection must become more integrated [5]. In this context it is also mandatory to achieve high laboratory standards in terms of technical proficiency and reproducibility of results. Quality control materials for this purpose are provided by the Newborn Screening Quality Assurance Program at the Centers for Disease Control and Prevention (CDC, Atlanta, GA, USA) [6].
Protocols for analysing lysosomal enzyme activities evolve continuously. In addition to fluorometric methods using, for example, 4-methylumbelliferone substrates, efforts have been made to use tandem mass spectrometry (MS/MS), particularly for high-throughput analysis in routine newborn screening laboratories. MS/MS procedures have been refined and optimised, but the complexity of sample preparation prior to mass spectrometry remains. Drawbacks of earlier protocols were the need for liquid-liquid extraction (LLE) or solid phase extraction (SPE), and the handling of hazardous organic solvents such as ethyl acetate. Novel approaches such as online multi-dimensional chromatography prior to flow injection analysis facilitate ease-of-use sample introduction and increased speed of analysis. Our research group previously reported the use of TurboFlow (turbulent flow chromatography) for online sample clean-up to remove matrix interferences such as salts, proteins and detergents in the analysis of lysosomal enzyme activities in dried blood spot samples [7]. Purified analytes of interest, freed from potential matrix interferences, were then transferred from a TurboFlow column to an analytical column for ultra-high-performance liquid chromatography (UHPLC) separation prior to MS/MS analysis, in order to separate enzymatic products from residual substrate. This simplified protocol has recently been evaluated in a comprehensive pilot screening of more than 8,500 newborns, demonstrating its technical feasibility and robustness [8]. Moreover, the incubation time was reduced dramatically, from 12–16 h to 3 h [9]. Novel buffer systems for the combined incubation of six or more (potentially nine) enzymes simultaneously are also on the horizon, including substrates for mucopolysaccharidosis types II, IVA and VI [10]. These new buffer systems might allow the incubation of several enzymes in one reaction vial, and help to reduce costs for personnel, consumables and reagents. We conclude that multiplexed MS/MS screening assays are reliable for nationwide LSD NBS, and for selective metabolite screening in high-risk populations.
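To make the underlying calculation concrete: in these assays, enzyme activity is derived from the MS/MS ion ratio of reaction product to an added internal standard, normalised to the blood volume in the punch and the incubation time. The minimal Python sketch below shows this arithmetic; the internal standard amount and punch blood volume are illustrative assumptions, and real protocols also subtract assay blanks and apply response-factor corrections.

```python
# Minimal sketch: lysosomal enzyme activity from a dried blood spot (DBS)
# MS/MS assay. Parameters are illustrative; real protocols subtract assay
# blanks and correct for extraction/ionisation response factors.

def enzyme_activity_umol_h_L(product_to_is_ratio: float,
                             is_amount_nmol: float = 5.0,   # internal standard added (assumed)
                             blood_volume_uL: float = 3.1,  # blood in a 3.2 mm punch (assumed)
                             incubation_h: float = 3.0) -> float:
    """Product formed = ion ratio x internal standard amount; activity is
    product per hour, normalised to the blood volume in the punch."""
    product_nmol = product_to_is_ratio * is_amount_nmol
    blood_volume_L = blood_volume_uL * 1e-6
    activity_nmol_h_L = product_nmol / (incubation_h * blood_volume_L)
    return activity_nmol_h_L / 1000.0  # nmol -> umol

# With these assumptions, a product/IS ratio of 0.02 gives ~10.8 umol/h/L.
print(round(enzyme_activity_umol_h_L(0.02), 1))
```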
In our experience of comparing biochemical with genetic data from affected patients, we did not observe any correlation between the mutation present and the degree of enzyme activity deficiency measured biochemically by MS/MS, nor could the type of mutation be inferred from the level of decreased enzyme activity. It therefore remains mandatory to confirm biochemically suspected cases by genetic mutation analysis.
The nationwide LSD screening experience
Nationwide screening for LSDs opens up a new category of disorders that will confront us with challenging issues in NBS. Currently, routine newborn screening for LSDs has been introduced for Pompe disease in Taiwan and for Krabbe disease in the State of New York. The Austrian Newborn Screening Center and others, for example in Washington State and Italy, have successfully started pilot studies using multiplexed MS/MS screening assays.
We recently reported the results of a comprehensive pilot screening of ~35,000 newborns for four LSDs using a multiplexed MS/MS-based assay including genetic mutation analysis [11]. Our results revealed a surprisingly high number of enzyme deficiencies in a predominantly Caucasian population in a Central European country. The results confirmed 15 newborns with at least one mutation together with diminished lysosomal enzyme activity, demonstrating a high overall incidence of 1:2,315 in the Austrian population. Frequency, positive predictive value and technical practicability make nationwide NBS for LSDs technically feasible. In our screening, the positive cases were predominantly Fabry disease, with an incidence of late-onset Fabry disease of 1:4,100 in the Austrian population. Fabry disease is found in all ethnic, racial and demographic groups and is not restricted to a specific ethnic background. Our results are concordant with those of Spada et al., who reported a high incidence of 1:3,100 for the late-onset and 1:37,000 for the classic phenotype [12]. Furthermore, several studies have shown that patients with renal insufficiency, cerebral infarction or left ventricular hypertrophy of unknown aetiology may suffer from Fabry disease [13]. We conclude that NBS may be beneficial for identifying severe clinical cases, but has the drawback of also detecting mild forms, late-onset variants and asymptomatic cases.
Future perspectives
The high incidence of the late-onset phenotypes in Fabry, Gaucher and Pompe disease raises the question of when genetic screening for these diseases should be undertaken: in the neonatal period or in early adulthood. Clearly, early detection, genetic counselling and therapeutic intervention are beneficial for the classic phenotypes, but the timing of screening for the late-onset variants of Fabry and other treatable diseases may raise concerns. A recent study revealed that long-term treatment led to substantial and sustained clinical benefits; however, advanced cardiac and renal disease cannot be reversed later on, making early diagnosis crucial. NBS is less controversial for infantile Pompe disease. In Taiwan, the first prospective Pompe screening programme, in which treatment was initiated before the onset of obvious symptoms and significant irreversible muscle damage, clearly demonstrated the benefit for infants. For neuronopathic LSDs such as Gaucher type II and Niemann-Pick type A, the central nervous system cannot be treated by enzyme replacement therapies, which highlights the importance of consented genotyping and phenotype prediction after biochemical first-line screening. Apart from the potential clinical benefit for patients, NBS for LSDs can provide reproductive risk information for parents and future adults. This situation is common in screening for metabolic disorders, as they are inherited predominantly in a recessive manner.
In conclusion, our study shows that Pompe, Gaucher and Fabry disease are frequent disorders with major public health implications. Even though the American College of Medical Genetics (ACMG) ranked LSDs as low priority in 2006, two LSDs, Pompe and Krabbe, were subsequently nominated for consideration by the federal advisory committee. Currently, three states have initiated NBS for LSDs and three other states have passed legislation [14]. LSDs belong to a new category of disorders for which population-based screening assays exist, and new high-throughput screening assays and novel treatment strategies are on the horizon for many others. Future challenges include the implementation of LSDs in routine NBS, dealing with the identification of late-onset phenotypes, and optimal therapy schemes, potentially including cost-intensive enzyme replacement therapies.
References
1. Wenger DA, Coppola S, and Liu SL. Insights into the diagnosis and treatment of lysosomal storage diseases. Arch Neurol 2003; 60(3): 322–328.
2. Ranierri E, et al. Pilot neonatal screening program for lysosomal storage disorders, using lamp-1. Southeast Asian J Trop Med Public Health 1999; 30(Suppl 2): 111–113.
3. Beck M. Variable clinical presentation in lysosomal storage disorders. J Inherit Metab Dis 2001; 24(Suppl 2): 47–51; discussion 45–46.
4. Beck M. Therapy for lysosomal storage disorders. IUBMB Life 2010; 62(1): 33–40.
5. Annesley T, et al. Mass spectrometry in the clinical laboratory: how have we done, and where do we need to be? Clin Chem 2009; 55(6): 1236–1239.
6. De Jesus VR, et al. Development and evaluation of quality control dried blood spot materials in newborn screening for lysosomal storage disorders. Clin Chem 2009; 55(1): 158–64.
7. Kasper DC, et al. The application of multiplexed, multi-dimensional ultra-high-performance liquid chromatography/tandem mass spectrometry to the high-throughput screening of lysosomal storage disorders in newborn dried bloodspots. Rapid Commun Mass Spectrom 2010; 24(7): 986–994.
8. Metz TF, et al. Simplified newborn screening protocol for lysosomal storage disorders. Clin Chem 2011; 57(9): 1286–1294.
9. Mechtler TP, et al. Short-incubation mass spectrometry assay for lysosomal storage disorders in newborn and high-risk population screening. J Chromatogr B 2012; in press.
10. Gelb MH, and Scott CR. Screening for three lysosomal storage diseases in a NBS laboratory and the potential to expand to a nine-plex assay. APHL Newborn Screening and Genetics Testing Symposium San Diego, CA, USA; 7–10 November, 2011.
11. Mechtler TP, et al. Neonatal screening for lysosomal storage disorders: feasibility and incidence from a nationwide study in Austria. Lancet 2012; 379(9813): 335–341.
12. Spada M, et al. High incidence of later-onset fabry disease revealed by newborn screening. Am J Hum Genet 2006; 79(1): 31–40.
13. Monserrat L, et al. Prevalence of fabry disease in a cohort of 508 unrelated patients with hypertrophic cardiomyopathy. J Am Coll Cardiol 2007; 50(25): 2399–2403.
14. Zhou H, Fernhoff P, and Vogt RF. Newborn bloodspot screening for lysosomal storage disorders. J Pediatr 2011; 159: 7–13.
The author
David Kasper, PhD
Department of Pediatrics and Adolescent Medicine, Medical University of Vienna, Währinger Gürtel 18–20, A-1090 Vienna, Austria
e-mail: david.kasper@meduniwien.ac.at
Benefits of polychromatic flow cytometry in a clinical setting
Increasingly sophisticated instruments and an expanding range of fluorochromes are making it possible to detect a growing number of markers on a single cell. These advances are encouraging the wider adoption of polychromatic flow cytometry (PFC). This review looks at the benefits of PFC in clinical laboratories, and how to deal with the associated challenges.
by Sandy Smith and Professor William Sewell
Flow cytometry is a valuable tool in today’s diagnostic pathology laboratories [1]. The main strengths of flow cytometry are its ability to detect and characterise abnormal populations, its capacity to assess several markers simultaneously on a single cell, and the relative speed with which results can be produced. In recent years, polychromatic flow cytometry (PFC), using instruments that detect 5–10 markers simultaneously, has been progressively introduced into clinical laboratories. This paper focuses on how increasing the number of colours can impact a clinical flow cytometry laboratory.
The advantages of PFC
Arguably the biggest impact of increasing colours is the dramatic increase in the amount of information obtained from paucicellular samples, such as cerebrospinal fluid (CSF). Often the whole sample needs to be committed to a single tube to obtain enough events. Studies have shown that flow cytometry improves the detection rate of CSF involvement by haematopoietic neoplasms [2]. With low cell numbers, background events become a significant proportion of total events, so having sufficient colours available to include a nuclear stain can be very useful for distinguishing true cells from debris.
Another major benefit of increased colours is in the analysis of complex populations [3]. Light chain expression is the key to demonstrating monoclonality in B cell populations, so the more markers in the light chain tube, the better the sensitivity. The availability of more markers increases the ability to separate populations and analyse them independently. T cell phenotyping is significantly more complex than B cell phenotyping [4], and PFC can improve the effectiveness of panels investigating T cell disorders. However, specificity becomes an issue, since there are often many T cell subsets in reactive samples. CD7 negativity is used to identify some T-NHL cells, yet CD7-negative populations are commonly found in normal samples. False negativity can be reduced by appropriate selection of clones and fluorochromes, and we have found that switching CD7 from FITC to APC has reduced the number of dim-negative populations [Fig. 1]. However, T cell malignancies are relatively rare and would rarely justify an instrument upgrade on their own.
PFC can also aid in the detection of minimal residual disease (MRD) by allowing the inclusion of more markers to identify targeted populations. In recent years, MRD detection has benefitted from developments in instrumentation that improve consistency in settings over different collection time points, and from improved computers and software packages that allow fast analysis of >500,000 events. As these technologies are more widely adopted, the benefits of PFC will have a greater impact on MRD detection.
PFC has made panel construction both easier and harder. Using more colours means fewer decisions when assigning markers to tubes, but choices are limited by the range of conjugates for rarer fluorochromes, and complicated by compensation and spreading. Compensating for overlapping fluorochrome emissions becomes more complex with more fluorochromes, although the problem is reduced when they are excited by different lasers. With advances in software, compensation can be managed with automated matrices and manual optimisation by experienced users. Although there is an increased range of fluorochromes, it is helpful to use one fluorochrome per channel (i.e. always FITC or always AF488) to avoid generating and maintaining too many separate settings files. The spreading effect is the expansion either side of the zero point of an axis due to the bright positive intensity of a second fluorochrome [Fig. 2]. This phenomenon is unique to instruments producing digital data, and can be managed by arranging mutually exclusive combinations on the affected fluorochromes [5].
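For readers less familiar with compensation, the underlying linear algebra is simple: observed intensities are modelled as the true dye signals multiplied by a spillover matrix, and compensation multiplies by its inverse. The brief numpy sketch below illustrates this for two colours; the spillover fractions and events are invented for illustration, and in practice the matrix is derived from single-stained controls.

```python
import numpy as np

# Spillover matrix S: S[i, j] = fraction of dye i's signal appearing in
# detector j (diagonal = 1). Values below are invented for illustration.
S = np.array([
    [1.00, 0.15],   # dye A spills 15% into detector B
    [0.08, 1.00],   # dye B spills 8% into detector A
])

# Observed events (rows) x detectors (columns): observed = true @ S.
observed = np.array([
    [1000.0, 150.0],   # a cell positive only for dye A
    [80.0, 1000.0],    # a cell positive only for dye B
])

# Compensation inverts the spillover: true = observed @ inv(S).
compensated = observed @ np.linalg.inv(S)
print(np.round(compensated, 1))  # recovers ~[[1000, 0], [0, 1000]]
```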
Quality control
Increasing the number of colours does not increase the number of QC procedures; however, QC can become more complex, as there are more things that could go wrong. No matter how many colours are used, any lab will still need daily bead calibration to ensure consistent instrument operation, plus a biological control to ensure appropriate assay and acquisition set-up. Upgraded instruments have more detectors, lasers and fluorochromes to check, so a greater knowledge base is required to troubleshoot problems. Labs with the expertise to resolve technical issues in-house will have less instrument downtime. For biological controls, a very effective form of QC is to utilise internal controls, i.e. negatively staining cells within the same sample. These are independent of the number and type of fluorochromes used and are especially useful in high-throughput labs.
Data handling
As the number of colours increases, the information becomes harder to express in traditional graphic form [6]. Standard graphs are two-dimensional; gates can be combined in Boolean formulas, but each region is still adjusted in two dimensions. The number of graphs required to display each marker against every other marker also increases. Careful planning should enable each lymphocyte marker to be shown only once per tube in panels targeting lymphocyte lineage neoplasms, reducing the time taken to review data. In myeloid panels the emphasis is on tracking development pathways, so some markers are required to help track multiple pathways; CD33, for example, is useful for blasts and for both monocytic and granulocytic development. Another strategy to clarify data is to use colour schemes to track cells across different 2D plots from one tube; these schemes can then be applied to all tubes in the panel to help tie the information together. Traditionally, analysis software has been provided by the cytometer manufacturer, but the increased complexity of analysis in PFC means that specialist software companies are playing a greater role. For the next stage of software development, many of these companies are developing algorithms to define clusters of cells in multi-dimensional space in a way that the traditional approach of sequential gating cannot. The main outstanding issue is the expression of the data in a user-friendly way, so that subtle populations can be visualised in a persuasive fashion.
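As a concrete illustration of Boolean gating, the sketch below combines three one-dimensional gates to select CD19-positive, kappa-positive, lambda-negative events from a toy event matrix. The simulated intensities and thresholds are invented; real gates are set per assay on compensated data.

```python
import numpy as np

# Toy event matrix: rows are events, columns are fluorescence intensities
# for three markers. Thresholds below are invented for illustration.
rng = np.random.default_rng(0)
events = rng.lognormal(mean=5, sigma=1, size=(10_000, 3))
CD19, KAPPA, LAMBDA = 0, 1, 2

cd19_pos = events[:, CD19] > 300     # one-dimensional gate per marker
kappa_pos = events[:, KAPPA] > 300
lambda_pos = events[:, LAMBDA] > 300

# Boolean combination: B cells expressing kappa but not lambda.
kappa_b = cd19_pos & kappa_pos & ~lambda_pos
print(f"{kappa_b.sum()} of {len(events)} events in the kappa+ B-cell gate")
```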
Technology development
In recent years, there have been efforts to standardise antibody panels. Increasing colours can make the choice of which markers to combine in the same tube easier, and allows ‘backbone’ markers to be included. Backbone markers are markers used in every tube of a panel to allow more specific gating across tubes; an example of B cell panel markers in multiple tubes is shown in Table 1. Various international groups have recommended approaches to standardisation [7]. However, adoption has been slow, probably for practical reasons. Increasing colours increases information, but also the complexity of analysis and the range of technical issues, so staff need greater knowledge and experience. This issue is worthy of its own paper and is not discussed in depth here; the major issues are listed in Table 2. Labs tend to use reagents recommended by their instrument manufacturer, which makes technical support easier. The appropriate number of colours and the most suitable instrumentation are very site-specific, which decreases the capacity for standardisation. It is desirable, and indeed more practical, to standardise user expertise; the implementation of the International Cytometry Certification Examination is a significant first step.
Fluorochrome availability and cost
The average number of colours used in the clinical world depends on both suppliers and labs. It requires a critical mass of usage by laboratories to make a larger range of fluorochromes and conjugates commercially viable. As the increased range is more widely adopted, experience grows and more suppliers take on larger ranges, so prices may fall; all of this encourages more laboratories to upgrade their systems, and so on. This cycle relies on commercial investment in new technologies as well as on laboratories investing resources to trial and optimise them. In labs this tends to rely on individuals who are personally motivated and supported by the lab, which is difficult in the current economic climate. One solution is for multiple sites to pool resources, with one centre investigating and implementing options that can then be adopted and optimised by all. Research groups will also concentrate resources on creating single panels to glean maximum information from samples; here, the more unusual fluorochromes and instruments can be tested and optimised, and these experiences passed on to clinical users.
Conclusion
Practicalities and cost-effectiveness will always play a part in the future directions of clinical flow cytometry labs. There are many benefits to increasing the colour capabilities of clinical labs. More information can be obtained from each assay tube, improving sensitivity for abnormal populations in a normal or reactive background and in the analysis of paucicellular specimens. Workflow can also improve, with fewer tubes to run. More colours will potentially bring more technical issues and demand more resources for trials and validation; ultimately, the availability of resources will dictate the appropriate number of colours for each laboratory. Labs should regularly assess how many colours would benefit them, and how many they can handle. These developments will continue to enhance the contribution of flow cytometry to laboratory diagnosis.
References
1. Craig FE, Foon KA. Blood 2008; 111: 3941–3967.
2. de Graaf MT, de Jongste AH, et al. Cytometry B Clin Cytom 2011; 80: 271–281.
3. Sewell WA, Smith SA. Pathology 2011; 43: 580–591.
4. Tembhare P, Yuan CM, Xi L, et al. Am J Clin Pathol 2011; 135: 890–900.
5. Roederer M. Cytometry 2001; 45: 194–205.
6. Mahnke YD, Roederer M. Clin Lab Med 2007; 27: 469–485, v.
7. Davis BH, Holden JT, et al. Cytometry B Clin Cytom 2007; 72(Suppl 1): S5–13.
The authors
Sandy ABC Smith1 MSc, and William A Sewell1,2,3 MBBS, PhD
1 Immunology Department, SydPath, St Vincent’s Pathology, St Vincent’s Hospital Sydney, Victoria St, Darlinghurst, NSW 2010, Australia.
2 St Vincent’s Clinical School, University of NSW, NSW 2052, Australia.
3 Garvan Institute of Medical Research, Victoria St, Darlinghurst, NSW 2010, Australia
Progress in the management of prostate cancer
Although globally prostate cancer (PCa) is the second most common cancer in men after lung cancer, and around one in six men in the West will eventually be diagnosed with the disease, the majority of these patients will die of unrelated causes. Thus PCa management should ideally not only involve diagnosis and provision of the most appropriate therapy, but also a decision as to whether any treatment is actually necessary.
Traditional screening based on an elevated level (above 4 ng/mL) of the far-from-specific marker prostate specific antigen (PSA) to diagnose PCa has led to over-diagnosis, unnecessary biopsies and over-treatment. It has also left undetected those PCa cases with PSA levels below the cut-off value. The phi test, available in Europe since 2010 and very recently approved by the FDA, was developed to improve prostate cancer management. Intended for use in men with a PSA level in the range of 4–10 ng/mL, the test combines measurements of total PSA, free PSA and an isoform of free PSA, namely [-2]proPSA, to determine the probability of prostate cancer. The test helps to discriminate between PCa and benign disease, and reduces the number of negative prostate biopsies.
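The index itself is a simple combination of the three measurements: [-2]proPSA divided by free PSA, scaled by the square root of total PSA. A minimal sketch of the calculation is shown below; the example values are purely illustrative, and the risk cut-offs applied to the resulting score come from the manufacturer’s validation studies.

```python
import math

def phi(p2psa_pg_ml: float, free_psa_ng_ml: float, total_psa_ng_ml: float) -> float:
    """Prostate Health Index: ([-2]proPSA / free PSA) x sqrt(total PSA).
    Note that [-2]proPSA is conventionally reported in pg/mL and the two
    PSA measurements in ng/mL."""
    return (p2psa_pg_ml / free_psa_ng_ml) * math.sqrt(total_psa_ng_ml)

# Illustrative values only; interpretation requires validated cut-offs.
print(round(phi(15.0, 0.8, 6.0), 1))  # -> 45.9
```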
However, many elderly men diagnosed with a tumour confined to the prostate gland that would not have affected their survival are still undergoing aggressive and unnecessary therapy; the majority of these patients suffer from erectile dysfunction after treatment, and many have urinary leakage and intestinal problems. There is still a major need for accurate, preferably blood-based, tests to determine which elderly patients with PCa currently confined to the prostate gland are likely to suffer eventually from life-threatening metastatic prostate cancer.
Two papers published in the October issue of The Lancet Oncology give cause for optimism, however. It was found that whole-blood gene expression profiling in men with metastatic castration-resistant prostate cancer (defined as disease progression despite androgen deprivation therapy) was able to stratify patients into two distinct prognostic groups. In addition, the European Medicines Agency (EMA) is about to approve a new drug, enzalutamide, taken orally once a day, for the treatment of metastatic castration-resistant prostate cancer. Data from Phase III clinical trials show that, as well as extending survival, the drug also improves quality of life by reducing pain and increasing appetite and energy levels.
Hopefully at least some of the unnecessary suffering resulting from PCa management will soon be alleviated.
Point-of-care platforms, genetic assays and wireless connectivity
A key question, and the focus of this article, is whether genetic assays that are currently possible only in centralised screening facilities can be carried out on POC platforms. We believe that, through a combination of emerging molecular techniques, low-cost simple microfluidic systems, and some additional developments in detection systems and information transfer, it will be possible to carry out genetic assays, including mutation detection, on POCs within the next 5 years, and possibly sequencing within a decade.
Existing POC-adaptable genetic technologies
Nucleic acid amplification remains the most widely used analytical technique for genetic diagnostics. However, integrated systems capable of reliable detection with the sensitivity and specificity required for clinical applications are still scarce. In centralised screening facilities, quantitative polymerase chain reaction (qPCR) is the workhorse for genetic analyses. Compared to qPCR, isothermal amplification strategies have been recognised as a promising alternative, especially for POCs, because of the complexity of establishing temperature cycling and the associated detection systems in POC devices. The advantages of isothermal amplification include high amplification yields (in some instances allowing a positive reaction to be observed with the naked eye), savings in power consumption because no temperature cycling is needed, and short times to a positive amplification (as low as 5 minutes for larger copy numbers). Many isothermal techniques have been developed [1], including: loop-mediated isothermal amplification (LAMP), recombinase polymerase amplification (RPA), nucleic acid sequence-based amplification (NASBA), smart amplification process (SmartAmp), rolling circle amplification (RCA), multiple displacement amplification (MDA), helicase-dependent amplification (HDA), strand displacement amplification (SDA), isothermal and chimeric primer-initiated amplification of nucleic acids (ICAN), cross-priming amplification (CPA), single primer isothermal amplification (SPIA), self-sustained sequence replication (3SR), transcription-mediated amplification (TMA), genome exponential amplification reaction (GEAR) and exponential amplification reaction (EXPAR).
The benefits of one isothermal technique over another depend on the application of interest. Techniques requiring a large number of enzymes, or that are carried out at low temperature, may be less amenable to POCs than those that require a single enzyme; more than one enzyme will, in general, increase the cost and operational complexity of the amplification reaction in a POC. While a larger number of primer sets increases specificity, it also makes designing primers to target a given phylogenetic group or a divergent functional gene more difficult, if not impossible. This is because multiple target-specific regions are needed, each a certain distance (number of bases) from the others, and because incorporating degenerate bases into multiple primer sequences within one assay adds further complexity (see the sketch below). Enzymes for isothermal assays that work at low temperature (less than 40°C) may be at a disadvantage in hot and warm climates. However, an isothermal amplification strategy that directly incorporates primers/probes designed for previously validated qPCR assays, uses a single enzyme, can be performed at higher temperatures and allows accurate quantification would greatly increase the attraction of isothermal amplification, ushering in a new era of point-of-care genetic diagnostics. The cost associated with licensing an amplification technique will also dictate whether it can be used for POC applications, specifically in low-resource settings.
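To make the degeneracy problem concrete, the following minimal sketch counts how many distinct concrete sequences a degenerate primer encodes, i.e. the product of the IUPAC code sizes at each position; the example primer is invented. With four to six primers per assay, as in LAMP, this combinatorial load multiplies across the whole primer set.

```python
from math import prod

# Number of bases encoded by each IUPAC nucleotide code.
IUPAC = {"A": 1, "C": 1, "G": 1, "T": 1,
         "R": 2, "Y": 2, "S": 2, "W": 2, "K": 2, "M": 2,
         "B": 3, "D": 3, "H": 3, "V": 3, "N": 4}

def degeneracy(primer: str) -> int:
    """Distinct concrete sequences encoded by a degenerate primer."""
    return prod(IUPAC[base] for base in primer.upper())

# Two degenerate positions (R and N) already encode 8 variants; an assay
# with six such primers multiplies this combinatorial load further.
print(degeneracy("ACGTRNACGT"))  # -> 8
```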
Existing POC platforms for genetic analysis
Multiple platforms have been developed for POC genetic testing, with an emphasis on reducing cost and size while improving throughput, accuracy and simplicity. Table 1 is a non-exhaustive list illustrating some of the capabilities. Ideally, POCs should simplify genetic analysis by accepting crude or unprocessed samples. All of the listed qPCR platforms automatically perform sample processing (cell lysis and DNA purification) directly within the cartridge into which the sample is dispensed. Compared to qPCR POCs, isothermal POCs have not focused as much on sample processing, for two reasons. First, isothermal assays are generally less affected by sample inhibitors and in certain cases may not require sample processing at all. Second, the development of POCs based on isothermal assays has lagged because the assays themselves are relatively new to diagnostic applications.
Development of isothermal genetic POC devices is, however, relatively easy compared to qPCR devices, because isothermal POCs can use components that are inexpensive, small and low-power. The use of such components is possible because of the high product yields of isothermal amplification techniques: LAMP, for example, produces around 10 µg of DNA in a 25 µl reaction, compared to roughly 0.2 µg for PCR. This high yield can be quantified with far less sophisticated optics than those used in qPCR devices. The Gene-Z platform [Figure 1], for example, uses an array of individually controlled low-power light-emitting diodes for excitation, and optical fibres (one for each reaction well) to channel light to a single photodiode detector for real-time measurement [2].
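On such platforms, quantification typically rests on the time at which the real-time signal crosses a threshold, with earlier crossings indicating more starting copies. The short sketch below shows that calculation with linear interpolation between readings; the threshold and photodiode values are invented for illustration.

```python
def time_to_positive(times_min, signals, threshold):
    """Return the interpolated time at which the signal first crosses
    the threshold, or None if it never does."""
    for i in range(1, len(signals)):
        if signals[i - 1] < threshold <= signals[i]:
            # Linear interpolation between the two bracketing readings.
            frac = (threshold - signals[i - 1]) / (signals[i] - signals[i - 1])
            return times_min[i - 1] + frac * (times_min[i] - times_min[i - 1])
    return None

# Invented photodiode readings from one isothermal reaction well.
times = [0, 5, 10, 15, 20, 25]
signal = [100, 105, 120, 400, 900, 950]
print(time_to_positive(times, signal, threshold=250))  # -> ~12.3 min
```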
Although POCs are generally considered single-assay devices, multiplexing of targets (e.g. in co-infections) and analysing a given pathogen in greater depth (e.g. methicillin-resistant Staphylococcus aureus (MRSA), or HIV genotyping) is becoming critical. Genetic analysis is expected to allow a resolution of genotype better than that possible with immunoassays. The use of simple but powerful microfluidic chips (e.g. those used with Gene-Z or GenePOC) instead of conventional Eppendorf tubes can be advantageous in terms of cost and power of analysis. Such microfluidic chips are steadily changing in shape, form and material, and are bound to become simpler, better and more accessible; an example is the paper-based diagnostics platform developed by Whitesides’ group [3]. Miniaturisation obviously leads to significant reagent cost savings, provided it does not run into detection-limit issues. Multiplexed detection also simplifies the analysis, since manual dispensing into individual reaction tubes is not required. For example, the chip used with Gene-Z does not require external active elements for sealing, pumping or distributing samples into individual reaction wells, eliminating the potential for contamination between chips or of the device.
Types of genetic assays on POCs
So what types of genetic assays are likely to move to POCs first? For regions with excellent centralised screening, it may be those assays where getting results quickly using POCs saves lives or has tangible long-term benefits, e.g. quickly determining an infection and its antibiotic resistance. The leading example is MRSA, for which resistance has continuously increased over the past few decades. It is now known that patients are more likely to acquire MRSA in wards where the organism is screened for by culturing than in wards using rapid molecular techniques. In such cases, detection of antibiotic resistance genes using a panel of genetic assays on POCs would minimise the practice of administering broad-spectrum antibiotics simply because test results are not available soon enough.
In limited-resource settings, the examples of genetic testing by POCs are virtually endless – TB, malaria, dengue fever, HIV, flu, fungal infections and so on – because very little or no centralised screening occurs in such scenarios. The ability to measure dengue virus, for example, in 1–4 µl of blood could provide better tools for the 2.5 billion people who are at risk of infection and the 50–100 million people who contract it every year. Similarly, multidrug-resistant and extensively drug-resistant TB is a global concern due to the high cost of treatment. At present, large numbers of mutations cannot be measured simultaneously using POCs. However, apart from the fact that isothermal mutation assays are fewer and their primer development success rate is much lower than for signature marker probe/primer-based assays, there are no fundamental technical barriers. The availability of a simple isothermal mutation assay would go a long way towards making many genotyping-based diagnostics available on POCs.
In the long run, POCs may even be used to detect and quantify genetic markers associated with non-infectious diseases, such as cancer, and selected assays focusing on human genetics. Globally, cancer was responsible for 7.6 million deaths in 2008, a figure projected to rise to 13.1 million by 2030. Simple and quantitative tools capable of measuring a panel of markers may play an additional role: they may help collect data on potentially useful but untested markers. Both PCR- and isothermal-based assays are amenable to this application using circulating tumour cells, circulating free DNA/RNA, gene mutations and microRNA panels. Currently used methods of cancer detection are highly invasive and time-consuming; minimally invasive methods on POCs may significantly increase the deployment of such capabilities.
Why do we need wireless connectivity for POCs?
With POCs comes the question of connectivity: is it a must-have or a nice-to-have? We consider it important to have, although a less capable form of the device could be deployed without connectivity. Wireless connectivity via cellular phones has many advantages, paramount among them access to the physician and/or nurse for expert input and support. Technical advantages include automated data transfer, increased efficiency in reporting, time savings, and lower equipment costs, because the touch-screen user interface and the computational power needed for data analysis can be offloaded to the phone.
The use of cellphones is an obvious possibility given their ubiquitous availability and the vast network of mobile services: “There are 7 billion people on Earth; 5.1 billion own a cellphone; 4.2 billion own a toothbrush” (Mobile Marketing Association Asia, 2011). By 2015 it is estimated that one third of cellphone users will have used a mobile health solution in some form. However, POC genetic diagnostics and mobile networking have not yet crossed paths. Some gene analysers (e.g. Gene-Z, Hunter) already have network-enabled wireless connectivity to bridge this gap, but more work is needed. One critical element is that the transfer of data, including in wireless mode, must meet the requirements of the Health Insurance Portability and Accountability Act of 1996 (HIPAA) Privacy and Security Rules set by the U.S. Department of Health and Human Services. FDA clearance standards and specifications are still evolving in this area [4].
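At the application layer, HIPAA-conscious transfer essentially means that result payloads are encrypted and authenticated before leaving the device. The following minimal Python sketch illustrates the idea using symmetric authenticated encryption and an HTTP POST; the server endpoint and payload fields are hypothetical, and a real deployment would add TLS, user authentication, audit logging and proper key management.

```python
import json
import requests                          # third-party HTTP client
from cryptography.fernet import Fernet   # symmetric authenticated encryption

# In practice the key would be securely provisioned to both device and
# server; it is generated here only for illustration.
key = Fernet.generate_key()
cipher = Fernet(key)

# De-identified result payload (fields are illustrative, not a standard).
result = {"device_id": "GZ-0042", "assay": "MRSA-mecA",
          "time_to_positive_min": 12.3, "call": "positive"}

token = cipher.encrypt(json.dumps(result).encode("utf-8"))

# Hypothetical endpoint; a real system would also enforce TLS and
# authenticated sessions, plus server-side audit logging.
requests.post("https://example-lab-server/api/results",
              data=token,
              headers={"Content-Type": "application/octet-stream"},
              timeout=10)
```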
The impact of the resulting products and devices is expected to be felt in both communicable and non-communicable diseases. Qualcomm Life provides a platform (2net) that could be used for many different applications. According to the company, “The 2net platform is designed as an end-to-end, technology-agnostic cloud-based service that interconnects medical devices so that information is easily accessible by device users and their healthcare providers and caregivers” (http://www.qualcommlife.com/). Although the famous medical scanner, or tricorder, of Star Trek fame is not yet possible, the recently announced $10 million prize from the X PRIZE Foundation, sponsored by Qualcomm Life, for developing a tricorder that can diagnose a set of 15 diseases without the intervention of a physician and weighs less than 2.3 kg, is not too far from reality. In ten years, we should expect nothing less than a POC platform capable of sequencing-based diagnostics at an assay cost of less than a dollar.
References
1. Craw P, Balachandran W. Isothermal nucleic acid amplification technologies for point-of-care diagnostics: a critical review. Lab Chip 2012; 12: 2469–2486.
2. Stedtfeld RD, Tourlousse DM, Seyrig G, Stedtfeld TM, Kronlein M, Price S, Ahmad F, Erdogan G, Tiedje JM, Hashsham SA. Gene-Z: a device for point of care genetic testing using a smartphone. Lab Chip 2012; 12: 1454–1462.
3. Martinez AW, Phillips ST, Whitesides GM, Carrilho E. Diagnostics for the developing world: microfluidic paper-based analytical devices. Anal Chem 2010; 82: 3–10.
4. Draft Guidance for Industry and Food and Drug Administration Staff – Mobile Medical Applications. July 21, 2011. www.fda.gov/medicaldevices/deviceregulationandguidance/guidancedocuments/ucm263280.htm.
The authors
Robert Stedtfeld PhD, Maggie Kronlein and
Syed Hashsham, PhD*
Civil and Environmental Engineering
1449 Engineering Research Court Rm A127
Michigan State University
East Lansing, MI 48824, USA
*Corresponding author:
E-mail: hashsham@egr.msu.edu