The activated partial thromboplastin time coagulation assay is one of the most frequently performed tests in hematology and has a variety of uses in clinical practice. Accurate interpretation of the test depends on both the clinical context (i.e. why the test was ordered) and an understanding of each laboratory’s normal reference range and assay sensitivity to factor deficiencies, (unfractionated) heparin therapy and lupus anticoagulant.
by Dr Julianne Falconer and Dr Emmanuel J. Favaloro
Introduction
The activated partial thromboplastin time (APTT) assay is a commonly requested coagulation test, perhaps second only to the prothrombin time (PT)/international normalized ratio (INR), which is used to monitor vitamin K antagonist (VKA) therapy such as warfarin. The APTT assesses the intrinsic pathway of coagulation and has a variety of clinical uses; however, it is primarily used to screen for hemostasis disorders, factor deficiencies and lupus anticoagulant (LA), or to monitor unfractionated heparin (UFH) therapy dosing. The test is sensitive to, but not specific for, these abnormalities or influences. APTT prolongation may also be seen in liver disease, disseminated intravascular coagulation (DIC) and in the presence of factor inhibitors. Interpretation of an APTT result, be it normal or prolonged, depends on both the clinical context and the characteristics of the reagents and the assay as performed on a particular instrument. Establishing normal reference intervals (NRIs) and assessing the assay’s sensitivity to heparin, LA and clotting factors are important to provide accurate information for clinical interpretation [1].
Uses of the APTT assay
The APTT test is a global assay that measures the time to fibrin clot formation via the contact factor (‘intrinsic’) pathway (Fig. 1). The APTT test is usually performed on fully automated platforms, and involves activation of coagulation within the test (plasma) sample by the addition of specific reagents (containing phospholipids, contact factor activator and calcium chloride). The type of contact factor activator, and the type and concentration of phospholipid, used in the APTT reagent affects the sensitivity of the assay to, and thus its prolongation by, factor deficiencies, as well as to the presence of UFH and LA [1, 2].
The APTT is commonly used to monitor anticoagulation therapy with UFH (Table 1). It may also be prolonged, however, in the presence of VKAs including warfarin, as well as direct oral anticoagulants (DOACs) such as dabigatran (a direct thrombin inhibitor) and rivaroxaban (an anti-FXa inhibitor). The APTT is generally less sensitive to, but may still be slightly prolonged by, anticoagulation with low molecular weight heparin (LMWH) and with apixaban, another DOAC (anti-FXa inhibitor).
In the absence of anticoagulation therapy, an ‘isolated’ prolonged APTT may indicate a clinically important factor deficiency, for example as a screen for hemophilia A (FVIII deficiency), hemophilia B (FIX deficiency) or hemophilia C (FXI deficiency), or even von Willebrand disease (VWD; which may be associated with loss of FVIII) [1]. An ‘isolated’ prolonged APTT, however, could instead reflect a clinically unimportant factor deficiency, such as FXII or another contact factor deficiency. Other causes of an ‘isolated’ prolonged APTT include a factor inhibitor or LA. Despite causing prolongation of the APTT in vitro, LA may be associated clinically with an increased risk of thrombosis rather than bleeding. A prolonged APTT may be accompanied by a prolonged PT in the context of liver disease, DIC or fibrinogen (or other ‘common pathway’ factor) deficiency/ies. Clinical context, therefore, must form the basis for accurate interpretation of the APTT, be it normal or prolonged, and together with other routine coagulation studies is essential to guide further investigations (Fig. 2).
A large number of commercial APTT reagents are now available, with wide variation in the type of contact factor activator and in the phospholipid source and concentration used. This results in variation in sensitivity to all the typical influences, and thus also substantial variation in NRIs between APTT reagents, requiring the establishment and verification of NRIs based on both the reagent and the instrument in use. Unawareness of this variation in APTT reagent sensitivity, in the context of the clinical picture, may lead to flawed clinical interpretation of results.
Establishment and verification of NRIs
A minimum of 20 normal individuals may be sufficient to establish a NRI for PT and APTT, according to guidance documents provided by the Clinical and Laboratory Standards Institute (CLSI) [3, 4]. However, a larger number of normal individuals is recommended to establish an initial NRI, following which a smaller sample of normal individuals may be used for future verification purposes [1].
As an example, Figure 3 shows an initial (historical) NRI estimation for APTT testing using a dataset of nearly 80 normal individuals. This included one outlier sample result (Fig. 3a), which was removed to produce the cleaner dataset used for the subsequent NRI. A statistical normality test showed the distribution to be near Gaussian, allowing parametric statistical assessment. For APTT testing, the NRI aims to capture the central 95 % of values, approximated by the mean ± 2 standard deviations (SD) (Fig. 3b). Logarithmic transformation can instead be used to normalize test data that are non-parametric but fit a log distribution (e.g. Fig. 3c).
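As an illustration of this workflow, the following Python sketch derives a parametric NRI (mean ± 2 SD) after a simple outlier screen. All values are invented for illustration; a real derivation would use a larger dataset and a formal normality test.

```python
import statistics

# Hypothetical APTT results (seconds) from normal donors; values are illustrative only.
aptt = [29.1, 31.5, 30.2, 33.8, 28.4, 32.0, 30.9, 34.6, 29.8, 31.2,
        27.9, 33.1, 30.5, 32.7, 29.3, 31.8, 28.8, 34.0, 30.0, 45.2]  # last value: outlier

def reference_interval(values, z=2.0):
    """Parametric NRI as mean +/- z*SD (approximates central 95% when z=2)."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return mean - z * sd, mean + z * sd

# Simple outlier screen: drop points more than 3 SD from the raw mean, then recompute.
raw_mean = statistics.mean(aptt)
raw_sd = statistics.stdev(aptt)
cleaned = [x for x in aptt if abs(x - raw_mean) <= 3 * raw_sd]

low, high = reference_interval(cleaned)
print(f"NRI: {low:.1f}-{high:.1f} sec (n={len(cleaned)})")
```

With these invented values, the single high outlier is excluded and the resulting interval is close to the 27–38 sec range quoted for the historical dataset in Figure 3.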
If a NRI has been previously established by the laboratory or by the manufacturer of the APTT reagent using a specific reagent/instrument combination, the laboratory could use a process of transference to verify the ‘established’ NRI as fit for purpose. This may be done by establishing that a majority of samples in a small set of normal donors give values within the established NRI (e.g. >18 out of a set of 20 normal samples). Samples obtained from normal individuals or a dataset of normal patient test results may be used to assess a new lot of reagent to establish whether an existing NRI can be maintained when changing reagent lots.
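The transference check described above reduces to a simple count, sketched here in Python with invented donor values and an assumed established NRI of 27–38 sec.

```python
def verify_nri(results, low, high, n_required=18):
    """Transference check: the NRI is verified if at least n_required of the
    normal-donor results fall within the established interval."""
    within = sum(low <= x <= high for x in results)
    return within, within >= n_required

# Hypothetical verification set of 20 normal-donor APTT results (seconds).
donors = [28.5, 30.1, 31.7, 29.4, 33.2, 27.8, 30.9, 32.5, 29.9, 31.1,
          28.2, 34.1, 30.4, 32.9, 29.1, 31.6, 27.5, 33.8, 30.7, 39.0]
within, ok = verify_nri(donors, low=27.0, high=38.0)
print(within, ok)  # 19 of 20 fall within the assumed NRI -> verified
```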
Factor (deficiency) sensitivity
Factor sensitivity of an APTT assay (representing a specific reagent/instrument combination) can be assessed in a number of ways. One method involves serial dilution of either in-house or commercially derived normal plasma into single-factor-deficient plasma, in order to generate a series of aliquots with decreasing factor levels. These samples are then tested by APTT and for factor level. The APTT reagent is regarded as sensitive down to the factor level that corresponds to the upper limit of the NRI.
A more accurate process, though particularly difficult to perform outside of a hemophilia centre, is to establish APTT values from true patients with various known factor levels [1, 2] (e.g. Fig. 4).
As a general guide, if the APTT is used for screening factor deficiencies, then the patient APTT value should be above the NRI when their factor level is below around 30–40 U/dL for FVIII, FIX, and FXI.
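As a sketch of the dilution-series approach, the following Python snippet interpolates the factor level at which the APTT crosses an assumed upper NRI limit of 38 sec. The dilution data and the limit are hypothetical.

```python
# Hypothetical dilution series: factor VIII level (U/dL) vs measured APTT (sec).
# The reagent is considered sensitive down to the factor level at which the
# APTT crosses the upper limit of the NRI (here assumed to be 38 sec).
series = [(100, 30.0), (50, 33.5), (40, 35.8), (30, 38.9), (20, 43.0), (10, 52.0)]
UPPER_NRI = 38.0

def factor_sensitivity(series, upper_limit):
    """Linear interpolation of the factor level at which the APTT equals the
    upper NRI limit, scanning from highest to lowest factor level."""
    pts = sorted(series, reverse=True)  # highest factor level first
    for (f1, t1), (f2, t2) in zip(pts, pts[1:]):
        if t1 <= upper_limit <= t2:  # crossing occurs within this segment
            frac = (upper_limit - t1) / (t2 - t1)
            return f1 + frac * (f2 - f1)
    return None

print(f"{factor_sensitivity(series, UPPER_NRI):.0f} U/dL")
```

For these invented points the crossing falls at roughly 33 U/dL, consistent with the 30–40 U/dL guide quoted above.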
Sensitivity of APTT to UFH
Despite the changing landscape of anticoagulation therapy with the addition of direct anti-Xa inhibitors (rivaroxaban and apixaban) and a direct thrombin inhibitor (dabigatran) [5, 6], both LMWH and UFH continue to be frequently used in clinical practice. In turn, the APTT continues to be the generally preferred method of UFH monitoring over anti-FXa assays, given the wide availability and relatively low cost of the assay. However, unlike the calibrated anti-FXa assay, APTT results are subject to variation between different instruments, whether based on optical or mechanical clot detection methods [7], between different APTT reagents (including variation between different lots of the same reagent type) and between the algorithms used by instruments for raw data processing. This poses a substantial problem with regard to historical recommendations to maintain patients on UFH at between 1.5 and 2.5 times the ‘normal reference value’ (as based on limited evidence [8]). Therapeutic ranges should therefore be defined with specific reference to the instrument/reagent combination used locally [9].
One ‘spiking’ method involves testing samples containing known quantities of UFH diluted into normal pool plasma, which are then tested by APTT and anti-FXa methods, allowing an estimation of the APTT therapeutic interval [1]. However, variation in certain components of patient plasma, as well as the non-physiologically processed nature of the UFH used, can affect the interpretation of data obtained with this method. A better method involves ex vivo assessment of plasma obtained from patients on UFH therapy, testing these samples by both APTT and anti-FXa, and then establishing a UFH therapeutic range for the APTT that matches the therapeutic range for anti-FXa (e.g. 0.3–0.7 U/mL). It is important to recognize that individual response to UFH according to the APTT is affected by many influences, including (but not limited to): antithrombin level; high or low levels of coagulation factors and proteins, such as von Willebrand factor or proteins released from endothelial cells or platelets, competing with antithrombin for heparin binding; increased FVIII levels in the acute phase response; reduced FXII; and the presence of LA.
To obtain a cleaner data set to establish UFH therapeutic ranges, the following steps can be undertaken during sample collection and processing [1].
• Ensure baseline PT, APTT and INR testing prior to commencement of UFH are within their NRIs.
• Exclude underfilled samples and samples with visible hemolysis or likely platelet activation, with release of the heparin-neutralizing platelet factor 4 (PF4).
• Exclude samples containing LMWH or other anticoagulants (e.g. VKAs, DOACs).
• Adhere to manufacturer guidelines with regards to the window from time of blood collection to testing.
• Double centrifuge samples when freezing them for batch testing (to remove residual platelets, which release PF4 and phospholipids on thawing).
• Accumulate data over a suitable time period to account for day-to-day test result variability.
• Aim for 30 or more data points.
• Appropriately dilute samples with anti-Xa activity above the test’s linearity limit.
• Remove data points reflecting ‘gross’ outliers.
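Once a cleaned ex vivo data set is available, the APTT therapeutic range can be read off a line fitted through the paired APTT/anti-FXa results. The Python sketch below uses invented paired values and maps the anti-FXa range of 0.3–0.7 U/mL onto the APTT scale.

```python
# Hypothetical paired ex vivo results from patients on UFH:
# (anti-FXa level in U/mL, APTT in sec). Illustrative values only.
pairs = [(0.1, 38), (0.2, 46), (0.3, 55), (0.35, 60), (0.4, 64),
         (0.5, 72), (0.55, 78), (0.6, 83), (0.7, 92), (0.8, 101)]

def fit_line(pairs):
    """Ordinary least-squares fit of APTT on anti-FXa level."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    sxy = sum((x - mx) * (y - my) for x, y in pairs)
    sxx = sum((x - mx) ** 2 for x, _ in pairs)
    slope = sxy / sxx
    return slope, my - slope * mx

slope, intercept = fit_line(pairs)
# APTT therapeutic range corresponding to anti-FXa 0.3-0.7 U/mL
lo = slope * 0.3 + intercept
hi = slope * 0.7 + intercept
print(f"APTT therapeutic range: {lo:.0f}-{hi:.0f} sec")
```

In practice a laboratory would use its own cleaned patient data, and may prefer a non-linear or rank-based fit if the APTT/anti-FXa relationship is clearly curved.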
LA sensitivity
The LA sensitivity of a particular APTT reagent can be assessed by testing samples containing LA with different reagents and comparing the results, for example the mean clotting times obtained with each reagent.
Given that the APTT is a phospholipid-dependent assay, the test may be susceptible to prolongation in the presence of LA. However, differences in the phospholipid type and concentration between APTT reagents account for wide variation seen in the degree of prolongation of APTT, including due to LA. The LA sensitivity of the APTT reagent also has bearing on the use of APTT to monitor UFH and must inform the establishment of an algorithm to further investigate unexpectedly prolonged APTTs.
In one empirical method, an LA-sensitive assay (e.g. dilute Russell viper venom time; dRVVT) is first used to assemble a set of LA-positive samples of various ‘strengths’. Different APTT reagents can then be used to test the samples, and the data for each sample can be plotted against the upper reference limit of the APTT for each reagent [1]. The ratio of the clotting time of each LA-positive sample (of varying strength) to the mean normal APTT derived from normal plasma samples is calculated. The median of these ratios allows different reagents to be ranked according to LA sensitivity, making clear which APTT reagents are most (versus least) sensitive to LA. Reagents can then be selected according to laboratory needs. For example, a laboratory may prefer an APTT reagent that is relatively LA ‘insensitive’, combined with good factor VIII/IX/XI and UFH sensitivity, if a general-purpose APTT screening reagent is desired (i.e. a hospital laboratory monitoring UFH, but wishing to avoid LA detection in asymptomatic patients). Alternatively, a laboratory may select a pair of LA-sensitive and LA-insensitive APTT reagents if it wishes to assess for LA in symptomatic (thrombosis and/or pregnancy morbidity) patients.
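The ratio-and-median ranking described above can be sketched as follows; the reagent names, mean normal APTTs and LA-positive clotting times are all hypothetical.

```python
import statistics

# Hypothetical clotting times (sec) for a panel of LA-positive samples,
# measured with two APTT reagents, plus each reagent's mean normal APTT.
panel = {
    "Reagent A": {"mean_normal": 30.0, "la_samples": [42.0, 55.0, 48.0, 61.0, 45.0]},
    "Reagent B": {"mean_normal": 32.0, "la_samples": [35.0, 40.0, 37.0, 44.0, 36.0]},
}

def la_sensitivity(mean_normal, la_samples):
    """Median ratio of LA-positive clotting time to mean normal APTT;
    higher values indicate greater LA sensitivity."""
    return statistics.median(t / mean_normal for t in la_samples)

# Rank reagents from most to least LA sensitive.
ranking = sorted(panel, key=lambda r: la_sensitivity(**panel[r]), reverse=True)
for name in ranking:
    print(name, round(la_sensitivity(**panel[name]), 2))
```

With these invented values, Reagent A ranks as the more LA-sensitive reagent, which under the scheme above might make it the choice for LA assessment rather than for general-purpose screening.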
Conclusion
Interpretation of a normal or a prolonged APTT must take into account both clinical context, including presence of anticoagulant therapy, as well as the methods and reagents used by the laboratory. The sensitivity of a particular APTT reagent to detect UFH therapy, LA and factor deficiencies has significant bearing on diagnostic assessment and therapy monitoring, and thus reflects essential knowledge for laboratory and clinical staff alike.
Figure 1. The activated partial thromboplastin time (APTT) assay measures the clot time to formation of fibrin via the contact factor pathway and is dependent on contact factors (FXII and above), and then FXI, FIX, FVIII, FX, FV, and FII. The APTT is also affected by vitamin K antagonists (VKAs; ‘W’), but more importantly is used to monitor unfractionated heparin (UFH; ‘H’) therapy and also to assess for potential hemophilia (FVIII, FIX or FXI deficiency). The APTT is also sensitive to the presence of other anticoagulants, including direct oral anticoagulants (DOACs) such as dabigatran (‘D’) and rivaroxaban (‘R’), and potentially also apixaban (‘A’) for some reagents. The APTT may also be utilized as part of a panel of tests to help assess for lupus anticoagulant (LA). (Modified from Favaloro EJ, et al. How to optimize activated partial thromboplastin time (APTT) testing: solutions to establishing and verifying normal reference intervals and assessing APTT reagents for sensitivity to heparin, lupus anticoagulant, and clotting factors. Semin Thromb Hemost 2019; 45: 22–35 [1].)
Figure 2. An algorithm that provides one recommended approach for the follow-up of an abnormal APTT. Always exclude an anticoagulant effect first – there is no point investigating a prolonged APTT associated with anticoagulant use. Then consider the patient’s history, or the clinical reason for the test order, both of which assist in terms of follow-up approach. APTT, activated partial thromboplastin time; FBC/CBC, full blood count (UK/Australia)/complete blood count (USA); DIC, disseminated intravascular coagulation; DOAC, direct oral anticoagulant; EDTA, ethylenediaminetetraacetic acid; F, factor; LA, lupus anticoagulant; PT, prothrombin time. (Modified from Favaloro EJ, et al. How to optimize activated partial thromboplastin time (APTT) testing: solutions to establishing and verifying normal reference intervals and assessing APTT reagents for sensitivity to heparin, lupus anticoagulant, and clotting factors. Semin Thromb Hemost 2019; 45: 22–35 [1].)
Table 1. The APTT test. A multipurpose and sensitive assay, but not specific for any individual parameter. List is not meant to be all inclusive.
DOACs, direct oral anticoagulants; VWD, von Willebrand disease.
*PT should also be prolonged if APTT is prolonged in the indicated setting.
(Modified from Favaloro EJ, et al. How to optimize activated partial thromboplastin time (APTT) testing: solutions to establishing and verifying normal reference intervals and assessing APTT reagents for sensitivity to heparin, lupus anticoagulant, and clotting factors. Semin Thromb Hemost 2019; 45: 22–35 [1].)
Figure 3. Historical data from our laboratory to illustrate the process of deriving a normal reference interval (NRI) for the APTT, using nearly 80 normal individual plasma samples. (a) APTT of all samples tested shown as a dot plot; one clear outlier shown as a red asterisk. (b) Data cleaned of outliers [i.e. in this case the single red asterisk sample in (a)]. (c) NRI estimate as mean ± 2 standard deviations (SDs) to provide approximate 95 % coverage. Bar graphs of parametric data processing and log-transformed data processing shown. The NRI for this data set approximates 27–38 sec. (Modified from Favaloro EJ, et al. How to optimize activated partial thromboplastin time (APTT) testing: solutions to establishing and verifying normal reference intervals and assessing APTT reagents for sensitivity to heparin, lupus anticoagulant, and clotting factors. Semin Thromb Hemost 2019; 45: 22–35 [1].)
Figure 4. Ex vivo heparin versus APTT evaluation. (a) Samples from all patients identified to be on heparin (as identified by our laboratory information system) and for which an APTT was performed at the time of evaluation are also tested for anti-FXa level. The APTT therapeutic range is that corresponding to a heparin level of 0.3–0.7 U/mL by anti-Xa. However, many data points in this figure do not reflect UFH alone. Some points may instead reflect low molecular weight heparin (e.g. likely to be the sample yielding an anti-Xa value close to 0.7 U/mL but with normal APTT) or alternatively UFH co-incident to FXII deficiency or LA, or else patients potentially transitioning from UFH to VKAs. These data points can be removed to yield a ‘cleaner’ data set, as shown in (b). (Modified from Favaloro EJ, et al. How to optimize activated partial thromboplastin time (APTT) testing: solutions to establishing and verifying normal reference intervals and assessing APTT reagents for sensitivity to heparin, lupus anticoagulant, and clotting factors. Semin Thromb Hemost 2019; 45: 22–35 [1].)
Disclaimer: The views expressed in this paper are those of the authors, and are not necessarily those of NSW Health Pathology.
References
1. Favaloro EJ, Kershaw G, Mohammed S, Lippi G. How to optimize activated partial thromboplastin time (APTT) testing: solutions to establishing and verifying normal reference intervals and assessing APTT reagents for sensitivity to heparin, lupus anticoagulant, and clotting factors. Semin Thromb Hemost 2019; 45: 22–35.
2. Kershaw G. Performance of activated partial thromboplastin time (APTT): determining reagent sensitivity to factor deficiencies, heparin, and lupus anticoagulants. Methods Mol Biol 2017; 1646: 75–83.
3. Defining, establishing, and verifying reference intervals in the clinical laboratory; approved guideline—third edition. CLSI document C28-A3. Clinical and Laboratory Standards Institute (CLSI) 2008.
4. One-Stage Prothrombin time (PT) test and activated partial thromboplastin time (APTT) test; approved guideline—second edition. CLSI document H47-A2. CLSI 2008.
5. Favaloro EJ, McCaughan GJ, Mohammed S, Pasalic L. Anticoagulation therapy in Australia. Ann Blood 2018; 3: 48.
6. Lippi G, Mattiuzzi C, Adcock D, Favaloro EJ. Oral anticoagulants around the world: an updated state-of-the-art analysis. Ann Blood 2018; 3: 49.
7. Favaloro EJ, Lippi G. Recent advances in mainstream hemostasis diagnostics and coagulation testing. Semin Thromb Hemost. 2019; 45(3): 228–246.
8. Baluwala I, Favaloro EJ, Pasalic L. Therapeutic monitoring of unfractionated heparin – trials and tribulations. Expert Rev Hematol 2017; 10(7): 595–605.
9. Marlar RA, Clement B, Gausman J. Activated partial thromboplastin time monitoring of unfractionated heparin therapy: issues and recommendations. Semin Thromb Hemost 2017; 43(3): 253–260.
The authors
Julianne Falconer1 MBBS and Emmanuel J. Favaloro*1,2 PhD, FFSc (RCPA)
1Haematology, Institute of Clinical Pathology and Medical Research (ICPMR), NSW Health Pathology, Westmead Hospital, NSW, Australia.
2Sydney Centres for Thrombosis and Hemostasis, Westmead Hospital
*Corresponding author
E-mail: Emmanuel.Favaloro@health.nsw.gov.au
The introduction of cardiac troponin (cTn) assays has helped improve the triage of chest-pain patients. Evolution from relatively insensitive cTn assays to high-sensitivity assays has necessitated evolving testing approaches to optimize clinical utility. The latest generation (high-sensitivity cTn) assays support rapid diagnostic protocols, aiding earlier discharge of a significant percentage of non-AMI patients as well as faster admission of those with AMI. The current fourth universal definition of AMI emphasizes that cTn can be elevated in many non-ischemic etiologies. To facilitate differentiation of an AMI, the guidelines define a rising or falling pattern of cTn assessed over time, in conjunction with other clinical information and risk assessment. The choice of clinical cutoffs and change values (deltas) can be confounding, as cTn assays are not standardized. Testing algorithms such as a 0-3h protocol, compared with the rapid pathways supported by high-sensitivity assays (0-2h or 0-1h), also need to be taken into consideration. The use of the 99th percentile for cTn has been recommended since the first universal definition of AMI and continues to be recommended for 0-3h protocols, along with gender-specific cutoffs. For rapid diagnostic protocols, values well below the 99th percentile, along with time-dependent deltas, must be used. The sensitivity and precision offered by high-sensitivity assays are essential for rapid protocols in order to more accurately differentiate clinically significant change from assay imprecision. Rapid protocols identified for two recently available high-sensitivity cTnI assays (High-Sensitivity Troponin I assays from Siemens Healthineers) will be reviewed, including performance in a 0-1h algorithm.
by Laurent Samson, PharmD and Katherine Soreng, PhD
Chest pain patients and AMI assessment
Patients with a chief complaint of “chest pain” suggestive of acute myocardial infarction (AMI) represent one of the most common ED presentations. As highly effective but time-dependent interventions for AMI exist, these patients are typically prioritized for assessment. While a diagnostic ECG can rapidly identify an ST-segment elevation myocardial infarction (STEMI), only a small percentage of patients have definitive ECG results. A larger percentage of patients with AMI lack clear ECG evidence but are experiencing a non-ST-segment elevation myocardial infarction (NSTEMI) and benefit from intervention. Both STEMI and NSTEMI fall into the category of acute coronary syndrome (ACS). Most chest-pain patients have pain unrelated to ACS. The challenge in busy emergency departments (EDs) is to rapidly distinguish STEMI and NSTEMI patients from those who can be safely discharged or evaluated for alternate etiologies. To aid diagnostic stratification, guidelines recommend serial biomarker testing with cardiac troponin I or T (cTnI, cTnT), with a rising/falling pattern indicative of evolving injury. High-sensitivity troponin testing, in conjunction with other clinical findings and risk assessment, supports the differentiation of non-AMI patients from those experiencing cardiac ischemia.1
Evolving testing guidance is linked to cTn assay performance
In 2000, an expert consensus panel (the First Global MI Task Force) published a new AMI definition, which designated that cardiac necrosis in the setting of myocardial ischemia be labeled as AMI. Recognizing the specificity of cTn, the authors adopted the 99th percentile of cTn in a healthy reference population as the diagnostic threshold. An AMI was characterized by a rise and/or fall in values with at least one value above the decision level, along with a strong pre-test likelihood. This redefinition, to a value just above that found in a normal, healthy population, dramatically increased AMI detection and improved clinical confidence for exclusion. As cTn assays were (and are) not standardized (and cTnI is a different molecule from cTnT), the adoption of the 99th percentile rather than a “shared” numeric diagnostic cut-point was, and remains, necessary.
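For illustration, a nonparametric 99th percentile can be computed with a simple nearest-rank rule; the reference values below are invented and the cohort is far smaller than those used in real 99th-percentile studies.

```python
import math

def percentile(values, p):
    """Simple nearest-rank percentile (0 < p <= 100)."""
    s = sorted(values)
    rank = math.ceil(p / 100 * len(s))
    return s[rank - 1]

# Hypothetical hs-cTn values (ng/L) from a small healthy reference group.
ref = [1.2, 1.5, 1.8, 2.0, 2.3, 2.7, 3.0, 3.4, 3.9, 4.5,
       5.1, 5.8, 6.6, 7.5, 8.5, 9.6, 10.9, 12.3, 13.9, 15.7]
print(percentile(ref, 99))  # with n=20, the nearest-rank 99th percentile is the maximum
```

The example also shows why large cohorts matter: with only 20 donors the 99th percentile is simply the highest observed value, so its estimate is dominated by a single individual.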
With adoption of the 99th percentile, low-end accuracy became crucial to better differentiate a true cTn elevation from assay imprecision. A precision criterion of <10% CV at the 99th percentile (upper reference limit, URL) was designated. While no assays available in 2000 could meet this definition for both sensitivity and precision, some manufacturers achieved approval of “guideline-compliant” or “contemporary sensitive” assays in subsequent years. As assay performance continued to improve and additional data were published, recommendations evolved. In 2007, an update to the universal definition expanded the MI definition into five MI subcategories, each with associated cTn values. The 99th percentile threshold continued to be recommended for types 1 and 2 MI (typically occurring in patients presenting to the ED with chest pain), while multiples of the URL were designated for MI types 4 and 5.
The need for a changing pattern, with at least one result above the diagnostic threshold in the setting of suspected myocardial ischemia, was emphasized in the guidance. A changing pattern is essential to distinguish an AMI from chronic elevations associated with structural heart disease or alternate etiologies of cardiac damage. To assess change, cTn testing was recommended at 0 and 6–9 hours (with additional testing if AMI suspicion persisted).
In 2011, the ESC guidelines for the management of NSTEMI patients were published. The expert panel recognized the increased availability and improved performance of sensitive assays, and the development of high-sensitivity cTn assays. Given the ability of high-sensitivity assays to detect low levels of cTn with good precision, suggested testing intervals were shortened to 0 and 3–6 hours. In 2012, the updated Third Universal Definition was published, recommending a 3 h rather than 6 h delta assessment if using an hs cTn assay. Similar guidance for a 0-3h protocol was published in the American guidelines in 2014 for the management of NSTE-ACS patients. Also in 2014, the IFCC task force on cardiac biomarkers defined the high-sensitivity troponin criteria and introduced the use of whole numbers for cTn (units of ng/L or pg/mL) to more readily discriminate a changing pattern. Recognizing the mounting data on the good performance of rapid protocols with hs cTn assays, the ESC published new guidelines for the management of NSTEMI patients in 2015. This update included rapid pathways (1 or 2 hours) as an alternative to the classical 0-3h protocol.2 Challenges to rapid testing were recognized, including concerns about misdiagnosing “early presenters” (those appearing in the ED within 3 h of chest-pain onset). With early presenters, a rapid protocol could lack the needed sensitivity. The authors also recognized that in patients with a high pre-test risk of MI, a changing pattern may not be seen, such as in those near the peak of the cTn time-concentration curve or on its downslope.
Current testing guidance: The 2018 fourth Universal Definition of MI
The 2018 Fourth Universal Definition of MI (ESC/ACC/AHA/WHF Expert Consensus Document) elaborates on the use of hs-cTn assays.1 In the 6-year interim, striking progress had been made in the commercial availability of high-sensitivity cTn assays as well as in the validation of these assays in both “standard” (0-3h) and “accelerated” or rapid (0-2h or 0-1h) diagnostic protocols. Differentiating acute ischemia-induced damage from cardiac injury resulting from nonischemic conditions was emphasized, as both can cause elevated cTn levels. The term myocardial injury comprises MI as well as other nonischemic cardiac conditions (such as myocarditis or heart failure) and noncardiac morbidities (such as sepsis or renal disease) associated with elevations of cTn. In the case of MI, injury is acute and characterized by a significant rise and/or fall of cTn with at least one value above the 99th-percentile URL of a healthy reference population. Acute MI is diagnosed if there is evidence of myocardial necrosis (cell death due to injury) in a clinical setting consistent with myocardial ischemia. Chronic elevations are less likely to show significant change, which can aid exclusion of AMI. The Fourth Universal Definition reinforces the value of gender-specific cut-points. As women tend to have lower levels of cTn, the percent detection in a female reference population can be lower, meaning some assays may detect ≥50% of healthy men but not of healthy women. Additional data explored the potential for a single-value rule-out using the assay limit of detection (LoD). Updates included a focus on improved diagnosis of MI types and a discussion of analytic issues for cTn, including that values from one assay cannot be applied to another due to lack of standardization.
High-sensitivity cTn: Impact on testing and patient management
Currently, hs cTn is analytically defined by the ability to detect cTn in ≥50% of a healthy reference population, at values between the LoD and the gender-specific 99th percentile (with a CV <10% at the URL). The 99th percentile continues to be the recommended cut-point if using a 0-3h testing strategy, but with implementation of gender-specific values. As hs assays more accurately detect smaller levels of change, they can also be incorporated into rapid protocols (either 0-1h or 0-2h). Assay precision in these accelerated protocols is critical, as small measures of change below the 99th percentile must be reliably detected; the shorter the interval between tests, the smaller the expected change values. Since hs cTn assays continue to lack standardization, each assay must be independently validated, with clinical decision limits and change values identified. Caution must be exercised depending on the hs assay utilized, as change values are not only assay-specific but can be obfuscated by low-end imprecision and lot-to-lot variation, which can differ significantly among assays at values well below the 99th percentile. These and other considerations for institutions wishing to implement hs cTn testing have recently been published.3,4 Rapid protocols have been proposed to exclude patients for AMI and thus reduce patient burden in the ED. High-risk patients may also be more rapidly identified using hs assays and rapid protocols: patients have a higher likelihood of NSTEMI if the hs-cTn concentration at presentation is at least moderately elevated, or if hs-cTn concentrations show a clear rise within the first hour.
Assay-specific hs cTn: Analytic issues can impact choice of testing algorithm
The lack of standardization among cTn assays remains a challenge, necessitating assessment of the specific assay utilized in a given setting. Hs cTn assays should demonstrate ≥50% gender-specific detection in a healthy reference population. Challenges around what defines a “healthy” population exist, and screening criteria can significantly affect percent detection. Biologic variation can also contribute to divergent values, adding to the uncertainty associated with analytic variation. Any impact on low-end precision or lot-to-lot variation of hs cTn assays can confound clinical assessment when using rapid diagnostic algorithms. While all hs cTn assays meet the precision criterion at the 99th percentile, significant differences among assays exist at the lower cut-points utilized in rapid diagnostic algorithms. It is imperative that both laboratories and clinicians understand the precision of their assay if adopting rapid testing, and do not assume that a low coefficient of variation (CV) extends to the lower cut-points utilized.5,6
Performance of the “classical” (0-3h) pathway with hs cTn assays
The increased sensitivity of hs cTn assays means a greater percentage of chest-pain patients may present with values in excess of the 99th percentile. To help differentiate elevations associated with ischemic injury from alternate causes of cardiac necrosis, a >20% change value has been recommended for patients with initial elevations above the 99th percentile, and a >50% change for values below it.1 Values can typically be obtained from the manufacturer’s package insert or published studies, and the percent change calculated. An example listing the gender-specific 99th percentiles and other assay details for a recently approved hs cTnI assay (Siemens Healthineers Atellica IM High-Sensitivity Troponin I) is shown in Table 1. The analytic performance characteristics of the Atellica IM High-Sensitivity Troponin I assay meet the criteria for an hs cTn assay.
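The 0-3h delta guidance cited above can be expressed as a small decision function. The URL used below is a hypothetical placeholder; real thresholds are assay-specific and must come from the assay's own documentation or validation studies.

```python
def significant_delta(c0, c3, url):
    """Apply the delta guidance for a 0-3h hs-cTn protocol: a >20% change is
    significant when the baseline is above the 99th-percentile URL, and a
    >50% change when the baseline is below it."""
    threshold = 0.20 if c0 > url else 0.50
    delta = abs(c3 - c0) / c0  # relative (percent) change from baseline
    return delta > threshold

URL_NG_L = 45.0  # hypothetical 99th-percentile URL, ng/L
print(significant_delta(60.0, 75.0, URL_NG_L))  # elevated baseline, 25% change -> True
print(significant_delta(10.0, 13.0, URL_NG_L))  # low baseline, 30% change -> False
```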
Performance of hs cTn using rapid strategies
Rapid strategies can include employing either a very low hs cTn level on presentation (<LoD) or the lack of a significant change in hs cTn values over a 1–2-hour period, combined with risk assessment, to exclude AMI. These strategies have also validated a single high-value rule-in for patients with a high index of suspicion for AMI; again, all values are assay-specific, and performance should be established in large, well-validated studies.1,5,6 A single-sample rule-out strategy using a very low value has high sensitivity for myocardial injury and therefore a high negative predictive value (NPV) to exclude MI, though pretest probability should be considered, along with the timing of chest-pain onset. Rapid testing strategies rely on two concepts: first, hs cTn is a quantitative and continuous variable, and the probability of MI increases with increasing values; second, early absolute changes (rather than relative or percent changes) of cTn can be highly predictive of AMI. Importantly, co-morbidities such as end-stage renal disease may require an alteration of the cut-off used, though renal-specific cut-offs have yet to be widely established. Studies designed to identify cut-offs for both traditional and rapid diagnostic algorithms have often excluded patients with renal disease, as well as other co-morbidities that can be associated with cTn elevation.1 The following section reviews published data for an hs cTnI assay (Siemens Healthineers’ Atellica IM High-Sensitivity Troponin I) for use in both a traditional and a rapid diagnostic algorithm. For institutions using an alternate hs cTn assay, similar study guidance data are often available.
Validation studies of the 0-1h algorithm with ADVIA Centaur and Atellica IM High-Sensitivity Troponin I assays
An hs cTnI assay from Siemens Healthineers, available on the ADVIA Centaur and Atellica IM analysers, has been validated in three large AMI studies (one American and two European patient cohorts). The APACE study group (Advantageous Predictors of Acute Coronary Syndrome Evaluation) is an ongoing prospective international multicentre study, with 12 centres in 5 European countries, aiming to advance the early diagnosis of AMI. APACE investigators have validated the performance of several sensitive and high-sensitivity assays.7 For rapid protocols, their approach used a derivation cohort followed by a validation cohort for each assay studied. The results for the ADVIA Centaur High-Sensitivity Troponin I assay in a 0-1h protocol are shown in Fig. 1.
Applying the derived optimal cut-off levels and delta, 46% of patients could be classified as rule-out, with a corresponding NPV of 99.7% and a sensitivity of 99.1% (using a rule-out criterion of either a single determination <3 ng/L, or a 0-1h value <6 ng/L with a delta <3 ng/L, in patients with chest pain onset >3 h earlier). A single-value rule-out of <3 ng/L was applied to early presenters (chest pain <3 h from onset). Conversely, a direct rule-in based on a single ADVIA Centaur hs cTnI concentration (≥120 ng/L) at presentation was feasible in 12% of patients, and a further 6% were identified with a delta of ≥12 ng/L at 0-1h. Overall, the 0-1h algorithm produced a diagnosis after 1 h (either rule-in or rule-out) in 64% of patients. The remaining patients (36%) underwent additional testing and observation; ultimately 11% were ruled in for NSTEMI.
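For illustration only, the cut-offs quoted above can be combined into a single triage function. This is a sketch of the published decision logic, not a clinical tool; the ordering of the checks is a simplification, and all values are specific to the ADVIA Centaur High-Sensitivity Troponin I assay:

```python
def triage_0_1h(c0, c1, hours_since_onset):
    """Sketch of the 0-1h hs cTnI algorithm using the APACE-derived cut-offs
    quoted in the text (all concentrations in ng/L). Illustration only."""
    delta = abs(c1 - c0)
    # Rule-in: high presentation value, or large 0-1h absolute change
    if c0 >= 120 or delta >= 12:
        return "rule-in"
    # Rule-out: a very low single value applies to all presenters;
    # the 0-1h kinetic criterion applies to chest pain onset >3 h earlier
    if c0 < 3:
        return "rule-out"
    if hours_since_onset > 3 and c0 < 6 and delta < 3:
        return "rule-out"
    # Neither criterion met: further testing and observation
    return "observe"
```

Patients returned as "observe" correspond to the 36% in the study who required additional testing before a diagnosis could be made.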
To validate the 0-1h algorithm with the Atellica IM High-Sensitivity Troponin I assay, two additional studies using two different cohorts have been published, one in Scotland (High-STEACS)8 and one in the US (HIGH US)9. The baseline characteristics of the patients admitted to the ED are detailed in Table 2.
Importantly, Table 3 identifies key exclusion criteria differences in the testing populations. Unlike the APACE cohort, the HIGH U.S. study did not exclude renal dialysis patients, so may more closely approximate a “real world” patient testing scenario.
The High-STEACS study in Scotland validated the performance of the Atellica IM High-Sensitivity Troponin I assay in a 0-1h protocol (using the derivation values established for the ADVIA Centaur High-Sensitivity Troponin I assay with the APACE cohort); similar findings were observed in both study populations.8 The Atellica IM High-Sensitivity Troponin I assay was further validated in a US testing population (HIGH US).9 The ADVIA Centaur and Atellica IM High-Sensitivity Troponin I assays share an identical design, differing only in the platform analyser used and the time to result (18 minutes on the ADVIA Centaur system vs. 10 minutes on the Atellica IM analyser). Table 4 shows the comparable clinical performance of both assays using the APACE-derived values. In all three studies, a majority of patients could be excluded or diagnosed for AMI using the 0-1h strategy. Importantly, the NPV for rule-out was >99%, supporting early and safe exclusion for a significant percentage of patients across testing cohorts. The clinical accuracy of the 0-1h early rule-out of NSTEMI found with the APACE and High-STEACS cohorts was similar to that reported for the American cohort, despite the inclusion of patients with significant renal impairment, who tend to have chronic myocardial injury with increased cTn levels.10
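The rule-out metrics compared across these cohorts follow directly from the 2×2 counts of the study. A short sketch, using hypothetical counts chosen purely for illustration (here "negative" means classified as rule-out by the algorithm):

```python
def npv_and_sensitivity(tp, fn, tn):
    """NPV = TN / (TN + FN); sensitivity = TP / (TP + FN).
    tp/fn: MI patients correctly flagged / missed by the algorithm;
    tn: non-MI patients correctly classified as rule-out."""
    npv = tn / (tn + fn)
    sensitivity = tp / (tp + fn)
    return npv, sensitivity

# Hypothetical cohort: 160 MIs in total, 3 missed among 1000 ruled-out patients
npv, sens = npv_and_sensitivity(tp=157, fn=3, tn=997)
# npv == 0.997, i.e. 99.7% of ruled-out patients are truly MI-free
```

A high NPV in a low-prevalence ED population is the key safety metric for a rule-out strategy, which is why the studies above report it alongside sensitivity.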
Atellica IM and ADVIA Centaur High-Sensitivity Troponin I assays: Design features compatible for a fast rule-out strategy
Consistency in performance between the assays is associated with assay design, including the choice of antibodies. Three monoclonal antibodies are employed in the assay: two for capture and one for detection. The two monoclonal capture antibodies target unique cTnI epitopes, are conjugated to streptavidin and are preformed on magnetic latex particles to reduce interference from biotin: specimens containing biotin at concentrations up to 3500 ng/mL demonstrate ≤10% change in results. Detection of captured cTnI is accomplished using a conjugated Lite Reagent consisting of a proprietary acridinium ester and a recombinant anti-human cTnI sheep Fab covalently attached to bovine serum albumin (BSA) for chemiluminescent detection. This unique Fab has been molecularly modified to remove the Fc region, the portion primarily associated with reports of human anti-animal antibody (HAAA, which can include HAMA) and other heterophile sources of interference. A direct relationship exists between the amount of troponin I present in the patient sample and the relative light units (RLUs) detected by the system, producing a quantitative result. Manufacturing processes and reagent stocks have been carefully designed to provide reliable lot-to-lot consistency. Figure 2 shows the reproducibility of the ADVIA Centaur High-Sensitivity Troponin I assay across 6 reagent lots at a cTnI value well below the 99th percentile (where variation would be more likely to impact clinical assessment).
Conclusion
As an alternative to the classical 0-3h protocol, which now includes gender-specific 99th percentiles, a faster rule-out strategy based on a 0-1h algorithm has been validated for the Siemens Healthineers high-sensitivity cTnI assays on the ADVIA Centaur systems and Atellica IM analyser. The analytic performance of these assays supports confidence in results across the measuring range, especially at the low clinical decision cut-points. The ability to rapidly exclude AMI in a large percentage of chest pain patients with a high degree of certainty can help with triage in the ED. The similarly high NPVs observed across hs cTn studies (>99%) support good clinical performance for a safe rule-out using a 0-1h strategy. The accuracy of an early AMI rule-out established by these studies supports worldwide harmonization of these algorithms for well-validated assays.
References:
1) Thygesen K, Alpert JS, Jaffe AS, Chaitman BR, Bax JJ, Morrow DA, White HD; Executive Group on behalf of the Joint European Society of Cardiology (ESC)/American College of Cardiology (ACC)/American Heart Association (AHA)/World Heart Federation (WHF) Task Force for the Universal Definition of Myocardial Infarction. Fourth universal definition of myocardial infarction (2018). Circulation 2018;138(20):e618–e651.
2) Roffi M, et al. 2015 ESC Guidelines for the management of acute coronary syndromes in patients presenting without persistent ST-segment elevation. Eur Heart J 2016;37:267–315.
3) Apple FS, et al. Cardiac troponin assays: guide to understanding analytical characteristics and their impact on clinical care. Clin Chem 2017;63(1):73–81.
4) Januzzi JL Jr, et al. Recommendations for institutions transitioning to high-sensitivity troponin testing: JACC Scientific Expert Panel. J Am Coll Cardiol 2019;73(9):1059–1077.
5) Collinson PO, Saenger AK, Apple FS; on behalf of the IFCC C-CB. High sensitivity, contemporary and point-of-care cardiac troponin assays: educational aids developed by the IFCC Committee on Clinical Application of Cardiac Bio-Markers. Clin Chem Lab Med 2019;57(5):623–632.
6) How does the analytical quality of the high-sensitivity cardiac troponin T assay affect the ESC rule-out algorithm for NSTEMI? Clin Chem 2019;65(3).
7) Boeddinghaus J, et al. Clinical validation of a novel high-sensitivity cardiac troponin I assay for early diagnosis of acute myocardial infarction. Clin Chem 2018;64(9):1347–1360.
8) Chapman AR, et al. Novel high-sensitivity cardiac troponin I assay in patients with suspected acute coronary syndrome. Heart 2018. doi:10.1136/heartjnl-2018-314093.
9) Christenson RH, et al. Trial design for assessing analytical and clinical performance of high sensitivity cardiac troponin I assays in the United States: the HIGH US study. Contemp Clin Trials Commun 2019;14:100337.
10) Novak RM. Performance of a novel high sensitivity cardiac troponin I assay for one hour algorithm for evaluation of NSTEMI in the US population. J Am Coll Cardiol 2019;73(9 Suppl 1).
The authors
Katherine Soreng, PhD, is the Director of Clinical and Scientific Support for Laboratory Diagnostics at Siemens Healthineers (katherine.soreng@siemens-healthineers.com). Laurent Samson, PhD, is the Associate Director for Global Commercial Marketing, Immunoassays, at Siemens Healthineers (laurent.samson@siemens-healthineers.com).
Product availability may vary from country to country and is subject to varying regulatory requirements.
DNA methylation at the cytosine of CpG dinucleotides is a key form of epigenetic regulation of gene expression, and aberrant hypermethylation of the promoter regions of certain genes has been identified in many cancers. The ability to analyse methylation status in non-invasively collected samples (such as saliva, sputum, stool and urine), as well as in circulating tumour (ct)DNA in blood, has led to much interest in methylation status as a potential biomarker for the diagnosis, prognosis and treatment monitoring of cancer. Indeed, a stool-based test for colorectal cancer screening (Cologuard®) that includes aberrant methylation testing of the NDRG4 and BMP3 genes was approved by the Food and Drug Administration in the USA in 2014. However, methylation analysis of specific promoter regions requires technically demanding methods, such as PCR of bisulfite-treated DNA, pyrosequencing, methylation-specific PCR, methyl-BEAMing and genomic sequencing, all of which have limitations of one sort or another as high-throughput screening tools.
Recently, though, a paper by Sina et al. (“Epigenetically reprogrammed methylation landscape drives the DNA self-assembly and serves as a universal cancer biomarker” in Nature Communications 2018; 9(1): 4915) has described how the changes in methylation patterns in cancer genomes have a general effect on the physicochemical properties of DNA, and how this change can be used as a potential universal cancer biomarker. In the transition from normal tissue to malignant neoplasm, the overall genomic methylation pattern changes from one of dispersed methylation to general hypomethylation with increased clustering of methylation at regulatory regions. This change in the ‘methylation landscape’ results in a difference in the solvation properties of the normal and cancer DNA polymers, which in turn affects the affinity of DNA for gold: the more highly aggregated normal DNA exhibits low adsorption to gold, whereas the less aggregated cancer DNA shows high adsorption. The authors have used these properties to create a highly sensitive and specific, non-invasive, quick (≤10 min) colorimetric assay for the detection of cancer that needs only minimal sample preparation and a small DNA input. So far, identification of this ‘Methylscape’ biomarker only indicates the presence of cancer – further work-up is needed to determine the location, type and stage of disease. However, it could serve as an ideal first test to determine whether or not a patient’s symptoms are caused by cancer.
February | March 2025
The leading international magazine on clinical laboratory equipment, for everyone in in vitro diagnostics