Clinical Laboratory int.


Featured Articles


Meet CellaVision DC-1

26 August 2020, in Featured Articles, by 3wmedia

Methodology for finding and interpreting efficient biomarker signals

26 August 2020, in Featured Articles, by 3wmedia

Modern ‘omics’ and screening technologies make possible the analysis of large numbers of proteins with the aim of finding biomarkers for individually tailored diagnosis and prognosis of disease. However, this goal will only be reached if we are also able to sensibly sort through the huge amounts of data that are generated by these techniques. This article discusses how data analysis techniques that have been developed and refined for over a century in the field of psychology may also be applicable and useful for the identification of novel biomarkers.

by Dr J. Michael Menke and Dr Debosree Roy

Introduction
The profession and practice of medicine are rapidly moving towards more specialization, more focused diagnoses and individualized treatments. The result will be called personalized medicine. Presumably genetic predisposition will remain the primary biological basis, but diagnosis and screening will also evolve from complex system outputs, observed as rising or falling levels of biomarkers in human secretions and excretions. In this sense, exploration in the human sciences will undoubtedly expand to new frontiers: interdisciplinary cooperation, new disease reclassifications, and the disappearance of entire scientific professions.

Big data and massive datasets by themselves can never answer our deepest and most troubling questions about mortality and morbidity. After all, data are dumb, and need to be properly coaxed to reveal their secrets [1]. Without theories, our great piles of data remain uninformative. Big data need to be organized for, and subjected to, theory testing or data fitting to best competing theories [2, 3] to avoid spurious significant differences, conceivably the biggest threat to science in history [4, 5].

Old tools for big data

New demands presented by our ubiquitous data require new inferential methods. We may discover that disease is emergent from many factors working together, so that a diagnosis in one person in fact has many different causes in another person with the same diagnosis. Perhaps there are new diseases to be discovered. There might be better early detection and treatment. Much like the earliest forms of life on Earth, pathology is far more complicated than the tidy rise of plant and animal kingdoms taught in mid-twentieth-century evolution courses.

Although new methodologies may meet the scientific requirements of big data, tools already in existence may obviate the need to invent new ones. In particular, methods developed by and for psychologists over more than 100 years may already be an answer. Established data organization and analysis methods have already been developed by psychologists to test theories about nature's most complex systems of life. Inference and prediction from massive amounts of data from multiple sources might yield more from these 'fine scalpels', without the need for brute-force analyses such as tests for statistical differences that look significant, in many cases, because of systematic bias in population data arising from unmeasured heterogeneity. The development of some of the most applicable psychological tools began in the early 20th century for measuring intelligence, skills and abilities; thus, these tools have been used and refined for over a century. From psychological science emerged elegant approaches to data analysis and reduction for evaluating persons and populations in terms of test validity, reliability, sensitivity, specificity, positive and negative predictive values, and efficiency.

Psychological testing and medical screening share a common purpose: to measure the existence and extent of largely invisible or hard-to-measure 'latent' attributes by establishing how various indicators attached to the latent trait react to the presence or absence of subclinical or unseen disease. Biomarkers are thus analogues of test questions, with each biomarker expressing information that helps establish the presence or absence of disease and its stage of progression. The analogous process recommended in this paper is simply this: how many, and what kind of, biomarkers are sufficient to screen for disease?

Biomarkers for whole-person healthcare

Although the use of biomarkers seems to buck the popular trend of promoting whole-person diagnosis and treatment, biomarkers per se are nothing new. Biomarkers as products of human metabolism and waste have played an important role over centuries of disease diagnosis and prognosis, preceding science and often leading to catastrophic or ineffective results (think of 'humours' and 'bloodletting' as examples). Today, blood and urine chemistries are routinely used to focus on a common cause (disease) of a number of symptoms. Blood in the stools, excessive thirst, glucose in urine and the colour of the eye sclera all round out information attributable to a common and familiar cause, crucial for identifying and treating a system or body part. Signs of thirst and frequent urination may be necessary, but not sufficient, for a diagnosis of diabetes mellitus, yet can lead to quick referral or triage. The broad category of physiological signs (biomarkers) has extended, along with technology, to the microscopic and molecular.

Today, the general testing for and collection of biomarkers in bodily fluids is a growing medical research frontier. To many, however, biomarkers can be confused with genes and the epigenetic expression of genes. Attention to these small distinctions might lead to the discovery of new genes, and with them new definitions of disease, more accurate detection and more personal treatment.

With the flood of data unleashed by research in these areas, a new and fundamental problem arises: how do we make sense of all these data? For now, the professions and the public may be putting their faith in 'big data' to make biomarkers clinically meaningful and informative. We are in good company with those who remind us that data are dumb and can be misused to support bias, and that piles of poor-quality data do not add up to good science. At heart, scientific theories need to be tested and scientific knowledge built in supported increments.
Biomarkers as medical tests
As with any medical test, some biomarkers are more accurate, or more closely related to disease presence and absence, and are therefore better indicators of the underlying disease state. Put another way, biomarkers represent 'mini' medical tests, and their contribution to diagnoses and prognoses depends on random factors, along with sensitivity, specificity and disease prevalence [6]. Some biomarkers increase with disease and fall with health; others do the opposite, showing lower concentrations with disease. To complicate matters further, there are probably plenty of mixed signals, i.e. biomarker A is more sensitive than biomarker B, but B is more specific than A. The information acquired from multiple biomarkers therefore needs to be organized and read in a sequence that reduces false signals, positives or negatives, or at least minimizes errors based on the risk of disease, morbidity and mortality.

Thus, managing and analysing the flood of biological diagnostic data is not the concern here, but rather its interpretation and clinical application. Balancing biomarker information at the clinical level is the function of translational research. Test-and-measurement (T&M) psychologists have worked on the science of organizing and interpreting individual items as revealing underlying latent constructs for over a century. Through the extremely tedious task of measuring human intelligence, skills and abilities, some already developed T&M tools could help improve the science, accuracy and interpretation of biomarkers [6].

Psychometric properties of biomarkers

Before embarking on a psychometric approach to biomarker interpretation, some common definitions are required. For instance, what is sensitivity or specificity? A psychometric or medical test shows high sensitivity when a high test result accompanies a high level of the underlying disease or personal characteristic. For intelligence, a high test score implies high intelligence. On a single well-crafted test question, the probability of answering it correctly (formally called the probability of endorsing) increases with higher intelligence; the strength of that association determines whether the question is a strong or weak indicator of intelligence. When many test questions are indicators of intelligence, more correctly endorsed answers to good questions should indicate more intelligence. Indeed, some questions may be 'easier' than others, leading to the need to design questions that fill out the continuum of the underlying intelligence being measured. This procedure is item analysis, a part of item response theory; see Figure 1 for an illustration of how multiple items 'cover' a given theta or disease.

Notice how limited the concept of sensitivity is, on its own, in clinical screening and diagnosis. Sensitivity means that if we already know for sure that someone is smart or has a disease, the test and its questions will correctly describe the latent construct (referred to as 'theta') a certain percentage of the time, based upon the test's ability to detect and describe the presence or degree of the latent trait. Thus, the proportion of the time the question is correct, given that we already know the person's underlying status, is test or item sensitivity. Sensitivity is a test characteristic given that we already know the latent trait, i.e. disease status. Symbolically, sensitivity is p(T+|D+), the probability of a positive test score (T+) given that we already know the person has the disease (D+). Similarly, specificity is p(T−|D−), the probability of a negative test (T−) or item given that we already know the patient is confirmed disease-free (D−).
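As a concrete illustration of these conditional probabilities, sensitivity and specificity can be computed from a 2 × 2 table of test results against a known disease status; the counts below are hypothetical:

```python
# Hypothetical 2x2 counts for one biomarker against a disease gold standard.
tp, fn = 90, 10   # diseased persons (D+): test positive / test negative
fp, tn = 20, 80   # disease-free persons (D-): test positive / test negative

sensitivity = tp / (tp + fn)   # p(T+ | D+)
specificity = tn / (tn + fp)   # p(T- | D-)

print(sensitivity, specificity)
```

Here the biomarker detects 90% of diseased persons and correctly clears 80% of healthy ones; neither figure says anything yet about an individual patient's probability of disease.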

Bayesian induction
Bayes' Theorem is useful for many reasons, some controversial. But combining disease prevalence with biomarker sensitivity and specificity axiomatically gives the probability that an individual has the disease given a positive test.

In Bayesian terms, the positive predictive value (PPV) is the posterior probability of disease in a patient with a positive test. Two important properties of the PPV are: 1. it converts population prevalence into a personal probability of disease based on the person's positive test; and 2. it varies directly with the population prevalence of the disease. One cannot interpret a PPV without starting from the known or estimated population prevalence. PPV decreases with rare disease and increases with common disease, irrespective of the test's sensitivity or specificity estimates. For further details see Figure 3 in the open access article 'More accurate oral cancer screening with fewer salivary biomarkers' by Menke et al. [7].
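The prevalence dependence of the PPV follows directly from Bayes' Theorem and can be sketched in a few lines of Python (the sensitivity, specificity and prevalence values are hypothetical):

```python
def ppv(sens, spec, prev):
    # Bayes' Theorem: p(D+|T+) = sens*prev / (sens*prev + (1-spec)*(1-prev))
    return sens * prev / (sens * prev + (1 - spec) * (1 - prev))

# The same test (90% sensitive, 90% specific) at different prevalences:
for prev in (0.001, 0.01, 0.10, 0.50):
    print(prev, round(ppv(0.9, 0.9, prev), 3))
```

With a rare disease (prevalence 0.1%), fewer than 1% of positive results are true positives; at 50% prevalence the same test yields a PPV of 90%.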

Sensitivity and specificity are characteristics of the test, not of any patient, and such deductive statements are not directly useful in the clinic. In fact, diagnosing and screening require exactly the inverted probability: what is the inferred disease state, D+ or D−, from positive and negative test results? In other words, we want p(D+|T+) instead of p(T+|D+), and p(D−|T−) instead of p(T−|D−). The method for inverting the probabilities from test characteristics to patient characteristics is the application of Bayes' Theorem. This inverted probability is highly influenced by disease prevalence, however, whereas sensitivity and specificity are not.

Role of prevalence in disease detection
Generally, the higher the disease prevalence in a population, the easier the disease is to detect. Fortunately, this coincides with good intuitive sense. When screening for diseases, we need to read the biomarker results diachronically to take advantage of the information added by each biomarker. 'Diachronically' refers to reading over time: in biomarker screening, the fewest biomarkers are required when each result is read in the context of the other biomarkers present. Diachronic refers to the order in which biomarkers are read, not the order in which they are administered.

Biomarkers can be strongly or weakly informative. The indicator of a strong or weak biomarker is its diagnostic likelihood ratio. More explicitly, this is the positive diagnostic likelihood ratio, abbreviated +LR and defined as sensitivity/(1 − specificity). The higher the +LR, the more information a positive result conveys about the presence or absence of disease. The inverted probability that is the objective, p(D+|T+), is called the positive predictive value of a test, PPV.
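In the odds form of Bayes' Theorem, the +LR acts as a multiplier: post-test odds = pre-test odds × +LR. A minimal sketch, with hypothetical sensitivity, specificity and prevalence values:

```python
def positive_lr(sens, spec):
    # Positive diagnostic likelihood ratio: +LR = sensitivity / (1 - specificity)
    return sens / (1 - spec)

sens, spec, prev = 0.8, 0.95, 0.02
lr = positive_lr(sens, spec)        # +LR of 16: a strongly informative marker
pre_odds = prev / (1 - prev)        # prevalence expressed as odds
post_odds = pre_odds * lr           # post-test odds after one positive result
ppv = post_odds / (1 + post_odds)   # back to a probability: the PPV
print(round(lr), round(ppv, 3))
```

The odds form makes the information content of a biomarker explicit: each positive result multiplies the running odds by its +LR.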

Diachronic contextual reading
When biomarkers are used in conjunction [p(D+|T1, T2, T3, …Tn)], their accuracy can be increased, but only if the results are read diachronically. For instance, 'passing along' only positive test findings to the next biomarker amounts to throwing out the true negatives in the sample (and a few false negatives), which raises the effective disease prevalence in the remaining pool and so improves detection of suspected diseased persons. After five to ten of these 'pass-alongs', depending on the original disease prevalence, the PPV can approach 100%, signifying great confidence that disease is present and that further testing and treatment are required. Panels of biomarkers, i.e. multiple biomarkers used as a single screening unit, can also have a PPV. In some cases biomarkers only appear in panels, in which case there is a resultant sensitivity, specificity and PPV for the entire panel.

Biomarkers that are too sensitive might generate too many false positives. This problem can be overcome with one or more additional biomarkers that 'clean out' the false positives: highly specific biomarkers screen out false positives, at the risk of introducing false negatives, a weakness balanced in turn by the sensitive biomarkers. Sensitivity and specificity generally vary inversely for any given biomarker; those high on one attribute tend to be low on the other. Overall, in our previous meta-analytic experience, we found specificity to be the primary attribute for quickly and accurately screening a population.

The exceptional biomarker is high on both test attributes. In most cases, the information from mediocre biomarkers can be improved by combining them into biomarker panels with a combined accuracy stronger than that of any individual biomarker. Once biomarkers are ranked by dLR from high to low, passing along positive test results from highest to lowest, the number of biomarkers required to achieve a PPV near 1.0 is considerably smaller than if the biomarkers are ordered from lowest to highest dLR (Fig. 2).
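The pass-along process can be simulated by letting the posterior from each positive biomarker become the prior for the next. In this hypothetical three-marker panel, reading the high-dLR biomarkers first crosses a high PPV threshold with fewer reads, although the final PPV after all positives is the same either way:

```python
def update_ppv(prior, sens, spec):
    # One diachronic step: Bayes' Theorem with the running prior.
    return sens * prior / (sens * prior + (1 - spec) * (1 - prior))

def trajectory(panel, prevalence):
    # PPV after each successive positive result, read in panel order.
    p, out = prevalence, []
    for sens, spec in panel:
        p = update_ppv(p, sens, spec)
        out.append(round(p, 3))
    return out

# Hypothetical (sensitivity, specificity) pairs; dLR = sens / (1 - spec).
panel = [(0.85, 0.90), (0.60, 0.95), (0.70, 0.99)]   # dLR ~ 8.5, 12, 70
high_first = sorted(panel, key=lambda t: t[0] / (1 - t[1]), reverse=True)

print(trajectory(high_first, 0.01))   # high-dLR first
print(trajectory(panel, 0.01))        # low-dLR first
```

Starting from 1% prevalence, the high-dLR-first ordering reaches a PPV near 0.9 after two reads, whereas the low-first ordering needs all three; over a longer panel, this difference determines how many biomarkers are required.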

Meta-analysis
As you may have inferred by now, the methodology for identifying the best biomarkers is meta-analysis, though a word of caution is needed for diagnostic meta-analyses. There are software packages for the meta-analysis of medical tests; Meta-DiSc is one such tool [8, 9], and material on its development is available [9]. When last checked, the Meta-DiSc program was being revised to correct some estimation errors, and researchers were redirected to a Cochrane Collaboration page [10]. In short, it is important not to add up all cells as if they represent one large study, because this misrepresents study homogeneity and therefore variance.

We recommend a meta-analysis that uses an index of evidential support [11–13]. In so doing, the weighting of data based on sample size alone may be avoided [7].

Partitioning panels with evidential support estimates

Biomarkers may be high on either sensitivity or specificity. Some may be very high on one attribute but not the other; few are high on both. This issue may be overcome by combining the biomarkers of interest into a panel, where the weaknesses of individual members may be averaged out by including biomarkers with complementary strengths: a biomarker with high sensitivity and low specificity may be combined with one of low sensitivity and high specificity. This can be tricky, as an average accuracy might fall along the diagonal of a receiver operating characteristic (ROC) chart, rendering the panel a useless test. The idea is to maximize the area under the ROC curve by 'pulling the curve' up into the upper left corner, creating more area under the curve, which represents diagnostic accuracy. For further details see Figure 2 in the open access article 'More accurate oral cancer screening with fewer salivary biomarkers' by Menke et al. [7].
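The area under the ROC curve can be approximated by the trapezoidal rule over a panel's operating points. A minimal sketch with hypothetical (1 − specificity, sensitivity) points:

```python
def auc(points):
    # Trapezoidal rule over ROC operating points, ordered by 1 - specificity.
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        area += (x1 - x0) * (y0 + y1) / 2
    return area

# (1 - specificity, sensitivity) pairs; (0,0) and (1,1) anchor the curve.
points = [(0.0, 0.0), (0.1, 0.7), (0.3, 0.9), (1.0, 1.0)]
print(round(auc(points), 3))
```

Points that sit toward the upper left 'pull' the curve away from the diagonal (AUC 0.5, a useless test) toward an AUC of 1.0.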

The question is whether the combined accuracy is synergistically greater than that of either biomarker alone or merely the arithmetic average of the two. This conundrum is solved by making sure there are data points in the upper left corner to 'pull up' the ROC curve and maximize the area under it, which translates roughly to diagnostic accuracy. In fact, sensitivity to cancer or any other disease must be inverted to PPV before the biomarker exhibits utility. Somewhat paradoxically, simply using more biomarkers does not increase screening accuracy unless they are read in the diachronic context of the other tests done at the same time (again, refer to Fig. 2).

Should cancer tests detect only binary signals?
From a test and measures perspective, each biomarker is a kind of test question, where the answer to each question is the state of disease in the body. Some questions or biomarkers or biomarker panels are more or less informative because they are more or less sensitive and specific to detecting disease. The answers sought are binary – yes or no. The patient either has a disease or does not. It is up to the properties of the tests to reveal the truth.

As mentioned before, biomarker accuracy varies. No medical test of any kind is 100% accurate. Biomarkers associated with cancer can and do appear at lower levels in healthy individuals. We must understand this principle to decide whether other tests or panels are necessary to improve screening or diagnostic information.

When educational psychologists measure traits and abilities, e.g. IQ, they ask a series of questions. To the degree that the questions are answered ‘correctly’, a person scores higher and has more of the trait or ability to be measured. Creating a survey or questionnaire is a rigorous process. Think of an underlying variable (IQ) as the latent construct. ‘Construct’ is the intended concept we attempt to measure. The construct is not directly measurable, and thus called latent. Each question is a kind of probe that, to various degrees of accuracy, allows indirect observation of the latent construct or disease state. By analogy, biomarkers can be interpreted as test questions indicating the existence of a latent trait or disease.

Pushing the test analogy further, biomarkers might be negatively keyed, i.e. the levels of certain biomarkers are reduced in the presence of disease, or positively keyed with biomarker presence associated with disease. Whereas assessment of traits and abilities measures a continuous scale of latent construct presence, biomarkers answer a simple binary choice: Is the disease present or not?

Biomarker accuracy is estimated by sensitivity and specificity. Test questions are subject to data reduction techniques (factor analysis), checks of internal consistency within factors, and item response theory to identify redundant questions and to design new questions that cover gaps in detecting an underlying disease state.

As we are not basic scientists, but rather behavioural and population scientists, we cannot address the clinical and laboratory aspects of biomarkers; however, in collaboration with colleagues at dental programmes here in Mesa, Arizona, and in Malaysia, we came to understand that some biomarkers are more informative than others in screening for and diagnosing disease.

Unidimensionality, monotonicity, and local independence properties
Test items should obey the conditions of unidimensionality, monotonicity and local independence. Briefly applied to medical tests, biomarkers should be indicative of the same latent construct (presence of disease), and individual biomarkers should increase (be positive for disease) along with the actual presence of disease [14].
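Monotonicity can be illustrated with a logistic item characteristic curve, a standard item response theory form: the probability of a positive response rises steadily with the latent trait theta. A minimal sketch (the difficulty and theta values are hypothetical):

```python
import math

def item_probability(theta, difficulty, discrimination=1.0):
    # Logistic item characteristic curve: p(positive response | theta).
    return 1 / (1 + math.exp(-discrimination * (theta - difficulty)))

# Monotonicity: more of the latent trait (disease) -> higher probability
# of a positive item (biomarker) response.
probs = [round(item_probability(t, difficulty=0.0), 3) for t in (-2, 0, 2)]
print(probs)
```

An item's 'difficulty' locates where on the theta continuum it is most informative, which is what a person–item map displays.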

The application of item response theory to academic test scores will reveal that there are gaps in assessment that miss progress or degree of the latent construct. When graphed on person–item maps, the high-ability persons will score higher on the test – i.e. endorse more items, especially the most difficult ones. The item–person map might show two areas of concern: redundant items that may be removed from the test to make the test more efficient, and abilities that cannot be determined owing to items clustering over small ranges of the latent construct. This is exemplified in Figure 1 in Warholak et al. [15].

As for biomarker disease screening, test or panel gaps may miss a subclinical or early stage disease by not matching the stage with biomarkers that would alert us to that stage of disease. In effect, this would be a blind-spot that more research may be required to fill. On the one hand, for a binary screening outcome – yes or no – gaps are not crucial. On the other hand, the discovery of gaps may lead to better science and better early disease detection.

Generalizability theory

Generalizability theory, or G-Theory, is a tool developed by Lee Cronbach and colleagues at Stanford around 1972 [16]. Without getting into excessive detail, it should suffice here to describe G-Theory as a methodology for identifying sources of error, bias or interference in the statistical modelling of complex systems. As an example of the reason G-Theory was developed in the first place, students are taught by professors within classes, in courses, in schools, states and countries. Each level of this education hierarchy may become a source of variability. If what we want to produce is a similar product in student graduates, such as minimal competency in medicine, we may glean the interference (variability) introduced by various levels or by one specific level. With G-Theory, the primary source of variance may be identified and modified accordingly.
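A toy sketch of the variance-partitioning idea behind G-Theory (not a full generalizability study): with hypothetical student scores nested in three classes, we can ask how much variability each level of the hierarchy contributes.

```python
from statistics import mean, pvariance

# Hypothetical scores for students nested within three classes.
classes = [[70, 72, 74], [80, 82, 84], [90, 92, 94]]

# Between-class component: variance of the class means.
between = pvariance([mean(c) for c in classes])
# Within-class component: average variance inside each class.
within = mean(pvariance(c) for c in classes)

print(round(between, 2), round(within, 2))
```

Here nearly all the variability sits at the class level, so that is the facet to modify first; a real G-study would estimate such components for every facet (rater, occasion, item) of the design.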

In the biomarker analogy, some biomarkers introduce more confusion than they resolve and can be eliminated or modified to improve reliability and consistent accuracy.

Conclusion
Although biomarker research is being funded and undertaken at unprecedented levels, it is important to remember that credible, scientific handling of the data is still the key to understanding and discovery. Big data still need to answer the question 'What does it all mean?' We recommend starting with the highly refined methodology developed for the T&M of human skills, abilities and knowledge. At the very least, T&M science might minimize errors, increase medical test efficiency, and be used to complement or confirm findings in translational research.
References
1. Pearl J, Mackenzie D. The book of why: the new science of cause and effect. Basic Books 2018.
2. Platt JR. Strong inference: certain systematic methods of scientific thinking may produce much more rapid progress than others. Science 1964; 146(3642): 347–352.
3. Chamberlin TC. The method of multiple working hypotheses. Science 1897, reprint 1965; 148: 754–759.
4. Kline RB. Beyond significance testing: reforming data analysis methods in behavioral research. American Psychological Association 2004.
5. Ziliak ST, McCloskey DN. The cult of statistical significance: how the standard error costs us jobs, justice, and lives. The University of Michigan 2011.
6. Kraemer HC. Evaluating medical tests: objectives and quantitative guidelines. Sage Publications 1992.
7. Menke JM, Ahsan MS, Khoo SP. More accurate oral cancer screening with fewer salivary biomarkers. Biomark Cancer 2017; 9: 1179299X17732007 (https://journals.sagepub.com/doi/full/10.1177/1179299X17732007?url_ver=Z39.88-2003&rfr_id=ori%3Arid%3Acrossref.org&rfr_dat=cr_pub%3Dpubmed#articlePermissionsContainer).
8. Zamora J, Abraira V, Muriel A, Khan K, Coomarasamy A. Meta-DiSc: a software for meta-analysis of test accuracy data. BMC Med Res Methodol 2006; 6: 31.
9. Zamora J, Muriel A, Abraira V. Statistical methods: Meta-DiSc ver 1.4. 2006: 1–8 (ftp://ftp.hrc.es/pub/programas/metadisc/MetaDisc_StatisticalMethods.pdf).
10. Cochrane Methods: screening and diagnostic tests 2018  (https://methods.cochrane.org/sdt/welcome).
11. Goodman SN, Royall R. Evidence and scientific research. Am J Public Health 1988; 78(12): 1568–1574.
12. Menke JM. Do manual therapies help low back pain? A comparative effectiveness meta-analysis. Spine (Phila Pa 1976) 2014; 39(7): E463–472.
13. Royall R. Statistical evidence: a likelihood paradigm. Chapman & Hall/CRC 2000.
14. Beck CJ, Menke JM, Figueredo AJ. Validation of a measure of intimate partner abuse (Relationship Behavior Rating Scale-revised) using item response theory analysis. Journal of Divorce and Remarriage 2013; 54(1): 58–77.
15. Warholak TL, Hines LE, Song MC, Gessay A, Menke JM, Sherrill D, Reel S, Murphy JE, Malone DC. Medical, nursing, and pharmacy students’ ability to recognize potential drug-drug interactions: a comparison of healthcare professional students. J Am Acad Nurse Pract 2011; 23(4): 216–221.
16. Shavelson RJ, Webb NM. Generalizability Theory: a primer. Sage Publications 1991.

The authors
J. Michael Menke* DC, PhD, MA; Debosree Roy PhD
A.T. Still Research Institute, A. T. Still University, Mesa, AZ 85206, USA

*Corresponding author
E-mail: jmenke@atsu.edu


Max Generation.

26 August 2020, in Featured Articles, by 3wmedia

Fully automated RPR

26 August 2020, in Featured Articles, by 3wmedia

Activated partial thromboplastin time assay

26 August 2020, in Featured Articles, by 3wmedia

The activated partial thromboplastin time coagulation assay is one of the most frequently performed tests in hematology and has a variety of uses in clinical practice. Accurate interpretation of the test depends both on the clinical context (i.e. why the test was ordered) and on an understanding of each laboratory's normal reference range and of the assay's sensitivity to factor deficiencies, (unfractionated) heparin therapy and lupus anticoagulant.

by Dr Julianne Falconer and Dr Emmanuel J. Favaloro

Introduction
The activated partial thromboplastin time (APTT) assay is a commonly requested coagulation test, perhaps second only to the prothrombin time (PT)/international normalized ratio (INR), as used to monitor vitamin K antagonist (VKA) therapy such as warfarin. The APTT test assesses the intrinsic pathway of coagulation and has a variety of clinical uses; however, it is primarily used to screen for hemostasis issues, factor deficiencies, lupus anticoagulant (LA) or to monitor unfractionated heparin (UFH) therapy dosing. The test is sensitive to, but not specific for, detection of these abnormalities or influences. APTT prolongation may also be seen in liver disease, disseminated intravascular coagulation (DIC) and in the presence of factor inhibitors. Interpretation of an APTT result, be it normal or prolonged, is dependent on both the clinical context and the characteristics of the reagents and the assay as performed on particular instruments. The establishment of normal reference intervals (NRIs) and assessment of the assay in terms of its sensitivity to heparin, LA and clotting factors are important to provide accurate information for clinical interpretation [1].

Uses of the APTT assay
The APTT test is a global assay that measures the time to fibrin clot formation via the contact factor (‘intrinsic’) pathway (Fig. 1). The APTT test is usually performed on fully automated platforms, and involves activation of coagulation within the test (plasma) sample by the addition of specific reagents (containing phospholipids, contact factor activator and calcium chloride). The type of contact factor activator, and the type and concentration of phospholipid, used in the APTT reagent affects the sensitivity of the assay to, and thus its prolongation by, factor deficiencies, as well as to the presence of UFH and LA [1, 2].

The APTT is commonly used to monitor anticoagulation therapy using UFH (Table 1). It may also be prolonged, however, in the presence of VKAs including warfarin, as well as direct oral anticoagulants (DOACs) such as dabigatran (direct thrombin inhibitor) and rivaroxaban (anti-FXa inhibitor). The APTT is generally less sensitive to, but may still be slightly prolonged, by anticoagulation with low molecular weight heparin (LMWH) and with apixaban, another DOAC (anti-FXa inhibitor).

In the absence of anticoagulation therapy, an 'isolated' prolonged APTT may point to a clinically important factor deficiency, for example as a screen for hemophilia A (FVIII deficiency), hemophilia B (FIX deficiency) or hemophilia C (FXI deficiency), or even von Willebrand disease (VWD, which may be associated with loss of FVIII) [1]. An 'isolated' prolonged APTT, however, could instead indicate a clinically unimportant deficiency, such as FXII or another contact factor deficiency. Other explanations for an 'isolated' prolonged APTT include a factor inhibitor or LA. Despite causing prolongation of the APTT in vitro, LA may be associated clinically with an increased risk of thrombosis rather than bleeding. A prolonged APTT may be accompanied by a prolonged PT in the context of liver disease, DIC or a fibrinogen (or other 'common pathway' factor) deficiency. Clinical context, therefore, must form the basis for accurate interpretation of the APTT, be it normal or prolonged, and together with other routine coagulation studies is essential to guide further investigations (Fig. 2).

A large number of commercial APTT reagents is now available, with wide variation in the type of contact factor activator and in the source and concentration of phospholipid. This produces variation in sensitivity to all of the typical influences, and thus also substantial variation in NRIs between APTT reagents, requiring NRIs to be established and verified for the specific reagent and instrument combination in use. Unawareness of this variation in reagent sensitivity, taken together with the clinical picture, will lead to flawed interpretation of results.

Establishment and verification of NRIs
A minimum of 20 normal individuals may be sufficient to establish an NRI for the PT and APTT, according to guidance documents from the Clinical and Laboratory Standards Institute (CLSI) [3, 4]. However, a larger number of normal individuals is recommended to establish an initial NRI, after which a smaller sample of normal individuals may be used for future verification purposes [1].

As an example, Figure 3 shows an initial (historical) NRI estimation for APTT testing using a dataset of nearly 80 normal individuals. This included one outlier result (Fig. 3a), which was removed to produce the cleaner dataset used for the subsequent NRI. A statistical normality test showed the distribution to be near Gaussian, permitting parametric assessment. For APTT testing, the NRI aims to capture the central 95 % of normal values, approximated as the mean ± 2 standard deviations (SD) (Fig. 3b). Logarithmic transformation can instead be used to normalize the data when the distribution is non-Gaussian but fits a log-normal distribution (e.g. Fig. 3c).
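The outlier-removal and mean ± 2 SD steps above can be sketched in a few lines of Python. This is a minimal illustration with hypothetical APTT values, not laboratory software; the 3-SD outlier rule is an assumption made for the sketch.

```python
import statistics

def estimate_nri(aptt_values, outlier_sd=3.0):
    """Estimate a normal reference interval (NRI) as mean +/- 2 SD,
    after excluding gross outliers (here, values more than `outlier_sd`
    standard deviations from the raw mean -- an assumed rule)."""
    mean = statistics.mean(aptt_values)
    sd = statistics.stdev(aptt_values)
    cleaned = [v for v in aptt_values if abs(v - mean) <= outlier_sd * sd]
    mean_c = statistics.mean(cleaned)
    sd_c = statistics.stdev(cleaned)
    return (round(mean_c - 2 * sd_c, 1), round(mean_c + 2 * sd_c, 1))
```

For a clearly log-normal distribution, the same calculation would be applied to the log-transformed values and the limits back-transformed.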

If an NRI has been previously established by the laboratory, or by the manufacturer of the APTT reagent, using a specific reagent/instrument combination, the laboratory can use a process of transference to verify that the ‘established’ NRI is fit for purpose. This is done by showing that the large majority of results from a small set of normal donors fall within the established NRI (e.g. >18 out of a set of 20 normal samples). Samples from normal individuals, or a dataset of normal patient test results, may similarly be used to assess a new lot of reagent and establish whether an existing NRI can be maintained when changing reagent lots.
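A minimal sketch of this transference check, assuming the example threshold above (more than 18 of 20 normal results falling within the established NRI):

```python
def nri_verified(normal_results, nri, min_within=19):
    """Transference check: accept an established NRI if at least
    `min_within` of the normal-donor results fall inside it
    (default 19, i.e. more than 18 of a set of 20)."""
    lower, upper = nri
    within = sum(lower <= v <= upper for v in normal_results)
    return within >= min_within
```

The same check, run on a small panel of normal samples tested with a new reagent lot, can support keeping an existing NRI across lot changes.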

Factor (deficiency) sensitivity
Factor sensitivity of an APTT assay (representing a specific reagent/instrument combination) can be assessed in a number of ways. One method involves serial dilution of in-house or commercially derived normal plasma into single-factor-deficient plasma, to generate a series of aliquots with decreasing factor levels. These samples are then tested by APTT and for factor level. The APTT reagent is regarded as sensitive down to the factor level at which the APTT crosses the upper limit of the NRI.
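Assuming an approximately linear APTT response between adjacent dilution points, the factor level at which the APTT crosses the upper NRI limit can be estimated by interpolation. The data below are hypothetical; this is a sketch, not a validated method.

```python
def factor_sensitivity(dilution_series, upper_nri):
    """Estimate the factor level (U/dL) at which the APTT (s) crosses
    the upper NRI limit, by linear interpolation between the two
    bracketing points of a dilution series of (factor level, APTT)."""
    points = sorted(dilution_series)          # ascending factor level
    for (f_lo, t_lo), (f_hi, t_hi) in zip(points, points[1:]):
        # APTT falls as the factor level rises; find the segment
        # whose APTT values bracket the upper NRI limit
        if t_hi <= upper_nri <= t_lo:
            fraction = (t_lo - upper_nri) / (t_lo - t_hi)
            return f_lo + fraction * (f_hi - f_lo)
    return None                               # limit never crossed
```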

A more accurate approach, though particularly difficult to perform outside of a hemophilia centre, is to establish APTT values from true patients with various known factor levels [1, 2] (e.g. Fig. 4).

As a general guide, if the APTT is used to screen for factor deficiencies, a patient’s APTT should rise above the NRI when their factor level falls below around 30–40 U/dL for FVIII, FIX and FXI.

Sensitivity of APTT to UFH
Despite the changing landscape of anticoagulation therapy with the addition of direct anti-FXa inhibitors (rivaroxaban and apixaban) and a direct thrombin inhibitor (dabigatran) [5, 6], both LMWH and UFH continue to be frequently used in clinical practice. In turn, the APTT remains a generally preferred method of UFH monitoring over anti-FXa, given the wide availability and relatively low cost of the assay. However, unlike the calibrated anti-FXa assay, APTT results are subject to variation between instruments, be they based on optical or mechanical clot detection methods [7], between APTT reagents (including between different lots of the same reagent type) and between the algorithms instruments use to process raw data. This poses a substantial problem for the historical recommendation to maintain patients on UFH at 1.5–2.5 times the ‘normal reference value’ (a recommendation based on limited evidence [8]). Therapeutic ranges should therefore be defined with specific reference to the instrument/reagent combination used locally [9].

One ‘spiking’ method involves adding known quantities of UFH to normal pool plasma, which is then tested by APTT and anti-FXa methods, allowing an estimation of the APTT therapeutic interval [1]. However, variation in components of patient plasma, as well as the non-physiologically processed nature of the UFH used, can affect the interpretation of data obtained with this method. A better method involves ex vivo assessment of plasma obtained from patients on UFH therapy, tested by both APTT and anti-FXa, from which a UFH therapeutic range for APTT is established to match the therapeutic range for anti-FXa (e.g. 0.3–0.7 U/mL). It is important to recognize that an individual’s APTT response to UFH is affected by many influences, including (but not limited to): antithrombin level; high or low levels of coagulation factors and of proteins, such as von Willebrand factor or proteins released from endothelial cells or platelets, that compete with antithrombin for heparin binding; increased FVIII levels in the acute phase response; reduced FXII; and the presence of LA.
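The ex vivo approach can be sketched as an ordinary least-squares fit of APTT against anti-FXa, reading off the APTT values at 0.3 and 0.7 U/mL. This is a simplified illustration with hypothetical, perfectly linear data; real ex vivo data are scattered and may warrant a non-linear fit.

```python
def aptt_therapeutic_range(ex_vivo_pairs, xa_low=0.3, xa_high=0.7):
    """Fit APTT (s) against anti-FXa (U/mL) for ex vivo samples from
    patients on UFH by ordinary least squares, then read off the APTT
    values matching the anti-FXa therapeutic range (0.3-0.7 U/mL)."""
    xs = [x for x, _ in ex_vivo_pairs]
    ys = [y for _, y in ex_vivo_pairs]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in ex_vivo_pairs)
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return (intercept + slope * xa_low, intercept + slope * xa_high)
```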

To obtain a cleaner data set to establish UFH therapeutic ranges, the following steps can be undertaken during sample collection and processing [1].
• Ensure baseline PT, APTT and INR testing prior to commencement of UFH are within their NRIs.
• Exclude underfilled samples, samples with visible hemolysis or likely platelet activation and release of heparin neutralizer platelet factor 4 (PF4).
• Exclude samples containing LMWH or other anticoagulants (e.g. VKAs, DOACs).
• Adhere to manufacturer guidelines with regards to the window from time of blood collection to testing.
• Double centrifuge samples when freezing them for batch testing (to remove residual platelets, which release PF4 and phospholipids on thawing).
• Accumulate data over a suitable time period to account for day-to-day test result variability.
• Aim for 30 or more data points.
• Appropriately dilute samples with anti-Xa activity above the test’s linearity limit.
• Remove data points reflecting ‘gross’ outliers.

LA sensitivity
Because the APTT is a phospholipid-dependent assay, it may be prolonged in the presence of LA. However, differences in phospholipid type and concentration between APTT reagents account for the wide variation seen in the degree of prolongation, including that due to LA. The LA sensitivity of a particular APTT reagent can be assessed by testing samples containing LA and comparing, for example, the mean clotting times obtained with each reagent. A reagent’s LA sensitivity also has a bearing on the use of the APTT to monitor UFH, and must inform any algorithm for the further investigation of unexpectedly prolonged APTTs.
In one empirical method, an LA-sensitive assay (e.g. dilute Russell viper venom time; dRVVT) is first used to assemble a set of LA-positive samples of various ‘strengths’. Different APTT reagents are then used to test these samples and, for each reagent, the results are plotted against the upper reference limit of its APTT [1]. The ratio of the clotting time of each LA-positive sample to the mean normal APTT derived from normal plasma samples is calculated, and the median of these ratios allows reagents to be ranked by LA sensitivity, making clear which are most (versus least) sensitive to LA. Reagents can then be selected according to the laboratory’s needs. For example, a laboratory may prefer a relatively LA-‘insensitive’ APTT reagent with good FVIII/FIX/FXI and UFH sensitivity as a general purpose screening reagent (i.e. a hospital laboratory monitoring UFH, but wishing to avoid detecting LA in asymptomatic patients). Alternatively, a laboratory may select a pair of LA-sensitive and LA-insensitive APTT reagents to assess for LA in symptomatic (thrombosis and/or pregnancy morbidity) patients.
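The ratio-and-median ranking described above might be sketched as follows; the reagent names and clotting times are hypothetical.

```python
import statistics

def la_sensitivity_rank(reagent_results):
    """Rank APTT reagents from most to least LA-sensitive. Input maps
    reagent name -> (mean normal APTT, clotting times for a shared
    panel of LA-positive samples); the index for each reagent is the
    median ratio of LA-positive clotting time to mean normal APTT."""
    index = {
        name: statistics.median(t / mean_normal for t in la_times)
        for name, (mean_normal, la_times) in reagent_results.items()
    }
    return sorted(index, key=index.get, reverse=True)
```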

Conclusion
Interpretation of a normal or a prolonged APTT must take into account both the clinical context, including the presence of anticoagulant therapy, and the methods and reagents used by the laboratory. The sensitivity of a particular APTT reagent to UFH therapy, LA and factor deficiencies has a significant bearing on diagnostic assessment and therapy monitoring, and thus represents essential knowledge for laboratory and clinical staff alike.

Figure 1. The activated partial thromboplastin time (APTT) assay measures the clot time to formation of fibrin via the contact factor pathway and is dependent on contact factors (FXII and above), and then FXI, FIX, FVIII, FX, FV, and FII. The APTT is also affected by vitamin K antagonists (VKAs; ‘W’), but more importantly is used to monitor unfractionated heparin (UFH; ‘H’) therapy and also to assess for potential hemophilia (FVIII, FIX or FXI deficiency). The APTT is also sensitive to the presence of other anticoagulants, including direct oral anticoagulants (DOACs) such as dabigatran (‘D’) and rivaroxaban (‘R’), and potentially also apixaban (‘A’) for some reagents. The APTT may also be utilized as part of a panel of tests to help assess for lupus anticoagulant (LA). (Modified from Favaloro EJ, et al. How to optimize activated partial thromboplastin time (APTT) testing: solutions to establishing and verifying normal reference intervals and assessing APTT reagents for sensitivity to heparin, lupus anticoagulant, and clotting factors. Semin Thromb Hemost 2019; 45: 22–35 [1].)
Figure 2. An algorithm that provides one recommended approach for the follow-up of an abnormal APTT. Always exclude an anticoagulant effect first – there is no point investigating a prolonged APTT associated with anticoagulant use. Then consider the patient’s history, or the clinical reason for the test order, both of which assist in terms of follow-up approach. APTT, activated partial thromboplastin time; FBC/CBC, full blood count (UK/Australia)/complete blood count (USA); DIC, disseminated intravascular coagulation; DOAC, direct oral anticoagulant; EDTA, ethylenediaminetetraacetic acid; F, factor; LA, lupus anticoagulant; PT, prothrombin time. (Modified from Favaloro EJ, et al. How to optimize activated partial thromboplastin time (APTT) testing: solutions to establishing and verifying normal reference intervals and assessing APTT reagents for sensitivity to heparin, lupus anticoagulant, and clotting factors. Semin Thromb Hemost 2019; 45: 22–35 [1].)
Table 1. The APTT test: a multipurpose and sensitive assay, but not specific for any individual parameter. The list is not meant to be all-inclusive.
DOACs, direct oral anticoagulants; VWD, von Willebrand disease.
*PT should also be prolonged if APTT is prolonged in the indicated setting.
(Modified from Favaloro EJ, et al. How to optimize activated partial thromboplastin time (APTT) testing: solutions to establishing and verifying normal reference intervals and assessing APTT reagents for sensitivity to heparin, lupus anticoagulant, and clotting factors. Semin Thromb Hemost 2019; 45: 22–35 [1].)

Figure 3. Historical data from our laboratory to illustrate the process of deriving a normal reference interval (NRI) for the APTT, using nearly 80 normal individual plasma samples. (a) APTT of all samples tested shown as a dot plot; one clear outlier shown as a red asterisk. (b) Data cleaned of outliers [i.e. in this case the single red asterisk sample in (a)]. (c) NRI estimate as mean ± 2 standard deviations (SDs) to provide approximate 95 % coverage. Bar graphs of parametric and log-transformed data processing are shown. The NRI for this dataset approximates 27–38 sec. (Modified from Favaloro EJ, et al. How to optimize activated partial thromboplastin time (APTT) testing: solutions to establishing and verifying normal reference intervals and assessing APTT reagents for sensitivity to heparin, lupus anticoagulant, and clotting factors. Semin Thromb Hemost 2019; 45: 22–35 [1].)
Figure 4. Ex vivo heparin versus APTT evaluation. (a) Samples from all patients on heparin (as identified by our laboratory information system) and for which an APTT was performed at the time of evaluation were also tested for anti-FXa level. The APTT therapeutic range is that corresponding to a heparin level of 0.3–0.7 U/mL by anti-FXa. However, many data points in this figure do not reflect UFH alone. Some may instead reflect low molecular weight heparin (e.g. likely for the sample yielding an anti-FXa value close to 0.7 U/mL but with a normal APTT), UFH coincident with FXII deficiency or LA, or patients transitioning from UFH to VKAs. These data points can be removed to yield a ‘cleaner’ dataset, as shown in (b). (Modified from Favaloro EJ, et al. How to optimize activated partial thromboplastin time (APTT) testing: solutions to establishing and verifying normal reference intervals and assessing APTT reagents for sensitivity to heparin, lupus anticoagulant, and clotting factors. Semin Thromb Hemost 2019; 45: 22–35 [1].)

Disclaimer: The views expressed in this paper are those of the authors, and are not necessarily those of NSW Health Pathology.

References
1. Favaloro EJ, Kershaw G, Mohammed S, Lippi G. How to optimize activated partial thromboplastin time (APTT) testing: solutions to establishing and verifying normal reference intervals and assessing APTT reagents for sensitivity to heparin, lupus anticoagulant, and clotting factors. Semin Thromb Hemost 2019; 45: 22–35.
2. Kershaw G. Performance of activated partial thromboplastin time (APTT): determining reagent sensitivity to factor deficiencies, heparin, and lupus anticoagulants. Methods Mol Biol 2017; 1646: 75–83.
3. Defining, establishing, and verifying reference intervals in the clinical laboratory; proposed guideline—third edition. CLSI document C28–P3. Clinical and Laboratory Standards Institute (CLSI) 2008.
4. One-Stage Prothrombin time (PT) test and activated partial thromboplastin time (APTT) test; approved guideline—second edition. CLSI document H47-A2. CLSI 2008.
5. Favaloro EJ, McCaughan GJ, Mohammed S, Pasalic L. Anticoagulation therapy in Australia. Ann Blood 2018; 3: 48.
6. Lippi G, Mattiuzzi C, Adcock D, Favaloro EJ. Oral anticoagulants around the world: an updated state-of-the-art analysis. Ann Blood 2018; 3: 49.
7. Favaloro EJ, Lippi G. Recent advances in mainstream hemostasis diagnostics and coagulation testing. Semin Thromb Hemost. 2019; 45(3): 228–246.
8. Baluwala I, Favaloro EJ, Pasalic L. Therapeutic monitoring of unfractionated heparin – trials and tribulations. Expert Rev Hematol 2017; 10(7): 595–605.
9. Marlar RA, Clement B, Gausman J. Activated partial thromboplastin time monitoring of unfractionated heparin therapy: issues and recommendations. Semin Thromb Hemost 2017; 43(3): 253–260.
The authors
Julianne Falconer1 MBBS and Emmanuel J. Favaloro*1,2 PhD, FFSc (RCPA)
1Haematology, Institute of Clinical Pathology and Medical Research (ICPMR), NSW Health Pathology, Westmead Hospital, NSW, Australia.
2Sydney Centres for Thrombosis and Hemostasis, Westmead Hospital

*Corresponding author
E-mail: Emmanuel.Favaloro@health.nsw.gov.au


Chest pain management: high-sensitivity cardiac troponin supports rapid assessment of non-acute myocardial infarction patients

26 August 2020

The introduction of cardiac troponin (cTn) assays has helped improve the triage of chest-pain patients. The evolution from relatively insensitive cTn assays to high-sensitivity assays has required testing approaches to evolve in order to optimize clinical utility. The latest generation (high-sensitivity cTn) supports rapid diagnostic protocols, aiding the earlier discharge of a significant percentage of non-AMI patients as well as faster admission. The current (fourth) universal definition of AMI emphasizes that cTn can be elevated in many non-ischemic etiologies. To facilitate differentiation of an AMI, the guidelines define a rising or falling pattern of cTn assessed over time, in conjunction with other clinical information and risk assessment. The choice of clinical cutoffs and change values (deltas) can be confounding, as cTn assays are not standardized. Testing algorithms such as the 0–3 h protocol, compared with the rapid pathways supported by high-sensitivity assays (0–2 h or 0–1 h), also need to be taken into consideration. The use of the 99th percentile for cTn has been recommended since the first universal definition of AMI and continues to be recommended for 0–3 h protocols, along with gender-specific cutoffs. For rapid diagnostic protocols, values well below the 99th percentile, along with time-dependent deltas, must be used. The sensitivity and precision offered by high-sensitivity assays are essential for rapid protocols in order to accurately differentiate clinically significant change from assay imprecision. Rapid protocols identified for two recently available hs-cTnI assays (High-Sensitivity Troponin I assays from Siemens Healthineers) are reviewed, including performance in a 0–1 h algorithm.

by Laurent Samson, PharmD and Katherine Soreng, PhD

Chest pain patients and AMI assessment
Patients with a chief complaint of “chest pain” suggestive of acute myocardial infarction (AMI) represent one of the most common ED presentations. As highly effective but time-dependent interventions for AMI exist, these patients are typically prioritized for assessment. While a diagnostic ECG can rapidly identify an ST-segment elevation myocardial infarction (STEMI), only a small percentage of patients have definitive ECG results. A larger percentage of patients with AMI lack clear ECG evidence but are experiencing a non-ST-segment elevation myocardial infarction (NSTEMI) and benefit from intervention. Both STEMI and NSTEMI fall into the category of acute coronary syndrome (ACS). Most chest pain patients have pain unrelated to ACS. The challenge in busy emergency departments (EDs) is to rapidly distinguish STEMI and NSTEMI patients from those who can be safely discharged or evaluated for alternate etiologies. To aid diagnostic stratification, guidelines recommend serial biomarker testing with cardiac troponin I or T (cTnI, cTnT), with a rising/falling pattern indicative of evolving injury. High-sensitivity troponin testing, in conjunction with other clinical findings and risk assessment, supports the differentiation of non-AMI patients from those experiencing cardiac ischemia.1
Evolving testing guidance is linked to cTn assay performance
In 2000, an expert consensus panel (the First Global MI Task Force) published a new AMI definition, which designated that cardiac necrosis in the setting of myocardial ischemia be labeled AMI. Recognizing the specificity of cTn, the authors adopted the 99th percentile of cTn in a healthy reference population as the diagnostic threshold. An AMI was characterized by a rise and/or fall in values with at least one value above the decision level, along with a strong pre-test likelihood. This redefinition, to a value just above that found in a normal, healthy population, dramatically increased AMI detection and improved clinical confidence for exclusion. As cTn assays were (and are) not standardized (and cTnI is a different molecule from cTnT), the adoption of the 99th percentile rather than a “shared” numeric diagnostic cut-point was, and remains, necessary.
With adoption of the 99th percentile, low-end accuracy became crucial to better differentiate a true cTn elevation from assay imprecision. A precision criterion of <10% CV at the 99th percentile (upper reference limit, or URL) was designated. While no assay available in 2000 could meet this definition for both sensitivity and precision, some manufacturers achieved approval of “guideline-compliant” or “contemporary sensitive” assays in subsequent years. As assay performance continued to improve and additional data were published, recommendations evolved. In 2007, an update to the Universal Definition expanded the MI definition into five MI subcategories, each with associated cTn values. The 99th percentile threshold continued to be recommended for Types 1 and 2 MI (typically occurring in patients presenting to the ED with chest pain), while multiples of the URL were designated for MI Types 4 and 5.
The guidance emphasized the need for a changing pattern with at least one result above the diagnostic threshold in the setting of suspected myocardial ischemia. A changing pattern is essential to distinguish an AMI from the chronic elevations associated with structural heart disease or alternate etiologies of cardiac damage. To assess change, cTn testing was recommended at 0 and 6–9 hours (with additional testing if AMI suspicion persisted).

In 2011, the ESC guidelines for the management of NSTEMI patients were published. The Expert Panel recognized the increased availability and improved performance of sensitive assays, and the development of high-sensitivity cTn assays. Given the ability of high-sensitivity assays to detect low levels of cTn with good precision, suggested testing intervals were shortened to 0 and 3–6 hours. In 2012, the updated Third Universal Definition was published, recommending a 3 h rather than 6 h delta assessment if using an hs-cTn assay. Similar guidance for a 0–3 h protocol was published in the American guidelines for the management of NSTE-ACS patients in 2014. Also in 2014, the IFCC Task Force on Cardiac Biomarkers defined the high-sensitivity troponin criteria and introduced the use of whole numbers for cTn (units of ng/L or pg/mL) to more readily discriminate a changing pattern. Recognizing the mounting data on the good performance of rapid protocols with hs-cTn assays, the ESC published new guidelines for the management of NSTEMI patients in 2015. This update included rapid pathways (1 or 2 hours) as an alternative to the classical 0–3 h protocol.2 Challenges to rapid testing were recognized, including concerns about misdiagnosing “early presenters” (those arriving in the ED within 3 h of chest pain onset), in whom a rapid protocol could lack the needed sensitivity. The authors also recognized that a changing pattern may not be seen in patients with a high pre-test risk of MI, such as those near the peak of the cTn time–concentration curve or on its downslope.

Current testing guidance: The 2018 fourth Universal Definition of MI
The 2018 Fourth Universal Definition of MI (ESC/ACC/AHA/WHF Expert Consensus Document) elaborates on the use of hs-cTn assays.1 In the 6-year interim, striking progress had been made in the commercial availability of high-sensitivity cTn assays, as well as in the validation of these assays in both “standard” (0–3 h) and “accelerated” or rapid (0–2 h or 0–1 h) diagnostic protocols. Differentiating acute ischemia-induced damage from cardiac injury resulting from non-ischemic conditions was emphasized, as both can cause elevated cTn levels. The term myocardial injury comprises MI as well as other non-ischemic cardiac conditions (such as myocarditis or heart failure) and non-cardiac morbidities (such as sepsis or renal disease) associated with elevations of cTn. In the case of MI, injury is acute and characterized by a significant rise and/or fall of cTn with at least one value above the 99th percentile URL of a healthy reference population. Acute MI is diagnosed if there is evidence of myocardial necrosis (cell death due to injury) in a clinical setting consistent with myocardial ischemia. Chronic elevations are less likely to show significant change, which can aid exclusion of AMI. The Fourth Universal Definition reinforces the value of gender-specific cut-points. As women tend to have lower levels of cTn, the percent detection in a female reference population can be lower, meaning some assays may detect ≥50% of healthy men but not of healthy women. Additional data explored the potential for a single-value rule-out using the assay limit of detection (LoD). Updates included a focus on improved diagnosis of the MI types and a discussion of analytic issues for cTn, including that values from one assay cannot be applied to another because of the lack of standardization.
High-sensitivity cTn: Impact on testing and patient management
Currently, hs-cTn is analytically defined by the ability to measure cTn in ≥50% of a healthy reference population at values between the LoD and the gender-specific 99th percentile (with a CV <10% at the URL). The 99th percentile continues to be the recommended cut-point if using a 0–3 h testing strategy, but with implementation of gender-specific values. As hs assays more accurately detect small changes, they can also be incorporated into rapid protocols (either 0–1 h or 0–2 h). Assay precision in these accelerated protocols is critical, as small measures of change below the 99th percentile must be reliably detected. Since hs-cTn assays continue to lack standardization, each assay must be independently validated, with clinical decision limits and change values identified. The shorter the time between tests, the lower the values involved. Caution must be exercised depending on the hs assay utilized, as change values are not only assay-specific but can be obscured by low-end imprecision and lot-to-lot variation, which can vary significantly among assays at values well below the 99th percentile. These and other considerations for institutions wishing to implement hs-cTn testing have recently been published.3-4 Rapid protocols have been proposed to exclude patients for AMI and so reduce patient burden in the ED. High-risk patients may also be more rapidly identified using hs assays and rapid protocols. Patients have a higher likelihood of NSTEMI if the hs-cTn concentration at presentation is at least moderately elevated, or if hs-cTn concentrations show a clear rise within the first hour.

Assay-specific hs cTn: Analytic issues can impact choice of testing algorithm
The lack of standardization among cTn assays remains a challenge, necessitating assessment of the specific assay utilized in a given setting. Hs-cTn assays should demonstrate ≥50% gender-specific detection in a healthy reference population. Challenges exist around what defines a “healthy” population, and screening criteria can significantly affect percent detection. Biologic variation can also contribute to divergent values, adding to the uncertainty associated with analytic variation. Any impact on low-end precision or lot-to-lot variation of hs-cTn assays can confound clinical assessment when using rapid diagnostic algorithms. While all hs-cTn assays meet the precision criterion at the 99th percentile, significant differences among assays exist at the lower cut-points utilized in rapid diagnostic algorithms. It is imperative that both laboratories and clinicians understand the precision of their assay when adopting rapid testing, and do not assume that a low coefficient of variation (CV) extends to the lower cut-points utilized.5,6
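Precision at a given level is usually summarized as the coefficient of variation of replicate quality-control measurements; a one-line sketch (the replicate values are hypothetical):

```python
import statistics

def cv_percent(replicates):
    """Coefficient of variation (%) for replicate measurements:
    100 * SD / mean. Worth computing at the low decision levels used
    in rapid protocols, not only at the 99th-percentile URL."""
    return 100 * statistics.stdev(replicates) / statistics.mean(replicates)
```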

Performance of the “classical” (0-3h) pathway with hs cTn assays
The increased sensitivity of hs-cTn assays means that a greater percentage of chest pain patients may present with values in excess of the 99th percentile. To differentiate elevations associated with ischemic injury from alternate causes of cardiac necrosis, a 20% change value has been recommended for patients with initial values above the 99th percentile, and a >50% change for values below it.1 Values can typically be obtained from the manufacturer’s package insert or published studies and the percent change calculated. An example listing the gender-specific 99th percentiles and other assay details for a recently approved hs-cTnI assay (Siemens Healthineers Atellica IM High-Sensitivity Troponin I) is shown in Table 1. Analytic performance characteristics of the Atellica IM High-Sensitivity Troponin I assay meet the criteria for an hs-cTn assay.
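The two-tier delta criterion can be expressed as a small helper. This is a sketch: `url_99th` stands for the assay- and gender-specific 99th-percentile URL (an assumed parameter name), and the units must match the assay in use.

```python
def significant_delta(baseline, repeat, url_99th):
    """Serial hs-cTn change criterion: >20% change when the baseline
    value is above the 99th-percentile URL, >50% when it is below."""
    threshold = 0.20 if baseline > url_99th else 0.50
    return abs(repeat - baseline) / baseline > threshold
```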

Performance of hs cTn using rapid strategies
Rapid strategies can employ either very low levels of hs-cTn on presentation (<LoD) or the lack of significant change in persistently elevated hs-cTn values over a 1–2-hour period, along with risk assessment, to exclude AMI. In addition, these strategies have validated a single, high-value rule-in for patients with a high index of suspicion for AMI; but again, all values are assay-specific, and performance should be established in large, well-validated studies.1,5,6 A single-sample rule-out strategy using a very low value has high sensitivity for myocardial injury and therefore high negative predictive value (NPV) for excluding MI, though pre-test probability should be considered, along with the timing of chest-pain onset. Rapid testing strategies rely on two concepts: first, hs-cTn is a quantitative and continuous variable, and the probability of MI increases with increasing values; second, early absolute changes (versus relative or percent changes) of cTn can be highly predictive of AMI. Importantly, morbidities such as end-stage renal disease may require alteration of the cut-off used, though renal-specific cut-offs have yet to be widely established. Studies designed to identify cut-offs for both traditional and rapid diagnostic algorithms have often excluded patients with renal disease, as well as other comorbidities that can be associated with cTn elevation.1 The following section reviews published data for an hs-cTnI assay (Siemens Healthineers’ Atellica IM High-Sensitivity Troponin I) for use in both a traditional and a rapid diagnostic algorithm. For institutions utilizing an alternate hs-cTn assay, similar study data are often available.

Validation studies of the 0-1h algorithm with ADVIA Centaur and Atellica IM High-Sensitivity Troponin I assays
An hs-cTnI assay from Siemens Healthineers, available on the ADVIA Centaur and Atellica IM analysers, has been validated in three large AMI studies (one American and two European patient cohorts). The APACE study group (Advantageous Predictors of Acute Coronary Syndrome Evaluation) is an ongoing prospective international multicentre study, with 12 centres in 5 European countries, aiming to advance the early diagnosis of AMI. APACE investigators have validated the performance of several sensitive and high-sensitivity assays.7 For rapid protocols, their approach utilized a derivation cohort followed by a validation cohort for each assay studied. The results for the ADVIA Centaur High-Sensitivity Troponin I assay in a 0–1 h protocol are shown in Fig. 1.

Applying the derived optimal cut-off levels and delta, 46% of patients could be classified as rule-out, with a corresponding NPV of 99.7% and a sensitivity of 99.1% (using a rule-out criterion of either a single determination <3 ng/L or a 0 h value <6 ng/L with a 0-1h delta <3 ng/L) in patients with chest pain >3 h from onset. A single-value rule-out of 3 ng/L was applied to early presenters (chest pain <3 h from onset). Conversely, a direct rule-in based on a single ADVIA Centaur hs-cTnI concentration at presentation (≥120 ng/L) was feasible in 12% of patients, and a further 6% were identified by a 0-1h delta of ≥12 ng/L. Overall, the 0-1h algorithm produced a diagnosis after 1 h (either rule-in or rule-out) in 64% of patients. The remaining 36% underwent additional testing and observation; ultimately 11% were ruled in for NSTEMI.
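To make the decision flow concrete, the 0-1h criteria quoted above can be sketched in code. This is an illustrative sketch only: the cut-offs are the APACE-derived values for the ADVIA Centaur High-Sensitivity Troponin I assay (all in ng/L) and are assay-specific; the function name is invented, and gating the single-sample rule-out to presenters >3 h from chest-pain onset is our assumption (a common convention in 0/1h protocols), not a clinical tool.

```python
def triage_0_1h(c0_ng_l, c1_ng_l=None, hours_since_onset=None):
    """Classify a chest-pain patient as 'rule-in', 'rule-out' or 'observe'.

    c0_ng_l: hs-cTnI at presentation (0 h), in ng/L
    c1_ng_l: hs-cTnI at 1 h (None if only a single sample is available)
    hours_since_onset: time from chest-pain onset; here the single-sample
        rule-out is restricted to late presenters (>3 h) by assumption
    """
    # Direct rule-in on a single very high value at presentation
    if c0_ng_l >= 120:
        return "rule-in"

    # Single-sample rule-out: very low value in a late presenter
    if hours_since_onset is not None and hours_since_onset > 3 and c0_ng_l < 3:
        return "rule-out"

    if c1_ng_l is not None:
        delta = abs(c1_ng_l - c0_ng_l)
        # Rule-in on a large absolute 0-1h change
        if delta >= 12:
            return "rule-in"
        # Rule-out: both values low and no significant 0-1h change
        if c0_ng_l < 6 and c1_ng_l < 6 and delta < 3:
            return "rule-out"

    # Neither criterion met: further testing and observation
    return "observe"
```

The structure mirrors the two concepts noted earlier: the concentration itself carries diagnostic weight, and the early absolute change adds further discrimination.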
 
To validate the 0-1h algorithm with the Atellica IM High-Sensitivity Troponin I assay, two additional studies using two different cohorts have been published, one in Scotland (High-STEACS)8 and one in the USA (HIGH-US)9. The baseline characteristics of the patients admitted to the ED are detailed in Table 2.

Importantly, Table 3 identifies key differences in exclusion criteria between the testing populations. Unlike the APACE cohort, the HIGH-US study did not exclude renal dialysis patients and so may more closely approximate a "real-world" patient testing scenario.

The High-STEACS study in Scotland validated the performance of the Atellica IM High-Sensitivity Troponin I assay in a 0-1h protocol (using the derivation values established for the ADVIA Centaur High-Sensitivity Troponin I assay with the APACE cohort); similar findings were observed in both study populations.8 The Atellica IM High-Sensitivity Troponin I assay was further validated in a US testing population (HIGH-US).9 The ADVIA Centaur and Atellica IM High-Sensitivity Troponin I assays share an identical design and differ only in the analyser platform used and the time to result (18 minutes on the ADVIA Centaur system vs. 10 minutes on the Atellica IM analyser). Table 4 shows the comparable clinical performance of the two assays using the APACE-derived values. In all three studies, a majority of patients could be ruled out or diagnosed for AMI using the 0-1h strategy. Importantly, the NPV for rule-out was >99%, supporting early and safe exclusion for a significant percentage of patients across testing cohorts. The clinical accuracy of the 0-1h early rule-out of NSTEMI found in the APACE and High-STEACS cohorts was similar to that reported for the American cohort, despite the latter's inclusion of patients with significant renal impairment, who tend to have chronic myocardial injury with increased cTn levels.10
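The headline metrics quoted across these studies, sensitivity and NPV, follow directly from the rule-out counts. A minimal sketch, using invented counts for a hypothetical 1000-patient cohort (not data from APACE, High-STEACS or HIGH-US):

```python
# Sensitivity and negative predictive value (NPV) from rule-out counts.
# All numbers below are made up for illustration only.

def sensitivity(tp, fn):
    """Fraction of true MI patients correctly flagged (not ruled out)."""
    return tp / (tp + fn)

def npv(tn, fn):
    """Fraction of ruled-out patients who are truly MI-free."""
    return tn / (tn + fn)

# Hypothetical cohort: 150 MI patients, of whom the rule-out misses 1,
# and 459 non-MI patients correctly ruled out at 0-1h
tp, fn = 149, 1
tn = 459
print(f"sensitivity = {sensitivity(tp, fn):.3f}")   # 0.993
print(f"NPV         = {npv(tn, fn):.3f}")           # 0.998
```

Note that NPV, unlike sensitivity, depends on disease prevalence in the tested population, which is one reason performance should be confirmed in each assay's own validation cohorts.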

Atellica IM and ADVIA Centaur High-Sensitivity Troponin I assays: design features compatible with a fast rule-out strategy
Consistency of performance between the assays derives from a shared assay design, including the choice of antibodies. Three monoclonal antibodies are employed: two for capture and one for detection. The two monoclonal capture antibodies target unique cTnI epitopes and are coupled via streptavidin to magnetic latex particles as pre-formed complexes, which reduces interference from biotin: specimens containing biotin at concentrations up to 3500 ng/mL show a ≤10% change in results. Detection of captured cTnI is accomplished using a conjugated Lite Reagent consisting of a proprietary acridinium ester and a recombinant anti-human cTnI sheep Fab covalently attached to bovine serum albumin (BSA) for chemiluminescent detection. This unique Fab has been molecularly modified to remove the primary Fc region associated with reports of human anti-animal antibody interference (HAAA, which includes HAMA) and other heterophile interferences. A direct relationship exists between the amount of troponin I present in the patient sample and the number of relative light units (RLUs) detected by the system, producing a quantitative result. Manufacturing processes and reagent stocks have been carefully designed to provide reliable lot-to-lot consistency. Fig. 2 shows the reproducibility of the ADVIA Centaur High-Sensitivity Troponin I assay across 6 reagent lots at a cTnI value well below the 99th percentile (where variation would be most likely to affect clinical assessment).
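As a generic illustration of how a chemiluminescent immunoassay converts measured RLUs into a quantitative concentration, the sketch below fits and inverts a four-parameter logistic (4PL) calibration curve, a model commonly used for immunoassay calibration. All parameter values and function names are invented for illustration; the vendor's actual calibration scheme is proprietary and not described here.

```python
# Generic 4PL calibration sketch (not the vendor's actual calibration).
# a = RLU signal at zero concentration, d = plateau RLU signal,
# c = inflection-point concentration, b = slope factor.

def rlu_4pl(conc, a, b, c, d):
    """4PL model: expected RLUs as a function of concentration (ng/L)."""
    return d + (a - d) / (1 + (conc / c) ** b)

def conc_from_rlu(rlu, a, b, c, d):
    """Invert the 4PL curve to recover concentration from measured RLUs."""
    return c * ((a - d) / (rlu - d) - 1) ** (1 / b)

# Invented curve parameters for an increasing (sandwich-format) signal
a, b, c, d = 500.0, 1.1, 2000.0, 2_000_000.0
rlu = rlu_4pl(25.0, a, b, c, d)          # simulate a measurement at 25 ng/L
recovered = conc_from_rlu(rlu, a, b, c, d)
assert abs(recovered - 25.0) < 1e-6      # round-trip recovers 25 ng/L
```

In practice, calibrator measurements at known concentrations are used to fit the four parameters per reagent lot, which is where the lot-to-lot consistency described above matters.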

Conclusion
As an alternative to the classical 0-3h protocol, which now includes gender-specific 99th percentiles, a faster rule-out strategy based on a 0-1h algorithm has been validated for the Siemens Healthineers high-sensitivity cTnI assays on the ADVIA Centaur systems and the Atellica IM analyser. The analytical performance of these assays supports confidence in results across the measuring range, especially at the low clinical decision cut-points. The ability to rapidly exclude AMI in a large percentage of chest-pain patients with a high degree of certainty can help with triage in the ED. The similarly high NPVs observed across hs-cTn studies (>99%) support good clinical performance for a safe rule-out using a 0-1h strategy. The accuracy of the early AMI rule-out established by these studies supports worldwide harmonization of these algorithms for well-validated assays.

References:
1) Thygesen K, Alpert JS, Jaffe AS, Chaitman BR, Bax JJ, Morrow DA, White HD; Executive Group on behalf of the Joint European Society of Cardiology (ESC)/American College of Cardiology (ACC)/American Heart Association (AHA)/World Heart Federation (WHF) Task Force for the Universal Definition of Myocardial Infarction. Fourth universal definition of myocardial infarction (2018). Circulation 2018; 138(20): e618–e651.
2) Roffi M, et al. 2015 ESC Guidelines for the management of acute coronary syndromes in patients presenting without persistent ST-segment elevation. Eur Heart J 2016; 37: 267–315.
3) Apple FS, et al. Cardiac troponin assays: guide to understanding analytical characteristics and their impact on clinical care. Clin Chem 2017; 63(1): 73–81.
4) Januzzi JL Jr, et al. Recommendations for institutions transitioning to high-sensitivity troponin testing: JACC Scientific Expert Panel. J Am Coll Cardiol 2019; 73(9): 1059–1077.
5) Collinson PO, Saenger AK, Apple FS; on behalf of the IFCC C-CB. High sensitivity, contemporary and point-of-care cardiac troponin assays: educational aids developed by the IFCC Committee on Clinical Application of Cardiac Bio-Markers. Clin Chem Lab Med 2019; 57(5): 623–632.
6) How does the analytical quality of the high-sensitivity cardiac troponin T assay affect the ESC rule-out algorithm for NSTEMI? Clin Chem 2019; 65(3).
7) Boeddinghaus J, et al. Clinical validation of a novel high-sensitivity cardiac troponin I assay for early diagnosis of acute myocardial infarction. Clin Chem 2018; 64(9): 1–14.
8) Chapman AR, et al. Novel high-sensitivity cardiac troponin I assay in patients with suspected acute coronary syndrome. Heart 2018; doi:10.1136/heartjnl-2018-314093.
9) Christenson RH, et al. Trial design for assessing analytical and clinical performance of high-sensitivity cardiac troponin I assays in the United States: the HIGH-US study. Contemp Clin Trials Commun 2019; 14: 100337.
10) Nowak RM, et al. Performance of a novel high-sensitivity cardiac troponin I assay in a one-hour algorithm for evaluation of NSTEMI in the US population. J Am Coll Cardiol 2019; 73(9 Suppl 1).

The authors
Katherine Soreng, PhD, is Director of Clinical and Scientific Support for Laboratory Diagnostics at Siemens Healthineers (katherine.soreng@siemens-healthineers.com).
Laurent Samson, PhD, is Associate Director for Global Commercial Marketing, Immunoassays, at Siemens Healthineers (laurent.samson@siemens-healthineers.com).

Product availability may vary from country to country and is subject to varying regulatory requirements.


Direct Oral Anti-Coagulants DOAC-STOP

26 August 2020, in Featured Articles, by 3wmedia

Methylation landscape as a general test for cancer

26 August 2020, in Featured Articles, by 3wmedia

DNA methylation at the cytosine of CpG dinucleotides is a key form of epigenetic regulation of gene expression and aberrant hypermethylation of the promoter regions of certain genes has been identified in many cancers. The ability to analyse methylation status from non-invasively collected samples (such as saliva, sputum, stool and urine) as well as circulating tumour (ct)DNA in blood has led to much interest in methylation status as a potential biomarker for diagnosis, prognosis and treatment monitoring of cancer. Indeed, a fecal blood test for colorectal cancer screening (Cologuard®) that includes aberrant methylation testing of NDRG4 and BMP3 genes was approved by the Food and Drug Administration in the USA in 2014. However, methylation state analysis of specific promoter regions requires the use of technically demanding methods, such as PCR of bisulfite-treated DNA, pyrosequencing, methylation-specific PCR, methyl BEAMing and genomic sequencing, that have limitations of one sort or another in their use as high-throughput screening tools.
Recently, though, a paper by Sina et al. ("Epigenetically reprogrammed methylation landscape drives the DNA self-assembly and serves as a universal cancer biomarker" in Nature Communications 2018; 9(1): 4915) has described how the changes in methylation patterns in cancer genomes have a general effect on the physicochemical properties of DNA, and how this change can be used as a potential universal cancer biomarker. In the transition from normal to malignant neoplasm, the general genomic methylation pattern changes from one of dispersed methylation to general hypomethylation, but with increased clustering of methylation at regulatory regions. This change in the 'methylation landscape' results in a difference in the solvation properties between the normal and the cancer DNA polymer, which in turn affects the affinity of DNA for gold: the more highly aggregated normal DNA exhibits low adsorption to gold, whereas the less aggregated cancer DNA shows high adsorption. The authors have been able to use these properties to create a highly sensitive and specific, non-invasive, quick (≤10 min) colorimetric assay for the detection of cancer that needs only minimal sample preparation and a small DNA input. So far, identification of this 'Methylscape' biomarker only indicates the presence of cancer; further work-up is needed to determine the location, type and stage of disease. However, this seems like an ideal first test to determine whether or not a patient's symptoms are caused by cancer.
