Laboratory tests play an important role in the diagnosis and differentiation of chronic inflammatory bowel diseases, enabling endoscopies to be avoided in many cases.
By Dr Jacqueline Gosink
Fecal calprotectin (FC) has in recent years become an established marker for distinguishing chronic inflammatory bowel diseases (CIBD) from functional gut disorders such as irritable bowel syndrome (IBS) and for monitoring the disease activity in CIBD patients. FC determination can be complemented by serological tests for disease-specific antibodies, which enable differentiation of the two main forms of CIBD, namely Crohn’s disease (CD) and ulcerative colitis (UC). New therapeutic drug monitoring assays measure the level of infliximab or adalimumab in patient blood and the presence of inhibitory antibodies against these drugs, allowing the medication and dosage to be tailored to the patient’s individual response.
CIBD are autoimmune diseases characterized by inflammation of different regions of the gastrointestinal tract. UC affects predominantly the colon, while CD can affect any part of the gastrointestinal tract. The diseases are episodic, with symptomatic phases (relapses) alternating with asymptomatic phases (remission). The severity of the symptoms and the duration of relapses vary from patient to patient. Diagnosis of CIBD is based on a combination of clinical symptoms, endoscopy, histology, radiology and laboratory diagnostics. To obtain a definitive diagnosis, the inflammatory status of the intestinal epithelium is investigated using invasive methods such as endoscopy and biopsy. These procedures are, however, costly and unpleasant for patients. Laboratory tests, especially measurement of FC and detection of disease-specific antibodies, provide valuable non-invasive support for CIBD diagnostics.
When the intestinal tract is inflamed, neutrophil granulocytes migrate through the intestinal mucosa into the lumen and secrete calprotectin. The calprotectin accumulates in feces and is excreted with it. FC can therefore be used as a marker that is specific for inflammatory processes of the gastrointestinal tract, and its concentration is proportional to the degree of inflammation. FC is more effective for CIBD diagnostics than general clinical indices and classical serological inflammation markers such as erythrocyte sedimentation rate (ESR), C-reactive protein (CRP) or leukocyte count.
Moreover, calprotectin is already produced at the start of the disease, making it a suitable marker for early diagnosis. Measurement of FC is especially suitable for differentiating CIBD from IBS. Only patients with elevated FC need to be referred for further invasive tests. FC is also a suitable surrogate marker for assessing the disease activity and for predicting disease recurrence after surgery. The risk of relapse is proportional to the measured FC value (Fig. 1).
Low FC concentrations are associated with long-lasting remission, while high values indicate ongoing or recurring inflammation requiring further investigation and intervention. Current international guidelines for CIBD diagnostics, for example from the European Crohn’s and Colitis Organisation, recommend measuring FC for differential diagnostics and also highlight the good correlation between the FC concentration and the disease activity. An FC concentration of less than 50 µg/g is considered inconspicuous and excludes an inflammatory cause of intestinal complaints with high certainty. Values between 50 µg/g and 120 µg/g lie in the borderline range, and further monitoring of patients is indicated, for example by measuring FC again after two to three weeks. When the FC concentration is greater than 120 µg/g, the inflammatory status of the epithelium should be examined using imaging methods.
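These cut-offs translate directly into a simple triage rule. As a sketch in Python (the function name and the wording of the result strings are ours; the cut-offs are those quoted above):

```python
def interpret_fc(fc_ug_per_g: float) -> str:
    """Triage a fecal calprotectin (FC) result using the guideline
    cut-offs quoted in the text (50 and 120 ug/g)."""
    if fc_ug_per_g < 50:
        # Inflammatory cause of intestinal complaints excluded with high certainty
        return "inconspicuous"
    elif fc_ug_per_g <= 120:
        # Borderline range: re-test FC after two to three weeks
        return "borderline - retest in 2-3 weeks"
    else:
        # Examine the inflammatory status of the epithelium with imaging
        return "elevated - imaging indicated"
```

Only results in the second and third bands would lead to further monitoring or invasive work-up, which is precisely how FC reduces the number of endoscopies.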
Calprotectin can be measured quickly and efficiently in stool samples using ELISA. The EUROIMMUN Calprotectin ELISA, for example, takes just 75 minutes and can be processed manually or automatically. It offers a broad measurement range of 1.9 to 2100 µg/g and correlates well with other FC assays. Pre-analytical sample preparation can be reduced to a minimum through the use of special stool dosage tubes, which enable extraction of a defined amount of stool in just one step.
In a study with stool samples from 47 clinically characterised patients with CIBD or IBS (Fig. 2), the Calprotectin ELISA from EUROIMMUN yielded a sensitivity of 94.1 % at a specificity of 95.5 % (excluding samples in the borderline range). Thus, there was a very high correlation between the FC level and the clinical diagnosis.
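Sensitivity and specificity figures such as these follow from a simple 2×2 comparison of test result against clinical diagnosis. A generic sketch (the counts below are illustrative and merely chosen to give similar figures; they are not the study data):

```python
def sens_spec(results):
    """Compute (sensitivity, specificity) from (test_positive, diseased) pairs."""
    tp = sum(1 for pos, dis in results if pos and dis)        # true positives
    fn = sum(1 for pos, dis in results if not pos and dis)    # false negatives
    tn = sum(1 for pos, dis in results if not pos and not dis)  # true negatives
    fp = sum(1 for pos, dis in results if pos and not dis)    # false positives
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts: 16 TP, 1 FN, 21 TN, 1 FP
data = ([(True, True)] * 16 + [(False, True)] * 1
        + [(False, False)] * 21 + [(True, False)] * 1)
sens, spec = sens_spec(data)  # approx. 0.941 and 0.955
```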
Serological determination of disease-specific antibodies provides additional support in CIBD diagnostics, enabling non-invasive differentiation between CD and UC (Table 1). Combined testing for all relevant antibodies is recommended to increase the diagnostic sensitivity.
Autoantibodies against the acinar cells of the exocrine pancreas are a reliable marker for CD. They have a high significance due to their organ specificity, disease association and frequently high serum concentration. The main target antigens of these autoantibodies are the proteoglycans CUZD1 (rPAg1) and GP2 (rPAg2). Antibodies against Saccharomyces cerevisiae (ASCA) add a further parameter to CD diagnostics. The presence of ASCA can indicate a more severe disease course of CD requiring aggressive immunotherapy.
Autoantibodies against intestinal goblet cells occur exclusively in UC. The target antigen of these autoantibodies has not yet been identified. Anti-neutrophil cytoplasmic antibodies (ANCA) represent a further marker for UC. In the indirect immunofluorescence test (IIFT), they generate a perinuclear (pANCA) reaction on ethanol-fixed granulocytes, but no reaction on formalin-fixed granulocytes. This pattern contrasts with ANCA in vasculitis, which react on both substrates. The most important target antigen of ANCA in CIBD is DNA-bound lactoferrin.
Multiplex antibody detection
All relevant CIBD-associated antibodies can be determined in parallel by IIFT based on biochip mosaics. The EUROIMMUN CIBD Profile 3, for example, provides a combination of CUZD1- and GP2-transfected cells, control-transfected cells, intestinal goblet cells, ethanol-fixed granulocytes, lactoferrin-specific granulocytes, lactoferrin-depleted granulocytes, and fungal smears of S. cerevisiae (Fig. 3). The broad antibody analysis yields a high diagnostic rate for CD and UC.
Therapeutic drug monitoring
There is no cure for CIBD, and therapy focuses on reducing inflammation and relieving symptoms. Patients are frequently treated with tumour necrosis factor α (TNFα) inhibitors such as infliximab or adalimumab. Due to differing clinical responses to these biologics, the dosage must be optimized for each patient. By measuring the drug concentration in blood, the most suitable dose can be determined individually. Despite the good overall effectiveness of these drugs, a substantial number of patients form antibodies directed against them, which can hamper the function of infliximab or adalimumab. Low drug levels can be an indication of the presence of these anti-drug antibodies (ADA). Regular monitoring of drug levels and determination of ADA enables fine-tuning of the drug dosage, an early switch to other therapeutics and the prevention of side effects.
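The interpretation of drug level plus ADA status is often summarized as a decision matrix. A hedged sketch of that logic (a common simplification; any concentration cut-off for an ‘adequate’ level is drug- and assay-specific and is not given here):

```python
def tdm_advice(drug_level_adequate: bool, ada_positive: bool) -> str:
    """Simplified TNF-alpha inhibitor TDM interpretation.
    The boolean inputs abstract away the assay-specific cut-offs."""
    if not drug_level_adequate and ada_positive:
        # Drug is being neutralized by anti-drug antibodies
        return "switch to another therapeutic"
    if not drug_level_adequate:
        # Subtherapeutic exposure without ADA
        return "consider dose increase or shorter dosing interval"
    if ada_positive:
        # ADA present but exposure still adequate: watch closely
        return "monitor: ADA present, drug level adequate"
    return "continue current regimen"
```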
The concentration of infliximab or adalimumab in patient sera can be precisely determined using MabTrack Level ELISAs (Sanquin Reagents, distributed by EUROIMMUN). As complementary tests, MabTrack Anti-Drug Antibody ELISAs enable specific detection of antibodies against infliximab or adalimumab. The ELISAs are easy to perform and can be fully automated. Thus, under- or overtreatment of CIBD can be recognised promptly and the medication and dosage adjusted accordingly.
Reliable laboratory diagnostics for CIBD enable the number of endoscopies to be reduced by about two thirds, a benefit for both patients and healthcare budgets. FC determination represents a simple, non-invasive method to select patients for endoscopic clarification. This is especially relevant given the much higher prevalence of IBS (10-20 %) compared to CIBD (around 0.4 %). FC measurement also allows regular assessment of the disease activity without the need for endoscopy, enabling more efficient patient management. Multiplex antibody testing allows CD and UC to be differentiated. A clear distinction between the two diseases is important due to significant differences in treatment and prognosis. Therapeutic drug monitoring of patients on infliximab or adalimumab is ushering in a new era of personalized treatment for CIBD patients, whereby the medication and dosage can be optimised to the patient’s exact needs. Since infliximab and adalimumab are also used to treat other autoimmune diseases, especially rheumatoid arthritis and psoriasis, these assays will also play an important future role in other areas of medical diagnostics.
1. Magro F., et al. J Crohns Colitis 2017; 11(6): 649-670.
2. Komorowski L., et al. J Crohns Colitis 2013; 7(10): 780-790.
3. Homšak E., et al. Wien Klin Wochenschr 2010; 122 (Suppl 2): 19-25.
Jacqueline Gosink, PhD
Figure 3. IIFT results. (a) Antibodies against CUZD1/GP2; (b) antibodies against intestinal goblet cells; (c) control-transfected cells; (d) pANCA; (e) antibodies against DNA-bound lactoferrin; (f) control lactoferrin-depleted granulocytes; (g) antibodies against S. cerevisiae.
by Peter Murphy
It was first noticed in the 1700s that the rate of erythrocyte sedimentation changes with illness. The use of this attribute as a measure of inflammatory activity due to underlying disease was formalized into a test in the early 1900s, and what has become known as the Westergren test has recently been reconfirmed as the reference method for measuring the erythrocyte sedimentation rate, which remains a commonly used hematology test today. This article explains why the test is used, how its results are affected by physiological factors and how to perform it to obtain useful and reliable results.
Using erythrocyte sedimentation rate measurement to indicate inflammation
Explaining erythrocyte sedimentation rate measurement
The erythrocyte sedimentation rate (ESR) is a general condition indicator and serves as a guide in the diagnosis and treatment follow-up of various autoimmune diseases, acute and chronic infections and tumours. ESR is the speed at which erythrocytes settle in a tube and provides medical practitioners with valuable information for the diagnosis of their patients. Normal erythrocytes are negatively charged and repel each other, which limits their sedimentation rate. Erythrocytes that form clumps settle faster than single cells, so factors that increase aggregation will increase sedimentation. Increased sedimentation indicates a health problem and a need for additional tests.
Applications of ESR measurement
There’s a long list of conditions for which ESR can be used to assist in making a correct diagnosis or managing the care of a patient: autoimmune diseases such as rheumatoid arthritis, temporal arteritis and polymyalgia rheumatica are well known examples, as is multiple myeloma. When the presence of inflammation is suspected, ESR is a simple and cost-effective way of confirming this. Moreover, for patients with a known condition, the ESR test can provide useful information into the overall effectiveness of their treatment.
The Westergren method
The discovery of the ESR dates back to 1794, but it was in the 1920s that the pathologists Robin Fåhræus and Alf Westergren developed ESR measurement as we know it. To this day, the so-called Westergren method is recognized as the gold standard, among others by the Clinical and Laboratory Standards Institute (CLSI). In 2017, the International Council for Standardization in Haematology (ICSH) reconfirmed the Westergren method as the reference method for ESR measurement. The Westergren method owes its popularity to the fact that it is a simple and inexpensive first-line test, providing valuable information to GPs in the investigation of inflammation after only 60 (or even 30) minutes.
Critical factors of a reliable ESR test
Although the Westergren method may be the gold standard, many factors can meddle with its reliability. Therefore, always keep in mind the following requirements:
- non-hemolysed blood anti-coagulated with EDTA at collection;
- blood sample is thoroughly mixed and diluted 4:1 using a sodium citrate solution;
- the tube is held in a vertical position at a constant temperature (±1 °C) between 18 °C and 25 °C in an area free from vibrations, drafts and direct sunlight; and
- results are interpreted after at least 30 minutes.
Can we speed up ESR measurement?
In the original Westergren method, the ESR is read after 60 minutes. You can imagine this puts practical limitations on the workflow in clinical laboratories. A laboratory investigation, however, showed that 30-minute ESR readings correlate highly with the corresponding 60-minute ESR readings, which is why today most laboratories perform 30-minute ESR readings and extrapolate them to derive the 60-minute ESR result. There are Westergren alternatives that claim to measure ESR after only 20 seconds, but as it takes at least 10 minutes before sedimentation starts at a constant rate, these tests risk producing a number of false negatives.
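Such a 30-to-60-minute conversion is derived empirically from paired readings rather than from theory. As an illustrative sketch only, an ordinary least-squares fit on hypothetical paired data (not a validated conversion; real analysers use their own vendor-validated tables or fits):

```python
def fit_linear(pairs):
    """Ordinary least-squares fit y = a*x + b on (x, y) pairs,
    here (30-min reading, 60-min reading) in mm."""
    n = len(pairs)
    sx = sum(x for x, _ in pairs)
    sy = sum(y for _, y in pairs)
    sxx = sum(x * x for x, _ in pairs)
    sxy = sum(x * y for x, y in pairs)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Hypothetical paired 30-min/60-min readings (mm)
pairs = [(5, 10), (10, 21), (15, 29), (20, 41)]
a, b = fit_linear(pairs)

def estimate_60min(reading_30min):
    """Extrapolate a 30-minute reading to an estimated 60-minute result."""
    return a * reading_30min + b
```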
Why speeding up ESR measurement is not a good idea
The Westergren method and faster alternatives
As mentioned above, the 30-minute version of the Westergren test has become the standard in most hospitals and laboratories. However, even though 30 minutes can be regarded as a short time frame, some companies have worked on Westergren alternatives that can be read after mere minutes or even seconds. A major step forward, or so it seems.
What’s the deal with fast ESR measurement methods?
There are several conditions that ESR methods must meet in order to be reliable. For example, test tubes must be held in a vertical position, and the blood must be thoroughly mixed and diluted. Still, the most important condition of all doesn’t revolve around equipment; it revolves around time. It takes approximately 10 minutes before red blood cell sedimentation starts at a constant rate. This means that ESR readings taken after 20 seconds do not actually measure sedimentation but calculate a mathematically derived ESR. This, in turn, leads to ESR readings that don’t correlate with the Westergren standard, and hence to a number of false negatives. So, in their attempt to speed up the diagnosis of patients, laboratories that use these Westergren alternatives risk overlooking important signs of disease.
Speed or reliability?
Healthcare and in vitro diagnostics are being improved daily and theories are constantly evolving. This makes it hard to determine which ESR method is the right one to choose. The choice is even harder when you consider that ESR alternatives are comparable to the Westergren method as long as you test healthy people under normal circumstances. It’s when people are ill that the results start to deviate. This is why our advice is to always choose a method that adheres closely to the Westergren method [such as the automated ESR analysers Starrsed (RR Mechatronics), MixRate and Excyte (ELITech)]. Westergren has always been the method of choice in fundamental studies, meaning that our understanding of ESR is essentially based on this procedure. Moreover, the Westergren method is recommended by the CLSI and reconfirmed as the gold standard by the ICSH, two organizations that inform healthcare professionals on state-of-the-art technologies for in vitro diagnostic testing.
Not everything can be rushed
Moving forward is part of human nature; it’s why we’re always so busy making things better, faster and more comfortable. But in the case of ESR measurement, we simply have to face the fact that not everything can be rushed. We may be able to speed up the way we live, work and travel; we cannot force red blood cells to settle faster than they do. What we can do, is make ESR measurement tests as reliable as possible and have them help us improve diagnostics and save lives.
Physiological and clinical factors that influence ESR values
In the investigation of inflammation, ESR measurement is often the first-line test of choice as it’s simple, inexpensive and – if based on the Westergren method – reliable, reproducible and sensitive. But as is the case with every test, there are physiological and clinical factors that may influence ESR results. In this section, we’ll tell you more about them. However, when reading about factors that influence ESR results, please keep in mind that much, if not all of this information, is based on studies undertaken with the Westergren gold standard ESR method only. This is mainly due to the fact that the Westergren ESR method has been almost universally used to investigate the clinical utility of the test in a range of disease states, with much of this work published in peer reviewed journals. As a result, there’s a deep body of knowledge that describes the impact of disease, the limitations and sources of interference with the Westergren ESR. As the Westergren method for ESR measures a physical process under a defined set of conditions, this expansive body of knowledge cannot simply be ‘transferred’ to estimations of ESR by methods that use centrifugation or optical rheology.
What’s normal in ESR?
Before discussing the factors that influence ESR results, first we should answer the question: what is normal? When patients suffer from a condition that causes inflammation, their erythrocytes form clumps which makes them settle faster than they would in the absence of an inflammatory response. However, ‘faster’ is a relative term, and what’s ‘normal’ changes based on sex and age category.
Physiological and clinical factors that increase ESR
The most obvious explanation for an increased ESR is inflammation. During acute-phase reactions, macromolecular plasma proteins, particularly fibrinogen, are produced that decrease the negative charges between erythrocytes and thereby encourage the formation of cell clumps. As cell clumps settle faster, this increases ESR. Inflammation indicates a physical problem, meaning additional tests and follow-up are needed. However, there are other factors that increase ESR but don’t necessarily come with inflammation. For example, ESR values are higher for women than for men and increase progressively with age. Pregnancy also increases ESR, which means you’ll be dealing with above-average ESR results. In anemia, the number of red blood cells is reduced, which increases so-called rouleaux formation so that the cells fall faster. This effect is strengthened by the reduced hematocrit, which affects the speed of the upward plasma current. High plasma protein concentrations are another factor that increases ESR. And in macrocytosis, erythrocytes have a shape with a small surface-to-volume ratio, which leads to a higher sedimentation rate.
Physiological and clinical factors that decrease ESR
Apart from factors that increase ESR, medical practitioners and laboratory scientists should also consider the factors that decrease ESR. This is especially important as decreased ESR results may lead to missed diagnoses, whereas increased ESR results either lead to the right follow-up or false positives. Polycythemia, caused by increased numbers of red blood cells or by a decrease in plasma volume, artificially lowers ESR. Red blood cell abnormalities also affect aggregation, rouleaux formation and therefore sedimentation rate. Another cause of a low ESR is a decrease in plasma proteins, especially of fibrinogen and paraproteins.
The four factors that determine ESR reliability (dos and don’ts)
As with any test, the reliability of ESR measurements stands or falls with proper implementation. When not reliably performed, this nonspecific indicator of inflammation may point in the wrong direction and result in either a false positive or a false negative. This may lead to the initiation of unnecessary investigations or, worse, the overlooking of serious problems that actually need follow-up. In this section, we discuss some dos and don’ts for performing ESR measurement, to guarantee ESR reliability.
Factor 1: blood collection
Do: make sure you mix and dilute the sample 4:1 using a sodium citrate solution. If you adhere to these practices, you standardize the way you handle the blood samples, and therefore their suitability for ESR.
Don’t: leave the sample for too long before testing. We can imagine you’re pretty busy, and that you can’t do everything at the same time. However, when it comes to blood collection for ESR tests, some speed is required. After four hours, the results won’t be as accurate, which may negatively impact the reliability of the result. We therefore recommend performing the test within these four hours. If you really can’t make it in time, 24 hours is the maximum, but only if the sample is stored at 4 °C.
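The timing rules above can be encoded as a simple acceptance check (a sketch; the 4-hour and 24-hour limits are those stated in the text, and the function name is ours):

```python
def sample_acceptable(age_hours: float, stored_at_4c: bool) -> bool:
    """EDTA sample acceptance for ESR testing:
    up to 4 h at room temperature, up to 24 h only if stored at 4 degrees C."""
    if age_hours <= 4:
        return True
    return stored_at_4c and age_hours <= 24
```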
Factor 2: tube handling
Do: hold the tube vertically. A tube that is not held completely vertical can lead to increased sedimentation rates and is one of the technical factors that can affect ESR readings. And as we discussed in the previous paragraph, temperature is a factor too. Therefore, always place the tube in a stable and vertical position and at a constant temperature.
Don’t: expose the sample to vibrations, draft and sunlight, as all of these factors can have a strong influence on the final result obtained.
Factor 3: result reading
Do: wait 30 minutes. This is a very important one. Before reading ESR results, you should always wait 30 minutes. There are ESR testing methods that claim to show reliable results within only 20 seconds, but as it takes 10 minutes before sedimentation starts at a constant rate, these tests do not actually measure sedimentation. In fact, they calculate a mathematically derived ESR, leading to a number of false negatives.
Don’t: include the buffy coat (which is made up of leukocytes) in the erythrocyte column.
Factor 4: test quality
Do: go with an automated ESR test. They provide you with more reliable results, not least because they can correct hazy results. Moreover, automated ESR tests have a higher throughput compared to manual tests and minimize human contact with the tubes, which helps you reduce operations costs and minimize occupational health and safety risks.
Don’t: choose an ESR test that deviates from the Westergren standard. The Westergren method has always been the method of choice in fundamental studies, meaning that our understanding of ESR is essentially based on this procedure. ESR tests that deviate from it will logically provide you with different ESR values, meaning they can lead you in the wrong direction. This is why the Westergren method is recommended by the CLSI and reconfirmed as the gold standard by the ICSH.
ESR test as a reliable tool
If you keep these dos and don’ts in mind, you’re well on your way to making the ESR test a reliable tool that’s going to help you diagnose patients fast and error-free.
Peter Murphy MBA(TechMgt), MAACB, BSc, GradDipEd
ELITech Group, Braeside, Victoria 3195, Australia
by Prof. Michael Vogeser, Dr Judy Stone and Prof. Alan Rockwood
While analytical standardization and metrological traceability are well-defined terms, ‘methodological standardization’ in clinical mass spectrometry is still in a developing stage. We propose a framework that facilitates the widespread implementation of this highly complex and very powerful technology and is based on two pillars – standardization of the description of LC-MS/MS methods and standardization of the release of clinical test results as a three-step sequence of method validation, batch validation and validation of individual measurements.
Mass spectrometry in the clinical laboratory
Mass spectrometry (MS)-based methods now play an important role in many clinical laboratories worldwide. To date, areas of application have focused especially on screening for hereditary metabolic diseases, therapeutic drug monitoring, clinical toxicology and endocrinology. In fact, these techniques offer significant advantages over immunoassays and photometry as basic standard technologies in clinical chemistry: high analytical selectivity through true molecular detection; a wide range of applications without the need for specific molecular features (as in UV detection or specific epitopes); high multiplexing capacity and information-rich detection; and, in many cases, matrix-independent analyses, thanks to the principle of isotope dilution.
Various MS technologies – in particular tandem MS (MS/MS coupling with molecular fragmentation), time-of-flight (TOF) MS and Orbitrap MS – combined with front-end fractionation technologies such as HPLC or UPLC potentially allow very reliable analysis, but the technology itself is no guarantee of this: these techniques have a very high complexity and a wide range of potential sources of error which require comprehensive quality assurance [3–5]. Indeed, this high degree of complexity is still the main hurdle for the application of MS in the special environment of clinical laboratories. Specific challenges of this type of laboratory – in contrast to research and development laboratories – include: a heterogeneous mix of staff qualifications; the requirement for maximum handling safety when operating a large number of analysis platforms; round-the-clock operation; and the direct impact on the outcome of the individual patient.
Indeed, after more than two decades of commercial availability of LC-MS/MS instruments, their application from a global perspective has remained very limited. The translation of MS into fully automated ‘black box’ instruments is underway, but still far from being realized on a large scale, with laboratory-developed tests (LDTs) still dominating the field of clinical MS applications. Kit solutions for specific analytes provided by the in vitro diagnostics (IVD) industry are becoming increasingly available, but their application also requires a very high level of skill and competence from laboratories.
Two main differences of MS-based LDTs as opposed to standard ‘plug-and-play’ analysis systems in today’s clinical laboratories can be identified: first, the high heterogeneity of device configurations and second, the handling of large amounts of data, from sample list structures to technical metadata analysis.
In fact, the random access working mode is now so widespread in all clinical laboratories that the ‘analytical batch’ is no longer standard in laboratories. In the same way, modern analytical instruments no longer challenge the end users with extensive metadata (such as reaction kinetics or calibration diagrams). To achieve the goal of making the extraordinary and disruptive analytical power of MS fully usable for medicine to an appropriate extent, approaches to master the heterogeneity of platform configurations and to regulate the handling of batches and metadata are urgently needed – and standardization efforts seem to be crucial in this context.
Standardization of the method description
IVD companies manufacture many different instrument platforms, but each of these platforms is very homogeneous worldwide and is produced in large quantities for years. In contrast, MS platforms in clinical laboratories have to be individually assembled from a very large number of components from many manufacturers (sample preparation modules, autosamplers, high performance pumps, switching valves, chromatography columns, ion sources, mass analysers, vacuum systems, software packages, etc). As a result, hardly any two instrument configurations in different laboratories correspond completely with each other. This makes handling very demanding for operators, maintenance personnel, and service engineers.
Methods implemented on these heterogeneous platforms (e.g. instruments from various vendors) are in turn characterized by a very considerable number of variables, e.g. chromatographic gradients, labelling patterns of internal standards, purity of solvents, dead volume of flow paths, etc.
Taken together, the variety of assays referring to an identical analyte (such as tacrolimus or testosterone) is enormous, with an almost astronomical combinatorial complexity.
However, method publications are still traditionally written more or less as case reports: the feasibility and performance of a method is demonstrated for one individual system configuration. It is usually not clear which features are really essential for the method and which may vary between implementations – or when a second implementation can still be considered ‘the same’ method. The question of the true ‘identity’ of a method has thus not yet been addressed in depth by application notes or publications in scientific journals; the level of abstraction required here is missing.
In an attempt to standardize the description of MS/MS-based methods, we selected a set of 35 characteristics that are defined as essential for a method (see Table 1), for example: main approach of sample preparation (e.g. protein precipitation with acetonitrile); main technique of ionization (e.g. electrospray ionization in negative mode); molecular structure of the internal standard; mass transitions; calibration range. In addition, we define 15 characteristics of a method that cannot or should not realistically be standardized in time and space (examples: manufacturer and brand of the MS detector; dead volume of the flow path; lot of analytical columns and solvents). These characteristics – identified as variable – should be documented in the internal report files.
We found it feasible to describe several exemplary MS/MS methods using this scheme and a corresponding matrix. On the basis of this matrix, the method transfer to different platforms and laboratories will be much easier and more reliable. Specifying the identity of a method in the proposed way has the essential advantage that a method revalidation can be transparently triggered by defined criteria, e.g. the use of a novel internal standard with a different labelling pattern.
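One way to operationalize such a description matrix in software is a structured record that separates essential (identity-defining) characteristics from variable ones. A minimal sketch, with field names of our own choosing based on the examples in the text:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EssentialCharacteristics:
    """Identity-defining features: changing any of these defines a
    *different* method and should trigger revalidation.
    (Illustrative subset of the 35 characteristics; field names are ours.)"""
    analyte: str
    sample_preparation: str   # e.g. "protein precipitation with acetonitrile"
    ionization: str           # e.g. "electrospray ionization, negative mode"
    internal_standard: str    # molecular structure / labelling pattern
    mass_transitions: tuple   # precursor/product ion pairs
    calibration_range: tuple  # (low, high) in method units

@dataclass
class VariableCharacteristics:
    """Documented per run but not standardized (illustrative subset of the 15)."""
    ms_detector_brand: str = ""
    column_lot: str = ""
    solvent_lot: str = ""

def requires_revalidation(old: EssentialCharacteristics,
                          new: EssentialCharacteristics) -> bool:
    """Any change in an essential characteristic triggers method revalidation."""
    return old != new
```

A change to a variable characteristic (say, a new column lot) updates the run documentation only, whereas a change to an essential field, such as an internal standard with a different labelling pattern, flips `requires_revalidation` to true.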
The proposed scheme for method description may also be the basis of a comprehensive traceability report for any result obtained by an MS-based method in the clinical laboratory.
Standardization of batch release (Table 2)
While today’s routine analyser platforms essentially provide unambiguous final results for each sample, the process of generating quantitative results from primary data in MS is open and transparent. Primary data in MS are the peak areas of the target analyte observed in diagnostic samples. In addition to these primary data, a range of metadata is provided (e.g. internal standard area, peak height-to-area, peak skewness, qualifier peak area; metadata related to analytical batches, e.g. coefficient of variation (CV) of internal standard areas). This transparency and abundance of data is a cornerstone of the high potential reliability of MS-based assays and therefore their interpretation is very important [8, 9].
However, the evaluation of this metadata – related to individual samples and batches – is currently performed very heterogeneously from laboratory to laboratory; this applies to LDTs as well as to commercially available kit products. The structure of analytical batches is also highly variable, and there is no generally accepted standard (number and sequence of analysis of calibration samples in relation to patient and quality control samples, blank injections, zero samples, etc.).
While the validation of methods – which is performed before a method is introduced into the diagnostic routine – is discussed in detail in the literature (and in practice), the procedures applied to primary data before release for laboratory reporting have not yet been standardized. Validation is generally defined as the process of testing whether predefined performance specifications are met. Therefore, quality control and release of analytical batches and patient results should also be considered a process of validation, and criteria for the acceptance or rejection of results should be predefined.
A three-step approach to validation, covering the entire life cycle of methods in the clinical laboratory, can be conceptualized: dynamic validation should integrate validation of methods, validation of analytical batches and validation of individual test readings. We believe that standardization of this process of batch and sample result validation and release is needed as a guide for developers of methods, medical directors, and technicians.
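The batch-validation step might be sketched as a set of predefined acceptance criteria applied before release; the criteria and thresholds below are invented placeholders, since the articles deliberately do not prescribe numerical limits:

```python
# Illustrative batch-release validation: each criterion is predefined,
# and a batch is released only if every check passes. All thresholds
# are hypothetical; each laboratory defines its own validation plan.

def validate_batch(batch: dict) -> list:
    """Return the list of failed acceptance criteria (empty = release)."""
    failures = []
    if batch["calibration_r2"] < 0.995:
        failures.append("calibration fit below acceptance limit")
    if not all(80.0 <= r <= 120.0 for r in batch["qc_recoveries_pct"]):
        failures.append("QC recovery outside 80-120%")
    if batch["is_area_cv_pct"] > 15.0:
        failures.append("internal standard area CV too high")
    if batch["blank_carryover_pct"] > 1.0:
        failures.append("carryover detected in blank injection")
    return failures

batch = {
    "calibration_r2": 0.998,
    "qc_recoveries_pct": [96.2, 103.5, 99.1],
    "is_area_cv_pct": 6.4,
    "blank_carryover_pct": 0.2,
}
failures = validate_batch(batch)
released = not failures
print("release" if released else failures)
```

Making the criteria explicit in this way turns batch release into the predefined pass/fail validation process the text argues for, rather than an ad hoc per-laboratory judgment.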
In a recent article published in Clinical Mass Spectrometry, we propose a list of characteristics that should be considered for batch and sample release. In that article we only mention figures of merit and issues to be addressed, and do not claim to define specific numerical acceptance criteria. This generic list of items is therefore intended as a framework for the development of an individual batch and sample validation plan in a laboratory. Furthermore, we consider this list to be a living document, subject to further development and standardization as the field matures.
We believe that it is essential to include basic batch and sample release requirements as essential characteristics in the description of a method. Efforts to standardize method description and batch/sample release should therefore be synergistically linked to facilitate the use of MS in routine laboratories.
The approach proposed for clinical MS in these two companion articles [7, 11] can serve as a basis for discussion and, eventually, for the development of official standards for these areas by the Clinical and Laboratory Standards Institute (CLSI) and/or the International Organization for Standardization (ISO). We believe that these documents can provide a solid basis for internal and external audits of LC-MS/MS-based LDTs, which will become particularly relevant in the context of the In Vitro Diagnostic Regulation (EU) 2017/746 in the European Union.
Both approaches – standardized description of MS methods and standardization of batch release – aim at implementing methodological traceability. This corresponds to the analytical standardization and metrological traceability of measurements to higher order reference materials [13, 14].
In the future, larger-scale commercialization of MS-based black-box instruments is expected. However, LC-MS/MS will remain a critical technique for LDTs, and the flexibility of MS to develop tests on demand – independent of the IVD industry, on fully open LC-MS/MS platforms – will remain a key pillar of laboratory medicine.
Both publications that this article puts into context [7, 11] have been published in Clinical Mass Spectrometry, the first and only international journal dedicated to the application of MS methods in diagnostic testing, including best practice documents. Both articles are freely available.
Clinical Mass Spectrometry is the official journal of MSACL (The Association for Mass Spectrometry: Applications to the Clinical Laboratory; www.msacl.org). MSACL organizes state-of-the-art congresses that focus on translating MS from clinical research to diagnostic tests (i.e. bench to clinic).
In summary, we advocate innovative approaches to methodological standardization of LC-MS/MS methods to master the complexity of this powerful technology and to facilitate and promote its safe application in clinical laboratories worldwide.
Michael Vogeser*1 MD, Judy Stone2 PhD, Alan Rockwood3 PhD
1 Hospital of the University of Munich (LMU), Institute of Laboratory Medicine, Munich, Germany
2 University of California, San Francisco Medical Center, Laboratory Medicine, Parnassus Chemistry, San Francisco, CA, USA
3 Rockwood Scientific Consulting, Salt Lake City, UT, USA
* Corresponding author
Point-of-care (POC) testing has the potential to provide results in a much shorter time than centralized lab testing, allowing faster implementation of appropriate treatment. This article discusses the technological developments in POC tests, considerations for the implementation of a POC system, as well as how to ensure accurate and reliable results.