
Improved tools to diagnose venom allergies

Bee and wasp venom allergy is a potentially life-threatening condition, and diagnostic errors can therefore have serious consequences. Currently, the diagnosis of allergy to stinging insects relies on the patient's case history, quantification of specific IgE antibodies and skin prick testing to identify the responsible insect. However, the diagnosis can sometimes be problematic, as patients may have very low levels of specific IgE and many patients show positive test results to several venom species. Moreover, it is often difficult for the patient to identify the offending insect. Component-based specific IgE testing helps to increase the sensitivity of testing as well as to resolve which stinging insect species the patient is sensitized to. By applying these new component-specific IgE tests and including testing for serum tryptase, the certainty in identifying patients who will benefit from relevant and safe venom immunotherapy increases greatly.

by Magnus Borres, MD, PhD, MPH

Background
Venoms from stinging insects such as bees and wasps (Hymenoptera) can induce anaphylaxis in susceptible people, and stinging insects are the second most common cause of anaphylaxis in Europe and the USA (prevalence of 0.3 to 7.5% in Europe). Most of the severe and fatal reactions to insect stings in Europe are caused by members of the Vespidae family, commonly known as wasps. In contrast to many other IgE-mediated allergic reactions, venom allergies may arise very unexpectedly, as they can also affect individuals who do not have a genetic predisposition to make IgE antibodies.

The reactions elicited by a bee or wasp sting range from mild, immediate local reactions, through larger and often delayed local reactions, to immediate systemic reactions that can develop into life-threatening conditions requiring emergency treatment.

Markers and risk factors in venom allergies
The presence of specific IgE antibodies to venoms supports the diagnosis of an allergic reaction. In many patients, however, the levels are low, and there is no direct correlation between the levels of specific IgE antibodies and the risk of reactions. In fact, it is not uncommon for severe reactions to occur in patients with very low or sometimes even undetectable venom-specific IgE levels. This exemplifies the need for highly sensitive diagnostic tests that can detect and quantify very low specific IgE levels.
The risk of developing severe reactions after a Hymenoptera sting depends on several factors, such as the patient's history of previous reactions, serum tryptase levels, age and specific IgE sensitization. People who have already suffered severe systemic reactions to stings are predisposed to future reactions: up to 80% will develop severe reactions following a subsequent sting. However, in 50% of fatal cases no previous systemic reaction had occurred. Serum tryptase is an important marker for evaluating the risk of systemic reactions, where elevated baseline tryptase levels indicate a higher risk of severe anaphylactic reactions. Approximately a quarter of patients who experience severe venom reactions have elevated baseline levels of this marker. The risk of a severe reaction to venom stings also increases with age, and is higher in adults than in children and adolescents. This may be explained by an increased number of mast cells, in addition to other contributing clinical conditions, in older people.

Identify the little beast!
For patients who are highly allergic to insect stings, the treatment option is venom immunotherapy (VIT), which aims to induce tolerance. To select the most effective treatment, correct identification of the Hymenoptera species that causes reactions in the patient is crucial. This is, however, not trivial, as many patients do not know which insect stung them, and approximately 60% of patients test positive for both bee and wasp in venom extract-based tests.

Diagnostic in vitro tests in venom allergies
Patient history forms the basis of diagnosing a venom allergy, and specific IgE antibody test results can support the doctor in the diagnosis and in choosing the appropriate treatment. Whether the reaction in a patient is IgE mediated or not needs to be established. This is usually done by in vitro testing for specific IgE and/or skin prick testing, but as many as 10-20% of patients who seek medical care for sting-induced reactions are negative in these extract-based tests. The reason may be that the reaction was due to a pathogenic mechanism other than an allergic reaction, or it could have been caused by an underlying mast cell disease. With conventional extract-based tests, which owing to the preparation procedure may be low in certain allergenic proteins, the sensitivity may not be high enough to pick up certain sensitizations. In addition, patients reacting for the first time to a sting may initially have levels of venom-specific IgE below the detection limit.
On the other hand, it is common for patients to appear sensitized to both bee and wasp venoms when using extract-based specific IgE tests, even in cases where they are proven non-reactive to one of the species. Diagnostic tests capable of discriminating between clinically relevant and irrelevant sensitizations, while reliably detecting true co-sensitization to both species, greatly improve proper diagnosis and selection of therapeutic interventions.
There has thus been a need to increase both the sensitivity to detect low levels of IgE antibodies and the specificity to distinguish between sensitization to different Hymenoptera species. Recently this has become possible through the introduction of component-resolved diagnostics, or molecular allergology.

What is molecular allergology?
Molecular allergology allows the measurement of specific IgE antibodies to single, pure allergen molecules, thereby helping to identify the exact allergenic molecule (component) that a patient is sensitized to. All allergen sources contain several allergenic molecules, and the ability to produce these by recombinant means and assay them individually greatly increases the precision of specific IgE measurements. Using this component-resolved testing, it is possible to discriminate between species-specific sensitizations, where the patient is genuinely sensitized to the allergen source, and sensitizations due to cross-reactivity. Cross-reactivity occurs when antibodies directed against one molecule cross-recognize a very similar yet distinct protein. Such cross-reactivity may arise from the high similarity between some components in bees and wasps, but may also be caused by cross-reactive carbohydrate determinants (CCDs) on proteins in plants and invertebrates. CCD antibodies do not cause symptoms and are thus clinically irrelevant, but may greatly confuse test results. Recombinant components used in molecular allergology are free of CCD structures and are therefore very specific. In addition, tests that identify antibodies to CCDs are available to further increase diagnostic accuracy.

Molecular allergology improves the allergy diagnosis
Results from extract-based tests give the first, although crude, answers that guide the diagnosis, while further analyses using component-based testing take the diagnosis to a completely new level by offering improved test sensitivity, resolving ambiguous extract test results and guiding the selection of optimal treatment. In cases where the patient has a convincing history of bee or wasp allergy but extract-based tests turn out negative, allergen component testing offers increased sensitivity to detect relevant sensitizations. Because these tests contain only a single pure allergen component, the sensitivity to detect antibodies directed against this unique protein is increased compared with extract-based tests. However, the strength of extract-based tests is that they contain all relevant allergenic proteins in the allergen source, including minor allergens.

Component-based testing enables the discrimination between true co-sensitization and sensitization due to cross-reactivity. Extract-based test results may indicate sensitization to both bee and wasp venoms, but using component testing it is possible to investigate whether these sensitizations are clinically irrelevant or truly suggest allergy to both species. The recombinant markers for bee (Api m 1), common wasp (Ves v 1 and Ves v 5) and/or paper wasp (Pol d 5) should be used to determine unambiguously whether the sensitization is species-specific or not. CCD antibodies can also give rise to double-positive test results in the absence of specific Hymenoptera venom sensitization, since these antibodies are often induced by grass sensitization.

When extract-based tests turn out positive to either bee or wasp only, there is little question about which species the patient reacts to. Even though this indicates a true Hymenoptera venom sensitization, additional component-based testing can confirm whether the patient is sensitized to a major allergen of the relevant species. Venom immunotherapy may be more effective in patients who are sensitized to these major allergens.
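As an illustration only, the species-resolution logic described above can be sketched in a few lines of Python. The 0.35 kU/l cut-off, the reduced marker set (Api m 1 for bee, Ves v 5 for wasp, plus a CCD marker) and the returned wordings are simplifying assumptions for this sketch, not a clinical decision algorithm or any vendor's actual rule set.

```python
# Illustrative sketch: resolving a double-positive bee/wasp extract
# result with species-specific components. Thresholds and marker
# choices are assumptions for illustration only.

CUTOFF = 0.35  # kU/l, a commonly used specific-IgE positivity cut-off


def interpret_venom_components(api_m1, ves_v5, ccd):
    """Tentative interpretation from component results (kU/l):
    Api m 1 (bee), Ves v 5 (wasp) and the CCD marker."""
    bee = api_m1 >= CUTOFF
    wasp = ves_v5 >= CUTOFF
    if bee and wasp:
        return "true double sensitization"
    if bee:
        return "bee-specific sensitization"
    if wasp:
        return "wasp-specific sensitization"
    if ccd >= CUTOFF:
        # CCD antibodies alone can explain extract double-positivity
        return "CCD cross-reactivity; extract result likely clinically irrelevant"
    return "no species-specific sensitization detected"
```

In practice such a result would always be weighed against the patient's history and, where relevant, the full marker panel (including Ves v 1 and Pol d 5).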

New diagnostic tools in Hymenoptera venom allergy are now available for clinicians
The recent development of IgE tests against species-specific allergen components in Hymenoptera venom allergy offers diagnostic tools that greatly improve the ability to differentiate between bee and wasp sensitization, and help in discriminating between clinically relevant and irrelevant sensitizations.
Identification of the molecules that trigger the severe reaction is of vital importance for the clinician when considering venom immunotherapy. The combined use of venom components and tryptase optimizes the diagnosis and management of patients with suspected venom allergy. Currently, only Thermo Fisher Scientific, formerly known as Phadia, Uppsala, Sweden, has both tryptase and allergen component tests available on the same technology platform.

The author
By Magnus Borres, MD, PhD, MPH
Pediatric Allergist, Uppsala University Hospital, Uppsala, Sweden
Medical Director, ImmunoDiagnostics, Thermo Fisher Scientific, Uppsala, Sweden


Molecular allergology – probing deeper into the triggers of allergies

Molecular allergology is a cutting-edge technology that enables the triggers of allergies to be characterized to a new level of detail. Two new component-resolved immunoblot test systems provide in-depth profiling of allergic reactions against birch and grass pollens and against bee and wasp venoms. The molecular tests supplement the established Euroline allergy range, which comprises a comprehensive spectrum of application-oriented profiles designed for use in any diagnostic laboratory.

by Dr Jacqueline Gosink

Advanced diagnostic approach
Molecular allergology or component-resolved diagnostics is a novel approach to allergy diagnostics, whereby single purified allergen components (SPAC) are used for specific IgE detection in place of the usual whole extracts. This powerful technology introduces a new dimension to differential allergy diagnostics.

Precise, in-depth profiling
The raw allergen preparations of substances such as pollen that are traditionally used for in vitro allergy diagnostics are generally not well characterized and are thus difficult to standardize. In contrast, the allergenic targets used in molecular allergology tests are defined recombinant proteins, which are capable of delivering precise information about the source of sensitization.
The in-depth profiling enables allergologists to:

  • Identify disease-causing allergens
  • Assess the risk of cross reactions
  • Determine patients’ suitability for specific immunotherapy

Multiple pollen sensitizations
Pollen allergies are the most frequently occurring inhalation allergies, with sensitizations to birch and grass pollen being the most common. Typically, patients with multiple pollen sensitizations suffer from rhinitis, conjunctivitis and allergic asthma. The allergen extract-based determination of specific IgE antibodies encompasses sensitizations against major allergens and cross-reacting minor allergens.
The Euroline SPAC Pollen 1 profile (Figure 1) combines the major and minor allergens of birch (Bet v 1, Bet v 2, Bet v 4, Bet v 6) and timothy grass (Phl p 1, Phl p 5, Phl p 7, Phl p 12), allowing the differentiation of pollen cross reactions from true multiple pollen sensitizations.
The efficacy of the assay has been confirmed by clinical studies. In one study the test successfully confirmed sensitizations to birch or grass pollen in 77 patients with clinically and anamnestically diagnosed allergies (1), and in a further study the test verified allergic reactions in 44 patients with birch and grass pollen double sensitizations (2). Furthermore, the test system correlated well with comparable commercial assays, demonstrating an EAST class correlation of 95-100% for each of the allergen components.

Bee and wasp venom allergies
Bee and wasp venom stings can pose a problem in the summer months. Whereas a normal reaction to a sting involves local swelling, itching and reddening, persons with an allergy can develop severe systemic reactions, including anaphylactic shock. Bee and wasp venom reactions can be identified using the single allergen components i208 (bee venom) and i209 (wasp venom). i208 represents the main bee venom marker rApi m 1 from the honey bee (Apis mellifera), and i209 is the main allergen rVes v 5 from the common wasp (Vespula vulgaris). Both preparations are free of cross-reactive carbohydrate determinants (CCDs), providing higher reliability in result interpretation. The SPAC analysis allows true double sensitization to be distinguished from cross reactions between insect venoms. The Euroline SPAC Insect Venoms 1 profile (Figure 1) provides the recombinant antigens i208 and i209 together with the corresponding extracts i1 (bee venom) and i3 (wasp venom), allowing an efficient and comprehensive investigation of bee and wasp venom sensitizations in one test.

Fast and easy test procedure
The molecular allergology immunoblot tests are fast and simple to perform and are suitable for use in any diagnostic laboratory. The test procedure is based on established Euroline technology and consists of three basic steps: serum incubation (60 min), conjugate incubation (60 min) and chromogen substrate incubation (10 min). The in-between washing steps are short, and the entire procedure can be completed in 2.5 to 3 hours. All reagents are ready to use, saving time and reducing the risk of errors.

Only small amounts of sample material, typically 400 μl, are required per test. In a special volume-optimized version of the protocol the test can be performed with as little as 100 μl of patient sample, making it ideal for use in pediatrics.

Since the allergens are configured as a line blot with related allergens grouped together, the evaluation of profiles is effortless. Results are classified according to the RAST/EAST system. All profiles additionally include an indicator band of CCD to aid interpretation of the relevance of specific IgE results, for example in cases where positive IgE reactions are inconsistent with the clinical picture.
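As a rough sketch of how RAST/EAST grading works, a specific IgE result can be mapped onto classes 0 to 6 using the commonly cited concentration boundaries of 0.35, 0.70, 3.5, 17.5, 50 and 100 kU/l. Note that these are the conventional cut-offs, not necessarily this kit's calibration, and a blot assay derives the class from band intensity rather than directly from a concentration.

```python
# Sketch: conventional RAST/EAST class grading for specific IgE.
# Boundaries follow the widely used 0.35/0.70/3.5/17.5/50/100 kU/l
# convention; a given assay's calibration may differ.

THRESHOLDS = [0.35, 0.70, 3.5, 17.5, 50.0, 100.0]


def east_class(ku_per_l):
    """Return the RAST/EAST class (0-6) for a concentration in kU/l."""
    cls = 0
    for t in THRESHOLDS:
        if ku_per_l >= t:
            cls += 1
    return cls
```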

Fully automated processing
The standardized design of Euroline test strips allows automated processing using immunoblot incubators such as the EUROBlotOne (Figure 2). This advanced system automates the entire Euroline procedure from sample entry to report release. The compact, tabletop device has a high walkaway capacity: up to 44 strips can be incubated per run, and different tests can be combined in one run. All dilution, incubation and washing steps are performed automatically, and the integrated barcode scanner ensures that the correct samples are pipetted. User-friendly menus provide easy navigation, and error-detection features ensure high reliability. Test strips are subsequently digitalized using a special camera module.

Results are then automatically evaluated and archived using the worldwide-established and user-friendly EUROLineScan software. The software automatically identifies, quantifies and assigns bands, and a full results report is available within minutes of completing the incubation (Figure 3). The extensive individual data is administered and documented by the system, and all images and data are electronically archived, eliminating the need to store potentially infectious blot strips. The software can be easily integrated into LIS software, for example the EUROLabOffice system, for a smooth daily laboratory routine.

Comprehensive Euroline allergy range
The new molecular allergy tests are part of the established Euroline allergy range, which provides efficient multiparameter analysis of IgE antibodies against up to 36 different allergens in parallel. The immunoblots are composed from a wide portfolio of allergens, comprising both SPAC and native extracts which have been extensively purified and carefully quality controlled to ensure consistency. All profiles are application-oriented, each one being designed to address a particular diagnostic inquiry.

The Euroline system offers a very competitive price per allergen, making this system the ideal choice for laboratories wanting to perform state-of-the art allergy diagnostics on a small budget.

Perspectives
The advent of molecular allergology technology represents a quantum leap for allergy diagnostics. Component-resolved allergy test systems are unrivalled in the depth of diagnostic information they deliver and hence the level of support they provide for therapeutic decision-making. The Euroline SPAC range will soon be expanded to include further test systems based on this cutting-edge technology.

References
1. Weimann et al. 30th Annual Congress of the EAACI, Istanbul, Turkey, June 2011.
2. Weimann et al. 20th IFCC-EFLM European Congress of Clinical Chemistry and Laboratory Medicine (EuroMedLab), Milan, Italy, May 2013.

The author
Jacqueline Gosink PhD
Euroimmun AG
Luebeck, Germany


Autoantibodies against MDA-5: very important serological markers in amyopathic dermatomyositis with rapidly progressive interstitial lung disease

by Assoc. Prof. Y. Muro, Assoc. Prof. K. Sugiura and Prof. M. Akiyama

Autoantibodies against MDA-5 are serologically important biomarkers because they are mainly detected in patients with amyopathic dermatomyositis complicated with rapidly progressive interstitial lung disease (ILD). Anti-MDA-5 antibodies are useful not only for diagnosis but possibly also for monitoring disease activity in ILD.


The relevance of the manufacturer in indirect immunofluorescence standardization

Autoantibody detection is a powerful laboratory tool for clinical diagnosis in the field of autoimmune diseases. Among the most widely used techniques worldwide, indirect immunofluorescence (IFA) plays a particularly important role not only in the diagnosis but also in the follow-up of many diseases, and it remains a hallmark despite the introduction of new techniques into the routine of clinical laboratories. Witness to this is the renaissance of antinuclear antibody (ANA) screening on HEp-2 cells by this technique, and the renewed use of anti-endomysium antibody detection on monkey esophagus as the gold-standard serological test for celiac disease. IFA is therefore a fully current technique, yet one that requires a level of standardization that unfortunately is far from being achieved.

by Petraki Munujos, PhD

The efforts to improve standardization of indirect immunofluorescence as a diagnostic tool are numerous worldwide. Traditionally, the players involved in standardization have been clinical laboratories, clinicians, regulators and, to a lesser degree, diagnostic reagent manufacturers. Energy has been concentrated largely on aspects such as the control of laboratory procedures, unification of nomenclatures and classifications, guidelines on how to report results, preparation of recommendations, definition of diagnostic criteria and diagnostic algorithms, and development of external quality control programs. In these initiatives, laboratory staff, clinicians and regulators are mainly involved. Nevertheless, the aspects concerning the design, development and manufacturing of the reagents, which involve manufacturers, are basically ignored.

This is probably due to the fact that the evolution of the technology has led to a truncated view of the test procedure, resulting in a misconception of what needs to be standardized. In other words, the execution of many procedures is nowadays shared between the manufacturer, who actually initiates the assay, and the laboratory, where the test is finalized. In older scientific articles related to ANA, the Materials and Methods section usually started with the cell culture, the preparation of the slides and the fixation, among other steps, and the sample incubation was only one more step of the whole procedure. Currently, the Materials and Methods section starts with the sample preparation, and instead of a description of all the preliminary steps one finds the name and references of the manufacturer. Figure 1 illustrates the whole test procedure, highlighting the part performed in the clinical laboratory, which is actually the only part taken into consideration when dealing with standardization.
So, to ensure appropriate use of indirect immunofluorescence testing, clinicians, diagnostic laboratories, regulators and reagents manufacturers should be involved and share the tasks of identifying and managing the key points leading to proper results.

Evidence of disparity
At the level of the manufacturer, the potential variability in the performance of the kits lies in factors such as the reagents and materials that are purchased or manufactured to become components of the kit, the procedures and conditions of manufacturing (fixatives, temperatures, formulations), the reliability of the serum samples used to set up the calibration of the determination (basically, the sample dilution, which actually acts as the cut-off point), and the stability of the final product (1).

When considering the participation of the manufacturer in the standardization of antibody testing, it is apparent that what basically matters for industry is the standardization of the manufacturing processes. This normally occurs in an environment of quality system certifications, such as GMP, ISO 9001 or ISO 13485, and under the requirements of the European Directive on In Vitro Diagnostic Medical Devices, and it is strengthened by the manufacturer's own interest in having robust and reliable processes. Nevertheless, despite regulatory-compliant and well-implemented standardized processes, there are several aspects that make final reagents differ from one manufacturer to another. Some examples of variation in results depending on the manufacturer are reviewed below.

Dense fine speckles 70 (DFS70) antigen
As with other fluorescence patterns, the typical DFS pattern (lens epithelium-derived growth factor) can vary depending on the manufacturer of the HEp-2 slides used. The variations consist basically of different sensitivities, and even of positive and negative results for the same sample run on different slide brands. Inconsistencies are also observed when comparing fluorescence with the results obtained by ELISA (2,3).

Ribosomal P protein (Rib P)
In studies performed by Mahler et al. (4) to determine the sensitivity of the immunofluorescence technique for detecting antibodies against ribosomal P protein, HEp-2 slides from several different manufacturers were used, revealing significant differences in staining patterns for monospecific anti-Rib-P sera. Differing patterns were observed for the same sample, ranging from a fine speckled nucleoplasmic pattern to a diffuse cytoplasmic staining or a fine speckled cytoplasmic pattern.

CDC/AF Reference Human Sera
When running reference sera on HEp2 slides coming from different manufacturers, variations of unknown origin can be observed. While most brands produce the expected specific pattern, there are often differences among brands like the ones shown in Figure 2.

Labile nuclear antigens
Most of the patterns observed when analysing the presence of ANA in patients' sera by IFA on HEp-2 cell slides are suitably detected on most slide brands. However, there are some antigens whose expression may vary significantly from one manufacturer to another, such as Jo1, PCNA or SSA/Ro (5). These antigens are not always well preserved in the substrates and can be extremely sensitive to handling and to certain fixatives; in some cases, they can simply be washed out during the manufacturing process, resulting in a poor presence or a total lack of antigenic molecules available to capture the antibody being analysed.

Antineutrophil cytoplasmic antibodies (ANCA)

The neutrophil substrates used in the detection of ANCA may vary in their ability to give the typical immunofluorescence patterns described and established by consensus groups, i.e. a diffuse granular cytoplasmic staining with higher interlobular intensity (C-ANCA), a compact staining of the perinuclear zone of the cytoplasm (P-ANCA) and a broad non-homogeneous perinuclear staining, sometimes accompanied by a diffuse cytoplasmic pattern with no accentuation of the interlobular zone (X-ANCA). In general, substrates differ in their ability to distinguish between a C-ANCA and an X-ANCA. In a study by Pollock et al. (6), it was observed that, although all commercial neutrophil substrates consistently demonstrated nuclear extension of perinuclear fluorescence with sera containing P-ANCA with MPO specificity, there were more problems in P-ANCA testing than in C-ANCA testing, largely due to the occasional presence of additional cytoplasmic fluorescence.

Crithidia luciliae
In a similar way to HEp-2 cell immunofluorescence patterns, the anti-nDNA test on Crithidia luciliae slides may show significant differences among manufacturers. The variety of strains available in cell banks contributes to the heterogeneity of results. Apart from the kinetoplast, other organelles can be stained by antibodies from the sample, such as the nucleus, the basal body and the flagellum. Depending on the conditions of preparation of the C. luciliae substrate and on the nature of the sample analysed, different patterns of stained organelles can be observed. Nevertheless, the only specific staining to be considered a positive result is the kinetoplast staining. In addition to anti-nDNA antibodies, there are other antibodies in the serum of lupus patients that can react with the substrate. The so-called anti-nucleosome antibodies react with histones exposed in the nucleosome. It is well known that treating the C. luciliae substrate with HCl eliminates histones from the kinetoplast (7). This could be another point of possible discrepancy among manufacturing processes if some include the histone removal procedure and others do not. Furthermore, the cell cycle of C. luciliae may influence the appearance of histones in the kinetoplast. The manufacturing process of C. luciliae slides, including culture, harvest, fixation and drying, can therefore cause variation in the results.

Aspects contributing to variability
Among the players participating in autoimmune diagnostics, there is no doubt that manufacturers hold the know-how of preparing diagnostic kits and are the true experts in the development of test methods. However, despite the standardized manufacturing processes and the CE certifications or FDA approvals, several aspects are found to be sources of variability. These aspects should be addressed, and recommendations on key points should be created by specialized committees with the participation of laboratory experts, clinicians and manufacturers.

The definition and control of the raw materials incorporated in kit production is a common and regulated practice in any kind of manufacturing process. But recommendations on the nature, composition or quality grades of key materials, including culture media, cell type and strain or fluorescent conjugates, are still lacking. In the case of tests based on cellular substrates, extracellular matrix (ECM) proteins are commonly used to aid the spreading and growth of cells on the glass slide surface. Many ECM proteins contain defined amino acid sequences to which cell surface integrin receptors bind specifically. ECM, together with growth factors in the culture medium, works to produce an appropriate in vitro proliferative response, promoting cell growth and spreading. Altering cell-ECM contacts results in coordinated changes in cell, cytoskeletal and nuclear form. Thus, the choice of the right ECM to coat the glass slides used as the growing surface deserves attention, since it might have a direct effect on the fluorescent pattern finally observed (8). It is also common to use synchronization agents to achieve a greater rate of mitotic cells. Because these compounds may be toxic to the cell, disturbances may occur that impact the morphology or behaviour of the final cell preparation.

Diagnosis by means of tissue sections remains very important in autoimmune liver diseases such as autoimmune hepatitis (AIH) or primary biliary cirrhosis (PBC). In particular, the detection of anti-smooth muscle antibodies (ASMA), antibodies to liver-kidney microsomes (LKM antibodies) and anti-mitochondrial antibodies (AMA) is considered an important diagnostic tool. Only a few guidelines have been published on the preparation of tissue sections (9), while variations in the preparation of tissue blocks regarding orientation, preservation conditions and sectioning continue to contribute to the heterogeneity of results, especially in the case of tissues that are not morphologically homogeneous. For instance, LKM antibodies can only be well defined if the kidney section has the proper orientation that allows the distinction between proximal and distal renal tubules and, thus, between LKM and AMA.

Considering that the expression and topographical distribution of autoantigens is under the direct influence of the HEp-2 fixation method, some immunofluorescence patterns are not adequately expressed because of the way the antigenic substrate is prepared. This aspect equally affects tissue and cell substrates. As for the sensitivity of the tests, differences among manufacturers are due to the use of fixatives to prolong shelf-life. The use of slides without fixation seems to be the best choice for most autoantibody patterns. Nevertheless, several staining patterns require the substrate to be fixed (Figure 3), such as anti-islet cell antibodies or anti-adrenal cortex antibodies.

A less frequent but significant source of variability in immunofluorescence on tissue sections is the origin of the animal used (Figure 4). Suitable species and strains should be defined in cases where the levels of antigen expression may differ. This affects the sensitivity of the test, especially in samples with moderate or low antibody titers.

Considering the complexity and diversity of manufacturing processes and subprocesses and their impact on the final test performance, it is important to combine the efforts of laboratory experts, clinicians and manufacturers in the task of standardizing those key aspects that could otherwise continue to undermine the successful harmonization of the results obtained in the clinical laboratory.

References
1. Fritzler MJ, Wiik A, Fritzler ML, Barr SG. The use and abuse of commercial kits used to detect autoantibodies. Arthritis Res Ther 2003, 5:192-201
2. Bizzaro N, Tonutti E, Villalta D. Recognizing the dense fine speckled/lens epithelium-derived growth factor/p75 pattern on HEp-2 cells: not an easy task! Comment on the article by Mariz et al. Arthritis Rheum 2011;63(12):4036-4037
3. Mahler M. The clinical significance of anti-DFS70 antibodies as part of ANA testing. In: K. Conrad, E.K.L. Chan, M.J. Fritzler, R.L. Humbel, P.L. Meroni, G. Steiner, Y. Shoenfeld (Eds.). Infection, Tumors and Autoimmunity, AUTOANTIGENS, AUTOANTIBODIES, AUTOIMMUNITY, Volume 9, p.342-350. PABST, 2013.
4. Mahler M, Ngo JT, Schulte-Pelkum J, Luettich T, Fritzler MJ. Limited reliability of the indirect immunofluorescence technique for the detection of anti-Rib-P antibodies. Arthritis Research & Therapy 2008, 10:R131
5. Dellavance A, de Melo Cruvinel W, Carvalho Francescantonio PL, Pitangueira Mangueira CL, Drugowick IC, Rodrigues SE, Coelho Andrade LE. Variability in the recognition of distinctive immunofluorescence patterns in different brands of HEp-2 cell slides. J Bras Patol Med Lab 2013; 49(3): 182–190.
6. Pollock W, Clarke K, Gallagher K, Hall J, Luckhurst E, McEvoy R, Melny J, Neil J, Nikoloutsopoulos A, Thompson T, Trevisin M, Savige J. Immunofluorescent patterns produced by antineutrophil cytoplasmic antibodies (ANCA) vary depending on neutrophil substrate and conjugate. J Clin Pathol 2002; 55: 680–683.
7. Kobkitjaroen J, Jaiyen J, Kongkriengdach S, Potprasart S, Viriyataveekul R. Comparison of Three Commercial Crithidia luciliae Immunofluorescence Test (CLIFT) Kits for Anti-dsDNA Detection. Siriraj Med J 2013;65:9-11
8. Hansen LK, Mooney DJ, Vacanti JP, Ingber DE. Integrin binding and cell spreading on extracellular matrix act at different points in the cell cycle to promote hepatocyte growth. Mol Biol Cell 1994; 5: 967–975.
9. Vergani D, Alvarez F, Bianchi FB, Cançado ELR, Mackay IR, Manns MP, Nishioka M, Penner E. Liver autoimmune serology: a consensus statement from the committee for autoimmune serology of the International Autoimmune Hepatitis Group. Journal of Hepatology 2004;41: 677–683


Neurocysticercosis: can we trust serology?

What is the most common parasitic disease of the nervous system, the leading cause of seizures and acquired epilepsy in the developing world, yet still preventable? The answer: neurocysticercosis, an orphan disease suffering from the absence of a true ‘gold standard’ diagnosis. Meanwhile, many laboratories perform immunodiagnosis, but what is its real value and what can it tell us?

by Dr Jean-François Carod

What is neurocysticercosis?
Cysticercosis of the central nervous system (neurocysticercosis) is caused by the larval stage (cysticerci) of the pork tapeworm Taenia solium. When people eat undercooked pork containing viable cysticerci, they develop an intestinal tapeworm infection (Fig. 1). Humans can also become intermediate hosts, however, by directly ingesting T. solium eggs shed in the feces of human tapeworm carriers. These eggs then develop into cysticerci, which migrate mostly into muscle (causing cysticercosis) and into the central nervous system, where they can cause seizures and many other neurological symptoms (neurocysticercosis, NCC). NCC is a major cause of epilepsy in endemic countries and the most important neurological disease of parasitic origin in humans. The pathogenesis is unclear, but symptoms seem to correlate with the stage of the cyst: starting as a viable entity, the cyst gradually degenerates and becomes calcified. Seizures seem to appear at the degenerating and calcified stages, whereas treatment is effective only on living cysts. Human cysticercosis is endemic in the Andean area of South America, Brazil, Central America and Mexico; China, the Indian subcontinent and South-East Asia; and Sub-Saharan Africa, including Madagascar.

Why do we need to diagnose it?
Diagnosing NCC is required in the event of unexplained encephalitic disorders such as first onset of seizures in countries where NCC is endemic or in patients travelling in countries where NCC is endemic and who may have been at risk of infection (e.g. exposed to NCC risk factors, such as inadequate hand and food hygiene).

How can it be diagnosed?
The diagnosis of cysticercosis of the central nervous system involves the interpretation of non-specific clinical manifestations, such as seizures, often with characteristic findings on computed tomography (CT) or magnetic resonance imaging (MRI) of the brain, and the use of specific serological tests (Fig. 2). Diagnostic criteria based on objective clinical, imaging, immunological and epidemiological data have been proposed but are not generally used in areas endemic for the disease [1].

Serology is indicated for the diagnosis of T. solium seropositivity. However, there is a huge gap between a positive serology and an assessment of the NCC diagnosis: a positive T. solium serology is not predictive of a neurological localization, and serology may remain positive years after the end of the infection.
No single test can lead to a definitive diagnosis of NCC. CT or MRI may be performed on the presentation of clinical symptoms that could be attributed to NCC (first onset of seizure, unexplained headache, etc.) in people who were exposed to NCC risk factors. Imaging may show typical ring lesions with or without inflammation and calcification. However, the image is not pathognomonic of NCC unless hooks (scolex) are visible inside the ring; a positive serology may then provide the clue. A positive antibody serology may be confirmed by Western blot or enzyme-linked immunoelectrotransfer blot (EITB), which shows the typical bands specific to T. solium glycoproteins. Antigen detection in the blood can also be performed; this test is specific for T. solium and does not require laboratory confirmation. Both antigen and antibody assays can be performed on cerebrospinal fluid (CSF), and the presence of antibody or antigen in the CSF may contribute towards establishing the neurological localization of the disease. In developing countries, the regions most affected by T. solium infection, CT and, of course, MRI are unaffordable, if available at all.

What are the current laboratory tools?
The laboratory diagnosis of cysticercosis is essentially immunodiagnostic, based primarily on antibody detection by ELISA (enzyme-linked immunosorbent assay) or immunoblot.
The detection of antibodies against T. solium is a common method of diagnosing infection, but has many limitations, as a single-cyst carrier may not be easily detected. Commercially available tests essentially comprise ELISAs and Western blots. Western blots are the ‘gold standard’ assays for the detection of specific antibodies against T. solium. The reference Western blot assay remains the one developed at the Centers for Disease Control (CDC), Georgia, USA, by Tsang et al. [2]; it employs a specific fraction of T. solium cysts, many components of which have been identified and cloned. The test is very specific for exposure and/or disease and for confirming the diagnosis. Both ELISA tests and Western blots rely on antigens that have varied significantly over the years (Fig. 3) [3]. Historically, the first assays used crude soluble extracts, then purified proteins such as lentil lectin glycoproteins (LLGPs). Recent trends, though not yet commercialized, emphasize the use of recombinant proteins. Designing recombinant antigens requires a proteomic approach (Fig. 4) that is now frequently used in development units. Current studies propose the use of nanobodies for diagnostic purposes. These evolutions have increased both the sensitivity and the specificity of the tests.

Another available technique is based on the detection of circulating parasitic antigens using monoclonal antibodies [4]. This test is capable of detecting single cyst carriers and is more specific than available antibody ELISA tests. Its main advantage is its ability to monitor the response to cysticidal therapy.

Understanding the performance assessment of T. solium detection tests
Most commercially available ELISA tests have been evaluated using poor methodology. Verifying that a performance evaluation used the proper method means ensuring that the study assessed test sensitivity with a serum bank of parasitologically defined sera. Defined cysticercosis sera should ideally include sera from patients with two or more viable cysts, a single viable cyst, degenerating cysts and calcified cysts.
Each series should initially be tested separately. Parasitologically defined sera should correspond to the Del Brutto criteria [1]. In the absence of a true ‘gold standard’ for the diagnosis of neurocysticercosis, positive sera (cases) should be taken from patients with (1) an absolute diagnosis of NCC, or (2) a probable NCC diagnosis.
     
The test specificity should be carefully evaluated using defined negative and potentially cross-reactive sera. Negative (control) sera should be taken from the same area and, if possible, from people exposed to the same risk factors as the positive cases, matched for age and sex. Negative cases are usually taken from blood donors in developed countries; as these people have not been in contact with many parasitic infections, the resulting specificity estimate will not be accurate or reliable for use in developing countries. This is why specificity should be assessed not only on negative samples from Western countries but also on sera from other parasitic infections in cysticercosis-free developing countries.
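The evaluation logic described above can be made concrete with per-panel sensitivity and overall specificity computed as simple proportions. All counts below are hypothetical and purely illustrative; the pattern of much lower sensitivity in the single-cyst and calcified-cyst panels simply mirrors the limitation discussed in the text.

```python
def sensitivity(true_pos, false_neg):
    # Proportion of parasitologically confirmed cases detected by the assay
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    # Proportion of negative/cross-reactive sera correctly reported negative
    return true_neg / (true_neg + false_pos)

# Hypothetical counts per parasitologically defined serum panel;
# each panel is evaluated separately, as recommended above.
panels = {
    ">=2 viable cysts":   {"tp": 48, "fn": 2},
    "single viable cyst": {"tp": 14, "fn": 16},  # single-cyst carriers often missed
    "calcified cysts":    {"tp": 9,  "fn": 21},
}
for name, c in panels.items():
    print(name, round(sensitivity(c["tp"], c["fn"]), 2))

# Specificity assessed on cross-reactive sera (e.g. other helminth infections)
print(round(specificity(true_neg=92, false_pos=8), 2))  # 0.92
```

Reporting sensitivity per panel, rather than as one pooled figure, prevents a test that only detects heavy infections from appearing adequate overall.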

What are the new trends in laboratory tests?
Although only immunodiagnostic tools based on antibody or antigen detection are currently commercialized, new approaches have been developed, including molecular biology (gene amplification, mostly in CSF) (Fig. 5). However, so far none constitutes a ‘gold standard’. Table 1 summarizes the pros and cons of NCC diagnostic tools.

Conclusions and future
A test is reliable and useful if it contributes to an improvement in care, that is to say, to appropriate therapy for all patients. As for NCC, the decision to treat is still subject to controversy. Furthermore, even basic serologies are unaffordable or unavailable in endemic countries, not to mention imaging. The key will be the development of a reliable rapid test able to screen infected patients that correlates with the neurological lesions caused by cysticerci.

References
1. Del Brutto OH. Diagnostic criteria for neurocysticercosis, revisited. Pathog Glob Health 2012; 106(5): 299–304.
2. Tsang VC, Brand JA, Boyer AE. An enzyme-linked immunoelectrotransfer blot assay and glycoprotein antigens for diagnosing human cysticercosis (Taenia solium). J Infect Dis. 1989; 159(1): 50–59.
3. Esquivel-Velázquez M, Ostoa-Saloma P, Morales-Montor J, Hernández-Bello R, Larralde C. Immunodiagnosis of neurocysticercosis: ways to focus on the challenge. J Biomed Biotechnol. 2011; 2011: 516042. Doi:10.1155/2011/516042.
4. Garcia HH, Harrison LJ, Parkhouse RM, Montenegro T, Martinez SM, Tsang VC, Gilman RH. A specific antigen-detection ELISA for the diagnosis of human neurocysticercosis. The Cysticercosis Working Group in Peru. Trans R Soc Trop Med Hyg. 1998; 92(4): 411–414.

The author
Jean-François Carod Pharm D, MSc
Laboratoire de Biologie Médicale, GCS de l’ARC Jurassien, Centre Hospitalier Louis Jaillon, 2 Montée de l’hôpital, 39200 Saint-Claude, France.
E-mail: jean-francois.carod@ch-stclaude.fr


Electronic alerts for acute kidney injury: the role of the laboratory

Acute kidney injury is a common and serious complication of many hospital admissions, yet there are often delays in recognizing its development. The laboratory can play a key role in ensuring large increases in serum creatinine do not go unnoticed so that deteriorating patients receive prompt medical attention.

by Nick Flynn

Introduction
Acute kidney injury (AKI) is a sudden decline in renal function, generally occurring over hours or days. AKI is increasingly recognized as a common healthcare problem associated with poor outcomes such as increased mortality and progression of chronic kidney disease [1], prolonged hospital stay and increased healthcare costs [2]. There is also evidence that management of patients with AKI is sometimes poor: in the UK, a National Confidential Enquiry into Patient Outcome and Death (NCEPOD) report found severe deficiencies of care in a cohort of patients who died with a primary diagnosis of AKI [3]. For example, there was often a delay in recognizing post-admission AKI. This has prompted some hospitals to implement electronic alerts (e-alerts) to systematically detect and highlight cases of AKI. As current definitions of AKI are based mainly upon changes in serum creatinine, laboratories are well placed to implement these systems (Table 1) [4]. This review will briefly discuss options for e-alerts, some considerations for their implementation, and the evidence base for their use.

AKI e-alerts
The aim of AKI e-alert systems is to improve the outcomes of patients by facilitating earlier recognition and treatment of AKI. E-alerts may be triggered by a variety of different criteria, ranging from a single threshold creatinine value to full application of AKI diagnostic criteria. This may result in an automated comment being appended to the creatinine result, a phone call, email or text message to the requesting doctor, nephrologist or critical care outreach team, or a combination of the above. The intention is for the alert to prompt medical attention for these high-risk deteriorating patients, with a resulting improvement in patient outcomes (Fig. 1). The most successful e-alert systems are therefore likely to combine the alert with a clinical protocol for AKI management, and should be developed in collaboration with clinical colleagues.

Choosing alert criteria
Although a single threshold creatinine (for example, 300 µmol/L) is the simplest approach, this lacks both sensitivity and specificity for AKI. Creatinine may need to rise significantly before reaching the threshold, so the speed at which AKI is recognized may not be improved. In addition, depending on the population served by the laboratory, a large number of elevated creatinine results are likely to be from patients with stable chronic kidney disease, rather than AKI.
Accuracy can be improved by applying a ‘delta check’ to flag an absolute or percentage increase in creatinine, for example, a 75% increase [5]. Most modern laboratory information management systems can apply at least one delta check for creatinine, and some can run multiple checks with different criteria. Finally, some systems aim to apply current definitions in full, such as those recommended by KDIGO (Table 1) [4].
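The KDIGO creatinine criteria can be sketched as a simple delta-check routine. This is a minimal illustration, not production LIMS code; the function name and example values are hypothetical, and the look-back windows applied (48 hours for the absolute rise, 7 days for the 1.5-fold rise over baseline) are the commonly cited KDIGO intervals.

```python
from datetime import datetime, timedelta

# Illustrative thresholds from the KDIGO creatinine criteria (Table 1):
# flag AKI if creatinine rises by >=26.5 umol/L within 48 h, or to
# >=1.5x a value known or presumed within the prior 7 days.
ABS_RISE_UMOL_L = 26.5
RATIO_RISE = 1.5

def kdigo_creatinine_alert(current, current_time, history):
    """history: list of (datetime, creatinine in umol/L) earlier results.
    Returns True if either KDIGO creatinine criterion is met."""
    for t, value in history:
        age = current_time - t
        if age <= timedelta(hours=48) and current - value >= ABS_RISE_UMOL_L:
            return True
        if age <= timedelta(days=7) and value > 0 and current / value >= RATIO_RISE:
            return True
    return False

# Example: a rise from 80 to 130 umol/L within 24 h triggers an alert
history = [(datetime(2014, 3, 1, 9, 0), 80.0)]
print(kdigo_creatinine_alert(130.0, datetime(2014, 3, 2, 9, 0), history))  # True
```

In practice such a check would feed the alerting pathway described above (an appended comment, or a message to the requesting clinician) rather than simply returning a flag.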

Accurately estimating baseline creatinine is difficult
A problem faced by both simple delta checks and e-alerts based on AKI definitions is the difficulty of reliably estimating baseline creatinine. A system employing manual estimation of baseline by clinical biochemists at the Royal Derby Hospital has been shown to have good diagnostic accuracy for the detection of AKI, with a false negative rate of 0.2% and a false positive rate of 1.7% [6]. However, this approach is limited to normal working hours, and many laboratories do not have the resources to replicate this labour-intensive system. Instead, automatic surrogate estimation methods are used, such as the lowest, most recent or median creatinine value within a certain timeframe, for example the previous three months. Laboratories should be aware of the limitations of some of these estimation methods; for example, the lowest creatinine result has been shown to be a particularly poor estimate of baseline creatinine that can lead to high rates of potential AKI misclassification [7].
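The surrogate baseline methods mentioned above (lowest, most recent or median value within a look-back window) can be compared in a short sketch. The function and example values are hypothetical; the point is that the ‘lowest’ method is pulled down by a single outlying result, inflating the apparent creatinine rise and hence the false alert rate.

```python
import statistics
from datetime import datetime, timedelta

def estimate_baseline(results, now, window_days=90, method="median"):
    """Estimate baseline creatinine from (datetime, value) pairs within
    the look-back window, using one of the surrogates discussed above:
    'lowest', 'most_recent' or 'median'."""
    window = [v for t, v in results if now - t <= timedelta(days=window_days)]
    if not window:
        return None
    if method == "lowest":
        return min(window)
    if method == "most_recent":
        # value with the latest timestamp inside the window
        return max((t, v) for t, v in results
                   if now - t <= timedelta(days=window_days))[1]
    return statistics.median(window)

now = datetime(2014, 6, 1)
results = [(now - timedelta(days=80), 95.0),
           (now - timedelta(days=40), 60.0),   # a single low outlier
           (now - timedelta(days=10), 98.0)]
print(estimate_baseline(results, now, method="lowest"))  # 60.0
print(estimate_baseline(results, now, method="median"))  # 95.0
```

With a current creatinine of 110 µmol/L, the ‘lowest’ baseline suggests an 83% rise where the median suggests only 16%, illustrating the misclassification risk reported in reference [7].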

Should every case fulfilling AKI criteria be highlighted?
When choosing criteria for an e-alert system, it may seem sensible to use current definitions of AKI. However, there are arguments against this approach. The KDIGO definition of AKI relies on small changes in serum creatinine, based on epidemiological studies showing that even these small increases are associated with an increased mortality risk in large populations [2]. However, in many cases an increase of 0.3 mg/dL (26.5 µmol/L) is within the range of normal biological variation, particularly amongst patients with chronic kidney disease. As an illustrative example, creatinine increased by between 69% and 129% after the consumption of 300 g of animal protein in healthy volunteers, even when creatinine was measured using a specific enzymatic method [8]. The limitations of the more widely used Jaffe method for serum creatinine are well known amongst laboratory professionals, and any of a wide range of non-creatinine chromogens may cause an increased result in the absence of renal disease. When KDIGO criteria are combined with a poor method of baseline estimation (the lowest previous creatinine), the proportion of creatinine results causing an AKI e-alert can approach 10%; this is unlikely to be helpful. Strict application of current AKI definitions could therefore lead to annoyance and unresponsiveness amongst clinicians alerted to minor creatinine elevations, unnecessary interventions, anxiety for patients and families, and diversion of limited healthcare resources to a large and relatively low-risk group. It is therefore important for laboratories to consider both local IT and resource capabilities and the relative benefits and harms of different criteria for e-alerts before implementation.

Evidence base
A small number of studies have investigated the effect of AKI e-alerts on clinician behaviour or patient outcomes. For example, a real-time alert of worsening AKI stage through a text message sent to the clinician’s telephone was found to increase the number of early therapeutic interventions in an ICU in Belgium [9]. There was also an increase in the proportion of patients who recovered their renal function within 8 hours after an alert indicating less severe AKI, but not amongst those with more severe AKI. There was no significant effect on renal replacement therapy, ICU length of stay, mortality, maximum creatinine or maximum AKI stage. Importantly, 9 out of 10 AKI alerts were based on urine volume criteria, so the applicability of these findings to creatinine based e-alerts is questionable.
Hospitals that have already implemented AKI e-alerts have noted improved outcomes following their introduction. For example, a hospital-wide e-alert system based on changes in serum creatinine at the Royal Derby Hospital led to a progressive reduction in 30-day mortality over consecutive 6-month periods (23.7%, 20.8%, 20.8%, 19.5%, chi-square for trend P=0.006) [10]. This improvement in survival was maintained after adjustment for age, co-morbid conditions, severity of AKI, elective/non-elective admission and baseline renal function. However, the e-alert was introduced as part of a range of educational interventions, so it is difficult to determine the contribution made by the e-alert component.
The evidence base for AKI e-alerts is therefore not strong, and would benefit from further studies to demonstrate that this approach can lead to measurable improvements in patient outcomes.

Conclusions
E-alerts represent an opportunity for the laboratory to assist in the early detection of acute kidney injury. This could improve the outcomes of patients with this life threatening condition. Aside from AKI, there are undoubtedly many other opportunities for the laboratory to optimize existing resources by helping clinicians to digest the large amount of laboratory data produced on a daily basis, to highlight trends and to ensure that important changes are recognized and acted upon. The laboratory can play a key role to ensure that these systems are implemented, that they are effective in selectively capturing a high risk population, and that evidence is gathered to justify their continued use.

References
1. Coca SG, et al. Long-term risk of mortality and other adverse outcomes after acute kidney injury: a systematic review and meta-analysis. Am J Kidney Dis. 2009; 53(6): 961–973.
2. Chertow GM, et al. Acute kidney injury, mortality, length of stay, and costs in hospitalized patients. J Am Soc Nephrol. 2005; 16: 3365–3370.
3. Stewart J, et al. Adding Insult to Injury: a review of the care of patients who died in hospital with a primary diagnosis of acute kidney injury (acute renal failure). A report by the National Confidential Enquiry into Patient Outcome and Death. London: NCEPOD, 2009. www.ncepod.org.uk/2009report1/Downloads/AKI_report.pdf
4. Kidney Disease: Improving Global Outcomes (KDIGO) Acute Kidney Injury Work Group. KDIGO Clinical Practice Guideline for Acute Kidney Injury. Kidney Int. Suppl. 2012; 2: 1–138.
5. Thomas M, et al. The initial development and assessment of an automatic alert warning of acute kidney injury. Nephrol Dial Transplant 2011; 26: 2161–2168.
6. Selby N, et al. Use of electronic results reporting to diagnose and monitor aki in hospitalized patients. Clin J Am Soc Nephrol. 2012; 7: 533–540.
7. Siew ED, et al. Estimating baseline kidney function in hospitalized patients with impaired kidney function. Clin J Am Soc Nephrol. 2012; 7: 712-719.
8. Butani L, et al. Dietary protein significantly affects the serum creatinine concentration. Kidney Int. 2002; 61: 1907.
9. Colpaert K, et al. Impact of real-time electronic alerting of acute kidney injury on therapeutic intervention and progression of RIFLE class. Crit Care Med. 2012; 40: 1164–1170.
10. Kohle N, et al. Impact of a combined, hospital-wide improvement strategy on the outcomes of patients with acute kidney injury (AKI) [abstract]. Joint Congress of the British Transplantation Society & Renal Association, 2013. Bournemouth. Abstract O30. www.btsra2013.com/

The author
Nick Flynn, Pre-registration clinical scientist
Department of Clinical Biochemistry, University College London Hospitals, London, UK
E-mail: nick.flynn@nhs.net


Pre-eclampsia: the good and bad news

Affecting around one in twenty pregnancies, pre-eclampsia is a leading cause of fetal morbidity and mortality globally. Around half a million babies die as a result of the condition annually. Severe pre-eclampsia, leading to eclampsia characterized by seizures, is also the second leading cause of maternal mortality (after hemorrhage) in most countries: an estimated 76,000 women die from it each year. A diagnosis of this multisystemic disorder has classically been made if hypertension and proteinuria are present. Pre-eclampsia can only be resolved by delivery of the placenta, thus management must weigh the severity of the condition against the risk to the fetus of an induced, premature delivery.
The launch of a rapid test measuring the plasma level of placental growth factor (PLGF), a biomarker of placental function, four years ago offered the possibility of a more timely diagnosis of pre-eclampsia and its severity, which could facilitate optimal management for both mother and baby, including the administration of corticosteroids to accelerate fetal lung development prior to premature delivery. The level of PLGF normally rises during pregnancy up to 26 to 30 weeks’ gestation, and then falls until term, but it is abnormally low in women with pre-term pre-eclampsia. Recently, the published results of a large multicentre study using this rapid test made very encouraging reading. During the study, PLGF was measured in 625 pregnant women between 20 and 35 weeks’ gestation with suspected pre-eclampsia. The condition was confirmed in 55% of these women, the outcome measure being delivery within 14 days. The authors concluded that the test had high sensitivity in women presenting with suspected pre-eclampsia before 35 weeks’ gestation, and indicated the need for delivery better than other diagnostic methods.
Although this research is good news for pregnant women and their babies, another aspect of pre-eclampsia has largely been ignored and is not generally known by either health workers or women themselves, namely the subsequently increased health risk in older women who suffered from pre-eclampsia in pregnancy. A robust meta-analysis has linked the condition with a fourfold increased risk of hypertension, and a twofold increased risk of ischemic heart disease, stroke and venous thromboembolism, later in life. A recent study from Australia found that the endothelial dysfunction associated with pre-eclampsia persists, causing the increased risk. At the very least, previous pre-eclampsia should be flagged as important in an older woman’s medical history!