
2/2 of 134    DIMDI: AnimAlt-ZEBET (ZT00) © BfR (ZEBET) 2008


ND:        ZEBET203

MNR:    203

LR:          29.06.2007

SH:         Immunology

Title:      Production of antibodies by IgY technology in the egg yolk of chicken

Language:           english

UT:         IgY; ZEBET; animal experiments; animal testing alternatives; animal use alternatives; animal welfare; antibodies; antibodies, avian; antibodies, chicken; antibodies, monoclonal; antibodies, polyclonal; antibody production; bleed; bleeding; blood; blood-letting; chicken; ducks; egg yolk; goats; guinea pigs; hens; hens, laying; horses; immunization; immunization, DNA; immunization, passive; immunoglobulin Y; immunoglobulins; immunology; mice; number of animals; pharmacy; production, antibody; rabbits; rats; reduction; refinement; sheep; strains, inbred; vaccination; veterinary medicine; yolk

EV:         Reduction,Refinement

STA:       Scientific Acceptance

VOR:     Not used for regulatory purposes


Polyclonal antibodies are usually obtained from rabbits, but also from mice, rats and guinea pigs; larger amounts are obtained from sheep, goats and horses. Two invasive procedures are generally necessary for polyclonal antibody production, both of which cause distress and pain to the animals. First, the animals have to be injected with the relevant antigen to activate the immune system; the initial immunisation is usually followed by booster injections. Some time later, blood is collected to isolate and purify the desired antibodies. In contrast, isolation of antibodies from birds does not require blood sampling.


In immunised chicken hens, the kinetics of specific antibody generation are comparable to those in rabbits. However, a significant portion of the antibodies produced is transferred from the serum to the egg. Between 50 and 100 mg of immunoglobulin can be isolated from the egg yolk, of which up to 10 mg is specific antibody. Moreover, procedures for isolation from egg yolk are simple (e.g. Schade et al., 1996 and 2005). Chicken egg yolk immunoglobulin is phylogenetically distinct from mammalian IgG and is therefore called IgY.
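As a rough illustration of what these per-egg figures imply over a year of laying, the following back-of-the-envelope calculation can be sketched. The per-egg quantities are those quoted above; the laying rate of five eggs per week is an assumed round number for the sake of arithmetic, not a figure from the source:

```python
# Back-of-the-envelope yield estimate for IgY from egg yolk.
# Per-egg figures (50-100 mg total IgY, up to 10 mg specific antibody)
# are quoted in the text above; the laying rate is an assumption.

eggs_per_week = 5                   # assumed laying rate (illustrative)
total_igy_per_egg_mg = (50, 100)    # range quoted in the text
specific_igy_per_egg_mg = 10        # upper bound quoted in the text

weeks = 52
low = eggs_per_week * weeks * total_igy_per_egg_mg[0] / 1000   # grams/year
high = eggs_per_week * weeks * total_igy_per_egg_mg[1] / 1000  # grams/year

print(f"Total IgY per hen per year: {low:.0f}-{high:.0f} g")
print(f"Specific antibody per hen per year: up to "
      f"{eggs_per_week * weeks * specific_igy_per_egg_mg / 1000:.1f} g")
```

Even under these assumptions, a single hen yields on the order of tens of grams of total IgY per year without any bleeding.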

Polyclonal IgY antibodies can be used in the same way as IgG antibodies, but offer advantages in specificity, cross-reactivity and sensitivity that permit their use in special applications. In chickens, a higher percentage of specific antibody against mammalian antigens can be produced, especially when the antigen is a highly conserved protein (e.g. Calzado et al., 2001). In addition, chicken IgY binds neither complement nor human or bacterial Fc receptors, and it does not cross-react with mammalian IgG (see the review by Schade et al., 2005). More recently, the so-called IgY technology has been used to generate gene-specific antibodies for diagnostics and drug discovery (e.g. Zhang, 2003; Cova, 2005). Other bird species providing IgY for research and diagnostic purposes include quails and breeding ducks (e.g. Cova, 2005).


Isolation of polyclonal antibodies from bird eggs constitutes a refinement compared with traditional approaches based on drawing blood from mammals. Because a single chicken or duck egg contains as much antibody as an average bleed from a rabbit, this simple, economical and non-invasive method may also obviate the killing of laboratory animals in some instances. Avian antibodies (IgY) can be used for the same wide range of applications as mammalian antibodies (IgG), and overall they perform neither better nor worse. Because IgY is immunologically distinct from IgG, however, avian antibodies do have some advantages over conventional IgG antibodies (detailed in Tini et al., 2002; Zhang, 2003; Cova, 2005; Schade et al., 2005); in particular, IgY is attractive for passive oral immunisation against gastrointestinal infections (e.g. Mine et al., 2002). The only major disadvantage is that IgY is less efficient than IgG at precipitating antigens, which must be taken into account for automated diagnostic systems.

In 1996, a workshop on the production of avian antibodies convened by the European Centre for the Validation of Alternative Methods (ECVAM) already called for the wider use of chicken IgY (Schade et al., 1996). In view of the benefits for donor animals, scientists should consider using avian antibodies where appropriate before opting for mammalian antibodies; accordingly, the Swiss directive on proficient antibody production conforming with animal welfare gives guidance for chicken IgY as well as for IgG from rabbits and rodents (cf. Swiss Federal Veterinary Office animal welfare directive 3.04, ho-800.116-3.04, 1999).

NOTE:   Chicken eggs have also been used for the production of human monoclonal antibodies (mAbs), e.g. expressed in the egg white of chimeric chickens (e.g. Zhu et al., 2005). Since mAb production in mammalian cell cultures is well established and continuously being improved, it remains to be seen whether transgenic chickens offer a feasible alternative (Birch and Racher, 2006, p. 682). Another method of producing polyclonal and monoclonal antibodies that do not derive from hybridomas is described in AnimAlt-ZEBET method no. 299 (recombinant antibodies by phage display). Techniques for the in vitro production of monoclonal antibodies derived from hybridoma clones are described in AnimAlt-ZEBET method nos. 78 (fluidised bed and ceramic bioreactors), 173 (dialysis membrane cell culture), 180 (hollow fibre bioreactor), and 182 (roller and spinner cultures); see also Marx et al. (1997).

RN:        10

RF:         Birch JR, Racher AJ: Antibody production. Advanced drug delivery reviews 58(5-6); 671-685 (2006)

RF:         Calzado EG, García Garrido RM, Schade R: Human haemoclassification by use of specific yolk antibodies obtained after immunisation of chickens against human blood group antigens. Alternatives to laboratory animals, ATLA 29(6); 717-726 (2001)

RF:         Cova L: DNA-designed avian IgY antibodies: novel tools for research, diagnostics and therapy. Journal of clinical virology 34(Suppl 1); S70-S74 (2005)

RF:         Marx U, Embleton MJ, Fischer R, Gruber F, Hansson U, Heuer J, Leeuw WA de, Logtenberg T, Merz W, Portetelle D, Romette JL, Straughan DW: Monoclonal antibody production: the report and recommendations of ECVAM workshop 23. Alternatives to laboratory animals, ATLA 25; 121-137 (1997)

RF:         Mine Y, Kovacs-Nolan J: Chicken egg yolk antibodies as therapeutics in enteric infectious disease: a review. Journal of medicinal food 5(3); 159-169 (2002)

RF:         Schade R, Calzado EG, Sarmiento R, Chacana PA, Porankiewicz-Asplund J, Terzolo HR: Chicken egg yolk antibodies (IgY-technology): a review of progress in production and use in research and human and veterinary medicine. Alternatives to laboratory animals, ATLA 33(2); 129-154 (2005)

RF:         Schade R, Staak C, Hendriksen C, Erhard M, Hugl H, Koch G, Larsson A, Pollmann W, van Regenmortel M, Rijke E, Spielmann H, Steinbusch H, Straughan D: The production of avian (egg yolk) antibodies: IgY. The report and recommendations of ECVAM workshop 21. Alternatives to laboratory animals, ATLA 24; 925-934 (1996)

RF:         Tini M, Jewell UR, Camenisch G, Chilov D, Gassmann M: Generation and application of chicken egg-yolk antibodies. Comparative biochemistry and physiology. Part A: Molecular & integrative physiology 131(3); 569-574 (2002)

RF:         Zhang WW: The use of gene-specific IgY antibodies for drug target discovery. Drug discovery today 8(8); 364-371 (2003)

RF:         Zhu L, Lavoir MC van de, Albanese J, Beenhouwer DO, Cardarelli PM, Cuison S, Deng DF, Deshpande S, Diamond JH, Green L, Halk EL, Heyer BS, Kay RM, Kerchner A, Leighton PA, Mather CM, Morrison SL, Nikolov ZL, Passmore DB, Pradas-Monne A, Preston BT, et al.: Production of human monoclonal antibody in eggs of chimeric chickens. Nature biotechnology 23(9); 1159-1169 (2005)




ND:        ZEBET140

MNR:    140

LR:          12.12.2007

SH:         Parasitology

Title:      In vitro cultivation of life cycle stages of Plasmodium parasites

Language:           english

UT:         Malaria; Plasmodium; ZEBET; animal experiments; animal testing alternatives; animal use alternatives; animal welfare; blood serum; cell culture; cell lines; co-culture; cultivation; culture; culture, axenic; erythrocytes; hepatocytes; life cycle stages; maturation; medium, artificial; medium, defined; mice; monkeys; mosquitoes; parasitology; primates; propagation; protozoology; reduction; reticulocytes; rodents; spleen; splenectomy

EV:         Reduction

STA:       Development

VOR:     Not used for regulatory purposes


Malaria is caused by parasites of the genus Plasmodium. Four species are specifically human pathogens: P. falciparum (malaria tropica), P. malariae (malaria quartana), P. ovale (malaria tertiana) and P. vivax (malaria tertiana). The life cycle of these protozoa comprises a complex series of developmental stages: an exo-erythrocytic stage within liver cells of the animal or human host, an erythrocytic stage within erythrocytes or their precursor reticulocytes, and a sporogonic stage within the mosquito vector through which the disease is transmitted. Parasite material for scientific research, drug screening and diagnostic purposes has traditionally been obtained from the spleens of artificially infected monkeys and the blood of infected laboratory mice.


Following Trager and Jensen’s success in 1976 in propagating Plasmodium falciparum in continuous culture, various life cycle stages of the four species that infect humans have been established in vitro (cf. reviews by Schuster, 2002; Hurd et al., 2003). However, P. falciparum is the only species for which all stages have been cultured in vitro; varying degrees of success have been achieved with the other human Plasmodium species (e.g. Golenda et al., 1997; Panichakul et al., 2007). Culture media generally consist of a basic tissue culture medium, e.g. minimal essential medium or RPMI 1640, to which serum and erythrocytes are added (e.g. Basco, 2006; Djimde et al., 2007; Srivastava et al., 2007). Most efforts have been directed toward the erythrocytic stage (e.g. Orjih, 2005). This stage has been cultivated in petri dishes placed in a candle jar to generate elevated carbon dioxide levels or, alternatively, in a more controlled atmosphere; later developments have employed continuous-flow systems (see Schuster, 2002; Hurd et al., 2003). The exo-erythrocytic and sporogonic life cycle stages of a number of avian, rodent and simian malarial parasites have also been cultivated in vitro, e.g. P. berghei (Al-Olayan et al., 2002; Carter et al., 2007).


Ex vivo cultivation of both human and non-human species of Plasmodium has led to a greater understanding of the parasite’s biology. Efforts at cultivating the organisms in vitro are complicated by the parasite’s alternation between a mammalian host and an arthropod vector, each with its own set of physiological, metabolic and nutritional requirements. Consequently, there are only a few species for which all life cycle stages have been cultured in vitro, e.g. P. falciparum and P. berghei (e.g. Al-Olayan et al., 2002). For other species, culture of only individual life stages has been reported, e.g. axenic culture of the mosquito stages of P. yoelii (Porter-Kelley et al., 2006). Moreover, during adaptation to in vitro culture, changes in the parasite’s metabolism have been noted, which may have significant implications (e.g. Peters et al., 2007).

In conclusion, further efforts at in vitro cultivation are needed for most Plasmodium species. Nonetheless, short-term in vitro culture has allowed the development of rapid diagnostic assays, which has reduced the need for testing in animals. Further potential for reduction may lie in combining nucleic acid-based techniques with in vitro culture assays and in vivo analysis in laboratory animals (e.g. Vontas et al., 2005; Mens et al., 2006; Conway, 2007).

RN:        18

RF:         Al-Olayan EM, Beetsma AL, Butcher GA, Sinden RE, Hurd H: Complete development of mosquito phases of the malaria parasite in vitro. Science 295(5555); 677-679 (2002)

RF:         Basco LK: Molecular epidemiology of malaria in Cameroon. XXIII. Experimental studies on serum substitutes and alternative culture media for in vitro drug sensitivity assays using clinical isolates of Plasmodium falciparum. The American journal of tropical medicine and hygiene 75(5); 777-782 (2006)

RF:         Carter V, Nacer AM, Underhill A, Sinden RE, Hurd H: Minimum requirements for ookinete to oocyst transformation in Plasmodium. International journal for parasitology 37(11); 1221-1232 (2007)

RF:         Conway DJ: Molecular epidemiology of malaria. Clinical microbiology reviews 20(1); 188-204 (2007)

RF:         Djimde AA, Kirkman L, Kassambara L, Diallo M, Plowe CV, Wellems TE, Doumbo O: In vitro cultivation of fields isolates of Plasmodium falciparum in Mali. Culture in vitro d’isolats de terrain de Plasmodium falciparum au Mali. Bulletin de la Societe de Pathologie Exotique 100(1); 3-5 (2007)

RF:         Golenda CF, Li J, Rosenberg R: Continuous in vitro propagation of the malaria parasite Plasmodium vivax. Proceedings of the National Academy of Sciences of the United States of America 94; 6786-6791 (1997)

RF:         Hurd H, Al-Olayan E, Butcher GA: In vitro methods for culturing vertebrate and mosquito stages of Plasmodium. Microbes and infection (Institut Pasteur) 5(4); 321-327 (2003)

RF:         Mens PF, Schoone GJ, Kager PA, Schallig HDFH: Detection and identification of human Plasmodium species with real-time quantitative nucleic acid sequence-based amplification. Malaria journal 5; 80 (2006). http://www.malariajournal.com/contents/5/1/80

RF:         Ndeta GN, Dickson LA, Aseffa A, Winston AA, Duffy PE: Techniques for in vitro confirmation of reticulocyte invasion by the Plasmodium parasites. Laboratory medicine 35(5); 298-302 (2004)

RF:         Orjih AU: Comparison of Plasmodium falciparum growth in sickle cells in low oxygen environment and candle-jar. Acta tropica 94(1); 25-34 (2005)

RF:         Panichakul T, Sattabongkot J, Chotivanich K, Sirichaisinthop J, Cui L, Udomsangpetch R: Production of erythropoietic cells in vitro for continuous culture of Plasmodium vivax. International journal for parasitology 37(14); 1551-1557 (2007)

RF:         Peters JM, Fowler EV, Krause DR, Cheng Q, Gatton ML: Differential changes in Plasmodium falciparum var transcription during adaptation to culture. The journal of infectious diseases 195(5); 748-755 (2007)

RF:         Porter-Kelley JM, Dinglasan RR, Alam U, Ndeta GA, Sacci JB Jr, Azad AF: Plasmodium yoelii: axenic development of the parasite mosquito stages. Experimental parasitology 112(2); 99-108 (2005)

RF:         Schuster FL: Cultivation of Plasmodium spp. Clinical microbiology reviews 15(3); 355-364 (2002)

RF:         Srivastava K, Singh S, Singh P, Puri SK: In vitro cultivation of Plasmodium falciparum: studies with modified medium supplemented with ALBUMAX II and various animal sera. Experimental parasitology 116(2); 171-174 (2007)

RF:         Udomsangpetch R, Somsri S, Panichakul T, Chotivanich K, Sirichaisinthop J, Yang Z, Cui L, Sattabongkot J: Short-term in vitro culture of field isolates of Plasmodium vivax using umbilical cord blood. Parasitology international 56(1); 65-69 (2007)

RF:         Vontas J, Siden-Kiamos I, Papagiannakis G, Karras M, Waters AP, Louis C: Erratum: Gene expression in Plasmodium berghei ookinetes and early oocysts in a co-culture system with mosquito cells. Molecular and biochemical parasitology 140(2); 251 (2005)

RF:         Vontas J, Siden-Kiamos I, Papagiannakis G, Karras M, Waters AP, Louis C: Gene expression in Plasmodium berghei ookinetes and early oocysts in a co-culture system with mosquito cells. Molecular and biochemical parasitology 139(1); 1-13 (2005)




ND:        ZEBET324

MNR:    324

LR:          15.11.2007

SH:         Pharmacy

Title:      In vitro haemagglutinin-neuraminidase protein assay for potency determination of Newcastle Disease Virus vaccine (inactivated)

Language:           english

UT:         ELISA; Newcastle Disease; ZEBET; animal experiments; animal testing alternatives; animal use alternatives; animal welfare; antigen quantification; batch release; biologicals; chickens; enzyme-linked immunosorbent assay; fowl; hemagglutination inhibition; hemagglutinin; immunogenicity; neuraminidase; paramyxovirus, avian; pharmacy; potency; potency test; poultry; protein, haemagglutination-neuraminidase; reduction; serology; vaccination; vaccine; vaccines, inactivated; vaccines, viral; virus

EV:         Reduction

STA:       Regulatory Acceptance

VOR:     European Directorate for the Quality of Medicines and Healthcare EDQM (Ed.): Newcastle disease vaccine (inactivated) – Vaccinum pseudopestis aviariae inactivatum. Monograph 01/2007:0870. In: European Pharmacopoeia – fifth edition, suppl. 5.6.; Council of Europe, Strasbourg; p. 4492-4494 (01/2007)


Newcastle disease is a highly contagious, notifiable disease of domesticated and wild birds world-wide. It is of particular commercial importance in areas of extensive poultry farming. Its causative agent is avian paramyxovirus type 1, also called Newcastle disease virus. Control of Newcastle disease is regulated by legislation (e.g. EU directive 92/66/EEC, 1992). Vaccines are available containing either live or inactivated virus (e.g. Senne et al., 2004). Requirements for the licensing of both types of vaccine stipulate testing on laboratory animals. For inactivated virus vaccines there are two in vivo protocols for assaying potency, both involving immunisation of chickens followed either by challenge with a virus reference preparation or by serological determination of antibody titres using the haemagglutination inhibition test (European Pharmacopoeia monograph 0870, 2007).


An in vitro method for potency determination of inactivated Newcastle disease virus vaccine has been developed by the official Dutch national control laboratory for Newcastle disease. It is based on the quantitative extraction and determination of the virus antigen contained in the vaccine. The viral antigen, haemagglutinin-neuraminidase (HN) protein, is extracted from the vaccine formulation using a technique involving isopropyl myristate (Maas et al., 2000, 2002). The antigen is quantified using an ELISA with monoclonal antibodies specific for the HN protein, including a reference antigen as well as internal and negative controls. The activity of the test antigen is expressed as its potency relative to the reference standard. In a feasibility study with six different inactivated Newcastle disease vaccines, the in vitro assay was compared with the two in vivo potency assays prescribed in the European Pharmacopoeia (Claassen et al., 2003). A good correlation was found between the in vivo tests and the in vitro test, as well as excellent reproducibility of the in vitro test in ranking the tested vaccines by potency. These results were confirmed in a large international validation study (Claassen et al., 2004).
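The relative-potency step described above can be sketched as a minimal parallel-line computation. This is a simplified illustration with invented absorbance values, not the validated procedure: it assumes the log-dilution/response relationship is linear and parallel for test and reference antigen, and reads the relative potency off the horizontal shift between the two fitted lines:

```python
# Minimal parallel-line relative-potency sketch (hypothetical ELISA data).
# Assumes log-dose vs response is linear and parallel for test and
# reference; relative potency comes from the horizontal displacement
# between the two fitted lines.

def fit_line(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    return slope, my - slope * mx

# log10 of antigen dilutions (assumed) and ELISA absorbances (invented)
log_dose = [0.0, 0.5, 1.0, 1.5]
ref_resp = [0.20, 0.55, 0.90, 1.25]    # reference standard
test_resp = [0.34, 0.69, 1.04, 1.39]   # test vaccine extract

b_ref, a_ref = fit_line(log_dose, ref_resp)
b_test, a_test = fit_line(log_dose, test_resp)
common_slope = (b_ref + b_test) / 2    # parallelism assumed

# Horizontal shift between the lines gives log relative potency
log_rp = (a_test - a_ref) / common_slope
print(f"Relative potency (test vs reference): {10 ** log_rp:.2f}")
```

A real assay would additionally test the parallelism and linearity assumptions statistically and report confidence limits on the potency estimate.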


In vitro determination of the Newcastle disease virus antigen, i.e. haemagglutinin-neuraminidase protein, contained in inactivated virus vaccines was successfully validated as a vaccine potency assay in an international collaborative study involving 14 laboratories as part of the European Biological Standardisation Programme (BSP project 055; Claassen et al., 2004). As a result, the test has been adopted as a third option besides the two current in vivo procedures for manufacturers’ batch potency testing stipulated in the European Pharmacopoeia (Behr-Gross and Spieser, 2006). Moreover, the relevant monograph, no. 01/2007:0870, recommends using the in vitro test instead of the in vivo potency assays wherever possible (EDQM, 2007).

NOTE:   For detection of Newcastle Disease virus by reverse transcriptase polymerase chain reaction, see AnimAlt-ZEBET method no. 163.

RN:        9

RF:         Behr-Gross ME, Spieser JM: Contributions of the European OMCL Network and Biological Standardisation Programme to Animal Welfare. Alternativen zu Tierexperimenten, ALTEX 23; 293-294 (2006)

RF:         Claassen I, Maas R, Daas A, Milne C: Feasibility study to evaluate the correlation between results of a candidate in vitro assay and established in vivo assays for potency determination of Newcastle disease vaccines. Pharmeuropa Bio 1; 51-66 (2003)

RF:         Claassen I, Maas R, Oei H, Daas A, Milne C: Validation study to evaluate the reproducibility of a candidate in vitro potency assay of Newcastle disease vaccines and to establish the suitability of a candidate biological reference preparation. Pharmeuropa Bio 1; 1-15 (2004)

RF:         European Commission (Ed.): European Council directive 92/66/EEC of 14 July 1992 on Community measures for the control of Newcastle disease. Official journal of the European Communities L260; 1-34 (1992). http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=CONSLEG:1992L0066:20040501:EN:PDF

RF:         European Directorate for the Quality of Medicines and Healthcare EDQM (Ed.): Newcastle disease vaccine (inactivated) – Vaccinum pseudopestis aviariae inactivatum. Monograph 01/2007:0870. In: European Pharmacopoeia – fifth edition, suppl. 5.6.; 4492-4494 (2007)

RF:         Maas R, Diepen M van, Komen M, Oei H, Claassen I: Antigen content of inactivated Newcastle disease oil emulsion vaccines as an in vitro indicator of potency. Developments in biologicals 111; 313-318 (2002)

RF:         Maas R, Komen M, Oei H, Claassen I: Validation of a candidate in vitro potency assay for inactivated Newcastle disease vaccines. Developments in biologicals 111; 163-169 (2002)

RF:         Maas RA, Venema S, Claassen IJ, Oei HL, Winter MP de: Antigen quantification as in vitro alternative for potency testing of inactivated viral poultry vaccines. The veterinary quarterly 22(4); 223-227 (2000)

RF:         Senne DA, King DJ, Kapczynski DR: Control of Newcastle disease by vaccination. Developments in biologicals 119; 165-170 (2004)




ND:        ZEBET325

MNR:    325

LR:          09.11.2007

SH:         Pharmacy

Title:      Non-lethal respiratory challenge assay for potency determination of pertussis vaccines (whole cell and acellular vaccines)

Language:           english

UT:         animal welfare; animal experiments; animal testing alternatives; animal use alternatives; ZEBET; refinement; replacement; mice; pharmacy; potency test; biologicals; microbiology; bacteriology; serology; Bordetella pertussis; pertussis; whooping cough; vaccine; vaccine, acellular; whole cell vaccine; vaccine, combined; toxicity; immunogenicity; intracerebral mouse protection test; Kendrick test; intracerebral challenge; modified intra-cerebral challenge assay; disease models; aerosols; aerosol challenge; aerosol challenge, non-lethal; intranasal challenge; infection, respiratory; clearance, bacterial

EV:         Refinement

STA:       Validation

VOR:     Regulatory Acceptance Pending


Pertussis, or whooping cough, is a highly contagious disease caused by the bacterium Bordetella pertussis. Whole cell vaccines (WCV) as well as acellular component pertussis vaccines, commonly combined with vaccines against diphtheria and tetanus, play an important role in reducing morbidity and mortality among children. Regulatory control of pertussis vaccines is essential and stipulates that they conform to specified standards of safety and efficacy. The currently required test for estimating the potency of WCV is the intracerebral mouse protection test, or Kendrick test (e.g. European Pharmacopoeia, Assay Section 2.7.7. and monograph 0161, 2005; World Health Organisation, 1990, 1998, 2005). It is still the only test that has shown a correlation with protection in children (e.g. Xing et al., 2001). Groups of mice are immunised with serial dilutions of either reference vaccine or test vaccine. Two weeks later the mice receive an intracerebral injection of bacterial suspension. In addition, in order to determine the median lethal dose (LD50) of the challenge, dilutions of the challenge dose are injected into control mice by the same route. The mice are observed for lethal effects over the next 14 days. At least 136 mice are required per vaccine batch. As the intracerebral challenge test for WCV proved unsuitable for determining the potency of acellular vaccines, a modified intra-cerebral challenge assay (MICA) is used as the official potency test in Japan, China and South Korea (e.g. Xing et al., 2007). It follows a procedure similar to the Kendrick test, except that an interval of 3 weeks instead of 2 weeks is allowed between immunisation and challenge, and special mouse strains are preferred. Intracerebral challenge assays have been criticised for the large numbers of animals required and for their technical difficulty (Corbel and Xing, 2004).
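The LD50 determination mentioned above is classically computed by hand with the Reed-Muench method. The sketch below is a generic illustration of that calculation; the doses and mortality counts are invented for the example, not figures from the source:

```python
# Reed-Muench estimate of the median lethal dose (LD50) for a challenge
# titration. All doses and mortality counts are illustrative assumptions.

def reed_muench_log_ld50(titration):
    """titration: list of (log10 dose, dead, group size), lowest dose first."""
    doses = [d for d, _, _ in titration]
    dead = [x for _, x, _ in titration]
    surv = [n - x for _, x, n in titration]
    # Reed-Muench assumption: an animal dying at some dose would also die
    # at any higher dose; a survivor would also survive any lower dose.
    cum_dead = [sum(dead[:i + 1]) for i in range(len(dead))]
    cum_surv = [sum(surv[i:]) for i in range(len(surv))]
    pct = [100 * cd / (cd + cs) for cd, cs in zip(cum_dead, cum_surv)]
    # Interpolate between the two doses whose cumulative mortality
    # brackets the 50% endpoint.
    for i in range(len(pct) - 1):
        if pct[i] < 50 <= pct[i + 1]:
            frac = (50 - pct[i]) / (pct[i + 1] - pct[i])
            return doses[i] + frac * (doses[i + 1] - doses[i])
    raise ValueError("50% endpoint not bracketed by the titration")

# Ten-fold dilution series: (log10 dose, number dead, group size)
titration = [(2, 1, 10), (3, 4, 10), (4, 8, 10), (5, 10, 10)]
print(f"log10 LD50 = {reed_muench_log_ld50(titration):.2f}")  # about 3.24
```

In routine practice the LD50 is also estimated with probit or logit regression; Reed-Muench is shown here only because it is the simplest closed-form approach.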


Inoculation with bacteria to produce respiratory tract infection in mice has been described for evaluating the protective activity of B. pertussis antigens (e.g. Mills et al., 1998). In a non-lethal challenge approach for whole cell vaccine (WCV), groups of mice were immunised with serial dilutions of the reference vaccine and the test vaccine. Two weeks after immunisation, the mice were inoculated with a virulent B. pertussis suspension by aerosol challenge. Seven days after challenge, viable counts were performed on homogenates of the lungs excised from the challenged mice. The number of bacterial colony-forming units was compared with those in the reference groups, and vaccine potency was estimated using a parallel-line assay. The rank order of potencies of the tested vaccines was identical to that obtained by the intracerebral challenge method (Canthaboo et al., 2000, 2001; Watanabe et al., 2002). For acellular vaccines, intranasal or aerosol administration of the challenge has been reported; as for WCV, the end point is measured by comparing the responses to the test sample with those to an appropriate reference preparation (e.g. Carter et al., 2004). Intranasal challenge requires that the mice be anaesthetised. A WHO collaborative study demonstrated clear discrimination between vaccines with different protective capacities, but in the case of the intranasal system it was not possible to determine a quantitative relationship between vaccines (cf. Corbel and Xing, 2004).


Respiratory challenge assays (i.e. aerosol or intranasal administration) constitute a refinement relative to the Kendrick test because, even though the test animals are sacrificed, the route of administration is less severe and less painful than lethal intracerebral challenge. However, before they can replace the currently mandatory intracerebral challenge assay, respiratory challenge methods still require international validation (Corbel and Xing, 2004). As regards the intranasal challenge assay, the WHO working group agreed that it should not be proposed as a potency assay for routine vaccine lot release testing (Xing et al., 2007). However, inclusion of a reference (e.g. the JNIH-3 reference vaccine) is recommended in order to standardise and optimise the assay. The intranasal challenge assay may then serve as a useful tool for establishing lot consistency and for characterising the biological activity of purified antigens and formulated vaccines, e.g. in the preclinical evaluation of new products.

NOTE:   A serological assay for pertussis whole cell vaccine potency is described in AnimAlt-ZEBET method no. 232. See also AnimAlt-ZEBET method nos. 326 (pertussis toxin ADP ribosylation assay) and 327 (pertussis toxin CHO cell assay).

RN:        20

RF:         Canthaboo C, Williams L, Xing DK, Corbel MJ: Investigation of cellular and humoral immune responses to whole cell and acellular pertussis vaccines. Vaccine 19(6); 637-643 (2001)

RF:         Canthaboo C, Xing D, Douglas A, Corbel M: Investigation of an aerosol challenge model as alternative to the intracerebral mouse protection test for potency assay of whole cell pertussis vaccines. Biologicals 28(4); 241-246 (2000)

RF:         Carter CR, Dagg BM, Whitmore KM, Keeble JR, Asokanathan C, Xing D, Walker KB: High dose interleukin-12 exacerbates Bordetella pertussis infection and is associated with suppression of cell-mediated immunity in a murine aerosol challenge model. Clinical and experimental immunology 135(2); 233-239 (2004)

RF:         Carter CR, Dagg BM, Whitmore KM, Keeble JR, Asokanathan C, Xing D, Walker KB: The effect of pertussis whole cell and acellular vaccines on pulmonary immunology in an aerosol challenge model. Cellular immunology 227(1); 51-58 (2004)

RF:         Corbel MJ, Xing DK: Toxicity and potency evaluation of pertussis vaccines. Expert review of vaccines 3(1); 89-101 (2004)

RF:         European Directorate for the Quality of Medicines and Healthcare EDQM (Ed.): Diphtheria, tetanus and pertussis vaccine (adsorbed) – Vaccinum diphtheriae, tetani et pertussis adsorbatum. Monograph 0445. In: European Pharmacopoeia – 5th edition; Strasbourg; 643 (2005)

RF:         European Directorate for the Quality of Medicines and Healthcare EDQM (Ed.): General Section 2.7.7. Assay of pertussis vaccines. In: European Pharmacopoeia – 5th edition; Strasbourg; 197 (2005)

RF:         European Directorate for the Quality of Medicines and Healthcare EDQM (Ed.): Pertussis vaccine (adsorbed). Monograph 0161. In: European Pharmacopoeia – 5th edition; Strasbourg; 690 (2005)

RF:         European Directorate for the Quality of Medicines and Healthcare EDQM (Ed.): Pertussis vaccine. Monograph 0160. In: European Pharmacopoeia – 5th edition; Strasbourg; 685 (2005)

RF:         European Directorate for the Quality of Medicines and Healthcare EDQM (Ed.): Pertussis, diphtheria, tetanus and poliomyelitis (inactivated) vaccine (adsorbed). Monograph 2061. In: European Pharmacopoeia – 5th edition, suppl. 5.4.; Strasbourg; 3864 (2005)

RF:         Mills KHG, Ryan M, Ryan E, Mahon BP: A murine model in which protection correlates with pertussis vaccine efficacy in children reveals complementary roles for humoral and cell mediated immunity in protection against Bordetella pertussis. Infection and immunity 66; 594-602 (1998)

RF:         Watanabe M, Komatsu E, Abe K, Iyama S, Sato T, Nagai M: Efficacy of pertussis components in an acellular vaccine, as assessed in a murine model of respiratory infection and a murine intracerebral challenge model. Vaccine 20(9-10); 1429-1434 (2002)

RF:         Watanabe M, Komatsu E, Sato T, Nagai M: Evaluation of efficacy in terms of antibody levels and cell-mediated immunity of acellular pertussis vaccines in a murine model of respiratory infection. FEMS immunology and medical microbiology 33(3); 219-225 (2002)

RF:         World Health Organization (Ed.): Annex 2: Guidelines for the production and control of the acellular pertussis component of monovalent or combined vaccines. In: WHO Expert Committee on Biological Standardisation. http://whqlibdoc.who.int/trs/WHO_TRS_878.pdf ; (1998)

RF:         World Health Organization (Ed.): Annex 6: Recommendations for whole cell pertussis vaccine. http://www.who.int/biologicals/publications/ECBS%202005%20Annex%206%20Pertussis.pdf ; (2005)

RF:         World Health Organization (Ed.): Laboratory methods for the testing for potency of diphtheria, tetanus, pertussis and combined vaccines. WHO BLG/92.1. http://whqlibdoc.who.int/hq/1992/BLG_92.1.pdf ; (1992)

RF:         World Health Organization (Ed.): Requirements for diphtheria, tetanus, pertussis and combined vaccines. Technical report series 800; 87-179; 26-27 (1990)

RF:         World Health Organization (Ed.): WHO Working Group meeting on revision of the Manual of Laboratory Methods for testing DTP vaccines. http://www.who.int/biologicals/publications/meetings/areas/vaccines/dtp/DTP%20Manual%20Final%20Report%20on%20DTP%20Manual.pdf ; (2006)

RF:         Xing D, Das G, O’Neill T, Corbel M, Delleplane N, Milstien J: Laboratory testing of whole cell pertussis vaccine: a WHO proficiency study using the Kendrick test. Vaccine 20(3-4); 342-351 (2001)

RF:         Xing DK, Corbel MJ, Dobbelaer R, Knezevic I: WHO working group on standardisation and control of acellular pertussis vaccines – report of a meeting held on 16-17 March 2006, St. Albans, United Kingdom. Vaccine 25(15); 2749-2757 (2007)



ND:        ZEBET326

MNR:    326

LR:          31.10.2007

SH:         Pharmacy

Title:      Determination of pertussis toxin in vaccines with enzymatic-HPLC assay based on its ADP ribosylation activity as an alternative for the histamine sensitization test (HIST) in mice

Language:           english

UT:         animal welfare; animal experiments; animal testing alternatives; animal use alternatives; ZEBET; replacement; mice; pharmacy; biologicals; microbiology; bacteriology; Bordetella pertussis; pertussis; pertussis toxin; whooping cough; vaccine; acellular vaccine; whole cell vaccine; vaccine, combined; toxicity; immunogenicity; histamine sensitization; ADP ribosylation; ribosylation; ribosyl transferase; liquid chromatography; HPLC; enzymatic HPLC; HPLC, reverse phase

EV:         Replacement

STA:       Validation

VOR:     Regulatory Acceptance Pending


Pertussis, or whooping cough, is a highly contagious disease caused by the bacterium Bordetella pertussis. The pathogenesis of pertussis is complex; one of the several factors implicated in the initiation and evolution of the disease is pertussis toxin (PT), an ADP ribosyl transferase whose specific substrate is an amino acid sequence present in G-proteins involved in intracellular signalling. In its detoxified form, PT elicits protective immunity and is a component of all current acellular vaccines as well as of whole cell vaccines. Regulatory control of pertussis vaccines is essential and requires that they conform to specified standards of safety and efficacy. Monitoring chemically detoxified PT for residual toxicity and for reversion to toxicity is thus an important part of the safety evaluation of acellular pertussis vaccines. At present, the histamine sensitization test (HIST) is the internationally accepted method for detecting active PT in such preparations. In the HIST, groups of mice are injected intraperitoneally with doses of the test vaccine. Four to five days after injection, the animals are challenged intraperitoneally with a histamine solution, and the number of mice dying within 24 hours is recorded (e.g. World Health Organization, TRS 878 Guidelines, 1998). The HIST measures a median lethal dose (LD50), and large variations in test performance have been observed. Because the precise mechanism of the HIST is unknown and the test is difficult to standardise, its replacement by alternative test protocols has been recommended (e.g. Corbel and Xing, 2004; Xing et al., 2007).
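Since the HIST is a median-lethal-dose assay, its endpoint reduces to estimating an LD50 from group mortality counts. As a hedged illustration (the estimator and all numbers below are not from the source; the Spearman-Karber method is simply one classical way to compute an LD50 from evenly spaced log doses):

```python
def spearman_karber_log_ld50(log_doses, deaths, n_per_group):
    """Classical Spearman-Karber estimate of log LD50.

    Assumes evenly spaced log doses and mortality rising from
    0% at the lowest dose to 100% at the highest dose.
    """
    d = log_doses[1] - log_doses[0]          # constant log-dose spacing
    p = [k / n_per_group for k in deaths]    # mortality proportion per group
    # log LD50 = highest log dose + d/2 - d * (sum of proportions)
    return log_doses[-1] + d / 2 - d * sum(p)

# Hypothetical groups of 10 mice at log10 doses 1, 2, 3:
# 0, 5 and 10 deaths respectively.
log_ld50 = spearman_karber_log_ld50([1.0, 2.0, 3.0], [0, 5, 10], 10)
print(log_ld50)  # 2.0, i.e. LD50 = 10**2 = 100 dose units
```

The wide confidence intervals such small groups produce are one reason an LD50 endpoint is hard to standardise across laboratories.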


For the detection and determination of active pertussis toxin (PT) in vaccines, an in vitro enzymatic-HPLC test has been described which is based on the ADP ribosylation activity of the toxin (Cyr et al., 2001; Yuen et al., 2002). A synthetic fluorescein-labelled peptide homologue of the native G-protein sequence serves as the substrate for the PT-mediated enzymatic transfer of ADP ribose from NAD to the cysteine moiety of the fluorescent peptide; reverse-phase HPLC is then used to separate and quantify the ADP-ribosylated product.
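In assays of this kind the final read-out is typically an HPLC peak area converted to an amount of product via a calibration curve. A minimal sketch of that last quantification step, with entirely hypothetical peak areas and amounts (a linear calibration is assumed; this is not the published protocol):

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit: returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical calibration: amount of ADP-ribosylated peptide
# standard (ng) vs. measured fluorescence peak area.
areas   = [0.0, 110.0, 220.0, 440.0, 880.0]
amounts = [0.0,   5.0,  10.0,  20.0,  40.0]
slope, intercept = fit_line(areas, amounts)

def quantify(peak_area):
    """Convert a sample's HPLC peak area to ng of product."""
    return slope * peak_area + intercept

print(round(quantify(330.0), 6))  # 15.0 ng on this hypothetical curve
```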


The enzymatic-HPLC method of pertussis toxin determination has been earmarked by the European Directorate for the Quality of Medicines and Healthcare (EDQM) and the WHO Pertussis Working Group as a potential replacement for the histamine sensitization test (HIST) in mice (e.g. Corbel and Xing, 2004; European Pharmacopoeia monograph no. 1356, 2006; Xing et al., 2007). However, the evidence suggests that it will require supplementation by additional assays before it can be considered reliable for this purpose. The enzymatic assay was shown to be a convenient, sensitive method that correlates well with the toxicity observed in vivo by the HIST when native PT is used. It was not reliable, however, for predicting the toxicity of toxoided PT preparations, since chemical detoxification can affect both the A and B subunits of PT (e.g. Fowler et al., 2003), whereas the enzymatic-HPLC method measures only the catalytic activity of the A subunit. Because in vivo toxicity depends on the activities of both subunits, the in vitro test must be refined to include a step that monitors the host-cell binding and internalization activity of the B subunit before it can be applied to vaccine testing (e.g. Gomez et al., 2007). Recently, such a test specific for B subunit activity was reported by the British competent authority, the National Institute for Biological Standards and Control (Gomez et al., 2006). Full validation is required prior to regulatory acceptance of the method (Xing et al., 2007).

NOTE:   Another assay for pertussis toxin activity is described in AnimAlt-ZEBET method no. 327 (CHO cell assay). Alternatives to the intracerebral challenge test for assaying pertussis whole cell vaccine potency are described in AnimAlt-ZEBET method nos. 232 (serological potency test) and 325 (non-lethal respiratory challenge test).

RN:        13

RF:         Corbel MJ, Xing DK: Toxicity and potency evaluation of pertussis vaccines. Expert review of vaccines 3(1); 89-101 (2004)

RF:         Cyr T, Menzies AJ, Calver J, Whitehouse LW: A quantitative analysis for the ADP-ribosylation activity of pertussis toxin: an enzymatic-HPLC coupled assay applicable to formulated whole cell and acellular pertussis vaccine products. Biologicals 29(2); 81-95 (2001)

RF:         European Directorate for the Quality of Medicines and Healthcare EDQM (Ed.): Pertussis vaccine (acellular, component, absorbed) – Vaccinum pertussis sine cellulis ex elementis praeparatum adsorbatum. Monograph 1356. In: European Pharmacopoeia – 5th edition (supplement 5.3). Strasbourg; 3407 (2006)

RF:         Fowler S, Xing DK, Bolgiano B, Yuen CT, Corbel MJ: Modifications of the catalytic and binding subunits of pertussis toxin by formaldehyde: effects on toxicity and immunogenicity. Vaccine 21(19-20); 2329-2337 (2003)

RF:         Gomez SR, Xing DK, Corbel MJ, Coote J, Parton R, Yuen CT: Development of a carbohydrate binding assay for the B-oligomer of pertussis toxin and toxoid. Analytical biochemistry 356(2); 244-253 (2006)

RF:         Gomez SR, Yuen CT, Asokanathan C, Douglas-Bardsley A, Corbel MJ, Coote JG, Parton R, Xing DK: ADP-Ribosylation activity in pertussis vaccines and its relationship to the in vivo histamine-sensitisation test. Vaccine 25(17); 3311-3318 (2007)

RF:         World Health Organization (Ed.): Annex 6: Recommendations for whole cell pertussis vaccine. http://www.who.int/biologicals/publications/ECBS%202005%20Annex%206%20Pertussis.pdf ; (2005)

RF:         World Health Organization (Ed.): Guidelines for the production and control of the acellular pertussis component of monovalent or combined vaccines. In: WHO Expert Committee on Biological Standardisation. http://whqlibdoc.who.int/trs/WHO_TRS_878.pdf ; (1998)

RF:         World Health Organization (Ed.): Laboratory methods for the testing for potency of diphtheria, tetanus, pertussis and combined vaccines. WHO BLG/92.1. http://whqlibdoc.who.int/hq/1992/BLG_92.1.pdf ; (1992)

RF:         World Health Organization (Ed.): Requirements for diphtheria, tetanus, pertussis and combined vaccines. Technical report series 800; 87-179; 26-27 (1990)

RF:         World Health Organization (Ed.): WHO Working Group meeting on revision of the Manual of Laboratory Methods for testing DTP vaccines. http://www.who.int/biologicals/publications/meetings/areas/vaccines/dtp/DTP%20Manual%20Final%20Report%20on%20DTP%20Manual.pdf ; (2006)

RF:         Xing DK, Corbel MJ, Dobbelaer R, Knezevic I: WHO working group on standardisation and control of acellular pertussis vaccines – report of a meeting held on 16-17 March 2006, St. Albans, United Kingdom. Vaccine 25(15); 2749-2757 (2007)

RF:         Yuen CT, Canthaboo C, Menzies JA, Cyr T, Whitehouse LW, Jones C, Corbel MJ, Xing D: Detection of residual pertussis toxin in vaccines using a modified ribosylation assay. Vaccine 21(1-2); 44-52 (2002)







Infectious Disease

SARS vaccine undergoing animal testing. August 17, 2004. http://www.cmaj.ca/cgi/content/full/171/4/320-a

Rosanna Tamburri

Oakville, Ont.

Canadian researchers have developed 2 potential SARS vaccines now undergoing animal testing.

Both prototypes have produced antibody responses in animals, a first step toward the eventual development of a human vaccine, said Dr. Lorne Babiuk, director of the University of Saskatchewan’s Vaccine and Infectious Disease Organization (VIDO). But it isn’t known yet whether the antibodies will effectively block the replication of the SARS virus, Babiuk said.

One of the prototypes is a conventional killed-virus vaccine developed by researchers at the BC Centre for Disease Control, the University of BC and VIDO.

The second vaccine was developed by researchers at McMaster University using an adenovirus vector, a common-cold virus that has been engineered with DNA from the SARS virus. The vaccines were tested on mice and ferrets at the Southern Research Institute in Alabama, a Level 3 laboratory.

“Both showed early signs of being able to provide protection” in animals, said Jack Gauldie, director of the Centre for Gene Therapeutics at McMaster. While the vaccine candidates have generated immunity, some aspects of the immunity may be damaging, he cautioned. Further tests on other animal models are likely, he said.

A human SARS vaccine could be available in about 2 years, depending on the final test results and whether “the disease rears its ugly head again,” Babiuk added. Another outbreak would likely spur researchers to fast-track the process, he said.

Several other SARS vaccine candidates are currently under development in Canada, but all are in earlier test phases, Gauldie said. Canadian researchers began work on a vaccine just over a year ago, after an outbreak of the potentially deadly respiratory disease. Developing a market-ready vaccine usually takes about 10 years, but researchers are hoping to reduce the time substantially in the case of SARS.


SOURCE: Hoffer, L. John. “Complementary or alternative medicine: the need for plausibility.” CMAJ 2003 168: 180-182

L. John Hoffer

Dr. Hoffer is with the Lady Davis Institute for Medical Research, Sir Mortimer B. Davis Jewish General Hospital, and the Department of Medicine, McGill University, Montréal, Que.

Correspondence to: Dr. L. John Hoffer, Lady Davis Institute for Medical Research, Sir Mortimer B. Davis Jewish General Hospital, 3755 Côte-Ste-Catherine Rd. W, Montréal QC H3T 1E2; fax 514 340-7502; [email protected]

Everyone familiar with scientific medicine understands the importance of the randomized controlled trial (RCT), because it is the gold standard for evaluating treatment efficacy. Proof that a treatment is efficacious typically requires at least one definitive RCT or several convincing ones. It is not sufficient for the RCT to demonstrate a statistically and clinically significant effect; it must also be designed, and its results analyzed, in accordance with rigorous criteria set by clinical trial authorities. There is a trend toward seriously crediting only RCTs with large numbers of participants, and this calls for a complex study design and infrastructure.

It is not always appreciated that such high-cost, definitive RCTs come near the end, not the beginning, of the process of evaluating new therapies. Before the definitive RCT, there is usually a lengthy process of information gathering to rule out toxicity, optimize the parameters of the treatment and determine the clinical importance of its apparent effect. Only sufficiently promising therapies merit the effort and expense of final confirmation (or refutation) with large, definitive RCTs. This preliminary research may be termed “plausibility building.” When biochemical, tissue and animal data point to a mechanism of action or demonstrate the desired biological effect, they thereby confer biological plausibility. Clinical data from epidemiological studies, case reports, case series and small, formal open or controlled clinical trials may confer clinical plausibility. A therapy is sufficiently scientifically plausible to merit the time and expense of definitive testing if it is either biologically or clinically plausible.

These considerations are germane to discussions underway in Canada and the United States about the best ways to evaluate complementary or alternative medicine (CAM). Issues surrounding CAM are complex; indeed, even defining CAM can be difficult. In this article, I use the definition adopted by the US National Library of Medicine and the US National Institutes of Health National Center for Complementary and Alternative Medicine: a CAM therapy is one that is used instead of (“alternative”) or in addition to (“complementary”) the conventional accepted therapy for a condition (www.nlm.nih.gov/nccam/background.htm#c). But what formula determines which therapies are accepted and which ones rejected by conventional medicine? Generally speaking, a therapy joins the canon of conventional, accepted therapies either after its efficacy has been demonstrated in well-designed clinical trials, or because its biological rationale fits plausibly within the scientific biomedical conceptual framework, even if proof of its efficacy is lacking — it “makes biological sense.” The latter stipulation is necessary, because many conventional therapies are unproven but still not considered CAM. The definition of CAM thus hinges on the notion of scientific plausibility: CAM therapies are not considered conventional medicine because they lack good evidence of clinical efficacy (clinical plausibility) and they lack biological plausibility.

As obvious as it is, this definition can be difficult to apply in practice. The recognition and implementation into medical practice of even well-proven interventions is frequently delayed. Judgements can differ over the level of biological or clinical plausibility in a given case. Alternatively, a therapy may be reasonably biologically plausible but not submitted to definitive clinical testing because it is not sufficiently fashionable or financially rewarding. An additional complication is that, as with homeopathy, the stranger and more biologically implausible a therapy, the higher the bar medical scientists tend to set for crediting evidence supporting its clinical plausibility. Finally, some therapies commonly considered as CAM, such as glucosamine sulfate for osteoarthritis and St. John’s wort for depression, no longer really fit the definition, because well-designed RCTs indicate that they are probably efficacious, but the label continues to stick.

Many CAM therapies (like glucosamine sulfate and St. John’s wort) are easily tested using conventional RCT designs, for they are simple drug or drug-like products with standard clinical indications. Other CAM therapies are far more difficult to evaluate, either because of their complexity or because of the nature of the alternative medical philosophies from which they are derived. Some CAM approaches use definitions of health, diagnosis and disease that differ radically from those used in conventional scientific medicine. Some are more properly considered lifestyle, cultural or spiritual practices whose innate values go beyond the pathophysiological focus of conventional medical science. Despite these complexities, most observers advocate the scientific evaluation of CAM, while acknowledging the considerable difficulties involved.

I too believe that most CAM therapies are amenable to evaluation using RCTs. It is common in RCTs of drugs to use placebos to blind the participants and investigators as to treatment allocation, but this is not always feasible with CAM approaches. Fortunately, it is now widely recognized that well-designed nonblinded RCTs can generate important conclusions. Problems can emerge when the attempt is made to randomize clinical trial participants into placebo, nontreatment or alternative treatment groups they did not freely choose, because CAM therapies tend to be harmless, very complicated or require active participation. One way to deal with this is cluster randomization, in which the unit of randomization is a hospital ward, medical practice or community. Thus, while the barriers to the practical testing of CAM are real, they are not insurmountable.
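Cluster randomization, as mentioned, takes a whole ward, practice or community as the unit of allocation rather than the individual patient. A minimal sketch, assuming hypothetical practice names and an even number of clusters:

```python
import random

def cluster_randomize(clusters, seed=None):
    """Allocate whole clusters (wards, practices, communities)
    to intervention or control arms, half to each."""
    rng = random.Random(seed)          # seeded for a reproducible allocation
    order = list(clusters)
    rng.shuffle(order)
    half = len(order) // 2
    return {"intervention": order[:half], "control": order[half:]}

# Six hypothetical medical practices as the units of randomization.
practices = ["practice_%d" % i for i in range(1, 7)]
arms = cluster_randomize(practices, seed=42)
print(arms["intervention"], arms["control"])
```

Every patient in a given practice then receives that practice's allocated arm, which avoids asking individuals to accept a treatment group they did not choose.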

In fact, as important as RCT design can be, I believe there are more fundamental problems to be solved on the way to establishing rational and fair procedures for determining the merits and flaws of CAM: (1) Which of the large and ever-increasing number of candidate therapies should be tested, and in what order of priority? (2) Who will fund and (3) who will carry out the clinical trials? These problems — and the key to their resolution — arise from the defining feature of CAM: its scientific implausibility.

For a therapeutic approach to be considered CAM it must, by definition, be scientifically implausible. But if a treatment is not scientifically plausible, how is one to proceed with its scientific evaluation? Thorough exploration of the potential of any therapeutic approach requires a notion of its target patients and their response and, whenever possible, its mechanism of action. Without this, testing CAM with large RCTs is liable to be as useful as firing cannons at flocks of sparrows.

What scientific peer review committee would — or should — award a competitive grant for a large RCT to test a treatment that is highly implausible? Private funding cannot be counted on, because most CAM therapies are not patented. And what private interest would willingly fund the definitive test of an already profitable CAM therapy? The result might be negative!

Few career medical researchers do more than dabble in CAM in their spare time, and this is hardly surprising. What bright young researcher would choose to devote a scientific career to confirming the inefficacy of implausible treatments? For their part, most CAM practitioners lack both training and sophistication in clinical investigation and the protected time necessary to gain research skills and conduct clinical trials. Most authorities believe that fruitful efforts to evaluate CAM will require partnerships between CAM practitioner–proponents and skilled clinical research investigators. How will they be created?

The correct way to meet these challenges is to put CAM therapies through the same plausibility-building process that conventional therapies undergo before they come to the stage of definitive RCTs. The premise of this strategy is that a gain in plausibility is not proof. (Conversely, lack of proof does not exclude plausibility.) Rather, as plausibility increases, the case for definitive RCT testing becomes stronger. The purpose of this research is to groom CAM therapies into serious candidates for definitive testing by RCTs.

A candidate CAM therapy need not be biologically plausible to merit testing. The important drug classes were identified by empirical observation, and in-vitro empirical testing continues to be the most common path to new drug discovery. Indeed, an unexpected drug effect is likely to be more important than one that was predicted, for it can lead to new biological insight.

Biological plausibility is helpful, however. A biologically plausible mechanism of action greatly aids the investigator in selecting an appropriate dosing regimen and patient population, and encourages persistence in the face of disappointing results. The hypothesis that L-dopa would benefit patients with Parkinson’s disease appeared to be refuted when well-designed double-blind RCTs turned out to be negative, but its biological plausibility persuaded George C. Cotzias to test much higher doses of L-dopa than had previously been used. His landmark paper in the New England Journal of Medicine was a noncontrolled case series.

Despite its importance, biological plausibility is lacking for most CAM therapies. This does not mean they are ineffective, but it does require their evaluation to proceed solely on the basis of clinical plausibility. Several evaluation methods are known, but until now they have largely been discounted as unscientific. They are nonrandomized or open clinical trials, case studies and case series. Nonrandomized clinical trials can be valuable, as they tend to predict the results of subsequent definitive RCTs with reasonable accuracy. Many case study designs are possible, and the process of data collection and interpretation can be standardized to maximize reliability and validity. An accumulation of informative case histories can be developed into a “best case series.” The US National Center for Complementary and Alternative Medicine has developed guidelines for preparing a best case series of responses to an alternative cancer therapy (www3.cancer.gov/occam/bestcase.html).

An option available in some situations is the RCT of individual patients. In such a study, the patient acts as his or her own control. This approach (also termed the “n-of-1” study) was formalized at McMaster University, Hamilton, Ont., where it proved so popular that a clinical service was established to facilitate its use in the community. The McMaster group cautioned that a positive n-of-1 experiment does not prove that a treatment is effective for all patients with a given disorder. But the plausibility-building effect of several n-of-1 studies with coherent, positive results would be undeniable.
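The design can be made concrete: an n-of-1 study gives one patient pairs of treatment/placebo periods with the order randomized within each pair. A minimal sketch (the period count and labels are hypothetical, not taken from the McMaster protocol):

```python
import random

def n_of_1_schedule(n_pairs=3, seed=None):
    """Randomized treatment (T) / placebo (P) schedule for a
    single patient: each pair's internal order is randomized,
    so the patient serves as his or her own control."""
    rng = random.Random(seed)
    schedule = []
    for _ in range(n_pairs):
        pair = ["T", "P"]
        rng.shuffle(pair)      # randomize order within this pair
        schedule.extend(pair)
    return schedule

print(n_of_1_schedule(seed=7))  # six period labels, one T and one P per pair
```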

More convincing than a best case series is a consecutive case series, because it permits more general conclusions. This would be attempted only after the therapeutic algorithm, the appropriate patient population and the outcome variables of interest were clearly defined. Where would a consecutive case series lead? To the publication of further noncontrolled consecutive case series by independent investigators. Several independent positive consecutive case series would further increase plausibility, as long as negative studies were also made public. White and Ernst argue strongly for the use of formal, noncontrolled clinical trials in CAM research. The proper use of case studies and case series requires as much intellectual rigour as a clinical trial, but is more practical for CAM investigators in the field who typically lack funding, protected time or access to the sophisticated resources available in medical school clinical trial units, and whose patients may not be interested in participating in RCTs.

The first priority in a practical effort to focus CAM research productively should be to create mechanisms that foster research into clinical plausibility building. What are needed are (1) the development of formal standards and guidelines for formal case studies, case series and noncontrolled clinical trials in CAM; (2) the recruitment of expert consultants to advise and instruct CAM practitioners; (3) the creation of educational opportunities for CAM practitioners to develop expertise in research into plausibility building; (4) grant competitions for protocols for research into plausibility building; and (5) a central registry to record the results (both positive and negative) of plausibility-building research studies.

Without such a vetting process in place, I believe there is a real danger that public funds earmarked in good faith for CAM therapy research will be dissipated in a variety of ways: in descriptive sociology, in pseudo-CAM projects that are really artfully repackaged mainstream research, and in large, mostly futile RCTs of CAM therapies selected on the basis of advocacy rather than merit.

In summary, the way to prove the efficacy of most CAM therapies is with well-designed RCTs, and there is no reason to believe that clinical trial designs cannot be developed that allow even complex CAM therapies to be evaluated. The procedures involved can be sophisticated, complex and expensive, however, and this confronts investigators with the challenge of identifying which of the myriad of existing and future CAM therapies merit the effort and expense of definitive RCT evaluation. The challenge should be met as it is in conventional drug discovery, through plausibility-building research. Whenever possible, efforts should be made to establish a credible mechanism of action for a candidate CAM therapy, because this will increase its biological plausibility and reduce the risk of false-negative RCT results. When biological plausibility is lacking, clinical plausibility alone must be the basis for determining whether or not to proceed to the costlier phase of definitive RCTs. The creation of a plausibility-building CAM research strategy will require thought, instruction, funding, and collaboration among conventional clinical investigators and CAM advocates. The advantages are many: fairness, low cost and the creation of rules of engagement for CAM evaluation that foster balanced partnerships between CAM advocates and mainstream clinical scientists.




BMJ  2004;328:514-517 (28 February), doi:10.1136/bmj.328.7438.514

Education and debate

Where is the evidence that animal research benefits humans?

Pandora Pound, research fellow1, Shah Ebrahim, professor1, Peter Sandercock, professor2, Michael B Bracken, professor3, Ian Roberts, professor4, Reviewing Animal Trials Systematically (RATS) Group

1 Department of Social Medicine, University of Bristol, Bristol BS8 2PR, 2 Department of Clinical Neurosciences, University of Edinburgh, Western General Hospital, Edinburgh EH4 2XU, 3 Center for Perinatal, Pediatric, and Environmental Epidemiology, Yale University School of Medicine, New Haven, CT 06520 USA, 4 London School of Hygiene and Tropical Medicine, London WC1B 3DP

Correspondence to: I Roberts [email protected]

Much animal research into potential treatments for humans is wasted because it is poorly conducted and not evaluated through systematic reviews

Clinicians and the public often consider it axiomatic that animal research has contributed to the treatment of human disease, yet little evidence is available to support this view. Few methods exist for evaluating the clinical relevance or importance of basic animal research, and so its clinical (as distinct from scientific) contribution remains uncertain.1 Anecdotal evidence or unsupported claims are often used as justification—for example, statements that the need for animal research is “self evident”2 or that “Animal experimentation is a valuable research method which has proved itself over time.”3 Such statements are an inadequate form of evidence for such a controversial area of research. We argue that systematic reviews of existing and future research are needed.

Assessing animal research

Despite the lack of systematic evidence for its effectiveness, basic animal research in the United Kingdom receives much more funding than clinical research.1 4 5 Given this, and because the public accepts animal research only on the assumption that it benefits humans,6 the clinical relevance of animal experiments needs urgent clarification.

Several methods are available to evaluate animal research. These include historical analysis,7 critiques of animal models,8 investigations into the development of treatments,5 surveys of clinicians’ views,9 and citation analyses.10 However, perhaps the best way of producing evidence about the value of animal research is to conduct systematic reviews of animal studies and, where possible, compare the results of these with the results of the corresponding clinical trials. So what do studies that have done this show?

Systematic reviews of animal research

We searched Medline to identify published systematic reviews of animal experiments (see bmj.com for the search strategy). The search identified 277 possible papers, of which 22 were reports of systematic reviews. We are also aware of one recently published study and two unpublished studies, bringing the total to 25. Three further studies are in progress (M Macleod, personal communication).

Seven of the 25 papers were systematic reviews of animal studies that had been conducted to find out how the animal research had informed the clinical research. Two of these reported on the same group of studies, giving six reviews in this category. A further 10 papers were systematic reviews of animal studies conducted to assess the evidence for proceeding to clinical trials or to establish an evidence base.w1-w10 Eight systematically reviewed both the animal and human studies in a particular field, again before clinical trials had taken place.w11-w18 We focus on the six studies in the first category because these shed the most light on the contribution that animal research makes to clinical medicine.

Calcium channel blockers for stroke

The first systematic review of animal research, by Horn and colleagues,11 was conducted after their systematic review of clinical trials of nimodipine for acute stroke found no evidence of a clinically important effect.12 Their review of the animal experiments with nimodipine found no convincing evidence of benefit to support the decision to start clinical trials. Horn et al also found that the methodological quality of the animal studies included in the review was poor, commenting on the infrequency of randomisation of animals, lack of blinded assessment, and failure to measure outcomes beyond the acute phase. Furthermore, the animal and clinical studies of nimodipine ran simultaneously rather than sequentially, as would be expected if the animal experiments were to inform the human trials.

Low level laser therapy for wound healing

Lucas et al investigated the basis for clinical trials of low level laser therapy to improve wound healing after the treatment was found ineffective in humans.13 The authors found that the animal studies did not provide unequivocal evidence to substantiate the decision to conduct clinical trials, that the methodological quality of the animal studies was poor, and that animal and clinical studies were conducted simultaneously rather than sequentially. They commented on the relevance of the animal models to the real clinical situations, noting that the animal models excluded common problems associated with wound healing in humans such as ischaemia, infection, and necrotic debris.

Fluid resuscitation for bleeding

Roberts et al14 and Mapstone et al15 assessed the animal evidence in support of fluid resuscitation for bleeding trauma patients. Their systematic review of clinical trials of fluid resuscitation had previously found no evidence that the practice improved outcome and the possibility that it might be harmful.16 The review of animal research found that fluid resuscitation reduced the risk of death in animal models of severe haemorrhage but increased the risk of death in those with less severe haemorrhage. They concluded that excessive fluid resuscitation in animals can be harmful in some situations.

The review again highlighted the poor methodological quality of individual animal studies. Moreover, because the animal experiments were small, the effect estimates from these studies were imprecise. The authors argued that systematic reviews and meta-analyses of previous animal experiments would ensure that new animal experiments do not set out to answer questions that have already been answered and, by increasing the precision of estimates of treatment effects, could reduce the number of animals needed in future experiments.
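The precision point can be illustrated with the standard fixed-effect, inverse-variance pooled estimate: the pooled variance is always smaller than any single study's, so pooling existing experiments can deliver a precision that would otherwise require more animals. A minimal sketch with hypothetical effect sizes:

```python
def pool_fixed_effect(effects, variances):
    """Inverse-variance weighted (fixed-effect) meta-analysis.

    Returns the pooled effect and its variance; the pooled
    variance 1 / sum(1/v_i) is smaller than every input variance.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    return pooled, 1.0 / sum(weights)

# Three hypothetical small animal experiments: log odds ratios
# of death, each individually imprecise.
effects = [-0.2, 0.1, -0.3]
variances = [0.25, 0.30, 0.20]
pooled, pooled_var = pool_fixed_effect(effects, variances)
print(pooled, pooled_var)  # pooled_var < 0.20, the smallest single-study variance
```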

Thrombolysis for stroke

An unpublished study by Ciccone and Candelise systematically reviewed randomised controlled experiments of animal stroke models that compared the effects of thrombolytic drugs with placebo or open control.17 The background to the study was the finding that clinical trials of thrombolysis for acute stroke had found a substantial excess risk of intracranial haemorrhage that had not been predicted by individual animal studies. When the animal data were pooled, a significant difference was found in the rate of intracranial haemorrhage between animals in the control and treatment groups.

The validity of animal research needs investigation


Stress and coronary heart disease

Petticrew and Davey Smith examined randomised and observational studies of the effects of hierarchies and stress on coronary heart disease in primates.18 They found no convincing evidence of a relation between social status and experimentally induced stress and coronary heart disease. Among male primates, dominant rather than subordinate social position seemed to be associated with heart disease, contradicting a large body of observational epidemiological studies of stress and coronary heart disease.19 The authors noted that social epidemiologists had cited only the studies that supported their prior views of a positive association and had ignored studies with negative results. Within psychosocial epidemiology, citation was highly selective, producing the misleading impression that primate studies support the view that the health effects of inequality are manifest through psychosocial mechanisms. The authors concluded that the primate data do not support such major public health claims.

Endothelin receptor blockade in heart failure

Lee and colleagues20 conducted a systematic review and meta-analysis of controlled trials of endothelin receptor blockade in animal models of heart failure, after clinical trials in humans had found no evidence of benefit. They found that the animal studies were small and poorly designed with inconsistent use of randomisation and blinding. Pooled analyses of the animal data provided no evidence of benefit overall and showed a tendency towards increased mortality with early administration. The authors called for greater use of systematic reviews in preclinical drug evaluation.


The clinical trials of nimodipine and low level laser therapy were conducted concurrently with the animal studies, while the clinical trials of fluid resuscitation, thrombolytic therapy, and endothelin receptor blockade went ahead despite evidence of harm from the animal studies. This suggests that the animal data were regarded as irrelevant, calling into question why the studies were done in the first place and seriously undermining the principle that animal experiments are necessary to inform clinical medicine.

Furthermore, many of the existing animal experiments were poorly designed. Animal experiments can inform decisions about what treatments should be taken forward in clinical trials only if their results are valid and precise and if the animal studies are conducted before clinical trials are started. Biased or imprecise results from animal experiments may result in the testing of biologically inert or even harmful substances in clinical trials, thus exposing patients to unnecessary risk and wasting scarce research funds. Moreover, if animal experiments fail to inform medical research, or if the quality of the experiments is so poor as to render the findings inconclusive, the research will have been conducted unnecessarily. Investigating the validity of animal experiments is therefore essential for both human health and animals.

Although randomisation and blinding are accepted as standard in clinical trials, no such standards exist for animal studies.21 Bebarta et al found that animal studies that did not report randomisation and blinding were more likely to report a treatment effect than studies that used these methods.21 The box summarises further potential methodological problems.
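Randomisation of animals to treatment and control requires no special machinery. As one illustration (a hypothetical sketch, not a procedure described in the studies reviewed), permuted-block allocation keeps group sizes balanced even if an experiment is stopped early:

```python
import random

def permuted_block_allocation(n, block_size=4, seed=2024):
    """Allocate n subjects to 'treatment'/'control' in permuted blocks.

    Each block contains equal numbers of each arm in random order, so
    the two groups stay balanced throughout the experiment. A fixed
    seed makes the allocation list reproducible and auditable.
    """
    assert block_size % 2 == 0, "block size must be even for 1:1 allocation"
    rng = random.Random(seed)
    allocation = []
    while len(allocation) < n:
        block = ["treatment"] * (block_size // 2) + ["control"] * (block_size // 2)
        rng.shuffle(block)
        allocation.extend(block)
    return allocation[:n]

print(permuted_block_allocation(8))
```

Generating the full allocation list before the first animal is handled, and having outcomes assessed by someone unaware of it, would address the two deficiencies Bebarta et al identified.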

Even if animal experiments provide valid results and sufficiently precise estimates of treatment effects to discount the effects of chance, the extent to which the results can reasonably be generalised to humans remains open to question. Perhaps it was because of this uncertainty that the data from animal studies were disregarded in the above cases.


The contribution of animal studies to clinical medicine requires urgent formal evaluation. Systematic reviews and meta-analyses of the existing animal experiments would represent an important step forward in this process. Systematic reviews (particularly cumulative meta-analyses of ongoing experiments22) could more efficiently determine when a valid conclusion has been reached from the animal studies. The UK Medical Research Council requires researchers who are planning clinical trials to reference systematic reviews of previous related work.23 A requirement to reference, or where necessary conduct, systematic reviews of relevant animal studies before clinical trials would make it difficult to disregard or selectively cite the evidence from animal studies, or for animal and human trials to proceed simultaneously.

By ensuring that animal experiments do not set out to answer questions that have already been answered, systematic reviews support the principle of reduction. This principle, outlined in the “three Rs” (reduction and replacement of animals and refinement of procedures), is held to be a cornerstone of animal research.24 Systematic reviews would also be relevant in veterinary medicine to evaluate the efficacy of treatments for sick animals.

Systematic reviews of animal research would increase the precision of estimated treatment effects used in calculating the power of proposed human trials, reducing the risk of false negative results. They can also throw light on the process of translation (or its lack) between animal and clinical research, as well as offering the opportunity to review the appropriateness of the animal models used. Finally, the results of the animal and human research need to be compared to see how well one predicts the other.
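The link between the precision of an effect estimate and trial power can be seen in the standard normal-approximation sample-size formula for a two-arm comparison of means. The effect sizes below are illustrative only: a trial powered against an inflated animal estimate will be much smaller, and therefore more prone to a false negative, than one powered against a realistic pooled estimate.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Sample size per arm for a two-sample comparison of means.

    Normal-approximation formula:
    n = 2 * (z_{1-alpha/2} + z_{power})^2 * sigma^2 / delta^2
    where delta is the assumed difference in means and sigma the
    common standard deviation.
    """
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)  # two-sided significance threshold
    z_b = z.inv_cdf(power)          # target power
    return ceil(2 * (z_a + z_b) ** 2 * sigma ** 2 / delta ** 2)

# Powering for an optimistic 0.5 SD effect versus a realistic 0.3 SD effect:
print(n_per_group(delta=0.5, sigma=1.0))  # 63 per arm
print(n_per_group(delta=0.3, sigma=1.0))  # 175 per arm
```

A trial recruiting 63 patients per arm on the strength of an over-optimistic animal estimate would have far less than 80% power against the true 0.3 SD effect.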

In the 1970s Comroe and Dripps conducted an ambitious study to determine the relative contributions of basic and clinical research to important medical advances.25 They concluded that 62% of key articles that led to advances were the result of basic research. In the 1980s Smith highlighted many of the methodological shortcomings of Comroe and Dripps’ study.26 He concluded that the study was unscientific but that the main lesson to be gained is that research itself needs to be researched so that scarce funds can be allotted more intelligently rather than on the basis of anecdotal evidence. More recently, Grant et al noted that Research Council expenditure on basic research increased in the United Kingdom from 42% of the total civil research and development in 1991-2 to 61% in 1998-9.5 While recognising that it would be difficult to attribute this increase to the work of Comroe and Dripps, they observe that their study is often quoted in support of increased funding for basic biomedical research. Grant et al attempted to replicate the Comroe and Dripps study and found that it was “not repeatable, reliable, or valid and thus is an insufficient evidence base for increased expenditure on basic biomedical research.”5

Summary points

The value of animal research into potential human treatments needs urgent rigorous evaluation

Systematic reviews can provide important insights into the validity of animal research

The few existing reviews have highlighted deficiencies such as animal and clinical trials being conducted simultaneously

Many animal studies were of poor methodological quality

Systematic reviews should become routine to ensure the best use of existing animal data as well as improve the estimates of effect from animal experiments

The Cochrane and Campbell Collaborations for systematically reviewing evidence in health care and social science offer models for how the literature on animal experiments might be systematically organised and examined.27 28 Several sources of potential bias exist in systematic reviews—for example, pharmaceutical industry animal trials are likely to be excluded from the public domain for commercial reasons, resulting in publication bias in systematic reviews—but space precludes considering them here. Ideally, new animal studies should not be conducted until the best use has been made of existing animal studies and until their validity and generalisability to clinical medicine has been assessed.

Citation: Pandora Pound, Shah Ebrahim, Peter Sandercock, Michael B Bracken, Ian Roberts. “Where is the evidence that animal research benefits humans?” BMJ 2004;328:514-517, doi:10.1136/bmj.328.7438.514


Citation: Daniel G Hackam. “Translating animal research into clinical benefit.” BMJ 2007;334:163-164, doi:10.1136/bmj.39104.362951.80


Translating animal research into clinical benefit

Poor methodological standards in animal studies mean that positive results rarely translate to the clinical domain

Most treatments are initially tested on animals for several reasons. Firstly, animal studies provide a degree of environmental and genetic manipulation rarely feasible in humans.1 Secondly, it may not be necessary to test new treatments on humans if preliminary testing on animals shows that they are not clinically useful. Thirdly, regulatory authorities concerned with public protection require extensive animal testing to screen new treatments for toxicity and to establish safety. Finally, animal studies provide unique insights into the pathophysiology and aetiology of disease, and often reveal novel targets for directed treatments. Yet in a systematic review reported in this week’s BMJ Perel and colleagues find that therapeutic efficacy in animals often does not translate to the clinical domain.2

The authors conducted meta-analyses of all available animal data for six interventions that showed definitive proof of benefit or harm in humans. For three of the interventions—corticosteroids for brain injury, antifibrinolytics in haemorrhage, and tirilazad for acute ischaemic stroke—they found major discordance between the results of the animal experiments and human trials. Equally concerning, they found consistent methodological flaws throughout the animal data, irrespective of the intervention or disease studied. For example, only eight of the 113 animal studies on thrombolysis for stroke reported a sample size calculation, a fundamental step in helping to ensure an appropriately powered precise estimate of effect. In addition, the use of randomisation, concealed allocation, and blinded outcome assessment—standards that are considered the norm when planning and reporting modern human clinical trials—were inconsistent in the animal studies.

A limitation of the review is that only six interventions for six conditions were analysed; this raises questions about its applicability across the spectrum of experimental medicine. Others have found consistent results, however. In an overview of similar correlative reviews between animal studies and human trials, Pound and colleagues found that the results of only one—thrombolytics for acute ischaemic stroke—showed similar findings for humans and animals.3 In our systematic review of 76 highly cited (and therefore probably influential) animal studies, we found that only just over a third translated at the level of human randomised trials.4 Similar results have been reported in cancer research.5

Why then are the results of animal studies often not replicated in the clinical domain? Several possible explanations exist. A consistent finding is the presence of methodological biases in animal experimentation; the lack of uniform requirements for reporting animal data has compounded this problem. A series of systematic reviews has shown that the effect size of animal studies is sensitive to the quality of the study and publication bias.6 7 8 A review of 290 animal experiments presented at emergency medicine meetings found that animal studies that did not use randomisation or blinding were much more likely to report a treatment effect than studies that were randomised or blinded.9

A second explanation is that animal models may not adequately mimic human pathophysiology. Test animals are often young, rarely have comorbidities, and are not exposed to the range of competing (and interacting) interventions that humans often receive. The timing, route, and formulation of the intervention may also introduce problems. Most animal experiments have a limited sample size. Animal studies with small sample sizes are more likely to report higher estimates of effect than studies with larger numbers; this distortion usually regresses when all available studies are analysed in aggregate.10 11 To compound the problem, investigators may select positive animal data but ignore equally valid but negative work when planning clinical trials, a phenomenon known as optimism bias.12

What can be done to remedy this situation? Firstly, uniform reporting requirements are needed urgently and would improve the quality of animal research; as in the clinical research world, this would require cooperation between investigators, editors, and funders of basic scientific research. A more immediate solution is to promote rigorous systematic reviews of experimental treatments before clinical trials begin. Many clinical trials would probably not have gone ahead if all the data had been subjected to meta-analysis. Such reviews would also provide robust estimates of effect size and variance for adequately powering randomised trials.

A third solution, which Perel and colleagues call for, is a system for registering animal experiments, analogous to that for clinical trials. This would help to reduce publication bias and provide a more informed view before proceeding to clinical trials. Until such improvements occur, it seems prudent to be critical and cautious about the applicability of animal data to the clinical domain.

Daniel G Hackam, clinical pharmacologist


Letters

Citation: Timothy I Musch, Robert G Carroll, Armin Just, Pascale H Lane, William T Talman. “A broader view of animal research.” BMJ 2007;334:274, doi:10.1136/bmj.39115.390984.1F

A broader view of animal research

Perel et al examined only immediate preclinical testing of new drug therapies,1 but animal research aids medical science in many more ways. Animal studies play a part in the initial development of candidate drugs and in the development and testing of medical devices and surgical procedures. Even more crucial, animal research informs clinical research by building the foundation of biological knowledge. Basic research that expands our understanding of how life systems function indicates to clinicians not only what direction to pursue but what directions are possible.

Although animal research informs clinical research, its circumstances and experimental goals differ from those of clinical research. Thus their protocols and experimental designs necessarily differ. Animal studies generally seek a mechanism of action for treatment, rather than treatment efficacy. They are usually conducted on defined, genetically homogeneous subjects with near perfect compliance, as opposed to the large-scale diversity of genetics and behaviour of a clinical population. Some clinically necessary procedures, such as double blinding, serve little purpose in an animal study, since rats are not susceptible to the placebo effect. Furthermore, accepted standards for animal welfare as well as many national and institutional protocols insist that sample sizes of animal studies be small. Despite these differences, the protocol used by Perel et al to determine that the animal studies were of “poor” quality was based, for the most part, on standards meant for large clinical trials.

Timothy I Musch, chair, Animal Care and Experimentation Committee, American Physiological Society1, Robert G Carroll2, Armin Just3, Pascale H Lane4, William T Talman, chair, FASEB Animal Issues Committee5

1 Department of Anatomy and Physiology, College of Veterinary Medicine, Kansas State University, Manhattan, KS 66506, USA, 2 Brody School of Medicine, East Carolina University, 3 Department of Cell and Molecular Physiology, University of North Carolina at Chapel Hill, 4 Department of Pediatrics, University of Nebraska Medical Center, 5 Department of Veterans Affairs Medical Center, University of Iowa College of Medicine


Citation: Ian Roberts, Irene Kwan, Phillip Evans, Steven Haig. “Does animal experimentation inform human healthcare? Observations from a systematic review of international animal experiments on fluid resuscitation.” BMJ 2002;324:474-476, doi:10.1136/bmj.324.7335.474

Education and debate

Does animal experimentation inform human healthcare? Observations from a systematic review of international animal experiments on fluid resuscitation

Ian Roberts, professor of epidemiology and public health a, Irene Kwan, research fellow a, Phillip Evans, consultant in accident and emergency b, Steven Haig, senior house officer b.

a Cochrane Injuries Group, Public Health Intervention Research Unit, London School of Hygiene and Tropical Medicine, London WC1B 3DP, b Accident and Emergency Department, Leicester Royal Infirmary, University Hospitals of Leicester NHS Trust, Leicester LE1 5WW

Correspondence to: I Roberts [email protected]

Animal models are often used to test the effectiveness of a drug or procedure before proceeding to clinical trials. One reason for use of animal models is that they allow researchers to focus on particular pathological processes without the confounding effects of other injuries and treatments. However, it is essential that their results are valid and precise. Biased or imprecise results from animal experiments may result in clinical trials of biologically inert or even harmful substances, thus exposing patients to unnecessary risk and wasting scarce research resources. Moreover, if animal experiments fail to inform medical research then the animals suffer unnecessarily.

The Italian pathologist Pietro Croce criticised vivisection on scientific grounds. He argued that results from animal experiments cannot be applied to humans because of the biological differences between animals and humans and because the results of animal experiments are too dependent on the type of animal model used.1 Croce’s arguments were based on insights into zoology and pathophysiology. In this paper, we make some methodological observations on animal experiments. Our observations were made in the context of a systematic review of all available randomised controlled trials of fluid resuscitation in animal models of uncontrolled bleeding. We conducted this review because we wanted to assess the scientific basis for fluid resuscitation. A previous systematic review of randomised trials of fluid resuscitation in bleeding trauma patients had provided no evidence that fluid resuscitation improved outcome.2

Has all the evidence been assessed?

Although each individual animal experiment provides little reliable information on the effectiveness of fluid resuscitation, each contributes to the total body of evidence. Any inferences should be based on all the evidence.21 A 1996 narrative review of fluid resuscitation in animal experiments included only nine of the 24 trials (38%) that were available at that time.22

Systematic reviews and meta-analyses of animal experiments are uncommon. About 1 in 1000 Medline records pertaining to human research is tagged as a meta-analysis compared with 1 in 10 000 records pertaining to animal research. In his book The Principles of Humane Experimental Technique, William Russell proposed the principle of reduction, that is, the use of methods to “reduce the number of animals needed to obtain information of a given amount and precision.”23 Meta-analyses of the results of previous animal experiments would increase the precision of estimates of treatment effects and therefore reduce the number of animals needed in future experiments.

Publication bias may be as potent a threat to validity in systematic reviews of animal experiments as it is in systematic reviews of clinical trials. We contacted the authors of included trials to ask about unpublished studies but none were identified. However, it would be surprising if there were no unpublished trials meeting our inclusion criteria. Prospective registration of animal experiments at inception may help to avoid the problem of publication bias.24 In the United Kingdom, the Animals (Scientific Procedures) Act 1986 regulates “any experimental or other scientific procedure applied to a protected animal which may have the effect of causing that animal pain, suffering, distress, or lasting harm.” Researchers must have a project licence from the Home Office before conducting any animal research, and the licence application describes the experimental protocol. These data could be used for prospective registration of all animal experiments.

Systematic reviews of animal models could, like ours, include a range of animal species and models. If the results were consistent across species and models this would indicate that they might also apply in humans. Since the primary aim of animal experimentation is to inform human experimentation, this would be valuable information.

We found substantial statistical heterogeneity in our meta-analysis, making it impossible to interpret the odds ratios. Investigation of heterogeneity is essential and can increase the scientific and clinical relevance of the results. In our meta-analysis, stratification according to how uncontrolled bleeding was induced accounted for a large amount of the heterogeneity, but these results need to be interpreted with caution. Meta-analytic subgroup analyses are akin to subgroup analyses within trials and are prone to bias. Although we specified in our protocol that the analyses would be stratified according to the animal model used, we did not specify that we would stratify according to where the tail was cut. Nevertheless, the meta-analysis provides an insight into model dependency that could be taken into account in future animal experiments and when considering whether the results can be generalised to humans.
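Statistical heterogeneity of the kind described here is conventionally quantified with Cochran's Q and Higgins' I² statistic. A minimal sketch, using hypothetical strata with effects in opposite directions (as in models of severe versus less severe haemorrhage), shows how such conflict shows up numerically:

```python
def heterogeneity(studies):
    """Cochran's Q and Higgins' I^2 for a set of (estimate, se) pairs.

    Q is the weighted sum of squared deviations of each study from the
    fixed-effect pooled estimate; I^2 expresses the share of total
    variation attributable to between-study heterogeneity rather than
    chance (0% = none, values above ~75% are conventionally 'high').
    """
    weights = [1.0 / se ** 2 for _, se in studies]
    pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
    q = sum(w * (est - pooled) ** 2 for (est, _), w in zip(studies, weights))
    df = len(studies) - 1
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0
    return q, i2

# Hypothetical log odds ratios: two strata show benefit, two show harm.
q, i2 = heterogeneity([(-0.8, 0.3), (-0.6, 0.4), (0.7, 0.3), (0.5, 0.35)])
print(f"Q = {q:.1f}, I^2 = {i2:.0%}")
```

With estimates pointing in opposite directions, I² exceeds 80%, signalling that a single pooled odds ratio would be uninterpretable and that stratification (for example, by how bleeding was induced) is needed.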

Implications for human health

Animal experiments can inform human health care only if their results are valid and can be generalised. However, little information is available on the methodological determinants of bias in animal experiments, and in our example the sample sizes were too small to obtain precise estimates of the effects of the interventions. Systematic reviews of animal experiments would help to ensure that animal experiments do not set out to answer questions that have already been answered, reduce bias and increase precision, and provide reassurance about whether the results can be generalised. Prospective registration of animal experiments would help to avoid publication bias. In a recent editorial, Smith promoted the three Rs of animal research first suggested by William Russell: replacement, reduction, and refinement.25  On methodological grounds, animal experimentation would better contribute to human health care if we promoted registration, randomisation, and systematic reviews.