Psychiatric Times, Vol 25 No 4

Why Evidence-Based Medicine Can, and Must, Be Applied to Psychiatry

In the second century AD, a brilliant physician had a powerful idea: 4 humours, in varied combinations, produced all illness. From that date until the late 19th century, Galen's theory ruled medicine. Its corollary was that the treatment of disease involved getting the humours back in order; releasing them through bloodletting was the most common procedure and was often augmented with other means of freeing bodily fluids (eg, purgatives and laxatives). For 17 centuries, physicians subscribed to this wondrous biological theory of disease. We bled our patients until they lost their entire blood supply, we forced them to throw up and to defecate and urinate, we alternated extremely hot showers with extremely frigid ones, all in the name of normalizing those humours.1 Yet it all proved to be wrong.

This is not a Whiggish interpretation of history: it is not simply a matter of "they were wrong and we are right." Galen, Avicenna, Benjamin Rush: these were far more intelligent and creative men than we are. In fact, not only am I not Whiggish, but I believe we are repeating these past errors, which have sunk deep into the flesh of the medical profession. As Sir George Pickering, Regius Professor of Medicine at Oxford, said in 1949,

Modern medicine still preserves much of the attitude of mind of the school men of the Middle Ages. It tends to be omniscient rather than admit ignorance, to encourage speculation not solidly backed by evidence, and to be indifferent to the proof or disproof of hypothesis. It is to this legacy of the Middle Ages that may be attributed the phenomenon... [of] "the mysterious viability of the false."2

We see this influence even today in articles such as Levine and Fink's accompanying critique of evidence-based medicine (EBM).

There are and have always been 2 basic philosophies of medicine. One is Galenic: There is only one correct theory. For our purposes, the content of the theory does not matter (it can be about humours, serotonin and dopamine neurotransmitters,3 electroconvulsive therapy [ECT],4 or even psychoanalysis); what matters is that hardly any scientific theory (especially in medicine) is absolutely right.5 The error is not so much in the content as in the method of this way of thinking. It focuses on theory, not reality; on beliefs, not facts; and on concepts, not clinical observations.

There is a second approach that is much more humble and simple: the idea that clinical observation should precede any theory; that theories should be sacrificed to observations, and not vice versa; that clinical realities are more basic than any theory; and that treatments should also be based on observations, not ideas.

This approach was first promulgated clearly by Hippocrates and his school in the 4th century BC, but 500 years later Galen demolished Hippocratic medicine (while claiming its mantle), and it lay dormant until revived (more than 1000 years after Galen) during the Enlightenment.6-9

Why all this historical background in a discussion of EBM? Because it is important to know what the options are and what the stakes are. We are Hippocratic or we are Galenic: either we value clinical observation or we value theories. The whole debate may come down to this distinction.

Perhaps readers, including critics of EBM, will claim they value clinical observation. If so, how can we validate this value? How do we know when our observations are correct and when they are false?

Confounding bias
The problem lies in confounding bias.10 As clinicians, we cannot believe our eyes. Confounding bias means that in the course of our clinical experience, there are many other factors of which we are not aware that can impact what we observe. Thus, it can appear that something is the case when it is not or that some treatment is improving matters when it is not. Furthermore, these confounding factors are present most of the time.

Perhaps most clinicians would admit that confounding factors exist, but it is important to examine both the clinical and scientific implications.

Clinically, the reality of confounding bias teaches us the deep need for a Hippocratic humility versus a Galenic arrogance (Galen once said: "My treatment only fails in incurable cases"), a recognition that we might be wrong (indeed, we often are, even in our most definitive clinical experiences). The end of Galenic treatments came about in the 19th century because of the development of EBM, through the "numerical method" of Pierre Louis, who showed, by counting outcomes in about 40 patients rather than relying on single cases or clinical experience, that bleeding hastened death in pneumonia rather than curing it.1

Many have thought Freud right for a century, but patients with manic-depressive illness who were given daily psychoanalysis instead of ECT or lithium (Eskalith, Lithobid) probably suffered and even died needlessly. The source of the greatest medical advances was EBM, not the exquisite case study, the brilliance of any specific person (be he Freud, Emil Kraepelin, or even our most prominent professors today), or decades of clinical experience. A. Bradford Hill, the founder of modern clinical epidemiology, made the point that the common distinction between clinical experience and clinical research is a false one.2 After all, clinical experience is based on the recollection of usually a few cases; clinical research is simply the claim that such recollection is biased and that the remedy is to collect more than just a few cases, comparing them in ways that reduce bias. The latter point entails EBM (see below).

Truths of theory are transient. Galen is out-of-date, and so is the much-vaunted catecholamine theory of depression; today's most sophisticated neurobiology will be passé by the end of the decade. Clinical observation and research, in contrast, are steadier. That same melancholia that Hippocrates described can be discerned in today's major depression; the mania that Aretaeus of Cappadocia described in the first century AD is visible in current mania. Obviously, social and cultural factors come into play, and such presentations vary somewhat in different epochs, as social constructionists will point out.11 Yet Karl Marx made this point long before Michel Foucault, and the limits of a purely social/economic interpretation of human existence should be obvious: the presence of social factors does not reduce an entity to nothing but a social construction.12 Clinical research is the solid ground of medicine; biological theory is a necessary but changing superstructure. If these relationships are reversed, then mere speculation takes over and the more solid ground of science (interpreted nonpositivistically) is lost.

Scientifically, confounding bias leads to the conclusion that any observation, even the most repeated and detailed, can be (and often is) wrong; thus, valid clinical judgments can be made only after removing confounding factors.10,13 Randomization, developed by the biologist Ronald Fisher in the 1920s and thereafter honed by Hill,14 is the most effective way to remove confounding bias. Hormone replacement therapy was long held to be the cure for many illnesses in women. Decades of experience with millions of patients, huge observational studies with thousands of subjects, and the almost unanimous consensus of experts all came to naught when randomized studies proved the futility of the belief in that treatment (not to mention its carcinogenic harm).15
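
To make the logic concrete, here is a minimal simulation sketch (Python; the variable names, probabilities, and effect sizes are invented for illustration and are not drawn from any study). It builds a world in which a treatment truly does nothing, but sicker patients are both more likely to receive it and less likely to recover; naive observation then makes the useless treatment look harmful, while randomization recovers the truth.

```python
import random

random.seed(0)

def simulate(randomize: bool, n: int = 10_000) -> float:
    """Return the apparent benefit of a treatment that truly does nothing.

    Severity is the confounder: sicker patients recover less often and,
    in ordinary practice, are more likely to receive the treatment.
    """
    treated, untreated = [], []
    for _ in range(n):
        severity = random.random()                    # 0 = mild, 1 = severe
        if randomize:
            gets_drug = random.random() < 0.5         # coin flip breaks the severity-treatment link
        else:
            gets_drug = random.random() < severity    # clinicians preferentially treat the sicker
        recovered = random.random() < (1 - severity)  # outcome depends only on severity
        (treated if gets_drug else untreated).append(recovered)
    return sum(treated) / len(treated) - sum(untreated) / len(untreated)

print(f"Observational apparent effect: {simulate(randomize=False):+.2f}")  # about -0.33: drug 'looks' harmful
print(f"Randomized apparent effect:    {simulate(randomize=True):+.2f}")   # about  0.00: the truth
```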

If we accept that clinical observation (rather than theory) is the core of medicine, that confounding bias afflicts it, and that randomization is the best solution, then we have accepted EBM. That is the core of EBM, and the rationale for the levels of evidence in which randomized data are more valid than observational data. These are new methods (the first randomized clinical trial [RCT] in medicine, of streptomycin, was conducted in 1948 under Hill's direction), and the major advances in medical treatment of the past 50 years are unimaginable without RCTs specifically and EBM in general. Indeed, perhaps the greatest public health advance of our era, the linking of cigarette smoking and cancer (led by Hill), was both source and sequel of EBM methods. (As to the relevance of EBM to psychiatry: among the first RCTs to follow the streptomycin trial were psychiatric trials of chlorpromazine [Thorazine] and lithium in the early 1950s.16)

In the accompanying critique of EBM, much is made of the limitations of psychiatric nosology. Yet EBM has little to do with diagnosis. EBM, as formally advanced in recent years,17 has mainly to do with treatment, not diagnosis; it focuses on treatment studies, randomization (which is relevant only to treatment, not diagnosis), and statistical techniques that relate to treatment (eg, meta-analysis, number needed to treat).17 Validating diagnoses is a matter for another field (ie, clinical epidemiology).5,18 To the extent that diagnosis is addressed at all in most of the EBM literature, it has to do with topics such as the sensitivity and specificity of diagnostic tests (eg, ventilation/perfusion scans for pulmonary embolism19) and not theoretical questions about the causes of illnesses or diagnostic criteria. One could define schizophrenia in a manner completely opposite to what is proposed in DSM-IV; assessments of treatment would still need to account for confounding bias, and the consequent validity of RCTs would still hold.
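
For readers less familiar with the treatment-focused statistics EBM emphasizes, a brief sketch follows (hypothetical numbers only) of the number needed to treat, alongside the sensitivity/specificity arithmetic that occupies the diagnostic corner of the EBM literature.

```python
def number_needed_to_treat(response_drug: float, response_control: float) -> float:
    """NNT = 1 / absolute risk reduction, the treatment-focused EBM statistic."""
    return 1.0 / (response_drug - response_control)

def sensitivity_specificity(tp: int, fp: int, fn: int, tn: int) -> tuple:
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP), the diagnostic-test statistics."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical figures, purely for illustration.
print(number_needed_to_treat(0.60, 0.40))        # 5.0 -> treat 5 patients to gain 1 extra response
print(sensitivity_specificity(90, 20, 10, 80))   # (0.9, 0.8)
```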

One can be (not unjustifiably) fed up with DSM-IV and its impact on contemporary psychiatry; that is fine, but there is no rationale for blaming it on EBM. We are dealing with the classic "true, true, and unrelated": DSM-IV has many faults (true), EBM has limitations (true), and the two have nothing to do with each other (unrelated).

The same holds for critiques of "data validity" and influences of the pharmaceutical industry, as well as methodological problems of clinical trials, such as use of appropriate dosages. None of this gets at the core rationale for EBM. Yes, for-profit doctors can conduct clinical research invalidly and unethically, as can pharmaceutical companies. For-profit, private practitioners can also conduct clinical medical practice invalidly and unethically; this does not invalidate clinical medicine. If your basketball team cheats, the whole sport is not thereby proved fraudulent. Levine and Fink's comments on dosage are simply details about how clinical trials are run; randomized trials can still be faulty for many reasons (dropouts can be high, inclusion and exclusion criteria can be wrong, and so on).20 But again this means only that those studies need to be conducted correctly. Poor driving by some people does not mean we should all give up automobiles. The core rationale for RCTs (to remove confounding bias) remains unaffected.

No ivory tower
My own view is that an important but underappreciated misuse is ivory tower EBM: the idea that unless there are double-blind, randomized, placebo-controlled data, there is no "evidence." This view reflects a rarefied positivism and a misunderstanding of the nature of evidence (and science). There is always evidence, and EBM gives us a method with which to weigh it. Even nonrandomized evidence may be correct and useful (in the absence of randomized data or with certain constraints). For example, the link between cigarette smoking and cancer is based entirely on nonrandomized evidence, but with a great deal of careful statistical analysis to assess confounding factors.21 Yet I have commonly observed academic leaders (and journal peer reviewers) disparage important observational data as mere "chart reviews" that do not represent useful evidence, a fetishization of RCTs. Because it can be misunderstood and even abused, we need informed critiques of EBM, not to destroy it but rather to improve it.
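
As a toy illustration of what such careful statistical analysis can mean in observational research, the sketch below (hypothetical records, invented numbers) stratifies on a measured confounder and compares the crude estimate with a stratum-weighted one; stratification is only the simplest of the available techniques (Mantel-Haenszel methods and regression adjustment serve the same purpose).

```python
from collections import defaultdict

# Hypothetical observational records: (confounder level, exposed, outcome).
records = [
    ("old", 1, 1), ("old", 1, 1), ("old", 1, 0), ("old", 0, 1),
    ("young", 1, 0), ("young", 0, 0), ("young", 0, 0), ("young", 0, 1),
]

def risk(rows, exposed):
    """Proportion with the outcome among rows with the given exposure status."""
    outcomes = [o for _, e, o in rows if e == exposed]
    return sum(outcomes) / len(outcomes)

strata = defaultdict(list)
for row in records:
    strata[row[0]].append(row)

crude = risk(records, 1) - risk(records, 0)
adjusted = sum(                                   # stratum-size-weighted risk difference
    (risk(rows, 1) - risk(rows, 0)) * len(rows) / len(records)
    for rows in strata.values()
)
print(f"Crude risk difference:            {crude:+.2f}")     # +0.00
print(f"Confounder-adjusted (stratified): {adjusted:+.2f}")  # -0.33
```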

In summary, "evidence-based psychiatry is an untested hypothesis" only if the history of medicine had been otherwise. It is a wayworn critique, one made to Louis in the 1840s and to Hill in the 1950s, and their responses remain adequate. To cite a 1951 British Medical Journal editorial in response to a letter writer who decried the "replacement of humanistic and clinical values by mathematical formulae":

There appear to be two ways of going to work. We may try [a treatment] on John Smith and Mary Robinson and report the results-that John Smith got well quickly and Mary Robinson, poor soul, died. We have done our best for both. Or we may try it on 30 John Smiths and 20 Mary Robinsons and report the numbers that lived or died. It is likely that the larger numbers will be found more informative than the smaller. But have we done any less for the 50 than we did for the two? . . . Wherein have we shirked our duties? In treating patients with unproved remedies we are, whether we like it or not, experimenting on human beings, and a good experiment well reported may be more ethical and entail less shirking of duty than a poor one.2

Indeed, the accompanying critique also may stem from a deeper source, as suggested by this 1951 debate: many clinicians simply distrust research; they wish to ignore disease perspectives and focus on the person rather than the patient. For them, abetted by mainstream bioethics,22 practice and research are antithetical: patients need to be protected from experimentation. The reality is, however, that this approach leads to poor clinical practice and irrelevant research. In such circumstances, perhaps patients need to be protected from practitioners. The relationship between practice and research needs to be more fluid: not 2 completely opposed armies, but rather 2 approaches to the same problem; a porous Green Zone rather than a rigid Maginot Line.23

Without the application of scientific principles to clinical research, we will have nothing but opinion: a postmodern relativist world where all is ideology. Without scientific, evidence-based clinical research, grounded in the Hippocratic tradition of careful attention to clinical observation and in its statistical correlate, the combating of confounding bias, psychiatry, like all of medicine, would bend like a mutant flower toward the earth rather than the sun: a mere shadow of what it is, an ashen image of what it could be. Not only should EBM be applied to psychiatry; if we do not apply it, we will simply return to the brackish dogmatisms of the past, to a non-Hippocratic approach to medicine that failed humanity for so long. Two millennia are long enough to test a theory.

References

1. Porter R. The Greatest Benefit to Mankind: A Medical History of Humanity. New York: Norton; 1997.
2. Hill AB. Statistical Methods in Clinical and Preventive Medicine. New York: Oxford University Press; 1962.
3. Stahl SM. Essential Psychopharmacology. Cambridge, UK: Cambridge University Press; 2005.
4. Fink M, Taylor MA. Electroconvulsive therapy: evidence and challenges. JAMA. 2007;298:330-332.
5. Ghaemi SN. The Concepts of Psychiatry. Baltimore: Johns Hopkins University Press; 2003.
6. McHugh PR. Hippocrates a la Mode. In: The Mind Has Mountains. Baltimore: Johns Hopkins Press; 2006.
7. Jouanna J. Hippocrates. Baltimore: Johns Hopkins University Press; 1999.
8. Ghaemi SN. Hippocrates and Prozac. Prim Psychiatry. 2006;13:51-58.
9. Ghaemi SN. Toward a Hippocratic psychopharmacology. Can J Psychiatry. In press.
10. Miettinen OS, Cook EF. Confounding: essence and detection. Am J Epidemiol. 1981;114:593-603.
11. Foucault M. The Birth of the Clinic: An Archaeology of Medical Perception. New York: Vintage Books; 1994.
12. Lewis A. Inquiries in Psychiatry: Clinical and Social Investigations. New York: Science House; 1967.
13. Rothman K, Greenland S. Modern Epidemiology. Philadelphia: Lippincott-Raven; 1998.
14. Armitage P. Fisher, Bradford Hill, and randomization. Int J Epidemiol. 2003;32:925-928.
15. Prentice RL, Langer RD, Stefanick ML, et al. Combined analysis of Women's Health Initiative observational and clinical trial data on postmenopausal hormone treatment and cardiovascular disease. Am J Epidemiol. 2006;163:589-599.
16. Healy D. The Creation of Psychopharmacology. Cambridge, Mass: Harvard University Press; 2001.
17. Sackett D, Strauss S, Richardson W, et al. Evidence Based Medicine. London: Churchill Livingstone; 2000.
18. Robins E, Guze SB. Establishment of diagnostic validity in psychiatric illness: its application to schizophrenia. Am J Psychiatry. 1970;126:983-987.
19. Jaeschke R, Guyatt G, Sackett DL. Users' guides to the medical literature, III: how to use an article about a diagnostic test, A: are the results of the study valid? Evidence-Based Medicine Working Group. JAMA. 1994;271:389-391.
20. Friedman LM, Furberg CD, DeMets DL. Fundamentals of Clinical Trials. New York: Springer-Verlag; 1998.
21. Soldani F, Ghaemi SN, Baldessarini R. Research methods in psychiatric treatment studies: critique and proposals. Acta Psychiatr Scand. 2005;112:1-3.
22. Cassell EJ. The principles of the Belmont report revisited: how have respect for persons, beneficence, and justice been applied to clinical medicine? Hastings Cent Rep. 2000;30:12-21.
23. Ghaemi SN, Goodwin F. The ethics of clinical innovation in psychopharmacology: challenging traditional bioethics. Philos Ethics Humanit Med. 2007;2:26.
