Practice-Based Evidence

by Siegfried Othmer | June 26th, 2008

We know from electoral politics that it is hard to defeat somebody with nobody. In order to push back against the contagion of Evidence-Based Practice, it is not enough to “go negative”; one must field a live candidate. We have in fact been living in a world largely driven by Practice-Based Evidence. This is true in particular in the field of neurofeedback, but it is also true in biological psychiatry. The majority of drug use for mental disorders is off-label; there is essentially no research basis for stacked medications, and very little research on use with children. Actual practice far outruns the available research. This state of affairs is currently being defended by a practicing psychiatrist, David Hellerstein, MD, Associate Professor of Clinical Psychiatry at Columbia University. His message is brief, so here it is in full, as presented on Medscape:

Practice-Based Evidence Rather Than Evidence-Based Practice in Psychiatry

David J. Hellerstein, MD

Physicians are barraged with demands to implement evidence-based practices (EBP).[1] Hesitate and you risk being labeled as part of the medical axis of evil. Some resisters may favor “superstition-based practice.” Not me. But, I do believe that the evidence that evidence-based practice works in actual practice often isn’t good enough.
Some EBP recommendations make good sense. Give depressed patients a full trial of antidepressant medication — OK, I can do that. Others are difficult to impossible. How do I find a local practitioner of interpersonal psychotherapy? It may work, but hardly anyone does it. The same for many psychiatric EBP recommendations, such as frequently measuring BMI and lipids in schizophrenics.[2]
Efficacy studies supporting such recommendations are often done in academic settings, then extended in real-world “effectiveness” trials. Unfortunately, much such work is shelved, or done half-heartedly. Often EBPs contain unfunded mandates — great ideas requiring resources you don’t have. Practitioners may be justified in skepticism.
Rather than more EBP, what we really need is what has been called PBE — practice-based evidence.[3] High-quality scientific evidence that is developed, refined, and implemented first in a variety of real-world settings.
My recommendations:

  1. Use real-world practices as laboratories for developing effective treatments.
  2. Revive quality improvement as an academic discipline[4] since QI projects are perfect models for PBE.
  3. Develop modular treatments that can be incorporated into existing practices.
  4. Use online technologies to the fullest — for collecting effectiveness data; training providers; educating patients; and for testing, monitoring, and rewarding effectiveness.

Eventually we will have PBE — practice-based evidence. I’ll be first in line to apply it!

So, Dr. Hellerstein, let me hasten to inform you that we already have a living model of Practice-Based Evidence in place in the field of neurofeedback. Almost immediately after EEG biofeedback first diffused out into the clinical world, actual practice began to diverge from the research-based protocols. And even the research-based protocols came to be used predominantly off-label, applied to a large variety of clinical conditions that had never been researched.

At every step of the way, the custodians of probity were offended, but they were unable to inhibit this proliferation. The gatekeepers were never able to impose their standards of quality control upon the field. This does not mean, however, that quality control was lacking. The clinical world is actually a rather ruthless environment, particularly for clinicians offering something new, and even more so for practitioners who are otherwise uncredentialed. Whereas some people can indeed thrive as charlatans, most clinicians can only succeed over the long term by actually doing good work. There is a huge price paid all around when we don’t succeed with people. Null results have much longer legs than positive results. So when a particular technique succeeds in a variety of professional hands, notice should be taken. And if clinical effectiveness can already be demonstrated, one does not have to backtrack and prove efficacy.

What makes all this so doable in the case of neurofeedback is traceable to two principal factors: 1) large effect size, and 2) a rapid time course of recovery. Our effect sizes are typically so large that one does not require big group studies to establish a claim. The “Number Needed to Treat” is often not much larger than two. (One always wants a replication!) Clinicians have produced results that were largely rejected on the grounds that they were too good to be true. But if the effects really are that large, they do not need to be buttressed by a large controlled study to be discernible.
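The arithmetic behind “Number Needed to Treat” is straightforward: NNT is the reciprocal of the absolute risk reduction, i.e. the difference between the treated and untreated response rates. A minimal sketch, using illustrative response rates chosen purely for the example (not data from this article):

```python
# Number Needed to Treat (NNT) from two response rates.
# NNT = 1 / (response rate with treatment - response rate without).
# The rates below are hypothetical, for illustration only.

def nnt(response_treated: float, response_control: float) -> float:
    """Return the number needed to treat for one additional responder."""
    arr = response_treated - response_control  # absolute risk reduction
    if arr <= 0:
        raise ValueError("treatment must outperform control for NNT to apply")
    return 1.0 / arr

# A 75% response rate against a 25% baseline yields an NNT of 2:
# on average, treating two patients produces one additional responder.
print(nnt(0.75, 0.25))  # -> 2.0
```

An NNT near two is about as large as treatment effects get; by comparison, many accepted medical interventions operate with NNTs in the tens or higher.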

The second factor is the time course of recovery. In neurofeedback, the clinician has almost immediate feedback on his work, often within the same session. And the more fragile or volatile the client, the more immediate will be the observable effects of our intervention. In both of the above respects, we are in a much better position than biological psychiatry, where the effect sizes are smaller and the feedback loops run to weeks and months, rather than minutes, hours, or days. So if the argument for Practice-Based Evidence is valid for biological psychiatry, it holds ever so much more for EEG feedback.

Since the whole issue revolves around what kind of evidence is to be admitted to the pantheon of science, why not submit the following proposition to test, to wit: “Practice-Based Evidence leads to better practice than Evidence-Based Practice.” We can now look back on nearly forty years of clinical work in EEG feedback to assess that proposition. The answer, of course, is already obvious, as we say, ‘by inspection.’ Apart from the original conditions of seizure control and hyperkinesis, all of the conditions now being ‘treated’ with EEG feedback found their origins in clinical practice. No claim has ever had to be retracted to date. Even the ‘absurd’ claims of life-transforming spiritual experiences that were made with regard to alpha training have turned out to be valid.

By contrast, what if the field suddenly found itself constrained to work only with those protocols that have been validated to the standards of Evidence-Based Practice? The field would suddenly find itself decimated and severely impoverished. Since our own work has so significantly evolved over the years, I sometimes think back on what my reaction would be if we were required to go back to our old methods, about which we were quite enthusiastic at the time. The answer is that I would consider that to be malpractice. We know too much now to ever go back, despite the fact that what we found true then of course still holds true today.

So the answer is clear. Practice-Based Evidence has led to better outcomes to date, by far, than Evidence-Based Practice, in the case of EEG feedback (and most likely also in the case of biological psychiatry). This shows that something is missing from the intellectual edifice of Evidence-Based Practice: the proof that it actually leads to better practice than Practice-Based Evidence. And it is not enough that this proof be furnished by example. It also has to be shown in the individual case, i.e. neurofeedback. This is just not doable. Our entire history speaks against it. So I harbor no uncertainty about how such a comparison would come out. But in any event, until the Evidence-Based Practice people do their homework on this, they should stand aside and allow the field to grow organically.

  1. Torrey WC, Drake RE, Dixon L, et al. Implementing evidence-based practices for persons with severe mental illnesses. Psychiatr Serv. 2001;52:45-50.
  2. Marder SR, Essock SM, Miller AL, et al. Physical health monitoring of patients with schizophrenia. Am J Psychiatry. 2004;161:1334-1349.
  3. Pincus T, Sokka T. Evidence-based practice and practice-based evidence. Nat Clin Pract Rheumatol. 2006;2:114-115.
  4. Davidoff F, Batalden P. Toward stronger evidence on quality improvement. Draft publication guidelines: the beginning of a consensus project. Qual Saf Health Care. 2005;14:319-325.

http://www.medscape.com/viewarticle/575578

Siegfried Othmer, Ph.D.

Share your thoughts in the comments section below.
