Protocols, Practice and Proof, Oh My!
by Matthew Fleischman, PhD | January 19th, 2010
“We’re not in Kansas anymore!” I thought to myself after hearing Dr. Eugene Peniston speak about brainwave therapy for alcoholism at The Menninger Foundation in Topeka in 1988. It was truly an over-the-rainbow idea that we could treat profound emotional and behavioral problems just by training individuals to modify their brainwaves. And in the twenty plus years since, I have remained committed to bringing this amazing treatment from Oz to the real world.
By any measure it has been a wonderful journey: positive client feedback, steady referrals and extensive records of client-reported improvement. I have learned some fantastic things and gained the friendship of some outstanding colleagues. I’ve also had the pleasure of playing with some amazing toys: brain maps, symptom and QEEG-based protocols, LENS, HEG, SmartBrain Games, pirHEG and the Othmers’ ILF model.
Being a neurotherapist is good. Or is it? Certainly, new methods, systems and theoretical advances are all harbingers of progress. Yet I have a nagging doubt about how much further along we are than we were a quarter century ago. For a treatment that got 80% of severe alcoholics to remain sober, raised IQs 20 points, normalized attention problems in troubled children, produced stunning improvements in autistic children and helped a team win the World Cup in soccer, we are still considered alternative and unproven. This despite a growing body of supportive research, and despite the fact that neurofeedback anticipated, and at times has been ahead of, the neuroscience showing that the brain can modify itself via feedback.
It then became much clearer to me that to take neurofeedback to the larger world we needed to offer a different sort of proof. Beyond anecdote and case report, we needed to show convincingly that neurofeedback has a predictable impact on real problems with real clients in an efficient manner. The evidence we needed had to be gathered from our practice. Pointing to a few published studies does not prove that what you or I do works.
Slowly, it dawned on me: I did have evidence, or at least a start. As intimated earlier, for the last 10 years my practice has kept session-by-session records using what I call a “Progress Tracker.” Some of you may be aware of this form, and many who have tried it have incorporated it into their practice. However, what I lacked was a way to take this Progress Tracker data, combine it with some pre- and post-treatment measures, and easily analyze and summarize my results so I could prove with practice-based evidence what I believed to be true: neurofeedback works.
Going for “practice-based evidence” has several advantages over “evidence-based practice.” First, data collected in the field give a better answer to the most meaningful question: “How well does it really work?”
Second, because there are so many of us working clinicians, practice-based evidence can enable the field to advance faster. Every time we try a new protocol or employ a new system we are, in effect, running an experiment. If we start routinely reporting our results, and those results are based on some measured outcome, we can multiply a hundred-fold the pace of improvement. Formal research designs are one thing; discoveries replicated by a dozen clinicians have a different form of credibility.
Incorporating measurement into our work has other advantages. It allows new providers to more easily assess their proficiency. I was reminded of this when I reviewed the Progress Tracker data from a recent trainee and it was quickly evident that his results were outstanding. Experienced providers can see if modifications to their practice do indeed make a difference or if they are being misled by a singularly dramatic clinical success. It also puts a burden on those promoting new methods or new systems to demonstrate their advantage over current practice. Without this, we risk following the pharmaceutical industry, with its constant flow of new medications that often turn out no better than the ones that just went off patent.
So, if collecting and sharing outcome data is important, then why do hardly any practicing clinicians do so? Time and technology: not enough of one and none of the other. Thinking of how I kept my own records, I realized a simple software program could be the solution. But this solution had to offer clear benefits to my target audience, the practicing clinician. Also, it couldn’t burden that clinician; otherwise no one would use it.
I began to envision a program that would make it easier to gather and report outcome data and track session-by-session client progress. It would have to be flexible, not restricting what the user could collect, yet not so free-form that you had to start from scratch. It would also have some cool features to make you a better neurotherapist, regardless of your approach. The program I envisioned became a database program called Results, specifically designed to show neurotherapists their, well, results!
First of all, the program makes it easy to collect and report data from symptom questionnaires such as the Vanderbilt Assessment Scales for ADHD, the DASS for depression, anxiety and stress or the ATEC for autism. If you are unfamiliar with available measures, the website NeurotherapyResults.com has links to many outcome measures, including some that are free and in the public domain for you to download and use.
In addition, you can include data from any of the computerized tests of attention, such as the TOVA, IVA and QIK, and other broader measures of cognition, such as the CNSVS or IntegNeuro, for which the per-session fee is reasonable. Incidentally, having clients complete a few questionnaires and tests before they start treatment, and having them repeat them at the end, is often a billable part of practice.
In any case, it takes only a few minutes to enter the two to five key scores from each outcome measure, and then, with a few mouse clicks, see your results. It may seem strange, but as a practicing clinician I have found that clients are often most convinced they got better when it says so on paper.
Once the data are collected, Results will export your reports to Word, where you can paste your work into progress notes and clinical summaries. Being able to easily report your results can help gain referrals. For example, physicians are often asked about alternatives to medication and, having seen client outcomes in your clinical summary, may have more confidence referring to you. The program allows you to see your outcomes for an individual client or any group of clients and make fact-based claims about your effectiveness. You can also export your work to Excel to do statistical analyses. If you want to present or publish, you have what you need.
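To make that concrete, here is a minimal sketch, not a feature of Results itself, of the kind of pre/post analysis one might run on exported scores. The CSV file name and the pre_score/post_score column names are hypothetical stand-ins for whatever your own export contains.

```python
# Minimal sketch (hypothetical file and column names): a paired
# pre/post comparison on exported outcome scores.
import csv
from scipy import stats

pre, post = [], []
with open("exported_outcomes.csv", newline="") as f:  # hypothetical export
    for row in csv.DictReader(f):
        pre.append(float(row["pre_score"]))    # score at intake
        post.append(float(row["post_score"]))  # score at end of treatment

# Paired t-test: did scores change reliably from pre to post?
t, p = stats.ttest_rel(pre, post)

# Cohen's d for paired data: mean change divided by SD of the changes.
diffs = [b - a for a, b in zip(pre, post)]
mean_diff = sum(diffs) / len(diffs)
sd_diff = (sum((x - mean_diff) ** 2 for x in diffs) / (len(diffs) - 1)) ** 0.5
print(f"t = {t:.2f}, p = {p:.4f}, Cohen's d = {mean_diff / sd_diff:.2f}")
```

An effect size alongside the p-value is worth the extra two lines: it tells a referring physician how much clients improved, not just that the change was unlikely to be chance.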
Next, there is a system for tracking session-by-session progress using the client’s own presenting concerns as the indicator of change. The program even includes a list of over 200 selectable (and modifiable) concerns to help clients identify exactly what they want to see improve.
There is a good rule of thumb: if what you are doing is working, keep doing it. If it isn’t, do something else. The problem is knowing when you are making progress and when you aren’t, or even when you just did something that made the client worse. Results allows you to see, at a glance, the entire course of treatment. It makes obvious the relationship between the protocols used at each session, including the adjustments to find the optimal reward frequency, and the client’s behavior. This is especially helpful if you are using the Othmer Method, but it greatly facilitates clinical troubleshooting regardless of your approach.
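As a rough illustration of that kind of trend-watching, here is a minimal sketch with made-up session ratings. The 0-10 severity scale, the three-session window, and the data are all assumptions for the example, not how Results works internally.

```python
# Minimal sketch (made-up ratings): flagging when session-by-session
# self-ratings of a presenting concern stop improving. Ratings are
# 0-10 severity, so lower is better; a rising rolling mean is a flag.
ratings = [8, 7, 7, 6, 6, 5, 6, 7, 7]  # hypothetical severity per session

window = 3
rolling = [sum(ratings[i - window:i]) / window
           for i in range(window, len(ratings) + 1)]

for session, (prev, curr) in enumerate(zip(rolling, rolling[1:]),
                                       start=window + 1):
    if curr > prev:  # severity trending up: time to re-examine the protocol
        print(f"Check protocol around session {session}: "
              f"trend worsening ({prev:.1f} -> {curr:.1f})")
```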
Finally, Results makes it easy to use your own clinical practice to improve treatment. Have two different protocols, or two different neurofeedback systems, and want to know how they compare? A few clicks of the mouse allow you to do that. You can set up formal research designs, including random assignment. But even that is not necessary for practice-based evidence. If the data have been collected, you can look back and test a hunch. If you want to try something new, you can actually measure whether it is better.
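For readers curious about what such a comparison amounts to, here is a minimal sketch with invented change scores for two protocol groups; Welch’s t-test is one reasonable choice when group sizes and variances may differ, not necessarily what Results uses.

```python
# Minimal sketch with made-up numbers: comparing improvement
# (pre minus post symptom score, so higher = more improvement)
# between two groups of clients trained on different protocols.
from scipy import stats

protocol_a = [12, 9, 15, 7, 11, 13]  # hypothetical change scores
protocol_b = [6, 10, 4, 8, 5, 9]     # hypothetical change scores

# Welch's t-test: does not assume equal variances between groups.
t, p = stats.ttest_ind(protocol_a, protocol_b, equal_var=False)

mean_a = sum(protocol_a) / len(protocol_a)
mean_b = sum(protocol_b) / len(protocol_b)
print(f"Protocol A mean improvement: {mean_a:.1f}")
print(f"Protocol B mean improvement: {mean_b:.1f}")
print(f"Welch t = {t:.2f}, p = {p:.4f}")
```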
And, ultimately, that’s what we all want, for ourselves and for our field–to become better. To know that we are providing a vital health care service backed by clinical science and provable results.
I encourage you to go to NeurotherapyResults.com to view the video demonstrations and the on-line manual, and to download Results for a free tryout. To encourage you to make it a regular part of your practice, I am offering a 50% discount to readers of this Newsletter who purchase Results in February, and, in addition, I will donate $10 to the Brian Othmer Foundation. To get the 50% off, enter the code “EEGInfoFebruary2010” when going through the on-line purchase process.
The learning curve is not steep, as I developed the program with the idea of keeping it as simple and intuitive as possible. I estimate it might take an hour to become familiar with it. Adding a new client and scoring and entering data takes 5 to 10 minutes. Entering session protocols and client progress takes about 5 minutes, something you can often do while running a session. Once you have entered some data, click a few options and Results will report your outcomes.
If you have any questions about the program, please contact me at info@NeurotherapyResults.com. And may we meet on the yellow brick road.
Matthew Fleischman, PhD
See inquiry to Info address…
Jerry E Wesch, PhD
Dear Dr. Fleischman:
Pre- and post-treatment tests show that neurofeedback works. What’s missing, and I mentioned this to Sue Othmer, is data indicating that treatment effects last. Hence, we don’t know whether neurofeedback needs to be continued indefinitely to maintain, or add to, the effects of intensive treatment. Repeated measures at one month, six months, and one year after intensive treatment are needed. I am interested to know your thoughts on this matter.
Sam Hopper, Ph.D.