“Neurofeedback: Significance for Psychiatry”

By Siegfried Othmer, PhD | April 21st, 2017

This is the title of an article by Simkin and Lubar that recently appeared in Psychiatric Times. I suspect that much of it was written by Simkin rather than Lubar, because parts of it do not read like his papers. Also, some of the content is quite startling.

In reviewing the early history, the article states that “Lubar was the first to use sensorimotor frequency training on a hyperkinetic child in 1976 by placing 2 electrodes at C3 and C4.” That seems highly doubtful. I don’t recall the C3-C4 protocol from any of Joel’s early papers. And yet I do recall very well the controversy that was stirred up within the field when Sue first adopted inter-hemispheric placement at C3-C4. This placement had already been pioneered by Douglas Quirk, but hardly anyone in the field knew of his work before 1995—except for George von Hilsheimer, his partner during those early years, the seventies and eighties.

George professed to be using our protocols after he acquired the NeuroCybernetics in 1993. Years later he announced that he had really stuck with C3-C4 all along. So for all those years George was even keeping us in the dark about that placement. Sue had to get there on her own.

Sterman fulminated that he could not think of a single reason to be using inter-hemispheric placement. Jay Gunkelman vociferously sang in Barry’s chorus. I don’t recall Joel speaking out on the subject at the time, but he was certainly part of the wall of opposition over the years to whatever we were doing. In sum, I would be very surprised if evidence surfaces that Joel ever used inter-hemispheric placement in those early days. If he had done so, then he certainly should have spoken up at the time that the controversy was raging.

But this is a minor complaint. There is material in the paper that is much more disturbing. Confronting the issue of sham-controlled studies that don’t support NF efficacy, Simkin and Lubar write:

“Although well-intentioned, many double-blind placebo-controlled studies did not find differences between sham and neurofeedback. These studies used flawed methodologies that are not recognized as valid interventions for neurofeedback, including unconventional protocols; auto-thresholding (where a child is always rewarded even if there is no active learning); reinforcement that was set too high so that no learning occurred because it was too easy; and complicated neurofeedback (where it was difficult to determine whether feedback occurred due to entertainment or treatment). Unfortunately, such studies have helped to marginalize neurofeedback as a beneficial intervention for psychiatric disorders.”

The first item objected to is “unconventional protocols.” That would no doubt include C3-C4! It would surely include our early standard, “C3-beta in combination with C4-SMR”; it would include our later standard for ADHD, “C3-Fpz in combination with C4-Pz”. Lubar steadfastly remained a skeptical bystander throughout all of those early developments with our ADHD protocols.

Secondly, we have made no secret of the fact that we have used auto-thresholding since the early days, when it was introduced to the NeuroCybernetics system. First we introduced discrete updating of the threshold at a rate of once per minute. Then the updating was made continuous and automatic. It was also standard on our second-generation design, the EEGer. And it is a feature on Cygnet as well for the inhibit aspect of the protocol. These three systems have likely served the majority of neurofeedback practitioners who have come into the field since the early nineties, and their clinical success is not in question. To blame the failure of a neurofeedback protocol on the use of auto-thresholding is simply absurd.
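Continuous auto-thresholding of this kind is easy to sketch in code. The following is a minimal, hypothetical illustration; the function name, parameters, and update rule are mine, not taken from NeuroCybernetics, EEGer, or Cygnet. The idea is simply that the threshold continuously tracks the signal so that the reward incidence settles near a chosen target rate:

```python
# Hypothetical sketch of continuous auto-thresholding. The threshold tracks
# the recent signal so that the reward incidence stays near a target rate.
# Here the reward fires when the sample falls BELOW the threshold, as for
# an inhibit channel; all names and numbers are illustrative.

def auto_threshold(samples, target_rate=0.8, step=0.01):
    """Adjust the threshold so roughly `target_rate` of samples earn a reward.

    When too few rewards occur the threshold nudges itself up (making
    rewards easier); when too many occur it nudges itself down.
    Returns the final threshold and the observed reward rate.
    """
    threshold = samples[0]
    reward_count = 0
    for x in samples:
        if x < threshold:
            reward_count += 1
            threshold -= step * (1 - target_rate)  # rewarded: lower slightly
        else:
            threshold += step * target_rate        # missed: raise slightly
    return threshold, reward_count / len(samples)
```

This is just quantile tracking by stochastic approximation: the expected drift per sample is proportional to (target rate minus observed reward probability), so the threshold settles at the point where the observed reward rate equals the target.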

The third argument raised by Lubar was “reinforcement that was set too high so that no learning occurred because it was too easy.” This is an indirect reference to the operant conditioning model, in which rewards need to be sparse in order to draw attention and have significance attached to them. Lubar and Sterman were on the same page on that issue as well. But let us allow theory to be informed by actual practice: the fact is that Margaret Ayers adopted a high incidence of rewards very early in her work, with Sterman's protocols, and was rewarded for doing so. By the time we met her in 1985 this had become standard practice, and there is no doubting her clinical success. People were coming from all over the world to get help with their traumatic brain injury, and they got it. Her office became a modern Lourdes for traumatic brain injury.

We adopted Ayers’ clinical approach when we started our own clinical work in 1988. But we also weighed the theoretical argument that a threshold set for 50% success rate (i.e., the rewards would be meted out 50% of the time on average—at a rate of no more than two per second) meant that the switching rate between rewards and no rewards and back would be maximized. Seen in information-theory terms, we would maximize the information back to the brain with a threshold set for a 50% success rate.
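The information-theory argument can be made concrete with a few lines of arithmetic. In this sketch I model the reward stream as a simple Bernoulli process (an assumption of mine, not a model of any particular system): both the entropy carried by each reward event and the probability that consecutive outcomes differ (the "switching rate") peak at a 50% success rate.

```python
# A quick check of the information-theory argument: for a reward delivered
# with probability p at each decision point, both quantities below are
# maximized at p = 0.5, assuming independent Bernoulli reward events.

from math import log2

def reward_entropy_bits(p):
    """Shannon entropy of a Bernoulli(p) reward event, in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def switching_probability(p):
    """Probability that two consecutive independent outcomes differ."""
    return 2 * p * (1 - p)

for p in (0.3, 0.5, 0.8):
    print(f"p={p}: entropy={reward_entropy_bits(p):.3f} bits, "
          f"switch prob={switching_probability(p):.3f}")
```

At a 50% success rate each reward event carries a full bit and the switching probability is at its maximum of 0.5; at an 80% success rate the event carries only about 0.72 bits. That was the theoretical case for the 50% set-point, before clinical practice pulled us the other way.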

We tried this, but we found ourselves migrating back up to higher success rates (i.e., lower thresholds) just as Margaret Ayers had done. And over time, the percent success was gradually ratcheted up to levels that Lubar would consider ridiculous, 75 to 85% or even higher. Our clinical success rate did not crater at any point. On the contrary, things got better and better. So it is most certainly not possible to assign failures in research to this aspect of the protocol. It has served us very well over the decades—in fact for as long as we were using threshold-based training at all.

With the use of high reward incidence solidly established in clinical practice, what is then the implication for theory? Obviously the conditions for an operant conditioning model are violated, and so the model has been thoroughly discredited in its core assumption. The principles of operant conditioning are clearly involved in this process, but that model cannot furnish the full explanation for the efficacy of these protocols. Lubar has it completely backward. The data annihilate the theory, not the other way around.

It has been Sterman’s and Lubar’s insistence on the operant conditioning model that best explains their bafflement at our claims over the years. Our results were obtained more quickly than could be explained in operant conditioning terms. The flaw is obvious. The operant conditioning model is not the full explanation for neurofeedback with human beings, even if it served nicely to explain Sterman’s results with the cats.

So now we get closer to the real explanation for the failures of the sham-controlled studies: They were not done using “flawed methodologies that are not recognized as valid interventions for neurofeedback, including unconventional protocols.” On the contrary, they were typically done in a manner that faithfully implemented the operant conditioning model, using standard protocols that trace directly back to Sterman and Lubar. That just doesn’t give you very much by way of what are called specific effects—those directly attributable to the existence of the discrete rewards. As for the non-specific effects, they are largely the same between the sham arm and the veridical feedback arm, and that’s really the heart of the story.

Sham training is an active process that engages attentional mechanisms. So whenever the target of the study is attentional factors, vigilance, and the like, we should not be surprised at only a modest difference between the two arms. The near-equivalence of non-specific factors means that both arms of the study should show comparable gains, and that is typically what happens. If sham-controlled studies were done with migraines or asthma as the target, for example, then one could reasonably expect greater contrast between the two arms. The field now reaps the rewards of this absurd allegiance to an inadequate theoretical model.

The fourth reason cited by Lubar to explain the poor results of sham-controlled studies is the reliance on “complicated neurofeedback (where it was difficult to determine whether feedback occurred due to entertainment or treatment).” The challenge of engaging the child while training his brain has been with us since the beginning. But in the early system designs that implemented the Sterman and Lubar protocols (such as our NeuroCybernetics and EEGer), that project never went very far. Any novelty on offer wore off pretty fast, and children certainly did not feel entertained. On the contrary, we were constantly nudged to provide new games. At the same time, our own design ground rule was that any movement on the screen had to reflect relevant information related to the feedback. So if there was any entertainment value to the presentation, then it was certainly not in conflict with the intent and purpose of the signal representation.

David Kaiser did a formal comparison of feedback modes at one point in the mid-nineties, and found that the most information-rich options yielded the highest retention in training and the best outcomes. That gave us all the validation we needed on our style of signal presentation. By comparison, the pure operant conditioning design deprives the brain of information that it would find useful as well as engaging, and thus leads to an impoverished training experience vis-à-vis more information-rich environments.

The reliance on what might truly be called entertainment has changed only recently, when the matter of entertaining the child and training the brain was completely separated in the ILF regime. The child’s attention to the feedback signal is simply not required in this kind of training. There is not even any need for awareness on their part that their brains are being trained. The kids do have to be entertained, however, to keep them in the chair. There is no interference with the training process as far as we can tell; on the contrary, the entertainment serves to facilitate the entire process.

We are clearly into a new era of neurofeedback, in which the old issues just slide into irrelevance, and hopefully into oblivion as well. It is unfortunate, in that sense, that Lubar has chosen to re-animate all these old issues at this late date. This newsletter is written in the hope that the old issues can finally be put to bed. The field needs to move on.

Siegfried Othmer, PhD
drothmer.com

See also: Recent Critical Studies of Neurofeedback in Application to ADHD

2 Responses to “Neurofeedback: Significance for Psychiatry”

  1. I am a behavioral psychologist who practices neurofeedback, and I was very surprised to read that Sterman and Lubar maintain that feedback needed to be given 50% of the time to be called operant conditioning. My training in learning disabilities was feedback given at 70 to 80%, and the upper criterion was success at 80%+. The most effective and robust results are based on a variable-ratio feedback schedule, and neurofeedback comes as close to that as we have so far; the software gets better all the time. Brains can learn without cognitive input, so operant conditioning is likely to be the most reasonable explanation for the results so far. Rewarding specific variables at the 70% level is the best rate for success. It has certainly been my experience empirically doing this work for the last 18 years, and I have had many satisfied clients. The Othmers have contributed excellent theoretical suggestions that have borne fruit since I have been a neurotherapist. And successful outcomes will continue to be the gold standard for me.

    • The 50% reward set-point is something that we tried on theoretical grounds. Lubar and Sterman both used a reward incidence that was lower than 50%. Lubar was the only one who actually quantified this. As I recall, he suggested a reward incidence of 8 or so per minute for adults, higher for children. The maximum reward incidence in Sterman’s system was 30/min, so 8 would be well below a 50% rate. Lubar’s recommendation for children was closer to the 50% criterion.
