
The Ongoing Saga of Infra-low Frequency Training

by Siegfried Othmer | December 17th, 2008

Our infra-low frequency training is sending ripples through the field of neurofeedback because it appears to represent such a fundamental departure from prevailing models. It is at such a bifurcation point that a professional community is tested in its assumptions, in its procedures, in its processes for finding accommodation, and indeed in its humanity. Unfortunately, the field of neurofeedback already has a history of fragmentation behind it. History therefore does not augur well for a benign accommodation of our new findings. More than likely we will just be in for continuing Balkanization of our field.

The first response has been skepticism, much of it animated by the thought that this new approach has just suddenly sprung upon the scene without sufficient research and scientific support. In fact, of course, the infra-low training is just the culmination of a long development process that goes back more than ten years. There have been many milestones along the way, each of which was well-established both clinically and scientifically before we moved on. In particular, our march to probe the low-frequency training has taken place in five distinct epochs over that period of time. We have now been exploring the domain below 1.5 Hz for more than two years, and the domain below 0.05 Hz for more than one year.

Our new findings present the greatest “affront” to the classical QEEG perspective, which holds that the guide to neurofeedback protocol is to be found in stationary deviations of QEEG variables from established norms. The hope has been that the QEEG formalism would give neurofeedback the necessary rigor, predictability, and reproducibility to finally achieve respectability in polite circles. This has not yet happened. Under the pressure of rejection, the response has been to adhere to the model with ever greater rigor, all the while critiquing other neurofeedback modalities that were not compliant. Unfortunately, the QEEG perspective has the propensity to become a kind of closed system that confers validity on its own data preferentially. Because the belief system was the prime mover here, there has been a tendency to recognize data that supports the model and to dismiss data that contradicts it.

While writing his book “A Symphony in the Brain,” Jim Robbins observed that neurofeedback people were like ghosts. They saw the world, but the world did not see them. He hoped that his own book would not end up becoming part of that ghostly world. Just as mainstream medicine was ignoring neurofeedback, the QEEG contingent was ignoring what was happening outside of the confines of its model. The underlying dynamics were the same in both cases. The presumption of scientific rigor kept certain phenomena from even being admitted to the discussion. The argument was impeccable: If something isn’t quantifiable and reproducible, then we can’t make a science out of it and we might as well not waste our time dealing with it. In the end, it would just cost us our reputations…

On the other side, we have more observationally-oriented scientist-practitioners encountering clinical data that simply cannot be ignored. Over time, patterns are observed into which these accumulating data fit, and we again have the beginnings of a scientific model. The predictability and reproducibility of these patterns of responding can be verified, and a beachhead is secured from which these findings can no longer be dislodged. These developments have occurred out of the glare of the klieg lights, so it may come as a bit of a surprise to some when these methods suddenly begin to dominate the field.

At the risk of over-simplification, we have had a case of left-hemisphere tyranny in trying to control the terms of debate on the one side, and we have had a more integrated clinical intelligence on the other side actually doing the work. The bossy left hemisphere determines what appears in the journals and what gets admitted to conference programs, while the non-verbal right hemisphere actually does the work in the clinic.

Humor aside, the problem of quantifiability in our discipline is quite real. The problems are manifold. First of all, every measurement of the EEG is strongly constrained by the conditions under which it is measured. The selectivity afforded by signal processing options inevitably blinds us to other phenomenology that may be equally relevant. Secondly, the phenomenology we are interacting with is not localizable. To put it crudely, we can never hear all relevant parts of the ongoing conversation that the brain is having. We can only capture fragments. Thirdly, the influences on brain function that we exert either through stimulation or through reinforcement are small compared to the ongoing brain dynamics. They never amount to more than a small perturbation at best, and as such remain beneath discernment. Maddening, but ineluctable. We simply have to learn to live with things that are more ephemeral than we can measure. At the brain level, our influence on the system is not separable from what might otherwise have occurred.

The fallback position is that we have to rely on outcomes to confirm that something has actually been accomplished. Fortunately, outcomes are typically robust enough to put this project on firm ground. There is precedent for something like this. Acupuncture is a case in point. A certain methodology has emerged that yields some measure of predictability of results, but the mechanism of action remains a bit mysterious. Despite misgivings in certain quarters, acupuncture has taken its place as a recognized modality notwithstanding the gaps in our understanding. This could be a model for what happens with neurofeedback. The practical benefits of our methods will win out over any high-brow objections. Details to be sorted out later.

Neurofeedback has actually grown into a robust discipline for the best of reasons, namely the relatively high degree of client satisfaction experienced for conditions that are otherwise rather intractable. And this has occurred over the entire spectrum of approaches in this fragmented community. Each of the principal modes of neurofeedback can claim to be making major inroads on the “disorders of disregulation.” Yet these modes appear to have very little in common operationally. Nor, it seems, can they be put under one hat conceptually. There is one distinction, however, that can be made between QEEG-based training and the others.

It is traditional QEEG-based training alone that depends on stationary measures to determine the targets for training. To obtain validity for such measures, sufficient measurement time has to be allotted to suppress the dynamics in the variables. The end result is something that we may be interested in as scientists, but it is not something that remains interesting to the brain. Squeeze the dynamics out of the signal and the brain no longer recognizes it as well or cares about it as much. Advantage appears to accrue to those methods that either respond to or reflect back to the person the actual EEG dynamics, which the brain then recognizes and responds to (or simply responds to in the case of the stimulation techniques).
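The effect of averaging can be made concrete. The sketch below uses purely synthetic data and illustrative parameters (the `mean_over_windows` helper is hypothetical, not any vendor's code): an instantaneous band-power signal waxes and wanes on a 20-second time scale, and while one-second estimates preserve those dynamics, one-minute QEEG-style averages all but erase them.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 256               # sampling rate in Hz (illustrative)
n = fs * 300           # five minutes of synthetic data

# Instantaneous "band power": noisy, with a slow 20 s waxing and waning
t = np.arange(n) / fs
waxing = 1.0 + 0.5 * np.sin(2 * np.pi * 0.05 * t)
power = waxing * rng.exponential(1.0, n)

def mean_over_windows(x, window_sec):
    """Average x over consecutive non-overlapping windows."""
    w = int(window_sec * fs)
    m = (len(x) // w) * w
    return x[:m].reshape(-1, w).mean(axis=1)

fast = mean_over_windows(power, 1.0)    # 1 s estimates: dynamics intact
slow = mean_over_windows(power, 60.0)   # 1 min estimates: dynamics gone

print("1 s estimates, std: ", np.std(fast))
print("60 s estimates, std:", np.std(slow))   # far smaller
```

The long average is the better *measurement* of mean band power, and that is exactly the point: what it gains as a stationary measure, it loses as a signal the brain could recognize as its own moment-to-moment activity.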

The advantage conferred by real-time, dynamic training may lie in the matter of engagement as much as in enhanced rates of training. Or the enhanced rates of progress in training could follow mainly from the greater levels of engagement. In any event, the left-hemisphere need to be in charge of the process in QEEG-based training may have left us with something that is less satisfactory overall clinically. The more relationally-oriented right hemisphere may have allowed a brain-centered process to emerge in which more latitude prevails on how self-regulation may be achieved. It is a left-brain conceit, I would submit, to think that we could be in charge of this process at all.

Inevitably, the plot now thickens. At the present time QEEG-based training is migrating more toward coherence-based training from a prior focus on local band amplitudes and on comodulation deficits. This takes the training in the direction of higher dynamics. The greater training efficiency attained in coherence-based training could well be due principally to this factor. To those who are only aware of the QEEG-based perspective, the enhanced efficiency only buttresses the case for QEEG-based training even further. In the more inclusive perspective, QEEG-based training is finally joining the general thrust toward more dynamic training.

With nearly everyone gravitating toward dynamic training to a greater or lesser degree, where are we left then in the QEEG perspective? We have said above that the dual objectives of good measurement and of good training lead to divergent criteria that cannot be satisfied simultaneously. If precision is sought in targeting, then by the time such precision is afforded by the measurement the brain will have moved on. The resulting feedback cannot ever be sufficiently timely. If feedback is given rapidly enough to be relevant to ongoing brain activity, on the other hand, then it cannot be precise. There must remain an element of randomness. Such randomness is evidently acceptable, because decent results are still obtained. The implication is that precise targeting is now, and has always been, a fiction. Hence, one might as well go with dynamic training. That is what the ground truth of the field is telling us. That is what is succeeding on the frontlines of clinical work.
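This tradeoff between precision and timeliness is not a matter of opinion; it is the time-frequency uncertainty of spectral estimation, in which frequency resolution is roughly the reciprocal of the observation window. A minimal sketch with synthetic signals and illustrative numbers (not any vendor's algorithm): resolving two alpha-band components only 0.5 Hz apart demands a window of at least about two seconds, which is how stale the corresponding feedback must then be.

```python
import numpy as np

fs = 256                # sampling rate in Hz (typical for EEG; illustrative)
f1, f2 = 10.0, 10.5     # two alpha-band components only 0.5 Hz apart

def peak_separable(window_sec):
    """True if an FFT over a window of this length resolves f1 and f2
    as two distinct spectral peaks."""
    n = int(window_sec * fs)
    t = np.arange(n) / fs
    x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(n, 1 / fs)
    band = (freqs >= 9) & (freqs <= 12)
    s = spec[band]
    # Count interior local maxima that rise above 5% of the band maximum
    peaks = np.sum((s[1:-1] > s[:-2]) & (s[1:-1] > s[2:])
                   & (s[1:-1] > 0.05 * s.max()))
    return peaks >= 2

# Frequency resolution of an n-point FFT is fs/n = 1/window_sec:
# a 1 s window (1 Hz resolution) blurs the two components into one,
# while a 4 s window (0.25 Hz resolution) separates them cleanly.
```

The numbers scale directly: the narrower the targeted band, the longer the window required, and the further behind the brain's ongoing activity the feedback falls.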

All that is left, then, as a role for the traditional QEEG is to help pick targets for training, in addition to some marginal role in the diagnostic adventure (recognizing endophenotypes, etc.). Picking targets is done off-line. There is typically some tracking done during the training process as well, but in this case the characterization procedure should be optimized independently of the training procedure. As to the matter of picking targets, that only becomes relevant when we are dealing with either localized function or localized deficits. It is time to take a look, then, at the brain’s hierarchy of needs.

The Prime Directive for a control system is to be unconditionally stable. This is of particular importance for the brain, because it has no backup system like the Federal Reserve Board. Brain stability is clearly a function of brain dynamics, and the remedy for brain instability is to be found in training brain dynamics. By now this is simply a fact, based on years of our clinical experience. QEEG-based information, insofar as it is biased toward the suppression of dynamics in its measures, is almost entirely irrelevant to this enterprise.

The second order of business is the brain’s ability to maintain set-points of activation of basic regulatory systems, i.e. with respect to arousal, affect, executive function, motor system, sensory systems, autonomic balance, pain, and interoception generally. These are such global functions that they can be accessed in a variety of ways. It is impossible to do neurofeedback without affecting them. Protocol specificity here is going to be very elusive. Any statement that there is a particular way that these systems should be trained will be met with equally cogent counter-examples. Clearly this is not an area where QEEG-based protocols offer any exclusivity, or even necessarily a significant advantage. There is no narrow objective to be met here. Rather, the objective is overall enhancement of self-regulatory capability. Functional neuroanatomy can yield up nearly all of the information needed in the planning of specific training strategies. Standard QEEGs can provide support in this objective, but they are by no means mandatory.

Finally, the third order of business is to target localized function. Beyond state regulation issues, can the brain rise to the occasion and handle the complex processes such as reading? Can visual and auditory processing be further optimized? Can memory function be improved? This is the natural domain of QEEG-based training. This is where we want to be greedy and have a lot of information available to us to guide training. But even here, the baseline measures on which the QEEG field has concentrated to date are not terribly helpful. Much closer to the real objective are challenge-based measures such as those being pioneered by Kirt Thornton, and transient analysis such as ERP and ERD/ERS (event-related potentials; event-related desynchronization; event-related synchronization). This remains largely a project for the future of the field.

In sum, then, it should be clear by now that no single approach to neurofeedback is going to meet all needs. We have inevitably entered a multi-polar world in neurofeedback. Under the circumstances, it would be a step forward if each of the emerging sub-disciplines made it their business to optimize their own approach rather than look to a restoration of a hegemony that is not in prospect. It has been said that much Enlightenment philosophy was a matter of “spilt theology.” In our culture most of us are the children of monotheistic religions. The ideal of “one theology” has carried over into our science. There are many pathways to enhanced self-regulation, however, and the value of each will be attested to by its survival in the marketplace. Central arbitration is neither needed nor wanted.

Siegfried Othmer, Ph.D.
www.drothmer.com


3 Responses to “The Ongoing Saga of Infra-low Frequency Training”

  1. Jon Silverman says:

    This article covers the politics of introducing a new paradigm, without addressing what that new paradigm is.

    > Inevitably, the plot now thickens. At the present time QEEG-based training
    > is migrating more toward coherence-based training from a prior focus
    > on local band amplitudes and on comodulation deficits.

    It would help to add a section or a link explaining “coherence-based training.” I’ve been trying to understand Cygnet, but infra-low training is as much a mystery to me after reading this article as when I began. Only the words “coherence-based training” suggest someone, somewhere, could explain what Cygnet does. What are its mathematics? What does it actually do in interpreting EEG into the imagery of InnerTubes? From there, how is that imagery believed to yield a clinical result?

    Reply

  2. Indeed my article dealt more with the political atmospherics than with the specific issues involved in either coherence-based training or our bipolar training. The difficulty is that these concepts cannot readily be explained in shorthand suitable for a newsletter, so I end up writing for those who already come to this juncture with some preparation. The frustrated reader would be helped by reading my Chapter 2 in the Handbook of Neurofeedback, followed by Sue Othmer’s Chapter 5 in that Handbook, and then by my Chapter 1 in the newly published Second Edition of “Introduction to Quantitative EEG and Neurofeedback.”

    That said, let me at least take a stab at the question. Coherence-based training acts upon the instantaneous relationship between two sites at a given EEG frequency. This relationship is a function of both the amplitude and the phase at each of the two sites. As a practical matter, the relative phase between the two sites makes the dominant contribution to the net reward signal. In bipolar placement, the relative phase also affects the net reward signal quite strongly, as I showed in our JNT paper. So both coherence-based training and bipolar training can be considered as primarily a challenge in the phase-domain. With a one-channel measurement in bipolar montage we obviously have slightly less information about the state of the system than we do with the two-channel measurement that is required for coherence-based training. But that appears to be of lesser importance for purposes of feedback to the brain, because the brain just has to have enough information to make the connection. This may not satisfy the outside observer, but it does not trouble the brain. So single-channel information is manifestly adequate to work with in feedback. The bipolar montage also means that we cannot do coherence up-training. But mostly that seems to be unnecessary. The challenge of feedback mainly stirs the system into reorganizing itself to function better, and that works even if, by some argument, our challenge happens to go in the “wrong” direction. The challenge itself appears to be the critical ingredient, not its sign. Of course the whole argument would go away if we just did our work with two channels instead of one, as some people are already doing. It’s just that the simplicity of single-channel training is not something we relinquish easily.
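    For the mathematically inclined reader, the role of phase consistency can be illustrated with standard tools. In the sketch below (synthetic signals and illustrative parameters; this is emphatically not the Cygnet algorithm), two channels sharing a common alpha-band source at a fixed lag show high magnitude-squared coherence, while two channels with identical spectra but no stable phase relationship do not.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
fs = 256               # sampling rate in Hz (illustrative)
n = fs * 60            # 60 s of synthetic "EEG" per site

# A shared alpha-band (8-12 Hz) source, built from filtered white noise
b, a = signal.butter(4, [8 / (fs / 2), 12 / (fs / 2)], btype="band")
source = signal.lfilter(b, a, rng.standard_normal(n))

site1 = source + 0.3 * rng.standard_normal(n)
# Site 2a sees the same source at a fixed delay: a stable phase relation
site2_locked = np.roll(source, 5) + 0.3 * rng.standard_normal(n)
# Site 2b has the same spectrum but no stable phase relation to site 1
site2_unrelated = (signal.lfilter(b, a, rng.standard_normal(n))
                   + 0.3 * rng.standard_normal(n))

f, coh_locked = signal.coherence(site1, site2_locked, fs, nperseg=2 * fs)
f, coh_unrel = signal.coherence(site1, site2_unrelated, fs, nperseg=2 * fs)

alpha = (f >= 8) & (f <= 12)
print("locked:   ", coh_locked[alpha].mean())   # high: phases track
print("unrelated:", coh_unrel[alpha].mean())    # low: phases wander
```

    Note that the coherence estimate is blind to the amplitudes per se; it is the segment-to-segment stability of the relative phase that drives the number, which is the sense in which both coherence-based and bipolar training are primarily challenges in the phase domain.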

    Reply

  3. Hana Yin says:

    Very well said: ghostly world. Neurofeedback is transformational, powerful and SPIRITUAL. It takes us away from narrow focus; instead, we open our focus to our human potentiality! Unfortunately, the rest of the world does not know this priceless gift to mankind; only neurofeedback practitioners who are faithful to this practice know it, because they have personally realized the power of personal transformation.

    Cygnet is a new frontier, a mystery that has yet to unfold; only time will tell….

    Reply
