Reflections on the ISNR 2004 Conference in Fort Lauderdale

by Siegfried Othmer | September 2nd, 2004

I came a day early to the ISNR Conference to hear Val Brown at an all-day workshop, in a setting where he had a friendly audience and wasn’t being challenged. Most of what was presented was already familiar to me, but it was nice to hear how he chose to present the information. Val often relies on the sailing analogy: one cannot trim the sails in advance, but rather must adapt to the conditions as they arise. When the sails luff, an adjustment is called for. In the case of the EEG, the luffing is most likely to occur in the high and low frequency ranges. Running NCP is a matter of keeping the sails in proper trim (in this case sixteen of them), and so is neurofeedback in general.

In the afternoon Sue Dermit Brown talked about her utilization of NeuroCarePro. She now does all of the clinical work, since Val does not have a license to practice in Canada. Sue said that after a period of experimentation with site pairs other than C3 and C4, she is now utilizing those alternative placements less and less. This trend is opposite to ours. Val suggests that the way he uses information from the whole EEG spectrum renders other sites largely superfluous. That must be true at least to some extent, or Sue would feel more rewarded for site-shifting than is apparently the case. Sue is also doing less alpha-theta training than before, and she does not know why that is the case. “Becoming asymptomatic is really a matter of the eyes-open work,” says Sue, and we very much agree. It is quite possible in our own case that the extension of eyes-open work to lower frequencies gets some of the work done for which we used to rely on alpha-theta. NCP intrinsically covers the whole band, and Sue may be benefiting from the same effect as experience is gained with the system.

Val’s program tracks the median of the amplitudes in the 1-Hz bands contained within a particular spectral window. Some eight spectral windows cover the whole band for each hemisphere, for a total of sixteen “targets.” A “current value bar” is displayed for each of the target bands. This measure is then averaged over the last sixteen samples, and that average defines the center point between a high and a low threshold, displayed as a box on the bilateral spectral display. Because of the short-term averaging involved, the boxes dance on the screen, tracking the lagged changes in the ambient amplitudes. The size of the boxes is chosen by the therapist to be “not too big and not too small,” or not too easy and not too hard. We know how that goes. Only when the current value bar suddenly steps outside of its box does the reward signal on the client screen stop. The reward consists of music or tones and a video display. One might be tempted to call this the “tonus interruptus” method of training.
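For those who think in engineering terms, the scheme reduces to a few lines of code. The sketch below is a minimal rendering of one of the sixteen targets as I have just described it; the function name, the fixed box size, and the data layout are my own illustrative assumptions, since NCP’s actual internals are not public.

```python
import numpy as np

def run_target(band_amplitudes, box_half_width, window=16):
    """One NCP-style target (illustrative). band_amplitudes has shape
    (n_samples, n_bins): the 1-Hz bins of a single spectral window.
    Returns the reward state at each sample."""
    current = np.median(band_amplitudes, axis=1)   # the "current value bar"
    rewarding = np.ones(len(current), dtype=bool)
    history = []
    for i, value in enumerate(current):
        # The running average of the last sixteen samples centers the box,
        # lagging the signal, so the box "dances" after the amplitudes.
        center = np.mean(history[-window:]) if history else value
        # The reward is interrupted only while the current value steps
        # outside the box; a sustained shift is soon re-absorbed as the
        # center catches up.
        rewarding[i] = abs(value - center) <= box_half_width
        history.append(value)
    return rewarding
```

In practice the therapist would size box_half_width so that the reward holds better than 90% of the time, which is the point taken up next.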

The result is that the feedback effectively reduces to a detector of EEG transients. If changes persist, then the average value will soon catch up, and the rewards are restored. Hence no long-term disruption of the feedback ever occurs. Further, the clinician is typically going to dial in box sizes that result in an overall reward probability in excess of 90%. It is amazing that such a simple reward scheme can have such profound results. In its own way, one can think of this transient detection by analogy to our focus on instabilities in behavior: the NCP detects instabilities in EEG behavior.

Because the system functions effectively as a transient detector, negative effects are not expected, and indeed they are only rarely observed. Obviously some people will react strongly to any pronounced shift in their state. Val has made provision in his program for rewarding the left hemisphere differently from the right in the SMR/low-beta bands. This was primarily to humor the NeuroCybernetics users who came sniffing around, and to bring along his Biograph users, for whom the same kind of training had been installed. But with the new emphasis on inter-hemispheric training at a single frequency on our part, there are few people left to carry the flag for the traditional C3beta/C4SMR paradigm.

Truth be told, it really doesn’t matter much in the case of NCP. The fact is that the signals likely to be inhibited are those that step out of the boxes on the top side, not the bottom. So rewards in the usual sense are not much in the picture in NCP in any event. It therefore hardly matters what is picked for these specific bands, and they might as well be matched across the hemispheres like all the others. We have actually moved in somewhat the same direction as Val, in the sense that we have moved up in reward probability with respect to the low thresholds, i.e. the rewards. When it comes to our reward bands, even a 75% reward probability most likely does not divide the space between good and bad very well. And whereas we have gone in the direction of providing more information on the rewards (with visual, auditory, and tactile feedback), Val has gone in the direction of almost abandoning the whole issue of rewards, at least in the SMR/low-beta band.

One thing that has continued to persuade me of the virtue of our reward strategy is the speed with which a pronounced state change can be induced. On that score, Sue Dermit Brown reported that she sometimes notices changes within a minute. This shows the power of even an inhibit-dominated strategy. So the jury is still out on how our respective approaches compare. Obviously an inhibit-dominated strategy has the significant advantage that in home use one can essentially turn people loose to use the training at will. There are no adverse saturation effects with training based on transients. There are no choices to be made. Both Val and Sue make the point that it is rarely either necessary or even obviously beneficial to deviate from the default settings that the program comes with out of the box.

The other significant advantage of a powerful inhibit-dominated strategy is that it can serve as a useful default in the event one is mystified by a particular client’s response to the rewards in our kind of training. Inhibit strategies are becoming more sophisticated all around. BioExplorer now offers an NCP replication that has the additional advantage of slewing the top and bottom thresholds independently, using criteria appropriate for each. Thus the boxes do not remain the same size over time, which in my estimation is a significant refinement vis-à-vis NCP.
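By way of illustration, the refinement might look something like the following sketch. This is my guess at the flavor of the design, not BioExplorer’s actual implementation: each threshold drifts toward its own percentile of the recent signal, at its own rate, so that the box breathes rather than keeping a fixed size. All parameter values are invented.

```python
import numpy as np

def slew_thresholds(current_values, hi_pct=95, lo_pct=5,
                    hi_rate=0.05, lo_rate=0.01, window=64):
    """Hypothetical independently slewed thresholds: each chases its own
    percentile of the recent history at its own rate (illustrative only)."""
    hi = lo = current_values[0]
    his, los = [], []
    for i in range(len(current_values)):
        recent = current_values[max(0, i - window):i + 1]
        hi += hi_rate * (np.percentile(recent, hi_pct) - hi)  # faster upper slew
        lo += lo_rate * (np.percentile(recent, lo_pct) - lo)  # slower lower slew
        his.append(hi)
        los.append(lo)
    return np.array(his), np.array(los)
```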

I asked Sue Brown what her clientele consists of, figuring that to a certain extent a larger feedback loop is operative and she gets the kind of clients that are best served with her instrumentation and her approach. First of all, she does not see many children, so that comparison is ruled out. With regard to the adults, it seems to me that her population is just below what one might consider the worried well. Obviously she saw a lot of trauma after 9/11, when they were still on Long Island, but still these were basically functional people. Sue sees cases of anxiety, depression, sleep disturbances, Chronic Fatigue Syndrome, immune system suppression, eating disorders, and chronic pain. Trauma is the underlying theme with the vast majority of these patients. I can well imagine that the gentle guidance of the combination of NCP and Sue Brown’s ministrations would be ideal for trauma recovery.

Val emphasizes the breadth of the information that jointly informs the transitions in the feedback. “Experience the matrix” (meaning matrix mirror) is the mantra, but there is no gainsaying that the final information flow to the client is a mere trickle. The transients that inform the feedback are on the scale of several seconds to tens of seconds. Val points out that the significant learning occurs when the expected tones (or moving video displays) suddenly stop. Indeed we have converged on the same kind of language. The beep in our games becomes an expectation, and the novelty is when it stops, not when it occurs. But we don’t rely merely on the on-off transitions. In the reward-driven systems, the bulk of the information is in the analog domain, in the ebb and flow of amplitudes of highways and squares, of tones and of vibrations. We are talking about transients in the hundreds of milliseconds, and the information is available all the time, not episodically.

This allows the brain to do what it does well, which is pattern recognition. That this much greater information stream does not result in comparably greater learning rates may reflect a bottleneck in the brain’s capacity to acquire new skills. Matters may not be entirely up to us. We just have to make sure that we give the brain enough information to work with, and in this regard we have probably underestimated what it can handle. Having learned that lesson, I am somewhat reluctant to fall back to the information trickle that is NCP—even if the clinical results are satisfactory.

Last year I asked Val to provide a Lissajous loop display for his two-channel narrow-band filtered reward signals. One of the channels is displayed horizontally, the other vertically. The resulting figure then reveals the coherence and phase relationships between them directly. This change has been adopted, but I did not get to see it on this day, and Val does not highlight the new feature. Moreover, the display appears on the clinician screen, where it is unlikely to serve the pattern-recognition objective I had in mind for the client. Most likely Val is reluctant because it would violate the basic design of NCP to provide limited-bandwidth feedback of this kind. He considers it too narrowly prescriptive.
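The idea is easy to demonstrate with synthetic signals. In the sketch below, one narrow-band channel is plotted against the other; a tight diagonal trace means high coherence, and the opening of the ellipse encodes the relative phase. (The signals and the 45-degree lag are invented for illustration.)

```python
import numpy as np
import matplotlib.pyplot as plt

fs, f = 256, 12.0                              # sample rate and band center (Hz)
t = np.arange(0, 2, 1 / fs)
ch1 = np.sin(2 * np.pi * f * t)                # first channel: horizontal axis
ch2 = np.sin(2 * np.pi * f * t - np.pi / 4)    # second channel: 45-degree lag

plt.plot(ch1, ch2, lw=0.5)
plt.xlabel("channel 1 (horizontal)")
plt.ylabel("channel 2 (vertical)")
plt.title("Lissajous figure: coherence and phase at a glance")
plt.show()
```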

When Val and Sue talk about the rewards, it is likely to be about the 21-Hz band and the 40-Hz band, as well as some fairly narrow bands just above 40 Hz called hi-Hertz targets. So clearly they have the sense that some reward-based training is operative even with on-off responses. But the 21-Hz reward should not be thought of as simply moving people to higher arousal, as in our paradigm. That is not what happens here. They also talk about the inhibits (which Val calls resilience targets) as being primary during the onset of training, with rewards (flexibility targets) playing a greater role in the later phases of training. Stability is the first order of business for us all. The relative weighting of the two objectives is simply handled by the sizing of the boxes, all within the scope of the default setup.

Sue talks about the difficulty that clients may experience early in the training of having “better days than ever” alternate with “as bad as it ever was.” She calls this the “wobble.” One is reminded of tacking a sailboat: there is no way to keep the sail from luffing in the transition. Before stability is reached, people may make excursions back to the earlier dysregulated state, a kind of rebound effect. Val brought up the analogy of the B-2 Stealth bomber. High maneuverability means that it must fly at the edge of instability, to the point where no human pilot can manage. The plane must have its control surfaces managed by computer, and this is all by design. Our capacity for rapid reaction also takes our brains to the edge of stability, but in the case of Sue’s wobbling patients, they live there by virtue of pathology.

The NCP system most likely could benefit from a complementary technology such as the ROSHI. Grant Bright and Ray Wolff have moved in to fill the vacuum with their Nexis system, but I have not yet explored the capability of that product. And soon Chuck will be back in business with his ROSHI.

The knock against NCP is that it is difficult to set up, and a bear to keep going with all of the software security measures in place. With respect to the initial install, the problem is apparently with various non-standard hardware out there that does not really conform to the required Microsoft specifications. If a high-functioning computer meeting appropriate specifications is simply dedicated to NCP, and not loaded up with a bunch of other resource-hogging stuff, things should be fine. No defragging will ever be needed. No anti-virus software needs to be lurking there. No AOL needs to be loaded, since the instrument will never see the web. Windows has minimum cause to take time out for housekeeping duties while feedback is going on. The instrument should not be thought of as a computer at all, but rather as a neurofeedback device. Val himself showed up with an Acer Ferrari portable computer, all dressed up in Ferrari red.

The NeuroCarePro system and our own single-channel mechanisms-based and symptom-driven approach stand as a continuing challenge to the main thrust of the presentations at the ISNR, which is QEEG-based training. After the conference, Michael O’Bannon was tempted to say that the onslaught of all those claims is forcing him to reconsider his position. Of course there is a built-in bias in that any scientific investigation of neurofeedback has to concentrate on EEG data in general, and on the changes in the stationary properties of the EEG in particular. What else would they do? It does not follow, however, that the results of such studies should drive the clinical agenda. One of the problems has been all along that our findings are so opaque to the usual research tools. No fixed protocols; no fixed EEG targets; no firm EEG criteria of success; and the worst possible situation in terms of outcome in that we seem to have a panacea on our hands. No wonder scientists are staying away in droves. Even if everything we say were to be true, there is no way to get that into a publishable paper. One would have to blind oneself to 90% of our claims in order to get down to something publishable. As Val says, “Our Western culture is founded on splitting.”

One of the perverse consequences of all the published work of the Gruzelier group at Imperial College is that people now feel constrained to take those protocols as prescriptive because they have research behind them. In the meantime we have abandoned those techniques, and feel no need to go back there at all.

There were two papers in particular that spoke to the issue of comparing traditional protocol-based training with QEEG-based training. One was a study done by Marvin Sams and Peter Smith. The other was a presentation by Jonathan Walker on QEEG-guided training of Tourette Syndrome. In the case of Sams and Smith the latest QEEG-based interpretations are compared to our historical protocols. Walker simply asserted that his current results exceed those obtained with the earlier protocol-based training. The problem is that everything is really a moving target, including the protocol-based training. One is reminded of the “Red Queen Principle” from Lewis Carroll’s Through the Looking-Glass: one has to run faster and faster just to stay in the same place, because everything else is evolving as well. That being the case, it is not valid to compare a modern QEEG-based training strategy with a protocol-based one that is dated.

The Sams/Smith study concerned eating disorders. Marvin Sams does a nine-point EEG measurement prior to each session and then determines what he is going to do on the basis of an eight-band analysis. Marvin was one of the first to exploit 40-Hz and higher training extensively, based on the observation that global gamma-frequency coherences are seen in healthy individuals, and so present a suitable target for reinforcement. Marvin also utilizes high reward incidence with tones, so that feedback comes from the dropout of the rewards.

Sams found this class of people to be the most pathological in EEG terms that he has ever seen, dominated by coherence anomalies (87%); delta anomalies (81%); and focal irregular activity he calls dysrhythmias (60%). Obvious brain dysfunction is still observable even in cases of anorexia where normal body weight has been successfully recovered. Eating disorder cases also exhibit the highest mortality of any of the DSM categories. Surely we are seeing here the devastating effects of trauma.

Beck Depression Inventory scores went from 31 to 11 with the training, and to 10 in later follow-up. Clinical results and medication reductions were greatest in the QEEG-based group. Curiously, however, Peter used alpha-theta training with only 10% of his patients in the protocol-training group. That is surprising, given the centrality of trauma in eating disorders.

Joe Horvat reprised his presentation from two years ago in Melbourne on coherence training, down to the last PowerPoint slide. A few years ago he was still a somewhat isolated voice for the coherence-normalization approach, whereas now there is more of an organized movement toward training site-to-site coordination, whether by coherence or comodulation training, or any other measure of connectivity. The last slide of Horvat’s talk makes the point that normalization of coherence is typically accompanied by an increase in the asymmetry measures, which leaves people in the audience uncomfortable. Horvat claims symptom relief accompanied the coherence normalization. If we cannot assume that functional normalization is ipso facto correlated with normalization of the QEEG measures across the board, then what do we actually know? In fact, the asymmetry Z-scores are so out of sight after Horvat’s training that they lose meaning. Seven standard deviations from the mean? Something is clearly amiss, and things have not gotten any better over three years. How is it that no further insight has been gained on this subject in two years?

Joe addresses the lowest frequency coherence anomalies first. After the delta band he addresses theta coherences, followed by beta. Alpha coherence is addressed only if it is the only one showing up. Otherwise it is left to normalize on its own as the other coherence anomalies are addressed. He also targets the longest-run coherence anomalies first. He finds that things may move slowly in the beginning, so if coherence training is done after some initial protocol-based training, the speed of response should not necessarily be taken to indicate that such responsiveness would have been available at the outset.

Hershel Toomim had fun showing pre-post QEEG data from some of his HEG clients. The data sets were different, but he challenged people to guess correctly which referred to before training and which to after. Hershel had apparently hoped that QEEG data would be more revealing, since SPECT data had not always tracked with HEG training either. In the heart-warming story of his son’s recovery from head trauma with HEG training we also had an example of non-locality of training effects, in that HEG training at AF8 showed up dramatically at AFz. In another case, a pre-training finding of a localized beta excess at F8 was found symmetrically at both F7 and F8 after the training, but accompanied by symptomatic relief. Lubar told of a similar finding, in which training targeting the anterior cingulate resulted in EEG change at F3. Such non-local training effects are commonplace. It was said that Lubar similarly has found that the long-term effect of beta training in ADHD is to be found in the alpha and theta bands. Achaaa!

Marco Congedo put it simply: “The brain is too complicated to manipulate in any simple way.” Actually we do manipulate it in a simple way, and we are very good at it. Really what is meant is that one cannot expect to train the brain with simple targeting of “A” in order to get “A.” The traditional mindset is so deeply embedded that it makes it difficult to have our inter-hemispheric training evaluated in research. Nobody knows what “A” is in that case. What are we actually targeting? What are we trying to do? It is just too maddening for words!

Jaime Romano-Micha from Mexico City gave an invited paper on his QEEG-based approach to neurofeedback. However, he acknowledged that the QEEG-derived measures hide several features of interest: 1) waveform morphology; 2) manner of event occurrence; and 3) reactivity. In short, the classical QEEG approach focuses on stationary properties, when in fact there is good reason to pay attention to the dynamics. Neurofeedback, of course, is the obverse of this: it ignores stationary phenomena, and responds only to the dynamics.

There are in fact now a number of efforts to tease out more of the relevant dynamical interactions. Pfurtscheller & Co. have been working in the area of event-related synchronization and desynchronization (ERS/ERD) for many years. The field of event-related potentials (ERPs) goes all the way back to the sixties. (In fact, Sue Othmer’s Ph.D. dissertation at Cornell involved attentional influences on the ERP in cats under a classical conditioning design. That was 1968-71.) Now Klimesch’s group is merging the two disciplines by reinterpreting ERPs in terms of phase-shifting or time-locking of the basic EEG rhythms under the synchronizing influence of a sensory challenge. Everything is moving toward frequency-based analysis, or, in the case of brief transient behavior, joint time-frequency analysis (JTFA). [Incidentally, Grant Bright did a search a few years ago on the combination of JTFA and EEG. According to Val, he found six references. Repeating the same exercise recently, he found 6,000.]

All of this is part of the larger enterprise of QEEG analysis. It is therefore absurd to indict the field wholesale (as neurologists are still doing), when in fact it will increasingly yield insights that will help us in neurofeedback. Our own past criticism of the field has always been of particular claims for specific types of QEEG analysis, most importantly the claim that one analysis or another was reliably and globally prescriptive for neurofeedback. It was also irritating because the steady-state techniques are clearly blind to the very phenomena that most interest us, i.e. transient behavior. Because of the tremendous face validity and concreteness that attends QEEG analysis, people have always tended to claim more than was demonstrable, whereas we on the clinical side always knew that our reality went beyond what we were claiming early on, and beyond what was readily demonstrable in a standard research design. It is an imbalance not easily righted.

All this is by way of introduction to yet another type of analysis of the transient behavior of cortical EEG rhythms. Richard Silberstein has been doing work with repetitive visual evoked potentials for many years. This periodic signal can be seen as a stimulus that acts as a time reference for a Klimesch type of analysis, or equivalently as a phase reference in a frequency-based analysis. With a sufficiently long train of stimuli, the brain settles down to a fairly repeatable response, which can then be analyzed with respect to the imposed temporal signature. The phase of the evoked response at the stimulus frequency of 13 Hz can be extracted at each cortical site and compared across sites. The timing of the stimulus then drops out of the picture. The technique is known as phase-sensitive detection, or coherent detection, and is well known in physics and engineering. Indeed, Richard Silberstein is another transplanted physicist. (Similar work was also carried out by Tom Collura at the Cleveland Clinic quite some years ago.)
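The core of coherent detection fits in a few lines, which may make the idea concrete. The sketch below demodulates each channel against a quadrature reference at the 13-Hz stimulus frequency and averages over time, which serves as the low-pass filter; the complex result yields amplitude and phase at each site. (This is the textbook technique only, not Silberstein’s actual SSVEP machinery.)

```python
import numpy as np

def coherent_detect(eeg, fs, f_stim=13.0):
    """eeg: array of shape (n_channels, n_samples), stimulus-locked data.
    Returns amplitude and phase of the response at f_stim per channel."""
    t = np.arange(eeg.shape[1]) / fs
    ref = np.exp(-2j * np.pi * f_stim * t)  # quadrature reference at 13 Hz
    demod = (eeg * ref).mean(axis=1)        # time-averaging = low-pass filter
    return np.abs(demod), np.angle(demod)
```

Since the same reference is applied to every channel, comparing the phases between channels removes the stimulus timing from the picture, just as described above.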

If the above repetitive stimulus is used as a kind of background to the performance of a challenge such as a mental rotation task, the evolving phase relationships over cortex—the instantaneous connectivities—can be discerned with high spatial resolution and decent time resolution. Interestingly, the most prominent features observed overall related to drops in coherence rather than increases. This highlights a central problem, namely that the “binding problem” brings in its train its complement, the “unbinding problem.” The book “Sync,” by Steven Strogatz, makes clear how easy it is for nature to fall into rhythmic patterns. This is of no use to us, however, until the process becomes controllable. The more remarkable reality is not that brain organization depends on rhythmic processes, but rather that we are not all having seizures all the time. The binding process must be bounded, and this must be an active process.

This gives us another way that we can encapsulate what we do in neurofeedback at the neurological level: We challenge the brain at the boundary of the ensemble in both spatial and frequency domains. We challenge the brain to either increase the size of the ensemble, or to shift its frequency or phase, and it will in first order yield to this process, and in second order it will resist. The mounting of resistance builds strength in the regulatory loops, and it does so quite independently of the nature of the challenge, provided that the latter is sufficiently small. (Now tell that to your favorite neurologist.)

Silberstein pictures the cognitive processes as occurring in a “sea of neurobabble,” with the decks needing to be cleared for insight and decision-making. This means truncating network pathways that are irrelevant to the immediate issue. The propagation of signals is always an excitatory process, and the shaping of responses is always dominated by inhibitory processes—a shaping he calls “inhibitory sculpting.” Thus it is found that coherence relationships are at a global minimum at the nominal response time, and that they are minimal also for high-IQ people. [This is in contrast to the higher coherences that characterize high IQ in Thatcher’s early work. One refers to steady-state behavior, the other to transient behavior.]

In considering these results, I don’t find it obvious at all what the implications are for training. For a while we should just continue to do what we do, and watch what falls out of the neuroscience work. I am glad that we are working in baseline. Consider the finding that high-ability people raise their frontal alpha levels after practicing a skill to the point of mastery. Would it make sense to reward frontal alpha in others? I don’t think so. The alpha is a signature of disengagement, and if one has not mastered the task, such a signature would be inappropriate. Training transients may turn out to be a very different matter than training in baseline.

Finally, I should relate two side notes from Silberstein’s talk. The first is that he began his talk with an expression of disdain for what he called “hot-spot-ology,” a current preoccupation throughout the imaging world, not only in QEEG. Such a focus obscures the network relationships that are the real issue, along with obscuring the dimension of time. Secondly, he said that he had been on board with neurofeedback since 1981, and that he has been trying to talk to neurologists in Melbourne to get them interested. For a number of years we donated the use of a NeuroCybernetics system for his graduate students. Jacques Duff has been doing a study of NF for ADHD under his guidance for some years now, an effort to which we also contributed NeuroCybernetics software. So we are pleased to have played a role in launching neurofeedback at one of the premier brain research centers. Silberstein did me the courtesy of attending my workshop, so we are all gradually getting on the same track.

A final story from the conference must be told to round out the picture. The very last talk of the conference was given by Jack Johnstone. I missed much of it because the hotel did not give me dispensation on the checkout time. Jack reported on work by Suffin and Emory that remains largely unpublished, on the prediction of antidepressant medication response with QEEG data. On this occasion, Jack surfaced additional data on bispectral analysis that was also capable of predicting medication response. Previously, it had been shown that bispectral data can be used to characterize depth of anesthesia—the other end of the arousal domain!

We have been enamored of the Rodolfo Llinas model of thalamocortical dysrhythmia ever since it was first presented in late 1999. Significantly, the bispectral data are all derived from single-channel measurements. There is a natural complement here to the preoccupation with connectivity in the spatial domain, and that is a specific focus on “linkages” (i.e., temporal correlations) between different brain rhythms at a single site.
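To make “linkage between rhythms” concrete in code: the bispectrum B(f1, f2) of a single channel is large only where activity at f1, f2, and f1+f2 is phase-coupled. Below is a bare-bones direct-FFT estimate, strictly a textbook sketch and not the proprietary algorithm behind the anesthesia monitors.

```python
import numpy as np

def bispectrum(x, nperseg=256):
    """Direct bispectrum estimate for a single-channel signal x: average
    X(f1) * X(f2) * conj(X(f1 + f2)) over non-overlapping segments.
    Peaks mark phase-coupled frequency triples."""
    segs = [x[i:i + nperseg]
            for i in range(0, len(x) - nperseg + 1, nperseg)]
    half = nperseg // 2
    acc = np.zeros((half, half), dtype=complex)
    win = np.hanning(nperseg)
    for s in segs:
        X = np.fft.fft(s * win)
        for f1 in range(half):
            for f2 in range(half - f1):   # keep f1 + f2 below Nyquist
                acc[f1, f2] += X[f1] * X[f2] * np.conj(X[f1 + f2])
    return np.abs(acc) / len(segs)
```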

It is possible that bispectral relationships underpin the robust clinical response we are able to obtain with merely single-site training. It is possible that the tightening up of inhibit strategies all around this field has had the effect of encroaching upon this phenomenon inadvertently, if somewhat crudely. And it is possible that the advantage that accrues to the multiple-targeting strategy of NCP is also testimony to the importance of this mechanism. Since the world will largely gravitate toward simple and accessible tools, it may be more efficient to move in the direction of a more sophisticated real-time temporal (frequency-based) analysis of the signal than toward multiple-site training, where multiple means more than two.

It is possible to envision a near-term development of a two-channel system equipped with bispectral analysis, one that is in no particular need of clinical judgment. Then psychologists can get back to doing what they do best, along with all the other health and educational professionals.

After the end of the conference, I saw a newbie splayed out in a lounge chair, distressed about what she would have to do to enter this field. It had all been very daunting. The field cannot grow this way. This technique has to be just as simple as flying a B-2 Stealth bomber: use the stick to point the plane in the direction you want to go, and let the computers handle all the interfaces. Flying the plane must be simple. The mission remains complicated.

Memorable quotation: “I don’t want negative effects.” – Hershel Toomim
