Some Belated Reflections on the 2006 ISNR Conference

by Siegfried Othmer | November 27th, 2006

The 2006 ISNR Conference was held in a congenial setting in Atlanta with some 350 in attendance. That’s more than fifty percent of the membership, which is a good turnout. On this occasion, BrainMaster celebrated its ten-year anniversary with a festive evening. The overall attendance notwithstanding, many of the talks were only sparsely attended. One reason was parallel sessions. One wonders if it isn’t time to rethink conference organization in view of what is now possible with web-casting and inexpensive mass storage media.

The highlights of the conference among the programs I attended included the continuing work of Rob Coben with the autism spectrum, using both HEG and EEG neurofeedback and documenting change with a variety of assessment instruments. Rob is now training at low EEG frequencies if the coherence anomalies fall in that range. He compared coherence-based training with inter-hemispheric training. The inter-hemispheric bipolar training yielded better outcomes on attentional measures than the coherence-based training, but all other measures tended to favor the coherence-based protocols. The training frequencies employed fell within 7-14 Hz, the band over which the deviations were greatest.

Coben also compared near infrared and passive infrared (NIR and PIR) Hemoencephalography training. He found an overall 42% improvement in the ATEC score (Autism Treatment Evaluation Checklist, by Bernard Rimland, the evaluation instrument with the most extensive history in the field) over twenty sessions of training. This was accomplished with children with mild to moderate impairment. NIR and PIR HEG yielded comparable gains on most of the measures, with the advantage going to PIR for several. NIR training lends itself better to localized training.

Significantly, this improvement was documented after these same children had received twenty sessions of EEG NF training, where an overall improvement on the order of 40% was also seen on the ATEC scores. The cumulative improvement was therefore in the range of >60%. It is also noteworthy that connectivity deficits tended toward normalization with the above interventions.
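As a rough check on that cumulative figure, one can compound the two reported percentages, treating each as a fractional reduction of the remaining ATEC symptom score; this is my own back-of-the-envelope reading of the reported numbers, not Coben’s calculation:

1 − (1 − 0.40) × (1 − 0.42) = 1 − 0.60 × 0.58 ≈ 0.65

That is, a cumulative improvement of roughly 65%, consistent with the >60% figure.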

Coben also investigated the mu rhythm in autistic children, seen as an index to the functioning of the mirror neuron system, in a comparison with normally functioning children. The results of all the studies may be found on the website http://www.thebrainlabs.com/conditions.shtml

The mirror neuron hypothesis of autism is featured in a recent issue of Scientific American, so it is clearly gaining traction. The mirror neuron system is not mere virtual mimicry, however. It is engaged when we observe directed, purposeful movement on the part of another, i.e. when we get caught up in the narrative of what is happening. And it is not the mirror neuron system that “decides” whether we are engaged with the narrative. That is a more global decision. So is the failure of the mirror neuron system to activate properly the essence of the problem or merely a concomitant symptom of it? That remains to be seen.

We have a tendency in our science to alight on the detail whenever it is available, to the disadvantage of more comprehensive and more complex models. Ironically, the scientific methods with which we are examining autism are exhibiting their own autistic features.
I am reminded here of a story that kids used to find hilarious in elementary school. A scientist trains a grasshopper to jump on verbal command. After testing the grasshopper’s performance he pulls off one hind leg, and the grasshopper still performs, albeit more poorly. He then pulls off the other hind leg. As the grasshopper no longer jumps when prompted, the scientist concludes that the organ of hearing must be located in the legs…

Coben has documented extensive hypo-coherences (and a few hyper-coherences) among these autistic children, and this has recently been confirmed in a study out of the University of Washington. The indicated deficits are global in autism. The fact that the coherences in some linkages are elevated at some frequencies while being depressed at others is taken as evidence that the deficits lie in the functional realm. The trend toward normalization with training confirms this.

In comparing Coben’s EEG training as best we can with our own work, it is likely that with our approach many of these children would be trained at even lower frequencies than Rob employed. Moreover, many of them would now be trained with right-lateralized bipolar placements, unless there was seizure activity (e.g., Landau-Kleffner Syndrome) or other specific rationale for the inter-hemispheric placement. (One recent study found epileptogenic activity in some 60% of autistic children, so the candidate group is substantial.) Another difference is that we are more likely to work with more severely impaired children, simply by virtue of the fact that this is the population we currently attract. It would be difficult to imagine putting some of these children through a QEEG measurement at the outset, for example, without being apprehended for child abuse.

Aversion to the electrodes tends to diminish significantly over the first few sessions, so even if a QEEG is wanted, perhaps it can wait until the child has been calmed and acclimated. And since progress seems to be equally available with HEG, why not start out the first few sessions with no electrodes at all on the scalp, before EEG NF is even introduced?

The Thompsons reported a data summary covering over 130 Asperger’s children who were treated with conventional SMR-type training. In the future, training protocols will be tailored to the individual rather than being simply protocol-based. The emerging story on neurofeedback for the autism spectrum just keeps getting more compelling.

Ed O’Malley and Merlyn Hurd combined their efforts to present a case history of Lyme disease extending over two years, in which substantial normalization of coherence and amplitude anomalies was achieved using NeuroCarePro. The technique targets variability in a two-channel design, but there is no explicit targeting of coherence anomalies. This case history is particularly significant since the EEG training with NeuroCarePro was preceded by conventional protocol-based EEG neurofeedback for a period of a year and a half. The training with NCP resulted in substantial normalization of a whole host of coherence anomalies, and significant normalization in the amplitude domain as well, particularly in the delta regime. The presentation may be found at the Zengar site, at www.zengar.com

The case history represents another exemplar in the general argument that effective self-regulation strategies tend to produce normalization of EEG parameters irrespective of the targeting of specific deviations. This shifts the debate away from the issue of efficacy to the more relevant one of relative clinical efficiency.

The major novelty at the conference was Juri Kropotov’s announcement that the Institute of the Human Brain has been working with DC potential training since the seventies. This involves applying an anodic (positive) potential to a region of interest at the scalp and obtaining functional change over time. The negative electrode is placed at a neutral site. This work has apparently never been reported before. It opens up a highly accessible method by which localization phenomena can be evaluated. A three-volt battery is used, presumably in a current-limited configuration. One is reminded, of course, of the related CES modality (which was also developed in Russia), where AC modulation is employed in order to minimize electrostatic drift (polarization effects). With a DC signal, one has to learn to live with such effects.
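To make the current-limiting point concrete, here is a back-of-the-envelope illustration using Ohm’s law; the specific current level is my own assumption for illustration, since the actual parameters of the Russian work were not reported. Limiting a 3-volt source to, say, 300 microamperes calls for a series resistance of roughly

R = V / I = 3 V / 300 µA = 10 kΩ

with any electrode-skin impedance adding to that in series and reducing the current further.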

It seems that a number of techniques are emerging that promise substantial functional gains through fairly generic interventions, presumably largely through an activation mechanism. This includes HEG of both flavors, CES (quasi-dc and ac), hyperbaric oxygen therapy (HBOT), and now DC potential work. These generic activation procedures find their analog also in the neurofeedback technologies. The pROSHI comes to mind here, in that it stimulates a reaction of the CNS generically without reference to the EEG specifics that may prevail. The NeuroCarePro fits into this category also, as it is usually deployed in its default mode, without consideration of the underlying stationary properties of the EEG.

With increasing flexibility in protocol-based training, such as the tailoring of the reward frequency that we are doing, we are also covering a lot of bases with a very finite set of tools. The trend seems once again to be in the direction of simplification of neurofeedback methods across the field for most of the people who come for our help. (This comes after a considerable period of protocol inflation.) On the other hand, the exceptions seem just as unambiguous. Some conditions, such as specific learning disabilities and physical trauma, call on our most sophisticated methods based on comprehensive analysis of the QEEG.

The thrust here is in the direction of coherence-based training by various methods, relying either on the standard databases or on the novel connectivity analysis done by Bill Hudspeth. One of the characteristics of the more targeted approaches is that progress is not always monotonic. New coherence anomalies may arise as the targeted ones are expunged. This was first reported years ago by Joe Horvat, and the problem has not gone away with the subsequent refinement of methods. Jonathan Walker pointed out that one can reach a satisfactory status in clinical terms without contending with these newly emerging coherence anomalies. This was Horvat’s observation early on as well.

The newly arising anomalies did not appear to be connected with new clinical complaints. This calls into question the complete identification of coherence anomalies with functional disorders that are of clinical interest. We should allow for the possibility that some coherence anomalies have a compensatory function that actually serves the overall objective of self-regulation. And if that is the situation after neurofeedback training, it could also be the situation before. One should therefore not jump to the conclusion that coherence anomalies always index pathology.

A divergence appears to be taking place where the scientific interest within the field follows its natural bent toward greater specificity in training, whereas the clinical side follows its natural bent toward the more generic methods. The trend toward simplicity is very welcome within the practitioner community. The clinicians who come to the ISNR conference often feel intimidated by all of the cross-currents among the various presentations. What can never be said from the lectern is that different methods may turn out to be substantially equivalent in terms of final outcome, when all is said and done. The point can never be made because the comparison data are almost never available.
Increasingly, the clinical issues turn not on how well we do but rather on how quickly we do it. Concern about efficacy is turning into a concern about efficiency. And until formal comparison studies are done, the basis for claiming a general superiority of one technique over any other is increasingly suspect.

A second divergence is taking shape as well, and it relates to training frequency. The more specific techniques are tending to gravitate toward higher EEG frequencies (through the work of people like Kirt Thornton and Marvin Sams), whereas the more universal or generic methods are trending the other way toward lower EEG frequencies. This suggests that a kind of complementarity exists. The lower frequencies are likely involved with the most fundamental of regulatory burdens, whereas the higher frequencies organize specific cognitive function, etc. The higher-frequency activity is more episodic, burst-like, presumably encoding transient activity; the lower frequency activity is more persistent and steady-state, encoding the persistence of states.

This divergence in approach is not a flaw in our collective and varied approaches to neurofeedback. We are simply following the way nature actually behaves with respect to various regulatory functions. So the Holy Grail of one over-arching, comprehensive, singular, and unitary approach to neurofeedback is most likely not in prospect. The organic growth of the field can only suffer if the organization continues to be regimented in pursuit of that objective. The ongoing attempt by the in-crowd to adjudicate between the acceptable and the beyond-the-pale is just laughable. A scientific method organized around the taming of human prejudicial propensities ends up being tethered to rampant prejudice.

To anchor the close of the conference, Jay Gunkelman presented on the model of EEG patterns as indicating psychophysiological phenotypes that cut across the classic diagnostic boundaries. The talk covered much the same ground that Jack Johnstone had presented the year before. The same topic is treated in the current issue of Biofeedback magazine. Surprisingly, the article carries Jay’s sole authorship, and Jack Johnstone’s prior contribution in this area is reduced to a mere reference.

The general thrust here is to be welcomed. It is reminiscent of Daniel Amen’s categorization of childhood attention and behavior problems into six major classes based on SPECT characteristics. This classification has already received considerable clinical validation through the emergence of distinct medical approaches for each. Similarly, Suffin and Emory have refined their phenotyping approach to the point where very specific medication recommendations can be made. (It was no doubt Jack Johnstone’s long-term association with Suffin and Emory that first gave impetus to the extrapolation of these ideas to neurofeedback.)

Neurofeedback strategies match up with the posited physiological subtypes more than they match up with the classical diagnoses, so acceptance of this model will lead to the adoption of better research designs. With the traditional focus on specific protocols for specific diagnoses, the field was unfortunately saddled with research designs that were too narrowly constrained.

With the classification of EEG patterns into major categories, it also becomes apparent that most of these can be discriminated with relatively few EEG measurements. A clinician can get a pretty good picture of what is going on with just a few minutes of EEG sampling at various sites on the scalp. This means that the essence of the EEG phenotype model can be readily incorporated into a clinical approach without much additional grief.

The clinical utility of the phenotyping approach of Suffin and Emory was discussed by Daniel Hoffman at this conference. In some dramatic cases, the medication washout period that precedes analysis reveals the medications to have been the problem rather than the remedy. In both the QEEG and the SPECT realms, however, nearly all cases studied end up with significant alteration in treatment recommendations.

With the clinically relevant EEG information becoming more immediately accessible to the clinician, a natural hierarchy emerges, a kind of “stepped-care” model (in the words of Peder Fagerholm) in which the most generic methods are first in line, augmented over time as necessary with more refined and targeted approaches.

The person who has taken the phenotype approach the farthest is probably Paul Swingle, and with good success indeed. What I find useful in Paul Swingle’s orientation that is perhaps under-emphasized in Jay’s presentation is the tie back to the realm of behavior. We see the same distinction when we compare the Daniel Amen model with the Suffin and Emory model. As a child psychiatrist, Daniel Amen retains the utility of categorization of subtypes in behavioral terms, and finds these matching up with the SPECT categories. As a neurologist, Suffin operates more exclusively in the realm that ties EEG phenomenology to medication response.

It is clear by now that the phenotypes do not line up with the classic diagnostic categories. But is a different partitioning possible? Our own approach has been to talk in terms of more fundamental regulatory functions such as arousal regulation that tie the EEG data back to behavior. There are various phenomena that can “mask” the relationship between EEG spectral magnitude and arousal level, but absent such perturbations the correlation generally holds. Our sense of it is that we are seeing a modest number of key failure modes of brain regulation, and these may well line up with the principal EEG phenotypes.

Finally, there is the question of what the phenotype model fails to reveal. Neurologists have all along had modest expectations of what information the EEG could readily reveal, and even though the EEG phenotype model nicely expands the space, it still suffers from similar handicaps. Clinicians work with people at a much more subtle level than that to which the stationary EEG properties bear witness. If the DSM model has led to constricted thinking, the phenotype model threatens to do much the same even as it raises the bar.

The ultimate point of reference for our work remains the realm of behavior. No one, for example, treats the EEG as the criterion for whether the value of neurofeedback for a particular person has been exhausted. It is true that coherence-based training is being done with reference to a specific goal. This is because the training can be overdone, and one might not immediately recognize that in behavior. But even here one is usually confronted with a number of linkages that are anomalous in connectivity. The more extreme the coherence anomalies, the more likely that they correlate with matters of clinical interest. But in the realm of more minor anomalies, matters may well be more complex. The temptation is to extend the model even to these minor anomalies, but the evidence will be much harder to pin down. It is quite unlikely that they will all be trained in turn toward normality. They are trained successively only until symptom relief is obtained.

The emerging ability to see the EEG concomitants of behavior unfold before us on the computer screen tempts us once again toward the reductionist perspective that the only behavior that matters is what we can discern in the EEG. Data beguiles. It is as if certain psychologists, once they encounter neurofeedback, discover their error in not having chosen neurophysiology or neuropsychology early in their career. There is a kind of urgency about abandoning the “soft science” aspect of this field, in its unraveling of the complexity of human behavior, in the rush toward a more “scientific” or quantitative psychology based on EEG phenomenology.

This hope may always be kept alive, but it will forever be disappointed. The clinically relevant behavioral complexity will always exceed the dimensionality of our variable space. The astute clinician will always operate at the cusp of an integrative perspective that is beyond the capacity of the left-brained scientist to conceive and of the statistician to quantify. This is why the perspective of the clinician is so urgently needed as part of the conversation.
