The Unruly Power Grid Revisited

by Siegfried Othmer | August 11th, 2004

The current issue of IEEE Spectrum, the house organ for electrical engineers, revisits the state of the power grid one year after the blackout of August 14, 2003. The topic is of interest to us on many levels, and the present newsletter builds on the one I wrote about it last year.

First of all, the issue of grid stability has been getting the attention of system theorists working with nonlinear dynamical models. In simulations of ever greater complexity, it is found that cascading failures, in which little blackouts become big ones, seem to be a fact of life. But this instability is already evident when we have only three systems to coordinate, as opposed to thousands: even at that level, the complexity is sufficient to make prediction nearly impossible in practice. So we are into modeling complexity, in other words into chaos theory.
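
The flavor of such simulations can be conveyed with a toy model. The sketch below is only illustrative: a ring of fifty nodes with made-up loads, a single tolerance factor for headroom, and failed load shed onto the nearest live neighbors. None of this describes any actual grid study; it simply shows how one trip can snowball.

```python
import random

def cascade(n=50, tolerance=1.4, seed=0):
    """Toy cascade: when a node trips, its load shifts to its nearest
    live neighbors on a ring; anyone pushed past capacity trips in turn."""
    rng = random.Random(seed)
    load = [rng.uniform(0.5, 1.0) for _ in range(n)]
    cap = [tolerance * x for x in load]        # headroom set by the tolerance factor
    alive = [True] * n

    queue = [0]                                # trigger: node 0 trips
    alive[0] = False
    while queue:
        i = queue.pop()
        # find the nearest live neighbor on each side of the ring
        nbrs = []
        for step in (1, -1):
            j = (i + step) % n
            while not alive[j] and j != i:
                j = (j + step) % n
            if alive[j]:
                nbrs.append(j)
        for j in nbrs:
            load[j] += load[i] / max(len(nbrs), 1)
            if load[j] > cap[j] and alive[j]:
                alive[j] = False
                queue.append(j)
    return n - sum(alive)

# Modest changes in headroom can separate a local trip from a systemwide blackout.
for tol in (2.0, 1.5, 1.2):
    print(f"tolerance {tol}: {cascade(tolerance=tol)} of 50 nodes down")
```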

The disagreeable fact that large-scale failures may be unavoidable with the present organization of the power grid is not something that politicians or utility officials can be up-front about, so there is a lot of denial at the top. But things are worse than they seem. If one simply extrapolates from “typical” outages to calculate the probability of large ones, the latter come out quite rare. The standard model assumes a Gaussian distribution, whose tail falls off faster than exponentially. In reality, large outages are happening entirely too frequently to fit this curve. Last year’s major outage implies an incidence some 300 times larger than the Gaussian curve would predict.
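
To make concrete how fast that tail collapses, here is a back-of-the-envelope comparison. The numbers (a 100 MW “typical” outage, a 1000 MW large one) are invented purely for illustration:

```python
from math import erfc, sqrt

# Illustrative numbers only: suppose "typical" outages were Gaussian with
# mean 100 MW and sigma 100 MW. A 1000 MW event then sits nine sigma out.
p_gauss = 0.5 * erfc(9 / sqrt(2))        # P(X > 9 sigma) for a Gaussian
print(f"Gaussian tail at 9 sigma: {p_gauss:.1e}")       # ~1e-19, i.e. "never"

# A heavy tail with CCDF exponent 1 decays only polynomially:
# P(X > x) = (x / x_min) ** -1, so a 10x larger event is merely 10x rarer.
p_power = (1000 / 100) ** -1
print(f"Power-law tail at 10x typical: {p_power:.1e}")  # 1e-1
```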

When the incidence-versus-magnitude curve is plotted logarithmically on both axes, we obtain a straight line—i.e. a power law! We are back to our (by now familiar) small-world network model (See the book “Linked,” by Barabasi). The cascading failure that involves a large number of nodes is much more prevalent than one would otherwise predict, and given the impact of such large-scale failures, must dominate our considerations. We see similar behavior when we plot incidence versus magnitude of earthquakes and forest fires. We may ultimately have no more effective control over large-scale collapse of the power grid than we do over large earthquakes. That puts quite another spin on the matter. (And by the way, this same kind of thinking can also explain why failures of the space shuttle happened so much more frequently than risk calculations led us to expect.)
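
For readers who want to see that straight line emerge, here is a sketch with synthetic data; the Pareto exponent and scale are arbitrary stand-ins for real outage statistics:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic "outage sizes" drawn from a Pareto (power-law) distribution.
sizes = (rng.pareto(a=1.5, size=5000) + 1) * 100   # exponent and scale are assumptions

# Empirical complementary CDF: fraction of events at least this large.
x = np.sort(sizes)
ccdf = 1.0 - np.arange(len(x)) / len(x)

# On log-log axes a power law is a straight line; its slope is the exponent.
slope, intercept = np.polyfit(np.log10(x), np.log10(ccdf), 1)
print(f"fitted tail exponent ~ {-slope:.2f}")       # roughly the 1.5 we put in
```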

When it comes to forest fires, the deliberate setting of small fires is a strategy that may successfully prevent the big conflagration. We have evidence for that in Mexico, where fires in Baja California are not so vigorously suppressed as they are in Southern California. When it comes to earthquakes, there is at present no persuasive evidence that kindling small earthquakes (through lubricating the relevant fault layers with water, for example) will ultimately prevent big ones. Things are just more complicated. A moderate earthquake will indeed relieve stress in an hourglass-shaped region around the earthquake zone, but it will also increase stresses in another hourglass-shaped region oriented perpendicular to the first. And finally, when it comes to the power grid, small improvements in the margins of individual legs of the grid may, paradoxically, make things worse overall.

This latter finding is actually controversial, and it requires us to include a sociological dimension in the analysis. It turns out that when a part of the system is beefed up, the engineers feel more comfortable running it closer to what we may call “criticality.” The cumulative impact of many such individual decisions at many nodes of the grid then brings about an overall reduction in predictability. That is, the overall system is also run closer to criticality. A real-world appraisal might then find that incremental improvements to the system have paradoxical results for large-scale stability, which just seems nutty. [A similar phenomenon may be happening in international finance, where improved financial controls in every central bank in the world may still take the world economy closer to criticality rather than away from it. This problem will no doubt be well analyzed after the event!] But at another level the analogy to forest fires may hold: allowing local grid failures to occur, rather than trying to prevent them at all costs, tends to unburden the rest of the system and leave it more stable in the general case. Tolerance of small failures may help to avoid big ones.

So what lessons might one draw for our interest in improved brain regulation? First of all, we are seeing here on the larger stage what is happening in microcosm within our own field: a variety of models by which we try to come to terms with our data; fundamental disagreements about the value and validity of each of the models; and intense disagreement about “policy” implications, i.e. protocols in our case. Like the math and engineering professors involved with modeling grid dynamics, we are also having to think in terms of distributed network models; nonlinear dynamical models; feedback/feedforward models; stochastic processes; and hierarchical control models.

We see an analogy between cascading failure in the power grid and generalized seizures, in which a localized seizure focus can at certain times propagate its disturbance broadly throughout the brain. From here we may generalize to other kinds of instabilities, such as panic, migraine, and vertigo, and even to slower instabilities such as spreading depression or mania. Does allowing small seizures prevent big ones? We don’t see that in the evidence. Conversely, does suppression of small seizures with neurofeedback set one up for rarer but bigger seizures? It is very difficult to get a handle on this important issue in the unruly world of a clinical practice, but we have not really seen much that would give us concern along these lines.

[We have had situations in which Touretters have explained to us that when they “let their tics out” other more disagreeable symptoms such as rage are more contained. Similarly, when the training is used specifically to reduce their tics, there may be a shift toward the expression of other symptoms, at least on a transient basis.]

An important point of difference is that in neurofeedback we happen to be dealing with a system geared toward learning. So when we train any part of the brain toward improved stability, then by virtue of the highly integrated nature of brain organization we end up training the whole brain toward stability. Such stability is then a buffer against any of the conditions we refer to as instabilities, not just the one that was targeted in the training.

In the case of power grid management, learning occurs on another timescale, through the gradual enhancement of control-system software. Once so upgraded, the system can respond to challenges on the system timescale as opposed to the human response timescale. This increases the “bandwidth” of the control system to higher frequencies, which, as we know, sets us up for yet other instabilities at shorter timescales. Increasing the control-system bandwidth increases the potential for instability, and that is not a paradox.
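
That last point is standard control-theory fare, and a toy loop shows it. In this sketch a proportional controller acts on a measurement that is one step stale; the gain stands in for bandwidth, and the specific numbers are illustrative only:

```python
def settle(gain, steps=60):
    """Proportional correction acting on a one-step-stale measurement:
    x[n+1] = x[n] - gain * x[n-1]. Higher gain = faster nominal response."""
    x_prev, x = 1.0, 1.0
    for _ in range(steps):
        x_prev, x = x, x - gain * x_prev
    return abs(x)

for g in (0.3, 0.8, 1.05):
    print(f"gain {g}: residual error {settle(g):.4f}")
# With the delay fixed, gain below 1 damps the error out; above 1 the loop
# rings and grows without bound.
```

Note that without the delay the same loop would tolerate gains up to 2; the one-step lag halves the usable gain, which is the bandwidth-versus-stability trade in miniature.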

At present neurofeedback relies almost entirely on the learning model, whereas regulation of the grid puts a lot of emphasis on real-time management of the challenges that come along—challenges that are never quite the same as they were before. We do a neurofeedback session with a seizure-prone patient and send him home. Power grid engineers would never “set the parameters” and then go for coffee. The system has to be constantly self-monitoring.

Can we not move in the same direction, now that we have BraInquiry from Mind Media, and the Pendant from Bruce McMillan? Are we not at the point where we think about sending the unstable individual home with an amplifier in his baseball cap, or hanging on his neck, and gently cuing him with regard to when his EEG is approaching criticality? And should this not be done before we consider doing an implant of a brain stimulator?
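
Purely as a thought experiment, the cueing logic could be as simple as a running band-power threshold. The sketch below assumes nothing about the actual BraInquiry or Pendant hardware; the sampling rate, the 4–7 Hz band, the baseline, and the threshold are all hypothetical placeholders:

```python
import numpy as np

def band_power(window, fs, lo, hi):
    """Power in the [lo, hi] Hz band of one EEG window, via the FFT."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    return spectrum[(freqs >= lo) & (freqs <= hi)].sum()

def should_cue(window, fs=256, lo=4.0, hi=7.0, threshold=5.0, baseline=1.0):
    """Cue the wearer when slow-wave power climbs well above baseline.
    Band and threshold are illustrative, not clinically derived."""
    return band_power(window, fs, lo, hi) / baseline > threshold

# Synthetic one-second window: 10 Hz alpha plus an intruding 6 Hz theta burst.
t = np.arange(256) / 256.0
eeg = np.sin(2 * np.pi * 10 * t) + 3.0 * np.sin(2 * np.pi * 6 * t)
print(should_cue(eeg, baseline=10.0))   # True: theta stands well above baseline
```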

Look at what we get for our pains in going to real-time, “always-on” feedback. We get rid of all the “blinded RCT” garbage. With always-on neurofeedback there is a simple question: does it work at this time, with this individual? Group data are not the issue here. The client has an opportunity for an incredible learning curve, and the chances are that the individual will develop a sense of what can be accomplished. There’s not a lot at stake, either, as there would be in the case of a decision to do an implant.

Even when we are not worried about a grand mal seizure, most of us have to be concerned about such mundane happenings as nodding off to sleep on a long drive. This has occurred, at one time or another, to a significant fraction of the population. One may again want to resort to “always-on” feedback to keep tabs on one’s alertness while driving, and to detect that moment when the brain decides to go off-line and the car goes on auto-pilot.

But there is a more startling implication of what we are finding with regard to the stability or instability of power grids. Whereas we do not know how to migrate toward a reliably stable grid within the present framework, the brain has in fact done it! Most people never have to worry about undergoing a major instability, such as a grand mal seizure at some point in their lives, or dropping in the street in a narcoleptic swoon. In our modeling of the grid, scaling up to greater complexity makes prediction more difficult and stability more problematic. In the case of the brain, despite its incredible complexity, stability is more or less assured.

Of course the brain has had some 500 million years of evolution to work out the kinks, whereas the whole North American grid has existed for less than a century. And we do already know where the problem lies. It is in our old favorite, the phase. With power being delivered at 60 Hz, the phase of the signal has to be coordinated everywhere. A categorical circuit breaker would be to make the interties direct current, so that the systems at either end can oscillate independently of each other. A phase disturbance then cannot propagate throughout the whole network.
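
A toy pair of phase oscillators makes the point. With sine-coupled phases (a crude analogue of an AC tie), a kick to one machine drags its neighbor off nominal; with the coupling set to zero (a crude analogue of a DC intertie), the disturbance stays local. All parameters here are illustrative:

```python
import numpy as np

def run(coupling, kick=2.0, dt=0.001, steps=5000):
    """Two 60 Hz oscillators; oscillator A is phase-kicked at t=0.
    Returns oscillator B's final departure from its nominal phase."""
    w = 2 * np.pi * 60.0                     # nominal 60 Hz in rad/s
    theta = np.array([kick, 0.0])            # A starts disturbed, B clean
    for _ in range(steps):
        dA = w + coupling * np.sin(theta[1] - theta[0])
        dB = w + coupling * np.sin(theta[0] - theta[1])
        theta += dt * np.array([dA, dB])
    return theta[1] - w * dt * steps         # B's drift off nominal, in radians

print(run(coupling=10.0))   # AC-style tie: B is dragged off by A's disturbance
print(run(coupling=0.0))    # "DC intertie": B ticks along undisturbed
```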

The brain has so arranged its affairs that global synchrony or coherence does not really need to be maintained for good function. Timing integrity does have to be maintained at the millisecond level, but in a large number of interacting circuits. Regulation may be hierarchically organized, but there is not just one hierarchy. It is a system of checks and balances, of both feedback and feedforward control. Stability is advanced by mutual interaction. There is no master clock. There is no virtual conductor. Hence there is one less single-point failure mode. If we look at this in terms of state space, the power grid must be confined within very narrow operational limits, whereas the brain can navigate with broad latitude and still remain fundamentally stable.

So we are able to say at this point that instability is inherent in the power grid, whereas stability is inherent in the way the brain is organized. This is perhaps why nature favors us to the point where we do not have to stimulate or challenge the brain in just the right way to get a beneficial effect. One may simply stimulate it in any one of a huge number of ways to obtain improved function.

The dilemma that we may not be in a position to guarantee the unconditional stability of the power grid drives some people to the conclusion that we should simply face this reality and prepare for the occasional outage. We address the problem from the bottom up, with power conditioners keeping computers going during an outage, or shifts to battery-operated portables, etc. When it comes to protecting ourselves against computer viruses, we are almost entirely dependent on bottom-up regulation. So how do we handle potential brain outages?

We may certainly get to the point very soon where a person will be in a position to appraise their own functional state via “always-on” feedback instrumentation. They will see that their EEG changes when certain food items are consumed, or when they enter a building containing materials to which they are environmentally sensitive. They will see their EEG reflect exposure to allergens, and they will see it reflect fatigue and exhaustion. This is not going to be everyone’s cup of tea, but the capability will be important for some people under some circumstances. And once such self-awareness is within reach, it will surely be taken advantage of.

Consider the effect of all the new imaging technology on how brain function is being studied: nearly all of this imaging is static, and does not have time as a variable. With the EEG we have access to the relevant dynamics. Allowing people access to this information in a useful fashion would truly be bringing power to the people. And no blinded RCT will have had anything to do with it.
