Virtual Reality
by Siegfried Othmer, PhD | January 15th, 2015
Developments in the virtual reality sphere were another highlight of the latest Consumer Electronics Show in Las Vegas. Matters were raised to a higher level of visibility even before the conference, when Facebook pumped $2B into Oculus. A billion here, a billion there; pretty soon you are talking about real money. As it happens, we took a look at an Oculus system a while back to evaluate its suitability for neurofeedback. It wasn’t long before each of us felt just a little woozy from the experience, and we opted to return to maneuvering in the real rather than the virtual world.
Of course we don’t actually experience the real world. We live in the world that our brain constructs for us, and that world in turn emerges out of the brain’s experience, which consists largely of the neuronal dance and of its neuro-chemical milieu. In the words of Walter Freeman, “the only knowledge that the rabbit could have of the world outside itself is what it had made in its own brain.” (Walter Freeman, Societies of Brains, p. 2) So we are actually living in a state of virtual reality already, even without the help of Oculus. And that virtual reality world emerges out of a neuronal dance that encodes some amalgamation of present sensory input with prior experience. There is no witness of the outside world that has not been sluiced through the filter of prior experience.
When such a brain experiences Oculus, it does not take long for it to notice a discrepancy between the presented information and what ought to be there in a perfect world. The latter is the brain’s construct, its projection forward of its interpretation of reality. If that discrepancy persists, the brain may even become dysregulated and plunge into wooziness or nausea. The challenge facing Oculus development is substantial, because our detection threshold for such discrepancies is low indeed. We know this from other examples.
Nausea turns out to be a big problem among astronauts, even though the deviations from expectations are surely small in the space environment. There are only occasional firings of the thrusters to reorient the ship. Stasis becomes the expectation. Of course the zero-g environment can cause problems as well, so a better example may be an earth-bound situation. Those who undergo long flights in the belly of a B-52 tend to get nauseous also. Here, too, the motions of the aircraft are small, but that is not of much help. Even small deviations can upset the brain’s calculus, and if truth be told, the small deviations can be even more troublesome than the large ones. In a large excursion, the brain recalibrates its expectations. Small excursions, on the other hand, do not alter the expectation of stasis. They register as a discrepancy from expectations.
Our sensitivity to these discrepancies constitutes some of the best evidence we have of how our brains come to terms with the environment. The instantaneous reality is always played off against our expectations, and the latter are entirely constructed out of what the brain already holds to be true. This process works so exquisitely well that Oculus has to meet a very high bar indeed in order to fool us with a virtual reality model over the long haul. The challenge is summed up in the word “presence,” the felt sense of realism in the experience.
Historically the chief source of the problem has been processing-related delays, as the next image to be projected is calculated in a context-sensitive manner. Even with modern computing horsepower at our disposal, that problem has apparently not been fully resolved. There are inevitably delays in catching up to reality, and these remain detectable by the brain. The other problem is the accuracy with which head movements are determined.
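To get a feel for the magnitudes involved, consider a back-of-the-envelope calculation. The numbers below (a brisk head turn of roughly 200 degrees per second, a 50-millisecond display delay) are illustrative assumptions, not measured figures for any particular headset:

```python
# Illustrative arithmetic: how far the displayed image lags behind a
# turning head for a given end-to-end pipeline delay.
# All numbers here are assumptions chosen for illustration.

def angular_lag_deg(head_velocity_deg_s: float, latency_s: float) -> float:
    """Angular error accumulated while the display pipeline catches up."""
    return head_velocity_deg_s * latency_s

# A brisk head turn of ~200 deg/s against a 50 ms rendering delay:
lag = angular_lag_deg(200.0, 0.050)
print(f"Image lags the head by {lag:.1f} degrees")  # -> 10.0 degrees
```

An error of that size during every quick head turn is anything but subliminal, which is why the delay problem has dominated the engineering effort.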
When we think about it, we realize that the brain has those very same problems to contend with in its own virtual reality. There is finite accuracy in the brain’s determination of head position, and there are propagation delays in the processing of the sensory information. In fact, those problems are far more substantial in the ‘real’ world of the brain than in the virtual world of Oculus. So why aren’t we nauseous all the time, one might wonder? Why aren’t we always a few hundred milliseconds late because of all those processing delays? It is because the brain does one more piece of magic, which is to compensate for those delays and to project things forward to the present moment.
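That forward projection can be sketched in code. VR systems employ a version of the same trick, extrapolating the head pose across the known pipeline delay; the one-axis model below is a minimal illustration under assumed numbers, not any vendor’s actual algorithm:

```python
# Minimal sketch of forward prediction: instead of rendering the pose
# as measured, extrapolate it across the known delay using the current
# angular velocity -- the same maneuver attributed here to the brain.
# One axis only; a real system would predict the full 3-D pose.

def predict_angle(measured_deg: float,
                  velocity_deg_s: float,
                  latency_s: float) -> float:
    """Project the measured head angle forward to display time."""
    return measured_deg + velocity_deg_s * latency_s

measured = 30.0   # head angle when the sensor was read (degrees)
velocity = 200.0  # angular velocity at that moment (degrees/s)
latency = 0.050   # assumed end-to-end delay (seconds)

rendered = predict_angle(measured, velocity, latency)
print(f"Rendering for {rendered:.1f} degrees instead of {measured:.1f}")
# If the velocity holds steady over the delay, the residual error is zero;
# what remains comes only from changes in velocity during those 50 ms.
```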
The brain has to do time-base correction in order to recreate the simultaneity of an “event” out of signal streams with differential processing delays. It also has to do time-shifting in order to give us the experience of living in the present moment. And that is why we can sometimes actually hit a fastball.
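A toy version of that time-base correction might look like the following, where two hypothetical sensor streams report the same event with different assumed processing delays, and subtracting each stream’s known lag restores the simultaneity:

```python
# Sketch of time-base correction: two streams report the same physical
# event, but each arrives with its own processing delay. Shifting each
# arrival timestamp back by that stream's known lag recovers the
# moment the event actually occurred. The delays are assumed values.

STREAM_DELAY_S = {"visual": 0.080, "auditory": 0.030}  # assumed lags

def corrected_time(stream: str, arrival_time_s: float) -> float:
    """Shift an arrival timestamp back to the event's true time."""
    return arrival_time_s - STREAM_DELAY_S[stream]

# A single clap reaches the two pipelines at different times:
arrivals = [("visual", 1.080), ("auditory", 1.030)]
for stream, t in arrivals:
    print(f"{stream}: arrived {t:.3f} s, event at {corrected_time(stream, t):.3f} s")
# Both streams now agree the event happened at t = 1.000 s.
```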
Finally, our explanation of the experience with Oculus may also serve to explain infra-low frequency neurofeedback. At the outset, the brain must ‘discover’ the connection between the slow cortical potential unfolding on the screen and its own internal state. As soon as that recognition takes place, the brain assumes responsibility for that signal and projects it forward in time. The subsequent discrepancy between the actual trajectory and the expectation for that trajectory must then be minimized, and that attempt to reach convergence is the essence of the training. In practice, reaching closure is a never-ending proposition for as long as the signal is available, so the brain remains continuously engaged with the challenge. This job description accounts for the fact that the brain does not get bored with the task—although it may well get fatigued by it!
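The predict-and-compare loop described here can be caricatured in a few lines. Everything in this sketch is hypothetical: the samples are made up, and a simple linear extrapolation stands in for whatever predictive model the brain actually employs:

```python
# Toy model of the training loop described above: extrapolate the slow
# signal forward from its recent trend, then measure how far the next
# sample departs from that expectation. Purely illustrative; this is
# not the actual neurofeedback protocol or software.

def predicted_next(prev: float, current: float) -> float:
    """Linear extrapolation: assume the recent trend continues."""
    return current + (current - prev)

signal = [0.10, 0.12, 0.15, 0.14, 0.18]  # made-up slow-potential samples

for i in range(2, len(signal)):
    expected = predicted_next(signal[i - 2], signal[i - 1])
    discrepancy = signal[i] - expected
    print(f"sample {i}: expected {expected:.3f}, got {signal[i]:.3f}, "
          f"discrepancy {discrepancy:+.3f}")
```

As long as fresh samples keep arriving, the discrepancy never settles to zero for good, which is the point: the task of converging on the signal is never finished.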