Comparison Shopping in Medicines
by Siegfried Othmer | February 12th, 2008
A few weeks ago The Economist offered up the lament that comparison shopping is very difficult to do when it comes to medications, because the underlying studies have not been done. It is difficult enough for pharmaceutical companies to get new drugs past their regulatory hurdles via randomized controlled trials against a placebo control. Once that goal has been reached, drug companies feel free to peddle their new medications at will, perhaps allowing the implicit assumption to propagate that newer is better.
Only a few weeks after the appearance of this article (Jan 12th issue, p. 68), this very issue was brought to the front pages with the published finding that two newer anti-cholesterol drugs, Vytorin and Zetia, may not hold an advantage over older drugs, such as Zocor, that have gone off patent. (Vytorin is a combination of Zetia and Zocor.) In the immediate task of reducing cholesterol levels, Vytorin did appear to offer an advantage. But this numerical advantage did not seem to yield a clinical payoff in terms of reduced plaque buildup in the coronary and carotid arteries. Moreover, the trend lines were worse for heart attacks, strokes, and the incidence of associated medical procedures such as angioplasties and bypass operations. This deterioration did not reach statistical significance, but one would have expected better on the basis of the cholesterol numbers. On balance, there did not seem to be any selective clinical benefit from using the more expensive new drugs. One could even go so far as to say that the basic cholesterol hypothesis was being called into question.
On top of everything else, the publication of these data appeared to have been delayed for nearly two years, over which time these new drugs contributed handsomely to profit at Schering-Plough and Merck. Vytorin contributes some 70% to profits at Schering-Plough, and the $5B in annual sales of these new drugs amount to 20% of the whole anti-cholesterol drug market. Congress may now get into the act and mandate comparison studies. When the FDA has asked for such studies in the past, the agency has sometimes simply been ignored.
It goes without saying that comparison studies hold far greater value for clinical decision-making than the original design of comparison against placebo. It is a low hurdle indeed that drugs simply have to be better than doing nothing at all. Once an effective remedy is available, the standard of comparison should clearly be raised to the higher bar.
This argument holds even more strongly when it comes to neurofeedback. It is absurd that critics continue to try to hold back the tide of neurofeedback by pointing to the lack of placebo-controlled studies while at the same time dismissing the solid data that demonstrate the effective equivalence of neurofeedback and stimulant medication in ADHD, to take just one example. If neurofeedback can hold its own against the higher bar of comparison with the best available treatment, then the relative dearth of placebo-controlled studies should not be held against it.
This evidence has been accumulating since the publication of Rossiter and LaVaque in 1995, in which protocol-based neurofeedback was compared to stimulant medication in cohorts of 23 participants. This was followed in 2004 by an even larger study by Rossiter (30 in each cohort) that further cemented the case. Additionally, we have the controlled study done by Thomas Fuchs. We have the evidence from Michael and Lynda Thompson of the successful withdrawal from stimulant medication of more than 85% of neurofeedback trainees. And finally, there is the large-scale study by Monastra, which demonstrated that the benefits in neurofeedback-trained ADHD kids persisted when they were withdrawn from stimulant medication.
Neurofeedback suffers from the practical difficulty that the more effective it is, the more difficult it is to do under conditions of blindness, whether on the part of the client or on the part of the clinician. In practice, the technique's effectiveness therefore has to be dumbed down even to allow the possibility of maintaining the blind. The technique has to be compromised in order to admit of the proof that is demanded. This is just absurd, and we should not cooperate with such a ridiculous enterprise. With the bar in drug research being raised to the level of comparison testing, we should make the same case for ourselves.
This issue has recently arisen in yet another context. A contretemps was unleashed with the publication in The Lancet of proposals to deal with childhood malnutrition. The organization Médecins Sans Frontières raised objections because the simple expedient of home-based ready-to-use therapeutic foods had been overlooked in the compilation. The authors, in turn, responded lamely that such an approach has not been subjected to randomized controlled trials to document its effects on mortality. Just as in our field of neurofeedback, controlled clinical trials in nutrition are difficult to do. Said Roger Shrimpton, Secretary of the United Nations Standing Committee on Nutrition, “If we waited for randomized controlled trials for everything, we’d do only half of what we are doing.” Médecins Sans Frontières deserves attention in this matter because the organization is actually on the front lines of this issue, and the same holds true for all the clinicians doing neurofeedback.
It is also worth looking at where to set the bar when neurofeedback side effects do not present the same potential level of harm as many drugs do. Pediatric endocrinologists write of the paucity of information on the variability among children of different ages and sizes in metabolizing drugs.
As for controlling for the effects of placebo, please see below:
In May 2001, The New England Journal of Medicine published an article that called into question the validity of the placebo effect. “Is the Placebo Powerless? An Analysis of Clinical Trials Comparing Placebo with No Treatment” by Danish researchers Asbjørn Hróbjartsson and Peter C. Gøtzsche “found little evidence in general that placebos had powerful clinical effects.” Their meta-analysis of 114 studies found that “compared with no treatment, placebo had no significant effect on binary outcomes, regardless of whether these outcomes were subjective or objective. For the trials with continuous outcomes, placebo had a beneficial effect, but the effect decreased with increasing sample size, indicating a possible bias related to the effects of small trials.” (Most of the studies evaluated by Hróbjartsson and Gøtzsche were small: for 82 of the studies the median size was 27, and for the other 32 studies the median was 51.)
“The high levels of placebo effect which have been repeatedly reported in many articles, in our mind are the result of flawed research methodology,” said Dr. Hróbjartsson, professor of medical philosophy and research methodology at the University of Copenhagen.
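For the statistically inclined, the pattern of an effect that shrinks as sample size grows is the classic signature of small-study bias. A minimal simulation sketch in Python illustrates the mechanism; it is entirely hypothetical, and the publication filter and trial sizes are illustrative assumptions, not taken from the meta-analysis:

import numpy as np

rng = np.random.default_rng(0)

def published_effects(n_per_arm, n_trials, true_effect=0.0):
    """Simulate trial effect estimates; small trials face a reporting filter."""
    effects = []
    for _ in range(n_trials):
        treated = rng.normal(true_effect, 1.0, n_per_arm)
        control = rng.normal(0.0, 1.0, n_per_arm)
        est = treated.mean() - control.mean()
        # Hypothetical, deliberately crude publication filter: small trials
        # reach print only when they happen to show a benefit.
        if n_per_arm >= 100 or est > 0:
            effects.append(est)
    return np.array(effects)

# Trial sizes of 27 and 51 echo the medians reported above; 200 stands in
# for a large trial that gets published regardless of its result.
for n in (27, 51, 200):
    eff = published_effects(n, 1000)
    print(f"n per arm = {n:3d}: mean published effect = {eff.mean():+.3f}")

Run as written, the pooled “published” effect comes out clearly positive for the small trials and near zero for the large one, even though the true effect is zero throughout. No placebo power is needed to produce the reported pattern, only selective reporting.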