
Is Evidence-Based Treatment All It's Cracked Up To Be?

Watch out for bad science masquerading as "evidence."

Evidence-based treatment is as accepted as motherhood and apple pie.  After all, evidence is the cornerstone of science itself.  Before the Age of Enlightenment, people could claim anything was true, but evidence tells us whether claims are justified.  There's just one problem: if the evidence is faulty or irrelevant, so is the claim based on it.  That is exactly what has happened with too much of what we are told in health care, and in addiction in particular.  As a result, much of the country believes things about addiction that are simply untrue.

We all know about the fallibility of "scientific evidence" from experience.  The media has repeatedly (and excitedly) told us of published proof of results later found to be completely false.  Think about all the times we've heard that one food or another is good, or bad, for us, only to hear later that the reverse is true.  Or, a new treatment is a miracle cure, but then has to be stopped to prevent it from causing even more harm.  What happened to the "evidence"?  The answer is that it is often simply a correlation between two things.  Any time a research study consists of obtaining data from a lot of people, correlations will inevitably show up among separate factors.  If a correlation is statistically significant (unlikely to be caused by chance), then regardless of whether it is correct or meaningful, it becomes scientific "evidence."

In one famous, and disastrous, case (we cited it in our book The Sober Truth: Debunking the Bad Science Behind 12-Step Programs and the Rehab Industry), women who took hormone replacement pills were found to live longer, a correlation that led doctors to advise women to take hormone replacement treatment.  Only later was it discovered that this was precisely the wrong advice.  The correlation (the "evidence") was caused by another factor entirely: the women who took hormone pills lived longer because they took better care of their health in general, not because of the pills.  In fact, hormone replacement therapy was dangerous and had to be stopped.  An even more dramatic example was a study which found that people who took a sugar pill (a placebo) were half as likely to die as those who did not!  This was a highly statistically significant finding, a clear piece of "evidence," but in the end just a meaningless correlation.  As in the hormone replacement case, the people who complied with the requirement to take the placebo were simply different from those who didn't comply: they were more actively engaged in their health and tended to follow directions.  They lived longer because of those factors, not because of the placebo.
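
To make the mechanism concrete, here is a minimal simulation sketch in Python -- our own illustration, with invented numbers, not data from any of the studies above -- showing how an inert pill can look protective simply because health-conscious people are both more likely to take it and less likely to die:

```python
# A sketch of compliance bias / confounding.  All numbers are invented.
# "Taking the pill" has zero true effect on survival in this simulation;
# the hidden factor (health-consciousness) drives both compliance and survival.
import random

random.seed(1)
N = 100_000

treated = untreated = 0
treated_deaths = untreated_deaths = 0

for _ in range(N):
    health_conscious = random.random() < 0.5
    # Health-conscious people are far more likely to comply with taking the pill...
    takes_pill = random.random() < (0.8 if health_conscious else 0.2)
    # ...and, independently, far less likely to die.  The pill itself does nothing.
    dies = random.random() < (0.05 if health_conscious else 0.15)
    if takes_pill:
        treated += 1
        treated_deaths += dies
    else:
        untreated += 1
        untreated_deaths += dies

print(f"death rate among compliers:     {treated_deaths / treated:.3f}")      # ~0.07
print(f"death rate among non-compliers: {untreated_deaths / untreated:.3f}")  # ~0.13
```

The compliers die at roughly half the rate of the non-compliers, even though the pill does nothing at all -- exactly the kind of "evidence" that misled the hormone replacement and placebo studies.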

This kind of problem with evidence is so well-known that it has its own name: "compliance bias."  It is just this bias that is common in addiction research, where people who become invested in (comply with) any given treatment regularly do better than would be expected in the general population.  It has become routine, for instance, for researchers to extrapolate from the 5 to 10 percent of people who invest in AA and do well, using this finding as "evidence" to recommend that everyone should attend AA.  Of course, the reverse is true: only the 5 to 10 percent of people who can invest in (comply with) the AA approach should attend.  For everyone else it's a waste of time and effort that could be spent pursuing other approaches.  Indeed, the supposedly "evidence-based" recommendations for everyone to go to AA are a prime example of the compliance bias fallacy.

Another form of faulty evidence comes from studies that are conducted without randomly choosing the study population.  An example we found in addiction research: in order to measure the effectiveness of 12-step programs, researchers studied only people who had already been exposed to AA-oriented treatment before the study even began, then assumed that the results of subsequent AA-oriented treatment apply to the general population.  This problem with gathering evidence is also well-known enough to have its own name: "selection bias."

A different problem with evidence is that it can be derived from studies that are simply asking the wrong question.  Here is a classic example, originally described by the renowned statistician Nate Silver of election-prediction fame (we discussed this in The Sober Truth): the annual winner of the Super Bowl (either the AFC or the NFC champion) used to be thought to predict whether the stock market would rise or fall that year.  Why?  Because it had worked out exactly that way for 30 years in a row.  That was absolute, definite, hard evidence, because the statistical likelihood of this occurring by chance was practically zero.  There was only one chance in 5 million that this evidence was wrong.  But, of course, it was wrong.  The Super Bowl result had nothing to do with how the market performed, as shown over the next 14 years.  What happened to that very definite evidence?  The answer is simple.  If researchers study something that is absurd, they still may find strong evidence -- statistical "proof" -- that it is true.  It was nonsensical to think that the Super Bowl winner had anything to do with the stock market, because of the enormous body of knowledge we already have about how stock markets work.  So, any result from testing that hypothesis, no matter how statistically significant, will also be nonsensical.  This kind of problem unfortunately happens quite a lot in research, and like other problems with "evidence," it has been carefully studied: a way to deal with it was devised centuries ago by the mathematician Thomas Bayes ("Bayes' Theorem"), but his solution is routinely ignored.  Addiction research has been badly hurt by this error, because many researchers -- who, after all, are only human -- have studied and drawn conclusions about only the things that interest them, or that they wish to prove, without bothering to take into account evidence from outside their fields that suggests that their results might be scientifically absurd.
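
To see why a Bayesian view deflates this kind of "proof," here is a small sketch (the prior below is invented purely for illustration; the 1-in-5-million figure is the one quoted above).  Even overwhelming statistical evidence leaves an absurd hypothesis at long odds:

```python
# A sketch of Bayes' Theorem applied to an implausible hypothesis.
# The prior is a made-up number chosen only to illustrate the principle.
def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """P(hypothesis | evidence), by Bayes' Theorem."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# Grant the "Super Bowl predicts the market" idea a prior of one in a billion,
# reflecting everything already known about how markets actually work.
prior = 1e-9
p_streak_if_true = 1.0             # the streak would surely occur if the idea were true
p_streak_if_false = 1 / 5_000_000  # the 1-in-5-million chance quoted above

print(f"posterior probability: {posterior(prior, p_streak_if_true, p_streak_if_false):.4f}")
# ~0.005 -- still about 200-to-1 against, despite "overwhelming" statistical evidence
```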

A prime example is the claimed evidence for the "chronic brain disease" hypothesis of addiction.  The proponents of this idea have taken evidence from rats and assumed that it applies to humans (because all mammals share the same type of "reward" pathway in the brain), and also assumed that the brain changes produced by using drugs can turn people into addicts (because they confused the excited behavior of rats with human addiction).  These researchers are neurobiologists who are focused on neurobiology, and have ignored, or not known, all the evidence from outside their field which demonstrates that their conclusion is indefensible.  Not only is human addiction nothing like the behavior observed in rats, but we've known for 40 years (since the famous Robins study of Vietnam veterans) that humans cannot be turned into addicts by this mechanism.  The veterans who took large amounts of heroin didn't become addicts, just as very few people given narcotics for pain leave the hospital looking for a drug pusher, and people commonly stop taking other drugs after using them for a long time and never return to them, even without any treatment.  The brain disease hypothesis is based on "evidence" that, like the Super Bowl example, is the result of ignoring the enormous body of truly relevant human evidence that proves the idea is false.

Many of the problems with bad evidence can be, and often are, addressed by doing what researchers consider the gold standard: creating experiments -- not passive data mining -- with randomized subjects and a control group to reduce factors such as selection and compliance biases.  But when we reviewed studies on the effectiveness of 12-step programs for our book, we found that virtually none of them met these criteria.  In fact, almost every one suffered not only from selection and compliance biases, but from numerous other sources of invalid evidence, such as inadequate length of study (drawing conclusions about patients with a lifelong condition like alcoholism on the basis of a study lasting just 3-12 months, for example) and inadequate data collection (most "evidence" in addiction studies is based on self-report, in which people are telephoned and asked how they're doing!).

But the most serious problem with nearly all the evidence in studies of 12-step programs was that researchers ignored data that didn't fit their conclusions.  In virtually all of these studies, the majority of the subjects dropped out because they weren't doing well.  Yet, in the reported statistical results -- the "evidence" -- these people were ignored as if they had never been part of the study.  By taking into account only the people who were doing best, researchers falsely concluded that the 12-step approach was very successful.  As we showed in our book, once the dropouts were added back in, the results were reversed.
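
Here is the arithmetic in miniature, with hypothetical numbers of our own (not figures from any particular study): counting only the completers makes a program look successful; counting everyone who enrolled tells a very different story.

```python
# Hypothetical numbers, invented only to show how excluding dropouts inflates results.
enrolled = 100      # people who started the program
dropped_out = 70    # left the study, typically because they were doing poorly
completed = enrolled - dropped_out
doing_well = 21     # completers reported as doing well

# Completers-only analysis: the figure often reported as the "success rate"
print(f"success rate, completers only:  {doing_well / completed:.0%}")  # 70%

# Counting everyone who enrolled (an intention-to-treat style analysis)
print(f"success rate, all who enrolled: {doing_well / enrolled:.0%}")   # 21%
```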

When people ignore the problems with "evidence-based" claims, false ideas become perpetuated as established scientific fact.  We've seen that recently in nutrition, where years of "evidence" supporting low-fat diets are now being overturned, replaced by advice to eat low-carbohydrate diets; the old evidence is now understood to have been a bandwagon effect in which everyone hopped on board with the "low fat" idea and ignored evidence to the contrary.  This "bandwagon" error is just what has allowed the "brain disease" idea to flourish.  We also discovered that not a single major addiction journal had published an article about the psychology behind the behavior we call "addiction" over the past 3 years!  It's not that such articles don't exist; there's a longstanding academic literature on the topic.  I've contributed to it myself in scientific journals and in two earlier books about the psychology behind addiction (The Heart of Addiction and Breaking Addiction).  But psychological understanding of addiction is simply not the interest, or the expertise, of current addiction journal editors.  As a consequence, more and more highly questionable "evidence" is published, perpetuating its conclusions as if they were scientific fact, while alternative or conflicting ideas are pushed to the side.

The more you know about "evidence-based" science, especially in health care and particularly in addiction, the more skeptical you should be.  Investigative journalist Gary Taubes, citing the comments of two British epidemiologists, wrote that "those few times that a randomized trial had been financed to test a hypothesis supported by results from these large observational [correlation] studies, the hypothesis either failed the test or … failed to confirm the hypothesis."

The term "evidence-based" falls far short of what people imagine, or hope.  Motherhood and apple pie are fine, but we have to omit "evidence-based" treatment from the list of things we can trust.

[An earlier version of this post ran on the website The Fix.]

Posted on Wednesday, June 10, 2015 at 02:19AM by Lance Dodes, M.D.
