Bad pharma: Drug research riddled with half-truths, omissions, lies
Excerpted from “Bad Pharma: How Drug Companies Mislead Doctors and Harm Patients”
Sponsors get the answer they want.
Before we get going, we need to establish one thing beyond any doubt: Industry-funded trials are more likely than independently funded trials to produce a positive, flattering result. This is our core premise, and one of the most well-documented phenomena in the growing field of “research about research.” It has also become much easier to study in recent years because the rules on declaring industry funding have become a little clearer.
We can begin with some recent work. In 2010, three researchers from Harvard and Toronto found all the trials looking at five major classes of drug — antidepressants, ulcer drugs and so on — and then measured two key features: were they positive, and were they funded by industry? They found over 500 trials in total: 85 percent of the industry-funded studies were positive, but only 50 percent of the government-funded trials were. That’s a very significant difference.
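To see how stark that gap is, here is a small illustrative calculation in Python, using SciPy. The excerpt doesn’t give the exact split of those 500-odd trials between funding sources, so the group sizes below are assumptions chosen purely to reproduce the reported percentages; the point is the size of the resulting odds ratio, not the precise p-value.

```python
# Illustrative only: ~500 trials, 85% of industry-funded and 50% of
# government-funded trials positive. The 300/200 split between funding
# sources is an assumption; the real counts aren't given in the excerpt.
from scipy.stats import chi2_contingency

industry_total, government_total = 300, 200
industry_positive = round(0.85 * industry_total)      # 255 positive trials
government_positive = round(0.50 * government_total)  # 100 positive trials

table = [
    [industry_positive, industry_total - industry_positive],
    [government_positive, government_total - government_positive],
]
chi2, p_value, dof, expected = chi2_contingency(table)

odds_industry = industry_positive / (industry_total - industry_positive)
odds_government = government_positive / (government_total - government_positive)
print(f"odds ratio ~ {odds_industry / odds_government:.1f}, p ~ {p_value:.1e}")
```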
In 2007, researchers looked at every published trial that set out to explore the benefit of a statin. These are cholesterol-lowering drugs which reduce your risk of having a heart attack, and they are prescribed in very large quantities. This study found 192 trials in total, either comparing one statin against another, or comparing a statin against a different kind of treatment. Once the researchers controlled for other factors (we’ll delve into what this means later), they found that industry-funded trials were 20 times more likely to give results favoring the test drug. Again, that’s a very big difference.
We’ll do one more. In 2006, researchers looked into every trial of psychiatric drugs in four academic journals over a 10-year period, finding 542 trial outcomes in total. Industry sponsors got favorable outcomes for their own drug 78 percent of the time, while independently funded trials only gave a positive result in 48 percent of cases. If you were a competing drug put up against the sponsor’s drug in a trial, you were in for a pretty rough ride: You would only win a measly 28 percent of the time.
These are dismal, frightening results, but they come from individual studies. When there has been lots of research in a field, it’s always possible that someone — like me, for example — could cherry-pick the results and give a partial view. I could, in essence, be doing exactly what I accuse the pharmaceutical industry of doing by only telling you about the studies that support my case while hiding the rest from you.
To guard against this risk, researchers invented the systematic review. In essence a systematic review is simple: Instead of just mooching through the research literature, consciously or unconsciously picking out papers here and there that support your pre-existing beliefs, you take a scientific, systematic approach to the very process of looking for scientific evidence, ensuring that your evidence is as complete and representative as possible of all the research that has ever been done.
Systematic reviews are very, very onerous. In 2003, by coincidence, two were published, both looking specifically at the question we’re interested in. They took all the studies ever published about whether industry funding is associated with pro-industry results. Each took a slightly different approach to finding research papers, and both found that industry-funded trials were, overall, about four times more likely to report positive results. A further review in 2007 looked at the new studies that had been published in the four years after these two earlier reviews: It found 20 more pieces of work, and all but two showed that industry-sponsored trials were more likely to report flattering results.
I am setting out this evidence at length because I want to be absolutely clear that there is no doubt on the issue. Industry-sponsored trials give favorable results, and that is not just my opinion or a hunch from the occasional passing study. This is a very well-documented problem, and it has been researched extensively without anybody stepping out to take effective action, as we shall see.
There is one last study I’d like to tell you about. It turns out that this pattern of industry-funded trials being vastly more likely to give positive results persists even when you move away from published academic papers and look instead at trial reports from academic conferences, where data often appears for the first time (in fact, as we shall see, sometimes trial results only appear at an academic conference, with very little information on how the study was conducted).
Fries and Krishnan studied all the research abstracts presented at the 2001 American College of Rheumatology meetings that reported any kind of trial and acknowledged industry sponsorship in order to find out what proportion had results that favored the sponsor’s drug. There is a small punchline coming, and to understand it we need to talk a little about what an academic paper looks like. In general, the results section is extensive: The raw numbers are given for each outcome and for each possible causal factor, but not just as raw figures. The “ranges” are given, subgroups are perhaps explored, statistical tests are conducted and each detail of the result is described in table form and in shorter narrative form in the text, explaining the most important results. This lengthy process is usually spread over several pages.
In Fries and Krishnan [2004], this level of detail was unnecessary. The results section is a single, simple and — I like to imagine — fairly passive-aggressive sentence:
The results from every RCT (45 out of 45) favored the drug of the sponsor.
This extreme finding has a very interesting side effect for those interested in time-saving shortcuts. Since every industry-sponsored trial had a positive result, that’s all you’d need to know about a piece of work to predict its outcome: If it was funded by industry, you could know with absolute certainty that the trial found the drug was great.
How does this happen? How do industry-sponsored trials almost always manage to get a positive result? It is, as far as anyone can be certain, a combination of factors. Sometimes trials are flawed by design. You can compare your new drug with something you know to be rubbish — an existing drug at an inadequate dose, perhaps, or a placebo sugar pill that does almost nothing. You can choose your patients very carefully so that they are more likely to get better on your treatment. You can peek at the results halfway through and stop your trial early if they look good (which is — for interesting reasons we shall discuss — statistical poison). And so on.
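That last trick, peeking at the results and stopping early when they look good, deserves a number attached to it. Here is a minimal simulation of my own (not taken from the book, and assuming NumPy and SciPy are available) of a drug with no real effect at all: with repeated interim looks and a rule of “stop the moment it looks significant,” the false-positive rate climbs well above the nominal 5 percent.

```python
# A minimal simulation of why "peeking" is statistical poison: a drug with
# NO real effect, tested with interim looks, and the trial stopped early
# whenever the interim result happens to look significant.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_trials, n_per_arm, looks = 2000, 200, (50, 100, 150, 200)

false_positives = 0
for _ in range(n_trials):
    drug = rng.normal(0, 1, n_per_arm)      # no true difference between arms
    placebo = rng.normal(0, 1, n_per_arm)
    for n in looks:                          # peek at 50, 100, 150, 200 patients per arm
        if ttest_ind(drug[:n], placebo[:n]).pvalue < 0.05:
            false_positives += 1             # stop early and declare success
            break

print(f"false-positive rate with peeking: {false_positives / n_trials:.0%}")
# A single analysis at the end would give about 5%; repeated peeking pushes it well above.
```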
But before we get to these fascinating methodological twists and quirks — these nudges and bumps that stop a trial from being a fair test of whether a treatment works or not — there is something very much simpler at hand.
Sometimes drug companies conduct lots of trials, and when they see that the results are unflattering, they simply fail to publish them. This is not a new problem, and it’s not limited to medicine. In fact, this issue of negative results that go missing in action cuts into almost every corner of science. It distorts findings in fields as diverse as brain imaging and economics, it makes a mockery of all our efforts to exclude bias from our studies, and despite everything that regulators, drug companies and even some academics will tell you, it is a problem that has been left unfixed for decades.
In fact, it is so deep-rooted that even if we were to fix it today — right now, for good, forever, without any flaws or loopholes in our legislation — that still wouldn’t help because we would still be practicing medicine, cheerfully making decisions about which treatment is best, on the basis of decades of medical evidence which is — as you’ve now seen — fundamentally distorted.
But there is a way ahead.
Why missing data matters
Reboxetine is a drug I myself have prescribed. Other drugs had done nothing for this particular patient, so we wanted to try something new. I’d read the trial data before I wrote the prescription, and I had found only well-designed, fair tests, with overwhelmingly positive results. Reboxetine was better than placebo and as good as any other antidepressant in head-to-head comparisons. It’s approved for use by the Medicines and Healthcare products Regulatory Agency (the MHRA), which governs all drugs in the UK. Millions of doses are prescribed every year around the world. Reboxetine was clearly a safe and effective treatment. The patient and I discussed the evidence briefly, and we agreed it was the right treatment to try next. I signed a prescription saying I wanted my patient to have this drug.
But we had both been misled. In October 2010, a group of researchers were finally able to bring together all the trials that had ever been conducted on reboxetine. Through a long process of investigation — searching in academic journals but also arduously requesting data from the manufacturers and gathering documents from regulators — they were able to assemble all the data, both from trials that were published and from those that had never appeared in academic papers.
When all this trial data was put together it produced a shocking picture. Seven trials had been conducted comparing reboxetine against placebo. Only one, conducted in 254 patients, had a neat, positive result, and that one was published in an academic journal for doctors and researchers to read. But six more trials were conducted in almost 10 times as many patients. All of them showed that reboxetine was no better than a dummy sugar pill. None of these trials were published. I had no idea they existed.
It got worse. The trials comparing reboxetine against other drugs showed exactly the same picture: Three small studies, 507 patients in total, showed that reboxetine was just as good as any other drug. They were all published. But 1,657 patients’ worth of data was left unpublished, and this unpublished data showed that patients on reboxetine did worse than those on other drugs. If all this wasn’t bad enough, there was also the side-effects data. The drug looked fine in the trials that appeared in the academic literature. But when we saw the unpublished studies, it turned out that patients were more likely to have side effects, more likely to drop out of taking the drug and more likely to withdraw from the trial because of side effects if they were taking reboxetine rather than one of its competitors.
I did everything a doctor is supposed to do. I read all the papers, I critically appraised them, I understood them and I discussed them with the patient. We made a decision together, based on the evidence. In the published data, reboxetine was a safe and effective drug. In reality, it was no better than a sugar pill, and, worse, it did more harm than good. As a doctor, I did something which, on the balance of all the evidence, harmed my patient, simply because unflattering data was left unpublished.
If you find that amazing, or outrageous, your journey is just beginning. Because nobody broke any law in that situation, reboxetine is still on the market, and the system that allowed all this to happen is still in play, for all drugs, in all countries in the world. Negative data goes missing, for all treatments, in all areas of science. The regulators and professional bodies we would reasonably expect to stamp out such practices have failed us.
“Publication bias” — the process whereby negative results go unpublished — is endemic throughout the whole of medicine and academia; regulators have failed to do anything about it, despite decades of data showing the size of the problem. But before we get to that research, I need you to feel its implications, so we need to think about why missing data matters.
Evidence is the only way we can possibly know if something works — or doesn’t work — in medicine. We proceed by testing things, as cautiously as we can, in head-to-head trials and gathering together all of the evidence. This last step is crucial: If I withhold half the data from you, it’s very easy for me to convince you of something that isn’t true. If I toss a coin a hundred times, for example, but only tell you about the results when it lands heads-up, I can convince you that this is a two-headed coin. But that doesn’t mean I really do have a two-headed coin. It means I’m misleading you, and you’re a fool for letting me get away with it. This is exactly the situation we tolerate in medicine and always have. Researchers are free to do as many trials as they wish and then choose which ones to publish.
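The coin-toss analogy is easy to make concrete. Here is a minimal sketch of my own, with a perfectly ordinary fair coin, showing how selective reporting alone makes it look two-headed.

```python
# The coin-toss analogy from the text, made concrete: a fair coin, tossed
# 100 times, but only the tosses that land heads are "published".
import random

random.seed(42)
tosses = [random.choice(["heads", "tails"]) for _ in range(100)]

reported = [t for t in tosses if t == "heads"]   # selective reporting

print(f"all tosses:      {tosses.count('heads')} heads out of {len(tosses)}")
print(f"reported tosses: {reported.count('heads')} heads out of {len(reported)}")
# From the reported data alone, the coin looks two-headed: 100% heads.
```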
The repercussions of this go way beyond simply misleading doctors about the benefits and harms of interventions for patients, and way beyond trials. Medical research isn’t an abstract academic pursuit: It’s about people, so every time we fail to publish a piece of research we expose real, living people to unnecessary, avoidable suffering.
TGN1412
In March 2006, six volunteers arrived at a London hospital to take part in a trial. It was the first time a new drug called TGN1412 had ever been given to humans, and they were paid £2,000 each. Within an hour these six men developed headaches, muscle aches and a feeling of unease. Then things got worse: high temperatures, restlessness, periods of forgetting who and where they were. Soon they were shivering, flushed, their pulses racing, their blood pressure falling. Then, a cliff: one went into respiratory failure, the oxygen levels in his blood falling rapidly as his lungs filled with fluid. Nobody knew why. Another’s blood pressure dropped to just 65/40; he stopped breathing properly and was rushed to an intensive care unit, knocked out, intubated, mechanically ventilated. Within a day, all six were disastrously unwell: fluid in their lungs, struggling to breathe, their kidneys failing, their blood clotting uncontrollably throughout their bodies, and their white blood cells disappearing. Doctors threw everything they could at them: steroids, antihistamines, immune-system receptor blockers. All six were ventilated on intensive care. They stopped producing urine; they were all put on dialysis; their blood was replaced, first slowly, then rapidly; they needed plasma, red cells, platelets. The fevers continued. One developed pneumonia. And then the blood stopped getting to their peripheries. Their fingers and toes went flushed, then brown, then black, and then began to rot and die. With heroic effort, all escaped, at least, with their lives.
The Department of Health convened an Expert Scientific Group to try to understand what had happened, and from this two concerns were raised. First: Can we stop things like this from happening again? It’s plainly foolish, for example, to give a new experimental treatment to all six participants at the same time in a “first-in-man” trial if that treatment is a completely unknown quantity. New drugs should be given to participants in a staggered process, slowly, over a day. This idea received considerable attention from regulators and the media.
Less noted was a second concern: Could we have foreseen this disaster? TGN1412 is a molecule that attaches to a receptor called CD28 on the white blood cells of the immune system. It was a new and experimental treatment, and it interfered with the immune system in ways that are poorly understood and hard to model in animals (unlike, say, blood pressure, because immune systems are very variable between different species). But, as the final report found, there was experience with a similar intervention: It had simply not been published. One researcher presented the inquiry with unpublished data on a study he had conducted in a single human subject a full 10 years earlier using an antibody that attached to the CD3, CD2 and CD28 receptors. The effects of this antibody had parallels with those of TGN1412, and the subject on whom it was tested had become unwell. But nobody could possibly have known that because these results were never shared with the scientific community. They sat unpublished and unknown when they could have helped save six men from a terrifying, destructive, avoidable ordeal.
That original researcher could not foresee the specific harm he contributed to, and it’s hard to blame him as an individual because he operated in an academic culture where leaving data unpublished was regarded as completely normal. The same culture exists today. The final report on TGN1412 concluded that sharing the results of all first-in-man studies was essential: They should be published, every last one, as a matter of routine. But phase 1 trial results weren’t published then, and they’re still not published now. In 2009, for the first time, a study was published looking specifically at how many of these first-in-man trials get published and how many remain hidden. They took all such trials approved by one ethics committee over a year. After four years, nine out of 10 remained unpublished; after eight years, four out of five were still unpublished.
In medicine, as we shall see time and again, research is not abstract: It relates directly to life, death, suffering and pain. With every one of these unpublished studies, we are potentially exposed, quite unnecessarily, to another TGN1412. Even a huge international news story with horrific images of young men brandishing blackened feet and hands from hospital beds wasn’t enough to get movement because the issue of missing data is too complicated to fit in one sentence.
When we don’t share the results of basic research, such as a small first-in-man study, we expose people to unnecessary risks in the future. Was this an extreme case? Is the problem limited to early, experimental new drugs in small groups of trial participants? No.
In the 1980s, doctors began giving anti-arrhythmic drugs to all patients who’d had a heart attack. This practice made perfect sense on paper: We knew that anti-arrhythmic drugs helped prevent abnormal heart rhythms; we also knew that people who’ve had a heart attack are quite likely to have abnormal heart rhythms; we also knew that often these went unnoticed, undiagnosed and untreated. Giving anti-arrhythmic drugs to everyone who’d had a heart attack was a simple, sensible preventive measure.
Unfortunately, it turned out that we were wrong. This prescribing practice, with the best of intentions, on the best of principles, actually killed people. And because heart attacks are very common, it killed them in very large numbers: well over 100,000 people died unnecessarily before it was realized that the fine balance between benefit and risk was completely different for patients without a proven abnormal heart rhythm.
Could anyone have predicted this? Sadly, yes, they could have. A trial in 1980 tested a new anti-arrhythmic drug, lorcainide, in a small number of men who’d had a heart attack — fewer than 100 — to see if it was any use. Nine out of 48 men on lorcainide died, compared with one out of 47 on placebo. The drug was early in its development cycle, and not long after this study, it was dropped for commercial reasons. Because it wasn’t on the market, nobody even thought to publish the trial. The researchers assumed it was an idiosyncrasy of their molecule and gave it no further thought. If they had published, we would have been much more cautious about trying other anti-arrhythmic drugs on people with heart attacks, and the phenomenal death toll — over 100,000 people in their graves prematurely — might have been stopped sooner. More than a decade later, the researchers finally did publish their results, with a mea culpa, recognizing the harm they had done by not sharing earlier:
When we carried out our study in 1980, we thought that the increased death rate that occurred in the lorcainide group was an effect of chance. The development of lorcainide was abandoned for commercial reasons, and this study was therefore never published; it is now a good example of “publication bias.” The results described here might have provided an early warning of trouble ahead.
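For readers who want to check the arithmetic, here is a small illustrative calculation on the mortality figures quoted above: nine deaths out of 48 men on lorcainide against one out of 47 on placebo. The choice of Fisher’s exact test is mine, not the original authors’, but it shows how loudly even this small trial was speaking.

```python
# The mortality data quoted above: 9 of 48 men died on lorcainide vs 1 of 47
# on placebo. A Fisher's exact test (my choice of test, not the original
# authors') on that 2x2 table quantifies how unlikely this split is by chance.
from scipy.stats import fisher_exact

table = [[9, 48 - 9],    # lorcainide: deaths, survivors
         [1, 47 - 1]]    # placebo:    deaths, survivors
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")

print(f"odds ratio ~ {odds_ratio:.1f}, two-sided p ~ {p_value:.3f}")
```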
This problem of unpublished data is widespread throughout medicine — and indeed the whole of academia — even though the scale of the problem, and the harm it causes, have been documented beyond any doubt. We will see stories on basic cancer research, Tamiflu, cholesterol blockbusters, obesity drugs, antidepressants and more, with evidence that goes from the dawn of medicine to the present day, and data that is still being withheld, right now, as I write, on widely used drugs which many of you reading this book will have taken this morning. We will also see how regulators and academic bodies have repeatedly failed to address the problem.
Because researchers are free to bury any result they please, patients are exposed to harm on a staggering scale throughout the whole of medicine, from research to practice. Doctors can have no idea about the true effects of the treatments they give. Does this drug really work best, or have I simply been deprived of half the data? Nobody can tell. Is this expensive drug worth the money, or have the data simply been massaged? No one can tell. Will this drug kill patients? Is there any evidence that it’s dangerous? No one can tell.
This is a bizarre situation to arise in medicine, a discipline where everything is supposed to be based on evidence and where everyday practice is bound up in medico-legal anxiety. In one of the most regulated corners of human conduct, we’ve taken our eyes off the ball and allowed the evidence driving practice to be polluted and distorted. It seems unimaginable.
Excerpted from “Bad Pharma: How Drug Companies Mislead Doctors and Harm Patients” by Ben Goldacre. Published by Faber & Faber, an affiliate of Farrar, Straus and Giroux. Copyright 2012. Republished with permission of the publisher.