So, in light of the Reuben scandal of forged data about pain management in surgery patients, the question naturally comes to mind: how much of a role did industry play? I've seen articles (and had comments here) to the effect that industry-sponsored research is worthless: discount it, can't trust it, bought and paid for, and so on.
The problem is, you can't completely shake that accusation. Industries (and not just the drug industry, by any means) are willing to pay for results that tell them what they want to hear. And while at times that's crossed over into outright fraud, many times it's just that you can set up all kinds of studies, in all kinds of ways, and get all kinds of answers. Run enough of them, and you can choose the ones you like and pretend the others aren't there.
The whole idea of scientific research is that you don't operate like this, of course, and eventually these things do get settled out. If the drug industry really did make sure that only happy results came out, we'd never have catastrophic clinical trial failures, and never have any drugs recalled from the market. And things like the (Nobel-worthy) H. pylori story behind stomach ulcer formation never would have seen the light of day if the industry were capable (on the other hand) of burying everything it didn't want to hear about.
But there are biases, real and potential, and they always have to be looked out for. One error, though, is to assume that these biases can be eliminated by turning to academic research instead. That's the point of a recent Op-Ed in the Washington Post by David Shaywitz, who's worked both sides of the business:
Part of the problem is that we've been conditioned to trust university research. It is based, after all, on the presumably lofty motives of its practitioners. What's not to like about science carried out by academics who have nobly dedicated their lives to understanding the unknown, furthering knowledge and serving humanity?
. . .University researchers are in a constant battle for recognition and the rewards associated with success: research space, speaking engagements, funding and autonomy. Consequently, while academic research is often described as "curiosity-driven," the reality is messier, as (curiously) many researchers tend to pursue the trendiest technologies and explore topics that happen to be associated with the most generous levels of research support.
Moreover, since academic success is determined almost exclusively by the number and prestige of research publications, the incentives to generate results are exceedingly powerful and can encourage investigators to see patterns that may not exist, to disregard contradictory observations that might be important, to overvalue data that might be preliminary or unreliable, and to embrace conclusions that deserve to be viewed with far greater skepticism.
Shaywitz goes on to make the same point I did above - that the system is ultimately self-correcting - but calls for people to recognize that academic research is also done by human beings, with all that entails. John Tierney at the New York Times took up this topic last fall, and wondered what would happen if enough researchers decided to stop taking industry funding because they were tired of having their integrity questioned.
Tierney has now responded to the Shaywitz piece as well. The comments from his readers are all over the place each time. Some of them are (correctly, to my mind) going along with the idea that research always comes with various potential biases and agendas, and should be judged case-by-case no matter the source. There are, naturally, some who aren't buying anything that might get industrial research off the hook.
"In industry sponsored comparative studies of medical treatments, the sponsor’s product always comes out on top," says one commenter there. But that's not true. I can give you plenty of examples right off the top of my head. For sure, we try to run studies that will show a benefit for our therapies - but we also have to pin these down to the real world for people (and the FDA) to have a better chance of trusting the results. We're not going to set up a trial that we have good reason to think will fail: life is too short, and the supply of funds is not infinite. You target the diseases (and the patients) that you think will benefit the most (and show the most impressive results, naturally).
And that's a bias to consider right there: we don't set up our trials randomly, so keep that in mind. But no one sets up drug trials randomly, anywhere. There's always a reason to do something so expensive and time-consuming - so you should always ask what that reason is, weigh it in your calculations, and decide from there.