
In the Pipeline


September 28, 2011

Andy Grove's Idea For Opening Up Clinical Trials


Posted by Derek

The last time I talked here at length about Andy Grove, ex-Intel CEO, I was rather hard on him, not that I imagine that I ruined his afternoon much. And in the same vein, I recently gave his name to the fallacy that runs like this: other high-tech R&D sector X is doing better than the pharmaceutical business is. Therefore the drug industry should do what those other businesses do, and things will be better. In Grove's original case, X was naturally "chip designers like Intel", and those two links above will tell you what I think of that analogy. (Hint: not too much).

But Grove has an editorial in Science with a concrete suggestion about how things could be done differently in clinical research. Specifically, he's looking at the ways that large outfits like Amazon manage their customer databases, and wonders about applying that to clinical trial management. Here's the key section:

Drug safety would continue to be ensured by the U.S. Food and Drug Administration. While safety-focused Phase I trials would continue under their jurisdiction, establishing efficacy would no longer be under their purview. Once safety is proven, patients could access the medicine in question through qualified physicians. Patients' responses to a drug would be stored in a database, along with their medical histories. Patient identity would be protected by biometric identifiers, and the database would be open to qualified medical researchers as a “commons.” The response of any patient or group of patients to a drug or treatment would be tracked and compared to those of others in the database who were treated in a different manner or not at all. These comparisons would provide insights into the factors that determine real-life efficacy: how individuals or subgroups respond to the drug. This would liberate drugs from the tyranny of the averages that characterize trial information today.

Now, that is not a crazy idea, but I think it still needs some work. The first issue that comes to mind is heterogeneity of the resulting data. One of the tricky parts of Phase II (and especially Phase III) trials is trying to make sure that all the patients, scattered as they often are across various trial sites, are really being treated and evaluated in exactly the same way. Grove's plan sort of swerves around that issue, in not-a-bug-but-a-feature style. I worry, though, that rather than getting away from his "tyranny of averages", this might end up swamping things that could be meaningful clinical signals, losing them in a noisy pile of averaged-out errors. The easier the dosing protocols, and the more straightforward the clinical workup, the better it'll go for this method.
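To put some made-up numbers on that worry, here's a minimal simulation sketch. Everything in it is an illustrative assumption (effect size, number of sites, amount of protocol drift), not real trial data: the same modest drug effect, measured once under a uniform protocol and once across sites that each dose and evaluate a bit differently.

```python
# A minimal sketch of the "noisy pile of averaged-out errors" concern.
# All numbers are hypothetical illustrations, not real trial data.
import numpy as np

rng = np.random.default_rng(0)
n_patients = 2000
true_effect = 0.3          # assumed true benefit, in standard-deviation units

# Controlled trial: every site follows the same protocol, so the only
# variation left is patient-level noise.
controlled = true_effect + rng.normal(0, 1.0, n_patients)

# Open "commons": each physician doses and evaluates a bit differently,
# adding a site-level bias on top of the patient-level noise.
n_sites = 200
site_bias = rng.normal(0, 1.0, n_sites)            # protocol/workup drift
site = rng.integers(0, n_sites, n_patients)        # which site saw each patient
commons = true_effect + site_bias[site] + rng.normal(0, 1.0, n_patients)

for label, x in [("controlled", controlled), ("open commons", commons)]:
    se = x.std(ddof=1) / np.sqrt(len(x))
    print(f"{label:>12}: mean effect {x.mean():+.3f}, std error {se:.3f}")

# The same true effect sits in both datasets, but the site-to-site scatter
# inflates the standard error by ~40%, so the commons needs roughly twice
# as many patients to see the signal with the same precision.
```

The point isn't the exact numbers; it's that every bit of uncontrolled protocol drift gets paid for in statistical power.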

That leads right into the second question: who decides which patients get tested? That's another major issue for any clinical program (and is, in fact, one of the biggest differences between Phase II and Phase III, as you open up the patient population). There are all sorts of errors to make here. On one end of the scale, you can be too restrictive, which will lead the regulatory agencies to wonder if your drug will have any benefit out in the real world (or to just approve you for the same narrow slice you tested in). If you make that error in Phase II, then you'll go on to waste your money in Phase III, when your drug has to come out of the climate-controlled clinical greenhouse. But on the other end, you can ruin your chances for statistical significance by going too broad too soon. Monitoring and enforcing such things in a wide-open plan like Grove's proposal could be tough. (But that may not be what he has in mind. From the sound of it, wide-open is the key part of the whole thing, and as long as a complete medical history and record is kept of each patient, then let a thousand flowers bloom).
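As a back-of-the-envelope sketch of that too-broad-too-soon problem (the effect size and responder fraction below are hypothetical): suppose a drug gives a 0.5-standard-deviation benefit, but only in the 20% of patients with the responsive profile. The standard sample-size arithmetic shows what enrolling too broadly does:

```python
# Back-of-the-envelope look at the "too broad too soon" problem.
# Hypothetical numbers: a 0.5-SD benefit confined to 20% of patients.
from scipy.stats import norm

alpha, power = 0.05, 0.80
z = norm.ppf(1 - alpha / 2) + norm.ppf(power)

def n_per_arm(effect_sd):
    """Standard two-arm sample-size approximation, continuous endpoint."""
    return 2 * (z / effect_sd) ** 2

narrow = 0.5            # enroll only the responsive subgroup
broad = 0.2 * 0.5       # responders diluted into the whole population

print(f"narrow trial: ~{n_per_arm(narrow):.0f} patients per arm")
print(f"broad trial:  ~{n_per_arm(broad):.0f} patients per arm")
# Diluting the effect 5-fold multiplies the required sample size 25-fold.
```

Required sample size scales with the inverse square of the effect size, which is why opening the doors too wide can quietly sink your chances at significance.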

A few other questions: what, under these conditions, constitutes an endpoint for a trial? That is, when do you say "Great! Enough good data!" and go to the FDA for approval? On the other side, when do you decide that you've seen enough because things aren't working - how would a drug drop out of this process? And how would drugs be made available for the whole process, anyway? Wouldn't this favor the big companies even more, since they'd be able to distribute their clinical candidates to a wider population? (And wouldn't there be even more opportunities for unethical behavior, in trying to crowd out competitor compounds in some manner?)

Even after all those objections, I can still see some merit in this idea. But the details of it, which slide by very quickly in Grove's article, are the real problems. Aren't they always?

Comments (46) + TrackBacks (0) | Category: Clinical Trials | Regulatory Affairs


COMMENTS

1. PharmaHeretic on September 28, 2011 9:05 AM writes...

Maybe the answer lies in something between his proposal and the current FDA-lawyer-conmen heavy system.

What about opening up data from clinical trials in a manner that still protects the identity of their participants? Even something like that would be an improvement over the current system.


2. johnnyboy on September 28, 2011 9:23 AM writes...

"Even after all those objections, I can still see some merit in this idea."

Derek, I think you are being extremely generous.

It's extraordinary how being "successful" at one thing (e.g. being the CEO of a company at the right time) somehow qualifies all your subsequent musings about everything else as worthy of publication. I would like to have the energy to go over Grove's 'proposal' and explain how pretty much every single assumption in there is faulty and dangerous (starting with the idea that a drug's safety is "proven" after a Phase I study - tell that to the torcetrapib deaths), but frankly the task seems herculean, and pointless, because nothing like that is ever going to be applied. Thank god.


3. Todd on September 28, 2011 9:36 AM writes...

One of my immediate responses is that the need in clinical development is probably not bigger, broader trials, but smaller, more narrowly tailored trials, focused on a defined population of patients (adeno-NSCLC, mutant bRaf, or ALK translocation) that may be most sensitive to your drug.

Not sure you can merge the Grove plan with the current need for biomarker driven clinical trial cohorts.

Anyway, it is food for thought.


4. Anonymous on September 28, 2011 9:45 AM writes...

> other high-tech R&D sector X is doing better than the pharmaceutical business is. Therefore the drug industry should do what those other businesses do, and things will be better

Are we back talking about Formula1 again?


5. DCRogers on September 28, 2011 9:48 AM writes...

Linus Pauling, with his screwball theories on Vitamin C and the common cold, is another example of expertise in one area not only not transferring, but actively contributing to the pursuit of wackiness.


6. simpl on September 28, 2011 10:04 AM writes...


There are quite elegant methods for checking data over time, like a production series, for relevance/non-relevance/not-yet, but they might be too complex to track all the variables in a clinical trial. Does anybody have experience in this area?


7. biotechtranslated on September 28, 2011 10:12 AM writes...

I agree with you, Derek. The heterogeneity of the data will be a huge issue. Look at the inclusion criteria on any clinical trial; will you really be able to pull anything worthwhile out of the data? My biggest concern would be patients who have received prior therapies, or are still on a therapy, when they "test" the new drug.

The other issue is that you're passing the quality control to the physician. Do you really expect a physician who has no experience running a clinical trial to understand how important it is to fully profile a patient? And if they do provide that data, would you trust it? At least in a clinical trial there are usually a limited number of trained people screening and monitoring patients.

I think it's a neat idea and worth pursuing, but the premise that it could replace phase 3 trials is pretty laughable to be honest.

Mike


8. TJMC on September 28, 2011 10:21 AM writes...

#1 PH opens an interesting tangent to Grove's idea. From a data point of view, what if all EHRs (electronic health records) were anonymized but accessible in the "commons"? We are on that path anyway, in some ways.

I have wondered why the FDA would NOT begin to utilize this upcoming consolidation of information for both safety and efficacy "signals". Data mining and adaptive trial tools are accelerating in use (and maybe utility, towards #6 and #3's musings.)


9. ELA on September 28, 2011 10:32 AM writes...

Identifying subgroup effects is a product of the number of people in the trial. If the limiting factor for clinical trial size is the expense of providing the drug to each additional patient, then I'm not sure how Grove's proposal helps. Or is he proposing that the drug would be available for purchase after it passes safety? If the latter, he's not so much proposing to reform the electronic architecture of the trial system as he is proposing to scrap 50% of the function of the FDA. In that case, his argument is with the government, not with industry.


10. Pete on September 28, 2011 10:33 AM writes...

In this brave new world of clinical trials, who actually pays the bills? If you're going to outsource some of the regulatory function, why not outsource Phase 1 as well? Is Mr Grove suggesting that we should completely abandon the idea of design in the context of clinical trials?


11. anonymous on September 28, 2011 10:50 AM writes...

He just proposed what already exists for supplements. The benefits there are almost always anecdotal or vague, and the FDA only weighs in on safety. How's that system working out? Great if you're into peddling snake oil, not so much for public health.


12. JC on September 28, 2011 10:59 AM writes...

Can you still maintain statistical significance if you liberate drugs from the tyranny of the averages?


13. opsomath on September 28, 2011 11:08 AM writes...

After a friend of mine's family spent millions of dollars on her father's brain tumor treatments, we (both of us chemistry grad students at the time) were brainstorming on how we could bring down drug costs in this country. We hit on nearly this exact idea. I get that it's unpopular in this crowd, but I see a lot of advantages. First, no one is preventing anyone from running clinical trials - I would hope that any proposal of this nature would come with increased public funding for clinical research. Second, I assume that new drugs would still require a prescription. That should remove some of the "snake oil" aspect of things that people are worrying about.


14. barry on September 28, 2011 11:32 AM writes...

Whether we get rid of Phase II and Phase III testing or not, all drug approvals should be conditional on showing efficacy in Phase IV. Can that take the place of those clinical trials? We don't need that answer yet. First, institute the Phase IV test.


15. MTK on September 28, 2011 11:37 AM writes...

@13,

It's unpopular in this crowd, because it's bad science. That's what a clinical trial is, after all: an experiment. Would you run an experiment where all of your datapoints weren't collected under the same conditions in a controlled manner? Of course not, because there would be no way to deconvolute all the possible confounding factors into anything meaningful. That's what I see Grove's proposal doing.

That's not to say something like this may not have value post-approval once a drug is being used and prescribed in an essentially unstructured manner, but as a new clinical paradigm? I can't see it.

And I don't necessarily see how this reduces cost. Someone still pays for the trial.


16. tuan bla on September 28, 2011 12:52 PM writes...

Interesting idea, but first of all, how do you control against placebo? And one can forget about double-blind studies.
I fear that with this idea we transfer development to the statistical clown-about, thinking of, e.g.:
- intention to treat
- the Will Rogers phenomenon
- Simpson's paradox
...


17. Anonymoius on September 28, 2011 1:03 PM writes...

#15 - Payors are already putting out a lot more on healthcare than on clinical trial spends. Seeing MDs (anyway) and getting various treatments, diagnostics, tests... What is changing is the electronic records push, which aggregates and will offer a huge amount of data, higher granularity, and lower costs.

Agree that traditional standards of clinical biostatistics would be challenged, but "deconvolution" might not really be the problem, considering what we are trying to do with biomarkers, etc., from far, far smaller samples.

Our problem is that gerrymandering of data per state, per hospital system, per CRO, per Pharma has created a cost overhead that is untenable. Grove and others are just trying to think out of the box and piggy-back on big trends and tech changes.


18. entropyGain on September 28, 2011 1:17 PM writes...

Ummm....
"Once safety is proven," in Phase I?
Remarkable level of naivety in an environment where 10,000-patient Phase III trials are being required to examine cardiovascular outcomes in the diabetes/obesity space.

Now something like this for postmarketing surveillance/data mining... might be rather interesting.


19. itscientist on September 28, 2011 1:30 PM writes...

I think this would prove problematic quite quickly from a cost and security perspective.
You would probably have to have pseudonymisation criteria held outside the storage framework. This may also have to be based in each country where the patient resides.

Maybe practical for the US, but wider global agreement around reusing patient data across national borders would quickly prove difficult.

Then it would be limited to those countries willing to let patient data leave their borders. Countries like Germany have very tight restrictions on the use of a person's data... a good idea, but the devil would be in the legal details and the governance framework, and who would own that framework.


20. Edward Taussig on September 28, 2011 1:57 PM writes...

@MTK is 100% right: this is just data mining. With a large enough sample you're always going to find an effect, but that doesn't establish the cause.
It is shocking that someone like Andy Grove has so little understanding of the scientific method.
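To put a number on that (purely simulated data, nothing real): generate outcomes with no drug effect at all, slice them by a few dozen arbitrary patient attributes, and "significant" subgroups appear on schedule.

```python
# Mine enough subgroups in pure noise and "effects" appear.
# Purely simulated data; no real patients involved.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
n = 5000
outcome = rng.normal(size=n)              # no real drug effect at all
attrs = rng.integers(0, 2, size=(n, 40))  # 40 arbitrary binary attributes

hits = 0
for j in range(attrs.shape[1]):
    grp = attrs[:, j] == 1
    p = ttest_ind(outcome[grp], outcome[~grp]).pvalue
    hits += p < 0.05
print(f"{hits} of 40 subgroup comparisons 'significant' at p < 0.05")
# Roughly 2 false positives are expected by chance alone; an open commons
# multiplies such comparisons without the corrections a designed trial plans for.
```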


21. MTK on September 28, 2011 2:09 PM writes...

@17,

I was specifically addressing #13's reasoning that this would somehow reduce the cost of clinical trials, thereby reducing the cost of treatment later, not the cost of this relative to actual treatment.

Once again, a clinical trial is just that, a trial. Larger numbers don't mean it's better, they just mean more crappy data. I honestly can't foresee a way to get real, meaningful data without at least some defined patient population, a comparator arm with a similarly defined patient population, and clear clinical endpoints.

Now I realize that if every person who used the experimental drug had all their pertinent lifestyle, health, medication, etc. data entered into one giant database, you could design a "virtual study" which contained all the elements of a traditional clinical study. That, however, raises all sorts of practical and ethical issues.

a) Who would pay for the drugs in the trial and the cost associated with entering, validating, and maintaining the data and database?
b) The number of patients dosed with the experimental drug would have to rise dramatically, since many would not fit the desired clinical profile. How can a physician ethically prescribe an experimental drug in that situation if the chances are good that it won't lead to meaningful data?
c) How would one appropriately get patient consent consistently, and train doctors on how to fully inform patients and obtain such consent, regarding the use of a particular experimental drug with no evidence of benefit and the slimmest of evidence of safety?
d) How would an IRB review the ongoing "trial", and what criteria would they use to pull the plug on it? It would seem to be almost impossible to do so. You just start and hope that you get enough patients eventually in enough arms to meet the goals of the study, I guess.

Scientifically, ethically, and economically I find the idea highly flawed to say the least.


22. Placebo on September 28, 2011 2:42 PM writes...

How do you define safety under this method? Aren't safety thresholds often defined relative to efficacy - hence the term risk-to-benefit ratio? For example, aren't we willing to tolerate more safety events if the efficacy is unbelievable and life-altering?

Until this is worked out, the proposal is a non-starter.


23. Martin Griffies on September 28, 2011 3:04 PM writes...

The idea seems to be a forward extension of adverse event reporting, i.e. taking post-release methodologies to earlier phases. In that sense it's a very positive idea, and has the benefit of opening up potential therapies to a bigger audience, sooner. If the data are really open to common access, then the potential for fiascos like Vioxx will be reduced and the chances of therapeutic switches like sildenafil will be increased.


24. MTK on September 28, 2011 3:25 PM writes...

@23,

Absolutely not. The chances of fiascos like Vioxx would not be decreased, because you may not have been able to see the increase in cardiac events through the increased noise. And by the time you did spot it, more people, not fewer, would probably have been adversely affected.

Let's remember the patient group that was affected by Vioxx in the VIGOR study. It was only specific to the group that was aspirin-indicated, in comparison to the patients on naproxen.

The best example of the kind of study that Grove's plan might produce is a meta-study published in 2005 on Vioxx, which showed increased MI risk vs. users of Celebrex but indicated no increase in risk vs. other patient groups. That was a year after Vioxx was already off the market, and they were specifically looking for this. Without the VIGOR study, and Merck's subsequent mishandling of the data, I'm not sure Vioxx's risk would ever have been flushed out, and certainly not by some data mining exercise.


25. drug_hunter on September 28, 2011 5:38 PM writes...

Waaaay too much tsk-tsking going on here.

Let's start by remembering that the current system is NOT SUSTAINABLE. I think from our many conversations on Pharma effectiveness, we pretty much all agree on that .... or do you want to have another discussion about the psychological effects of layoffs and how much we hate senior management?

Grove, whatever his flaws, has suggested a very interesting and different approach. None of us has thought more than a microsecond about his idea and we're all shooting it down. Very disappointing.

Let's think about something like Grove's proposal in the context of cancer: 500K people in the USA are dying of it each year. If I had cancer, I would be willing to take some chances. A view fairly common amongst cancer patients, I might add.

The key here, I think, is ubiquitous monitoring, so the data analytics start to work in your favor [more chances to find true signal(s)]. The $100 genome (circa 2020) will help a lot too. Look up "quantified self" for an example of a group that is moving us in this direction.


26. MIMD on September 28, 2011 6:04 PM writes...

You hit the major problem on the head.

Computational alchemy (turning unclean data into gold) works no better than medieval alchemy (turning lead into gold).

See a paper I wrote and posted here on just the difficulties of post marketing surveillance:

"A Medical Informatics Grand Challenge: the EMR and Post-Marketing Drug Surveillance"

http://www.scribd.com/doc/12392184/A-Medical-Informatics-Grand-Challenge-the-EMR-and-PostMarketing-Drug-Surveillance-

"Significant technical, scientific and social challenges" starts on p. 20.


27. MTK on September 28, 2011 8:11 PM writes...

@25,

Drug hunter,

The example of a cancer patient using an experimental drug is not relevant. This isn't about an individual trying any and all means he or she has available in order to save or extend his or her life. This is about trying to determine whether a drug is safe and effective. Clinical trials are about answering a specific question or two so that decisions on how to properly use (or not use) the drug can be made in the future.

And you are right about one thing, I haven't thought about it for that long, because it didn't take long to see the scientific, ethical, and practical shortcomings of the idea.


28. drug_hunter on September 28, 2011 10:17 PM writes...

@27 (MTK) - You are dead wrong that the cancer example is not relevant. We need better ways to collect data and learn from clinical trials. Grove is proposing one way. Check out quantified self: ubiquitous monitoring is coming. That could be one data stream that enables the sort of analytics that could support Grove's idea in practice and teach us what we need to learn from a clinical trial. Along similar lines, look at "N of 1" methodology. Have you even heard of that?!

Of course it would take a lot of careful thought to see how/whether to do anything very different - risky, challenging, and all the rest. Meanwhile, I haven't heard your proposal yet. Time to take a stand. Are you defending the current system, or have you just given up and tossed in the towel? It is easy to just pooh-pooh any new idea, and you don't sound all that open to new ideas. Lord knows, the current method doesn't do that very effectively. Surely you agree on that. Given our abysmal track record I can't see how we can afford to dismiss any idea however strange or different.


29. MTK on September 28, 2011 11:27 PM writes...

Drug hunter,

Thanks for the condescension. Since you can judge me as not open to new ideas through a single post, I'll judge you as highly perceptive.

Yes, I'm well aware of n of 1 and single-subject experiments. Are you even aware of the shortcomings of these types of experiments including order of treatment, carry-over effects, etc.? This doesn't even include ethical issues in single subject research.

No, I don't have an alternative to our current method of RCT. And actually given the state of modern medicine and how successfully many disease states can be treated along with increased longevity, I'd argue that our current methods have been fairly effective in improving the overall human condition. Surely you agree on that. So in that respect perhaps I am defending the current system, although I have no vested interest in keeping it. At the same time, I'm not going to grasp on to any idea just because it's new and shiny either.

My objections to Grove's plan are not because it's strange or different, but because I see major problems, which I have already mentioned.



30. Jose on September 28, 2011 11:52 PM writes...

"It's unpopular in this crowd, because it's bad science." Ding ding!

The inherent spastic self-selection of patients, lack of randomization, and conflicting priorities for the dispensing MDs make this a non-starter:

Huh, (he/she) thinks - this patient is too sick (or not sick enough) for this new drug, so I won't mention it.

Moreover, how the hell would you capture any useful clinical data to go toward establishing efficacy? The logistics and infrastructure to collect FDA-quality clinical data sure as hell don't exist in the GP's office... Any resulting data would be nothing short of worthless.


31. drug_hunter on September 29, 2011 12:05 AM writes...

I guess we just see the world very differently.

I stand by my earlier comment that our track record is abysmal. I surely don't agree that we've been fairly effective in improving the overall human condition -- if you subtract antibiotics, vaccines, environmental cleanliness (relative to earlier centuries) and a few other medicines I don't see that we've made a huge impact yet given the huge size of the global bio-medical research enterprise. Of course there are some recent success stories, about which I'm just as happy as the next person. Hard not to call Gleevec an advance for medicine. But for most cancers, AD, stroke, CHF, lupus, diabetes, schizophrenia.... there is still an awful lot to do.

And so I don't feel any need to defend the current system. It worked reasonably well for simple, 20th century problems but is not suited for the challenges we face today.

Rather, IMHO we need to find new ways to do it. Grove has put a proposal on the table. Can we *build* on his idea rather than simply shooting it down? Or can we point to other creative strategies that we think *might* be worth considering?


32. tgibbs on September 29, 2011 9:44 AM writes...

I think such a database would be highly useful, although no substitute for placebo-controlled trials. But I doubt it will ever happen, because it is impossible for a database that contains the amount of detailed information needed to be truly useful to be reliably anonymized. This is why Netflix had to stop running contests to improve their individualized movie recommendation system--because sophisticated data mining could identify individual users from the "anonymized" data.


33. MIMD on September 29, 2011 10:32 AM writes...

@30 Jose:

"The logistics and infrastructure to collect FDA-quality clinical data sure as hell don't exist in the GPs office... "

Just the data we need for Comparative Effectiveness Research...NOT.


34. fred on September 29, 2011 11:46 AM writes...

Thank you very much for this post. Just as the UK cancels its nationwide patient database project, efforts in the U.S. are well underway to require that every physician contact, diagnosis, prescription and treatment be collected from every medical encounter and lodged in large databases controlled by government or unaccountable third parties. The excuse for all of this is that having the data will somehow improve medical practice.

The mantra is that we can't manage what we can't measure. Most of the people making this claim have no idea how many ways data can be corrupted and do not wish to consider how one identifies better treatments when one has virtually no information about the circumstances, genetic patterns, compliance, or completeness of the sample being studied.


35. neandrothal on September 29, 2011 11:50 AM writes...

@32,

If the database is to enable any use of patients' genomic information for trials (e.g., pharmacogenomics, mutation-specific disease subsets), it will necessarily contain enough genetic information to allow someone to link a complete medical record to a set of genetic markers--say, 20-50 SNPs. (See Homer et al. 2008, PLoS Genetics, which explains how you could start with a subset of genetic information and identify other linked genetic and phenotypic information from a pool.)

To prevent this kind of abuse, you'd want to restrict queries of the database so that you don't get results if your query returns too few individuals. For example, if you're looking for potential trial participants who are male AND between 50-55 AND have disease X AND genotypes A, B, C, and D, AND who live in a particular ZIP code, there might be only one person who fits that. So you'd be required to relax your criteria before you could view any results.

In any case, given the state of EMRs in the U.S. a good first step would be to propose a centralized EMR system.
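A minimal sketch of the query gate described above (the K_MIN threshold and the record fields are hypothetical choices, not any real system's API):

```python
# Hypothetical minimum-cell-size gate: refuse results when too few
# records match, forcing the researcher to relax the criteria.
K_MIN = 10

def query(records, **criteria):
    """Return matching records only if the match set is large enough."""
    matches = [r for r in records
               if all(r.get(k) == v for k, v in criteria.items())]
    if len(matches) < K_MIN:
        raise PermissionError(f"query matches only {len(matches)} record(s); "
                              f"relax criteria (minimum {K_MIN})")
    return matches

# e.g. query(db, sex="M", age_band="50-55", disease="X", zip3="021")
```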


36. Drug Wonk on September 29, 2011 1:12 PM writes...

Better real-world data is necessary for detecting post-market safety signals and, potentially, for better understanding risks and beneficial responses in particular sub-populations. But such data are likely to be, at best, hypothesis generators. We didn't end up with the current paradigm of double-blind, randomized controlled trials by accident. We got here because it turns out that patient self-selection, clinician selection, and observer bias all have the potential to invalidate observations of clinical effect.


37. Sisyphus on September 29, 2011 7:51 PM writes...

So where do the blood-sucking trial lawyers fit into this scheme?


38. cliffintokyo on September 29, 2011 7:58 PM writes...

If you water down, and scientifically beef up, Andy Grove's proposal, you would probably get something like the existing Accelerated/Conditional Approval procedure, which most medical research scientists are comfortable with.
Perhaps we should be looking at expanding Accelerated Approval, e.g. based on rapidly improving genomic biomarker technology, as a constructive build on Grove's rather oversimplified suggestion.


39. chris on October 2, 2011 11:15 AM writes...

This sounds vaguely like Brin's idea of googling everything every PD patient thinks, but I'm not sure how much we have learnt from that (yet).


40. hibob on October 2, 2011 10:22 PM writes...

#35: I don't think restricting queries to those that return a minimum number of patients would work to preserve anonymity. For one thing, a user or group of users could combine the results of overlapping queries instead of making a single, more detailed (forbidden) query.

But on a broader scale, I think that if the database returns enough information about patients for it to be useful for clinical research, the patients will only remain anonymous for a short while. You don't need genetic data to identify someone if you have patient/family medical histories and can spider Facebook, etc.
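A toy version of that overlapping-query (differencing) trick, with invented counts: two queries that each clear the minimum-count gate can be subtracted to isolate a single patient.

```python
# Both counts individually satisfy a K_MIN = 10 gate, yet their
# difference pins down one person. Counts are hypothetical.
broad = 120    # count(disease X AND age 50-55)
narrow = 119   # count(disease X AND age 50-55 AND NOT zip 02139)
print(f"patients with disease X, age 50-55, zip 02139: {broad - narrow}")
```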


41. WLU on October 3, 2011 12:40 PM writes...

@drug_hunter:

...if you subtract antibiotics, vaccines, environmental cleanliness (relative to earlier centuries) and a few other medicines I don't see that we've made a huge impact yet given the huge size of the global bio-medical research enterprise.

So basically, if you subtract the main contributors to longevity, based on the germ-theory of disease, which have population-wide impacts, you don't see the point of modern medicine or research. You might as well add in knowledge about diet and exercise to really round it out.

Drug research is complicated and slow. It needs to address both safety and efficacy. Though the current methods are indeed slow, they go a long way toward ensuring that drugs are genuinely effective and safe. We've knocked off most of the things that used to lower life expectancy in the past - first communicable diseases, now heart disease. People are living longer, and cancer is becoming more of a killer as a result. Ultimately we have to die of something, and when you remove the big killers, people end up dying of diseases they previously would never have lived to die from. Complaining that the current drug development process is too slow ignores the complexity of human biology when it's functioning normally, let alone when it's causing disease (and for some people, their genome means "normal functioning" will actually kill them).

@Sisyphus

So where do the blood-sucking trial lawyers fit into this scheme?

They're the ones driving up the cost of drugs so drug companies can offset the expected cost of litigation for drugs and vaccines. Big Pharma ain't no angels, but Petty Lawyers ain't helping things.


42. MIMD on October 3, 2011 4:37 PM writes...

I should point out that another former Intel CEO wondered why his 45 horses had electronic medical records, but people don't.

See Intel's former CEO, EHR's for his horses, and Equus asinus.


43. Micha Elyi on January 12, 2012 4:58 PM writes...

It's extraordinary how being "successful" at one thing (e.g. being the CEO of a company at the right time)...

johnnyboy

One thing? You don't know much about Andy Grove, do you?


44. Lou on January 15, 2012 10:29 AM writes...

I worked with Andy Grove at Fairchild Semiconductor in 1968.

He was not an IT guy. He was a brilliant engineer, economist, and innovator.

The problem, as I see it, is the use of Bayesian statistics at the NIH in the performance of clinical trials.

The use of Bayesian statistics allows researchers to assume their drug discovery premise is 60% valid.


45. DensityDuck on March 6, 2012 2:50 PM writes...

I kind of get the sense that Grove's idea is the kind of thing you come up with when you don't know anything about the subject beyond what you read on Wikipedia and BoingBoing.


46. TxDoc on July 7, 2012 8:22 PM writes...

I would have much more interest in Grove's proposal for an alt-FDA IF someone would tell me how I can get patients to:
1. Buy their meds, even off the WM $4 list,
2. Take their meds as directed,
3. Follow up as requested with regard to efficacy, problems, and side effects.
Trying to farm this out to >50k providers and who knows how many patients in a distributed manner is one more big waste of time and money.

