About this Author
Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship during his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis, and other diseases.
To contact Derek email him directly: firstname.lastname@example.org
April 29, 2011
I've been browsing through my journal RSS feeds, and a question occurs to me. When you're scanning through the current literature, what sort of paper makes you most likely to keep scrolling? What kind of work are you least likely to actually read?
We'll stipulate that you're looking at a journal or subject that's relevant to your field - I read a lot of stuff, but I'm most certainly not going to slow down to look over (say) a theoretical paper calculating the stability of isomeric inorganic complexes. But that said, there are things that make you slow down while going through the abstracts, and there are things that absolutely make you speed up.
My particular biases are to walk more quickly past the following sorts of titles, which have been only slightly exaggerated for effect. And you?
"Synthesis of A Natural Product That You Don't Care About, Using Methods That Bore You"
"Slight Enantiomeric Excess Realized Through Use Of A Humungous Catalyst That Takes Nine Steps To Make All By Itself"
"Nanorods Attached to Nanoplates by Nanosprings: Progress Toward a Nanomattress"
"Green Chemistry, Part 87: A Novel Reagent to Prepare Nitriles from Oximes"
"Isolation Of Known Terpenoid Natural Products From Weeds In Our Back Yard"
Category: The Scientific Literature
So Merck has announced that they're spending another $5 billion to buy back their own stock. How does this square with the CEO's recent refusal to give detailed earnings guidance, on the grounds that R&D spending comes first and is inherently unpredictable?
"Not too well" is my first response. Wall Street liked the news, taking it as a sign that the company has put its J&J problems behind it, and has (let's lapse into Streetspeak) "the visibility to deploy its capital". And stock buybacks help keep up the share price, and help the earnings-per-share, so what's not to like, if you're holding the stock?
Well, what's not to like is that there are other places for the company to deploy all that capital, now that it's so visible and everything. Like, for example, on their business. (Note that I'm not just saying that they should spend it on R&D alone - I've addressed the whole "how come Big Pharma spends so much on marketing" question, and if Merck wants to spend some of this on marketing, that's fine by me. Short explanation: marketing is supposed to bring in even more money; if it doesn't, you're doing it wrong).
I think that if Ken Frazier really wanted to stand out from the crowd, he could say that Merck is not going to spend all this money buying their own stock - that he feels that the best thing that they could do for their shareholders is to redouble their efforts to find, discover, buy, in-license, develop, and sell drugs. That, after all, is what they're on this planet to do, when you get down to it. Isn't it?
Category: Business and Markets
April 28, 2011
As it turns out, I'm not at the bench cranking out the wonder drugs today (or tomorrow) - I'm at this flow chemistry conference, right across the river in Boston, hoping to learn about new ways to, well, pump out the wonder drugs instead from our flow machines. No live-blogging, but I do have wireless in here, and will be keeping an eye on any interesting developments out there.
Category: Blog Housekeeping
I'm hearing that yesterday and today Pfizer has been handing out notices - today it appears to be in the cardiovascular area. Word is that the small-molecule people are especially hard hit, but anyone in a position to know, please feel free to add details in the comments. . .
Category: Business and Markets
Here's the cry of someone who's been jerked around by too many journal referee reports. Hidde Ploegh of the Whitehead Institute has a piece in Nature News called "End the Wasteful Tyranny of Reviewer Experiments". That could just possibly have been phrased more diplomatically, but I know what he's talking about.
Too often, reviewers try to show that they're fulfilling their responsibilities by requesting additional work from the authors of a paper under consideration. This happens more and more as you move up the hierarchy of journals, as both the novelty of the work and the incentive to publish it increase. No one's going to exert themselves too much to get their paper into Acta Retracta, even if some rogue reviewer were to try it, but Science and Nature (among others) can really make you perform tricks.
What this reminds me of is a story about Steve Wozniak, of Apple fame. When he was in college, his dorm had an old TV down in the lobby with a rabbit-ear antenna, which had to be messed with constantly to get a good picture. Woz apparently built a gizmo to fuzz out the reception, and used to sit inconspicuously in the back of the room, trying to see what sort of crazy positions he could twist people into as they held the antenna in what was seemingly the One Perfect Spot.
The referee equivalent is Just One More Experiment, and it's not always justified:
Submit a biomedical-research paper to Nature or other high-profile journals, and a common recommendation often comes back from referees: perform additional experiments. Although such extra work can provide important support for the results being presented, all too frequently it represents instead an entirely new phase of the project, or does not extend the reach of what is reported. It is often expensive and unnecessary, and slows the pace of research to a crawl. Among scientists in my field, there is growing concern that escalating demands by reviewers for the top journals, combined with the increasingly managerial role assigned to editors, now represents a serious flaw in the process of peer review.
Ploegh's point is that too many referees aren't reviewing the paper that they have; they're suggesting a whole new project or phase of research. And some of these wouldn't even affect the results and conclusions of the paper under review very much - they're just "Gosh, wouldn't it be nice if you would also. . ." experiments. The benefit for science, he says, is nowhere near commensurate with the disadvantage of holding up publication, messing with the career prospects of younger investigators, spending extra time and grant money, and so on. His suggestion?
The scientific community should rethink how manuscripts are reviewed. Referees should be instructed to assess the work in front of them, not what they think should be the next phase of the project. They should provide unimpeachable arguments that, where appropriate, demonstrate the study's lack of novelty or probable impact, or that lay bare flawed logic or unwarranted conclusions.
He also suggests that reviewers provide an estimate of the time and cost involved for their suggested experiments, and compare that to their purported benefits. I wouldn't mind seeing editors crack down on this some, either. I've had useful feedback on my own manuscripts, which identified things that really did need to be shored up. But submitting a paper should not routinely be an exercise in having other people tell you what experiments you should run before you can publish your work. When there really is a gap or flaw, naturally, it's appropriate to ask for more, but I agree with Ploegh that a reviewer needs to make a case for such things, rather than just asking for them as a matter of routine.
Ploegh has a larger historical point to make as well. Looking back at the earlier days of, say, molecular biology, you get the impression that if someone sent in an interesting paper that seemed reasonable, it would just get published, without all these trips back to the bench. Somehow, the mechanics of science (and especially scientific publication) have changed. Has it been for the better? Or would we all be better off letting more things through as they stand, if they're clearly presented and logically consistent?
I wonder if journals might consider publishing papers in that fashion, while adding an editorial note about what further experiments the reviewers had suggested. This would fulfill the function of pointing out potential weak points or areas for further exploration, but without delaying things so much. I don't see this happening - but why not, exactly?
Category: The Scientific Literature
April 27, 2011
Now here's a structure that you don't see every day. A company called RadioRx is developing compounds as radiotherapy sensitizers for oncology, designed to release reactive free radicals and intensify the cell-killing effects of ionizing radiation. And these compounds are not from the usual sources. As they put it:
In collaboration with a major defense contractor, RadioRx is developing its first lead candidate, RRx-001, a best-in-class small molecule, adapted from an energetic solid rocket propellant. The development candidate is scheduled to enter first-in-man phase 1 clinical studies by Q1 2011.
I've been forwarded a report that this is the structure of their compound, which would make their defense-contractor partner Thiokol (the assignee where that compound appears in the patent literature). (Here's one of RadioRx's own patents in this area). And I truly have to salute these guys for going forward with such an out-there structure. Can anyone doubt that this is the first gem-dinitroazetidine to reach the clinic? And with a bromoamide on the other end of it, yet?
It's easy to look at something like this and mutter "Only in oncology", but at the same time, it takes some nerve and imagination to go forward with compounds this odd. I hope that they work - and I hope that everyone else looks at their own chemical matter and decides that hey, maybe there's more to life than Suzuki couplings and benzo-fused heterocycles.
Category: Cancer | Drug Development
April 26, 2011
Well, this takes things along another step - AstraZeneca has looked over its Wilmington-area site, which has a lot of empty space in it now, and decided that the best thing to do is to start tearing buildings down:
AstraZeneca will demolish 450,000 square feet of laboratory space in three buildings at its North American headquarters campus off Concord Pike in Fairfax as part of its global restructuring, the drug giant has confirmed.
The three buildings account for a major chunk of the company's Fairfax campus and house all of the company's Delaware-based research efforts. The huge complex west of Concord Pike is only about a decade old.
But as this article clarifies, the buildings that are coming down are generally 30 years old and more. In this climate, leasing them out to someone else is probably almost impossible, even if it were physically feasible. And it's not easy to turn a lab building into much of anything else. So what else to do? I hate to see this, but I can't come up with a better answer, either.
The first research site I worked at (Schering-Plough in Bloomfield, NJ) was torn down, after repeated attempts to find a buyer for it in the early 1990s. A Home Depot (and its parking lot) occupies the space now. That one was an even harder sell, with cramped and still older buildings, and in the end, the company couldn't even give the place away. But this is a sad thing to see, no matter what.
Category: Business and Markets
And now a brief note from the "trivial but annoying" department, since it's been a couple of years since I last complained about this. Is there any way that we can start a petition, or take up a collection, or do something to make K. C. Nicolaou stop drawing ring systems like this? Coloring the insides of them with gradient fills adds no information and actually obscures elements of the structure.
If all else fails, can we at least send the man a set of Ed Tufte books?
Category: The Scientific Literature
Remember that weird Tetrahedron paper from last December? The one that claimed that it isolated the reverse transcriptase inhibitor nevirapine as a natural product from an Indian plant? In chiral form, no less?
Well, the journal would now like to say "Never mind". The lead author has retracted the paper, "due to doubt created in the scientific community on the origin of nevirapine from the seeds of Cleome viscosa". That's an odd way to put it. Isn't it? Doubts that other people might have are irrelevant if you're right, aren't they?
No, this is similar to the classic weasel-word apology, the one that goes on about regretting the way that some people took offense rather than regretting the original action itself. The reason this paper was retracted, surely, was that those doubts in the scientific community were well-founded. This paper made no sense on several levels, and those problems should have been caught immediately. Its publication was an embarrassment for Tetrahedron and for Elsevier.
I think that the reason I get so worked up about these things is the laziness and sloppy thinking involved. Scientific research deserves more than that, and the rest of us deserve more from the people who publish it.
Category: The Scientific Literature
April 25, 2011
Nature News has a big article on the "Too Many PhDs" problem, which we've discussed several times around here:
In some countries, including the United States and Japan, people who have trained at great length and expense to be researchers confront a dwindling number of academic jobs, and an industrial sector unable to take up the slack. Supply has outstripped demand and, although few PhD holders end up unemployed, it is not clear that spending years securing this high-level qualification is worth it. . .
The piece looks at several different countries, each with its own set of problems. Japan seems to be in just awful shape as far as doctorates go; it makes the situation over here look not so bad. China, for its part, is cranking out zillions of fresh PhD holders these days, but (as the article is quite frank about) many of them aren't worth much. That isn't stopping them from getting jobs (for now), but it's something to worry about.
And we all know the picture here in the US. But this article doesn't, to my mind, do as good a job as it should. Mention is made of the problems in the pharma/biotech/life sciences industries, but all the hard numbers refer to academic positions. Looking at this graph, you'd think that academia was the main destination for all PhDs, all the time - after all, that's all that's over in the right-hand box. (I'll leave aside the poor graphic design. The same colors mean completely different things in each of those three graphs, which means that you're constantly having to tell your brain not to draw the conclusions it's trying to draw).
The article also details conditions in Germany, Poland, Egypt, and India. About the latter, I have to wonder if they're facing the same quality-control problems that China has. The good people there are quite good, but there are plenty of others. I occasionally get unsolicited e-mails from PhD candidates (or finished doctorates) from the more obscure Indian universities. They're either seeking a job, with apparently no idea who I am other than some guy with an e-mail address, or seeking advice on some aspect of chemistry that (it seems to me) they should have mastered long since. . .
Update: See this Wall Street Journal piece for more on India, and just that very problem.
Category: Academia (vs. Industry)
"Big Pharma should get smaller". Now that's something that most readers around here will have heard or thought several times in recent years. But what if you were hearing it from Pfizer's former head of global development?
You are now. Peter Corr, formerly of Parke-Davis/Warner Lambert, had a chance to see how things worked from the inside at Pfizer. And as he tells Xconomy, it wasn't a thing of beauty:
Warner Lambert/Parke Davis was a larger company “but decisions were still made fast,” he says.
It was not until 2003, when Dr. Corr was Pfizer’s executive vice president of global research & development and president of worldwide development, that he realized the old model was not sustainable.
The company was spending about $8 billion on R&D but only producing about four products a year, a whopping $2 billion per drug, Dr. Corr says.
“That doesn’t work,” he says. “We needed to go out and license (drug candidates) and keep smaller (R&D) sites and let them go on their own. Let them be funded independently. Let them define how they can work best at their particular site as opposed to manage all of these sites around the world and pretend that we knew what was actually going on.”
Of course, this is basically what a lot of people were saying at the time, as they watched productive research organizations being shaken, shuffled, and shuttered. And it's just what many of us have wondered over the years: what might the various companies that Pfizer has acquired have done if they'd just been left alone? Could they possibly have been less productive than they were after they were absorbed?
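For what it's worth, Corr's "whopping $2 billion per drug" is just the straight division from the figures in the quote. A quick back-of-the-envelope sketch (using only the two numbers he cites; it ignores timing lags and the capitalized cost of failures, which is why fancier estimates come out even higher):

```python
# Rough cost-per-drug arithmetic from the figures Corr cites:
# ~$8 billion annual R&D spend, ~4 new products per year.
rd_spend_billion = 8.0
products_per_year = 4

cost_per_drug_billion = rd_spend_billion / products_per_year
print(f"Implied R&D cost per approved drug: ${cost_per_drug_billion:.1f} billion")
```

Which prints the $2.0 billion figure - and that's before you account for the years of spending behind each approval.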
Category: Business and Markets
April 22, 2011
I think that attendance across the pharma/academic/blog-reading-at-work world is rather low today (and I'm not at work myself), so I'll take a blogging holiday. Regular service will resume on Monday - see everyone then!
Category: Blog Housekeeping
April 20, 2011
Well, I've been traveling this week, but have found a bit of time to blog. Today we have something new about SAR, courtesy of the BASF marketing department.
Medicinal chemists are all familiar with the "magic methyl group" effect - the phenomenon of a single methyl sending a compound over the top in terms of activity, selectivity, PK, or what have you. I've seen it several times myself. Usually you start wondering why you didn't just put the thing in six months before, but that's rarely the way things work out.
Well, we're not the only people who notice such things. Check out this ag-chem ad from BASF for their Kixor herbicide (sent along by an alert reader). Scroll down to the bottom of the page, and you'll find:
Methyl groups serve as "metabolic handles" in crops, delivering crop safety and giving you confidence knowing you have made the right choice.
There you have it! Methyl groups add confidence! Now that's worth knowing - and you have to wonder what secrets other functional groups hold. Do carboxylic acids put a spring in your step? Do para-fluoros freshen the breath? Will a sulfonamide help you make the big sale? Someone in the advertising department might believe it - as Dilbert put it, marketing people even believe marketing surveys, so what's the limit?
Category: Life in the Drug Labs
There's an interesting follow-up over at SciBX to Bruce Booth's piece on the reproducibility of academic research. Booth, in his position as a venture capital purse-string holder, advocated caution and careful verification of exciting academic discoveries before starting the company-formation process.
The SciBX folks followed up with him and with several other VCs. Booth sticks to his position, and says that his firm, Atlas Venture, has allocated money to allow CROs to do reality checks on the new ideas that they see. Daphne Zohar at PureTech Ventures takes a similar line, but says that they do this sort of work with the originators of the technology, giving it a quiet shakedown before talking to investors. They do use CROs when appropriate, though.
On the other end of the spectrum, though, you have Camille Samuels at Versant Ventures:
“I think the best way to prevent yourself from funding biotechs that have a faulty scientific basis is to develop a trusting relationship with the scientific founders,” she told SciBX. “I think that starting a productive, long-term business relationship is hard to do if you use a ‘guilty before proven innocent’ approach.”
Samuels favors vetting the science with a top-notch scientific advisory team before launching a company. “If you hire great scientists to the company you will uncover the ‘over-reaching’ before you’ve spent any real money,” she noted.
I'm not so sure about that myself. While I agree that a good relationship between the VC people and the founding scientists is crucial, I think that any such relationship worthy of the name should be able to stand up to this sort of review. Everyone involved should be wise enough to realize this, and not take it personally. "Guilty until proven innocent", after all, is not such a bad attitude when you're looking at something that's interesting enough to trigger millions of dollars worth of investment. If the idea or technology is strong enough for real money, it's strong enough to handle a good shaking - and if it isn't, you'd want to know that as early as possible.
And to be honest, isn't it the same attitude that greets any big new discovery when it hits the literature? When some hot news comes out in a competitive field, the first thought of all the outside teams is "I wonder if that's real?" A big name or a trusted institution will buy a bit more benefit of the doubt, but not much, as well it shouldn't. I'm willing to believe that interesting results from a reliable research group are probably true, but I'll only put them in the "solid" category when I've seen someone else reproduce them (or have done it myself). That's science.
Category: Academia (vs. Industry) | Business and Markets
April 18, 2011
So the long-delayed settlement between Merck and J&J has finally been announced. The drawn-out process had everyone speculating that some sort of deal was in the works, and so it's proved:
Under the resolution, Merck must relinquish its rights to sell Remicade in Canada, Central and South America, the Middle East, Africa and the Asia Pacific effective July 1. The lost territories represent about 30 percent of Merck’s 2010 Remicade revenues.
Merck retains the ability to sell the arthritis medicine across Europe, Russia and Turkey, where it generated 70 percent of its 2010 Remicade revenue. Beginning in July, however, Merck will begin sharing its profits equally with Johnson & Johnson. . .
In a research note, Tim Anderson, an analyst with Bernstein Research, pointed out that while Merck is retaining most of its ex-U.S. franchise, it is giving up the product in markets where the growth rate has been — and is likely to remain — higher.
Merck's stock went up a bit on the news, probably from relief that the whole issue has finally been worked out. But this really can't be seen as a plus for Merck - back when they acquired Schering-Plough, those Remicade revenues were supposed to be a good part of the package. Thus the whole SP-buys-Merck charade, which looks pretty ridiculous now.
So was I off base in my prediction that Merck would come out the loser? Matthew Herper has a more positive view of the outcome than I do. At any rate, finally resolving the whole dispute is worth quite a bit to both companies. But what did all this accomplish, in the end, except giving the lawyers something to do?
Category: Business and Markets
April 15, 2011
You don't see too many drugs with selenium in them, that's for sure. It's one of those elements that can be used to illustrate the Paracelsian doctrine that the dose makes the poison: selenium is an essential element that's also toxic. There's no doubt at all about either of those properties; it all depends on how much of it you get.
And that's the problem with using the element in a drug molecule - the dose of many pharmaceuticals would then exceed the safe amount of selenium that a person could take in. That's especially true for whopping-dose areas like antibiotics (Home of the Horse Pill reads the sign over the door). So it's especially interesting to see that Achillion has spent some time and effort developing just that: a new antibiotic candidate whose essential feature is a selenium substitution.
No, they're not idiots. In fact, I have to salute them for having the nerve to go down this path. The key here is that the selenium is tied up in a heterocycle, a selenophene (analogous to thiophene, and not a heterocycle that very many chemists will have seen). This keeps the element from being bioavailable, as is apparently the case with the even stranger heterocycle ebselen.
And going from a thiophene to a selenophene is not a neutral switch - in this case, it seems to have been quite helpful. The structures are in a family of topoisomerase/gyrase inhibitors that have shown a lot of promise, but have dropped out of development due to potential cardiac side effects. It's the dreaded hERG channel again, which has sunk many a development program. Binding to that ion channel can lead to long QT syndrome in some patients, and you really don't want that risk. (Neither do the regulatory agencies, which require testing of any new drug candidate for just this reason).
Switching to selenophene gave the cleanest hERG profile for Achillion's entire series of compounds, while still retaining antibacterial activity. So these selenium heterocycles are, for the adventurous, probably worth a look - they can be similar to thiophene in some situations, and not so similar in others. People are going to look at you funny if you make them, but you should never let that slow you down.
Category: Infectious Diseases | Odd Elements in Drugs
April 14, 2011
Here's a topic that's not unrelated to that job-loss post below. Venture-capital guy Bruce Booth writes on contract research organizations (CROs):
Contract Research Organizations (CROs) have historically been sleepy fee-for-service partners for the drug industry, widely disregarded as not innovative, and their scientists certainly not treated with the same professional respect as their counterparts in Pharma R&D.
But this is clearly changing. . .Over the past decade, Big Pharma organizations have supported, willingly or not, a huge knowledge and talent transfer to CROs. Many of the project leaders in offshore CROs are Big Pharma trained medicinal chemists. Clinical trial management expertise has also flowed out of Pharma and into CROs. Furthermore, many CROs have recently been attracting some very seasoned executive talent. . .
He has a number of examples, both of companies and of people. His take on this is that the CRO world is (perforce) much more focused on cost containment than the Pharma one, since they've come up in a low-margin world, and that this (overall) could be a good thing for the pharma ecosystem:
An obvious ecosystem trend is that large pharma disgorges itself of more research sites and infrastructure, some of which will be shut down, others absorbed into existing CROs or spun-out into new ones. I also think smaller biotech will follow the same trend: more and more virtual or semi-virtual biotechs will be funded. . .
He could well be right about that - but working under these conditions will be a different experience, for sure, and a bumpy ride. But given the conditions in the industry, a bumpy ride is the absolute least that we can expect. . .
Category: Business and Markets
Matthew Herper has the numbers, as tallied up by a consulting firm. Since 2000, there have apparently been about 300,000 layoffs in the drug industry. It's important to remember that a good number of those people have found other jobs in the business - I'm one of them. But there are a lot of people who haven't.
Those exact figures, and the balance between them, are something we'll probably never be able to get a good read on. But there's no way that everyone found a new position, and I don't see any way that new hires could have filled the gap, either. The total head count of the industry is down over this period - not hugely, but it's down, and it's not like we've cured a huge slate of diseases over the last ten years and put ourselves out of business that way.
As you'll see from Matt's table, 2009 seems to have been the absolute worst year so far, with 2010 still in second place. (And since those cuts came on top of all the ones in prior years, they felt even harder.) I was one of the 15,638 laid off in 2006; several hundred of my colleagues helped to swell that total. But 2006, in retrospect, looks like an afternoon by the lake compared to what came after. . .
Category: Business and Markets
April 13, 2011
That hedgehog/fox distinction reminds me of my own graduate school experience. I'm a natural fox myself; I've always had a lot of interests (scientifically and otherwise). So a constant diet of my PhD project got to be a strain after a while. I was doing a total synthesis of a natural product, and for the last couple of years I was the only person on it. So it was me or nothing; if I didn't set up some reactions, no reactions got run.
And I don't mind admitting that I got thoroughly sick of my synthesis and my molecule by the time I was done with it. It really went against my nature to come in and beat on the same thing for that length of time, again and again. I kept starting unrelated things, all of which seemed much more interesting, and then having to kill them off because I knew that they were prolonging my time to the degree. Keep in mind that most of my time was, necessarily, spent making starting material and dragging it up the mountainside. I only spent comparatively brief intervals working up at the frontier of my synthesis, so (outside of any side projects) my time was divided between drudgery and fear.
My doubts about the utility of the whole effort didn't help, I'm sure. But since coming to industry, I've happily worked on many projects whose prospects I was none too sure of. At least in those cases, though, you know that it's being done in a good cause (Alzheimer's, cancer, etc.) - it's just that you may worry that your particular approach has a very low chance of working. In my total synthesis days, I wasn't too sanguine about the approach, and by the end, I wasn't so sure that it was in a good cause, either. Except the cause of getting a degree and getting the heck out of grad school, naturally. That one I could really put my back into. As I used to say, "The world does not need another synthesis of a macrolide antibiotic. But I do."
Category: Graduate School | Who Discovers and Why
Over at The Curious Wavefunction, there's an interesting post on Isaiah Berlin's famous hedgehog/fox distinction (which goes back a long way) and how it applies to chemistry. Wavefunction makes the case, which I hadn't thought through in such detail, that chemistry has for a long time been a field for foxes. That is, our famous names tend to be people who jump around from area to area as their interests take them, rather than people who spend their careers digging into one particular problem.
At the outset, one thing seems clear: chemistry is much more of a fox's game than a hedgehog's. This is in contrast to theoretical physics or mathematics which have sported many spectacular hedgehogs. It's not that deep thinking hedgehogs are not valuable in chemistry. It's just that diversity in chemistry is too important to be left to hedgehogs alone. In chemistry more than in physics or math, differences and details matter. Unlike mathematics, where a hedgehog like Andrew Wiles spends almost his entire lifetime wrestling with Fermat's Last Theorem, chemistry affords few opportunities for solving single, narrowly defined problems through one approach, technique or idea. Chemists intrinsically revel in exploring a diverse and sometimes treacherous hodgepodge of rigorous mathematical analysis, empirical fact-stitching, back of the envelope calculations and heuristic modeling. These are activities ideally suited to foxes' temperament. One can say something similar about biologists.
I think he's right, and I think that that's rarely been more true than now. Whitesides, Schreiber, Sharpless. . .start listing big names and you get a list of foxes. As it did for Wavefunction, the most recent hedgehog that springs to my mind in organic chemistry was H. C. Brown, although if Buchwald continues to work on metal-catalyzed amine couplings for another forty years he could come close. Am I missing anyone? Nominations welcome in the comments.
+ TrackBacks (0) | Category: Who Discovers and Why
April 12, 2011
A new paper in PLoS ONE goes over the existing studies that have tried to put a number on how many scientists falsify data (or have done so at least once) or commit other scientific offenses (ranging from the quite grave to the pretty questionable).
For what it's worth, the meta-analysis comes out with a figure of about 2% of scientists admitting that they've fabricated, falsified, or modified data. Of course, that group itself is a wide one, and deserves to be broken into various levels (which is just what Dante ended up doing, come to think of it, for similar reasons). To my mind, people who are modifying data want to make the numbers look better than they are, and people who are falsifying data want to make the numbers just flat-out say things that they don't say. And the far end of that process is fabrication, where you give up on tweaking and bending and processing, and just make the stuff up. As the PLoS paper says, you slide along from what could be explained as carelessness all the way to what can only be described as blatant fraud.
There are, of course, a lot of difficulties in getting good numbers on this sort of thing, and the whole purpose of this meta-analysis was to try to set a lower bound. There are limits to what people will admit, and limits in how objectively they see their own behavior:
The grey area between licit, questionable, and fraudulent practices is fertile ground for the “Mohammed Ali effect”, in which people perceive themselves as more honest than their peers. This effect was empirically proven in academic economists and in a large sample of biomedical researchers (in a survey assessing their adherence to Mertonian norms), and may help to explain the lower frequency with which misconduct is admitted in self-reports: researchers might be overindulgent with their behaviour and overzealous in judging their colleagues. In support of this, one study found that 24% of cases observed by respondents did not meet the US federal definition of research misconduct.
There's another interesting possibility raised:
Once methodological differences were controlled for, cross-study comparisons indicated that samples drawn exclusively from medical (including clinical and pharmacological) research reported misconduct more frequently than respondents in other fields or in mixed samples. To the author's knowledge, this is the first cross-disciplinary evidence of this kind, and it suggests that misconduct in clinical, pharmacological and medical research is more widespread than in other fields.
He goes on to speculate whether this is due to financial pressures, or different levels of self-awareness or self-reporting. And this brings up another reaction I had to the whole paper, for which I'll have to go back to its introduction:
The image of scientists as objective seekers of truth is periodically jeopardized by the discovery of a major scientific fraud. . .A popular view propagated by the media and by many scientists sees fraudsters as just a “few bad apples”. This pristine image of science is based on the theory that the scientific community is guided by norms including disinterestedness and organized scepticism, which are incompatible with misconduct. Increasing evidence, however, suggests that known frauds are just the “tip of the iceberg”, and that many cases are never discovered. The debate, therefore, has moved on to defining the forms, causes and frequency of scientific misconduct.
I wonder about some of that. Is the image of science really as pristine as all that, at this date? And does the media really help to propagate such a view? I think that the real world is quite a bit messier. I would guess that you'd have to go back to the 1950s (or perhaps before the Second World War) to find a solid majority of people thinking that scientists were pretty much all pristine truth-seekers, and perhaps not even then. And as for media depictions of scientists, those have been mixed for a long time now.
I think that you'll definitely find more objective truth-seeking in the physical sciences than you'll find in most other human endeavors, but science is done by humans with all the failings that humans come equipped with (and it's quite a list). One should always be open to some possibility of misconduct in any field and any situation; lying is one of the things that people do. That's not to condone it, of course - but being shocked by it doesn't seem to be too useful, either.
+ TrackBacks (0) | Category: The Dark Side
April 11, 2011
Now here's a piece that I'm looking for good reasons to dismiss. And I think its author, Jim Edwards, wouldn't mind some, too. You've probably heard that Valeant Pharmaceuticals is making a hostile offer for Cephalon, a company that's dealing with some pipeline/patent problems (and, not insignificantly, the recent death of their founder and CEO).
Valeant's CEO, very much alive, is making no secret of his business plan for Cephalon should he prevail: ditch R&D as quickly as possible:
“His approach isn’t one that most executives in the drug business take,” (analyst Timothy) Chiang said in telephone interview last week. “He’s even said in past presentations: ‘We’re not into high science R&D; we’re into making money.’ I think that’s why Valeant sort of trades in a league of its own.”
. . .Pearson’s strategy and viewpoint on research costs have been consistent. When he combined Valeant with drugmaker Biovail Corp. in September, he cut about 25 percent of the workforce, sliced research spending and established a performance-based pay model tied to Valeant’s market value.
“I recognize that many of you did not sign up for either this strategy or operating philosophy,” Pearson wrote in a letter to staff at the time. “Many of you may choose not to continue to work for the new Valeant.”
Valeant does, in fact, make plenty of money. But my first thought (and the first thought of many of you, no doubt) is that it's making money because other people are willing to do the R&D that they themselves are taking a pass on. In other words, there's room for a few Valeants in the industry, but you couldn't run the whole thing that way, because pretty soon there'd be nothing for those whip-cracking revenue-maximizing managers to sell. Would there?
But we don't have to go quite that far. Edwards, for his part, goes on to wonder (as many have) whether the drug industry should settle out into two groups: the people that do the R&D and the people that sell the drugs. This idea has been proposed as a matter of explicit government policy (a nonstarter), but short of that, it's been kicked around many times. Most of the time, this scheme involves smaller companies doing the research, with the big ones turning into the regulatory/sales engines, but maybe not:
If you agree that there ought to be a division of labor in the pharma business — that some companies should develop drugs and then sell those products to the companies that have the salesforces to market them — then this says some interesting things about recent corporate strategy moves among the largest companies. Pfizer (PFE) is downsizing its R&D operations and Johnson & Johnson (JNJ) is said to be on the prowl for a ~$10 billion acquisition.
Merck, on the other hand, is doubling down on its own research and stopped giving Wall Street guidance in hopes of lessening the scrutiny paid to its R&D expense base.
The heralds of this restructuring of the industry haven't quite called it this way, but instead of splitting off from each other, perhaps the big companies will divide into two camps (Merck vs. Pfizer), and the smaller ones, too (Valeant vs. your typical small pharma). Prophecy's not an exact science - Marx thought that Germany and England would be the first countries to go Communist, you know.
For my part, I think that there are game-theory reasons why a big company won't explicitly renounce R&D. As it is, a big company can signal that "Yes, we'd like to do a deal for your drug (or your whole company), but you know, there are other things for us to do with the money if this doesn't work out." But if you're only inlicensing, then no, there aren't so many other things for you to do with the money. Everyone else can look around the industry and see what's available for you to buy, and thus the price of your deals goes up. You have no hidden cards from your internal R&D to play (or to at least pretend like you're holding). This signaling, by the way, is directed to the current and potential shareholders as well: "Buy our stock, because you never know what our brilliant people are going to come up with next". That's a more interesting come-on line than "Buy our stock. You never know who we're going to buy next." Isn't it?
And that's a separate question from the even bigger one of whether there are enough compounds out there to inlicense in the first place. No, I think that big companies will hold onto their own R&D in one form or another. But we'll see who's right.
+ TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History
April 8, 2011
One morning back in 1989, a guy from Stanford visited the biotech company Cetus and signed a few forms. That action has gradually become the central issue in a nasty patent dispute that's dragged on for years. Roche (who bought Cetus in 1991) and Stanford have been fighting it out through the judicial system, and earlier this year they made their cases before the Supreme Court, which will probably deliver a decision next month. So how did a quick signature turn into all of that?
This article in Science has a good summary of the details (here's another). What seems to have happened is that Thomas Merigan at Stanford sent a postdoc, Mark Holodniy, over to Cetus to learn about their PCR technology. Holodniy signed an agreement to respect Cetus' intellectual property - the standard sort of thing, you'd think. But that's the problem. Ten years later, Stanford (building on work from the Merigan lab and its collaboration with Cetus) received patents on a method to quantify viral RNA in human serum, which turned into a useful assay for monitoring HIV. Roche began to sell kits to do just that in 1996, and starting in 2000, Stanford started pressing them to pay licensing fees to the university.
Roche didn't, so Stanford sued, and Roche claimed that the Stanford patents were invalid anyway. We'll get back to that question, but the rest of the court cases have turned on a different matter: what exactly did Holodniy sign away, and was only he bound by that agreement, or did it extend to the whole Merigan lab and to Stanford? A district court said that the Bayh-Dole Act (which, among other things, prevents university researchers from cutting patent deals independent of the university) won out, and that Holodniy's Cetus form, which said that he was assigning patent rights to Cetus, was therefore invalid. But the Court of Appeals for the Federal Circuit completely reversed that, saying that Holodniy's agreement (when he was hired) to assign patents to Stanford was just a promise for the future ("I agree to assign. . ."), whereas the Cetus agreement took force immediately ("I do hereby assign. . .") and took priority. And thus to the Supreme Court.
Academia (and the US Solicitor General) have lined up on Stanford's side, and industry on Roche's, as anyone could have foreseen. If Roche wins, say the former, then no university research group will want to work with industry. If Stanford wins, say the latter, then no corporation will want to work with academia. Here's a hard-core legal summary from the Cornell law school. Their conclusion:
. . .the Supreme Court will decide whether the Bayh-Dole Act precludes an inventor working on a federally funded project from assigning his ownership rights in the invention to a third party. Stanford argues that both the Act and public policy considerations require that research institutions get an exclusive opportunity to patent their employees’ creations. Stanford contends that, if research institutions did not receive this privilege, they would hesitate to pursue costly and time-consuming research projects. Roche, on the other hand, argues that the Bayh-Dole Act did not affect the longstanding rule allowing inventors to assign their ownership rights to third parties. Constitutional and equitable considerations, Roche asserts, caution against Stanford’s interpretation of the Act.
My guess is that Roche will probably win, and that academic/industry collaboration will continue anyway, but under even more strictly defined rules. MIT, for example, has already changed its patent assignment forms to the present tense, in a sign that they think that this argument has validity (even though the university has sided with Stanford in this case). One thing that's been lost in all the dust is whether this whole question had to come up. If Stanford's patents were to have been invalidated (another case in itself), then the whole Bayh-Dole argument would have been a moot point. None of the later legal wrangling has addressed this issue. As often happens in the courtroom and on the battlefield, the armies end up fighting for larger stakes (and in a different place) than anyone would have predicted at first.
+ TrackBacks (0) | Category: Academia (vs. Industry) | Patents and IP
April 7, 2011
Update: link fixed!
Anthony Nicholls over at OpenEye really unburdens himself here, in a post that I recommend to anyone in the business (or anyone who wants to see what some of our problems are). Some highlights:
I have come to believe (and I admit that this is only a theory) that as more and more of pharma’s budget was funneled into advertising and direct marketing to both the general public and to doctors themselves, the path to the top in pharma ceased to be via the lab bench and instead was by way of Madison Avenue. . .
. . .I want to end with one of my favorite management insanities- the push within big pharma to remake themselves in the image of biotechs—the reasoning being that biotechs “get things done” and are more productive. Leaving aside the fact that over its history, biotech as a whole has mostly lost money (with only two years of profit in the last twenty-five), I wonder if it occurs to upper management that the principal difference between big pharma and biotech is simply much less upper management. If they are truly serious about making pharma like biotech, then upper management should simply resign. I’m confident that one step would do wonders for innovation.
There's a lot of good stuff in there, on management fads, dealing with the scientific staff, bean-counting, and more. Regular readers of this blog (and its comments section) will find a lot of their opinions reflected, for sure. . .
+ TrackBacks (0) | Category: Business and Markets | Drug Industry History
While we're on the what's-going-on-inside-cancer-cells topic, there's another new Nature paper that makes interesting reading on the subject. This one also confirms some earlier work, but does a pretty thorough job of it, and sheds some new light as well on that breast cancer mutation work that I blogged about the other day.
This group has taken around 100 cells at a time from tumor samples, and applied single-nucleus sequencing techniques to them. After correcting for a number of factors (see the paper, of course, if you're really into this stuff), they can use this technique to get a good read on the genomic copy number of the cells, which is of particular interest for unstable things like tumor samples. (Comparisons of single-cell data with the averages from samples of millions of cells were also made; the correlations are quite good).
The cells were taken from a breast cancer sample ("triple negative" ductal carcinoma) and from an associated metastatic liver tumor from the same patient. (Given that situation, I'm guessing that these were post-mortem). Each sample was dissected into physical zones, and the cells were then flow-sorted and subjected to sequencing. This revealed a number of interesting patterns.
For one thing, the original tumor sample showed different cell populations across its diameter. There were standard diploid cells in the entire sample, but one population type (less than diploid) was found only at one end of the sample, fading out gradually toward the middle, with two other varieties (near-tetraploid) showing up at the other end. On closer inspection, almost all of those garden-variety diploids were normal cells, and most of those were white blood cells, immunocytes that had infiltrated the tumor.
Of the cancer cells themselves, half of them sorted out into three distinct clonal populations, and the metastatic tumor was found to be purely derived from one of these. Looking these over, it appears that these three groups emerged very early in the process, and the various mutations (and there were many) still traced back to these early branch points. One of them (arising much later in the process) was clearly more likely to break loose and resettle than the others, a pattern that has been seen in other studies. Trying the same thing with another set of tumors from a different patient, they found in this case that the tumor had emerged from a clonal expansion of a single aneuploid cell line, and the metastatic tumor was from one of the later resulting mutants (and had hardly evolved since).
But what about the rest of the tumor cells in these samples? Those turned out to be the pseudodiploid population that they'd seen in the initial sorting, and these were all over the place genetically. No family-tree relationship could be drawn between them (in contrast to the aneuploids), indicating that they hadn't been doing the clonal-expansion thing. They seem to be the result of some ongoing genetic instability in the tumor population, generating a steady stream of one-of-a-kind messed-up cells with missing chunks of chromosomes. You have to wonder if a lot of the 1700 mutations that the full-genome sequencing work picked up were from these cells, and whether the other, clonally similar lines showed less variation.
If that's true, it would probably be good news - perhaps all these mutations aren't as evenly spread across the worrisome cell types as you might fear, and especially among the ones that go metastatic. It's a pity that we can't yet do whole-genome sequencing on single nuclei - combining the population breakdown this copy-number technique gives with the hardcore sequence data would really tell the story, and tell us which cells we have to try to kill off first. And finding the root causes of all the genomic instability would be a big advance, too - I wrote about this back in 2002, and it's still relevant and still not well worked out.
+ TrackBacks (0) | Category: Cancer
April 6, 2011
Hmmm. I see that Christopher Westphal is leaving GlaxoSmithKline, specifically departing his job at the company's venture-capital arm, SR One. He'd been in that position just about a year. InVivoBlog has a good roundup of what this means, and there are several layers to it.
For one thing, Westphal came in as part of the Sirtris deal, which has been the subject of all sorts of comment here and in many other places. We'll say for now that the deal was (a) not cheap and (b) is still in the "wait and see" stage as far as paying off, or even being a good idea in the first place. And he was involved in that bizarre buy-some-resveratrol business last year, at least until GSK brought down the hammer. So there's a lot of interesting history from that angle.
Then, when you look at it from the perspective of GSK's attempts to fund small companies, there are other things to think about. Running their venture arm seems to have been a real hot seat, as that InVivoBlog post demonstrates. Not much seems to have come out of all the time, money, and effort so far - you'd have to think that if you'd mapped out the coming accomplishments of SR One when it was founded, the company wouldn't have been happy to see that future.
Westphal may well be leaving mainly to concentrate on his own venture fund. Or some other execs at GSK may be glad to see him go. Or GSK in general may be wondering what to do about SR One. . .or all three of these at once.
+ TrackBacks (0) | Category: Business and Markets
April 5, 2011
You may have detected, here and there, a certain amount of skepticism on this blog about the direct application of genomic information to complex human diseases. And several times I've beaten the drum for the position that there is no such disease as "cancer" - just a lot of conditions that all result in the phenotype of uncontrolled cellular growth.
Well, here's some pretty dramatic evidence in favor of both of those positions. A new study, one of those things that could only be done with modern sequencing techniques, has given us the hardest data yet on the genomic basis of cancerous cells. This massive effort completely sequenced the tumors from 50 different breast cancer patients, along with nearby healthy cells as controls for each case.
Over 1700 mutations were found - but only three of them showed up in as many as 10% of the patients. The great majority were unique to each patient, and they were all over the place: deletions, frame shifts, translocations, what have you. The lead author of the study told Nature News that the results were "complex and somewhat alarming", and I second that, only pausing to drop the "somewhat". I drop it because these patients were already more homogeneous than the normal run of breast cancer cases - they were all estrogen-receptor positive, picked for trials of an aromatase inhibitor.
Half the tumors were estrogen-sensitive, and half weren't, and one of the goals of the study was to see if any genetic signatures could be found that would distinguish these patients. There was an association with the MAP3K1 gene, but hardly a powerfully predictive one, since that one only showed up in 10% of the samples to start with. (Mind you, that still makes it one of the top three mutations).
The Nature piece contains some brave-face material about how this study has uncovered a whole list of new therapeutic targets, but sheesh. What are the odds that any of these will prove to be crucial, even for the low percentage of women who turn out to have them? No, instead of making me yearn for ever-more-personalized targeted therapies, this makes me think that early detection and powerful, walloping chemotherapy (and surgery) must be the way to go for now. I mean, this was still only fifty patients, and uncovered this much complexity: how tangled must the real world be?
We'll get a chance to start finding out - the same team is now moving ahead to expand this effort to 1,000 patients. These are also, I believe, from clinical trials, so we'll be able to correlate outcomes with exact genetic sequences. If there are any correlations that we can understand, that is. . .that's the next thing that I'm really looking forward to seeing. If the whole personalized-medicine idea is ever to work, this is just the sort of thing that's going to have to be done. But we shouldn't be surprised if the results, for some time to come, are that the whole era of personalized medicine is a lot further away than we might have thought.
+ TrackBacks (0) | Category: Cancer
Well, this post needs updating. In it I mentioned never running a Prins reaction again since the 1980s, nor any photochemistry, and today what do I find myself doing? Both of them, although not at the same time.
I am, fortunately, not running the Prins this way. But even bringing it up at all recalls to me a key part of my education. When I first joined my graduate school research group, I was put to making some tetrahydropyran systems. I was handed a synthesis, drawn up before my arrival, of how to make the first one, and like most first-year grad students, I gamely dug in and started to work on it.
I should have devoted a bit more thought to it. I won't go into the details, but it was a steppy route that relied, in the final ring-closure step, on getting the cyclic ether to form where one of the partners was a neopentyl center. The organic chemists in the audience will immediately be able to guess just how well that went.
So I beat on it and whacked at it, getting nowhere as I used up my starting material, until I was finally driven to the library. In the spring of 1984, that was a different exercise than it is now, involving the 5-year Chemical Abstracts indices and an awful lot of page flipping. (I haven't so much as touched a bound volume of CA in I don't know how many years now). If you were a nomenclature whiz, you could try looking up your compound, or something like it, in the name index, but a higher-percentage move was often to look up the empirical formula. That gave you a better shot, because (if it was there at all) you could see how CA named your system and work from there.
To my great surprise, the second set of collective indices I checked (the good ol' 9th) yielded a direct hit on an empirical formula, and the name looked like exactly what I had been trying to make. The reference was in Tetrahedron, which we most certainly had on the shelf, and I zipped over to see if there was any detail on how to make the stuff.
There was indeed. A one-stepper Prins cyclization gave just the ring system I'd been trying to make, and that was one step from the intermediate I needed. I just stared at the page, though. I honestly couldn't believe that this was real (as I mentioned, I was in about my second month of grad school lab work). Surely the synthesis I'd been given was the way to make this stuff? Surely the people responsible for it had checked the literature before drawing it up? (After all, it had only taken me a few minutes to find the stuff myself). Surely I couldn't just make the ring in one afternoon using two starting materials I could buy cheaply from Aldrich?
Well, surely I could. And that's just what I did, and got my project moving along until the next interesting difficulty came up a couple of months later. But I still recall standing there in the Duke chemistry library, looking at that journal article "with a wild surmise" that perhaps I should check things out for myself next time instead of just taking everyone else's word. It took a couple more lessons for me to really grasp that principle (Nullius in verba!), but it's helped me out a great deal over the years. I have the 27-year-old photocopy I made that afternoon in front of me now. It's a good reminder.
+ TrackBacks (0) | Category: Graduate School | Life in the Drug Labs | The Scientific Literature
April 4, 2011
Here's a question for all the organic chemists out there. A discussion with some colleagues the other day got me to thinking about the reactions that we all tend to underuse. The category I offered up was gaseous reagents. Outside of hydrogenation, I think that many of us sort of go "Ehh. . ." when we come across transformations that need lecture bottles, cylinders, regulators, and so on.
Add to that the unpleasant nature of many of the gases themselves, and it's easier to find something else to do. But there are a lot of good reactions and reagents in this category - metal-catalyzed CO insertions, reactions with ammonia, acetylene, sulfur dioxide, etc. There's just a bit of a higher activation barrier to getting around to running them.
I'd say that photochemistry and electrochemistry are in this "rather do something else" category as well. Other nominations welcome!
+ TrackBacks (0) | Category: Life in the Drug Labs
The Lucentis/Avastin story is going to get more complicated as the year goes on. Next month the results of a head-to-head study of the two drugs (one far less costly than the other) in cases of macular degeneration will be revealed, and it's widely thought that they'll come up as basically equivalent in efficacy.
But as this Wall Street Journal article makes clear, they may not be equal in safety. The same meeting that will see the trial results presented will also feature an analysis of Medicare claims for both drugs, which looks like it'll show that Lucentis has a better safety profile. This is exactly what Roche/Genentech would like to hear, naturally. We'll have to wait until May to see which message wins out. . .
+ TrackBacks (0) | Category: Clinical Trials | Drug Prices
April 1, 2011
I'll freely admit to being very interested in research on aging and lifespan. It's a great subject from a scientific (and philosophical) point of view, but perhaps the prospect of turning 50 years old next year has something to do with it, too (not that that age seems anywhere near believable from my end).
Model organisms such as nematodes and fruit flies have already helped identify a number of highly conserved pathways that affect lifespan, many of them having to do with nutrient sensing and various insulin-related pathways. But there are other possibilities. One hallmark of aging at the cellular level is an accumulation of protein defects, chiefly misfolded and chemically modified proteins that apparently are difficult to clear out.
A new paper in Nature takes an alarmingly direct route to investigating potential therapies for this pathway. The researchers looked at small molecules that are known to bind tightly to insoluble protein aggregates and fibrils like amyloid. And what sort of compounds are we sure bind tightly to such things? Why, the sorts of dyes used to selectively stain them for histopathology slides, what else? (See, I told you that this was a rather forceful approach).
But it certainly seems to have paid off. As it turns out, treating nematodes (roundworms, C. elegans) with the dye Thioflavin T (also known as ThT or Basic Yellow 1) extends their lives quite significantly - up around a 60% increase in both median and maximal lifespan. Several other related benzazole compounds were also tried, which produced lifespan extension of up to 40%, and at much lower concentrations.
There are some nematode strains with known defects in protein handling - they produce extra amyloid or polyglutamine proteins, which eventually paralyze them and kill them off. Treating these with the dye had a significant lowering effect on the number of paralyzed nematodes, and the protein aggregates in their muscle tissue were much lower as well. Similar effects were seen in several other mutant strains that had been used as markers of protein homeostasis.
A number of RNAi and immunological experiments (this is a very data-rich paper, by the way) indicated that ThT's effects depend on several known protein regulators and chaperones. In particular, a strain with a defective heat-shock factor 1 (HSF-1) gene showed no effects with ThT treatment at all, and neither did nematodes with an RNA knockdown of SKN-1 (also known to be implicated in stress responses and longevity). Taken together, these folks really do seem to have found a way to enhance the protein homeostasis functions of living cells, and this seems to have a very beneficial effect on their aging process.
Very interesting work, and very thoroughly followed up on, as it should be. I would be absolutely certain that similar experiments are underway in other species as we speak - I'd go straight to mice, personally, and not neglect some of the mutant mouse strains with protein-handling defects of their own, and compare them to mice that overexpress or underexpress HSF-1 itself. (I can't find any references to SKN-1 mutant mice). Those would be excellent experiments, but I'll bet that I'm not the only one who thinks so. In fact, I'll clean my lab bench off with my tongue if the people who did these studies haven't already thought of them, too.
Oh, and just one more thing: as my wife pointed out to me when I told her about this paper, the FDA was just making headlines the other day by recommending that more study be given to any possible links between food dyes and hyperactivity (though stopping short of recommending any warning at this time, due to lack of convincing evidence). On the basis of this latest work, though, I'm starting to wonder if we're not putting enough dyes in our food. . .
+ TrackBacks (0) | Category: Aging and Lifespan