Corante

About this Author

Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship during his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases. To contact Derek, email him directly (derekb.lowe@gmail.com) or find him on Twitter: Dereklowe

Chemistry and Drug Data:
Drugbank
Emolecules
ChemSpider
Chempedia Lab
Synthetic Pages
Organic Chemistry Portal
PubChem
Not Voodoo
DailyMed
Druglib
Clinicaltrials.gov

Chemistry and Pharma Blogs:
Org Prep Daily
The Haystack
Kilomentor
A New Merck, Reviewed
Liberal Arts Chemistry
Electron Pusher
All Things Metathesis
C&E News Blogs
Chemiotics II
Chemical Space
Noel O'Blog
In Vivo Blog
Terra Sigillata
BBSRC/Douglas Kell
ChemBark
Realizations in Biostatistics
Chemjobber
Pharmalot
ChemSpider Blog
Pharmagossip
Med-Chemist
Organic Chem - Education & Industry
Pharma Strategy Blog
No Name No Slogan
Practical Fragments
SimBioSys
The Curious Wavefunction
Natural Product Man
Fragment Literature
Chemistry World Blog
Synthetic Nature
Chemistry Blog
Synthesizing Ideas
Business|Bytes|Genes|Molecules
Eye on FDA
Chemical Forums
Depth-First
Symyx Blog
Sceptical Chymist
Lamentations on Chemistry
Computational Organic Chemistry
Mining Drugs
Henry Rzepa


Science Blogs and News:
Bad Science
The Loom
Uncertain Principles
Fierce Biotech
Blogs for Industry
Omics! Omics!
Young Female Scientist
Notional Slurry
Nobel Intent
SciTech Daily
Science Blog
FuturePundit
Aetiology
Gene Expression (I)
Gene Expression (II)
Sciencebase
Pharyngula
Adventures in Ethics and Science
Transterrestrial Musings
Slashdot Science
Cosmic Variance
Biology News Net


Medical Blogs
DB's Medical Rants
Science-Based Medicine
GruntDoc
Respectful Insolence
Diabetes Mine


Economics and Business
Marginal Revolution
The Volokh Conspiracy
Knowledge Problem


Politics / Current Events
Virginia Postrel
Instapundit
Belmont Club
Mickey Kaus


Belles Lettres
Uncouth Reflections
Arts and Letters Daily

In the Pipeline

Monthly Archives

October 31, 2008

Fructose In The Brain?

Posted by Derek

Let’s talk sugar, and how you know if you’ve eaten enough of it. Just in time for Halloween! This is an area I’ve done drug discovery in, and it’s a tricky business. But some of the signals are being worked out.

Blood glucose, as the usual circulating energy source in the body, is a good measure of whether you’ve eaten recently. If you skip a meal (or two), your body will start mobilizing fatty acids from your stored supplies, circulating those as fuel instead. But there’s one organ that runs almost entirely on sugar, no matter what the conditions: the brain. Even if you’re fasting, your liver will make sugar from scratch for your brain to use.

And as you’d expect, brain glucose levels are one mechanism the body uses to decide whether to keep eating or not. A cascade of enzyme signals has been worked out over the years, and the current consensus seems to be that high glucose in the brain inactivates AMP kinase (AMPK). (That’s a key enzyme for monitoring the energy balance in the brain – it senses the ratio between ATP, the energy currency inside every cell, and its product and precursor, AMP). Losing that AMPK activity then removes the brakes on another enzyme, acetyl-CoA carboxylase (ACC). (That one’s a key regulator of fatty acid synthesis – all this stuff is hooked together wonderfully). ACC produces malonyl-CoA, and that seems to be a signal to the hypothalamus that you’re full (several signaling proteins are released at that point to spread the news).

You can observe this sort of thing in lab rats – if you infuse extra glucose into their brains, they stop eating, even under conditions when they otherwise would keep going. A few years ago, an odd result was found when this experiment was tried with fructose: instead of lowering food intake, infusing fructose into the central nervous system made the animals actually eat more. That’s not what you’d expect, since in the end, fructose ends up metabolized to the same thing as glucose does (pyruvate), and used to make ATP. So why the difference in feeding signals?

A paper in PNAS (open access PDF) from a team at Johns Hopkins and Ibaraki University in Japan now has a possible explanation. Glucose metabolism is very tightly regulated, as you’d expect for the main fuel source of virtually every living cell. But fructose is a different matter. It bypasses the rate-limiting step of the glucose pathway, and is metabolized much more quickly than glucose is. It appears that this fast (and comparatively unregulated) process actually uses up ATP in the hypothalamus – you’re basically revving up the enzyme machinery early in the pathway (ketohexokinase in particular) so much that you’re burning off the local ATP supply to run it.

Glucose, on the other hand, causes ATP levels in the brain to rise – which turns down AMPK, which turns up ACC, which allows malonyl-CoA to rise, and turns off appetite. But when ATP levels fall, AMPK is getting the message that energy supplies are low: eat, eat! Both the glucose and fructose effects on brain ATP can be seen at the ten-minute mark and are quite pronounced at twenty minutes. The paper went on to look at the activities of AMPK and ACC, the resulting levels of malonyl CoA, and everything was reversed for fructose (as opposed to glucose) right down the line. Even expression of the signaling peptides at the end of the process looks different.
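
For the code-minded, here’s that cascade boiled down to a toy sketch. All the names, the threshold, and the whole on/off simplification are mine, purely for illustration – the real biology is continuous and far messier than this:

    # Toy model of the hypothalamic energy-sensing cascade described above.
    # An illustrative simplification, not anything from the paper itself.

    def appetite_signal(brain_atp_change):
        """Map a rise or fall in hypothalamic ATP onto a feeding signal."""
        ampk_active = brain_atp_change < 0   # falling ATP (rising AMP/ATP) switches AMPK on
        acc_active = not ampk_active         # active AMPK inhibits ACC; inactive AMPK releases it
        malonyl_coa_high = acc_active        # active ACC lets malonyl-CoA accumulate
        return "stop eating" if malonyl_coa_high else "keep eating"

    print(appetite_signal(+1))  # glucose infusion: brain ATP rises -> "stop eating"
    print(appetite_signal(-1))  # fructose infusion: brain ATP falls -> "keep eating"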

The implications for human metabolism are worth taking seriously: many people have long suspected that fructose could in fact be doing us some harm. (This New York Times piece from 2006 is a good look at the field; it's important to remember that this is very much an open question). But if this work holds up, metabolic signaling really could be altered when fructose, rather than glucose, is used as an energy source. The large amount of high-fructose corn syrup produced and used in the US and other industrialized countries makes this an issue with very large political, economic, and public health implications.

This paper tells a compelling story – so, what are its weak points? Well, for one thing, you’d want to make sure that those fructose-metabolizing enzymes are indeed present in the key cells in the hypothalamus. An even more important point is that the fructose has to get into the brain. These studies were dropping it in directly through the skull, but that’s not how most people drink sodas. For this whole appetite-signaling hypothesis to work in the real world, fructose taken in orally would have to find its way to the hypothalamus. There’s some evidence that it does, but that fructose would first have to make it past the liver.

On the other hand, it could be that this ATP-lowering effect could also be taking place in liver cells, and causing some sort of metabolic disruption there. AMPK and ACC are tremendously important enzymes, with a wide range of effects on metabolism, so there's a lot of room for things to happen. I should note, though, that activation of AMPK out in the peripheral tissues is thought to be beneficial for diabetics and others - this may be one route by which Glucophage (metformin) works. (Now some people are saying that there may be more than one ACC isoform out there, bypassing the AMPK signaling entirely, so this clearly is a tangled question).

I’m sure that a great deal of effort is now going into working out these things, so stay tuned. It's going to take a while to make sure, but if things continue along this path, there could be reasons for a large change in the industrialized human diet. There are a lot of downstream issues - how much fructose people actually consume, for one, and the problem of portion size and total caloric intake, no matter what form it's in, for another. So I'm not prepared to offer odds on a big change, but the implications are large enough to warrant a thorough check.

Update: so far, no one has been able to demonstrate endocrine or satiety differences in humans consuming high-fructose corn syrup vs. the equivalent amount of sucrose. See here, here, and here.

Comments (22) + TrackBacks (0) | Category: Biological News | Diabetes and Obesity | The Central Nervous System

October 30, 2008

Lilly And Imclone: Not Expensive Enough!

Posted by Derek

I honestly didn’t think I’d ever get the chance to write about Imclone again. But never say never! It turns out that a pension fund, the State-Boston Retirement System of Massachusetts, is suing to block the Lilly acquisition – because, hold on to your hot beverages, they think that $70/share is just too cheap an offer. Not in the best interest of shareholders, the deal is. Clearly needs to be bumped up.

We have, in other words, found the only chordates who think that Imclone is worth even more than Carl Icahn thinks it is. I’ll be very interested to hear their reasoning. Surely this is just one of those “worth-a-try” ploys – the investment fund figures that they’ll spend a little on legal fees, and who knows, they might get a bit more money out of it. A lot of mergers attract these sorts of lawsuits, and if you’ve ever wondered how securities lawyers manage to enrich themselves, look no further. But you do have to advance serious arguments if you expect to win these things. And just what are those going to be?

And the problem is, after the acquisition goes through, it’ll be tricky to figure out if Lilly got their money’s worth or not. The Imclone stuff will disappear into Lilly’s accounting system, and odds are we’ll never see it broken out again. There are quite a few deals that never pay for themselves, but you sure don’t see the numbers laid out to show it. The main measures will be Erbitux sales, and whether Bristol Myers-Squibb gets a share of the follow-up antibody – beyond that, Lilly can always argue that they added value to the earlier-stage Imclone projects, making the accounting impossible to clear up.

At least in this case it’s something being added to Lilly’s pipeline – they’re not stopping work on other things, so you don’t have to factor in opportunity cost. That’s a real consideration when a company says “OK, we’re not going to do our own XYZ work – we’ll do a deal for it”. If that deal doesn’t work out, you always wonder how things would have been if they’d just kept the money at home. I’ve seen a couple of major examples of this in my own experience – one of them, at least, worked out OK on paper, since various stock transactions got a lot of the money back. But the time, the time spent working on things that didn’t ever deliver? We never got that back. Once the deal wound down, we went back to working on (to a good approximation) what we would have been doing if it had never happened. No, if the scientific (and clinical) output of the deal is underwhelming, those involved will always wonder What Might Have Been.

Comments (4) + TrackBacks (0) | Category: Business and Markets

October 29, 2008

Cutbacks - But Not As Bad This Time

Posted by Derek

So Wyeth is cutting back on its therapeutic areas as well. According to that Bloomberg story, they're going to focus on oncology, inflammation, neuroscience, vaccines, metabolic diseases and musculoskeletal disorders. That's still a pretty wide swath of territory, but note what it leaves out: cardiovascular and infectious diseases, just to pick two big areas. And there are some details still to be worked out - for example, neuroscience clearly encompasses Alzheimer's, where the company has a big effort. But are they going to try dementia, too? Antidepressants? Pain? Multiple sclerosis? These are big fields.

At least this one isn't coming along with its own whopping package of R&D cuts. The company says that some of its research staff will lose their jobs, as their specialty areas disappear, but overall, they claim that they're not cutting staff, and that R&D spending will remain constant. I certainly hope that's true; the last thing we need is another big layoff around this industry. Anyone inside Wyeth care to comment?

What these rounds of research concentration might do, in the longer term, is open up a number of areas to smaller companies. There are a number of less-heavily-populated therapeutic fields now: does that create opportunities to be filled? Of course, the reasons some of these are being abandoned still obtain - lack of good targets, lower profit potential, and so on. But smaller outfits may well be able to colonize these environments, and I hope that they do.

Comments (14) + TrackBacks (0) | Category: Business and Markets

October 28, 2008

Out the Door and Down the Stairs

Posted by Derek

I’ve noticed over the years that my patience in seminars and talks has been eroding. This started in graduate school – I certainly sat through my share of lousy talks back then, but I was starting to skip out on the occasional one, after a certain level of grimness was reached.

For example, I remember walking down the hall with a new post-doc when the building’s speakers came to life. “May I have your attention, please. . .” We stopped to listen. “There will be a seminar in the main auditorium in ten minutes, entitled ‘Raman Spectroscopy of Synthetic Asphalt Roofing Materials’.” (I swear that this is the real title, or something very close; it was appalling). The new guy asked, in a slightly worried tone, “Do you guys in the group usually go to these things?”

And at that point, one of my fellow group members came lurching out into the hallway, pantomiming elaborate choking gestures as he pointed desperately at the speaker up on the wall, slumping against the wall as the horror of the seminar’s title overcame him completely. We watched him slide to the floor, still gesturing at the intercom, and I said calmly: “No, we skip a few of them now and then”.

Well, over the years I’ve continued to skip a few of them now and then, and my threshold has been steadily creeping up. I realize that many of the topics that keep me glued to my seat are, by any objective standard, rather dry. Give a detailed talk about enantioselective hydrogenation, the thermodynamics of multivalent binding, or even the latest thinking about the patent office’s requirements for obviousness rejections, and I’ll be right there, practically munching popcorn. To me, those things are interesting. But plenty of things aren’t.

It’s to the point now where there are single phrases that give me that head-for-the-door feeling. After one hits, it’s a major effort for me to stay in my seat. So, speakers, if you see me out in the audience and think that the ambience would be improved without me, it isn’t hard. Just spend a few minutes going on about “cross-functional goal setting” or the wonders of ISO nine-thousand-whatever. I’ll spray gravel on my way out. One day I’ll probably end up dangling from a bunch of knotted tablecloths, having rappelled down the side of my building from an upper-floor conference room. “Vision statement”, I’ll gasp to the passers-by as I drop to the sidewalk in relief. “They invited me to work on a new vision statement. . .”

Comments (11) + TrackBacks (0) | Category: Graduate School | Life in the Drug Labs

October 27, 2008

Publish And Be Damned, Most Likely

Posted by Derek

As I’ve mentioned here before, publication in the scientific journals is not necessarily a big priority in the drug industry. Patent filings, on the other hand, are very serious business indeed. If you’re an organic chemist looking to see if some particular compounds have been made, you ignore the patent literature at your peril. That’s where most of our med-chem procedures and analogs end up, and there’s a lot of good chemistry in there that never sees the light of day otherwise. Most of it’s even reproducible! (And that’s a crack that you can make about a number of open-literature journals, too, for that matter).

But we do write up papers from time to time. The problem is, it’s often only after the project has been finished for a while. Sometimes it takes that long to decide that the work is safe to publish, and sometimes there are just too many other (more important) things going on. But the result is the same, and I’ve experienced it myself: you go back to the old project data, ready to assemble it into a manuscript. . .and large sections of it appear to make no sense at all.

It’s disconcerting. By “no sense”, I mean that while the chemistry is fine, and the compounds are what they’re supposed to be, it’s nonetheless hard to see why some of them got made in the first place. Whose idea was it to react that amine with every single isocyanate on the whole shelf? None of the resulting ureas were all that good, so why did we decide we needed seventy-nine of them? And there are always gaps in the story that weren’t so apparent while things were going full speed: how come we never made any more of those N-alkyl compounds? And didn’t we resolve that series of racemates at some point? Somebody was supposed to do that.

All this makes it hard to turn many med-chem projects into coherent stories, and a coherent story is what you'd like for a journal publication. Ideally, you want a narrative, something along the lines of: ”We started with this screening hit – promising, but lacking so many key things. By careful, thorough experimentation, we solved those problems one after the other. Moving from strength to strength, and hardly wandering down any blind alleys at all, the analogs became more potent, more selective, and their physical properties and PK fell right into line. In the end, we prepared the wonderful clinical candidate shown in the last table of data on the last page. Not too obvious, is it? Bet you wouldn’t have gotten there yourself. But that’s how good we are.”

Right. The problem is, no projects ever work like that. Or if they do, I've somehow missed seeing them over the last nineteen years. A more realistic story would go something like: “We started out with this screening hit, and decided that we’d change the right-hand side of it – well, the left-hand side, for those guys down the hall who always drew the thing upside down and drove the rest of us crazy. That’s the part of the molecule that had the easiest chemistry, naturally. And naturally, everything we did to it over there made things worse. So the weeks went by, with management tapping their feet, and finally some of the guys said the heck with it and started changing the back end of the molecule. You thought those other compounds were less potent? You should see these! But one of the changes actually worked, for some reason. Then when we went back and started messing with the easy side of the molecule, two things happened: for one, the chemistry wasn’t so easy any more. But now those changes actually made things better. So we cranked out a whole pile of these things, hoping for the best, and finally got down to two compounds: one with great potency and selectivity, but iffy blood levels, and one with great PK, but not so great on the potency. Never could bridge the gap. We put ‘em both into two-week tox, and they both flunked out for the same completely unexpected reason. They're not gonna be drugs, we've filed the patents already. . .so, here they are!”

Well, you can’t quite say that, not even in Bioorganic and Medicinal Chemistry Letters. So you put things into the most coherent shape you can, and trust your fellow medicinal chemists – those of them who might actually read your paper – to understand. They generally do.

Comments (13) + TrackBacks (0) | Category: The Scientific Literature

October 24, 2008

BlackLight Power Responds

Posted by Derek

After my post the other day, I’ve heard from some folks at Blacklight Power, including their founder, Randell Mills. He says that I have a number of details wrong about their system, and wrote with more information. I’ll quote from Mills:

”We do not add water to R-Ni. Any water present after drying is in the form of Bayerite or Gibbsite (Al(OH)3) which is quantified by XRD and TPD. Regarding the Rowan University team validation, the maximum theoretical heat from the measured content was 1% of the observed energy as stated with the analytical results given in the Rowan report which is on-line at our website.”

He also takes exception – as well he might – to my line about the correlation of the company’s activities to their fund-raising needs, stating that Blacklight currently has no need to raise any money at all. And as for the NMR figure that I could make no sense of, that appears to have been mislabeled. The one I was looking at, Mills says, is indeed a solution NMR, and was actually Figure 45 in the document. Figure 58, he says, has now been fixed, although I have to say that it still looks like a duplicate of Figure 45 this morning at this link. Update: here's the correct version.

But as best I understand it now, the fundamental claim of the Blacklight work is that formation of their lower-energy states of hydrogen is extremely exothermic. Alkali metal hydrides, they say, are particularly good catalysts for this, giving you hydrinos and sodium metal (see equations 32 through 34 in their PDF). So the Raney nickel in these experiments is being used as a source of atomic hydrogen, and forming small amounts of sodium hydride on its surface gives you a system to see all this in action. Figure 17 would seem to be one of these, and Figure 21 is the same thing on a kilo scale.

I’ll not comment on these just yet, but will continue to see if I can make sense of what’s going on. I’ll invite readers to do the same if they wish, and to post queries about the stuff in the comments here (or to e-mail them to me). We’ll come back for another round as the process goes on.

Mills has been good enough to offer to help me out with any aspects of the data that they’ve published, and has invited me to get in contact with the company should I be in the area, which is a good sign, and much appreciated. They’re also supposed to have a video of the reaction up shortly, and we’ll see what we can learn from that as well. Against all this, I have to put the fact that I still find the physics behind the company quite odd and improbable. And one has to remember that the track record of odd, improbable physics breakthroughs that promise huge supplies of energy is. . .not good. And that’s putting it very mildly indeed.

But all it takes is one. And all Blacklight has to do to quiet the skeptics (many of whom are much more vitriolic than I am) is to throw that big switch at some point and have the kilowatts (or megawatts) come streaming out. That’ll do it, for sure, and the company assures everyone that this is their goal. I wish them luck with it, because a huge and unexpected new source of energy would be a good thing indeed. I’m actually glad to live in a country where ideas this wild can raise tens of millions of dollars, but (for the time being) I’m also glad that none of that money is mine.

Update: I'm already getting queries about how I can come down on the likes of Kevin Trudeau or Matthias Rath but not give Blacklight the same treatment. One reason is that Blacklight doesn't seem to be trying to extract money from the general public, which is, of course, Kevin Trudeau's whole reason for living. Another related reason is that Rath, Trudeau and their ilk are preying, in many cases, on people who are already ill and urging them to do things which will actually make them worse. Blacklight, as far as I can tell, is not urging people to chop down their power lines and send off for Home Hydrino Kits.

I find Blacklight's physics weird and unconvincing, too. But proposing weirdo physics theories is no crime.

Comments (156) + TrackBacks (0) | Category: Current Events

October 23, 2008

Merck Cuts Back (Again)

Posted by Derek

As everyone knows, Merck announced some big job cuts yesterday – 7,200 positions, around 10% of the work force. It’s worth looking at the details of these, because they differ a bit from what’s been going on at some other companies.

For one thing, the company doesn’t seem to be exiting any one therapeutic area, as Pfizer has, or rearranging their whole R&D structure the way GSK is doing. Merck just seems to be thinning things out across the whole organization. And that includes management, since they’ve announced that they’re eliminating 25% of their middle and senior manager positions. (I should note, in response to some of the more nativist comments that show up on posts like this, that Merck does not appear to be replacing these people with executives from Shanghai and Bangalore).

Overall, 60 per cent of the layoffs are taking place outside the US. That includes the complete closure of research sites in Japan (the Merck Banyu institute in Tsukuba) and Italy, as well as one in Seattle. (I have to say, I didn’t even know that Merck had research in Seattle). So it’s going to be harder to fit this one into the “traitorous execs in expensive suits send jobs to China” template.

That doesn’t mean that Merck isn’t outsourcing research work, though. The company’s press release says that they will “make greater use of outside technology resources” and “expand access to worldwide external science”. You can always make the case that job growth that might otherwise have taken place here will not, and in fact, I think that’s true. And it’s unfortunate – but it’s also true that doing the outsourced work here would be more expensive (otherwise why outsource?), so that job growth would have come at a higher cost to the companies involved.

So that’s where the argument really resides. If you believe that drug company profits are coupled to eventual productivity, then outsourcing makes sense, because it decreases costs. Of course, you then have to cut that estimate back some, because outsourcing (in a great number of cases) is not as effective as doing the work in-house. Does that cancel out the cost savings, or not? I think if you choose your outsourced work judiciously, the savings are real, even after you take efficiency into account. Handled poorly, of course, you could outsource your way into the dumpster. It’s a tool, and tools can be used wisely or poorly.
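
If you want to see where that argument pivots, the break-even arithmetic is trivial (all numbers invented, of course):

    # Toy break-even calculation for outsourcing: the cost saving has to
    # survive the efficiency discount. Every figure here is made up.

    in_house_cost = 100.0    # cost of one unit of work done internally
    outsourced_cost = 60.0   # sticker price for the same work outside
    efficiency = 0.75        # fraction of in-house effectiveness actually delivered

    effective_cost = outsourced_cost / efficiency
    print(effective_cost)                    # 80.0
    print(effective_cost < in_house_cost)    # True: in this example, the savings survive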

Then we get into the second-order arguments – the ones that go beyond money and effectiveness. I realize that there are many people who, although they may argue that outsourcing research is not all it’s cracked up to be, would still be against it even after stipulating its efficacy. For these people, it’s wrong even if it does work. I will not be able to convince anyone in this camp, just as I don’t see them convincing me. If we’re arguing about numbers, there can possibly be an end to the discussion at some point – but if we’re arguing about morals, there can’t. I’m willing to make my own moral arguments, to go along with my utilitarian ones, but the audience for whom those appeals would be crucial is the one least likely to be convinced by them.

Comments (20) + TrackBacks (0) | Category: Business and Markets

October 22, 2008

Blacklight Power: What on Earth?

Posted by Derek

Today, thanks to a story in the New York Times, we take up the unusual case of Blacklight Power. You may have heard of them before - I had, and I didn't realize that they were still around. Their founder, Randell Mills, has been telling people for years now that there is another energetic state of hydrogen, which he calls the “hydrino”, and that transitions to and from this state can be used to generate power.

My competence in physics isn’t sufficient to wade through Blacklight’s thicket of equations – but what competence I have in the subject strongly suggests that the company is very likely delusional (or, less charitably, hoping to delude others). A “state below the ground state” for hydrogen atoms, based on fractional Rydberg coefficients, seems. . . highly unlikely, to put it mildly. This is a perfect example of extraordinary claims that call for extraordinary evidence.
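
For reference, the textbook Rydberg expression for hydrogen’s energy levels is E_n = -13.6 eV / n^2, with n = 1, 2, 3, and so on. As far as I can follow the claim, hydrino states amount to allowing fractional values like n = 1/2 or 1/3, which would sit below the n = 1 ground state – precisely the region where standard quantum mechanics says there is nothing at all.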

And that’s where the Times article comes in. According to it, the company has sent samples of Raney nickel, apparently enriched in their putative hydrinos, to Rowan University down the road from them in New Jersey. When reacted with water, calorimetry of this system appears to show a release of heat “far beyond anything anticipated”. (It should be noted that this is a burst of heat when the water is added, as you’d expect, not some sort of sustained reaction. Its application to electric power generation is unclear). Update: Blacklight has responded, pointing out that I have several details of this experiment wrong - see this later post.

I know, I know – we’ve been down this road before, and more than once. Breeding even more skepticism is Blacklight’s history (link thanks to Glenn Reynolds at Instapundit). The company has been around since at least the early 1990s, and appears to have been promising various breakthroughs Real Soon Now the whole time. The timing of these announcements would seem to correlate more closely with the company’s financial needs than with their scientific accomplishments. Update: Blacklight disputes this statement, too, saying that they're not raising money. This is not a totally unfamiliar business model in the drug industry, to be sure, but neither are most drug companies proposing revolutions at the level of the hydrogen atom. No, Occam’s Razor doesn’t leave much stubble behind when you run it over Blacklight Power.

But when people start talking Raney nickel, they’re heading into my territory, and the territory of many of this site’s readers. The Times names associate professor Peter Jansson at Rowan as the faculty member who’s conducting the tests, and I’ve written him this morning, as one scientist to another, to ask for more details and comment, if possible. We’ll see what can be learned.

Blacklight, for their part, have this PDF available. This part would appear to be what’s being tested at Rowan:

”To achieve high power, R-Ni having a surface area of about 100 m2/g was surface-coated with NaOH and reacted with Na metal to form NaH. Using water-flow, batch calorimetry, the measured power from 15 g of R-Ni was about 0.5 kW with an energy balance of delta-H = -36 kJ compared to delta-H of roughly 0 kJ from the R-Ni starting material, R-NiAl alloy, when reacted with Na metal. The observed energy balance of the NaH reaction was -1.6 x 10^4 kJ/mole H2, over 66 times the -241.8 kJ/mole H2 enthalpy of combustion.”
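
The internal arithmetic, at least, is easy to check – this is just my own back-of-the-envelope, using only the numbers in that excerpt:

    # Sanity check on the quoted energy claims; all inputs come from the excerpt.

    observed = 1.6e4     # claimed energy balance, kJ per mole of H2
    combustion = 241.8   # standard enthalpy of H2 combustion, kJ per mole
    print(observed / combustion)   # ~66.2 -- matches the "over 66 times" claim

    energy = 36.0   # kJ released from the 15 g batch of R-Ni
    power = 0.5     # kW of measured output
    print(energy / power)          # ~72 -- so that burst lasts about 72 seconds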

I'll wait for more details before commenting on this, but it's clearly rather odd. Also in the rather-odd category are some of the figures in the Blacklight PDF - take a look at Figure 58, for example, which is labeled "MAS NMR spectra relative to external TMS Of NaCl, KCl, and CsCl showing the expected trend of increasing intensity of H2 (1/4) at 1.1 ppm relative to the H2 at 4.3 ppm down the column of the Group I elements."

Well, fine - but hold on a minute. MAS is "magic angle spinning", which is a solid-state NMR technique - and that NMR spectrum is clearly taken with a lot of DMF around. The dimethylformamide peaks are labeled as such, and it looks like a solution spectrum, not a solid-state one. Second, where's the trend? I see no series presented, just a single spectrum of something, with no labels to suggest various alkali metals. What's more, although I can't find a value for the NMR chemical shift of hydrogen gas in DMF, it's known to be 4.5 in deuterochloroform, so their 4.3 ppm is reasonable. But there's no peak at 4.3 to compare that big 1.1 ppm peak to - what am I looking at here? Update: Blacklight has informed me that this figure was mislabeled, and that they're correcting the error.

We shall see - maybe. I'll report back if I hear from the group at Rowan. For now, I remain skeptical. I would truly enjoy the discovery of a new energy source, but the history of this field does not inspire confidence.

Comments (45) + TrackBacks (0) | Category: Current Events

October 21, 2008

Things I Won't Work With: Triazadienyl Fluoride

Posted by Derek

Now this is a fine substance. Also known in the older literature as fluorine azide, you make it by combining two other things that have already made my “Things I Won’t Work With” list. Just allow fluorine (ay!) to react with neat hydrazoic acid (yikes), and behold!

Well, what you’re most likely to behold is a fuming crater, unless you’re quite careful indeed. Both of those starting materials deserve serious respect, since they're able to remove you from this plane of existence with alacrity, and their reaction product is nothing to putz around with, either. The first person to prepare the compound (John F. Haller back in 1942) survived the experience, and made it (rightfully) the centerpiece of his PhD dissertation. But relatively few buckaroos had the fortitude to follow his trail over the years, and it’s not hard to understand why. Haller himself wrote on the subject in 1966 from an industrial position at Olin Mathieson, and got right to the point:

”(Fluorine azide) is described as a greenish-yellow gas at room temperature, liquefying at −82°C when diluted with nitrogen and freezing to a yellow solid at −143°C. Evaporation of this solid generally results in violent explosion.”

Yes, it does, and that does tend to slow down the march of science a bit. Not until 1987 was an improved procedure published, from Helge Willner and his group in Hannover. (We'll see him again - most of his publication list falls into the "Things I Won't Work With" category, and I really have to salute the guy). Basically, it was the same reaction, but done slowly and Teutonically. You start off by making absolutely pure anhydrous hydrogen azide, which is a proposal that you don't hear very often around the lab, and is the sort of thing that leads to thoughts of career changes. (Maybe I could go into the insurance business and sell policies to whoever took over the prep). The next step is introduction of the fluorine, and when elemental fluorine is the most easily handled reagent in your scheme, let me tell you, you're in pretty deep. After the reaction, attention to painstaking fractional evaporation at very cold temperatures, in the best traditions of German experimental chemistry, is needed to clear out the reactants along with some silicon tetrafluoride, difluorodiazene, and other gorp. Willner's group managed to make about 20 milligrams of the pure stuff, but they strongly recommend that no one ever make more than that. As far as I can tell, no more than a few drops of the compound have ever existed at any one time. This is not really a loss:

”The synthesis of pure N3F by the method described above was repeated more than 30 times without explosion. But if N3F is cooled to -196 C or N3F is vaporized faster than described, very violent explosions may occur. One drop of N3F will pulverize any glass within a 5-cm distance.”

They managed to get pretty full spectroscopic data on the compound while they had it, which was good of them, and even explored its chemistry a bit. Life must have a peculiar vividness when your job is to come in and see if triazadienyl fluoride does anything when you expose it to fluorine monoxide. (Oddly, they report that that reaction is OK – go figure). Still, most of the literature on this compound remains computational, rather than experimental (other than Willner's lab), and unless it turns out to be the secret to faster-than-light travel or something, that situation will continue to obtain. It's already good for accelerating Pyrex fragments past the speed of sound, but there are easier ways.

Comments (28) + TrackBacks (0) | Category: Things I Won't Work With

October 20, 2008

Fearful Symmetry?

Posted by Derek

It’s worth examining your own scientific prejudices and biases from time to time, to see if they’re still valid. Of course, that begins with the difficult task of figuring out what they are – it’s hard to think of these things when you need them. So I try to make note of my presuppositions when I find myself acting by them, flagging them for later review.

One of these that’s come up recently is the bias that I (and many other medicinal chemists) have against symmetric compounds. (By that I mean palindromic compounds with a mirror-plane right down the middle of their structures). We tend not to make such compounds; we downgrade screening hits with that look to them, and if we do start to work on one, one of the first things we do is desymmetrize it and see if it gets any better. Why?

I think that one reason must be that there aren’t many truly symmetric binding sites out there. Proteins, while they can have large-scale symmetric structures, are usually pretty twisty and heterogeneous on the scale of a drug-sized molecule. Even in the cases of real protein symmetry (a dimer of two identical subunits, say), your compound would have to be fitting into some very select spaces to be feeling that symmetrical environment perfectly.

So a symmetric drug molecule feels wrong, somehow unoptimized. But there’s no reason that its two seemingly identical ends have to be doing the same thing on each side. They could easily be binding to completely different residues, or in different ways – it’s worth remembering that the symmetric structure we draw on the board may not have much in common with the molecule’s real 3-D conformation: a few zigs and zags in the rotatable bonds, and things aren’t as balanced as they looked.
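
If you want to hunt for these things in a screening deck, topological symmetry is easy enough to flag computationally. Here’s a minimal sketch, assuming RDKit is available: atoms that share a canonical rank (with tie-breaking turned off) are topologically equivalent, so a high fraction of duplicated ranks is the signature of the palindromes discussed above.

    # Flag topologically symmetric structures: duplicate canonical ranks
    # (with tie-breaking off) mark equivalent atoms. A sketch, not a tool.

    from rdkit import Chem

    def symmetry_fraction(smiles):
        mol = Chem.MolFromSmiles(smiles)
        ranks = Chem.CanonicalRankAtoms(mol, breakTies=False)
        return 1 - len(set(ranks)) / mol.GetNumAtoms()

    print(symmetry_fraction("O=C(Nc1ccccc1)Nc1ccccc1"))  # diphenylurea: ~0.56
    print(symmetry_fraction("CC(=O)Nc1ccccc1"))          # acetanilide: 0.2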

Perhaps we shouldn’t be so hard on these structures. I’ve crossed several of them off my lists over the years, but I think from now on I’ll give them more of a chance. Anyone with me?

Comments (15) + TrackBacks (0) | Category: Life in the Drug Labs

October 17, 2008

Down The Chute in Phase III

Posted by Derek

Here's a good article over at the In Vivo Blog on this year's crop of expensive Phase III failures. They've mostly been biotech drugs (vaccines and the like), but it's a problem everywhere. As In Vivo's Chris Morrison puts it:

Look, drugs fail. That happens because drug development is very difficult. Even Phase III drugs fail, probably more than they used to, thanks to stiffer endpoints and attempts to tackle trickier diseases. Lilly Research Laboratory president Steve Paul lamented at our recent PSA meeting that Phase III is "still pretty lousy," in terms of attrition rates -- around 50%. And not always for the reasons you'd expect. "You shouldn't be losing Phase III molecules for lack of efficacy," he said, but it's happening throughout the industry.

Ah, but efficacy has come up in the world as a reason for failure. Failures due to pharmacokinetics have been going down over the years as we do a better job in the preclinical phase (and as we come up with more formulation options). Tox failures are probably running at their usual horrifying levels; I don't think that those have changed, because we don't understand toxicology much better (or worse) than we ever did.

But as we push into new mechanisms, we're pushing into territory that we don't understand very well. And many of these things don't work the way that we think that they do. And since we don't have good animal models - see yesterday's post - we're only going to find out about these things later on in the clinic. Phase II is where you'd expect a lot of these things to happen, but it's possible to cherry-pick things in that stage to get good enough numbers to continue. So on you go to Phase III, where you spend the serious money to find out that you've been wrong the whole time.

So we get efficacy failures (and we've been getting them for some time - see this piece from 2004). And we're getting them in Phase III because we're now smart and resourceful enough to worm our way through Phase II too often. The cure? To understand more biology. That's not a short-term fix - but it's the only one that's sure to work. . .

Comments (16) + TrackBacks (0) | Category: Clinical Trials | Drug Development | Drug Industry History | Pharmacokinetics | Toxicology

October 16, 2008

Animal Models: How High to Set the Bar?

Posted by Derek

Key steps in all drug discovery programs are the cellular and animal models. The cells are the first time that the compounds are exposed to a living system (with cellular membranes that keep things out). The animals, of course, are a very stringent test indeed, with the full inventory of absorption, metabolism, and excretion machinery, along with the possibility of side effects in systems that you might not have even considered.

So it’s a tricky business to make sure that these tests are being done in the most meaningful way possible. You can knock your project out of promising areas for development if your model systems are too tough – and it’s even easier to water them down in the interest of getting numbers that make everyone feel better. “As stringent as they need to be” is the rule, but it’s a hard one to handle in practice.

Take, for example, the antibacterial field. The first cell assays there are unusually meaningful, since they’re being done on the real live targets of the drugs. (That doesn’t do much to get you past the high barrier of animal testing, though, since you have to see if your compounds that kill bacteria in a dish will still do it in that much more demanding environment). But there are all sorts of strains of bacteria out there, and it’s up to you to choose the ones that will tell you the most about what your compounds can do.

One way that bacteria evade being killed off by our wonder drug candidates is by pumping the compounds right back out once they get in. There are quite a few of these efflux pumps, and wild-type bacteria (particularly the resistant strains) are well stocked with them. You can culture all sorts of mutants, though, with these various transport mechanisms ablated or wiped out completely. If your compound doesn’t work on the normal lines, but cuts a swath through some of these, you have good evidence that your problem is efflux pumping, not some intrinsic problem with your target mechanism.
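
That triage logic is simple enough to write down explicitly. Here’s a sketch with invented MIC numbers and an invented potency cutoff, just to make the decision rule concrete:

    # Triage sketch: compare activity against wild-type bacteria (pumps intact)
    # and an efflux-knockout strain. The MIC values and cutoff are made up.

    ACTIVE_MIC = 4  # hypothetical cutoff (ug/mL): at or below this counts as active

    def diagnose(mic_wild_type, mic_knockout):
        if mic_wild_type <= ACTIVE_MIC:
            return "active against the real thing"
        if mic_knockout <= ACTIVE_MIC:
            return "intrinsically active, but being pumped back out"
        return "a target or mechanism problem, not just efflux"

    print(diagnose(mic_wild_type=64, mic_knockout=2))    # efflux suspect
    print(diagnose(mic_wild_type=64, mic_knockout=64))   # deeper trouble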

The problem is, we often don’t have a very good idea of what to do about efflux pumping. These proteins recognize a huge variety of different structures, and there aren’t really many useful ways to predict what they’ll take up versus what they’ll leave alone. In many cases, you just have to throw all sorts of variations at them and hope for the best. (The same goes for the other situations where active transport can be a big factor, such as with cancer cells and the blood-brain barrier).

So, how do you set up your assays? You can run the crippled bacteria first, which will give you an idea of the intrinsic potencies of your compounds, minus the pumping difficulty. That may be the way to go, but you’d better follow it up with some strains closer to wild-type, or you’re going to end up kidding yourself. Having a compound that infallibly kills only those bacteria that can’t spit it out is probably not going to do you (or anyone else) much good, considering what the situation is like out in the real world.

The same principle holds for other assays, all the way up to rats. If you run a relative pushover model in oncology, you can put up a very impressive plot of how powerful your compounds are. But what does that do for you in the end? Or for cancer patients, whose malignant cells are much more wily and aggressive? The best course, I’d say, is to run the watered-down models if they can tell you something that will help you move things along. But get to the wild-types, the real thing, as soon as possible. Those latter models may tell you things that you don’t want to hear – but that doesn’t mean that you don’t need to hear them.

Comments (16) + TrackBacks (0) | Category: Animal Testing | Drug Assays | Drug Development

October 15, 2008

Where Are the Drugs?

Posted by Derek

A recent correspondence on the question of “Why aren’t there more drugs for the big CNS disorders” got me thinking. My take, having worked in the field, is that there is still so much unmet need in that area because we just don’t understand what's going on. It’s hard to come up with disease-altering therapies when you don’t really understand a single disease in the whole field.

Does amyloid cause Alzheimer’s, or does Alzheimer’s give you amyloid, or is amyloid just a sideshow? What sets off the chain of events that ends up killing off cells in the substantia nigra in Parkinson’s? What are the detailed molecular mechanisms of depression, or schizophrenia? Why don’t neurons remyelinate in multiple sclerosis? We don’t know. We know a lot more than we used to; we know more every year. But we don't know enough to cure anyone yet. Even in the areas where we know more than average, we still don’t know enough to step in with therapies that can do what people really want them to do.

By that, I mean do for these diseases what insulin does for Type I diabetes, or what antibiotics do for infections. To any working CNS researcher, such results in their field would be hard to distinguish from magic. We can’t even touch the surrogate endpoints, and do what statins do for LDL levels, or the various antihypertensives do for blood pressure. We understand those areas a lot better than we understand the brain. Even so, we still get surprised, as witness the controversy over Vytorin, and the various ongoing attempts to find something that will raise HDL – you push a bit beyond the mechanisms that you’ve worked out, and all sorts of things start to happen.

The best way I can illustrate how difficult it is to find a disease-stopping therapy for something like Alzheimer’s is to point out the incentives for one. Any drug company that came out with such a therapy would immediately have one of the most profitable drugs on the market, and would go on to reap more and more money every year. Think of the sensation that a treatment that stopped – just plain stopped – schizophrenia would cause. As I said, indistinguishable from magic. And the success that such a thing would have would be immense. The incentives are there; it’s just that the barriers are very, very high.

Of course, it may not be possible to do some of these things. I’d be very careful about ruling anything out at our current stage of ignorance, but schizophrenia may well be one of those things where a dozen (or a hundred) different pathways lead to roughly the same disease state. (Cancer, as I’ve said here before, is the best example of something like this). And even if it’s not quite that bad, it may be that the tangle of the disease just doesn’t lend itself to a single agent – that, I’d say, is quite likely. I strongly doubt that just stepping in and adjusting the D-whatever dopamine receptor a bit will turn out to do the trick. This doesn’t mean that these diseases will be impossible to treat; it just means that treating them will be very complex.

And so it is, and so are most of the other big CNS conditions. I find it hard to explain to people outside the field just how complex these things are, and why progress has been so painfully slow for the patients who need these things now. It’s not that there’s no explanation. It’s that actually finding a drug that works for anything is ridiculously hard and expensive, a very difficult task by anyone’s standards. And CNS drugs are fiendishly difficult even by the standards of drug discovery.

Comments (14) + TrackBacks (0) | Category: Alzheimer's Disease | Drug Development | Drug Industry History | The Central Nervous System

October 14, 2008

Impact Factors: Can We Pretend That They Don't Exist?

Posted by Derek

Science has been writing on and off about scientific publishing, which naturally leads to a discussion of the ways that publication records are evaluated. Fortunately, I haven’t had to deal with this sort of thing myself, but if the reports are accurate, the whole “impact factor” business seems to be well out of control.

Impact factors, for those who haven’t had to worry about them, are an attempt to measure how good different journals are by how often papers in them are cited. The rankings that result are fairly well correlated with the way people have “good” journals ranked in their heads, although review publications get over-ranked by a straight citation count. There have been all sorts of refinements introduced, but the basic principle is the same: to quantify the publication list in someone’s c.v.
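
For those who’ve never had to compute one, the standard two-year impact factor is nothing more than a ratio: citations received this year to what the journal published in the previous two years, divided by the number of citable items it published in those years. (Numbers below are invented, purely for illustration.)

    # The standard two-year impact factor, with made-up figures.

    def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
        return citations_to_prev_two_years / citable_items_prev_two_years

    print(impact_factor(12000, 2400))   # 5.0 -- a respectable journal, by the numbers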

And that’s how it’s used in tenure evaluations. There are all sorts of tales of needing at least so-and-so many papers in journals of such-and-such impact factor and above. And in the cases where such things aren’t flatly written down, they’re widely felt to be calculated quietly behind closed doors. As you’d imagine, not everyone thinks that this is a good thing. One of the letters that came in to Science this time, from Abner Notkins of NIH, says that:

”. . .many scientists are now more concerned about building high-impact factor bibliographies than their science.

The adverse effects of the impact factor culture must be reversed before more damage is done to the orderly process of scientific discovery. Although there may be no way of stopping computer-generated evaluation of journals and published papers, the scientific community certainly can control its use. . .each institution should make it clear, in a written statement, that it will not use the impact factor or the like to evaluate the contributions and accomplishments of its staff. Second, the heads of laboratories should prepare similar written statements and in addition discuss in depth with their fellows the importance of solid step-by-step science. Third, the editors of journals published by professional societies, joined by as many other journal editors as are willing, should indicate that they will not advertise, massage, or even state the impact factor score of their respective journals. By means such as these, it might be possible to put science back on the right track.”

Strong stuff, and to some extent I agree with it. The thing is, there’s nothing wrong per se with publishing in good journals. Aiming your research high is a good thing, as long as good publications are the by-product and not the entire goal. Now, I think that the advertising of impact factors by journals is irritating, especially when they trumpet things down to the second decimal place. But I think that a statement that impact factors will not be considered for academic evaluations would be useless. After all, these numbers just put a quantitative coat of paint on a process that everyone engaged in anyway. Papers in Science, Nature, and the like already counted for a lot more on a publication list than did papers in many other journals, and saying that you’re not going to use someone’s numerical rating for them won’t change that. Every scientist in every field has an idea of which journals are harder to publish in (and publish more high-impact work); getting a paper into one of them will always count for more.

As it should. We have to remember what the opposite situation looks like. Everyone’s seen publication lists with page after page of low-quality stuff that’s been turned out for quantity, not quality. Communication after communication in high-acceptance-rate journals, obscure conference proceedings, every poster session noted – you know the sort of thing. It’s supposed to look impressive (why list all this stuff, otherwise?) but ends up looking pathetic. We don’t want to end up rewarding this kind of thing.

So what to do? Perhaps a realistic compromise: tell junior faculty and staff that their publication records will be a part of their evaluations, of course. But tell them that they’re not the most important part, and that a short publication list can be balanced out by other factors (and a long one balanced out in the other direction, too!) Someone who’s doing really good work, but who declines to slice it up into publishable bits, or whose research is just not on a schedule for lots of publications no matter what, should know that they’ll be evaluated with these things in mind. Likewise, someone who runs every single experiment to slot into the next manuscript had better also be running the ones that they’d set up even if journals didn’t exist, and we all still communicated by handwritten letters. Good science is still good science, whether it’s published (or even if it’s published!) in Science or not.

Comments (13) + TrackBacks (0) | Category: The Scientific Literature

October 13, 2008

Old School - Really Old

Posted by Derek

We try to be delicate when we synthesize our molecules – really, we do. Delicate reactions often have better yields and fewer side products. Exotic catalysts in perfectly tuned metal coupling reactions – these things are wonderful when they work, because you go from pure starting material to darn near pure product.

But life in the drug labs is not always thus. We also have to turn back the clock, and break out reactions that our grandfathers would have recognized – dark, fuming things that will eat a hole in your lab coat. Nitration is one of these – good old nitric acid is still very much around for that reaction, often in vile mixtures with sulfuric and the like. It’s cheap, and it often works, so you can’t get away from it. And if 1:1 nitric/sulfuric won’t perforate your clothing, you must have put on armor instead of Armani.

Chlorosulfonic acid is another such reagent. It’s nasty by anyone’s standards, but it’ll stick a chlorosulfonyl group onto an activated aromatic ring in one step, which is nothing to take lightly. You don’t want to pour that into water to work it up, not unless you want to see it splatter all over your hood. Nope, you’ll need a trip to the ice machine – slow drizzling over crushed ice is the traditional workup, for good reason.

That’s a good acid for another brute-force reaction that we still have with us: the Friedel-Crafts. Fancier ways exist to acylate an aromatic ring – those metal-catalyzed ones, for example, often in the presence of carbon monoxide. But who wants to use CO if you don’t have to? And you need a leaving group where the acyl group is going to go. The Friedel-Crafts will just come in and jam one in on an unsubstituted carbon, if the electronics of the ring are right. And all you need to do is treat your molecule with some hammer-of-the-gods reagent like chlorosulfonic acid, polyphosphoric acid (which looks and acts like honey from Hell), or straight aluminum chloride powder. That last one is at least a solid, albeit a corrosive one, but you pay the toll during the workup. That’s when it hydrolyzes to piles of white aluminum oxide junk, often turning your reaction into a thick mess.

So no, it’s not all twenty-first century chemistry, all the time. World War I-era chemistry is still very much with us at times. Actually, I sort of like it that way. When I have to break out the polyphosphoric acid, the powdered iron, or the elemental bromine, I feel as if I’m keeping faith with my predecessors. They wouldn’t know what to make of the LC/mass spec machine, but they’d grin when they saw me trying to work up my aluminum chloride reactions.

Comments (27) + TrackBacks (0) | Category: Life in the Drug Labs

October 10, 2008

Kevin Trudeau: A Bit of Good News

Posted by Derek

I thought, given all the recent news, that everyone could use a story that would bring a smile to their faces, so here we go: Kevin Trudeau, the infomercial king who makes his living slandering drug research and feeding conspiracy theories about diet and health, has been fined $5 million over the marketing of his weight-loss book. He's also been banned from the infomercial business for three years, and found in contempt of court.

The judge in the case, clearly exasperated, called Trudeau "not a reliable witness" and said that he had "clearly, and no doubt intentionally" violated a 2004 order to refrain from deceptive marketing practices. To give you an idea, Trudeau stated repeatedly that his weight-loss plan involved no exercise, and could be completed easily at home. Lucky customers who sent in their money found that the plan included an hour of walking a day, colonic cleansing, and injections of human growth hormone - and that its last phase was to go on for the rest of their lives.

Hey, walking an hour a day is good advice, but you don't have to send money to some guy (who tells you that it isn't exercise) to find out about it. And that seems to describe his books pretty well: a mixture of obvious, well-known advice and lunacy, served up at the highest price the market will bear, over and over. If you want to get the Full Trudeau, this Washington Post profile from 2005 will be your window into his wonderful world - it starts with the electromagnetic chaos eliminator necklace he wears, which he says keeps his brain from being microwaved, and goes on from there. Is anyone surprised that he also recommends Scientology?

With any luck, this charlatan will be off the airwaves for a few years - or, if he reappears, perhaps we can all hope for a jail term next time. I'll cheer. After all, this is a person who goes around telling people that drug researchers like me are deliberately poisoning millions of people and withholding cures. Whatever he has coming to him is fine with me.

Comments (41) + TrackBacks (0) | Category:

October 9, 2008

More Glowing Cells: Chemistry Comes Through Again

Email This Entry

Posted by Derek

I’ve spoken before about the acetylene-azide “click” reaction popularized by Barry Sharpless and his co-workers out at Scripps. This has been taken up by the chemical biology field in a big way, and all sorts of ingenious applications are starting to emerge. The tight, specific ligation reaction that forms the triazole lets you modify biomolecules with minimal disruption (by hanging an azide or acetylene from them, both rather small groups), and tag them later on in a very controlled way.

Adrian Salic and co-worker Cindy Jao have just reported an impressive example. They’ve been looking at 5-ethynyluridine (EU), an acetylene-modified form of uridine, the ubiquitous nucleoside found in RNA. If you feed this to living organisms, they take it up just as if it were uridine, and incorporate it into their RNA. (It’s uridine-like enough not to be taken up into DNA, as they’ve shown by control experiments). Exposing cells or tissue samples later on to a fluorescent-tagged azide (and the copper catalyst needed for quick triazole formation) lets you light up all the RNA in sight. You can choose the timing, the tissue, and your other parameters as you wish.
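
For anyone who wants to see the ligation itself on paper, here’s a toy RDKit sketch of the triazole formation – connectivity only, with the copper catalysis, the kinetics, and the real EU/fluorescent-azide structures all waved away. The phenylacetylene and propyl azide stand-ins are my own inventions:

```python
# The azide-alkyne "click" ligation as a connectivity change (illustrative only):
# terminal alkyne + organic azide -> 1,4-disubstituted 1,2,3-triazole.
# The Cu(I) catalysis that makes it fast and regioselective is not modeled.
from rdkit import Chem
from rdkit.Chem import rdChemReactions

click = rdChemReactions.ReactionFromSmarts(
    "[C:1]#[CH1:2].[N:3]=[N+:4]=[N-:5]"
    ">>[c:1]1:[cH:2]:[n+0:3]:[n+0:4]:[n+0:5]:1"
)

alkyne = Chem.MolFromSmiles("C#Cc1ccccc1")     # stand-in for an EU-labeled biomolecule
azide = Chem.MolFromSmiles("CCCN=[N+]=[N-]")   # stand-in for a fluorescent azide tag

for (prod,) in click.RunReactants((alkyne, azide)):
    Chem.SanitizeMol(prod)
    print(Chem.MolToSmiles(prod))  # the 1,4-triazole, e.g. CCCn1cc(-c2ccccc2)nn1
```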

For example, Salic and Jao have exposed cultured cells to EU for varying lengths of time, and watched the time course of transcription. Even ten minutes of EU exposure is enough to see the nuclei start to light up, and a half hour clearly shows plenty of incorporation into RNA, with the cytoplasm starting to show as well. (The signal increases strongly over the first three hours or so, and then more slowly).

Isolating the RNA and looking at it with LC/MS lets you calibrate your fluorescence assays, and also check to see just how much EU is getting taken up. Overall, after a 24-hour exposure to the acetylene uridine, it looks like about one out of every 35 uridines in the total RNA content has been replaced with the label. There’s a bit less of it in the RNA species produced by RNA polymerase I as compared to the others, interestingly.
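
To put that one-in-35 figure in perspective, a quick back-of-the-envelope calculation shows why the signal is so easy to see. The transcript length and uridine content below are round numbers I’ve assumed for illustration, not anything from the paper:

```python
# How many EU labels would a single transcript carry at 1 label per 35 uridines?
# The transcript length and uridine fraction are assumed round numbers.
transcript_length = 2000        # nucleotides (hypothetical mRNA)
uridine_fraction = 0.25         # assume ~1 in 4 bases is U
labeling_ratio = 1 / 35         # from the LC/MS figure quoted above

uridines = transcript_length * uridine_fraction
labels = uridines * labeling_ratio
print(f"~{uridines:.0f} uridines -> ~{labels:.1f} EU labels per transcript")
# ~500 uridines -> ~14.3 labels: plenty of azide attachment points per message.
```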

There are some other tricks you can run with this system. If you expose the cells for 3 hours, then wash the EU out of the medium and let them continue growing under normal conditions, you can watch the labeled RNA disappear as it turns over. As it turns out, most of it drops out of the nucleus during the first hour, while the cytoplasmic RNA seems to have a longer lifetime. If you expose the cells to EU for 24 hours, though, the nuclear fluorescence is still visible – barely – after 24 hours of washout, but the cytoplasmic RNA fluorescence never really goes away at all. There seem to be some stable RNA species out there – what exactly they are, we don’t know yet.
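
That washout behavior is just what you’d expect from RNA pools turning over at very different rates. Here’s a toy first-order decay model – the half-lives in it are numbers I made up to mimic the qualitative behavior, not anything measured in the paper:

```python
import math

def signal_remaining(half_life_h: float, t_h: float) -> float:
    """Fraction of labeled-RNA signal left after t_h hours of washout,
    assuming simple first-order turnover (a deliberate oversimplification)."""
    return math.exp(-math.log(2) * t_h / half_life_h)

# Hypothetical half-lives: fast-turnover nuclear RNA vs. a stable cytoplasmic pool.
for t in (1, 6, 24):
    nuc = signal_remaining(0.5, t)     # assumed 30-minute half-life
    cyt = signal_remaining(40.0, t)    # assumed 40-hour half-life
    print(f"{t:>2} h washout: nuclear {nuc:.3f}, cytoplasmic {cyt:.3f}")
# The nuclear signal is nearly gone within an hour or two, while the
# cytoplasmic signal barely budges - matching the qualitative observations.
```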

Finally, the authors tried this out on whole animals. Injecting a mouse with EU and harvesting organs five hours later gave some very interesting results. It worked wonderfully - whole tissue slices could be examined, as well as individual cells. Every organ they checked showed nuclear staining, at the very least. Some of the really transcriptionally active populations (hepatocytes, kidney tubules, and the crypt cells in the small intestine) were lit up very brightly indeed. Oddly, the most intense staining was in the spleen. What appear to be lymphocytes glowed powerfully, but other areas next to them were almost completely dark. The reason for this is unknown, and that’s very good news indeed.

That’s because when you come up with a new technique, you want it to tell you things that you didn’t know before. If it just does a better or more convenient job of telling you what you could have found out, that’s still OK, but it’s definitely second best. (And, naturally, if it just tells you what you already knew with the same amount of work, you’ve wasted your time). Clearly, this click-RNA method is telling us a lot of things that we don’t understand yet, and the variety of experiments that can be done with it has barely been sampled.

Closely related to this work is what’s going on in Carolyn Bertozzi’s lab in Berkeley. She’s gone a step further, getting rid of the copper catalyst for the triazole-forming reaction by ingeniously making strained, reactive acetylenes. They’ll spontaneously react if they see a nearby azide, but they’re still inert enough to be compatible with biomolecules. In a recent Science paper, her group reports feeding azide-substituted galactosamine to developing zebrafish. That amino sugar is well known to be used in the synthesis of glycoproteins, and the zebrafish embryos seemed to have no problem accepting the azide variant as a building block.

And they were able to run the same sorts of experiments – exposing the embryos to different concentrations of azido sugar, for different times, with different washout periods before labeling – which gave a wealth of information about the development of mucin-type glycans. Using differently labeled fluorescent acetylene reagents, they could stain different populations of glycans, and watch time courses and developmental trafficking – that’s the source of the spectacular images shown below.

[Image: fluorescently labeled mucin-type glycans in developing zebrafish, from the Bertozzi lab]

Losing the copper step is convenient, and also opens up possibilities for doing these reactions inside living cells (which is definitely something that Bertozzi’s lab is working on). The number of experiments you can imagine is staggering – here, I’ll do one off the top of my head to give you the idea. Azide-containing amino acids can be incorporated at specific places in bacterial proteins – here’s one where they replaced a phenylalanine in urate oxidase with para-azidophenylalanine. Can that be done in larger, more tractable cells? If so, why not try that on some proteins of interest – there are thousands of possibilities – and then micro-inject one of the Bertozzi acetylene fluorescence reagents? Watching that diffuse through the cell, lighting things up as it found azides to react with, would surely be of interest – wouldn’t it?

I’m writing about this the day after the green fluorescent protein Nobel for a reason, of course. This is a similar approach, but taken down to the size of individual molecules – you can’t label uracil with GFP and expect it to be taken up into RNA, that’s for sure. Advances in labeling and detection are among the main things driving biology these days, and this will just accelerate matters. (It’s also killing off a lot of traditional radioactive isotope labeling work, not that anyone’s going to miss it). For the foreseeable future, we’re going to be bombarded with more information than we know what to do with. It’ll be great – enjoy it!

Comments (7) + TrackBacks (0) | Category: Analytical Chemistry | Biological News

October 8, 2008

A Green Fluorescent Nobel Prize

Email This Entry

Posted by Derek

So it was green fluorescent protein after all! We can argue about whether this was a pure chemistry prize or another quasi-biology one, but either way, the award is a strong one. So, what is the stuff and what’s it do?

Osamu Shimomura discovered the actual protein back in 1962, isolating it from the jellyfish Aequorea victoria. These were known to be luminescent creatures, but when the light-emitting protein was found (named aequorin), it turned out to give off blue light. That was strange, since the jellyfish were known for their green color. Shimomura then isolated another protein from the same jellyfish cells, which turned out to absorb the blue light from aequorin very efficiently and then fluoresce in the green: green fluorescent protein. The two proteins are a coupled system, an excellent example of a phenomenon known as FRET (fluorescence resonance energy transfer), which has since been engineered into many other useful applications.
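
The efficiency of that transfer falls off as the sixth power of the donor-acceptor distance, which is what makes FRET such a useful molecular ruler. Here’s a quick sketch of the standard Förster equation – the 5 nm Förster radius is a typical ballpark figure I’ve assumed, not the measured aequorin/GFP value:

```python
# Forster transfer efficiency: E = 1 / (1 + (r / R0)^6), where R0 (the
# Forster radius) is the separation at which transfer is 50% efficient.
# R0 depends on the spectral overlap and orientation of the donor/acceptor pair.
def fret_efficiency(r_nm: float, r0_nm: float = 5.0) -> float:
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

for r in (2.5, 5.0, 7.5, 10.0):
    print(f"r = {r:4.1f} nm -> E = {fret_efficiency(r):.3f}")
# e.g. r = 2.5 nm -> E = 0.985 (strong coupling), r = 5.0 nm -> E = 0.500,
# and r = 10.0 nm -> E = 0.015 (transfer essentially off).
```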

Fluorescence is much more common in inorganic salts and small organic molecules, and at first it was a puzzle how a protein could emit light in the same way. As it turns out, there’s a three-amino-acid sequence right in the middle of its structure (serine-tyrosine-glycine) that condenses with itself when the protein is folded properly and makes a new fluorescent species. (The last step of the process is reaction with ambient oxygen). The protein has a very pronounced barrel shape to it, and lines up these key amino acids in just the orientation needed for the reaction to go at a reasonable rate (on a time scale of tens of minutes at room temperature). This is well worked out now, but it was definitely not obvious at the time.

In the late 1980s, for example, the gene for GFP was cloned by Doug Prasher, but he and his co-workers thought it quite possible that the expressed protein would be non-fluorescent, needing activation by some other system. He had the idea that it could be used as a tag for other proteins, but was never able to get to the point of demonstrating that, and will join the list of people who were on the trail of a Nobel discovery but never quite got there. (Update: here's what Prasher is doing now - this is a hard-luck story if I've ever heard one.) Prasher furnished some of the clone to Martin Chalfie at Columbia, who got it to express in E. coli and found that the bacteria indeed glowed bright green. (Other groups were trying the same thing, but the expression was a bit tricky at the time). The next step was to express it in the roundworm C. elegans (naturally enough, since Chalfie had worked with Sydney Brenner). Splicing it in behind a specific promoter caused the GFP to express in definite patterns in the worms, just as expected. All of this suggested that the protein was fluorescing on its own, and could do the same in all sorts of organisms under all sorts of conditions.

And so it’s proved. GFP is wonderful stuff for marking proteins in living systems. Its sequence can be fused onto many other proteins without disturbing their function, it folds up just fine to its active form with no help, and it’s bright and very photoefficient. Where Roger Tsien enters the picture is in extending this idea to a whole family of proteins. Tsien worked out the remaining details of the fluorescent structure, showing that oxygen is needed for the final step. He and his group then set out to make mutant forms of the protein, changing the color of its fluorescence and other properties. He’s done the same thing with a red fluorescent protein from coral, and this work (which continues in labs all over the world) has led to a wide variety of in vivo fluorescent tags, which can be made to perform a huge number of useful tricks. They can sense calcium levels or the presence of various metabolites, fluoresce only when they come into contact with another specifically labeled protein, serve in various time-resolved techniques to monitor the speed of protein trafficking, and who knows what else. A lot of what we’ve learned in the last fifteen years about the behavior of real proteins in living cells has come out of this work – the prize is well deserved.

I want to close with a bit of an interview with Martin Chalfie, which is an excellent insight into how things like this get discovered (or don't!):

Considering how significant GFP has been, why do you think no one else came up with it, while you were waiting for Doug Prasher to clone it?

"That’s a very important point. In hindsight, you wonder why 50 billion people weren’t working on this. But I think the field of bioluminescence or, in general, the research done on organisms and biological problems that have no immediate medical implications, was not viewed as being important science. People were working on this, but it was slow and tedious work, and getting enough protein from jellyfish required rather long hours at the lab. They had to devise ways of isolating the cells that were bioluminescent and then grinding them up and doing the extraction on them. It’s not like ordering a bunch of mice and getting livers out and doing an experiment. It was all rather arduous. It’s quite remarkable that it was done at all. It was mostly biochemists doing it, and they were not getting a lot of support. In fact, as I remember it, Doug Prasher had some funding initially from the American Cancer Society, and when that dried up he could not get grants to pursue the work. I never applied for a grant to do the original GFP research. Granting agencies would have wanted to see preliminary data and the work was outside my main research program. GFP is really an example of something very useful coming from a far-outside-the-mainstream source. And because this was coming from a non-model-organism system, these jellyfish found off the west coast of the U.S., people were not jumping at the chance to go out and isolate RNAs and make cDNAs from them. So we’re not talking about a field that was highly populated. It was not something that was widely talked about. At the time, there was a lot of excitement about molecular biology, but this was biochemistry. The discovery really was somewhat orthogonal to the mainstream of biological research."

Here's an entire site dedicated to the GFP story, full of illustrations and details. That interview with Chalfie is here, with some background on his part in the discovery. Science background from the Nobel Foundation is here (PDF), for those who want even more.

Comments (34) + TrackBacks (0) | Category: Biological News | Current Events

October 7, 2008

Nobel Season 2008

Email This Entry

Posted by Derek

So we come upon Nobel season again. As I do every year, I'm going to throw the comments section open for nominations for who should (and who shouldn't!) get the prize in Chemistry this year. We may well have a trapdoor open on us again, since some years the committee uses the Chemistry prize as a dumping ground for spare biology prizes, but we'll see how it goes.

If we do get a chemistry prize this time, my money is against my own field, synthetic organic chemistry. In fact, long-term, I'm betting against it, unless the work has a hook into some broader story. That could be nanotechnology, drug discovery (wouldn't that be nice?), advances in materials science, energy storage or conversion, and the like. But I don't see many (any?) prizes being given out for straight organic synthesis, the way E. J. Corey's was. I think that the time for that has indeed passed.

But there's room for a prize or two in synthetic methods, I have to say, a sort of H. C. Brown-type prize. A lot of people have waited to see if palladium couplings would get one, for example. I think that metal-catalyzed couplings are definitely worthy of the recognition - they've taken over the world to a degree that younger chemists can't appreciate - but I don't know if the Nobel committee has ever been able to untangle who should share the credit to the point where they feel safe awarding it.

That's a problem in several areas (drug discovery being another example where credit is often spread around). Individual researchers can end up in the same boat - that's the usual opinion about, say, George Whitesides of Harvard. He's done a lot of very interesting work over the years, but it's been in several rather different areas. I think we can use all the scientists of that sort we can get, myself, but the profile doesn't match up well with what the Nobel folks are looking for.

So, place your bets, folks. For reference, the Thomson Reuters folks have a short list of their own, based on literature citations: Charles Lieber of Harvard for nanotech, Roger Tsien of UCSD for green fluorescent protein, and Krzysztof Matyjaszewski of Carnegie Mellon for atom-transfer radical polymerization.

Comments (21) + TrackBacks (0) | Category: Current Events

October 6, 2008

Imclone Really Does Get Bought

Email This Entry

Posted by Derek

Well, it looks as if I'll finally be able to stop talking about Imclone: the word came out this morning that they've agreed to a $70/share deal with Lilly. Some thoughts on this:

1. I would still like to know how the uncertainty around the Erbitux follow-up antibody is supposed to be resolved. It's hard for me to make sense of this for Lilly unless they think that they can get substantial revenue from it, and Bristol-Myers Squibb presumably will disagree with their projected figures. None of the news stories so far have addressed this issue, and I presume that it's going to be a matter for negotiations (or for the courts, if it comes to that).

2. It seems that some analysts are seeing this deal as a sign of weakness in Lilly's pipeline, perhaps signaling that Effient (prasugrel) might be delayed more or labeled so restrictively that it has no chance of living up to expectations. We'll see how Lilly's stock performs today, and read the mood of its investors.

3. Well, Carl Icahn really did have something up his sleeve. Considering what Imclone was trading at before all this, he has plenty of reasons to be happy. But now will he turn his attention to Biogen again, and try to do the same thing with (or to) them?

4. I stand corrected! I had trouble believing that someone would come in at this price under these conditions, but, well, here they are. I should keep in mind that a fair number of mergers and acquisitions in this industry seem problematic (or downright senseless) to me, and adjust accordingly.

Comments (6) + TrackBacks (0) | Category: Business and Markets

October 3, 2008

Day Off

Email This Entry

Posted by Derek

No time for a post this morning, unfortunately. The arguments are continuing full speed in the comments to Hard Times: A Manifesto, though, and I plan to do a long-overdue blogroll update this weekend. There are several sites that have needed to be added for quite a while now, and several others that have fallen into inactivity.

Inactivity doesn't seem to be a problem around here, anyway - today's an exception! Have a good weekend, and I'll see everyone on Monday.

Comments (2) + TrackBacks (0) | Category: Blog Housekeeping

October 2, 2008

Taranabant Is No More

Email This Entry

Posted by Derek

Merck has taken a step that many people have been expecting, and announced that they are no longer developing taranabant, their cannabinoid antagonist (or is it an inverse agonist?).

I'd expressed grave doubts about the drug earlier this year, which turned out to be well-founded. That latter post included the line "I don't see how they can get this compound through the FDA", and now Merck seems to have come to the same conclusion. Further clinical data seem to have shown far too many psychiatric side effects (anxiety, depression, and so on), which increased along with the dose of the drug.

The cannabinoid antagonist field has already experienced a crisis of confidence after Sanofi-Aventis's rimonabant failed to gain approval in the US. This latest news should ensure that no company tries to develop one of these drugs until we've learned a great deal more about their pharmacology. Given how little we know about the mechanisms of these mental processes, though, that could take a long, long time. We can pull the curtain over this area, I think.

Comments (15) + TrackBacks (0) | Category: Diabetes and Obesity | Drug Development | The Central Nervous System | Toxicology

Eli Lilly and Imclone: Sensible? Real?

Email This Entry

Posted by Derek

Word leaked out yesterday that Imclone’s secret bidder is Eli Lilly. Well, let’s revise that – so far, Lilly hasn’t made a bid for the company. And that’s the first thought I had about this business: isn’t it taking quite a while? You’ll recall that Carl Icahn told Bristol-Myers Squibb a couple of weeks ago that he’d been in talks with someone else. Then we were going to hear about it over the weekend. Then the name would be revealed Wednesday at midnight (of all times). Now here we are on Thursday with no official announcement.

And the delay probably doesn’t have anything to do with the situation in the credit markets, because Icahn has been sure to emphasize that the deal he’s looking at is not subject to financing. That means all this extra time is probably due to good old caution – and I don’t blame Lilly for mulling things over. There are plenty of reasons to wonder if Imclone is worth the money for an outside company, given its status with BMS. This clearly isn’t the instant-winner operators-are-standing-by deal that Imclone would like to have us believe it is.

Does a Lilly deal make sense? It might, if they could be sure that they were going to get the Erbitux follow-up. But I’m willing to bet that this is exactly the issue that things are stuck on, since BMS believes (with reason) that they have a share of it, and won’t give it up easily.

And I’d be willing to see this go through, even at a ruinous price, if it would get Carl Icahn out of the drug industry. But no such luck, I’m afraid. He’s probably still eyeing Biogen, and who knows who else. I spent some time yesterday going on about how we shouldn’t blame evil MBA types for the problems in our business, but Icahn is the sort of guy I’m nearly willing to make an exception for. He’s a pure dealmaker, and I don’t see him as someone who understands scientific research or who has the patience for it. If we’re going to point the finger at managers whose only goals seem to be to make the quarterly numbers and pump up the stock price, he’s as good an example as I can think of. A dose of this stuff is exactly what we don’t need at the moment.

One more consideration: who leaked Lilly’s name, anyway? The Wall Street Journal seems to have been the first with the story, quoting "people familiar with the matter". Well, cui bono? Who has an interest in moving it along and showing that it’s a real possibility, in getting possible bidders to feel some pressure? Who indeed?

Comments (6) + TrackBacks (0) | Category: Business and Markets

October 1, 2008

Hard Times: A Manifesto

Email This Entry

Posted by Derek

The more I think about all the research layoffs that have been going on for the last year or two around the industry, the more I think that we really are seeing a change in the way drug discovery is being done.

Most of the jobs have been lost from the large companies. There have, of course, been shutdowns at the smaller ones, but I don’t think that those have been running at any different rate than usual. Startups and other smaller shops are always rearranging as their skills, finances, and luck dictate – that seems to be going on at the usual pace. But what’s different is the wave after wave of job cuts at the Pfizers, GSKs, AstraZenecas, J&Js – the big hitters (and big employers) of the industry. Even the companies that haven’t had major layoffs (Novartis comes to mind) aren’t exactly hiring heavily.

So what’s going on? My take is still that this is a shift – as far as the US end is concerned – from larger research outfits to smaller ones. After all, the drugs are going to have to come from somewhere, and the deal-making for small companies that have something promising has been intense. It just seems that the larger companies don’t think that they can do as much of this discovery work themselves – not, at least, at the prices that make sense.

Now, it’s true that a lot of chemistry has been outsourced to contractors in India and China, and that several firms have opened research divisions of their own overseas. That’s a cost-cutting move, too, certainly – but look at what this says about research here in the US. Everyone knows – including the people in Shanghai and Hyderabad – that the difficult, high-level research is still not being done there. That’ll change, as the human and physical infrastructure improves, but the bulk of the outsourced chemistry is methyl-ethyl-butyl-futile stuff. It’s “Hey, make me a library based on this scaffold structure” or “Hey, make me fifty grams of this intermediate”.

This kind of thing is definitely cheaper to do outside the country. It’s not always as timely as it should be, or as well-done – so it’s not as cheap as it always looks. But overall, on the average, you can bang out compounds for less money by outsourcing. That’s not going to change, either. The countries that furnish the services may change, as time goes on. But until the whole world is a high-wage environment (or, more horribly, until the only countries that aren’t are so benighted that no such work can be done there), ordinary chemistry is going to be done where it can be done for the least money.

So what’s left for us here in the US? The hard stuff. The risky stuff. The science that needs well-paid experienced people hovering over it the whole time. The cheaper, easier research is leaving – a lot of it has left already. We get to take on the stuff that can’t be outsourced.

And that’s why I think that there’s a shift to smaller firms. They’re traditionally the risk-takers in this business, and I think that’s going to be more true than ever. The larger companies, to me, seem to be trying to play it safer than ever. They have huge costs to meet, and don’t seem to think that they can devote as much of their resources to taking chances. We can argue about whether that’s wise (after all, you might think that larger companies with more cash might be the ones who could afford more risk). But that’s not how it’s been working – not for quite a while, when you think about it.

Here’s the hard part: the world does not owe any of us a high-paying research job. Neither the world, the US government, nor the US pharma industry owes us jobs of any kind. I wish that that weren’t true, but it most certainly is. Those of us trying to make a living through science and drug discovery are going to have to scramble for it. We’re going to have to prove our worth to those who are in a position to pay for us, and we’re going to have to try to make as many of our own opportunities as we can.

There are some things that can help us out in this period (see below), and there are some others that will do none of us any good at all. I know from some of the comments here that not all of you will agree with this, but as far as I’m concerned, here are some of the no-good-whatsoever moves:

1. Complaining about the Evil Suits Who Are Ruining the Industry. Look, I’ve been unemployed in this business, too. A merger pitched several hundred of us out into the market when our entire site was shut down. But I didn’t think that it was being done because upper management was enjoying it. They were, as far as I can tell, trying to keep the company going while having it make as much money as possible – the same behavior that had been paying my salary, actually. The constant drive to do those things is what’s paid all our salaries. Now, that doesn’t mean that upper management is always right. I didn’t say that they couldn’t be stupid (hey, I’ve sat through some of those presentations, too). I’m just saying that they’re not evil. Ranting about it is a pointless distraction from the business of keeping your job or getting another one. And besides, if they really are making a stupid mistake, that creates opportunities later on (see below).

2. Complaining about All Those Foreigners. I have even less time for this one. As far as outsourcing goes, I don’t see how I can tell chemists in China not to do the same work as American chemists for less money. (We should be making sure that we’re not doing the same work – see below). This is how economies grow, and how the world improves. I’m living in one of the greatest places in the world, and have been making a better living than most of the world’s population: I have no room to tell someone that they can’t try to reach for the same standard of living.

And as for foreign scientists working here, well, I think that one of the reasons I’ve been living in one of the greatest places in the world is that it’s been a haven for all sorts of bright, hard-working people. We’re not going to turn this into an immigration blog – there’s lots of room to argue about our current policies, particularly regarding unskilled laborers. But that’s not what we’re dealing with in the sciences. As far as I can see, we can use all the intelligent, creative, entrepreneurial people we can take, and we need to make sure that our country is the kind of place that people like that aspire to live in.

So if those don’t do any good, what does? Well, look at the situation. This is, as I’ve said before, a terrible time to be an ordinary chemist in this industry. That goes for the ordinary biologists, too. We’ve all got to demonstrate why we’re worth what we want to earn, and doing something that can be done for half the price somewhere else isn’t going to cut it.

So improve your skills. Learn new techniques, especially the ones that are just coming out and haven’t percolated down to the crank-it-out shops in the low-wage countries. Stay on top of the latest stuff, take on tough assignments. Keeping your head down in times like these will move you into the crowd that looks like it can be safely let go.

That’s one thing. Another one is the traditional advice given in all industries: keep in touch with everyone you know around the business. Use networking sites, keep current phone numbers, drop people an e-mail now and then. Getting laid off may well have had nothing to do with what you did – but finding a new job will have everything to do with it. If you don’t have any contacts around the business, large outfits and small, you’re going to have a harder time of it for sure.

And finally, here’s a more macro-scale suggestion. We medicinal chemists need to think more about being the source of startup companies ourselves. That’s harder to do if you’re part of a service group, or if you have that mentality. If your job is to crank out molecules, then you need to find a place that needs someone to do that. But if you’ve got a larger skill set, it may be large enough to get together with some other creative people and try to get some funding for ideas that no one else is doing. People still need medicines, and as long as we can still discover them here, it sure beats waiting for the phone to ring. If the bigger companies are in fact making a mistake by cutting research, what better revenge than to make them wish they hadn’t?

Comments (101) + TrackBacks (0) | Category: Business and Markets