Corante

About this Author
College chemistry, 1983

Derek Lowe The 2002 Model

After 10 years of blogging. . .

Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship during his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases. To contact Derek, email him directly: derekb.lowe@gmail.com. Twitter: Dereklowe

Chemistry and Drug Data: Drugbank
Emolecules
ChemSpider
Chempedia Lab
Synthetic Pages
Organic Chemistry Portal
PubChem
Not Voodoo
DailyMed
Druglib
Clinicaltrials.gov

Chemistry and Pharma Blogs:
Org Prep Daily
The Haystack
Kilomentor
A New Merck, Reviewed
Liberal Arts Chemistry
Electron Pusher
All Things Metathesis
C&E News Blogs
Chemiotics II
Chemical Space
Noel O'Blog
In Vivo Blog
Terra Sigilatta
BBSRC/Douglas Kell
ChemBark
Realizations in Biostatistics
Chemjobber
Pharmalot
ChemSpider Blog
Pharmagossip
Med-Chemist
Organic Chem - Education & Industry
Pharma Strategy Blog
No Name No Slogan
Practical Fragments
SimBioSys
The Curious Wavefunction
Natural Product Man
Fragment Literature
Chemistry World Blog
Synthetic Nature
Chemistry Blog
Synthesizing Ideas
Business|Bytes|Genes|Molecules
Eye on FDA
Chemical Forums
Depth-First
Symyx Blog
Sceptical Chymist
Lamentations on Chemistry
Computational Organic Chemistry
Mining Drugs
Henry Rzepa


Science Blogs and News:
Bad Science
The Loom
Uncertain Principles
Fierce Biotech
Blogs for Industry
Omics! Omics!
Young Female Scientist
Notional Slurry
Nobel Intent
SciTech Daily
Science Blog
FuturePundit
Aetiology
Gene Expression (I)
Gene Expression (II)
Sciencebase
Pharyngula
Adventures in Ethics and Science
Transterrestrial Musings
Slashdot Science
Cosmic Variance
Biology News Net


Medical Blogs
DB's Medical Rants
Science-Based Medicine
GruntDoc
Respectful Insolence
Diabetes Mine


Economics and Business
Marginal Revolution
The Volokh Conspiracy
Knowledge Problem


Politics / Current Events
Virginia Postrel
Instapundit
Belmont Club
Mickey Kaus


Belles Lettres
Uncouth Reflections
Arts and Letters Daily

In the Pipeline

Monthly Archives

October 31, 2002

And While We're On the Subject - Mercury?

Posted by Derek

There's another report this morning of an arrest of a suspected Chechen terrorist, who was carrying what's described as 18 pounds of mercury in a champagne bottle. "Such an amount of mercury would poison a very large number of people," said a spokesman for the Moscow police.

Would it? The amount is right - 18 pounds of mercury works out to about 600 mL, which would fit just fine into a bottle. But what could you do with it? Mercury, in its elemental form, is a very, very slow toxin indeed. You can even drink a shot of the stuff and pass it out of your body without getting killed. It won't improve you, that's for sure, but it won't kill you.
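
(A quick sanity check on that figure, sketched here in Python with the textbook density of liquid mercury, about 13.5 g/mL:)

    GRAMS_PER_POUND = 453.592
    MERCURY_DENSITY = 13.534   # g/mL for liquid mercury at room temperature

    mass_g = 18 * GRAMS_PER_POUND        # about 8,165 g
    volume_ml = mass_g / MERCURY_DENSITY
    print(f"18 lb of mercury is roughly {volume_ml:.0f} mL")   # ~603 mL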

If you try that stunt, your body (or your intestinal bacteria) will take a very small amount of the metal up and convert it to organomercurial compounds, which are the real problem. Those are much more easily absorbed than the pure metal, and can really do some damage. (These are the forms of mercury found in fish, for example - it's not the free metal.) Mercury reacts with sulfur-containing proteins, among other things, and there are plenty of proteins that depend on sulfur for their structure and activity. You can't afford to lose 'em. Long-term exposure to mercury vapor (which the liquid metal is always producing, very slowly,) gives you the best (ie, worst) chance to absorb the element. That's how mercury's toxicity was first noticed, but this can take months or years to develop as the protein damage piles up.

Now, if this guy had been carrying a few pounds of something like dimethyl mercury, then things would be different. That's one of the simplest organomercurials, and it is extremely bad news indeed. Just a few years ago, a research chemist at Dartmouth was poisoned when a few drops of this compound fell onto her latex-gloved hand. It penetrated the glove, then her skin. She didn't notice a thing; nothing seemed amiss for several months. Then neurological symptoms rapidly began to show up, and she died within weeks.

That's about as bad as mercury compounds get, and it still takes time for it to kill you. This Chechen probably thought he was carrying a serious poison, but he was mostly hauling around a rather expensive barbell. Here's hoping he paid a lot of money for it.

Comments (0) + TrackBacks (0) | Category: Chem/Bio Warfare | Toxicology

More Faces in Even More Clouds

Posted by Derek

Talking about the urge to quantify things - even the stubbornly unquantifiable - leads me back to what I spoke of earlier ("Faces in the Clouds", Oct. 20) about finding patterns even in random noise. I think these are two aspects of the same phenomenon.

We seem to have this information-processing machinery in our brains, constantly grinding away trying to integrate the flood of sensory input. Back in the visual cortex, for example, there are layers of neurons that specialize in things like horizontal contrast lines and sideways-moving objects. Further up in the processing, we're especially tuned in to important things like human faces and facial expressions, to the point that people see them in rock formations and half-cooked tortillas. (If anyone thinks I made that last one up, I'd be happy to cite chapter and verse.) Other senses seem to be broken down in the same way, with local processing picking out specialized patterns in the raw sensory stream.

We're looking for ordered data, because random noise doesn't give our brains any traction, and they can't stand it. Noise is the enemy of sensory processing - consider, say, blank-channel TV static. "What do you mean," says the brain, "random flashes of light all over the visual spectrum? That's not how the world works. Things stay pretty much the same color on that time scale, and stuff doesn't just pop in and out that way without leaving a trail of motion. Something's wrong. I'll figure it out, just give me a minute. . ."

If we use our brains to think about non-sensory abstractions, we tend to map them to sensory data so we can get a handle on them. "Employee performance" is a tough concept to picture, but how about a ranking from 1 to 10? That's something we can grasp (whether we should, in the first place, is a topic for another day.)

So we look for lines and curves on our graphs, and clumps of points on our scatterplots. The same systems that served us to warn about crouching sabretooth tigers now try to tip us off to epidemiology. And it wouldn't surprise me a bit if we uncover higher-order structures (or neuronal patterns, at least) that work in a similar way. Higher cortical functions might have taken sensory processing as their model, and set themselves up to do unconscious curve-fitting and shape-filling in the world of logic and causality. Being able to infer cause-and-effect must have been quite a survival advantage, too.

Steve Postrel at SMU wrote me after my earlier post on this subject. He pointed out that it's true that the general public gets basic statistical patterns wrong pretty regularly, but scientists don't do much better once things get past that. He's got a point: one of his examples was the handling of global warming data. There's so much information out there that you can argue just about any direction you want to on the subject. I am not going to get into that debate right now (neither was he!) but whichever side of the argument you take, it's a statistical minefield. There are so many things that can influence the presentation of the data, and the conclusions drawn from it (starting and ending dates for sampling, location of same, error bars of the measurements - when you can even state them, hidden variables or assumptions in the models - it's a mess.)

No doubt about it, whatever the human brain is optimized for, statistics and probability isn't it. (Quantum mechanics sure isn't it, either, come to think of it - and depending on your take, that has a generous dose of probability in it, too.) I suppose we shouldn't be wondering why we don't do it better, and be impressed that we can do it at all. . .

Comments (0) + TrackBacks (0) | Category: The Central Nervous System

October 30, 2002

What Sort of Number Did You Have in Mind?

Posted by Derek

There's a good article by Leandro Herrero in the October issue of Scrip magazine (no online content without a subscription.) He's teeing off on the overuse of numerical measures in the drug industry (and industry in general:)

"Your business education and experience tell you that if you can't measure, you can't manage. . .good managers are sometimes defined by their ability to measure outcomes; bad ones by the apparent vagueness in describing what needs to be achieved."

The way I've heard that first point expressed is "Anything that's measured will be managed." The problem is that many of the measurements are bogus. The urge to quantify things gets the better of us, and we attach numbers to things that either aren't measured well or can't be measured at all.

"A friend of mine once issued an open invitation to dinner for anybody who could unequivocally prove that any performance forecast by strategic planning (be it peak sales, market share, or NPV achievement) had ever been hit. To my knowledge he has not dined with anyone yet."

I wouldn't get a meal out of him myself. Over the years, I've seen some real where'd-that-number-come-from assessments of a compound's potential market. And, as Herrero points out, it does not breed confidence when the marketing gurus suddenly come up with a completely different number, carved into the same stone tablets as the first one. The higher the pressure to come up with an estimate, the less reliable it is. I've long summarized that effect as "You need a number real bad? Well, here's a real bad number!"

"In many areas of management today, many things won't be taken seriously (ie, qualify for funding) if their return on investment (ROI) can't be measured. . . never mind that the number has been cooked up via a combination of clever arithmetic. . .I have seen ridiculous ROIs that have gone down well with management, just because they were done, called ROI, and looked good."

There's the key point right there. People are going to believe what they want to believe, or what they feel that they should believe. Science gets us away from that a little bit, but it's still done by human beings, and these are human tendencies.

I can think of another area where a false attachment to measurement gets people into trouble. Drug companies often have an unhealthy fixation with quantifying their drug pipelines. And once you get serious about counting these things, it's impossible to resist the temptation to set them up as goals: next year, we'll start (insert whole number) projects and recommend development of (insert smaller whole number) clinical candidates. When you go down that road a little further, you come to another hard-to-resist temptation: starting a project (or worse, recommending something to the clinic) mainly just to make your numbers. Those of you working for Big Pharma companies are probably grinning sheepishly right about now. . .

I know, that isn't what's supposed to happen. It's not what anyone set out to do. But it's not where you start with this kind of thing: it's where you end up that counts.

Comments (0) + TrackBacks (0) | Category: Business and Markets | Drug Development

October 29, 2002

Et in Arcadia Ego

Posted by Derek

There's a backlog of pharmaceutical news to catch up on, but I couldn't resist linking to this article from today's New York Times. It's a pet subject of mine, and the only fault I can find is the tone of surprise that comes through in it.

It's titled "Don't Blame Columbus," and it reports on studies on health and life expectancy in Pre-Columbian America. It's the most comprehensive look at the subject yet. Most of this information comes from the bones, naturally, but there's a lot of good information there. Unfortunately for the previous owners of said skeletons, the story they tell is often one of anemia, osteomyelitis, tuberculosis and malnutrition. But at least you didn't have to put up with that for too long: living to the age of 50 was a rare accomplishment - 35 to 40 was more like it.

The study found a long-term decline in health as the populations grew in different areas, which is interesting. But any surprise people have at the general results surprises me. When my brother and I were small children, we accompanied our parents to archaeological digs back in Arkansas. My father was a dentist, and he was there for some forensic work on the teeth of the Indian remains. What he told me back then has stayed with me: these folks had lousy teeth. They had cavities, they had abscesses, impactions, the lot. (The weakened condition of their gums due to lack of Vitamin C probably had a lot to do with it.)

So, growing up, I knew that the Hollywood depiction of Indian life was rather idealized. For one thing, all the movie actors had great teeth. And the young braves weren't like those 24-year-old actors - they were maybe 14. And the ancient medicine man, he wasn't 80 years old at all. He was in his 40s; he just looked 80. You never saw extra tribesmen in the background, hobbling around because of poorly set broken bones or clutching their jaws in pain. No skin problems, no infections, not even so much as a bad allergy - no doubt about it, the tribe to belong to was MGM.

You can imagine how I feel about the rest of the cheap thinking that goes along these lines. Oh, the way preindustrial cultures loved the land, lived in harmony with it while everyone ate the wholesome diet of natural purity and stayed true to those simple values that we've lost touch with. . .spare me. I'm with Hobbes: the life of man in the natural state was solitary, poor, nasty, brutish and short. And let's not forget it.

I'd like to blame Rousseau for the whole thing - after all, he's the usual suspect for introducing the whole Noble Savage concept. (He extended the concept to children, too, of course. Who knows what the history of philosophy would be like had he actually raised any of his brood instead of farming them all out?) But I think that the sources of this mistake - which it is, a terrible one - go deep into human nature. No matter where you go, it seems that there's always a myth of the Golden Age, the simple, pure time when everything was right.

Manure. Fertilizer. The only thing worse than mourning this illusion is trying to do something about it: you could always set up some wonderful political system to bring Arcadia back. The last hundred years have been a stupefying object lesson in what you get when you try.

Well, enough venting for one evening. I seem to have taken off into the clouds of political theory, not bad mileage considering that I started from a pathologist's report. Tomorrow we'll be back on the ground, I promise.

Comments (0) + TrackBacks (0) | Category: General Scientific News

October 28, 2002

A Mystery Gas?

Posted by Derek

Since I did a multipart series on chemical warfare last month, I've had several e-mails asking for my take on the Russian gas used to break up the Chechen hostage situation. The information that I can get from wire-service reports doesn't make for a very coherent picture, but I imagine it's not very coherent in Moscow, either.

First off, I think we can rule out nerve gas itself, or some weaker form of same. None of the victims, as far as I've read, display signs of cholinergic poisoning. Any cholinesterase inhibitor strong enough to knock someone unconscious is strong enough to do a lot more to them, and I'm just not hearing about the symptoms you'd expect. For starters, there are effects on the salivary and sweat glands that are quite noticeable. It wasn't nerve gas.

There's been speculation about an unusual agent known as BZ. This isn't one that I covered in my series of posts, since it's rarely (if ever) been used in the real world. BZ is a CNS agent, probably quinuclidinyl benzilate or a similar compound (the precise formula's never been made public.) It hits the muscarinic receptors, which are involved in nerve gas toxicity, and several others as well. It causes tremors, hallucinations, memory loss and various other odd symptoms. In fact, the lack of a predictable response is the main reason it's never been used much. I don't think that this was what was used in Moscow, although it can't be ruled out.

Whatever agent this was, its main effect seems to have been CNS depression. The loss of consciousness and vomiting reported would be typical of sedative overdoses or alcohol poisoning, for example. I've seen a report wondering if this was plain old chloroform, but I doubt it - the quantity of chloroform vapor needed to do what this did would have probably stripped the paint off the place, for one thing, since it condenses out to the liquid if it hits a cold surface. Sticking with that chemical class would lead you to a Freon of some sort, and I guess I can't rule that out, although it seems an odd thing to use. We're getting very close to medical anaesthetics like halothane, though.

Otherwise, I'd wonder about some sort of aerosol sedative, perhaps fentanyl or another compound that acts on opioid receptors. Getting that into an easily used form wouldn't be easy, but it's certainly not impossible. And it would fit with many of the symptoms that have been described.

What this tragic incident points out is that there's no such thing as the "knockout" gas beloved of thrillers and screenplays. Anything that can induce quick unconsciousness in a person can go on to kill them. No one's found a way around that problem, clearly.

Comments (0) + TrackBacks (0) | Category: Chem/Bio Warfare

October 27, 2002

O Brave New Market, That Has Such Medicines In It

Posted by Derek

I mentioned that Amgen had a rough time with their leptin program, but there are people who benefit tremendously from the protein. There are some people (very few, actually) who are similar to the ob/ob mouse, in that they have a mutation in their leptin protein gene. They tend to have a lot of metabolic troubles, starting with morbid obesity and a terrible blood lipid profile. Administration of the human protein works wonders for them.

So leptin, therapeutically, is an orphan drug. And the identification of patients who can benefit from it is a harbinger of the era of "personalized medicine" that everyone says is coming. They're probably right, because we're actually learning to pick up on more and more cues like this, and finding them is an area of frantic research (and frantic funding.) The push is on to identify things in both directions: positive (who will benefit from Drug X?) and negative (who will show nasty side effects from Drug X?)

That second category could have saved a lot of drugs that have disappeared from advanced clinical trials, or even some that have disappeared from the market. It's quite possible that we'll see some of these brought back from cold storage in some fashion, when we find ways to get around their bad low-incidence problems. This sort of thing (toxicogenomics) is what many drug industry researchers think of when they think of the promise of genomics - who could blame them?

But it's that first category, pharmacogenomics, that'll make life interesting. Look, for example, at some of the cancer therapies in the clinic now. Even though things like Iressa and Erbitux seem to work dramatically in some patients, they completely fail in others. There's no way to tell which group a new patient will end up in - if you could sort them out, you'd only give it to the dramatic-recovery crowd, and tell the others not to waste their time (or their money.)

Or their money. . .there's the rub. What happens when we get to this point, when we can predict who will respond to our new drugs and who won't? The customers sure will be happy - but there won't be nearly as many of them. Face it, when a new therapy for a grave disease (cancer, AIDS, diabetes, etc.) hits the market now, everyone's going to try it out. Even as it fails in a certain percent of users, everyone's going to. . .well. . .buy it. And that's figured into industry calculations. When you think about the potential market for your new drug, you aim for the biggest number of possible patients/customers.

What if you cut that market in half? Or more? What if the only people who buy your drug are the ones that it's certain to work for? You've now got a smaller market. A well-served one, that's for sure, but smaller all the same. But developing your drug didn't cost any less than it did when you developed it for the whole crowd, hit or miss, and it just might have cost even more. So how are you going to do it, selling it to a smaller group?

Well, as far as I can see, the price will go up. It'll have to. No one will care for this one bit, but that's what's going to have to happen. That'll be the surcharge for making sure that the drug does what it's supposed to. And those of us in research will have to get used to the idea that we're going to have to develop even more new drugs as we do now, to make sure that we cover all the patient groups in what used to be larger, less differentiated markets. Given the struggles we're having to develop things as they stand, that's going to be a lively undertaking indeed.

Comments (0) + TrackBacks (0) | Category: Business and Markets | Cancer

October 24, 2002

Of All Sad Words. . .

Posted by Derek

If you want a good example of how something that seems completely sensible can backfire in drug development, look no further than the story of leptin. I remember when this peptide hormone was discovered in rodents in 1995: the news really made a splash among groups working on obesity and metabolic therapies.

If you raised antibodies to leptin and treated mice with them, taking the protein out of circulation, the animals ate like mad. And on the other hand, injecting extra leptin made them turn up their noses at food, even when they should be eating (which for mice and rats is at night - or whenever their dark phase is in the animal rooms.) It turned out that two well-known mutant mouse strains (ob/ob and db/db) were actually mutants of leptin function. The first has bollixed-up leptin protein, the second has a problem with its leptin receptors. Both animals eat heavily and put on serious weight (they're rather odd-looking.) It all fell together.

So you can see how everyone got revved up. Here (after many false starts) was the real eating hormone! Under our noses all along! Some companies took a whack at finding small molecules to affect the leptin receptor, but that didn't pan out well. Trying to find a small-molecule drug to tackle a receptor built for a large peptide is usually a losing proposition (which is why people, these many years later, still have to inject themselves with insulin.)

But Amgen was out in the lead with the protein itself. It wasn't going to be an oral medication, but a real wonder-drug for the terminally obese would be worth injecting, right? While they developed it, the research went on, furiously - and some oddities began to emerge. You'd think, from the rodent data, that really obese humans would be leptin deficient, too. Wrong - not only did they have leptin, they usually had well over the normal amounts. Ahem.

That was disturbing. You're clearly not going to accomplish much by giving more of the stuff to someone who has plenty of it already. The picture that began to form was similar to the role of insulin in Type II diabetes (the adult-onset kind.) Type IIs have plenty of insulin, at least in the first phases of their disease. In fact, they have more than normal. The problem is, their tissues have become resistant to its effects, so the pancreas compensates by pumping out more and more of it. (This can go on for years, until the beta-cells finally start to break down under the strain of constant pedal-to-the-floor insulin secretion - and at that point, your diabetic troubles really start to catch up with you.)

Obese humans are resistant to leptin. No one's sure how that happens, or why (no one's really sure how people get resistant to insulin, either, although there's sure no shortage of theories.) Amgen soldiered on into the clinic, and (despite pulling out all the stops) failed to find any real effects. The craze was over.

These days, everyone in the drug industry who studies metabolism knows about leptin, respects its central role in feeding behavior - and sighs a bit at what might have been.

Comments (0) + TrackBacks (0) | Category: Diabetes and Obesity

October 23, 2002

The Latest Mudfight

Posted by Derek

Today's court battle is between Pfizer and two rival drug teams. On Tuesday, Pfizer was issued a US patent relating to Viagra, and that same day they filed lawsuits against Eli Lilly and Icos (on the one hand) along with Bayer and GlaxoSmithKline on the other. Both are developing rivals to Viagra (Cialis and Levitra, respectively.) Both work on exactly the same enzyme, phosphodiesterase-5, and it's the PDE-5 action that Pfizer used as the grounds for the suits.

The patent has claims for some chemical matter, and several specific compounds. Their ace, though, is near the end of the list: they claim the treatment of erectile dysfunction by administering a PDE-5 inhibitor. Any inhibitor. You can make a compound that hits that enzyme, says Pfizer, but if you treat erectile dysfunction with it then you can expect to hear from their lawyers.

This is a pure "method of treatment" patent, and regular readers will know that I don't have a very high opinion of those, on principle. The interesting thing is, Pfizer may not have that high an opinion of their chances here. They've already lost to Lilly and Icos in the UK over just this same issue, and the European Patent Office followed suit last year, citing that decision. The argument was that this mode of action was already known before the patent was applied for, and that'll surely be the defense that Lilly/Icos and Bayer/GSK will use here.

So why is Pfizer even trying? Easy: they'll be trying to tie things up so thoroughly that their competitors will be delayed in getting their drugs to market. When you consider that their legal bills will be perhaps a few weeks worth of Viagra profits (at most,) then it looks pretty cost-effective to make the attempt.

And lest anyone think that I'm just picking on Pfizer, I should add that plenty of other companies in the industry have patents that could allow them to do the same thing in such a situation. And most, probably all, of them would. After all, it makes financial sense. But I have to say, I'd like to live in a world, and work in an industry, where it didn't.

Comments (0) + TrackBacks (0) | Category: Patents and IP

October 21, 2002

Structure-Inactivity Relationship Would Be More Like It

Posted by Derek

Talking about pattern recognition leads a medicinal chemist to thoughts of SAR, structure-activity relationships. We spend a lot of time putting together tables of data - changes on one part of the molecule tabulated on one axis, changes in some other region on the other axis, and boxes filled in with the assay results.

And we spend a lot of time looking at all these piles of data and trying to see if they're telling us anything at all. Most of the time, they are - just not as much as we'd like. (If the project stubbornly refuses to make sense, though, we eventually find a reason to kill it. There's no point in working on something when you have no idea where you're going, why you are where you are, or how to get anywhere.)

No, most of the time what the data tell us are things like: "You can't put anything bulky here. Every time you do, the activity of the compound goes completely into the tank." Or "You can't put a basic nitrogen here - see that one-hundred-thousand-fold loss of activity?" We get mostly negative data, telling us about the things that don't work. Not that that's not valuable. It tells us to go find something else to do to the molecule, and not to waste much more time on that area.

The problem is, you can go for quite a long time without hearing anything positive, like "See? You had a methyl group here and you changed it to propyl. It helps! Maybe you should try butyl, and don't forget to go back and fill in that gap by making the ethyl." And even when you do get some data like that, there's always a limit to how much you can run with it.

I don't know if I've mentioned this before, but I'm convinced that there are no linear relationships in the real world. Everything will fall off the line if you push it hard enough in one direction or another. SAR trends tend to not stand up to much rough treatment at all: methyl, OK. Ethyl, a bit better. Propyl, great! Butyl. . .lousy. Pentyl (just to be sure,) terrible. Isopropyl (to see who's fooling who,) unspeakable. And so it goes. So much for the robust trend. You'll also get many parts of a molecule where just about anything fits OK. Nothing helps, but nothing hurts - until you finally reach the end of that empty cavity the stuff must be poking out into, and wham, back into the dumper.

What keeps you coming back are the occasional jackpots. Once in a while, you make something that (according to the model you've built in your head) should be another plain-vanilla compound, and for some reason it works. Time to change the model! There's always the real possibility that the next compound is going to break the mold, and most of the time the mold sorely needs breaking. So you keep at it.

The frequency with which those winners show up is similar to that used by slot-machine designers, to keep the customers pumping money in. (Quite a business model, that.) The more I think about it, the more I think that it's something the "Intelligent Design" people should look at. I'm not sure that this is the sort of Deity that they're thinking of, though. It's clear that Whoever's in charge of drug research has a really foul temper and a buttered-stairs sense of humor.

Comments (0) + TrackBacks (0) | Category: Drug Development

October 20, 2002

Faces In the Clouds

Posted by Derek

In the last post I mentioned the tendency people have to look for causes. It's innate; there's nothing to be done. We're conditioned by the world of our senses: a leaf falls in front of us, so we look up to find the tree. And this works fine, most of the time, for the macroscopic objects that we can see, touch, hear and smell.

It stops working so well on the microscopic scale. (And it goes completely to pieces on the really submicroscopic scale, when the colors of quantum mechanics start to seep through into the picture, but that's another story.) When we don't have direct sensory experience of the steps in a process, our intuition can be crippled. You can learn your way around the problems, but that has to be a conscious effort - the rules that we've all been practicing since birth won't be enough.

And this is where a lot of really bad ideas are born. Take the idea of a "cancer cluster." If we see a pile of large rocks with no others around, we assume that something moved them there. If we see a group of similar plants, we assume that they've grown there together from seeds or roots. But what is there to say when there's a group of cancer victims in a given area?

The temptation is overwhelming to say "something put them there." But it doesn't have to be so. People who haven't thought much about statistics don't usually have a good feel for what "random" means. It doesn't mean "evenly scattered in no particular pattern." It means "no particular pattern, and let the chips fall where they may." Looked at locally, a large random distribution isn't even at all - it's lumpy and patchy. Show a dozen untrained eyes a large scatterplot of random numbers and they'll never guess that there's no design behind it. Surely that bunch down there means something? And that swath that cuts over this way! Imposing patterns is what we do.
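
(If you'd like to see this for yourself, here's a minimal sketch in Python - purely illustrative numbers - that scatters a thousand points uniformly at random and then counts how many land in each cell of a ten-by-ten grid. The counts are anything but even:)

    import numpy as np

    rng = np.random.default_rng(0)
    points = rng.uniform(0, 10, size=(1000, 2))   # 1,000 points, no design behind them
    counts, _, _ = np.histogram2d(points[:, 0], points[:, 1],
                                  bins=10, range=[[0, 10], [0, 10]])
    print(f"mean per cell: {counts.mean():.0f}, "
          f"emptiest: {counts.min():.0f}, most crowded: {counts.max():.0f}")
    # Ten points per cell on average, but some cells hold only a handful and others
    # nearly twice the average - the "bunches" and "swaths" the eye latches onto.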

Discriminating between these accidental groups and any that might have a cause is fiendishly difficult. Generally, the only proof is statistical - you end up saying that you can't reject the null hypothesis, that this group is not larger than you would expect by chance. So in the absence of any hypothetical cause, there's no reason to assume that it's anything other than noise. Does that convince anyone? No one that really needs the convincing.
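
(Here's the kind of arithmetic that does the work, sketched in Python with made-up numbers: a hypothetical town where the background rate predicts four cases and eight actually turn up. The question is how often that happens by chance alone:)

    from math import exp, factorial

    def poisson_tail(k, lam):
        """Probability of seeing k or more cases when lam are expected by chance."""
        return 1.0 - sum(exp(-lam) * lam**i / factorial(i) for i in range(k))

    print(f"P(8 or more cases, 4 expected) = {poisson_tail(8, 4.0):.3f}")   # about 0.05
    # Roughly one town in twenty will show a "cluster" like this with nothing behind
    # it at all - and there are a lot more than twenty towns to go looking in.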

Statistics are all that'll save you, though, because the alternative is just noise and advocacy: trying to settle arguments by who's louder and more convinced that they're right. People who understand the math get upset when they argue with people who don't, because they can't make themselves understood. Their best evidence is in a language that the other side can't speak. Likewise, the advocates get terribly frustrated with the statistics-mongers, because they seem to be in the business of denying what's right in front of their eyes.

And that's why scientists and engineers are so happy to talk with other scientists and engineers. It's not that there aren't arguments - oh yeah, plenty of 'em - but there's at least a chance that you can convince people with data. Outside of those fields, I've come increasingly to think, the chances of doing that are often minimal.

Comments (0) + TrackBacks (0) | Category: General Scientific News

October 17, 2002

Not Even Funny

Posted by Derek

I'm late to this particular party - see Charles Murtaugh and Medpundit for the low-down on a particularly irritating LA Times column. (It requires registration to read, which is fairly irritating all by itself.)

In a nutshell, the writer attempts to blame environmental factors for many cases of breast cancer, specifically chemicals produced by the very companies that are working on treatments. This comes very close to one of the things that will set off even the most mild-mannered pharmaceutical researcher: the conspiracy theory that says that They're Making Us Sick Just So They Can Sell Us Their Drugs. (That one's right next to They've Really Got A Cure, But They're Just Waiting Until More People Are Sick.)

Well, I'm not exactly the most mild-mannered pharmaceutical researcher, myself. And this stuff makes me want to throw a one-liter filter flask at the person who espouses it. If the author wants to indulge in stupid breast cancer etiology, why not go for the deodorant theory? That one's even more mindless.

I understand the human tendency to look for a proximate cause for everything, and to search for patterns even in random noise. It's even more tempting to think that the answer's been right under our nose (or under our arms!) the whole time. "Aha! We should have known!"

But this is offensively foolish stuff. I don't have the time to dismantle it thoroughly enough tonight, but perhaps I'll give it another kick in the shins next week. We'll talk about real epidemiology instead of inflammatory guano.

Comments (0) + TrackBacks (0) | Category: Cancer | Press Coverage

October 16, 2002

Cloning's Growing Pains

Posted by Derek

Ian Wilmut and his colleagues have an interesting review in a recent issue of Nature (no web link) on the status of mammalian cloning. It's still so difficult that it almost qualifies as a stunt. Several species have had the nuclear-transfer technique that produced Dolly the sheep applied successfully (if you can use that word for a technique that has at least a 95% failure rate.) But others haven't, and it's not clear why some work and some don't (for example, mice and rats, respectively.)

What the article makes very clear is that the animals produced in this way are far from normal, and that we don't even have a good handle on the extent of their abnormalities yet. (In cases like cattle, we're going to have to wait years to see how they age, for one thing.) They point out that close examination of even the young cloned animals turns up differences, many of which are probably deleterious.

Why all the problems? Shouldn't the genetic material in the new nucleus just pop right into the cell and go to work? These experiments have been a dramatic demonstration that any scheme that treats a cell and its nucleus as separate entities has serious shortcomings. There are epigenetic influences at work (changes in inheritance by means other than changing DNA sequence,) and we're just barely starting to understand them. Subtle chemical changes in the DNA bases and their associated proteins can lead to large differences down the line, and it appears that some of these signals get scrambled and mismatched during the nuclear transfer.

These are not secrets; the workers in this area have been very upfront about all these problems. (Wilmut published an article not long ago titled "Are There Any Normal Cloned Mammals?") Many researchers are using the technique because these problems exist, actually - it's a good way to study phenomena that would otherwise be hard to unravel. But all this makes me think that those persistent reports of a cloned human baby are probably nonsense.

They'd better be. With the state of the art being what it is, anyone who has actually tried this on humans is going to have a lot to answer for.

Comments (0) + TrackBacks (0) | Category: General Scientific News

October 15, 2002

That Voodoo That We Do

Posted by Derek

I mentioned in passing that getting cells to express a new gene's protein is voodoo. That's pretty close to the technical term of the art for it. Gene therapy is the high-profile application of the technique, but it's the bread and butter of molecular biology. I can tell you that a lot of drug company research would come to a lurching halt without cloned proteins.

One of my "Laws of the Lab" (more of those are on the way) is "When there are twenty ways in the literature to do something, that means that there is no good way to do it." This applies, in truckloads, to protein expression. When you consider the number of different cell lines you could try and the number of vectors you can use to get your DNA into them, you're already faced with plenty of choices.

Then you have the various things you can add to the main DNA sequence to try to get things to work, once they're in the cell. A good deal of protein-expression time is spent trying out different promoters, sequences that flank the one you're interested in that serve to lure the transcription machinery over to it. "Pick me! I'm important! You need lots of me, and you need it now! And get it right, will you?" is the sort of message these sequences are intended to send. After all, it does no good to insert DNA that doesn't get read at a useful level.

You generally want as much protein production as you can get (although it's possible to overdo it, in which case you might start getting insoluble clumps as the stuff piles up inside the cells.) There are all sorts of systems to use that will give you a sign of whether or not your protein is actually being expressed, many of them quite ingenious. A common theme is to put in a marker gene along with your own, and many times that's a gene that confers an unusual resistance to some antibiotic. You transfect in your DNA, grow the cells, and hit them with the chemical agent - anything that survives has a good chance of having taken up and expressed the DNA you gave it (although you'd better check it by another route to make sure you aren't getting fooled.)

The big question, though, once you're sure that the stuff is being made, is whether it's coming out in a form that you can purify, and whether it's actually working the way it's supposed to. Those two questions are often tangled up in a knot - if your protein isn't clean, maybe that's why it isn't working. (Unfortunately, there are many times when your protein looks clean and still does the equivalent of floating belly-up to the top of the tank. Keeps everyone on their toes.)

There are plenty of separation techniques to fish your proteins out of cells. Most of them involve first breaking the cell walls and separating out the larger chunks like the nuclei (which you usually don't want,) the empty cell membrane (which you might, if you know your target protein lives there) and the cytoplasmic contents. Centrifugation is the standard way to do all this. There's a fair amount of gunk in the cytoplasm that you can "spin down," too (like the endoplasmic reticulum - to a cell biologist, "ER" isn't primarily the name of a TV show.)

Once you finish that, you're left with a mixture of thousands of different protein, carbohydrate, and lipid components. This is when you'll be glad that you overexpressed your target, because you'll need all the help you can get to make it stand out from the rest of the stew. Sometimes expression levels are high enough that you can do some minimal cleanup and use the stuff as is.

If that's not the case, a popular trick called "His-tagging" might be the answer. You set things up so that the protein ends up with a run of histidine amino acids at one end (which you hope is far from the action that you care about.) All those imidazole side chains in a row will coordinate with metals, so you can pass the crude brew over a metal-containing column, wash all the non-histidine-rich stuff out, and then change to a stronger solvent to wash off your desired stuff. It's usually a safe bet that you won't have many natural proteins in there with a dozen histidines in a row - if there are such beasts, I'd like to know what the heck they do.

Another wild card is all the processing that proteins undergo in the cell - folding (sometimes with the assistance of other chaperone proteins,) phosphorylation, glycosylation and so on. As I understand it, if you run into trouble at this stage, you're pretty well sunk. Sometimes you can rescue things a bit by harvesting the protein at a different stage in the life cycle of the cells, but it's usually time to look for another cell type to start over with. This can happen especially if you go too far afield, phylogenetically. Bacteria, for example, are notorious for producing hopelessly hosed versions of some human proteins - although if you can engineer them just the right way, they can be tremendous producers. And (unlike tissue cultures) they're equipped to live on their own and take care of themselves. Yeast fall into that category, too - robust in their way, studied out the wazoo, but not always reliable for getting active protein.

Insect cells, though, are pretty good. A particular strain called SF-9 from the armyworm moth is widely used, in combination with a baculovirus vector, which has the advantage of not being infectious in humans. It doesn't always work, especially with proteins whose glycosylation pattern is critical, but it's one of the first things to try.

Mammalian (or even human) cells are a better bet to produce active protein, but they're often trickier to work with. A favorite line is the beloved CHO (Chinese Hamster Ovary) cells, which are often used when the expressed protein needs to be part of a living cell system for the assay. There are other common lines derived from mice and macaque monkeys. Moving to us, there are standard cell lines from human kidney or liver, and then there are the famous HeLa cells, one of many from human tumor sources. You don't see these types used as much for large-scale transfection/overexpression, though, because they don't necessarily give huge levels of protein, and any vector that can infect them can infect you, too.

So, getting things to work just the way you want them to involves manipulating a lot of variables, not all of which are well understood. Not to mention the ones about which we don't understand squat. Still, these experiments are a regular feature of any molecular biologist's life, and in a drug company we expect a pretty good success rate at eventually getting the proteins we want. It just depends on how much time and effort you want to throw at the problem - and what problem doesn't at least partially depend on that?

(For those who want more, here's a useful guide from one of the big commercial players in the area. It goes into details that I've skipped over, like various funky ways to get your DNA into the cells, and some fine points of how to grow them and keep them happy.)

Comments (0) + TrackBacks (0) | Category: Drug Assays

October 14, 2002

Gene Therapy Decisions

Posted by Derek

There's been a flurry of news about gene therapy, a high-risk high-reward area of research from the very beginning. The biggest success stories came recently in the treatment of X-linked severe combined immunodeficiency (SCID,) the so-called "bubble boy" disease. But the course of true therapy never did run smooth, and there have been potentially dire complications.

SCID is fortunately rare, because it's a bad-news condition. Patients are essentially left without a functioning immune system, which makes everyone in that position die early from opportunistic infections. The sort of thing that would give a healthy individual a nasty cough for a few days is a fatal illness if you don't have T-cells and their partners. The most common genetic defect that leads to this condition is a loss of the enzyme adenosine deaminase, but there are several others that will put you in the same boat. The recent good news/bad news incidents concern SCID mediated by the loss of a protein called gc (for gamma-chain,) which is involved in cytokine signaling. There are some significant differences in trying to treat these two varieties, but gc-loss is probably easier to treat (a relative judgement if ever there was one; they're both tough.)

The standard therapy is bone marrow transplantation. This uses tissue from a matched healthy donor, usually after some level of intentional destruction of the existing marrow. When things really are matched identically, the prognosis is excellent, but the problem is that finding such a tissue match isn't always easy. A lesser degree of similarity, HLA-haploidentical tissue matching, is the next option. Survival rates in those cases are lower, although still around 75%, which most surely beats an early and certain death. But these patients don't usually get the full range of their immune response back. Specifically, B cells and NK cells aren't restored to normal levels, and even T-cell counts can start falling with time.

So there's room for improvement, and if you're a patient for whom no good tissue match exists, there's room for a lot of improvement. Thus gene therapy. The basic idea is similar to using bone marrow from a donor, only you donate your own marrow, newly refurbished, to yourself. The original marrow cells are replaced with genetically altered cells which have had the proper gene spliced into them.

Which sounds reasonably simple, but getting the gene into the cells is the voodoo part of the whole sequence. There are any number of ways of doing that, each with their known advantages and disadvantages, and each with plenty of unknown things waiting to emerge. Much of the progress in gene therapy has come from refining the vectors used to introduce the genes, but it's still a pretty crude process. In the standard method a crippled form of a retrovirus is used, one without RNA sequences for some key proteins that it would need to reproduce itself.

The problem is, these retroviruses go around jamming in genetic material all over the place. Sometimes it'll end up in a place where it can get transcribed into active protein, and sometimes it won't. If it inserts right into the middle of some key cellular gene that has to be read off later, the cell will probably die when it tries to do that. You just incubate as many stem cells as you can get, and hope for the best.

In several of the patients, that's what they got. They seem to have completely restored immune systems, a first for non-tissue-matching SCID patients. But in one case, the gene appears to have inserted itself into precisely the wrong place, making nonsense out of a gene that codes for a known growth-checking protein called LMO-2. This could have happened in only one cell out of the entire transplant, but one cell is enough. Loss of this protein has sent it into full-tilt reproduction and growth, which is another word for cancer. A new man-made form of leukemia was the result.

Analysis of the proliferating T-cells showed that, indeed, the necessary protein had the viral sequences wedged into it. The boy involved has a family history of a higher incidence of tumors, and he had a chicken-pox infection after his transplant (which must have been a scary test of its efficacy.) Either of these could have made the situation worse. He's receiving chemotherapy now, and as of last report the prognosis is cautiously optimistic that the rogue cells can be brought under control.

So, does this stop the gene therapy world in its tracks? Not at all, as it turns out. In what I think is a very realistic risk/reward appraisal, an FDA advisory committee met last week and decided to press on with such experiments in the US. After all, it's the only chance these patients have. And a pediatric oncologist for the National Cancer Institute put it well: "If we threw out every therapy in cancer that causes cancer," she said, "we would get rid of some of our most effective ones." For better or worse, that's the state of the art. Good luck to all involved.

Comments (0) + TrackBacks (0) | Category: Biological News

October 13, 2002

Nobelity and Lesser Nobelity

Posted by Derek

When I referred to Nobels this year as being well-deserved, that got me to thinking. How many scientific Nobels haven't been? If you go back to the early years of the awards, there actually are some stinkers. And there are a few mild head-scratchers, like Einstein winning for the photoelectric effect (rather than the still-controversial-at-the-time relativity.) But in recent decades, it's hard to find many problematic Chemistry, Medicine, or Physics prizes. As a chemist, when I look back over the list of laureates in my field, I don't find much to argue about.

The three-awardee limitation has caused problems now and then. And the timing of the awards in general has been arguable - sometimes the committees just wait too long before honoring someone. (Maybe that's why the Karolinska folks went out on a limb by honoring Stanley Prusiner and the prion hypothesis a couple of years back, probably the most out-on-the-edge medicine Nobel ever. Fortunately, the hypothesis seems to be holding up.) And there are always people that could have won, but never did.

But just check out the other prizes - you couldn't ask for a clearer example of the differences between the sciences and the humanities. Whatever controversies there are in the science prizes start to just look like quibbling compared to what goes on with Literature and Peace. Think of the percentage of those that have been won by people considered by many to be frauds or windbags. Then subtract out the nonentities, and make allowances for clearly deserving candidates who never won - and what do you have left?

You find blunders on the order of say, tapping Maurice Maeterlinck over Tolstoy for Literature. And while we're on the subject, how about ignoring James Joyce, ignoring Vladimir Nabokov, Jorge Luis Borges. . .write your own list, it's easy. Meanwhile, just in English-speaking awards, we have Sinclair Lewis (hrm,) John Steinbeck (hrmmm,) Pearl Buck (hrrrrrrmmmmm.) Other languages get to join the fun, too - how about Dario Fo in Italian? Why not give it to George Carlin while you're at it, a reasonably close equivalent in English?

I won't even get started on the Peace prize. Deserving people and organizations have won it, but so have blood-drenched maniacs, delusional self-promoters, and insufferable twits. (You can attach your own names to those as you wish; I can assure you that I'm thinking of my selections right now.)

No, the science prizes are oases of sanity compared to those two: Literature winners who have produced nothing except rivers of drivel, and Peace winners who wouldn't know peace if it crawled up their leg. Heck, as long as they award the Chemistry prize to someone who knows what the periodic table is, we're ahead of the game.

Comments (0) + TrackBacks (0) | Category: General Scientific News

October 11, 2002

Alzheimer's Vaccine Refuses to Die

Posted by Derek

The Alzheimer's vaccine idea that I've covered every so often is back in the news. Two studies coming out in Nature Medicine give it a boost. One shows that the ill-fated Elan clinical trial (which came to a screeching halt when some patients developed brain inflammation) actually did lead to antibody production against the beta-amyloid protein. The antibodies recognized various types of amyloid deposits, and crossed into the brain. (That last part is what has amazed everyone since the first animal results - antibodies aren't supposed to be big players across the blood-brain barrier.)

The other paper reports that a very similar response in rodents can be achieved using a much smaller variant of the amyloid protein. That should lower the chance of inflammatory side-effects considerably, and gives new hope to human studies. This is looking like one of the crazy ideas that just might work - stipulating, for the moment, that amyloid really is the cause of Alzheimer's. . .

Comments (0) + TrackBacks (0) | Category: Alzheimer's Disease

October 10, 2002

Another Stuffed Shirt

Posted by Derek

Talking about the Nobels brings to mind a story from Sydney Brenner, one of those honored with the Medicine prize this year. He related this story in a column he did for Current Biology a few years ago (8 (23), 19 Nov 1998, R825 if you want to look it up.) He was visiting a company in Japan ("W---- Pharmaceuticals") that made some sort of herbal brew from fermented garlic, which tasted just as awful as you'd guess. It had to be given in capsules, but the dose was large enough that they couldn't sell them filled without losing many of the packages to breakage and leaks. So (turning this into a marketing tool,) they sold the stuff as a kit, with empty capsules and a dropper to make your own dose.

Brenner mentioned that he'd like to try the stuff, so they trotted off and brought him one. While he was mixing up his garlic dose, he seems to have had an inspiration: he swallowed it, then cried out, gave a strangled gurgle, and pitched off his chair onto the floor.

Well, that got everyone's attention, as you can imagine. He relates that he kept one eye partially open to gauge the effect of his performance, and what he saw was a stunned roomful of Japanese businessmen with the blood draining from their faces. He claims to have noted a couple of expressions that he interpreted as preliminary thoughts about what to do with the body.

He let them off the hook pretty quickly, which was probably wise, springing to his feet, waving and laughing to the hysterical relief of his hosts. As he says:

"I am quite famous in Japan for this, and every now and then, somebody comes up to me, shaking their head, nudging me and saying "W--- Pharmeceuticals!"

My kind of guy! And the Nobel he shared is another well-deserved one. The study of the roundworm C. elegans has been an extremely useful technique, since it's multicellular, but not too much so. You can follow the fate of every single one of its cells as it develops, and some rather odd stuff happens. For example, as it turns out, not all of them make it. Particular excess cells die out at particular times, and this programmed cell death (apoptosis) has now been the subject of more research articles than you can shake an Eppendorf vial at. (That's what the mention of "cancer treatments" that the prize got in the popular press meant - tumor cells generally should have fallen on their metabolic swords and died at some point, but mysteriously haven't.)

This work has set off discovery in all sorts of other areas, too. There are a surprising number of cellular pathways that are conserved all the way to humans, and it's a heck of a lot easier to study them in the worms. Looking for these is almost a guarantee of working on something fundamental, because anything that's similar across that sort of phylogenetic gap is bound to be pretty important. Getting a crib sheet to the key pathways along with a fine model organism, all in the same research program - that's how to do it, all right.

Comments (0) + TrackBacks (0) | Category: General Scientific News

October 9, 2002

Nobel Time!

Email This Entry

Posted by Derek

Congratulations to John Fenn, Koichi Tanaka, and Kurt Wüthrich for sharing the 2002 Chemistry Nobel. The common theme is characterization of proteins and other macromolecules, and the discoveries are (respectively) electrospray ionization for mass spectrometry, laser desorption for the same, and 2-D NMR techniques.

I'll write more on this tonight, but for what my opinion's worth, I'd say these are well-deserved accolades for some important techniques that otherwise wouldn't be as well recognized.

Comments (0) + TrackBacks (0) | Category: General Scientific News

The Bigger They Are

Email This Entry

Posted by Derek

The Chemistry Nobel this year doesn't include any household names, even by the standards of my branch of the science. But (as I said this morning,) I think the award is a good one. The ability to deal with large molecules like proteins as molecules is a relatively recent development. Before these sorts of methods were worked out, you stepped into another world when you worked with such things. The precision of "real" organic chemistry (such as it is!) disappeared.

A newly discovered protein might weigh, oh, 60,000 or so, give or take a few hundred units (or a few thousand.) That's pretty fuzzy, when you compare it to the world of small molecules, which can be measured out to four decimal places. (Doing that, you have to correct for picky things like the 1% abundance of isotopic carbon-13 atoms rather than the usual carbon-12 - not the sort of thing that kept the protein chemists up at night, that's for sure.) And the 3-D structure of your new beast? Good luck! Maybe it would come to you in a vision. . .failing that, you could try to crystallize it and hope for the best in an X-ray analysis. But many proteins don't crystallize (or at least don't crystallize well on human time scales,) many that do don't give good data, and even the ones that give good data don't always translate into realistic structures. After all, the proteins floating around in your cells aren't packed in a crystal lattice with billions of their identical twins. You'd better hope they aren't, anyway. They're surrounded by water, other proteins, lipids, and who knows what.
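
To put a number on that four-decimal-place business, here's a quick back-of-the-envelope sketch in Python - purely my own illustration, using textbook atomic masses and aspirin as the guinea pig, nothing to do with the prize work itself:

    # Rough illustration only: textbook atomic masses, aspirin (C9H8O4) as an example.
    MONO = {"C": 12.000000, "H": 1.007825, "O": 15.994915}  # most-abundant isotopes
    AVG  = {"C": 12.011,    "H": 1.008,    "O": 15.999}     # natural-abundance averages
    ABUND_13C = 0.0107                                       # ~1% carbon-13

    aspirin = {"C": 9, "H": 8, "O": 4}

    mono = sum(n * MONO[el] for el, n in aspirin.items())
    avg  = sum(n * AVG[el]  for el, n in aspirin.items())
    # Chance that at least one of the nine carbons is a carbon-13 (shifting the peak up ~1 unit):
    p_heavy = 1 - (1 - ABUND_13C) ** aspirin["C"]

    print(f"monoisotopic mass: {mono:.4f}")      # ~180.0423
    print(f"average mass:      {avg:.2f}")       # ~180.16
    print(f"P(at least one 13C): {p_heavy:.1%}")  # ~9%

Scale that carbon-13 bookkeeping up to a protein with a few thousand carbons, and you can see why "give or take a few hundred units" was about the best anyone could do with the older methods.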

So, before the mass spec techniques of today's prize were developed, you could put a big ol' protein into a mass spectrometer, sure - and get an extraordinary mess of fragments out the other end. That's not always bad, of course (one of the points of mass spectra is the information that the fragmentation pattern provides,) but you'd like to be able just to see the mass of the parent, too. Now we can. Ridiculously huge molecules can be made to fly off, intact, into the hard vacuum of the mass spectrometer, there to be sorted by mass and charge. A few years ago, some lunatics even tried this on an intact virus. (PDF file.) They ionized it (without destroying it, thanks to these methods) and flew it down the mass spectrometer. When they collected the virus particles at the other end, they were still infectious - the only viruses to survive an ionizing flight in a vacuum, if that's the verb to use with a virus. (Unless, of course, they're raining down on us from space, a possibility this experiment does not dispel.)
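
If you're wondering how an intact mass actually gets read off something that big, the trick is worth a quick illustration. Electrospray hangs a whole series of protons on the molecule, so one protein shows up as a ladder of peaks at different charges, and any two adjacent peaks let you solve for the neutral mass. Here's a toy calculation - my own sketch, with invented peak values picked to look like a mid-sized protein, not data from any real spectrum:

    PROTON = 1.00728  # mass of a proton, in mass units

    def neutral_mass(mz_low, mz_high):
        """Deconvolute two adjacent electrospray peaks (charges n+1 and n) into a neutral mass."""
        # (M + (n+1)*PROTON)/(n+1) = mz_low   and   (M + n*PROTON)/n = mz_high
        n = round((mz_low - PROTON) / (mz_high - mz_low))  # charge states come in integers
        return n * (mz_high - PROTON), n

    # Invented peaks, roughly where a ~17,000-unit protein would show up:
    mass, charge = neutral_mass(1131.0, 1211.7)
    print(f"charge on the higher peak: {charge}+")  # 14+
    print(f"neutral mass: {mass:.0f}")              # ~16950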

The same goes for NMR, the veg-o-matic analytical technique of the organic chemist (thanks to all the tricks you can play with it.) Here's a brief history: The original method (Nobel Prize!) showed you the hydrogens in a molecule, and that's still the first thing we do. Want to see the carbons, instead? You can tune it for that, as well as plenty of more exotic nuclei. Then, in the 1950s and early 1960s, it was discovered that the splitting of the NMR lines (coupling constants) would tell you the dihedral angle between the two adjacent hydrogens that caused it, and suddenly 3-D structural information began to be extracted (as well as another Nobel or two.) Then the nuclear Overhauser effect was exploited (if you don't know how NOE works, I'll need to see payment - in cash or precious metals - before I explain it. Inquire within.) That tells you if particular hydrogens are close together in space, no matter how the rest of the molecule might be connected. More 3-D structural information started to come into focus.
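
That coupling-constant-to-angle relationship goes by the name of the Karplus equation, and it's simple enough to sketch. The numbers below are one generic, textbook-style parameterization for vicinal H-C-C-H couplings - real work re-fits them for each class of compound, so treat this as an illustration of the shape of the curve, not a lookup table:

    import math

    A, B, C = 7.76, -1.10, 1.40  # Hz; generic parameters for vicinal H-H coupling

    def karplus_j(dihedral_deg):
        """Predicted three-bond H-H coupling constant (Hz) for a given dihedral angle."""
        phi = math.radians(dihedral_deg)
        return A * math.cos(phi) ** 2 + B * math.cos(phi) + C

    for angle in (0, 60, 90, 120, 180):
        print(f"{angle:3d} deg -> {karplus_j(angle):4.1f} Hz")
    # Big couplings near 0 or 180 degrees, small ones near 90 - so a measured J value,
    # run backwards through this curve, pins down the local geometry.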

The next big thing was 2-D NMR spectra, where you could extract (among other things) all the possible coupling constants or all the possible NOEs simultaneously. (These sorts of techniques will take most small unknown molecules and nail their structure up on the wall in a matter of minutes or hours, the sort of thing that used to take years of hair-pulling effort.) Now we're getting to the area of today's Nobel: applying these techniques to really complex molecules, like proteins, allows a look at their real structure. That's the structure in solution, with whatever added molecules you want or need. In short, it gives you a look at the real animal, instead of a stuffed and mounted version (which, as mentioned above, is more like what X-ray crystallography does.)

There are limitations. Some kinds and sizes of proteins don't give good spectra, and many of them live in environments too complex to (yet) be reproduced in an NMR experiment. But the field's moving right along. If we're going to realize the promise of medicinal chemistry, we're going to need as much of this sort of work as we can get. The molecules of the living cell aren't special - they're big, they're complex, they do amazing things - but they're just molecules. It's good to be able to work with them that way.

Note: for a very nice technical discussion of today's prizes, see this PDF from the Royal Swedish Academy. Try to avoid most newspaper articles, since (as usual) the subject matter of the awards will be unrecognizably diluted.

Comments (0) + TrackBacks (0) | Category: General Scientific News

October 8, 2002

Genetic Optimism

Email This Entry

Posted by Derek

The genetic news of the day, subject of good-sized headlines in the Wall St. Journal and elsewhere, is an upcoming paper in PNAS on a candidate cancer gene called DBC2. Some of these abbreviations are pretty recondite, but not this one - it stands for "Deleted in Breast Cancer," which is pretty tame by the standards of genomic nomenclature.

These researchers (at Cold Spring Harbor) have looked at a lot of different cell lines, and they've spent years tracking everything down. This gene seems to be altered in a number of breast and lung cancers, and (equally importantly) doesn't seem to be changed in normal tissue samples. There's a reasonable chance that DBC2 mutations are indeed a causative factor in some of these cancers, and the evidence is good enough to get a lot more people working on it (which is no doubt happening as we speak.) Still, this would be a good time for everyone to recite the Pharmacogenomics Pledge. All together now, especially you folks at New Scientist:

Correlation Does Not Imply Causation

I singled out New Scientist because of lines like this one from their article:

"Hamaguchi thinks treatments based on switching on the gene, dubbed DBC2, could be available in three or four years."

If he really said that, then I hate to be the bearer of bad news: he's almost certainly wrong. Even if they found such a treatment this afternoon, it would take more time than that to make it available. Development, testing, full-scale clinical trials, regulatory scrutiny - it really adds up.

And that's after the hard part, finding the treatment. The problem is, switching on an individual gene isn't something that we're really good at - we tend to switch a few hundred others on (and switch a few hundred others off) when we try that via drug therapy. (And keep in mind that this gene is defective in many cancer cell lines, so presumably switching it on won't do much good in those cases.) This sounds like a possible candidate for gene therapy, but how to apply that to solid tumors is a non-trivial question.

If it were a question of switching the gene off, there would be some hope from either an antisense DNA approach (not that it's easy to get that to work - no drug has made it yet, despite years of effort,) or through a very interesting new technique called RNA interference. That needs to be the subject of another posting entirely, but it's potentially promising - for turning things off, that is.

As the press articles generally make clear, no one knows what the actual function of DBC2 is. It's from somewhere in the wilderness of chromosome 8, and it seems to be from a family of proteins about which almost nothing is known. It doesn't look like a typical cell-surface receptor, nor like the usual classes of enzymes. I'll go out on a limb and guess it's some sort of transcription factor, but that's a pretty broad category.

The real accomplishment of this work is finding the gene, not finding ways to use it. Good new drug targets are getting harder to find, as is becoming painfully clear these days, and a whole new potential class of them is a welcome development. Figuring out what they might be is bound to lead to some useful information. Let's hope it leads to a drug, too - but it's going to be more than four years before that happens, I'm sad to say.

Comments (0) + TrackBacks (0) | Category: Cancer

October 7, 2002

Idle Hands

Email This Entry

Posted by Derek

Events don't leave me much time to blog tonight, and I'm staying busy at work as well. Without going into job-terminating levels of detail, I'll say that we're at the stage now where we not only have to worry about what molecules to make, but how we're going to make them.

Those of you in the field know what I'm talking about. There are any number of ways to make complex molecules if you're just looking to finish with a few milligrams. Big natural products syntheses never finish with more than that, because the earlier stages would have to be performed in a cement mixer. But for the smaller, more drug-like molecules that labs like mine are supposed to turn out, we can only get away with making small amounts for so long. It's fine for the earlier stages of a project, but eventually you'll need more.

If your stuff starts to look interesting, then everyone wants a vial of it. More assays, more animal tests, more safety and formulation and toxicity tests - folks just come out of the floor grates reaching for the stuff. (Medicinal chemists always complain that some of the other departments go through our compounds as if they thought we had barrels with metal scoops chained to them.) That means that a synthesis has to be worked out that can provide gram quantities of material (without having to make people work night and day like they were in graduate school.)

So it has to work well, and work every time, and work in a way that any reasonably competent chemist can sit right down and do it, too. As the compound needs increase, the constraints on the chemistry get more esoteric: can't use that solvent, because we can't get tank cars of it. Can't use that reagent, because the waste stream it generates is too expensive. Can't use that reaction, because once every hundred times it could take off and ventilate the place. We're not at the stage where we have to worry about such things - yet. I hope we end up having to, though, because those sorts of worries are the ones that attach to compounds that can become drugs.

Comments (0) + TrackBacks (0) | Category: Life in the Drug Labs

October 6, 2002

Bad News at a Bad Hour of the Night

Email This Entry

Posted by Derek

10:30 PM Eastern time, to be exact. That's when Schering-Plough released their unwelcome news that they're not going to be making nearly as much money as they'd been letting on they would. They had an all-day session with analysts last Thursday, but only got around to laying the egg after everyone had gone home.

Wall Street didn't react to it very well. Forbes quotes a Lehman Brothers research note that sarcastically refers to this as "the perfect time" to announce earnings guidance. Actually, I think part of the ill will was that the company had waited so long to come clean at all. Claritin goes off-patent in December, and it's become increasingly clear that son-of-Claritin (Clarinex) isn't going to even come close to picking up the slack. So where was the guidance before this?

As I've said before, I think that Clarinex's fate is deserved. I don't see it as any real advance over Claritin, and I believe that it's just there to try to keep what money it can flowing in. (And yes, I still own SGP stock, which is now revisiting the heady days of 1996.)

A better drug, in every way, is the forthcoming cholesterol absorption inhibitor Zetia, co-marketed with Merck. That's an actual innovation, with a unique mechanism of action. Neither of those is any guarantee of success, but there's a lot better chance of something like that taking off than there is with a feeble effort like Clarinex. Of course, the big question is just how well Zetia's going to do, and estimates are all over the place.

The statin drugs have been coming under toxicological scrutiny recently, and Zetia could possibly allow them to be used at a lower dose in combination. That would be good news, and that's the bullish case in a nutshell. The bearish case on the drug has to do partly with that new mechanism - no one knows how it's going to play out there in the real world - and partly with worries about liver enzyme elevation seen at a low level in the clinic. That effect is never good, but it doesn't have to be awful. Again, no one's really going to know until the drug gets out there and a wider patient population tries it out.

That'll be a nail-biter even for those who don't hold the stock. If things go well, the shares could retrace their way back through the late 1990s, which would ease the pain a bit. If something goes wrong, the only way to hold the stock will be with rubber gloves and tongs.

Comments (0) + TrackBacks (0) | Category: Business and Markets

October 3, 2002

Am I Blue?

Email This Entry

Posted by Derek

Most of you have probably seen this link by now, but for those who haven't, here's Montana's blue Senate candidate. The picture would seem to do a reasonable job of rendering his color, but I suspect that he's more gray than blue. Still, no doubt the effect is quite striking in person.

Colloidal silver (very fine particles of the metal suspended in water) is to blame. Actually, let me rephrase that: this guy is to blame, because he drank hefty amounts of the stuff for an extended period. The silver just did what silver does; you can't blame an element for acting the way it has to act.

And why, one asks, did this man do all these silver shots? Well, if you go to Google and run the phrase "colloidal silver" through it, you'll be assaulted with come-ons for so much of the stuff that you could start your own currency. It's been around for a long time (turn of the century, at least) and was a common ingredient in nose drops up until the 1950s or so. Here's a rundown on it from Quackwatch.

While it does have antibiotic properties, it's not effective enough (and its side effects are too great) to be of much use. The only modern application of it that I know of is in some kinds of burn salves, where it's at least applied topically.

Unfortunately, it's not a metal that the body handles very well. Silver doesn't have any known biological role, and there aren't any clearance mechanisms for it. So it just tends to pile up, which is the general problem with ingested metals. And, for reasons that aren't well understood, many people end up depositing fine particles of the metal in their skin, eyes, fingernails, and so on. It wouldn't surprise me if the metal were present in a number of internal organs, too (I'd start with the liver.) The condition's called argyria, from the Greek word for silver.

It's there to stay, too. There is absolutely no way to get it out. Here's an unfortunate woman who was given the nose drops for a period in the 1950s and ended up with argyria for the rest of her life. She's in a rather testy mood about all the latter-day silver promoters, and who can blame her? I'll link to a particularly clueless (and poorly written) example to give you the flavor of the field.

Our metallized Montanan made the stuff at home with a similar kit (probably generously laced with silver salts, depending on what kind of water he used,) because he feared antibiotic shortages after Y2K. And the hucksters told him, you know, that if he took this wonderful silver that he wouldn't have to worry about that sort of thing. How was he to know?

By using his brain, perhaps? By doing a half-hour's research on the web or in any good library? Apparently not. Actually, I shouldn't be making fun of his Senate candidacy. Come to think of it, he'd fit right in.

Comments (0) + TrackBacks (0) | Category: Snake Oil

October 2, 2002

Voluntary. . .For Now

Email This Entry

Posted by Derek

HHS has fired a warning shot across the bow of the drug industry. These draft guidelines don't have the force of law behind them (yet,) but the implication seems clear: shape up, or they will.

This election cycle has seen some grandstanding against the drug companies (and without foreign policy intruding, there would surely have been more.) The industry has to realize that the political wind is against it these days. Nobody's in the mood to hear some more stories about a powerful industry throwing money around to influence people.

I feel a bit odd saying this, but I wouldn't mind seeing some of the restrictions made mandatory, with some vigorous enforcement. It would defuse the marketing arms race in the industry a bit, and it's for sure that nothing else will. Companies will, of course, act in their own interests - it's silly to expect them not to. And as things are set up, it's in their interest to market as aggressively as possible. There aren't that many wonderful new drugs to sell these days, which puts increasing pressure on both the existing portfolio and on anything new that might come up.

I can go on like this because marketing types and drug-discovery types don't spend much time interacting, and tend to regard each other as alien beings. To us, although we realize the value of advertising, some of the marketing campaigns seem in danger of tipping the balance to where they start to eat into the potential profits of the drugs - a diminishing-returns situation. To them, research seems like the black-hole cost center that untold zillions of dollars go spiraling into - and for what? Where's something that they can sell?

Maybe some of the blogger physicians (you, and you, and you, for starters!) can report over the next few weeks or months if they're noticing anything different.

Comments (0) + TrackBacks (0) | Category: Why Everyone Loves Us

October 1, 2002

Silver Tongues, Golden Hands?

Email This Entry

Posted by Derek

I've been thinking more about Sam Waksal's interesting career (see the September 29 post below, and this link for an online version of the story - thanks to Charles Murtaugh for coming up with it.) What I'm specifically wondering about is the phenomenon of the silver-tongued hot-talking scientist that he represents.

Charles mentions that he's run across a few of these himself (and I'm pretty sure we overlap on a couple of them, research being the world that it is.) There's no doubt that in every scientific field, some people are better at creating a mystique, at getting other people to talk about them and their work. My question is: is there any correlation between the ability to do these things and the ability to do great science?

If you go Cartesian and map out four quadrants, you get these categories:

Fluent Talker and Really Good Scientist (the late Peter Medawar comes to mind here, but there are a number of examples)
Awkward Speaker but Really Good Scientist (even more examples - think about the various Big Cheeses you've heard giving seminars.)
Fluent Talker but Poor Scientist (Waksal and his ilk.)
Awkward Speaker and Poor Scientist (legion, unfortunately.)

I'm willing to hazard that if there's a correlation, it's a slightly negative one. Scientific prowess and a gift for communication can be orthogonal to each other, because the results can speak for themselves (they don't, always, but if they hit at the right moment, they can.) Meanwhile, if someone does low-quality work, the only way for them to achieve recognition is to be able to talk a good game.

By the way, I'm simplifying here by classing written and spoken fluency as the same thing. They certainly aren't - Vladimir Nabokov, for example, said once that he thought like a genius, he wrote like a distinguished author, and he spoke like a child. (Which is why he never gave extemporaneous interviews!) I'd say that the spoken fluency is more important for making a big splash, the written more important for lasting impact.

That doesn't mean, of course, that you should give the next wonderful scientific speaker you hear a fishy glance of suspicion. Sometimes a person's verbal facility outruns their scientific ability, but they don't necessarily use it for harm. It's the BS artists that we have to watch out for - the ones who can spin out wonderful ideas and stories, and who always make sure to leave themselves a starring role.

Comments (0) + TrackBacks (0) | Category: The Dark Side

Overpatenting?

Email This Entry

Posted by Derek

There's an article in the latest New Republic on innovation in the drug industry. As far as I'm concerned, it draws good conclusions from faulty premises (which, admittedly, is a lot better than drawing bad ones from a good starting point!)

The author, Nicholas Thompson, says that

shares in the pharmaceuticals index, meanwhile, are down 25 percent. And the industry deserves it. For years it has squeezed consumers the world over, endlessly arguing that it needs its huge profits in order to invest in new, lifesaving innovations. But while that might once have been true, lately the industry hasn't been innovating at its past rate--and that's probably the main reason investors have started to back off.

Well, that last part rather goes without saying. But, talk of "deserving it" aside, his reasons why this has happened are a little too glib:

The cliché of the moment is that pharmaceutical companies have picked the low-hanging fruit, developing drugs that interact with the limited number of enzymes and molecules that we already understand and have thoroughly modeled.

Sounds like he's been reading this site. Some things get to be truisms because they're true.

But people have always mourned the loss of the low-hanging fruit--and then smart and innovative folks built taller ladders. The tools and computational abilities available to drug companies today dwarf those that companies employed to develop the blockbusters that fueled the 1990s boom. Scientists now have access to the human genome and a vastly increased understanding of everything from gene expression to organ physiology as well as extraordinarily powerful computing and modeling capabilities. In many ways, discovery should be easier and cheaper now than ever before.

You'd think so, wouldn't you? Problem is, all that knowledge has also bred more complications. Look at gene expression: we can now look at thousands and thousands of genes going up and down when we administer our drug candidates. What percentage of that do we actually understand? You'd be hard pressed to get a knowledgeable answer that gets out of the single digits. As for computational ability, the big news the other day was that we finally (after decades of trying) predicted a protein's structure from knowing its amino acid sequence. Another few hundred thousand of those and we'll be in business. Then we can start figuring out how they interact with each other, and then we can start figuring out how to make drugs that do the same. It'll be a great day when we do - but that day isn't here.

And the very successes of the industry over the last 25 years have led to the bar being raised. Some things that were huge blockbusters back then wouldn't stand a chance today - too many side effects, for example. Racemic compounds (Prozac, to pick an example from the article) don't stand much of a regulatory chance today; the FDA wants single enantiomers. (Ironically, the active enantiomer of Prozac actually has a worse window for tox effects - the same compound might not have made it at all today.)

Thompson goes on to decry the amount of money put into marketing, and I can't argue with him much there (tomorrow I'll try to talk about the new government guidelines issued today.) But I think he's fallen into a post hoc, ergo propter hoc fallacy: in many cases, the marketing has been cranked up because the drug pipeline is thin. The money spent on marketing isn't necessarily what thinned it out.

Where he strikes gold is when he starts talking about the patenting of "upstream" products: assays, research tools, biological pathways, and methods of treatment. He goes over some examples familiar to readers of this blog (Ariad!) and correctly points out that this sort of thing is fast becoming a crippling influence. Actually, I think it's been a lesser contributor to the productivity decline so far, but it's set to pile on top of what's already a tough situation. Then we'll really be up the creek.

His solution? Right on target, to my thinking:

The best thing for the pharmaceutical industry would probably be to maintain its strong protection over downstream products while opening access for upstream products.

Preach it, and speed the day! I've been working on some pieces addressing this issue (guess I won't be selling them to TNR, dang it all.) We haven't heard the last about this issue - at least, not if I have anything to do about it.

Comments (0) + TrackBacks (0) | Category: Drug Industry History | Patents and IP