About this Author
Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship during his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis, and other diseases.
To contact Derek email him directly: email@example.com
July 31, 2012
Here are two papers in Angewandte Chemie on "rewiring" synthetic chemistry. Bartosz Grzybowski and co-workers at Northwestern have been modeling the landscape of synthetic organic chemistry for some time now, looking at how various reactions and families of reactions are connected. Now they're trying to use that information to design (and redesign) synthetic sequences.
This is a graph theory problem, and a rather large one, if you assign chemical structures to the nodes and transformations to the edges connecting them. It quickly turns into something computationally demanding, as these "find the shortest path" searches tend to do, but that doesn't mean that you can't run through a lot of possibilities and find plenty of things that you couldn't by eyeballing. That's especially true when you add in the price and availability of the starting materials, as the second paper linked above does. If you're a total synthesis chemist, and you didn't feel at least a tiny chill running down your back, you probably need to think about the implications of all this again. People have been trying to automate synthetic chemistry planning since the days of E. J. Corey's LHASA program, but we're getting closer to the real deal here:
We first consider the optimization of syntheses leading to one specified target molecule. In this case, possible syntheses are examined using a recursive algorithm that back-propagates on the network starting from the target. At the first backward step, the algorithm examines all reactions leading to the target and calculates the minimum cost (given by the cost function discussed above) associated with each of them. This calculation, in turn, depends on the minimum costs of the associated reactants that may be purchased or synthesized. In this way, the cost calculation continues recursively, moving backward from the target until a critical search depth is reached (for algorithm details, see the Supporting Information, Section 2.3). Provided each branch of the synthesis is independent of the others (good approximation for individual targets, not for multiple targets), this algorithm rapidly identifies the synthetic plan which minimizes the cost criterion.
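The recursive back-propagation in that excerpt is simple to sketch. Here's a toy version, with a hand-invented three-molecule network and made-up prices; the paper's actual cost function and network are, of course, far larger and more elaborate:

```python
# Toy sketch of recursive minimum-cost synthesis planning, per the
# excerpt above. The network and all prices here are invented for
# illustration; the paper's cost function is much more elaborate.

# reactions_to[product] -> list of (reaction_cost, [reactants])
reactions_to = {
    "target": [(5.0, ["A", "B"]),   # route 1: couple A and B
               (2.0, ["C"])],       # route 2: one step from C
    "C":      [(1.0, ["A"])],       # C can also be made from A
}
purchase_price = {"A": 1.0, "B": 3.0, "C": 10.0}
MAX_DEPTH = 10  # the "critical search depth"

def min_cost(molecule, depth=0):
    """Cheapest way to obtain `molecule`: buy it outright, or run the
    cheapest reaction that produces it, recursing on the reactants."""
    best = purchase_price.get(molecule, float("inf"))
    if depth >= MAX_DEPTH:          # stop the backward search here
        return best
    for rxn_cost, reactants in reactions_to.get(molecule, ()):
        # Branch-independence assumption from the excerpt: each
        # reactant is costed separately.
        total = rxn_cost + sum(min_cost(r, depth + 1) for r in reactants)
        best = min(best, total)
    return best

# Buying C costs 10.0, but making it from A costs only 2.0, so the
# best route to the target is A -> C -> target at 4.0 total.
print(min_cost("target"))  # -> 4.0
```

A production version would memoize the recursion and handle cycles in the network; the depth cutoff here plays the role of the paper's critical search depth.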
That said, how well does all this work so far? Grzybowski owns a chemical company (ProChimia), so this work examined 51 of its products to see if they could be made easily and/or more cheaply. And it looks like this optimization worked, partly by identifying new routes and partly by sending more of the syntheses through shared starting materials and intermediates. The company seems to have implemented many of the suggestions.
The other paper linked in the first paragraph is a similar exercise, but this time looking for one-pot reaction sequences. They've added filters for chemical compatibility of functional groups, reagents, and solvents (miscibility, oxidizing versus reducing conditions, sensitivity to water, acid/base reactions, hydride reagents versus protic conditions, and so on). The program tries to get around these problems, when possible, by changing the order of addition, and can also evaluate its suggestions versus the cost and commercial availability of the reagents involved.
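A compatibility filter of this kind is easy to caricature in code. The rules and reagent tags below are invented placeholders, just to show the shape of the idea (the paper's actual rule set covers many more interactions):

```python
# Toy version of a one-pot compatibility filter. All tags and rules
# here are invented placeholders, not the paper's actual rule set.
INCOMPATIBLE = {
    frozenset({"oxidizing", "reducing"}),
    frozenset({"hydride", "protic_solvent"}),
    frozenset({"water_sensitive", "aqueous"}),
    frozenset({"strong_acid", "strong_base"}),
}

def clashes(tags_a, tags_b):
    """True if any tag in tags_a is incompatible with any in tags_b."""
    return any(frozenset({a, b}) in INCOMPATIBLE
               for a in tags_a for b in tags_b)

def one_pot_feasible(steps):
    """steps: list of (reactive_tags, persistent_tags), in order of
    addition. Reactive reagents must not clash with anything already
    in the pot, but are assumed consumed by their own step; solvents
    and by-products persist and accumulate."""
    pot = set()
    for reactive, persistent in steps:
        if clashes(pot, reactive | persistent):
            return False
        pot |= persistent
    return True

# Order of addition matters: run the hydride reduction first, then
# add the protic solvent, and the sequence passes; reversed, it fails.
good = [({"hydride"}, set()), (set(), {"protic_solvent"})]
bad  = [(set(), {"protic_solvent"}), ({"hydride"}, set())]
print(one_pot_feasible(good), one_pot_feasible(bad))  # -> True False
```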
Of course, the true value of any theoretical–chemical algorithm is in experimental validation. In principle, the method can be tested to identify one-pot reactions from among any of the possible 1.8 billion two-step sequences present within the NOC (Network of Organic Chemistry). While our algorithm has already identified over a million (and counting!) possible sequences, such randomly chosen reactions might be of no real-world interest, and so herein we chose to illustrate the performance of the method by “wiring” reaction sequences within classes of compounds that are of popular interest and/or practical importance.
They show a range of reaction sequences involving substituted quinolines and thiophenes, with many combinations of halogenation/amine displacement/Suzuki/Sonogashira reactions. None of these are particularly surprising, but it would have been quite tedious to work out all the possibilities by hand. Looking over the yields (given in the Supporting Information), it appears that in almost every case the one-pot sequences identified by the program are equal to or better than the stepwise yields (sometimes by substantial margins). It doesn't always work, though:
Having discussed the success cases, it is important to outline the pitfalls of the method. While our algorithm has so far generated over a million structurally diverse one-pot sequences, it is clearly impossible to validate all of them experimentally. Instead, we estimated the likelihood of false-positive predictions by closely inspecting about 500 predicted sequences and cross-checking them against the original research describing the constituent/individual reactions. In few percent of cases, the predicted sequences turned out to be unfeasible because the underlying chemical databases did not report, or reported incorrectly, the key reagents or reaction conditions present in the original reports. This result underscores the need for faithful translation of the literature data into chemical database content. A much less frequent source of errors (only few cases we encountered so far) is the algorithm's incomplete “knowledge” of the mechanistic details of the reactions to be wired. One illustrative example is included in the Supporting Information, Section 5, where a predicted sequence failed experimentally because of an unforeseen transformation of Lawesson's reagent into species reactive toward one of the intermediates. We recognize that there is an ongoing need to improve the filters/rules that our algorithm uses; the goal is that such improvements will ultimately render the algorithm on a par with the detailed synthetic knowledge of experienced organic chemists. . .
And you know, I don't see any reason at all why that can't happen, or why it won't. It might be this program, or one of its later versions, or someone else's software entirely, but I truly don't see how this technology can fail. Depending on the speed with which that happens, it could transform the way that synthetic chemistry is done. The software is only going to get better - every failed sequence adds to its abilities to avoid that sort of thing next time; every successful one gets a star next to it in the lookup table. Crappy reactions from the literature that don't actually work will get weeded out. The more it gets used, the more useful it becomes. Even if these papers are presenting the rosiest picture possible, I still think that we're looking at the future here.
Put all this together with the automated random-reaction-discovery work that I've blogged about, and you can picture a very different world, where reactions get discovered, validated, and entered into the synthetic armamentarium with less and less human input. You may not like that world very much - I'm not sure what I think about it myself - but it's looking more and more likely to be the world we find ourselves in.
Category: Chemical News
July 30, 2012
And while we're talking oncology, here's a piece from Luke Timmerman at Xconomy that brings up a lot of tough questions. We've talked about some of these before around here, but everyone who works in oncology drug discovery is going to hear them again: How much should a new cancer therapy cost? Who's going to pay for it? Are patients (and their insurance companies) getting value for their money?
I wouldn’t go so far as to say we need a draconian system to discourage drug developers from creating new products. Drug prices are rising fast, but there are a lot of other factors contributing to increased healthcare spending. Drug companies can, and should, be able to recoup the investments they make in the form of high drug prices. But if you’re going to charge a high price for a drug, I think a company needs to have a much stronger value proposition than “Hey, we shrank tumors in half for 20 percent of patients. Now hand over your $100,000.” It needs to be more like, “Hey, my drug has an 80 percent chance of helping people with this genetic profile, and those people can expect to live an extra year, with high quality of life.” Now you’re starting to really talk about $100,000 of value.
Sadly, drug companies tend to be more interested in satisfying the short-term profit desires of their investors than they are in truly delivering cost-effective care to patients. . .
Well, it's like this: we realize that people want inexpensive drugs that work great. But we have an awful time delivering anything like that. As I've said before here, we keep swinging for those fences and missing. That's why these drugs come out, the ones that only extend life span for a limited amount of time: every one of those is a drug that people had higher hopes for, but that's how it performed in the real world, so out it comes onto the market to do the best it can. And if they're only going into a small patient population, then the pricing gets set accordingly.
So we have two trend lines that are trying to intersect: the amount of money one can hope to recoup from a new cancer medication, and the amount of money that it takes to find one. They haven't quite crossed, not yet, but they're on course to. If it were less costly to develop these things, or if they delivered more value in the end, we could push them back apart. Will either of those be realized in time to help?
Category: Cancer | Drug Prices
Here's an interesting profile of Bert Vogelstein, who has had a major impact on oncology over the years, especially in the area of cancer-associated genetic mutations. Some of his recent work bears on the question of how useful some of the newer drugs are:
Vogelstein seems to enjoy pricking balloons. Recently, he has focused on a new target: exuberance over targeted cancer drugs. He says he got interested after seeing a paper last year on melanoma therapy. It included photos of the torso of a man with melanoma who had received a new drug aimed at a mutated gene called BRAF. Before treatment, the patient's skin was riddled with metastatic tumors; soon after treatment, the tumors vanished, and the man looked perfectly healthy. Five months later, the tumors reappeared in exactly the same locations. The photos “blew my mind,” Vogelstein says. “Why do the tumors all return at roughly the same time? It's almost as miraculous as when they disappear.”
Targeted drugs for other cancers usually stop working after about the same number of months, presumably because rare resistant cells in the tumors continue to grow and ultimately proliferate. To investigate, Luis Diaz and others in the Vogelstein-Kinzler lab drew on a sensitive technique they had developed for detecting mutations in the very small amount of tumor DNA present in a cancer patient's blood. They collected a series of these “liquid biopsy” measurements from patients with advanced colorectal cancer whose tumors had become resistant to a targeted cancer drug. With Harvard University computational biologist Martin Nowak, they devised a model showing that even before the patient begins treatment, some tumor cells always carry genes with random mutations that can support resistance to targeted drugs. This form of resistance, they wrote last month in Nature, is therefore “a fait accompli.”
But the modeling study also suggested that this resistance can be delayed by combining two drugs that target different pathways. Indeed, Vogelstein and colleagues suggest that once a targeted drug has passed initial safety trials, it's so clear that single-drug therapy will fail that they consider it unethical to give patients just one such drug. “Why shouldn't you design a large, very expensive trial to incorporate more than one agent?” Vogelstein asks.
There are a lot of labs working on this "liquid biopsy" idea, and it's the sort of thing that you could only imagine doing with modern DNA sequencing technology (and modern DNA sequencing costs). A big worry, as with any screening technology, is the false positive rate. As you make finer and finer distinctions among different tumor types, the incidence of any given one in the population gets lower and lower, and thus your test has to be more and more reliable in order to avoid overdiagnosing hordes of panicked patients.
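The base-rate arithmetic behind that worry is worth spelling out. With illustrative numbers (not from any real assay), even a test that's 99% sensitive and 99% specific is mostly wrong when the tumor subtype it screens for occurs in 1 of 10,000 people:

```python
# Bayes' rule illustration of the screening base-rate problem.
# The sensitivity/specificity/prevalence figures are invented for
# illustration, not taken from any real diagnostic.
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value: P(actually sick | test positive)."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# Rare subtype, 1 in 10,000: about 99% of positives are false alarms.
print(round(ppv(0.99, 0.99, 0.0001), 4))  # -> 0.0098

# The same test applied to a high-prevalence population does fine.
print(round(ppv(0.99, 0.99, 0.5), 4))     # -> 0.99
```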
Interestingly, when I talk to people outside of the medical research field, they seem less worried about overdiagnosis than underdiagnosis (false positives versus false negatives). Psychologically, I can see how that happens - they don't want the test to miss anyone. But being told that you do have cancer, when you really don't, is not a good outcome, considering what the therapy will put you through. And this is what makes things like the PSA test recommendation (and mammograms in younger patients) so controversial. In the push to make sure that you find every patient, you can end up harming more people than you help. "But if you just save one life. . ." goes the phrase, at least as it goes from people who don't realize that they might be ending the sentence with ". . .it's worth killing off a few more".
I hope that the blood test idea works out; it would be a great advance. But a less-than-optimal one could be worse than having none at all. Look for plenty of arguments about this in the coming years - I'll fill in some of the talking points in advance: "The FDA is holding back medical progress by not approving this new test". "The FDA has given in to commercial pressures by approving this faulty new test". "This test will end up hurting more people than it helps". "How can you be against cancer screening? Isn't it always worth looking?". "This is all just a disguised cost-cutting effort; they're approving this test because it's cheaper than doing better screening". "This is all just a disguised cost-cutting effort; they're not approving this test because they're afraid that too many people will be diagnosed with cancer". And so on.
Category: Cancer | Regulatory Affairs
July 27, 2012
One of the hazards of medicinal chemistry - or should I say, one of the hazards of long experience in medicinal chemistry - is that you start to think that you know more than you do. Specifically, after a few years and a few projects, you've seen plenty of different compounds and their activities (or lack thereof). Human brains categorize things and seek patterns, so it's only natural that you develop a mental map of the chemical space you've encountered. Problem is, any such map has to be incomplete, grievously incomplete, and if you start making too many decisions based on it (rather than on actual data), you can miss out on some very useful things.
Here's a case in point: an assay against cancer stem cells, which have been a hot research area for some time now. It may well be that some classes of tumor are initiated and then driven by such cells, in which case killing them off or inactivating them would be a very good thing indeed. This was an interesting assay, because it included control stem cells to try to differentiate between compounds that would have an effect on the neoplasm-derived cells while leaving the normal ones alone.
And what did they find? Thioridazine, is what: an old-fashioned phenothiazine antipsychotic drug. For reasons unknown, it's active against these cancer stem cells. When the authors did follow-up screening, two other compounds of this class, fluphenazine and prochlorperazine, also showed activity, so I'd certainly say that this is real.
And it appears that it might actually be the compounds' activity against dopamine receptors that drives this assay. The authors found that there's a range of dopamine receptor expression in such cells, and that this correlates with the activity of the phenothiazine compounds. That's quite interesting, but it complicates life quite a bit for running assays:
Our observations of differential DR expression between normal and neoplastic patient samples strongly suggest human CSCs are heterogeneous and drug targeting should be based on molecular pathways instead of surrogate phenotypic markers.
Working out molecular pathways is hard; a lot more progress might be made at this stage of the game by running phenotypic assays - but not if they're against a heterogeneous cell population. That way lies madness.
Interestingly, the phenothiazines had already been reported to show some anti-cancer effects, and schizophrenic patients receiving such drugs had been reported to show lower incidences of some forms of cancer. These latest observations might well be the link between all these things, and seem to represent the only tractable small-molecule approach (so far) targeting human cancer stem cells.
But you have to cast your net wide to find such things. Dopamine receptors aren't the most obvious thing to suspect here, and ancient antipsychotics aren't the most obvious chemical matter to screen. Drop your preconceptions at the door, is my advice.
Category: Cancer | Drug Assays | The Central Nervous System
July 26, 2012
There's a good state-of-the-field post over at Chemiotics on Alzheimer's, in the wake of the bapineuzumab news the other day.
Of particular interest is the recent finding by deCODE and Genentech researchers that there's a mutation in the Alzheimer's precursor protein (APP) that actually seems to be protective against the disease. There are several APP mutations that are known to bring on amyloid-driven Alzheimer's at much earlier ages, but seeing one that goes the other way does lend more support to the idea that amyloid really is a causative agent:
The mutation seems to put a brake on the milder mental deterioration that most elderly people experience. Carriers are about 7.5 times more likely than non-carriers to reach the age of 85 without suffering major cognitive decline, such as memory loss. They also perform better on the cognitive tests that are administered thrice yearly to Icelanders who live in nursing homes.
For Stefánsson, this suggests that Alzheimer’s disease and cognitive decline are two sides of the same coin, with a common cause — the build-up of amyloid-β plaques in the brain, something seen to a lesser degree in elderly people who do not develop full-blown Alzheimer’s. “Pathologists have always suspected that there was a substantial overlap between Alzheimer’s disease and normal age-related changes,” says Stefánsson. A drug that mimics the effects of the mutation, he says, would have the potential both to slow cognitive decline and to prevent Alzheimer’s.
Stefánsson and his team discovered that the mutation introduces a single amino-acid alteration to APP. This amino acid is close to the site where an enzyme called β-secretase 1 (BACE1) ordinarily snips APP into smaller amyloid-β chunks — and the alteration is enough to reduce the enzyme’s efficiency.
The flip side of this news is that targeting beta-secretase has already been the subject of a huge amount of work in the drug industry. That's good, in that we're not exactly starting from scratch, but that's bad, since the lack of success so far shows you that it's not exactly an easy thing to do. But there are still plenty of people taking cracks at it - CoMentis and Astellas have a compound in development, as do Merck, Lilly, Takeda, and others. Here's hoping that something from this class finally works out, and that this latest result isn't a red herring from a small mutation population.
Category: Alzheimer's Disease
I was using a tertiary amine the other day when the thought occurred to me: these things all smell the same. The amine smell is instantly recognizable, fishy and penetrating, in the same way that sulfur smells are also easy to pick out (rotten egg/skunk/burning rubber and worse). But as the triethylamine smell wafted along, I began to think that the sulfur stenches cover a wider range than the amine ones.
Is that so? Sulfur compounds certainly have the bigger reputation for strong smells, and it's well earned. But I still have the impression that various thiols or low-molecular-weight sulfides are easier to distinguish from each other. They all have that sulfur reek to them, but in subtle and ever-varying ways. I sound like a wine critic. Amines, though, tend to be a bit more one-note. Fish market, they say. Low tide. I'm not sure I could tell triethylamine from Hünig's base from piperidine in a blind snort test, not that I'm totally motivated to try.
There are exceptions. The piperazines often take on a musty, dirt-like smell that overrides the fishy one. (Note, however, that the classic "dirt" smell is largely produced by a compound that has no nitrogen atoms in it at all). And when they first encounter pyrrolidine, chemists (especially male ones) are generally taken aback. (Now that I think about it, does piperidine smell more like pyrrolidine or like the generic tertiary amines?) The straight-chain diamines should be singled out, too, for their famously stinky qualities. If you've never encountered them, the mere existence of compounds with names like putrescine and cadaverine should be warning enough.
We should probably leave pyridine out of the discussion, since as an aromatic ring it's in a different class. But it has to be noted that its odor is truly vile and alien, smelling (fortunately) like nothing on earth except pyridine. These examples are enough, though, to make me wonder if I'm short-changing the amines when I don't rate them as highly for range and versatility in the chemical odor department. Examples are welcome in the comments of amines that go beyond the Standard Mackerel. . .
Category: Life in the Drug Labs
July 25, 2012
Now here's one of those structures that you don't see very often in a drug molecule. It wasn't intended to be a drug, though - it's a photolabel tool compound based on the general anesthetic mephobarbital, which is what that trifluoromethyldiazirine group is doing in there. (When those are exposed to light, nitrogen gas takes off, leaving behind a reactive carbene that generally attacks something nearby as quickly as possible).
But when the two enantiomers were tested, it turned out that one of them is about as potent as the best compounds in its class, while the other (the R enantiomer) is ten-fold better. And when used for its intended purpose, as a photolabeling agent, it does show up stuck to specific sites on human GABA receptors, as hoped. So this should provide some interesting information about barbiturate binding, although I sort of doubt that anyone's going to try to develop it into a general anesthetic all on its own.
In a related topic, note that the model for this series, mephobarbital itself, is disappearing from the market. It's one of those ancient compounds that never really went through the modern regulatory process, but the FDA has stated that it's not going to let it be grandfathered in. Its manufacturer, Lundbeck, said earlier this year (PDF) that it saw no path forward other than a completely new NDA filing, which didn't seem feasible, so it was abandoning the product. Existing stocks have expired by now, so mephobarbital is no more, at least in the US.
Category: Chemical News | Regulatory Affairs
I wanted to point out this fine piece by Adam Feuerstein, "How to Tell When a Drug Company Fibs About Clinical Trial Results". The points he makes apply especially to small companies trying to stay afloat, but they can show up anywhere.
You need to look at when the trial started (and thus how long it took, relative to how long it should have taken), what the stated endpoints were before the trial, the time points at which these benefits (real and otherwise) occurred, and how the current trial results match up with previous ones. One general rule that I have, which Feuerstein also notes, is that when a company makes a big deal out of their investigational drug being safe/well-tolerated in a Phase II trial, that's a red flag. It's certainly a good thing that the drug was tolerated, but finding that out is not the point of Phase II.
But as the article details, clinical endpoints are where a lot of the hand-waving goes on. If a trial is designed well at all, it's run to look for the most clinically relevant signs that the investigators can think up - the ones that are going to make the patients, the physicians, the regulatory agencies, and the investors pay attention. And if a trial concludes and the company starts talking instead about various other benefits and trends that were seen in the data, while not making as big a deal out of the previously stated endpoints, well. . .there's a reason for that. It's not a good reason, and may not even be a very honorable one, but believe it, there is a reason.
Category: Clinical Trials | The Dark Side
July 24, 2012
This long, long story may finally be coming to an end. Immune-based therapies against beta-amyloid (and the associated amyloid plaques) have been in development for many years now (an excellent review here), and Elan has been in the thick of it for most of that time. Phase II results for this antibody came out in 2008 (here's the publication), and since then, everyone's been waiting to see if anything good would come of the phase III trials.
But not with a lot of hope. That's because the Phase II data weren't too encouraging, press releases aside. The subset of patients without the ApoE4 mutation showed what appeared to be some slowing in their rate of deterioration; the patients with that mutation showed basically no beneficial effects at all (edited, got this reversed at first - DBL). There was a bit of biomarker data released earlier this year, which didn't convince people much one way or another. And now we have the numbers for the first of four Phase III trials.
Endpoints were not met - bapineuzumab seems to have definitely failed to help the patients in this study. Note that these were mild-to-moderate Alzheimer's patients who carry the ApoE4 mutation. There's another study going on with non-carriers, and two similar studies to these going on outside the US, but after this miss, what are the chances that they'll report anything beneficial?
No, if we were going to see something, you'd think that we'd have seen it here. (Edit: not necessarily so, because the only hints of efficacy in Phase II were in ApoE4 noncarriers. But that wasn't all that convincing, and my own advice is still not to get any hopes up for the results of the next study.)
There's another odd feature to this news: Elan was working with Wyeth, who were acquired by Pfizer. They then signed another development deal with J&J (Janssen) to spread the risk around. The trial results that came out yesterday were from the Janssen end of things (Pfizer's paying for the outside-the-US trials). But the press release was from Pfizer - as far as I can see, J&J has not sent out anything yet. And as for Elan, their press release is titled: "Elan Announces Pfizer’s Release of A Top-Line Result In First Of Four Bapineuzumab Phase 3 Trials". It says nothing about what that result might be - just that Pfizer released it, and it reminds people that more results are coming. Hmm. Was it agreed on that Pfizer would be the people to release these results? Or is that the sound of gritted teeth in the distance?
One other question: will this result finally shake the faith of the people who've been buying Elan stock all these years? Or was the failure of patients to respond the fault of hedge funds and short sellers instead? You know, the usual suspects. . .
Category: Alzheimer's Disease | Clinical Trials
July 23, 2012
As expected, there's a lot more to the story about the FDA and its monitoring of whistleblowing employees. Steve Usdin at BioCentury has details (PDF, free access), and good grief, what a mess. Read the whole thing; you'll be amazed.
It looks like the agency's Center for Devices and Radiological Health (CDRH) has devolved into a cross between "Dilbert" and some ministry in Pyongyang. The employees that the agency has been monitoring have been accusing their superiors of approving screening devices based on flawed (even fraudulent) data, while the agency seems to think that they're out of line and off on their own crusade. At issue is whether the reviewers or their bosses really have the expertise to make regulatory decisions, and there seems to be a real, uh, divided opinion on that question. To put it lightly.
As it turns out, any time an employee at the FDA logs on to a computer, the following warning pops up: "You have no reasonable expectation of privacy regarding any communications or data transiting or stored on this information system. At any time, and for any lawful government purpose, the government may monitor, intercept, and search and seize any communication or data transiting or stored on this information system." This case shows that they're not kidding about any of that, since the employees in question found everything they did being keylogged, screen-captured, etc., without (of course) their knowledge.
The CDRH higher-ups were treating this primarily as a criminal case by this point. But legitimate whistleblowing is a tricky grey area in this regard - with varying values of "legitimate" often decided years later - and the Office of Special Counsel is now investigating the FDA's own steps for their legality. No one is going to come out of this looking good, is my guess.
There's even more reason to think that, because (as it turns out), several of the CDRH employees were simultaneously filing suit (!) against the manufacturers of the devices that they were arguing about inside the agency:
While FDA reviewers were publicly working to persuade the agency to withdraw approval of computer-aided detection (CAD) mammography and CT colonoscopy devices, they also were secretly pursuing a lawsuit against the products’ manufacturers. The suit included a request that a substantial share of any financial awards go directly to the plaintiffs. . .
The suit was filed “under seal,” so the defendants were not aware that the FDA staff reviewing their products were also asking a court to levy potentially billions of dollars in civil penalties against them.
Under the False Claims Act, private individuals can file a suit under seal and invite the U.S. Department of Justice to join the case. If the federal government joins the case, it takes responsibility and foots the bill for prosecution, and the individual plaintiffs can be awarded a portion of the civil penalties.
The FDA employees requested in the suit that they be awarded “at least 15% but not more than 25% of the proceeds of any award or settlement” if the government joined the suit, and more if the government did not join.
No, this whole business is a stink bomb. I really don't see how the CDRH can be operating effectively at all with all this sort of stuff going on. Is the rest of the FDA this hosed up, or is this just a particularly dysfunctional branch? Who knows?
Category: Regulatory Affairs
Looks like my "Things I Won't Work With" series (and John Clark's book "Ignition") have inspired science fiction author Charles Stross - check out this story, and prepare to see several compounds that you never expected to see mixed together (!)
Category: Chemical News
I wrote here about the Cronin lab at Glasgow and their work on using 3-D printing technology to make small chemical reactors. Now there's an article on this research in the Observer that's getting some press attention (several people have e-mailed it to me). Unfortunately, the headline gets across the tone of the whole piece: "The 'Chemputer' That Could Print Out Any Drug".
To be fair, this was a team effort. As the reporter notes, Prof. Cronin "has a gift for extrapolation", and that seems to be a fair statement. I think that such gifts have to be watched carefully in the presence of journalists, though. The whole story is a mixture of wonderful-things-coming-soon! and still-early-days-lots-of-work-to-be-done, and these two ingredients keep trying to separate and form different layers:
So far Cronin's lab has been creating quite straightforward reaction chambers, and simple three-step sequences of reactions to "print" inorganic molecules. The next stage, also successfully demonstrated, and where things start to get interesting, is the ability to "print" catalysts into the walls of the reactionware. Much further down the line – Cronin has a gift for extrapolation – he envisages far more complex reactor environments, which would enable chemistry to be done "in the presence of a liver cell that has cancer, or a newly identified superbug", with all the implications that might have for drug research.
In the shorter term, his team is looking at ways in which relatively simple drugs – ibuprofen is the example they are using – might be successfully produced in their 3D printer or portable "chemputer". If that principle can be established, then the possibilities suddenly seem endless. "Imagine your printer like a refrigerator that is full of all the ingredients you might require to make any dish in Jamie Oliver's new book," Cronin says. "Jamie has made all those recipes in his own kitchen and validated them. If you apply that idea to making drugs, you have all your ingredients and you follow a recipe that a drug company gives you. They will have validated that recipe in their lab. And when you have downloaded it and enabled the printer to read the software it will work. The value is in the recipe, not in the manufacture. It is an app, essentially."
What would this mean? Well for a start it would potentially democratise complex chemistry, and allow drugs not only to be distributed anywhere in the world but created at the point of need. It could reverse the trend, Cronin suggests, for ineffective counterfeit drugs (often anti-malarials or anti-retrovirals) that have flooded some markets in the developing world, by offering a cheap medicine-making platform that could validate a drug made according to the pharmaceutical company's "software". Crucially, it would potentially enable a greater range of drugs to be produced. "There are loads of drugs out there that aren't available," Cronin says, "because the population that needs them is not big enough, or not rich enough. This model changes that economy of scale; it could make any drug cost effective."
Not surprisingly Cronin is excited by these prospects, though he continually adds the caveat that they are still essentially at the "science fiction" stage of this process. . .
Unfortunately, "science fiction" isn't necessarily a "stage" in some implied process. Sometimes things just stay fictional. Cronin's ideas are not crazy, but there are a lot of details between here and there, and if you don't know much organic chemistry (as many of the readers of the original article won't), then you probably won't realize how much work remains to be done. Here's just a bit; many readers of this blog will have thought of these and more:
First, you have to get a process worked out for each of these compounds, which will require quite a bit of experimentation. Not all reagents and solvents are compatible with the silicone material that these microreactors are being fabricated from. Then you have to ask yourself, where do the reagents and raw materials come in? Printer cartridges full of acetic anhydride and the like? Is it better to have these shipped around and stored than it is to have the end product? In what form is the final drug produced? Does it drip out the end of the microreactor (and in what solvent?), or is it a smear on some solid matrix? Is it suitable for dosing? How do you know how much you've produced? How do you check purity from batch to batch - in other words, is there any way of knowing if something has gone wrong? What about medicines that need to be micronized, coated, or treated in the many other ways that pills are prepared for human use?
And those are just the practical considerations - some of them. Backing up to some of Prof. Cronin's earlier statements, what exactly are those "loads of drugs out there that aren't available because the population that needs them is not big enough, or not rich enough"? Those would be ones that haven't been discovered yet, because it's not like we in the industry have the shelves lined with compounds that work that we aren't doing anything with for some reason. (Lots of people seem to think that, though). Even if these microreactors turn out to be a good way to make compounds, though, making compounds has not been the rate-limiting step in discovering new drugs. I'd say that biological understanding is a bigger one, or (short of that), just having truly useful assays to find the compounds you really want.
Cronin has some speculations on that, too - he wonders about the possibility of having these microreactors in some sort of cellular or tissue environment, thus speeding up the whole synthesis/assay loop. That would be a good thing, but the number of steps that have to be filled in to get that to work is even larger than for the drug-manufacture-on-site idea. I think it's well worth working on - but I also think it's well worth keeping out of the newspapers just yet, too, until there's something more to report.
Category: Academia (vs. Industry) | Chemical News | Drug Assays
July 20, 2012
Does anyone want to put money into the pharma/biotech industry? Let's widen that question: does anyone want to put money into R&D-driven industries in general? That question, which on first glance seems ludicrous, becomes more worryingly believable the longer you think about it. Consider this exchange between Eric Schmidt of Google and Peter Thiel. Thiel makes a pretty provocative statement about what Google is doing with its money:
PETER THIEL: …Google is a great company. It has 30,000 people, or 20,000, whatever the number is. They have pretty safe jobs. On the other hand, Google also has 30, 40, 50 billion in cash. It has no idea how to invest that money in technology effectively. So, it prefers getting zero percent interest from Mr. Bernanke, effectively the cash sort of gets burned away over time through inflation, because there are no ideas that Google has how to spend money.
That apparently didn't get answered to the moderator's satisfaction, so it came up again:
ADAM LASHINSKY: You have $50 billion at Google, why don’t you spend it on doing more in tech, or are you out of ideas? And I think Google does more than most companies. You’re trying to do things with self-driving cars and supposedly with asteroid mining, although maybe that’s just part of the propaganda ministry. And you’re doing more than Microsoft, or Apple, or a lot of these other companies. Amazon is the only one, in my mind, of the big tech companies that’s actually reinvesting all its money, that has enough of a vision of the future that they’re actually able to reinvest all their profits.
ERIC SCHMIDT: They make less profit than Google does.
PETER THIEL: But, if we’re living in an accelerating technological world, and you have zero percent interest rates in the background, you should be able to invest all of your money in things that will return it many times over, and the fact that you’re out of ideas, maybe it’s a political problem, the government has outlawed things. But, it still is a problem.
ADAM LASHINSKY: I’m going to go to the audience very soon, but I want you to have the opportunity to address your quality of investments, Eric.
ERIC SCHMIDT: I think I’ll just let his statement stand.
ADAM LASHINSKY: You don’t want to address the cash hoard that your company does not have the creativity to spend, to invest?
ERIC SCHMIDT: What you discover in running these companies is that there are limits that are not cash. There are limits of recruiting, limits of real estate, regulatory limits as Peter points out. There are many, many such limits. And anything that we can do to reduce those limits is a good idea.
PETER THIEL: But, then the intellectually honest thing to do would be to say that Google is no longer a technology company, that it’s basically -- it’s a search engine. The search technology was developed a decade ago. It’s a bet that there will be no one else who will come up with a better search technology. So, you invest in Google, because you’re betting against technological innovation in search. And it’s like a bank that generates enormous cash flows every year, but you can’t issue a dividend, because the day you take that $30 billion and send it back to people you’re admitting that you’re no longer a technology company. That’s why Microsoft can’t return its money. That’s why all these companies are building up hoards of cash, because they don’t know what to do with it, but they don’t want to admit they’re no longer tech companies. . .
I agree with Alex Tabarrok; this sort of thing is disturbing food for thought. As for his point about the revealed preferences of technology leaders, some possible bright exceptions are people like Elon Musk, who seems quite serious about his space program (and good for him). And I very much hope that Google's Schmidt and others are serious about things like their asteroid-mining venture, and that it's not just the "propaganda ministry".
Closer to home, I got to thinking that if there were any sort of robust returns to be earned in biotech or pharma, Google, Microsoft et al. would probably have taken a spare billion or so and funded some ventures in these areas. But they haven't. Keep in mind, these folks have money well in excess of what they seem to need to continue investing in their own business. They're presumably looking for something to do with it all, and the point about not being able to return it to the shareholders is a valid one, because (rightly or not) that's seen as an admission that they don't have any particularly good new ideas. (Of course, the fact that they're letting the cash pile up might also be interpreted that way, but issuing a dividend really nails it down).
There are many similarities between this situation and the discussion/argument we had around here about pharma companies buying back their own stock. The likes of Apple can plausibly claim that hey, their business is going so well that they really don't have to spend all those revenues on running it; their current spend is plenty to keep the good times rolling. But what drug company can say that? Everyone in this business is on a frantic treadmill, thanks to patent expirations above all. You'd think that taking money that could be spent on R&D (yours or someone else's) and using it to prop up the share price would be an unaffordable luxury. I know, I know - a public company has obligations to its shareholders. But in this business, perhaps one of those obligations is to explain to your current (and potential) shareholders just what sort of business this is, and what it requires. In a better world, you might end up with a better-informed and more realistic group of people holding your stock.
Unless - and I can't rule this out - the belief is that a completely honest look at the way things are in this business would scare off too many people from investing in it at all. It seems to have scared off Big Tech, with their massive piles of fallow cash. It's not like they have to become experts to invest over here; expertise can be hired. What if they went to some of the existing investment groups over here and asked them what they would be able to do with a billion dollars? Are there even a billion dollars worth of ideas out there right now?
Category: Business and Markets
July 19, 2012
A couple of commenters took exception to my words yesterday about thiophene not being a "real" heterocycle. And I have to say, on reflection, that they're right. When I think about it, I have seen an example myself, in a project some years ago, where thiophene-for-phenyl was not a silent switch. If I recall correctly, the thiophene was surprisingly more potent, and that seems to be the direction that other people have seen as well. Anyone know of an example where a thiophene kills the activity compared to a phenyl?
That said, the great majority of the times I've seen matched pairs of compounds with this change, there's been no real difference in activity. I haven't seen as many PK comparisons, but the ones I can think of have been pretty close. That's not always the case, though: Plavix (clopidogrel) is the canonical example of a thiophene that gets metabolically unzipped (scroll down on that page to "Pharmacokinetics and metabolism" to see the scheme). You're not going to see a phenyl ring do that, of course - it'll get oxidized to the phenol, likely as not, but that'll get glucuronidated or something and sluiced out the kidneys, taking everything else with it. But note also that depending on things like CYP2C19 to produce your active drug for you is not without risks: people vary in their enzyme profiles, and you might find that your blood levels in a real patient population are rather jumpier than you'd hoped for.
So I'll take back my comments: thiophene really is (or at least can be) a heterocycle all its own, and not just a phenyl with eye makeup. But one of the conclusions of that GSK paper was that it's not such a great heterocycle for drug development, in the end.
Category: Life in the Drug Labs | Pharmacokinetics
I'm pressed for time this morning, so I wanted to put up a quick link to Adam Feuerstein's thoughts on media embargoes of scientific results (and how they're becoming increasingly useless).
And I also wanted to note this odd bit of news: I'll bet you thought that fluorine, elemental gaseous fluorine, wasn't found in nature. Too reactive, right? But we're all wrong: it's found in tiny cavities in an unusually stinky mineral. And part (or all) of that smell is fluorine itself, which I'll bet that very few people have smelled in the lab. I hope not, anyway.
Category: Business and Markets | Chemical News | The Scientific Literature
July 18, 2012
Here's a paper from some folks at GlaxoSmithKline on what kinds of rings seem to have the best chances as parts of a drug structure. They're looking at replacements for plain old aryl rings, of which there are often too many. Pulling data out of the GSK corporate collection, they find that the most common heteroaromatic rings are pyridine, pyrazole, and pyrimidine - together, those are about half the data set. (The least common, in case you're wondering, are 1,3,5-triazine, 1,3,4-oxadiazole, and tetrazole). In marketed drugs, though, pyridine is more of a clear winner, and both pyrrole and imidazole make the top of the charts as well.
When they checked the aqueous solubility of all these compounds, the 1,2,4-triazoles came out on top, and the 1,3,5-triazines were at the bottom, which sounds about right. Other soluble heterocycles included 1,3,4-oxadiazole and pyridazine, and other bricks were thiazole and thiophene (not that that last one really counts as a heterocycle in my book). Update: I've revised my thoughts on that! Now, you might look at these and say "Sure, and you could have saved yourself the trouble by just looking at the logD values - don't they line up?" They do, for the most part, but it turns out that the triazines are unusually bad for their logDs, while the five-membered rings with adjacent nitrogens (all of 'em) were unusually good.
The next thing the team looked at was binding to human serum albumin. The 1,3,4-thiadiazoles emerged as the losers here, with by far the most protein binding, followed by thiazoles and 1,2,4-oxadiazoles. Imidazoles had the least, by a good margin, followed by pyrazine and pyridazine. Those last two were better than expected compared to their logD values.
And the last big category was CYP450 inhibition. Here, thiophene, tetrazole, and 1,2,3-triazole were the bad guys, and pyridazine, 1,3,4-thiadiazole, and pyrazine (and a few others) were relatively clean. The people at AstraZeneca have published a similar analysis, and the two data sets agree pretty well, with the exception of oxazole and tetrazole. The AZ oxazoles all had open positions next to the ring nitrogen, which seems to have opened them up to metabolism, but the difference in tetrazoles (AZ good, GSK bad) is harder to explain.
The take-home? Pyridazine, pyrazine, imidazole and pyrazole look like the winners from an overall "developability" score. Thiophene brings up the rear, but since I still think that one shouldn't count (update: I've since revised my thoughts on that; it's a benzene in disguise), the ones to worry about are then thiazole, 1,2,3-triazole, and tetrazole (that last one with an asterisk, due to the CYP data discrepancy).
The paper tries to do the same analysis with heteroaliphatic rings, but the authors admit that they had a much smaller data set to work with, so the conclusions aren't as strong. There was also a higher correlation with plain ol' logD values across all three categories (not as many surprises). The winners turned out to be piperidine NH and morpholine N-alkyl, with imidazoline and piperidine N-alkyl right behind. The losers? Piperidine N-sulfonamide, followed by pyrrolidine N-sulfonamide, and then 1,3-thiazolidine. (Sulfonamides continue to live up - or down - to their reputation as Bad News).
There are, naturally, limitations to this sort of thing. Ceteris paribus is a mighty difficult state of affairs to achieve in medicinal chemistry, and other factors can rearrange things quickly. But if you're just starting out in an SAR series, it sounds like you might want to give the pyrazines and pyridazines a look.
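For the curious, the kind of overall ranking the paper describes can be sketched in a few lines: give each ring a score for each property, then combine them into one composite number. The numbers below are placeholders I've made up purely to show the shape of the calculation - they are not the GSK data, and a real analysis would weight the properties rather than just averaging them.

```python
# Toy sketch of a composite "developability" ranking.
# Scores are ILLUSTRATIVE PLACEHOLDERS (0-10, higher = better),
# not values from the GSK paper: (solubility, low HSA binding,
# low CYP450 inhibition).
rings = {
    "pyridazine":     (8, 7, 8),
    "pyrazine":       (6, 7, 8),
    "imidazole":      (6, 9, 6),
    "pyrazole":       (7, 6, 6),
    "thiazole":       (3, 4, 5),
    "1,2,3-triazole": (7, 6, 2),
    "tetrazole":      (6, 5, 2),
    "thiophene":      (2, 4, 2),
}

def developability(scores):
    """Unweighted mean of the property scores (a real analysis would weight them)."""
    return sum(scores) / len(scores)

# Rank from most to least developable.
ranked = sorted(rings, key=lambda r: developability(rings[r]), reverse=True)
for r in ranked:
    print(f"{r:15s} {developability(rings[r]):.1f}")
```

With these made-up inputs, pyridazine lands on top and thiophene brings up the rear, matching the qualitative ordering described above - but the point is only the mechanics of the scoring, not the numbers.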
Category: Life in the Drug Labs
July 17, 2012
More on Vivus and the peculiar timing of the approval of their new obesity therapy, now named Qsymia. OK, in addition to the article mentioned in the previous post, a video went up this evening at ABC news about the compound's availability, also before any word from the FDA. Not long afterwards, the agency released its official decision.
I heard about both of these leaks via Adam Feuerstein on Twitter, and he's rightly getting credit for getting the word out. All this makes it seem that (1) Vivus had heard earlier from the FDA that the drug was going to be approved, but was told to embargo the news, and (2) the company talked to at least one press outlet (USA Today) before the announcement, and told them to hold the story as well.
But I still have a whole list of questions: for starters, how often is it that companies get advance word like this from the FDA? I've never been in the position of being one of the first to hear these things, but I've certainly had the impression that this isn't the usual policy. (The potential for leaks, as we've seen today, is just too high). So why did Vivus get the tip-off this time? And how often does a company in this position go to the news media with quotes and photos ready, telling them to sit on the story until the FDA speaks up? Doesn't that increase the potential for leaks even more?
And what about Regulation FD? Isn't material information like this supposed to be made available to all investors at the same time? I realize that there are press embargoes in situations like the ASCO meeting, but those clampdowns have been turning into more of a fiasco every year as well. It seems funny for the FDA to be getting into the information embargo business just as others are realizing how hard it is to make it work.
Category: Regulatory Affairs
People are waiting to hear the fate of Qsymia (or Qnexa?), the obesity combo therapy being developed by Vivus. (Fixed the typoed names!) In a weird development, an article went up on USA Today about the drug's approval by the FDA - before any such decision had been announced. (As of this writing, it still hasn't, but there's still time later in the afternoon). The article, which I didn't see before it disappeared, apparently included quotes from the company's CEO about the approval. (Note the URL of that ghostly link).
Basically, it looks like the company knows that the drug is going to be approved, as do some news outlets, but that news is under an embargo, which someone at USA Today inadvertently violated. This has made trading in the company's stock rather interesting, and has confused everyone mightily. So, how often do companies get this sort of tip-off? Your guess is as good as mine. . .
Update: more at Retraction Watch.
Category: Regulatory Affairs
What on earth has been going on at the FDA? The agency has engaged in a large and long-running surveillance of its own staff. This story was broken earlier this year by the Washington Post, but a mistake by a contractor gave the New York Times (and others) access to many more intercepted files. And things seem to have gotten out of hand pretty quickly:
What began as a narrow investigation into the possible leaking of confidential agency information by five scientists quickly grew in mid-2010 into a much broader campaign to counter outside critics of the agency’s medical review process, according to the cache of more than 80,000 pages of computer documents generated by the surveillance effort.
Moving to quell what one memorandum called the “collaboration” of the F.D.A.’s opponents, the surveillance operation identified 21 agency employees, Congressional officials, outside medical researchers and journalists thought to be working together to put out negative and “defamatory” information about the agency.
A good old-fashioned enemies list! The agency used key-logger and screenshot capture software on the government laptops of several employees, and the whole thing started over an internal dispute:
The extraordinary surveillance effort grew out of a bitter dispute lasting years between the scientists and their bosses at the F.D.A. over the scientists’ claims that faulty review procedures at the agency had led to the approval of medical imaging devices for mammograms and colonoscopies that exposed patients to dangerous levels of radiation.
A confidential government review in May by the Office of Special Counsel, which deals with the grievances of government workers, found that the scientists’ medical claims were valid enough to warrant a full investigation into what it termed “a substantial and specific danger to public safety.”
The FDA is insisting that these actions had nothing to do with rooting out whistleblowers or tracking down critics of its regulatory decisions. Not at all - they were just making sure that no one was leaking proprietary documents. The rooting out of whistleblowers and identification of critics, those were just side effects. Not everyone is buying that explanation:
Representative Chris Van Hollen, a Maryland Democrat, sent a letter on Monday to Kathleen Sebelius, the secretary of health and human services, calling on her to conduct a full investigation into whether the surveillance program violated federal employee protections and whistle-blower laws.
“The tactics reportedly used by the F.D.A. send a terrible message to those who are prepared to expose waste, abuse or wrongdoing in government agencies,” wrote Mr. Van Hollen, whose staff communications were monitored by the F.D.A.
He's got a point. If a company does something like this to its own disgruntled employees, it exposes itself to a great deal of legal jeopardy. I'm not a lawyer, so I don't know what applies to the FDA itself, but I hope that there's a thorough investigation indeed. The agency takes a lot of flak, from every direction: groups on the left of the political spectrum, among others, blast them for having sold out to industry and not protecting the consumer, drug companies complain about arbitrary decisions and regulatory delays, and the more libertarian types would like big chunks of the whole apparatus to just disappear somehow.
But when I say "the agency", I'm talking about its people, its employees, many of whom are doing very difficult work for not all that much money. The people who authorize this kind of thing, well. . .they're in another category. Now, I understand that people inside the FDA (and other regulatory agencies) need some oversight. They're the referees in this business, and quis custodiet ipsos custodes? is always an appropriate question to ask. The recent insider trading scandal at the agency is just one example of what can happen when someone in a rule-making department goes astray.
But that doesn't mean that you can't go astray while chasing such things. You start reading people's mail and keylogging their passwords, and all sorts of ideas can come to your mind about what you're doing and why. So, who had these ideas at the FDA this time, and just what were they thinking?
Category: Regulatory Affairs
July 16, 2012
Looks like AstraZeneca's internal numbers agree with Matthew Herper's. The company was talking about its current R&D late last week, and this comment stands out:
Discovery head Mene Pangalos told reporters on Thursday that mistakes had been made in the past by encouraging quantity over quality in early drug selection.
"If you looked at our output in terms of numbers of candidates entering the clinic, we were one of the most productive companies in the world, dollar for dollar. If you rated us by how many drugs we launched, we were one of the least successful," he said.
Yep, sending compounds to the clinic is easy - you just declare them to be Clinical Candidates, and the job is done. Getting them through the clinic, now, that's harder, because at that point you're encountering things that can't be rah-rah-ed. Viruses and bacteria, neurons and receptors and tumor cells, they don't care so much about your goals statement and your Corporate Commitment to Excellence. In the end, that's one of the things I like most about research: the real world has the last laugh.
The news aggregator Biospace has a particularly misleading headline on all this: "AstraZeneca Claims Neuroscience Shake-Up is Paying Off ; May Advance at Least 8 Drugs to Final Tests by 2015". I can't find anyone from AZ putting it in quite those terms, fortunately. That would be like saying that my decision, back in Boston, to cut costs by not filling my gas tank is paying off as I approach Philadelphia.
Category: Business and Markets | Clinical Trials | Drug Development
July 9, 2012
If you haven't seen it, there's an excellent article in the Washington Post by Brian Vastag on why the whole "America Faces Critical Shortage of Scientists!" thing is ridiculous. I hope it does some good - this idea gets repeated too often by people who have no idea of what they're talking about. Vastag hits a lot of important themes - layoffs in once-thriving sci/tech fields, the perverse incentives to churn out more PhDs and post-docs, and so on.
Chemjobber has good commentary on the article here, as does David Kroll. It's good to see a major media outlet pick up on what people in the field have been saying for some time, and going against the lazy America-falls-behind-in-science-race take.
Category: Business and Markets
July 6, 2012
I wanted to let everyone know that I'll be taking a summer break - as of yesterday, I've cleared myself and my family out of the house and off for some R&R. I'll be lounging around all next week, and will return to blogdom (and research-dom) on Monday the 16th. I've already told folks in the lab not to discover anything gigantic while I'm gone (it looks bad, y'know).
Category: Blog Housekeeping
July 3, 2012
Thomson Reuters is out with their lists of impact factors for journals, and these come with the usual cautions: too much is made of the impact factor in general, and the very fact that the tiniest variations are seized on gleefully by journal publishers should be enough to set off alarms.
This time a record number of journals were taken off the list for excessive self-citation. And as that Nature News article notes, somewhat gleefully, one of the journals had recently been profiled by Thomson Reuters as a "Rising Star". (All that profiling and interviewing has made me wonder in the past, and I'm not surprised at all that this has happened. The company measures the impact factors, promotes them as meaningful, and interviews journal editors who have found ways to raise theirs, which makes that important news because the people who sell impact factors say that it's important, and they have the press releases to prove it.) I'm standing by my earlier comparison to the Franklin Mint. (And in case you're wondering, the fact that I'm citing my own blog on the topic of self-referentiality has not escaped me).
At any rate, I don't believe that any chemistry journals were on the banned list. The most interesting case was a group of journals that were deliberately citing each other, but I'll freely admit that I'd never heard of any of them, despite their best efforts to rise in the world. If anyone does have any evidence of citation oddities in the chemistry world, though, I'd be happy to help publicize them. . .
Category: The Dark Side | The Scientific Literature
With all the electronic notebooks around these days, and the ubiquity of computer hardware and keyboards around the HPLCs, LC/mass specs, and so on, I'm surprised that we don't see more of this. But that is the first keyboard I've seen melted in a lab setting - perhaps I'm just leading a sheltered life.
But ginger ale in an Apple wireless keyboard? I can get that at home, courtesy of my kids. (The hardware survived, although some of the keys were a bit crunchy for a while. . .)
Category: Life in the Drug Labs
You'll have heard that GlaxoSmithKline has paid out three billion dollars in a settlement on illegal marketing practices, misreporting of safety data, and other violations. Needless to say, GSK does not have a spare three billion lying around that's not being used for anything; they'd be a lot better off if they hadn't put themselves in this position.
What's hard to figure, though, is how much money the company made through these actions. There's a lot of talk, understandably, about how drug companies (and their executives) could be warned off such behavior, but if GSK realized, say, an extra $4 billion in the process of incurring their $3 billion fine, it's going to be hard to make the case to some of those people. The settlement actually appears to be a bit less than some investors were expecting, and there may, in the end, be no way to have the magnitude of the potential fines do all the work of a deterrent.
Matthew Herper at Forbes notes that the company's current CEO, Andrew Witty, has issued an unusually forthright statement (by CEO standards) on the whole matter:
All of the actions predated the tenure of current GlaxoSmithKline chief executive Andrew Witty, who has been trying to improve the company’s reputation. He has pushed forward with efforts to develop medicines for poor nations, including a malaria vaccine that Glaxo is developing with the Bill & Melinda Gates Foundation. He has also taken steps to remove incentives that made pharma salespeople so overzealous, no longer tying compensation to how much of a drug they can sell. In a statement, he said that employees have been removed from positions as a result of the changes and that new provisions will allow the company to take back compensation from executives if they don’t adhere to the company’s standards.
Glaxo has done something else right, too: Witty actually managed, in the press release disclosing this settlement, something close to a full-throated apology. He said:
“Whilst these originate in a different era for the company, they cannot and will not be ignored. On behalf of GSK, I want to express our regret and reiterate that we have learnt from the mistakes that were made.”
That may not sound like much, but in the context of an industry that has almost never seen fit to apologize for anything, it is a step in the right direction.
But I also wanted to mention by name two of the people who set this entire thing in motion. One of them is Blair Hamrick, and the other is Greg Thorpe. These were GSK sales reps who grew concerned about illegal activity over ten years ago:
“Regardless of what company policy may be, my letters to human resources and my previous complaints of misconduct have been quashed. My 23-plus year career with this company has been trashed, and it is obvious I can no longer work with my district manager and friends/counterparts just because I have come forward with the truth, which could save the reputation of GSK and millions of dollars in fines,” wrote Thorpe, one of the whistleblowers on whose claims the feds based their allegations, in a January 2002 note to Glaxo officials. . .instead, though, Glaxo officials issued their own warnings to Thorpe about his willingness to be a team player and refused to address various violations of the False Claims Act, which he referred to specifically and repeatedly in numerous communications.
"Team player" is one of those phrases that should put a person on their guard. It can be used completely innocuously, but it can also be used to justify pretty much any behavior that the rest of a group is doing, and on no more basis than, well, the rest of the group is doing it. I reserve my admiration for those who need more justification than that for their actions.
There are some effects that I hope this GSK news will have: making someone think twice about getting caught when they're planning something that goes over the line, or (on the other side) shoring up the resolve of a person who's deciding not to go along with something that they've realized is wrong. The world tends to run short of both of those.
Category: Business and Markets | Regulatory Affairs | The Dark Side
July 2, 2012
Is this the record? At least 172 faked publications from a Japanese anaesthesiology researcher. He doesn't seem to have been a particularly high-impact person in the field, but that makes you wonder, too. Sitting around all day, making up data for papers that no one reads. . .what a life! I don't know how anything on quite this scale could happen in chemistry, but perhaps that's wishful thinking.
Category: The Scientific Literature
I wanted to highlight this latest post by Gaussling on starting a chemical business. (Here's an earlier one). In today's environment, I'm sure that this has crossed many people's minds, and this series has a lot of wisdom to offer on how to do it (and how not to). Not everything that looks like it should make money will do so:
The other big negative to selling proprietary reagents or processes is negotiating the terms and pricing. From the customer's perspective, adopting your composition or process means that smack in the middle of their process train they have to manage a licensed technology with extra paperwork and auditing. This is a big problem with catalysts. Many of the newer catalysts you see in the Aldrich or Strem catalogs are proprietary and must be used under a license agreement. Nothing stirs the creative juices like the desire to avoid paying royalties by finding white space in a patent or inventing a new process.
Having been involved in such license negotiations, I can say that you need to have a lawyer looking over your shoulder while you consider the terms and conditions. These agreements often entail upfront fees and a sliding scale of pricing based on usage. Some IP owners want a piece of your gross product sales resulting from the use of their technology. Annual audits may be expected as well. It’s like having raccoons in your picnic basket.
Indeed. I can tell you from my own experience, on the other side of the table, that few things will make your potential clients want to see your back more than asking for a percentage of the eventual profits. Fee-for-service is a lot easier to handle, but is of course less profitable.
And even then, pricing is tricky. Sometimes there's not much space in between "Who do these people think they are?" and "They're so cheap that there must be something wrong". My advice to anyone starting such a business is to be open to all sorts of different arrangements, to at least get your foot into as many doors as possible. Your potential clients will probably be a pretty variable bunch, and you'll need to be able to vary right along with them.
Category: Business and Markets