About this Author

Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases. To contact Derek, email him directly: derekb.lowe@gmail.com. Twitter: Dereklowe


In the Pipeline


April 4, 2012

The Artificial Intelligence Economy?


Posted by Derek

Now here's something that might be about to remake the economy, or (on the other robotic hand) it might not be ready to just yet. And it might be able to help us out in drug R&D, or it might turn out to be mostly beside the point. What the heck am I talking about, you ask? The so-called "Artificial Intelligence Economy". As Adam Ozimek says, things are looking a little more futuristic lately.

He's talking about things like driverless cars and quadrotors, and Tyler Cowen adds the examples of things like Apple's Siri and IBM's Watson, as part of a wider point about American exports:

First, artificial intelligence and computing power are the future, or even the present, for much of manufacturing. It’s not just the robots; look at the hundreds of computers and software-driven devices embedded in a new car. Factory floors these days are nearly empty of people because software-driven machines are doing most of the work. The factory has been reinvented as a quiet place. There is now a joke that “a modern textile mill employs only a man and a dog—the man to feed the dog, and the dog to keep the man away from the machines.”

The next steps in the artificial intelligence revolution, as manifested most publicly through systems like Deep Blue, Watson and Siri, will revolutionize production in one sector after another. Computing power solves more problems each year, including manufacturing problems.

Two MIT professors have written a book called Race Against the Machine about all this, and it appears to be sort of a response to Cowen's earlier book The Great Stagnation. (Here's an article of theirs in The Atlantic making their case).

One of the export-economy factors that it (and Cowen) brings up is that automation makes a country's wages (and labor costs in general) less of a factor in exports, once you get past the capital expenditure. And as the size of that expenditure comes down, it becomes easier to make that leap. One thing that means, of course, is that less-skilled workers find it harder to fit in. Here's another Atlantic article, from the print magazine, which looked at an auto-parts manufacturer with a factory in South Carolina (the whole thing is well worth reading):

Before the rise of computer-run machines, factories needed people at every step of production, from the most routine to the most complex. The Gildemeister (machine), for example, automatically performs a series of operations that previously would have required several machines—each with its own operator. It’s relatively easy to train a newcomer to run a simple, single-step machine. Newcomers with no training could start out working the simplest and then gradually learn others. Eventually, with that on-the-job training, some workers could become higher-paid supervisors, overseeing the entire operation. This kind of knowledge could be acquired only on the job; few people went to school to learn how to work in a factory.
Today, the Gildemeisters and their ilk eliminate the need for many of those machines and, therefore, the workers who ran them. Skilled workers now are required only to do what computers can’t do (at least not yet): use their human judgment.

But as that article shows, more than half the workers in that particular factory are, in fact, rather unskilled, and they make a lot more than their Chinese counterparts do. What keeps them employed? That calculation on what it would take to replace them with a machine. The article focuses on one of those workers in particular, named Maddie:

It feels cruel to point out all the Level-2 concepts Maddie doesn’t know, although Maddie is quite open about these shortcomings. She doesn’t know the computer-programming language that runs the machines she operates; in fact, she was surprised to learn they are run by a specialized computer language. She doesn’t know trigonometry or calculus, and she’s never studied the properties of cutting tools or metals. She doesn’t know how to maintain a tolerance of 0.25 microns, or what tolerance means in this context, or what a micron is.

Tony explains that Maddie has a job for two reasons. First, when it comes to making fuel injectors, the company saves money and minimizes product damage by having both the precision and non-precision work done in the same place. Even if Mexican or Chinese workers could do Maddie’s job more cheaply, shipping fragile, half-finished parts to another country for processing would make no sense. Second, Maddie is cheaper than a machine. It would be easy to buy a robotic arm that could take injector bodies and caps from a tray and place them precisely in a laser welder. Yet Standard would have to invest about $100,000 on the arm and a conveyance machine to bring parts to the welder and send them on to the next station. As is common in factories, Standard invests only in machinery that will earn back its cost within two years. For Tony, it’s simple: Maddie makes less in two years than the machine would cost, so her job is safe—for now. If the robotic machines become a little cheaper, or if demand for fuel injectors goes up and Standard starts running three shifts, then investing in those robots might make sense.
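
That two-year payback rule is simple arithmetic, and it's worth seeing how sharply the answer can flip. Here's a minimal sketch, using the article's $100,000 machine cost and a purely illustrative annual labor cost:

```python
# A minimal sketch of the two-year payback rule described in the excerpt above.
# The $100,000 machine cost comes from the article; the annual labor cost and
# the shift count are illustrative placeholders, not real figures.

def automation_pays_off(machine_cost, annual_labor_cost, shifts=1, payback_years=2.0):
    """Return True if replacing the worker(s) earns back the machine's cost in time."""
    labor_saved = annual_labor_cost * shifts * payback_years
    return labor_saved >= machine_cost

print(automation_pays_off(100_000, 35_000, shifts=1))  # False: Maddie's job is safe, for now
print(automation_pays_off(100_000, 35_000, shifts=3))  # True: three shifts and the robot wins
```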

At this point, some similarities to the drug discovery business will be occurring to readers of this blog, along with some differences. The automation angle isn't as important, or not yet. While pharma most definitely has a manufacturing component (and how), the research end of the business doesn't resemble it very much, despite numerous attempts by earnest consultants and managers to make it so. From an auto-parts standpoint, there's little or no standardization at all in drug R&D. Every new drug is like a completely new part that no one's ever built before; we're not turning out fuel injectors or alternators. Everyone knows how a car works. Making a fundamental change in that plan is a monumental challenge, so the auto-parts business is mostly about making small variations on known components to the standards of a given customer. But in pharma - discovery pharma, not the generic companies - we're wrenching new stuff right out of thin air, or trying to.

So you'd think that we wouldn't be feeling the low-wage competitive pressure so much, but as the last ten years have shown, we certainly are. Outsourcing has come up many a time around here, and the very fact that it exists shows that not all of drug research is quite as bespoke as we might think. (Remember, the first wave of outsourcing, which is still very much a part of the business, was the move to send the routine methyl-ethyl-butyl-futile analoging out somewhere cheaper). And this takes us, eventually, to the Pfizer-style split between drug designers (high-wage folks over here) and the drug synthesizers (low-wage folks over there). Unfortunately, I think that you have to go the full reductio ad absurdum route to get that far, but Pfizer's going to find out for us if that's an accurate reading.

What these economists are also talking about is, I'd say, the next step beyond Moore's Law: once we have all this processing power, how do we use it? The first wave of computation-driven change happened because of the easy answers to that question: we had a lot of number-crunching that was being done by hand, or very slowly by some route, and we now had machines that could do what we wanted to do more quickly. This newer wave, if wave it is, will be driven more by software taking advantage of the hardware power that we've been able to produce.

The first wave didn't revolutionize drug discovery in the way that some people were hoping for. Sheer brute force computational ability is of limited use in drug discovery, unfortunately, but that's not always going to be the case, especially as we slowly learn how to apply it. If we really are starting to get better at computational pattern recognition and decision-making algorithms, where could that have an impact?

It's important to avoid what I've termed the "Andy Grove fallacy" in thinking about all this. I think that it is a result of applying first-computational-wave thinking too indiscriminately to drug discovery, which means treating it too much like a well-worked-out human-designed engineering process. Which it certainly isn't. But this second-wave stuff might be more useful.

I can think of a few areas: in early drug discovery, we could use help teasing patterns out of large piles of structure-activity relationship data. I know that there are (and have been) several attempts at doing this, but it's going to be interesting to see if we can do it better. I would love to be able to dump a big pile of structures and assay data points into a program and have it say the equivalent of "Hey, it looks like an electron-withdrawing group in the piperidine series might be really good, because of its conformational similarity to the initial lead series, but no one's ever gotten back around to making one of those because everyone got side-tracked by the potency of the chiral amides".
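
To make that concrete, here's a minimal sketch of the kind of query I have in mind, assuming a flat table of compounds annotated by series and substituent class (every column name, value, and threshold below is hypothetical):

```python
import pandas as pd

# Toy SAR table; in practice these rows would come out of the registration
# and assay databases. All names and numbers here are made up.
sar = pd.DataFrame({
    "series":      ["piperidine", "piperidine", "chiral amide", "chiral amide", "chiral amide"],
    "substituent": ["EWG",        "EDG",        "EWG",          "EDG",          "EDG"],
    "pIC50":       [7.9,          6.1,          8.4,            8.2,            8.3],
})

# For each structural combination, ask two simple questions:
# how potent does it look, and how many times has anyone actually made it?
summary = (sar.groupby(["series", "substituent"])["pIC50"]
              .agg(mean_pIC50="mean", n_compounds="count")
              .reset_index())

# Flag promising but under-explored corners of the SAR: decent average
# potency, yet only one or two examples ever synthesized.
neglected = summary[(summary.mean_pIC50 >= 7.5) & (summary.n_compounds <= 2)]
print(neglected)
```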

Software that chews through stacks of PK and metabolic stability data would be worth having, too, because there sure is a lot of it. There are correlations in there that we really need to know about, that could have direct relevance to clinical trials, but I worry that we're still missing some of them. And clinical trial data itself is the most obvious place for software that can dig through huge piles of numbers, because those are the biggest we've got. From my perspective, though, it's almost too late for insights at that point; you've already been spending the big money just to get the numbers themselves. But insights into human toxicology from all that clinical data, that stuff could be gold. I worry that it's been like the concentration of gold in seawater, though: really there, but not practical to extract. Could we change that?
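
Even a crude automated correlation screen across endpoints would be a start, something that ranks every pairwise relationship and pushes the strongest ones in front of a human. A minimal sketch, with a flat per-compound table whose column names and numbers are entirely made up:

```python
import numpy as np
import pandas as pd

# Hypothetical per-compound table of physchem, PK, and safety endpoints;
# in practice this would be pulled straight from the project database.
data = pd.DataFrame({
    "logP":             [2.1, 3.4, 1.8, 4.0, 2.9, 3.7],
    "microsome_t_half": [42, 15, 55, 9, 21, 12],               # minutes
    "oral_F":           [0.61, 0.22, 0.70, 0.08, 0.35, 0.15],  # fraction bioavailable
    "herg_ic50_uM":     [12, 4, 20, 2, 6, 3],
})

# Rank correlation is a reasonable default for noisy, non-linear assay data.
corr = data.corr(method="spearman")

# Keep each endpoint pair once (upper triangle), then sort by strength so
# the relationships most worth a human look float to the top of the list.
mask = np.triu(np.ones(corr.shape, dtype=bool), k=1)
pairs = corr.where(mask).stack().rename("spearman_rho").reset_index()
pairs["abs_rho"] = pairs["spearman_rho"].abs()
print(pairs.sort_values("abs_rho", ascending=False))
```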

All this makes me actually a bit hopeful about experiments like this one that I described here recently. Our ignorance about medicine and human biochemistry is truly spectacular, and we need all the help we can get in understanding it. There have to be a lot of important things out there that we just don't understand, or haven't even realized the existence of. That lack of knowledge is what gives me hope, actually. If we'd already learned what there is to know about discovering drugs, and were already doing the best job that could be done, well, we'd be in a hell of a fix, wouldn't we? But we don't know much, we're not doing it as well as we could, and that provides us with a possible way out of the fix we're in.

So I want to see as much progress as possible in the current pattern-recognition and data-correlation driven artificial intelligence field. We discovery scientists are not going to automate ourselves out of business so quickly as factory workers, because our work is still so hypothesis-driven and hard to define. (For a dissenting view, with relevance to this whole discussion, see here). It's the expense of applying the scientific method to human health that's squeezing us all, instead, and if there's some help available in that department, then let's have it as soon as possible.

Comments (32) + TrackBacks (0) | Category: Drug Assays | Drug Development | Drug Industry History | In Silico | Pharmacokinetics | Toxicology


COMMENTS

1. Rick Wobbe on April 4, 2012 8:23 AM writes...

For some reason this reminds me of the Elizabeth Moon novel, "The Speed of Dark", in which pharmaceutical companies determined that it wasn't computers that were best equipped for this, but armies of autistic savants. Although it wasn't explicitly stated in the book, the reason seemed to be that idiosyncrasies in human pattern-seeking behavior, which can be raised to an extraordinary level in autistic savants, could not be mimicked by machine intelligence. Hmmm, maybe the vaccine-autism link isn't hogwash after all: it's the pharmaceutical industry building the next decade's workforce. I smell prequel...


2. lt on April 4, 2012 8:28 AM writes...

I'm personally looking quite forward to the time when HTS assays are routinely run on multi-million-well plates with simultaneous parallel detection and high sensitivity. In a machine the size of an inkjet printer and about as expensive...


3. MIMD on April 4, 2012 8:54 AM writes...

So I want to see as much progress as possible in the current pattern-recognition and data-correlation driven artificial intelligence field.

So do I, but am highly skeptical it can happen soon.

Beware the Syndrome of Inappropriate Over-confidence in Cybernetics™.

As far back as the 1950's we were promised perfect language translation. We can't even do that.

The signal-to-noise ratio when searching for uncommon or specialized information with current search engines is not, let's just say, entirely optimal.

Lately, with every computer advance we're promised cybernetic miracles in medicine, such as with IBM Watson (link).

We discovery scientists are not going to automate ourselves out of business so quickly as factory workers, because our work is still so hypothesis-driven and hard to define.

I think the danger of management incompetence is several orders of magnitude more important than the threat of a HAL9000 drug scientist at this point. Layoffs, after all, are a sign of management failure.


4. PharmaHeretic on April 4, 2012 9:04 AM writes...

But have you considered the flip side of this issue, namely that

Machines are not consumers!

That is right. Unless we find a way to separate human income (or ability to buy products and services) from human jobs, society will come apart or implode. I find it funny that neoliberal capitalism is doing its best to prove Marx right.


5. John Wayne on April 4, 2012 9:45 AM writes...

I would argue that machines can be considered consumers, and people who learn the job of tending to the machines (even if it is just to feed the dog) will find jobs in the future.

All this fancy equipment breaks, and somebody has to fix it.


6. TJMC on April 4, 2012 10:31 AM writes...

Derek - Great and provocative post. It reminds me of >10 years ago, when the first consulting firm I was with (after 20 years in R&D) had a big practice of putting in SAP. They proposed that after they put it into the drug manufacturing plants, they could then "grow it upstream" into R&D. Much to their dismay, I pointed out many of the same things you raised about (sort of) "treating it (R&D) too much like a well-worked-out human-designed engineering process", which it was not.

I DID point out that Development was far more routinized than Discovery (constrained by regs, etc.), and that there, decision support as well as project and resource management tools might well be applied. Above, your musings about the far greater power of "second wave" capabilities might just find application even farther upstream (but it will take 5-10 years, as the SAP-like capabilities took.) These technologies always over-promise on timelines. I know that they (some 2nd wave capabilities) are applicable now and I have been guiding folks on piloting several (actually a lot of fun, and 1-in-3 have tremendous benefits.) But not the dystopian transformation implied above. Just yet. Still, apart from Watson superpowers, we already use such things as in-silico disease modeling and adaptive trial “what if” modeling. - Terry


7. schinderhannes on April 4, 2012 10:36 AM writes...

I want a Gildemeister to grind whatever molecule I desire out of a solid block of atoms!


8. Anonymous on April 4, 2012 10:47 AM writes...

Here's one strategy for staying relevant: (1) figure out what us humans do better than machines, and (2) try to get better at those things.

For instance we could figure out what human thoughts and thinking are. We understand how computers "think" but don't understand how the brain does it. We could figure it out if we simply asked good questions and kept an open mind. It's a solvable problem.


9. Anonymous on April 4, 2012 11:22 AM writes...

Perhaps we'll finally get that cool GC-MS from "The Medicine Man" that separates a compound, determines its composition & structure (with absolute stereochemistry), and assays for biological activity!


10. patentgeek on April 4, 2012 11:34 AM writes...

#8:
There's a field of scientific endeavor devoted to your task: it's called neuroscience, and a university near you may well offer a graduate program in some aspect of it, if you're interested. Those of us whose careers include a stint in psychopharmacology, or related areas, are somewhat impressed by the magnitude of the problem of understanding the emergent property of consciousness on a cellular and molecular level.


11. barry on April 4, 2012 12:08 PM writes...

Our field is prone to episodic messianic fevers. First QSAR was going to replace med. chem. as we knew it. Then Rational Design, then CombiChem, then Fragment-Based Design (no doubt I've missed a few in there). When the fever passes, (the surviving) med. chemists have one more tool. We should use computers for all they're worth, but they too won't replace med. chem. (for a long time yet).


12. Stewie Griffin on April 4, 2012 12:16 PM writes...

#8
The answer to those questions is 42 I believe :)

John Wayne
Machines need someone to maintain them but suppose we got to the point where only a handful of folks are needed to maintain a plant which could produce nearly all consumer goods. What is the rest of society supposed to do? (I do realize this is taking things to the logical extreme, but still an interesting topic to discuss IMHO)


13. Stewie Griffin on April 4, 2012 12:19 PM writes...

FYI, by "plant" I meant manufacturing facility, not vegitation


14. Stewie Griffin on April 4, 2012 12:20 PM writes...

FYI, by "plant" I meant manufacturing facility, not vegetation


15. Hap on April 4, 2012 12:25 PM writes...

If computers look for patterns hard enough they will find them (just like we do, only harder). Is there an automatable procedure to determine which patterns are real as opposed to statistical artifacts? If not, there might still be openings for people in applying statistical knowledge to the correlations found in the piles of data.


16. Tom Womack on April 4, 2012 2:32 PM writes...

Yes, of course there's a technique for finding real patterns: it's the scientific method. If the same patterns show up in the next experiment, they're more likely to have been real. And if the result of the automation is that you can readily do the next experiment - if you're sitting at the back end of California's unified medical records system and can do stats on the next ten thousand heart attacks coming in, or if you've got a robot and a pile of 6144-well plates available to be filled - then you can apply that method.
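
That replication test is easy enough to automate, too. A minimal sketch with deliberately random data: mine one half for the readout best correlated with the outcome, then check whether that "pattern" survives in the held-out half (with pure noise, it almost never does):

```python
import numpy as np

rng = np.random.default_rng(0)

# 200 "compounds" x 50 assay readouts of pure noise, plus a noise outcome.
X = rng.normal(size=(200, 50))
y = rng.normal(size=200)

# Mine the first half of the data for the readout best correlated with the outcome...
half = len(y) // 2
train_r = [abs(np.corrcoef(X[:half, j], y[:half])[0, 1]) for j in range(X.shape[1])]
best = int(np.argmax(train_r))

# ...then ask whether that "discovery" replicates in the held-out half.
test_r = np.corrcoef(X[half:, best], y[half:])[0, 1]
print(f"best readout in first half: #{best}, |r| = {train_r[best]:.2f}")
print(f"same readout in second half: r = {test_r:.2f}")
```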


17. Brad Arnold on April 4, 2012 4:36 PM writes...

My suspicion is that the combination of mind and machine will chart vast paths through tremendous amount of data. Transhumanism. Yet, eventually, AI will triumph (the Singularity is coming).


18. John on April 4, 2012 4:37 PM writes...

Per an acquaintance who works in factory engineering, the Chinese pay his company to rip out their line automation components. Replacing one skilled worker with 20 peasants is a cost-saving device in most of China.

Remember, this is a country where they sometimes use guys with little brushes instead of bees to pollinate crops.


19. MoMo on April 4, 2012 5:25 PM writes...

Schinderhannes! Genius! You stole my idea, but that's OK, because there is plenty of room for innovation here. It's along the same idea though. We have been working on building large models of molecules and then shrinking them down to atomic size - much easier to work with, and I can get the model-atoms from China. Then we can get any 2 year old to build molecules like building blocks and save even more money!


20. hn on April 4, 2012 5:28 PM writes...

@18: The bees in parts of China were wiped out by indiscriminate use of pesticides. Bees in the US are being threatened too.

We need to build robotic bees.


21. gippgig on April 4, 2012 7:52 PM writes...

Any biotechs working on making plants self-pollinating? That's an interesting one.


22. Anonymous on April 4, 2012 8:17 PM writes...

@ 10: Yes, I know what neuroscience is. I even know what a university is, if you can believe...

What I don't know is why it is taking so long for academic folks to solve the basic questions of how the brain works.

For instance, what is a thought? What is thinking? How does the brain derive meaning from language? These btw are questions of psychology as much as neuroscience. But we have little understanding from either perspective.

It seems to me that the "publish or perish" mentality is partially to blame. Most scientists probably don't have time to really think, to pursue a novel question, or to thoroughly research unconventional ideas.

The pressure to publish, combined with the group-think necessary to have one's ideas accepted, leaves little room for original thought or research.

Perhaps you are right that we just need more information about molecules and neurons/glia. But my belief is we already have more than enough information to figure it out.


23. zmil on April 4, 2012 9:38 PM writes...

@12 Stewie Griffin

It is indeed an interesting question, one that I've been pondering for several years now. My current vision of the nearish future (no strong AI, but cheap automation of all repetitive tasks, both manufacturing and service) is that there will be an increase in demand for goods that have a 'personal touch', so to speak.

Live music, handcrafted utensils, clothes, etc, restaurants/artisanal food, personal services (no cashiers, but plenty of hairdressers), and so on.

Also engineers and scientists and other higher level thinking jobs. I'm with Derek on this, I don't think scientists will be replaced any time soon. Grad students, maybe... 80% of the actual work I do could be done by a robot. And if I had a robot, man, I could get so much done...I'd find out that my project is doomed a lot faster...

Even some data analysis can be automated, as the Eureqa article Derek linked to shows.

But designing the experiments? I don't see computers doing that anytime soon. Too many variables to brute force it, too much background knowledge that must be applied.


24. gippgig on April 4, 2012 11:44 PM writes...

#12: Create, help others, discover (not intended to be an exhaustive list; any others?).
#4: We need to accept that many people should not have to financially support themselves. The people who are good at generating wealth should generate wealth, those who are good at creating should create, etc. Anything else is a waste of people's talent.


25. eugene on April 5, 2012 2:55 AM writes...

"Machines need someone to maintain them but suppose we got to the point where only a handful of folks are needed to maintain a plant which could produce nearly all consumer goods. What is the rest of society supposed to do? (I do realize this is taking things to the logical extreme, but still an interesting topic to discuss IMHO)"

Well, you can take a look at what the rest of the people are doing in Syria, where a drought put only about 10% of farmers (many of them subsistence farmers) out of business over the last five years. So, to answer the question, you'll never reach the logical extreme, because a violent and destructive killing spree will happen long before 90% of humans become redundant.


26. eugene on April 5, 2012 2:56 AM writes...

Sorry, I meant "10% of the population" above, not "10% of farmers."


27. eugene on April 5, 2012 3:13 AM writes...

"What I don't know is why it is taking so long for academic folks to solve the basic questions of how the brain works.

For instance, what is a thought? What is thinking? How does the brain derive meaning from language?"

These are not basic questions. Either you are wrong here, or the entire field of neuroscience. I don't want to dismiss your views, but from your comment it seems like you are someone from the outside looking in (like me) and we're both not capable of commenting intelligently on the issue before learning a lot of at least the basics of the field. Not to mention we're sorely not equipped to pass judgements on it.

Of course politicians do that all the time by raising or denying funding and they are not scientists, but I hope we don't go down to that level of discourse on a science blog: "What have you done for me lately? You can't even figure out how people think so what's the big idea with your field anyways!?"

Instinctively, to me it sounds like someone asking on this blog, "What the hell have you chemists done for me lately!? You can't even figure out how to assemble non-repeating 3D structures any way you want them except by accident! And why can't you predict toxicity anyways before sending out that molecule for a clinical trial? What do we pay you for? This is a clear example of the publish or perish academic mentality that stops all original thought."


28. gippgig on April 5, 2012 1:43 PM writes...

#25: Syria is a bad analogy because a drought causes a shortage of resources (which itself often leads to unrest) which is the opposite of the #12 case.


29. eugene on April 6, 2012 3:01 AM writes...

The effect is the same if it's a lot of people out of work with no hope of getting a decent job. I suppose in an advanced robotic society charity would be more kind, but it's not a guarantee.

You probably need to c