Ray Kurzweil has responded to the criticism of his Singularity Summit comments on reverse-engineering the brain, a chorus to which I added my voice here. He says that he was misquoted on the timeline and on the importance of genomic data for doing it.
His plan, he says, is to understand what level of complexity a system will need in order to organize and adapt to stimuli the way the brain does, and the modular nature of the brain's organization gives him hope that this can be realized:
For example, the cerebellum (which has been modeled, simulated and tested) — the region responsible for part of our skill formation, like catching a fly ball — contains a module of four types of neurons. That module is repeated about ten billion times. The cortex, a region that only mammals have and that is responsible for our ability to think symbolically and in hierarchies of ideas, also has massive redundancy. It has a basic pattern-recognition module that is considerably more complex than the repeated module in the cerebellum, but that cortex module is repeated about a billion times. There is also information in the interconnections, but there is massive redundancy in the connection pattern as well.
Fine. But even that argument triggers the reaction in me that Kurzweil's statements often do. I wasn't aware that we had "modeled, simulated, and tested" a cerebellum yet, for one thing. If that's so well worked out, where is it? Why aren't industrial robots a lot more coordinated? I assume that one reason is that we haven't done it with ten billion processing modules yet. But if we haven't, does that really qualify as something that's been tested? Will it all really just be a matter of scaling up, or will more subtle features become important along the way?
He also goes on to say that "We have sufficiently high-resolution in-vivo brain scanners now that we can see how our brain creates our thoughts and see our thoughts create our brain." I'd disagree with that statement. The resolution of brain imaging techniques has been improving steadily, but it's still crude compared to what we're going to need. Every time we improve it, we find that things are more complicated than we thought.
If any of Kurzweil's exponential-growth predictions are to come true, though, it'll be the ones that involve computing power most directly, since that's where this sort of growth has come most reliably and spectacularly. I just don't think that our understanding increases at the same rate - and not every problem will find a solution through our ability to throw more processing power at it.
How do I reconcile this attitude of mine with my reasons-for-optimism post of the other day? Well, as I've said, we don't need miracles in drug discovery (although I'll welcome any that might show up). We just need to do things a little bit better than we do already - it's that young a field, and we're that poor at it. Compared to what we could know, and what we might be able to do, we're still way back on the curve. When your clinical failure rate is 90%, anything you can do better is an improvement. I'm not asking us to figure out predictive human toxicology in ten years (or claiming that we will). I just want to fail miserably eight out of ten times, instead of nine. And thus double the number of drugs coming to market...
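The arithmetic behind that last point is worth making explicit. A minimal sketch (the candidate count is a made-up illustrative number, not industry data):

```python
# Back-of-the-envelope: what a modest drop in clinical failure rate does to output.

def approvals(candidates, failures_per_ten):
    """Drugs reaching the market, given how many of every ten candidates fail."""
    return candidates * (10 - failures_per_ten) // 10

candidates = 100                   # hypothetical candidates entering the clinic
today = approvals(candidates, 9)   # fail nine out of ten -> 10 approvals
better = approvals(candidates, 8)  # fail "only" eight out of ten -> 20 approvals

print(today, better)  # 10 20
```

Going from a 90% failure rate to 80% only looks like a ten-point improvement, but it doubles the success rate, and therefore the number of drugs that make it through.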