
In the Pipeline


August 23, 2010

Kurzweil Responds


Posted by Derek

Ray Kurzweil has responded to the criticism of his Singularity Summit comments on reverse-engineering the brain, a chorus to which I added my voice here. He says that he was misquoted on the timeline and on the importance of genomic data for doing it.

His plan, he says, is to understand what level of complexity a system will need in order to organize and adapt to stimuli the way the brain does, and the modular nature of the brain's organization gives him hope that this can be realized:

For example, the cerebellum (which has been modeled, simulated and tested) — the region responsible for part of our skill formation, like catching a fly ball — contains a module of four types of neurons. That module is repeated about ten billion times. The cortex, a region that only mammals have and that is responsible for our ability to think symbolically and in hierarchies of ideas, also has massive redundancy. It has a basic pattern-recognition module that is considerably more complex than the repeated module in the cerebellum, but that cortex module is repeated about a billion times. There is also information in the interconnections, but there is massive redundancy in the connection pattern as well.

Fine. But even that argument triggers the reaction in me that Kurzweil's statements often do. I wasn't aware that we had "modeled, simulated, and tested" a cerebellum yet, for one thing. If that's so well worked out, where is it? Why aren't industrial robots a lot more coordinated? I assume that one reason is that we haven't done it with ten billion processing modules yet. But if not, does that really qualify as something that's been tested? Will it all really just be a matter of scaling up, or will more subtle features become important along the way?

He also goes on to say that "We have sufficiently high-resolution in-vivo brain scanners now that we can see how our brain creates our thoughts and see our thoughts create our brain." I'd disagree with that statement. The resolution of brain imaging techniques has been improving steadily, but it's still crude compared to what we're going to need. Every time we improve it, we find that things are more complicated than we thought.

If any of Kurzweil's exponential-growth predictions are to come true, though, it'll be the ones that involve computing power most directly, since that's where this sort of growth has come most reliably and spectacularly. I just don't think that our understanding increases at the same rate - and not every problem will find a solution through our ability to throw more processing power at it.

How do I reconcile this attitude of mine with my reasons-for-optimism post of the other day? Well, as I've said, we don't need miracles in drug discovery (although I'll welcome any that might show up). We just need to do things a little bit better than we do already - it's that young a field, and we're that poor at it. Compared to what we could know, and what we might be able to do, we're still way back on the curve. When your clinical failure rate is 90%, anything you can do better is an improvement. I'm not asking us to figure out (or claiming that we will figure out) predictive human toxicology in ten years. I just want to fail miserably eight out of ten times, instead of nine. And thus double the number of drugs coming to market. . .

Comments (23) + TrackBacks (0) | Category: The Central Nervous System


COMMENTS

1. processchemist on August 23, 2010 7:56 AM writes...

Here's the problem with guru-mode thinking: you take a few assumptions (often unproven), then in a few lines you sketch the big landscape - who cares about the single leaf on the tree, or, better, who cares about the single tree in the forest?
But the devil is in the details...


2. Data Police on August 23, 2010 9:07 AM writes...

The discussion is very personal to me. I think all the criticism is reasonable.

I think when it comes to the brain, we are really in our infancy in our understanding. I am afraid we will be in that state for a long time to come.
I was once a practising neurobiologist. During that time we looked at zebrafish neurons with calcium imaging while the animal was performing behaviors. Yes, in vivo imaging at the cellular level while the animal responds to different stimuli. The whole lab was only looking at 300-odd neurons.
The redundancy and processing power behind even simple fixed behaviors look so complex that I begin to wonder how people can make broad-stroke statements about higher-order functions. Having said that, I believe the true mystery of the brain is probably simpler than we see it.

Even if you do understand and model the brain, the path to discovering drugs that modify the brain in a safe way is the challenge. My claim is that we need not understand the brain that well. We need to understand how to sensibly make drugs. For that we need to make good, testable models, both in vitro and in vivo.


3. imarx on August 23, 2010 9:13 AM writes...

"Every time we improve it, we find that things are more complicated than we thought."

I still don't see how you can reconcile this statement with your "we'll figure things out eventually" post. How does this not apply to everything we know about cancer, neuroscience, etc.? Every time we move towards the goalpost, it just moves further away...


4. Cellbio on August 23, 2010 9:58 AM writes...

imarx,

I'll give you my opinion. We need not understand the complexity of biology, only the interaction of a drug with a biological system - that is, pharmacology. This is the art that has been largely replaced in pharma by target-based efforts.

As I posted on another entry, biological insight may be useful in target choice; an example might be Her2. However, that case is a rather straightforward model, and in my experience, more often than not, insight gives you a clue that something is involved while fundamental questions about its precise role remain open. So, instead of perfect clarity about the role of a protein in human disease, we get bits of information that meet our internal target-validation checklists and make everyone feel comfortable about the path ahead. I've even heard two projects pitch targets without knowing whether the goal was to make an agonist or an antagonist. It was kind of like Dorothy asking the Scarecrow which way to go.

So yes, the more we learn, the more we realize there is more we don't know, and the more we pin all of our hopes on biological insight as a starting point for unleashing medicinal chemists, the more we will struggle, in my opinion. In essence, I think the problem is that, at least for the slice of the industry I have seen, medicinal chemists are powerful when partnered with pharmacologists, and struggle to add value when partnered with molecular biologists or the like.

Cellbio (Molecular biologist turned cellular pharmacologist)


5. Neurosloth on August 23, 2010 10:26 AM writes...

Ray Kurzweil still does not understand the brain.

Kurzweil loves to fall back on the exponential growth of IT, but that really has nothing to do with the problem. If you were to go 10, 15, or even 20 years into the future and bring their absurdly fast computers back to the present, we still wouldn't understand the brain; we'd just be able to run our simplified and inaccurate models faster.

His references to the human genome project are more apropos than he realizes. Yes, we handily conquered that goal through brute force and technological advancement, but in the process we advanced our biological understanding only negligibly. It's not so much information we need as comprehension, and the only way to gain that is through slow, tedious, in-the-trenches scientific investigation. There's no exponential speed-up for that.

The scope of the problem is simply beyond him. Back to ye sci-fi conventions, futurists!


6. Daniel Newby on August 23, 2010 12:25 PM writes...

Kurzweil is both right and wrong. Yes, nanotech and information processing will produce astonishing knowledge of the brain, and a lot sooner than most people think. It is not at all implausible that we could simulate all the ion flows of a human brain while it "thinks" within a decade or two.

The problem is that the brain is more than ion channels. Dial down the ion channels just a little and you get a coma. Dial them up just a little and you get epilepsy, or migraine, or runaway reinforcement. Desynchronize the functional blocks he talks about by just a little and no reinforcement (learning) occurs. It takes some mighty fancy regulatory networks to keep all this going, and that's all down to rather subtle chemistry carried out in jelly. And that's a lot harder to scan than architecture and channel location. I mean, how the hell do you accurately measure how the alpha-4 subunits of the nicotinic receptor are selectively phosphorylated and dephosphorylated on a time scale of minutes in living humans? In each of the distinctive sub-organs of the brain? It's a tar pit.


7. MoMo on August 23, 2010 12:59 PM writes...

While Kurzweil is no doubt a genius in his field, that's where he should stay.

Who cares what he thinks about the brain anyway?

One other thing - another ACS meeting I am skipping. Don't care to see the same old stuff and the same old players congratulating themselves on nothing.


8. Vince on August 23, 2010 3:45 PM writes...

Unfortunately, reading Derek's response triggers an analogous response in me. It reminds me why I stopped spending time with biologists and started working with neural-engineers and biophysicists.

These arguments and dismissals are somewhat typical of the field - hand-waving maneuvers designed to show the extreme complexity of it all: 'Billions of years', 'Bewildering variety', etc.

So what?

Hodgkin & Huxley did not have molecular details, or the exact proteome configuration space, etc. Yet they reverse-engineered a model of the neuron which was quite correct and furthered the field tremendously. Today, we can run NEURON (on a laptop) and set up almost any given scenario.
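
(For readers who haven't met it: below is a minimal sketch of the Hodgkin-Huxley squid-axon model Vince is referring to, with the standard 1952 parameter fits. NEURON solves much richer versions of these equations; the forward-Euler stepping and crude spike count here are illustration-grade simplifications, and the removable singularities in the rate functions at V = -40 and -55 mV are ignored.)

```python
# Minimal Hodgkin-Huxley simulation (forward Euler, squid-axon fits).
# Voltages in mV, time in ms, currents in uA/cm^2, C = 1 uF/cm^2.
import math

def rates(V):
    """Gating-variable rate constants (classic 1952 forms)."""
    am = 0.1 * (V + 40) / (1 - math.exp(-(V + 40) / 10))
    bm = 4 * math.exp(-(V + 65) / 18)
    ah = 0.07 * math.exp(-(V + 65) / 20)
    bh = 1 / (1 + math.exp(-(V + 35) / 10))
    an = 0.01 * (V + 55) / (1 - math.exp(-(V + 55) / 10))
    bn = 0.125 * math.exp(-(V + 65) / 80)
    return am, bm, ah, bh, an, bn

def simulate(i_inj=10.0, t_max=50.0, dt=0.01):
    """Return spike count for a constant current injection."""
    gna, gk, gl = 120.0, 36.0, 0.3      # max conductances, mS/cm^2
    ena, ek, el = 50.0, -77.0, -54.387  # reversal potentials, mV
    V, m, h, n = -65.0, 0.053, 0.596, 0.317  # resting state
    spikes, above = 0, False
    for _ in range(int(t_max / dt)):
        am, bm, ah, bh, an, bn = rates(V)
        m += dt * (am * (1 - m) - bm * m)
        h += dt * (ah * (1 - h) - bh * h)
        n += dt * (an * (1 - n) - bn * n)
        i_ion = (gna * m**3 * h * (V - ena)
                 + gk * n**4 * (V - ek) + gl * (V - el))
        V += dt * (i_inj - i_ion)
        if V > 0 and not above:          # crude upward-crossing detector
            spikes, above = spikes + 1, True
        elif V < -30:
            above = False
    return spikes

print(simulate(10.0), "spikes in 50 ms")
```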

At this point it needs tiny iterative changes and then it's merely computation bound. IBM's Blue Brain Project has demonstrated this sufficiently for me. And while I don't expect people here to read papers on this, Henry Markram's TED talk is at least a little education (http://tinyurl.com/yj23ya4).

Of course there are always more details to be discovered at a lower level. I spend a large amount of time working with a handful of these; it's a necessary evil if you wish to work within a biological system and an acute condition. But I also recognize that while we can wax poetic on and on about the majesty of how complex biology is (as PZ Myers did), at the end of the day it's really not all that important. Not all biological circuits are designed equally; not all genes are equal in a given motif.

#7: Who cares what he thinks?

I, for one, do. If you just spend your time learning from within your own field (or subfield!) you are doomed. Actually, I'm wrong. That's not true; many a biologist has made a great living as a stamp-collector.

There are many techniques, especially from information theory/thermodynamics and computer science, which should really be mandatory knowledge.


9. Osaka on August 23, 2010 3:49 PM writes...

Neurosloth: That isn't true even in the barest sense of the word. A.I. research has largely been processing-power limited for the last forty years. Neural networks, for example, were abandoned for a long time (10+ years) because they were thought to be too inefficient and not powerful enough on the hardware of the time. If, in that time frame, someone had reached forward a decade or two and pulled back hardware capable of running neural networks natively or in optimized form (for example, modern GPGPU processing), the entire line of research would have changed dramatically.
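
(To make "running a neural network" concrete: here is a toy two-layer network learning XOR in plain numpy. The architecture, learning rate, and iteration count are arbitrary illustrative choices; GPGPU hardware runs this same matrix arithmetic at vastly larger scale.)

```python
# A minimal two-layer network learning XOR with plain numpy.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros((1, 8))
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros((1, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    hidden = sigmoid(X @ W1 + b1)            # forward pass
    out = sigmoid(hidden @ W2 + b2)
    d_out = (out - y) * out * (1 - out)      # backprop, squared error
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ d_out             # gradient-descent updates
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_hid
    b1 -= 0.5 * d_hid.sum(axis=0, keepdims=True)

print(np.round(out.ravel(), 2))   # should approach [0, 1, 1, 0]
```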

One can only develop what one has the capability to run; the ability of computers to do work that humans cannot is tremendous, and without the technology, the research, as promising as it is, dies on the vine.

In today's world, processing power is usually less of a problem than the heat required to execute, so if we could reach forward 10-20 years and bring back CPUs with identical processing power but vastly better thermal envelopes (perhaps because they use reversible circuits), that would tremendously change the game. Our models are intrinsically based around the technology we have and the limits we have; change them, and you change the models.

Another example would be protein folding; the current models are vastly inferior to a true simulation because the computing power simply isn't there, and we have to make do with what we have. Remove those restrictions, or change the game, and the algorithm changes; an excellent example, as I said, is GPGPU-based algorithms, which increase the accuracy and speed of folding by large factors.


10. Handles on August 23, 2010 10:03 PM writes...

Vince,

The way the brain processes information is obviously not unique to brains, and can certainly be simulated using hardware other than meat, and a complete understanding of biology is not necessary. No argument there.

The problem is that one line: "25 million lines of code". That number is simply insufficient to describe the hardware that the brain runs on. There can be no argument there, except to say he was misquoted...


11. RKN on August 23, 2010 10:09 PM writes...

But even that argument triggers the reaction in me that Kurzweil's statements often do. I wasn't aware that we had "modeled, simulated, and tested" a cerebellum yet, for one thing.

I agree with that much. His review of the current state of knowledge tends toward hyperbole, in a similar way that some Nova and Nature specials often do. Viewer beware of claims prefaced with: "Scientists now know..."


12. cancer_man on August 24, 2010 4:41 AM writes...

Derek, the Sirtris paper was published over two weeks ago. Why no response? Kurzweil is a longevity guy, so I'm posting here.

Pfizer posts in January claiming GSK compounds are "Worthless. Really?" (Derek's title):

Jan 12, (123 comments)
Jan 15, (60 comments)
Jan 25, (40 comments)

and a couple more.

GSK/Sirtris paper claiming Pfizer et al. are wrong?

0 posts, 0 comments

Take your time.


13. sgcox on August 24, 2010 5:56 AM writes...

If anything, that paper confirmed that the compounds are not working as advertised.
They are active only with very short peptides. Once you make the substrate longer - closer to the real stuff - any activation effect disappears.


14. Anonymous on August 24, 2010 9:03 AM writes...

Osaka: well, enough computing power makes AI a solved problem, yeah. If you have a countably infinite amount of computation, then you can just run AIXI (http://www.hutter1.net/ai/aixigentle.htm). It may not be 'AI', but at that point the difference between AI and non-AI is irrelevant.

Derek: I think the cerebellum Kurzweil is talking about is just the Blue Brain project, which was a small part of a mouse or rat brain. It hasn't revolutionized anything because it's infeasible to stick a supercomputer in industrial robots and wait a few weeks between every action.


15. Neurosloth on August 24, 2010 10:16 AM writes...

@Osaka

You're missing the point, in exactly the same way Kurzweil does. AI researchers want to believe that you can keep pushing and refining the computer simulations and eventually you'll come up with a brain. And everyone who has ever done neuroscience wetwork looks at a statement like that and tries not to spray coffee on their monitor. A simplified neuron times 10^8 is not any smarter than a simplified neuron times 10^2.

Using supercomputers to model the Earth's atmosphere hasn't made us any better at predicting the weather, and for similar reasons: without precise variables and a very thorough understanding of the underlying processes, you can't model large-scale dynamic systems accurately. You can throw as many TFLOPS as you want at the problem--it's not going to help.
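
(A concrete version of this point, using the textbook example of a chaotic system: two Lorenz-attractor runs whose initial conditions differ by one part in a billion end up completely decorrelated. The parameters below are the classic ones; the point is that extra FLOPS shrink the integration step, not the measurement error.)

```python
# Sensitive dependence in the Lorenz system: two runs differing by
# 1e-9 in the initial state diverge to the full size of the attractor.
def lorenz_step(s, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-9, 1.0, 1.0)   # "perfect" vs. measured initial condition
for step in range(40001):
    if step % 10000 == 0:
        print(f"t = {step * 0.001:5.1f}  |x_a - x_b| = {abs(a[0] - b[0]):.3e}")
    a, b = lorenz_step(a), lorenz_step(b)
```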

I do think computational modeling a la Blue Brain is worthwhile, insofar as it helps us gain a better understanding of how these large-scale systems behave. But if it's possible to build a brain in silico (and I believe that, in principle, it is), the necessary advances are going to need to come from the bottom-up, not the top-down. That means improvements in our understanding of biology, not faster processors.


16. TFox on August 24, 2010 11:06 AM writes...

A big assumption here, which seems to be shared by both Kurzweil and his critics, is that understanding how the human brain works is the best approach to building useful machine intelligence. This isn't so clear to me; I'm more impressed by the nonhuman intelligence behind a Google search, or by IBM's Jeopardy project. Besides, there are ~7e9 human intelligences on the planet, and for many tasks they are available very cheaply.


17. Osaka on August 24, 2010 11:11 AM writes...

Neurosloth: What? Have you actually ever worked with neural networks? Actually programming with neural networks would show you the scalability of the system; the difference between a system that literally cannot identify a single data point and a system that can identify more data points than are relevant can be a single neuron in the correct place. Neural networks are all about scaling; to deny that is to deny why we even use them in the first place. The addition of more neurons adds to the power of the system, often in more than additive ways.

In addition, using supercomputers to model the Earth's atmosphere HAS made us VASTLY better at predicting the weather; it just hasn't led to what the general public wants weather prediction to be. The inability of a system to make exacting predictions does not negate the validity of the system nor of its uses. The supercomputer in Japan that has run full-scale simulations of weather events has led to interesting and novel techniques for predicting localized and global trends, despite not assisting casual weather prediction a single iota.

Understanding how biology works is absolutely necessary for replicating a brain in silico; I'll agree 100% there. But it is not sufficient: without the power to run such a simulation, all the understanding in the world is irrelevant. In addition, I personally believe that the first step will come not from biology but from engineering; a brain in silico need not develop from our wetware to be capable of cognition.


18. Vince on August 24, 2010 6:27 PM writes...

Handles,

I agree that it's pretty apparent Kurzweil was taken a bit out of context.

About the 25M lines of code: of course it's insufficient to provide a full description of a mature CNS, but much of that description is unnecessary for a first-order approximation of the design (or function) of the CNS and is just apparent complexity. The entropic content of, say, a bijective cellular automaton will fluctuate with time, but that doesn't mean it's "indescribable" by a given rule. Emergence does not mean uncomputable or undefinable.
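
(A cheap illustration of "apparent complexity from a compact description": an elementary cellular automaton. Rule 30 below is not bijective - it's chosen only because it's the standard one-line example - but it shows an 8-entry lookup table generating output that looks far more complicated than the rule that fully constrains it.)

```python
# Elementary cellular automaton, rule 30: intricate output from
# an 8-entry lookup table applied on a ring of cells.
RULE = 30
WIDTH, STEPS = 63, 30

row = [0] * WIDTH
row[WIDTH // 2] = 1                 # single live cell in the middle
for _ in range(STEPS):
    print("".join("#" if c else "." for c in row))
    row = [(RULE >> (4 * row[(i - 1) % WIDTH]     # left neighbor
                     + 2 * row[i]                 # self
                     + row[(i + 1) % WIDTH])) & 1 # right neighbor
           for i in range(WIDTH)]
```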

That said, the encoded genetic information is all there is to initially describe the brain, and as such it does something very important: it imposes constraints. It sets boundary conditions on the biosystem, in both time and space, that allow us to predict the accessible configuration space of the system. This is what Kurzweil is getting at, and he's absolutely correct, although most radio-fixing biologists and chemists may not understand why.

See, this is one of the things PZ Myers just doesn't understand. I don't mean to be rude, but having read both his responses, the man is completely out of his intellectual league.

For example, his Intel CPU argument is atrocious.

The design of a microprocessor has a strong analogy to a neural system. At a fairly basic informational level, you encode the design in a standardized language. Using VHDL to describe a MOSFET transistor is an example with a strong parallel to the encoding of a gene in DNA.

Having the VHDL of, say, an Intel 8086 actually gives us a tremendous amount of usable information. Sure, it doesn't give you the explicitly written code for any random program, but it does - again - give us something really important: constraints. These allow us to predict the allowed configuration space of an 8086. Given the universality of information theory, you can emulate the 8086 on a different computational fabric and see what happens - how a specific input or sequence perturbs the program. The same holds true for a neural ensemble, cortical column, or brain (see IBM's Blue Brain), and this is very important for the reverse-engineering of biology.

This is what irks me about responses such as Derek's. He seems like an intelligent and thoughtful man, unlike Myers, I would add. Yet their mentality is right out of "How a biologist fixes a radio": smash the hell out of it and categorize the pieces. Unfortunately, that really doesn't yield much. Yes, Myers can pull a random gene out of GenBank, point toward its connectivity, and throw his hands up in the air - it's so complex! Absolutely no way!

Someone needs to remind Myers that all regulatory networks can be broken up into individual network motifs, neatly categorized by their topology, compared for connectivity against the Erdos-Renyi model, and then generalized across cortical areas, organs, and even species. This is all thanks to something called graph theory, which was developed well outside the biology world and without knowledge of what RHEB is. Yet graph theory mixed with computation (will) give(s) us the tools not only to categorize biology but to make structural and functional predictions. And to be honest, a lot of the action at the bottom is going to be redundant and unnecessary in the majority of cases - just as it is in chemistry, I might add; a completed QFT isn't really necessary for the vast majority of cases.
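
(A toy version of that motif-counting idea: count feed-forward loops in a small directed graph and compare against an Erdos-Renyi null model of the same size. The edge list is invented purely for illustration; real analyses use degree-preserving randomizations and far larger networks.)

```python
# Count feed-forward loops (A->B, B->C, A->C) and compare with the
# Erdos-Renyi expectation for a graph of the same node/edge count.
import itertools
import random

def ffl_count(nodes, edges):
    e = set(edges)
    return sum((a, b) in e and (b, c) in e and (a, c) in e
               for a, b, c in itertools.permutations(nodes, 3))

nodes = list(range(8))
real = [(0, 1), (1, 2), (0, 2), (0, 3), (3, 4), (0, 4),
        (2, 5), (5, 6), (2, 6), (6, 7)]   # hypothetical "regulatory" net

random.seed(1)
trials, base = 2000, 0
all_pairs = list(itertools.permutations(nodes, 2))
for _ in range(trials):                    # random graphs, same size
    base += ffl_count(nodes, random.sample(all_pairs, len(real)))

print("feed-forward loops, 'real' net:", ffl_count(nodes, real))
print("mean in Erdos-Renyi null model:", base / trials)
```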

My point isn't to advocate a simple solution or a field that will act as a silver bullet; one doesn't exist. But I am dumbfounded by the absolute defense of the current paradigm when someone with a different tool-set and a different approach comes along and says: hey, we can do this faster. It's not that complex after all; a lot of what you're doing isn't necessary to get a good first-order approximation.


19. Handles on August 25, 2010 12:19 AM writes...

Hi Vince,

I read the response from Kurzweil about "imposing constraints", but I argue that the information in the genome is incomplete, and therefore you can't use it to describe all the possible states of the system, or indeed any of them.

Let's ignore connectivity and networks and emergent properties, and assume that gene A encodes protein A, which is directly analogous to a transistor, i.e. it is a switch that can be set to a number of different positions. Now we start talking chemistry: the different positions of the switch are set by sticking e.g. phosphate groups, acetyl groups, or sulfate groups onto the protein at various places. The problem is that the existence of these switching groups is not encoded in the genome. The genes are not annotated with "phosphate goes on here". Gene A gives no information on any possible states of protein A, let alone which states are physiologically important.

Then, to carry the transistor analogy forward, let's assume protein A is an ion channel, i.e. it's a switch that can be set to different positions, which control the flow of ions. Gene A gives no information on which ions, if any, are controlled by protein A. Indeed, if all you have is the genome, you wouldn't know that sodium, potassium, calcium, or hydrogen ions even exist.

The genome gives no information at all about the possible states of protein A, so any constraints you might set on the whole brain are meaningless.

The genome is not a blueprint for making a brain; it is just a table of data that the pre-existing machinery in a fertilised egg uses to build a person. You cannot set constraints on the possible states of the brain without knowing the possible states of the initial zygote. Describing the latter will take a lot more than "25 million lines of code".
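
(A back-of-envelope version of Handles' argument: if a protein has n independent modification sites, it has 2^n on/off states, none of which are spelled out in the gene that encodes it. The site counts below are arbitrary illustrative numbers.)

```python
# Combinatorial state count for a protein with n independent
# modification sites (phosphate, acetyl, sulfate, ...): 2**n states.
for n_sites in (5, 10, 20, 40):
    print(f"{n_sites:2d} sites -> {2**n_sites:,} possible states")
```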

Looking forward to your reply, I am learning stuff from this.

Handles


20. sepisp on August 25, 2010 3:46 AM writes...

Why does the name Kurzweil keep popping up, and why is it always in the context of some ridiculously information-free futurological prediction? Seriously, sci-fi has equivalent (if not better!) content. Regarding processing power: it is currently limited by manufacturing methods, but it is predicted that by 2015 we'll hit the brick wall called quantum tunneling. We're already within one order of magnitude of it. After that, increasing the size of a parallel processor is the only way to add more processing power. I don't recall any dramatic advances that would constitute biomimicry or emulation of the human - repeat, human - brain. As I understand it, the details of even single neurons are not comprehensively mapped.


21. Osaka on August 25, 2010 4:08 PM writes...

Sepisp: All things considered, your point regarding processing power is not necessarily true, and the conclusion drawn from it is largely irrelevant.

Processing power is not currently limited by manufacturing methods; there are numerous ways to scale all the way down to 16nm and beyond, some of them even with current etching techniques. Double patterning and multiple patterning allow resolution beyond even that, and they operate using existing technology. The reason these techniques are not currently used is, as always, cost; double patterning doubles the time taken to etch, and thus the cost. The cost the market will bear is the limiting factor, not the methods.

2015 is merely the date Intel has slated for when we will be forced to abandon conventional CMOS-based electronics, as at 11-10nm quantum tunneling will overwhelm the electron flow. Keep in mind that circuits have been designed below this barrier, at approximately 2nm, using different processes, and that Intel's road map is optimistic; less optimistic road maps put 11nm at 2022.

Let's, for the sake of argument, assume that 2015 is indeed the end of shrinking electronic circuits, at least for a while. Would this be a substantial loss? Not particularly; a redesign would be in order, and several competing technologies, such as asynchronous or reversible CPUs, would fit into the niche left wide open. Sheer manufacturing limits are not the only place to optimize; by optimizing in other areas, the gains in processing power can be maintained for at least another generation or two.

Even if we assume that no significant gain can be made - that processor development will simply stall in all measurable ways - there is still enormous room to grow in parallel processing, such as the addition of cores or co-processors. We've seen this in dual-, quad-, and now hexa- and octo-core processors; while this is a stopgap, and a marketing tool, it is uniquely suited to modeling the brain, which is itself an enormous parallel-processing machine! In fact, the rut we may or may not fall into in 2015 may be just what we need to begin truly emulating the brain on a much better level: a focus on parallel processing!

Furthermore, who said we need to understand the details of single neurons comprehensively in order to map out the brain to any large degree? Yes, that would certainly be helpful, but the real limit is not our understanding of neurons; it is the processing power to emulate enormous groups of them! A single neuron in an empty jar is not a brain, but a swarm of half-functioning ones can be. The primary limit in this race is how many neurons we can emulate at an acceptable level, not whether we can emulate one exceptionally well.
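
(Osaka's scaling point in miniature: a sketch of a leaky integrate-and-fire population in numpy, where "more neurons" is just a bigger array dimension. The model and its parameters are deliberately crude and invented for illustration; the cost scales with N, not with how well any one neuron is understood.)

```python
# Leaky integrate-and-fire population, vectorized over N neurons.
import numpy as np

N, STEPS, DT = 100_000, 1000, 0.1        # neurons, steps, step in ms
TAU, V_TH, V_RESET = 10.0, 1.0, 0.0      # time constant, threshold, reset
rng = np.random.default_rng(0)

v = np.zeros(N)
spike_total = 0
for _ in range(STEPS):
    drive = rng.normal(1.2, 0.5, N)      # noisy external input
    v += DT * (-v + drive) / TAU         # leaky integration
    fired = v >= V_TH
    spike_total += int(fired.sum())
    v[fired] = V_RESET                   # reset neurons that fired

print(f"{N} neurons, {spike_total} spikes in {STEPS * DT:.0f} ms")
```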


22. Neurosloth on August 27, 2010 2:21 PM writes...

The neural networks that can actually solve simple problems are related only to biological neural networks in abstract principle. They don't attempt to model neuronal architecture or response on an anatomical or electrophysiological level. And here, I think, is where our lines of thinking differ. If you believe it's possible to create human-level intelligence using abstract models without first understanding the neurobiological mechanisms that underlie intelligence in humans, then Godspeed: I wish you good luck. Perhaps I'm biased because I'm in the neurobio field rather than CS, but I remain entirely unconvinced.

With respect to simulations of biological neural networks, scalability is meaningless without an accurate node to scale; that is, a sufficiently detailed model of the neuron. Not just ion flow or membrane potential or even receptor dynamics, but everything from intracellular signaling cascades on down to gene regulation. Why do we need to model all these things? Because these processes all affect neuronal response, even on small time scales, and if you don't include the factors modulating neuronal response, you can't simulate the brain. To my knowledge, current models still cannot accurately predict the behavior of multi-neuron in vitro or in vivo systems, and this is why.

It's also why projects like Blue Brain don't actually attempt to produce problem-solving responses or intelligent behavior: because our current understanding of the brain is not sufficient to derive intelligence from first principles. A dumb neuron makes for a dumb neocortical column makes for a dumb brain. These projects are merely attempts to model large-scale electrophysiological behavior, and until we have more detailed models, that's about all to which they'll be able to aspire.


23. Osaka on August 28, 2010 1:33 AM writes...

Abstractions are part and parcel of engineering and mathematics; without abstraction and axiomatization we would not have many advancements, nor would we have applications for the more pure ones. Much of advanced geometry, for example, has very little to do with the natural measurements that geometry was originally invented to examine; does that make it any less geometry?

In addition, it belittles both our fields to talk of solutions to "simple" problems; neural networks are often used only when their power is required and no other solution is sufficient. To say neural networks are put to work on merely simplistic problems is much like saying atomic bombs are fielded in skirmishes; it is technically true that they CAN be used that way, but it isn't advisable.

As to whether it is possible to model a human level intelligence without using humanistic mechanisms, the entire A.I. community is split as to whether we need