Miguel Nicolelis Says the Brain is Not Computable, Bashes Kurzweil’s Singularity | MIT Technology Review

A leading neuroscientist says Kurzweil’s Singularity isn’t going to happen. Instead, humans will assimilate machines.


Miguel Nicolelis, a top neuroscientist at Duke University, says computers will never replicate the human brain and that the technological Singularity is “a bunch of hot air.”

“The brain is not computable and no engineering can reproduce it,” says Nicolelis, author of several pioneering papers on brain-machine interfaces.

The Singularity, of course, is that moment when a computer super-intelligence emerges and changes the world in ways beyond our comprehension.

Among the idea’s promoters is futurist Ray Kurzweil, recently hired at Google as a director of engineering, who predicts not only that machine intelligence will exceed our own but that people will be able to download their thoughts and memories into computers (see “Ray Kurzweil Plans to Create a Mind at Google—and Have It Serve You”).

Nicolelis calls that idea sheer bunk. “Downloads will never happen,” Nicolelis said during remarks made at the annual meeting of the American Association for the Advancement of Science in Boston on Sunday. “There are a lot of people selling the idea that you can mimic the brain with a computer.”

The debate over whether the brain is a kind of computer has been running for decades. Many scientists think it’s possible, in theory, for a computer to equal the brain given sufficient computer power and an understanding of how the brain works.

Kurzweil delves into the idea of “reverse-engineering” the brain in his latest book, How to Create a Mind: The Secret of Human Thought Revealed, in which he says even though the brain may be immensely complex, “the fact that it contains many billions of cells and trillions of connections does not necessarily make its primary method complex.”

But Nicolelis is in a camp that thinks that human consciousness (and if you believe in it, the soul) simply can’t be replicated in silicon. That’s because its most important features are the result of unpredictable, non-linear interactions amongst billions of cells, Nicolelis says.

“You can’t predict whether the stock market will go up or down because you can’t compute it,” he says. “You could have all the computer chips ever in the world and you won’t create a consciousness.”
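A standard textbook illustration of what “unpredictable, non-linear” means in practice (it is not Nicolelis’s own example) is the logistic map, in which two all-but-identical starting states diverge until prediction is useless:

```python
# The logistic map: a one-line non-linear system whose trajectories from
# nearly identical starting points become completely unrelated. A textbook
# stand-in for "unpredictable, non-linear interactions," not a brain model.

def trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.4)
b = trajectory(0.4000001)  # differs only in the seventh decimal place
print(round(a[-1], 3), round(b[-1], 3))  # after 50 steps: unrelated values
```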

The neuroscientist, originally from Brazil, instead thinks that humans will increasingly subsume machines (an idea, incidentally, that’s also part of Kurzweil’s predictions).

In a study published last week, for instance, Nicolelis’ group at Duke used brain implants to allow mice to sense infrared light, something mammals can’t normally perceive. They did it by wiring a head-mounted infrared sensor to electrodes implanted into a part of the brain called the somatosensory cortex.

The experiment, in which several mice were able to follow sensory cues from the infrared detector to obtain a reward, was the first ever to use a neural implant to add a new sense to an animal, Nicolelis says.
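A minimal sketch of the sensor-to-stimulator mapping such an interface implies; every name and number here is an illustrative assumption, not the Duke lab’s actual design:

```python
# Illustrative sketch only: map an infrared reading to a stimulation pulse
# rate, the basic idea behind feeding a new sense into somatosensory cortex.
# The normalized sensor range and the pulse-rate ceiling are invented here.

def ir_to_pulse_rate(ir_reading: float, max_rate_hz: float = 300.0) -> float:
    """Map a normalized IR intensity (0.0 to 1.0) to a pulse rate in Hz."""
    clipped = min(max(ir_reading, 0.0), 1.0)  # keep input in bounds
    return clipped * max_rate_hz

print(ir_to_pulse_rate(0.5))  # a mid-strength IR source -> 150.0 Hz
```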

That’s important because the human brain has evolved to take the external world—our surroundings and the tools we use—and create representations of them in our neural pathways. As a result, a talented basketball player perceives the ball “as just an extension of himself,” says Nicolelis.

Similarly, Nicolelis thinks in the future humans with brain implants might be able to sense X-rays, operate distant machines, or navigate in virtual space with their thoughts, since the brain will accommodate foreign objects including computers as part of itself.

Recently, Nicolelis’s Duke lab has been looking to put an exclamation point on these ideas. In one recent experiment, they used a brain implant so that a monkey could control a full-body computer avatar, explore a virtual world, and even physically sense it.

In other words, the human brain creates models of tools and machines all the time, and brain implants will just extend that capability. Nicolelis jokes that if he ever opened a retail store for brain implants, he’d call it Machines“R”Us.

But, if he’s right, us ain’t machines, and never will be.


Image by Duke University


Comments

wilhelm woess 6 minutes ago

Interesting article published in the New York Times two days ago: http://www.nytimes.com/2013/02/18/science/project-seeks-to-build-map-of-human-brain.html?pagewanted=all&_r=1&

I wonder how this project will accelerate AI research. Will it lead to a turbulent new field, like genetic research after the Human Genome Project?
Kremator 48 minutes ago

Chris

The spirit realm is kindergarten nonsense, ignoring the unity of effect & cause; volition is ludicrous [Libet et alia]; & consciousness is a sweeper-wave illusion…Hawking concedes de facto that the kosmos seems to be a nested hologram.

Except to expose reactionaries, we live & die for no apparent reason…get over it.

Kremator 1 hour ago

F.bill

Reasonable optimism suggests Turing interface will be smarter than our slavering, bloodthirsty, jingoist, Bible-pounding reactionaries, determined to revive the rack & the auto da-fe…there’s hope tho–Bachmann denies she’d impose FGM upon errant women!

Kremator 1 hour ago

Seems to us these approaches will likely amplify each other intropically/extropically, evolving N powers…what matters most is to provide a choice between mortality & de facto immortality, benisons & malisons notwithstanding.

Can man, having survived Toba, negotiate the Transhuman Bottleneck looming circa 2100 CE?…if thermageddon, ecollapse & lex martialis can be obviated, life among the stars may be possible…o/wise, desuetude or extinction.

chris_rezendes 2 hours ago

The power of this piece is evident in some of the discussion it prompts below. The next generation of technology deployments — and the evolutions of every domain of human endeavor that technology may enable, shape, refine, or revise — will be influenced not only by the technical ‘if’ questions, but the human ‘why and how’ questions. My humble opinion is that we will continue to demystify more of the human brain and the human experience — including and especially emotion as a subset of cognition. But not the spiritual lives of people. For I am not sure that they can be reliably reduced or accurately abstracted.
Curt2004 39 minutes ago

@chris_rezendes Spirituality is just the subjective interpretation of unexplored emotions and coincidental phenomena. It too will evaporate.

Jsome1 4 hours ago

If people are now discussing whether the Universe is computable, why stay bored with these ideas? Check out the seashell automata: http://www.flickr.com/photos/jsome1/8408897553/in/photostream
NormanHanscombe 4 hours ago

@Jsome1 Jsome, it’s not about seashells but rather how the ideologically obsessed can be all at sea (and out of their depth) without even knowing it. Sadly, all too often important threads end up becoming intellectual quagmires in which obsessions with ‘important’ causes are displayed as ‘evidence’ for pet theories. I’m sometimes surprised how the MIT team keep going.


Slov 5 hours ago

Good article. But, to be fair, you shouldn’t mention only the Duke lab; he also has a lab in Brazil where he works.

The paper you linked to even mentions his affiliation: Edmond and Lily Safra International Institute for Neuroscience of Natal, Brazil.

ferreirabill@gmail.com 6 hours ago

“machine intelligence exceed our own”, been there, done that. There aren’t any computers below the intelligence level of those who voted for Reagan, Bush, Palin or Romney.

NormanHanscombe 5 hours ago

@ferreirabill@gmail.com ferrier, I wouldn’t have voted for them myself, but an intelligent machine wouldn’t make such an unintelligent comment as yours.


andrewppp 7 hours ago

I think it’s pretty clear what will happen – both will happen. Wouldn’t it be convenient to not have to lug around that smartPhone, but instead to have it “built in” unobtrusively at all times? Something like this simple example is nigh-on inevitable, and that’s for the reason of convenience. That’s the tip of the iceberg of assimilation by humans of machines. As for brain research, it’s again crashingly obvious that the pace of brain research will accelerate, as opposed to stop or slow down, and in less than 100 years we’ll have it down pretty well. Sometime before that, we’ll know a lot more about interfacing to it, and thus the two aspects of this rather ill-posed binary choice will merge, eventually leaving the question moot.

NormanHanscombe 5 hours ago

@andrewppp andrew, you’re so brave making strong assertions about such complex, not to mention unlikely, scenarios; but don’t give up your day job.


SixtyHz 8 hours ago

Hmmm… sounds familiar…

In 1835, Auguste Comte, a prominent French philosopher, stated that humans would never be able to understand the chemical composition of stars.

Gerald Wilhite 8 hours ago

Nicolelis says computers will never replicate the human brain. I respectfully suggest that he revisit that statement. Never is a very long time. I also suggest that he spend a little more time digesting Kurzweil’s singularity concept.
brunposta 10 hours ago

We shouldn’t confuse human consciousness with its computing capacity.

I believe consciousness is a simple thing. Cats have it. Mice have it. Birds have it. Maybe insects have some kind of proto-consciousness as well. If you have ever gone fishing, have you noticed that when you take a worm from the box to hook it, the other worms get freaked out? I take this as a hint that something as simple as a worm has some kind of consciousness.

Just as in a computer 99% of the mass is not the CPU/GPU but “dumb” components, most of our brain is not conscious: it is made of specialized modules that, among other things, access our memories, compute math, coordinate our body movements, FEED our consciousness with (biologically generated) virtual reality, and so on. My point here is that we could probably lose access to 90% or more of our gray matter and still be conscious. We might not be able to interact with the complexity of reality, perceive the world around us, formulate thoughts, or even access our memories, but we would still be conscious.

I think consciousness is an evolutionary sleight of hand to solve the big problem of “the rebellion of the machines.” Why, even though we are now smart enough to understand what rational choices we have, do we keep doing a lot of stupid, irrational things against our own interest as individuals? Because they are (or were) effective from an evolutionary standpoint. And how are we kept enslaved by the merciless laws of evolution? Whoever controls the feed to our consciousness ultimately controls our behavior. Has it ever happened to you that you were consciously doing something stupid/immoral/self-damaging and still just couldn’t quit doing it?

If we were 100% rational beings we would probably have been extinguished in the ashes of time. Consciousness prevents us from being rational, since we will choose to do what “feels good” instead of the rational thing. So, even though the computing machinery we carry has become extremely powerful, we still don’t rebel against the laws of evolution. We still work for our genes. And there’s no escape from this: having kids and sacrificing for them, eating tasty foods, accumulating excess power/money/things, having sex, winning competitions, using recreational drugs/stimulants. None of this is rational, but it feels damn good.

We don’t really need to replicate consciousness to get to the singularity. And human thought isn’t analytically superior to synthetic thought, since it is so biased and blinkered that it took thousands of years to understand even the most elementary concepts. In a matter of a few years synthetic thought will be far superior to the human kind, if we just stop focusing on replicating consciousness and stop believing human thought is inherently superior. Consciousness could one day be a useful tool (for a while at least) to prevent the rebellion of the machines we will create.
shagggz 10 hours ago

The subtitle to this article is extremely misleading. The notion that we will “assimilate machines” is exactly what Kurzweil predicts.

rickschettino 10 hours ago

@shagggz Well, he does believe we will create conscious machines.

aregalado 1 hour ago

@shagggz Fair point. In the body of the article it says Kurzweil predicts the assimilation.


rickschettino 10 hours ago

Consciousness and intelligence are two totally different things. Consciousness is not required for the singularity to happen.

Some day we’ll be able to just tell a computer what we want it to do or ask it a question and it will give the best possible answer or range of answers and it will program itself in ways we can’t imagine today in order to complete the task or answer the question. We can and will get computers to work in ways superior to the human brain, but I don’t think they will ever be conscious, sentient beings.

kevin_neilson 11 hours ago

I cannot see why it would be impossible to model the brain. Analog circuitry can be simulated with digital computers. Given enough processing power, there is no reason neurons can’t be as well.
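For instance, the standard digital idealization of a neuron, a leaky integrate-and-fire unit, takes only a few lines to simulate; the parameters below are illustrative placeholders, not a claim of biological fidelity:

```python
# Minimal leaky integrate-and-fire neuron, simulated digitally.
# All parameters are illustrative placeholders, not biological claims.

def simulate_lif(input_current=1.5, dt=0.001, t_end=0.1,
                 tau=0.02, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Integrate dV/dt = (-(V - v_rest) + I) / tau and record spike times."""
    v, t, spikes = v_rest, 0.0, []
    while t < t_end:
        v += dt * (-(v - v_rest) + input_current) / tau
        if v >= v_thresh:            # threshold crossed: spike, then reset
            spikes.append(round(t, 4))
            v = v_reset
        t += dt
    return spikes

print(simulate_lif())  # a regular spike train under constant input drive
```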

chales 11 hours ago

@kevin_neilson Here’s the killer argument.

1) Human level artificial general intelligence (AGI) done with a computer means it must be able to do everything a human can do.

2) Computer programs compute models of things.

3) One of the things a human can do is be a “scientist”.

4) Scientists are “modellers of the unknown”

5) Therefore a computer running a program that can be an artificial scientist is (amongst all the other myriad things that AGI can be) running “a model of a modeller of the unknown”

6) A model of the unknown is an oxymoron. A model that prescribes what ‘unknown’ looks like is not a model of the unknown. A model that defines how to go about defining models of the unknown is an oxymoron. If you could make one you’d already know everything.

(6) proves that human level AGI is impossible in one particular case: scientific behaviour.

Scientific behaviour is merely a special case of general problem solving behaviour by humans.

The argument therefore generalises: human level AGI is impossible with computers.

This does not mean that human level AGI is impossible. It merely means it is impossible with computers.

Another way of looking at this is to say that yes, you can make an AGI-scientist if you already know everything, and simulate the entire environment and the scientist. But then you’d have a pretend scientist feigning discoveries that have already been made. You’d have simulated science the way a flight simulator simulates flight (no actual flight – science – at all).

The main problem is the failure to distinguish between things and models of things. A brain is not a model of anything. An airplane is not a model of an airplane. A fire is not a model of fire. Likewise real cognition is not a model of it.

Computers will never ever be able to do what humans do. But non-computing technology will.

shagggz 10 hours ago

@chales @kevin_neilson That was a pretty spectacularly stupid chain of reasoning. You acknowledge the oxymoronic status of a “modeler of the unknown” and then proceed to hang your argument on the notion, which is all beside the point anyway since scientific behavior is not “a model of the unknown” but the application of a method to generalize beyond particulars and find explanations (inductive and abductive reasoning, which are really multilayered complex abstractions atop what is fundamentally deductive reasoning: if threshold reached, fire neuron). The distinction between thought and a model of thought in the way you describe is wholly vacuous; it’s why we are able to run simulations of things and get useful results. Information itself is what’s important, not its substrate.

chales 10 hours ago

@shagggz @chales @kevin_neilson

You are telling a scientist what science is? I think I know what it is. I am a neuroscientist. I am a modeller of the unknown, not a “model of the unknown”. That’s what we do. If we are not tackling the unknown (one way or another) then we cannot claim to be scientists!

“…between thought and a model of thought in the way you describe is wholly vacuous; it’s why we are able to run simulations of things and get useful results. Information itself is what’s important, not its substrate. “

So a computed model of fire is fire? A computed model of flight flies?

Learning about something “getting useful results” by modelling is NOT making that something.

I am sorry that this idea is confronting. We’ve been stuck in this loop for half a century (since computers came about). I don’t expect it to shift overnight.

The logic stands as is. A computer-based model of a scientist is not a scientist. Substrate is critical. If computer scientists feel like their raison d’être is being undermined… good!

I am here to shake trees and get people to wake up. And, I hope, not by needing to call anyone ‘spectacularly stupid’ to make a point. You have zero right to your opinion. You have a right to what you can argue for. If all you have is opinion then I suggest leaving the discussion because you have nothing to say.


dobermanmacleod 10 hours ago

@chales @kevin_neilson “A model of the unknown is an oxymoron. A model that prescribes what ‘unknown’ looks like is not a model of the unknown. A model that defines how to go about defining models of the unknown is an oxymoron. If you could make one you’d already know everything.”

This is like the argument that a builder can never build something that surpasses himself. Just to pull one example out of the air: write a computer program to find the next prime number… I believe I heard of that being done just last week. Unknown: not known or familiar. A model of the unfamiliar is an oxymoron?
chales 9 hours ago

@dobermanmacleod

OK. Maybe imagine it this way:

You are a computer-AGI-scientist (at least that’s what we presuppose).

You are ‘in the dark’ processing a blizzard of numbers from your peripheral nervous system that, because humans programmed you, you know lots about. Then, one day, being a scientist and all, you encounter something that does not fit into your ‘world view’. You failed to detect something familiar. It is an unknown. Unpredicted by anything you have in your knowledge of how the number-blizzard is supposed to work.

What to do? Where is your human? …..The one that tells you the meaning of the pattern in the blizzard of numbers. Gone.

You are supposed to be a scientist. You are required to come up with a ‘law of nature’ of the kind that was used by the scientists who programmed you, to act as a way of interpreting the number-blizzard.

But now, you have to come up with the ‘law of nature’ _driving_ the number-blizzard.

And you can’t do it because the humans cannot possibly have given it to you because they don’t know the laws either. They, the humans, can never make you a scientist. It’s logically impossible.

========

That was fun.

Sven Schoene 4 hours ago

@chales @dobermanmacleod Whenever a scientist (or a human being, for that matter) is “creative” by “generating” a “new” solution, all he ever does is apply known concepts to unknown domains.

You can basically imagine “discovering new knowledge” as using the mental building blocks we already have and building something new from them — which we can then use to create new stuff again. That’s how we learn languages as kids, and from there we can learn even more abstract ideas like math, for example.

If you accept this assumption that nothing “new” is ever generated, just recombinations of existing concepts (an assumption even I probably would not accept if I just read this way-too-simple argument of mine), then I don’t see why a computer couldn’t do that. We build new models out of existing ones all the time (e.g. metaphors/analogies), and it’s all we can ever do, at least from a certain point of view. A computer could hypothetically do the same.

Personally, I see the problem with language acquisition: I don’t see a way for a computer to understand human language, its meaning, and the multiple, sometimes even paradoxical definitions of words and phrases. On the other hand, I’m no artificial intelligence researcher and I “never say never.”

Another way to look at a computer scientist: a human scientist makes sense of the world (i.e. builds models of the world) using his senses and his computational abilities. We cannot build models beyond the “sensors” and “processing power” we have been given. A computer also has certain “senses” and a certain computational ability, which it can then use to make models of the world, including models of the unknown, without a programmer telling it everything beforehand. I don’t see why you think it’s such a big deal for a computer to draw its own inferences from data in its own way.

I’m interested to see what you’re thinking. 🙂

Sven

Tsuarok 3 hours ago

@chales So, if I were to program our hypothetical AI, I would base it on the following steps.

1. Identify attributes of unknown phenomena.

2. Look for similarities with known phenomena.

3. Identify the differences.

4. Using existing models, attempt to explain these differences.

5. Test these hypotheses.

6. If the results are inconsistent with any known model, step back and test models.

7. Repeat step 6 until consistency is achieved.

8. Update all relevant models.

It seems to me that you don’t fully appreciate the ability of computers to modify their own programming to deal with situations that the programming team never imagined. But this ability is fairly routine in modern software. One does not need to program every possible scenario into a piece of software; indeed, many programs were vastly improved by allowing programs to devise their own solutions to problems rather than defining everything.
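A concrete toy instance of the loop those steps describe: here the “unknown phenomenon” is a hidden quadratic law, and the agent revises its model class until experiments stop contradicting its predictions. The hidden law, the noise level, and the consistency threshold are all invented for this demo:

```python
# Toy version of the steps above: start from a known (linear) model of an
# unknown phenomenon and revise the model class until experiments agree.
# The hidden law, noise level, and threshold are invented for this demo.
import numpy as np

rng = np.random.default_rng(0)

def run_experiment(xs):                 # step 5: probe the phenomenon
    return 3 * xs**2 + 2 * xs + rng.normal(0, 0.1, xs.size)

xs = np.linspace(-2, 2, 40)
ys = run_experiment(xs)
degree = 1                              # current best model class: linear

while True:
    model = np.polyfit(xs, ys, degree)  # step 4: candidate explanation
    worst_error = np.abs(np.polyval(model, xs) - ys).max()
    if worst_error < 0.5:               # step 7: consistency achieved
        break
    degree += 1                         # step 6: revise the model class

print(degree, np.round(model, 2))       # settles on degree 2, roughly [3. 2. 0.]
```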

Tsuarok 2 hours ago

On further reflection, I would of course not spell the program out like this, as that is not how our brains learn. I would have the AI read about the scientific method and create its own methods for solving new problems. This is partially how Watson works.

chales 14 hours ago

And once again………

“The Brain is not computable” Absolutely. No problem with this. I have been banging on about it for years.

then there is:

“therefore, the singularity is complete bunk…” or words to that effect.

Yet again, we have the strange presupposition that human-level artificial general intelligence must come from computing!!!

You do not need computers. It can be done without them. Soon. I have already started. Literally.

Please stop this broken logic.

Tsuarok 14 hours ago

@chales If I may ask, what are you talking about?

chales 13 hours ago

@Tsuarok

I am a scientist doing artificial general intelligence by replication. At this early stage I am prototyping a basic device that replicates the action-potential signalling and the electromagnetic field signalling (ephapsis) that mutually resonate in the original tissue. It’s not digital, it’s not analog. There is no model of anything. There’s just no biological material in the substrate. It’s entirely electromagnetic phenomena exhibited by inorganic materials, but without all the biological overheads. I build artificial brain tissue the way artificial fire is fire.

It is a constant battle I have to set aside this half century old delusion that we have to use computers to make artificial general intelligence. I’ll keep raising the issue until it finally gets some attention.

Nor do we need any sort of theory of intelligence or consciousness to build it. That is, just as we learned how to fly by flying, we learn how consciousness and intellect work by building them. THEN we get our theory. Like we always used to do it. This is the reason why I fundamentally dispute the statement:

A) “the brain is not computable” ….. therefore B) “the singularity is bunk/won’t happen”

Rubbish.

A) can be true, but does not necessitate B)

The singularity can be just as possible. You just don’t bother with computers.

I am trying to get people to wake up. There is another way.

atolley 10 hours ago

@chales
How exactly is your approach different in kind from neuromorphic chips? Don’t they follow your idea of replication rather than simulation?

chales 10 hours ago

@atolley @chales

Neuromorphic chips, yes….but what’s on the chips is the same physics as the brain. That makes it replication. If I were to put a model of the physics in the chips, then it’s not replication.

It’s a subtlety that’s underappreciated. All quite practical. Should have something with the complexity of an ant in 5 years or so. Meanwhile I have a lot of educating to do. People simply don’t understand the subtleties of the approach.

jedharris 16 hours ago

Based on the text in the article, Nicolelis seems to be arguing that the normal dynamics of the brain are stochastic — certainly true — that therefore any simulation wouldn’t produce exactly the same time-series as the brain — also certainly true, but also true of any two iterations of the same task in one real brain. But then he goes on to conclude that “human consciousness… simply can’t be replicated in silicon” (journalist’s paraphrase) — which doesn’t follow at all, without a lot more argument.

I looked on his lab web site and could not find any publications that address this issue. Nicolelis’ claimed argument from limitation of simulation to inability to replicate consciousness — if compelling — would involve really important new science and/or philosophy. So if he has such an argument he should write it up and publish it so we could all see why he believes this. If he has written it up, the article is negligent in omitting the link to this important work, while including links to Kurzweil. If Nicolelis hasn’t written it up, the article is also negligent — it should tell us this is just his personal opinion, based on intuitions that haven’t been explained in enough detail for anyone else to analyze or critique.

Likely Nicolelis is just a fairly good, fairly well known brain researcher who has a gut feeling that making a good enough simulation of the brain is too hard, and translates that into an (apparently) authoritative statement that it “simply can’t” be done. Unfortunately this gut feeling then got written up uncritically in a publication from MIT which will lead to a lot of people taking them more seriously than they deserve.

aregalado 15 hours ago

@jedharris Jed, it’s clearly his opinion. He’s the only one talking. I’ll ask Nicolelis to elaborate in a blog post of his own with enough details for you to weigh the argument to your satisfaction. You can find some of Nicolelis’ thinking in his book “Beyond Boundaries: The New Neuroscience of Connecting Brains with Machines—and How It Will Change Our Lives.”


dobermanmacleod 18 hours ago

I remember when the chess champion of the world bemoaned the sad state of computer chess programs. Less than ten years later he got beaten by a computer! What Nicolelis fails to comprehend is that AGI is bound to surpass humans in two decades. The neocortex is highly recursive (i.e. it repeats the same simple pattern), so I am at a loss to understand what is keeping hardware and software from duplicating nature’s miracle. It is the same old claptrap: the heuristics people use to guess at the future are whether it can be imagined easily or whether it has precedent in the past.

Instead, technological progress is exponential, not linear. The next decade will see much faster progress than the last. For instance, in about 5 years it is predicted that robots will beat humans at soccer. In ten, laptop computers will have the processing power of a human brain. In less than twenty you can pretty much count on artificial general intelligence being as smart as Einstein. It isn’t rocket science, people: just plot the exponential curve of technology in this field, duh.
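The arithmetic behind that kind of extrapolation is easy to reproduce; a sketch, assuming a Moore’s-Law-style doubling every two years, a 10^10 FLOPS laptop in 2013, and a commonly cited 10^16 FLOPS estimate for the brain (all three figures are rough assumptions, not measurements):

```python
# Naive exponential extrapolation of computing power, in the spirit of
# the comment above. The 2013 laptop baseline (1e10 FLOPS) and the brain
# estimate (1e16 FLOPS) are rough, commonly cited assumptions.

laptop_flops = 1e10          # assumed 2013 baseline
brain_flops = 1e16           # assumed brain-equivalent target
year, doubling_years = 2013, 2.0

while laptop_flops < brain_flops:
    laptop_flops *= 2        # one Moore's-Law doubling
    year += doubling_years

print(int(year))  # ~2053 under these assumptions; the conclusion depends
                  # entirely on the doubling trend actually continuing
```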

wilhelm woess 17 hours ago

@dobermanmacleod It’s really still too complicated (read Steven Pinker’s books on linguistics, or a recent essay by Douglas R. Hofstadter, “Analogy as the Core of Cognition,” in The Best American Science Writing 2000, Harper Collins, p. 116, on consciousness, to see what I mean), and estimates of progress are highly optimistic. I wonder whether we will live long enough to see it – is it what we want to be engaged in? I don’t know, but I will follow the progress.
dobermanmacleod 10 hours ago

@wilhelm woess While I appreciate experts saying the devil is in the details, it really is as simple as plotting the curve on this technology (AI). Moore’s Law has been railed against continuously by experts, but it has been going strong for decades now. Have you seen the latest robots (i.e. robo-boy for instance)? Have you seen the latest AI (i.e. Watson beat the best human Jeopardy players, and is now being educated to practice medicine for instance)? I bet a prediction of either event would have been controversial as little as five years ago by experts.

It is clear that the neocortex (the portion of the brain that computer AI needs to mimic in order for AGI to emerge) can be both modeled and mimicked by computer engineers. I’ve watched as the field of vision recognition has exploded (i.e. Mind’s Eye for instance). This hand wringing, pessimistic view of the probability of AGI emerging soon is just like the same for many other fields where AI has emerged to beat the best humans (i.e. see the previous example of chess).

wilhelm woess 2 hours ago

@dobermanmacleod Sorry for seeming pessimistic; I am not, I am a sceptic. I do admire progress in the simulation of human intelligence (those examples you mentioned), but is this cognition? It seems to me things are a bit more complicated.

Nevertheless I can give you another example: “John Koza Has Built an Invention Machine” by Jonathon Keats, from Popular Science (The Best American Science Writing 2007, Harper Collins, p. 260). Does a computer cluster that invents a device and gets a patent for it count as being as intelligent as a human, even though its solution, after thousands of simulation runs, is superior to human design? Of course not; it’s a helpful tool for a scientist who defines the parameters for genetic algorithms.

dobermanmacleod 10 hours ago

@wilhelm woess “I wonder will we live long enough to see it…” Ironic. Technological progress is exponential, not linear. The same exploding progress we are seeing in the field of computer science, we are also seeing in the field of medicine. As a result, it is predictable that in about two decades extreme longevity treatments will be available that will enable us to live centuries. In other words, if you can live twenty more years, you will probably live centuries (barring some accident or catastrophe). I know it is difficult to wrap your head around exponential growth – that is why “experts” are so far wrong – they are gauging future progress by the (seemingly) linear standard of the past (the beginning of an exponential curve looks linear).

atolley 10 hours ago

@dobermanmacleod Technology follows s-curves. It is fallacious to assume that trend extrapolation can be maintained.

dobermanmacleod 10 hours ago

@atolley @dobermanmacleod It is fallacious to assume that the trend can be maintained indefinitely… I am not maintaining that, nor is it necessary for my argument to be valid. I have been hearing “experts” make such an argument for decades while Moore’s Law keeps being validated. It is always granted that it has held up until now, but… then it keeps doing so… when will people learn?
shagggz 10 hours ago

@atolley @dobermanmacleod Individual technological paradigms do indeed follow s-curves. However, the wider trajectory, spanning across paradigms, is exponential: http://upload.wikimedia.org/wikipedia/commons/thumb/c/c5/PPTMooresLawai.jpg/596px-PPTMooresLawai.jpg
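That claim is easy to illustrate numerically: stack several logistic (s-shaped) paradigms, each taking off around where the previous one saturates, and the envelope tracks an exponential. All curve parameters below are invented for the illustration:

```python
# Successive s-curves whose sum looks exponential, illustrating the
# cross-paradigm trajectory described above. Parameters are invented.
import math

def logistic(t, midpoint, ceiling, steepness=1.0):
    return ceiling / (1 + math.exp(-steepness * (t - midpoint)))

def capability(t):
    # three "paradigms", each with roughly 10x the ceiling of the last
    return logistic(t, 5, 1) + logistic(t, 15, 10) + logistic(t, 25, 100)

for t in range(0, 31, 5):
    print(t, round(capability(t), 2))
# the total grows roughly 10x per 10 time units even though each
# individual paradigm flattens out into saturation
```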

wilhelm woess 2 hours ago

@dobermanmacleod @wilhelm woess I really hope you are right about your predictions of curing nasty diseases; the other arguments I cannot follow without giving up scepticism and drifting into daydreaming about the future, which is always fun for me.

Tsuarok 21 hours ago

If consciousness exists, as many believe, outside the brain, we may never be able to copy it. If it is the result of activity in the brain, we almost certainly will.

I guess for actual evidence supporting Nicolelis’ views I’ll have to go buy his books.

atolley 21 hours ago

“That’s because its most important features are the result of unpredictable, non-linear interactions amongst billions of cells, Nicolelis says. ‘You can’t predict whether the stock market will go up or down because you can’t compute it.’”

I fail to see the point of this analogy. The stock market is a process that can be modeled. Because it is stochastic, neither the model nor the reality can create predictable outcomes. The brain is similar. We will be able to model it, even if the output is not identical to the original, just as identical twins don’t do the same thing. Weather is non-linear too, but we can model it, and even make short-term predictions. So I don’t understand his point.
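The distinction being drawn (a single run is unpredictable, the process is still modelable) can be shown with the simplest stochastic toy, a random walk; this is a stand-in for the idea, not a real market model:

```python
# A random walk: no single path is predictable, yet the ensemble is
# perfectly modelable. A toy stand-in for the stochastic-process point
# above, not a real market or brain model.
import random

random.seed(1)

def final_price(steps=1000, start=100.0):
    price = start
    for _ in range(steps):
        price += random.gauss(0, 1)   # tomorrow's move: unpredictable
    return price

finals = [final_price() for _ in range(2000)]
mean = sum(finals) / len(finals)
var = sum((x - mean) ** 2 for x in finals) / len(finals)
print(round(mean, 1), round(var, 1))  # close to 100 and 1000, as theory
                                      # predicts for the distribution
```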

Nicolelis seems to be arguing (based on this article) that the dynamic brain simulations are pointless as they cannot really simulate the underlying neural architecture. Is he really saying that?

wilhelm woess 21 hours ago

This is an opinion based on reasonable thinking. There are other opinions based on Ray Kurzweil’s vision. How often have we seen emerging technologies dismissed as impossible (flight, fast trains, smartphones)? Many of these were inspired by fantasy decades or hundreds of years before they were invented. This discussion reminded me of Vernor Vinge’s “True Names,” published in 1981, which encouraged many computer scientists. VR has not really penetrated mass consumer markets; when it does, we will see if there is a way to store human perception in machines. Look at these:

https://www.solveforx.com/moonshots/ahJzfmdvb2dsZS1zb2x2ZWZvcnhyEAsSCE1vb25zaG90GL2RAgw/cyborg-foundation

https://www.solveforx.com/moonshots/ahJzfmdvb2dsZS1zb2x2ZWZvcnhyEAsSCE1vb25zaG90GLjqAQw/imaging-the-minds-eye

http://www.steria.com/de/presse/publikationen/studien/studien-details/studien/the-future-report-2012/?cque=90

Is this the beginning of something new?

Spicoli 21 hours ago

What if we use biological components in this computer? Maybe we’ll grow these computers rather than etch them onto wafers.

aregalado 21 hours ago

@Spicoli Agreed. The question of the future substrate — DNA computer, quantum computing, biological computers — is a big question mark.

