Category Archives: Biology

Miguel Nicolelis Says the Brain is Not Computable, Bashes Kurzweil’s Singularity | MIT Technology Review

A leading neuroscientist says Kurzweil’s Singularity isn’t going to happen. Instead, humans will assimilate machines.

Miguel Nicolelis, a top neuroscientist at Duke University, says computers will never replicate the human brain and that the technological Singularity is “a bunch of hot air.”

“The brain is not computable and no engineering can reproduce it,” says Nicolelis, author of several pioneering papers on brain-machine interfaces.

The Singularity, of course, is that moment when a computer super-intelligence emerges and changes the world in ways beyond our comprehension.

Among the idea’s promoters is futurist Ray Kurzweil, recently hired at Google as a director of engineering, who has been predicting not only that machine intelligence will exceed our own but that people will be able to download their thoughts and memories into computers (see “Ray Kurzweil Plans to Create a Mind at Google—and Have It Serve You”).

Nicolelis calls that idea sheer bunk. “Downloads will never happen,” Nicolelis said during remarks made at the annual meeting of the American Association for the Advancement of Science in Boston on Sunday. “There are a lot of people selling the idea that you can mimic the brain with a computer.”

The debate over whether the brain is a kind of computer has been running for decades. Many scientists think it’s possible, in theory, for a computer to equal the brain given sufficient computer power and an understanding of how the brain works.

Kurzweil delves into the idea of “reverse-engineering” the brain in his latest book, How to Create a Mind: The Secret of Human Thought Revealed, in which he says even though the brain may be immensely complex, “the fact that it contains many billions of cells and trillions of connections does not necessarily make its primary method complex.”

But Nicolelis is in a camp that thinks that human consciousness (and if you believe in it, the soul) simply can’t be replicated in silicon. That’s because its most important features are the result of unpredictable, non-linear interactions amongst billions of cells, Nicolelis says.

“You can’t predict whether the stock market will go up or down because you can’t compute it,” he says. “You could have all the computer chips ever in the world and you won’t create a consciousness.”

The neuroscientist, originally from Brazil, instead thinks that humans will increasingly subsume machines (an idea, incidentally, that’s also part of Kurzweil’s predictions).

In a study published last week, for instance, Nicolelis’ group at Duke used brain implants to allow mice to sense infrared light, something mammals can’t normally perceive. They did it by wiring a head-mounted infrared sensor to electrodes implanted into a part of the brain called the somatosensory cortex.

The experiment, in which several mice were able to follow sensory cues from the infrared detector to obtain a reward, was the first ever to use a neural implant to add a new sense to an animal, Nicolelis says.

That’s important because the human brain has evolved to take the external world—our surroundings and the tools we use—and create representations of them in our neural pathways. As a result, a talented basketball player perceives the ball “as just an extension of himself,” says Nicolelis.

Similarly, Nicolelis thinks in the future humans with brain implants might be able to sense X-rays, operate distant machines, or navigate in virtual space with their thoughts, since the brain will accommodate foreign objects including computers as part of itself.

Recently, Nicolelis’s Duke lab has been looking to put an exclamation point on these ideas. In one recent experiment, they used a brain implant so that a monkey could control a full-body computer avatar, explore a virtual world, and even physically sense it.

In other words, the human brain creates models of tools and machines all the time, and brain implants will just extend that capability. Nicolelis jokes that if he ever opened a retail store for brain implants, he’d call it Machines“R”Us.

But, if he’s right, us ain’t machines, and never will be.

Image by Duke University

Comments
wilhelm woess 6 minutes ago

Interesting article published in the New York Times two days ago: http://www.nytimes.com/2013/02/18/science/project-seeks-to-build-map-of-human-brain.html?pagewanted=all&_r=1&

I wonder how this project will accelerate AI research. Will it lead to a turbulent new field, like genetic research after the Human Genome Project?

Kremator 48 minutes ago

Chris

The spirit realm is kindergarten nonsense, ignoring the unity of effect & cause; volition is ludicrous [Libet et alia]; & consciousness is a sweeper-wave illusion…Hawking concedes de facto that the kosmos seems to be a nested hologram.

Except to expose reactionaries, we live & die for no apparent reason…get over it.

Kremator 1 hour ago

F.bill

Reasonable optimism suggests Turing interface will be smarter than our slavering, bloodthirsty, jingoist, Bible-pounding reactionaries, determined to revive the rack & the auto da-fe…there’s hope tho–Bachmann denies she’d impose FGM upon errant women!

Kremator 1 hour ago

Seems to us these approaches will likely amplify each other intropically/extropically, evolving N powers…what matters most is to provide a choice between mortality & de facto immortality, benisons & malisons notwithstanding.

Can man, having survived Toba, negotiate the Transhuman Bottleneck looming circa 2100 CE?…if thermageddon, ecollapse & lex martialis can be obviated, life among the stars may be possible…o/wise, desuetude or extinction.

chris_rezendes 2 hours ago

The power of this piece is evident in some of the discussion it prompts below. The next generation of technology deployments — and the evolutions of every domain of human endeavor that technology may enable, shape, refine, or revise — will be influenced not only by the technical ‘if’ questions, but the human ‘why and how’ questions. My humble opinion is that we will continue to demystify more of the human brain and the human experience — including and especially emotion as a subset of cognition. But not the spiritual lives of people. For I am not sure that they can be reliably reduced or accurately abstracted.

Curt2004 39 minutes ago

@chris_rezendes Spirituality is just the subjective interpretation of unexplored emotions and coincidental phenomena. It too will evaporate.


Jsome1 4 hours ago

If there are now people discussing whether the Universe is computable, why stay bored with these ideas? Check the seashell automata: http://www.flickr.com/photos/jsome1/8408897553/in/photostream

NormanHanscombe 4 hours ago

@Jsome1 Jsome, it’s not about seashells but rather how the ideologically obsessed can be all at sea (and out of their depth) without even knowing it. Sadly, all too often important threads end up becoming intellectual quagmires in which obsessions with ‘important’ causes are displayed as ‘evidence’ for pet theories. I’m sometimes surprised how the MIT team keep going.


Slov 5 hours ago

Good article. But, to be fair, you shouldn’t mention only the Duke lab; he also has a lab in Brazil where he works.

The paper you linked to even mentions his affiliation: Edmond and Lily Safra International Institute for Neuroscience of Natal, Brazil.

ferreirabill@gmail.com 6 hours ago

“machine intelligence exceed our own”, been there, done that. There aren’t any computers below the intelligence level of those who voted for Reagan, Bush, Palin or Romney.

NormanHanscombe 5 hours ago

@ferreirabill@gmail.com ferrier, I wouldn’t have voted for them myself, but an intelligent machine wouldn’t make such an unintelligent comment as yours.


andrewppp 7 hours ago

I think it’s pretty clear what will happen – both will happen. Wouldn’t it be convenient to not have to lug around that smartphone, but instead to have it “built in” unobtrusively at all times? Something like this simple example is nigh-on inevitable, and that’s for the reason of convenience. That’s the tip of the iceberg of assimilation by humans of machines. As for brain research, it’s again crashingly obvious that the pace of brain research will accelerate, as opposed to stop or slow down, and in less than 100 years we’ll have it down pretty well. Sometime before that, we’ll know a lot more about interfacing to it, and thus the two aspects of this rather ill-posed binary choice will merge, eventually leaving the question moot.

NormanHanscombe 5 hours ago

@andrewppp andrew, you’re so brave making strong assertions about such complex, not to mention unlikely, scenarios; but don’t give up your day job.


SixtyHz 8 hours ago

Hmmm… sounds familiar…

In 1835, Auguste Comte, a prominent French philosopher, stated that humans would never be able to understand the chemical composition of stars.

Gerald Wilhite 8 hours ago

Nicolelis says computers will never replicate the human brain. I respectfully suggest that he should revisit that statement. Never is a very long time. I also suggest that he should spend a little more time digesting Kurzweil’s singularity concept.

brunposta 10 hours ago

We shouldn’t confuse human consciousness with its computing capacity.

I believe consciousness is a simple thing. Cats have it. Mice have it. Birds have it. Maybe insects have some kind of proto-consciousness as well. If you have ever gone fishing, have you noticed that when you take a worm from the box to hook it, the other worms get freaked out? I take this as a hint that something as simple as a worm has some kind of consciousness.

Just as in a computer 99% of the mass is not the CPU/GPU but “dumb” components, most of our brain is not conscious: it’s made of specialized modules that, among other things, access our memories, compute math, coordinate our body movements, FEED our consciousness with (biologically generated) virtual reality, and so on. My point here is that we could probably lose access to 90% or more of our gray matter and still be conscious. We might not be able to interact with the complexity of reality, perceive the world around us, formulate thoughts, or even access our memories, but we would still be conscious.

I think consciousness is an evolutionary sleight of hand to resolve the big problem of “the rebellion of the machines.” Why, even though we are now smart enough to understand what the rational choices are, do we continue doing a lot of stupid, irrational things against our own interest as individuals? Because they are (or they were) effective from an evolutionary standpoint. And how can we be kept enslaved by the merciless laws of evolution? Whoever controls the feed to our consciousness ultimately controls our behavior. Has it ever happened to you that you were consciously doing something stupid/immoral/self-damaging and still just couldn’t quit doing it?

If we were 100% rational beings we would probably have gone extinct in the ashes of time. Consciousness prevents us from being rational, since we will choose to do what “feels good” instead of the rational thing. So even though the computing machinery we carry has become extremely powerful, we still don’t rebel against the laws of evolution. We still work for our genes. And there’s no escape from this: having kids and sacrificing for them. Eating tasty foods. Accumulating excesses of power/money/things. Having sex. Winning a competition. Using recreational drugs/stimulants. None of this is rational, but it feels damn good.

We don’t really need to replicate a consciousness to get to the singularity. And human thought isn’t analytically superior to synthetic thought, since it’s so biased and blinded that it took thousands of years to understand even the most elementary concepts. In a matter of a few years synthetic thought will be far superior to the human kind, if we just stop focusing on replicating consciousness and stop believing human thought is inherently superior. Consciousness could one day be a useful tool (for a while at least) to prevent the rebellion of the machines we will create.

shagggz 10 hours ago

The subtitle to this article is extremely misleading. The notion that we will “assimilate machines” is exactly what Kurzweil predicts.

rickschettino 10 hours ago

@shagggz Well, he does believe we will create conscious machines.

aregalado 1 hour ago

@shagggz Fair point. In the body of the article it says Kurzweil predicts the assimilation.


rickschettino 10 hours ago

Consciousness and intelligence are two totally different things. Consciousness is not required for the singularity to happen.

Some day we’ll be able to just tell a computer what we want it to do or ask it a question and it will give the best possible answer or range of answers and it will program itself in ways we can’t imagine today in order to complete the task or answer the question. We can and will get computers to work in ways superior to the human brain, but I don’t think they will ever be conscious, sentient beings.

kevin_neilson 11 hours ago

I cannot see why it would be impossible to model the brain. Analog circuitry can be simulated with digital computers. Given enough processing power, there is no reason neurons can’t be as well.
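
To make that concrete, here is a minimal sketch (a toy, not anyone's published brain model) of a digital computer stepping through the analog dynamics of a single neuron using the standard leaky integrate-and-fire equations; the time step, membrane constants, and input current below are arbitrary illustrative values:

```python
import numpy as np

# Toy leaky integrate-and-fire neuron, stepped digitally with Euler integration.
# All constants are illustrative, not fitted to real tissue.
dt = 0.1          # time step (ms)
tau = 10.0        # membrane time constant (ms)
v_rest = -65.0    # resting potential (mV)
v_thresh = -50.0  # spike threshold (mV)
v_reset = -70.0   # reset potential after a spike (mV)
r_m = 10.0        # membrane resistance (MOhm)

v = v_rest
spike_times = []
for step in range(2000):
    i_in = 2.0 + 0.5 * np.random.randn()            # noisy input current (nA)
    v += dt * (-(v - v_rest) + r_m * i_in) / tau    # dv/dt = (-(v - v_rest) + R*I) / tau
    if v >= v_thresh:                               # threshold reached: spike and reset
        spike_times.append(step * dt)
        v = v_reset

print(f"{len(spike_times)} spikes in {2000 * dt:.0f} ms")
```

Finer-grained biophysical detail (ion channels, dendrites, noise sources) adds computational cost rather than a difference in kind, which is essentially the commenter's point.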

chales 11 hours ago

@kevin_neilson Here’s the killer argument.

1) Human level artificial general intelligence (AGI) done with a computer means it must be able to do everything a human can do.

2) Computer programs compute models of things.

3) One of the things a human can do is be a “scientist”.

4) Scientists are “modellers of the unknown”

5) Therefore a computer running a program that can be an artificial scientist is (amongst all the other myriad things that AGI can be) running “a model of a modeller of the unknown”

6) A model of the unknown is an oxymoron. A model that prescribes what ‘unknown’ looks like is not a model of the unknown. A model that defines how to go about defining models of the unknown is an oxymoron. If you could make one you’d already know everything.

(6) proves that human level AGI is impossible in one particular case: scientific behaviour.

Scientific behaviour is merely a special case of general problem solving behaviour by humans.

The argument therefore generalises: human level AGI is impossible with computers.

This does not mean that human level AGI is impossible. It merely means it is impossible with computers.

Another way of looking at things is to say that yes, you can make an AGI-scientist if you already know everything, and simulate the entire environment and the scientist. But then you’d have a pretend scientist feigning discoveries that have already been made. You’d have simulated science the way a flight simulator simulates flight (no actual flight – science – at all).

The main problem is the failure to distinguish between things and models of things. A brain is not a model of anything. An airplane is not a model of an airplane. A fire is not a model of fire. Likewise real cognition is not a model of it.

Computers will never ever be able to do what humans do. But non-computing technology will.

shagggz 10 hours ago

@chales @kevin_neilson That was a pretty spectacularly stupid chain of reasoning. You acknowledge the oxymoronic status of a “modeler of the unknown” and then proceed to hang your argument on the notion, which is all beside the point anyway since scientific behavior is not “a model of the unknown” but the application of a method to generalize beyond particulars and find explanations (inductive and abductive reasoning, which are really multilayered complex abstractions atop what is fundamentally deductive reasoning: if threshold reached, fire neuron). The distinction between thought and a model of thought in the way you describe is wholly vacuous; it’s why we are able to run simulations of things and get useful results. Information itself is what’s important, not its substrate.

chales 10 hours ago

@shagggz @chales @kevin_neilson

You are telling a scientist what science is? I think I know what it is. I am a neuroscientist. I am a modeller of the unknown, not a “model of the unknown.” That’s what we do. If we are not tackling the unknown (one way or another) then we cannot claim to be scientists!

“…between thought and a model of thought in the way you describe is wholly vacuous; it’s why we are able to run simulations of things and get useful results. Information itself is what’s important, not its substrate. “

So a computed model of fire is fire? A computed model of flight flies?

Learning about something “getting useful results” by modelling is NOT making that something.

I am sorry that this idea is confronting. We’ve been stuck in this loop for half a century (since computers came about). I don’t expect it to shift overnight.

The logic stands as is. A computer-based model of a scientist is not a scientist. Substrate is critical. If computer scientists feel like their raison d’etre is being undermined….good!

I am here to shake trees and get people to wake up. And, I hope, not by needing to call anyone ‘spectacularly stupid’ to make a point. You have zero right to your opinion. You have a right to what you can argue for. If all you have is opinion then I suggest leaving the discussion because you have nothing to say.


dobermanmacleod 10 hours ago

@chales @kevin_neilson “A model of the unknown is an oxymoron. A model that prescribes what ‘unknown’ looks like is not a model of the unknown. A model that defines how to go about defining models of the unknown is an oxymoron. If you could make one you’d already know everything.”

This is like the argument that a builder can never build something that surpasses himself. Just to pull one example out of the air: write a computer program to find the next prime number…I believe I heard that done just last week. Unknown – not known or familiar. A model of the unfamiliar is an oxymoron?

chales 9 hours ago

@dobermanmacleod

OK. Maybe imagine it this way:

You are a computer-AGI scientist (at least that’s what we presuppose).

You are ‘in the dark’ processing a blizzard of numbers from your peripheral nervous system that, because humans programmed you, you know lots about. Then, one day, being a scientist and all, you encounter something that does not fit into your ‘world view’. You failed to detect something familiar. It is an unknown. Unpredicted by anything you have in your knowledge of how the number-blizzard is supposed to work.

What to do? Where is your human? …..The one that tells you the meaning of the pattern in the blizzard of numbers. Gone.

You are supposed to be a scientist. You are required to come up with a ‘law of nature’ of the kind that was used by the scientists who programmed you, to act as a way of interpreting the number-blizzard.

But now, you have to come up with the ‘law of nature’ _driving_ the number-blizzard.

And you can’t do it because the humans cannot possibly have given it to you because they don’t know the laws either. They, the humans, can never make you a scientist. It’s logically impossible.

========

That was fun.

Sven Schoene 4 hours ago

@chales @dobermanmacleod Whenever a scientist (or a human being for that matter) is “creative” by “generating” a “new” solution, all he ever does is apply known concepts to unknown domains.

You can basically imagine “discovering new knowledge” as using the mental building blocks we already have and building something new from them — which we can then use to create new stuff again. That’s how we learn languages as kids, and from there we can learn even more abstract ideas like math, for example.

If you agree to this assumption that nothing “new” is ever being generated, just recombinations of existing concepts (which is an assumption even I probably would not go with if I just read this way-too-simple argument from me here), then I don’t see why a computer couldn’t do that. We build new models out of existing ones all the time (e.g. metaphors/analogies), and it’s all we ever can do, at least if we look at it from a certain point of view. A computer could hypothetically do the same.

Personally, I see the problem with language acquisition: I don’t see a way for a computer to understand human language, its meaning, and the multiple, sometimes even paradoxical, definitions of words and phrases. On the other hand, I’m no artificial intelligence researcher and I “never say never.”

Another way to look at a computer scientist: A human scientist makes sense of the world (i.e. building models of the world) by using his senses and his computational abilities. We cannot build models outside of the “sensors” and “processing power” we have been given. A computer also has certain “senses” and a certain computational ability, which it can then use to make models of the world — including models of the unknown, without a programmer telling it everything beforehand. I don’t see why you think it’s such a big deal that a computer draws its own inferences from data in its own way?

I’m interested to see what you’re thinking. :-)

Sven

Tsuarok 3 hours ago

@chales So, if I were to program our hypothetical AI, I would base it on the following steps.

1. Identify attributes of unknown phenomena.

2. Look for similarities with known phenomena.

3. Identify the differences.

4. Using existing models, attempt to explain these differences.

5. Test these hypotheses.

6. If the results are inconsistent with any known model, step back and test models.

7. Repeat step 6 until consistency is achieved.

8. Update all relevant models.

It seems to me that you don’t fully appreciate the ability of computers to modify their own programming to deal with situations that the programming team never imagined. But this ability is fairly routine in modern software. One does not need to program every possible scenario into a piece of software; indeed, many programs were vastly improved by allowing programs to devise their own solutions to problems rather than defining everything.
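
A rough structural sketch of that loop, in code, might look like the following. This is only an illustration of the steps listed above: `propose_explanations`, `run_experiment`, and the model objects are hypothetical stand-ins, not components of any real AI system.

```python
# Hypothetical sketch of the observe/hypothesize/test loop outlined above.
# propose_explanations() and run_experiment() are stand-in callables supplied
# by the caller; nothing here is a real AI system.

def investigate(observation, known_models, propose_explanations, run_experiment,
                max_rounds=100):
    # Steps 1-3: compare the unknown phenomenon against known models.
    candidates = [m for m in known_models if m.similarity(observation) > 0.5]
    # Step 4: use existing models to generate candidate explanations.
    hypotheses = propose_explanations(observation, candidates)
    consistent = []
    for _ in range(max_rounds):                     # steps 5-7, bounded for safety
        results = [(h, run_experiment(h)) for h in hypotheses]
        consistent = [h for h, ok in results if ok]
        if consistent:
            break
        # Step 6: no hypothesis survived, so question the models themselves.
        for m in candidates:
            if not run_experiment(m.as_hypothesis()):
                m.mark_suspect()
        hypotheses = propose_explanations(observation, candidates)
    # Step 8: fold surviving explanations back into the model store.
    known_models.extend(h.to_model() for h in consistent)
    return consistent
```

The interesting work, of course, lives inside `propose_explanations` and `run_experiment`, which is where the disagreement in this thread really sits.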

Tsuarok 2 hours ago

On further reflection, I would of course not elaborate this program, as that is not how our brains learn. I would have the AI read about the scientific method and create its own methods for solving new problems. This is partially how Watson works.


chales 14 hours ago

And once again………

“The Brain is not computable” Absolutely. No problem with this. I have been banging on about it for years.

then there is:

“therefore, the singularity is complete bunk…..” or words to that effect.

Yet again, we have the strange presupposition that human-level artificial general intelligence must come from computing!!!

You do not need computers. It can be done without them. Soon. I have already started. Literally.

Please stop this broken logic.

Tsuarok 14 hours ago

@chales If I may ask, what are you talking about?

chales 13 hours ago

@Tsuarok

I am a scientist doing artificial general intelligence by replication. At this early stage I am prototyping a basic device that replicates the action potential signalling and the electromagnetic field signalling (ephapsis) that mutually resonate in the original tissue. It’s not digital, it’s not analog. There is no model of anything. There’s just no biological material to the substrate. It’s entirely electromagnetic phenomena exhibited by inorganic materials, but without all the biological overheads. I build artificial brain tissue like artificial fire is fire.

It is a constant battle I have to set aside this half century old delusion that we have to use computers to make artificial general intelligence. I’ll keep raising the issue until it finally gets some attention.

Nor do we need any sort of theory of intelligence or consciousness to build it. That is, just like we learned how to fly by flying, we learn how consciousness and intellect work by building it. THEN we get our theory. Like we always used to do it. This is the reason why I fundamentally dispute the statement

A) “the brain is not computable” ….. therefore B) “the singularity is bunk/won’t happen”

Rubbish.

A) can be true, but does not necessitate B)

The singularity can be just as possible. You just don’t bother with computers.

I am trying to get people to wake up. There is another way.

atolley 10 hours ago

@chales
How exactly is your approach different in kind from neuromorphic chips? Don’t they follow your idea of replication rather than simulation?

chales 10 hours ago

@atolley @chales

Neuromorphic chips, yes….but what’s on the chips is the same physics as the brain. That makes it replication. If I were to put a model of the physics in the chips, then it’s not replication.

It’s a subtlety that’s underappreciated. All quite practical. Should have something with the complexity of an ant in 5 years or so. Meanwhile I have a lot of educating to do. People simply don’t understand the subtleties of the approach.


jedharris 16 hours ago

Based on the text in the article, Nicolelis seems to be arguing that the normal dynamics of the brain are stochastic — certainly true — that therefore any simulation wouldn’t produce exactly the same time-series as the brain — also certainly true, but also true of any two iterations of the same task in one real brain. But then he goes on to conclude that “human consciousness… simply can’t be replicated in silicon” (journalist’s paraphrase) — which doesn’t follow at all, without a lot more argument.

I looked on his lab web site and could not find any publications that address this issue. Nicolelis’ claimed argument from limitation of simulation to inability to replicate consciousness — if compelling — would involve really important new science and/or philosophy. So if he has such an argument he should write it up and publish it so we could all see why he believes this. If he has written it up, the article is negligent in omitting the link to this important work, while including links to Kurzweil. If Nicolelis hasn’t written it up, the article is also negligent — it should tell us this is just his personal opinion, based on intuitions that haven’t been explained in enough detail for anyone else to analyze or critique.

Likely Nicolelis is just a fairly good, fairly well known brain researcher who has a gut feeling that making a good enough simulation of the brain is too hard, and translates that into an (apparently) authoritative statement that it “simply can’t” be done. Unfortunately this gut feeling then got written up uncritically in a publication from MIT which will lead to a lot of people taking them more seriously than they deserve.

aregalado 15 hours ago

@jedharris Jed, it’s clearly his opinion. He’s the only one talking. I’ll ask Nicolelis to elaborate in a blog post of his own with enough details for you to weigh the argument to your satisfaction. You can find some of Nicolelis’ thinking in his book “Beyond Boundaries: The New Neuroscience of Connecting Brains with Machines—and How It Will Change Our Lives.”


dobermanmacleod 18 hours ago

I remember when the chess champion of the world bemoaned the sad state of computer chess programs. Less than ten years later he got beaten by a computer! What Nicolelis fails to comprehend is that AGI is bound to surpass humans within two decades. The neocortex is highly repetitive (i.e. it repeats the same simple pattern), so I am at a loss to understand what is keeping hardware and software from duplicating nature’s miracle. It is the same old claptrap: the heuristics people use to guess at the future are whether something can be imagined easily or whether it has precedent in the past.

Instead, technological progress is exponential, not linear. The next decade will see much faster progress than the last. For instance, in about 5 years it is predicted that robots will beat humans at soccer. In ten, laptop computers will have the processing power of a human brain. In less than twenty you can pretty much count on artificial general intelligence being as smart as Einstein. It isn’t rocket science, people: just plot the exponential curve of technology in this field.
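
For what it's worth, the arithmetic behind that kind of extrapolation is easy to write down; the 18-month doubling time below is the commonly quoted Moore's Law figure and is an assumption, not a measurement:

```python
# Naive exponential extrapolation of compute under an assumed fixed doubling time.
doubling_years = 1.5   # assumed Moore's-Law-style doubling period

for horizon in (5, 10, 20):
    factor = 2 ** (horizon / doubling_years)
    print(f"In {horizon:2d} years: ~{factor:,.0f}x today's compute")
```

Whether that curve keeps running, or flattens into an S-curve, is exactly what the replies below dispute.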

wilhelm woess 17 hours ago

@dobermanmacleod It’s really still too complicated (read Steven Pinker’s books on linguistics or a recent essay by Douglas R. Hofstadter, “Analogy as the Core of Cognition,” in The Best American Science Writing 2000, Harper Collins, p. 116, on consciousness to see what I mean), and estimates of progress are highly optimistic. I wonder whether we will live long enough to see it – and is it what we want to be engaged in? I don’t know, but I will follow the progress.

dobermanmacleod 10 hours ago

@wilhelm woess While I appreciate experts saying the devil is in the details, it really is as simple as plotting the curve on this technology (AI). Moore’s Law has been railed against continuously by experts, but it has been going strong for decades now. Have you seen the latest robots (e.g. robo-boy)? Have you seen the latest AI (e.g. Watson, which beat the best human Jeopardy players and is now being educated to practice medicine)? I bet a prediction of either event would have been considered controversial by experts as little as five years ago.

It is clear that the neocortex (the portion of the brain that computer AI needs to mimic in order for AGI to emerge) can be both modeled and mimicked by computer engineers. I’ve watched as the field of vision recognition has exploded (e.g. Mind’s Eye). This hand-wringing, pessimistic view of the probability of AGI emerging soon is just like that in many other fields where AI has emerged to beat the best humans (see the previous example of chess).

wilhelm woess 2 hours ago

@dobermanmacleod Sorry for seeming pessimistic; I am not, I am a skeptic. I do admire progress in the simulation of human intelligence (those examples you mentioned), but is this cognition? It seems to me things are a bit more complicated.

Nevertheless I can give you another example: “The Best American Science Writing 2007″, Harper Collins, p. 260, “John Koza Has Built an Invention Machine” by Jonathon Keats, from Popular Science. Does it mean that a computer cluster which invents a device and gets a patent for it is as intelligent as a human being, even though its solution after thousands of simulation runs is superior to human design? Of course not; it’s a helpful tool for a scientist who defines the parameters for genetic algorithms.


dobermanmacleod 10 hours ago

@wilhelm woess “I wonder whether we will live long enough to see it…” Ironic. Technological progress is exponential, not linear. The same exploding progress we are seeing in the field of computer science, we are also seeing in the field of medicine. As a result, it is predictable that in about two decades extreme longevity treatments will be available that will enable us to live centuries. In other words, if you can live twenty more years, you will probably live centuries (barring some accident or catastrophe). I know it is difficult to wrap your head around exponential growth – that is why “experts” are so far wrong – they are gauging future progress by the (seemingly) linear standard of the past (the beginning of an exponential curve looks linear).

atolley 10 hours ago

@dobermanmacleod technology follows s-curves. It is fallacious to assume that trend extrapolation can be maintained.

dobermanmacleod 10 hours ago

@atolley @dobermanmacleod It is fallacious to assume that that trend can be maintained indefinitely…I am not maintaining that, nor is it necessary for my argument to be valid. I have been hearing “experts” make such an argument for decades while Moore’s Law keeps being validated. It is always granted that it has done so up until now, but…then it keeps doing so…when will people learn?

shagggz 10 hours ago

@atolley @dobermanmacleod Individual technological paradigms do indeed follow s-curves. However, the wider trajectory, spanning across paradigms, is indeed exponential: http://upload.wikimedia.org/wikipedia/commons/thumb/c/c5/PPTMooresLawai.jpg/596px-PPTMooresLawai.jpg

wilhelm woess 2 hours ago

@dobermanmacleod @wilhelm woess I really hope you are right about your predictions to cure nasty diseases; the other arguments I cannot follow without giving up skepticism and drifting into daydreaming about the future, which is always fun for me.


Tsuarok 21 hours ago

If consciousness exists, as many believe, outside the brain, we may never be able to copy it. If it is the result of activity in the brain, we almost certainly will.

I guess for actual evidence supporting Nicolelis’ views I’ll have to go buy his books.

atolley 21 hours ago

“That’s because its most important features are the result of unpredictable, non-linear interactions amongst billions of cells, Nicolelis says. ‘You can’t predict whether the stock market will go up or down because you can’t compute it.’”

I fail to see the point of this analogy. The stock market is a process that can be modeled. Because it is stochastic, neither the model nor the reality can produce predictable outcomes. The brain is similar. We will be able to model it, even if the output is not identical to the original, just like identical twins don’t do the same thing. Weather is non-linear too, but we can model it, and even make short-term predictions. So I don’t understand his point.

Nicolelis seems to be arguing (based on this article) that the dynamic brain simulations are pointless as they cannot really simulate the underlying neural architecture. Is he really saying that?
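
To illustrate the distinction being drawn here, a toy stochastic model (a plain random walk, standing in for a stock index or any noisy neural variable) reproduces the statistics of a process without reproducing any particular run of it:

```python
import numpy as np

# Toy illustration: a stochastic process can be modeled even though no two
# runs (and no run of the real system) follow the same path.
rng = np.random.default_rng()

def random_walk(steps=1000, drift=0.0, vol=1.0):
    return np.cumsum(drift + vol * rng.standard_normal(steps))

run_a, run_b = random_walk(), random_walk()
print("Paths identical?     ", np.allclose(run_a, run_b))   # False: paths diverge
print("Step-size std devs:   %.2f vs %.2f"                  # both near the model's vol of 1.0
      % (np.diff(run_a).std(), np.diff(run_b).std()))
```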

wilhelm woess 21 hours ago

This is an opinion based on reasonable thinking. There are other opinions based on Ray Kurzweil’s vision. How often have we seen emerging technologies dismissed as impossible (flight, fast trains, smartphones)? Many of these were inspired by fantasy decades or hundreds of years before they were invented. This discussion reminds me of Vernor Vinge’s “True Names,” published in 1981, which encouraged many computer scientists. VR has not really penetrated mass consumer markets; when it does, we will see whether there is a way to store human perception in machines. Look at these:

https://www.solveforx.com/moonshots/ahJzfmdvb2dsZS1zb2x2ZWZvcnhyEAsSCE1vb25zaG90GL2RAgw/cyborg-foundation

https://www.solveforx.com/moonshots/ahJzfmdvb2dsZS1zb2x2ZWZvcnhyEAsSCE1vb25zaG90GLjqAQw/imaging-the-minds-eye

http://www.steria.com/de/presse/publikationen/studien/studien-details/studien/the-future-report-2012/?cque=90

Are these the beginning of something new?

Spicoli 21 hours ago

What if we use biological components in this computer? Maybe we’ll grow these computers rather than etch them onto wafers.

aregalado 21 hours ago

@Spicoli Agreed. The question of the future substrate — DNA computer, quantum computing, biological computers — is a big question mark.



Common Ancestor of Mammals Plucked From Obscurity – NYTimes.com

An artist’s rendering of a placental ancestor. Researchers say the small, insect-eating animal is the most likely common ancestor of the species on the most abundant and diverse branch of the mammalian family tree.

By JOHN NOBLE WILFORD

Published: February 7, 2013

Humankind’s common ancestor with other mammals may have been a roughly rat-size animal that weighed no more than a half a pound, had a long furry tail and lived on insects.

In a comprehensive six-year study of the mammalian family tree, scientists have identified and reconstructed what they say is the most likely common ancestor of the many species on the most abundant and diverse branch of that tree — the branch of creatures that nourish their young in utero through a placenta. The work appears to support the view that in the global extinctions some 66 million years ago, all non-avian dinosaurs had to die for mammals to flourish.

Scientists had been searching for just such a common genealogical link and have found it in a lowly occupant of the fossil record, Protungulatum donnae, that until now has been so obscure that it lacks a colloquial nickname. But as researchers reported Thursday in the journal Science, the animal had several anatomical characteristics for live births that anticipated all placental mammals and led to some 5,400 living species, from shrews to elephants, bats to whales, cats to dogs and, not least, humans.

A team of researchers described the discovery as an important insight into the pattern and timing of early mammal life and a demonstration of the capabilities of a new system for handling copious amounts of fossil and genetic data in the service of evolutionary biology. The formidable new technology is expected to be widely applied in years ahead to similar investigations of plants, insects, fish and fowl.

Given some belated stature by an artist’s brush, the animal hardly looks the part of a progenitor of so many mammals (which do not include marsupials, like kangaroos and opossums, or monotremes, egg-laying mammals like the duck-billed platypus).

Maureen A. O’Leary of Stony Brook University on Long Island, a leader of the project and the principal author of the journal report, wrote that a combination of genetic and anatomical data established that the ancestor emerged within 200,000 to 400,000 years after the great dying at the end of the Cretaceous period. At the time, the meek were rapidly inheriting the earth from hulking predators like T. rex.

Within another two million to three million years, Dr. O’Leary said, the first members of modern placental orders appeared in such profusion that researchers have started to refer to the explosive model of mammalian evolution. The common ancestor itself appeared more than 36 million years later than had been estimated based on genetic data alone.

Although some small primitive mammals had lived in the shadow of the great Cretaceous reptiles, the scientists could not find evidence supporting an earlier hypothesis that up to 39 mammalian lineages survived to enter the post-extinction world. Only the stem lineage to Placentalia, they said, appeared to hang on through the catastrophe, generally associated with climate change after an asteroid crashed into Earth.

The research team drew on combined fossil evidence and genetic data encoded in DNA in evaluating the ancestor’s standing as an early placental mammal. Among characteristics associated with full-term live births, the Protungulatum species was found to have a two-horned uterus and a placenta in which the maternal blood came in close contact with the membranes surrounding the fetus, as in humans.

The ancestor’s younger age, the scientists said, ruled out the breakup of the supercontinent of Gondwana around 120 million years ago as a direct factor in the diversification of mammals, as has sometimes been speculated. Evidence of the common ancestor was found in North America, but the animal may have existed on other continents as well.

The publicly accessible database responsible for the findings is called MorphoBank , with advanced software for handling the largest compilation yet of data and images on mammals living and extinct. “This has stretched our own expertise,” Dr. O’Leary, an anatomist, said in an interview.

“The findings were not a total surprise,” she said. “But it’s an important discovery because it relies on lots of information from fossils and also molecular data. Other scientists, at least a thousand, some from other countries, are already signing up to use MorphoBank.”

John R. Wible, curator of mammals at the Carnegie Museum of Natural History in Pittsburgh, who is another of the 22 members of the project, said the “power of 4,500 characters” enabled the scientists to look “at all aspects of mammalian anatomy, from the skull and skeleton, to the teeth, to internal organs, to muscles and even fur patterns” to determine what the common ancestor possibly looked like.

The project was financed primarily by the National Science Foundation as part of its Assembling the Tree of Life program. Other scientists from Stony Brook, the American Museum of Natural History and the Carnegie Museum participated, as well as researchers from the University of Florida, the University of Tennessee at Chattanooga, the University of Louisville, Western University of Health Sciences, in Pomona, Calif., Yale University and others in Canada, China, Brazil and Argentina.

Outside scientists said that this formidable new systematic data-crunching capability might reshape mammal research but that it would probably not immediately resolve the years of dispute between fossil and genetic partisans over when placental mammals arose. Paleontologists looking for answers in skeletons and anatomy have favored a date just before or a little after the Cretaceous extinction. Those who work with genetic data to tell time by “molecular clocks” have arrived at much earlier origins.

The conflict was billed as “Fossils vs. Clocks” in the headline for a commentary article by Anne D. Yoder, an evolutionary biologist at Duke University, which accompanied Dr. O’Leary’s journal report.

Dr. Yoder acknowledged that the new study offered “a fresh perspective on the pattern and timing of mammalian evolution drawn from a remarkable arsenal of morphological data from fossil and living mammals.” She also praised the research’s “level of sophistication and meticulous analysis.”

Even so, Dr. Yoder complained that the researchers “devoted most of their analytical energy to scoring characteristics and estimating the shape of the tree rather than the length of its branches.” She said that “the disregard for the consequences of branch lengths,” as determined by the molecular clocks of genetics, “leaves us wanting more.”

John Gatesy, an evolutionary biologist at the University of California, Riverside, who was familiar with the study but was not an author of the report, said the reconstruction of the common ancestor was “very reasonable and very cool.” The researchers, he said, “have used their extraordinarily large analysis to predict what this earliest placental looked like, and it would be interesting to extend this approach to more branch points in the tree” including for early ancestors like aardvarks, elephants and manatees.

But Dr. Gatesy said the post-Cretaceous date for the placentals “will surely be controversial, as this is much younger than estimates based on molecular clocks, and implies the compression of very long molecular branches at the base of the tree.”

Aztec conquest altered genetics among early Mexico inhabitants, new DNA study shows

Using ancient DNA (aDNA) sampling, Jaime Mata-Míguez, an anthropology graduate student and lead author of the study, tracked the biological comings and goings of the Otomí people following the incorporation of Xaltocan into the Aztec empire. (Credit: Photos provided by Lisa Overholtzer, Wichita State University.)

Jan. 30, 2013 — For centuries, the fate of the original Otomí inhabitants of Xaltocan, the capital of a pre-Aztec Mexican city-state, has remained unknown. Researchers have long wondered whether they assimilated with the Aztecs or abandoned the town altogether.

According to new anthropological research from The University of Texas at Austin, Wichita State University and Washington State University, the answers may lie in DNA. Following this line of evidence, the researchers theorize that some original Otomies, possibly elite rulers, may have fled the town. Their exodus may have led to the reorganization of the original residents within Xaltocan, or to the influx of new residents, who may have intermarried with the Otomí population.

Using ancient DNA (aDNA) sampling, Jaime Mata-Míguez, an anthropology graduate student and lead author of the study, tracked the biological comings and goings of the Otomí people following the incorporation of Xaltocan into the Aztec empire. The study, published in American Journal of Physical Anthropology, is the first to provide genetic evidence for the anthropological cold case.

Learning more about changes in the size, composition, and structure of past populations helps anthropologists understand the impact of historical events, including imperial conquest, colonization, and migration, Mata-Míguez says. The case of Xaltocan is extremely valuable because it provides insight into the effects of Aztec imperialism on Mesoamerican populations.

Historical documents suggest that residents fled Xaltocan in 1395 AD, and that the Aztec ruler sent taxpayers to resettle the site in 1435 AD. Yet archaeological evidence indicates some degree of population stability across the imperial transition, deepening the mystery. Recently unearthed human remains from before and after the Aztec conquest at Xaltocan provide the rare opportunity to examine this genetic transition.

As part of the study, Mata-Míguez and his colleagues sampled mitochondrial aDNA from 25 bodies recovered from patios outside excavated houses in Xaltocan. They found that the pre-conquest maternal aDNA did not match those of the post-conquest era. These results are consistent with the idea that the Aztec conquest of Xaltocan had a significant genetic impact on the town.

Mata-Míguez suggests that long-distance trade, population movement and the reorganization of many conquered populations caused by Aztec imperialism could have caused similar genetic shifts in other regions of Mexico as well.

In focusing on mitochondrial DNA, this study only traced the history of maternal genetic lines at Xaltocan. Future aDNA analyses will be needed to clarify the extent and underlying causes of the genetic shift, but this study suggests that Aztec imperialism may have significantly altered at least some Xaltocan households.


Modifications of a nanoparticle can change chemical interactions with cell membranes


In a recent article published along with cover art, engineers showed how simple shape and charge modifications of a nanoparticle can cause tremendous changes in the chemical interactions between the nanoparticle and a cell membrane. (Credit: Image courtesy of Syracuse University)

Jan. 23, 2013 — Researchers at Syracuse University’s Department of Biomedical and Chemical Engineering at L.C. Smith College of Engineering and Computer Science are studying the toxicity of commonly used nanoparticles, particles up to one million times smaller than a millimeter that could potentially penetrate and damage cell membranes.

In a recent article published along with cover art in the journal Langmuir, researchers Shikha Nangia, assistant professor of biomedical and chemical engineering (BMCE), and Radhakrishna Sureshkumar, Department Chair of BMCE and professor of physics, showed how simple shape and charge modifications of a nanoparticle can cause tremendous changes in the chemical interactions between the nanoparticle and a cell membrane.

Nanomaterials, which are currently being used as drug carriers, also pose a legitimate concern, since no universal standards exist to educate and fully protect those who handle these materials. Nanoparticles are comparable to chemicals in their potential threat because they could easily penetrate the skin or be inhaled.

“Nanotechnology has immense potential that is starting to be realized; a comprehensive understanding of the toxicity of nanoparticles will help develop better safe-handling procedures in nanomanufacturing and nano-biotechnology,” say Sureshkumar and Nangia. In addition, the toxicity levels of various nanoparticles can be used to our advantage in targeting cancer cells and absorbing radiation during cancer therapy. Nanotoxicity is becoming a major concern as the use of nanoparticles in imaging, therapeutics, diagnostics, catalysis, sensing and energy harvesting continues to grow dramatically.

This research project has taken place over the past year utilizing a state-of-the-art 448-core parallel computer nicknamed “Prophet” housed in Syracuse University’s Green Data Center. The research was funded by the National Science Foundation.

Langmuir is a notable interdisciplinary journal of the American Chemical Society publishing articles on colloids, interfaces, biological interfaces, nanomaterials, electrochemistry, and devices and applications.

Editing genome with high precision: New method to insert multiple genes in specific locations, delete defective genes

Jan. 3, 2013 — Researchers at MIT, the Broad Institute and Rockefeller University have developed a new technique for precisely altering the genomes of living cells by adding or deleting genes. The researchers say the technology could offer an easy-to-use, less-expensive way to engineer organisms that produce biofuels; to design animal models to study human disease; and to develop new therapies, among other potential applications.

To create their new genome-editing technique, the researchers modified a set of bacterial proteins that normally defend against viral invaders. Using this system, scientists can alter several genome sites simultaneously and can achieve much greater control over where new genes are inserted, says Feng Zhang, an assistant professor of brain and cognitive sciences at MIT and leader of the research team.

“Anything that requires engineering of an organism to put in new genes or to modify what’s in the genome will be able to benefit from this,” says Zhang, who is a core member of the Broad Institute and MIT’s McGovern Institute for Brain Research.

Zhang and his colleagues describe the new technique in the Jan. 3 online edition of Science. Lead authors of the paper are graduate students Le Cong and Ann Ran.

Early efforts

The first genetically altered mice were created in the 1980s by adding small pieces of DNA to mouse embryonic cells. This method is now widely used to create transgenic mice for the study of human disease, but, because it inserts DNA randomly in the genome, researchers can’t target the newly delivered genes to replace existing ones.

In recent years, scientists have sought more precise ways to edit the genome. One such method, known as homologous recombination, involves delivering a piece of DNA that includes the gene of interest flanked by sequences that match the genome region where the gene is to be inserted. However, this technique’s success rate is very low because the natural recombination process is rare in normal cells.

More recently, biologists discovered that they could improve the efficiency of this process by adding enzymes called nucleases, which can cut DNA. Zinc fingers are commonly used to deliver the nuclease to a specific location, but zinc finger arrays can’t target every possible sequence of DNA, limiting their usefulness. Furthermore, assembling the proteins is a labor-intensive and expensive process.

Complexes known as transcription activator-like effector nucleases (TALENs) can also cut the genome in specific locations, but these complexes can also be expensive and difficult to assemble.

Precise targeting

The new system is much more user-friendly, Zhang says. Making use of naturally occurring bacterial protein-RNA systems that recognize and snip viral DNA, the researchers can create DNA-editing complexes that include a nuclease called Cas9 bound to short RNA sequences. These sequences are designed to target specific locations in the genome; when they encounter a match, Cas9 cuts the DNA.

This approach can be used either to disrupt the function of a gene or to replace it with a new one. To replace the gene, the researchers must also add a DNA template for the new gene, which would be copied into the genome after the DNA is cut.

Each of the RNA segments can target a different sequence. “That’s the beauty of this — you can easily program a nuclease to target one or more positions in the genome,” Zhang says.

The method is also very precise — if there is a single base-pair difference between the RNA targeting sequence and the genome sequence, Cas9 is not activated. This is not the case for zinc fingers or TALEN. The new system also appears to be more efficient than TALEN, and much less expensive.
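
As a toy illustration of the targeting logic described here (and only that: real targeting involves additional requirements, such as an adjacent PAM site, that are omitted), one can think of the guide RNA as a 20-letter pattern that must match the genomic sequence exactly before a cut is made. The sequences below are made up for illustration:

```python
# Toy model of guide-directed cutting: scan a sequence for sites that match a
# 20-nt guide exactly, mirroring the "single mismatch -> no cut" rule described
# above. Real targeting involves more (e.g. a PAM site); sequences are made up.

def find_cut_sites(genome: str, guide: str) -> list[int]:
    n = len(guide)
    return [i for i in range(len(genome) - n + 1) if genome[i:i + n] == guide]

genome = "ATGCGTACCGTTAGCCTAGGCATTACGGATCCGTACGTTAGC"   # made-up sequence
guide = "CCTAGGCATTACGGATCCGT"                         # made-up 20-nt guide
print(find_cut_sites(genome, guide))                   # -> [14]: one exact match
```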

The new system “is a significant advancement in the field of genome editing and, in its first iteration, already appears comparable in efficiency to what zinc finger nucleases and TALENs have to offer,” says Aron Geurts, an associate professor of physiology at the Medical College of Wisconsin. “Deciphering the ever-increasing data emerging on genetic variation as it relates to human health and disease will require this type of scalable and precise genome editing in model systems.”

The research team has deposited the necessary genetic components with a nonprofit called Addgene, making the components widely available to other researchers who want to use the system. The researchers have also created a website with tips and tools for using this new technique.

Engineering new therapies

Among other possible applications, this system could be used to design new therapies for diseases such as Huntington’s disease, which appears to be caused by a single abnormal gene. Clinical trials that use zinc finger nucleases to disable genes are now under way, and the new technology could offer a more efficient alternative.

The system might also be useful for treating HIV by removing patients’ lymphocytes and mutating the CCR5 receptor, through which the virus enters cells. After being put back in the patient, such cells would resist infection.

This approach could also make it easier to study human disease by inducing specific mutations in human stem cells. “Using this genome editing system, you can very systematically put in individual mutations and differentiate the stem cells into neurons or cardiomyocytes and see how the mutations alter the biology of the cells,” Zhang says.

In the Science study, the researchers tested the system in cells grown in the lab, but they plan to apply the new technology to study brain function and diseases.

The research was funded by the National Institute of Mental Health; the W.M. Keck Foundation; the McKnight Foundation; the Bill & Melinda Gates Foundation; the Damon Runyon Cancer Research Foundation; the Searle Scholars Program; and philanthropic support from MIT alumni Mike Boylan and Bob Metcalfe, as well as the newscaster Jane Pauley.


Crayfish Harbor Fungus That’s Wiping Out Amphibians


Freshwater crustaceans could be the key to understanding how the chytrid fungus persists in the ecosystem long after the last amphibian is gone.

Dead frogs killed by the amphibian chytrid fungus.

Photograph by Joel Sartore, National Geographic

Helen Fields

for National Geographic News

Published December 17, 2012

Scientists have found a new culprit in spreading the disease that’s been driving the world’s frogs to the brink of extinction: crayfish.

In the last few decades, the disease caused by the chytrid fungus has been a disaster for frogs and other amphibians. More than 300 species are nearly extinct because of it. Many probably have gone extinct, but it can be difficult to know for sure when a tiny, rare species disappears from the face of the Earth. (Related photos: “Ten Most Wanted ‘Extinct’ Amphibians.”)

“This pathogen is bad news. It’s worse news than any other pathogen in the history of life on Earth as far as we know it,” says Vance Vredenburg, a conservation biologist at San Francisco State University who studies frogs but did not work on the new study.

The chytrid fungus was only discovered in the late 1990s. Since then, scientists have been scrambling to figure out how it spreads and how it works.

One of the biggest mysteries is how chytrid can persist in a frogless pond. Researchers saw it happen many times and were perplexed: If all of a pond’s amphibians were wiped out, and a few frogs or salamanders came back and recolonized the pond, they would also die—even though there were no amphibians in the pond to harbor the disease. (Learn about vanishing amphibians.)

One possible reason is that chytrid infects other animals. For a study published today in Proceedings of the National Academy of Sciences, Taegan McMahon, a graduate student in ecology at the University of South Florida in Tampa, looked at some possible suspects and focused on crayfish, those lobsterlike crustaceans living in freshwater. They seemed like a good possibility because they’re widespread and because their bodies have a lot of keratin, a protein the fungus attacks.

In the lab, McMahon exposed crayfish to the disease and they got sick. More than a third died within seven weeks, and most of the survivors were carrying the fungus. She also put infected crayfish in the water with tadpoles—separated by mesh, so the crustaceans wouldn’t eat the baby frogs—and the tadpoles got infected. When McMahon and her colleagues checked out wetlands in Louisiana and Colorado, they also found infected crayfish.

That means crayfish can probably act as a reservoir for the disease. The fungus seems to be able to dine on crayfish then leap back to amphibians when it gets a chance. No one knows for sure where the fungus originally came from or why it’s been such a problem in recent decades, but this research suggests one way that it could have been spread. Crayfish are sometimes moved from pond to pond as fish bait and are sold around the world as food and aquarium pets. (Related photos: “New Giant ‘Bearded’ Crayfish Species.”)

The study doesn’t answer every last question about the disease. For one thing, crayfish are common, but they aren’t everywhere; there are no crayfish in some of the places where frogs have been hardest hit, Vredenburg says. But, he says, the new research shows that “we need to start looking a little more broadly at other potential hosts.”


All cancer is man-made, say scientists

Emma Woollacott

Cancer is a modern disease caused by factors such as pollution and diet, a study of ancient human remains has indicated.

The study of remains and literature from ancient Egypt, ancient Greece and earlier periods shows almost no evidence of the disease, says Professor Rosalie David of the University of Manchester.

Only one case has been discovered during the investigation of hundreds of Egyptian mummies, and there are few references to cancer in historical records. Cancer, and particularly childhood cancer, has become vastly more prevalent since the Industrial Revolution.

“In industrialised societies, cancer is second only to cardiovascular disease as a cause of death. But in ancient times, it was extremely rare,” says David. “It has to be a man-made disease, down to pollution and changes to our diet and lifestyle.”

The data includes the first ever histological diagnosis of cancer in an Egyptian mummy by Professor Michael Zimmerman of Villanova University, who found rectal cancer in an unnamed mummy from the Ptolemaic period.

“In an ancient society lacking surgical intervention, evidence of cancer should remain in all cases,” says Zimmerman. “The virtual absence of malignancies in mummies must be interpreted as indicating their rarity in antiquity, indicating that cancer-causing factors are limited to societies affected by modern industrialization.”

It’s not just that people didn’t live long enough to get cancer, says the team, as individuals in ancient Egypt and Greece did still develop such diseases as atherosclerosis, Paget’s disease of bone, and osteoporosis.

Nor do tumours simply fail to last. Zimmerman’s experiments indicate that mummification preserves the features of malignancy, and that tumours should actually be better preserved than normal tissues.

The first reports in scientific literature of distinctive tumours have only occurred in the past 200 years, such as scrotal cancer in chimney sweeps in 1775, nasal cancer in snuff users in 1761 and Hodgkin’s disease in 1832.

“Extensive ancient Egyptian data, along with other data from across the millennia, has given modern society a clear message – cancer is man-made and something that we can and should address,” says David.

http://www.tgdaily.com/general-sciences-features/52036-all-cancer-is-man-made-say-scientists