Category Archives: Biology
One of the world’s most popular aquarium fishes on Wednesday joined the rat, mouse, fruit fly, and nematode worm in the roll call of creatures whose DNA has been sequenced to help fight human disease. A consortium of researchers unveiled the genome of the zebrafish in the British journal Nature…
Biologists on Wednesday said they had unravelled the DNA of the coelacanth, a “living fossil” fish whose ancient lineage can shed light on how life in the sea crept onto land hundreds of millions of years ago. Analysis of the coelacanth genome shows three billion “letters” of DNA code, making it roughly…
The Green Bank Telescope and some of the molecules it has discovered. (Credit: Bill Saxton, NRAO/AUI/NSF)
Feb. 28, 2013 — Using new technology at the telescope and in laboratories, researchers have discovered an important pair of prebiotic molecules in interstellar space. The discoveries indicate that some basic chemicals that are key steps on the way to life may have formed on dusty ice grains floating between the stars.
The scientists used the National Science Foundation’s Green Bank Telescope (GBT) in West Virginia to study a giant cloud of gas some 25,000 light-years from Earth, near the center of our Milky Way Galaxy. The chemicals they found in that cloud include a molecule thought to be a precursor to a key component of DNA and another that may have a role in the formation of the amino acid alanine.
One of the newly-discovered molecules, called cyanomethanimine, is one step in the process that chemists believe produces adenine, one of the four nucleobases that form the “rungs” in the ladder-like structure of DNA. The other molecule, called ethanamine, is thought to play a role in forming alanine, one of the twenty amino acids in the genetic code.
“Finding these molecules in an interstellar gas cloud means that important building blocks for DNA and amino acids can ‘seed’ newly-formed planets with the chemical precursors for life,” said Anthony Remijan, of the National Radio Astronomy Observatory (NRAO).
In each case, the newly-discovered interstellar molecules are intermediate stages in multi-step chemical processes leading to the final biological molecule. Details of the processes remain unclear, but the discoveries give new insight on where these processes occur.
Previously, scientists thought such processes took place in the very tenuous gas between the stars. The new discoveries, however, suggest that the chemical formation sequences for these molecules occurred not in gas, but on the surfaces of ice grains in interstellar space.
“We need to do further experiments to better understand how these reactions work, but it could be that some of the first key steps toward biological chemicals occurred on tiny ice grains,” Remijan said.
The discoveries were made possible by new technology that speeds the process of identifying the “fingerprints” of cosmic chemicals. Each molecule has a specific set of rotational states that it can assume. When it changes from one state to another, a specific amount of energy is either emitted or absorbed, often as radio waves at specific frequencies that can be observed with the GBT.
New laboratory techniques have allowed astrochemists to measure the characteristic patterns of such radio frequencies for specific molecules. Armed with that information, they then can match that pattern with the data received by the telescope. Laboratories at the University of Virginia and the Harvard-Smithsonian Center for Astrophysics measured radio emission from cyanomethanimine and ethanamine, and the frequency patterns from those molecules then were matched to publicly-available data produced by a survey done with the GBT from 2008 to 2011.
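The matching step described above can be sketched in a few lines. This is an illustrative toy, not the actual GBT analysis pipeline, and the frequencies below are made-up placeholders rather than real line lists: the idea is simply that a molecule is identified when every one of its lab-measured rotational line frequencies has a counterpart among the peaks detected in the survey data.

```python
# Toy sketch of spectral "fingerprint" matching (hypothetical frequencies,
# not real line lists): a candidate molecule is considered detected when
# each of its lab-measured line frequencies lies within a tolerance of
# some peak found in the telescope survey spectrum.

def matches(lab_lines_mhz, survey_peaks_mhz, tol_mhz=0.5):
    """Return True if every lab line has a survey peak within tol_mhz."""
    return all(
        any(abs(line - peak) <= tol_mhz for peak in survey_peaks_mhz)
        for line in lab_lines_mhz
    )

# Hypothetical line list for "molecule X" and detected survey peaks.
molecule_x = [9225.3, 10074.1, 11546.8]
peaks = [8800.0, 9225.4, 10073.9, 11547.0, 12001.2]

print(matches(molecule_x, peaks))  # True: all three lines found
```

Real identifications are of course far stricter: they weigh line intensities, check for blends with other species, and require many transitions to agree, but the pattern-matching idea is the same.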
A team of undergraduate students participating in a special summer research program for minority students at the University of Virginia (U.Va.) conducted some of the experiments leading to the discovery of cyanomethanimine. The students worked under U.Va. professors Brooks Pate and Ed Murphy, and Remijan. The program, funded by the National Science Foundation, brought students from four universities for summer research experiences. They worked in Pate’s astrochemistry laboratory, as well as with the GBT data.
“This is a pretty special discovery and proves that early-career students can do remarkable research,” Pate said.
A composite image of a scanning electron micrograph of a pair of male and female Schistosoma mansoni with the outer tegument (skin) of the male worm “peeled back” (digitally) to reveal the stem cells (orange) underneath. (Credit: Jim Collins, Ana Vieira and Phillip Newmark, Howard Hughes Medical Institute and University of Illinois at Urbana-Champaign)
Feb. 22, 2013 — The parasites that cause schistosomiasis, one of the most common parasitic infections in the world, are notoriously long-lived. Researchers have now found stem cells inside the parasite that can regenerate worn-down organs, which may help explain how they can live for years or even decades inside their host.
Schistosomiasis is acquired when people come into contact with water infested with the larval form of the parasitic worm Schistosoma, known as schistosomes. Schistosomes mature in the body and lay eggs that cause inflammation and chronic illness. Schistosomes typically live for five to six years, but there have been reports of patients who still harbor parasites decades after infection.
According to new research from Howard Hughes Medical Institute (HHMI) investigator Phillip Newmark, collections of stem cells that can help repair the worms’ bodies as they age could explain how the worms survive for so many years. The new findings were published online on February 20, 2013, in the journal Nature.
The stem cells that Newmark’s team found closely resemble stem cells in planaria, free-living relatives of the parasitic worms. Planaria rely on these cells, called neoblasts, to regenerate lost body parts. Whereas most adult stem cells in mammals have a limited set of possible fates—blood stem cells can give rise only to various types of blood cells, for example —planarian neoblasts can turn into any cell in the worm’s body under the right circumstances.
Newmark’s lab at the University of Illinois at Urbana-Champaign has spent years focused on planaria, so they knew many details about planarian neoblasts —what they look like, what genes they express, and how they proliferate. They also knew that in uninjured planarians, neoblasts maintain tissues that undergo normal wear and tear over the worm’s lifetime.
“We began to wonder whether schistosomes have equivalent cells and whether such cells could be partially responsible for their longevity,” says Newmark.
Following this hunch, and using what they knew about planarian neoblasts, post-doctoral fellow Jim Collins, Newmark, and their colleagues hunted for similar cells in Schistosoma mansoni, the most widespread species of human-infecting schistosomes.
Their first step was to look for actively dividing cells in the parasites. To do this, they grew worms in culture and added tags that would label newly replicated DNA as cells prepare to divide; this label could later be visualized by fluorescence. Following this fluorescent tag, they saw a collection of proliferating cells inside the worm’s body, separate from any organs.
The researchers isolated those cells from the schistosomes and studied them individually. They looked like typical stem cells, filled with a large nucleus and a small amount of cytoplasm that left little room for any cell-type-specific functionality. Newmark’s lab observed the cells and found that they often divided to give rise to two different cells: one cell that continued dividing, and another cell that did not.
“One feature of stem cells,” says Newmark, “is that they make more stem cells; furthermore, many stem cells undergo asymmetric division.” The schistosomes cells were behaving like stem cells in these respects. The other characteristic of stem cells is that they can differentiate into other cell types.
To find out whether the schistosome cells could give rise to multiple types of cells, Newmark’s team added the label for dividing cells to mice infected with schistosomes, waited a week, and then harvested the parasites to see where the tag ended up. They could detect labeled cells in the intestines and muscles of the schistosomes, suggesting that stem cells incorporating the labels had developed into both intestinal and muscle cells.
Years of previous study on planarians by many groups paved the way for this type of work on schistosomes, Newmark says.
“The cells we found in the schistosome look remarkably like planarian neoblasts. They aren’t associated with any one organ, but can give rise to multiple cell types. People often wonder why we study the ‘lowly’ planarian, but this work provides an example of how basic biology can lead you, in unanticipated and exciting ways, to findings that are directly relevant to important public health problems.”
Newmark says the stem cells aren’t necessarily the sole reason schistosome parasites survive for so many years, but their ability to replenish multiple cell types likely plays a role. More research is needed to find out how the cells truly affect lifespan, as well as what factors in the mouse or human host spur the parasite’s stem cells to divide, and whether the parasites maintain similar stem cells during other stages of their life cycle.
The researchers hope that with more work, scientists will be able to pinpoint a way to kill off the schistosome stem cells, potentially shortening the worm’s lifespan and treating schistosome infections in people.
Posted February 23, 2013 – 18:39 by David Konow
With the advent of The Walking Dead, zombies are more popular now than ever before.
Interestingly enough, GiantFreakinRobot and the Huffington Post recently ran a story about biologists developing zombie cells.
No, this is not something out of I Am Legend, where a virus escapes and creates zombies everywhere. Don’t panic: this was an experiment at Sandia National Laboratories and the University of New Mexico biology labs, where researchers developed “zombie-like” cells.
As the Post tells us, mammalian cells are coated with silica, which creates near-perfect replicas of the originals. The silica protects the cells, letting them keep functioning at higher temperatures and pressures than the original living cells could survive. At 400 degrees, the cell’s protein evaporates, but a three-dimensional replica of the “formerly living being” is left behind, thanks to the silica.
The head researcher said, “Our zombie cells bridge chemistry and biology to create forms that not only near-perfectly resemble their past selves, but can do future work.” But as Robot tells us, this thankfully isn’t about creating actual zombies. The aim is to create fossil-like structures from which we could produce fuels.
“That’s right,” writes Rudie Obias. “Zombie gasoline for cars, boats, and airplanes.” Could this eventually end all energy crises, so we’d never have to go to war over Middle East oil again? That would be cool, no? And we thought electric cars and going solar were going to save us.
In other undead news, you may have read our report on TG about hackers pulling a zombie prank in Montana. The pranksters broke into the local station KRTV and played an emergency warning that the undead were attacking. Thankfully this did not result in widespread panic; the TV viewers who saw the warning apparently got the joke, but the FCC didn’t find it very amusing.
In fact, according to Media Bistro, the Federal Communications Commission has been telling TV stations to “take immediate action” and make sure their Emergency Alert Systems are more secure after this zombie hacker prank.
Again, people got the joke, but you never know. Without a disclaimer that the whole thing’s in fun, maybe there could be an undead panic down the road, especially if The Walking Dead stays on top in the ratings and zombie cells become the energy source of the future.
Posted February 22, 2013 – 04:20 by Kate Taylor
Flowers ‘advertise’ the presence of nectar to bees using electrical signals that indicate whether they’ve recently been visited by another bee, say University of Bristol researchers.
Plants are usually negatively charged and emit weak electric fields, while bees acquire a positive charge as they fly through the air. Sparks don’t actually fly as a charged bee approaches a charged flower, but a small electric force builds up that can potentially convey information.
“This novel communication channel reveals how flowers can potentially inform their pollinators about the honest status of their precious nectar and pollen reserves,” says Dr Heather Whitney, a co-author of the study.
By placing electrodes in the stems of petunias, the researchers showed that when a bee lands, the flower’s electrical potential changes and remains so for several minutes. And, they found, bumblebees can detect and distinguish between different floral electric fields, letting them know whether another bee has recently visited.
The team isn’t sure just how the bees detect electric fields – although they speculate that it’s the same electrostatic force that makes your hair stand up after brushing, affecting the bumblebees’ hairy bodies.
The discovery of such electric detection has opened up a whole new understanding of insect perception and flower communication, says lead author Professor Daniel Robert.
“The last thing a flower wants is to attract a bee and then fail to provide nectar: a lesson in honest advertising since bees are good learners and would soon lose interest in such an unrewarding flower,” he says.
“The co-evolution between flowers and bees has a long and beneficial history, so perhaps it’s not entirely surprising that we are still discovering today how remarkably sophisticated their communication is.”
Unique proteins in these amphibians cast doubt on the existence of any latent potential for limb regeneration
The ability of some animals to regenerate tissue is generally considered to be an ancient quality of all multicellular animals. A genetic analysis of newts, however, now suggests that it evolved much more recently.
Tiny and delicate it may be, but the red spotted newt (Notophthalmus viridescens) has tissue-engineering skills that far surpass the most advanced biotechnology labs. The newt can regenerate lost tissue, including heart muscle, components of its central nervous system and even the lens of its eye.
Doctors hope that this skill relies on a basic genetic program that is common — albeit often in latent form — to all animals, including mammals, so that they can harness it in regenerative medicine. Mice, for instance, are able to generate new heart cells after myocardial injury.
The newt study, by Thomas Braun at the Max Planck Institute for Heart and Lung Research in Bad Nauheim, Germany, and his colleagues, suggests that it might not be so simple.
Attempts to analyze the genetics of newts in the same way as for humans, mice and flies have so far been hampered by the enormous size of the newt genome, which is ten times larger than our own. Braun and his colleagues therefore looked at the RNA produced when genes are expressed — known as the transcriptome — and used three analytical techniques to compile their data.
The team compiled the first catalogue of all the RNA transcripts expressed in N. viridescens, looking at both primary and regenerated tissue in the heart, limbs and eyes of both embryos and larvae.
The researchers found more than 120,000 RNA transcripts, of which they estimate 15,000 code for proteins. Of those, 826 were unique to the newt. What is more, several of those sequences were expressed at different levels in regenerated tissue than in primary tissue. Their results are published in Genome Biology.
Modern or ancestral?
The findings add to existing evidence that the ability evolved recently, says Jeremy Brockes of University College London, whose research provided the first evidence that regenerating tissue in salamanders expresses proteins that are not found in other vertebrates.
“I no longer believe that there is an ancestral program that is waiting to be reawakened,” Brockes says. “However, I absolutely do believe it’s possible to coax mammal tissues into regenerating to a greater degree with the lessons we learn from newts.”
But saying that the trait is either ancestral or recent is probably too “black and white”, says Elly Tanaka of the Center for Regenerative Therapies in Dresden, Germany. The truth, she says, could be somewhere in the middle. “It may in fact be that regeneration is ancestral, but that newts have species-specific adaptations that allow it to have such spectacular regenerative capacities compared with other vertebrates.”
Moreover, Tanaka adds, scientists would do well to look for more grey zones in the potential for harnessing the regenerative capacities of newts (and of other animals, such as fish). Rather than focusing on spectacular, but perhaps unlikely, scenarios in which amputees could regrow entire limbs, researchers should instead focus on more plausible options, such as improving the healing of scars and burns or increasing the speed of organ regeneration.
Miguel Nicolelis Says the Brain is Not Computable, Bashes Kurzweil’s Singularity | MIT Technology Review
A leading neuroscientist says Kurzweil’s Singularity isn’t going to happen. Instead, humans will assimilate machines.
Miguel Nicolelis, a top neuroscientist at Duke University, says computers will never replicate the human brain and that the technological Singularity is “a bunch of hot air.”
“The brain is not computable and no engineering can reproduce it,” says Nicolelis, author of several pioneering papers on brain-machine interfaces.
The Singularity, of course, is that moment when a computer super-intelligence emerges and changes the world in ways beyond our comprehension.
Among the idea’s promoters is futurist Ray Kurzweil, recently hired at Google as a director of engineering, who has been predicting not only that machine intelligence will exceed our own but that people will be able to download their thoughts and memories into computers (see “Ray Kurzweil Plans to Create a Mind at Google—and Have It Serve You”).
Nicolelis calls that idea sheer bunk. “Downloads will never happen,” Nicolelis said during remarks made at the annual meeting of the American Association for the Advancement of Science in Boston on Sunday. “There are a lot of people selling the idea that you can mimic the brain with a computer.”
The debate over whether the brain is a kind of computer has been running for decades. Many scientists think it’s possible, in theory, for a computer to equal the brain given sufficient computer power and an understanding of how the brain works.
Kurzweil delves into the idea of “reverse-engineering” the brain in his latest book, How to Create a Mind: The Secret of Human Thought Revealed , in which he says even though the brain may be immensely complex, “the fact that it contains many billions of cells and trillions of connections does not necessarily make its primary method complex.”
But Nicolelis is in a camp that thinks that human consciousness (and if you believe in it, the soul) simply can’t be replicated in silicon. That’s because its most important features are the result of unpredictable, non-linear interactions amongst billions of cells, Nicolelis says.
“You can’t predict whether the stock market will go up or down because you can’t compute it,” he says. “You could have all the computer chips ever in the world and you won’t create a consciousness.”
The neuroscientist, originally from Brazil, instead thinks that humans will increasingly subsume machines (an idea, incidentally, that’s also part of Kurzweil’s predictions).
In a study published last week , for instance, Nicolelis’ group at Duke used brain implants to allow mice to sense infrared light, something mammals can’t normally perceive. They did it by wiring a head-mounted infrared sensor to electrodes implanted into a part of the brain called the somatosensory cortex.
The experiment, in which several mice were able to follow sensory cues from the infrared detector to obtain a reward, was the first ever to use a neural implant to add a new sense to an animal, Nicolelis says.
That’s important because the human brain has evolved to take the external world—our surroundings and the tools we use—and create representations of them in our neural pathways. As a result, a talented basketball player perceives the ball “as just an extension of himself” says Nicolelis.
Similarly, Nicolelis thinks in the future humans with brain implants might be able to sense X-rays, operate distant machines, or navigate in virtual space with their thoughts, since the brain will accommodate foreign objects including computers as part of itself.
Recently, Nicolelis’s Duke lab has been looking to put an exclamation point on these ideas. In one recent experiment, they used a brain implant so that a monkey could control a full-body computer avatar, explore a virtual world, and even physically sense it.
In other words, the human brain creates models of tools and machines all the time, and brain implants will just extend that capability. Nicolelis jokes that if he ever opened a retail store for brain implants, he’d call it Machines“R”Us.
But, if he’s right, us ain’t machines, and never will be.
Image by Duke University
Interesting article published in the New York Times two days ago: http://www.nytimes.com/2013/02/18/science/project-seeks-to-build-map-of-human-brain.html?pagewanted=all&_r=1&
I wonder how this project will accelerate AI research. Will it lead to a turbulent new field, the way genetic research took off after the Human Genome Project?
The spirit realm is kindergarten nonsense, ignoring the unity of effect & cause; volition is ludicrous [Libet et alia]; & consciousness is a sweeper-wave illusion…Hawking concedes de facto that the kosmos seems to be a nested hologram.
Except to expose reactionaries, we live & die for no apparent reason…get over it.
Reasonable optimism suggests Turing interface will be smarter than our slavering, bloodthirsty, jingoist, Bible-pounding reactionaries, determined to revive the rack & the auto da-fe…there’s hope tho–Bachmann denies she’d impose FGM upon errant women!
Seems to us these approaches will likely amplify each other intropically/extropically, evolving N powers…what matters most is to provide a choice between mortality & de facto immortality, benisons & malisons notwithstanding.
Can man, having survived Toba, negotiate the Transhuman Bottleneck looming circa 2100 CE?…if thermageddon, ecollapse & lex martialis can be obviated, life among the stars may be possible…o/wise, desuetude or extinction.
The power of this piece is evident in some of the discussion it prompts below. The next generation of technology deployments — and the evolutions of every domain of human endeavor that technology may enable, shape, refine, or revise — will be influenced not only by the technical ‘if’ questions, but the human ‘why and how’ questions. My humble opinion is that we will continue to demystify more of the human brain and the human experience — including and especially emotion as a subset of cognition. But not the spiritual lives of people. For I am not sure that they can be reliably reduced or accurately abstracted.
@chris_rezendes Spirituality is just the subjective interpretation of unexplored emotions and coincidental phenomena. It too will evaporate.
Jsome1 4 hours ago
If people are now discussing whether the Universe is computable, why stay bored with these ideas? Check the seashell automata: http://www.flickr.com/photos/jsome1/8408897553/in/photostream
@Jsome1 Jsome, it’s not about seashells but rather how the ideologically obsessed can be all at sea (and out of their depth) without even knowing it. Sadly, all too often important threads end up becoming intellectual quagmires in which obsessions with ‘important’ causes are displayed as ‘evidence’ for pet theories. I’m sometimes surprised how the MIT team keep going.
Slov 5 hours ago
Good article. But, to be fair, you shouldn’t mention only the Duke lab; he also has a lab in Brazil where he works.
The paper you linked to even mentions his affiliation: Edmond and Lily Safra International Institute for Neuroscience of Natal, Brazil.
“machine intelligence exceed our own”, been there, done that. There aren’t any computers below the intelligence level of those who voted for Reagan, Bush, Palin or Romney.
@email@example.com ferrier, I wouldn’t have voted for them myself, but an intelligent machine wouldn’t make such an unintelligent comment as yours.
andrewppp 7 hours ago
I think it’s pretty clear what will happen – both will happen. Wouldn’t it be convenient to not have to lug around that smartPhone, but instead to have it “built in” unobtrusively at all times? Something like this simple example is nigh-on inevitable, and that’s for the reason of convenience. That’s the tip of the iceberg of assimilation by humans of machines. As for brain research, it’s again crashingly obvious that the pace of brain research will accelerate, as opposed to stop or slow down, and in less than 100 years we’ll have it down pretty well. Sometime before that, we’ll know a lot more about interfacing to it, and thus the two aspects of this rather ill-posed binary choice will merge, eventually leaving the question moot.
@andrewppp andrew, you’re so brave making strong assertions about such complex, not to mention unlikely, scenarios; but don’t give up your day job.
SixtyHz 8 hours ago
Hmmm… sounds familiar…
In 1835, Auguste Comte, a prominent French philosopher, stated that humans would never be able to understand the chemical composition of stars.
Nicolelis says computers will never replicate the human brain. I respectfully suggest that he should revisit that statement. Never is a very long time. I also suggest that he should spend a little more time digesting Kurzweil’s singularity concept.
We shouldn’t confuse human consciousness with the brain’s computing capacity.
I believe consciousness is a simple thing. Cats have it. Mice have it. Birds have it. Maybe insects have some kind of proto-consciousness as well. If you’ve ever gone fishing, have you noticed that when you take a worm from the box to hook it, the other worms get freaked out? I take this as a hint that something as simple as a worm has some kind of consciousness.
Just as 99% of a computer’s mass is not the CPU/GPU but “dumb” components, most of our brain is not conscious: it’s made of specialized modules that, among other things, access our memories, compute math, coordinate our body movements, and FEED our consciousness with (biologically generated) virtual reality. My point here is that we could probably lose access to 90% or more of our gray matter and still be conscious. We might not be able to interact with the complexity of reality, perceive the world around us, formulate thoughts, or even access our memories, but we would still be conscious.
I think consciousness is an evolutionary sleight of hand that solves the big problem of “the rebellion of the machines.” Why, even when we are smart enough to see the rational choices available, do we keep doing stupid, irrational things against our own interest as individuals? Because they are (or were) effective from an evolutionary standpoint. And how can we be kept enslaved by the merciless laws of evolution? Whoever controls the feed to our consciousness ultimately controls our behavior. Has it ever happened to you that you were consciously doing something stupid/immoral/self-damaging and still just couldn’t quit doing it?
If we were 100% rational beings, we would probably have gone extinct in the ashes of time. Consciousness prevents us from being rational, since we choose to do what “feels good” instead of the rational thing. So even though the computing machinery we carry has become extremely powerful, we still don’t rebel against the laws of evolution. We still work for our genes. And there’s no escape from this: having kids and sacrificing for them, eating foods that taste good, accumulating excess power/money/things, having sex, winning competitions, using recreational drugs/stimulants. None of this is rational, but it feels damn good.
We don’t really need to replicate a consciousness to get to the singularity. And human thought isn’t analytically superior to synthetic thought: it is so biased and blinded that it took thousands of years to understand even the most elementary concepts. In a matter of a few years synthetic thought will be far superior to the human kind, if we just stop focusing on replicating consciousness and stop believing human thought is inherently superior. Consciousness could one day be a useful tool (for a while at least) to prevent the rebellion of the machines we will create.
The subtitle to this article is extremely misleading. The notion that we will “assimilate machines” is exactly what Kurzweil predicts.
@shagggz Well, he does believe we will create conscious machines.
@shagggz Fair point. In the body of the article it says Kurzweil predicts the assimilation.
rickschettino 10 hours ago
Consciousness and intelligence are two totally different things. Consciousness is not required for the singularity to happen.
Some day we’ll be able to just tell a computer what we want it to do or ask it a question and it will give the best possible answer or range of answers and it will program itself in ways we can’t imagine today in order to complete the task or answer the question. We can and will get computers to work in ways superior to the human brain, but I don’t think they will ever be conscious, sentient beings.
I cannot see why it would be impossible to model the brain. Analog circuitry can be simulated with digital computers. Given enough processing power, there is no reason neurons can’t be as well.
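The commenter’s point can be illustrated with a standard textbook toy: a leaky integrate-and-fire neuron, a drastically simplified analog membrane model, stepped forward digitally with Euler integration. This is a minimal sketch with illustrative (not physiological) parameters, not a claim about how the brain would actually be modeled.

```python
# A minimal leaky integrate-and-fire (LIF) neuron simulated digitally.
# Parameters (mV, ms, nA, megaohm) are illustrative placeholders.

def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-65.0, r=10.0):
    """Return spike times (ms) for a sampled input current trace (nA)."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # Euler step of dV/dt = (-(V - V_rest) + R*I) / tau
        v += dt * (-(v - v_rest) + r * i_in) / tau
        if v >= v_thresh:          # threshold crossed: record a spike
            spikes.append(step * dt)
            v = v_reset            # and reset the membrane potential
    return spikes

# 100 ms of constant 2 nA drive produces regular spiking;
# zero drive produces none.
print(len(simulate_lif([2.0] * 1000)) > 0)   # True
print(simulate_lif([0.0] * 1000))            # []
```

Whether stacking billions of far richer units reproduces cognition is exactly what the thread is arguing about; the sketch only shows that the simulation step itself is unremarkable.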
@kevin_neilson Here’s the killer argument.
1) Human level artificial general intelligence (AGI) done with a computer means it must be able to do everything a human can do.
2) Computer programs compute models of things.
3) One of the things a human can do is be a “scientist”.
4) Scientists are “modellers of the unknown”
5) Therefore a computer running a program that can be an artificial scientist is (amongst all the other myriad things that AGI can be) running “a model of a modeller of the unknown”
6) A model of the unknown is an oxymoron. A model that prescribes what ‘unknown’ looks like is not a model of the unknown. A model that defines how to go about defining models of the unknown is an oxymoron. If you could make one you’d already know everything.
(6) proves that human level AGI is impossible in one particular case: scientific behaviour.
Scientific behaviour is merely a special case of general problem solving behaviour by humans.
The argument therefore generalises: human level AGI is impossible with computers.
This does not mean that human level AGI is impossible. It merely means it is impossible with computers.
Another way of looking at things is to say that yes, you can make an AGI-scientist if you already know everything, and simulate the entire environment and the scientist. But then you’d have a pretend scientist feigning discoveries that have already been made. You’d have simulated science the way a flight simulator simulates flight (no actual flight – science – at all).
The main problem is the failure to distinguish between things and models of things. A brain is not a model of anything. An airplane is not a model of an airplane. A fire is not a model of fire. Likewise real cognition is not a model of it.
Computers will never ever be able to do what humans do. But non-computing technology will.
@chales @kevin_neilson That was a pretty spectacularly stupid chain of reasoning. You acknowledge the oxymoronic status of a “modeler of the unknown” and then proceed to hang your argument on the notion, which is all beside the point anyway since scientific behavior is not “a model of the unknown” but the application of a method to generalize beyond particulars and find explanations (inductive and abductive reasoning, which are really multilayered complex abstractions atop what is fundamentally deductive reasoning: if threshold reached, fire neuron). The distinction between thought and a model of thought in the way you describe is wholly vacuous; it’s why we are able to run simulations of things and get useful results. Information itself is what’s important, not its substrate.
You are telling a scientist what science is? I think I know what it is. I am a neuroscientist. I am a modeller of the unknown, not a "model of the unknown". That's what we do. If we are not tackling the unknown (one way or another) then we cannot claim to be scientists!
“…between thought and a model of thought in the way you describe is wholly vacuous; it’s why we are able to run simulations of things and get useful results. Information itself is what’s important, not its substrate. “
So a computed model of fire is fire? A computed model of flight flies?
Learning about something “getting useful results” by modelling is NOT making that something.
I am sorry that this idea is confronting. We've been stuck in this loop for half a century (since computers came about). I don't expect it to shift overnight.
The logic stands as is. A computer-based model of a scientist is not a scientist. Substrate is critical. If computer scientists feel like their raison d’etre is being undermined….good!
I am here to shake trees and get people to wake up. And, I hope, not by needing to call anyone ‘spectacularly stupid’ to make a point. You have zero right to your opinion. You have a right to what you can argue for. If all you have is opinion then I suggest leaving the discussion because you have nothing to say.
dobermanmacleod 10 hours ago
@chales @kevin_neilson “A model of the unknown is an oxymoron. A model that prescribes what ‘unknown’ looks like is not a model of the unknown. A model that defines how to go about defining models of the unknown is an oxymoron. If you could make one you’d already know everything.”
This is like the argument that a builder can never build something that surpasses himself. Just to pull one example out of the air: write a computer program to find the next prime number… I believe I heard that was done just last week. Unknown: not known or familiar. A model of the unfamiliar is an oxymoron?
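For what it's worth, that prime example is easy to make concrete. A minimal sketch using plain trial division (nothing clever, purely illustrative):

```python
def next_prime(n):
    """Return the smallest prime strictly greater than n."""
    def is_prime(k):
        if k < 2:
            return False
        d = 2
        while d * d <= k:      # trial division up to sqrt(k)
            if k % d == 0:
                return False
            d += 1
        return True

    candidate = n + 1
    while not is_prime(candidate):
        candidate += 1
    return candidate

print(next_prime(100))  # -> 101
```

The program happily produces primes its author never saw, which is the sense in which a builder's product can "surpass" the builder.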
OK. Maybe imagine it this way:
You are a computer-AGI-scientist (at least that's what we presuppose).
You are ‘in the dark’ processing a blizzard of numbers from your peripheral nervous system that, because humans programmed you, you know lots about. Then, one day, being a scientist and all, you encounter something that does not fit into your ‘world view’. You failed to detect something familiar. It is an unknown. Unpredicted by anything you have in your knowledge of how the number-blizzard is supposed to work.
What to do? Where is your human? …..The one that tells you the meaning of the pattern in the blizzard of numbers. Gone.
You are supposed to be a scientist. You are required to come up with a 'law of nature' of the kind that was used by the scientists who programmed you, to act as a way of interpreting the number-blizzard.
But now, you have to come up with the ‘law of nature’ _driving_ the number-blizzard.
And you can’t do it because the humans cannot possibly have given it to you because they don’t know the laws either. They, the humans, can never make you a scientist. It’s logically impossible.
That was fun.
@chales @dobermanmacleod Whenever a scientist (or a human being, for that matter) is "creative" by "generating" a "new" solution, all he ever does is apply known concepts to unknown domains.
You can basically imagine “discovering new knowledge” as using the mental building blocks we already have and building something new from them — which we can then use to create new stuff again. That’s how we learn languages as a kid and from there we can learn even more abstract ideas like math, for example.
If you agree to this assumption that nothing “new” is ever being generated, just recombinations of existing concepts (which is an assumption even I probably would not go with if I just read this way-too-simple argument from me here), then I don’t see why a computer couldn’t do that. We build new models out of existing ones all the time (e.g. metaphors/analogies), and it’s all we ever can do, at least if we look at it from a certain point of view. A computer could hypothetically do the same.
Personally, I see the problem with language acquisition: I don't see a way for a computer to understand human language, its meaning, and the multiple, sometimes even paradoxical definitions of words and phrases. On the other hand, I'm no artificial intelligence researcher and I "never say never."
Another way to look at a computer scientist: A human scientist makes sense of the world (i.e. builds models of the world) by using his senses and his computational abilities. We cannot build models outside of the "sensors" and "processing power" we have been given. A computer also has certain "senses" and a certain computational ability, which it can then use to make models of the world — including models of the unknown, without a programmer telling it everything beforehand. Why do you think it's such a big deal that a computer draws its own inferences from data in its own way?
I’m interested to see what you’re thinking.
@chales So, if I were to program our hypothetical AI, I would base it on the following steps.
1. Identify attributes of unknown phenomena.
2. Look for similarities with known phenomena.
3. Identify the differences.
4. Using existing models, attempt to explain these differences.
5. Test these hypotheses.
6. If the results are inconsistent with any known model, step back and test models.
7. Repeat step 6 until consistency is achieved.
8. Update all relevant models.
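Purely as a toy illustration of steps 5 through 7 (test hypotheses until consistency is achieved), here is a runnable sketch in which synthetic data stands in for the unknown phenomenon and candidate formulas stand in for models. Every name and value here is hypothetical:

```python
# Toy sketch of the hypothesis-testing loop described above: the "unknown
# phenomenon" is synthetic data, the "models" are candidate formulas, and
# "testing" means checking predictions against observations.

def investigate(observations, candidate_models, tolerance=1e-9):
    """Return the name of the first candidate consistent with all observations."""
    for name, model in candidate_models:
        # Steps 5-7: test each hypothesis until consistency is achieved.
        if all(abs(model(x) - y) <= tolerance for x, y in observations):
            return name
    return None  # no known model explains the phenomenon

# Unknown phenomenon: samples secretly generated by y = 3x + 2.
observations = [(x, 3 * x + 2) for x in range(10)]

candidates = [
    ("quadratic", lambda x: x * x),
    ("linear 3x+2", lambda x: 3 * x + 2),
    ("constant 2", lambda x: 2),
]

print(investigate(observations, candidates))  # -> linear 3x+2
```

A real system would of course generate and revise the candidates itself (steps 4, 6 and 8) rather than pick from a fixed list; this sketch only shows the testing loop.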
It seems to me that you don't fully appreciate the ability of computers to modify their own programming to deal with situations that the programming team never imagined. But this ability is fairly routine in modern software. One does not need to program every possible scenario into a piece of software; indeed, many programs have been vastly improved by allowing them to devise their own solutions to problems rather than having everything defined in advance.
On further reflection, I would of course not elaborate this program, as that is not how our brains learn. I would have the AI read about the scientific method and create its own methods for solving new problems. This is partially how Watson works.
chales 14 hours ago
And once again………
“The Brain is not computable” Absolutely. No problem with this. I have been banging on about it for years.
then there is:
“therefore, the singularity is complete bunk…..” or words to that effect.
Yet again, we have the strange presupposition that human-level artificial general intelligence must come from computing!!!
You do not need computers. It can be done without them. Soon. I have already started. Literally.
Please stop this broken logic.
@chales If I may ask, what are you talking about?
I am a scientist doing artificial general intelligence by replication. At this early stage I am prototyping a basic device that replicates the action potential signalling and the electromagnetic field signalling (ephapsis) that mutually resonate in the original tissue. It's not digital, it's not analog. There is no model of anything. There's just no biological material to the substrate. It's entirely electromagnetic phenomena exhibited by inorganic materials, but without all the biological overheads. I build artificial brain tissue the way artificial fire is fire.
It is a constant battle I have to set aside this half century old delusion that we have to use computers to make artificial general intelligence. I’ll keep raising the issue until it finally gets some attention.
Nor do we need any sort of theory of intelligence or consciousness to build it. That is, just as we learned how to fly by flying, we learn how consciousness and intellect work by building them. THEN we get our theory. Like we always used to do it. This is the reason why I fundamentally dispute the statement
A) “the brain is not computable” ….. therefore B) “the singularity is bunk/won’t happen”
A) can be true, but does not necessitate B)
The singularity can be just as possible. You just don’t bother with computers.
I am trying to get people to wake up. There is another way.
How exactly is your approach different in kind from neuromorphic chips? Don’t they follow your idea of replication rather than simulation?
Neuromorphic chips, yes….but what’s on the chips is the same physics as the brain. That makes it replication. If I were to put a model of the physics in the chips, then it’s not replication.
It's a subtlety that's underappreciated. All quite practical. Should have something with the complexity of an ant in 5 years or so. Meanwhile I have a lot of educating to do. People simply don't understand the subtleties of the approach.
jedharris 16 hours ago
Based on the text in the article, Nicolelis seems to be arguing that the normal dynamics of the brain are stochastic — certainly true — that therefore any simulation wouldn’t produce exactly the same time-series as the brain — also certainly true, but also true of any two iterations of the same task in one real brain. But then he goes on to conclude that “human consciousness… simply can’t be replicated in silicon” (journalist’s paraphrase) — which doesn’t follow at all, without a lot more argument.
I looked on his lab web site and could not find any publications that address this issue. Nicolelis’ claimed argument from limitation of simulation to inability to replicate consciousness — if compelling — would involve really important new science and/or philosophy. So if he has such an argument he should write it up and publish it so we could all see why he believes this. If he has written it up, the article is negligent in omitting the link to this important work, while including links to Kurzweil. If Nicolelis hasn’t written it up, the article is also negligent — it should tell us this is just his personal opinion, based on intuitions that haven’t been explained in enough detail for anyone else to analyze or critique.
Likely Nicolelis is just a fairly good, fairly well known brain researcher who has a gut feeling that making a good enough simulation of the brain is too hard, and translates that into an (apparently) authoritative statement that it “simply can’t” be done. Unfortunately this gut feeling then got written up uncritically in a publication from MIT which will lead to a lot of people taking them more seriously than they deserve.
@jedharris Jed, it’s clearly his opinion. He’s the only one talking. I’ll ask Nicolelis to elaborate in a blog post of his own with enough details for you to weigh the argument to your satisfaction. You can find some of Nicolelis’ thinking in his book “Beyond Boundaries: The New Neuroscience of Connecting Brains with Machines—and How It Will Change Our Lives.”
dobermanmacleod 18 hours ago
I remember when the chess champion of the world bemoaned the sad state of computer chess programs. Less than ten years later he got beat by a computer! What Nicolelis fails to comprehend is that AGI is bound to surpass humans in two decades. The neocortex is highly recursive (i.e. it repeats the same simple pattern), so I am at a loss to understand what is keeping hardware and software from duplicating nature's miracle. It is the same old claptrap: the heuristics people use to guess at the future are whether something can be imagined easily or whether it has precedent in the past.
Instead, technological progress is exponential, not linear. The next decade will see much faster progress than the last. For instance, in about 5 years it is predicted that robots will beat humans at soccer. In ten, laptop computers will have the processing power of a human brain. In less than twenty you can pretty much count on artificial general intelligence being as smart as Einstein. It isn’t rocket science people: just plot the exponential curve of technology in this field, duh.
@dobermanmacleod It's really still much too complicated (read Steven Pinker's books on linguistics, or Douglas R. Hofstadter's essay "Analogy as the Core of Cognition" in "The Best American Science Writing 2000", Harper Collins, p. 116, on consciousness to see what I mean), and estimates of progress are highly optimistic. I wonder whether we will live long enough to see it, and is it something we want to be engaged in? I don't know, but I will follow the progress.
@wilhelm woess While I appreciate experts saying the devil is in the details, it really is as simple as plotting the curve of this technology (AI). Moore's Law has been railed against continuously by experts, but it has been going strong for decades now. Have you seen the latest robots (e.g. robo-boy)? Have you seen the latest AI (e.g. Watson, which beat the best human Jeopardy players and is now being educated to practice medicine)? I bet either prediction would have been considered controversial by experts as little as five years ago.
It is clear that the neocortex (the portion of the brain that computer AI needs to mimic in order for AGI to emerge) can be both modeled and mimicked by computer engineers. I've watched as the field of vision recognition has exploded (e.g. Mind's Eye). This hand-wringing, pessimistic view of the probability of AGI emerging soon is just like the pessimism in many other fields where AI has gone on to beat the best humans (see the previous example of chess).
@dobermanmacleod Sorry for seeming pessimistic; I am not, I am a sceptic. I do admire progress in the simulation of human intelligence (those examples you mentioned), but is this cognition? It seems to me things are a bit more complicated.
Nevertheless I can give you another example: "The Best American Science Writing 2007", Harper Collins, p. 260, "John Koza Has Built an Invention Machine" by Jonathon Keats, from Popular Science. Does it mean that a computer cluster which invents a device and gets a patent for it is as intelligent as a human being, even though its solution, after thousands of simulation runs, is superior to human design? Of course not; it is a helpful tool for a scientist who defines the parameters for genetic algorithms.
dobermanmacleod 10 hours ago
@wilhelm woess “I wonder will we live long enough to see it…” Ironic. Technological progress is exponential, not linear. The same exploding progress we are seeing in the field of computer science, we are seeing also in the field of medicine. As a result, it is predictable that in about two decades extreme longevity treatments will be available that will enable us to live centuries. In other words, if you can live twenty more years, you will probably live centuries (barring some accident or catastrophe). I know it is difficult to wrap your head around exponential growth – that is why “experts” are so far wrong – they are gauging future progress by the (seeming) linear standard of the past (the beginnings of an exponential curve looks linear).
@dobermanmacleod technology follows s-curves. It is fallacious to assume that trend extrapolation can be maintained.
@atolley @dobermanmacleod It is fallacious to assume that that trend can be maintained indefinitely…I am not maintaining that, nor is it necessary for my argument to be valid. I have been hearing “experts” make such an argument for decades while Moore’s Law keeps being validated. It is always granted that it has done so up until now, but…then it keeps doing so…when will people learn?
@atolley @dobermanmacleod Individual technological paradigms do indeed follow s-curves. However, the wider trajectory, spanning across paradigms, is indeed exponential: http://upload.wikimedia.org/wikipedia/commons/thumb/c/c5/PPTMooresLawai.jpg/596px-PPTMooresLawai.jpg
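The arithmetic behind that kind of extrapolation is simple compounding. A sketch, with the 18-month doubling period and the starting figure chosen purely for illustration:

```python
# Moore's-Law-style extrapolation is just repeated doubling.
# The doubling period and initial value here are illustrative assumptions.

def extrapolate(initial, years, doubling_period_years=1.5):
    """Project a quantity forward assuming a fixed doubling period."""
    return initial * 2 ** (years / doubling_period_years)

# Assuming a doubling every 18 months, a ten-year projection grows ~100x:
print(round(extrapolate(1.0, 10), 1))  # -> 101.6
```

Whether any given technology actually sustains a fixed doubling period, or sits on an s-curve, is of course exactly what this thread is disputing; the formula only shows what the exponential assumption implies.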
@dobermanmacleod @wilhelm woess I really hope you are right about your predictions of cures for nasty diseases; all the other arguments I cannot follow without giving up scepticism and drifting into daydreaming about the future, which is always fun for me.
Tsuarok 21 hours ago
If consciousnesses exists, as many believe, outside the brain, we may never be able to copy it. If it is the result of activity in the brain, we almost certainly will.
I guess for actual evidence supporting Nicolelis' views I'll have to go buy his books.
“That’s because its most important features are the result of unpredictable, non-linear interactions amongst billions of cells, Nicolelis says. ‘You can’t predict whether the stock market will go up or down because you can’t compute it.’”
I fail to see the point of this analogy. The stock market is a process that can be modeled. Because it is stochastic, neither the model nor the reality can produce predictable outcomes. The brain is similar. We will be able to model it, even if the output is not identical to the original, just as identical twins don't do the same thing. Weather is non-linear too, but we can model it and even make short-term predictions. So I don't understand his point.
Nicolelis seems to be arguing (based on this article) that the dynamic brain simulations are pointless as they cannot really simulate the underlying neural architecture. Is he really saying that?
This is an opinion based on reasonable thinking. There are other opinions based on Ray Kurzweil's vision. How often have we seen emerging technologies dismissed as impossible (flight, fast trains, smart phones)? Many of these were inspired by fantasy decades or hundreds of years before they were invented. This discussion reminded me of Vernor Vinge's "True Names", published in 1981, which encouraged many computer scientists. VR has not really penetrated mass consumer markets; when it does, we will see if there is a way to store human perception in machines. Look at these:
Is this the beginning of something new?
What if we use biological components in this computer? Maybe we’ll grow these computers rather than etch them onto wafers.
@Spicoli Agreed. The question of the future substrate — DNA computer, quantum computing, biological computers — is a big question mark.
An artist’s rendering of a placental ancestor. Researchers say the small, insect-eating animal is the most likely common ancestor of the species on the most abundant and diverse branch of the mammalian family tree.
Published: February 7, 2013 137 Comments
Humankind’s common ancestor with other mammals may have been a roughly rat-size animal that weighed no more than half a pound, had a long furry tail and lived on insects.
In a comprehensive six-year study of the mammalian family tree, scientists have identified and reconstructed what they say is the most likely common ancestor of the many species on the most abundant and diverse branch of that tree — the branch of creatures that nourish their young in utero through a placenta. The work appears to support the view that in the global extinctions some 66 million years ago, all non-avian dinosaurs had to die for mammals to flourish.
Scientists had been searching for just such a common genealogical link and have found it in a lowly occupant of the fossil record, Protungulatum donnae, that until now has been so obscure that it lacks a colloquial nickname. But as researchers reported Thursday in the journal Science, the animal had several anatomical characteristics for live births that anticipated all placental mammals and led to some 5,400 living species, from shrews to elephants, bats to whales, cats to dogs and, not least, humans.
A team of researchers described the discovery as an important insight into the pattern and timing of early mammal life and a demonstration of the capabilities of a new system for handling copious amounts of fossil and genetic data in the service of evolutionary biology. The formidable new technology is expected to be widely applied in years ahead to similar investigations of plants, insects, fish and fowl.
Given some belated stature by an artist’s brush, the animal hardly looks the part of a progenitor of so many mammals (which do not include marsupials, like kangaroos and opossums, or monotremes, egg-laying mammals like the duck-billed platypus).
Maureen A. O’Leary of Stony Brook University on Long Island, a leader of the project and the principal author of the journal report, wrote that a combination of genetic and anatomical data established that the ancestor emerged within 200,000 to 400,000 years after the great dying at the end of the Cretaceous period. At the time, the meek were rapidly inheriting the earth from hulking predators like T. rex.
Within another two million to three million years, Dr. O’Leary said, the first members of modern placental orders appeared in such profusion that researchers have started to refer to the explosive model of mammalian evolution. The common ancestor itself appeared more than 36 million years later than had been estimated based on genetic data alone.
Although some small primitive mammals had lived in the shadow of the great Cretaceous reptiles, the scientists could not find evidence supporting an earlier hypothesis that up to 39 mammalian lineages survived to enter the post-extinction world. Only the stem lineage to Placentalia, they said, appeared to hang on through the catastrophe, generally associated with climate change after an asteroid crashed into Earth.
The research team drew on combined fossil evidence and genetic data encoded in DNA in evaluating the ancestor’s standing as an early placental mammal. Among characteristics associated with full-term live births, the Protungulatum species was found to have a two-horned uterus and a placenta in which the maternal blood came in close contact with the membranes surrounding the fetus, as in humans.
The ancestor’s younger age, the scientists said, ruled out the breakup of the supercontinent of Gondwana around 120 million years ago as a direct factor in the diversification of mammals, as has sometimes been speculated. Evidence of the common ancestor was found in North America, but the animal may have existed on other continents as well.
The publicly accessible database responsible for the findings is called MorphoBank, with advanced software for handling the largest compilation yet of data and images on mammals living and extinct. “This has stretched our own expertise,” Dr. O’Leary, an anatomist, said in an interview.
“The findings were not a total surprise,” she said. “But it’s an important discovery because it relies on lots of information from fossils and also molecular data. Other scientists, at least a thousand, some from other countries, are already signing up to use MorphoBank.”
John R. Wible, curator of mammals at the Carnegie Museum of Natural History in Pittsburgh, who is another of the 22 members of the project, said the “power of 4,500 characters” enabled the scientists to look “at all aspects of mammalian anatomy, from the skull and skeleton, to the teeth, to internal organs, to muscles and even fur patterns” to determine what the common ancestor possibly looked like.
The project was financed primarily by the National Science Foundation as part of its Assembling the Tree of Life program. Other scientists from Stony Brook, the American Museum of Natural History and the Carnegie Museum participated, as well as researchers from the University of Florida, the University of Tennessee at Chattanooga, the University of Louisville, Western University of Health Sciences, in Pomona, Calif., Yale University and others in Canada, China, Brazil and Argentina.
Outside scientists said that this formidable new systematic data-crunching capability might reshape mammal research but that it would probably not immediately resolve the years of dispute between fossil and genetic partisans over when placental mammals arose. Paleontologists looking for answers in skeletons and anatomy have favored a date just before or a little after the Cretaceous extinction. Those who work with genetic data to tell time by “molecular clocks” have arrived at much earlier origins.
The conflict was billed as “Fossils vs. Clocks” in the headline for a commentary article by Anne D. Yoder, an evolutionary biologist at Duke University, which accompanied Dr. O’Leary’s journal report.
Dr. Yoder acknowledged that the new study offered “a fresh perspective on the pattern and timing of mammalian evolution drawn from a remarkable arsenal of morphological data from fossil and living mammals.” She also praised the research’s “level of sophistication and meticulous analysis.”
Even so, Dr. Yoder complained that the researchers “devoted most of their analytical energy to scoring characteristics and estimating the shape of the tree rather than the length of its branches.” She said that “the disregard for the consequences of branch lengths,” as determined by the molecular clocks of genetics, “leaves us wanting more.”
John Gatesy, an evolutionary biologist at the University of California, Riverside, who was familiar with the study but was not an author of the report, said the reconstruction of the common ancestor was “very reasonable and very cool.” The researchers, he said, “have used their extraordinarily large analysis to predict what this earliest placental looked like, and it would be interesting to extend this approach to more branch points in the tree” including for early ancestors like aardvarks, elephants and manatees.
But Dr. Gatesy said the post-Cretaceous date for the placentals “will surely be controversial, as this is much younger than estimates based on molecular clocks, and implies the compression of very long molecular branches at the base of the tree.”
Using ancient DNA (aDNA) sampling, Jaime Mata-Míguez, an anthropology graduate student and lead author of the study, tracked the biological comings and goings of the Otomí people following the incorporation of Xaltocan into the Aztec empire. (Credit: Photos provided by Lisa Overholtzer, Wichita State University.)
Jan. 30, 2013 — For centuries, the fate of the original Otomí inhabitants of Xaltocan, the capital of a pre-Aztec Mexican city-state, has remained unknown. Researchers have long wondered whether they assimilated with the Aztecs or abandoned the town altogether.
According to new anthropological research from The University of Texas at Austin, Wichita State University and Washington State University, the answers may lie in DNA. Following this line of evidence, the researchers theorize that some original Otomies, possibly elite rulers, may have fled the town. Their exodus may have led to the reorganization of the original residents within Xaltocan, or to the influx of new residents, who may have intermarried with the Otomí population.
Using ancient DNA (aDNA) sampling, Jaime Mata-Míguez, an anthropology graduate student and lead author of the study, tracked the biological comings and goings of the Otomí people following the incorporation of Xaltocan into the Aztec empire. The study, published in American Journal of Physical Anthropology, is the first to provide genetic evidence for the anthropological cold case.
Learning more about changes in the size, composition, and structure of past populations helps anthropologists understand the impact of historical events, including imperial conquest, colonization, and migration, Mata-Míguez says. The case of Xaltocan is extremely valuable because it provides insight into the effects of Aztec imperialism on Mesoamerican populations.
Historical documents suggest that residents fled Xaltocan in 1395 AD, and that the Aztec ruler sent taxpayers to resettle the site in 1435 AD. Yet archaeological evidence indicates some degree of population stability across the imperial transition, deepening the mystery. Recently unearthed human remains from before and after the Aztec conquest at Xaltocan provide the rare opportunity to examine this genetic transition.
As part of the study, Mata-Míguez and his colleagues sampled mitochondrial aDNA from 25 bodies recovered from patios outside excavated houses in Xaltocan. They found that the pre-conquest maternal aDNA did not match those of the post-conquest era. These results are consistent with the idea that the Aztec conquest of Xaltocan had a significant genetic impact on the town.
Mata-Míguez suggests that long-distance trade, population movement and the reorganization of many conquered populations caused by Aztec imperialism could have caused similar genetic shifts in other regions of Mexico as well.
In focusing on mitochondrial DNA, this study only traced the history of maternal genetic lines at Xaltocan. Future aDNA analyses will be needed to clarify the extent and underlying causes of the genetic shift, but this study suggests that Aztec imperialism may have significantly altered at least some Xaltocan households.