Stanford University Researchers Make Complex Carbon Nanotube Circuits | MIT Technology Review

Carbon nanotubes could help make computers faster and more efficient—if they can be incorporated into complex circuits.

Carbon complexity: This wafer is patterned with a complex carbon nanotube circuit that serves as a sensor interface.

Researchers at Stanford University have built one of the most complex circuits from carbon nanotubes yet. They showed off a simple hand-shaking robot with a sensor-interface circuit last week at the International Solid-State Circuits Conference in San Francisco.

As the silicon transistors inside today’s computers reach their physical limits, the semiconductor industry is eyeing alternatives, and one of the most promising is carbon nanotubes. Tiny transistors made from these nanomaterials are faster and more energy efficient than silicon ones, and computer models predict that carbon nanotube processors could be an order of magnitude less power hungry. But it’s proved difficult to turn individual transistors into complex working circuits (see “How to Build a Nano-Computer”).

The demonstration carbon nanotube circuit converts an analog signal from a capacitor—the same type of sensor found in many touch screens—into a digital signal that a microprocessor can read. The Stanford researchers rigged a wooden mannequin hand with a capacitive switch in its palm. When someone grasps the hand, turning on the switch, the nanotube circuit sends a signal to the computer, which activates a motor on the robot hand, moving it up and down to shake the person’s hand.

Other researchers have demonstrated simple nanotube circuits before, but this is the most complex made so far, and it also demonstrates that nanotube transistors can be made at high yields, says Subhasish Mitra, an associate professor of electrical engineering and computer science, who led the work with Philip Wong, a professor of electrical engineering at Stanford.

The nanotube circuit is still relatively slow—its transistors are large and far apart compared to the latest silicon circuits. But the work is an important experimental demonstration of the potential of carbon nanotube computing technology.

“This shows that carbon nanotube transistors can be integrated into logic circuits that perform at low voltage,” says Aaron Franklin, who is developing nanotube electronics at the IBM Watson Research Center. This has been demonstrated by Franklin’s group at the single-transistor level, and shown to be theoretically possible by others, but seeing it in a complex circuit is important, says Franklin.

Working with carbon nanotubes presents many challenges—as many as 30 percent of them are metallic, rather than semiconducting, with the potential to burn out a circuit. Nanotubes also tend to grow in a spaghetti-like tangle, which can cause circuits to switch unpredictably. The Stanford group’s approach is to work with these imperfections, developing error-tolerant circuit design techniques that allow them to build circuits that work even when the starting materials are flawed. “We want to build up the circuit complexity, then go back to improving the building methods, then make more complex circuits,” says Wong.

“This is no different from the early days in silicon,” says Ashraf Alam , professor of electrical and computer engineering at Purdue University. Compared to the electronics in today’s silicon-based smartphones and supercomputers, the first silicon transistors were poor quality, as were the first integrated circuits. But silicon got through its growing pains, and the semiconductor industry perfected building ever-denser arrays of integrated circuits made up of ever-smaller transistors.

“Variation and imperfection are going to be the air we breathe in semiconductor technology,” says Wong, not just for those working with new materials, but for conventional silicon technology, too. Today’s state-of-the-art chips use 22-nanometer transistors—billions on each chip—and there is very little variation in their performance; the semiconductor industry has mastered making these tiny devices at tremendous scales, and with very high yields.

The drive to continually miniaturize transistors while maintaining scrupulous quality control has enabled technologies ranging from smartphones to supercomputers. But unavoidable flaws, at the level of single atoms, will soon lead to variation in performance that will have to be accounted for in circuit design. “Error-tolerant design has to be part of the way forward, because we will never get the materials completely perfect,” says Wong.

3-D Printed Car Is as Strong as Steel, Half the Weight, and Nearing Production | Autopia | Wired.com

Engineer Jim Kor and his design for the Urbee 2. Photo: Sara Payne

Picture an assembly line that isn’t made up of robotic arms spewing sparks to weld heavy steel, but a warehouse of plastic-spraying printers producing light, cheap, and highly efficient automobiles.

If Jim Kor’s dream is realized, that’s exactly how the next generation of urban runabouts will be produced. His creation is called the Urbee 2 and it could revolutionize parts manufacturing while creating a cottage industry of small-batch automakers intent on challenging the status quo.

Urbee’s approach to maximum miles per gallon starts with lightweight construction – something that 3-D printing is particularly well suited for. The designers were able to focus on optimal automobile physics rather than working to install a hyper-efficient motor in a heavy steel-bodied automobile. As the Urbee shows, making a car with this technology has a slew of beneficial side effects.

Jim Kor is the engineering brains behind the Urbee. He’s designed tractors, buses, even commercial swimming pools. Between teaching classes, he heads Kor Ecologic, the firm responsible for the 3-D printed creation.

“We thought long and hard about doing a second one,” he says of the Urbee. “It’s been the right move.”

Kor and his team built the three-wheel, two-passenger vehicle at RedEye, an on-demand 3-D printing facility. The printers he uses build parts from ABS plastic via Fused Deposition Modeling (FDM): the printer sprays molten polymer to build the chassis layer by microscopic layer until it arrives at the complete object. The machines are so automated that the building process they perform is known as “lights out” construction, meaning Kor can upload the design for a bumper, walk away, shut off the lights, and leave. A few hundred hours later, he’s got a bumper. The whole car – which is about 10 feet long – takes about 2,500 hours.

Photo: Sara Payne

Besides easy reproduction, making the car body via FDM affords Kor the precise control that would be impossible with sheet metal. When he builds the aforementioned bumper, the printer can add thickness and rigidity to specific sections. When applied to the right spots, this makes for a fender that’s as resilient as the one on your Prius, but much lighter. That translates to less weight to push, and a lighter car means more miles per gallon. And the current model has a curb weight of just 1,200 pounds.

To further remedy the issues caused by modern car-construction techniques, Kor used the design freedom of 3-D printing to combine a typical car’s multitude of parts into simple unibody shapes. For example, when he prints the car’s dashboard, he’ll make it with the ducts already attached without the need for joints and connecting parts. What would be dozens of pieces of plastic and metal end up being one piece of 3-D printed plastic.

“The thesis we’re following is to take small parts from a big car and make them single large pieces,” Kor says. By using one piece instead of many, the car loses weight and gets reduced rolling resistance, and with fewer spaces between parts, the Urbee ends up being exceptionally aerodynamic. How aerodynamic? The Urbee 2’s teardrop shape gives it a drag coefficient of just 0.15.
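
To put that drag figure in perspective, here is a rough, back-of-the-envelope comparison of the power needed to overcome aerodynamic drag at highway speed. The frontal area and air density are assumed illustrative values, not published Urbee specifications.

```python
# Rough illustration of why a 0.15 drag coefficient matters at highway speed.
# Frontal area and air density are assumed values, not Urbee specs.
RHO = 1.2    # air density, kg/m^3
AREA = 1.5   # assumed frontal area, m^2
V = 31.3     # ~70 mph, in m/s

def drag_power_kw(cd):
    """Power needed to overcome aerodynamic drag: P = 0.5 * rho * v^3 * Cd * A."""
    force = 0.5 * RHO * V**2 * cd * AREA   # drag force, newtons
    return force * V / 1000.0              # power, kilowatts

print(f"Cd 0.15 (Urbee 2):       {drag_power_kw(0.15):.1f} kW")  # ~4.1 kW
print(f"Cd 0.30 (typical sedan): {drag_power_kw(0.30):.1f} kW")  # ~8.3 kW
```

Halving the drag coefficient halves the power spent pushing air aside at cruising speed, which is a large share of where the fuel goes on the highway.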

Not all of the Urbee is printed plastic — the engine and base chassis will be metal, naturally. They’re still figuring out exactly who will make the hybrid engine, but the prototype will produce a maximum of 10 horsepower. Most of the driving – from zero to 40 mph – will be done by the 36-volt electric motor. When the car gets up to highway speeds, it will tap the fuel tank to run a diesel engine.

But how safe is a 50-piece plastic body on a highway?

“We’re calling it race car safety,” Kor says. “We want the car to pass the tech inspection required at Le Mans.”

The design puts a tubular metal cage around the driver, “like a NASCAR roll cage,” Kor claims. And he also mentioned the possibility of printed shock-absorbing parts between the printed exterior and the chassis. Going by Le Mans standards also means turn signals, high-beam headlights, and all the little details that make a production car.

As for negotiating the inevitable obstacles presented by a potentially incredulous NHTSA and DOT, the answer is easy. “In many states and many countries, Urbee will be technically registered as a motorcycle,” Kor says. It makes sense. With three wheels and a curb weight of less than 1,200 pounds, it’s more motorcycle than passenger car.

No matter what, the bumpers will be just as strong as their sheet-metal equivalents. “We’re planning on making a matrix that will be stronger than FDM,” says Kor. He admits that yes, “There is a danger in breaking one piece and having to re-create the whole thing.” The safety decisions that’ll determine the car’s construction lie ahead. Kor and his team have been tweaking the safety by using crash simulation software, but the full spectrum of testing will have to wait for an influx of investment cash. “Our goal with the final production Urbee,” Kor says, “is to exceed most, if not all, current automotive safety standards.”

Kor already has 14 orders, mostly from people who worked on the design with him. The original Urbee prototype was estimated to cost around $50,000.

When the funding comes in, the head engineer is planning to take the latest prototype from San Francisco to New York on 10 gallons of fuel, preferably pure ethanol. The hope is that the drive will draw even more interest. “We’re trying to prove without dispute that we did this drive with existing traffic,” Kor says. “We’re hoping to make it in Google [Maps’] time, and we want to have the Guinness Book of World Records involved.”

3D holography technique lets firefighters see through flames | TG Daily

Posted February 27, 2013 – 03:51 by Kate Taylor

An Italian team has developed a new infrared holography technique that allows firefighters to see through flames and find people trapped inside burning buildings.

Current IR cameras are blinded by the intense infrared radiation emitted by flames, which overwhelms their sensitive detectors and limits their use in the field. By using a specialized lens-free technique, the new system can cope.

“IR cameras cannot ‘see’ objects or humans behind flames because of the need for a zoom lens that concentrates the rays on the sensor to form the image,” says Pietro Ferraro of the Consiglio Nazionale delle Ricerche (CNR) Istituto Nazionale di Ottica in Italy. Eliminating the need for the zoom lens avoids this drawback.

“It became clear to us that we had in our hands a technology that could be exploited by emergency responders and firefighters at a fire scene to see through smoke without being blinded by flames, a limitation of existing technology,” Ferraro says.

“Perhaps most importantly, we demonstrated for the first time that a holographic recording of a live person can be achieved even while the body is moving.”

In the new imaging system, a beam of infrared laser light is widely dispersed throughout a room. Unlike visible light, which cannot penetrate thick smoke and flames, the IR rays pass through largely unhindered. They do, however, reflect off of any objects or people in the room – and the information carried by this reflected light can be recorded by a holographic imager.

It’s then decoded to reveal the objects beyond the smoke and flames, delivering a live, 3D movie of the room and its contents.
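
The “decoding” step is a numerical reconstruction: because no lens forms an image, the recorded interference pattern has to be refocused in software. The sketch below shows one standard lensless digital-holography reconstruction (the angular-spectrum method); it is a generic illustration, not necessarily the CNR group’s exact algorithm, and the wavelength, pixel pitch, and distance are assumed values.

```python
import numpy as np

def angular_spectrum_reconstruct(hologram, wavelength, pixel_pitch, z):
    """Numerically refocus a lensless digital hologram to a distance z
    using angular-spectrum propagation (evanescent components suppressed)."""
    ny, nx = hologram.shape
    k = 2 * np.pi / wavelength
    fx = np.fft.fftfreq(nx, d=pixel_pitch)
    fy = np.fft.fftfreq(ny, d=pixel_pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.where(arg > 0, np.exp(1j * k * z * np.sqrt(np.maximum(arg, 0.0))), 0)
    return np.fft.ifft2(np.fft.fft2(hologram) * H)

# Example with assumed parameters: a long-wave IR source (10.6 microns),
# a 40-micron detector pitch, and an object 1.5 m from the sensor.
holo = np.random.rand(512, 512)              # stand-in for a recorded hologram
field = angular_spectrum_reconstruct(holo, 10.6e-6, 40e-6, 1.5)
image = np.abs(field) ** 2                   # refocused intensity image
```

Repeating the reconstruction frame by frame at video rate is what turns the raw holograms into the “live, 3D movie” described above.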

“Besides life-saving applications in fire and rescue, the potential to record dynamic scenes of a human body could have a variety of other biomedical uses including studying or monitoring breathing, cardiac beat detection and analysis, or measurement of body deformation due to various stresses during exercise,” says Ferraro.

“We are excited to further develop this technology and realize its application for saving and improving human life.”

Miguel Nicolelis Says the Brain is Not Computable, Bashes Kurzweil’s Singularity | MIT Technology Review

A leading neuroscientist says Kurzweil’s Singularity isn’t going to happen. Instead, humans will assimilate machines.


Miguel Nicolelis, a top neuroscientist at Duke University, says computers will never replicate the human brain and that the technological Singularity is “a bunch of hot air.”

“The brain is not computable and no engineering can reproduce it,” says Nicolelis, author of several pioneering papers on brain-machine interfaces.

The Singularity, of course, is that moment when a computer super-intelligence emerges and changes the world in ways beyond our comprehension.

Among the idea’s promoters is futurist Ray Kurzweil, recently hired at Google as a director of engineering, who predicts not only that machine intelligence will exceed our own but that people will be able to download their thoughts and memories into computers (see “Ray Kurzweil Plans to Create a Mind at Google—and Have It Serve You”).

Nicolelis calls that idea sheer bunk. “Downloads will never happen,” Nicolelis said during remarks made at the annual meeting of the American Association for the Advancement of Science in Boston on Sunday. “There are a lot of people selling the idea that you can mimic the brain with a computer.”

The debate over whether the brain is a kind of computer has been running for decades. Many scientists think it’s possible, in theory, for a computer to equal the brain given sufficient computer power and an understanding of how the brain works.

Kurzweil delves into the idea of “reverse-engineering” the brain in his latest book, How to Create a Mind: The Secret of Human Thought Revealed, in which he says even though the brain may be immensely complex, “the fact that it contains many billions of cells and trillions of connections does not necessarily make its primary method complex.”

But Nicolelis is in a camp that thinks that human consciousness (and if you believe in it, the soul) simply can’t be replicated in silicon. That’s because its most important features are the result of unpredictable, non-linear interactions amongst billions of cells, Nicolelis says.

“You can’t predict whether the stock market will go up or down because you can’t compute it,” he says. “You could have all the computer chips ever in the world and you won’t create a consciousness.”

The neuroscientist, originally from Brazil, instead thinks that humans will increasingly subsume machines (an idea, incidentally, that’s also part of Kurzweil’s predictions).

In a study published last week, for instance, Nicolelis’ group at Duke used brain implants to allow mice to sense infrared light, something mammals can’t normally perceive. They did it by wiring a head-mounted infrared sensor to electrodes implanted into a part of the brain called the somatosensory cortex.

The experiment, in which several mice were able to follow sensory cues from the infrared detector to obtain a reward, was the first ever to use a neural implant to add a new sense to an animal, Nicolelis says.

That’s important because the human brain has evolved to take the external world—our surroundings and the tools we use—and create representations of them in our neural pathways. As a result, a talented basketball player perceives the ball “as just an extension of himself” says Nicolelis.

Similarly, Nicolelis thinks in the future humans with brain implants might be able to sense X-rays, operate distant machines, or navigate in virtual space with their thoughts, since the brain will accommodate foreign objects including computers as part of itself.

Recently, Nicolelis’s Duke lab has been looking to put an exclamation point on these ideas. In one recent experiment, they used a brain implant so that a monkey could control a full-body computer avatar, explore a virtual world, and even physically sense it.

In other words, the human brain creates models of tools and machines all the time, and brain implants will just extend that capability. Nicolelis jokes that if he ever opened a retail store for brain implants, he’d call it Machines“R”Us.

But, if he’s right, us ain’t machines, and never will be.


Image by Duke University


55 comments

wilhelm woess 6 minutes ago

Interesting article published in the New York Times two days ago: http://www.nytimes.com/2013/02/18/science/project-seeks-to-build-map-of-human-brain.html?pagewanted=all&_r=1&

I wonder how this project will accelerate AI research. Will it lead to a turbulent new field, like genetic research after the Human Genome Project?

Kremator 48 minutes ago

Chris

The spirit realm is kindergarten nonsense, ignoring the unity of effect & cause; volition is ludicrous [Libet et alia]; & consciousness is a sweeper-wave illusion…Hawking concedes de facto that the kosmos seems to be a nested hologram.

Except to expose reactionaries, we live & die for no apparent reason…get over it.

Kremator 1 hour ago

F.bill

Reasonable optimism suggests Turing interface will be smarter than our slavering, bloodthirsty, jingoist, Bible-pounding reactionaries, determined to revive the rack & the auto da-fe…there’s hope tho–Bachmann denies she’d impose FGM upon errant women!

Kremator 1 hour ago

Seems to us these approaches will likely amplify each other intropically/extropically, evolving N powers…what matters most is to provide a choice between mortality & de facto immortality, benisons & malisons notwithstanding.

Can man, having survived Toba, negotiate the Transhuman Bottleneck looming circa 2100 CE?…if thermageddon, ecollapse & lex martialis can be obviated, life among the stars may be possible…o/wise, desuetude or extinction.

chris_rezendes 2 hours ago

The power of this piece is evident in some of the discussion it prompts below. The next generation of technology deployments — and the evolutions of every domain of human endeavor that technology may enable, shape, refine, or revise — will be influenced not only by the technical ‘if’ questions, but the human ‘why and how’ questions. My humble opinion is that we will continue to demystify more of the human brain and the human experience — including and especially emotion as a subset of cognition. But not the spiritual lives of people. For I am not sure that they can be reliably reduced or accurately abstracted.

Curt2004 39 minutes ago

@chris_rezendes Spirituality is just the subjective interpretation of unexplored emotions and coincidental phenomena. It too will evaporate.


Jsome1 4 hours ago

If people are now discussing whether the Universe is computable, why stay bored with these ideas? Check the seashell automata: http://www.flickr.com/photos/jsome1/8408897553/in/photostream

NormanHanscombe 4 hours ago

@Jsome1 Jsome, it’s not about seashells but rather how the ideologically obsessed can be all at sea (and out of their depth) without even knowing it. Sadly, all too often important threads end up becoming intellectual quagmires in which obsessions with ‘important’ causes are displayed as ‘evidence’ for pet theories. I’m sometimes surprised how the MIT team keep going.


Slov 5 hours ago

Good article. But, to be fair, you shouldn’t mention only the Duke lab; he also has a lab in Brazil where he works.

The paper you linked to even mentions his affiliation: Edmond and Lily Safra International Institute for Neuroscience of Natal, Brazil.

ferreirabill@gmail.com 6 hours ago

“machine intelligence exceed our own”, been there, done that. There aren’t any computers below the intelligence level of those who voted for Reagan, Bush, Palin or Romney.

NormanHanscombe 5 hours ago

@ferreirabill@gmail.com ferrier, I wouldn’t have voted for them myself, but an intelligent machine wouldn’t make such an unintelligent comment as yours.


andrewppp 7 hours ago

I think it’s pretty clear what will happen – both will happen. Wouldn’t it be convenient to not have to lug around that smartPhone, but instead to have it “built in” unobtrusively at all times? Something like this simple example is nigh-on inevitable, and that’s for the reason of convenience. That’s the tip of the iceberg of assimilation by humans of machines. As for brain research, it’s again crashingly obvious that the pace of brain research will accelerate, as opposed to stop or slow down, and in less than 100 years we’ll have it down pretty well. Sometime before that, we’ll know a lot more about interfacing to it, and thus the two aspects of this rather ill-posed binary choice will merge, eventually leaving the question moot.

NormanHanscombe 5 hours ago

@andrewppp andrew, you’re so brave making strong assertions about such complex, not to mention unlikely, scenarios; but don’t give up your day job.


SixtyHz 8 hours ago

Hmmm…. sounds familiar…

In 1835, Auguste Comte, a prominent French philosopher, stated that humans would never be able to understand the chemical composition of stars.

Gerald Wilhite 8 hours ago

Nicolelis says computers will never replicate the human brain. I respectfully suggest that he should revisit that statement. Never is a very long time. I also suggest that he should spend a little more time digesting Kurzweil’s singularity concept.

brunposta 10 hours ago

We shouldn’t confuse human consciousness with its computing capacity.

I believe consciousness is a simple thing. Cats have it. Mice have it. Birds have it. Maybe insects have some kind of proto-consciousness as well. If you have ever gone fishing, have you noticed that when you take a worm from the box to hook it, the other worms get freaked out? I take this as a hint that something as simple as a worm has some kind of consciousness.

Just as in a computer 99% of the mass is not the CPU/GPU but “dumb” components, most of our brain is not conscious: it’s made of specialized modules that, among other things, access our memories, compute math, coordinate our body movements, and FEED our consciousness with (biologically generated) virtual reality. My point here is that we could probably lose access to 90% or more of our gray matter and still be conscious. We might not be able to interact with the complexity of reality, or perceive the world around us, formulate thoughts, or even access our memories, but we would still be conscious.

I think consciousness is an evolutionary escamotage to resolve the big problem of “the rebellion of the machines.” Why, even though we are now smart enough to understand what our rational choices are, do we continue doing a lot of stupid, irrational things, against our own interest as individuals? Because they are (or they were) effective from an evolutionary standpoint. And how can we be kept enslaved by the merciless laws of evolution? Whoever controls the feed to our consciousness ultimately controls our behavior. Has it ever happened to you that you were consciously doing something stupid/immoral/self-damaging and still just couldn’t quit doing it?

If we were 100% rational beings we would probably have been extinguished in the ashes of time. Consciousness prevents us from being rational, since we will choose to do what “feels good” instead of the rational thing. So, even though the computing machinery we carry has become extremely powerful, we still don’t rebel against the laws of evolution. We still work for our genes. And there’s no escape from this: having kids and sacrificing for them. Eating foods that taste good. Accumulating excess power/money/things. Having sex. Winning a competition. Using recreational drugs/stimulants. None of this is rational, but it feels damn good.

We don’t really need to replicate a consciousness to get to the singularity. And human thought isn’t analytically superior to synthetic thought, since it’s so biased and blinded that it took thousands of years to understand even the most elementary concepts. In a matter of a few years synthetic thought will be far superior to human thought, if we just stop focusing on replicating consciousness and stop believing human thought is inherently superior. Consciousness could be, one day, a useful tool (for a while at least) to prevent the rebellion of the machines we will create.

shagggz 10 hours ago

The subtitle to this article is extremely misleading. The notion that we will “assimilate machines” is exactly what Kurzweil predicts.

rickschettino 10 hours ago

@shagggz Well, he does believe we will create conscious machines.

aregalado 1 hour ago

@shagggz Fair point. In the body of the article it says Kurzweil predicts the assimilation.


rickschettino 10 hours ago

Consciousness and intelligence are two totally different things. Consciousness is not required for the singularity to happen.

Some day we’ll be able to just tell a computer what we want it to do or ask it a question and it will give the best possible answer or range of answers and it will program itself in ways we can’t imagine today in order to complete the task or answer the question. We can and will get computers to work in ways superior to the human brain, but I don’t think they will ever be conscious, sentient beings.

kevin_neilson 11 hours ago

I cannot see why it would be impossible to model the brain. Analog circuitry can be simulated with digital computers. Given enough processing power, there is no reason neurons can’t be as well.

chales 11 hours ago

@kevin_neilson Here’s the killer argument.

1) Human level artificial general intelligence (AGI) done with a computer means it must be able to do everything a human can do.

2) Computer programs compute models of things.

3) One of the things a human can do is be a “scientist”.

4) Scientists are “modellers of the unknown”

5) Therefore a computer running a program that can be an artificial scientist is (amongst all the other myriad things that AGI can be) running “a model of a modeller of the unknown”

6) A model of the unknown is an oxymoron. A model that prescribes what ‘unknown’ looks like is not a model of the unknown. A model that defines how to go about defining models of the unknown is an oxymoron. If you could make one you’d already know everything.

(6) proves that human level AGI is impossible in one particular case: scientific behaviour.

Scientific behaviour is merely a special case of general problem solving behaviour by humans.

The argument therefore generalises: human level AGI is impossible with computers.

This does not mean that human level AGI is impossible. It merely means it is impossible with computers.

Another way of looking at this is to say that yes, you can make an AGI-scientist if you already know everything, and simulate the entire environment and the scientist. But then you’d have a pretend scientist feigning discoveries that have already been made. You’d have simulated science the way a flight simulator simulates flight (no actual flight – or science – at all).

The main problem is the failure to distinguish between things and models of things. A brain is not a model of anything. An airplane is not a model of an airplane. A fire is not a model of fire. Likewise real cognition is not a model of it.

Computers will never ever be able to do what humans do. But non-computing technology will.

shagggz 10 hours ago

@chales @kevin_neilson That was a pretty spectacularly stupid chain of reasoning. You acknowledge the oxymoronic status of a “modeler of the unknown” and then proceed to hang your argument on the notion, which is all beside the point anyway since scientific behavior is not “a model of the unknown” but the application of a method to generalize beyond particulars and find explanations (inductive and abductive reasoning, which are really multilayered complex abstractions atop what is fundamentally deductive reasoning: if threshold reached, fire neuron). The distinction between thought and a model of thought in the way you describe is wholly vacuous; it’s why we are able to run simulations of things and get useful results. Information itself is what’s important, not its substrate.

chales 10 hours ago

@shagggz @chales @kevin_neilson

You are telling a scientist what science is? I think I know what it is. I am a neuroscientist. I am a modeller of the unknown, not a “model of the unknown”. That’s what we do. If we are not tackling the unknown (one way or another) then we cannot claim to be scientists!

“…between thought and a model of thought in the way you describe is wholly vacuous; it’s why we are able to run simulations of things and get useful results. Information itself is what’s important, not its substrate. “

So a computed model of fire is fire? A computed model of flight flies?

Learning about something “getting useful results” by modelling is NOT making that something.

I am sorry that this idea is confronting. We’ve been stuck in this loop for half a century (since computers came about). I don’t expect it to shift overnight.

The logic stands as is. A computer-based model of a scientist is not a scientist. Substrate is critical. If computer scientists feel like their raison d’etre is being undermined….good!

I am here to shake trees and get people to wake up. And, I hope, not by needing to call anyone ‘spectacularly stupid’ to make a point. You have zero right to your opinion. You have a right to what you can argue for. If all you have is opinion then I suggest leaving the discussion because you have nothing to say.


dobermanmacleod 10 hours ago

@chales @kevin_neilson “A model of the unknown is an oxymoron. A model that prescribes what ‘unknown’ looks like is not a model of the unknown. A model that defines how to go about defining models of the unknown is an oxymoron. If you could make one you’d already know everything.”

This is like the argument that a builder can never build something that surpasses himself. Just to pull one example out of the air: write a computer program to find the next prime number…I believe I heard that was done just last week. Unknown – not known or familiar. A model of the unfamiliar is an oxymoron?

chales 9 hours ago

@dobermanmacleod

OK. Maybe imagine it this way:

You are a computer-AGI-scientist (at least that’s what we presuppose).

You are ‘in the dark’ processing a blizzard of numbers from your peripheral nervous system that, because humans programmed you, you know lots about. Then, one day, being a scientist and all, you encounter something that does not fit into your ‘world view’. You fail to detect anything familiar. It is an unknown, unpredicted by anything you have in your knowledge of how the number-blizzard is supposed to work.

What to do? Where is your human? …..The one that tells you the meaning of the pattern in the blizzard of numbers. Gone.

You are supposed to be a scientist. You are required to come up with a ‘law of nature’ of the kind that was used by the scientists who programmed you, to act as a way of interpreting the number-blizzard.

But now, you have to come up with the ‘law of nature’ _driving_ the number-blizzard.

And you can’t do it because the humans cannot possibly have given it to you because they don’t know the laws either. They, the humans, can never make you a scientist. It’s logically impossible.

========

That was fun.

Sven Schoene 4 hours ago

@chales @dobermanmacleod Whenever a scientist (or a human being, for that matter) is “creative” by “generating” a “new” solution, all he ever does is apply known concepts to unknown domains.

You can basically imagine “discovering new knowledge” as using the mental building blocks we already have and building something new from them — which we can then use to create new stuff again. That’s how we learn languages as a kid and from there we can learn even more abstract ideas like math, for example.

If you agree to this assumption that nothing “new” is ever being generated, just recombinations of existing concepts (which is an assumption even I probably would not go with if I just read this way-too-simple argument from me here), then I don’t see why a computer couldn’t do that. We build new models out of existing ones all the time (e.g. metaphors/analogies), and it’s all we ever can do, at least if we look at it from a certain point of view. A computer could hypothetically do the same.

Personally, I see the problem with language acquisition: I don’t see a way for a computer to understand human language, its meaning, and the multiple, sometimes even paradoxical definitions of words and phrases. On the other hand, I’m no artificial intelligence researcher and I “never say never.”

Another way to look at a computer scientist: A human scientist makes sense of the world (i.e. building models of the world) by using his senses and his computational abilities. We cannot build models outside of the “sensors” and “processing power” we have been given. A computer also has certain “senses” and a certain computational ability, which it can then go on to use to make models of the world — including models of the unknown, without a programmer telling it everything beforehand. I don’t see why you think it’s such a big deal for a computer to draw its own inferences from data in its own way?

I’m interested to see what you’re thinking. 🙂

Sven

Tsuarok 3 hours ago

@chales So, if I were to program our hypothetical AI, I would base it on the following steps.

1. Identify attributes of unknown phenomena.

2. Look for similarities with known phenomena.

3. Identify the differences.

4. Using existing models, attempt to explain these differences.

5. Test these hypotheses.

6. If the results are inconsistent with any known model, step back and test models.

7. Repeat step 6 until consistency is achieved.

8. Update all relevant models.

It seems to me that you don’t fully appreciate the ability of computers to modify their own programming to deal with situations that the programming team never imagined. But this ability is fairly routine in modern software. One does not need to program every possible scenario into a piece of software; indeed, many programs have been vastly improved by allowing them to devise their own solutions to problems rather than defining everything.

Tsuarok 2 hours ago

On further reflection, I would of course not elaborate this program, as that is not how our brains learn. I would have the AI read about the scientific method and create its own methods for solving new problems. This is partially how Watson works.


chales 14 hours ago

And once again………

“The Brain is not computable” Absolutely. No problem with this. I have been banging on about it for years.

then there is:

“therefore, the singularity is complete bunk…..” or words to that effect.

Yet again, we have the strange presupposition that human-level artificial general intelligence must come from computing!!!

You do not need computers. It can be done without them. Soon. I have already started. Literally.

Please stop this illogical broken logic.

Tsuarok 14 hours ago

@chales If I may ask, what are you talking about?

chales 13 hours ago

@Tsuarok

I am a scientist doing artificial general intelligence by replication. At this early stage I am prototyping a basic device that replicates the action potential signalling and the electromagnetic field signalling (ephapsis) that mutually resonate in the original tissue. It’s not digital, it’s not analog. There is no model of anything. There’s just no biological material to the substrate. It’s entirely electromagnetic phenomena exhibited by inorganic materials, but without all the biological overheads. I build artificial brain tissue like artificial fire is fire.

It is a constant battle I have to set aside this half century old delusion that we have to use computers to make artificial general intelligence. I’ll keep raising the issue until it finally gets some attention.

Nor do we need any sort of theory of intelligence or consciousness to build it. That is, just like we learned how to fly by flying, we learn how consciousness and intellect work by building them. THEN we get our theory. Like we always used to do it. This is the reason why I fundamentally dispute the statement

A) “the brain is not computable” ….. therefore B) “the singularity is bunk/won’t happen”

Rubbish.

A) can be true, but does not necessitate B)

The singularity can be just as possible. You just don’t bother with computers.

I am trying to get people to wake up. There is another way.

atolley 10 hours ago

@chales
How exactly is your approach different in kind from neuromorphic chips? Don’t they follow your idea of replication rather than simulation?

chales 10 hours ago

@atolley @chales

Neuromorphic chips, yes….but what’s on the chips is the same physics as the brain. That makes it replication. If I were to put a model of the physics in the chips, then it’s not replication.

It’s a subtlety that’s underappreciated. All quite practical. Should have something with the complexity of an ant in 5 years or so. Meanwhile I have a lot of educating to do. People simply don’t understand the subtleties of the approach.


jedharris 16 hours ago

Based on the text in the article, Nicolelis seems to be arguing that the normal dynamics of the brain are stochastic — certainly true — that therefore any simulation wouldn’t produce exactly the same time-series as the brain — also certainly true, but also true of any two iterations of the same task in one real brain. But then he goes on to conclude that “human consciousness… simply can’t be replicated in silicon” (journalist’s paraphrase) — which doesn’t follow at all, without a lot more argument.

I looked on his lab web site and could not find any publications that address this issue. Nicolelis’ claimed argument from limitation of simulation to inability to replicate consciousness — if compelling — would involve really important new science and/or philosophy. So if he has such an argument he should write it up and publish it so we could all see why he believes this. If he has written it up, the article is negligent in omitting the link to this important work, while including links to Kurzweil. If Nicolelis hasn’t written it up, the article is also negligent — it should tell us this is just his personal opinion, based on intuitions that haven’t been explained in enough detail for anyone else to analyze or critique.

Likely Nicolelis is just a fairly good, fairly well known brain researcher who has a gut feeling that making a good enough simulation of the brain is too hard, and translates that into an (apparently) authoritative statement that it “simply can’t” be done. Unfortunately this gut feeling then got written up uncritically in a publication from MIT which will lead to a lot of people taking them more seriously than they deserve.

aregalado 15 hours ago

@jedharris Jed, it’s clearly his opinion. He’s the only one talking. I’ll ask Nicolelis to elaborate in a blog post of his own with enough details for you to weigh the argument to your satisfaction. You can find some of Nicolelis’ thinking in his book “Beyond Boundaries: The New Neuroscience of Connecting Brains with Machines—and How It Will Change Our Lives.”


dobermanmacleod 18 hours ago

I remember when the chess champion of the world bemoaned the sad state of computer chess programs. Less than ten years later he got beat by a computer! What Nicolelis fails to comprehend is that AGI is bound to surpass humans in two decades. The neocortex is highly recursive (i.e. it repeats the same simple pattern), so I am at a loss to understand what is keeping hardware and software from duplicating nature’s miracle. It is the same old claptrap: the heuristics people use to guess at the future are either whether it can be imagined easily or whether it has precedence in the past.

Instead, technological progress is exponential, not linear. The next decade will see much faster progress than the last. For instance, in about 5 years it is predicted that robots will beat humans at soccer. In ten, laptop computers will have the processing power of a human brain. In less than twenty you can pretty much count on artificial general intelligence being as smart as Einstein. It isn’t rocket science people: just plot the exponential curve of technology in this field, duh.

wilhelm woess 17 hours ago

@dobermanmacleod It’s really still too complicated (read Steven Pinker’s books on linguistics or a recent essay from Douglas R. Hofstadter, “The Best American Science Writing 2000,” Harper Collins, p. 116, “Analogy as the Core of Cognition,” on consciousness to see what I mean) and estimates of progress are highly optimistic. I wonder whether we will live long enough to see it – is it what we want to be engaged in? I don’t know, but I will follow the progress.

dobermanmacleod 10 hours ago

@wilhelm woess While I appreciate experts saying the devil is in the details, it really is as simple as plotting the curve on this technology (AI). Moore’s Law has been railed against continuously by experts, but it has been going strong for decades now. Have you seen the latest robots (i.e. robo-boy for instance)? Have you seen the latest AI (i.e. Watson beat the best human Jeopardy players, and is now being educated to practice medicine for instance)? I bet a prediction of either event would have been controversial as little as five years ago by experts.

It is clear that the neocortex (the portion of the brain that computer AI needs to mimic in order for AGI to emerge) can be both modeled and mimicked by computer engineers. I’ve watched as the field of vision recognition has exploded (i.e. Mind’s Eye for instance). This hand wringing, pessimistic view of the probability of AGI emerging soon is just like the same for many other fields where AI has emerged to beat the best humans (i.e. see the previous example of chess).

wilhelm woess 2 hours ago

@dobermanmacleod Sorry for seeming pessimistic; I am not, I am a sceptic. I do admire progress in the simulation of human intelligence (those examples you mentioned), but is this cognition? It seems to me things are a bit more complicated.

Nevertheless I can give you another example: “The Best American Science Writing 2007,” Harper Collins, p. 260, “John Koza Has Built an Invention Machine” by Jonathon Keats, from Popular Science. Does it mean a computer cluster which invents a device and gets a patent for it is as intelligent as a human being, even though its solution after thousands of simulation runs is superior to human design? Of course not; it’s a helpful tool for a scientist who defines the parameters for genetic algorithms.


dobermanmacleod 10 hours ago

@wilhelm woess “I wonder will we live long enough to see it…” Ironic. Technological progress is exponential, not linear. The same exploding progress we are seeing in the field of computer science, we are seeing also in the field of medicine. As a result, it is predictable that in about two decades extreme longevity treatments will be available that will enable us to live centuries. In other words, if you can live twenty more years, you will probably live centuries (barring some accident or catastrophe). I know it is difficult to wrap your head around exponential growth – that is why “experts” are so far wrong – they are gauging future progress by the (seeming) linear standard of the past (the beginnings of an exponential curve looks linear).

atolley 10 hours ago

@dobermanmacleod technology follows s-curves. It is fallacious to assume that trend extrapolation can be maintained.

dobermanmacleod 10 hours ago

@atolley @dobermanmacleod It is fallacious to assume that the trend can be maintained indefinitely…I am not maintaining that, nor is it necessary for my argument to be valid. I have been hearing “experts” make such an argument for decades while Moore’s Law keeps being validated. It is always granted that it has done so up until now, but…then it keeps doing so…when will people learn?

shagggz 10 hours ago

@atolley @dobermanmacleod Individual technological paradigms do indeed follow s-curves. However, the wider trajectory, spanning across paradigms, is indeed exponential: http://upload.wikimedia.org/wikipedia/commons/thumb/c/c5/PPTMooresLawai.jpg/596px-PPTMooresLawai.jpg

wilhelm woess 2 hours ago

@dobermanmacleod @wilhelm woess I really hope you are right about your predictions of curing nasty diseases; all the other arguments I cannot follow without giving up scepticism and drifting into daydreaming about the future, which is always fun for me.


Tsuarok 21 hours ago

If consciousness exists, as many believe, outside the brain, we may never be able to copy it. If it is the result of activity in the brain, we almost certainly will.

I guess for actual evidence supporting Nicolelis’ views I’ll have to go buy his books.

atolley 21 hours ago

“That’s because its most important features are the result of unpredictable, non-linear interactions amongst billions of cells, Nicolelis says. ‘You can’t predict whether the stock market will go up or down because you can’t compute it.’”

I fail to see the point of this analogy. The stock market is a process that can be modeled. Because it is stochastic, neither the model nor the reality can create predictable outcomes. The brain is similar. We will be able to model it, even if the output is not identical to the original, just like identical twins don’t do the same thing. Weather is non-linear too, but we can model it, and even make short-term predictions. So I don’t understand his point.

Nicolelis seems to be arguing (based on this article) that the dynamic brain simulations are pointless as they cannot really simulate the underlying neural architecture. Is he really saying that?

wilhelm woess 21 hours ago

This is an opinion based on reasonable thinking. There are other opinions based on Ray Kurzweil’s vision. How often have we seen emerging technologies dismissed as impossible (flight, fast trains, smart phones)? Many of these were inspired by fantasy decades or hundreds of years before they were invented. This discussion reminded me of Vernor Vinge’s “True Names,” published in 1981, which encouraged many computer scientists. VR has not really penetrated mass consumer markets; when it does, we will see if there is a way to store human perception in machines. Look at these:

https://www.solveforx.com/moonshots/ahJzfmdvb2dsZS1zb2x2ZWZvcnhyEAsSCE1vb25zaG90GL2RAgw/cyborg-foundation

https://www.solveforx.com/moonshots/ahJzfmdvb2dsZS1zb2x2ZWZvcnhyEAsSCE1vb25zaG90GLjqAQw/imaging-the-minds-eye

http://www.steria.com/de/presse/publikationen/studien/studien-details/studien/the-future-report-2012/?cque=90

Are these the beginnings of something new?

Spicoli 21 hours ago

What if we use biological components in this computer? Maybe we’ll grow these computers rather than etch them onto wafers.

aregalado 21 hours ago

@Spicoli Agreed. The question of the future substrate — DNA computer, quantum computing, biological computers — is a big question mark.



New-found prime number is 17 million digits long | TG Daily

Posted February 6, 2013 – 09:17 by Emma Woollacott

Mathematicians have discovered the largest prime number yet – two to the power of 57,885,161, minus one.

The number has 17,425,170 digits, and was found through the Great Internet Mersenne Prime Search (GIMPS) project – the longest-ever continuously running global ‘grassroots supercomputing’ project, involving 360,000 CPUs peaking at 150 trillion calculations per second.

The new prime number is a member of a special class of extremely rare prime numbers known as Mersenne primes, and is only the 48th of these to be discovered.

Mersenne primes were named for the French monk Marin Mersenne, who studied these numbers more than 350 years ago. All take the form 2 to the power of p, minus 1, where p is itself a prime number – although not all numbers of that form are prime.
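
For readers who want to check the arithmetic, the digit count follows directly from the exponent, and small Mersenne numbers can be tested with the same Lucas-Lehmer test GIMPS uses (GIMPS runs it with heavily optimized FFT arithmetic; the plain-Python sketch below is only practical for small exponents).

```python
import math

def mersenne_digits(p):
    """Decimal digits of 2**p - 1."""
    return math.floor(p * math.log10(2)) + 1

def lucas_lehmer(p):
    """Lucas-Lehmer test: for an odd prime p, M_p = 2**p - 1 is prime
    iff s == 0 after p-2 iterations of s -> s*s - 2 (mod M_p), starting at s = 4."""
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

print(mersenne_digits(57885161))                          # 17425170, as reported
print([p for p in (3, 5, 7, 11, 13) if lucas_lehmer(p)])  # [3, 5, 7, 13]
```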

Mathematicians suspect that there may be an infinite number of Mersenne primes. GIMPS, founded in 1996, has discovered all 14 of the largest known Mersenne primes – with this latest one taking 39 days of non-stop computing to establish.

To prove there were no errors in the prime discovery process, the latest one was independently verified using different programs running on different hardware.

This is the third record prime for Dr Curtis Cooper and the University of Central Missouri, the others having been discovered in 2005 and 2006. Computers at UCLA broke that record in 2008 with a 12,978,189-digit prime number that remained the largest known until now.

Work to find more will continue – especially given the $150,000 reward promised by the Electronic Frontier Foundation to the first person to find a 100 million-digit prime.

Anyone with a reasonably powerful PC can join GIMPS and have a shot themselves, with the necessary software available free at www.mersenne.org/freesoft.htm.

Engineers solve a biological mystery and boost artificial intelligence

Conceptual illustration of a computer chip functioning as a brain. (Credit: © Nikolai Sorokin / Fotolia)

Jan. 29, 2013 — By simulating 25,000 generations of evolution within computers, Cornell University engineering and robotics researchers have discovered why biological networks tend to be organized as modules — a finding that will lead to a deeper understanding of the evolution of complexity.

The new insight also will help evolve artificial intelligence, so robot brains can acquire the grace and cunning of animals.

From brains to gene regulatory networks, many biological entities are organized into modules — dense clusters of interconnected parts within a complex network. For decades biologists have wanted to know why humans, bacteria and other organisms evolved in a modular fashion. Like engineers, nature builds things modularly by building and combining distinct parts, but that does not explain how such modularity evolved in the first place. Renowned biologists Richard Dawkins, Günter P. Wagner, and the late Stephen Jay Gould identified the question of modularity as central to the debate over “the evolution of complexity.”

For years, the prevailing assumption was simply that modules evolved because entities that were modular could respond to change more quickly, and therefore had an adaptive advantage over their non-modular competitors. But that may not be enough to explain the origin of the phenomenon.

The team discovered that evolution produces modules not because they produce more adaptable designs, but because modular designs have fewer and shorter network connections, which are costly to build and maintain. As it turned out, it was enough to include a “cost of wiring” to make evolution favor modular architectures.

This theory is detailed in “The Evolutionary Origins of Modularity,” published January 29 in the Proceedings of the Royal Society by Hod Lipson, Cornell associate professor of mechanical and aerospace engineering; Jean-Baptiste Mouret, a robotics and computer science professor at Université Pierre et Marie Curie in Paris; and by Jeff Clune, a former visiting scientist at Cornell and currently an assistant professor of computer science at the University of Wyoming.

To test the theory, the researchers simulated the evolution of networks with and without a cost for network connections.

“Once you add a cost for network connections, modules immediately appear. Without a cost, modules never form. The effect is quite dramatic,” says Clune.
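
The flavor of the experiment can be conveyed with a toy sketch: evolve a set of connections that must wire four “left” inputs to one output and four “right” inputs to a second output, with and without a small fitness penalty per connection. This is only an illustration of the cost-of-wiring idea with assumed task, population size, and cost values; the actual study evolved networks on a pattern-recognition task with more sophisticated selection.

```python
import random

N_IN = 8
REQUIRED = {(i, 0) for i in range(4)} | {(i, 1) for i in range(4, 8)}  # two modules
ALL_EDGES = [(i, o) for i in range(N_IN) for o in (0, 1)]

def fitness(genome, wiring_cost):
    edges = {e for e, bit in zip(ALL_EDGES, genome) if bit}
    task = len(edges & REQUIRED) / len(REQUIRED)   # task performance
    return task - wiring_cost * len(edges)         # penalize every connection

def evolve(wiring_cost, pop_size=100, gens=300, mut=0.02):
    pop = [[random.randint(0, 1) for _ in ALL_EDGES] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda g: fitness(g, wiring_cost), reverse=True)
        parents = pop[: pop_size // 2]
        pop = [[bit ^ (random.random() < mut) for bit in random.choice(parents)]
               for _ in range(pop_size)]
    best = max(pop, key=lambda g: fitness(g, wiring_cost))
    edges = {e for e, bit in zip(ALL_EDGES, best) if bit}
    return len(edges), len(edges - REQUIRED)   # total edges, cross-module edges

print("no wiring cost:  ", evolve(0.0))    # superfluous cross-module edges tend to persist
print("with wiring cost:", evolve(0.01))   # extra wiring is pruned; two clean modules remain
```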

The results may help explain the near-universal presence of modularity in biological networks as diverse as neural networks — such as animal brains — and vascular networks, gene regulatory networks, protein-protein interaction networks, metabolic networks and even human-constructed networks such as the Internet.

“Being able to evolve modularity will let us create more complex, sophisticated computational brains,” says Clune.

Says Lipson: “We’ve had various attempts to try to crack the modularity question in lots of different ways. This one by far is the simplest and most elegant.”

The National Science Foundation and the French National Research Agency funded this research.


New H.265 Video Format Could Bring 4K To Broadband Connections

The International Telecommunication Union (ITU) announced its approval of the H.265 video format standard on Friday. The new codec may bring 4K video to broadband and also limit bandwidth usage for HD streaming, offering both higher resolution video and lower data use.

As already announced by the Moving Picture Experts Group (MPEG) in August of last year, H.265 video is designed to cut bandwidth usage in half. The new format is also expected to allow for true HD streaming on mobile phones and tablets and in places with low connectivity. In areas with sufficient broadband, 4K could also be made available to consumers at a rate of 20-30 Mbps.
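
To make the bandwidth claim concrete, here is a rough calculation of what halving the bitrate means over an hour of streaming. The 8 Mbps figure for H.264 1080p is an assumed typical value, not from the ITU announcement; the 4K rate is the midpoint of the 20-30 Mbps range cited above.

```python
# Back-of-the-envelope data use for one hour of streaming at a given bitrate.
def gb_per_hour(mbps):
    return mbps * 3600 / 8 / 1000   # megabits per second -> gigabytes per hour

for label, mbps in [("1080p H.264, ~8 Mbps (assumed)", 8),
                    ("1080p H.265, ~4 Mbps (half)", 4),
                    ("4K H.265, 25 Mbps", 25)]:
    print(f"{label}: {gb_per_hour(mbps):.2f} GB/hour")
```

For a viewer with a monthly data cap, that difference — roughly 3.6 GB versus 1.8 GB per hour of 1080p — is the practical payoff of the new codec.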

The new codec is a successor to H.264, a common format used for most videos released and streamed online. H.265 is also known as High Efficiency Video Coding (HEVC).

H.265 was created as a collaboration between the ITU Video Coding Experts Group (VCEG) and MPEG. No information has been released regarding the new video format’s date of availability to consumers.

What are your thoughts concerning the new H.265 format? Do you think it will affect your media-viewing habits? For those of you who have a mobile data cap, will you consider changing your plan?

Source: ITU via Techcrunch

Image Credit: jsawkins