Fluid Experiments Support Deterministic “Pilot-Wave” Quantum Theory | Simons Foundation

A droplet bouncing on the surface of a liquid has been found to exhibit many quantum-like properties, including double-slit interference, tunneling and energy quantization. (Image courtesy of John Bush)

For nearly a century, “reality” has been a murky concept. The laws of quantum physics seem to suggest that particles spend much of their time in a ghostly state, lacking even basic properties such as a definite location and instead existing everywhere and nowhere at once. Only when a particle is measured does it suddenly materialize, appearing to pick its position as if by a roll of the dice.

This idea that nature is inherently probabilistic — that particles have no hard properties, only likelihoods, until they are observed — is directly implied by the standard equations of quantum mechanics. But now a set of surprising experiments with fluids has revived old skepticism about that worldview. The bizarre results are fueling interest in an almost forgotten version of quantum mechanics, one that never gave up the idea of a single, concrete reality.

The experiments involve an oil droplet that bounces along the surface of a liquid. The droplet gently sloshes the liquid with every bounce. At the same time, ripples from past bounces affect its course. The droplet’s interaction with its own ripples, which form what’s known as a pilot wave, causes it to exhibit behaviors previously thought to be peculiar to elementary particles — including behaviors seen as evidence that these particles are spread through space like waves, without any specific location, until they are measured.

Particles at the quantum scale seem to do things that human-scale objects do not do. They can tunnel through barriers, spontaneously arise or annihilate, and occupy discrete energy levels. This new body of research reveals that oil droplets, when guided by pilot waves, also exhibit these quantum-like features.

To some researchers, the experiments suggest that quantum objects are as definite as droplets, and that they too are guided by pilot waves — in this case, fluid-like undulations in space and time. These arguments have injected new life into a deterministic (as opposed to probabilistic) theory of the microscopic world first proposed, and rejected, at the birth of quantum mechanics.

“This is a classical system that exhibits behavior that people previously thought was exclusive to the quantum realm, and we can say why,” said John Bush, a professor of applied mathematics at the Massachusetts Institute of Technology who has led several recent bouncing-droplet experiments. “The more things we understand and can provide a physical rationale for, the more difficult it will be to defend the ‘quantum mechanics is magic’ perspective.”

Magical Measurements

The orthodox view of quantum mechanics, known as the “Copenhagen interpretation” after the home city of Danish physicist Niels Bohr, one of its architects, holds that particles play out all possible realities simultaneously. Each particle is represented by a “probability wave” weighting these various possibilities, and the wave collapses to a definite state only when the particle is measured. The equations of quantum mechanics do not address how a particle’s properties solidify at the moment of measurement, or how, at such moments, reality picks which form to take. But the calculations work. As Seth Lloyd, a quantum physicist at MIT, put it, “Quantum mechanics is just counterintuitive and we just have to suck it up.”

When light illuminates a pair of slits in a screen (top), the two overlapping wavefronts cooperate in some places and cancel out in between, producing an interference pattern. The pattern appears even when particles are shot toward the screen one by one (bottom), as if each particle passes through both slits at once, like a wave. (Bottom image: Akira Tonomura/Creative Commons)

A classic experiment in quantum mechanics that seems to demonstrate the probabilistic nature of reality involves a beam of particles (such as electrons) propelled one by one toward a pair of slits in a screen. When no one keeps track of each electron’s trajectory, it seems to pass through both slits simultaneously. In time, the electron beam creates a wavelike interference pattern of bright and dark stripes on the other side of the screen. But when a detector is placed in front of one of the slits, its measurement causes the particles to lose their wavelike omnipresence, collapse into definite states, and travel through one slit or the other. The interference pattern vanishes. The great 20th-century physicist Richard Feynman said that this double-slit experiment “has in it the heart of quantum mechanics,” and “is impossible, absolutely impossible, to explain in any classical way.”
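
For readers who want the arithmetic behind the stripes: for two narrow slits a distance d apart, illuminated by waves of wavelength λ, the textbook far-field intensity (standard optics, not specific to this article) is

$$ I(\theta) = 4 I_0 \cos^2\!\left(\frac{\pi d \sin\theta}{\lambda}\right), $$

where I_0 is the intensity a single slit alone would produce at angle θ. Bright fringes fall where d sin θ = mλ for integer m, with dark fringes in between. Detecting which slit each particle used washes out the cos² modulation, leaving just the sum of two single-slit patterns.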

Some physicists now disagree. “Quantum mechanics is very successful; nobody’s claiming that it’s wrong,” said Paul Milewski, a professor of mathematics at the University of Bath in England who has devised computer models of bouncing-droplet dynamics. “What we believe is that there may be, in fact, some more fundamental reason why [quantum mechanics] looks the way it does.”

Riding Waves

The idea that pilot waves might explain the peculiarities of particles dates back to the early days of quantum mechanics. The French physicist Louis de Broglie presented the earliest version of pilot-wave theory at the 1927 Solvay Conference in Brussels, a famous gathering of the founders of the field. As de Broglie explained that day to Bohr, Albert Einstein, Erwin Schrödinger, Werner Heisenberg and two dozen other celebrated physicists, pilot-wave theory made all the same predictions as the probabilistic formulation of quantum mechanics (which wouldn’t be referred to as the “Copenhagen” interpretation until the 1950s), but without the ghostliness or mysterious collapse.

The probabilistic version, championed by Bohr, involves a single equation that represents likely and unlikely locations of particles as peaks and troughs of a wave. Bohr interpreted this probability-wave equation as a complete definition of the particle. But de Broglie urged his colleagues to use two equations: one describing a real, physical wave, and another tying the trajectory of an actual, concrete particle to the variables in that wave equation, as if the particle interacts with and is propelled by the wave rather than being defined by it.
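
In modern notation (the formulation Bohm later fleshed out, now known as de Broglie-Bohm theory), the two equations for a single particle of mass m are the ordinary Schrödinger equation for the wave ψ plus a "guidance equation" that reads the particle's velocity off the wave. A standard statement is

$$ i\hbar\,\frac{\partial\psi}{\partial t} = -\frac{\hbar^{2}}{2m}\nabla^{2}\psi + V\psi, \qquad \frac{d\mathbf{Q}}{dt} = \frac{\hbar}{m}\,\operatorname{Im}\!\left(\frac{\nabla\psi}{\psi}\right)\Bigg|_{\mathbf{x}=\mathbf{Q}(t)}, $$

where Q(t) is the particle's actual position. The first equation evolves the pilot wave; the second ties the particle's trajectory to the local form of that wave, exactly the two-equation structure described above.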

For example, consider the double-slit experiment. In de Broglie’s pilot-wave picture, each electron passes through just one of the two slits, but is influenced by a pilot wave that splits and travels through both slits. Like flotsam in a current, the particle is drawn to the places where the two wavefronts cooperate, and does not go where they cancel out.

De Broglie could not predict the exact place where an individual particle would end up — just like Bohr’s version of events, pilot-wave theory predicts only the statistical distribution of outcomes, or the bright and dark stripes — but the two men interpreted this shortcoming differently. Bohr claimed that particles don’t have definite trajectories; de Broglie argued that they do, but that we can’t measure each particle’s initial position well enough to deduce its exact path.

In principle, however, the pilot-wave theory is deterministic: The future evolves dynamically from the past, so that, if the exact state of all the particles in the universe were known at a given instant, their states at all future times could be calculated.

At the Solvay conference, Einstein objected to a probabilistic universe, quipping, “God does not play dice,” but he seemed ambivalent about de Broglie’s alternative. Bohr told Einstein to “stop telling God what to do,” and (for reasons that remain in dispute) he won the day. By 1932, when the Hungarian-American mathematician John von Neumann claimed to have proven that the probabilistic wave equation in quantum mechanics could have no “hidden variables” (that is, missing components, such as de Broglie’s particle with its well-defined trajectory), pilot-wave theory was so poorly regarded that most physicists believed von Neumann’s proof without even reading a translation.

At the fifth Solvay Conference, a 1927 meeting of the founders of quantum mechanics, Louis de Broglie (middle row, third from right) argued for a deterministic formulation of quantum mechanics called pilot-wave theory. But a probabilistic version of the theory championed by Niels Bohr (middle row, far right) won the day.

More than 30 years would pass before von Neumann’s proof was shown to be false, but by then the damage was done. The physicist David Bohm resurrected pilot-wave theory in a modified form in 1952, with Einstein’s encouragement, and made clear that it did work, but it never caught on. (The theory is also known as de Broglie-Bohm theory, or Bohmian mechanics.)

Later, the Northern Irish physicist John Stewart Bell went on to prove a seminal theorem that many physicists today misinterpret as rendering hidden variables impossible. But Bell supported pilot-wave theory. He was the one who pointed out the flaws in von Neumann’s original proof. And in 1986 he wrote that pilot-wave theory “seems to me so natural and simple, to resolve the wave-particle dilemma in such a clear and ordinary way, that it is a great mystery to me that it was so generally ignored.”

The neglect continues. Nearly a century on, the standard, probabilistic formulation of quantum mechanics has been combined with Einstein’s theory of special relativity and developed into the Standard Model, an elaborate and precise description of most of the particles and forces in the universe. Acclimating to the weirdness of quantum mechanics has become a physicists’ rite of passage. The old, deterministic alternative is not mentioned in most textbooks; most people in the field haven’t heard of it. Sheldon Goldstein, a professor of mathematics, physics and philosophy at Rutgers University and a supporter of pilot-wave theory, blames the “preposterous” neglect of the theory on “decades of indoctrination.” At this stage, Goldstein and several others noted, researchers risk their careers by questioning quantum orthodoxy.

A Quantum Drop

When a droplet bounces along the surface of a liquid toward a pair of openings in a barrier, it passes randomly through one opening or the other while its “pilot wave,” or the ripples on the liquid’s surface, passes through both. After many repeat runs, a quantum-like interference pattern appears in the distribution of droplet trajectories. (Yves Couder et al.)

Now at last, pilot-wave theory may be experiencing a minor comeback — at least, among fluid dynamicists. “I wish that the people who were developing quantum mechanics at the beginning of last century had access to these experiments,” Milewski said. “Because then the whole history of quantum mechanics might be different.”

The experiments began a decade ago, when Yves Couder and colleagues at Paris Diderot University discovered that vibrating a silicone oil bath up and down at a particular frequency can induce a droplet to bounce along the surface. The droplet’s path, they found, was guided by the slanted contours of the liquid’s surface generated by the droplet’s own bounces — a mutual particle-wave interaction analogous to de Broglie’s pilot-wave concept.

In a groundbreaking experiment, the Paris researchers used the droplet setup to demonstrate single- and double-slit interference. They discovered that when a droplet bounces toward a pair of openings in a damlike barrier, it passes through only one slit or the other, while the pilot wave passes through both. Repeated trials show that the overlapping wavefronts of the pilot wave steer the droplets to certain places and never to locations in between — an apparent replication of the interference pattern in the quantum double-slit experiment that Feynman described as “impossible … to explain in any classical way.” And just as measuring the trajectories of particles seems to “collapse” their simultaneous realities, disturbing the pilot wave in the bouncing-droplet experiment destroys the interference pattern.

Droplets can also seem to “tunnel” through barriers, orbit each other in stable “bound states,” and exhibit properties analogous to quantum spin and electromagnetic attraction. When confined to circular areas called corrals, they form concentric rings analogous to the standing waves generated by electrons in quantum corrals. They even annihilate with subsurface bubbles, an effect reminiscent of the mutual destruction of matter and antimatter particles.

Video: The pilot-wave dynamics of walking droplets. (Daniel Harris and John Bush)

In each test, the droplet wends a chaotic path that, over time, builds up the same statistical distribution in the fluid system as that expected of particles at the quantum scale. But rather than resulting from indefiniteness or a lack of reality, these quantum-like effects are driven, according to the researchers, by “path memory.” Every bounce of the droplet leaves a mark in the form of ripples, and these ripples chaotically but deterministically influence the droplet’s future bounces and lead to quantum-like statistical outcomes. The more path memory a given fluid exhibits — that is, the less its ripples dissipate — the crisper and more quantum-like the statistics become. “Memory generates chaos, which we need to get the right probabilities,” Couder explained. “We see path memory clearly in our system. It doesn’t necessarily mean it exists in quantum objects, it just suggests it would be possible.”
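
To make that feedback loop concrete, here is a minimal toy sketch in Python of a one-dimensional walker with path memory. It is an illustration of the idea only, not the researchers' model: the wavenumber, memory factor, kick strength, and noise level are all invented parameters.

```python
import numpy as np

# Toy 1-D "walker" with path memory (illustrative parameters, not fitted
# to any bouncing-droplet experiment).
k = 2 * np.pi       # wavenumber of the standing ripple left by each bounce
memory = 0.95       # fraction of each ripple's amplitude surviving one bounce
kick = 0.02         # how strongly the local surface slope deflects the droplet

def wave_slope(x, impacts, amps):
    """Slope at x of the superposed ripples amps[i] * cos(k*(x - impacts[i]))."""
    return np.sum(-amps * k * np.sin(k * (x - impacts)))

rng = np.random.default_rng(0)
x, v = 0.0, 0.05
impacts = np.array([])  # where past bounces landed
amps = np.array([])     # how much of each old ripple is left
positions = []

for step in range(5000):
    impacts = np.append(impacts, x)        # this bounce leaves a mark...
    amps = np.append(amps, 1.0) * memory   # ...and every ripple fades a bit
    slope = wave_slope(x, impacts, amps)
    v = 0.9 * v - kick * slope + 0.01 * rng.standard_normal()
    x += v
    positions.append(x)

# The reproducible output is statistical: a histogram of visited positions,
# the toy analog of the fringe statistics described above.
hist, edges = np.histogram(positions, bins=60)
```

The point of the sketch is the loop itself: the droplet writes to the wave field on every bounce and then reads from it on the next, which is all "path memory" means here.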

The quantum statistics are apparent even when the droplets are subjected to external forces. In one recent test, Couder and his colleagues placed a magnet at the center of their oil bath and observed a magnetic ferrofluid droplet. Like an electron occupying fixed energy levels around a nucleus, the bouncing droplet adopted a discrete set of stable orbits around the magnet, each characterized by a set energy level and angular momentum. The “quantization” of these properties into discrete packets is usually understood as a defining feature of the quantum realm.

As a droplet wends a chaotic path around the liquid’s surface, it gradually builds up quantum-like statistics. (Harris et al., PRL, 2013)

If space and time behave like a superfluid, or a fluid that experiences no dissipation at all, then path memory could conceivably give rise to the strange quantum phenomenon of entanglement — what Einstein referred to as “spooky action at a distance.” When two particles become entangled, a measurement of the state of one instantly affects that of the other. The entanglement holds even if the two particles are light-years apart.

In standard quantum mechanics, the effect is rationalized as the instantaneous collapse of the particles’ joint probability wave. But in the pilot-wave version of events, an interaction between two particles in a superfluid universe sets them on paths that stay correlated forever because the interaction permanently affects the contours of the superfluid. “As the particles move along, they feel the wave field generated by them in the past and all other particles in the past,” Bush explained. In other words, the ubiquity of the pilot wave “provides a mechanism for accounting for these nonlocal correlations.” Yet an experimental test of droplet entanglement remains a distant goal.

Subatomic Realities

Many of the fluid dynamicists involved in or familiar with the new research have become convinced that there is a classical, fluid explanation of quantum mechanics. “I think it’s all too much of a coincidence,” said Bush, who led a June workshop on the topic in Rio de Janeiro and is writing a review paper on the experiments for the Annual Review of Fluid Mechanics.

Quantum physicists tend to consider the findings less significant. After all, the fluid research does not provide direct evidence that pilot waves propel particles at the quantum scale. And a surprising analogy between electrons and oil droplets does not yield new and better calculations. “Personally, I think it has little to do with quantum mechanics,” said Gerard ’t Hooft, a Nobel Prize-winning particle physicist at Utrecht University in the Netherlands. He believes quantum theory is incomplete but dislikes pilot-wave theory.

Many working quantum physicists question the value of rebuilding their highly successful Standard Model from scratch. “I think the experiments are very clever and mind-expanding,” said Frank Wilczek, a professor of physics at MIT and a Nobel laureate, “but they take you only a few steps along what would have to be a very long road, going from a hypothetical classical underlying theory to the successful use of quantum mechanics as we know it.”

“This really is a very striking and visible manifestation of the pilot-wave phenomenon,” Lloyd said. “It’s mind-blowing — but it’s not going to replace actual quantum mechanics anytime soon.”

In its current, immature state, the pilot-wave formulation of quantum mechanics only describes simple interactions between matter and electromagnetic fields, according to David Wallace, a philosopher of physics at the University of Oxford in England, and cannot even capture the physics of an ordinary light bulb. “It is not by itself capable of representing very much physics,” Wallace said. “In my own view, this is the most severe problem for the theory, though, to be fair, it remains an active research area.”

Pilot-wave theory has the reputation of being more cumbersome than standard quantum mechanics. Some researchers said that the theory has trouble dealing with identical particles, and that it becomes unwieldy when describing multiparticle interactions. They also claimed that it combines less elegantly with special relativity. But other specialists in quantum mechanics disagreed or said the approach is simply under-researched. It may just be a matter of effort to recast the predictions of quantum mechanics in the pilot-wave language, said Anthony Leggett, a professor of physics at the University of Illinois, Urbana-Champaign, and a Nobel laureate. “Whether one thinks this is worth a lot of time and effort is a matter of personal taste,” he added. “Personally, I don’t.”

Attendees of Hydrodynamic Quantum Analogs IV, a meeting held June 2-6 in Rio de Janeiro. The conference organizer, John Bush, a professor of applied mathematics at MIT, is pictured at left. (Courtesy of John Bush)

On the other hand, as Bohm argued in his 1952 paper, an alternative formulation of quantum mechanics might make the same predictions as the standard version at the quantum scale, but differ when it comes to smaller scales of nature. In the search for a unified theory of physics at all scales, “we could easily be kept on the wrong track for a long time by restricting ourselves to the usual interpretation of quantum theory,” Bohm wrote.

Some enthusiasts think the fluid approach could indeed be the key to resolving the long-standing conflict between quantum mechanics and Einstein’s theory of gravity, which clash at infinitesimal scales.

“The possibility exists that we can look for a unified theory of the Standard Model and gravity in terms of an underlying, superfluid substrate of reality,” said Ross Anderson, a computer scientist and mathematician at the University of Cambridge in England, and the co-author of a recent paper on the fluid-quantum analogy. In the future, Anderson and his collaborators plan to study the behavior of “rotons” (particle-like excitations) in superfluid helium as an even closer analog of this possible “superfluid model of reality.”

But at present, these connections with quantum gravity are speculative, and for young researchers, risky ideas. Bush, Couder and the other fluid dynamicists hope that their demonstrations of a growing number of quantum-like phenomena will make a deterministic, fluid picture of quantum mechanics increasingly convincing.

“With physicists it’s such a controversial thing, and people are pretty noncommittal at this stage,” Bush said. “We’re just forging ahead, and time will tell. The truth wins out in the end.”

Analysis suggests that solar thermal can provide baseline power | Ars Technica

But it requires careful coordination across multiple plants.

by John Timmer – June 24 2014, 1:28pm CDT

With the cost of photovoltaic devices and wind power dropping dramatically, the economics of some forms of renewable power are becoming increasingly compelling. But these sources of power come with a significant limitation: intermittency. Solar can’t generate power around the clock (and output drops during cloudy days), while wind power can suffer from low output that can last days. There are various ways to work around gaps in production—grid-scale storage and careful matching of supply and demand—but some degree of what’s termed "baseline power" is needed to ensure the stability of the grid.

There are ways to provide this baseline power that don’t involve significant carbon emissions, like nuclear and hydro power. But those come with their own set of issues. So a group of European researchers decided to look into a form of renewable power that hasn’t attracted as much attention: concentrating solar power (CSP), sometimes termed solar thermal power.

CSP involves the use of mirrors to focus sunlight onto a liquid, rapidly bringing it up to extremely high temperatures. The resulting heat can be used immediately to generate electricity, or some fraction of it can be stored and used to drive generators later. Depending on the details of the storage, CSP can typically generate electricity for at least eight hours after the Sun sets, and some plants have managed to produce power around the clock during the summer.

Typically, a CSP plant is optimized for a mixture of generation and storage. But the authors of the new paper note that it’s possible to expand the area where the mirrors are located (called the "solar field") relative to the plant’s power block. This may be less efficient economically, but it allows the plant to ramp up generation quickly at the start of the day while continuing to store heat even as it generates power.

To determine whether this sort of approach could allow CSP to provide better baseline power, the authors looked at three scenarios: a flat power demand, one based on the European Union (where demand peaks in winter evenings), and one based on California, where demand peaks during summer afternoons. They also looked at three different levels of plant optimization: none at all, optimization of layout and equipment based on economic considerations, and a regional optimization. In this last case, the layouts of multiple sites are coordinated in order to provide the best baseline power output.

If this sort of regional coordination can be achieved—and the authors don’t offer any suggestions as to how it could be—then as few as 10 sites in southern Europe would be sufficient to provide 70 to 80 percent reliable baseline power at very little added cost. And that, the authors point out, is similar to the reliability of a typical nuclear plant. Looking at other regions of the globe, CSP would also provide similar performance in South Africa, but wouldn’t be as effective in the US and India. The problem in these locations isn’t a lack of good sites; it’s that the weather at the best sites tends to be similar (i.e., a cloudy day at one site will often mean cloudy days at the rest).
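
A back-of-the-envelope sketch of why correlated weather is the limiting factor (a toy Monte Carlo with an invented per-site availability, not the paper's method): if each site can deliver in a given hour with probability p, independence makes the chance that at least one of n sites delivers climb rapidly toward certainty, while perfect correlation pins it at the single-site figure.

```python
import numpy as np

# Toy model: availability of at least one of n CSP sites in a given hour.
# p = 0.7 and n = 10 are illustrative guesses, not numbers from the paper.
rng = np.random.default_rng(1)
p, n, trials = 0.7, 10, 100_000

# Independent sites: each site draws its own weather.
indep = rng.random((trials, n)) < p
print("independent sites:", indep.any(axis=1).mean())  # ~1 - (1-p)**n, near 1.0

# Perfectly correlated sites: one weather draw covers all sites.
corr = rng.random(trials) < p
print("correlated sites: ", corr.mean())               # ~p, no better than one site
```

Real portfolios sit between the two extremes, which is why coordinated southern-European sites fare well in the analysis while the best US and Indian sites, sharing similar weather, gain less from aggregation.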

The more we’re willing to spend on CSP plants, the better optimized they can be, and the more reliable the power would be. But this gets into the biggest problem with CSP: it’s expensive. It was competitive with photovoltaic power a few years ago, but the price of PV has plunged, while CSP’s costs have dropped only slowly. We can expect continued declines in price, but CSP is likely to remain one of the pricier options.

That situation should sound familiar, because it’s the same problem faced by offshore wind. Although the wind is much more reliable in offshore locations—and thus the power produced there is closer to baseline quality—installing wind turbines in the ocean is significantly more expensive. The challenge here is that, once on the grid, the power generated by any source is valued equally, with no bonuses attached to the source being renewable or baseline.

Nature Climate Change, 2014. DOI: 10.1038/NCLIMATE2276.

No-fly list removal process unconstitutional, judge rules | Ars Technica

Judge says there’s no "meaningful mechanism" to dispute placement on watch list.

The Department of Homeland Security’s method for the public to challenge placement on a no-fly list is unconstitutional, a federal judge ruled Tuesday. US District Judge Anna Brown ordered the authorities to revise the process, which she declared "wholly ineffective."

Brown’s ruling stems from a case brought by 13 people on a no-fly list. The judge wrote that the redress process does not provide "a meaningful mechanism for travelers who have been denied boarding to correct erroneous information in the government’s terrorism databases."

It was the first time a court declared the Traveler Redress Inquiry Program run by the Department of Homeland Security unconstitutional.

“Our clients will finally get the due process to which they are entitled under the Constitution. This excellent decision also benefits other people wrongly stuck on the no-fly list, with the promise of a way out from a Kafkaesque bureaucracy causing them no end of grief and hardship. We hope this serves as a wake-up call for the government to fix its broken watch list system, which has swept up so many innocent people," said Hina Shamsi, the national security project director of the American Civil Liberties Union.

The decision comes months after a Muslim woman was the first to successfully challenge her placement on a watch list. But that decision did not raise the broader constitutional issues at stake in the case decided Tuesday. The Justice Department said it was reviewing the decision and declined comment on whether it would appeal.

Under the redress program, the government responds to passengers with a letter that neither explains why they are on a watch list that usually bars them from flight nor says whether they’ve been removed from a watch list.

Brown ordered DHS to disclose to the plaintiffs, using unclassified information, why they were placed on a watch list.

Sheikh Mohamed Abdirahman Kariye, who is the imam of Portland’s largest mosque and a plaintiff in the case, was elated with the decision.

“I have been prevented by the government from traveling to visit my family members and fulfill religious obligations for years, and it has had a devastating impact on all of us," he said in a statement. "After all this time, I look forward to a fair process that allows me to clear my name in court.”

by David Kravets

Figuring out why drugs don’t work on pancreatic cancer | Ars Technica

A mix of cancerous and normal cells alters interactions with the immune system.

by Mohit Kumar Jolly – June 24 2014, 3:01pm CDT

Pancreatic cancer is one of the most lethal cancers, with a typical survival period after diagnosis of only four to six months. The main reason for the poor prognosis is that chemotherapy, which has had some success in extending lives for patients with other cancers, doesn’t seem to slow down pancreatic cancer.

People had thought that this failure was caused by the tissue that surrounds the tumor, called the stroma, blocking the delivery of chemotherapy drugs to the tumor. A new study, published in Cancer Cell, raises questions about this idea. It shows that rather than supporting tumor progression, the stroma inhibits the progression by recruiting the body’s immune system to attack the tumor.

Tumorous tale

All tumors are composed of a mix of cancerous and normal cells. But pancreatic cancer is unusual in that only around 10 percent of the cells in the tumor are cancerous, the lowest proportion in any cancer. The remaining 90 percent is stroma that consists of cells known as myofibroblasts.

In keeping with this distinctive property of pancreatic cancer, a previous study had indicated that the stroma can act as a physical barrier to keep chemotherapy drugs away from the tumor. This finding triggered a slew of clinical trials that combined chemotherapy with “stromal depletion therapy”—that is, removing the stroma from the tumor. However, these trials had to be stopped abruptly when patients receiving this combination therapy were found to have an accelerated tumor progression when compared to patients who only received chemotherapy.

In a new study, Raghu Kalluri and his colleagues at MD Anderson Cancer Center may have found an explanation for these disappointing results.

Using mice, Kalluri and colleagues showed that the depletion of myofibroblasts—the major component of stroma—at any stage of pancreatic cancer does not improve the effectiveness of chemotherapy. Instead, tumors grow more aggressively. This indicated to Kalluri that the stroma could inhibit the tumor rather than promote its growth.

“We did these experiments thinking that we would show the importance of myofibroblasts and fibrosis in pancreas cancer progression, but the results went completely against that hypothesis,” Kalluri said in a statement. “This supportive tissue that is abundant in pancreatic cancer tumors is not a traitor as we thought, but rather an ally that is fighting to the end. It’s a losing battle with cancer cells, but progression is much faster without their constant resistance. It is like having a car with weak yet functioning brakes vs having one with no brakes."

Dump me not just yet

It is not all bad news for this cancer therapy, however. Kalluri found that tumors with less stroma had higher levels of CTLA-4, a protein that is responsible for slowing down the immune system response. When these tumors were treated with ipilimumab, a drug that blocks CTLA-4, survival time of the mice increased by 60 percent compared to the untreated control mice.

This is a shot in the arm for cancer immunotherapy—therapies that enable the body’s immune system to fight cancer directly—which was named the Breakthrough of the Year in 2013 by the journal Science. This sort of therapy might be more effective in pancreatic cancer patients where CTLA-4 is blocked. It’s also possible that a combination of immunotherapy and stromal depletion therapy might be more effective for pancreatic cancer patients with dense stroma.

The development offers hope for patients with a disease in which only seven out of 100 patients survive for five years after being diagnosed.

Cancer Cell, 2014. DOI: 10.1016/j.ccr.2014.04.005.

Mohit Kumar Jolly is a graduate student in cancer systems biology at Rice University. This article was first published on The Conversation.

How Disney built and programmed an animatronic president | Ars Technica

Before programming languages and systems on a chip, there was voltage and sound.

The control harness, on the left, used for programming animatronic characters’ movements. (Disney)
Welcome to The Multiverse, a new column where you’ll find Ars’ explorations and meditations on the world of science fiction. The Multiverse covers the things we love, the things we hate, and the things we do not yet understand from source materials new and old. Send questions, tips, or just say hi to The Multiverse’s writers at themultiverse@arstechnica.com.

Animatronics have powered some of sci-fi and fantasy cinema’s most imposing creatures and characters: The alien queen in Aliens, the Terminator in The Terminator, and Jaws of Jaws (the key to getting top billing in Hollywood: be a robot). Even beloved little E.T.—of E.T.: the Extra-Terrestrial—was a pile of aluminum, steel, and foam rubber capable of 150 robotic actions, including wrinkling its nose. But although animatronics is a treasured component of some of culture’s farthest-reaching movies, it originated in much more mundane circumstances. According to the Disney archives, it began with a bird.

Among the things Walt Disney was renowned for was bringing animatronics (or what he termed at the time Audio-Animatronics) to big stages at his company and elsewhere. But Disney didn’t discover or invent animatronics for entertainment use; rather, he found it in a store. In a video on Disney’s site, Disney archivist Dave Smith tells the story of how one day in the early 1950s, while out shopping in a New Orleans antique shop, Disney took note of a tiny cage with a tinier mechanical bird, bobbing its tail and wings while tweeting tunelessly. He bought the trinket and brought it back to his studio, where his technicians took the bird apart to see how it worked.

The bird that reportedly started it all, bought by Walt Disney in New Orleans in the 50s.

This led to the Disney engineers experimenting with what eventually became Walt Disney’s Enchanted Tiki Room, a building full of animatronic birds singing, flowers blooming, and tiki drummers drumming. Disney was able to make the birds work years before computer programming could bring engineering, sound, and movement together in a useful way.

According to an article from the defunct Persistence of Vision, Audio-Animatronics figures’ sounds were recorded onto tape, like a bird’s chirping. When the tape was played back, the sound would cause a metal reed in the system to vibrate. The reed’s vibration would close a circuit, allowing electricity to cross it and power a pneumatic valve in the figure to move (in the case of a bird, opening its mouth). When the sound stopped, the circuit would open again, and the bird’s mouth would return to its neutral, closed position. This way, the motion was dependent upon the sound, so the two would always operate together to create a realistic display of a singing bird.
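
In software terms, that reed-and-circuit mechanism is just threshold detection on an audio envelope. Here is a hedged sketch (the sample rate, tone, and threshold are invented for illustration; Disney's hardware did this with a vibrating reed, not code):

```python
import numpy as np

# Toy model of the sound-driven valve: the bird's mouth is open exactly
# while the recorded chirp is loud enough to "vibrate the reed."
rate = 8000                        # samples per second (illustrative)
t = np.arange(0, 2.0, 1 / rate)    # two seconds of "tape"

# A 440 Hz chirp that sounds for half of each second, then goes silent.
audio = np.sin(2 * np.pi * 440 * t) * ((t % 1.0) < 0.5)

window = rate // 100               # 10 ms smoothing window
envelope = np.convolve(np.abs(audio), np.ones(window) / window, mode="same")

valve_open = envelope > 0.1        # True -> circuit closed, pneumatic valve powered
print(f"mouth open {valve_open.mean():.0%} of the time")
```

As in the original mechanism, the motion is entirely dependent on the sound, so the two can never drift out of sync.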

The Enchanted Tiki Room opened in Disneyland in 1963, first as a restaurant and then as a standalone attraction. Throughout, the implementation remained simple. Disney was initially motivated to bring robots to life as a form of real-life animation, essentially taking movies off the screen and into three-dimensional space. But by the time the tiki room debuted, Disney’s team had been quietly pursuing a more ambitious goal—experimenting with more complex systems that could mimic human beings. They were getting close.

Toward a complicated human-robot

While the tiki room was being developed in the 50s, engineers at Disney were experimenting with "Project Little Man," an animatronic figure meant to mimic a Buddy Ebsen dance routine. They made some progress, but the figure was crude.

From there, Disney’s team experimented for a while with a Confucius-like animatronic character that was meant to stand in a restaurant entryway and serve pearls of wisdom when asked questions. That project fell by the wayside when Disney opted instead to work on an animatronic figure of President Abraham Lincoln, modeling its face on a real cast taken from Lincoln in 1860, the year before he took office.

For much larger figures like Lincoln, Disney’s technicians had to create a more flexible control system. The digital system used for the Tiki Room birds above still worked for simple actions like blinking eyes or small finger movements, according to Persistence of Vision, but larger body movements required an analog system.

This was accomplished a couple of different ways, but both were based on a system of regulating voltage and triggering motion with tones. An increase or decrease in voltage was used to activate the more complex types of movement in animatronic figures, according to Persistence of Vision. For instance, if a figure’s neutral head position was hanging to the left, an increase in voltage would activate its hydraulics to make it start to move to the right. A decrease would swing it back to the left, and no voltage would return it to neutral. It’s easy to imagine how the on-off system described above for bird chirping would become alarmingly herky-jerky for, say, President Lincoln turning to look at you. To combat this, the voltage system allowed for smoother movements.
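
The difference between the two schemes can be summarized in a few lines (a sketch with invented voltages and angle ranges, not Disney's specifications): the bird-style signal is binary, while the analog scheme maps a continuously varying voltage onto a joint angle, so ramping the voltage sweeps the joint smoothly instead of snapping it.

```python
import numpy as np

# Illustrative contrast between the two control schemes described above.
def digital_mouth(signal_present: bool) -> float:
    """Bird-style on/off control: mouth fully open (1.0) or fully closed (0.0)."""
    return 1.0 if signal_present else 0.0

def analog_head(voltage: float) -> float:
    """Lincoln-style proportional control: map 0-10 V onto -45..+45 degrees,
    with 5 V as the neutral (centered) head position. Values are invented."""
    voltage = min(max(voltage, 0.0), 10.0)
    return (voltage - 5.0) / 5.0 * 45.0

# Smoothly ramping the voltage sweeps the head instead of snapping it.
for v in np.linspace(4.0, 6.0, 5):
    print(f"{v:.1f} V -> {analog_head(v):+5.1f} deg")
```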

The animatronic president Lincoln standing to deliver his speech. (Wikipedia)

The voltage changes would be triggered by audio tones laid down on 35mm film stock that signaled the voltage to increase or decrease. The engineering team used a transducer that would relate the voltage changes to changes in sound, so that they could record movements and play them back.

Persistence of Vision states that a movement would be rehearsed, perfected, and then "recorded." An engineer would use a potentiometer joystick that applied and measured the voltage differences going to a particular joint. Once he had the motion down, he would perform the gesture by generating the ups and downs of voltage with the potentiometer, and the transducer would translate that to tones recorded on film. The engineers would then play back the track to see that the movement worked correctly.

This sounds simple enough for a single movement, but most animatronic figures’ acts were composed of many different motions coordinated together. For instance, just raising an arm and pointing a finger for emphasis would require shoulder, elbow, wrist, and finger movements that all had to happen in tandem. Movements for each joint for each act would be recorded on one reel, and then all the movement reels would be combined into a master tape. Persistence of Vision notes that for the Carousel of Progress exhibit put on by Disney at the 1964 World’s Fair, four narrator figures had a combined 120 actions that were all recorded on a single one-inch, 32-track master tape.

The other solution also used voltage regulation, but it didn’t require a system of timed cues. Instead of working on one joint at a time with a transducer and potentiometer, engineers used a wearable control harness, visible in the Dave Smith video, that was hooked up to an animatronic figure. The harness measured voltage differences across its various joints and mimicked the changes in the animatronic figure connected to it, recording the movements to tape. The harness had the benefit of being able to record complex multi-joint movements, but it required perfect performances from the wearer, who had to sit for hours trying to get the motions right.

The animatronic Lincoln used both types of motion, though because of maintenance and space constraints, the actual materials used to execute the necessary motions were improved between the time Disney first started developing animatronics and when the figure debuted. For instance, the engineering team later started integrating servo valves, which can translate digital signals into smooth movements and would have cut down on the need for precise voltage difference recordings for some motions.

The animatronic Lincoln debuted at the 1964 World’s Fair as part of the state of Illinois’ pavilion. "He" could stand up from a chair and deliver snippets from the president’s most famous addresses. "Audiences were really amazed at this. They thought it was an actor up there that was standing on the stage," Smith said. Animatronics filled several other Disney exhibits at the fair, including "It’s a Small World" and the General Electric Carousel of Progress.

As animatronics circled back around from being an expression of live animation to a special effect in movies, the importance of programming and coordinating performances perfectly fell by the wayside. While some programming still was and is necessary, a full-on illusion rarely is. But even if it’s no longer the standard it once was, Disney’s systems for its birds and its robot president laid the foundation for some of film’s most awesome displays.

3D-printed material can carry 160,000 times its own weight

Researchers from MIT and Lawrence Livermore have created a new class of materials with the same density as aerogels (aka frozen smoke) but 10,000 times stiffer. Called micro-architected metamaterials, they can withstand 160,000 times their own weight, making them ideal for load-bearing, weight-sensitive applications. To do it, the team created microscopic lattice molds using a 3D printer and photosensitive feedstock, then coated the molds with a layer of metal 200 to 500 nanometers thick. Once the mold material was removed, what remained was an ultralight metal lattice with a very high strength-to-weight ratio. The process also works with polymers and ceramics — with the latter, the team created a material as light as aerogel but four orders of magnitude stiffer. In fact, it was 100 times stronger than any known aerogel, making it ideal for use in the aerospace industry. Given that the work was funded by DARPA, it could also end up on robots, drones, or soldiers.

US Supreme Court deals major blow against software patents and patent trolls | ExtremeTech

The current US Supreme Court is one of the most divided in history, but the justices managed to come together for a unanimous decision this week to strike a blow against software patents. The Court has narrowed the definition of an invention in the US to exclude abstract ideas that have simply been implemented on a computer. Some were hoping that the Court would make broad statements about the (in)validity of software patents in general, but this is still a step in the right direction.

The case in question pitted Alice Corp against CLS Bank International, two financial institutions you’ve likely never heard of. Lawsuits between banking companies have the potential to be tremendously boring, but this one could have ripple effects through much of the tech industry. The Court’s decision today basically says that tacking on “then do it on a computer” to an existing idea is not patentable.

It sounds a little bizarre on the face of it. Surely people would not be so bold as to simply toss in a generic computerization step and apply for a patent, right? It’s actually a well-known loophole and a favorite of patent trolls. In this case, Alice Corp claimed a patent on escrow services, which have existed for centuries. Oh, but Alice had the brilliant idea of doing escrow on a computer. The ruling striking down Alice’s patent is 20 pages long, which is short for a Supreme Court opinion. In it, Justice Clarence Thomas attempts to set a standard by which these fringe software patents can be judged — generic computer implementation doesn’t make an abstract idea patentable.

The reasoning used in the opinion is strikingly similar to another of the Court’s recent patent cases, that of Myriad Genetics. In that case, the Court unanimously decided that human genes cannot be patented, and invalidated the Utah-based company’s claim on the BRCA1 and BRCA2 genes, which are important for diagnosing breast and ovarian cancers. It’s the same deal — you cannot apply a common technique to a non-patentable idea and magically have a patent (even if it was hard to do).

We’re probably not looking at the end of patent trolling, but some of the particularly egregious patents out there are in trouble. True patent trolls often rely on incredibly broad business method patents that use the “do it on a computer” loophole. For example, sell a thing, but do it with a computer. Or distribute a newsletter on a computer.

These exceptionally lame patents aren’t instantly dead, but lower courts could easily point to the Supreme Court ruling and toss them out. This screws up the business model of patent trolls, which relies mainly on intimidating people with protracted and expensive legal battles. The trolls lose their leverage when many of these generic software patents are likely to be invalidated the first time they come before a judge.
