Do not buy the House Science Committee’s claim that scientists faked data


No credible evidence supports the claim that NOAA fabricated data; the evidence still points to climate change
By Kendra Pierre-Louis

Donald LeRoi, NOAA Southwest Fisheries Science Center

Climate scientists have worked hard for decades to prove climate change. Why is the US House Committee on Science, Space and Technology working so hard not to believe them?

On Sunday February 5th, the U.S. House of Representatives Committee on Science, Space, and Technology published a press release alleging, based on questionable evidence, that the National Oceanic and Atmospheric Administration (NOAA) “manipulated climate records.”

The source of their evidence, according to Committee spokesperson Thea McDonalds, was a Daily Mail article. The Daily Mail is a British tabloid most famous for outlandish headlines such as “Is the Bum the New Side Boob” and “ISIS Chief executioner winning hearts with his rugged looks.” This is not the first time that the House Science Committee has used spurious evidence to dispute the existence of human-driven climate change.

The piece, which quotes John Bates—a scientist whom NOAA once employed—challenges the data used in the famous 2015 Karl study. The study, named after Thomas R. Karl—the director of NOAA’s National Centers for Environmental Information (NCEI) and the paper’s lead author—was published in Science and debunked the notion of a climate “hiatus” or “cooling.”

The House Committee’s press release, which includes quotes from committee Chairman Lamar Smith as well as Darin LaHood (R-Ill.) and Andy Biggs (R-Ariz.), misrepresents a procedural disagreement as proof that human-caused climate change is not occurring. It’s akin to pointing to a family argument as proof that the family members aren’t actually related.

“What the House Committee is trying to do, like they did in the past, is debunk the whole issue of global warming,” said Yochanan Kushnir, a senior scientist at the Lamont-Doherty Earth Observatory.

At the center of the argument is contention over how NOAA maintains climate data records. Climate researchers receive grants to process and develop climate-related data sets. Once those data sets are fully developed, it becomes the responsibility of NOAA’s National Climatic Data Center (NCDC) to preserve, monitor, and update that data—which can sometimes be what data scientists refer to as messy.

“The problem,” said Kevin Trenberth a Distinguished Senior Scientist at the National Center for Atmospheric Research, “is that this is quite an arduous process, and can take a long time. And, of course NOAA doesn’t necessarily get an increase of funds to do this.”

Maintaining this data fell under the purview of Bates’ group, and it’s this data that he has taken issue with publicly.

“Bates was complaining that not all of the data sets were being done as thoroughly as he wanted,” said Trenberth. “But there’s a compromise you have to make as to whether you can do more data sets or whether you can do more really thoroughly. And the decision was made that you try and do more.”

Ice core samples are used as proxy indicators for past global climate temperatures and atmospheric CO2 concentrations.

Bates takes particular issue with the way Karl handled land temperature data in the Science study, which addressed the so-called “climate hiatus.” Early analyses of global temperature trends during the first ten years of this century seemed to suggest that warming had slowed. Climate change doubters used this analysis to support their belief that—despite climatological data that includes 800,000-year ice-core records of atmospheric carbon dioxide—humans have not affected the atmosphere by releasing billions of tons of carbon dioxide per year.

“His primary complaint seems to be that when researchers at NOAA published this paper in Science, while they used a fully developed and vetted ocean temperature product, they used an experimental land temperature product,” said Zeke Hausfather, an energy systems analyst and environmental economist with Berkeley Earth. Because climate data comes from a number of different sources, methods of handling that data go through a vetting process that ultimately dictates the use of one for the official government temperature product. That can mean controlling for known defects in the devices that gather climate data or figuring out the best way to put them together. The product that Karl used for land temperature data hadn’t finished that process.

“That said,” said Hausfather, “the land temperature data they used in the paper is certainly up to the standards of an experimental or research product.”

So what does that mean for those of us on the outside?

Not much.

The record data that Bates takes umbrage with showed roughly the same amount of warming as the old record. And the evidence that the Karl paper cites as to why there’s no hiatus is based on ocean temperatures—not land. A government source who does not wish to be named emphasized that there is no evidence, or even a credible suggestion, that NOAA falsified data in the Karl et al (K15) study. And even if Bates’ critiques were valid—and given that this methodology, after much peer review, is now the default way that NOAA calculates land temperatures, his complaints seem problematic—it wouldn’t upend the study’s conclusion. The evidence still supports human-caused climate change.

As for the differences in water temperatures, those can easily be accounted for by differences in the tools used to measure them. In the past, as PopSci previously reported, most ocean temperature data was taken by ships, which pulled water into their engine rooms—rooms warmer than the ocean outside, making recorded ocean temperatures slightly higher. When ocean temperature tracking switched to buoys, which stay in the water all of the time and don’t heat up, NOAA failed to control for the cooler (and arguably more accurate) readings taken in the absence of hot ship engines. The Karl study corrects for that temperature difference, and Bates’ complaints do nothing to discredit it.

“People should be aware of the fact that there are different groups that analyze the data,” said Kushnir. “If you look at all of the sources together you get a bigger, more reliable picture of what’s happening. There’s the Hadley Center from the UK meteorology office that puts together a data set of global mean temperatures, there’s NASA, NOAA, then there’s the Berkeley group and the Japanese who have their own way of putting information together.”

Zeke Hausfather at Berkeley Earth independently developed an updated version of Figure 2 in Karl et al 2015. The black line shows the new NOAA record, while the thinner blue line shows the results from raw land stations, ships, and buoys with no corrections for station moves or instrument changes. The two are quite similar over the last 50 years; over the last 100 years the corrected data [the one Karl uses] actually shows less global warming.

The Karl paper is also not the only one to tackle the hiatus. A study in Nature by Stephan Lewandowsky of the Cabot Institute at the University of Bristol, and one in the journal Climatic Change by Bala Rajaratnam of Stanford University, say the same thing.

The Karl study’s high profile, however, has made it a frequent target for criticism.

“The whole issue of this hiatus issue was discussed quite heavily in science,” said Kushnir. “And as scientists we understand what happened in this long period.”

Basically, there’s the natural climate variability, and then there’s the variability caused by climate change. During this period the natural variability trended cooler, but the warming from climate change on top of it did not stop.

But that isn’t even Bates’ complaint, whatever the House Committee implies—his complaint is that the data wasn’t vetted thoroughly enough.

“I interpret a key part of the issue,” said Trenberth, “as, how deep do you go and how far into the research do you go for one particular data set, as opposed to moving onto the next data set and getting that into a much better state than it would have been otherwise.”

Trenberth points to a backlog of data that hasn’t yet been released or updated, pressuring NOAA to focus on volume over perfection. If this sounds to you like an argument for more funding for climate change research instead of less, you’re not alone.

“Recommendations about doing these things have been made, but they’ve never been adequately funded. So we muddle along,” said Trenberth. “And Lamar Smith in the House has been responsible for some of this, because they actually cut the funding to enable NOAA to properly deal with and process the data by 30 percent in 2012. So the ability to do this properly has actually been compromised by the House Science Committee and by Lamar Smith in particular.”

The current administration has talked a lot about the “politicization of science.” Meanwhile on the House Committee’s website, Representative Smith states that Bates has exposed the “previous administration’s efforts to push their costly climate agenda at the expense of scientific integrity.” With the House Committee misrepresenting both Bates’ complaint and the overarching scientific consensus, it does indeed seem that the politicization of science is a problem the administration needs to deal with.

February 09, 2017 at 11:22AM


Why a Tax Break for Security Cameras Is a Terrible Idea


Law enforcement agencies around the country have been expanding their surveillance capabilities by recruiting private citizens and businesses to share their security camera footage and live feeds. The trend is alarming, since it allows government to spy on communities without the oversight, approval, or legal processes that are typically required for police. 

EFF is opposing new legislation introduced in California by Assemblymember Marc Steinorth that would create a tax credit worth up to $500 for residents who purchase home security systems, including fences, alarms and cameras. In a letter, EFF has asked the lawmaker to strike the tax break for surveillance cameras, citing privacy concerns as well as the potential threat created by consumer cameras that can be exploited by botnets. As we write in the letter: 

Personal privacy is an inalienable right under Section 1 of the California Constitution. Yet, in 2017, privacy is under threat on multiple fronts, including through the increase in use of privately operated surveillance cameras. Law enforcement agencies throughout the state have been encouraging private individuals and businesses to install cameras and share access to expand government’s surveillance reach through private cooperation. The ability for facial recognition technology to be applied routinely and automatically to CCTV footage will present even more dangers for personal privacy. EFF has significant concerns that, by using tax credits to encourage residents of California to buy and install security cameras, A.B. 54 will not only increase the probability that Californians will use cameras to spy on one another but will also build the infrastructure to allow for the growth of a “Big Brother” state.

In addition, this tax credit for surveillance cameras may create a new weakness for security. In October, a massive cyberattack that exploited personal cameras disabled Internet traffic across the country. EFF and independent security researchers have also discovered surveillance cameras that were openly accessible over the Internet, allowing anyone with a browser to watch live footage and manipulate the cameras. The potential for breaches will grow commensurately with the increase in the number of cameras in communities promoted by the tax incentive.

EFF urges Steinorth to amend A.B. 54 and, failing that, we ask his colleagues in the California legislature to vote against the bill. 

January 09, 2017 at 03:13PM

NASA’s Cassini Spacecraft Prepares for Ring-Grazing Phase

In the final year of its epic voyage, on Nov. 30, NASA’s Cassini orbiter will begin a daring set of ‘ring-grazing orbits,’ skimming past the outside edge of Saturn’s main rings.

Artist’s concept of NASA’s Cassini spacecraft at Saturn. Image credit: NASA.


Launched in 1997, Cassini has been touring the Saturn system since arriving there in 2004 for an up-close study of the gas giant, its rings and moons.

During its journey, the probe has made numerous discoveries, including a global ocean within Enceladus and liquid methane seas on Titan.

On Nov. 30, following a gravitational nudge from Titan, Cassini will enter the first phase of the mission’s dramatic endgame.

Cassini will fly closer to Saturn’s rings than it has since its 2004 arrival. It will begin the closest study of the rings and offer unprecedented views of moons that orbit near them.

These orbits, a series of 20, are called ring-grazing orbits, or F-ring orbits.

During these weekly orbits, Cassini will approach to within 4,850 miles (7,800 km) of the center of the narrow F ring, with its peculiar kinked and braided structure.

Cassini’s instruments will attempt to directly sample ring particles and molecules of faint gases.

“Even though we’re flying closer to the F ring than we ever have, we’ll still be more than 4,850 miles distant. There’s very little concern over dust hazard at that range,” said Cassini project manager Dr. Earl Maize, from NASA’s Jet Propulsion Laboratory (JPL).

The F ring marks the outer boundary of the main ring system. This ring is complex and constantly changing. Cassini images have shown structures like bright streamers, wispy filaments and dark channels that appear and develop over mere hours.

The ring is also quite narrow — only about 500 miles (800 km) wide. At its core is a denser region about 30 miles (50 km) wide.

Cassini’s ring-grazing orbits also offer unprecedented opportunities to observe the menagerie of small moons that orbit in or near the edges of Saturn’s rings, including best-ever looks at the moons Pandora, Atlas, Pan and Daphnis.

“During the F-ring orbits we expect to see the rings, along with the small moons and other structures embedded in them, as never before,” said Cassini project scientist Dr. Linda Spilker, also from JPL.

“The last time we got this close to the rings was during arrival at Saturn in 2004, and we saw only their backlit side.”

“Now we have dozens of opportunities to examine their structure at extremely high resolution on both sides.”

During ring-grazing orbits, the spacecraft will pass as close as about 56,000 miles (90,000 km) above Saturn’s cloud tops. But even with all their exciting science, these orbits are merely a prelude to the planet-grazing passes that lie ahead.

In April 2017, Cassini will begin its Grand Finale phase. After nearly 20 years in space, the mission is drawing near its end because the spacecraft is running low on fuel.

The Cassini team carefully designed the finale to conduct an extraordinary science investigation before sending the spacecraft into Saturn to protect its potentially habitable moons.

During this phase, the probe will pass as close as 1,012 miles (1,628 km) above the clouds as it dives repeatedly through the narrow gap between Saturn and its rings, before making its mission-ending plunge into the planet’s atmosphere on Sept. 15, 2017.

November 28, 2016 at 01:50PM

APOD: 2014 December 14 – Molecular Cloud Barnard 68


Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer.

2014 December 14


Molecular Cloud Barnard 68
Image Credit: FORS Team, 8.2-meter VLT Antu, ESO

Explanation: Where did all the stars go? What used to be considered a hole in the sky is now known to astronomers as a dark molecular cloud. Here, a high concentration of dust and molecular gas absorb practically all the visible light emitted from background stars. The eerily dark surroundings help make the interiors of molecular clouds some of the coldest and most isolated places in the universe. One of the most notable of these dark absorption nebulae is a cloud toward the constellation Ophiuchus known as Barnard 68, pictured above. That no stars are visible in the center indicates that Barnard 68 is relatively nearby, with measurements placing it about 500 light-years away and half a light-year across. It is not known exactly how molecular clouds like Barnard 68 form, but it is known that these clouds are themselves likely places for new stars to form. In fact, Barnard 68 itself has been found likely to collapse and form a new star system. It is possible to look right through the cloud in infrared light.


June 10, 2016 at 09:55AM

Egyptology can help us future-proof our culture – Grayson Clary – Aeon


Talk like an Egyptian

If we want to safeguard our languages, stories and ideas against extinction, we had better study Egyptology

Grayson Clary 2,900 words
Egypt 1958. At the tomb of Ramses II. Photo by Elliott Erwitt/Magnum


Consider the New Kingdom Egyptian. He can be forgiven for thinking his state sits at creation’s centre, a kernel of order anchoring the known world. If he lives during the reign of Ramses II, his Egypt has already governed Northeast Africa for the better part of 2,000 years. Trouncing Hittites at Kadesh, Ramses will confirm Egypt as the preeminent military power in the region and, for any Egyptian then living, the entire human fraction of the cosmos. That, at least, is what official accounts will show. Bombastic descriptions of the battle will decorate monuments across the empire.

A millennium and a bit later, not a living soul will be able to read them.

The scientific community has recently begun to think hard about natural and technological existential risks to human beings: a wandering asteroid, an unfortunately timed gamma-ray burst, a warming planet. But we should also begin to think about the possibility of cultural apocalypse. The Egyptian case is instructive: an epoch of stunning continuity, followed by abrupt extinction. This is a decline and fall worth keeping in mind. We should be prepared for the possibility that humankind will one day have no memory of Milton, or for that matter Motown. Futurism could do with a dose of Egyptology.

Western obsession with Ancient Egypt – Egyptomania – has always drawn on the split personality of its legacy: its suggestively modern face and its alien distance. Its zeitgeist is very nearly intelligible, but not quite. Here was a culture obsessed with writing. One Egyptian cosmogony gives credit to Ptah for creation through the Word, though other traditions put the cause down as Atum’s spit or semen.

Still, for all its carven glyphs, Egypt cannot claim to have passed down its dreams, memories and hopes for the future. Some of its civilisation has been recovered, but some was lost irretrievably. This is sobering enough on its own terms. When you examine our beloved present day from an Egyptological distance, you see that we are vulnerable to a similar fate.

The predicament is neatly captured in one of the best-known works of Egyptian literature, The Tale of the Shipwrecked Sailor. This Middle Kingdom story is exactly what it says on the tin, straightforward enough that its translation was assigned in my first year of Hieroglyphic Middle Egyptian at Yale University. The sailor of the title meets a ferocious storm, washes ashore on a phantom island, and enjoys several conversations with an impressively ornate serpent. The copyist scribbles: ‘His beard, it was more than two cubits long. His body was overlaid with gold. His eyebrows were lapis lazuli, truly.’ Enlightened, the sailor is duly rescued.


Thanks to the survival of this particular papyrus, we have in hand the ancient bones of an adventure tale, one that’s washed ashore in virtually every cultural tradition (whether contrived independently or not). World literature is littered with shipwrecked sailors, cast overboard on this, that or the other mystical journey. Indeed, the story’s skeleton is recognisable enough that an illustrated children’s book edition is available.

And yet, the crucial ending of the story remains inaccessible. The Tale of the Shipwrecked Sailor is couched in a frame story; the account of wreck and recovery is meant to reassure a courtier, newly returned from a naval expedition to Nubia, as he readies to address the pharaoh. The tale’s sense – its moral – hinges on his response to this fantastic account.

That answer is a rhetorical question:

(‘Who,’ he asks, ‘gives water to a goose as the land brightens for the morning of its slaughter?’)

Who indeed? Well, scholars don’t exactly agree. Some see a glib dismissal of the sailor’s happy ending, others a reference to a very real ritual function. In any case, for the modern reader, the story’s ultimate meaning amounts to: huh?

What we have here is a failure to communicate. It’s as though an alien race has found Voyager’s Golden Record, the greatest hits of Earth-kind from Jimmy Carter to whale song. They manage to puzzle out its contents, and yet they find that it carries nothing but puns.

Without language, a people is left with very little. Think of Lee Harvey Oswald learning Russian in Don DeLillo’s novel Libra (1988): ‘Working with her, making the new sounds, watching her lips, repeating words and syllables, hearing his own flat voice take on texture and dimension, he could almost believe that he was being remade on the spot, given an opening to some larger and deeper version of himself.’ As Martin Heidegger put it in 1947: ‘Language is the house of Being. In its home man dwells.’

Imagine the pharaohs’ frustration at all the bits of language lost, the prayers and tributes especially. This was a civilisation that had its eyes fixed on eternity. Its civil calendar was apparently keyed to the heliacal rising of Sothis, whose astronomical cycle has a period of some 1,400 years. By dint of longevity, the first Egyptologists were Egyptian, and ditto the first tomb robbers. Is it a bridge too far to say the first futurists were Egyptian too?

And yet, its hieroglyphic script suffered through centuries of illegibility. The last hieroglyphic inscription – the last one with a convenient date attached – is at Philae, a small Nile island that hosted a temple to the goddess Isis (the island was recently submerged, and the temple complex relocated). That clutch of glyphs dates to 394 AD, but not until the 19th century would they be comprehensible again.


That the language was recovered at all is a minor miracle. In the years after its extinction, Egyptian refused to yield up its secrets to an onslaught of wrong-headed decipherments. Pseudo-scholars would read outlandish allegorical meanings into an ibis, the forelegs of a lion, a windpipe and heart. The language’s trick, of course, is that it isn’t fully phonographic or ideographic.

The Pharaohs and their scribes would be unheard until the French scholar Jean-François Champollion reached back through more than a millennium to rescue them. The tale of this miracle is shopworn. In 1798 Napoleon’s armies swept into Egypt with teams of scientists in tow, and stumbled onto the Rosetta Stone. They would later surrender the stone to the British, but casts of it circulated among European museums and scholars, including Champollion. The stone has become romantic shorthand for the cracking of a code, but it is, all things considered, an unexciting, bureaucratic text. And yet, it owes its air of romance to bureaucratic necessity, for the stone records the same message three times, in three scripts, one of which was well understood in Napoleon’s day. The trilingual stone saved what could be saved from an entire civilisation’s cultural memory. It was the time machine by which ancient Egypt travelled into the future.

Egypt’s story isn’t the only example of decline, fall and resurrection. The decipherment of hieroglyphs is a particular case of a more general problem: retrieving information across vast cultural divides and immense stretches of time is difficult. Cretan hieroglyphs remain impenetrable, Olmec – the language of the first major civilization in Mexico – is largely a mystery, and only within the past half-century or so has meaning been teased from the Mayan script. For every civilisation retrieved, another remains substantially beyond our comprehension. And for all the millennia it spent plotting immortality, Egypt’s resurrection was a happy accident.

But what if we could systematise that luck, to make sure that our own achievements never vanish? What if we could design a Rosetta Stone on purpose – one that might someday rescue us from the dustbin of history?

When it comes to creating exceptionally durable records, the physical challenges aren’t half as intractable as they might seem. I put the issue to Anders Sandberg, a research fellow at the Future of Humanity Institute at the University of Oxford, who quickly listed a whole slew of plausible methods for very long-term information storage: write it into DNA, engrave it on glass or sapphire at the nano-scale, store it in a ‘million-year’ tungsten and silicon nitride hard-drive. The Moon, he volunteered, would be a better place to station an archive than Earth.

Of course, retrieving those records would be quite the feat. The institute’s director, Nick Bostrom, framed it to me this way: ‘It is one kind of challenge to preserve information that a technologically mature civilisation could eventually discover and retrieve, and another kind of challenge to preserve information in a form that would be helpful to a primitive civilisation that needs simple clues to help it back up to our current level.’ The first amounts to vanity publishing on an epochal scale, but the second just might keep the flame of our current civilisation alive.


When constructing an archive for a stranger, it’s imperative to keep in mind the terms of discovery. How much information does the recipient need in hand to make sense of the archive, or to know that it is an archive in the first place? Bostrom told me it would be easy to communicate with a sufficiently developed species. ‘I think practically any record that we could create that we could also read,’ he said, ‘would be intelligible to an advanced future civilisation, provided only that we preserve a sufficient amount of text.’ Sandberg sounded a similar positive note: ‘We humans have figured out both honeybee dances and ant pheromone trails.’

If the arc of human history bends towards super-intelligence, our memories will be in good hands, but that doesn’t mean we’re in the clear. As Laura Welcher, a linguist with an interest in endangered languages, and the director of operations for the Long Now Foundation, put it: ‘In the very long term, I think we have to expect possible discontinuities.’ In that scenario, the challenge is trickier – how to create a library labelled, as simply as possible: ‘In case of apocalypse, read me.’

The Long Now Foundation in San Francisco has been puzzling over this problem. The foundation was set up 18 years ago to build a culture of extremely long-term thinking. Its house style is to write our current year as 02014 – to solve the Y10K problem, and to leave room for millennia 11 through 100. The Long Now is trying to do the opposite of archaeology. They are building ‘intentional artifacts’ in order to seed the world with secrets for the finding.

Last year, the foundation caught a flurry of attention for its 10,000-Year Clock: a timepiece sheltered in a Texas mountain, meant to run for millennia. The hook for most media outlets was the involvement of Amazon founder Jeff Bezos, whose maverick millions gave credibility to a project wrapped in fantastically romantic language. ‘[As] long as the Clock ticks,’ the foundation’s board member Kevin Kelly wrote for the project’s website, ‘it keeps asking us, in whispers of buried bells, “Are we being good ancestors?”’

Kelly always capitalises the Clock, and he likes to attach active verbs to it, as if the thing has a mind of its own. Steven Inskeep, the radio presenter of NPR’s Morning Edition, put the natural question to the Clock’s designer Danny Hillis: ‘Human nature being what it is, you still have to wonder if those future people discovering your clock might just go to wildly wrong conclusions – oh, this was their God that they worshipped in the mountain. Or who knows what else?’

That impression is, substantially, the point. The foundation wants its projects to have a ‘mythic’ feel, Welcher told me, the better to create a lasting community. The 10,000-Year Clock fits the bill, sitting right on the glinting knife’s edge between technology and magic. But this is just one way of being ‘good ancestors’. The Long Now has a smaller, and more plainly useful project that draws on the decipherment of Egyptian. An archive of human language, it could fairly be described as a whole world in your hand. The Foundation calls it the Rosetta Disk.


The Rosetta Disk takes the principle of the Rosetta Stone to its practical extreme: massive parallelism for maximum intelligibility. A nickel puck just 2.8in across, the Disk is etched all over, microscopically, with more than 13,000 pages in more than 1,500 languages. Champollion, eat your heart out. A glass globe shelters the Disk against wear, tear and elemental abuse. This isn’t the library itself, emphasises Welcher, who heads the Rosetta Project. Instead, she calls the Disk a ‘decoder ring’ or a ‘card catalogue’. It’s the map, not the territory – a means of (re)discovery, in case all else is lost.

In designing a ‘library of civilisation’, the Long Now didn’t look to Voyager-style probes or cutting-edge hard drives. Instead, Welcher says, the challenge was: ‘Can we do better than paper?’ Digital formats decay at extraordinary speed – blink and you miss them. Indeed, some NASA data was temporarily lost due to software and hardware changes. Egyptian papyri have endured. On millennial scales, Welcher advises, go analogue.

The other design challenge was to signal the Disk’s content. Reading it requires an optical microscope, which means that a reader must have reason to think that the Disk should be examined under a microscope.

This design problem becomes tricky over very large timescales, especially if the archivist assumes nothing of her audience’s language skills and technical development. It’s a question that comes up when nuclear waste sites are discussed: how do you communicate ‘do not dig here’ across time, to people for whom most of your signposts are unintelligible? Some have suggested that grotesque sculptures could do the trick, but those sculptures could be interpreted as hiding treasure. In 1984, a pair of linguists proposed creating ‘ray cats’ that would glow in response to radiation. They would then compose a folklore of songs and tales that would preserve the notion that glowing cats signal danger. Carl Sagan, clearly phoning it in, recommended a skull and crossbones.

The foundation’s solution to this problem is elegant, a shrinking, in-spiraling inscription that reads ‘Languages of the World’ in English, Spanish, Russian, Swahili, Arabic, Hindi, Mandarin and Indonesian. So long as one of those languages lives, the Disk’s pitch – ‘Read me!’ – is legible. But where to put it? If mass distribution can’t be managed (the Disks are fairly expensive), the setting ought to communicate the artifact’s importance. Take the Phaistos Disk, a resolutely mysterious object recovered from beneath a Minoan palace on the Greek island of Crete. Its glyphs remain un-deciphered, but its preciousness is announced loud and clear by its burial underneath a palace. So long as we treat our intentional artifacts with a certain degree of reverence, future re-discoverers will plausibly do the same.

The Long Now Foundation has found at least one symbolically resonant home for its Disk. A copy is ensconced on Rosetta, the European Space Agency’s comet probe, whose plucky lander was named for Philae. As long as the Rosetta Orbiter circles the Sun, it guards the memories of the whole human race.

The canonical allusion for ephemerality is Percy Bysshe Shelley’s sonnet ‘Ozymandias’ (1818), Ozymandias being another name for Egypt’s Ramses II. As the line goes: ‘My name is Ozymandias, king of kings/Look on my works, ye Mighty, and despair!’ ‘Nothing beside remains,’ writes Shelley. ‘Round the decay/of that colossal wreck, boundless and bare,/the lone and level sands stretch far away.’

Shelley’s poem is the best known, but Horace Smith’s on the same subject, which was published a month after his friend Shelley’s, better suits the futurist lens. Smith wrote:
We wonder, – and some Hunter may express
Wonder like ours, when thro’ the wilderness
Where London stood, holding the Wolf in chace,
He meets some fragment huge, and stops to guess
What powerful but unrecorded race
Once dwelt in that annihilated place.

Consider Smith the father of Egyptological futurism. But whatever precautions we take against cultural annihilation, one intractable problem remains. There is a gorgeously convoluted name for those words that, in a given corpus, appear only once: hapax legomena. Any beginning student of hieroglyphs is bound to come across these in Raymond O Faulkner’s Concise Dictionary of Middle Egyptian, and they tend to spark equal parts despair and delight. Delight because you can be sure you’ve gotten the transliteration right, and your work is done. Despair because, if thin context clues don’t suffice, there is no puzzling out what it means. The word is quite simply dead.
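The notion of a hapax legomenon is easy to make concrete in code: a word that occurs exactly once in a given corpus. A minimal sketch, with a toy sentence standing in for a real corpus:

```python
from collections import Counter

def hapax_legomena(corpus):
    """Return the words that occur exactly once in the corpus."""
    words = corpus.lower().split()
    counts = Counter(words)
    return [w for w, n in counts.items() if n == 1]

corpus = "the cat sat on the mat while the dog slept"
print(hapax_legomena(corpus))  # every word but 'the' appears just once
```

In a real corpus the list would also need tokenisation and normalisation, but the principle is the same: the smaller the surviving corpus, the longer this list grows.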

When a word goes, a bit of the culture goes with it. The losses snowball, referents fade, and allusions die with them. ‘Who,’ after all, ‘gives water to a goose as the land brightens for the morning of its slaughter?’ Bostrom’s caveat strikes back. Everything is recoverable, ‘provided only that we preserve a sufficient amount of text.’

The solution is only partly technical. Even paper is technology enough for the next few millennia. The challenge is to maximise the surviving corpus. Egypt’s literate classes fell foul of that calculus, guarding jealously the hieroglyphic script. The first commandment of Egyptological futurism, then, would be to ward off at all costs the dead words, those blasted hapax legomena. In other words, for humanity’s sake – write.

12 December 2014


How horizontal gene transfer shakes up evolution – Ferris Jabr – Aeon


Photo by Thaddeus McRae

Fay-Wei Li stepped out of his car and looked around. There was not much to see aside from an old wooden fence and a soggy ditch strewn with roadside detritus. Could this really be the spot? A biologist at Duke University, Li had driven seven hours from North Carolina to these exact coordinates in Florida in search of hornworts: the living descendants of some of the very first land plants.

An hour’s search yielded nothing except an uneasy feeling about trespassing on nearby residences. Before giving up, Li checked one more spot. There, in that ditch full of trash, he found them. Most people would probably mistake the spring-green bristles for blades of grass, but Li recognised the hornworts right away. He plunged his hands into the soil, scooped up the rootless plants, and packed them in a plastic cooler. A humble package of earth and herbage, but one that would rewrite a chapter in the evolutionary history of plants. Long ago, hornworts did something plants are not supposed to do: they breached the species barrier, trading DNA with an entirely different kind of plant – a fern.

Between 300 and 130 million years ago, as trees and flowering plants grew to dominate the globe, the sun-loving ferns of yore found themselves trapped beneath forest canopies. Most fern species perished under this umbrage, but the ones that survived learned to live on lean light. These persistent plants evolved a molecule called neochrome that could detect both red and blue light, helping them stretch towards any beams that managed to filter through the dense awning of leaves.

Neochrome’s origins have long eluded scientists. As far as anyone knew, the gene that codes for neochrome existed in only two types of plants separated by hundreds of millions of years of evolution: ferns and algae. It was extremely unlikely that the gene had been passed down from a common ancestor, yet somehow skipped over every plant lineage between algae and ferns. About two years ago, while searching through a new massive database of sequenced plant genomes, Li found a near-exact match for the neochrome gene in a group of plants not previously known to possess the light-sensitive protein: hornworts. Through subsequent DNA analysis of living specimens – like those he collected in Florida – Li confirmed his suspicion: ferns did not evolve neochrome on their own; rather, they took the gene from hornworts.
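Li’s database hit ultimately rests on a simple idea: scoring how similar two aligned sequences are. A minimal sketch of percent identity, using toy stand-in sequences rather than real neochrome data:

```python
def percent_identity(a, b):
    """Fraction of matching positions between two pre-aligned, equal-length sequences."""
    if len(a) != len(b):
        raise ValueError("sequences must be pre-aligned to equal length")
    matches = sum(x == y for x, y in zip(a, b))
    return matches / len(a)

# Toy fragments standing in for fern and hornwort neochrome (not real data)
fern = "ATGGCGTTAC"
hornwort = "ATGGCGTTAT"
print(percent_identity(fern, hornwort))  # 0.9
```

Real searches use alignment tools that handle insertions and deletions, but a near-exact match between two lineages separated by hundreds of millions of years is exactly the kind of signal that made Li suspicious.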

The fern’s lifecycle offers some clues as to how this happened. Ferns alternate between two distinct modes: their familiar feathery adult forms, and glistening heart-shaped lobes known as gametophytes. Gametophytes produce and secrete sperm and eggs that must fertilise each other – or find gametes on another gametophyte – in order to produce a new adult fern. Thus exposed, the fern’s gametes could easily come into contact with the similarly liberated sperm and eggs of hornworts, which tend to congregate in the same moist spots on the forest floor. If damaged or malformed gametes from both plants found one another, they could have traded DNA across their broken membranes before fusing with one of their own kind.

Scientists have known for many decades that prokaryotes such as bacteria and other microorganisms – which lack a protective nucleus enveloping their DNA – swap genetic material with each other all the time. Researchers have also documented countless cases of viruses shuttling their genes into the genomes of animals, including our own.

What has become increasingly clear in the past 10 years is that this liberal genetic exchange is definitely not limited to the DNA of the microscopic world. It likewise happens to genes that belong to animals, fungi and plants, collectively known as eukaryotes because they boast nuclei in their cells. The ancient communion between ferns and hornworts is the latest in a series of newly discovered examples of horizontal gene transfer: when DNA passes from one organism to another generally unrelated one, rather than moving ‘vertically’ from parent to child. In fact, horizontal gene transfer has happened between all kinds of living things throughout the history of life on the planet – not just between species, but also between different kingdoms of life. Bacterial genes end up in plants; fungal genes wind up in animals; snake and frog genes find their way into cows and bats. It seems that the genome of just about every modern species is something of a mosaic constructed with genes borrowed from many different forms of life.

‘What scientists have seen is just a little tip of an immense iceberg,’ says Antonio Teixeira, a biologist at the University of Brasilia. W Ford Doolittle, a biochemist at Dalhousie University in Nova Scotia, agrees: horizontal gene transfer, he wrote recently, ‘is far more pervasive and more radical in its consequences than we could have guessed just a decade ago’. Researchers have now discovered so many examples of gene transfer between species and kingdoms of life – with many more surely to come – that they have to adjust their understanding of how evolution works. Standard evolutionary theory does not account for the possibility of complex organisms suddenly acquiring genes from other species, let alone how those foreign genes might change a creature for better or worse. Think of it this way: if the genomes of living species are flowers on different branches of the great evolutionary tree of life, horizontal gene transfer is a subversive wind whipping pollen from one part of the tree to another.

The first hints of horizontal gene transfer among complex organisms emerged several decades ago. In the 1940s, at Cold Spring Harbor Laboratory in New York, Barbara McClintock discovered that certain genes in corn plants could pop out of one position on a chromosome and move to another. The extent to which this transposition happened in a particular kernel determined its unique pattern of colourful speckles. McClintock’s pioneering work demonstrated for the first time that a genome is highly dynamic, not forever fixed in one order.

That was a difficult concept for many scientists to accept. By the 1970s, however, other researchers had discovered ‘jumping genes’, or transposons, in much more than just corn, and the scientific community at large finally began to celebrate McClintock’s work, which earned her the Nobel Prize in 1983. Scientists now know that transposons are extremely abundant and often constitute large portions of a given genome: they make up more than 85 per cent of the maize genome and about half of our own. Some slice themselves out of one spot on a chromosome and move to another; others take a copy-and-paste approach, quickly multiplying. To make these jumps, transposons rely on two main strategies: either they include a genetic sequence encoding an enzyme known as transposase, which can chop a transposon out of its current location and reintroduce it elsewhere; or they use a different set of enzymes to produce strings of RNA that are translated into DNA and woven back into the host genome.
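The two strategies can be caricatured in a few lines of code, treating the genome as a list of elements. The `cut_and_paste` and `copy_and_paste` helpers below are illustrative toys, not models of real transposase biochemistry:

```python
import random

def cut_and_paste(genome, element, rng):
    """'Cut' style: the element is excised and reinserted elsewhere."""
    g = genome.copy()
    g.remove(element)  # transposase excises the first occurrence
    g.insert(rng.randrange(len(g) + 1), element)  # ...and reintegrates it
    return g

def copy_and_paste(genome, element, rng):
    """'Copy' style: an RNA intermediate is written back in; the original stays put."""
    g = genome.copy()
    g.insert(rng.randrange(len(g) + 1), element)
    return g

rng = random.Random(0)
genome = ["geneA", "TE", "geneB", "geneC"]
print(len(cut_and_paste(genome, "TE", rng)))   # 4: genome size unchanged
print(len(copy_and_paste(genome, "TE", rng)))  # 5: the genome grows
```

The copy-and-paste route is why transposons can balloon to 85 per cent of a maize genome: every jump leaves the original behind.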

These are exactly the notions that have unravelled in the past decade as researchers have turned up one new case of gene transfer after another

Around the time of McClintock’s vindication, scientists stumbled upon a particularly prominent transposon in fruit flies. At the University of Arizona, Margaret Kidwell was mating laboratory-raised females of the fruit-fly species Drosophila melanogaster with males caught from the wild. Kidwell was surprised to discover that the offspring of her matchmaking were sterile and rife with crippling genetic mutations.

Further experiments revealed that the source of these aberrations was a transposon later dubbed the P element, and that this mobile gene had infiltrated just about every wild population of D melanogaster sometime in the previous 50 years. By confining some groups of fruit flies to laboratories for so many decades, scientists had protected them from this infestation. Whereas wild flies had evolved strategies to repress the genetic chaos triggered by the P element, laboratory strains had not. So their hybrid offspring were vulnerable. Making things even stranger, researchers discovered that the P element originally jumped to wild D melanogaster populations from another fruit fly species, Drosophila willistoni.

Although the two fly species live in the same areas, they are sexually incompatible – so how did the P element make its extraordinary leap? One of Kidwell’s colleagues, Marilyn Houck, suspected that a mite known as Proctolaelaps regalis was the gene-smuggler. The mite regularly parasitises both D melanogaster and D willistoni, using its needling mouthparts to suck up nutrients from fruit fly eggs and larvae. Such a parasite could conceivably transfer DNA from the egg of one fruit fly species to another. Follow-up studies showed that mites feeding on fruit flies did indeed harbor the P element.

The P element was a dramatic example of just how dynamic genes could be – of their potential to disregard the boundaries between different species’ DNA and shape an organism’s evolution. Horizontal gene transfer was partly responsible for reproductively isolating lab-bred populations of fruit flies from wild ones – a major step on the way towards speciation. Still, most biologists viewed horizontal gene transfer among insects and other animals as something of an anomaly. Yes, bacteria and viruses exchanged DNA on a daily basis. But when it came to animals, plants and fungi, such genetic trespassing was surely rare overall and, in most cases, of little importance.

These are exactly the notions that have unravelled in the past decade as researchers have turned up one new case of gene transfer after another. ‘There was a time when we didn’t even realise that transposons could come from other species,’ says Cedric Feschotte of the University of Utah. ‘Now it seems our own genome is a patchwork of raw genetic material coming from different places with different histories – that to me is very profound. Even the largest eukaryote genomes have this patchwork origin to them.’

In the mid-2000s, Feschotte and his colleagues noticed some unusual patterns among the sequenced genomes of various mammals. Again and again, the lineage of certain DNA segments failed to align with established evolutionary relationships. They would find, for example, nearly identical sequences of DNA in mice and rats, but not in squirrels; and the same sequence would turn up in nocturnal primates known as bushbabies, but not in other primate species. It was highly unlikely that mice, rats and bushbabies had independently evolved the exact same chunk of DNA. Further complicating things, these puckish strings of DNA were not in the same position on the same chromosome in different species, as you would expect if they had been inherited the traditional way – rather, their locations were highly variable.
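The logic of spotting such anomalies can be sketched as a filter: flag sequence pairs that are far too similar for how distantly related their host species are. The species names and identity scores below are toy assumptions, not real data:

```python
def flag_hgt_candidates(identity, close_pairs, threshold=0.9):
    """Flag species pairs whose sequences are suspiciously similar
    even though the species are NOT close relatives."""
    flags = []
    for (a, b), ident in identity.items():
        if ident >= threshold and frozenset((a, b)) not in close_pairs:
            flags.append((a, b))
    return flags

# Toy pairwise identity scores for some shared DNA segment
identity = {
    ("mouse", "rat"): 0.95,       # close relatives: high identity expected
    ("mouse", "bushbaby"): 0.93,  # distant relatives: suspiciously high
    ("mouse", "squirrel"): 0.40,  # distant and dissimilar: unremarkable
}
close_pairs = {frozenset(("mouse", "rat"))}
print(flag_hgt_candidates(identity, close_pairs))  # [('mouse', 'bushbaby')]
```

Real analyses build full phylogenetic trees for each gene and compare them against the species tree, but the underlying test is the same: inheritance that fails to follow ancestry.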

On its epic journey through the tree of life, BovB has jumped between species at least nine times, and seems to have moved from reptiles to mammals

The reason, Feschotte and colleagues discovered in 2008, is that these DNA sequences were not vertically inherited genes; rather, they belonged to a widespread family of transposons, which the scientists dubbed SPACE INVADERS, or SPINs for short. SPINs have managed to insert themselves into the genomes of tenrecs, little brown bats, opossums, green anole lizards and African clawed frogs, in addition to bushbabies, mice and rats. In each of these species’ genomes, the transposons have multiplied either themselves or abbreviated forms of themselves thousands of times. And, in at least one case, mice and rats have adopted a SPIN transposon as one of their own, turning it into a functional gene that is actively read by the cellular machinery that translates genes into proteins, though its exact role remains a mystery. Over the past 30 million years, several SPINs have infiltrated the little brown bat’s genome and replicated an enormous number of times. This amplification coincides with one of the swiftest periods of speciation in the bat’s evolutionary history. It is by no means conclusive proof that horizontal gene transfer encouraged the speciation, but it is suggestive.

A different kind of transposon – one of the copy-and-paste variety – has spread through an equally diverse group of animals. In 2012, David Adelson and Ali Walsh at the University of Adelaide, together with their colleagues, discovered that the transposon BovB – first found in cows (hence the bovine epithet) – is also present in anoles, opossums, platypuses, wallabies, horses, sea urchins, silkworms and zebrafish, to name a few. Once again, vertical inheritance via traditional evolutionary relationships could not explain the transposon’s haphazard materialisation here and there. On its epic journey through the tree of life, BovB has jumped between species at least nine times, and seems to have generally moved from reptiles to mammals.

How does one little piece of DNA get into all those distantly related creatures living in such different places – animals that likely never even encountered one another, let alone mated? It probably enlists the help of organisms that have mastered the art of hitchhiking: ticks. Adelson, Walsh and colleagues found BovB in several tick species known to vampirise reptiles. Likewise, a couple of years after first discovering SPINs, Feschotte and colleagues found them yet again in two creatures that – just like the mite with an appetite for fruit fly eggs – have the potential to transmit transposons from one animal to another: a blood-sucking insect known as the kissing bug (Rhodnius prolixus), which feeds on birds, mammals and reptiles alike; and the pond snail (Lymnaea stagnalis), which is host to many parasitic flatworms that infect various vertebrates. Alone, the kissing bug and pond snail cannot explain all of SPINs’ conquests; their habitats overlap with many but not all of the vertebrates that contain the transposons. But the available evidence suggests that this six-legged parasite and shelled parasite hotel are two key accomplices that allowed SPINs to infiltrate so many different animal lineages within the past 50 million years.

Sometimes, parasites transfer far more than a single gene into the genomes of their hosts. Like many insects, the fruit fly species Drosophila ananassae is home to parasitic bacteria known as wolbachia, typically found in an insect’s sex organs. Through a series of gene‑sequencing studies, scientists have confirmed that the wolbachia species living inside D ananassae has shuttled not just one, but all of its 1,206 genes into the fruit fly’s DNA. Consider this: insects are collectively the most numerous animals on the planet; wolbachia infects between 25 and 70 per cent of all insect species, and it’s probable that wolbachia has successfully completed such genetic mergers in far more than fruit flies. Think of the quintillions of insects in the world – all those buzzing, bristling, bug-eyed creatures. At their very core, most of them might not be individual organisms but at least two beasts in one.

Recently, while studying a virus that preys on wolbachia, Jason Metcalf and Seth Bordenstein of Vanderbilt University in Tennessee discovered the Napoleon of horizontal gene transfers: a little gene that has conquered every kingdom of life. The virus in question attacks and kills wolbachia using a gene named GH25-muramidase, which encodes an enzyme that can perforate bacterial cell walls. When Metcalf and Bordenstein traced the evolutionary lineage of GH25, they discovered a pattern of inheritance that looked anything but typical. The GH25 gene was scattered throughout the tree of life: in bacteria, plants, fungi and insects. This particular gene seems to have moved fluidly through the microbial world and then hopped laterally to viruses, plants, fungi and insects living in close association with different kinds of bacteria. ‘Every organism needs to fight bacteria off,’ Metcalf says. ‘If they can get a new method of antibacterial defence, that’s a huge evolutionary advantage for them.’

in Japan, some people’s gut bacteria have stolen seaweed-digesting genes from ocean bacteria lingering on raw seaweed salads

One of the most clear-cut instances of horizontal gene transfer is the story of the fungus and the pea aphid. Some fungi, plants and bacteria have genes encoding carotenoids, a diverse class of colourful molecules involved in everything from photosynthesis and vision to camouflage and sexual attraction. No one had ever found such genes in animals, though. In all known cases, animals acquired carotenoids from their diet (for instance, flamingoes become red and pink from eating plankton). In late 2009, Nancy Moran, an evolutionary biologist then at the University of Arizona, stumbled onto the fact that pea aphids have a carotenoid gene.

‘More than 270 million years ago, a lone aphid likely attained a carotenoid gene from a fungus’. Photo courtesy Wikipedia

Scientists already knew that pea aphids appear green or red depending on the carotenoids in their bodies, and that aphid populations shift their colours in response to certain threats: green aphids are more susceptible to parasitic wasps; red aphids are more vulnerable to ladybirds. But the origin of the pigments had always been something of a mystery. Aphids primarily feast on sap, which does not contain many carotenoids. And pea aphids were often found with very different carotenoids than the ones inside the plants they were eating. When Moran compared the aphid’s pigment genes with those in many different creatures, the closest match was in a family of fungi. More than 270 million years ago, a lone aphid likely attained a carotenoid gene from a fungus – perhaps one that was infecting it, or one it was munching. Other scientists have since discovered that spider mites and gall midges have also acquired carotenoid genes from fungi and bacteria.

Shake any branch on the tree of life and another astonishing case of interspecies gene transfer will fall at your feet. Bdelloid rotifers – tiny translucent animals that look something like sea slugs – have constructed a whopping eight per cent of their genome using genes from bacteria, fungi and plants. Fish living in icy seawater have traded genes coding for antifreeze proteins. Gargantuan-blossomed rafflesia have exchanged genes with the plants they parasitise. And in Japan, some people’s gut bacteria have stolen seaweed-digesting genes from ocean bacteria lingering on raw seaweed salads.

At this point, the tally is too high to ignore. Scientists can no longer write off gene-swapping among eukaryotes – and between prokaryotes and eukaryotes – as inconsequential. Clearly genes have all kinds of ways of journeying between the kingdoms of life: sometimes in large and sudden leaps; other times in incremental steps over millennia. Granted, many of these voyages are probably futile: a translocated gene finds itself to be utterly useless in its new home, or becomes such a nuisance to its genetic neighbours that it is evicted. Laterally transferred genes can be imps of chaos, gumming up or refashioning a genome in a way that is ultimately disastrous – perhaps even lethal to a species. In a surprising number of instances, however, wayfaring genes make a new life for themselves, becoming successful enough to change the way an organism behaves and steer its evolution.

The fact that horizontal gene transfer happens among eukaryotes does not require a complete overhaul of standard evolutionary theory, but it does compel us to make some important adjustments. According to textbook theories of evolution, the major route of genes moving between organisms is parent to child – whether through sex or asexual cloning – not this sneaky business of escorting genes between unrelated organisms. We must now acknowledge that, even among the most complex organisms, vertical is not the only direction in which genes travel.

Likewise, standard theory says that mutations are supposed to happen within a species’s own genome, not come from somewhere else entirely. We now know that the appearance of new genes does not necessarily result from tweaks to native DNA, but might instead represent the arrival of far-flung visitors. ‘We need to start thinking about genomes as ecological units rather than monolithic units,’ says Jack Werren of the University of Rochester in New York, one of the scientists who discovered the wolbachia/fruit fly Russian doll. ‘We’re dealing with a new category by which unique genes can evolve.’


In some cases, this genetic hopscotching ‘could exert a very powerful evolutionary force’, says Li. ‘It can introduce novelties that cannot be achieved by gradual genetic mutations.’ Consider that a plant acquiring a gene from a bacterium, or an aphid from a fungus, is not receiving some half-constructed genetic prototype. Rather, it gets the benefit of all the aeons of natural selection that have whittled that gene in another creature, honing its power. An introduced gene might need some tweaks before it whirs in sync with its new neighbours, but it could be closer to such harmony than a de novo mutation that was caused by, say, a cell-division error or UV radiation. Horizontal gene transfer opens the possibility of a creature instantaneously acquiring a gene-trait combo that its own genome would have been unlikely to invent by itself.

Laterally transferred genes can sway evolution’s tiller in more subtle ways, too. Certain types of introduced genes duplicate themselves many times over, often leaving behind either little bits and pieces or entire replicas. In the process, they can rearrange large chunks of native DNA, change the way certain genes are expressed, or create whole new genes out of all this shuffling. By making a host genome larger and more diverse, these genetic immigrants increase the probability of copying and editing errors, some of which can be serendipitous and spur rapid evolution, as might have happened with the little brown bat.

We can unite these various corollaries to standard evolutionary dogma by re-imagining the tree of life. In the classic textbook depiction, the tree of life has a single trunk that splits into three big domains – bacteria, archaea (which resemble bacteria but are genetically and molecularly distinct) and eukaryota. These three domains of life branch into all known species. Every creature that ever existed presumably ‘descended from some one primordial form’, as Charles Darwin put it in 1859. And genes ostensibly flow in one direction: up from the trunk.

Scientists such as Ford Doolittle and Carl Woese at the University of Illinois have argued that this portrayal is an oversimplification. Rather than rising from a single trunk, they say, the tree of life stands on an interweaving root system. Rather than evolving from one ‘last universal ancestor’, all life arose from a communal pool of primitive cells with unbridled zeal for exchanging DNA. For relatively simple cells with only a handful of genes each, swapping DNA was an excellent strategy for acquiring and preserving the best adaptations around.

At some point, Woese proposed, cells reached a certain threshold of complexity at which it became detrimental to embrace a bombardment of foreign genes. A primordial cell harbouring a small group of genes can potentially gain a lot by adding new genes to its repertoire; but a more sophisticated cell with hundreds or thousands of genes risks imbalancing an intricate genome fine-tuned by a longer period of natural selection. So, complex eukaryotic cells evolved new ways to protect their DNA and expunge genetic invaders.

However, as has become clear in the past decade, horizontal gene transfer did not halt among eukaryotes and their microbial denizens. A mischievous breeze continued to blow DNA this way and that, from one branch on the tree of life to another. Wolbachia, pea aphids and hornworts all encourage us to accept a truth that seems unsettling at first, but ultimately invites us into greater communion with all life on the planet.

we can no longer pretend that gene-mixing between species is ‘unnatural’, that it is some misguided practice that would never exist if not for our meddling latex-gloved hands

There seems to be a notion in the public consciousness that the DNA of one species should not mix with the DNA of another. This belief becomes especially clear in the ongoing debate about genetically modified organisms (GMOs). Opponents frequently argue that the kind of gene transfers scientists make between different species would never happen outside the lab. Putting a wheat gene into a chestnut tree, or a bacterial gene into corn, or a fish gene into a tomato? Surely that’s unnatural. The ostensible perversion of mixing genes is struck like a gong, again and again. The supermarket chain Whole Foods, for example – which counsels its customers on how to avoid genetically modified foods – defines GMOs as ‘organisms whose genetic make-up (DNA) has been altered in a way that does not occur naturally.’

But it does. Genetic promiscuity is far more prevalent in nature than we realised. This fact alone is not an argument in favour of GMOs; simply because something occurs in nature without assistance from humans does not mean it is inherently good or bad. Confronted with this fact, however, we can no longer pretend that gene-mixing between species is ‘unnatural’, that it is some misguided practice that would never exist if not for our meddling latex-gloved hands. We did not invent gene transfer; DNA did. Genes are concerned with one thing above all else: self-perpetuation. If such preservation requires a particular gene to adapt to a genome it has never encountered before – if riding a parasite from one species to another turns out to be an extremely successful way of guaranteeing perpetuity – so be it. Species barriers might protect the integrity of a genome as a whole, but when an individual gene has a chance to advance itself by breaching those boundaries, it will not hesitate.

That’s the thing about DNA: its true loyalty is to itself. We tend to think of any one species’s genome as belonging to that species. We have a strong sense of ownership over our genes in particular – an understanding that, even though our genome overlaps with that of other creatures, it is still singular, is still ‘the human genome’. So strong is our possessiveness that the mere idea of mixing our DNA with another creature’s – of any two species intermingling genes – immediately repulses us. As far as DNA is concerned, however, the supposed walls between species are not nearly so impermeable. Up in the branches of the great tree of life, we are no longer immersed in the ancient communal pool that watered its tangled roots. Yet we cannot escape the winds of promiscuity. Even today – as was true from the start – ‘our’ genes are not ours alone.

11 December 2014


HP Will Release a ‘Revolutionary’ New Operating System in 2015 | MIT Technology Review


Hewlett-Packard’s ambitious plan to reinvent computing will begin with the release of a prototype operating system next year.

Why It Matters

U.S. data centers consumed 91 billion kilowatt-hours of electricity in 2013—twice as much as all the households in New York City—according to the Natural Resources Defense Council.

Closeup of HP Memristor devices on a 300 millimeter wafer.

Hewlett-Packard will take a big step toward shaking up its own troubled business and the entire computing industry next year when it releases an operating system for an exotic new computer.

The company’s research division is working to create a computer HP calls The Machine. It is meant to be the first of a new dynasty of computers that are much more energy-efficient and powerful than current products. HP aims to achieve its goals primarily by using a new kind of computer memory instead of the two types that computers use today. The current approach originated in the 1940s, and the need to shuttle data back and forth between the two types of memory limits performance.

“A model from the beginning of computing has been reflected in everything since, and it is holding us back,” says Kirk Bresniker, chief architect for The Machine. The project is run inside HP Labs and accounts for three-quarters of the 200-person research staff. CEO Meg Whitman has expanded HP’s research spending in support of the project, says Bresniker, though he would not disclose the amount.

The Machine is designed to compete with the servers that run corporate networks and the services of Internet companies such as Google and Facebook. Bresniker says elements of its design could one day be adapted for smaller devices, too.

HP must still make significant progress in both software and hardware to make its new computer a reality. In particular, the company needs to perfect a new form of computer memory based on an electronic component called a memristor (see “Memristor Memory Readied for Production”).

A working prototype of The Machine should be ready by 2016, says Bresniker. However, he wants researchers and programmers to get familiar with how it will work well before then. His team aims to complete an operating system designed for The Machine, called Linux++, in June 2015. Software that emulates the hardware design of The Machine and other tools will be released so that programmers can test their code against the new operating system. Linux++ is intended to ultimately be replaced by an operating system designed from scratch for The Machine, which HP calls Carbon.

Programmers’ experiments with Linux++ will help people understand the project and aid HP’s progress, says Bresniker. He hopes to gain more clues about, for example, what types of software will benefit most from the new approach.

The main difference between The Machine and conventional computers is that HP’s design will use a single kind of memory for both temporary and long-term data storage. Existing computers store their operating systems, programs, and files on either a hard disk drive or a flash drive. To run a program or load a document, data must be retrieved from the hard drive and loaded into a form of memory, called RAM, that is much faster but can’t store data very densely or keep hold of it when the power is turned off.

HP plans to use a single kind of memory—in the form of memristors—for both long- and short-term data storage in The Machine. Not having to move data back and forth should deliver major power and time savings. Memristor memory also can retain data when powered off, should be faster than RAM, and promises to store more data than comparably sized hard drives today.
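The contrast between the two designs can be sketched as a toy timing model. The latency figures below are illustrative assumptions for the sketch, not HP measurements; the point is only that the conventional design pays for a staging copy from storage into RAM, while a unified memory reads data in place:

```python
# Illustrative latencies in nanoseconds -- assumed values for this sketch,
# not measurements of any real hardware.
DISK_READ_NS = 100_000  # fetch a block from a hard disk or flash drive
RAM_READ_NS = 100       # read the same block once it is staged in RAM
NVM_READ_NS = 300       # hypothetical byte-addressable non-volatile memory

def conventional_access(n_blocks: int) -> int:
    """Conventional design: copy each block from storage into RAM, then read it."""
    return n_blocks * (DISK_READ_NS + RAM_READ_NS)

def unified_access(n_blocks: int) -> int:
    """Unified design: read each block in place in non-volatile memory."""
    return n_blocks * NVM_READ_NS

print(conventional_access(1000))  # 100100000 ns -- dominated by the staging copy
print(unified_access(1000))       # 300000 ns
```

Even with generous assumptions about the new memory being slower than RAM per access, eliminating the copy step dominates the comparison, which is the power and time saving the article describes.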

The Machine’s design includes other novel features such as optical fiber instead of copper wiring for moving data around. HP’s simulations suggest that a server built to The Machine’s blueprint could be six times more powerful than an equivalent conventional design, while using just 1.25 percent of the energy and being around 10 percent the size.
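Taken together, the simulated figures imply a striking gain in performance per watt. A quick calculation from the numbers HP quotes (6× the performance at 1.25 percent of the energy):

```python
# Performance-per-watt implied by HP's simulation figures for The Machine,
# relative to an equivalent conventional server.
perf_ratio = 6.0       # "six times more powerful"
energy_ratio = 0.0125  # "just 1.25 percent of the energy"

perf_per_watt_gain = perf_ratio / energy_ratio
print(perf_per_watt_gain)  # 480.0
```

That is, if the simulations held up in practice, The Machine would deliver roughly 480 times the work per unit of energy of a conventional design.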

HP’s ideas are likely being closely watched by companies such as Google that rely on large numbers of computer servers and are eager for improvements in energy efficiency and computing power, says Umakishore Ramachandran, a professor at Georgia Tech. That said, a radical new design like that of The Machine will require new approaches to writing software, says Ramachandran.

There are other prospects for reinvention besides HP’s technology. Companies such as Google and Facebook have shown themselves to be capable of refining server designs. And other new forms of memory, all with the potential to make large-scale cloud services more efficient, are being tested by researchers and nearing commercialization (see “Denser, Faster Memory Challenges Both DRAM and Flash” and “A Preview of Future Disk Drives”).

“Right now it’s not clear what technology is going to become useful in a big way,” says Steven Swanson, an associate professor at the University of California, San Diego, who researches large-scale computer systems.

HP may also face skepticism because it has fallen behind its own timetable for getting memristor memory to market. When the company began working to commercialize the components, together with semiconductor manufacturer Hynix, in 2010, the first products were predicted for 2013 (see “Memristor Memory Readied for Production”).

Today, Bresniker says the first working chips won’t be sent to HP partners until 2016 at the earliest.
