Driverless cars of the future confront rules written for drivers

WASHINGTON — When the U.S. government finally got around to regulating auto safety in 1967, it insisted that every car have seatbelts and that the steering column be engineered to absorb impact so it wouldn’t spear the driver.

The safety rulebook has since swelled to nearly 900 pages and encompasses everything from electronic stability control to rear-view backup cameras. Through all the updates, however, the regulations remain premised on an assumption that may soon be obsolete: that a human would be at the wheel.

That’s about to change.

U.S. regulators could soon undertake one of the biggest overhauls of the Federal Motor Vehicle Safety Standards ever, one that would apply to cars that drive themselves. Tech and auto companies are pouring billions of dollars into a race to develop self-driving vehicles, which carmakers from Tesla Inc. to Volvo Cars say could be deployed in less than 10 years — assuming the regulations can be put in place that quickly. 

“The basic problem here is one we’ve seen in a lot of industries: the technology moves a lot quicker than the regulation,” said Elliot Katz, a partner at McGuireWoods LLP who chairs the firm’s connected and automated vehicle practice. “Unfortunately, the rulemaking process is not a short one, not a cheap one and is nothing short of labor intensive.”

That’s why automakers have been pressing the National Highway Traffic Safety Administration (NHTSA) to get the process underway as soon as possible. While limited road testing is permitted today, mass production of the vehicles will require the new regulations to be in place. Car companies would also like some reassurance that what they’re doing is approved by NHTSA before venturing too far into the unknown.

A coalition of automakers and other industry groups, the Coalition for Future Mobility, issued a statement July 13 urging Congress to direct NHTSA to begin work on writing the driver out of new car designs.

Legislation advanced by a House committee weeks later would direct NHTSA to update its auto safety standards. The bill would also allow automakers to roll out more self-driving vehicles under expanded exemptions from some safety rules while the formal regulations take shape.

The bill sends the signal that “NHTSA needs to start planning and take stock of what needs to be done,” said Tim Goodman, an attorney at law firm Babst, Calland, Clements and Zomnir PC in Washington and a former NHTSA attorney.

NHTSA didn’t provide a comment. 

Nearly half of the 73 safety standards on the books now make an explicit or implied reference to a human driver, according to a 2016 study by the Volpe National Transportation Systems Center, a research arm of the U.S. Transportation Department.

Compliance issues

The report found few barriers for autonomous vehicles that stuck to conventional designs. Indeed, Google affiliate Waymo is already offering rides to residents of Phoenix, Ariz., in its fleet of self-driving Chrysler Pacifica minivans, with company testers in the driver seat.

Yet the report found more than 30 standards that could present a compliance problem for fully autonomous vehicles with no human controls or with novel seating arrangements.

For example, NHTSA sent a letter last year to the head of Google’s self-driving car project saying the agency would consider the company’s artificial intelligence to be a driver. But it pointed out some intractable obstacles, such as a requirement that cars have brakes that “shall be activated by means of foot control.”

Moreover, rules governing vehicle testing will have to be modified. A 2016 report by the Rand Corp. found that autonomous safety will need to be evaluated based on how the array of sensors and artificial intelligence computers safely responds to the environment surrounding a car.


‘Extremely challenging’

Testing that spans the full range of conditions drivers encounter on the road will be necessary and “extremely challenging and likely to be a significant barrier to deploying” autonomous vehicles without first gaining additional experience, the Rand report said.

In addition to technical challenges, the Rand report said that reaching a consensus on new rules will be difficult with so many companies, consumer groups and private citizens who will be affected by autonomous cars. NHTSA recently updated its new car safety ratings to show if a vehicle is equipped with crash prevention systems like automatic braking. But the performance of those systems is not evaluated, in part because of disagreements about how that should be done, the report said. 

NHTSA plans to release updated guidance for the safe deployment of highly automated vehicles in September. The non-binding guidance was initially released by the Obama administration last September and was the federal government’s first attempt to provide some basic safety guidelines for autonomous vehicles without prescribing new regulations that could stifle innovation.

Safety advocate and former NHTSA Administrator Joan Claybrook agrees that the task will be very complicated. The agency could opt to amend each problematic standard through the notice-and-comment rulemaking process, which can take years. It could also amend groups of related standards at once, or craft separate standards for so-called Level 4 and Level 5 highly automated vehicles that can drive themselves without human intervention of any kind.

Claybrook also said NHTSA lacks the budget and technical staff to craft new rules for autonomous cars in anything less than five years. Goodman and others estimated it could take as long as seven years to a decade to establish safety standards for fully self-driving cars.

“I don’t think that’s even been thought through at NHTSA,” she said. “It’s still technically very complicated.”


Biocomputer and Memory Built Inside Living Bacteria

Scientists have built the most complex biomolecular computer yet and stored a movie

Posted 23 Aug 2017 | 15:00 GMT

Images and GIFs: Seth Shipman
Ride On, Annie! A 19th-century film of a horse named Annie G. was encoded in DNA and embedded in bacteria [left]. Decoding it from the bacteria produced few errors [right].

Scientists have come up with two clever new ways to harness the programming power of DNA in living bacterial cells. In separate experiments published in Nature in July, researchers reported that they had successfully archived a movie and built a complex biological computer inside living E. coli cells.

The experiments expand our ability to exploit DNA’s encoding potential. “What these papers represent is just how good we are getting at harnessing that power,” says Julius Lucks, a bioengineer at the Center for Synthetic Biology, at Northwestern University, in Evanston, Ill., who was not involved in either report.

Researchers for both experiments relied on electrical engineering principles to achieve their feats. In the DNA storage experiments, a team at Harvard University demonstrated for the first time how to encode a movie and an image into living cells. Storage of digital data in DNA has been achieved before—as much as 200 megabytes—but until now, no one had archived data inside a living organism, says Seth Shipman, a neuroscientist at Harvard who led the experiments.

To get a movie into E. coli’s DNA, Shipman and his colleagues had to disguise it. They converted the movie’s pixels into DNA’s four-letter code—molecules represented by the letters A, T, G, and C—and synthesized that DNA. But instead of generating one long strand of code, they arranged it, along with other genetic elements, into short segments that looked like fragments of viral DNA.
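
To make the pixel-to-base step concrete, here is a minimal Python sketch assuming a four-level grayscale palette, so each pixel maps to a single base. The palette size is an assumption for illustration; the actual scheme, with its additional genetic elements for reassembling the short segments in order, is omitted.

```python
# Minimal sketch of a two-bits-per-base encoding (illustrative only).
BASE_FOR_BITS = {0: "A", 1: "C", 2: "G", 3: "T"}
BITS_FOR_BASE = {base: value for value, base in BASE_FOR_BITS.items()}

def pixels_to_dna(pixels):
    """Encode 2-bit pixel values (0-3) as a DNA string, one base per pixel."""
    return "".join(BASE_FOR_BITS[p] for p in pixels)

def dna_to_pixels(dna):
    """Decode a DNA string back into pixel values."""
    return [BITS_FOR_BASE[base] for base in dna]

frame = [0, 3, 3, 1, 2, 0]            # a toy strip of six pixels
strand = pixels_to_dna(frame)          # -> "ATTCGA"
assert dna_to_pixels(strand) == frame
print(strand)
```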

E. coli is naturally programmed by its own DNA to grab errant pieces of viral DNA and store them in its own genome—a way of keeping a chronological record of invaders. So when the researchers introduced the pieces of movie-turned-synthetic DNA—disguised as viral DNA—E. coli’s molecular machinery grabbed them and filed them away.

The movie they stored was a 36-by-26-pixel GIF of one of the first moving images ever recorded: a galloping mare named Annie G., by Eadweard Muybridge in 1887. The team was able to retrieve it, along with a separate image, with about 90 percent accuracy by sequencing the bacterium’s genome.

The same month Shipman announced the storage breakthrough, a separate group of researchers reported another cleverly programmed piece of synthetic DNA. This one, when introduced into E. coli, can direct the cell to produce a biological computer made of ribonucleic acid, or RNA.

Image: Alexander Green
The Ribocomputer: Loops and other shapes in RNA act as a series of logic gates that detect input RNA signals [red] to determine the protein output of a ribosome [purple].

The “ribocomputer” can evaluate up to a dozen inputs, make logic-based decisions using AND, OR, and NOT operations, and give the cell commands. The system is the most complex biological computer to date and is one of the few that operates inside a living cell, says Alexander Green, an engineer at the Biodesign Institute at Arizona State University, in Tempe, who developed the technology with colleagues at Harvard’s Wyss Institute for Biologically Inspired Engineering.
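
As a rough illustration, the decision logic of such a circuit can be written as an ordinary truth table. This Python sketch mimics only the Boolean behavior described in the article, not the RNA folding that implements it, and the three input names are hypothetical examples.

```python
from itertools import product

# Toy model of a ribocomputer's decision logic (Boolean behavior only;
# input names are hypothetical, not inputs from the paper).
def ribocircuit(toxin, cancer_marker, repressor):
    """Express the output protein iff (toxin OR cancer_marker) AND NOT repressor."""
    return (toxin or cancer_marker) and not repressor

# Enumerate the full truth table, as one might characterize a gate in the lab.
for inputs in product([False, True], repeat=3):
    state = "express output" if ribocircuit(*inputs) else "off"
    print(inputs, "->", state)
```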

The biological circuit enables researchers to program cells to respond when they receive a particular type of input. For instance, cells could be programmed to light up or self-destruct when they sense the presence of a toxin or a marker of cancer.

Taken together, these advances in DNA storage and biological computing are reminiscent of the early days of electronics, say researchers. “It’s like a golden age of circuit design,” says Timothy Lu, a bioengineer in the Research Laboratory of Electronics at MIT, who was not involved in the work. “It’s a great time for creative circuit engineers to be in the field.”

In order to build more complex biological computers, or store increasingly complex data in DNA, bioengineers will need to borrow electrical engineering concepts. “That way of thinking—the way that electrical engineers have gone about establishing design hierarchy or abstraction layers—I think that’s going to be really important for biology,” says Lu.

One doesn’t need a life sciences background to participate, adds Lucks. “We can create a layer of abstraction where you don’t need to know about RNA folding to design a circuit out of RNA,” he says.

At some point—perhaps much further down the road—the public will need to weigh in on the idea of forcing living things to perform functions that fall so far outside their normal activities. “We can make bacteria compute information and store a movie. Is that okay?” says Lucks. “I don’t think anybody would really argue that it’s unethical to do this in E. coli. But as you go up in the chain [of living organisms], it gets more interesting from an ethical point of view.”

A version of this article appeared as two posts in The Human OS blog.



Secret life of the dodo revealed

BBC News

Image: Julian Hume. The dodo lived on the island of Mauritius until it died out about 350 years ago.

Scientists are piecing together clues about the life of the dodo, hundreds of years after the flightless bird was driven to extinction.

Few scientific facts are known about the hapless bird, which was last sighted in 1662.

A study of bone specimens shows the chicks hatched in August and grew rapidly to adult size.

The bird shed its feathers in March revealing fluffy grey plumage recorded in historical accounts by mariners.

Delphine Angst of the University of Cape Town, South Africa, was given access to some of the dodo bones that still exist in museums and collections, including specimens that were recently donated to a museum in France.

Her team analysed slices of bone from 22 dodos under the microscope to find out more about the bird’s growth and breeding patterns.

“Before our study we knew very very little about these birds,” said Dr Angst.

“Using the bone histology for the first time we managed to describe that this bird was actually breeding at a certain time of the year and was moulting just after that.”

Image: Agnès Angst. Reconstruction of the dodo: much of what we know from paintings is inaccurate.

The scientists can tell from growth patterns in the bones that the chicks grew to adult size very rapidly after hatching from eggs around August.

This would have given them a survival advantage when cyclones hit the island between November and March, leading to a scarcity of food.

However, the birds probably took several years to reach sexual maturity, possibly because the adult birds lacked any natural predators.

The bones of adult birds also show signs of mineral loss, which suggests that they lost old damaged feathers after the breeding season.

Ancient mariners gave conflicting accounts of the dodo, describing them as having “black down” or “curled plumes of a greyish colour”.

The research, published in Scientific Reports, backs this historical evidence.

“The dodo was quite a brown-grey bird, and during the moulting it had downy, black plumage,” explained Dr Angst.

“What we found using our scientific methods fit perfectly with what the sailors had written in the past.”

Egg theft

The research could also shed light on the dodo’s extinction about 350 years ago, less than 100 years after humans arrived on the island.

Hunting was a factor in the dodo’s demise, but monkeys, deer, pigs and rats released on the island from ships probably sealed their fate.

Dodos laid their eggs in nests on the ground, meaning they were vulnerable to attack by feral mammals.

Image: Getty Images. The island of Mauritius in the Indian Ocean.

Dr Angst said the dodo is considered “a very big icon of animal-human induced extinction”, although the full facts are unknown.

“It’s difficult to know what was the real impact of humans if we don’t know the ecology of this bird and the ecology of the Mauritius island at this time,” she explained.

“So that’s one step to understand the ecology of these birds and the global ecosystem of Mauritius and to say, ‘Okay, when the human arrived what exactly did they do wrong and why did these birds became extinct so quickly’.”

Julian Hume of the Natural History Museum, London, a co-researcher on the study, said there are still many mysteries surrounding the dodo.

“Our work is showing the seasons and what was actually affecting the growth of these birds because of the climate in Mauritius,” he said.

“The cyclone season, when often the island is devastated with storms – all the fruits and all the leaves are blown off the trees – is quite a harsh period for the fauna – the reptiles and the birds on Mauritius.”

The dodo, which is related to the pigeon, evolved on Mauritius.

However, bone samples are rare, making it difficult to trace the evolutionary process.

Although many specimens of the dodo ended up in European museums, most were lost or destroyed in the Victorian era.



Better Nuclear Power Through Ping Pong

It easily blasts a hole in a ping pong paddle. It also demonstrates a revolutionary way to harvest nuclear power.

The lab is deep-space quiet. A long, narrow hallway hung with fluorescent lights extends to my left. Four or five doors interrupt the flow of drywall. A few of those doors are open, the occupants of the rooms within now out in the hall and staring, ears plugged in anticipation.

A technician flips a small lever to activate the vacuum pumps on an 18-foot cannon that is tented in bulletproof polycarbonate. He’s dressed casually in dark jeans and a black button-down, an ID card coolly clipped to his pants. He wears clear safety glasses and bright red protective headphones. Like the scientists down the hall, he is part of Intellectual Ventures in Bellevue, Washington—a skunkworks created by Nathan Myhrvold (Microsoft’s former chief technical officer and a bit of a mad scientist), who pays some of the smartest doctors, biologists, chemists, nuclear scientists, demolition experts, and hackers to work together to create great things. Things like the cannon we’re about to fire, which demonstrates technology that could change the nuclear power industry.

The pumps chitter away, sucking air from the barrel. That’s the secret to breaking the sound barrier with a ping pong ball. If any air were left in front of the ball, it would crush the ball under the force of the acceleration. I press a button to release 400 psi of helium gas into the accumulator. The breech is loaded and the silence returns—until I yell “Fire in the hole!” and press the red fire button. A shattering ka-BAWOOOMM roars through the lab complex. The smell of smoke hits my nostrils. Splinters burst everywhere, crashing into the plywood backstop and bulletproof protection panels. They came from the ping pong paddle mounted two inches in front of the cannon. That paddle, a multilayered rubber-and-wood Stiga, now has a ping-pong-ball-shaped hole through its center. Considering that the little yellow ping pong ball was traveling at Mach 2.09, the paddle didn’t have a chance.

The cannon is a prop, really—something to get potential investors excited about the technology. After our test fire, the scientists in the hallway are cheering. This isn’t just work.

Conventional reactors use designs that remain basically unchanged since the 1950s. They require expensive enriched uranium and frequent fuel changes. The Intellectual Ventures design, from a spin-off called TerraPower, uses unenriched uranium and needs fuel changes every ten years.


What does any of that have to do with ping pong? Imagine the ping pong ball is a neutron. In a conventional reactor, a neutron knocks into an atom and releases two or three neutrons, creating heat in a slow chain reaction. In the TerraPower reactor, that neutron travels more like the ping pong ball: at an insanely high speed. It bashes into atoms, freeing neutrons like the shards that fly from the demolished ping pong paddle—as many as six per collision. Those neutrons retain most of the speed of the first and go on to cause collisions of their own, freeing even more neutrons and continuing the chain reaction with exponentially higher efficiency. The design, called a Traveling Wave Reactor, unlocks about 30 times more energy, produces three to six times less waste, improves safety, and, TerraPower contends, will eventually eliminate the need to use enrichment. It also manages to use the plutonium created without having to remove it from the reaction and process it, which means the technology could be shared with rogue nations without worrying that it would be weaponized. (If the plutonium never comes out of the system, it can’t be put in a missile.)

With our tests finished, the researchers head back to their labs to work on the next great project. 3ric Johanson (not a typo, he’s a hacker and engineer), who worked on the cannon, turns to me with joy. Even if there hadn’t been a nuclear project, he says, “we would have made the cannon anyway, just because it’s cool.” The group plans for the technology to be operational by 2027. In the meantime, they’ll be doing a lot of testing of the ping pong cannon. Whether they need to or not.

Two Types of Nuclear Reactions

Slow

U-235 is the fissile isotope that enrichment concentrates in reactor fuel. It’s easily split by a neutron moving at slow speed. When the neutron hits the uranium atom, the atom divides into two fission products and releases two to three neutrons. One of those neutrons might be absorbed by unenriched uranium, U-238. One might hit another U-235 atom to continue the chain reaction. And most others will leak out and no longer contribute to the process. Enriched U-235 atoms must be added to continue the reaction. If too many U-238 atoms are present, the reaction will die.

Fast

Neutrons in fast reactions move much more quickly because they use liquid metal sodium as coolant instead of water. Sodium atoms are heavier than the atoms in water, so neutrons bounce off of them harder and retain their speed. When a neutron hits a U-235 atom, the higher velocity releases three to six neutrons. According to Nick Touran at TerraPower, one hits a U-235 atom to continue the reaction. Two or three hit U-238 atoms and convert them to plutonium. The rest are lost. Slow reactions don’t have many extra neutrons, so U-238 atoms are rarely hit with another. But in fast reactions, free neutrons split the plutonium atoms, release more neutrons, and continue the reaction—without the need to remove the plutonium from the system for purification.
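
A toy simulation makes the difference in neutron economy visible. The sketch below uses only the per-fission yields quoted above (two to three for slow fission, three to six for fast); the loss fraction is an invented illustrative parameter, not reactor physics.

```python
import random

# Generation-by-generation toy count of free neutrons. Yield ranges come
# from the article; the loss fraction is an illustrative assumption.
def neutron_history(yield_range, loss_fraction, generations=5, seed=1):
    rng = random.Random(seed)
    population, history = 1, [1]
    for _ in range(generations):
        released = sum(rng.randint(*yield_range) for _ in range(population))
        population = round(released * (1 - loss_fraction))
        history.append(population)
    return history

print("slow (2-3 per fission):", neutron_history((2, 3), loss_fraction=0.6))
print("fast (3-6 per fission):", neutron_history((3, 6), loss_fraction=0.6))
```

With the same losses, the slow reaction hovers near break-even while the fast one grows each generation, which is the exponential advantage the article describes.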



This story appears in the May 2017 Popular Mechanics.


New bill would let companies force workers to get genetic tests, share results

Under guise of “voluntary” wellness programs, employees’ genetics could be exposed.

Beth Mole, 3/10/2017, 11:14 AM

It’s hard to imagine a more sensitive type of personal information than your own genetic blueprints. With varying degrees of accuracy, the four-base code can reveal bits of your family’s past, explain some of your current traits and health, and may provide a glimpse into your future with possible conditions and health problems you could face. And that information doesn’t just apply to you but potentially your blood relatives, too.

Most people would likely want to keep the results of genetic tests highly guarded—if they want their genetic code deciphered at all. But, as STAT reports, a new bill that is quietly moving through the House would allow companies to strong-arm their employees into taking genetic tests and then sharing that data with unregulated third parties as well as the employer. Employees who resist could face penalties of thousands of dollars.

In the past, such personal information has been protected by a law called GINA, the Genetic Information Nondiscrimination Act, which shields people from DNA-based discrimination. But the new bill, HR 1313, gets around this by allowing genetic testing to be part of company wellness programs.

Company wellness programs, which often involve filling out health surveys and undergoing screenings, are pitched as a way to improve employee health and reduce overall health costs. But, research has shown that they have little effect on employee health and may actually end up costing companies. Still, they may survive as a way to push healthcare costs onto employees. As Ars has reported before, companies use financial incentives to get employees to participate in these wellness programs. Under the ACA, these incentives can include all sorts of rewards and compensations. For instance, people who don’t want to participate can pay up to 60 percent more on employer-sponsored insurance premiums. That can easily amount to thousands of dollars each year.
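
As a rough illustration of how the surcharge reaches thousands of dollars, assume a hypothetical $6,000 annual employee premium share; only the 60 percent figure comes from the article.

```python
# Back-of-the-envelope check of the "thousands of dollars" claim.
ASSUMED_ANNUAL_PREMIUM_SHARE = 6000    # dollars, hypothetical placeholder
MAX_SURCHARGE = 0.60                   # the 60 percent figure cited above

penalty = ASSUMED_ANNUAL_PREMIUM_SHARE * MAX_SURCHARGE
print(f"Opting out could cost up to ${penalty:,.0f} per year")   # $3,600
```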

Despite the heavy financial pressure, employee participation is still considered voluntary. Under HR 1313, GINA wouldn’t apply to anything voluntarily collected through wellness programs, and companies would have access to genetic data. That information would be stripped of identifiers, but in small companies, it could be fairly easy to match certain genetic profiles to specific employees.

Moreover, employers tend to hire third parties to collect and manage health data. These companies are not heavily regulated and can review genetic and other health data with identifiers. Some of the companies even sell health information to advertisers, STAT notes.

Civil rights and genetic privacy advocates strongly opposed the bill. In a press release, Nancy Cox, PhD, president of the American Society of Human Genetics, said:

“We urge the Committee not to move forward with consideration of this bill. As longtime advocates of genetic privacy, we instead encourage the Committee to pursue ways to foster workplace wellness and employee health without infringing upon the civil rights afforded by [Americans with Disabilities Act] and GINA.”

On Wednesday, the House Education and the Workforce Committee approved HR 1313 along party lines, with 22 Republicans supporting and 17 Democrats opposing the bill.


Self-Healing Transistors for Chip-Scale Starships

A new design could survive the radiation of a 20-year trip to Alpha Centauri
Photo: Yang-Kyu Choi
Cosmic-Ray-Proof: A test chip includes DRAM and logic circuits made from self-healing gate-all-around transistors.

Working with the Korea Advanced Institute of Science and Technology (KAIST), NASA is pioneering the development of tiny spacecraft, each made from a single silicon chip, that could slash interstellar exploration times.

Speaking at the IEEE International Electron Devices Meeting in San Francisco last December, NASA’s Dong-Il Moon detailed this new technology, which is aimed at ensuring such spacecraft survive the potentially powerful radiation they’ll encounter on their journey.

Calculations suggest that if silicon chips were used to form the heart of a spacecraft powered by a tiny, featherweight solar sail and accelerated by a gigawatt-scale laser system, the craft could accelerate to one-fifth the speed of light. At such high speeds, it would reach the nearest stars in just 20 years, compared with the tens of thousands of years it would take a conventional spacecraft.
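
The 20-year figure holds up as rough arithmetic. This sketch assumes the commonly cited 4.37 light-year distance to Alpha Centauri, which the article does not state, and ignores the acceleration phase.

```python
# Back-of-the-envelope check of the cruise time at one-fifth light speed.
DISTANCE_LIGHT_YEARS = 4.37   # assumed distance to Alpha Centauri
SPEED_FRACTION_OF_C = 0.2     # per the article

cruise_years = DISTANCE_LIGHT_YEARS / SPEED_FRACTION_OF_C
print(f"Cruise time: {cruise_years:.1f} years")   # about 21.9 years
```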

Moon and coworkers argue that 20 years in space is still too long for an ordinary silicon chip, because on its journey it will be bombarded by more high-energy radiation than chips encounter on Earth. “You are above most of the magnetic fields that block a lot of radiation, and above most of the atmosphere, which also does a good job of blocking radiation,” says Brett Streetman, who leads efforts in chip-scale spacecraft at the Charles Stark Draper Laboratory, in Cambridge, Mass.

Radiation leads to the accumulation of positively charged defects in the chip’s silicon dioxide layer, where they degrade device performance. The most serious of the impairments is an increase in the current that leaks through a transistor when it is supposed to be turned off, according to Yang-Kyu Choi, leader of the team at KAIST, where the work was done.

Two options for addressing chip damage are to select a path through space that minimizes radiation exposure and to add shielding. But the former leads to longer missions and constrains exploration, and the latter adds weight and nullifies the advantage of using a miniaturized craft. A far better approach, argues Moon, is to let the devices suffer damage but to design them so that they can heal themselves with heat.

“On-chip healing has been around for many, many years,” says Jin-Woo Han, a member of the NASA team. The critical addition made now, Han says, is the most comprehensive analysis of radiation damage so far.

This study uses KAIST’s experimental “gate-all-around” nanowire transistor. These devices use nanoscale wires as the transistor channel instead of today’s fin-shaped channels. The gate-all-around device may not be well known today, but production is expected to rocket in the early 2020s. [See “Transistors Could Stop Shrinking in 2021,” IEEE Spectrum, August 2016.]

The gate—the electrode that turns the flow of charge through the channel on or off—completely surrounds the nanowire. Adding an extra contact to the gate allows you to pass current through it. That current heats the gate and the channel it surrounds, fixing any radiation-induced defects.

Nanowire transistors are ideal for space, according to KAIST, because they naturally have a relatively high degree of immunity to cosmic rays and because they are small, with dimensions in the tens of nanometers. “The typical size for [transistor dimensions on] chips devoted to spacecraft applications is about 500 nanometers,” says Choi. “If you can replace 500-nm feature sizes with 20-nm feature sizes, the chip size and weight can be reduced.” Costs fall too.

KAIST’s design has been used to form three key building blocks for a single-chip spacecraft: a microprocessor, DRAM to support it, and flash memory that can serve as a hard drive.

Repairs to radiation-induced damage can be made many times, with experiments showing that flash memory can be recovered up to around 10,000 times and DRAM returned to its pristine state 10^12 times. With logic devices, an even higher figure is expected. These results indicate that a lengthy interstellar space mission could take place, with the chip powered down every few years, heated internally to recover its performance, and then brought back to life.

Philip Lubin, a professor at the University of California, Santa Barbara, believes that this annealing-based approach is “creative and clever” but wonders how much danger from cosmic rays there really will be to these chips. He would like to see a thorough evaluation of existing technologies for chip-scale spacecraft, pointing out that radiation-hardened electronics have already been developed for the military.

Today, efforts at NASA and KAIST are focusing on the elimination of the second gate contact for heating. This contact is not ideal because it modifies chip design and demands the creation of a new transistor library, which escalates production costs. Those at KAIST are investigating the capability of a different design, called a junctionless nanowire transistor, which heats the channel during normal operation. Separately, at NASA, researchers are developing on-chip embedded microheaters that are compatible with standard circuits.

Cutting the costs of self-healing tech will play a key role in determining its future in chip-scale spacecraft, which will require many more years of investment before they can get off the ground.


Inuits Inherited Cold Adaptation Genes from Denisovan-Related Species

In the Arctic, the Inuits have adapted to cold and a seafood diet. Following the first genomic analysis of Greenlandic Inuits, researchers have now scrutinized a region of the genome containing two genes (TBX15 and WARS2).

Denisovans were probably dark-skinned, unlike the pale Neandertals. Image credit: Mauro Cutrona.

Dr. Fernando Racimo of the New York Genome Center and his colleagues have now followed up on that study to trace back the origins of these adaptations.

“To identify genes responsible for biological adaptations to life in the Arctic, Fumagalli et al. scanned the genomes of Greenlandic Inuit using the population branch statistic, which detects loci that are highly differentiated from other populations,” the researchers explained.

“Using this method, they found two regions with a strong signal of selection: (i) one region contains the cluster of FADS genes, involved in the metabolism of unsaturated fatty acids; (ii) the other region contains WARS2 and TBX15, located on chromosome 1.”

“WARS2 encodes the mitochondrial tryptophanyl-tRNA synthetase. TBX15 is a transcription factor from the T-box family and is a highly pleiotropic gene expressed in multiple tissues at different stages of development.”

“TBX15 plays a role in the differentiation of brown and brite adipocytes. Brown and brite adipocytes produce heat via lipid oxidation when stimulated by cold temperatures, making TBX15 a strong candidate gene for adaptation to life in the Arctic.”
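
For readers curious about the method quoted above, the population branch statistic can be sketched in a few lines, following its standard definition (Yi et al., 2010). The F_ST inputs below are invented placeholders, not the published estimates.

```python
import math

# Sketch of the population branch statistic (PBS) for a focal population A
# and two reference populations B and C, from pairwise F_ST values.
def branch_length(fst):
    """Log-transform a pairwise F_ST into an estimated branch length."""
    return -math.log(1.0 - fst)

def pbs(fst_ab, fst_ac, fst_bc):
    """Branch length of focal population A since its split from B and C."""
    return (branch_length(fst_ab) + branch_length(fst_ac)
            - branch_length(fst_bc)) / 2.0

# e.g. A = Greenlandic Inuit, B and C = two reference populations;
# unusually large values at a locus flag selection in A.
print(pbs(fst_ab=0.25, fst_ac=0.20, fst_bc=0.05))
```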

In their own study, Dr. Racimo and co-authors used the genomic data from nearly 200 Greenlandic Inuits and compared this to the 1000 Genomes Project and ancient DNA from Neanderthals and Denisovans.

The results provide convincing evidence that the Inuit variant of the TBX15/WARS2 region first came into modern humans from an archaic hominid population, likely related to the Denisovans.

“The Inuit DNA sequence in this region matches very well with the Denisovan genome, and it is highly differentiated from other present-day human sequences, though we can’t discard the possibility that the variant was introduced from another archaic group whose genomes we haven’t sampled yet,” Dr. Racimo said.

The scientists found that the variant is present at low-to-intermediate frequencies throughout Eurasia, and at especially high frequencies in the Inuits and Native American populations, but almost absent in Africa.

They speculate that the archaic variant may have been beneficial to modern humans during their expansion throughout Siberia and across Beringia, into the Americas.

The team also worked to understand the physiological role of the TBX15/WARS2 region.

They found an association between the archaic region and the gene expression of TBX15 and WARS2 in various tissues, like fibroblasts and adipose tissue.

They also observed that the methylation patterns in this region in the Denisovan genome are very different from those of Neanderthals and present-day humans.

“All this suggests that the introduced variant may have altered the regulation of these genes, though the exact mechanism by which this occurred remains elusive,” Dr. Racimo said.

The team’s results were published online this week in the journal Molecular Biology and Evolution.

_____

Fernando Racimo et al. Archaic adaptive introgression in TBX15/WARS2. Mol Biol Evol, published online December 21, 2016; doi: 10.1093/molbev/msw283
