Why Vertical Farming Might Be Our Planet’s Future | Care2 Healthy Living

Explosive population growth will take place in the world’s urban centers as we approach 2050. To keep these growing populations from starvation, architects and farmers have combined their talents to create vertical farming. Although not entirely new, these farms are becoming more efficient and may appear as skyscraper greenhouses in many cities. Vertical farming can take many architectural shapes and offers a number of key solutions to the problems of efficient food production.

Drought Resistant

Food supplies are more secure with vertical farming. Production can continue year-round, even during long droughts, which seem to be more frequent as the world undergoes climate change. In the Nature Climate Change article “The Global Groundwater Crisis,” James Famiglietti, a leading hydrologist at NASA’s Jet Propulsion Laboratory, warns that groundwater depletion poses a far greater threat to global water security than is currently acknowledged. His research team employs satellites and computer modeling to track changing freshwater availability and groundwater depletion around the world. Joe Romm’s Climate Progress article notes that groundwater in the U.S. High Plains, California’s Central Valley, China, India, and other aquifers is being pumped out faster than it can be naturally replenished. Vertical farming allows fruits and vegetables that may be in high demand to be grown all year without concerns over seasonal rainfall or drought.

Conserve Water

Vertical farms utilize efficient, soil-free hydroponic systems, so they need less water. Some use advanced systems like aeroponics, which grows plants in mists that efficiently deliver nutrients, hydration and oxygen to the roots. This results in faster growing cycles and more biomass than other farming systems. These closed-loop systems recirculate nutrient-rich solutions and use 95 percent less water than field farming.
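The 95 percent figure above implies a dramatic saving. As a back-of-the-envelope illustration (the field-farming water figure below is a hypothetical placeholder, not from the article):

```python
# Illustrative arithmetic only: estimate water used by a closed-loop
# hydroponic system that needs 95% less water than field farming.
# The field-use figure is a made-up example value.

FIELD_LITERS_PER_KG = 250     # hypothetical field-farming water use, L per kg
HYDROPONIC_SAVINGS = 0.95     # "95 percent less water" (from the article)

hydro_liters = FIELD_LITERS_PER_KG * (1 - HYDROPONIC_SAVINGS)
print(f"Field farming: {FIELD_LITERS_PER_KG} L per kg")
print(f"Hydroponics:   {hydro_liters:.1f} L per kg")
```

Whatever the baseline, a 95 percent reduction cuts water use by a factor of twenty.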

Preserve Environment

Instead of “consuming” rainforest land and harming untouched parts of the earth, vertical farming helps preserve the environment by growing our food in cities. As noted in Crop Farming Review, one indoor acre is equivalent to 4-6 outdoor acres or more, depending on the crop. For strawberries, a single indoor acre may yield the equivalent of 30 outdoor acres. Existing farmland could be returned to its natural state to promote the regrowth of trees for CO2 sequestration.
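The acreage equivalences quoted from Crop Farming Review can be turned into a quick land-footprint estimate. The indoor footprint below is a made-up example; only the ratios come from the article:

```python
# Land-footprint sketch using the indoor-to-outdoor acreage ratios
# quoted from Crop Farming Review (4-6x generally, up to 30x for
# strawberries). The 10-acre farm size is a hypothetical example.

equivalence = {
    "typical crop (low)":  4,
    "typical crop (high)": 6,
    "strawberries":        30,
}

indoor_acres = 10  # hypothetical vertical-farm footprint
for crop, ratio in equivalence.items():
    print(f"{indoor_acres} indoor acres of {crop} ~ {indoor_acres * ratio} outdoor acres")
```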

Fewer Diseases and Toxins

Because of their design, indoor vertical “fields” can be protected from pests more easily, so fewer herbicides or insecticides are needed. The result: fresher, toxin-free produce.

Reduced Transport, Less Pollution

Food can be grown in high-rise urban buildings and sold directly to consumers without the need for carbon-emitting transport. Produce sold closer to where it’s grown also means fresher fruits and vegetables with less spoilage.

Reduced Light Energy

Lit by LEDs that mimic sunlight, vertical farms use software to regulate the amount of light energy plants need to grow. Crop Farming Review also notes that vertical farms can even generate power. While a 30-story vertical farm may consume millions of kilowatt-hours of electricity, it could generate up to 56 million kWh through biogas digesters and captured solar energy.
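As a rough sketch of the energy balance described above: the article gives only the 56 million kWh generation estimate, so the consumption figure below is a labeled placeholder assumption, not a sourced number:

```python
# Energy-balance sketch for the 30-story vertical farm described in
# the article. Generation comes from the article; consumption is a
# placeholder assumption for illustration only.

GENERATED_KWH = 56_000_000            # biogas digesters + solar (article)
assumed_consumption_kwh = 26_000_000  # placeholder assumption

surplus = GENERATED_KWH - assumed_consumption_kwh
print(f"Net surplus under these assumptions: {surplus / 1e6:.0f} million kWh")
```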


Oldest engraving rewrites view of human history

Richard Ingham

Detail of the engraving on fossil Pseudodon shell (DUB1006-fL) from Trinil Credit: Wim Lustenhouwer, VU University Amsterdam

Anthropologists on Wednesday said they had found the earliest engraving in human history on a fossilised mollusc shell some 500,000 years old, unearthed in colonial-era Indonesia.

The zigzag scratching, together with evidence that these shells were used as a tool, should prompt a rethink about the mysterious early human called Homo erectus, they said.

The discovery comes through new scrutiny of 166 freshwater mussel shells found at Trinil, on the banks of the Bengawan Solo river in East Java, where one of the most sensational finds in fossil-hunting was made.

It was here in 1891 that an adventurous Dutch palaeontologist, Eugene Dubois, found "Java Man."

With a couple of army sergeants and convict labour to do the digging, Dubois excavated part of a heavy-browed skull, a tooth and a thigh bone.

He interpreted these as being the remains of a gibbon-like hominid that was the long-sought "missing link" between apes and humans.

Dubois’ claim excited fierce controversy, as well as jokey images of our distant ancestors as slack-jawed primates with dragging knuckles.

Palaeontologists eventually categorised the find as a Homo erectus, or "upright human"—a hominid that according to sketchy and hugely debated fossil evidence lived from around 1.9 million years ago to about 150,000 years ago.

Reporting in the science journal Nature, a team led by Josephine Joordens at Leiden University in the Netherlands, harnessed 21st-century technology to take a new look at the Trinil shells, now housed in a local collection.

Dating of sediment found in the shells put their age at between 430,000 and 540,000 years.

A third of the shells were also found to have a curious hole at the base of one of the bivalve’s muscles.

Sharp-toothed animals such as otters, rats or monkeys may have bitten into it to get at the flesh—but a likelier source, said the experts, is H. erectus, which tucked into the shells for food.

The fossil Pseudodon shell (DUB1006-fL) with the engraving made by Homo erectus at Trinil Credit: Wim Lustenhouwer, VU University Amsterdam

The team carried out experiments on living mussels of the same mollusc family, Pseudodon, piercing the shell at the same location with a pointed object.

As soon as the shell was broached, the muscle was damaged by the tool tip and the mollusc could be easily opened without breakage.

Dextrous erectus?

The scientists then deployed a scanning electron microscope to get a closer look at the shells.

One of them was found to have a polished and smooth edge, suggesting it may have been used as a tool to cut or scrape.

Another had a zigzag set of grooves incised into it by a sharp implement, such as a shark’s tooth.

The marks are at least 300,000 years older than the earliest previously known, indisputable engravings.

"The simple zigzag on the shell is the earliest engraving known thus far in the history of humankind," Joordens’ colleague, Wil Roebroeks, told AFP in an email.

"But: we have no clue why somebody made it half a million years ago, and we explicitly refrain from speculating on it" in terms of art or symbolism, he said.

Inside of the fossil Pseudodon shell (DUB7923-bL) showing that the hole made by Homo erectus is exactly at the spot where the adductor muscle is attached to the shell Credit: Henk Caspers, Naturalis, Leiden, The Netherlands

Francesco d’Errico of Bordeaux University in southwestern France said the engraving was "the oldest known graphic expression."

"The behaviour is deliberate. The individual had the desire to make a zigzag pattern in a single go," he said.

But d’Errico cautioned, "We don’t know why he did it. It may have been a mark of ownership, a personal code, a gift."

Geometric marks are considered to be a sign of cognitive behaviour and neuromotor skills that—until now—have been overwhelmingly attributed to modern man, Homo sapiens.

Put together, the new evidence delivers a blow to the stereotype of H. erectus as lumbering, heavy-handed and stupid.

He was smart enough to feed himself efficiently from mussels, dextrous enough to use slim, smooth shells as tools and brainy enough to engrave an abstract pattern on one of them.

A "richer" image of this enigmatic hominid results, Roebroeks said.

"We knew that H. erectus made nice handaxes etcetera," he said.

"Now we have this evidence for sophisticated opening of shells and a small zigzag, it might create a more subtle picture."


More information: Nature, dx.doi.org/10.1038/nature13962

© 2014 AFP


Engineers take big step toward using light instead of wires inside computers

by Chris Cesare

This tiny slice of silicon, etched in Jelena Vuckovic’s lab at Stanford with a pattern that resembles a bar code, is one step on the way toward linking computer components with light instead of wires. Credit: Vuckovic Lab

(Phys.org)—Stanford engineers have designed and built a prism-like device that can split a beam of light into different colors and bend the light at right angles, a development that could eventually lead to computers that use optics, rather than electricity, to carry data.

They describe what they call an "optical link" in an article in Scientific Reports.

The optical link is a tiny slice of silicon etched with a pattern that resembles a bar code. When a beam of light is shined at the link, two different wavelengths (colors) of light split off at right angles to the input, forming a T shape. This is a big step toward creating a complete system for connecting computer components with light rather than wires.

"Light can carry more data than a wire, and it takes less energy to transmit photons than electrons," said electrical engineering Professor Jelena Vuckovic, who led the research.

In previous work her team developed an algorithm that did two things: It automated the process of designing optical structures and it enabled them to create previously unimaginable, nanoscale structures to control light.

Now, she and lead author Alexander Piggott, a doctoral candidate in electrical engineering, have employed that algorithm to design, build and test a link compatible with current fiber optic networks.

Creating a silicon prism

The Stanford structure was made by etching a tiny bar code pattern into silicon that split waves of light like a small-scale prism. The team engineered the effect using a subtle understanding of how the speed of light changes as it moves through different materials.

What we call the speed of light is how fast light travels in a vacuum. Light travels a bit more slowly in air and even more slowly in water. This speed difference is why a straw in a glass of water appears bent at the waterline.

A property of materials called the index of refraction characterizes the difference in speed. The higher the index, the more slowly light will travel in that material. Air has an index of refraction of nearly 1 and water of 1.3. Infrared light travels through silicon even more slowly: it has an index of refraction of 3.5.
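The paragraph above amounts to the simple relation v = c / n. A quick check with the indices quoted in the article:

```python
# Speed of light in a medium: v = c / n, where n is the index of
# refraction. Indices for air, water, and silicon (at infrared
# wavelengths) are the values quoted in the article.

C = 299_792_458  # speed of light in vacuum, m/s

for material, n in [("air", 1.0), ("water", 1.3), ("silicon", 3.5)]:
    print(f"{material:8s} n = {n}   v = {C / n:,.0f} m/s")
```

Infrared light in silicon thus crawls along at under a third of its vacuum speed, which is what gives the etched bar code its strong grip on the beam.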

The Stanford algorithm designed a structure that alternated strips of silicon and gaps of air in a specific way. The device takes advantage of the fact that as light passes from one medium to the next, some light is reflected and some is transmitted. When light traveled through the silicon bar code, the reflected light interfered with the transmitted light in complicated ways.
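The partial reflection described here can be illustrated with the textbook normal-incidence Fresnel reflectance for a single silicon/air boundary; this is a standard formula, not the article's full interference calculation over many strips:

```python
# Normal-incidence Fresnel reflectance at a boundary between media
# with refractive indices n1 and n2. The device's actual behavior
# comes from the interference of many such partial reflections;
# this shows only a single interface.

def reflectance(n1: float, n2: float) -> float:
    """Fraction of optical power reflected at normal incidence."""
    return ((n1 - n2) / (n1 + n2)) ** 2

r = reflectance(1.0, 3.5)  # air -> silicon, indices from the article
print(f"A single air/silicon interface reflects {r:.1%} of the light")
```

With roughly 30 percent of the power reflected at every strip boundary, even a chip eight microns long contains enough overlapping reflections for the algorithm to sculpt.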

The algorithm designed the bar code to use this subtle interference to direct one wavelength to go left and a different wavelength to go right, all within a tiny silicon chip eight microns long.

Both 1300-nanometer light and 1550-nanometer light, corresponding to the O-band and C-band wavelengths widely used in fiber optic networks, were beamed at the device from above. The bar code-like structure redirected C-band light one way and O-band light the other, right on the chip.

Convex optimization

The researchers designed these bar code patterns already knowing their desired function. Since they wanted C-band and O-band light routed in opposite directions, they let the algorithm design a structure to achieve it.

"We wanted to be able to let the software design the structure of a particular size given only the desired inputs and outputs for the device," Vuckovic said.

To design their device they adapted concepts from convex optimization, a mathematical approach to solving complex problems such as stock market trading. With help from Stanford electrical engineering Professor Stephen Boyd, an expert in convex optimization, they discovered how to automatically create novel shapes at the nanoscale to cause light to behave in specific ways.

"For many years, nanophotonics researchers made structures using simple geometries and regular shapes," Vuckovic said. "The structures you see produced by this algorithm are nothing like what anyone has done before."

The algorithm began its work with a simple design of just silicon. Then, through hundreds of tiny adjustments, it found better and better bar code structures for producing the desired output light.
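The "hundreds of tiny adjustments" loop can be sketched as plain gradient descent on a stand-in objective. This toy is only an analogue of the design process; the actual Stanford algorithm optimizes against a full electromagnetic simulation of the structure:

```python
# Toy analogue of the iterative design loop: start from a simple
# design, make hundreds of small adjustments, and keep moving toward
# the target response. Plain gradient descent on a stand-in
# "design error" function, NOT the actual Stanford algorithm.

def objective(x: float) -> float:
    """Stand-in design error to minimize; the target design is x = 2."""
    return (x - 2.0) ** 2

x, step = 0.0, 0.1                 # naive starting design, step size
for _ in range(300):               # hundreds of tiny adjustments
    grad = 2.0 * (x - 2.0)         # derivative of the objective
    x -= step * grad               # nudge the design downhill

print(f"Converged design parameter: {x:.4f}")
```

The real problem is vastly harder (thousands of parameters, a physics simulation per evaluation), which is why convex-optimization machinery and a 15-minute laptop run are notable.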

Previous designs of nanophotonic structures were based on regular geometric patterns and the designer’s intuition; by contrast, the Stanford algorithm can design this structure in just 15 minutes on a laptop computer.

They have also used this algorithm to design a wide variety of other devices, such as the super-compact "Swiss cheese" structures that route light beams to different outputs based not on their color but on their mode, i.e., on how they look. For example, a light beam with a single lobe in its cross-section goes to one output, and a double-lobed beam (looking like two rivers flowing side by side) goes to the other output. Such a mode router is just as important as the bar code color splitter, since different modes are also used in optical communications to transmit information.

The algorithm is the key. It gives researchers a tool to create optical components to perform specific functions, and in many cases such components didn’t even exist before. "There’s no way to analytically design these kinds of devices," Piggott said.


More information: "Inverse design and implementation of a wavelength demultiplexing grating coupler." Scientific Reports 4, Article number: 7210 DOI: 10.1038/srep07210


Why has human progress ground to a halt? – Michael Hanlon – Aeon

We live in a golden age of technological, medical, scientific and social progress. Look at our computers! Look at our phones! Twenty years ago, the internet was a creaky machine for geeks. Now we can’t imagine life without it. We are on the verge of medical breakthroughs that would have seemed like magic only half a century ago: cloned organs, stem-cell therapies to repair our very DNA. Even now, life expectancy in some rich countries is improving by five hours a day. A day! Surely immortality, or something very like it, is just around the corner.

The notion that our 21st-century world is one of accelerating advances is so dominant that it seems churlish to challenge it. Almost every week we read about ‘new hopes’ for cancer sufferers, developments in the lab that might lead to new cures, talk of a new era of space tourism and super-jets that can fly round the world in a few hours. Yet a moment’s thought tells us that this vision of unparalleled innovation can’t be right, that many of these breathless reports of progress are in fact mere hype, speculation – even fantasy.

Yet there once was an age when speculation matched reality. It spluttered to a halt more than 40 years ago. Most of what has happened since has been merely incremental improvements upon what came before. That true age of innovation – I’ll call it the Golden Quarter – ran from approximately 1945 to 1971. Just about everything that defines the modern world either came about, or had its seeds sown, during this time. The Pill. Electronics. Computers and the birth of the internet. Nuclear power. Television. Antibiotics. Space travel. Civil rights.

There is more. Feminism. Teenagers. The Green Revolution in agriculture. Decolonisation. Popular music. Mass aviation. The birth of the gay rights movement. Cheap, reliable and safe automobiles. High-speed trains. We put a man on the Moon, sent a probe to Mars, beat smallpox and discovered the double-spiral key of life. The Golden Quarter was a unique period of less than a single human generation, a time when innovation appeared to be running on a mix of dragster fuel and dilithium crystals.

Today, progress is defined almost entirely by consumer-driven, often banal improvements in information technology. The US economist Tyler Cowen, in his essay The Great Stagnation (2011), argues that, in the US at least, a technological plateau has been reached. Sure, our phones are great, but that’s not the same as being able to fly across the Atlantic in eight hours or eliminating smallpox. As the US technologist Peter Thiel once put it: ‘We wanted flying cars, we got 140 characters.’

Economists describe this extraordinary period in terms of increases in wealth. After the Second World War came a quarter-century boom; GDP-per-head in the US and Europe rocketed. New industrial powerhouses arose from the ashes of Japan. Germany experienced its Wirtschaftswunder. Even the Communist world got richer. This growth has been attributed to massive postwar government stimulus plus a happy nexus of low fuel prices, population growth and high Cold War military spending.

But alongside this was that extraordinary burst of human ingenuity and societal change. This is commented upon less often, perhaps because it is so obvious, or maybe it is seen as a simple consequence of the economics. We saw the biggest advances in science and technology: if you were a biologist, physicist or materials scientist, there was no better time to be working. But we also saw a shift in social attitudes every bit as profound. In even the most enlightened societies before 1945, attitudes to race, sexuality and women’s rights were what we would now consider antediluvian. By 1971, those old prejudices were on the back foot. Simply put, the world had changed.

But surely progress today is real? Well, take a look around. Look up and the airliners you see are basically updated versions of the ones flying in the 1960s – slightly quieter Tristars with better avionics. In 1971, a regular airliner took eight hours to fly from London to New York; it still does. And in 1971, there was one airliner that could do the trip in three hours. Now, Concorde is dead. Our cars are faster, safer and use less fuel than they did in 1971, but there has been no paradigm shift.

And yes, we are living longer, but this has disappointingly little to do with any recent breakthroughs. Since 1970, the US Federal Government has spent more than $100 billion in what President Richard Nixon dubbed the ‘War on Cancer’. Far more has been spent globally, with most wealthy nations boasting well-funded cancer‑research bodies. Despite these billions of investment, this war has been a spectacular failure. In the US, the death rates for all kinds of cancer dropped by only 5 per cent in the period 1950-2005, according to the National Center for Health Statistics. Even if you strip out confounding variables such as age (more people are living long enough to get cancer) and better diagnosis, the blunt fact is that, with most kinds of cancer, your chances in 2014 are not much better than they were in 1974. In many cases, your treatment will be pretty much the same.


For the past 20 years, as a science writer, I have covered such extraordinary medical advances as gene therapy, cloned replacement organs, stem-cell therapy, life-extension technologies, the promised spin-offs from genomics and tailored medicine. None of these new treatments is yet routinely available. The paralyzed still cannot walk, the blind still cannot see. The human genome was decoded (one post-Golden Quarter triumph) nearly 15 years ago and we’re still waiting to see the benefits that, at the time, were confidently asserted to be ‘a decade away’. We still have no real idea how to treat chronic addiction or dementia. The recent history of psychiatric medicine is, according to one eminent British psychiatrist I spoke to, ‘the history of ever-better placebos’. And most recent advances in longevity have come about by the simple expedient of getting people to give up smoking, eat better, and take drugs to control blood pressure.

There has been no new Green Revolution. We still drive steel cars powered by burning petroleum spirit or, worse, diesel. There has been no new materials revolution since the Golden Quarter’s advances in plastics, semi-conductors, new alloys and composite materials. After the dizzying breakthroughs of the early- to mid-20th century, physics seems (Higgs boson aside) to have ground to a halt. String Theory is apparently our best hope of reconciling Albert Einstein with the Quantum world, but as yet, no one has any idea if it is even testable. And nobody has been to the Moon for 42 years.

Why has progress stopped? Why, for that matter, did it start when it did, in the dying embers of the Second World War?

One explanation is that the Golden Age was the simple result of economic growth and technological spinoffs from the Second World War. It is certainly true that the war sped the development of several weaponisable technologies and medical advances. The Apollo space programme probably could not have happened when it did without the aerospace engineer Wernher von Braun and the V-2 ballistic missile. But penicillin, the jet engine and even the nuclear bomb were on the drawing board before the first shots were fired. They would have happened anyway.

Conflict spurs innovation, and the Cold War played its part – we would never have got to the Moon without it. But someone has to pay for everything. The economic boom came to an end in the 1970s with the collapse of the 1944 Bretton Woods trading agreements and the oil shocks. So did the great age of innovation. Case closed, you might say.

And yet, something doesn’t quite fit. The 1970s recession was temporary: we came out of it soon enough. What’s more, in terms of Gross World Product, the world is between two and three times richer now than it was then. There is more than enough money for a new Apollo, a new Concorde and a new Green Revolution. So if rapid economic growth drove innovation in the 1950s and ’60s, why has it not done so since?

In The Great Stagnation, Cowen argues that progress ground to a halt because the ‘low-hanging fruit’ had been plucked off. These fruits include the cultivation of unused land, mass education, and the capitalisation by technologists of the scientific breakthroughs made in the 19th century. It is possible that the advances we saw in the period 1945-1970 were similarly quick wins, and that further progress is much harder. Going from the prop-airliners of the 1930s to the jets of the 1960s was, perhaps, just easier than going from today’s aircraft to something much better.

But history suggests that this explanation is fanciful. During periods of technological and scientific expansion, it has often seemed that a plateau has been reached, only for a new discovery to shatter old paradigms completely. The most famous example was when, in 1900, Lord Kelvin declared physics to be more or less over, just a few years before Einstein proved him comprehensively wrong. As late as the turn of the 20th century, it was still unclear how powered, heavier-than-air aircraft would develop, with several competing theories left floundering in the wake of the Wright brothers’ triumph (which no one saw coming).

Lack of money, then, is not the reason that innovation has stalled. What we do with our money might be, however. Capitalism was once the great engine of progress. It was capitalism in the 18th and 19th centuries that built roads and railways, steam engines and telegraphs (another golden era). Capital drove the industrial revolution.

Now, wealth is concentrated in the hands of a tiny elite. A report by Credit Suisse this October found that the richest 1 per cent of humans own half the world’s assets. That has consequences. Firstly, there is a lot more for the hyper-rich to spend their money on today than there was in the golden age of philanthropy in the 19th century. The superyachts, fast cars, private jets and other gewgaws of Planet Rich simply did not exist when people such as Andrew Carnegie walked the earth and, though they are no doubt nice to have, these fripperies don’t much advance the frontiers of knowledge. Furthermore, as the French economist Thomas Piketty pointed out in Capital (2014), money now begets money more than at any time in recent history. When wealth accumulates so spectacularly by doing nothing, there is less impetus to invest in genuine innovation.


During the Golden Quarter, inequality in the world’s economic powerhouses was, remarkably, declining. In the UK, that trend levelled off a few years later, to reach a historic low point in 1977. Is it possible that there could be some relationship between equality and innovation? Here’s a sketch of how that might work.

As success comes to be defined by the amount of money one can generate in the very short term, progress is in turn defined not by making things better, but by rendering them obsolete as rapidly as possible so that the next iteration of phones, cars or operating systems can be sold to a willing market.

In particular, when share prices are almost entirely dependent on growth (as opposed to market share or profit), built-in obsolescence becomes an important driver of ‘innovation’. Half a century ago, makers of telephones, TVs and cars prospered by building products that their buyers knew (or at least believed) would last for many years. No one sells a smartphone on that basis today; the new ideal is to render your own products obsolete as fast as possible. Thus the purpose of the iPhone 6 is not to be better than the iPhone 5, but to make aspirational people buy a new iPhone (and feel better for doing so). In a very unequal society, aspiration becomes a powerful force. This is new, and the paradoxical result is that true innovation, as opposed to its marketing proxy, is stymied. In the 1960s, venture capital was willing to take risks, particularly in the emerging electronic technologies. Now it is more conservative, funding start-ups that offer incremental improvements on what has gone before.

But there is more to it than inequality and the failure of capital.

During the Golden Quarter, we saw a boom in public spending on research and innovation. The taxpayers of Europe, the US and elsewhere replaced the great 19th‑century venture capitalists. And so we find that nearly all the advances of this period came either from tax-funded universities or from popular movements. The first electronic computers came not from the labs of IBM but from the universities of Manchester and Pennsylvania. (Even the 19th-century analytical engine of Charles Babbage was directly funded by the British government.) The early internet came out of the University of California, not Bell or Xerox. Later on, the world wide web arose not from Apple or Microsoft but from CERN, a wholly public institution. In short, the great advances in medicine, materials, aviation and spaceflight were nearly all pump-primed by public investment. But since the 1970s, an assumption has been made that the private sector is the best place to innovate.

The story of the past four decades might seem to cast doubt on that belief. And yet we cannot pin the stagnation of ingenuity on a decline in public funding. Tax spending on research and development has, in general, increased in real and relative terms in most industrialised nations even since the end of the Golden Quarter. There must be another reason why this increased investment is not paying more dividends.

Could it be that the missing part of the jigsaw is our attitude towards risk? Nothing ventured, nothing gained, as the saying goes. Many of the achievements of the Golden Quarter just wouldn’t be attempted now. The assault on smallpox, spearheaded by a worldwide vaccination campaign, probably killed several thousand people, though it saved tens of millions more. In the 1960s, new medicines were rushed to market. Not all of them worked and a few (thalidomide) had disastrous consequences. But the overall result was a medical boom that brought huge benefits to millions. Today, this is impossible.

The time for a new drug candidate to gain approval in the US rose from less than eight years in the 1960s to nearly 13 years by the 1990s. Many promising new treatments now take 20 years or more to reach the market. In 2011, several medical charities and research institutes in the UK accused EU-driven clinical regulations of ‘stifling medical advances’. It would not be an exaggeration to say that people are dying in the cause of making medicine safer.

Risk-aversion has become a potent weapon in the war against progress on other fronts. In 1992, the Swiss genetic engineer Ingo Potrykus developed a variety of rice in which the grain, rather than the leaves, contains a large concentration of Vitamin A. Deficiency in this vitamin causes blindness and death among hundreds of thousands of people every year in the developing world. And yet, thanks to a well-funded fear-mongering campaign by anti-GM fundamentalists, the world has not seen the benefits of this invention.


In the energy sector, civilian nuclear technology was hobbled by a series of high-profile ‘disasters’, including Three Mile Island (which killed no one) and Chernobyl (which killed only dozens). These incidents caused a global hiatus in research that could, by now, have given us safe, cheap and low-carbon energy. The climate change crisis, which might kill millions, is one of the prices we are paying for 40 years of risk-aversion.

Apollo almost certainly couldn’t happen today. That’s not because people aren’t interested in going to the Moon any more, but because the risk – calculated at a couple-of-per-cent chance of astronauts dying – would be unacceptable. Boeing took a huge risk when it developed the 747, an extraordinary 1960s machine that went from drawing board to flight in under five years. Its modern equivalent, the Airbus A380 (only slightly larger and slightly slower), first flew in 2005 – 15 years after the project go-ahead. Scientists and technologists were generally celebrated 50 years ago, when people remembered what the world was like before penicillin, vaccination, modern dentistry, affordable cars and TV. Now, we are distrustful and suspicious – we have forgotten just how dreadful the world was pre-Golden Quarter.

Risk played its part, too, in the massive postwar shift in social attitudes. People, often the young, were prepared to take huge, physical risks to right the wrongs of the pre-war world. The early civil rights and anti-war protestors faced tear gas or worse. In the 1960s, feminists faced social ridicule, media opprobrium and violent hostility. Now, mirroring the incremental changes seen in technology, social progress all too often finds itself down the blind alleyways of political correctness. Student bodies used to be hotbeds of dissent, even revolution; today’s hyper-conformist youth is more interested in policing language and stifling debate when it counters the prevailing wisdom. Forty years ago a burgeoning media allowed dissent to flower. Today’s very different social media seems, despite democratic appearances, to be enforcing a climate of timidity and encouraging groupthink.

Does any of this really matter? So what if the white heat of technological progress is cooling off a bit? The world is, in general, far safer, healthier, wealthier and nicer than it has ever been. The recent past was grim; the distant past disgusting. As Steven Pinker and others have argued, levels of violence in most human societies had been declining since well before the Golden Quarter and have continued to decline since.

We are living longer. Civil rights have become so entrenched that gay marriage is being legalised across the world and any old-style racist thinking is met with widespread revulsion. The world is better in 2014 than it was in 1971.


And yes, we have seen some impressive technological advances. The modern internet is a wonder, more impressive in many ways than Apollo. We might have lost Concorde but you can fly across the Atlantic for a couple of days’ wages – remarkable. Sci-fi visions of the future often had improbable spacecraft and flying cars but, even in Blade Runner’s Los Angeles of 2019, Rick Deckard had to use a payphone to call Rachael.

But it could have been so much better. If the pace of change had continued, we could be living in a world where Alzheimer’s was treatable, where clean nuclear power had ended the threat of climate change, where the brilliance of genetics was used to bring the benefits of cheap and healthy food to the bottom billion, and where cancer really was on the back foot. Forget colonies on the Moon; if the Golden Quarter had become the Golden Century, the battery in your magic smartphone might even last more than a day.

3 December 2014


Why OLED lighting will soon shine on you – CNET

The flat-panel lighting tech is for sale at Home Depot starting at $200, providing a new energy-efficient alternative to LED lights.

Acuity Brands’ Chalina, with five replaceable OLED panels, costs $300 at Home Depot. Acuity Brands

Undecided about whether to buy LED-based lights instead of compact fluorescent bulbs? Get ready to have some more uncertainty in your life, because another new lighting technology has just arrived: OLED.

Where LED (light-emitting diode) lighting uses small, intensely bright sources of light, which are typically made to look like traditional light bulbs, OLED (organic light-emitting diode) lighting uses flat, dimmer sources of light, essentially resulting in a glowing square or rectangle. Steady advances in manufacturing technology have made OLEDs bright and long-lived enough to use, and now they’re going mainstream: Acuity Brands, whose $2 billion in annual sales make it the largest lighting company in North America, is now selling OLED light fixtures in Home Depot.

Because OLED panels are not piercingly bright, they can be mounted in fixtures seen directly by the eye; there’s no need for reflectors or diffusers to cut the glare. The approach also opens new options for lighting designs.

At Home Depot, Acuity is selling two fixtures, each in configurations that can be suspended or that can be mounted directly to a wall or ceiling. The $300 Chalina is like a four-petal flower, with four square OLED panels arranged around a central one. The Aedan uses two elongated panels facing opposite directions and costs $200. The company also offers a variety of OLED fixtures for commercial customers.

"OLED technology is at a premium relative to LED, but there are superior lighting quality benefits," said Jeannine Fisher Wang, director of business development and marketing for Acuity’s OLED group. "The overall design and construction of these luminaires is very high quality, reflective of the superior nature of the OLED light source."


The Chalina has an output of 345 lumens and power consumption of 8 watts; the Aedan consumes 5 watts, and each of its two panels produces 68 lumens. For comparison, an old-style 40-watt incandescent bulb can produce about 450 lumens, and a Cree LED bulb can generate 1,600 lumens with 18 watts. The OLED panels can be replaced.
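As a back-of-the-envelope comparison, the figures above can be converted into luminous efficacy (lumens per watt), the standard measure of lighting efficiency. This is just a sketch using the article's numbers; the helper function name is our own:

```python
def efficacy(lumens: float, watts: float) -> float:
    """Luminous efficacy in lumens per watt."""
    return lumens / watts

# Figures as quoted in the article.
sources = {
    "Chalina (OLED)": efficacy(345, 8),       # ~43 lm/W
    "Aedan (OLED)": efficacy(2 * 68, 5),      # two 68-lumen panels, ~27 lm/W
    "40 W incandescent": efficacy(450, 40),   # ~11 lm/W
    "Cree LED bulb": efficacy(1600, 18),      # ~89 lm/W
}

for name, lm_per_w in sources.items():
    print(f"{name}: {lm_per_w:.0f} lm/W")
```

On these numbers, the OLED fixtures sit well above incandescent bulbs but still noticeably below a current LED bulb, which is consistent with Wang's later remark that OLED is "a bit under the efficacy requirements" of emerging standards.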

OLED lighting elements have a lifespan of about 30,000 to 40,000 hours of use — a bit less than LEDs that can reach 50,000 hours, but still more than quadruple what compact fluorescent bulbs offer.

Acuity faces an education challenge, Wang acknowledged, since consumers generally haven’t heard of OLED lighting. "That is definitely an area where we’re working," she said. She watched people seeing the OLED products when they first arrived at Home Depot. "It was their first experience of OLED. People stopped in their tracks when they saw the lights," she said.

Jeannine Fisher Wang, Acuity Brands’ OLED marketing leader. Acuity Brands

Dominance to come?

OLED will become mainstream, predicts Darice Liu, a spokeswoman for Universal Display, a 144-employee company founded in 1994 that licenses hundreds of OLED-related patents to companies commercializing the technology.

"We believe that OLED lighting has the potential to dominate many of the residential and commercial market applications," Liu said.

Energy efficiency and new designs will provide the impetus, Liu predicted, with OLED and LED lighting coexisting because of different advantages.

"Inherent energy efficiency advantages are a key benefit in the lighting market, but there is the potential for OLEDs to be transformational as well. OLEDs will present lighting products in a new form factor, which will expand the design possibilities and change the way we use light in many environments," she said.

That’s what Aurora Lighting Design, a lighting design firm in Chicago, evidently hopes to accomplish with offices redesigned using Acuity’s Trilia OLED fixtures. Dozens of panels are arranged in geometric but somewhat organic patterns across the ceiling, in a design suited to both commercial and residential use.

OLED lighting is showing up in more portable forms, too. Alkilu Lighting is taking preorders for its $50 free-standing LeafLit, whose battery provides 20 hours of light, the $50 DreamLit night light and the $40 BookLit reading light.

Next up: Colors and lower costs

Prices might be higher now, but they’ll drop in coming years, Wang said.

"We’re looking at the cost of OLED being maybe a tenth of what it is today in about five years," she said, referring to the price of the panel components themselves. Even as entry-level OLED lighting fixture prices drop, though, some products will continue to come with premium pricing as OLED lighting makers embrace the interactive element of the nascent LED lighting industry. There, smartphone apps and smart-home devices add new controls, new costs and new profit possibilities to the lighting market.

Alkilu Lighting’s $60 TripLit is a battery-powered OLED light that can be propped up or hung from a built-in hook. Alkilu Lighting

"As technology integrates into homes, there are advances all over the place, like security systems and thermostats," Wang said. "People are expecting a lot more interaction and integration with other devices, like the ability to control things remotely when people aren’t at home."

Another premium option that will arrive is colored OLED panels, though the cost is too high and color saturation too low right now. These will provide light whose color can be changed according to mood, time of day or other factors.

"We definitely see opportunities for that, not only in the consumer arena but commercially as well," Wang said. "There’s huge interest — corporate interiors, healthcare environments, education facilities. There are a lot of potential applications where color-tuning makes sense."

Regulatory help

She expects governmental regulations could help push OLED along with LED lighting. Specifically, Title 24, an energy standard in California, will mean the disappearance of older lighting technologies for new homes.

"It will pretty much obsolete the use of incandescent lighting. It may even obsolete a lot of compact fluorescent, particularly in low wattages where CFL efficacy isn’t that high," Wang said. It’s just one state, but, "Once California does something in energy standards, other states tend to follow suit."

Today OLED "is a bit under the efficacy requirements" of lumens per watt, she said, "but moving forward, the standard will probably address all lighting technologies through some definition of high efficacy pertinent to particular technologies."

OLED lighting’s biggest push will come with lower prices, most likely, though.

"As the cost of OLED technology drops, you have the opportunity to play in the commodity product market," Wang said.


Radar reveals two new rooms in Tutankhamun tomb – BBC News

From the section Middle East


The whereabouts of Nefertiti’s remains are a mystery

The Egyptian queen Nefertiti could be buried in two newly discovered rooms in King Tutankhamun’s tomb, according to a British archaeologist.

Nicholas Reeves says radar scanning has revealed two extra rooms hidden in the walls of the tomb.

Egypt gave Dr Reeves the go-ahead to use the non-invasive radar to test his theory that Nefertiti’s undiscovered remains were hidden in one of them.

She was queen of Egypt during the 14th century BC.

Dr Reeves believes the remains of Tutankhamun, who died about 3,300 years ago aged 19, may have been rushed into an outer chamber of what was originally Nefertiti’s tomb.

The remains of Tutankhamun, who may have been Nefertiti’s son, were found in 1922.

Signs of a portal

Dr Reeves developed his theory after a Spanish company of artistic and preservation specialists, Factum Arte, were commissioned to produce detailed scans of Tutankhamun’s tomb.

The scans were then used to produce a facsimile of the 3,300-year-old tomb, installed near the original site in the Valley of the Kings in Luxor.


While assessing the scans last February, Dr Reeves spotted what he believed were marks indicating where two doorways used to be. The archaeologist from the University of Arizona says he believes Nefertiti may lie inside.

Tutankhamun’s tomb was the most intact ever discovered in Egypt. Close to 2,000 objects were found inside.

But its layout has been a puzzle for some time – in particular, why it was smaller than those of other kings’ tombs.

Dr Reeves believes there are clues in the design of the tomb that indicate it was designed to store the remains of a queen, not a king. His theory has yet to be peer-reviewed and leading Egyptologists have urged caution over the conclusion.

A new Arctic discovery ‘challenges everything we thought’ about dinosaurs

When we think about dinosaurs, we mostly imagine towering creatures pushing through jungle, surrounded by lush, tropical foliage.

But researchers looking further afield have discovered remains from these creatures farther and farther from the tropics and temperate zones, into the polar regions.

A new discovery shows that dinosaurs lived in environments so harsh they may have been very different creatures than we once thought.

On September 22, researchers working with the University of Alaska Museum of the North published a paper describing the discovery of a 30-foot-long duck-billed dinosaur that lived in the Arctic at the very top of Alaska, the farthest north of any known dinosaur species.

That a previously unknown species lived in a place with snowy, icy winters and four sunless months each year, an environment we didn’t know dinosaurs could survive, suggests they may have been tougher, hardier and more diverse than previously thought.

This new species is in the hadrosaur family and is named Ugrunaaluk kuukpikensis, meaning "ancient grazer" in the language of the local Alaskan Native Iñupiaq culture.

"It had crests along its back like Godzilla," one of its discoverers, Florida State University biological sciences professor Gregory Erickson, told The New York Times.

A painting of Ugrunaaluk kuukpikensis, the new species of duck-billed dinosaur, illustrating a scene from ancient Alaska during the Cretaceous Period. James Havens

Along with its Godzilla-like crests and scales, scientists say, the dinosaur was approximately 6 or 7 feet tall at the hip and could walk on all fours, even though its back legs were much longer than its front legs.

Its mouth was filled with hundreds of grinding teeth that would have helped it tear through coarse vegetation, which might have been all that was available in the Arctic winters. The press release notes that Erickson and co-discoverer Pat Druckenmiller previously have shown that the Alaskan Arctic was covered in a type of polar forest back in the Cretaceous Period, when this dinosaur would have roamed the land, approximately 69 million years ago.

It was warmer then than now, but winters would still have been snowy and mostly lifeless, and the Arctic would have been dark for months at a time in winter.

A paleontologist searches for dinosaur bones. The Liscomb Bone Bed crops out for over 200 feet along the base of this bluff. UAMN photo by Pat Druckenmiller

The bones of U. kuukpikensis came from an area known as the Liscomb Bone Bed along the Colville River. This area, in a region known as the North Slope of Alaska, is a tough place to reach. Researchers told The Times that a journey there involves a 500-mile drive from Fairbanks before boarding a plane with "balloon tires." They have to navigate the area itself in rubber boats. But all that effort is worth it: the site is a treasure trove where bones from three species of dinosaur have been found.

While most of those fossil records are still incomplete, there are more than 6,000 bones from the newly discovered grazer, providing what the researchers describe as "multiple elements of every single bone in the body" — enough to describe a new dinosaur.

"The finding of dinosaurs this far north challenges everything we thought about a dinosaur’s physiology," Erickson said in the press release. "It creates this natural question. How did they survive up here?"

The research was published in the international journal Acta Palaeontologica Polonica.

