Going Out of Print


The new generation of e-book reading gadgets will transform the troubled book, magazine, and newspaper industries. But it’s uncertain what that transformation will look like.

By Wade Roush

For serious readers, products like Amazon’s Kindle 2, Barnes and Noble’s Nook, and Sony’s Daily Edition are a godsend. It’s not just that these electronic reading devices are handy portals to hundreds of thousands of trade books, textbooks, public-domain works, and best-sellers, all of which can be wirelessly downloaded at a moment’s notice, and to scores of magazines and newspapers, which show up on subscribers’ devices automatically. They’re also giving adventurous authors and publishers new ways to organize and market their creations. A California startup called Vook, for example, has begun to package cookbooks, workout manuals, and even novels with illustrative video clips, and it’s selling these hybrids of video and text to iPhone, iPad, and iPod Touch owners through Apple’s iTunes Store.

Unfortunately, you can’t get away with charging hardcover prices for an e-book, which makes it hard to see how traditional publishers will profit in a future that’s largely digital. As a result, book publishers are facing a painful and tumultuous time as they attempt to adapt to the emerging e-book technologies. The Kindle, the iPad, and their ilk will force upon print-centric publishers what the Internet, file sharing, and the iPod forced upon the CD-centric music conglomerates starting around 1999–namely, waves of cost cutting and a search for new business models.

Publishers are lucky in one way: the reckoning could have come much sooner. From 1999 to 2001, I worked for NuvoMedia, a Silicon Valley startup that developed a device called the Rocket eBook. The Rocket and its main rival at the time, the Softbook Reader from Softbook Press, prefigured the current generation of e-book devices. Owners could shop for books from major publishers online, download the publications to their PCs, and then transfer them to the portable devices, which had monochrome LCD screens that showed one page of text at a time.

But three factors conspired to kill these first-generation e-readers. First, book publishers, fearing that digital sales would cannibalize print sales, offered only a limited catalogue of books in electronic form and charged nearly as much for Rocket and Softbook editions as they did for hardcovers. Not surprisingly, consumers demurred, which in turn discouraged publishers from offering more titles digitally. Second, the technology wasn’t quite ready for mass adoption. The devices weren’t small or thin enough to be truly portable, and the book-buying process was convoluted. Third, NuvoMedia and Softbook Press were acquired and then combined by a larger company, Gemstar, that was distracted by other issues and let its new e-book division languish, eventually closing it down.

Business conditions are very different today. For one thing, there are more big players with an interest in seeing the e-book business blossom, including Sony, Amazon, Barnes and Noble, and now Apple. Using their pull with publishers, these companies have assembled huge catalogues of e-books–Amazon has nearly half a million commercial titles–and they’ve kept prices lower, in the $10-to-$15 range for new trade books.

Just as important, mobile computing technology has improved drastically. Cheap 3G data access is the biggest advance. Now that readers can browse, purchase, and download e-books and periodicals directly on their devices, they can access new material almost instantaneously, without having to be near a desktop or laptop computer with an Internet connection. Having owned a Kindle 2 since May 2009, I can testify to the allure of this feature: I’ve bought a couple of dozen more e-books for my Kindle than I would ever have ordered from Amazon in print form in the same period.

Today’s wireless e-reading devices fall into two groups, each with its strong points. The “electronic ink” devices all use black-and-white electrophoretic displays manufactured by Prime View International. (The Taiwanese display maker acquired the company that developed the technology, MIT spinoff E Ink, in 2009.) The $259 Kindle 2 is the best-known of these products, but Barnes and Noble’s identically priced Nook and the $400 Sony Reader Daily Edition offer similar functions. The Kindle DX ($489) and the forthcoming Plastic Logic Que proReader (expected this summer, starting at $649) have larger screens and are intended mainly for reading textbooks and business documents. The Prime View screens on these devices depend on reflected ambient light, which gives them two advantages: they’re easier on the eyes than backlit LCD screens, and they use far less power. Their batteries can last for days, and sometimes weeks, between charges.

Article Continues -> http://www.technologyreview.com/computing/25117/

Book: 1001 Building Forms

Posted by Lisa Smith

Not new, but definitely notable: Siteless: 1001 Building Forms, by architect François Blanciak, has been available to the academic architecture community for a while now, but after rediscovering it this morning on Jacket Mechanical and Lined and Unlined, we realized it has wide resonance and wanted to share it here.

Though the book is meant as a catalog of siteless building forms, all hand drawn from the same perspective, it is relevant for formal thinkers at any scale, from sculptors to industrial designers. In particular, we’re wondering how this book might mix with interaction design and tangible interfaces: What would the pigtail towers do when combined with some flex sensors, for example?


If nothing else, the book sidesteps the often opaque written dialogue of architecture (just sayin’) and presents itself visually, accessible to a wide audience.

Order from Amazon.


http://www.core77.com/blog/object_culture/1001_building_forms__16280.asp

Self-published books on Lulu to be available on iPad

By Dean Takahashi

Electronic book publisher Lulu told its top authors over the weekend that their electronic books can be made available on Apple’s new iBookstore that is debuting with the launch of the iPad on April 3.

This is one more way that indie books by self-published authors can appear on Apple’s iPad, a tablet computer expected to be one of the hot gadgets of the year. The self-publishing company said that authors whose work is in Lulu’s format can use Lulu to publish their e-books on the iPad. Lulu will convert the books from its own format into the ePub format at no cost. Authors will receive proceeds after Lulu and Apple take their cut.

Lulu said it would automatically convert books for submission to the iBookstore unless authors opt out of having their books published on the iPad. Smashwords, another self-publishing e-book company, also said over the weekend that its books can be made available on the iBookstore. Lulu supports the ePub and PDF formats, with or without digital rights management.

http://digital.venturebeat.com/2010/03/29/self-published-books-on-lulu-to-be-available-on-ipad/

High-Speed Camera Scans Books in Seconds


By Charlie Sorrel

The Ishikawa Komuro Laboratory in Tokyo is better known for robot hands that can dribble and catch balls and spin pencils between their fingers. Now, two of its researchers have taken this speedy sensing tech and applied it to the ripping of paper books.

Books are different from other kinds of media, like music and movies — it’s very hard to get them into a computer. There is no equivalent of CD or DVD rippers like iTunes or Handbrake. This not only makes piracy laborious, it also stops you from turning your own books into e-books.

This high-speed scanner changes that, at least if you have the room and tech skills to build one. By using a high-speed camera that shoots at 500 frames per second, lab workers Takashi Nakashima and Yoshihiro Watanabe can scan a 200-page book in under a minute. You just hold the book under the camera and flip through the pages as if shuffling a deck of cards. The camera records the images and uses processing power to turn the odd-shaped pictures into flat, rectangular pages on which regular OCR (optical character recognition) can be performed.
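For readers curious what the flatten-and-OCR step might look like in code, here is a minimal Python sketch using OpenCV and pytesseract. It is not the lab’s actual pipeline (which corrects full 3D page deformation from 500-fps footage); it assumes the four page corners have already been located in a frame, and the function name is our own.

import cv2
import numpy as np
import pytesseract

def rectify_and_ocr(frame, corners, out_size=(1000, 1400)):
    """Warp a skewed page capture onto a flat rectangle and run OCR.

    frame    -- BGR image grabbed from the camera (np.ndarray)
    corners  -- four (x, y) page corners, ordered TL, TR, BR, BL
    out_size -- (width, height) of the flattened page in pixels
    """
    w, h = out_size
    src = np.float32(corners)
    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    # A homography maps the tilted page quadrilateral onto a flat rectangle.
    M = cv2.getPerspectiveTransform(src, dst)
    flat = cv2.warpPerspective(frame, M, (w, h))
    # Binarize to help the OCR engine, then extract the text.
    gray = cv2.cvtColor(flat, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return pytesseract.image_to_string(binary)

A real high-speed rig would also have to pick the sharpest frame for each page and undo the curl of a mid-flip page, but the rectify-then-OCR core is the same basic move.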

The technique is unlikely to be coming to the home anytime soon (although ripping a book by flipping it in front of your notebook’s webcam would be pretty awesome), but it could certainly speed up large scanning efforts like Google’s book project.

Superfast Scanner Lets You Digitize a Book By Rapidly Flipping Pages [IEEE Spectrum]

http://www.wired.com/gadgetlab/2010/03/high-speed-camera-scans-books-in-seconds/

Print: Applying Quantitative Analysis to Classic Lit


By Douglas McGray

If Google has its way, all of English literature will one day exist as searchable digital text. Franco Moretti, a Stanford English professor, wants to be ready for the deluge with new kinds of questions and new tools to answer them — things like computational linguistics, data mining, computer modeling, and network theory. Moretti is already famous in bookish circles for his data-centric approach to novels, which he graphs, maps, and charts. Until recently, though, he’s been able to crunch only a few novels at a time, doing all that quantitative stuff by hand. Now he’s going digital, building searchable databases of old books, working to write software that can mine for patterns. Instead of diving deep into a few beloved titles, Moretti aims to zip across the creative output of entire eras. He calls it distant reading, and if his new methods catch on, they could change the way we look at literary history.

Take one experiment. Moretti decided to test the idea that Victorian writers, through their choice of adjectives, might reveal their belief that moral qualities were indivisible from reality itself and that physical traits reflected a person’s virtue. So he assembled a database of 250 novels and sent the file to computer scientists at IBM’s Visual Communications Lab, who turned the books into a series of word clouds. “Boom! There were exactly the adjectives I had hoped would pop up!” he says. “Adjectives like strong, bright, fair, in which the physical and the moral blend.”
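As a rough illustration of the kind of tally behind that experiment, here is a toy Python sketch using NLTK’s part-of-speech tagger. It is not IBM’s word-cloud tool or Moretti’s database; the corpus path is hypothetical, and it simply counts adjectives across a folder of plain-text novels.

import glob
from collections import Counter
import nltk  # requires the 'punkt' and 'averaged_perceptron_tagger' data

def adjective_counts(corpus_glob="victorian_novels/*.txt"):
    # Count every token tagged as an adjective (JJ, JJR, JJS) across the corpus.
    counts = Counter()
    for path in glob.glob(corpus_glob):
        with open(path, encoding="utf-8") as f:
            tokens = nltk.word_tokenize(f.read().lower())
        counts.update(word for word, tag in nltk.pos_tag(tokens) if tag.startswith("JJ"))
    return counts

for word, n in adjective_counts().most_common(30):
    print(word, n)

The most frequent adjectives from a count like this are exactly the raw material a word cloud visualizes.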

For another project, he looked at the titles of 7,000 books in 18th- and 19th-century England and discovered a correlation between shorter titles and the growth of the book publishing industry. (Moretti theorizes that more concise titles made books easier to promote in a crowded marketplace.) He is also working with a programmer to test new software that can “read” terabytes of obscure, mostly unread fiction and classify the books by genre.
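The title-length finding is the kind of correlation that is straightforward to check once the metadata sits in a table. Below is a minimal sketch with pandas and NumPy, assuming a hypothetical CSV of publication years and titles; the file and column names are our own, not Moretti’s.

import numpy as np
import pandas as pd

books = pd.read_csv("titles_1700_1900.csv")  # hypothetical file: one row per book
books["title_words"] = books["title"].str.split().str.len()

# Average title length per year, plus a linear trend and an overall correlation.
by_year = books.groupby("year")["title_words"].mean()
slope, intercept = np.polyfit(by_year.index, by_year.values, 1)
r = np.corrcoef(books["year"], books["title_words"])[0, 1]

print(f"trend: {slope:+.3f} words per title per year, correlation r = {r:.2f}")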

“In 19th-century Britain, maybe 30,000 novels were published,” Moretti says. He is dying to analyze them all. It will be like peering through the first telescope, he says — surveying more literature at a glance than he could read in a lifetime. “We will get a sense,” he says, “of a much wider universe.”

http://www.wired.com/magazine/2009/11/pl_print

This Just Inbox: Infographic Posters from Visual Aid

Visual Aid (like The MacMillan Visual Dictionary’s kid brother) is a series of books containing loads of information that must be visualized to be understood. For example: the space race chronology, human anatomy, comparative sizes of spacecraft, swimming strokes and the like. Now, after the release of their 2nd book, Visual Aid: Stuff You’ve Forgotten, Things You Never Thought You Knew, and Lessons You Didn’t Quite Get Around to Learning, they’ve released all their graphics in poster form.

The posters are printed on 190gsm silk paper and range widely in size, from A4 to 60×40. Each one is presented exactly as it is in the book, but they take special printing requests like changing aspect ratios, switching to black and white, or swapping the background color.

Very cool and perfect for the design office, classroom, or home. Browse and buy here.

Follow the link for more samples – http://www.core77.com/blog/news/thist_just_inbox_infographic_posters_from_visual_aid_15001.asp

How SuperFreakonomics Gets Climate Engineering Wrong

The new book SuperFreakonomics neglects the real dangers of geoengineering.
By Kevin Bullis

The sequel to Freakonomics, the best-selling book that uses economics to uncover surprising facts about the world, came out today. SuperFreakonomics, cowritten by Steven Levitt, a professor of economics at the University of Chicago, and Stephen Dubner, a journalist, is an attempt to outdo the original, and it does this in part by taking on a huge, controversial, and very important topic–climate change.

Unfortunately, the authors’ solution to climate change, which they say is simple, cheap, and safe, is actually dangerous–a cure that could be worse than the disease. (This part of the book has already generated plenty of debate online.)

The authors set up their chapter on climate change as a challenge to global-warming orthodoxy–saying that “the movement to stop global warming has taken on the feel of a religion,” putting climate-change claims in the context of past errors by scientists, and suggesting that climate models are less reliable than risk models for financial institutions that failed in the recent waves of bank closures.

So it’s a little disorienting to discover that the chapter actually argues for the development of radical solutions to global warming. It argues that not enough has been done to curb greenhouse gas emissions and warns of catastrophic events like the melting of ice sheets in Greenland and Antarctica.

The solution that Levitt and Dubner put forward is geoengineering. More specifically, they advocate a scheme that would inject particles into the upper atmosphere to block a small percentage of incoming sunlight and so cool the earth–an idea that’s been around since at least the 1970s. The scheme would mimic the action of big volcanic eruptions, which also inject particles into the stratosphere and have been shown to have a cooling effect.

Historically, Levitt and Dubner say, the main problem with this idea has been that proposals for injecting the particles were too expensive. They add that there might be some sort of vague environmental concerns, but label them as religious objections, not practical, science-based ones. The “moralism and angst” of these environmentalists make it hard for them to see what the authors call a “fiendishly simple” and “startlingly cheap” solution to global warming. They then describe a scheme for delivering sulfur dioxide (which will form sulfate particles) to the stratosphere and declare that it would cost $250 million for the first year and $100 million a year thereafter, compared to $1.2 trillion a year for reducing carbon emissions. A bargain.

Other than dismissing the potential for damage to the ozone layer, the authors don’t talk about the real environmental concerns that come with sulfate injection to the stratosphere. But there are serious and specific concerns.

Scientists studying the impact of a fairly recent, large volcanic eruption–the Mount Pinatubo explosion in the Philippines in 1991–have found that not only did the layer of sulfates it produced cool the earth, it also led to a “huge change in precipitation,” says Gavin Schmidt, a climate scientist at the NASA Goddard Institute for Space Studies. By decreasing direct sunlight, the event cut down on evaporation, leading to the “lowest rainfall amount over land since 1948,” the earliest year that good records are available, says Kevin Trenberth, a climate scientist at the National Center for Atmospheric Research in Boulder, CO. The change in precipitation caused severe droughts that damaged crops and limited drinking water, he says. Schmidt says the potential for drought must be considered before any geoengineering is done. “What good does it do to save the Arctic if you cause the failure of the Indian monsoon on a regular basis?” he says. “That’s billions of people.”

The change in precipitation isn’t the only known adverse effect. Shading the earth does nothing about the levels of carbon dioxide in the air. This has some benefits–plants grow better with more carbon dioxide–but it also makes the ocean more acidic, which can destroy coral reefs around the world and prevent some shellfish and crustaceans from developing, cutting off an important source of food for fish and whales and, ultimately, for humans.

And then there are potential unanticipated consequences. Volcanoes inject sulfates into the stratosphere sporadically. No one knows what will happen if the sulfates become a permanent part of the stratosphere. It could very well be that major problems won’t become obvious until many years or decades into a sulfate injection project. Levitt and Dubner argue that we could simply stop if problems arise. But this could be disastrous. All of the warming that’s been prevented by the sulfates over the years would happen suddenly, far too fast for people to adapt.

If nothing is done to curb greenhouse gas emissions, the sulfate injection scheme will have to be kept up year after year, potentially for well over a hundred years, given the lifetime of carbon dioxide in the atmosphere. As concentrations of the gases mount, ever more sulfate will be needed to offset the warming effect, increasing costs. And the dangers of stopping the program–due to war or economic hardship or a shift in the political winds–would mount. The same holds true for another scheme the authors mention–cloud whitening, an approach that may not work and that could also lead to severely reduced precipitation over land. It is not, as they suggest, “geoengineering that the greenest green could love.”

Geoengineering by shading the earth is simply not an alternative to curbing greenhouse gas emissions. In some extreme case–the impending collapse of major ice sheets, or the realization that the world is warming far faster than anticipated–it might be used to buy a little time. But even this is a risky proposition, not just because of the environmental concerns, but because of political ones, since some countries would be harmed more than others. The authors point out–in passing–that one can “imagine the wars that might break out over who controls the dials,” that is, who selects how much the earth should be cooled. Oddly, they don’t seem to consider this a serious objection to geoengineering.

But although the authors fail to point out the significant hazards of shading the earth (let alone some annoying side effects, such as obscuring the view from ground telescopes and reducing the power output of some solar power systems), they may be right that geoengineering will prove necessary. They point out that changing people’s behavior is notoriously difficult, and that the uncertainty of climate predictions makes it particularly hard to set up and enforce government policies, especially those that require international agreements. For poor countries, the uncertain cost of climate change may seem small compared to the cost of forgoing cheap electricity, at least until cheap carbon sequestration or renewable energy is available.

Donald Johnston, the former secretary general for the Organisation for Economic Co-operation and Development (OECD), has said that political realities may make strong international emissions controls impossible: “I foresee a situation about 10 years from now where the world will be warming, the new targets for greenhouse gases set [at the December 2009 United Nations climate change meeting] in Copenhagen will be ignored by many big emitters as they have in the past, and desperation will force the world to consider reducing the penetration of the sun’s rays through geoengineering.”

If we reach that point, we’d better have a clear idea what geoengineering might entail, so we can choose the best methods and prepare for the inevitable bad side effects. That means research must be funded to create ever more sophisticated computer models of geoengineering and to run some small- and perhaps even large-scale experiments. Also, governments need to start talking about geoengineering policy. How do you decide–and who decides–how much to cool the earth? How do you decide how to reimburse people who suffer from negative side effects? How will lawsuits be handled? What’s to be done if a country decides to undertake geoengineering on its own?

This research and planning should be accompanied by continued efforts to reduce greenhouse gas emissions and, eventually, to start pulling carbon dioxide out of the atmosphere. The goal should be to shade the earth for as short a time as possible–or not at all. The only way to drive these changes is to be as clear as possible about the dangers of both global warming and geoengineering. That’s going to be a lot harder with Levitt and Dubner making geoengineering sound like a panacea.

http://www.technologyreview.com/blog/energy/24274/