Category Archives: Robots
The military’s tried nearly everything to stop insurgent bombs. It’s tried jamming their detonation signals. It’s tried scouring roads for them with metal detectors like beach bums searching for treasure. And it’s tried — lots of times — to put robots in the way of the boom. No dice: the bombs keep proliferating, largely because of how cheap they are to construct.
So it was only a matter of time, perhaps, before someone thought to put KITT on the job.
The Army’s latest scheme to stop homemade bombs is pretty much inspired by Knight Rider. A California company called 5D Robotics recently won a contract of unspecified amount to develop an interface with an “intelligent” unmanned car that will “effectively perform improvised explosive device defeat (IEDD) tasks remotely.”
Sure, this “mid-sized unmanned ground vehicle” won’t really be David Hasselhoff’s robotic pal. “We’re not making KITT,” 5D chief David Bruemmer, a good sport, clarifies for Danger Room. “I don’t think anyone that we’re talking to in the military is looking for a highly cognitive intelligence.” Details, details.
But if it works well, then (kind of like the Knight Rider car) 5D’s robot won’t be relying on its human operator very much. 5D’s writ requires it to outfit its car with “perception and hazard sensors, manipulator(s), and operator control unit(s)” that can detect, remotely, the signatures of different kinds of improvised explosive devices. The idea is to “emulat[e] the best heuristics of human sensor and neutralizer manipulation,” 5D’s contract announces, so the car knows when to veer out of the way of a bomb and when to blow it up, all while “minimizing operator attention demands.”
In other words, KITT will know what to do with a bomb without Hoff mussing his perm over it.
The military has used lots of robots to assist with explosive ordnance disposal, from the Talon (a bomb disposal system based around treads, a camera and a creepy-looking arm) to the WALL-E-like Small Unmanned Ground Vehicle. And there’s no shortage of sensors for detecting the different kinds of IED signatures out there, from the ammonium nitrate sniffers of Project Ursus to jammers that hunt down bomb detonation frequencies to old-fashioned metal detectors.
But a bomb-hunting car? Not even close. First, the military can’t even come up with a bomb sensor as good as a dog’s nose. And the larger the robot — or any vehicle, for that matter — the more likely it is to get mired in Afghanistan’s steep, craggy, unpaved terrain.
And if it needs to be said, giving a bomb-hunting car artificial intelligence is a step so far unimagined. The company claims that its “5D Behavior Engine™” provides exactly that: “intelligent instantaneous reactive responses to local environmental, sensor, and other data.” 5D brags that its A.I. is “inspired by biological systems where skill is ingrained through immediate interaction with the real-world,” eschewing behavioral models or GPS location systems to make its approach “much more robust in the face of uncertainty.” And it promises to work “through remote rugged terrain” that’s vexed other bomb-fighting robots.
Um, OK, if it actually works. In that case, the ersatz KITT won’t just know where the bombs are, it’ll figure out when to dodge them, when to blow them up and when to call in backup. Bruemmer says the human operator will still have an interface with the robot car, but the details are TBD: perhaps it’ll be a specially designed piece of hardware, although he says, “We’re 100 percent doing the app model — we are working on a Droid interface.”
Bruemmer declined to tell Danger Room how much his contract is worth or when he’ll deliver a prototype.
5D doesn’t specify which robot vehicle qualifies as “mid sized.” But it’ll be way bigger than, say, a 125-pound Talon. Bruemmer tells Danger Room that his meetings with the Army have him experimenting with a real, honest-to-goodness car: “We’re working with a 4,500-pound vehicle that’s very promising,” is all he’ll say about it. (Good luck keeping that thing out of the mire of Afghanistan’s unpaved roads.)
But there isn’t really a single robotic vehicle 5D envisions. Bruemmer prefers a “plug and play” suite of sensors and AI so that different vehicles could be tricked out into intelligent bomb hunters. “We’re not looking to focus only on one robot,” Bruemmer says. “Depending on the mission, there’ll be a need for a family of robots.”
Hear that, KITT? There’s hope for you yet.
by Edward Moyer
Google has been testing self-driving cars on roads in California, according to a report, and so far they’ve avoided everything but a minor fender bender, caused by a human-driven car.
The New York Times reports that seven test cars have traveled 1,000 miles without need for human intervention (a driver has been stationed behind the wheel just in case, accompanied by a technician to monitor the navigation system), and that they’ve covered more than 140,000 miles with the human chaperone stepping in only occasionally. One of the cars was even able to safely make its way down Lombard Street in San Francisco, the fabled “crookedest street in the world,” the Times says.
“Stanley,” devised by Sebastian Thrun and his team from Stanford, won the DARPA Grand Challenge in 2005.
(Credit: Stefanie Olsen/CNET)
Google’s robot car is equipped with artificial-intelligence software; a rotating sensor on its roof, which can scan more than 200 feet in all directions to create a 3D map of the car’s environs; a video camera mounted behind the windshield, which helps the navigation system spot pedestrians, bicyclists, and traffic lights; three radar devices on the front bumper, and one in the back; and a sensor on one of the wheels that allows the system to determine the car’s position on the 3D map, the Times says. The car also features a GPS device and a motion sensor. The car follows a route programmed into the GPS system, and it can be instructed to drive cautiously, or more aggressively.
Engineers say robot cars aren’t susceptible to drowsiness or drunk driving, and that eventually they might allow for more cars on the road, since they can drive closer to other vehicles, and for lower fuel consumption, since their safety would let them be built lighter, with less defensive armor, the Times says.
The man behind the project, Sebastian Thrun, a Google engineer and co-inventor of Google’s Street View mapping project, was also behind the autonomous auto that won the $2 million prize in the Defense Advanced Research Projects Agency’s 2005 DARPA Grand Challenge, a contest to see if a driverless vehicle could successfully navigate nearly 150 miles in the California desert.
The Google researchers said that at the moment they don’t have a plan for marketing the system, the Times says. Thrun is a promoter of the idea of robot cars making roads safer and helping to cut down on energy costs, as is Google co-founder Larry Page, the Times reports.
Berkeley Bionics’ eLEGS exoskeleton – By Ben Coxworth
At a press conference held this morning (Friday 10/8) in San Francisco, California’s Berkeley Bionics unveiled its eLEGS exoskeleton. The computer-controlled device is designed to be worn by paraplegics, providing the power and support to get them out of their wheelchairs, into a standing posture, and walking – albeit with the aid of crutches. The two formerly wheelchair-bound “test pilots” in attendance did indeed use eLEGS to walk across the stage, in a slow-but-steady gait similar to that of full-time crutch-users.
eLEGS is worn over the clothing (including the shoes), and people who are capable of transferring themselves out of their wheelchairs reportedly should be able to get in and out of the exoskeleton within one to two minutes. It can be adjusted to fit users between 5’2” and 6’4” (157 and 193 cm), weighing no more than 220 lbs (100 kg). Once they’re standing, the onboard computer utilizes sensors to observe the user’s gestures. It then determines what the user intends to do, based on those gestures, and assists them accordingly in real time.
The device weighs 45 lbs (20 kg) and has a battery life of about six hours under normal use. A maximum walking speed in excess of 2 mph (3.2 km/h) can be attained.
Not only should the device allow the paralyzed to walk, in a mechanical way, but it could also be used to retrain the muscles and nerve connections of people who have been rendered temporarily unable to do so.
Clinical trials are scheduled for early 2011, with a limited release in select American rehabilitation clinics within the second half of that year. Training will be provided for therapists, and patients will be able to apply to take part in the eLEGS gait training program. Farther down the road, Berkeley would like to see the product available for home users, so they could put it on in the morning and use it all day.
Berkeley Bionics is no stranger to exoskeletons. It already produces the Human Universal Load Carrier (HULC), which allows able-bodied soldiers to carry loads of up to 200 lbs (91 kg) over rugged terrain.
Crawling along: This illustration shows a robot designed to inspect transmission infrastructure while crawling along power lines. The first robot prototype will be tested this month, with in-field testing expected in 2014. Credit: Electric Power Research Institute
Researchers think the solar-powered device could survey aging electrical lines.
By Tyler Hamilton
A robot designed to crawl along tens of thousands of miles of transmission lines could help inspect North America’s vast and aging grid infrastructure without the need for manned helicopter inspections.
Researchers at the Electric Power Research Institute, an independent nonprofit research organization for the utility sector, have designed a 140-pound, six-foot-long prototype of a robot that they plan to test for the first time at an outdoor lab later this month. The device uses rollers to clamp onto and move along a line. It can maneuver past towers, known as pylons, using cables built into newer towers or retrofitted onto old ones. “There is nothing that does what it does; nothing that even tries,” said Andrew Phillips, director of power transmission research at the institute.
The rectangular robot, looking a bit like a scaled-down version of a solar car, is equipped with a high-definition camera and sensors that can detect overgrown trees. “It will do image analysis to see if there is something different with the structure compared to an earlier picture taken from the exact same spot,” says Phillips. Being able to remotely spot high-risk trees, which are the top cause of electrical outages, is important to utilities. The big August 2003 blackout was triggered by a poorly trimmed tree. “The images will be very high-definition, and we’ll be able to zoom in,” Phillips adds.
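The comparison Phillips describes, checking a new image against an earlier picture taken from the same spot, amounts to change detection. In its simplest possible form (a toy sketch of the idea, not EPRI's actual image-analysis method, which hasn't been published), it can be per-pixel differencing against a threshold, with grayscale images stored as flat lists of 0–255 intensities:

```python
def changed_fraction(before, after, threshold=30):
    """Fraction of pixels whose intensity changed by more than
    `threshold`, given two equal-sized grayscale images as
    row-major lists of 0-255 values."""
    if len(before) != len(after):
        raise ValueError("images must be the same size")
    changed = sum(1 for a, b in zip(before, after) if abs(a - b) > threshold)
    return changed / len(before)

def flag_structure(before, after, alert_level=0.05):
    # Alert if more than 5% of pixels changed markedly, e.g. a tree
    # limb now encroaching on the line of sight. The 5% level is an
    # arbitrary illustration, not a published figure.
    return changed_fraction(before, after) > alert_level
```

A real system would first have to register the two images precisely, which is why taking the picture "from the exact same spot" matters so much for this approach.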
The prototype robot will also make sure there are no faulty connections that can cause overheating, and listen for electromagnetic “noise” that might indicate other problems with equipment. It could also retrieve data from sensors that are already connected to equipment in the field but which normally rely on helicopter or ground visits to get the information.
Having the ability to look at the condition of equipment without physically being there will bring tremendous value to utilities, said Phillips. It’s too early to say how much the robot will cost, but Phillips figures it will be less than $500,000 each. “The expectation is that it’s going to be at least less than 70 percent of the cost (of using helicopters),” he says.
For years, one of the dreams of robot enthusiasts and researchers has been a single robot capable of performing a wide variety of tasks. But while single-purpose robots are everywhere, the general-purpose vision has remained pretty much a fantasy.
Now, however, groups of roboticists at 11 institutions around the world will get a chance to take part in a beta project (see video below) that could change that dynamic forever. On Tuesday, Willow Garage, a Menlo Park, Calif., robotics firm, said that in June it will offer each of the 11 teams a two-year loan of a Personal Robot 2 (PR2), a sophisticated machine that is fully programmable and that has two arms, a “rich sensor suite,” a mobile base, and 16 CPU cores. Also included is the free, open-source Robot Operating System (ROS) framework that controls the PR2 and that comes with software libraries for perception, navigation, and manipulation.
And for the first time, these researchers will have a chance not only to program a general-purpose robot but also to contribute the work they’ve done on Willow Garage’s open-source robotics platform to a wide community of researchers, a forward leap that could allow others to quickly advance the state of the art.
According to Willow Garage, the 11 PR2s are worth more than $4 million and will go to teams at 10 universities in Japan, Belgium, Germany, and the United States, as well as to researchers at Bosch, a giant tools and parts firm. Eric Berger, the co-director of the Willow Garage personal robotics platform program, said that the company received 78 serious applications and in the end chose the 11 finalists that will get the chance to work on a series of research and development initiatives, sharing their progress with the community as they go.
Ultimately, Willow Garage is betting its $4 million-plus worth of robots on the goals of fostering breakthroughs in personal robotics; building the open-source robotics community; developing new productivity tools and components; creating never-before-imagined applications for personal, general-purpose robots; and accelerating the progress of new robotics development by allowing members of the community to see and build on what their peers have already achieved.
‘Are you kidding?’
Asked if the Willow Garage initiative is an important step forward for the field of robotics, former NASA astronaut, Singularity University faculty member, and robotics expert Dan Barry said, “Are you kidding? It’s phenomenal that they’re giving away these state-of-the-art general-purpose autonomous robots.”
And it’s vitally important to put the PR2s in the hands of some of the best and smartest researchers on the planet, Barry said, because while the PR2 is an advanced hardware platform already, its utility is limited at this point. So, Barry explained, you could bring a PR2 into your home today, but there’s not much that it could do.
Put it in the hands of leading robotics faculty and students at universities around the world, though, and the platform begins to expand exponentially, Barry suggested. “So you have to get them out there,” he said, “into the community where clever people can use them and work on their raison d’être.”
Willow Garage first put out its call for proposals for what to do with the PR2 last January. Already this year, a graduate student at the University of California at Berkeley programmed one of the robots to fold towels and, within two months, got the PR2 to neatly fold 50 towels in a row. Impressive as that feat was, Berger is excited to see what kinds of applications will result when 11 teams with hundreds of researchers are working on the PR2 platform.
Broadly speaking, Berger said, the 11 teams chosen to receive the PR2s proposed using the robots for applications in two main areas. First, he explained, are tools and capabilities, things like reading maps, recognizing objects, precise calibration of an on-board camera, and much more. Second are applications and demonstrations that use all the robot’s tools and components and combine them so that it can do things like fetch an object, open a door, or any number of housekeeping tasks. Much of the latter category is designed for helping disabled people with things they cannot easily do on their own.
“There’s people looking at doing a complete housework cycle,” Berger said, that starts with “dishes in a cupboard, sets the table, puts them in the dishwasher and then back in the cabinet.”
Those chores, of course, present a robot with some very difficult tasks, like understanding how drawers and refrigerators open, as well as how to tell multiple objects apart. Similarly, one team is looking at getting the robot to learn how to carry an object through a crowded space. The Berkeley team will expand on its towel-folding work and now focus on getting its PR2 to do laundry.
To Andrew Ng, a computer science professor at Stanford and a member of the team there that was one of the 11 chosen by Willow Garage, being able to work on a PR2 is a chance to move toward the place many have long hoped robots would go.
“It’s been a longstanding dream in robotics to develop technology to put a robot in every home,” Ng said. “Science-fiction has promised us these housekeeper robots for decades, and we still don’t have them.”
The main reason we don’t, especially given that many robots have the physical capability to handle many household tasks, Ng added, is because the software has never existed that could transform a single-purpose robot into one that could take care of many responsibilities. But by making its Robot Operating System open-source, and its PR2 robots programmable, Willow Garage is tackling that unfulfilled promise head-on.
To be sure, there are countless robotics groups at universities and other research institutions, Ng explained, but most have been working on their own hardware, somewhat blind to the constant iterative improvements being made in the field. Plus, because each of those robots is a closed system, no two groups have been able to swap software.
“This is one of the reasons that Willow Garage’s robots are so exciting,” Ng said. “They’re making robots available to different research groups to focus on software problems without having to worry about putting together and maintaining their own hardware…Willow Garage, by putting out a standardized piece of hardware, will enable research groups to share ideas and inventions” because groups will be able to share code.
That ’70s show
While there certainly has been plenty of progress in robotics over the years, Barry likens the state of the field to where computers were in the 1970s. To many at that time, he explained, the idea of buying a computer was ludicrous because there weren’t yet any must-have applications. But when programs like VisiCalc and word processors came along, it suddenly made sense to get a personal computer.
“That’s where we are with robotics right now,” Barry said. “You could not conceive of things in the 1970s like YouTube…We’re going to see something really similar with robotics. Twenty years from now, we’ll slap our heads and say, ‘How could we not have thought of that,’ but we can’t think of [those robotics applications] right now.”
And if Willow Garage is successful with its goals and Barry is right that we’ll see the emergence of applications we can’t imagine today, that means there’s a lot of exciting things to come.
“I think it’s absolutely vital that [Willow Garage is giving away the PR2s],” Barry said, “and it’s incredibly generous. And I can’t wait to see what people come up with.”
Festo’s unveiled some pretty impressive tech over the years, from fluidic muscles to robotic flying penguins, but this next one has us a bit worried. The Bionic Handling Assistant is ostensibly patterned after the elephant’s trunk, designed to be both agile and delicate… but have you seen the thing? We’re pretty sure that it was patterned after the tentacles of Doctor Octopus, and that it will crush you and everyone you care about without a second thought. But if you’re the trusting type, the company assures you that this is just the thing for all those delicate processes you’ve been meaning to automate but haven’t been able to in the past: everything from handling fruit to animal husbandry is a cinch with this “hierarchically arranged system of muscles and evolutionary optimized movement patterns”! But don’t take our word for it: peep the video after the break.
Robonaut2 — or R2 for short — is the next generation dexterous robot, developed through a Space Act Agreement by NASA and General Motors. It is faster, more dexterous and more technologically advanced than its predecessors and able to use its hands to do work beyond the scope of previously introduced humanoid robots. (Credit: NASA)
Robonaut is evolving.
NASA and General Motors are working together to accelerate development of the next generation of robots and related technologies for use in the automotive and aerospace industries.
Engineers and scientists from NASA and GM worked together through a Space Act Agreement at the agency’s Johnson Space Center in Houston to build a new humanoid robot capable of working side by side with people. Using leading edge control, sensor and vision technologies, future robots could assist astronauts during hazardous space missions and help GM build safer cars and plants.
The two organizations, with the help of engineers from Oceaneering Space Systems of Houston, developed and built the next iteration of Robonaut. Robonaut 2, or R2, is a faster, more dexterous and more technologically advanced robot. This new generation robot can use its hands to do work beyond the scope of prior humanoid machines. R2 can work safely alongside people, a necessity both on Earth and in space.
“This cutting-edge robotics technology holds great promise, not only for NASA, but also for the nation,” said Doug Cooke, associate administrator for the Exploration Systems Mission Directorate at NASA Headquarters in Washington. “I’m very excited about the new opportunities for human and robotic exploration these versatile robots provide across a wide range of applications.”
NASA and General Motors have come together to develop the next generation dexterous humanoid robot. The robots — called Robonaut2 — were designed to use the same tools as humans, which allows them to work safely side by side with humans on Earth and in space. Credit: NASA.
“For GM, this is about safer cars and safer plants,” said Alan Taub, GM’s vice president for global research and development. “When it comes to future vehicles, the advancements in controls, sensors and vision technology can be used to develop advanced vehicle safety systems. The partnership’s vision is to explore advanced robots working together in harmony with people, building better, higher quality vehicles in a safer, more competitive manufacturing environment.”
The idea of using dexterous, human-like robots capable of using their hands to do intricate work is not new to the aerospace industry. The original Robonaut, a humanoid robot designed for space travel, was built by the software, robotics and simulation division at Johnson in a collaborative effort with the Defense Advanced Research Projects Agency 10 years ago. During the past decade, NASA gained significant expertise in building robotic technologies for space applications. These capabilities will help NASA launch a bold new era of space exploration.
“Our challenge today is to build machines that can help humans work and explore in space,” said Mike Coats, Johnson’s center director. “Working side by side with humans, or going where the risks are too great for people, machines like Robonaut will expand our capability for construction and discovery.”
NASA and GM have a long, rich history of partnering on key technologies, starting in the 1960s with the development of the navigation systems for the Apollo missions. GM also played a vital role in the development of the Lunar Rover Vehicle, the first vehicle to be used on the moon.
For more information on Robonaut and video, visit:
Adapted from materials provided by NASA.
A Swiss team has applied Darwinian selection to robot development, producing robots that can walk, cooperate and even hunt each other.
“Just a few hundred generations of selection are sufficient to allow robots to evolve collision-free movement, homing, sophisticated predator versus prey strategies, coadaptation of brains and bodies, cooperation, and even altruism,” say the Ecole Polytechnique Fédérale de Lausanne and University of Lausanne researchers.
“In all cases this occurred via selection in robots controlled by a simple neural network, which mutated randomly.”
The input neurons of the neural network were activated by the robot’s sensors, and the output neurons controlled its motors.
Each robot had a different ‘genome’, describing different connections between neurons. This resulted in unique behaviour and fitness – how fast and straight it moved, for example, or how often it collided with obstacles.
At the beginning, the robots had random values for their genes, leading to completely random behaviours.
But Darwinian selection was then imitated, by choosing the genomes of the robots with the highest fitness to produce the next generation.
To do this, genomes were paired, and random mutations such as character substitution, insertion, deletion, or duplication were applied.
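The loop described above (evaluate fitness, keep the fittest genomes, pair them, mutate) is a classic genetic algorithm. A rough sketch of how such a setup might look follows; the network size, the fitness function, and the mutation operator here are invented for illustration and are not the team's actual code:

```python
import random

GENOME_LEN = 24      # connection weights for a tiny sensor-to-motor network
POP_SIZE = 20
MUTATION_RATE = 0.1

def random_genome():
    # Random initial weights, giving completely random behaviour
    return [random.uniform(-1, 1) for _ in range(GENOME_LEN)]

def fitness(genome, sensors):
    # Toy stand-in for "fast and straight": the first half of the
    # genome drives a left motor, the second half a right motor.
    half = GENOME_LEN // 2
    left = sum(w * s for w, s in zip(genome[:half], sensors))
    right = sum(w * s for w, s in zip(genome[half:], sensors))
    speed = (left + right) / 2
    straightness = -abs(left - right)   # penalize veering
    return speed + straightness

def mutate(genome):
    # Per-gene random substitution; the study also used insertion,
    # deletion, and duplication, omitted here for brevity
    return [w + random.gauss(0, 0.2) if random.random() < MUTATION_RATE else w
            for w in genome]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def evolve(generations=100, seed=0):
    random.seed(seed)
    sensors = [1.0] * (GENOME_LEN // 2)       # fixed dummy sensor input
    pop = [random_genome() for _ in range(POP_SIZE)]
    for _ in range(generations):
        pop.sort(key=lambda g: fitness(g, sensors), reverse=True)
        parents = pop[:POP_SIZE // 2]         # truncation selection
        pop = parents + [mutate(crossover(random.choice(parents),
                                          random.choice(parents)))
                         for _ in range(POP_SIZE - len(parents))]
    return max(fitness(g, sensors) for g in pop)
```

Because the fittest half of the population survives each generation unchanged, the best fitness never decreases, and random mutations steadily push it upward, which is why even a simple scheme like this improves within relatively few generations.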
The team found that within 100 generations, robots were able to move without collisions in a maze.
When ‘breeding’ for predator behaviour, a range of strategies evolved, including lying in wait and circling the walls.
And, astonishingly, the robots were even able to evolve altruistic behaviour, in a task that involved pushing tokens around. Some could be pushed single-handed, earning the robot one ‘fitness point’; others required two robots, gaining the whole group one point.
It was found that groups of unrelated robots – those with randomly differing genomes – invariably took the selfish approach and went for the small tokens. But those with similar genomes generally pushed the larger tokens, cooperating to raise the fitness of the whole group – and thus reducing their own chances of ‘winning’.
“These examples of experimental evolution with robots verify the power of evolution by mutation, recombination, and natural selection,” conclude the authors.
“The ability of robots to orientate, escape predators, and even cooperate is particularly remarkable given that they had deliberately simple genotypes directly mapped into the connection weights of neural networks comprising only a few dozen neurons.”
The full report is published in PLoS.
By Loz Blain
Back in the 70s, the robots were coming for our crappy manufacturing jobs. Now, it seems, they’re coming for our crappy table service jobs. Korean company ITM Technology has developed a restaurant concept around a cute little robot that fulfills the role of a waiter – it takes orders from customers, either verbally or through a touch screen, then relays them to the kitchen, and brings the food out when it’s ready. Robo Cafe eliminates ordering errors, reduces staffing costs dramatically for restaurant owners, and even brings the boss all the tips. It’s probably not going to be nearly as interesting to Tiger Woods, though.
Since spending a little time at the Tokyo Robotics Expo late last year, one thing has become crystal clear – Japan, in particular, is gearing up for some very tough times ahead. When the bulk of its aging population becomes too old to work, labor is going to be in severe shortage. And the ingenious technology sector in East Asia is exploding with really fascinating ideas to take the pressure off when it hits.
With a proportionally smaller workforce, and a booming robotics sector, human labor is going to have to be used very selectively – which will be quite a turnaround in Japan, a land where department stores currently employ squads of well-manicured girls whose sole job it is to greet you when you enter.
What’s more, the value of an hour of labor is likely to skyrocket – and this scarcity and expense of employee time is going to combine to make the case for automation a lot stronger.
In the case of small cafe and restaurant owners, the solution might well look something like Robo Cafe. Robo Cafe is a restaurant designed to operate as efficiently as possible with the absolute minimum human workforce possible.
The building needs to be designed with small horizontal pathways leading from the kitchen to all the tables. A small team of waiter robots can then get around to every table in the house when they’re summoned by customers.
Customers can either order verbally, or flick through extra options on a touch screen on the robot’s belly. Once the order is confirmed, the robot relays it to the kitchen. When the kitchen’s done preparing the food, the robots bring it out to the customers – and it’s the same process for ordering drinks.
In the case of a small chain bakery, the robots are able to take over the whole front of house operation, and the back room could largely be operated by a staff of one. It was hard for us to get much more detail out of Robo Cafe’s promoters in Tokyo, simply because we don’t speak Japanese and they don’t have a lot of English – but the idea is simple and clear.
The brilliant Dan Carlin talks about the “kitchen of the future” in one of his podcasts (Addicted to Bondage – highly recommended), where everything is automated and the labor is free because you own it. And how really, the kitchen of the future is actually the kitchen of the past, when you consider that throughout human history, there are myriad examples of slavery situations in which one man could actually own another, never having to pay for labor, only for slave maintenance expenses like clothes, food and medicine.
As Carlin points out, today’s concept of slavery is mostly restricted to non-humans. And the ownership of robot labor in a Robo Cafe type situation will almost certainly prove itself far more economical and dependable than a human workforce once the technology itself becomes mature.
Of course, slaves have a worrying habit of rising up against their owners – and Robo Cafe’s parent company ITM Technology is best known for its military hardware, so owners had best treat these cute little fellas with some respect!
We found out in June that the stealth robotics company created by iRobot founder Helen Greiner would work on unmanned air vehicles (UAVs) for emergency response. Now the company has revealed that these UAVs will also be used to inspect bridges, dams and other infrastructure.
Formerly known as The Droid Works, and now called CyPhy Works, the company has received a National Institute of Standards and Technology (NIST) Technology Innovation Program (TIP) grant of $2.4 million. CyPhy Works will work with researchers at the Georgia Institute of Technology to develop small, hovering UAVs equipped with video cameras and sensors.
According to a press release from the company:
If successful, the project will produce an advanced class of UAVs that would enable entirely novel, efficient, and relatively low-cost techniques for monitoring the health of the nation’s existing civil infrastructure.
While many researchers are working on small, hovering robots for search-and-rescue, surveillance, and structure monitoring, controlling and coordinating these aircraft remains a challenge. Many UAV projects currently use GPS to navigate, but this is not very precise and does not work inside buildings.
CyPhy Works apparently plans to develop a more precise navigation system. It has plans for two types of monitoring: Robotic Assisted Inspection, where a UAV slowly flies along a structure taking high-resolution images, and Autonomous Robotic Monitoring, where a UAV stays at a structure and routinely checks for potentially dangerous changes on its own. It will be interesting to see if the company can make the latter approach work, and what techniques it develops for stabilizing the UAVs in high wind.