It's one of science fiction's most familiar ideas: computers that can, when hooked up to other computers, become self-aware and exhibit human characteristics. The results are often sinister, as the machines perceive humanity as a threat and defend themselves to the death.
"Star Trek" had its Ultimate Computer, among many others; "2001: A Space Odyssey" had HAL 9000. Heroes of the "Terminator" films fought against Skynet, a group of networked military computers that triggered a nuclear holocaust, then aggressively tried to put what was left of humanity out with the trash.
Now RoboEarth -- "a World Wide Web for robots: a giant network and database repository where robots can share information and learn from each other about their behavior and their environment," in the project's words -- has put a mirror up to those fictional tropes. Artificial intelligence and robotics experts -- and a brief history of our darker interactions with subpersonified silicon -- tell us we should take a look in.
RoboEarth's debut made headlines a few weeks ago, with most coverage cheering the innovation as a logical next step in the evolution of computers. But beyond the benign notion of hooking up a robotic riveter in a Detroit car factory to a cybernetic welder in Nagoya, is something darker lurking?
Markus Waibel, one of about 30 people working on the European RoboEarth project, told IEEE Spectrum earlier this month:
Before you yell "Skynet!" think again. While the most similar things science fiction writers have imagined may well be the artificial intelligences in Terminator, the Space Odyssey series, or the Ender saga, I think those analogies are flawed. RoboEarth is about building a knowledge base, and while it may include intelligent web services or a robot app store, it will probably be about as self-aware as Wikipedia.

If it seems surprising that one of RoboEarth's brightest minds is invoking the bone-grinding image of Skynet, consider what Carnegie Mellon professor James Kuffner told the same engineering publication a week earlier: Connecting robots to cloud computing -- which, in effect, harnesses the power of remote, shared computers in the same way the Internet does for humans -- could make them "lighter, cheaper and smarter."
Not just smarter, but able to adapt to new and changing environments, said RoboEarth's Dr. Heico Sandee, professor at Eindhoven University of Technology in the Netherlands, in response to Waibel and Kuffner. "We fully agree with the statement that robots get smarter when they are able to communicate to each other, e.g. using cloud computing," he said in an e-mail interview. "This is the main aim of the RoboEarth project."
For many years, he said, robots have been functioning in factories in structured environments with cages around them. Now they're moving into the home, vacuuming our floors and becoming a more common part of daily life. "For robots to function [among] humans they need to deal with a very unstructured environment," Sandee said, "where new objects are appearing and old are disappearing continuously, and environments are changing rapidly. For instance, when a robot is taught to serve a drink, then it needs to be able to also open the newest bottle."
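To make the idea concrete, here is a minimal sketch of the kind of exchange RoboEarth describes, in which one robot uploads what it has learned about a new bottle and another reuses it. Everything in it -- the KnowledgeBase class, the recipe format, the object identifier -- is a hypothetical illustration, not the project's actual interface.

```python
# Hypothetical sketch of a shared robot knowledge base in the spirit of
# RoboEarth: one robot uploads what it has learned about a new object,
# and another robot meeting the same object downloads it. The class and
# recipe format are invented for illustration, not RoboEarth's real API.

class KnowledgeBase:
    """A shared store mapping object identifiers to 'action recipes'."""

    def __init__(self):
        self._recipes = {}  # object_id -> list of named action steps

    def upload(self, object_id, recipe):
        # A robot that has learned to handle a new object shares the steps.
        self._recipes[object_id] = recipe

    def lookup(self, object_id):
        # A robot facing an unfamiliar object asks whether another robot
        # has already worked out how to handle it.
        return self._recipes.get(object_id)


shared = KnowledgeBase()

# Robot A, in one household, learns to open a newly released bottle cap.
shared.upload("bottle:new-twist-cap", ["grasp_cap", "twist_ccw", "lift"])

# Robot B, elsewhere, meets the same bottle for the first time.
steps = shared.lookup("bottle:new-twist-cap")
if steps:
    print("Reusing shared recipe:", steps)
else:
    print("Unknown object: fall back to local exploration.")
```

The point of the sketch is only that the intelligence lives in the shared database, not in either robot -- which is why Waibel can plausibly compare the system's self-awareness to Wikipedia's.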
So can there be harm in connecting a Roomba in Cleveland to a Gaggia espresso maker in Milan, hoping they might teach each other to clean up spilled coffee grounds? Seth Teller, a professor at the Massachusetts Institute of Technology whose specialty is interaction between humans and autonomous robots, says yes, it's dangerous.
"If you're going to have networked robots at home, it can do physical damage to your person, analogous to the way in which malware might damage your data," says Teller, though he also considers RoboEarth a positive step. "Your PC has a webcam, but it can't hit you on the head. Once these things gain the ability to move on their own, even without the virus or the bad guy dialing in from the Internet, if there is a bug in it, it can hurt you."
So far, the few varieties of robots we have invited into our homes haven't taken any steps toward killing us, let alone becoming self-aware. But there are examples of industrial and military robots thrown spectacularly awry by bugs, bad software or hardware, lack of a kill switch or all of the above.
The medical linear accelerator Therac-25, designed to produce radiation beams for destroying tumors, killed or injured six people in the United States and Canada in a two-year period between 1985 and 1987. The machine literally barbecued a man's brains in Texas and fried holes right through the bodies of others. Survivors received huge radiation overdoses. Troublingly, Therac-25's manufacturers and operators were slow to respond or make modifications because they had too much faith in the machine.
More recently, in 2007, a Swiss-made automated 35 mm Oerlikon Mark 5 anti-aircraft cannon turned nine unfortunate South African soldiers into bleeding meat and left 14 others injured during a massive military exercise. The cause was a locking pin that sheared and, in essence, shifted the gun control from manual to automatic. Computer software took over and "the rogue gun began firing wildly, spraying high-explosive shells at a rate of 550 a minute, swinging around through 360 degrees like a high-pressure hose," said a report in the South African Independent Online. Later, the South African National Defence Force, blaming the manufacturer, said it was the second such incident involving the cannon, though it wouldn't reveal details of the first.
"If you are going to network these things so that they can be commanded remotely, you have to assume the possibility that bad guys are going to take them over," Teller says. "You make it entirely autonomous, it has no external command-and-control input. But then you can't stop it if it goes haywire. Ask someone at the Pentagon if he wants his robots to have an off switch. Ask him if he wants his soldiers to have an off switch."
The bad guys -- or good guys, depending on your perspective -- have become much more sophisticated in taking over computers. The Stuxnet virus, detected in July 2010, was designed to sabotage Iran's nuclear ambitions by covertly changing the speed of uranium-enrichment centrifuges until they broke, while fooling monitoring equipment into giving normal readings. Widely thought to be a joint venture of U.S. and Israeli intelligence, Stuxnet, which exploits security holes in Microsoft Windows, caused setbacks to Iran's nuclear program -- though how severe remains unclear. But it certainly demonstrated that humans are capable of attacking sophisticated machinery via the computer networks it's plugged into.
If networked computers were to find themselves suddenly dominating the earth -- admittedly, one Hollywood-sized if -- it's logical they might see their human creators, who could pull out the power plug, as a short-term threat. Or, in the long term, humans could at least mess up the house, as they multiply, pollute the planet and in some cases irrevocably strip it of resources. "The Matrix's" Agent Smith referred to humans as a virus, and from a potential new species' perspective, you can kind of see his point.
Paul Saffo, managing director of foresight at investment researchers Discern Analytics and an oft-consulted futurist, says the Skynet scenario is "too over-the-top dystopian." The risk in RoboEarth or similar projects would be in failing to set down standards and principles for human-computer interaction -- or setting them in the wrong way. "We're on the edge of a big explosion in robots over the next five or 10 years, so we need a protocol," he says. "And my experience with this is, the sooner you start it, the better, and the sooner you come up with a good protocol, the better."
MIT's Teller says RoboEarth is not necessarily focusing on the critical issue, which is not information sharing but how machines go about interpreting shared information correctly, especially if one robot with grabber arms is trying to speak to another with, say, walking spider legs. "How do we endow robots with the ability to share experiences across different body sizes and manipulator types? If those guys are working on that problem, great. It's not clear whether they are."
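One hedged way to picture a solution is to write shared knowledge against abstract capabilities rather than specific hardware, so each robot can at least check whether its body can execute a recipe at all. The capability names and robots below are invented for illustration; nothing suggests this is how RoboEarth actually represents its data.

```python
# Hypothetical sketch of cross-embodiment sharing: recipes declare the
# abstract capabilities they require, and each robot compares them with
# what its own body can do before attempting the recipe.

RECIPE_OPEN_BOTTLE = {
    "requires": {"grasp", "twist"},
    "steps": ["grasp_cap", "twist_ccw", "lift"],
}

ROBOTS = {
    "two-arm manipulator": {"grasp", "twist", "lift"},
    "spider-legged walker": {"walk", "climb"},
}

for name, capabilities in ROBOTS.items():
    missing = RECIPE_OPEN_BOTTLE["requires"] - capabilities
    if missing:
        print(f"{name}: can't use recipe, missing {sorted(missing)}")
    else:
        print(f"{name}: recipe applies, steps = {RECIPE_OPEN_BOTTLE['steps']}")
```

A grabber-armed robot passes the check; the spider-legged walker fails it and knows to skip the recipe rather than misinterpret it.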
Clearly, the moment has arrived to set some well-thought out parameters. "This is a hugely important thing to do," says Saffo, "and now is the time to do it while you have the freedom to shape it in the right way, and you don't have to worry about a bunch of corporate suits jumping in.
"Standards are like karma: Your actions today influence the outcomes tomorrow. You do bad things, it comes back to haunt you."