Between the Covers
The Imminent Robot Uprising
By James H. Johnson
Mar 29, 2007

Fear is perhaps the most typically American emotion – we began as sinners in the hands of an angry God, and today our culture maintains a varied library of death fantasies. We worry over mundane destruction, like an asteroid colliding with Earth, as well as horrifying (and seemingly inevitable) attacks, like a nuclear or biological terrorist strike. We’re even uneasy about some curious ways to go, such as God’s seemingly capricious decision to call the pious to rise and float away to heaven, while the rest of us are left to fend for ourselves against armies of smoke-breathing, flesh-eating, horned and hoofed, you know, whatever those things are.

Many millions of Americans believe the latter will mark the world’s end. Like, over half of them believe it, according to a recent study. Global warming: myth. Riding a Nor’easter to sit at the right hand of God? Fact. 

Some of our culture’s most deeply rooted and prevalent death fantasies, like the Rapture, are a little fantastical. They are nightmares of little boys and girls who dress like grown-ups, the deeply embedded and immutable terrors of thorough human annihilation – extinction, to be direct. Many of these notions are passed down through contemporary myth makers – usually television and film in our case, rather than the oral traditions that served that purpose for millennia. Other fears are believed to have roots in our DNA. Generally, we can trace our anxieties’ origins. We can measure and evaluate them.

But there are others that resist codification. One in particular has always mystified me: Robots. Why are we so paranoid, as Chuck Klosterman once said, that one day our toasters will rise up to conquer us?

I opened Lee Gutkind’s Almost Human with two hopes: 1) to better understand mankind’s proximity to complete robot conquest; and 2) to find out how this will come about. Also, to a much lesser extent, robots seem like a really cool idea in theory, and I’d like to have one at some point in my life – at least until one decides it wants to have me.

Gutkind faced a daunting task in choosing his subject. In robotics, two branches of science – software development and engineering – collide to devise mechanized creatures to perform tasks autonomously. Almost Human follows a group of young (mostly) men who represent the future of the field. Imagine the excitement generated by a cadre of hard-nosed grad students who eat, breathe, bleed and often literally sleep robotics!

It’s about as exciting as watching mold grow on a meatloaf. And then waiting to see if the meatloaf can become a creature capable of performing tasks autonomously.

Gutkind applies a common creative-nonfiction format to the story of some Carnegie Mellon robotics students and their mentors. In a sweeping narrative supported and furthered by exposition and background information, he tells a story of adversity overcome. It’s like Rocky, except these guys fight their battles with spectrometers and matrices, rather than fistfuls of Italian-American pain.

Here’s a typical scene from the narrative:

The computer sounds like it wants to boot up, but the monitor is dark…

“It will be alright,” says Wagner.

“Give it a little time,” says Teza. “The computer is slow.”

Now we were all standing around [the robot] and waiting.

The robotics team routinely fails, and fails, and fails, until, every now and then, they succeed a little. Which is part of Gutkind’s message. Though robotics is at the forefront of technology, it’s also in its infancy. The book isn’t a bad read – in fact, it’s a cleverly painted picture, and it unintentionally demonstrates how very long we may have until we’re forced to bow down to our robot masters. But readers who aren’t already interested in robotics may not be thrilled by Almost Human. A litmus test – if the following storyline makes you all tingly, pick it up: A team of geeks goes to the Chilean desert to see if a mostly autonomous go-cart look-alike can move across 50 kilometers of desert while detecting lichen.

The book’s most interesting passages are its excursions into other areas of robotics, however brief they may be. For example, Gutkind recalls being startled in the Robotics Institute one evening by a robot named Grace (Graduated Robot Attending a Conference). Grace is an entry in an artificial-intelligence challenge in which robots attempt to act as conference attendees. They’re supposed to “ride an elevator, stand in line, schmooze with friends” and so on. He also cites the chatbot ALICE as another example. (She’s a little creepy to talk with; her responses come a bit too quickly.)

Some of the most important academic work is focused on RoboCup, a tournament devoted to the development of “a team of fully autonomous humanoid robots that can win against the human world soccer champion team.”

Robotics research is funded primarily from two sources: NASA and university grants (although private funding has improved somewhat lately as practical applications of robotics arise). This two-part system presents a problem, however: NASA wants specific developments for specific tasks, which limits the scope of robotics research. And, were it not for RoboCup, academics would have no common goal, and thus no universal direction, for robotics.

The defeat of a human soccer team is a lofty goal, but competition allows researchers an arena in which to share their work, and the tournament has spawned a generation of robots that can maneuver as a team toward a single goal. More important, RoboCup’s teams have begun to learn on their own how to perform more effectively. That is, the robots are wired to evaluate the opposition’s play and alter their own play accordingly, in addition to finding the most effective way to play themselves – including how they move.

We take these abilities for granted. A ball moves toward you. You see where it is and estimate where it will be in a moment. You set complex mechanisms in motion, operating in concert, so you’ll be there to meet the ball on its path – all in a fleeting moment. Teaching a robot to do this is, in fact, more complex than letting the robots teach themselves. Among other things, the possible combinations available to that orchestra of mechanisms are almost limitless. So robots have begun to teach themselves to run faster and kick the ball harder by altering methods proven to work and eliminating methods that don’t. They also skip untried combinations of methods, because trying everything would take far too long.

Thus, the robots are more efficient than the code writers, who spend hour after hour tweaking the programming only to find that, say, the robot’s speed actually decreases. In short, the robots have already begun to teach themselves.
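Gutkind doesn’t reproduce any of the teams’ code, but the trial-and-error loop described above can be sketched as a simple hill climb: keep the parameter changes that make the gait faster, discard the ones that don’t, and never bother enumerating every possible combination. Everything here – the `gait_speed` stand-in for a real robot trial, the number of parameters – is an illustrative assumption, not CMU’s actual method:

```python
import random

random.seed(42)  # make the sketch repeatable

def gait_speed(params):
    """Toy stand-in for a real robot trial: how fast a gait with these
    settings moves the robot. This fake version peaks when every
    parameter is near 0.5."""
    return -sum((p - 0.5) ** 2 for p in params)

def self_tune(n_params=3, rounds=200, step=0.1):
    """Hill-climb over gait parameters: try a small change, keep it
    only if the measured speed improves."""
    best = [random.random() for _ in range(n_params)]
    best_score = gait_speed(best)
    for _ in range(rounds):
        # Perturb one proven setting rather than trying every combination.
        candidate = list(best)
        i = random.randrange(n_params)
        candidate[i] += random.uniform(-step, step)
        score = gait_speed(candidate)
        if score > best_score:  # keep methods proven to work
            best, best_score = candidate, score
        # otherwise the change is eliminated and never revisited
    return best, best_score
```

After a few hundred simulated trials the tuned gait scores far better than a random starting gait – which is the whole point: the robot converges on a fast gait without any programmer hand-tweaking the numbers.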

The near future will see autonomous robots applied in two areas: space exploration and armed conflict. NASA knows that robots offer a much safer alternative to manned space flight. Interestingly, robots may also offer more efficiency, if programs like Carnegie Mellon’s succeed. Instead of a single Mars rover, NASA will send a team of rovers to explore the planet. In large numbers, the robots could cover more of the planet faster while remaining in constant contact with one another to share their mapping of the planet and report on their health. Eventually, they will even repair themselves, harvesting parts from other rovers, and alter the scope of the mission itself based on what they already know.

The other application, the military function, is much more troublesome. In a recent article in Harper’s, Steve Featherstone highlighted the U.S. military’s plans for the integration of robotics into peace operations and armed conflict. Currently, robots must be guided by human hands to some degree. The day of heavily armed, totally autonomous robotic soldiers isn’t too far into the future, however.

That kind of armed force raises crucial questions about the nature of warfare, since – and this is probably self-evident – a fighting force that cannot lose citizens in battle can’t really lose any battle. Morality and war are awkward bedfellows, but who’s responsible for war crimes if the killers were operating autonomously? And, besides all that, what if the robot army gets it all wrong, murdering innocents and accepting no surrender and no outside authority, perhaps until Earth is entirely under the force’s control?

Perhaps our paranoia about a robot takeover is a little more grounded in reality than we imagine. As Featherstone notes in his article, robots will have “the ability to hunt down and kill the enemy with limited human supervision by 2015.”

If our literature is to be believed, robots will one day rise up against us. Their entrance into books and film is tied to this very notion. The term “robot” itself is derived from a Czech word meaning servitude or drudgery. Karel Čapek, who introduced the term in his play R.U.R., used it to tell of a race of machines designed to relieve mankind of the need to labor – machines that, in the end, revolt against their slavery and take over.

For the moment, though, we don’t need to worry about these disconcerting issues. One of the space-exploration vehicles Gutkind highlights is designed to represent the next generation of autonomous bots. Once it begins its task, the robot takes every obstacle it encounters into account and adjusts the rest of its mission accordingly. On an early practice run, the robot needed seven hours to plan its mission; by the time it finished, it realized it had already passed its intended start time – and therefore required another seven hours to replan.

I guess we’ll have to wait a little longer for mankind’s submission to mechanized tyranny, for post-apocalyptic pockets of human slaves to implode into a rebellion against what is so clearly wrong, so evil, so inhuman. Till then, the robots will have to make do with mastering the soccer ball. 
