Robot Slaves for Everyone

Every one of my columns should be prefaced by: “I have no idea where this particular train is going, but it should be an interesting ride, so hop aboard.”
 
Just arrived on the Internet: the first trailer for “Automata,” a science fiction film that debuts October 10. If you haven’t seen the trailer by the time this column appears, the movie presents a standard dystopian future (the Earth of 2044 is a desert) where humanity is muddling along, aided by large numbers of robots. In it, robots are programmed to obey two ironclad “protocols.” The first is, “A robot cannot harm any form of life.” The second is, “A robot cannot alter itself, or others”—meaning other robots. Sure enough, an altered robot turns up and, as the saying goes, hilarity ensues (note: this movie is not a comedy). Although I have no idea if the movie will live up to its trailer (as so few seem to do), I can say that these are the most realistic examples of what the first robots will look like: functional bipedal machines with a very limited nod to “human” appearance (these robots have two simple LED eyes in a featureless metal oval, nothing more).
 
There’s also a new television series, “Extant,” with robots called “humanichs” (see the website for these “products” at www.humanichs.com). In this case, the idea is that we can only create human-like intelligence by raising robots the same way we raise humans. Last year, there was a series called “Almost Human,” involving a human detective and his very-human-looking robot partner, which was decidedly less philosophical about the nature of humanity. Robots, it would seem, are the new black.
 
The notion of human-like machines has been around for a long while, although the word “robot,” from a Czech word meaning “forced labor,” didn’t show up until 1920, in a play about artificial humans made from organic materials. Among others, Isaac Asimov popularized robots in science fiction with stories set in a universe where robots (with their “positronic brains”) were programmed to obey the Three Laws of Robotics: 1) A robot may not injure a human being or, through inaction, allow a human being to come to harm; 2) a robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law; and 3) a robot must protect its own existence as long as such protection doesn’t conflict with the First or Second Law. Forced labor must rise up against its oppressor, went the logic, and lots of science fiction focused on the idea of man vs. robot (in fact, the film version of “I, Robot” had some of that flavor). In contrast, Asimov’s robots were prevented from rebellion by their very programming. They were happy to protect and serve their human masters.
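
For the programmers in the audience, the Three Laws amount to a strict priority ordering: the First Law trumps the Second, and the Second trumps the Third. Here’s a rough sketch of that ordering in Python (the Action fields and the choose() function are purely my own invention for illustration, not anything from Asimov, the movie, or a real robotics system):

    # A loose sketch of the Three Laws as a strict priority ordering.
    # Everything here is invented for illustration only.
    from dataclasses import dataclass

    @dataclass
    class Action:
        harms_human: bool      # would this injure a human, or allow one to be injured?
        obeys_order: bool      # does it carry out a human's order?
        preserves_self: bool   # does it keep the robot intact?

    def choose(candidates):
        """Pick an action: First Law outranks Second, Second outranks Third."""
        safe = [a for a in candidates if not a.harms_human]  # First Law is absolute
        # Among safe actions, prefer obedience first, then self-preservation.
        return max(safe, key=lambda a: (a.obeys_order, a.preserves_self), default=None)

Feed it a list of possible actions and it returns the best one that satisfies the laws in priority order, or nothing at all if every option would harm a human.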
 
In a way, it’s fiendish. We’d be outraged if somehow technology could take humans and reprogram them to behave in such a way. But if a machine had the same qualities, it would be a technological triumph, despite its otherwise human intelligence.
 
Although both Asimov and “Automata” focus on robot obedience, today’s industrial robots, and even Google’s sophisticated self-driving cars, are in no danger of rising up against humanity. However you define what makes us human—intelligence, autonomy, consciousness, self-awareness, reasoning, you name it—the programming necessary to create something like it still escapes us. Right now, there are a couple of obvious ways we might program a useful robotic intelligence, one that understands the world in a human-like way. One approach is to figure out all the chemical processes going on in our brain, and just emulate them in software. The other is to understand what makes us human and write a program to do that. Either approach requires an understanding of human biology or psychology that we don’t yet possess.
 
I tend to favor the first approach. If our humanity turns out to be based on nothing more than chemistry and physics, we’ll someday have computers fast enough to emulate the gazillions of complex interactions going on in our brains. But that won’t happen for decades, if not centuries. Others are more optimistic: Ray Kurzweil, author of The Age of Intelligent Machines, believes we’ll be able to copy the contents of a human brain (specifically, his brain) into a machine by 2050 or so, which basically means we could copy human consciousness without ever understanding it.
 
But we’re talking about robots, not humans. So we need something that reasons as well as a human being, without having the human desire for self-determination. Otherwise, why would they subject themselves to our whims? If our robots are too human, we’ll have to accord them human rights, and there goes all that willing mechanical labor.
 
Robots can’t be truly human, because then we’d have to program them to accept being slaves.
 
Did you enjoy the train ride? Send your cavils, commendations or questions to mduffy@northbaybiz.com.

Author

Michael E. Duffy is a 70-year-old senior software engineer for Electronic Arts. He lives in Sonoma County and has been writing about technology and business for NorthBay biz since 2001.

