I finally got a chance to see “The Animatrix” this weekend. One of the thoughts that came to mind while watching both parts of “The Second Renaissance” is that despite being one of the more common forms of cybernetic revolt in fiction, the “humans mistreat their robots, fail to recognise them as sentient, etc., until the machines fight back” scenario is, modern experience suggests, increasingly unlikely. In particular, this story mode fails to take into account the strength of the human tendency toward anthropomorphism.
We have a strong tendency to read human-like thoughts and motivations into non-intelligent creatures, and even into inanimate objects. Robots don’t appear to be any different; see this Washington Post article from about two years ago about robots used by military troops, and the way the troops treat these machines. An example:
Ted Bogosh recalls one day in Camp Victory, near Baghdad, when he was a Marine master sergeant running the robot repair shop.
That day, an explosive ordnance disposal technician walked through his door. The EODs, as they are known, are the people who — with their robots — are charged with disabling Iraq’s most virulent scourge, the roadside improvised explosive device. In this fellow’s hands was a small box. It contained the remains of his robot. He had named it Scooby-Doo.
“There wasn’t a whole lot left of Scooby,” Bogosh says. The biggest piece was its 3-by-3-by-4-inch head, containing its video camera. On the side had been painted “its battle list, its track record. This had been a really great robot.”
The veteran explosives technician looming over Bogosh was visibly upset. He insisted he did not want a new robot. He wanted Scooby-Doo back.
“Sometimes they get a little emotional over it,” Bogosh says. “Like having a pet dog. It attacks the IEDs, comes back, and attacks again. It becomes part of the team, gets a name. They get upset when anything happens to one of the team. They identify with the little robot quickly. They count on it a lot in a mission.”
The bots even show elements of “personality,” Bogosh says. “Every robot has its own little quirks. You sort of get used to them. Sometimes you get a robot that comes in and it does a little dance, or a karate chop, instead of doing what it’s supposed to do.” The operators “talk about them a lot, about the robot doing its mission and getting everything accomplished.” He remembers the time “one of the robots happened to get its tracks destroyed while doing a mission.” The operators “duct-taped them back on, finished the mission and then brought the robot back” to a hero’s welcome.
Near the Tigris River, operators even have been known to take their bot fishing. They put a fishing rod in its claw and retire back to the shade, leaving the robot in the sun.
We’re far more likely to see awareness where it isn’t than to fail to see awareness where it is. (This is another area where humans are far more likely to make a type I error than a type II error, like this, this, and this.)
If (or when) we develop AI, it’s likely that some AIs will be mistreated by some people; consider some of the truly unnecessary cruelty to animals that still goes on. However, it’s unlikely to be the sort of systematic, widespread mistreatment we see B1-66ER and its fellows suffer under. Perhaps because of the long exploration of such scenarios in fiction, people have been considering the legal and ethical ramifications of machine intelligence for years (with varying degrees of earnestness); see here, here, and here (this last is a bit out of date, being over 20 years old). So, despite being a pessimist, I strongly suspect that anthropomorphism will win out over anthropodenial, and that most of us will welcome our new robot ~~overlords~~ friends. 😉