O schizophrenic mathematics, uncontrollable and mad desiring-machines!
– Deleuze and Guattari 
The phone game Cthulhu Virtual Pet is a loving tribute to both HP Lovecraft’s most famous monster and Tamagotchi-era virtual pets. A simulated pet needs to be regularly fed and cared for to grow big and strong. The pet just happens to also be Cthulhu, an ancient creature from a complex hell-dimension beyond human perception, who is fated to eventually devour the entire world, driving those few who glimpse the terrifying future insane along the way.
In the game, you care for a particularly cute baby version of the monstrosity, feeding it virtual fish and gathering simulated witnesses to worship it as it gains power. If you neglect to care for the little tyke, it will remind you with messages that it is hungry or tired, crudely and shamelessly tugging at your sense of obligation, unless you pause the simulation by putting it into hibernation, or stop it by deleting the app and the little version of its virtual world with it.
What is justice but a form of obligation? When we raise something far more powerful than ourselves, what does it learn?
Via the fortuitously broken RSS feed of Intimate Machines, a short academic overview of Robot Ethics / RoboEthics. Intimate Machines is Glenda Shaw-Garlock’s blog, currently in hibernation, possibly thesis-related. While the overview itself is pretty smooth, the major ethical documents in this nascent field seem to be jolly tedious for a subject that lets us answer questions about the morality of electroshock robot camel jockeys.
It’s not the latest output, but the Euron Roboethics Roadmap (PDF) will serve well enough as an example. Firstly, it’s not really a roadmap, which implies some sort of high-level direction: it’s more an exhaustive bullet-pointed list of every permutation in which ethics and robotics might intersect, with the more interestingly science fictional ones glossed over in order to seem serious. The project is almost entirely descriptive, and there are no ethical guidelines here of the type a researcher might use to get their project past a roboethics committee. The first proposed national prescriptive guidelines, due to come out of Korea in 2009, seem to have been abandoned with a change of government. (In the meantime, Jamais Cascio has a useful early stab.)
It’s not clear if this descriptive tediousness, which is hardly inherent to academic writing, is unintended or deliberate policy. Supporting the unconscious side, the prose does have some of the word-counting desperation of an engineering student essay on Othello. Contrariwise, writing is produced with an audience in mind. Rather than researchers – for whom ethical guidelines might include some sort of moral stance – the institutions of public policy seem more clearly in mind. In the case of the Euron roadmap, one’s reminded of the bureaucratic house style of the likely regulator, the EU. Perhaps robot ethics has coloured itself in grey as a kind of self-defense mechanism: robotics researchers want to show they’ve done their homework, thought long and hard about whether they are doing the right thing, and are safe and somewhat dull custodians of the world’s mechanized retarded geniuses and flying killing machines. Think of it as the Abraham Simpson school of rhetoric: win by boring your opponents to death.
I don’t think that will quite be adequate.
If a key principle of robot ethics is not kicking the robot, what if the robot is designed to kick itself? This mesmerising chair continually collapses and reassembles itself. It’s a sculptural collaboration by Max Dean, Raffaello D’Andrea and Matt Donovan.
Does this escape censure because the parts of the chair are not particularly damaged in their disassembly? It would then just be a robot doing its job, and the entropic interpretation would all be in the eyes of its human beholders. I am inclined to think so, though it is interesting how close it skirts the line. Alternatively, by allowing us to contemplate, as Greg Smith has it, the existential plight of furniture, does it sin against the robot by condemning it to hell?
Amy Harmon at the NYT has a good overview of a generation of robots specifically designed to trigger emotional cues in their easily manipulated meatbag slaves, er, that is, people. The key example is Paro, a therapeutic robot baby seal.
Jamais Cascio has suggested this empathic reaction should have moral weight. Like Mencius’ heart of compassion, he plausibly argues it is a marker of the complexity of the synthetic creature and our responsibility to it. Don’t Kick The Robot, Cascio advises. Developing the idea, he proposed it as one of five laws of robot ethics.
While SF stories focus on robots’ analogous existence to humans, for the foreseeable future the link with animals will be far more relevant. Most people, and even philosophers like Peter Singer, suggest an animal has a different moral character to a person due to its lack of awareness about or plans for the future. It’s also worth remembering what most people’s ethical codes allow towards animals: of course empathy and affection, but also humane husbandry for profit, and slaughter for the dinner table. We treat many animals much like we treat these robots: as tools.