Friday, March 09, 2007

People for the Ethical Treatment of Robots

I don't think we have had any posts about robots lately, and that is a shame. Thanks to this poorly thought-out excuse for an article, that streak ends here. We start with Isaac Asimov's "three laws of robotics":
1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Problem solved ("I, Robot" excluded, of course), right? Only for the short sighted. Luckily the author and scientist does a little thinking outside the box (which is a prerequisite for scientists who are grappling with ethical issues regarding a future world of robot people):
These three laws might seem like a good way to keep robots from harming people. But to a roboticist they pose more problems than they solve. In fact, programming a real robot to follow the three laws would itself be very difficult.

For a start, the robot would need to be able to tell humans apart from similar-looking things such as chimpanzees, statues and humanoid robots.

This may be easy for us humans, but it is a very hard problem for robots, as anyone working in machine vision will tell you.
The author lacks clarity here (probably because he is writing nonsense), but this can be taken one of two ways. First, the problem could be that robots programmed to kill chimpanzees and destroy statues would not be able to do their job because they would be confused about their targets. This seems to be a minor problem, and for someone advocating robot ethics, it is doubtful that chimp-exterminating robots would be acceptable anyway. So I am going to assume the problem he describes is that a robot will not be able to distinguish whether a command comes from a monkey or a man. I guess when you are making up robot fantasy worlds, it is easy to forget that a chimpanzee cannot speak to a robot or program a robot. If evolution brings us to the point where super-intelligent chimpanzees are programming computers at the level of humans, we have bigger problems than the robots.
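To be fair to the roboticists on one point: the laws really are trivial to write down and hopeless to implement. Here is a toy Python sketch of the Three Laws as a command filter (every name in it is hypothetical, invented purely for illustration); notice that all of the actual difficulty hides inside is_human, which is exactly the machine-vision problem the article hand-wrings about.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Entity:
    kind: str  # e.g. "human", "chimpanzee", "statue", "robot"

@dataclass
class Action:
    harms: List[Entity]                  # entities this action would injure
    ordered_by: Optional[Entity] = None  # who issued the command, if anyone
    self_destructive: bool = False

def is_human(entity: Entity) -> bool:
    # Stand-in for a machine-vision classifier -- the genuinely hard part.
    # A real robot does not get a labeled "kind" field; it gets pixels.
    return entity.kind == "human"

def permitted(action: Action) -> bool:
    # First Law: a robot may not injure a human being.
    if any(is_human(e) for e in action.harms):
        return False
    # Second Law: obey orders, but only if a human gave them
    # (a chimpanzee's "order" should not count).
    if action.ordered_by is not None and not is_human(action.ordered_by):
        return False
    # Third Law: protect its own existence (its subordination to the
    # first two laws is glossed over in this toy version).
    if action.self_destructive:
        return False
    return True

# Smashing a statue on a human's orders: allowed. Harming a human: not.
print(permitted(Action(harms=[Entity("statue")], ordered_by=Entity("human"))))  # True
print(permitted(Action(harms=[Entity("human")], ordered_by=Entity("human"))))   # False

The three if-statements take a minute to write; the classifier they all depend on has consumed decades of research.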

There are other important ethical issues raised here as well:
If robots can feel pain, should they be granted certain rights? If robots develop emotions, as some experts think they will, should they be allowed to marry humans? Should they be allowed to own property?
First suggestion: assuming you could even program robots to feel pain, just don't do it! Problem solved. As for allowing robots to marry humans, again I think the author forgot to think about what he was writing. I'll leave it to my two-year-old to explain the problems with this.

I know that this is at least the second post I have written where I disagreed with robot ethicists, but bear with me. I am trying to get all my thoughts out before the robot lawyers pass robot hate speech laws.

Have a happy Friday, or as the robots would say, "01010011 00111011."
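(For the curious, here is a minimal Python sketch, assuming standard 8-bit ASCII, of how one might translate to and from robot-speak. It also reveals what the robots above actually said.)

def to_binary(text: str) -> str:
    # Encode each character as its 8-bit ASCII code, space-separated.
    return " ".join(format(ord(c), "08b") for c in text)

def from_binary(bits: str) -> str:
    # Decode space-separated 8-bit groups back into text.
    return "".join(chr(int(b, 2)) for b in bits.split())

print(to_binary("Happy Friday"))         # what a polite robot would say
print(from_binary("01010011 00111011"))  # what the robots above said: 'S;'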

2 Comments:

Blogger Qahal said...

I don't know much about robots, but I think that failing to program them to feel pain would be the worst mistake we could ever make. Imagine, robots that can operate without pain. They would be the best athletes, workers and soldiers. How would we torture them into giving us necessary information?

I don't disagree with you very often, but Ransom, if you try to build robots that don't feel pain, I may have to kill you in order to save humanity.

1:56 PM  
Anonymous Anonymous said...

If you tried to kill me, my robot wife would sacrifice herself to save me. You see, I will program her with nothing but obedience to me... and lasers. She will definitely have lasers.

4:01 PM  
