On 2019-08-25 at 09:19, Joe wrote:

> On Sun, 25 Aug 2019 07:26:12 -0500 John Hasler <jhas...@newsguy.com>
> wrote:

>> [1] I prefer "A robot should do its job and not hurt anyone."
> 
> The elephant in the room being in the definition of 'hurt'.
> 
> https://www.zerohedge.com/news/2019-08-21/youtube-banning-robot-fighting-videos-over-animal-cruelty

Not to mention: I'm fairly sure the software which runs autonomous,
non-piloted drones would qualify as AI for the purposes of the Three
Laws, at least as much as most things we're doing at current technology
levels would, and some of those are intentionally designed *to* hurt
people. As long as it's the "right" people.

It seems clear to me that when Asimov formulated the Three Laws, he
either failed to account for the possibility of legitimate cases for
robots injuring or otherwise harming humans (war, law enforcement,
private security, ...), or - and I think this is the more likely
scenario - was specifically trying to disallow any of those things from
ever being considered legitimate to have a robot do, either out of
philosophical objections or out of concern for the consequences which
could arise (in a robot-uprising sense, if nothing else) if that door
were once opened even a crack.

I would find any arguments to the contrary interesting.

-- 
   The Wanderer

The reasonable man adapts himself to the world; the unreasonable one
persists in trying to adapt the world to himself. Therefore all
progress depends on the unreasonable man.         -- George Bernard Shaw
