Robot ethics discussed in the New York Times

Post by adam.klaptocz on 05 Dec 2008, 11:11

With more and more robots entering the military's active service, the New York Times has now run an article about the ongoing ethical debate over future autonomous military robots that may make their own life-or-death decisions on the battlefield. The article, entitled "A Soldier, Taking Orders From Its Ethical Judgment Center", highlights the views of numerous experts including Noel Sharkey, Daniel Dennett and Ronald Arkin, whose research hypothesis is that "intelligent robots can behave more ethically in the battlefield than humans currently can".

Read the full New York Times article
Article and complementary links on Robots.net
adam.klaptocz
Robots Podcast Team Member
 
Posts: 57
Joined: 28 May 2008, 11:17
Location: Lausanne, Switzerland


Re: Robot ethics discussed in the New York Times

Post by rogerfgay on 20 Dec 2008, 15:26

Noel Sharkey, a professor of artificial intelligence and robotics at the University of Sheffield, argues that the steady increase in the use of robots in day-to-day life poses unanticipated risks and ethical problems.

Call for robot ethics rules

Prof Sharkey shrugs off doomsday scenarios in books such as Isaac Asimov's I, Robot, about the threatening interaction between robots and humans, or in movies such as The Terminator, in which robots take over the world.

Such story lines will remain firmly in the realm of fantasy, even as societies hurtle towards greater automation, he said.

'I have no concern whatsoever about robots taking control. They are dumb machines with computers and sensors and do not think for themselves despite what science fiction tells us,' he said.

'It is the application of robots by people that concerns me and not the robots themselves.'
rogerfgay
 
Posts: 2
Joined: 16 Dec 2008, 23:25

Re: Robot ethics discussed in the New York Times

Post by Johnny 5 on 03 Feb 2009, 02:36

@rogerfgay: Yes ... what's your point though?

'It is the application of robots by people that concerns me and not the robots themselves.'


Let's assume that the doomsday scenarios portrayed in popular culture are indeed far off, that robots are far from outsmarting humans, and that they are no threat to humanity. If what we are worried about is truly the application of robots by people, then how is this problem different from the application of other types of technology?

Let's look at a few examples:


According to the Straits Times article you linked,
Professor Sharkey worries how robots - and particularly the people who control them - will be held accountable when the machines work with 'the vulnerable', namely children and the elderly ...


What makes robots special? Television sets also "work" with children (to stick with the strange terminology). They have been around for many years, and as we grew accustomed to the technology we learned how to integrate it into our lives. This took some time, but required little in the way of ethical guidelines or special legislation.


As a second example, let's think about the semi-autonomous war robots currently on duty in Afghanistan and Iraq. They are just another form of smart weapon, like a torpedo, a laser-guided missile, or a smart bomb. Again, similar systems have been around for many years.


Should we be having a much larger discussion about taking humans out of the loop in any system, robotic or non-robotic: cars, airplanes, or tanks?
Johnny 5
 
Posts: 141
Joined: 23 Jun 2008, 22:18

