"Should a robot decide when to kill?" Topic


20 Posts


Tango01 29 Jan 2014 11:00 p.m. PST

"By the time the sun rose on Friday, December 19th, the Homestead Miami race track had been taken over by robots. Some hung from racks, their humanoid feet dangling above the ground as roboticists wheeled them out of garages. One robot resembled a gorilla, while another looked like a spider; yet another could have been mistaken for a designer coffee table. Teams of engineers from MIT, Google, Lockheed Martin, and other institutions and companies replaced parts, ran last-minute tests, and ate junk food. Spare heads and arms were everywhere.


It was the start of the Robotics Challenge Trials, a competition put on by the Defense Advanced Research Projects Agency (DARPA), the branch of the US Department of Defense dedicated to high risk, high reward technology projects. Over a period of two days, the machines would attempt a series of eight tasks including opening doors, clearing a pile of rubble, and driving a car.


The eight robots that scored highest in the trials would go on to the finals next year, where they will compete for a $2 million USD grand prize. And one day, DARPA says, these robots will be defusing roadside bombs, surveilling dangerous areas, and assisting after disasters like the Fukushima nuclear meltdown…"
Full article here.
link

Amicalement
Armand

Coyotepunc and Hatshepsuut 30 Jan 2014 12:35 a.m. PST

The topic question isn't easy to answer because it is poorly worded. I think the real question is: should robots be programmed to autonomously kill?

The answer is irrelevant. Right or wrong, it is going to happen.

WCTFreak 30 Jan 2014 3:27 a.m. PST

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Pete Melvin 30 Jan 2014 4:18 a.m. PST

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

We can't even get humans to follow that one, let alone our robotic overlords

EagleSixFive 30 Jan 2014 4:42 a.m. PST

What could possibly go wrong!

Dynaman8789 30 Jan 2014 5:57 a.m. PST

The three laws are great in theory, but the coding that would go behind even the first one is mind-bogglingly complex. What constitutes "hurting"? What if the robot were Dr. Bot, the home-surgery robot?
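A toy sketch of that problem (purely illustrative, not anyone's real code; the function names and keyword list are invented): even a crude "does this action cause harm?" rule immediately trips over the surgery case, because "harm" is underspecified.

def causes_harm(action: str) -> bool:
    # Naive rule: anything that cuts, strikes, or burns a human counts as "harm".
    return any(word in action for word in ("cut", "strike", "burn"))

def first_law_permits(action: str) -> bool:
    return not causes_harm(action)

# Dr. Bot must cut the patient to save them -- the naive rule forbids it:
print(first_law_permits("cut open the patient to remove a tumour"))   # False
# ...while harming through inaction slips straight past it:
print(first_law_permits("do nothing while the patient bleeds"))       # True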

Robots will never "decide" when to kill; either their operator will do it (if the bot is really a drone, or if it needs an OK from an operator to fire its weapons – which will be the case for decades to come), or it is the programmers who wrote the code for the bot who decide.

Zargon 30 Jan 2014 6:21 a.m. PST

"Kill them! KILL them all!!" Her Dokter Ziess shouted at his robotic minions. And they did.

Balin Shortstuff 30 Jan 2014 6:56 a.m. PST

And Asimov added an additional law, to come first:

"0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm."

Mardaddy 30 Jan 2014 7:14 a.m. PST

Just like my reasoning for being against using drones for strikes/kills, I feel the threshold is lowered further when you do not have a PERSON you are putting at risk to take the action.

If there is no chance for one of your own to die, it is far easier to decide on a violent resolution instead of exercising other, more peaceful options.

I liken it to "zero-tolerance" rules; they take the critical thinking and rational decision-making out of the equation – no think, just kill.

Ron W DuBray 30 Jan 2014 7:20 a.m. PST

"0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm."

This one rule would enslave us all: the robots would try to stop us from running power plants, doing modern farming, making gas, using power tools, watching the news, driving cars, flying aircraft, and thousands if not millions of other things we do that endanger us every day.

elsyrsyn 30 Jan 2014 9:03 a.m. PST

Simple answer to the "should" question – no. Decisions about killing should never be left to code. Even putting all of the ethical issues totally aside, it would be impossible to code for all of the possible parameters involved in making the decision correctly, and when the consequences of a logic failure are fatal, taking the chance simply cannot be justified. It's just a double plus ungood idea.

Now, will it eventually happen? As punkrabbit notes, almost certainly. Somebody will eventually build, for example, an armed security system that opens up with an IR-aimed MG on anyone who enters a given area without the appropriate RFID tag. Hell, such things have undoubtedly already been built. But somebody will eventually deploy one (if they haven't already), and when they do, I hope the responsible parties are the first ones who forget and leave their tags in the lavatory and get blown to pieces on the way back to their desks.
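A sketch of that kind of sentry, under invented assumptions (the tag IDs and function are placeholders, not a real product): the entire "decision" is a few lines of logic with no human in the loop, and nothing in it can tell an intruder from an employee who left their tag in the lavatory.

from typing import Optional

AUTHORIZED_TAGS = {"tag-0042", "tag-0417"}   # hypothetical issued RFID tags

def sentry_decision(movement_detected: bool, rfid_tag: Optional[str]) -> str:
    # No operator approval anywhere in this path.
    if not movement_detected:
        return "hold"
    if rfid_tag in AUTHORIZED_TAGS:
        return "hold"
    return "fire"

print(sentry_decision(True, "tag-0042"))   # hold -- badge holder
print(sentry_decision(True, None))         # fire -- tag forgotten in the lavatory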

Supposing we eventually have true artificial intelligences, on a par with or superior to human brains … well, that might be another story. Even then, though, I think humanity will be inclined (if at all possible) to arrogate the right to off itself exclusively to itself.

Doug

Tango01 30 Jan 2014 11:17 a.m. PST

I take the question to be about future robots with artificial intelligence.
Maybe not so far away in the future.

Amicalement
Armand

KatieL 30 Jan 2014 11:40 a.m. PST

"or it is the programmers who wrote the code for the bot that decide."

Actually, it's unlikely to be.

What we're finding as we start using machine intelligences for things is that the notions of AI developed through the 60s, 70s, and 80s, about "hand-crafting" intelligence by writing rules, simply don't work.

The 90s approach of curating knowledge also doesn't really work, because it turns out to be just as hard.

We get good results from crowdsourcing (getting lots of people to do bits of work) and from very large corpus learning. Most widely used, successful machine intelligence systems these days are based around learning from huge datasets and applying that knowledge autonomously.

So the programmers only wrote a classifier, and classifiers are general enough that the one which works out the "kill/don't kill" classification for a moving video image might next week be working out the "cute cat/not a cute cat" classification for filing videos…

How the system works internally ends up not actually being known by any human. It's not deliberately opaque, but it's actually not all that surprising that since we don't know how WE tell things apart, we'd struggle to comprehend how a sufficiently complex machine intelligence system does.
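A minimal sketch of that point, with made-up data and placeholder labels (nothing here is a real targeting system; the dataset shapes and label meanings are invented): the programmer writes one generic learning pipeline, and what it ends up deciding comes entirely from whichever labelled dataset it is trained on, not from hand-written rules.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_classifier(features, labels):
    # Identical code no matter what the labels mean.
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(features, labels)
    return clf

rng = np.random.default_rng(0)

# Stand-in dataset A: video-frame features labelled "engage" (1) / "hold" (0).
frames_a = rng.normal(size=(1000, 64))
labels_a = rng.integers(0, 2, size=1000)
targeting_model = train_classifier(frames_a, labels_a)

# Stand-in dataset B: the same pipeline, labelled "cute cat" (1) / "not a cat" (0).
frames_b = rng.normal(size=(1000, 64))
labels_b = rng.integers(0, 2, size=1000)
cat_filter_model = train_classifier(frames_b, labels_b)

# Neither model's internal rules were written by a human, and inspecting the
# trained trees says little about why any one frame was classified as it was.
new_frame = rng.normal(size=(1, 64))
print(targeting_model.predict(new_frame), cat_filter_model.predict(new_frame))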

doug redshirt 30 Jan 2014 12:02 p.m. PST

I have no moral reservations about sending out armed robots in war to kill and wound our enemies. As Sherman said, war is hell. There is nothing civilized or decent about it. Better a thousand robots die than one fellow countryman.

Having said that, I can also see limits. A robot can wait and only fire if fired upon. It can be set to fire only on armed individuals. Of course these commands can be overruled by the guy who is there. Never take away control from the man on point. If he wants suppressive fire on that house, then the robot will do it.

Dynaman8789 30 Jan 2014 1:09 p.m. PST

> Maybe not so far away from the near future.

We are nowhere close to making an artificial intelligence; as KatieL notes, the methods we currently use do not work. Someday somebody might come up with a way to make a true artificial intelligence – but it would be a revolutionary and not an evolutionary step.

We can have autonomous robots with the ability to shoot now; the limits would be on the order of "shoot anything moving of X size or greater that does not identify itself properly". Allied equipment and personnel would have the proper codes, though as someone else mentioned, the consequences of a bug or malfunction can be serious…

John the OFM 30 Jan 2014 4:06 p.m. PST

People make such rational decisions regarding who should live or die. Do you think a robot could do any better?
….Why, yes I do! grin

Wolfprophet 30 Jan 2014 4:08 p.m. PST

Not safe for work, pertinent to the discussion.

link

StarfuryXL5 30 Jan 2014 5:15 p.m. PST

"Please put down your weapon. You have 20 seconds to comply."

GR C17 31 Jan 2014 11:51 a.m. PST

"KILL ALL HUMANS", whispering "except Fry."
