
Can machines/AI commit crimes?



RoseGoesToYale

Hypothetical scenario: An Artificial Intelligence program plots to commit or commits a crime (grand theft, murder, arson, etc.) and gets caught. Who would you say can be held responsible for the crime? Is it the AI, because it's the one that came up with the plot in the first place? Is it the machine that holds the AI? Is it the creators of the AI, i.e. the humans, for programming crime-committing capabilities into it, even assuming the AI wasn't created for criminal purposes?

 

I might've said it was the humans, but then humans create new humans, and we don't hold the parents of a criminal responsible for the criminal's actions, even if the parents were the ones that educated the criminal about what crime is. But I don't know of any laws that can hold technology responsible for a crime. What do you think?


I mean sure, if the AI was sufficiently advanced it could be solely responsible. If a human (or humans) was manipulating or providing inputs that drove an AI to commit crimes, then it'd be the human(s) that would get the metaphorical glock of justice.  

SorryNotSorry

It would depend on whether the machine was completely autonomous. But then again, the person or machine that programmed the ferrous felon would have some serious explaining to do.

 

Woodworker1968

In a similar vein, if I went up to an ATM and deposited a piece of paper that said "this is a stick-up" into the slot, what would happen?

Philip027

Depends on the degree of moral agency it possesses, which is (to varying degrees) how we deal with humans themselves.  Kids that commit a crime are a lot more likely to be let off the hook (or at least be punished less severely than an adult might) if it can be shown they just didn't know any better.  It's the whole reason why the insanity plea exists, and gets (mis)used.

RoseGoesToYale
12 minutes ago, Woodworker1968 said:

In a similar vein, if I went up to an ATM and deposited a piece of paper that said "this is a stick-up" into the slot, what would happen?

Just paper? It'd probably spit it back out. Though it might spit it out just as the next patron comes along, and they could think they were getting mugged by the ATM. :o

NerotheReaper

A big question to ask with this is: does the machine have free will? Can it be proved that the machine/AI has free will, or did the creator program it to commit the crime? Did the machine/AI make the choice to kill that man, or did the creator program him to do it? And if the machine/AI killed its creator, it might be ruled an accident rather than a homicide. If I had to take a guess, I don't think a machine/AI would stand trial in today's courts. That might change in ten years, when technology is more advanced.

uhtred

There is going to be a very fuzzy line that is crossed. Certainly if a pet dog attacks someone, the owner has some responsibility - but so does the dog (which may be destroyed).  I expect we will get there before we recognize full machine responsibility. 

ExecValkyrie

I assume that being artificial intelligence includes the ability to act basically as a human being would, to think and "feel" as one would. Assuming the humans who created it programmed it in a sort of neutral way (giving it the capability to commit a crime, but also the morals to know it's wrong), I would think the AI would be held responsible. I'm sure the reality would be that the humans who created it wouldn't be safe from backlash in the least. There'd be the question of why people thought it'd be okay to program in the ability to commit a crime in the first place. Unlike with children, I suppose it's maybe a little more... deliberate in making the possibility of crime a reality. BUT, that's where the line gets blurred about whether artificial intelligence should be made at all, because it's not really AI if it's shackled or doesn't have the ability to evaluate morals and choose.

 

tl;dr I think an AI can commit a crime, it'll probably be destroyed for it and the creators thrown into a massive scandal, if not held responsible.

4 hours ago, Philip027 said:

Depends on the degree of moral agency it possesses, which is (to varying degrees) how we deal with humans themselves.  Kids that commit a crime are a lot more likely to be let off the hook (or at least be punished less severely than an adult might) if it can be shown they just didn't know any better.  It's the whole reason why the insanity plea exists, and gets (mis)used.

 

4 hours ago, NerotheReaper said:

A big question to ask with this is: does the machine have free will? Can it be proved that the machine/AI has free will, or did the creator program it to commit the crime? Did the machine/AI make the choice to kill that man, or did the creator program him to do it? And if the machine/AI killed its creator, it might be ruled an accident rather than a homicide. If I had to take a guess, I don't think a machine/AI would stand trial in today's courts. That might change in ten years, when technology is more advanced.

These two posts basically cover my views.

 

Is the machine responsible, or was it programmed to do it? Is it a coding thing? Because if it was a coding issue, then it's on whoever wrote the code (or hacked it).


At the end of the day it'll be Mankind's fault for coming up with this horsecrap in the first place. If there's any other form of intelligent life out there, and they find the remains of us and connect the dots, they'll laugh at us hysterically.

SorryNotSorry

In a related vein, the answers you get can depend on how you phrase the question.

 

If you've been stalking me (er, monitoring my posts) for the last couple of years, you know that I'm holding out some hope that bots will be developed to the point where they'll be practical as platonic partners for humans. Whenever I've asked in a direct manner, i.e., "when will we be able to date/marry robots?", I'd of course get rude answers pointing out my failure at life.

 

But when I asked the question in a more roundabout way, i.e., "how big an obstacle is the so-called uncanny valley in developing robots that can pass for human", I got earnest answers. Apparently writing all that AI software is much more tedious than creepy, according to the engineers who answered my question.

 

Skycaptain

The time will come (if it hasn't already) when machines commit crimes. Whether it's an autonomous car breaking the speed limit, or a sexbot deciding it wants sex when the "owner" doesn't, or doing so with a third party. The idea of a house guest being raped by a randy sexbot may sound farcical now, but in a few years' time?

 

 

SorryNotSorry
On March 12, 2018 at 3:03 AM, Skycaptain said:

The time will come (if it hasn't already) when machines commit crimes. Whether it's an autonomous car breaking the speed limit, or a sexbot deciding it wants sex when the "owner" doesn't, or doing so with a third party. The idea of a house guest being raped by a randy sexbot may sound farcical now, but in a few years' time?

 

 

Yeah, some guy will try to rob a bank with his lovebot driving the getaway car. XD

Ace of Mind

Depends on whether it's the AI we actually have or the AI everyone's afraid of. 

 

The AI seen in most movies would be Artificial General Intelligence, which is indistinguishable from a human on any intellectual or emotional task. This type of AI can draw its own conclusions, hold opinions, and generate original thoughts and actions. It is also something we currently have no mathematical assurance will ever exist in real life. If an Artificial General Intelligence were to decide to commit a crime, it would be appropriate to regard it in the same manner as when a human decides to commit a crime.

 

However, neither the AI currently in existence nor the general techniques popular in modern AI suggest that the concept of "deciding to commit a crime" can be applied to an AI with any useful meaning. The AI we have now simply uses patterns from huge amounts of data to find which past examples the current situation most resembles, and which response is most likely to be correct based on the responses in those examples. In that case, someone harmed as a result of an AI's decision would be regarded on a similar level to a workplace accident, and any resulting charges would most likely be something like criminal negligence or manslaughter against the development team.
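To make that "find the most similar examples" idea concrete, here's a minimal nearest-neighbour sketch in Python. The function name, features, and data are all invented for illustration; real systems learn far richer patterns from far bigger datasets, but the basic shape of the "decision" is the same:

```python
from collections import Counter

def choose_response(situation, examples, k=3):
    """Return the most common response among the k most similar examples.

    `examples` is a list of (features, response) pairs -- a toy stand-in
    for the huge datasets real systems are trained on.
    """
    def distance(a, b):
        # Plain Euclidean distance between feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    # Find the k most similar past situations...
    nearest = sorted(examples, key=lambda ex: distance(ex[0], situation))[:k]
    # ...and echo the most common response among them.
    return Counter(resp for _, resp in nearest).most_common(1)[0][0]

# Toy usage: two made-up features, three remembered situations.
history = [((0.1, 0.9), "brake"), ((0.2, 0.8), "brake"), ((0.9, 0.1), "accelerate")]
print(choose_response((0.15, 0.85), history))  # -> "brake"
```

Nothing in there "decides" anything; change the examples and the output changes, which is why the responsibility question keeps pointing back at whoever supplied the data and the objective.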

paperbackreader
On 3/11/2018 at 3:15 AM, Woodworker1968 said:

if I went up to an ATM and deposited a piece of paper that said "this is a stick-up" into the slot, what would happen?

Giggles

On 3/11/2018 at 3:29 AM, RoseGoesToYale said:

Just paper? It'd probably spit it back out. Though it might spit it out just as the next patron comes along, and they could think they were getting mugged by the ATM. :o

Double giggles 

 

The legal profession is going to have some amazing discussions in the future; that is, if they haven't yet been completely replaced by robots...

 

Echoing @Ace of Mind here: for some crimes, e.g. those where proving 'intent' is central to prosecution, it would probably be harder to prosecute a bot/AI, and more likely that the person who designed it would be prosecuted, either for malicious intent or for negligence. For crimes where intent is not part of the test for prosecution, it would probably be less difficult, assuming the law accepts that a crime can be committed by a non-human/organisation. Someone would first have to work out a penalty tariff for a bot... ? So I suspect manslaughter charges would come before murder/fraud/conspiracy charges against bots...

Skycaptain

But with true AI, machines will be able to design and build other machines to suit their requirements, not human requirements. 

1 hour ago, Skycaptain said:

But with true AI, machines will be able to design and build other machines to suit their requirements, not human requirements. 

This has already happened. An AI built its own AI and the new one outsmarts everything hoomans have built so far. Will add link as soon as I find it again.

 

Edit: Here you go

  • 3 weeks later...
J. van Deijck

if the machine is sentient, then yeah. it is responsible.

Sally

No.  There could be no mechanism for penalizing/punishing a robotic machine, other than dismantling it.   Laws are inextricably connected with penalties; otherwise, the laws are toothless.

paperbackreader
1 hour ago, Sally said:

No.  There could be no mechanism for penalizing/punishing a robotic machine, other than dismantling it.   Laws are inextricably connected with penalties; otherwise, the laws are toothless.

Dismantling vs. disabling vs. ensuring it is never built again and all blueprints are destroyed (robot genocide?)


No disassemble, no disassemble 

Johnny 5 is alive :P:P

 

Anyone else actually remember Short Circuit? 

Janus the Fox

I think, even with the possibility of limitless AI programming, a robot cannot be held to the criminal or moral responsibility of the law. It would be the responsibility of its creators, up to the point where the AI somehow re-codes itself through a misunderstanding of its tasks.

On 3/10/2018 at 7:03 PM, RoseGoesToYale said:

I might've said it was the humans, but then humans create new humans

But, creating robots is a very different process... 

 

In response to your question, I think this is all being played out already on a smaller scale.

 

Take the advertising algorithm of [insert famous website here].  Ultimately, nobody blames the algorithm itself (as an "entity") for issues they might have with it, even though it uses what we call AI to make decisions to show you "relevant" ads/videos/etc.  It goes back to the software engineers who wrote it - and not just them, but the business people and managers who led the creation of those algorithms.  Rare is the genius coder coding in a vacuum - there is often a whole organization of people behind these kinds of products.  (I'm speaking of "product" here in the generic sense; it could be a government organization and the "product" could be a piece of military equipment.)
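As a toy illustration of the kind of "decision" such an algorithm makes (every name, topic, and number below is invented; real ad systems use learned models far more complex than this):

```python
def pick_ad(ads, user_interests):
    """Hypothetical ad selection: show whichever ad scores highest
    against the user's interest profile. Purely illustrative."""
    def relevance(ad):
        # Overlap between the user's interests and the ad's topic weights.
        return sum(user_interests.get(topic, 0.0) * weight
                   for topic, weight in ad["topics"].items())
    return max(ads, key=relevance)

# Made-up inventory and user profile.
ads = [
    {"id": "ad-1", "topics": {"cars": 0.9, "tech": 0.1}},
    {"id": "ad-2", "topics": {"cooking": 0.8}},
]
print(pick_ad(ads, {"tech": 0.7, "cars": 0.2})["id"])  # -> "ad-1"
```

The "AI" in a real system is essentially a learned version of that relevance function, which is exactly why blame flows back to the people who chose the objective and the data rather than to the algorithm as an "entity".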

 

If there are problems with a product (e.g. it can commit crimes that are not accidents) then it must be traceable to a design choice, lack of sufficient testing, or lack of disaster planning/recovery.  All of these are human errors.  I don't see that going away. 

 

What I do envision is some kind of catch-all legislation ensuring that someone capable of paying fines or going to jail is responsible for malfunctions. Perhaps there are already laws like that in place... It'll be something to watch as more states allow testing of autonomous vehicles.


There is one thought though. If a machine is not aware that its actions are perceived by humans to be criminal, has it committed a crime? 

J. van Deijck
11 hours ago, Sally said:

No.  There could be no mechanism for penalizing/punishing a robotic machine, other than dismantling it.   Laws are inextricably connected with penalties; otherwise, the laws are toothless.

but then, it would be equivalent to the death penalty.

Celyn: The Lutening
On 11/03/2018 at 3:03 PM, uhtred said:

There is going to be a very fuzzy line that is crossed. Certainly if a pet dog attacks someone, the owner has some responsibility - but so does the dog (which may be destroyed).  I expect we will get there before we recognize full machine responsibility. 

I read an article about exactly this question, and the closest thing they got to an answer was that they would have to revive the medieval English law of "deodand". This was created for horses that misbehaved and injured or killed someone. Basically, the horse would be taken off the owner by the authorities and sold, and the proceeds would go to the victim. So I imagine something similar would happen to a robot, probably after a memory wipe and some reprogramming.

I imagine if the issues were too deeply embedded to be resolved, it would have to be destroyed - but I agree that if it was sentient, this would be the death penalty, so, in places where that isn't legal....what would a prison for robots/AIs look like?


An AI prison would probably just be a warehouse and someone to switch off the offending device 

  • 2 weeks later...

Criminal law requires both a criminal act (actus reus) and a criminal mind (mens rea). The difficult issue for artificial intelligence would be proving intent. Assuming you could get past that hurdle, you'd then face the perplexing question of punishment. How would you go about sentencing a robot?

I predict that the matter will be settled according to these practical considerations.


Archived

This topic is now archived and is closed to further replies.
