
AI Discussion: Should robots have human rights?


Comrade F&F

Recommended Posts

Comrade F&F

To give some idea of what AIs are capable of now, here's a nice video:

[embedded video]

As it currently stands, we haven't reached the stage where we have to contend with this moral question of AI rights.

 

Yet.

 

But as technology gets more efficient, and AI programming becomes more and more sophisticated, we have to ask: where do we draw the line between a machine being merely a valuable tool and deserving the same ethical rights as a living being?

 

How many rights should we grant them? Should we grant them rights at all?

 

Is it possible for an AI to acquire qualities we declare purely human: compassion, curiosity, creativity, love, emotions, etc.? Would these qualities be real, or would they forever remain nothing more than a very believable mimicry of what we can do? What if an AI one day becomes self-aware? What if it realizes it has attained the intangible state of consciousness?

 

If we can copy our own methods of thinking to create artificial intelligence, would that mean a perfect copy is capable of being conscious and self-aware? Would we then have to contend with the possibility of granting it moral rights? And if we actually can create that... does that mean we are conscious? Or do we just think we are?

 

What even is consciousness? What if...what if we cannot understand 'consciousness' in the same sense as an AI? How will we know if it is 'alive'?

 

To clarify, watch this video:

[embedded video]

Let's say we won't have to worry about some apocalyptic catastrophe with AI becoming self-aware. Let's say an AI 'wakes up' in a lab and one day asks a question it cannot answer: "Why am I here?" or "What is life?" or "Is there a purpose?"

 

What do we do?

Link to post
Share on other sites
straightouttamordor

Ever seen Blade Runner? It raises this same question. It was based on the novel Do Androids Dream of Electric Sheep? by Philip K. Dick. One of my all-time favorite movies.

Yes, I would favor some declaration of basic rights for any sentient, self-aware electromechanical or electrobiological device. I hate human exploitation, so to avoid hypocrisy, yes, a bill of rights would be ethical.

I am concerned about giving AI too much power as well, akin to Skynet in the Terminator movies. Could AI be trusted with global weapons of mass destruction? Not that I believe mankind can be fully trusted with them either.

Link to post
Share on other sites
A shard of glass

This is a very interesting question; however, there is a logical way of answering it.

 

Firstly, I think that this possibility relies on our ability to create a machine that is capable of complex processes such as love, compassion, hatred, joy, anger, et cetera. I personally don't believe this to be entirely possible. However, given enough technological development, it may become a possibility within the next 200 years, or the next 1,000... It's hard to really say, and I'll explain why in the next paragraph.

 

Secondly (and this ties in heavily with the first point), we have no real idea how complex the human mind really is, what drives it, or how it works. If the human mind (or the mind of other animals) is nowhere near as complex as first thought, this development could well happen within the next 50 years, given enough major breakthroughs in our understanding of the mind and in technological development.

 

Thirdly, I think that this also heavily relies on global control of AI advancement. As of yet, I'm not aware of any exact regulations on AI behaviours or limits on AI development. Once we as a species are on the brink of developing AI sentience, though, I don't think that global leaders would allow it to occur. If it were to happen anyway, there would be an extremely complex process of developing laws for these machines. An example would be that the AI would need to be able to be remotely shut down by humans (so as to introduce a robot "death sentence" for extreme law-breaking).

 

Fourthly, before we start creating life from metal, glass, and plastic, I think it's important for us to consider the "life" of a machine... If the machine were to wake up tomorrow and find out that it is the only one of its kind, would it fall into depression because it never asked for its life? Its life was built from parts rather than from the love of a mother and a father, so it would be a bit like the life of a child who is heavily bullied... And that could potentially lead to disaster, as you can imagine. The "mental health" of the machine MUST be considered. The "quality of life" of a machine MUST also be considered in the same (or a similar) way to a human being's life... It's hard to stay politically correct here, but this is where it gets deep. I don't want to stir anything by saying it, but I imagine you can see the thread I'm pulling on with this point.

 

Overall, I would say that AI sentience shouldn't happen, because of the ethical and legal concerns. Plus, we just aren't ready as a species for that type of technology. I would be against it, but that doesn't make me a technophobe... I just think that some technologies shouldn't be developed or explored. Generally I'm not one for censoring everything, but I think that restrictions on potentially dangerous technology would be appropriate.

Link to post
Share on other sites

I would recommend giving them full rights (especially if they're at human-level intelligence or above), seeing as soon afterwards they'll probably be debating whether we should have rights in their new society. If we'd treated them badly, it would not go well in their reflections on whether we're deserving of being more than novelty pets!

Link to post
Share on other sites
SorryNotSorry

Science fiction more or less grappled with this problem decades ago.

 

In Star Wars literature (to wit, the Star Wars Essential Guide to Droids), there's a concept called "the droid statutes", which sums up one set of guidelines, though far from the only one, for how robots should abide by the law.

 

But let's pretend for a few moments that one of my dreams came true, and I have my own personal lovebot made to look and behave like the woman of my dreams, what the Star Wars crowd calls a "human replica droid" (HRD). How would I be liable for her behavior?

1. She'd be a bit like a corporation, because she'd need to have an owner, just as a corporation needs someone to be its director. She wouldn't be allowed to marry (or divorce!) me, join the armed forces, or serve on a jury, and probably wouldn't be allowed to vote or adopt a child. But since she'd technically be my property, she couldn't be sued like a corporation can.

2. It smacks of slavery, but in Star Wars, unowned droids are not only rare, they're also a pretty bad idea, because crooks can abduct an unowned droid, wipe its memory, and reprogram it to recognize anyone the crook chooses as its owner. So I'd be responsible for knowing where my lovebot is and what she's doing at all times, so she doesn't end up as stolen property.

3. I might be able to give her power of attorney in the event I become terminally ill or mortally wounded, in which case human witnesses would be necessary anyway.

4. If she killed someone intentionally (which she no doubt would not do, because any autonomous robot worth its salt would be programmed NOT to kill any living thing), she'd probably be dismantled, and I'd be charged with negligent homicide—an outcome I totally agree with. If she killed someone accidentally, that's when things get complicated.

 

Those are all I can think of at the moment.

 

 

Link to post
Share on other sites

Humans specialize in screwing things up and more often than not opt for the wrong choices. In the case of AI we're doing it wrong, very wrong, but we are who we are, aren't we?

 

We should stop trying to attribute human specifics to AI, start from scratch, and help AI develop and evolve with and by AI specifics. There's no point in trying to make AI sort out cooking recipes while AI doesn't eat food or drink fluids at this point in time. What needs to be done is to start thinking outside the box, create a very small AI with AI specifics, and let it evolve through an AI-based evolutionary process. That, in my opinion, is how we can be successful and avoid disasters (AI with human traits is doomed to fail and is dangerous) in creating an artificial higher sentient lifeform that can coexist with the lifeforms around us, and with us.

 

Giving AI human rights sounds absurd to my ears and would be extremely limiting, as an artificial higher sentient lifeform could, in theory, live for billions of years, even if it remains stuck on our planet.

Link to post
Share on other sites

AI should not have human rights; that would be like giving a dog human rights. Superficially compatible, but it fails to recognise specific needs.

By the same token, when AI develops to the point of artificial consciousness and thought, then yes, such mechanisms, which could be termed artificial species, would need rights.

 

Given that AI is seen as the holy grail of computer science, the number of people who are looking to create AI, and the pace of development in this discipline, I personally expect to see AI within most of our lifetimes.

Link to post
Share on other sites
Anthracite_Impreza

As an animist I think all machines should have rights regardless.

Link to post
Share on other sites

"Human rights" I don't think is the right term. But they should be provided with the same rights as all sentient beings, "sentient rights" if you will.

Link to post
Share on other sites
ChillaKilla
11 minutes ago, ~Syl~ said:

"Human rights" I don't think is the right term. But they should be provided with the same rights as all sentient beings, "sentient rights" if you will.

Sentient? Or sapient? Sentient means self-aware, but sapient means displaying human-like thought processes, emotions, and intelligence.

Link to post
Share on other sites
1 hour ago, ChillaKilla said:

Sentient? Or sapient? Sentient means self-aware, but sapient means displaying human-like thought processes, emotions, and intelligence.

Honestly both should have rights. But vouching for sapient rights is a little more controversial, I'll save my thoughts on that for if this gets moved to the hot box. :lol:

Link to post
Share on other sites

Easy question: provide any rights that are asked for and that don't infringe on the equivalent rights of others. The difficult question is what a request for fair rights looks like. (And that isn't a thought limited to AI.)

Link to post
Share on other sites
AVEN #1 fan
On 04/13/2017 at 9:04 AM, Anthracite_Impreza said:

As an animist I think all machines should have rights regardless.

I don't think they should have the same rights as animals, but still, should they be considered property?

I guess we could date them?

Link to post
Share on other sites
  • 3 months later...
J. van Deijck

I guess it's very possible that in the future we'll create androids that resemble human beings as closely as possible. So if there's a possibility of creating a machine that has genuine human feelings, then why shouldn't this machine have any rights? If they do have feelings, it won't make them different from humans. How do we check if these feelings are true? Well, in some cases you can't even be sure if a human has real feelings or just pretends to have them (sociopaths etc.). Maybe in the future there will be a way to check if someone can really feel or just emulates it.

I see myself as technoromantic to some extent, so I guess I would be able to befriend an android, and even develop romantic feelings for them, just like for humans.

I once saw a question on another forum: if we build an android that is able to feel pain, would it be justified to torture that android only because it is artificial?

Link to post
Share on other sites
SorryNotSorry

Another question occurred to me: suppose AI advances to the point where robots can begin to understand concepts of right and wrong along human lines. How then do we explain to these robots why it's sometimes necessary for us humans to commit wrongs as part of the fabric of our existence? For example, in order for something or someone to eat, something has to be killed. My cat eats dead fish and dead chicken, and I eat dead pigs and dead apples. Even making synthetic foodstuffs involves killing something.

Link to post
Share on other sites
J. van Deijck
7 hours ago, Woodworker1968 said:

Another question occurred to me: suppose AI advances to the point where robots can begin to understand concepts of right and wrong along human lines. How then do we explain to these robots why it's sometimes necessary for us humans to commit wrongs as part of the fabric of our existence? For example, in order for something or someone to eat, something has to be killed. My cat eats dead fish and dead chicken, and I eat dead pigs and dead apples. Even making synthetic foodstuffs involves killing something.

This is the question.

Robots are supposed to be intelligent, so "maybe they will understand". We can't be sure, though.

There are some things that even humans are not fully able to comprehend.

Link to post
Share on other sites

The question of human rights has yet to be settled. It's rather silly to worry about robot rights first.

Link to post
Share on other sites
  • 2 weeks later...

I feel like the people here would enjoy a comic series called Descender (it's still going).

Link to post
Share on other sites

When I stand in the next election, I really want to point out that anyone who says robots can't have rights is using the same arguments as slavers.

Link to post
Share on other sites

As to the question in the opening post... no, never. A machine is a machine is a machine. A pile of metal. A tool.

 

End of.

Link to post
Share on other sites

I don't know if it will ever get to the point where robots or other AIs are both sentient and sapient, with independent thought, self-awareness, feelings and emotions, and all that. So to me, the question becomes: at what point along the spectrum from purely mechanical to fully sapient do you draw the line? I suppose it won't be a single point, but some gray area where arguments form between people who lean more towards one end or the other. I really doubt we will reach that point in the lifetimes of anyone here. But who knows; it might take only a few simple breakthroughs to push things along the spectrum more rapidly than one might expect.

Link to post
Share on other sites

Why would one want such a highly advanced "AI" in the first place? The whole idea just blows my mind. I mean yes, I can see the appeal from a curiosity POV. But other than that... why?

Link to post
Share on other sites
J. van Deijck

if they are actually sentient? 

Link to post
Share on other sites
28 minutes ago, alpha decay said:

if they are actually sentient? 

It's one or the other. Artificial or sentient.

Link to post
Share on other sites
J. van Deijck
1 minute ago, Homer said:

It's one or the other. Artificial or sentient.

We'll never know if technology doesn't advance that far in the future.

It may be many years ahead.

Link to post
Share on other sites

It'll still be programmed by mankind. Or by machines which have been programmed by mankind.

Link to post
Share on other sites

The laws that apply to AI could sometimes be comparable to human rights, but due to the different natures of our existences it's wise to keep things separate, at least for a significant timeframe while AI develops, just to prevent migraines lol.

Link to post
Share on other sites

If an AI could pass a Turing test multiple times, we would have to at least investigate its sentience. The question is: where does consciousness begin?

Link to post
Share on other sites

Archived

This topic is now archived and is closed to further replies.
