Should your robot driver kill you to save a child’s life?

Type Username Here

Robots have already taken over the world. It may not seem so because it hasn't happened in the way science fiction author Isaac Asimov imagined it in his book I, Robot. City streets are not yet crowded with humanoid robots, but behind closed doors robots have long been doing the mundane work humans would rather avoid.

Their visibility is going to change swiftly, though. Driverless cars are expected to appear on roads and make moving from one point to another less cumbersome. Even though they won't be controlled by humanoid robots, the software that runs them raises many ethical challenges.

For instance, should your robot car kill you to save the life of another in an unavoidable crash?

License to kill?
Consider this thought experiment: you are travelling along a single-lane mountain road in an autonomous car that is fast approaching a narrow tunnel. Just before entering the tunnel a child attempts to run across the road but trips in the centre of the lane, effectively blocking the entrance to the tunnel. The car has but two options: hit and kill the child, or swerve into the wall on either side of the tunnel, thus killing you.

Both outcomes will certainly result in harm, and from an ethical perspective there is no “correct” answer to this dilemma. The tunnel problem serves as a good thought experiment precisely because it is difficult to answer.

The tunnel problem also points to imminent design challenges that must be addressed, because it raises the question: how should we program autonomous cars to react in difficult ethical situations? A more interesting question, however, is: who should decide how the car reacts in difficult ethical situations?

This second question asks us to turn our attention to the users, designers, and law makers surrounding autonomous cars, and ask who has the legitimate moral authority to make such decisions. We need to consider these questions together if our goal is to produce legitimate answers.

At first glance this second question – the who question – seems odd. Surely it is the designers’ job to program the car to react this way or that? I am not so sure.

From a driver’s perspective, the tunnel problem is much more than a complex design issue. It is effectively an end-of-life decision. The tunnel problem poses deeply moral questions that implicate the driver directly.

Allowing designers to pick the outcome of tunnel-like problems treats those dilemmas as if they must have a “right” answer that can be selected and applied in all similar situations. In reality they do not. Is it best for the car to always hit the child? Is it best for the car always to sacrifice the driver? If we strive for a one-size-fits-all solution, it can only be offered arbitrarily.

The better solution is to look for other examples of complex moral decision-making to get some traction on the who question.

Ask the ethicist
Healthcare professionals deal with end-of-life decisions frequently. According to medical ethics, it is generally left up to the individual for whom the question has direct moral implications to decide which outcome is preferable. When faced with a diagnosis of cancer, for example, it is up to the patient to decide whether or not to undergo chemotherapy. Doctors and nurses are trained to respect patients’ autonomy, and to accommodate it within reason.

An appeal to personal autonomy is intuitive. Why would you agree to let someone else decide a deeply personal moral question, such as an end-of-life decision in a driving situation, that you feel capable of deciding for yourself?

From an ethical perspective, if we allow designers to choose how a car should react to a tunnel problem, we risk subjecting drivers to paternalism by design: cars will not respect drivers’ autonomous preferences in those deeply personal moral situations.

Seen from this angle it becomes clear that there are certain deeply personal moral questions that will arise with autonomous cars that ought to be answered by drivers. A recent poll suggests that if designers assume moral authority they run the risk of making technology that is less ethical and, if not that, certainly less trustworthy.

As in healthcare, designers and engineers need to recognise the limits of their moral authority and find ways of accommodating user autonomy in difficult moral situations. Users must be allowed to make some tough decisions for themselves.

None of this simplifies the design of autonomous cars. But making technology work well requires that we move beyond technical considerations in design to make it both trustworthy and ethically sound. We should work toward enabling users to exercise their autonomy where appropriate when using technology. When robot cars must kill, there are good reasons why designers should not be the ones picking victims.

https://theconversation.com/should-your-robot-driver-kill-you-to-save-a-childs-life-29926
 

88m3

serious topic...


who's to decide, the state? in the event all of this technology becomes a reality.
 

Type Username Here

88m3 said:
serious topic... who's to decide, the state? in the event all of this technology becomes a reality.


Would the most ethical thing be to have the algorithm depend on a randomized reaction? In some cases the algorithm would strike the child; in other cases it would put the driver in the most harm.

Should it be random, and should it also take into consideration a skewed outcome, since the child being struck is much more likely to die than a person inside a car hitting a wall?
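For concreteness, here is a minimal sketch of what such a skewed, randomized policy could look like. Everything in it is hypothetical: the probabilities, the choose_maneuver helper, and the idea that a car could estimate these numbers at all.

```python
import random

def choose_maneuver(p_child_dies_if_hit: float, p_driver_dies_if_swerve: float) -> str:
    """Randomized choice weighted so the deadlier option is picked less often.

    Hypothetical sketch only: no real autonomous-driving stack exposes
    fatality probabilities like these.
    """
    # Weight each option by the risk of the *other* option, so that the
    # choice with the higher fatality risk is selected proportionally less.
    weight_hit = p_driver_dies_if_swerve
    weight_swerve = p_child_dies_if_hit
    if random.random() < weight_hit / (weight_hit + weight_swerve):
        return "continue"  # strike the child
    return "swerve"        # sacrifice the driver

# With a struck child far likelier to die (0.9) than a belted driver
# hitting a wall (0.3), the car swerves three times out of four.
print(choose_maneuver(p_child_dies_if_hit=0.9, p_driver_dies_if_swerve=0.3))
```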
 

Mr Rager

Roads are designed for vehicles. The car's job is to deliver the passenger to their destination safely. Any obstacles impeding the car from doing its job need to be avoided or mitigated. Machines need to worry about accomplishing their main objective only.
 

Type Username Here

Mr Rager said:
Roads are designed for vehicles. The car's job is to deliver the passenger to their destination safely. Any obstacles impeding the car from doing its job need to be avoided or mitigated. Machines need to worry about accomplishing their main objective only.

Machines also have to be programmed by humans (for now at least). The question is about how it should be programmed.

Are you saying that if you were the programmer, you would program it to strike the child?
 

LordTaskForce

Cars aren't going to be programmed this specifically in our lifetime. The way these autonomous cars are being programmed is to scan the road and avoid all accidents (either by swerving or by slamming on the brakes); it's not scanning for people specifically. (If it sees an object, its job is to protect the people in the car.)

Also, the autonomous cars are limited to lower speeds when driving in automatic mode (speeds like 35 mph).

But to answer the question: if a human was driving and the choice was to hit a kid (not supposed to be in the street) or end my own life, the answer is simple.

Technology should stay out of the realm of deciding whose life is more valuable (it's subjective and there is no right answer).
 

Type Username Here

LordTaskForce said:
Cars aren't going to be programmed this specifically in our lifetime. The way these autonomous cars are being programmed is to scan the road and avoid all accidents (either by swerving or by slamming on the brakes); it's not scanning for people specifically. (If it sees an object, its job is to protect the people in the car.)

Right, the article purposely states this is a thought experiment. What happens when machines can distinguish?

LordTaskForce said:
But to answer the question: if a human was driving and the choice was to hit a kid (not supposed to be in the street) or end my own life, the answer is simple.

The answer is not simple. Plenty of people have died or been injured trying to swerve the car so as not to strike someone or something. I'd be willing to guess that if you had enough reaction time to understand it was a child, you would probably yank the wheel. Hell, some accidents happen because people try to swerve around a vehicle ahead of them that suddenly stops, only to end up hitting the car in the next lane. The answer is FAR from simple.

LordTaskForce said:
Technology should stay out of the realm of deciding whose life is more valuable (it's subjective and there is no right answer).

So you think there should not be any progress in this particular area? Not saying your answer is wrong, but if I were to come to you with hard data that said "LordTaskForce, data shows that human deaths by car accidents would drop by 90% with automated cars", would you still oppose it?
 

NkrumahWasRight Is Wrong

Type Username Here said:
Machines also have to be programmed by humans (for now at least). The question is about how it should be programmed.

Are you saying that if you were the programmer, you would program it to strike the child?

I would program the car to do what gives both parties the best chance of living. With that said, I have a hard time trying to grasp how the car would know it is a "child" rather than any other human. What if it's a midget? What if the kid has a terminal illness? Do those make a difference?

I'd err on the side of keeping the driver safe, because otherwise there could be too many anomalies that cause crashes. What if the car misread a fallen tree branch as a human? Plus, honestly, who's to say someone's life is more or less valuable than another's in that situation? If the adult is a young mother of 3, does she deserve to die more than a random kid?
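Read as a rule, "best chance of living for both parties" could mean: score each available maneuver by everyone's estimated survival probability and take the highest total, breaking ties in the driver's favour as suggested above. A toy sketch under those assumptions; the maneuver list and every number are invented for illustration:

```python
# Hypothetical maneuver table: (P(driver survives), P(pedestrian survives)).
# All numbers are invented; no real sensor stack produces estimates like these.
maneuvers = {
    "brake_hard": (0.95, 0.40),
    "swerve_into_wall": (0.30, 1.00),
    "continue": (1.00, 0.10),
}

def pick_maneuver(maneuvers: dict) -> str:
    # Maximize total expected survivors; tie-break on driver survival.
    return max(maneuvers, key=lambda m: (sum(maneuvers[m]), maneuvers[m][0]))

print(pick_maneuver(maneuvers))  # -> "brake_hard" (0.95 + 0.40 beats the others)
```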
 

Type Username Here

NkrumahWasRight Is Wrong said:
I would program the car to do what gives both parties the best chance of living. With that said, I have a hard time trying to grasp how the car would know it is a "child" rather than any other human. What if it's a midget? What if the kid has a terminal illness? Do those make a difference?

I'd err on the side of keeping the driver safe, because otherwise there could be too many anomalies that cause crashes. What if the car misread a fallen tree branch as a human? Plus, honestly, who's to say someone's life is more or less valuable than another's in that situation? If the adult is a young mother of 3, does she deserve to die more than a random kid?


Great points. Discussions and topics like this are great because they demonstrate the subjectivity of such decisions and the consensus needed to make any one answer law, or morally acceptable, for most people.
 

NkrumahWasRight Is Wrong

Type Username Here said:
Great points. Discussions and topics like this are great because they demonstrate the subjectivity of such decisions and the consensus needed to make any one answer law, or morally acceptable, for most people.

I hope that's never the case. Congress is likely too stupid/lazy to come up with something good, and AI will have gone too far at that point on top of it.

Good thread.
 

LordTaskForce

Type Username Here said:
Right, the article purposely states this is a thought experiment. What happens when machines can distinguish?

The answer is not simple. Plenty of people have died or been injured trying to swerve the car so as not to strike someone or something. I'd be willing to guess that if you had enough reaction time to understand it was a child, you would probably yank the wheel. Hell, some accidents happen because people try to swerve around a vehicle ahead of them that suddenly stops, only to end up hitting the car in the next lane. The answer is FAR from simple.

So you think there should not be any progress in this particular area? Not saying your answer is wrong, but if I were to come to you with hard data that said "LordTaskForce, data shows that human deaths by car accidents would drop by 90% with automated cars", would you still oppose it?

As a thought experiment: these cars should be programmed, first, to avoid all accidents, and second, to mitigate damage to the vehicle.
I have nothing against automated cars; it's a great idea and will likely reduce accidents.
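As a control policy, that ordering is strict: consider only accident-free plans if any exist, and fall back to damage mitigation otherwise. A toy sketch of that priority scheme; the Plan type and its fields are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    causes_accident: bool
    expected_damage: float  # hypothetical damage score; lower is better

def select_plan(plans: list) -> Plan:
    """Priority 1: avoid the accident entirely.
    Priority 2: if no accident-free plan exists, minimize damage."""
    safe = [p for p in plans if not p.causes_accident]
    candidates = safe if safe else plans
    return min(candidates, key=lambda p: p.expected_damage)

plans = [
    Plan("emergency_brake", causes_accident=False, expected_damage=0.0),
    Plan("swerve_left", causes_accident=True, expected_damage=0.6),
]
print(select_plan(plans).name)  # -> "emergency_brake"
```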
 

LordTaskForce

NkrumahWasRight Is Wrong said:
I hope that's never the case. Congress is likely too stupid/lazy to come up with something good, and AI will have gone too far at that point on top of it.

Good thread.

Human life will end before technology gets this advanced, so we don't even have to worry about it.
 

Mr Rager

Why are we so focused on the fact that it is a child? If it were an old person in the street, would that make it less tragic?

Anyway, to answer the question: I would program the car to keep the passengers safe at all costs. If swerving would mean the passengers get killed, then the car would take...other measures to ensure the safety of the passengers. It sounds callous, but the kid shouldn't be in the road :yeshrug: This is speaking objectively. Computers can only rely on logic.
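A "passengers at all costs" rule is lexicographic: filter maneuvers by occupant safety first, and only then consider anyone else. A toy sketch of that logic; the table and all numbers are invented for illustration:

```python
# Hypothetical table: maneuver -> (P(occupants survive), P(others unharmed)).
maneuvers = {
    "brake_hard": (0.95, 0.40),
    "swerve_into_wall": (0.30, 1.00),
    "continue": (0.95, 0.10),
}

# Keep only the maneuvers that are safest for the occupants...
best_for_occupants = max(p[0] for p in maneuvers.values())
candidates = {m: p for m, p in maneuvers.items() if p[0] == best_for_occupants}
# ...then, among those, minimize harm to everyone else.
choice = max(candidates, key=lambda m: candidates[m][1])
print(choice)  # -> "brake_hard": tied on occupant safety, gentler on others
```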
 