The Robots Are Coming: Pondering The A.I. Trolley Dilemma In A Driverless World


Earlier this year, in March 2018, a woman became the first pedestrian to be killed by a driverless car, when she was struck by an autonomous Uber test vehicle in Tempe, Arizona.

She was crossing the road with her bicycle when the car, travelling at around 40 mph, struck her without slowing down. Tragically, she died later in hospital as a result of her injuries.

The tragedy shone a light on the growing issue of how driverless cars should be managed, and whether they should even be allowed on the roads in their current guise.

The Trolley Problem In Action

Any first-year ethics and moral studies student, and plenty of people besides, will have heard of the trolley problem, sometimes called the trolley dilemma.

The moral thought experiment forces the subject to make a choice that will result in the death of human beings. The problem states that you are the driver of a tram or train, and you are hurtling out of control towards five people who are stuck on the tracks.

However, you have a chance to save them by switching to a branch track, although there is a single person stuck on that track.

So you are definitely going to kill someone; the question is, whom do you choose?

What if the single person is a withered old man, and the five people are two young mothers with their three children; does your answer change?

Or maybe the one person is somebody you know and like; does the answer change then?

What if you're not actually the driver, but are simply standing next to the lever that switches the train onto the branch track; how does that affect your decision?

Well, researchers at the Massachusetts Institute of Technology, better known as MIT, have just released data gathered from an online game built around the trolley problem.

The idea was to test people's attitudes towards the ethics a driverless car should possess.

At this point I must state that I believe the test to be at least slightly flawed, if not deeply so, in that the homeless person in my test always seemed to be in the vehicle, and the decision I had to make was whether to allow him to plough into a concrete barrier, or to swerve and kill someone else.

My thinking was that I wanted the car to behave as it would if there were a perfectly moral person inside. So if there was one person in the vehicle, and one or more people who would die as a result of the car swerving, I always chose the person in the vehicle to die.

However, if it was a parent and her children in the vehicle, then I chose for the people on the road to die.
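
Purely as an illustration, that personal rule can be written down as a few lines of code. Everything below is my own invention for this article; the names and structure have nothing to do with the actual MIT test:

```python
# A toy encoding of the rule described above -- my own invention,
# not part of the MIT Moral Machine.

from dataclasses import dataclass

@dataclass
class Group:
    size: int            # number of people in the group
    has_children: bool   # does the group include children?

def who_dies(occupants: Group, pedestrians: Group) -> str:
    """Occupants are sacrificed by default; the car only swerves
    into pedestrians when there are children in the vehicle."""
    return "pedestrians" if occupants.has_children else "occupants"

# One adult in the car vs. three pedestrians: the occupant dies.
print(who_dies(Group(1, False), Group(3, False)))  # -> occupants
# A parent and her children in the car: the pedestrians die.
print(who_dies(Group(3, True), Group(1, False)))   # -> pedestrians
```

Seeing the rule written out like that makes it look uncomfortably arbitrary, which is rather the point.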

The last reason I think this test was flawed is that it didn't allow for self-sacrifice. I understand that this option is not part of the classic trolley dilemma; however, we are talking about something that could have implications in the real world.

So as a further addition, I'd like to see more options added to the game, to find out what people would do if they themselves were in the car, rather than just a bunch of hypothetical people.

Synthetic Ethics

Up until now the trolley problem has been a hypothetical exercise used to gain insight into human morals. However, it is a problem that few people will ever face in their lives, so it acts more as a moral barometer than a real-life test you may one day have to take.

Of course, there have been times when human drivers have had to make such a decision. In those moments we trust that, faced with two or more terrible options, the human will make the 'right' one; that is to say, a decision that, on reflection, most of us would agree with.

If a mother mows down an elderly woman at a crossing because it meant saving her three young children, then on the whole, whilst viewing the incident as a tragedy, we understand that she was driven not only by her morals, but by her natural maternal instinct.

Even if that same mother swerves to avoid a cat in the road, and ends up killing a pedestrian, we understand that human error caused her to instinctively react to an unexpected object in her path. We all know that given enough time to think about it, she would have run over the cat instead of the person.

With artificial intelligence, however, we can't have machines making decisions based on what they feel is important. The MIT experiment is meant to help us lay the groundwork for exactly when it is acceptable for a machine to deliberately hit a person.

As uncomfortable as it is, until driverless technology becomes more reliable, we have to make these decisions.

In a sense we are 'humanising' our machines: if they are to join us in the realm of decision-making, then we want to know that they are making judgements based on what a human being would do.

Human judgements in these life-or-death situations are often made using emotion over logic. The problem with the MIT experiment is that the test is devoid of any emotional decision-making.

So the question remains: is it possible to program an ethical guideline into a machine?
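
To give a flavour of what 'programming an ethical guideline' might even look like, here is a deliberately naive sketch that folds ethical preferences into a cost function. The weights are entirely made up, and whether ethics can be reduced to numbers at all is precisely the open question:

```python
# A deliberately naive sketch of "ethics as a cost function".
# All weights are invented for illustration; no real system works
# this simply.

HARM_WEIGHT = {"occupant": 1.0, "pedestrian": 1.0}
CHILD_PENALTY = 0.5   # assumed extra cost for endangering a child

def expected_harm(manoeuvre: dict) -> float:
    """Score one possible manoeuvre by the harm it risks causing."""
    cost = 0.0
    for person in manoeuvre["people_at_risk"]:
        weight = HARM_WEIGHT[person["role"]]
        if person["is_child"]:
            weight += CHILD_PENALTY
        cost += weight * person["p_death"]
    return cost

def choose(manoeuvres: list) -> dict:
    """Pick the manoeuvre with the lowest expected harm."""
    return min(manoeuvres, key=expected_harm)

swerve = {"name": "swerve",
          "people_at_risk": [{"role": "pedestrian", "is_child": False, "p_death": 0.9}]}
brake = {"name": "brake",
         "people_at_risk": [{"role": "occupant", "is_child": False, "p_death": 0.3}]}
print(choose([swerve, brake])["name"])  # -> brake
```

Every 'ethical' judgement here has been flattened into a number, and that flattening is exactly what makes the question so uncomfortable.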

Human Error

It turns out that the woman killed by the driverless Uber may not have been the first victim of artificial intelligence after all.

The computer-controlled emergency braking system in the car had been disabled. The reason is that certain driverless cars have been rear-ended when they slowed for a perceived threat that wasn't there, which clearly creates its own safety problem on the road.

Instead, a human operator was placed in the car, and that person had sole control of the brakes.

The system inside the Uber vehicle at first struggled to identify Elaine Herzberg as she wheeled her bicycle across a four-lane road. Although it was dark, the car’s radar and LIDAR detected her six seconds before the crash.

At first the perception system got confused: it classified her as an unknown object, then as a vehicle and finally as a bicycle, whose path it could not predict. Just 1.3 seconds before impact, the self-driving system realised that emergency braking was needed.
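
To see why those reclassifications cost so much time, consider this simplified sketch; it is not Uber's actual code, just an illustration of the general principle. If a tracker throws away an object's motion history every time its class changes, it cannot predict a path, and therefore cannot call for the brakes, until the label has been stable for several frames:

```python
# A simplified illustration (not Uber's code) of why flip-flopping
# classifications delay braking: each reclassification here wipes the
# motion history, so path prediction must start over from scratch.

def update_track(track, detected_class, position):
    """Update a tracked object; reset its history on reclassification."""
    if detected_class != track.get("class"):
        track = {"class": detected_class, "history": []}
    track["history"].append(position)
    return track

def brake_needed(track, lane_x=0.0, min_points=3):
    """Crude check: with enough consistent observations, is the
    object moving laterally towards our lane?"""
    hist = track["history"]
    if len(hist) < min_points:
        return False          # too little data to predict a path
    moving_towards_us = hist[-1][0] < hist[0][0]
    return moving_towards_us and hist[-1][0] <= lane_x + 2.0

# Simulated detections: the class flips from 'unknown' to 'vehicle'
# to 'bicycle', wiping the history each time.
detections = [("unknown", (6.0, 30.0)), ("unknown", (5.0, 25.0)),
              ("vehicle", (4.0, 20.0)), ("bicycle", (3.0, 15.0)),
              ("bicycle", (2.0, 10.0)), ("bicycle", (1.0, 5.0))]

track = {}
for t, (cls, pos) in enumerate(detections):
    track = update_track(track, cls, pos)
    print(f"t={t}: class={cls}, brake={brake_needed(track)}")
```

In this toy run the brake flag only turns true on the final frame, even though the object was detected from the very start.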

Unfortunately, the human operator was not paying attention to the road at the time, so the car, powerless to stop or slow down significantly, carried on going.

Pedestrian survival rates increase dramatically if the car that strikes them is travelling at under 30 mph. Therefore, had the Uber's autonomous system been allowed to brake in that 1.3-second window of opportunity, it would have slowed to a speed at which Herzberg may well have survived.
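
A rough back-of-the-envelope check supports this. Assuming a typical passenger-car emergency deceleration of around 8 m/s², which is my assumption rather than a figure from the crash investigation:

```python
# Back-of-the-envelope check. The ~8 m/s^2 deceleration is a typical
# figure for hard braking in a passenger car -- an assumption, not a
# number from the crash investigation.

MPH_TO_MS = 0.44704

initial_speed = 40 * MPH_TO_MS    # ~17.9 m/s
deceleration = 8.0                # m/s^2, assumed full emergency braking
braking_time = 1.3                # seconds available before impact

final_speed = max(0.0, initial_speed - deceleration * braking_time)
print(f"Impact speed: {final_speed / MPH_TO_MS:.1f} mph")  # ~16.7 mph
```

Even under these crude assumptions, 1.3 seconds of hard braking brings the impact speed well below the 30 mph survival threshold mentioned above.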

Regardless, the fault for this tragedy will still be attributed to machine error.

Perhaps the saddest thing about this whole incident is that, if the humans who designed the system had put in a collision alarm, then the operator who wasn't paying attention to the road could at least have faced their own trolley dilemma, and swerved to avoid her.

Sources:

Uber taxi kills woman in first fatal accident between a pedestrian and a self-driving car - dezeen.com

Why Uber’s self-driving car killed a pedestrian - The Economist

MIT surveys two million people to set out ethical framework for driverless cars - dezeen.com

Moral Machine - MIT trolley dilemma test

Jaguar Land Rover's prototype driverless car makes eye contact with pedestrians - dezeen.com


More from the series

The Robots Are Coming: The Story So Far - Content Links #1

The Robots Are Coming: Synthetic Ascension And The Power Of Touch

The Robots Are Coming: Synthetic Ascension - Conceiving A Digital Mind

The Robots Are Coming: Symbiosis Or Slavery? A Place For Human Thinking

The Robots Are Coming: Real Laws For Synthetic Citizens?

The Robots Are Coming: Do You See What I See?

WHERE DO YOU STAND ON THE DEBATE? SHOULD WE ENTRUST OUR LIVES TO AUTONOMOUS SELF-DRIVING VEHICLES? OR SHOULD WE ALWAYS LEAVE HUMAN DECISIONS TO HUMANS? OR PERHAPS YOU BELIEVE THAT THE TECHNOLOGY SIMPLY ISN'T READY YET?

HAVE YOU TAKEN THE MIT TEST LINKED ABOVE? IF SO WHAT WERE YOUR RESULTS?

AS EVER, LET ME KNOW BELOW!

Title image: Annie Spratt on Unsplash

Cryptogee


Meet me at SteemFest 2018 in Kraków


I don't think it's ready yet, but the software will not learn without the ability to experiment with human lives. I took the MIT test and it made me feel horrible. Absolutes resulting in the deaths of 'real-imaginary-people' are painful, even hypothetical ones. Very nice article, these are indeed some serious questions that people ought to consider, mainly those who are making it their business to bring these technologies to the market.

Alas, I think you're right; we are stuck in a catch-22 situation whereby the tech isn't fully ready, yet we need to live-test it. Controlled conditions are no good precisely because they are controlled.

Perhaps they could take some kind of theory test, like the way human drivers do in the UK.

Absolutes resulting in the deaths of 'real-imaginary-people' are painful, even hypothetical ones.

You are clearly a very empathetic person; whilst I did pause and deliberate over some of them, others I mowed down without remorse!

Thanks for your comments, and your nomination for the @informationwar vote.

Much appreciated!

Cg


I think the answer lies in how many people computer-operated vehicles will kill in a year. We must compare the overall number of deaths when humans are driving against the number when a computer is driving; with human drivers that's currently somewhere under 30k deaths a year. If the figure for computer-operated vehicles is significantly lower, the computer should be allowed to drive. The problem is that FUD will get in the way of quickly assessing what is safer.

You're right, FUD will get in the way, as will the fact that we will demand much higher standards from machines than we do from human drivers.

Cg

Sad story. I am about ready to always let the machines decide. Too often these issues (often seen in aviation) come down to some degree of human error. Maybe I am a traitor against my own kind? Hmmm.
