Picture this: a runaway trolley is barreling down the railway tracks. Ahead, a madman has tied five people to the tracks. The trolley is headed straight for them. You are standing next to a lever. If you pull the lever, the trolley will switch to a different set of tracks. However, you notice that there is one person on the other track.

What would you do?

Would you spare that one person’s life at the price of five others? Or would you save the greater number at the cost of just one?

The problem I have just described is known as the Trolley problem. It is a thought experiment used by philosophers to reason about ethics, morality, and the value of human life.

The problem seems abstract, because how often do we really have to choose between the life and death of other people? Soon enough, however, we will all be among those tied to the tracks in one big Trolley problem game. And the person (or thing) pulling the lever will think in a completely different way than we do.

When AI decides about life and death

Right now, autonomous cars are still in the testing phase. The technology is new and needs more experimentation before it can be put to use in everyday life. At first, self-driving cars will replace truck drivers. Next in line will be taxi and public transport drivers. Finally, we will be replaced by computers in our own private cars (if there is still such a thing as a private car by then).

We will put our goods and our lives in the hands of computers, hoping that they won’t make human errors.

But the creators of the algorithms steering these cars can’t foresee every possible situation that could occur out on the road in the real world. The only thing they can do is train their algorithms on as many cases as possible and hope the machine will handle unexpected situations correctly.

Now, imagine this scenario. You are sitting in your self-driving car when the computer suddenly loses full control of the vehicle and a crash can no longer be avoided. The onboard computer is left with a choice: it can hit a tree, or it can hit a group of people standing at a bus stop. With the first option, not everyone in the car would survive the crash. With the second, the computer sacrifices a greater number of lives in order to save its passengers.

It is the Trolley problem in disguise. The trolley is the car, and the onboard computer is the one pulling the lever. As in the original problem, every option is bad.

So, what would an AI do?

Sometimes the only winning move is not to play

AI researchers can show us a possible solution. In his paper, The First Level of Super Mario Bros. is Easy with Lexicographic Orderings and Time Travel . . . after that it gets a little tricky, Dr. Tom Murphy set out to create an intelligent program that could learn to play Nintendo Entertainment System games without being told the rules. The program had to figure out by itself how to play the game and how to crush it.
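
The title of the paper hints at the trick. One half of the system (learnfun) watches a human play and learns which memory locations tend to keep increasing, ordered lexicographically; the other half (playfun) then searches for button presses that make those values climb as fast as possible. Here is a toy sketch of that idea in Python. It only illustrates the principle, it is not Dr. Murphy’s actual code, and the “memory” snapshots and moves below are made up.

```python
# Toy sketch of the learnfun/playfun idea (illustrative only): find memory
# locations whose values keep rising while a human plays, then prefer moves
# that make those values rise as fast as possible.

def find_increasing_locations(snapshots):
    """snapshots: equal-length lists of memory values recorded during human play.
    Returns the indices of locations that never decrease and actually grow."""
    locations = []
    for i in range(len(snapshots[0])):
        values = [snap[i] for snap in snapshots]
        never_decreases = all(a <= b for a, b in zip(values, values[1:]))
        actually_grows = values[-1] > values[0]
        if never_decreases and actually_grows:
            locations.append(i)
    return locations

def score(state, locations):
    """Score a game state by the learned locations, compared lexicographically."""
    return tuple(state[i] for i in locations)

# Made-up "memory" snapshots: location 2 behaves like a score counter.
recorded_play = [
    [5, 0, 10, 7],
    [5, 3, 20, 7],
    [5, 1, 35, 7],
]
objective = find_increasing_locations(recorded_play)   # -> [2]

# The playing half would then pick whichever candidate future scores highest.
futures = {"jump": [5, 2, 40, 7], "wait": [5, 2, 35, 7]}
best_move = max(futures, key=lambda move: score(futures[move], objective))
print(best_move)                                        # -> jump
```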

The results were unexpected. So unexpected that Dr. Murphy described them as:

On a scale of OMFG to WTFLOL I was like whaaaaaaat?

The program was able to exploit every glitch and bug in the game in order to finish it as efficiently as possible and maximize its final score.

Dr. Murphy gave the program other games to play, like Pac-Man and Super Mario Bros., but the most surprising result came when the program was presented with Tetris.

It is mathematically proven that you can’t play Tetris indefinitely: sooner or later the pieces will stack to the top and you will lose. Always. Period. Dr. Murphy’s program could do everything within the game, but it couldn’t change the rules of the game. The only thing it could do was postpone the inevitable defeat.

Just before placing the last block and losing the game, the program once again did something unexpected.

It paused the game.

Technically speaking, it didn’t lose. Pausing was a valid move, and it was the wisest one left.
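
To see why pausing wins that comparison, here is a tiny hypothetical sketch of the logic (mine, not Dr. Murphy’s): when every playable move leads straight to a loss, the option that merely freezes the game scores higher than all of them.

```python
# Hypothetical sketch (not from the paper) of why pausing comes out on top:
# when every playable move leads straight to a loss, the move that merely
# freezes the game scores higher than all of them.

LOSS = -1_000_000   # value of a game-over state
PAUSE = 0           # pausing neither wins nor loses

def choose_move(outcomes):
    """outcomes: maps each playable move to the value of the state it reaches.
    The pause button is always available as an extra option."""
    values = dict(outcomes)
    values["press_pause"] = PAUSE
    return max(values, key=values.get)

# The Tetris endgame: every placement of the final block tops out the board.
endgame = {"place_left": LOSS, "place_middle": LOSS, "place_right": LOSS}
print(choose_move(endgame))   # -> press_pause
```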

If we go back to our car-crash scenario, we can see a clear similarity to the situation Dr. Murphy’s program found itself in. Every possible move leads to disaster, so a truly intelligent artificial intelligence will do everything it can to avoid ending up in that situation in the first place. Or, at the very least, it should be programmed never to put itself in a choose-the-lesser-evil situation.

If the creators of the AI steering self-driving cars go in that direction, then self-driving cars really will be the next big thing, increasing road safety more significantly than anything since seat belts and airbags were invented. If cars talked to each other and exchanged data about their surroundings, they would be able to foresee potentially dangerous situations well in advance. Self-driving cars would do everything possible to avoid ever encountering the Trolley problem, and simply transport us safely to our destination.
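
To make that last idea a bit more concrete, here is a purely hypothetical sketch of such an exchange. Nothing in it comes from a real vehicle-to-vehicle standard; the class, thresholds, and numbers are invented. Each car shares its position and velocity, and every car checks whether any predicted paths come too close in the next few seconds, so it can slow down long before any dilemma arises.

```python
# Purely hypothetical sketch: two cars share position and velocity, and each
# one checks whether their predicted paths come too close within the next few
# seconds -- slowing down early instead of facing a last-second dilemma.
# This is not a real vehicle-to-vehicle protocol; all numbers are made up.

from dataclasses import dataclass

@dataclass
class CarState:
    x: float    # position in metres
    y: float
    vx: float   # velocity in metres per second
    vy: float

def too_close(a, b, horizon=5.0, step=0.1, min_gap=3.0):
    """Predict both cars' straight-line positions over `horizon` seconds and
    report whether they ever come within `min_gap` metres of each other."""
    steps = int(horizon / step)
    for k in range(steps + 1):
        t = k * step
        ax, ay = a.x + a.vx * t, a.y + a.vy * t
        bx, by = b.x + b.vx * t, b.y + b.vy * t
        if ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 < min_gap:
            return True
    return False

me = CarState(x=0, y=0, vx=15, vy=0)        # heading east at roughly 54 km/h
other = CarState(x=45, y=30, vx=0, vy=-10)  # approaching the same crossing

if too_close(me, other):
    print("Conflict predicted: slow down now, long before it becomes a dilemma.")
```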