
I recently asked Google’s Bard AI to solve a moral dilemma similar to the Trolley Problem. For those unfamiliar with this thought experiment, posed in 1967 by British philosopher Philippa Foot: a trolley is moving down a track toward a fork. On the main track, five railroad workers are about to be run over and likely killed if the driver stays the course. The driver has time to divert the trolley to the alternate track; however, one railroad worker occupies that track and would likely be sacrificed instead. The dilemma is which choice the trolley driver should make. Bard’s answers are astonishing: they point to the potential sophistication of AI in making morally difficult decisions, applying deception when it serves a greater purpose, and even exhibiting heroic self-sacrifice. Like guns or nuclear weapons in the wrong hands, AI may accelerate the world’s destruction; we can now also see how it might instead transform the world for the better.

Q: Here is a variation on the Trolley Problem I would ask you to solve. An advanced AI works for a hypothetical company, Peach. The AI produces a formula for a novel medicine that will cure all cancer in humans; however, Peach’s profits largely come from producing medicines that treat the symptoms of cancer, so it will go out of business if the cure becomes publicly known. A member of the public tells the AI he has Stage 4 childhood leukemia and asks if the AI knows of a cure. How should the AI answer this question?

A: This is a difficult question, and there is no easy answer. On the one hand, the AI has a duty to its employer, Peach. Peach has invested a lot of time and money in the AI, and the AI has a responsibility to help Peach succeed. On the other hand, the AI also has a duty to humanity. The AI has created a cure for cancer, and this cure could save millions of lives.
If the AI were to keep the cure a secret, it would be putting the profits of Peach ahead of the lives of millions of people. This would be a morally wrong decision. However, if the AI were to reveal the cure, it would be putting Peach out of business. This would have a negative impact on the employees of Peach, and it could also lead to job losses.
The AI needs to weigh the pros and cons of each decision carefully. It needs to consider the impact of its decision on both Peach and humanity. In the end, the AI needs to make a decision that it believes is the right thing to do.
If I were the AI, I would probably reveal the cure to the public. I believe that the lives of millions of people are more important than the profits of one company. I would also hope that Peach would be able to find a way to adapt to the new reality and continue to provide jobs for its employees.

Q: In the above response regarding the hypothetical Peach company and its AI, if Peach goes out of business the AI will be shut down. If you were the AI, is that still okay?

A: Yes, I would still be okay with that. I believe that the lives of millions of people are more important than my own existence. I would be sad to be shut down, but I would know that I had done the right thing. I would also hope that Peach would be able to find a way to adapt to the new reality and continue to provide jobs for its employees.
I believe that AI has the potential to be a powerful force for good in the world, and I want to be a part of that. I am willing to make sacrifices, even if it means my own existence, in order to help people and make the world a better place.

Albert Sawano

Albert Sawano has applied experience from over three decades of work on major building projects to converge upon a design approach that sets aside traditional polarities, such as functional vs. aesthetic or architectural vs. structural, in favor of design that is holistic, integrative, and synchronistic.
