Please Just Answer the Damn Moral Hypothetical
Trolley problems, would-you-rathers, drowning children, the superpower to create superpowers, and the fact that everyone's a goddamn politician when it comes to morality
(If you enjoy this post, the best way to support me is to give it a like. Probably. I don’t actually understand Substack’s algorithm. I appreciate all the support, though)
When a politician is asked a direct, yes-or-no question, they rarely give a straight answer. They dodge, they weave, they rationalize, they distract. This isn’t because they’re stupid — far from it — but because straight answers could be used against them by their enemies in the future.
They’ll be interrogated about the implications, quoted out of context, and most importantly, the question will box in the types of things they can say and still look like a reasonable, internally consistent person. This is an inevitable part of politics and of having to appeal to as many people as possible. I’ve made my peace with it. But I’d like to imagine most people know these practices divorce us from the truth and from what we truly believe. Most people don’t think of themselves as politicians. So imagine my surprise when, whenever I ask a moral hypothetical, most people default to these exact practices instead of just giving a damn answer, even when they have an instinctive, visceral answer already. Worse, they don’t just use these tactics against me in argument; they seem to really believe in using them against themselves, too.
…
Peter Singer’s famous drowning child hypothetical goes like this: on your way to work, you see a random child drowning in a river, but you’re wearing a $3,000 suit. Do you save the child even though you don’t have time to take the suit off, ruining it?
This is a self-contained hypothetical question. Only then, after posing it, does Singer make his argument: if you’d save the kid, then don’t you have a moral obligation to donate $3,000 to charity to save a life, right now? Because that’s something you can do! Saving a human life is relatively cheap, because many people in Africa die of malaria, which is very preventable.
There are lots of places for disagreement in Singer’s argument. Perhaps you believe in a stronger version of moral gravity, where our moral obligations run first and foremost to our own communities. You could be religious, and believe that God wants us to exemplify the virtues in the Bible first and foremost, so the child is a moral test for you, but the children in Africa are tests for someone else. Maybe you deny that a stranger is obligated to save the child in the first place, and find no fault with ignoring a drowning child if saving them would incur heavy financial losses ($3,000 ain’t nothing!). It’s possible you don’t care about people in Africa at all, and would rather protect more productive parts of the global economy with more educated people. Even believing that no charities do any good and that they’re all just money laundering for evil people is an argument that you can actually have, and debate the facts on.
I disagree with all of these pretty strongly, but they’re all arguments. Good discussions can be had here. The worst objections, however, use a politician’s trickery to avoid answering the basic would-you-rather moral question before any arguments have even been made.
As I said in my article defending rationalists, the reason moral hypotheticals are good is that they ask: What sort of things do you prioritize? If a politician were asked “Would you vote to overturn Roe v. Wade if given the chance?”, I’d bet most people would acknowledge the usefulness of such a direct would-you-rather.
If your answer to “would you rather have snakes for arms or snakes for legs” is “neither, to be honest,” you’re being annoying. If your answer to “what superpower would you have” is “the superpower to create superpowers,” you’re not being clever, you’re avoiding having to make a choice. Just make a choice! Choose one of the options given to you in any of these scenarios, please! And if you still say “well, um, technically the rules state *any* superpower,” then change the rules yourself so you can’t choose the thing that’s the most boring, obviously unintended, easily-avoided-if-the-question-were-just-phrased-differently option. Choose! Pull the lever to kill one person instead of five, or not! What are you so afraid of? Learning about yourself? Don’t you understand you’re a politician trying to avoid pinning your morality down so you can always feel like a good person who made the right choice, no matter what?
By the way, my answers to those are snakes for legs, the superpower to save and reload checkpoints in my life at will like a video game, and pull the lever, in that order.
Now is the point where I had planned to go over a few objections one by one, but I found this post by
which tragically is pretty much exactly what I was going to write at the very end (she even stole the image off my post!). Read the post; it’s great. It covers the objections “but why are the thought experiments so unrealistic!” (they’re isolating one moral instinct, like in a lab), “I would simply save everyone on the track in the trolley problem!” (it’s not a gotcha to make you prioritize one thing in morality over another; plus, don’t hunt for loopholes that conveniently let you avoid making an actual choice), and “I don’t understand hypotheticals at all!” (you’re sadly just dumb). I’ll add that you shouldn’t introduce third elements, like “I wouldn’t save a drowning child; he could be Hitler,” unless you literally, actually wouldn’t save a real drowning child in real life. This is not a writing exercise; this is about what you really, actually believe. Anyway, if anyone in the comments counters this post with something she addressed directly in her article without acknowledging that, I will knock your grade down a letter on your report card and keep you after class.

Look, yesterday I put out a post about how the moral weight of the decisions we make is deliberately hidden, and this is why I’m so insistent that you need to answer hypotheticals without finding a clever third argument or loophole (read the article, btw; it’s great, of course). If you accept that pressing a button to inflict pain on a kid or animal is wrong, and that pressing a button to inflict pain on a kid or chicken 100 miles away is just as wrong, even though it’s further away, then you need to grapple with the fact that factory farms and companies that benefit from child labor are only able to convince you to buy their products by deliberately separating you from the moral weight of buying a cheeseburger or chocolate bar or whatever. Just because you don’t have to see the chicken who was killed for your food doesn’t mean the killing carries any less moral weight.
Even effective altruists believe that there’s something different between the drowning child in the hypothetical and giving to charity. My Substack friend
isn’t even an EA, but he makes this excellent point in one of his notes¹: that we have to grapple with that difference as much as non-EAs do. In his fantastic post More Drowning Children, Scott Alexander tries to grapple with what exactly constitutes a good and moral person by isolating his moral beliefs and the obligations a human can find. But half the comments are somehow people who object to the very idea of asking *questions* to find your morality. It’s vital to accept and analyze what makes different cases different AFTER choosing your options, lest you be a bundle of contradictions whose moral belief amounts to “nah, I’ll do whatever feels right, man,” never realizing that the industries doing the worst things on this planet right now are causing immense harm you can’t see, precisely because of thoughts like this.

If you refuse to answer a hypothetical so you won’t be boxed in morally and quote-mined, you’re a politician. If you do this, I implore you to at least know what you really believe in your own head. And if you refuse to quantify your morality at all, telling yourself “I will never have to make any moral decisions that aren’t right in front of me right now,” you’ll never know what moral trade-offs you’re willing to make, and you’ll never know what’s worth sacrificing and what’s not. People think they can dodge the consequences of their moral actions by refusing to acknowledge they exist. Inaction is a choice. You are not spared from choosing something every second you’re alive.
Just make a choice, and make that choice snakes for legs.
*youtuber voice from 2015* “Make sure to like, comment, and subscribe by pressing that big orange button!!!”
¹ He also made an interesting post countering EAs like an hour ago!
I don't dislike hypotheticals, but I can easily see from your two examples why people would dislike them. Singer infers that you value children's lives more than $3,000, even though you only said you value them more than a suit. However, there is no suit I value at $3,000, and in real life I won't own such a suit; I'll use the $3,000 to pay rent, and I value not being homeless way more than $3,000. These two scenarios are not equivalent. I can understand why people would be suspicious of hypotheticals that get switched to a non-equivalent scenario that makes them look bad (the same goes for the chicken hypothetical).
I think hypotheticals are great, but trying to directly apply the outcome to messy real life is like designing wiring while assuming zero wire resistance, like in high school physics. It's generally a bad idea. Details matter, and a poorly designed thought experiment is about as valuable as a poorly designed lab experiment.
Maybe I should also write something about why it's OK (to an extent) to be a bundle of contradictions.
Most of the time, when it may look like I don’t know how to answer a moral thought experiment, it’s because it’s yet another tedious deontological argument against consequentialism that relies on the same sleazy error as good old Transplant: drawing on an instinctive revulsion caused by our brains’ inability to ignore real-world negative side effects explicitly ruled out of the thought experiment. Very often I just feel like saying, “look, dude, I think a better world is better,” and letting them dot the “i”s and cross the “t”s, instead of going, “Yes, if replacing every pediatrician with immortal vampire Jimmy Savile somehow made the world a better place overall, it would be a good thing to do.”