11 Comments
Alex

> I think anybody under these constraints would fall into intuitionism and ad-hoc justification for the personal moral beliefs they find compelling, such as donating to animals[.]

Great essay, but it also reminded me why I don't like pure philosophy. Kant's ideas seem so hopelessly theoretical, so "how many angels can fit on the tip of a pin?", that there is no chance of applying them to the real world, and what fun is there in discussing theories that can't be applied to the real world?

Kyle Star

To be honest, I like Kant a lot because universality is a super duper cool idea that really gives the vibe of being "more real" than pure intuition. But I think it falls apart on reflection, and most importantly, the actions it endorses are not moral! If the actions this super logical morality endorses can't be applied to the real world in a way that's, like, good (like you say), then what's the point of a moral theory at all?

Ali Afroz

I actually think the first and third formulations of the categorical imperative are obviously identical, and depending on how you interpret it, the second one is as well. The first formulation tells you to act only on a maxim that you can will should become a universal law, while the third tells you to act as if you were a legislator making universal laws when choosing your own maxim. Those seem pretty identical to me, and the second formulation strikes me as basically saying that you have to respect rational agency, or that you have responsibilities and duties towards rational agents that you don't have towards rocks, which is obvious. The second formulation's main problem is that it's worded in a very imprecise manner: all it says is that you can't merely treat people as means, you have to treat them as ends in themselves, with zero specification of what that is supposed to mean.

I do agree that the argument falls apart if you try replacing "rational" with basically any other word, since "rational" in practice seems to mean capable of being motivated by reasoning in the morally correct way, which is obviously circular if you're trying to derive morality. That said, I actually think a lot of the stupid rules are due to poor reasoning on the part of philosophers, and that, for example, you can easily universalise a rule permitting you to lie when it has good consequences. Basically, a maxim is a rule of the form: I will undertake action A in situation B motivated by reason C. Just build into the rule that you will not lie in a situation where everybody else is following the exact same rule as you, because it would become common knowledge that you lie in that situation, in which case the motivation for lying would be defeated by the fact that nobody would be fooled. A lot of the stupid rules, like the rule against suicide, seem to come from trying to reach a conclusion from axioms that are reasonable but don't imply that conclusion. For example, if you're an agent who wants to be dead, it seems pretty beneficial to your goals for others to kill you.

As for imperfect duties, the argument there is that in theory a rational agent can just will a universal rule permitting them not to help those in need, but in reality, given actual human preferences, no human can actually wish to be left alone when they are in need. I think this is another case of motivated reasoning, because you could equally argue that, given actual human psychology, humans can't actually live according to the categorical imperative, as indeed seems to be acknowledged in some of Kant's writing: he admits that living perfectly in accordance with the moral law isn't humanly possible, although he was working from a more demanding definition that seems to require that you act only out of duty. Still, it's just true that no human can actually be so perfectly unselfish as to not will others to help them any more than they would wish to help others in the exact same situation.

Still, I do think the requirement that you universalise your moral rules is obvious for anyone who wants other people to follow their morality, not to mention that otherwise you have to ignore the obvious fact that if you think a particular line of moral reasoning is correct, then logically everybody who is moral would follow the exact same reasoning, which means that whatever rules they come up with would be identical, and therefore picking a self-defeating rule is stupid. Of course, this presupposes that there is something like correct moral reasoning which you can be right about, and also that reasoning correctly about morality is the important thing, rather than focusing on your actions with little regard for the reasoning behind them. Still, to make it seem more intuitive: keep in mind that anyone following the exact same reasoning process or decision procedure as you will inevitably come up with the same decision rules, so it's stupid to pick rules that you would not want people following the identical decision procedure to also follow, since they will in fact inevitably end up following them. Also, if you don't universalise your morality before picking your rules and can't universalise it, that means you can't want others to follow your morality, or at least not everybody else.

Kyle Star

Good comment. I think there is a difference between 1 and 3, though: 1 is more of a logical test, and 3 is more about imagining a hypothetical society where you get to decide all the moral rules. Like, 1 asks "what would happen if everyone did this?" and 3 asks what society you want to build. 3 smuggles in the fact that you have moral concern for each other as equals by saying you're a hypothetical moral legislator and everyone has already agreed to follow your rules. Also, it takes the "people should be ends, not means" idea from 2: hence the name, "Kingdom of Ends".

Check the other comment thread I had with a Kantian; Korsgaard has some thoughts on the whole "lying" thing that I still disagree with but didn't address very directly. But the other commenter has to bite the bullet that you can't sacrifice 1 person to save 1 billion, because that idea is pretty incompatible with the premises; if you say numbers matter, you're importing consequentialism.

Ali Afroz

My understanding of the formula of universal law is that a maxim does not pass it if it suffers from either a contradiction in conception or a contradiction in the will. A contradiction in the will is when a hypothetical rational agent could will that the maxim become a universal law, but an actual human cannot. For example, you can't decide not to help poor people in need, because to will that this become a universal law you have to wish that other people not help you if you yourself ever become poor and are in need, and no actual human can wish this, or at least no normal human can. So I think the first and third formulas are pretty much the same thing written differently: the first asks you to act only on a rule which you can simultaneously wish should become a universal law, while the third asks you to act only on the rules you wish to make universal laws.

Which comment thread are you referring to? Is it the one about variations of the trolley problem?

I personally think that being a consequentialist is actually completely compatible with universalisability, and the only reason people think otherwise is that the person who first discovered the categorical imperative happened to hate that moral system and be given to motivated reasoning. Also, to be fair, in a lot of moral dilemmas the inability of actual humans to act in accordance with the categorical imperative can lead to misreading what it requires. If you are the fat man in the trolley problem, you probably cannot will that someone else throw you in front of the trolley. But in that case, you have to simultaneously wish that if you were one of the five, nobody would intervene to throw someone in front of the trolley. Most actual humans cannot wish these two things at the same time, so they end up inevitably violating the categorical imperative, but that doesn't mean that killing the one to save the five is incompatible with it.

Also, about the second formula: as I mentioned, it's super imprecise, but I think the main thing it conveys is that moral obligations are only to rational agents. Nothing else has moral worth. The moral law can be exhausted by fulfilling your duties to rational agents, including yourself, but without using the other formulas you can't exactly understand what these duties are, given the lack of precision.

Lance S. Bush

I share many of your reservations about Kant's moral philosophy and likewise reject Florence's position (though probably for at least some different reasons than you). However, I don't think rejecting Kant's position does much to establish that intuitions are mandatory. I reject both Kant's views and intuitionism.

In fact, I'm not convinced there even is any distinct psychological phenomenon of "intuitions." I think the notion of philosophical intuition is something philosophers made up, that it's a bit of parochial pseudopsychology, and that morality can get by just fine without intuitions. As far as I can tell, I don't have or use intuitions at all. Why even think intuitions are a thing?

Neeraj Krishnan

A few variations on the trolley problem (not sure how much they have been studied in the literature, but a quick Google search does not bring up anything).

0) The standard problem. Push one to save 5. All six are of identical worth to you.

1) The candidate for being pushed is your child.

2) You push the one, but that fails to stop the train. You find a second and then a third, and pushing them fails too. Do you push the fourth person you find? The fifth? At this point you net out.

3) The candidate for being pushed is someone else's child, and their parent is right behind you, realizes you are pushing their child, and intervenes to push you instead. And then your parent shows up and pushes them down, and so on, until 4 people are killed. Is this still all OK?

4) You don't get to push someone else down, but you have to sacrifice yourself. Are you required to jump?

5) The "trolley" is remote: it is really certain death by HIV unless a remedy is administered, and the cost of the remedy is you pushing the person (or yourself) under a trolley. Should you still push them/yourself to save the 5 patients?

Kyle Star

Admittedly not super related to Kant, but you got me, I love moral hypotheticals.

1) It's not morally optimal to prioritize one family member over something like a billion people, but if there were only 5 people on the other side of the track, I would just be selfish and save the person I loved.

2) If there's no guarantee that pushing someone onto the track will stop the train, there's no reason to do it. The fat man variation only works if you're very, very, very sure it will stop the trolley, and even then it has to be abstracted away from consequences like the law putting you in jail if you do it, preventing you from helping in the future.

3) This is obviously not utility-optimal; no utilitarian would endorse multiple people dying for no reason when only 1 is required to save the 5. If you rephrase this one in a way that makes it clearer what the options are, I'll answer.

4) All utilitarianism says is that it’s better if you prioritize helping more people, so you would be selfish for not sacrificing yourself. I think this should be pretty intuitive; a moral theory should say it’s better to sacrifice yourself to save, like, a million lives than to not.

5) Yes, it's morally better to save more people than fewer people, as long as you're not sacrificing trust in institutions or law in a way that would cause more harm than good.

Neeraj Krishnan

Thank you! For 2 and 3, the hypotheticals were trying to raise the costs for the pusher: i.e., throwing one person down decelerates the trolley a bit, the second still more, and so on, until the fourth or fifth causes the trolley to come to a full stop. In the case where it takes five pushed people to stop it, you are back where you started, just with a different set of five dead. This is presumably neutral, yes? The utilitarian will not distinguish which set of 5 is killed; the action did no net harm or net good.

Kyle Star

In the real world, you don't want to violate rights, because that makes life worse for everyone: people shouldn't have to fear that doctors may steal their organs; that's very, very bad. And of course there's uncertainty: there's no way to know EXACTLY 5 people will stop a trolley, and if you're trying to save people on a track, murder sure seems like a suboptimal result. Also, there's the guilt of the people involved to think about.

But if the people involved all had their memories wiped (not necessarily mandatory; maybe the people are blindfolded or don't know the sacrifice was made for them or something?), and it had no impact on law, and the deaths were all equally painful, and we knew with 100% certainty that pushing 5 people would stop the trolley before it reached the 5 people tied to the track, then yes, a utilitarian calls it even. I even think this should be intuitive with all these caveats; either way, 5 people die and 5 people's lives are cut off early.

This is all part of my general belief that your distance from an action doesn't make it better or worse. Clicking a button so that 2 people far away die horribly should be worse than personally making 1 person die horribly in the same way, with a knife or something. You don't get to feel more moral when taking an action that causes more pain and suffering.

Neeraj Krishnan

Thanks! And the calculation accounts for the downside of people living in fear of standing near the edge of overpasses over trolley tracks, from having a utilitarian push them over? I'm sure the literature goes into detail, but I got the sense that the "second order" calculations were only being made for things like involuntary organ transplants. Or is it just that the way the weights are applied to "not having to live in fear of being pushed from a bridge" vs. "N lives being saved as a result of the push" comes out in favor of pushing for a certain N? And is this whole line of reasoning meant to motivate the calculation in the first place, thereby calling on everyone to be utilitarian?
