Discussion about this post

Alex

> I think anybody under these constraints would fall into intuitionism and ad-hoc justification for the personal moral beliefs they find compelling, such as donating to animals[.]

Great essay, but it also reminded me why I don't like pure philosophy. Kant's ideas seem so hopelessly theoretical, so "how many angels can fit on the tip of a pin?", that there is no chance of applying them to the real world, and what fun is there in discussing theories that can't be applied to the real world?

Ali Afroz

I actually think the first and third formulations of the categorical imperative are obviously identical, and depending on how you interpret it, the second is as well. The first formulation tells you to act only on a maxim that you can will to become a universal law, while the third tells you to act as if you were a legislator making universal laws when choosing your own maxim. Those seem pretty identical to me, and the second formulation strikes me as basically saying that you have to respect rational agency, or that you have responsibilities and duties towards rational agents that you don't have towards rocks, which is obvious. The second formulation's main problem is that it's worded in a very imprecise manner, since all it says is that you can't merely treat people as means but must treat them as ends in themselves, with zero specification of what that is supposed to mean.

I do agree that the argument falls apart if you try replacing "rational" with basically any other word, since "rational" in practice seems to mean capable of being motivated by reasoning in the morally correct way, which is obviously circular if you're trying to derive morality. That said, I actually think a lot of the stupid rules are due to poor reasoning on the part of philosophers, and that, for example, you can easily universalise a rule permitting you to lie when it has good consequences. Basically, a maxim is a rule of the form: I will undertake action A in situation B, motivated by reason C. Just build into the rule that you will not lie in situations where everybody else is following the exact same rule as you, because then it would become common knowledge that you lie in that situation, in which case the motivation for lying would be defeated by the fact that nobody would be fooled. A lot of the stupid rules, like the rule against suicide, seem to come from trying to reach a conclusion from axioms that are reasonable but don't imply that conclusion. For example, if you're an agent who wants to be dead, it seems pretty beneficial to your goals for others to kill you.
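
One rough way to formalise the universalisability test sketched above; the triple notation and the predicate names here are my own illustration, not Kant's or the essay's:

```latex
% A maxim as a triple (notation assumed for illustration):
%   M = (A, B, C): "I will do action A in situation B for reason C."
%
% Universalising M: every agent x who is in situation B does A.
\[
U(M) :\equiv \forall x \,\bigl(B(x) \rightarrow A(x)\bigr)
\]
% M fails the universalisability ("contradiction in conception") test
% when universal compliance undermines the very reason C for acting:
\[
U(M) \models \neg C
\]
% Example: the lying maxim. If everyone lies in situation B, lying in B
% becomes common knowledge, nobody is fooled, and the reason for lying
% (being believed) no longer obtains, so M fails the test.
% The amended maxim above restricts B to exclude the case where
% everyone follows the same rule, so the contradiction never arises.
```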

As for imperfect duties, the argument there is that, in theory, a rational agent can will the universalisation of a rule that permits them not to help those in need, but in reality, given actual human preferences, no human can actually wish to be left alone when they are in need. I think this is another case of motivated reasoning, because you could equally argue that, given actual human psychology, humans can't actually live according to the categorical imperative, as indeed seems to be acknowledged in some of Kant's writing, where he admits that living perfectly in accordance with the moral law isn't humanly possible, although he was working from a more demanding definition that seems to require that you act only out of duty. Still, it's just true that no human can actually be perfectly unselfish in the sense of not willing others to help them more than they would wish themselves to help others in the exact same situation.

Still, I do think the requirement that you universalise your moral rules is obvious for anyone who wants other people to follow their morality. Otherwise you have to ignore the obvious fact that if you think a particular line of moral reasoning is correct, then logically everybody who is moral would follow the exact same reasoning, which means that whatever rules they come up with would be identical, and therefore picking a self-defeating rule is stupid. Of course, this presupposes that there is something like correct moral reasoning which you can be right about, and also that reasoning correctly about morality is the important thing, rather than focusing on your actions with little regard for the reasoning behind them. Still, to make it more intuitive: keep in mind that anyone following the exact same reasoning process or decision procedure as you will inevitably come up with the same decision rules, so it's stupid to pick rules that you would not want people following an identical decision procedure to also follow, since they will in fact inevitably end up following them. Also, if you don't universalise your morality before picking your rules, and can't universalise it, that means you can't want others to follow your morality, or at least not everybody else.
