‘Intuitionism’ is Quite Mandatory in Morality
Response to a Constructive Kantian's great post "Against Intuitionism"
There’s a famous, mocking idea of Kant as “the philosopher who wouldn’t lie to a murderer who asks where his friend is hiding.” He really did believe this, which was a true joy for me to find1. Later philosophers inheriting his ideas predictably found this to be “not great” and introduced many new takes on his work that recommend more agreeable actions. I think the majority of these alterations are a mistake; they completely ruin what makes Kant’s ideas so appealing. I’ll explain why Kant has such a fun moral theory on his hands, why the patchwork these philosophers apply is doomed (and I’ll counter one patchwork specifically), and why Kant’s ideas are still very, very wrong when it comes to the true moral fabric of the universe.
The Way of Kant
Florence’s piece Against Intuitionism is the best explanation and defense of the philosophical position of this “Kantian Constructivism” that I’ve ever read. Unfortunately, I disagree with most of the points, and feel they claim too much based off too little. I highly recommend you read it right now, but a great starting place is to summarize their arguments in the most plain-English way we can manage:

“Intuitionism,” where philosophers juggle thought experiments against principles2 with intuition until it looks right, can’t be the ultimate source of moral judgement. If intuitions are biased, we’ll end up with a false theory when we incorporate those biases into our theory.
Noticing that people could disagree on morality while being rational, she redefines the term “moral obligation” to be a decision that every rational agent could discover purely from reasoning, no intuition required. You can have moral preferences, sure, but moral obligations have gotta bind everyone.
It then follows that the moral “ought” doesn’t imply “can physically do,” but “can be rationally convinced.” So if Bob has evaluated all of the arguments in the trolley problem rationally, and still doesn’t agree to pull the lever because his intuitions are different, he cannot be morally obligated to pull — he can’t be rationally convinced!
I am going to attempt to start Huge Internet Beef. First, let me say I respect Florence: they’re a fantastic and succinct philosopher, and they completely deserve the massive new influx of subscribers they have. But then I’m going to attempt to dismantle this argument piece by piece until I can hopefully show that their take on Kant is rife with “intuitionism” in at least four places, that the purity of Kant’s original argument is in question, and that it leads to very odd moral priorities.
Wait, “every rational agent could discover”? Like what?
So the first point that may jump out at you is this phrase “a decision that every rational agent could discover purely from reasoning.” What do Kant, Florence, and other proponents like Korsgaard think can be derived from literally, as Florence put it, “zero premises”? Well, this is where that classic Kantian phrase I didn’t understand for a long time, “if a maxim is self-defeating,” comes in. The idea is this:
You’re a rational agent who wants to do some moral action.
But, by deciding to do stuff in the first place, you’re relying on your power to choose. That’s agency.
Alright, because you’re relying on the ability to do stuff in the first place, you’re saying it’s important for anyone who lands in that situation to do stuff. That’s universality.
So if you want to do something like kill someone, you’re saying it would be OK for you to be killed for moral aims. So this moral idea is self-defeating, a contradiction, whatever you want to call it, the point is the same. “Every rational agent” would find it dumb to endorse allowing people to stop them from doing what they want to do, importantly even before any actual moral values are laid out at all.
So this is saying that agency is important3. But there’s a little bit more of a sleight of hand here — first, I’m going to point out there’s at least one curious jump. We start by saying “you must respect your own ability to choose,” then jump in the next point to “ok, so everybody must respect everybody else.” This doesn’t strike me as some obvious thing that’s clearly derived from being a rational observer. In fact, if I’m pursuing a goal, even one that’s “intuition laden,” it seems like ignoring other people’s goals is pretty useful. I accuse the Kantian of baking this idea of “you gotta care about what other people’s decisions are too” into the very idea of a rational agent, which is pretty obviously smuggling in the premise.
This is a big deal. The entire Kantian project relies on a foundational, unproven, disagreeable, and arguably intuition-based premise: that I care about your goals.
I don’t see anything that’s not internally consistent about a perfectly rational agent who only cares about their goals and no one else’s. By insisting that universality is required for reason itself, a Kantian isn’t proving anything, just declaring that any agent who disagrees isn’t “truly rational.”
Second, let’s just ignore the previous complaint and say it’s fine. If we say we prize agency above all, wouldn’t it be best if we just took whatever actions preserve that agency for our fellow man?
has an amazing post he put out yesterday about how it’s super doable to maximize agency for man. In the trolley problem, it sure seems like there’s gonna be more agency in the world if the 5 people are saved instead of the 1 person. The ideal Kantian here judges the act of saving agency as self-defeating, because, it, uh, means that you’re trampling over the agency of someone? Though admittedly my claim here is pretty narrow, this points to my broader critique of deontology, my arch-nemesis philosophical belief: it definitionally values the actions we do over the people involved. Even though a deontologist can hope that the 5 in the trolley problem are saved, it prizes inaction, and being “morally clean,” over actually doing something to make the world better in a way even a deontologist agrees with. Even if a survey of every human on Earth said, yes, let’s save the 5 in the trolley problem, if it’s not binding on these hypothetical optimal rational folk who are only bound to what Kant says they’re bound to, it’s thrown out.

Now, there are actually a second and a third rule that Kant gives as to why you can’t do this4. Kant insists these are both just versions of the universality claim he makes above. I disagree, and think these are substantially harder to justify than universality. The second is that Kant says you cannot use another person merely as a means, which is presumably why this agency-maxing isn’t allowed, but “merely as a means” is a very poorly defined phrase. When you pay a doctor, or a lawyer, you’re literally using them as a means. Kant tries to dodge this by saying that he really means “acting on a maxim that presupposes their cooperation without their informed consent to that maxim.” Kant tries to exempt non-deceptive transactions too, and this is another absolutist rule that cannot be overridden by scale; that will be relevant later.
The third rule Kant’s system is built on, Florence doesn’t mention at all, and I’ll briefly point to it as the weakest by far. “Your rules must be compatible with a society where everybody followed the same rules you do.” This one is the most obviously broken when it comes to compelling “maximally rational” folk, because it specifically tells rational beings it doesn’t matter at all what world they actually live in; they need to be able to justify their rules in a world that will never exist, populated by super-duper rational legislators. This one has the coolest name, though. “The Kingdom of Ends.” That goes hard.
Third, ignoring my previous two complaints and accepting this whole deal, the set of actions that this actually endorses, whether you include all three of Kant’s big boy formulations or just the one that Florence focuses on, is super narrow…
Prioritizing agency means we don’t care about pain?
Alright, this is where Kant diverges from Florence and Korsgaard. Let’s start by knocking down Kant, and then I’ll tackle the other two, who have much weaker stances in my opinion.
Under Kant, you’re allowed to care about pain, hate, suffering, and scale of harm. This would be labeled as “morally permissible” in Florence’s view. But nothing to do only with pain or hate or suffering can actually bind these rational agents; they only care about agency and anything unable to be universalized. Here are some things that are “moral obligations,” the foundation of all morality, under this view:
Don’t lie — having the truth is necessary for agency
Don’t kill — being alive is necessary for agency
Don’t imprison someone
Don’t maim
Keep your promises
Don’t commit suicide
Decent list, I think, if you can stomach the logic it took to get here. But I think it starts to go off the rails next. Remember that universality only works in one direction; while you can ban something for being self-defeating, you can’t make something mandatory for preserving agency. The rational guys we made only care about not violating these innate rights; actually doing anything is a matter of preference. So none of the following is a moral obligation:
Alleviate extreme suffering
Help a starving kid
Rescue someone drowning right next to you
Help animals
Stop racism5
Protect the environment
Provide basic needs
Do anything, like, ACTUALLY GOOD
Now, a lover of Kant will know he labels many of these “imperfect duties.” Remind me how this crystalline logic produces these vague, unenforceable duties? The entire apparatus we have is only able to generate “don’t” rules, so these seem like an afterthought, lacking the rational necessity that was the whole point of the project. There are lots of thoughts Kant has on imperfect duties, and we could fill up another post with his justifications, but I just want to focus on the disconnection from this zero-premises principle. I think saying “you don’t have a moral obligation to do these things,” when the stakes here can be much, much higher than for the strict rules, is bad.
Oh, by the way, remember that these uber-rational boys are completely obligated to never lie and never kill, so here are things that they’re morally obligated to do even while they don’t have to do any of the above stuff.
Not lie to a murderer about where their friend is hiding
Refuse to press a button to kill 1 person to save a billion people
Never sabotage terrorists or anybody else planning something nefarious
Keep your promise to never reveal where a prisoner is if you make one
This is the point where I have to appeal to your intuition. If you disagree with me that Kant is falsely elevating universality, and think he’s not begging the question by stipulating that “rational” means agreeing with it, and you think that we should praise inaction instead of prioritizing actual people’s agency, then I want you to take a look at these lists and go: wait, this moral system sucks ass.
No rational agent wants this! Really? Uber rational agents can only agree on this garbage? Praising people who are unwilling to do good things that have a cost, chastising people who try to do good, banning people from doing things that everyone would agree with because of the principle. It begins to resemble a system that praises people for doing the least bad, instead of any good at all. If this is perfect moral rationality then I’m glad I live with all the irrational moral people.
Patchwork upon patchwork
Alright, back to Florence and Korsgaard. Florence and Korsgaard take a look at this set of rules and agree with you and me that it sucks, so they then need to find justifications as to why these “perfect rational observers” would diverge from these perfect axioms they’ve set out when the result is, like, really bad, man. Here are a few tweaks they’ve made, and I hope I can show that they’re pretty weak, especially given how high a bar “all perfect rational observers agree” is.
Korsgaard on “refusing to press a button to kill 1 person to save a billion people”:
She invents “threshold constructivism,” where there’s a threshold at which “refusing to press the button would go against the very maxims they try to protect.” This threshold, despite needing to be agreed upon completely based on intuition by all rational agents, is somewhere between 100-to-1 and 1,000,000,000-to-1, maybe. My threshold, by the way, is that I think all rational observers would reach the conclusion that they should pull the damn lever in the trolley problem and save the 5! The moment you say there’s some point where intuitions converge, you cease to have extremely deep principles about it.
Florence on “alleviating extreme suffering” not being obligatory:
“Because if you can help someone at little cost who is very likely sentient and in dire need of help, you are obligated to.” To be completely fair to Florence, she specifically says “showing we are in fact committed to e.g. helping others when we can is not the topic of this post.” So it’s probably unfair of me to bash this point without her even presenting an argument for it. But… this one I frankly don’t get at all from the very maxims she talks about in the article. What helping someone looks like is intuitionist unless you’re talking about agency, not suffering.
What little cost looks like is intuitionist — you’re obligated to spend $10 but not all your money? Where’s the hard line? She can appeal to the threshold again and say there’s some point where all rational beings would converge, but, uh, no? If you’re saying you need to spend $10 but not $20, and your justification is that extremely optimally intelligent rational observers must definitionally converge on some values, that sure seems like a pretty strong claim! So my intuition is that this is underdemanding and vibes-based, much more than utilitarianism, which will always give you an optimal move to make. I will update this section with her counter to this if she puts out a post or DMs me her reasons, though, because I feel bad pressing on this point when she said that wasn’t the point of her article — it’s just clearly a very weak point!
The Kantian constructivist is trapped. Either they accept the unpalatable results of Kant’s rules, or they create arbitrary thresholds that hope to alleviate the worst of the rules but end up undermining the entire process in doing so. These two look at the actual set of maxims that really are derived from “zero premises” and balk at the results, then need to find a justification, not based on the results, for why those results shouldn’t stand anyway. But this “zero premises” conclusion is so damn difficult to build a good rationale out of that they just have to gesture and call dissenters irrational, because they’re stipulating what rational is in the first place.
Florence says “ought implies can be rationally convinced.” But when she starts talking about how you should donate a little to shrimp, but not a lot, and justifies it under these extremely strong criteria we started with, it makes more sense for me to call her moral ground shaky and revert to what “ought implies can” means originally — namely, can physically do. It seems there’s a surprising amount of wiggle room in perfect logic.
Morality Without a Soul
Built into the fabric of what we’re doing here is the claim that pain is an intuitive preference. This isn’t a bug, it’s a feature — accepting suffering as the moral bad, instead of these agency-by-universality rules, would just be accepting maximizing or minimizing. But it does mean that it is morally permissible to press a button that makes everyone on earth’s life 10% more painful, so long as it doesn’t impact agency, and it’s exactly as morally permissible to press a button that makes everyone on earth’s life 10% better. I would like my moral frameworks to provide guidance on these points.
Florence is commendably frank when discussing her project here. She says “It is for the reader to judge whether my characterization captures certain ordinary concepts” and she’s uninterested in what the words classically mean in philosophy, which I respect. So I don’t think she’s sneaking anything in here, but I can declare that this seems like a fruitless endeavor.
This morality* she talks about, where not telling white lies is a hard moral obligation but saving millions of lives for the cost of one is a moral preference, just isn’t a very useful distinction to have when deciding what to morally do. The constructivist quest for rationality comes at a seriously staggering price. By trying to escape human intuition, it removes human significance. Even if you accept all of the qualifiers and believe it’s technically perfectly rational, I think it’s really a compelling argument for the moral necessity of our so-called irrationality.
Conclusion
Florence is a better writer than me, a better philosopher than me, and probably cooler and radder than me with more swagger in real life. But I think the Kantian way has a lot of holes to patch, and attempting to prove morality off of literally zero premises and also have all of the results you get be not-insane might be an impossible task. I think anybody under these constraints would fall into intuitionism and ad-hoc justification for the personal moral beliefs they find compelling, such as donating to animals (which you should obviously do if Florence and I both agree).
Kant has a fun philosophy on his hands, and the wish to dodge intuition in philosophy is admirable. I think he unfortunately fails to, and his successors don’t provide clear-cut logical improvements. Proving that the badness in the human experience like pain, anxiety, and fear is actually bad is not a trivial logical endeavor, but I believe that the true foundation of morality must focus on the people experiencing the consequences, not the arbitrary means to reach those ends.
Here’s two subscribe buttons, which means you can subscribe twice as hard. Thank you all.
1. I also learned he was racist, which was less of a joy for me to find.
2. This is called reflective equilibrium, which is vital in philosophy.
3. By the way, this is just one formulation of what Kant calls the “categorical imperative”: the Formula of the Universal Law. It’s what Florence focuses on, and there are two others that Kant insists are merely different expressions of one fundamental principle, the Formula of Humanity and the Formula of the Kingdom of Ends.
4. Yeeeep, here’s the stuff from footnote 3, baby.
5. As I said in footnote 1, Kant was super racist, so I guess if you’re Immanuel Kant or somehow stumbled from the dumbest part of Twitter onto my feed, you can call this one a boon.
> I think anybody under these constraints would fall into intuitionism and ad-hoc justification for the personal moral beliefs they find compelling, such as donating to animals[.]
Great essay, but it also reminded me why I don't like pure philosophy. Kant's ideas seem so hopelessly theoretical, so "how many angels can fit on the tip of a pin?", that there is no chance of applying them to the real world, and what fun is there in discussing theories that couldn't be applied to the real world?
I actually think the first and third formulations of the categorical imperative are obviously identical, and depending on how you interpret the second one, it is also identical. The first formulation tells you to only act on a maxim that you can will should become a universal law, while the third tells you to act as if you were a legislator making universal laws when choosing your own maxim. They seem pretty identical to me, and the second formulation strikes me as basically saying that you have to respect rational agency, or that you have responsibilities and duties towards rational agents that you don’t have towards rocks, which is obvious. The second formulation’s main problem is that it’s worded in a very imprecise manner: all it says is that you can’t merely treat people as means, you have to treat them as ends in themselves, with zero specification of what that is supposed to mean.
I do agree that the argument falls apart if you try replacing “rational” with basically any other word, since “rational” in practice seems to mean “capable of being motivated by reasoning in the morally correct way,” which is obviously circular if you’re trying to come up with morality. That said, I actually think a lot of the stupid rules are due to poor reasoning on the part of philosophers, and that, for example, you can easily universalise a rule permitting you to lie when it has good consequences. Basically, a maxim is a rule of the form “I will undertake action A in situation B motivated by reason C.” Just build it into the rule that you will not lie in a situation where everybody else is following the exact same rule as you, because it would become common knowledge that you lie in that situation, in which case the motivation for lying would be defeated by the fact that nobody would be fooled. A lot of the stupid rules, like the rule against suicide, seem to come from trying to reach a conclusion from axioms that are reasonable but don’t imply the conclusion. For example, if you’re an agent who wants to be dead, it seems pretty beneficial to your goals for others to kill you.
As for imperfect duties, the argument there is that in theory a rational agent can just want to universalise a rule that permits them not to help those in need, but in reality, given actual human preferences, no human can actually wish to be left alone when they are in need. I think this is another case of motivated reasoning, because you could equally argue that given actual human psychology, humans can’t actually live according to the categorical imperative, as indeed seems to be acknowledged in some of Kant’s writing, where he admits that living perfectly in accordance with the moral law isn’t humanly possible, although he was working from a more demanding definition that seems to require you only act out of duty. Still, it’s just true that no human can actually be perfectly unselfish in the sense of not willing others to help them more than they would wish themselves to help others in the exact same situation. Still, I do think requiring that you universalise your moral rules is obvious for anyone who wants other people to follow their morality, not to mention that otherwise you have to ignore the obvious fact that if you think a particular line of moral reasoning is correct, then logically everybody who is moral would follow the exact same reasoning, which just means that whatever rules they come up with would be identical, and therefore picking a self-defeating rule is stupid. Of course, this is presupposing that there is something like correct moral reasoning which you can be right about, and also that reasoning correctly about morality is the important thing, instead of focusing on your actions with little regard for the reasoning behind them. Still, to make it seem more intuitive, do keep in mind that anyone following the exact same reasoning process or decision procedure as you will inevitably come up with the same decision rules, so it’s stupid to pick rules that you would not want people following the identical decision procedure to also follow, since they will in fact inevitably end up following them. Also, if you do not universalise your morality before picking your rules and can’t universalise it, that means you can’t want others to follow your morality, or at least not everybody else.