Why intellectuals fail
Reason is not a truth-seeking instrument
Chapter 1 from my book “Pourquoi les intellectuels se trompent”
We like to think that our era, governed by science, reason, and democratic institutions, bears no resemblance to those of the past. We imagine it has little in common with eras of collective folly, when newborns were sacrificed to appease gods, peoples were exterminated for no reason, individuals enslaved because of their skin colour, holes drilled into the skulls of the sick to cure them, witches sent to the stake, and the future divined from the entrails of slaughtered animals. We like to imagine that the elites of our societies are very different from the elites of those societies. And yet we forget that one thing hasn’t changed: human nature. To understand why brilliant people could let themselves be seduced by destructive ideas (and why they still can), we must look at human nature and answer this question: why do we reason?
The Evolutionary Function of Reason
Why do we reason? The hypothesis long favoured by the cognitive sciences was as follows: if evolution endowed humans with reasoning capabilities superior to those of every other species, it is to help us see the world accurately. This was, among others, Darwin’s view. Through his intellectual faculties, he argued, man
“has great power of adapting his habits to new conditions of life. He invents weapons, tools, and various stratagems to procure food and to defend himself. When he migrates into a colder climate he uses clothes, builds sheds, and makes fires; and by the aid of fire cooks food otherwise indigestible. […] [As a consequence], the individuals who were the most sagacious would rear the greatest number of offspring[1].”
For Darwin, reason is a tool allowing us to acquire knowledge, understand reality, reach better decisions, and extricate ourselves from difficult situations. It seems obvious that evolution has, at least partially, selected for the capacity to see the world as it is. Is this view entirely correct, however? If so, how do we reconcile this with the fact that human beings are conformist, and that in many contexts, when the people around us are mistaken, reasoning steers us toward an error[2]? And why do we succumb to dozens of cognitive biases that lead us to commit systematic reasoning errors[3]? If reason’s role is to guide us toward truth, might it be failing at its task - failing to perform its function[4]? This is how the literature on cognitive biases and conformity has long been read: reason does its job poorly.
According to researchers Hugo Mercier and Dan Sperber, this literature demonstrates instead that the function of reason is not solely to guide us toward the truth[5]. Imagine, they propose, that an animal species had been endowed by evolution with wheels rather than legs. If these wheels allowed it to move efficiently and precisely, we could conclude that they fulfil a function of locomotion. But if these wheels had a flaw, present in every animal of the species, that compromised the very execution of their function (for example, if one wheel, larger than the other, prevented the animal from rolling straight), then we would have to conclude that we had mistaken the function itself. Why? Because if wheels served only a locomotor function, animals born (through genetic variation) with symmetrical wheels would have gained a selective advantage and become the majority. Since that did not happen, asymmetrical wheels must fulfil their evolutionary function better than symmetrical ones. The role of the wheels, then, is not solely to enable movement. What is it? Of course, the example is schematic, but one could imagine that rolling along an unpredictable path might help the animal escape predators (a function of deception), stimulate its sexual excitement (a reproductive function), and so on. By the same logic, human reason, a product of evolution, cannot be systematically defective. Its true function must be supported, rather than undermined, by our multiple cognitive biases, and in particular by our predisposition to conformism. What, then, is this function?
According to Mercier and Sperber, we use our cognitive capacities partly to access the truth, certainly, but also (and above all) to score social points, manage our reputation, and facilitate cooperation with our peers. Evolution has endowed us with an epistemic rationality (a capacity to adopt valid beliefs), but also with a social rationality. Throughout our evolutionary history, an individual’s reputation (upon which depended his ability to benefit from others’ protection, find romantic partners, and enjoy the fruits of cooperation) was often more important for his survival than the accuracy of his beliefs. The psychologist Jonathan Haidt asks us to imagine two men: one is obsessed with the truth and intellectually honest, reasoning rigorously in all circumstances, sometimes at the cost of his reputation[6]. The other always manages to maintain a good image, even if that means, at times, embracing fashionable errors. Across history, which of the two has enjoyed a selective advantage? In other words, which of the two was more likely to survive, reproduce, and secure tribal protection for his offspring? Obviously, the latter. We are not the descendants of Copernicus and Galileo, but of the righteous crowd that condemned them. Not only did social ostracism often lead to death in preindustrial societies (resources were scarce and dangers ever-present), but for 600,000 years, among hunter-gatherers, troublemakers and dissidents were sometimes directly executed[7]. Nonconformists sometimes suffered the same fate (think of Socrates), and even today, apostasy remains punishable by death in ten countries. Evolution has therefore favoured the character traits that allow us to avoid becoming pariahs - traits that urge us to conform to what is expected of us.
And since the ideas and behaviours deemed socially acceptable vary from one era and one community to another, what was selected isn’t the ability to uncover the truth, nor to adopt any particular set of ideas, but to rationalize whatever consensus happens to prevail. “For millions of years, summarizes Jonathan Haidt, our ancestors’ survival depended upon their ability to get small groups to include them and trust them, so if there is any innate drive here, it should be a drive to get others to think well of us[8].”
In our prosperous, peaceful societies governed by the rule of law, being viewed positively by our peers is no longer indispensable for our physical survival, yet certain mental dispositions - the fruits of millions of years of evolution - endure. Today, being well regarded by others still matters more to us than nearly anything else[9]. One study finds that 70% of people would rather have a hand amputated than carry the reputation of being a Nazi, while 53% would prefer death over being considered a paedophile[10]. Similarly, well-being correlates more strongly with sociometric status (the respect and admiration we receive from our peers) than with socioeconomic status[11]. Furthermore, negative social judgment is associated with a spike in cortisol (a stress-related hormone)[12], while children’s self-confidence closely tracks their popularity among peers[13]. “We think so highly of the human soul, wrote Blaise Pascal four centuries ago, that nothing is harder to bear than contempt or indifference from another; all human happiness lies in being held in esteem[14].” A hundred years later, Adam Smith asked: “What is the point of all the toil and bustle of this world? To be observed, to be attended to, to be noticed with sympathy and approval[15].”
Once we acknowledge the social function of reasoning, cognitive biases become perfectly adaptive[16]. Conformity, in particular, is no longer an enigma: reason need not always lead us to the most justified convictions, but to those which, at the moment we reason, are deemed the most justified; it need not guide us toward the most logical decisions, but toward those perceived as the most logical; it need not push us to fight for the most virtuous causes, but for those considered the most virtuous. “People, writes Steven Pinker, are embraced or condemned according to their beliefs, so one function of the mind may be to hold beliefs that bring the belief-holder the greatest number of allies, protectors, or disciples, rather than beliefs that are most likely to be true[17].”
Mercier and Sperber review a series of studies showing that when someone knows he will have to justify a choice, he reasons until settling for... whatever will be easiest to justify. When the same person makes his choice shielded from the judgment of others, he reaches a different decision (and often a better one, judging by the level of satisfaction reported weeks later)[18]. Yet, in both cases, he believes he has reasoned his way to the decision that will satisfy him most. “Reason, Mercier and Sperber conclude, improves our social standing rather than leading us to intrinsically better decisions. And even when it leads us to better decisions, it’s mostly because we happen to be in a community that favors the right type of decisions on the issue[19].” In other words, those who adopt false beliefs do reason, and their reason performs one of its evolutionary functions perfectly: it secures their standing within an ideological community that shares those beliefs[20]. Intelligence, therefore, is no safeguard against the victory of bad ideas. When a judgment takes hold, when it becomes the majority view in our social circles, our brains are wired to accept it, embrace it, and feed it. This also suggests that those who hold the “correct” opinions on subjects such as vaccines, evolution, or global warming are sometimes no more deserving than anyone else; they are merely lucky enough to live in an environment where the socially defined truth is aligned with the objective truth (although that alignment is fragile). Do we have evidence for this? Among the general public, there seems to be no correlation between understanding the theory of evolution and believing it[21]. Likewise, citizens who believe in human-caused global warming appear to be just as ignorant of the scientific facts as those who do not[22].
And contrary to a popular idea, there appears to be no causal link between the capacity to reason correctly and atheism[23]; the only factor with a causal impact on the likelihood of believing in God seems to be exposure to other believers[24]. This should make us humble: we do not owe our worldview to our intelligence, but to the company we keep!
In short, Darwin was partly mistaken. Two forces, both products of evolution, contend within us: the concern for truth, and the concern for social standing. Epistemic rationality and social rationality. Sometimes, it is (socially) rational to be (epistemically) irrational. What if, within the mind of an intellectual, social rationality easily won the contest of rationalities?
The contest of rationalities in the brain of intellectuals
The economist Robin Hanson compares beliefs to clothing[25]. Clothes possess a practical value (they keep us warm, protect us from injury...) but also a social value (they indicate our profession, reveal our personality, advertise our taste...). Beliefs, likewise, have a practical value (they motivate specific behaviours) and a social value (they testify to our conformity to a group’s codes). Now, the harsher the climate, the more we choose our clothes for their practical rather than social value. Analogously, the higher the cost of being wrong, the more we adopt beliefs for their practical value; the lower the cost, the more we adopt them for their social value. In fact, economists Roland Bénabou and Jean Tirole, examining the scientific literature on the conditions for rationality, emphasize that rigorous thinking is rarest precisely when the individual cost of error is low[26]. In other words, when epistemic irrationality carries no penalty, everyone (unconsciously) opts for social rationality, rationalizing whatever will consolidate their reputation. In his classic Capitalism, Socialism and Democracy, Joseph Schumpeter argued that the rise of free enterprise had enabled the rise of rational thought[27]. Why? Because in the private economic sphere, mistakes have heavy consequences for the person who makes them. If an entrepreneur tries to produce a good by chanting “planning and control” nine times, he will suffer from the inefficiency of the incantation and be forced to think about other methods of production. For such a man, the psychological incentive to seek the truth is powerful.
Among intellectuals (and particularly in the social sciences), error, by contrast, does not always penalize the one who supports it. A theorist can hammer away at “planning and control” all his life without ever personally suffering the consequences of what he advocates. Furthermore, not only is he unable to “test” his theory at every stage of its development (as a baker would - checking that his bread rises in the oven, then that it tastes good, then that customers like it, modifying his recipe at each step as feedback comes in), which can lead him to build a brilliant and complex theory upon flawed premises (consider Marxism, built on the fallacious labour theory of value); but additionally, his reputation is not pegged to the validity of his ideas (unlike the baker, whose reputation is a function of the quality of his bread). Why?
Because the criteria allowing for the empirical evaluation of his theories are both distant in time (if an economist proposes doubling taxes, we will not know whether he is right until his proposal is applied, and if it ever is, we will have to wait before knowing the results) and subjective (if higher taxes impoverish the country, some might still welcome it, arguing that it has helped build a more egalitarian society). Intellectuals are therefore judged little by the objective merits of their opinions, and heavily by what others think of their opinions. Hence the singular importance, for them, of social rationality. The philosopher and economist Thomas Sowell gives the following example:
“The ultimate test of a deconstructionist’s ideas is whether other deconstructionists find those ideas interesting, original, persuasive, elegant, or ingenious. There is no external test. […] The very terms of admiration or dismissal among intellectuals reflect the non-empirical criteria involved. Ideas that are “complex,” “exciting,” “innovative,” “nuanced,” or “progressive” are admired, while other ideas are dismissed as “simplistic,” “outmoded” or “reactionary”[28].”
Even when an empirical evaluation criterion is available, the individual cost of error seems low. In a classic study begun in the 1980s, a researcher interviewed several hundred experts in politics, economics, and social sciences to gather their forecasts regarding major events of the coming decades[29]. The verdict twenty years later: their conjectures proved barely more accurate than those of chimpanzees throwing darts at random[30], and even less accurate than the predictions of average citizens or simple statistical models. (The more famous the expert, the more likely he was to be wrong). Most surprising of all: the accuracy of their predictions had almost no impact on their reputation. Even those who had been heavily and systematically wrong for twenty years continued to be considered credible authorities in their fields. At the end of the Second World War, Orwell mocked the “experts” of the British public debate: none had been capable of predicting the German-Soviet pact, many had judged the Maginot Line to be “impenetrable” or prophesied that Russia would defeat Germany in less than three weeks, yet all retained their standing. He compared political commentators to “astrologers,” judged and acclaimed not by the accuracy of their beliefs and predictions (the criterion of empirical validation), but by their ability to pander to their readers’ ideological inclinations (the social criterion: the opinion of others on their own opinions)[31].
“One of the surprising privileges of intellectuals, wrote philosopher Eric Hoffer, is that they are free to be scandalously asinine without harming their reputation. The intellectuals who idolized Stalin while he was purging millions and stifling the least stirring of freedom have not been discredited. They are still holding forth on every topic under the sun and are listened to with deference[32].”
Though the facts proved him wrong almost systematically, Sartre remained throughout his life the West’s reigning “master thinker[33]”, revered by the intelligentsia on both sides of the Atlantic. In October 1939, he wrote: “Hitler has said a hundred times that he would not attack France[34]”; in 1954, he predicted that by 1960 the standard of living in the USSR would be 30 to 40% higher than in France[35]; a few years later, he assured the world that the United States was sinking irremediably into fascism[36]. The list of thinkers who retained their prestige despite grave errors could fill volumes. Even Bertrand Russell, who urged his country to disarm unilaterally to appease the Nazis, continued to be considered an intellectual reference after the war. In the 1970s, upon returning from Asia, Simon Leys wrote about the humanitarian toll of the Maoist regime. He was met with the jeers of an intelligentsia that venerated the inanities of the Little Red Book. While Leys, the victim of a smear campaign, lost any chance of pursuing an academic career, his detractors remained respected and influential[37]. Drawing on this experience, Leys compared the great moral authorities of the intellectual world to the fools and idiots whom the Native Americans of the Far West considered to be “creatures inspired by God”: the French “take them for guides, consult them on every problem, and, when these oracles are wrong (which happens often), grant them that immunity normally enjoyed only by small children and the simple-minded[38]”.
Let us sum up. On one hand, the cost of error is low for an intellectual, since he does not personally suffer the consequences of his bad ideas. On the other hand, the price to pay for stating a truth can be high when it does not coincide with what others believe the truth to be. Under these conditions, everything is in place to allow an overwhelming victory of social rationality over epistemic rationality[39].
Unfortunately, the fact that the individual cost of an error is low does not mean that its collective cost, over time, is not considerable. In the USSR, a consensus emerged around Lysenko’s agricultural theories. Among communist intellectuals and Party officials, it was individually rational for each to embrace an enthusiastic faith in the rejection of classical genetics and in the application of “Marxist dialectics to the natural sciences”. The individual cost of the error was low, or in any case negligible compared to the associated reputational gains. The collective cost, in the long run: millions of deaths.
Intellectuals’ social identity and the cost of truth
If epistemic rationality (intellectual honesty) can be costly for intellectuals, that is also because, more than anyone else, they build their social identity on their ideas. They may therefore be the most incapable of changing their minds, the most inclined to choose irrationality in the present to avoid admitting past mistakes - just as an investor unreasonably clings to a bad investment rather than sell at a loss, or as a proud king refuses to capitulate so that his soldiers will not have died “in vain.” It is indeed psychologically painful to renounce an idea that we have long defended publicly. “Most men,” writes Tolstoy, “including those at ease with problems of the greatest complexity, can seldom accept even the simplest and most obvious truth if it be such as would oblige them to admit the falsity of conclusions which they have delighted in explaining to colleagues, which they have proudly taught to others, and which they have woven, thread by thread, into the fabric of their lives[40].” In his Memoirs, Raymond Aron recounts that Sartre effectively relinquished his ability to question himself as his ideas became inseparable from who he was:
“For a long time, Sartre searched for himself and took pleasure in submitting to me his ideas of the day; if I tore them to pieces or, more often, if I unveiled their ambiguities or contradictions, he accepted the criticism because he had only just conceived them and had not yet adopted them for good. [...] [Later], he defended them because he held them to be his own, in the deepest sense of the word, and no longer as hypotheses formulated at random after a reading or a sudden intuition[41].”
The psychologist Keith Stanovich shows that there is no type of person more predisposed than another to reason in a biased way; but there are types of belief that do lead us to reason with bias. Which ones? Precisely those that are central to how others perceive us, that involve emotional commitment, and that stake our ego[42]. Perfectly rational individuals, once politicized, turn irrational the moment they are asked to reflect on political matters[43]. Bertrand Russell playfully imagined what would have happened had Einstein proposed a theory as revolutionary as relativity not in science, but in the ideological sphere, where no one accepts changing their mind:
“English people would have found elements of Prussianism in his theory; anti-Semites would have regarded it as a Zionist plot; nationalists in all countries would have found it tainted with lily-livered pacifism, and proclaimed it a mere dodge for escaping military service. All the old-fashioned professors would have approached Scotland Yard to get the importation of his writings prohibited. Teachers favourable to him would have been dismissed[44].”
Beyond intellectuals, anyone who lives partly off their ideas (writers, politicians, journalists) has much to lose by changing their mind. A columnist, for instance, who expressed views too far from the consensus prevailing in his social circle might lose his readers, his weekly column, his publishing contracts, his connections with TV producers who regularly invite him on air, and his opportunities for paid corporate talks. He might also lose the respect of nearly all his intimates, since - a peculiarity of the political, media, and intellectual spheres - friendship circles often overlap with networks of ideological affinities. A political journalist at The Guardian forms friendships with colleagues who all share most of his political opinions, whereas an engineer at IBM forms friendships with colleagues whose convictions vary widely. The journalist who changes his mind therefore falls in the esteem of all his friends, whereas the corporate employee who does so only loses standing in the eyes of some friends (and is held in higher regard by others). For the latter, the social cost of changing one’s mind (and therefore, often, of admitting the truth) is lower, and the “contest of rationalities” playing out in his brain is more evenly matched. (If the capacity for rational thought correlates with the social cost of changing one’s mind, then a society’s dominant ideology - the one embraced by the elite - may be destined to grow more irrational over time: since distancing oneself from it is socially costly, everyone is psychologically incentivized to spare it from critical scrutiny.)
Add, moreover, that in certain contexts, people are valued by their peers precisely for being loyal members of a clan[45]. Human beings are tribal; we need to bind ourselves to coalitions (throughout our evolutionary history, survival depended on the strength of the coalitions we belonged to for waging war, hunting, or building defences against predators and rival tribes), and we reserve our warmest moral sentiments for those who espouse the clan’s beliefs[46]. For example, we prefer to cooperate and share our resources with those who share our political opinions rather than with those who share our skin colour, social class, or age[47]. If an intellectual renounces certain convictions, he risks losing the favours his ideological allies were prepared to grant him and seeing his social status decline. (This may be why “left-wing” intellectuals who end up disagreeing with “the Left” on almost every issue never actually cross over to the Right, but instead claim they want to repair the Left, or rebuild a “true” Left. Perhaps they feel the need to frame their dissent not as a betrayal in favour of the opposing clan, but as a contribution to their own clan’s renaissance - an attempt to make it adopt a strategy to win the war of coalitions.[48]) The coalitions we belong to need not be ideological (though for the elite, who are more politicized than average, they often are): we can for example be united with others by supporting the same football team. But just as we never stop supporting the football team we started rooting for in childhood (even if its players change entirely from one decade to the next), we rarely abandon an ideological community once we have cemented our affiliation to it. The Hungarian Georg Lukács, one of the 20th century’s most influential Marxist intellectuals, declared that even if every single empirical prediction of Marxism were disproven, he would still hold Marxism to be true[49].
To conclude this section, it is worth noting that the cost of truth can be prohibitive for an intellectual who championed ideas that have sown desolation. Studies show that when we fear having harmed someone else, we often choose not to seek information about the consequences of our actions in order to save face, protect our self-image, and spare ourselves the feeling of having been cowardly or selfish[50]. As early as 1945, Orwell noted that in the United Kingdom, admirers of Hitler had achieved the feat of never even learning that Auschwitz existed, while communist sympathizers, zealous denouncers of Nazi camps, had somehow never heard of the Russian ones[51]. As Saul Bellow observed, “a great deal of intelligence can be invested in ignorance when the need for illusion is deep[52]”. (The problem: intellectuals who commit consequential mistakes cannot realize that they are mistaken - because they flee the information that would allow them to come to that realization.)
In a text analyzing the French press coverage of the Cambodian genocide, literature professor Pierre Bayard offers a fascinating case study. He discusses the attitude of Patrick Ruel, a journalist at Libération (one of France’s leading left-wing newspapers), who enthusiastically welcomed the Khmer Rouge’s rise to power in April 1975, then refused to believe the damning reports about the regime - and never once informed his readers of the possibility of a grim humanitarian toll, let alone a genocide. Ruel remained in denial for over two years, even as Le Monde (the major French left-wing newspaper) timidly began to report a worrying situation in early 1976, and a few months later described Cambodia as “a vast concentration camp[53]”. Replacing reality with fiction, Ruel clung stubbornly to his initial judgment until March 1977, insisting that Le Monde’s reputation for seriousness and objectivity had been exploited for a large-scale disinformation operation, and comparing certain refugee accounts to “grotesque theater[54]”. How can denial persist in the face of all evidence? Bayard offers his answer:
“The explanation is simple: it is difficult to acknowledge a mistake. We are wrong to assume [...] that we can change our mind after examining the new material added to the case. Because of the psychic investments poured into fantasy-building, publicly confessing an awakening is psychologically very costly. It entails a total revision of one’s self-image, as constituted by oneself and by others. Such a revision becomes all the harder when the error carries weight. [...] [By championing a genocidal regime], these journalists became accomplices to horror, and the discovery of such complicity, however involuntary, did not come easily. [...] Such a [psychic] conflict is even more painful to manage when it is collective. When many intellectuals support your point of view, it is difficult, under the gaze of others, to backpedal and reverse judgment. [...] One’s identity is threatened in proportion to how deeply the believer invested in the object of belief and the fantasy concealing the truth.”
In other words, if an intellectual commits to a position, there comes a point when the social and psychological cost of backtracking exceeds the cost of self-deception. He must forge ahead, advancing still further into the position he has staked out. His cognitive faculties are then mobilized for a single end: to prove that he is headed in the right direction. His intelligence is used to vindicate past wanderings rather than to seek the truth; his past positions (and errors) come to entirely shape his perception of the world in the present[55] - social rationality crushes epistemic rationality. Unfortunately, this suggests that the more an intellectual errs… the more he goes on erring (we will return to this pattern in the next chapter). Error does not correct itself; it compounds.
The Ideological Prisoner’s Dilemma
Unfortunately, while it may be desirable for each of us, individually, to adhere to a false belief, it can lead to collective disasters - as we saw earlier with the example of Lysenkoism.
In game theory, there are many configurations where the optimal action for each individual (from their own point of view) leads to a situation that is sub-optimal for everyone. In a classic paper, Garrett Hardin describes a situation in which shepherds share a common pasture for their flocks[56]. Year after year, each shepherd adds more livestock to the pasture. Eventually, a tipping point is reached where the overexploitation of resources begins to degrade the field. The shepherds - despite being fully aware of the looming danger - continue to expand their herds. Soon, the pasture becomes unusable. The shepherds can no longer ply their trade; all are ruined. What is interesting here is that, taken individually, none of the shepherds acted irrationally. Why? Because if a single shepherd had chosen to limit the size of his herd, he would have borne the consequences of that choice (reduced yield) without reaping any benefit, since the other shepherds would have continued to degrade the land. Put differently, he would have paid the full cost of his decision, while the corresponding gain (the space freed up on the pasture) would have been shared among all the shepherds. Hardin demonstrates that it is entirely possible for everyone - at the individual level - to act rationally, and yet for the outcome - at the collective level - to be a catastrophe.
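Hardin’s payoff logic can be made concrete with a toy model. The numbers below (pasture capacity, degradation rate, herd sizes) are illustrative assumptions, not figures from Hardin’s paper; the sketch only shows that expanding one’s herd beats restraint for each shepherd individually, while universal expansion ruins everyone.

```python
# Toy tragedy-of-the-commons payoff model (all numbers are illustrative).
CAPACITY = 100  # animals the pasture can sustain without degrading

def yield_per_animal(total_animals):
    """Yield per animal falls once total grazing exceeds the pasture's capacity."""
    if total_animals <= CAPACITY:
        return 1.0
    return max(0.0, 1.0 - 0.05 * (total_animals - CAPACITY))

def payoff(my_herd, others_total):
    """One shepherd's income: his herd size times the (shared) yield per animal."""
    return my_herd * yield_per_animal(my_herd + others_total)

# Ten shepherds, each grazing 10 animals: the pasture is exactly at capacity.
others = 9 * 10

print(payoff(10, others))    # restrained herd -> 10.0
print(payoff(15, others))    # expanded herd  -> 11.25: expanding pays individually

# ...but if all ten shepherds expand to 15 animals, the pasture collapses.
print(payoff(15, 9 * 15))    # -> 0.0: everyone is ruined
```

The asymmetry is visible in the payoffs: the expander captures the whole gain from his extra animals, while the degradation they cause is spread across all ten herds.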
We are probably facing an analogous situation in the realm of ideological convictions[57]. Popular but mistaken beliefs are like the pasture: it can be advantageous for each individual to sign on - if only to look good in the eyes of others - yet, in the long run, everyone loses.
Imagine, for instance, that in certain intellectual circles, the idea that Western civilization must self-destruct becomes fashionable. Anyone who objects would suffer the social cost of their dissent, but would derive no benefit from it, since, on their own, they have little influence over the West’s trajectory. Each intellectual, acting as it is preferable for him to act (there can of course be cynicism, but as we have seen, much of this is largely unconscious), approves the idea of the West’s self-destruction - and eventually, the event actually comes to pass. If that thought experiment feels too cartoonish, consider another belief popular in certain circles: the idea that police and prisons should be abolished. Imagine an intellectual who belongs to one of these circles and must decide whether to endorse the belief. Let’s even suppose that his judgment is decisive (and that he knows it): if he endorses it, he will enable the release of every serial killer. Is it in this intellectual’s interest to endorse the belief? Perhaps. The harmful consequences of the decision would be diffuse, diluted across millions of citizens (the absolute number of crimes in the country would rise sharply, but the probability of any particular individual - and thus the intellectual in question - being attacked would increase only marginally), whereas the harmful consequences of not endorsing the idea weigh solely on the intellectual (his reputation weakened among his peers). It isn’t hard to imagine that for decades now, on highly polarized issues, we have often confronted such patterns. For example, since the social consequences of opposing immigration are, at the individual level, more damaging than the consequences of immigration itself, many intellectuals have psychological incentives that incline them toward motivated reasoning in defence of immigration policy.
Note that it may even be desirable for an intellectual to embrace an idea whose implementation would directly harm his own interests. The economist Bryan Caplan asks: does a wealthy person who supports parties promising to raise taxes necessarily demonstrate altruism?
“What is the expected marginal cost to Barbra Streisand of voting for a candidate sure to raise her tax bill by a million dollars? The answer is emphatically not a million dollars, but a million dollars multiplied by the chance - say, one in a million - of her casting the decisive vote. Her voting for higher taxes is not an act of radical self-sacrifice, but a token donation of a dollar. […] The wealthy but uncharitable socialist thus ceases to be a mystery once you understand relative prices: voluntary charity is costly to the giver, but voting for charity - or anything else - is virtually free[58].”
In other words, when an idea becomes popular, someone may have an incentive to subscribe to it and defend it even when the success of that idea would be extremely costly for him. In short, it can be in our interest to embrace ideas… that run directly against our interests. Corollary: even if self-interest rules the world, the fact that an ideology harms a segment of the population doesn’t mean that segment lacks an incentive to support it. Ultimately, neither intelligence nor reality is an obstacle to the spread of absurd or harmful ideas.
[1] Quoted in D. Sperber and H. Mercier, The Enigma of Reason, Penguin Books, 2018, p. 179.
[2] Our propensity to imitate the behaviours and ideas of those around us is among the best-attested and most replicated phenomena in social psychology. P. Zimbardo, The Lucifer Effect, Random House, 2007, pp. 258–296. See also: Asch (1956); Franzen and Mader (2023).
[3] We underestimate probabilities in some contexts, overestimate them in others; we are relatively insensitive to orders of magnitude; we are overly influenced by a first impression; we make choices inconsistent with our preferences; and so on. See, for example, Kahneman and Tversky (1990).
[4] “Function” is used here in its evolutionary sense: what selective advantages did human beings capable of reasoning have over those who were not?
[5] D. Sperber and H. Mercier, The Enigma of Reason.
[6] J. Haidt, The Righteous Mind, Penguin Books, 2012, p. 84.
[7] R. Wrangham, The Goodness Paradox: The Strange Relationship Between Virtue and Violence in Human Evolution, Penguin, Vintage Books, 2019, pp. 142-168.
[8] J. Haidt, The Righteous Mind, p. 90.
[9] Baumeister and Leary (2017).
[10] Baumeister et al. (2018).
[11] Anderson et al. (2012).
[12] Dickerson and Kemeny (2004). Referenced by: R. Henderson, “Thorstein Veblen’s Theory of the Leisure Class - A Status Update,” Substack, January 2023.
[13] Magro et al. (2019).
[14] Quoted in A. Finkielkraut, L’Identité malheureuse, Folio, 2013, p. 199.
[15] Quoted in Loewenstein et al. (2016).
[16] For instance, it can be advantageous to succumb to the sunk-cost bias - that is, to persist in a misguided decision. Why? Because it sends a signal: when I make a decision, I stick with it. You can count on me. I am a good cooperator. Evolution may have favoured those who displayed a certain degree of stubbornness over those who backed out (even rationally) at the first difficulty. Likewise, confirmation bias - irrational if we assume that we reason in order to reach the truth (it would be more efficient to have a “disconfirmation bias” that pushed us to pay more attention to refutations of our beliefs) - allows us to behave like good team players, to stay loyal to the tribe when it is wrong (since we do not examine arguments against it), and thus to continue enjoying its protection.
[17] S. Pinker, Language, Cognition, and Human Nature: Selected Articles, Oxford UP, 2013, p. 286.
[18] D. Sperber and H. Mercier, The Enigma of Reason, pp. 253–257.
[19] Ibid., p. 259.
[20] In experimental settings, the reasoner is in fact markedly more conformist when he perceives those influencing him as belonging to “his” team (same political side when he is asked to take a stance on social issues, same religion when the question is faith, and so on). Abrams et al. (1990). See also: C. Sunstein, Conformity: The Power of Social Influences, NYU Press, 2019, pp. 26–27.
[21] Shtulman (2006). See also S. Pinker, Enlightenment Now, Viking, 2018, p. 356.
[22] Kahan et al. (2012a).
[23] Gervais et al. (2018).
[24] Gervais et al. (2021).
[25] Hanson (2009).
[26] Bénabou and Tirole (2016).
[27] J. Schumpeter, Capitalism, Socialism and Democracy, Routledge, 2003 (1943), p. 123.
[28] T. Sowell, Intellectuals and Society, Basic Books, 2012 (2010), pp. 8–9.
[29] P. Tetlock, Expert Political Judgment: How Good Is It?, Princeton University Press, 2005.
[30] Berkeley undergraduates even managed the feat of being less accurate than the chimpanzees.
[31] G. Orwell, “Notes on Nationalism,” pp. 5–6.
[32] Quoted in T. Sowell, Intellectuals and Society, p. 10.
[33] Phrase coined by Solzhenitsyn. See R. Aron, Mémoires, Bouquins, 2010 (1983), p. 959.
[34] Quoted in M. Winock, “Sartre s’est-il toujours trompé ?,” L’Histoire, February 2005.
[35] J.-P. Sartre, “La Liberté de critique est totale en URSS,” interview in Libération, July 1954. “Whatever path France is to follow,” he added, “that path cannot run contrary to that of the Soviet Union.”
[36] See R. Aron, The Opium of the Intellectuals, Pluriel, 2010 (1955), p. 235.
[37] As the film director Fabrice Gardel notes, intellectuals who were once Maoist sympathizers admit today that they were wrong; nevertheless, “it remains chic to have applauded Mao, to have talked nonsense, nonsense that made a generation dream.”
[38] S. Leys, “À Propos de Sartre, l’irresponsabilité, le bonheur et le luxe,” Commentaire, Autumn 2005.
[39] It is worth noting, moreover, that ideology seems most prevalent in professions where, for various reasons (job security, evaluation by social reference points rather than objective criteria, and so on), one does not personally pay the price of one’s irrationality: the justice system, education, journalism, politics…
[40] Quoted in M. Shermer, The Believing Brain, Robinson, 2012, p. 313.
[41] R. Aron, Mémoires, p. 129. There is a cognitive bias known as the “Ikea effect,” meaning the tendency to assign disproportionate value to an object we have partially built. An intellectual may therefore face an additional difficulty in taking a critical look at a theory or concept he himself has created.
[42] K. Stanovich, The Bias That Divides Us, MIT Press, 2021, p. 7.
[43] Ibid., pp. 66–73; Kahan et al. (2017a).
[44] B. Russell, Essais sceptiques, Les Belles Lettres, 2011 (1928), p. 164.
[45] P. Boyer, Minds Make Societies, Yale University Press, 2018, p. 44.
[46] Loewenstein et al. (2016).
[47] Cosmides et al. (2015). Likewise, membership categories grounded in beliefs (political opinions, loyalty to a sports team, religion, and musical preferences) lead to greater cooperation than all other membership categories (style of dress, body type, socio-economic status, gender…), apart from family ties (Ben-Ner et al., 2009). Finally, conservative and progressive professors alike, asked to choose between two students for a scholarship, pick in 80% of cases the one who shares their political opinions rather than their skin colour (Iyengar and Westwood, 2015).
[48] It is worth noting, however, that few conservative intellectuals criticize the right with the aim of “rebuilding a true right.” How should this be explained? One may suppose that a right-winger’s identity is less predicated on membership in a moral family. There could be various reasons for this: conservative right-wing politics, adhering to a pessimistic view of human nature, does not conceive politics as a struggle between those seeking to shape a perfect society and those standing in their way; the classical liberals do not view politics as capable of fundamentally improving people’s lives; the Christian right coalesces more around religion than ideology… And, of course, the label “right” carries no inherent moral prestige. We may therefore imagine that left-wing intellectuals, more than their right-wing counterparts, see their place within their group threatened by ideological missteps. For them, the pressures of social rationality are particularly acute.
[49] R. Conquest, Reflections on a Ravaged Century, Norton & Company, 1999, p. 44.
[50] Dana et al. (2007); Soraperra et al. (2023).
[51] George Orwell, Notes on Nationalism, pp. 14–15.
[52] Quoted in Azar Gat, Ideological Fixation, Oxford University Press, 2022, p. 276.
[53] Quoted in Pierre Bayard, “Du déni au dessillement, la presse française de gauche devant le génocide cambodgien,” international conference “Cambodge, le génocide effacé” (Paris, 9–11 December 2010), Saint-Denis, Université Paris XIII, p. 76.
[54] Bayard, ibid., pp. 76–77.
[55] Even in everyday situations, past choices can shape how we perceive reality. The economist George Loewenstein shows, for example, that someone who bought a bottle of wine at a high price is more likely to find it tasty than someone who bought the very same bottle at a steep discount (The Cognitive Science of Belief, Cambridge University Press, 2022, p. 331). Why? Because, to rationalize having spent a lot of money, he persuades himself that he enjoys what he is drinking. Having made a mistake in the past leads him to delude himself in the present so as not to disavow his past self. Everyone lives with illusions that help them rationalize their past mistakes. This is probably why “we cannot count on those who created the problems to solve them.”
[56] Hardin (1968).
[57] K. Stanovich, The Bias That Divides Us, pp. 51-53.
[58] Caplan (2001).