
moral psychology


I’ve written about The Righteous Mind by Jonathan Haidt, an introduction to what he calls “moral foundations theory.”

Intuitions come first, strategic reasoning second

Haidt begins by making the case, supported by his research, that the role of reason in our moral beliefs is often the opposite of what we assume. We typically employ reason not to arrive at moral truth, but to convince others of the moral truth we’ve already taken for granted. In this way, our reasoning is less an impartial judge than a public-relations shill for our gut feelings. In the lab, human reason enters the picture only after our intuitions have made up our moral minds for us, and reason’s role is to grasp for whichever justifications of those intuitions are most likely to convince other members of our species.

To be sure, we are capable of reflective truth-seeking in moral matters, but it’s not our default as a species, and we are consistently truth-seeking only under very specific circumstances:

…two very different kinds of careful reasoning. Exploratory thought is an “evenhanded consideration of alternative points of view.” Confirmatory thought is “a one-sided attempt to rationalize a particular point of view.” Accountability increases exploratory thought only when three conditions apply: (1) decision makers learn before forming any opinion that they will be accountable to an audience, (2) the audience’s views are unknown, and (3) they believe the audience is well informed and interested in accuracy. When all three conditions apply, people do their darnedest to figure out the truth, because that’s what the audience wants to hear. But the rest of the time—which is almost all of the time—accountability pressures simply increase confirmatory thought.

The Moral Foundations

Haidt conducted a series of experiments in which participants faced hypothetical moral quandaries and were asked to justify what they thought was the right decision. He and his colleagues then examined the justifications and found that they fall into six main categories, axes along which people judge right and wrong, which he dubbed “moral foundations.” Haidt also offers plausible (though fundamentally unverifiable) evolutionary explanations for why humans come pre-loaded with these moral software modules.

  • Care vs. Harm concerns evolved out of our species’ need to care for children, who remain vulnerable through an especially long developmental period. This moral faculty makes us sensitive to the needs of the vulnerable and the oppressed, and encourages us to shun those who act cruelly.
  • Fairness vs. Cheating concerns evolved when humans began banding together into small groups, in order to help us cooperate effectively (with all the survival advantages that entails) while preventing us from being exploited by others. This faculty encourages us to reward those who follow the rules, and punish those who don’t pull their weight.
  • Loyalty vs. Betrayal concerns evolved as human bands got larger and required more social glue to keep them together. This faculty encourages us to put our social group before ourselves and reward others who do the same, while punishing those who do not.
  • Authority vs. Subversion concerns evolved as groups got larger still, in order to help us navigate social hierarchies. This faculty encourages us to submit to our superiors, in order to ensure the social stability of the group as well as improve our own position within it, and causes us to shun those who do not defer to hierarchy.
  • Sanctity vs. Degradation concerns evolved in response to humans’ eating things in the environment that made them sick. This faculty causes us to imbue objects with irrational value, seeing some (typically familiar ones) as inherently “clean” and others as “dirty.” Shared values around what’s clean and what’s not helped early human groups survive and thrive in a world of microbes.
  • Liberty vs. Oppression concerns evolved to help small groups resist domination by an “alpha,” which is counter-productive for the group as a whole. This faculty makes us suspicious of anyone who seeks to raise themselves above the group and impose their will upon it.

The Modular, Moral Mind

In Why Buddhism Is True, Robert Wright writes about the modular theory of mind in a way that maps quite closely to the “moral foundations” that Haidt identifies in his research and book. Wright:

[Kenrick and Griskevicius] divide the mind neatly into seven “sub-selves” with the following missions: self-protection, mate attraction, mate retention, affiliation (making and keeping friends), kin-care, social status, and disease avoidance.

For example, the mental machinery responsible for our moral intuitions around “sanctity vs. degradation” may also be responsible for “disease avoidance”; intuitions around “fairness vs. cheating” may come from our “self-protection” module; likewise with moral intuitions of “care vs. harm” and our “kin-care” mental module.