The evolved functions of morality: From the Haidts to the depths.
The case for a darker view of morality
Written by Robert Kurzban
On January 6th, 2024, the sentence of a woman in Iran was carried out: 74 lashes. Her offense? Failing to cover her hair in public.
This was the result of the work of the morality police.1
The morality police.
The balance of this essay makes the case that Moral Foundations Theory—a prominent theory, developed by Jon Haidt and colleagues, which holds that different cultures build morality from a set of core innate psychological systems—does not, in fact, explain the central feature of human moral psychology. Moral Foundations Theory and similar ideas are, I suggest, explanations related to morality, but not explanations of it.
Most people see morality as a good thing, something virtuous and noble. Morality is a central aspect of human nature and of our everyday lives. But what if the explanation for it has nothing to do with altruism or virtue, and everything to do with something darker?
What’s in a Word?
To set the table, let’s talk about two words: 1) morality and 2) is.
The precise meanings of both words are, obviously, the subject of debate, and would be even if it hadn’t been for Bill Clinton’s infamous interrogation of the latter.
According to the dictionary, morality has multiple meanings. One has to do with rules—a moral system, rules which, if broken, lead to punishment—and that is the meaning I focus on here. Another meaning is virtue—e.g., “of high moral character”—with the connotation of doing good things. Related to those is a third idea, that to be moral is to conform to the rules, which presupposes moral rules to conform to. I’ll largely ignore that third meaning because it’s downstream of the first.
When it comes to morality, what, exactly, are scientists who take an evolutionary point of view trying to explain?
Consider some relatively uncontentious ways that the word moral is used in both everyday conversation and the scholarly literature in psychology and philosophy. A moral dilemma, such as the famous Trolley Problem, is a case in which the issue is whether taking some action is wrong. In the Trolley Problem, a runaway trolley is headed toward five people on the track who will be killed if no action is taken. A person with a backpack large enough to stop the trolley is standing on a footbridge over the tracks. (See The Good Place for a more vivid explanation.) The moral question is whether one should push the man to stop the trolley—that is, is pushing wrong?
Relatedly, consider Jon Haidt’s landmark paper, “The Emotional Dog and Its Rational Tail: A Social Intuitionist Approach to Moral Judgment.” Haidt begins with a now-famous vignette about Julie and Mark, a brother and sister who engage in consensual (pleasurable, one-time) incestuous sex. The question he poses at the end is, “Was it OK for them to make love?” The paper—an account of moral judgment—begins with the question of whether an act was OK, as opposed to wrong. Indeed, the term coined to capture the phenomenon unearthed by the vignette—moral dumbfounding—refers to a case in which someone labels an action wrong but can’t justify the judgment.
Finally, consider the “morality police,” above, people who enforce rules such as the requirement to keep one’s hair covered, if one is a woman, in public places in Iran. Clearly, what we think they think they are doing is policing morality—policing wrong behavior.
Note in these cases what the word morality emphatically does not mean. It does not mean virtuous. A moral dilemma is not a choice between two virtuous actions. A moral dilemma is also not a choice between two altruistic or cooperative actions. Whether to donate to one charity or another might be a dilemma, but it’s not a moral dilemma. The morality police are judgers and enforcers, not benefactors going around rewarding the virtuous.
Now, one could, of course, want to explain why some actions are seen as virtuous or how altruistic systems could evolve. Indeed, there are robust psychological literatures on how people improve their reputations and on the evolution of altruism and cooperation.
However, if one is trying to explain morality, a key phenomenon to be explained—as illustrated by the definition and examples above—is the undeniable empirical fact that humans judge some actions (and occasionally inactions) to be wrong and have a concomitant desire for punishment. A moral judgment is a judgment of wrongness.
Now, let’s get to is.
This little word is important because some scholars have made claims about what morality is. Oliver Curry, for example, has made claims such as, “morality is a collection of biological and cultural solutions to the problems of cooperation…” (Curry et al. 2019, my italics).
Now, in some contexts, the word “is” constitutes a particular claim. Take the proposal that the heart is a pump. Because a pump is a thing with a function—a device that propels fluid through tubes—to assert that the heart is a pump is a claim about what it is designed to do: pump. This claim about function is the heart of an evolutionary analysis. To explain a mechanism, whether physical or psychological, from an evolutionary standpoint is to make a claim about its evolved function—how it contributed to survival and reproduction. And so, for organs such as the heart, the word “is” makes a lot of sense. When Harvey said the heart is a pump, he was explaining its features.
A typical evolutionary psychological analysis begins by specifying a phenomenon—mate choices, natural language, friendship—and a proposed functional explanation for it. Leda Cosmides, to take a classic example, began with observations of patterns in reasoning and proposed a cheater-detection module as the explanation. The capacity to quickly identify cases in which a benefit was taken but the cost was not paid is for detecting cheaters.
In contrast, saying that morality is a collection of solutions—kin selected systems, reciprocal altruism systems, and other adaptations for pro-social behavior—is more akin to labelling. It’s true that the theory of reciprocal altruism explains why humans are good at detecting cheaters. One could, if one wanted, call the psychology designed for social exchange (e.g., cheater-detection) a component of “morality.” But doing so does not add any explanatory value or, crucially, explain moral judgments of wrongness.
Instead of is, the word you need to explain morality is for.
In sum, if your agenda is to explain and understand morality—including, centrally, condemnation and judgments of wrongness—it’s not sufficient (or even, in my view, necessary) to list the sorts of things that people view as virtuous and call that, together, morality. Instead, the question should be: what is the function of the psychological systems that cause humans to judge some actions to be wrong? What advantage did our ancestors get by making these judgments? What is moral condemnation for?
Moral Foundations
In 2004, Moral Foundations Theory was born with Jon Haidt’s paper (with Craig Joseph) entitled “Intuitive Ethics: How Innately Prepared Intuitions Generate Culturally Variable Virtues.” As their title illustrates, in this paper, instead of interrogating what is ok and not ok, the authors focused on how different cultures understand and come to value different virtues. Summarizing this theory, the (authoritative) web site Moralfoundations.org says the theory is “that there are several innate psychological systems at the core of our ‘intuitive ethics.’ Cultures then build virtues, narratives, and institutions upon these foundational systems, resulting in the diverse moral beliefs we observe globally and even conflicts within nations.” Again, the verb to be is getting a workout. Innate systems are at the core. This account is consistent with what I think of as Haidt’s definitive work on this topic, his excellent book The Righteous Mind.
Between the emotional dog paper and the book, in addition to shifting animals—from a dog in the paper to an elephant (and its rider) in the book—Haidt shifted the question he was asking. There are two distinct questions: 1) What explains why people judge an action wrong? and 2) What explains why people judge an action virtuous?
Note that these two questions are not simply the inverse of each other. Yes, the opposite of “wrong” in the context of the answer to a question on a mathematics exam is “right.” However, the opposite of wrong in the context of a discussion about morality is not virtuous; it’s “not wrong,” or, if you’re in court, “innocent.” If you didn’t commit murder, we don’t find you virtuous, we find you not guilty.
Because Moral Foundations Theory has been described elsewhere, I won’t describe it in detail here. Briefly, the core idea—which I actually think is well supported by the evidence—is that there are a number of basic virtues—care, fairness, liberty, loyalty, authority, sanctity—and some cultures value certain virtues over others. To me, this idea seems true and important.
The components that make up Moral Foundations Theory (and Curry’s Morality as Cooperation view) are, again in my view, perfectly good explanations for interesting things humans do. The theory of kin selection explains why people endure costs to help their relatives. No argument. Similarly, the risk of contamination explains why humans have the emotion of disgust—the aversion to things that have cues of disease. The genetic consequences of mating with close relatives explains sexual disgust toward siblings. These are all good explanations for phenomena such as parental investment, avoiding feces, and outbreeding. They are explanations not for moral condemnation of others, but for why people make these sorts of choices.
That is, an explanation for the disgust that humans experience when they contemplate having sex with a close relative—as Deb Lieberman has so nicely explored—does not explain why I care about whether or not Julie and Mark do. The genetic cost of inbreeding explains the adaptation to avoid inbreeding. The cost to you of inbreeding does not explain why you judge others for their behavior.
As Haidt says in The Righteous Mind, the ideas on which Moral Foundations Theory rests explain why we “want to care for those who are suffering,” why we are “sensitive to indications that another person is likely to be a good (or bad) partner for collaboration,” why we “trust and reward” “team players,” and what “makes us wary of a diverse array of symbolic objects and threats.” The foundations of Moral Foundations Theory are explanations for many of the pro-social behaviors humans engage in. These foundations might help to explain virtues, including why and how they clump. They do not, however, explain moral judgments of wrongness and condemnation, the heart of morality.
As an analogy to the approach that underlies Moral Foundations Theory, consider language. You’re an anthropologist and you live among a people—let’s call them the Uppas, as an homage to Don Brown—and you notice that they, like you, use language, but they talk about different sorts of things. The Uppas talk about prevailing winds a great deal, but rarely discuss drama. Their conversations often focus on what a particular god or goddess wants, but they almost never talk about money.
After working with the Uppas and perhaps a few other cultures, you return and say that you have developed a foundational theory of language. Language, you say, is not just one thing. It’s a lot of different things. It is about people, the natural world, ideas, exchange, and perhaps a few other categories. The Uppas talk about the natural world more than your own culture, which spends a great deal of time talking about people and finance.
Now, those observations would all be true, as far as they went. But it should be clear that you have not provided a theory about—or an explanation of—language.
Cultures talk about different topics and, similarly, cultures moralize different actions. However, measuring how much cultures discuss different topics and how those topics clump together isn’t a theory of language, though it might be a theory of something. Instead, a theory of language—the correct theory of language—is that its function is to communicate, to move ideas from one head to another. Its features can be explained with reference to that function, no matter whether you’re talking about penguins or the unbearable lightness of being.
In the same way, judgments of wrongness have a function (or, perhaps, multiple functions). The balance of this piece will lay out what one explanation of moral judgments might look like. Whatever its function is, many different kinds of actions, across cultures, are moralized, considered wrong.
The key element is that it begins with a putative function.
What Needs to be Explained?
There is no shortage of data about what people judge to be wrong; the literature on moral judgment is vast. This is not the place to review that literature comprehensively, but a few elements of this research stand out.
For me, one empirical finding stands out above all the others: Moral judgments are nonconsequentialist. That is, whether an action is judged to be wrong does not depend only on the intended consequences of the action. This might seem like a philosopher’s amusement, but it’s not.
The easiest way to see nonconsequentialism is with the famous Trolley Problem, described above. I hasten to add that this is not the only way to see it. People are nonconsequentialist in real life all the time. We have laws and norms against any number of consensual activities that people would like to engage in—the incest vignette is an example—indicating that we routinely label as “wrong” actions with positive consequences for all involved. Last night you told our host that his split pea soup was superlative and, sure, that made him happy and no doubt made him like our company more, but—damn the consequences—it was wrong of you to dissemble. Humans are nonconsequentialist all the time.
The Trolley Problem just makes it easier to notice because we can stipulate the precise intentions and outcomes in the hypothetical. Generally, people say that it’s permissible to save five lives by killing one person in some ways—switching a train onto a sidetrack, crushing the poor soul sitting obliviously upon it—but not others—pushing someone off of a footbridge in front of the runaway trolley. In these vignettes, the consequences—how many live and how many die—are held precisely constant. If moral judgment took only consequences as its input, humans would look like good utilitarians and judgments would be the same across vignettes.
Nonconsequentialism is pervasive in moral judgments and any theory that purports to explain moral judgment must offer an account of nonconsequentialism.
Suppose, for example, your theory of moral judgment was a cooperation view, that it was for creating the greatest possible social welfare outcomes: judgments are for altruism, or helping others.
Straightforwardly, if a system is designed to choose the best outcome for those involved, it will be consequentialist. How many live/die if I push the guy off the bridge? Just one? Ok, let’s do that, independent of how that is accomplished. This is exactly and precisely what is not observed in patterns of human moral judgment.
Nonconsequentialism is a steep, steep barrier to cooperation theories. This single pattern of empirical results undermines the claim that morality is the same as, or is for, cooperation, and should cause us to look to ideas other than cooperation for an explanation of moral condemnation.
The second most important feature of moral judgment is that when people judge an action to be wrong, they simultaneously desire that the entity committing the act be punished. While there are some exceptions, judgments of wrongness overwhelmingly bring a desire for punishment with them. Now, unlike the case of revenge, the person doing the judging doesn’t necessarily want to impose the punishment themselves. But they do want the person to be punished, whether formally, by some sort of state apparatus, or informally, by the social milieu.
Moral judgment has many other features. For example, moral judgments are often—though not always—made impartially. That is, whether or not an action is wrong is determined by the action itself, not by the identity of the person who performed the action. Now, that’s not always true, but this idea is reflected in moral systems, including American criminal law.
The third most important feature of moral judgment is that there is nearly always a victim. In research in my lab on this question (with Peter DeScioli and Skye Gilbert), we found that people nearly always claim that there is a victim, even if you create a vignette in which one is hard to find, such as blasphemy or masturbation. The moral judgment system seems to want a victim.
And, of course, a key feature of moral judgment is that an incredibly vast array of actions have been moralized over time. This includes harmful actions, such as assault and murder, harmless actions, such as dancing or not covering one’s hair, and, importantly, mutually beneficial and consensual actions, as in the case of Haidt’s siblings or, at a grander scale, lending money at interest.
Once an action is judged to be wrong, a cascade of attendant psychology is activated.
But why?
Side-Taking
One explanation—which, of course, might not be the right one—is the one that Peter DeScioli and I proposed, side-taking. You can see this paper for additional details, but the basic idea is as follows.
In the same way that there are fitness benefits to keeping your hand out of a burning fire, there are also fitness benefits to avoiding being on the losing side when conflicts break out in one’s social group.
This adaptive problem is both unique—more or less—to humans and very important. The reason is that humans, unlike other species, can coordinate in large groups of unrelated individuals, forming both ad hoc and ongoing coalitions which can do damage to one another. Historically, big coalitions have done some of the most damage ever, so it’s better, fitness-wise, to be on the side with everyone else.
Conflicts emerge in groups frequently because we’re selfish beasts and fitness is zero sum. The problem is that if people sided with their allies when conflicts emerged, because of the way social networks are structured, there would be many conflicts that would lead to roughly even numbers on both sides. (For recent work showing this, see Kimbrough and DeScioli, 2023.) So siding using loyalty—choosing the person in the conflict who is a closer friend or ally—comes at the price of frequent evenly-matched conflicts, which often lead to costs on both sides. (Another problem with always siding with your friend is that friends, if they can be sure of perfect loyalty on your part, will be more cavalier about getting into conflicts because they can rely on your support, drawing you into more conflicts.)
Instead, what humans seem to do is identify a set of actions beforehand. Whoever takes one of the moralized actions—theft, murder, showing one’s hair—is labelled as in the wrong, and the group sides against that person.
In this way, when conflicts emerge, third parties to the conflict—those not involved—get the benefit of being on the same side as all the other third parties. This is a benefit, which potentially explains how side-taking psychology evolved. The moral sense, on this view, measures whether someone’s behavior matches a proscribed action and motivates a desire for costs to be imposed. It is for choosing sides.
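To make the contrast concrete, here is a minimal toy simulation. It is my own illustrative sketch, not taken from the side-taking paper or from Kimbrough and DeScioli; the group size, the random “closeness” values standing in for alliance strength, and the number of trials are all arbitrary assumptions. It simply compares siding by loyalty with siding against whoever took a pre-labelled action, and tracks how lopsided the resulting sides are.

```python
import random

random.seed(1)

N = 50         # group size (arbitrary assumption)
TRIALS = 2000  # number of simulated conflicts (arbitrary assumption)

# Random pairwise "closeness" stands in for friendship/alliance strength.
closeness = {}
for i in range(N):
    for j in range(i + 1, N):
        closeness[(i, j)] = closeness[(j, i)] = random.random()

def imbalance_by_loyalty():
    """Each third party sides with whichever disputant they are closer to."""
    a, b = random.sample(range(N), 2)
    with_a = sum(1 for k in range(N)
                 if k not in (a, b) and closeness[(k, a)] > closeness[(k, b)])
    with_b = (N - 2) - with_a
    return abs(with_a - with_b)  # how lopsided the two sides are

def imbalance_by_action():
    """Every third party sides against whoever took the pre-labelled wrong action."""
    # The disputants' identities are irrelevant: all N - 2 third parties
    # coordinate against the rule-breaker, so the split is maximally lopsided.
    return N - 2

loyalty = sum(imbalance_by_loyalty() for _ in range(TRIALS)) / TRIALS
action = sum(imbalance_by_action() for _ in range(TRIALS)) / TRIALS

print(f"Mean imbalance, loyalty rule: {loyalty:.1f} of {N - 2} third parties")
print(f"Mean imbalance, action rule:  {action:.1f} of {N - 2} third parties")
```

Under these assumed parameters, the loyalty rule splits third parties into two sides of roughly similar size—exactly the costly, evenly matched standoffs described above—while coordinating on the action puts every third party on the same side.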
It’s important to note a few aspects of this view.
Most critically, it doesn’t matter what the actions are that get moralized. Third parties are solving a coordination problem, not a public goods problem. As long as we all agree—all coordinate on the actions—then we all get the benefits of being on the same side. That is, we don’t need to pay attention to the consequences, which I hope sounds familiar. Moral judgment isn’t consequentialist because moral judgment isn’t designed to bring about (good) consequences. It’s designed to get everyone on the same side.
In addition, the system must be open. It has to allow new rules. As cultures change and new kinds of conflict emerge, there must be a way to mint new rules so that people can choose sides when newfangled things such as property rights, and even intellectual property rights, come into being. This openness explains why there is so much diversity in moral rules. Taken together with the fact that rules need not yield good consequences, this view also explains why so many welfare-destroying rules get minted and persist. (Having said that, I believe that cultural selection processes weed out many welfare-destroying rules. I like the prohibition on lending at interest as an example: it impeded economic growth, and it has generally receded. Still, many consensual crimes remain on the books.)
This view also explains why punishments can be wildly out of proportion to the offense, as in the opening example. Under some common views about punishment, the expectation would be that the punishment should be just great enough—given the chance of detection—to deter the offense. So if there is a 50% chance of being caught stealing $10, a punishment of $20.01 if caught would be just enough to deter. Deterrence views, that is, specify how severe punishments should be.
However, the side-taking view doesn’t hold that punishment is there to deter. Instead, the desire for punishment is a signal to others about which side one is on. I can just as easily signal that I am against the wrong-doer by supporting a stern rebuke as by demanding torture. From the perspective of playing a coordination game, the only element that matters is the signal of a desire for some punishment, whatever it might be. This has the unfortunate consequence that human psychology is open to nearly any punishment for offenses (though legal systems have emerged to put constraints on formal punishments). Because punishment is being used to signal, any amount of punishment might do, which has had some perverse consequences.
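(The arithmetic behind that example is just a standard expected-value calculation using the numbers above; nothing here is specific to any particular theory of punishment. To deter, the expected punishment must exceed the expected gain:)

$$
p \cdot F > G \quad\Longrightarrow\quad F > \frac{G}{p} = \frac{\$10}{0.5} = \$20,
$$

so a fine of $20.01 levied with probability 0.5 carries an expected cost of about $10.005, just above the $10 gain from the theft.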
Loyalty
The side-taking view also explains why people purport to be impartial in their moral judgments. I’m not saying they are. I’m saying that they purport to be.
When a third party claims to be using the side-taking strategy—choosing based on morality, the action that was taken—they are signaling to others to do the same. If other third parties do, that’s good for the side-taker. If, however, other third parties believe that one is siding because of the identity of one of the people involved in the conflict—a relative or close friend—then this is less persuasive for recruiting other third parties to one’s side. Therefore, the pretense will nearly always be that one is choosing based on actions rather than identities. (In the real world, there are many cases in which the claim is, at best, pretense, as I’ve discussed elsewhere.)
For people immersed in Moral Foundations Theory, this line of argument might seem somewhat surprising. One of the Moral Foundations is loyalty. The side-taking view suggests that choosing based on loyalty is precisely the opposite of moral judgment.
That is correct. Loyalty is a virtue in the sense that from any given person’s point of view, when conflicts emerge, they benefit if their friends are loyal, taking their side. It is certainly true that people value those who are loyal. I, again with Peter DeScioli, have argued that friends, in humans, function as allies. From any given person’s perspective, having loyal friends is great.
When the Unabomber’s brother turned him in, that was disloyal. It was, however, the result of the judgment that Ted Kaczynski had done something wrong, so his brother sided with everyone else, but against his kin.
Conclusion
Here is a list of observed phenomena for which one might seek an explanation, along with the associated scientific questions:
1. Some people endure costs to aid others. Why are humans sometimes altruistic?
2. Some people combine their efforts to make themselves better off. Why do humans sometimes cooperate with one another?
3. Some people value some virtues more than others, within and between cultures. What is the source of this variability? That is, what explains variability—and commonalities—in what people view as virtuous? A related question is what explains variability in what people view as wrong.
4. Universally, some people choose not to take actions that are labelled wrong or taboo in their culture even if the actions could benefit themselves. Why do humans have a conscience?
5. Some people want a woman to be tortured because people could see her hair. Why do people judge others’ actions as wrong and desire punishment? Why are these (universal) judgments nonconsequentialist?
It is, of course, fine for any scholar to interrogate any of those questions. They are all, to my eye, interesting. Explanations for the evolution of altruism changed biology forever. Anthropologists’ documentation of cultural variability has been incredibly helpful for understanding the breadth of human nature. And it’s certainly worth interrogating the human conscience.
Moral Foundations Theory answers the third question. It’s easy to see this from the nature of the empirical enterprise, which mostly consists of large-scale questionnaires asking people how they feel about a large number of different actions. The empirical heart of Moral Foundations Theory, the Moral Foundations Questionnaire—e.g., do you agree that chastity is a virtue?—allows researchers to plot how relevant different issues are to different groups: e.g., liberals value fairness more than conservatives do; this is reversed for purity.
Relatedly, in terms of the Morality-As-Cooperation view, consider, for example, Curry, Mullins, and Whitehouse (2019), who “test the prediction that the seven cooperative behaviors would be regarded as morally good” with a large anthropological dataset. This is a measurement of variation, specifically in what behaviors are regarded as good.
Now, of course, no one gets to say what a word means. Morality has multiple meanings. But in that case, a theory that calls itself a theory of morality is making an inherently ambiguous claim. Moral Foundations Theory, if one were to excise the word “moral,” might be better called the Variation in the Virtues That Cultures Value Theory. The Morality-As-Cooperation view, without the ambiguous word “morality,” might be given a similar moniker.
My interest is squarely on the universal human capacity for judgment, question five, and my view is that the judgment of wrongness and the desire for punishment are the key moral phenomena to be explained. To that end, my empirical work has focused on finding uniformity rather than variation. For instance, one question one might ask is whether the same features apply whenever a moral judgment about any particular action is made. Is there always someone or something represented as the “victim”? Are actions always judged worse than inactions, holding consequences constant? This empirical enterprise is focused on whether there is a unified moral grammar, akin to language.
The side-taking view might not be the correct explanation for judgment of wrongness.
There might be a better explanation for why people want someone who shows her hair to be badly hurt.
But it is an explanation for those judgments.
Coda: How did we get here?
It seems odd to me that very prominent theories of morality are not, really, theories of morality as I understand the phenomenon, but rather inventories of what cultures find to be virtuous. To me, this is a genuine puzzle.
In 2008 I took a sabbatical and spent a significant portion of my time reading about the history of science and the philosophy of science. The reason was that I had predicted that the great ideas embodied in evolutionary psychology would spread throughout the social sciences within a decade or so of when I finished my PhD, in 1998.
This turned out to be, to borrow a phrase Leda Cosmides once used, “luminously wrong.”
So I decided to read to try to understand how science worked because it wasn’t working the way I thought it would.
I took from that reading two big lessons.
The first is that explanations are hard. The history of science illustrates that scholars have frequently struggled with what counts as an explanation, never mind distinguishing good ones from bad ones. I’m not exempt. I learn about the tides from time to time, understand the explanation, and then an hour later all I can tell you is that it’s something to do with the moon. Social scientists, in my experience, just don’t get taught about explanations, by and large. Psychologists in particular often invoke a magic word (salient, culture, learned) and think that counts as an explanation. (Spoiler: it does not.)
The second lesson I took is that it takes forever. Science is slow. There’s some truth to Max Planck’s remark to the effect that science advances one funeral at a time. The people I encountered in the academy very rarely changed their view even in the face of clear counterexamples. Psychologists, in particular, don’t like to adopt other people’s theories.
And now we get to the Big Guy. Consider two passages from The Descent of Man, by one Charles Darwin.
"To do good unto others - to do unto others as ye would they should do unto you - is the foundation-stone of morality.”
Darwin thus identified what would come to be called reciprocal altruism as the “foundation-stone” of morality. Both Haidt and Curry continue in this vein. In addition, Darwin wrote:
"As man advances in civilisation, and small tribes are united into larger communities, the simplest reason would tell each individual that he ought to extend his social instincts and sympathies to all the members of the same nation, though personally unknown to him... Sympathy beyond the confines of man, that is humanity to the lower animals, seems to be one of the latest moral acquisitions.”
In this passage, Darwin links sympathy and morality.
Darwin was a genius. My academic work rests completely on his insights. But he looked at morality in terms of reciprocity and sympathy. He did, of course, notice that prosocial behavior required an explanation informed by his theory. But he didn’t notice that the real puzzle—or, at least, another real puzzle—was humans’ strange need to judge and condemn everything from murder to fashion choices. So when he talked about morality, he talked about goodness and sympathy.
And where he trod, many followed.
Rob Kurzban has a PhD in Psychology from the University of California, Santa Barbara, and a Master of Public Administration from the Fels Institute of Government. He also writes for Living Fossils. You can find more of his essays there.
REFERENCES
Curry, O. S., Chesters, M. J., & Van Lissa, C. J. (2019). Mapping morality with a compass: Testing the theory of ‘morality-as-cooperation’ with a new questionnaire. Journal of Research in Personality, 78, 106-124.
Curry, O. S., Mullins, D. A., & Whitehouse, H. (2019). Is it good to cooperate? Testing the theory of morality-as-cooperation in 60 societies. Current Anthropology, 60(1), 47-69.
Gigerenzer, G. (1991). From tools to theories: A heuristic of discovery in cognitive psychology. Psychological Review, 98(2), 254.
Gigerenzer, G. (1998). Surrogates for theories. Theory & Psychology, 8, 195-204.
Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108(4), 814.
Haidt, J. (2012). The righteous mind: Why good people are divided by politics and religion. Vintage.
Meehl, P. E. (1967). Theory-testing in psychology and physics: A methodological paradox. Philosophy of Science, 34(2), 103-115.
Meehl, P. E. (1986). What social scientists don't understand. Metatheory in social science: Pluralisms and subjectivities, 315.
1. The term in Farsi is “Gasht-e Ershad” (گشت ارشاد), which translates literally as “Guidance Patrol.” The Western press uses the term “morality police” to capture their role.