An interesting time for the study of moral judgment and cognition

29 April 2015 | Veljko Dubljevic (McGill University)

What is moral? Is it always good to save lives? Is killing always wrong? Is being caring always a virtue? Are there various factors that collectively affect moral judgments? Are these factors self-standing, or do they interact?

Our moral judgments and moral intuitions suggest answers to some of these questions. This is so for experts, such as moral philosophers and psychologists, who study morality in their different ways, and laypersons alike.

The study of morality among moral philosophers has long been marked by disagreement between utilitarians, deontologists and virtue theorists on normative issues (such as whether we should give priority to consequences, duties or virtues in moral judgment), as well as between cognitivists and non-cognitivists, and realists and anti-realists (to name just a few opposing views), on meta-ethical issues.

Moral psychology—the empirical and scientific study of human morality—has, by contrast, long shown considerable convergence in its approach to moral judgment. Despite some variation in the details, it is striking that Kohlberg’s (1968) developmental model has simply been adopted, even where it is criticised (see, e.g., Gilligan 1982).

According to the developmental model, moral judgment is simply the application of moral reasoning – the deliberate, effortful use of moral knowledge (a system 2 process, in today’s parlance). This is not to disregard the variety of viewpoints in moral philosophy – moral psychology has taken these to reflect distinct stages in the development of a ‘mature’ morality.

This all changed with a paradigm shift in moral psychology towards a more diverse ‘intuitive paradigm’, according to which moral judgment is most often automatic and effortless (a system 1 process). 

Studies revealing automaticity in everyday behaviour (Bargh and Chartrand 1999), cognitive illusions, and subliminal influences such as ‘priming’ (Tulving and Schacter 1990), ‘framing’ (Tversky and Kahneman 1981) and ‘anchoring’ effects (Ariely 2008) provide ample empirical evidence that moral cognition, decision-making and judgment are often the product of associative, holistic, automatic and quick processes which are cognitively undemanding (see Haidt 2001).

This, along with the ‘moral dumbfounding’ effect – the fact that most people make quick moral judgments but are hard pressed to offer a reasoned explanation for them – led to a shift away from the developmental model, which struggled to accommodate these findings.

As a result, moral psychologists now agree that moral judgment is not driven solely by system 2 reasoning. However, they disagree on almost everything else, and a range of competing theories and models offer explanations of how moral judgment takes place.

Some claim that moral judgments are nothing more than basic emotional responses, perhaps followed by rationalizations (Haidt 2001); others claim that competing emotional and rational processes pull moral judgment in one direction or the other (Greene 2008); still others think that moral judgment is intuitive, but not necessarily emotional (see, e.g., Mikhail 2007; Gigerenzer 2010; Dubljević & Racine 2014).

Here, I will summarise some relevant information and conclude by considering which models are still viable and which are not, based on currently available evidence.

Let’s start with the basic emotivist model. As mentioned earlier, it was espoused by Jonathan Haidt (2001) in pioneering work that offered a constructive synthesis of social and cognitive psychological work on automaticity, intuition and emotion, and it has also been championed by influential moral philosophers such as Walter Sinnott-Armstrong (Sinnott-Armstrong et al. 2010).

However, it has been called into question by work that successfully dissociated emotion from moral judgment. Consider, for example, the ‘torture case’ study (Batson et al. 2009; Batson 2011), in which American respondents were asked to rate the moral wrongness of specific cases of torture as well as their own emotional arousal. The experimental group was presented with a vignette in which an American soldier was tortured by militants, while the control group read a text in which a Sri Lankan soldier was tortured by Tamil rebels.

Even though there was no significant difference in the intensity of moral judgment, the respondents were ‘riled up’ emotionally only in the case where a member of their in-group was tortured. This does not put moral emotions per se in question, but it neatly undermines a crude ‘moral judgment is just emotion’ model.

Now, let’s take a look at the ‘dual-process’ model of moral judgment. Pioneering research in the neuroscience of ethics (e.g., Greene et al. 2001) classified dilemmas as either impersonal, such as the original trolley dilemma (whether to throw a switch, saving five people but killing one), or personal, such as the footbridge dilemma (whether to push one man to his death in order to save five people).

Proponents of the view take their data to show that the patterns of responses in trolley dilemmas favour a “utilitarian” view of morality based on abstract thinking and calculation, while responses in the footbridge dilemma suggest that emotional reactions drive answers. The purported upshot is that rational processes (driving utilitarian calculation) and emotional processes (driving aversion to personally causing injury) compete for dominance.

Even though some initial studies seemed to corroborate this hypothesis, it remains controversial, with certain empirical findings appearing to be at odds with the dual-process approach. In particular, if utilitarian, outcome-based judgment is caused by abstract thinking (system 2), whereas non-consequentialist, intent- or duty-based judgment is intuitive (system 1) and thus irrational, how come children aged 4 to 10 focus more on outcome than on intent (see Cushman 2013)?

Given that abstract thought develops only after around age 12, ‘fully rational’ utilitarian judgments should not be observable in children. And yet they are not only observed, but seem to dominate immature and dysfunctional moral cognition.

It is safe to say, then, that recent research has called the dual-process model into question. Recent studies have shown that favouring the “utilitarian” option is actually linked to anti-social personality traits such as Machiavellianism (Bartels & Pizarro 2011) and psychopathy (Koenigs et al. 2012), as well as to temporary conditions (increased anger, decreased responsibility, experimentally lowered levels of serotonin; Crockett & Rini 2015) and permanent conditions (vmPFC damage, Koenigs et al. 2007; frontotemporal dementia, Mendez 2009) that are unlikely to facilitate “rational” decision making.

Perhaps the most damning piece of evidence is a recent study (Duke & Begue 2015) establishing a correlation between study participants’ blood alcohol concentrations and utilitarian preferences. All in all, the empirical evidence seems to suggest a stronger role for impaired social cognition than for intact deliberative reasoning in predicting utilitarian responses in trolley dilemmas, which in turn suggests that the dual-process model is on thin ice.

So which model is true? The data seem to suggest that an intuitionist model of moral judgment is most likely; however, there are at least three contenders: moral foundations theory (Haidt & Graham 2007), universal moral grammar (Mikhail 2007, 2011) and the ADC approach (Dubljević & Racine 2014).

For reasons of space, I will not go into the specifics of all the models beyond mentioning them and their feasibility; instead, since I am an interested party in this debate, I will briefly canvass the ADC approach.

The Agent-Deed-Consequence (ADC) framework offers an insight into the types of simple and fast intuitive processes involved in moral appraisals. Namely, the heuristic principle of attribute substitution – quickly and efficiently substituting more accessible information for a complex and intractable problem – is applied to the specific information relevant for moral appraisal.

I argued (along with my co-author, Eric Racine) that there are three kinds of moral intuitions, stemming from three kinds of heuristic processes, that simultaneously modulate moral judgments. We posited that they also form the basis of three distinct kinds of moral theory by replacing the global attribute of moral praiseworthiness/blameworthiness with the simpler attributes of virtue/vice of the agent or character (as in virtue theory), right/wrong deed or action (as in deontology) and good/bad consequences or outcomes (as in consequentialism).
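To make this concrete, here is a minimal sketch of how such an attribute-substitution process might be modelled in code. This is my own toy illustration, not the formal model from Dubljević & Racine (2014): the function name, the three-valued coding (+1, -1, unclear) and the additive aggregation rule are all simplifying assumptions.

```python
from typing import Optional

def adc_judgment(agent: Optional[int],
                 deed: Optional[int],
                 consequence: Optional[int]) -> Optional[int]:
    """Toy ADC-style aggregation of three intuitive evaluations.

    Each component is coded +1 (positive), -1 (negative) or None
    (unclear), standing in for the simpler attributes that substitute
    for the global question of praiseworthiness/blameworthiness.
    Returns +1 ([MJ+]), -1 ([MJ-]), 0 (ambivalent) or None ([MJ?]).
    """
    components = [agent, deed, consequence]
    # An unclear component leaves the overall judgment unsettled,
    # mirroring the [C?] -> [MJ?] pattern in the case analysis below.
    if any(c is None for c in components):
        return None
    total = sum(components)
    return 1 if total > 0 else (-1 if total < 0 else 0)
```

The point of the sketch is only that the three evaluations are computed in parallel and jointly determine the verdict; the framework itself does not commit to anything as crude as simple summation.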

The Agent-Deed-Consequence framework provides a vocabulary with which to start breaking moral judgment down into cognitive components, which could increase the explanatory and predictive power of future work on moral judgment in general and moral heuristics in particular.

Furthermore, this research clarifies a wide set of findings from empirical and theoretical moral psychology (e.g., the “intuitiveness” and “counter-intuitiveness” of certain judgments, moral “dumbfounding”, the “ethical blind spots” of traditional moral principles, etc.). The framework not only describes how moral judgment takes place (three aspects are computed at the same time) but also offers normative guidance on dissociating and clarifying the relevant normative components.

An example might help to put things into perspective. Consider this real-life case:

In October 2002, policemen in Frankfurt, Germany were faced with a chilling dilemma. They had in custody a man suspected of kidnapping a banker’s 11-year-old son and demanding a ransom. Although the man was arrested while trying to collect the ransom money, he maintained his innocence and denied any knowledge of the child’s whereabouts.

Meanwhile, time was running out: if the kidnapper was in custody, who would feed and hydrate the child? The police officer in charge finally decided to use coercion to make the suspect talk, threatening to inflict serious pain upon the suspected kidnapper if he did not reveal where he had hidden the child. The threat worked – however, the child was already dead. (Dubljević & Racine 2014, p. 12)

The ADC approach allows us to analyse the normative cues of the case. Here it is safe to assume that the evaluation of the agent is positive (a virtuous person), the evaluation of the deed or action is negative (torture is wrong), whereas the consequences are unclear ([A+] [D-] [C?] = [MJ?]).

Modulating any of the elements of the case can result in a different intuitive judgment, and the public controversy in Germany following this case created two camps: one stressing the uncertainty of guilt and a precedent of committing torture in police work, and the other stressing the potential to save a child by any means necessary. 

If the case is changed so that the consequence component is clearly bad (e.g., the suspect is innocent AND the child died), the intuitive responses would be specific, precise and negative ([A+] [D-] [C-] = [MJ-]). And vice versa: if we modulate the case so that the consequences are clearly good (e.g., the suspect is guilty AND a life has been saved), the intuitive responses would be specific, precise and clearly positive ([A+] [D-] [C+] = [MJ+]).
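Running the Frankfurt case and its two modulated variants through the toy sketch above reproduces the bracket notation used here (again, the coding and aggregation rule are my illustrative assumptions, not the authors’ calculus):

```python
# [A+] [D-] [C?] -> indeterminate judgment ([MJ?])
print(adc_judgment(agent=+1, deed=-1, consequence=None))  # None
# [A+] [D-] [C-] -> clearly negative judgment ([MJ-])
print(adc_judgment(agent=+1, deed=-1, consequence=-1))    # -1
# [A+] [D-] [C+] -> clearly positive judgment ([MJ+])
print(adc_judgment(agent=+1, deed=-1, consequence=+1))    # +1
```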

This is just one example of the frugality of the ADC framework. However, it would be premature to conclude that this model is obviously true or better than the remaining competitors, the moral foundations theory and universal moral grammar. Ultimately, it is most likely that evidence will force all models to accommodate new data and insights, but one thing is clear: this is an interesting time for the study of moral judgment and cognition.

References

Ariely, D. (2008). Predictably irrational: The hidden forces that shape our decisions. New York, NY: Harper.

Bargh, J. A., & Chartrand, T. L. (1999). The unbearable automaticity of being. American Psychologist 54: 462–479.

Bartels, D. M., & Pizarro, D. (2011). The mismeasure of morals: Antisocial personality traits predict utilitarian responses to moral dilemmas. Cognition 121: 154–161.

Batson, C. D. (2011). What’s wrong with morality? Emotion Review 3(3): 230–236.

Batson, C. D., Chao, M. C., & Givens, J. M. (2009). Pursuing moral outrage: Anger at torture. Journal of Experimental Social Psychology 45: 155–160.

Crockett, M. J., Clark, L., Hauser, M. D., & Robbins, T. W. (2010). Serotonin selectively influences moral judgment and behavior through effects on harm aversion. PNAS 107(40): 17433–17438.

Crockett, M. J., & Rini, R. A. (2015). Neuromodulators and the instability of moral cognition. In Decety, J., & Wheatley, T. (Eds.), The Moral Brain: A Multidisciplinary Perspective. Cambridge, MA: MIT Press, pp. 221–235.

Dubljević, V., & Racine, E. (2014). The ADC of moral judgment: Opening the black box of moral intuitions with heuristics about agents, deeds and consequences. AJOB Neuroscience 5(4): 3–20.

Duke, A. A., & Begue, L. (2015). The drunk utilitarian: Blood alcohol concentration predicts utilitarian responses in moral dilemmas. Cognition 134: 121–127.

Gigerenzer, G. (2010). Moral satisficing: Rethinking moral behavior as bounded rationality. Topics in Cognitive Science 2(3): 528–554.

Greene, J. D. (2008). The secret joke of Kant’s soul. In Sinnott-Armstrong, W. (Ed.), Moral Psychology, Vol. 3: The Neuroscience of Morality. Cambridge, MA: MIT Press, pp. 35–79.

Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. D. (2001). An fMRI investigation of emotional engagement in moral judgment. Science 293: 2105–2108.

Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review 108(4): 814–834.

Haidt, J., & Graham, J. (2007). When morality opposes justice: Conservatives have moral intuitions that liberals may not recognize. Social Justice Research 20(1): 98–116.

Hauser, M., Young, L., & Cushman, F. (2008). Reviving Rawls’s linguistic analogy. In Sinnott-Armstrong, W. (Ed.), Moral Psychology, Vol. 2. Cambridge, MA: MIT Press, pp. 107–144.

Knoch, D., Pascual-Leone, A., Meyer, K., Treyer, V., & Fehr, E. (2006). Diminishing reciprocal fairness by disrupting the right prefrontal cortex. Science 314: 829–832.

Knoch, D., Nitsche, M. A., Fischbacher, U., Eisenegger, C., Pascual-Leone, A., & Fehr, E. (2008). Studying the neurobiology of social interaction with transcranial direct current stimulation—The example of punishing unfairness. Cerebral Cortex 18: 1987–1990.

Koenigs, M., Young, L., Adolphs, R., Tranel, D., Cushman, F., Hauser, M., & Damasio, A. (2007). Damage to the prefrontal cortex increases utilitarian moral judgements. Nature 446: 908–911.

Koenigs, M., Kruepke, M., Zeier, J., & Newman, J. P. (2012). Utilitarian moral judgment in psychopathy. Social Cognitive and Affective Neuroscience 7(6): 708–714.

Kohlberg, L. (1968). The child as a moral philosopher. Psychology Today 2: 25–30.

Mendez, M. F. (2009). The neurobiology of moral behavior: Review and neuropsychiatric implications. CNS Spectrums 14(11): 608–620.

Mikhail, J. (2007). Universal moral grammar: Theory, evidence and the future. Trends in Cognitive Sciences 11(4): 143–152.

Mikhail, J. (2011). Elements of Moral Cognition. New York: Cambridge University Press.

Persson, I., & Savulescu, J. (2012). Unfit for the Future: The Need for Moral Enhancement. Oxford: Oxford University Press.

Sinnott-Armstrong, W., Young, L., & Cushman, F. (2010). Moral intuitions. In Doris, J. M. (Ed.), The Moral Psychology Handbook. Oxford: Oxford University Press. DOI: 10.1093/acprof:oso/9780199582143.003.0008

Terbeck, S., Kahane, G., McTavish, S., Savulescu, J., Levy, N., Hewstone, M., & Cowen, P. J. (2013). Beta adrenergic blockade reduces utilitarian judgment. Biological Psychology 92: 323–328.

Tulving, E., & Schacter, D. L. (1990). Priming and human memory systems. Science 247(4940): 301–306.

Tversky, A., & Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science 211(4481): 453–458.

Young, L., Camprodon, J. A., Hauser, M., Pascual-Leone, A., & Saxe, R. (2010). Disruption of the right temporoparietal junction with transcranial magnetic stimulation reduces the role of beliefs in moral judgements. PNAS 107: 6753–6758.