Is implicit cognition bad cognition?

8 August 2016 | Sophie Stammers

A significant body of research in cognitive science holds that human cognition comprises two kinds of processes: explicit and implicit. 

According to this research, explicit processes operate slowly, requiring attentional guidance, whilst implicit processes operate quickly, automatically and without attentional guidance (Kahneman, 2012; Gawronski and Bodenhausen, 2014). A prominent example of implicit cognition that has seen much recent discussion in philosophy is that of implicit social bias, where associations between (often) stigmatised social groups and (often) negative traits manifest in behaviour, resulting in discrimination (see Brownstein and Saul, 2016a; 2016b). 

This is the case even though the individual in question isn't deliberately directing their behaviour to be discriminatory through attentional guidance, and is apparently unaware at the time that they're exhibiting any kind of disfavouring treatment (although see Holroyd, 2015, for the suggestion that individuals may be able to observe bias in their behaviour).

Examples of implicit social bias manifesting in behaviour include exhibiting greater signs of social unease, less smiling and more speech errors when conversing with a black experimenter compared to when the experimenter is white (McConnell and Leibold, 2001); less eye contact and increased blinking in conversations with a black experimenter versus their white counterpart (Dovidio et al., 1997); and reduced willingness for skin contact with a black experimenter versus a white one (Wilson et al., 2000). 

Implicit social biases also arise in more deliberative scenarios: Swedish recruiters who harbour implicit racial associations are less likely to interview applicants perceived to be Muslim than applicants with a Swedish name (Rooth, 2007), and doctors who harbour implicit racial associations are less likely to offer treatment to black patients with the clinical presentation of heart disease than to white patients with the same presentation (Green et al., 2007). 

Notably, in these studies participants' discriminatory behaviour does not track the beliefs and values that they profess to have when questioned.

Both the mechanisms of implicit bias, and implicit processes more generally, are often characterised in the language of the sub-optimal. Variously, they deliver “a more inflexible form of thinking” than explicit cognition (Pérez, 2016: 28), they are “arational” compared to the rational processes that govern belief update (Gendler, 2008a: 641; 2008b: 557), and their content is “disunified” with our set of explicit attitudes (Levy, 2014: 101–103).

As such, one might be tempted to think of implicit cognition as regularly, or even necessarily, bad cognition. On a strong interpretation, that value-laden assessment would mean that the processes in question deliver objectively bad outputs, however these are to be defined; on a weaker interpretation, it might mean only that their outputs are not aligned with the agent’s goals, or something similar. 

It’s easy to see why one might apply this value-laden assessment to the mechanisms which result in implicitly biased behaviour: individuals simply have no reason to discriminate against already marginalised people in the ways outlined above, and yet they do anyway – that seems like a good candidate for bad cognition. 

The fact that implicitly biased behaviours are the product of what appears to be a suboptimal processing system might motivate the argument that we are not the agents of those behaviours, as well as arguments that follow from this, such as the claim that it is not appropriate to hold people morally responsible for their implicit biases (Levy, 2014).

But I think it would be wrong to conclude that implicit cognition necessarily delivers suboptimal outputs, or that implicit bias is an example of bad cognition simply because it is implicit. Moreover, as I’ll argue below, maintaining the former claim may well do a disservice to the project of reducing implicit social biases.

Whilst explicit processes may be ‘better’ at some cognitive tasks, research suggests that implicit processes can actually deliver a more favourable performance than explicit processes in a variety of domains. For instance, non-attentional, automatic processes govern the fast motor reactions employed by skilled athletes (Kibele, 2006). 

Trying to bring these processes under attentional control can actually disrupt sporting performance: Flegal and Anderson (2008) show that directing attention to their action performance significantly impairs the ability of high-skill golfers on a putting task, whilst high-skill footballers perform less proficiently when directing attention to their execution of dribbling (Beilock et al., 2002). Engaging attentional processes when learning new motor skills can also disrupt performance (McKay et al., 2015).

Meanwhile, functional MRI studies suggest that improvisation implicates non-attentional processes. One study shows that when professional jazz pianists improvise, they do so in the absence of central processes implicated in attentional guidance (Limb and Braun, 2008). Another demonstrates that trained musicians inhibit networks associated with attentional processing during improvisation (Berkowitz and Ansari, 2010).

Further, deliberately disengaging attentional resources can facilitate creativity, a process known as ‘incubation’. Subjects who return to work on a creative task after a period of directing attentional resources to something unrelated to the task at hand often deliver enhanced outputs compared with those who continually engage their attentional resources (Dodds et al., 2003). It has been proposed that task-relevant implicit processes remain active during the incubation period and contribute to enhanced creative output (Ritter and Dijksterhuis, 2014).

So it would be wrong to suggest that implicit processes necessarily, or even typically, deliver sub-optimal outputs compared with their explicit cousins. And pertinent to our discussion of implicit social bias, implicit processes themselves can actually be recruited to inhibit the manifestation of bias. 

Research demonstrates that participants with genuine long-term egalitarian commitments (Moskowitz et al., 1999), as well as those in whom egalitarian commitments are activated during an experimental task (Moskowitz and Li, 2011), actually manifest less implicit bias than those without such commitments. Crucially, the processes which bring implicit responses in line with an agent’s long-term commitments are not driven by attentional guidance, instead operating automatically to prevent the facilitation of stereotypic categories in the presence of the relevant social concepts (Moskowitz et al., 1999: 168). 

The suggestion here is that developing genuine commitments to egalitarian values and treatment can actually recalibrate implicit processes to deliver value-consistent behaviour (see Holroyd and Kelly, 2016), without needing to effortfully override implicit responses each time one encounters social concepts that might otherwise trigger biased reactions. It would seem that the profile of implicit processes as inflexible, arational and disunified with explicit values and commitments is ill-fitted to account for this example.

So, in a number of cases it seems that implicit processes can serve our goals and values. If this is right, then we should perhaps be more willing to locate ourselves as agents not just in the behaviour that arises from our explicit processes, but in that which arises from our implicit ones as well.

I think this has an important implication for practices related to implicit bias training. We should be wary of the rhetoric that distances us as agents from our implicit processes: for instance, characterising implicit bias as “racism without racists”[1] might be comforting for those of us with implicit racial biases, but disowning the implicit processes that lead to racial discrimination, while not disowning those that lead to skilled musical improvisation or creativity as above, seems somewhat inconsistent. 

I wonder whether greater willingness to accept one’s implicit processes as aspects of one’s agency (not necessarily as central, defining aspects of one’s agency, but somewhere in there nonetheless) might help to motivate the project of realigning one’s implicitly biased responses.

[1] In U.S. Department of Justice. 2016. “Implicit Bias.” Community Oriented Policing Services report, page 1. Accessed 27/07/16, URL: https://uploads.trustandjustice.org/misc/ImplicitBiasBrief.pdf

References

Berkowitz, A. L. and D. Ansari. 2010. “Expertise-Related Deactivation of the Right Temporoparietal Junction during Musical Improvisation.” NeuroImage 49 (1): 712–19.

Brownstein, M. and J. Saul. 2016a. Implicit Bias and Philosophy, Volume 1: Metaphysics and Epistemology, New York: Oxford University Press.

Brownstein, M. and J. Saul. 2016b. Implicit Bias and Philosophy, Volume 2: Moral Responsibility, Structural Injustice, and Ethics, New York: Oxford University Press.

Dodds, R. D., T. B. Ward and S. M. Smith. 2003. “Incubation in problem solving and creativity.” in The Creativity Research Handbook, edited by M. A. Runco, Cresskill, NJ: Hampton Press.

Dovidio, J. F., K. Kawakami, C. Johnson, B. Johnson and A. Howard. 1997. “On the Nature of Prejudice: Automatic and Controlled Processes.” Journal of Experimental Social Psychology 33 (5): 510–40.

Gawronski, B. and G. V. Bodenhausen. 2014. “Implicit and Explicit Evaluation: A Brief Review of the Associative-Propositional Evaluation Model: APE Model.” Social and Personality Psychology Compass 8 (8): 448–62.

Gendler, T. S. 2008a. “Alief and Belief.” The Journal of Philosophy 105 (10): 634–63.

———. 2008b. “Alief in Action (and Reaction).” Mind & Language 23 (5): 552–85.

Green, A. R., D. R. Carney, D. J. Pallin, L. H. Ngo, K. L. Raymond, L. I. Iezzoni and M. R. Banaji. 2007. “Implicit Bias among Physicians and Its Prediction of Thrombolysis Decisions for Black and White Patients.” Journal of General Internal Medicine 22 (9): 1231–38.

Holroyd, J. 2015. “Implicit Bias, Awareness and Imperfect Cognitions.” Consciousness and Cognition 33 (May): 511–23.

Holroyd, J. and D. Kelly. 2016. “Implicit Bias, Character, and Control.” in From Personality to Virtue, edited by A. Masala and J. Webber, Oxford: Oxford University Press.

Kahneman, D. 2012. Thinking, Fast and Slow, London: Penguin Books.

Kibele, A. 2006. “Non-Consciously Controlled Decision Making for Fast Motor Reactions in sports—A Priming Approach for Motor Responses to Non-Consciously Perceived Movement Features.” Psychology of Sport and Exercise 7 (6): 591–610.

Levy, N. 2014. Consciousness and Moral Responsibility, Oxford; New York: Oxford University Press.

Limb, C. J. and A. R. Braun. 2008. “Neural Substrates of Spontaneous Musical Performance: An fMRI Study of Jazz Improvisation.” PLoS ONE 3 (2): e1679.

McConnell, A. R. and J. M. Leibold. 2001. “Relations among the Implicit Association Test, Discriminatory Behavior, and Explicit Measures of Racial Attitudes.” Journal of Experimental Social Psychology 37 (5): 435–42.

McKay, B., G. Wulf, R. Lewthwaite and A. Nordin. 2015. “The Self: Your Own Worst Enemy? A Test of the Self-Invoking Trigger Hypothesis.” The Quarterly Journal of Experimental Psychology 68 (9): 1910–19.

Moskowitz, G. B., P. M. Gollwitzer, W. Wasel and B. Schaal. 1999. “Preconscious Control of Stereotype Activation Through Chronic Egalitarian Goals.” Journal of Personality and Social Psychology 77 (1): 167–84.

Moskowitz, G. B., and P. Li. 2011. “Egalitarian Goals Trigger Stereotype Inhibition: A Proactive Form of Stereotype Control.” Journal of Experimental Social Psychology 47 (1): 103–16.

Pérez, E. O. 2016. Unspoken Politics: Implicit Attitudes and Political Thinking, New York, NY: Cambridge University Press.

Ritter, S. M. and A. Dijksterhuis. 2014. “Creativity–the Unconscious Foundations of the Incubation Period.” Frontiers in Human Neuroscience 8: 22–31.

Rooth, D‑O. 2007. “Implicit Discrimination in Hiring: Real World Evidence.” (IZA Discussion Paper No. 2764). Bonn, Germany: Forschungsinstitut Zur Zukunft Der Arbeit (Institute for the Study of Labor).

Wilson, T. D., S. Lindsey and T. Y. Schooler. 2000. “A Model of Dual Attitudes.” Psychological Review 107 (1): 101–26.