The Modularity of the Motor System

7 January 2019 | Myrto Mylopoulos (Carleton University)

The extent to which the mind is modular is a foundational concern in cognitive science.

Much of this debate has centered on the question of the degree to which input systems, i.e., sensory systems such as vision, are modular (see, e.g., Fodor 1983; Pylyshyn 1999; MacPherson 2012; Firestone & Scholl 2016; Burnston 2017; Mandelbaum 2017). By contrast, researchers have paid far less attention to the question of the extent to which our main output system, i.e., the motor system, qualifies as such.

This is not to say that the latter question has gone without acknowledgement. Indeed, in his classic essay Modularity of Mind, Fodor (1983)—a pioneer in thinking about this topic—writes: “It would please me if the kinds of arguments that I shall give for the modularity of input systems proved to have application to motor systems as well. But I don’t propose to investigate that possibility here” (Fodor 1983, p.42).

I’d like to take some steps towards doing so in this post.

To start, we need to say a bit more about what modularity amounts to. A central feature of modular systems—and the one on which I will focus here—is their informational encapsulation.

Informational encapsulation concerns the range of information that is accessible to a module in computing the function that maps the inputs it receives to the outputs it yields. A system is informationally encapsulated to the degree that it lacks access to information stored outside the system in the course of processing its inputs (cf. Fodor 1983; Robbins 2009).

Importantly, informational encapsulation is a relative notion. A system may be informationally encapsulated with respect to some information, but not with respect to other information. 

When a system is informationally encapsulated with respect to the states of what Fodor called “the central system” (those states familiar to us as propositional attitude states, such as beliefs and intentions), this is referred to as cognitive impenetrability, or what I will here call cognitive impermeability. In characterizing the notion of cognitive permeability more precisely, one must be careful not to presuppose that only perceptual systems are at issue.

For a neutral characterisation, I prefer the following: A system is cognitively permeable if and only if the function it computes is sensitive to the content of a subject S’s mental states, including S’s intentions, beliefs, and desires. In the famous Müller-Lyer illusion, the visual system lacks access to the subject’s belief that the two lines are identical in length in computing the relative size of the stimuli, so it is cognitively impermeable relative to that belief.

On this characterisation of cognitive permeability, the motor system is clearly cognitively permeable in virtue of its computations, and corresponding outputs, being systematically sensitive to the content of an agent’s intentions. The evidence for this is every intentional action you’ve ever performed. 

Perhaps the uncontroversial nature of this fact has precluded further investigation of cognitive permeability in the motor system. But there are at least two interesting questions to explore here. 

First, since cognitive permeability, just like informational encapsulation, comes in degrees, we should ask to what extent the motor system is cognitively permeable. Are there interesting limitations that can be drawn out? (Spoiler: yes.)

Second, insofar as there are such limitations, we should ask the extent to which they are fixed. Can they be modulated in interesting ways by the agent? (Spoiler: yes.)

Experimental results suggest that there are indeed interesting limitations to the cognitive permeability of the motor system. This is perhaps most clearly shown by appeal to experimental work employing visuomotor rotation tasks (see also Shepherd 2017 for an important discussion of such work with which I am broadly sympathetic). 

In such tasks, the participant is instructed to reach for a target on a computer screen. They do not see their hand, but they receive visual feedback from a cursor that represents the trajectory of their reaching movement. On some trials, the experimenters introduce a bias to the visual feedback from the cursor by rotating it relative to the actual trajectory of their unseen movement during the reaching task. 

For example, a bias might be introduced such that the visual feedback from the cursor represents the trajectory of the reach as being rotated 45° clockwise from the actual trajectory of the arm movement. This manipulation allows experimenters to determine how the motor system will compensate for the conflict between the visual feedback that is predicted on the basis of the motor commands it is executing and the visual feedback the agent actually receives from the cursor.

The main finding is that the motor system gradually adapts to the bias, recalibrating the movements it outputs so that they show “drift” in the direction opposite that of the rotation, thus reducing the mismatch between the visual feedback and the predicted feedback.
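The dynamics of this implicit adaptation are often modeled with a simple error-driven state-space update. The following is a minimal sketch of that idea, not a model from the studies discussed here: the single scalar “angle” state, the retention and learning-rate parameters, and the trial counts are all illustrative assumptions.

```python
def simulate_adaptation(rotation=45.0, trials=80, retention=1.0, rate=0.1):
    """Return per-trial cursor errors (degrees) under a constant visuomotor rotation.

    Hypothetical state-space sketch: the implicit state x shifts the hand
    opposite the bias in proportion to each trial's observed cursor error.
    """
    target = 0.0
    x = 0.0                        # implicit adaptation state (degrees)
    errors = []
    for _ in range(trials):
        hand = target + x          # participant simply aims at the target
        cursor = hand + rotation   # feedback is rotated relative to the hand
        e = cursor - target        # mismatch between feedback and the goal
        errors.append(e)
        x = retention * x - rate * e  # error-driven update: drift opposite the bias
    return errors

errors = simulate_adaptation()
print(round(errors[0], 1), round(errors[-1], 2))  # → 45.0 0.01
```

On this toy model, the initial error equals the full 45° rotation and decays geometrically as the hand drifts in the opposite direction, which is the qualitative pattern the adaptation studies report.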

Figure 1. A: A typical set-up for a visuomotor rotation task. B: Typical error feedback when a counterclockwise directional bias is introduced. (Source: Krakauer 2009)

In the paradigm just described, participants do not form an intention to adopt a compensatory strategy; the adaptation the motor system exhibits is purely the result of implicit learning mechanisms that govern its output. But in a variant of this paradigm (Mazzoni & Krakauer 2006), participants are instructed to adopt an explicit “cheating” strategy—that is, to form intentions—to counter the angular bias introduced by the experimenters. 

This is achieved by placing a neighbouring target (Tn) at a 45° angle from the proper target (Tp) in the direction opposite to the bias (e.g., if the bias is 45° counterclockwise from the Tp, the Tn is placed 45° clockwise from the Tp), such that if participants aim for the Tn, the bias will be compensated for, and the cursor will hit the Tp, thus satisfying the primary task goal.

In this set-up, reaching errors related to the Tp are almost completely eliminated at first. The agent hits the Tp (according to the feedback from the cursor) as a result of forming the intention to aim for the strategically placed Tn. 

But as participants continue to perform the task on further trials, something interesting happens: their movements once again gradually start to show drift, this time towards the Tn and away from the Tp. This result is thought to reflect yet another implicit process of adaptation by the motor system, which aims to correct for the difference between the aimed-for location (Tn) and the visual feedback (in the direction of the Tp).
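The same toy state-space idea can illustrate why the explicit strategy backfires: if the implicit learner computes its error relative to the aimed-for location (Tn) rather than the task goal (Tp), the cursor is driven away from Tp even though the strategy initially lands it there. Again, the parameters and the single-angle state are illustrative assumptions, not values from Mazzoni & Krakauer (2006).

```python
def simulate_explicit_strategy(trials=80, rate=0.05):
    """Return per-trial cursor positions relative to Tp (at 0 degrees)
    when the participant aims at Tn (45 degrees) under a -45 degree bias."""
    rotation, tn, tp = -45.0, 45.0, 0.0
    x = 0.0                        # implicit adaptation state (degrees)
    positions = []
    for _ in range(trials):
        hand = tn + x              # explicit strategy: aim at the neighbouring target
        cursor = hand + rotation   # the bias cancels the strategy: cursor starts on Tp
        positions.append(cursor - tp)
        e = cursor - tn            # implicit learning uses the aimed-for location, not Tp
        x = x - rate * e           # so the cursor gradually drifts toward Tn
    return positions

positions = simulate_explicit_strategy()
print(round(positions[0], 1))  # → 0.0 (on Tp at first, then drifting toward Tn)
```

The first trial hits Tp exactly, but because the error signal is indexed to Tn, the adaptation state grows and the cursor drifts steadily toward Tn, mirroring the drift the participants exhibit.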

Two further details are important for our purposes. First, when participants are instructed to stop using the strategy of aiming for the Tn (in order to hit the Tp) and return their aim to the Tp, “[s]ubstantial and long-lasting” (Mazzoni & Krakauer 2006, p. 3643) aftereffects are observed, meaning that the motor system persists in aiming to reduce the difference between the visual feedback and the earlier aimed-for location.

Second, in a separate study by Taylor & Ivry (2011), using a very similar paradigm in which participants had significantly more trials per block (320), participants did eventually correct for the secondary adaptation by the motor system and reverse the direction of their movement, but only gradually, and by means of adopting explicit aiming strategies to counteract the drift.

On the basis of these results, we can draw at least three interesting conclusions about cognitive permeability and the motor system. First, although the motor system is clearly sensitive to the content of the proximal intentions it takes as input (in this case, the intention to aim for the Tn), it is not always sensitive, or is only weakly sensitive, to the distal intentions that those very proximal intentions serve (in this case, the intention to hit the Tp).

If this is correct, it may be that the motor system lacks sensitivity to the structure of the practical reasoning that often guides an agent’s present action in the background. In this case, the motor system seems not to register that the agent intends to hit the Tp by way of aiming and reaching for the Tn.

Second, given that aftereffects persist for some time even once the explicit aiming strategy (and therefore the intention to aim for the Tn) has been abandoned, we may conclude that the motor system is sensitive to the content of proximal intentions only to a limited degree: it takes time for it to properly update its performance relative to the agent’s current proximal intention. The implicit adaptation, indexed to the earlier intention, cannot be overridden immediately.

Third, this degree of sensitivity is not fixed, but can vary over time as a result of the agent’s interventions, as demonstrated in Taylor & Ivry’s study, where the drift was eventually reversed after a sufficiently large number of trials in which the agent continuously adjusted their aiming strategy.

To close, I’d like to outline what I take to be a couple of important upshots of the preceding discussion for neighbouring philosophical debates:

First, with respect to debates about the intelligence of motor control, the motor system’s pattern of adaptation in these studies reveals, in my view, another important dimension of its intelligence: one that goes beyond mere sensitivity to intentions, and that pertains to its ability to adapt to an agent’s present goals through learning processes that exhibit a reasonable degree of both stability and flexibility.

Second, with respect to the question of how intentions and the motor system interface (see, e.g., Butterfill & Sinigaglia 2014; Mylopoulos & Pacherie 2017), the preceding discussion may suggest a more limited degree of interfacing than one might have thought, obtaining only between an agent’s most proximal intentions and the motor system. It may also suggest that successful interfacing depends both on the learning mechanisms of the motor system (for maximal smoothness and stability) and on a continuous interplay between its outputs and the agent’s own practical reasoning about how best to achieve their goals (for maximal flexibility).

References

Burnston, D. (2017). Interface problems in the explanation of action. Philosophical Explorations, 20(2), 242–258.

Butterfill, S. A. & Sinigaglia, C. (2014). Intention and motor representation in purposive action. Philosophy and Phenomenological Research, 88, 119–145.

Ferretti, G. & Caiani, S. Z. (2018). Solving the interface problem without translation: the same format thesis. Pacific Philosophical Quarterly, doi: 10.1111/papq.12243

Firestone, C. & Scholl, B. J. (2016). Cognition does not affect perception: Evaluating the evidence for “top-down” effects. Behavioral and Brain Sciences, 39, e229.

Fodor, J. (1983). The modularity of mind: An essay on faculty psychology. Cambridge: The MIT Press.

Fridland, E. (2014). They’ve lost control: Reflections on skill. Synthese, 191(12), 2729–2750.

Fridland, E. (2017). Skill and motor control: intelligence all the way down. Philosophical Studies, 174(6), 1539–1560.

Krakauer, J. W. (2009). Motor learning and consolidation: the case of visuomotor rotation. Advances in Experimental Medicine and Biology, 629, 405–421.

Levy, N. (2017). Embodied savoir-faire: knowledge-how requires motor representations. Synthese, 194(2), 511–530.

MacPherson, F. (2012). Cognitive penetration of colour experience: Rethinking the debate in light of an indirect mechanism. Philosophy and Phenomenological Research, 84(1). 24–62.

Mazzoni, P. & Krakauer, J. W. (2006). An implicit plan overrides an explicit strategy during visuomotor adaptation. The Journal of Neuroscience, 26(14), 3642–3645.

Mylopoulos, M. & Pacherie, E. (2017). Intentions and motor representations: The interface challenge. Review of Philosophy and Psychology, 8(2), 317–336.

Mylopoulos, M. & Pacherie, E. (2018). Intentions: The dynamic hierarchical model revisited. WIREs Cognitive Science, doi: 10.1002/wcs.1481

Pylyshyn, Z. (1999). Is vision continuous with cognition? The case for cognitive impenetrability of visual perception. Behavioral and Brain Sciences, 22(3), 341–365.

Robbins, P. (2009). Modularity of mind. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy.

Shepherd, J. (2017). Skilled action and the double life of intention. Philosophy and Phenomenological Research, doi:10.1111/phpr.12433

Taylor, J. A. & Ivry, R. B. (2011). Flexible cognitive strategies during motor learning. PLoS Computational Biology, 7(3), e1001096.