Enactivism, Computation, and Autonomy

20 April 2017 | Joe Dewhurst (University of Edinburgh)

Enactivism, at least in its more traditional versions, has historically rejected computational characterisations of cognition. This has led to the perception that enactivist approaches to cognition must be opposed to more mainstream computationalist approaches, which offer precisely such a computational characterisation.

However, the conception of computation that enactivism rejects is in some senses quite old-fashioned, and it is not clear that enactivism need be opposed to computation understood in a more modern sense. Demonstrating that there could be compatibility, or at least no necessary opposition, between enactivism and computationalism (in some sense) would open the door to a possible reconciliation or cooperation between the two approaches.

In a recently published paper (Villalobos & Dewhurst 2017), my collaborator Mario Villalobos and I have focused on elucidating some of the reasons why enactivism has rejected computation, and have argued that these do not necessarily apply to more modern accounts of computation. In particular, we have demonstrated that a physically instantiated Turing machine, which we take to be a paradigmatic example of a computational system, can meet the autonomy requirements that enactivism uses to characterise cognitive systems.

This demonstration goes some way towards establishing that enactivism need not be opposed to computational characterisations of cognition, although there may be other reasons for this opposition, distinct from the autonomy requirements.

The enactive concept of autonomy first appears in its modern guise in Varela, Thompson, & Rosch’s 1991 book The Embodied Mind, although it has important historical precursors in Maturana’s autopoietic theory (see his 1970, 1975, 1981; see also Maturana & Varela 1980) and cybernetic work on homeostasis (see eg Ashby 1956, 1960). There are three dimensions of autonomy that we consider in our analysis of computation. 

Self-determination requires that the behaviour of an autonomous system must be determined by that system’s own structure, and not by external instruction. 

Operational closure requires that the functional organisation of an autonomous system must loop back on itself, such that the system possesses no (non-arbitrary) inputs or outputs. 

Finally, an autonomous system must be precarious, such that the continued existence of the system depends on its own functional organisation, rather than on external factors outside of its control. In this post I will focus on demonstrating that these criteria can be applied to a physical computing system, rather than addressing why or how enactivism argues for them in the first place.

All three criteria have traditionally been used to disqualify computational systems from being autonomous systems, and hence to deny that cognition (which for enactivists requires autonomy) can be computational (see eg Thompson 2007: chapter 3). Here it is important to recognise that the enactivists have a particular account of computation in mind, one that they have inherited from traditional computationalists.

According to this ‘semantic’ account, a physical computer is defined as a system that performs systematic transformations over content-bearing (ie representational) states or symbols (see eg Sprevak 2010). With such an account in mind, it is easy to see why the autonomy criteria might rule out computational systems. We typically think of such a system as consuming symbolic inputs, which it transforms according to programmed instructions, before producing further symbolic outputs.

Already this system has failed to meet the self-determination and operational closure criteria. Furthermore, as artefactual computers are typically reliant on their creators for maintenance, etc., they also fail to meet the precariousness criterion. So, given this quite traditional understanding of computation, it is easy to see why enactivists have typically denied that computational systems can be autonomous.

Nonetheless, when computation is understood according to more recent, ‘mechanistic’ accounts, there is no reason to think that the autonomy criteria must necessarily exclude computational systems. Whilst these accounts differ in some details, they all deny that computation is inherently semantic, and instead define physical computation in terms of mechanistic structures.

We will not rehearse these accounts in any detail here, but the basic idea is that physical computation can be understood in terms of mechanisms that perform systematic transformations over states that do not possess any intrinsic semantic content (see eg Miłkowski 2013; Fresco 2014; Piccinini 2015). With this rough framework in mind, we can return to the autonomy criteria.

Even under the mechanistic account, computation is usually understood in terms of mappings between inputs and outputs, where there is a clear sense of the beginning and end of the computational operation. A system organised in this way can be described as ‘functionally open’, meaning that its functional organisation is open to the world. 

A functionally closed system, on the other hand, is one whose functional organisation loops back through the world, such that the environmental impact of the system’s ‘outputs’ contributes to the ‘inputs’ that it receives. A simple example of this distinction can be found by considering two different ways that a thermostat could be used. 

In the first case the sensor, which detects ambient temperature, is placed in one house, and the effector, which controls a radiator, is placed in another (see figure 1). This system is functionally open, because there is only a one-way connection between the sensor and the effector, allowing us to straightforwardly identify inputs and outputs to the system.

Figures 1 and 2: a thermostat with its sensor and effector in different houses (functionally open), and a thermostat with both in the same house (functionally closed).

A more conventional way of setting up a thermostat is with both the sensor and the effector in the same house (see figure 2). In this case the apparent ‘output’ (ie control of the radiator) loops back around to the apparent ‘input’ (ie ambient temperature), forming a functionally closed system. The ambient air temperature in the house is effectively part of the system, meaning that we could just as well treat the effector as providing input and the sensor as producing output – there is no non-arbitrary beginning or end to this system.
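To make the contrast concrete, here is a minimal toy simulation of the two arrangements (my own illustrative sketch in Python, not something from the original paper; the set point, temperatures, and update rule are all arbitrary assumptions).

    # Toy model of a thermostat (illustrative assumptions throughout).
    def thermostat(sensed_temp, set_point=20.0):
        """Switch the heating on when the sensed temperature falls below the set point."""
        return sensed_temp < set_point

    def radiator_effect(heating_on):
        """Temperature change contributed by the radiator in one time step."""
        return 1.0 if heating_on else -0.5  # the room slowly cools when the heating is off

    # Functionally open: the sensor reads house A, the effector heats house B.
    # The system's activity never feeds back into what its sensor detects,
    # so inputs and outputs can be identified non-arbitrarily.
    temp_a, temp_b = 15.0, 15.0
    for _ in range(10):
        heating_on = thermostat(temp_a)        # 'input': temperature in house A
        temp_b += radiator_effect(heating_on)  # 'output': heating in house B
        # temp_a is unaffected by anything the system does

    # Functionally closed: sensor and effector share one house, so the 'output'
    # (heating) loops back through the ambient temperature to the 'input'
    # (what the sensor detects); the cut between input and output is arbitrary.
    temp = 15.0
    for _ in range(10):
        heating_on = thermostat(temp)
        temp += radiator_effect(heating_on)    # the loop closes through `temp`

The point is not the particular numbers but the shape of the loop: in the second arrangement the environment (the shared air temperature) is effectively part of the system’s functional organisation.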

Whilst it is typical to treat a computing mechanism more like the first thermostat, with a clear input and output, we do not think that this perspective is essential to the mechanistic understanding of computation. There are two possible ways that we could arrange a computing mechanism. 

The functionally open mechanism (figure 3) reads from one tape and writes onto another, whilst the functionally closed mechanism (figure 4) reads and writes on the same tape, creating a closed system analogous to the thermostat with its sensor and effector in the same house. As Wells (1998) suggests, a conventional Turing machine is actually arranged in the second way, providing an illustration of a functionally closed computing mechanism.

Whether or not this is true of other computational systems is a distinct question, but it is clear that at least some physically implemented computers can exhibit operational closure.

Figures 3 and 4: a computing mechanism that reads from one tape and writes onto another (functionally open), and one that reads and writes on the same tape (functionally closed).
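Here is a similarly minimal sketch of Wells’s point (again my own illustration; the states, symbols, and transition table are arbitrary assumptions rather than anything from his paper or ours). Because a conventional Turing machine reads from and writes to the same tape, every symbol it writes becomes part of the ‘input’ it will later read, closing the loop through the tape just as the second thermostat closes it through the ambient temperature.

    # A minimal single-tape Turing machine (illustrative assumptions throughout).
    from collections import defaultdict

    # Transition table: (state, read symbol) -> (write symbol, head move, next state).
    # This particular table just flips a string of bits and then halts.
    TRANSITIONS = {
        ('flip', '0'): ('1', +1, 'flip'),
        ('flip', '1'): ('0', +1, 'flip'),
        ('flip', '_'): ('_', 0, 'halt'),  # blank cell: stop
    }

    def run(tape_symbols):
        tape = defaultdict(lambda: '_', enumerate(tape_symbols))
        head, state = 0, 'flip'
        while state != 'halt':
            write, move, state = TRANSITIONS[(state, tape[head])]
            tape[head] = write  # the write lands on the same tape...
            head += move        # ...that the next read will consult
        return ''.join(tape[i] for i in range(len(tape_symbols)))

    print(run('0110'))  # -> '1001'

A functionally open variant would instead read from one tape and write every result onto a second tape that is never read back, which is the arrangement pictured in figure 3.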

The self-determination criterion requires that a system’s operations are determined by its own structure, rather than by external instructions. This criterion applies straightforwardly to at least some computing mechanisms. Whilst many computers are programmable, their basic operations are nonetheless determined by their own physical structure, such that the ‘instructions’ provided by the programmer only make sense in the context of the system itself. 

To another system, with a distinct physical structure, those ‘instructions’ would be meaningless. Just as the enactive automaton ‘Bittorio’ brings meaning to a meaningless sea of 1s and 0s (see Varela 1988; Varela, Thompson, & Rosch 1991: 151–5), so the structure of a computing mechanism brings meaning to the world that it encounters.

Finally, we can turn to the precariousness criterion. Whilst the computational systems that we construct are typically reliant upon us for continued maintenance and a supply of energy, and play no direct role in their own upkeep, this is a pragmatic feature of how we design those systems rather than anything essential to computation.

We could easily imagine a computing mechanism designed so that it seeks out its own source of energy and is able to maintain its own physical structure. Such a system would be precarious in just the same sense that enactivism conceives of living systems as being precarious. So there is no in-principle reason why a computing system should not be able to meet the precariousness criterion.

In this post I have very briefly argued that the enactivist autonomy criteria can be applied to (some) physically implemented computing mechanisms. Of course, enactivists may have other reasons for thinking that cognitive systems cannot be computational. Nonetheless, we think this analysis could be interesting for a couple of reasons. 

Firstly, insofar as computational neuroscience and computational psychology have been successful research programs, enactivists might be interested in adopting some aspects of computational explanation for their own analyses of cognitive systems. 

Secondly, we think that the enactivist characterisation of autonomous systems might help to elucidate the senses in which a computational system might be cognitive. 

Now that we have established the basic possibility of autonomous computational systems, we hope to develop future work along both of these lines, and invite others to do so too.

I will leave you with this short and amusing video of the autonomous robotic creations of the British cyberneticist W. Grey Walter, which I hope might serve as a source of inspiration for future cooperation between enactivism and computationalism.

References

Ashby, R. (1956). An introduction to cybernetics. London: Chapman and Hall.

Ashby, R. (1960). Design for a Brain. London: Chapman and Hall.

Fresco, N. (2014). Physical computation and cognitive science. Berlin, Heidelberg: Springer-Verlag.

Maturana, H. (1970). Biology of cognition. Biological Computer Laboratory, BCL Report 9, University of Illinois, Urbana.

Maturana, H. (1975). The organization of the living: A theory of the living organization. International Journal of Man-Machine studies, 7, 313–332.

Maturana, H. (1981). Autopoiesis. In M. Zeleny (Ed.), Autopoiesis: a theory of living organization (pp. 21–33). New York; Oxford: North Holland.

Maturana, H. and Varela, F. (1980). Autopoiesis and cognition: The realization of the living. Dordrecht, Holland: Kluwer Academic Publishers.

Miłkowski, M. (2013). Explaining the computational mind. Cambridge, MA: MIT Press.

Piccinini, G. (2015). Physical Computation. Oxford: OUP.

Sprevak, M. (2010). Computation, Individuation, and the Received View on Representation. Studies in History and Philosophy of Science, 41, 260–270.

Thompson, E. (2007). Mind in Life: Biology, phenomenology, and the sciences of mind. Cambridge, MA: Harvard University Press.

Varela, F. (1988). Structural Coupling and the Origin of Meaning in a Simple Cellular Automaton. In E. E. Sercarz, F. Celada, N. A. Mitchison, and T. Tada (Eds.), The Semiotics of Cellular Communication in the Immune System (pp. 151–61). New York: Springer-Verlag.

Varela, F., Thompson, E., and Rosch, E. (1991). The Embodied Mind. Cambridge, MA: MIT Press.

Villalobos, M. & Dewhurst, J. (2017). Enactive autonomy in computational systems. Synthese. doi:10.1007/s11229-017-1386-z

Wells, A. J. (1998). Turing’s Analysis of Computation and Theories of Cognitive Architecture. Cognitive Science, 22(3), 269–294.