
What Does It Take for an Artificial Agent to Be Constructivist?

Editors: Olivier Georgeon and Alexander Riegler

Submission deadline: 15 August 2017


We invite authors to submit articles describing artificial agents – robots or simulated creatures – that are based on constructivist principles. To comply with the constructivist paradigm (PC), the implementation of such agents should reject the realist paradigm (PR), which defines cognition as the computation of actions based on the perception of the state of an agent-independent, pre-given world. Rather, constructivist agents learn through regularities in their stream of experience without treating input data as perception (see Appendix B). We expect that this distinction between the realist and constructivist paradigms is not a mere scholarly exercise but will stimulate research on autonomous robot learning and sense-making.

Since constructivist agents do not refer to input data as a representation of a pre-given world, they cannot be programmed to seek goals defined in reference to pre-defined states of the world. Such goals would necessarily be the goals of the programmer rather than of the agent (Riegler 2002). Thus, we invite authors to propose new criteria that are not based on extrinsic goals or tasks to demonstrate that the agent is intelligent. For example, authors may argue that the agent demonstrates some form of intrinsic motivation such as curiosity by reporting an analysis of the agent’s behavior. Authors may also demonstrate that the agent is learning to improve some intrinsic criteria that they argue are important. Georgeon, Marshall & Manzotti (2013) provide an example agent implemented according to PC and a demonstration of intelligent behaviors through activity-trace analysis. We acknowledge that the notion of “intelligent behavior” is hard to define and may depend on the observer’s judgment, and welcome submissions that would help develop this notion.

Script 2 of Appendix B presents a generic example algorithm that complies with PC. It provides a framework of thought for designing constructivist agents that would fit the scope of the special issue. We, however, do not ask authors to necessarily follow this algorithm; any other implementation that avoids considering the agent’s input data as perception of a pre-given world will fit the scope. Appendix A provides positive examples from the literature that meet this condition, and negative examples that do not.

Guiding questions

We invite authors to discuss one or several questions from the following non-exhaustive list in the theoretical part of their paper submissions:

Q1 What are the possible applications and areas in which PC is or could prove useful or even indispensable?

Q2 How does PC relate to Enaction (e.g., Vörös, Froese & Riegler 2016)?

Q3 How does PC relate to Developmental Learning (e.g., Weng et al. 2001)?

Q4 How does PC relate to the representationalist vs non-representationalist distinction? That is, can the learned agent-internal data structures be in any meaningful way said to represent anything outside the agent?

Q5 How could feedback loops (e.g., action/result loops presented in Appendix B, Script 2) be hierarchically nested in order to allow more complex behaviors and incremental learning?

Q6 Which conditions does a constructivist agent’s behavior have to meet to be considered intelligent behavior?

Article submissions

The special issue will be organized around a number of target articles accompanied by Open Peer Commentaries (OPCs). Expressions of interest to submit a target article should include a short abstract and should reach us by 15 May 2017. If your proposal is accepted, submission of the full paper (in English) is due 15 August 2017, followed by a double-blind review. In the case of conditional acceptance, sufficient time will be allocated for the revisions requested. Target articles should not exceed 9,000 words. The special issue will be published in March 2018 in Constructivist Foundations, an open-access journal indexed in Web of Science, Scopus, and other citation indices. The publication of papers is free of charge.

Please use the Word template with the author guidelines. It can be found at

Declarations of interests, paper submissions, and all further inquiries should be sent by e-mail to the editors at agents/at/

For further information about this special issue, see:


15 May 2017: Expressions of interest (including abstract)
15 August 2017: Submission deadline for full papers
15 January 2018: Submission deadline for open peer commentaries
15 March 2018: Publication date

Appendix A: Examples from the literature

Some examples of constructivist frameworks and implementations from the literature:

Examples from the literature that would not fit the scope of the special issue:

Appendix B: Implementation Paradigms

A computer implementation[1] of an agent interacting with a simulated world involves the execution of an algorithm[2] that can generally be split into two parts: A and W. Algorithm A controls the agent’s behaviour, and Algorithm W implements the mechanics of the world. In the case of a real robot, there is only Algorithm A. From the programmer’s standpoint, the agent is an entity controlled by A that exists as an element of the world implemented by W; Algorithm W implements both the mechanics of the agent’s environment and the agent’s “physical part” in the world (the agent’s “body”). Algorithms A and W exchange data back and forth through the interaction cycle.[3]

Implementations in which the input data received by algorithm A from algorithm W represents the current state of the world are called realist because A receives data supposedly directly representing reality.

Script 1 and Figure 1 outline a typical realist implementation. A’s input data is called observation, and A’s output data action. Algorithms A and W are each split into two functions: Ain updates A’s current state at from input data, and Aout generates output data from state at. Likewise, Win updates W’s state wt from input data, and Wout generates output data from state wt.

Script 1:
observationt = Wout(wt–1)
at = Ain(observationt, at–1)
actiont = Aout(at)
wt = Win(actiont, wt–1)
t = t + 1

Figure 1: Example realist implementation. Based on W’s state wt–1, a new observation is computed and fed into A (black bullet). Based on this “input”, the next state at of A is computed, which is used to determine the next action, which is sent to W (black triangle).
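For concreteness, the realist loop of Script 1 can be sketched as a short runnable program. Everything below (the one-dimensional world, the homing policy, the function names) is an invented toy illustration, not part of the call:

```python
# Minimal runnable sketch of Script 1 (realist implementation).
# Hypothetical toy example: W is a one-dimensional position, and A is
# an agent that steps toward the origin of the position it observes.

def W_out(w):
    # The observation directly represents W's state: the agent's position.
    return w["position"]

def A_in(observation, a):
    # A records the observation as its new state, treating it as perception.
    return {"perceived_position": observation}

def A_out(a):
    # Toy policy: step toward the origin from the perceived position.
    return -1 if a["perceived_position"] > 0 else 1

def W_in(action, w):
    # Apply the action to the world's state.
    return {"position": w["position"] + action}

w, a = {"position": 3}, {"perceived_position": 0}
for t in range(5):
    observation = W_out(w)    # observation_t = Wout(w_{t-1})
    a = A_in(observation, a)  # a_t = Ain(observation_t, a_{t-1})
    action = A_out(a)         # action_t = Aout(a_t)
    w = W_in(action, w)       # w_t = Win(action_t, w_{t-1})
```

Note that, following PR, the agent's input is a direct function of the world's state and is consumed as a representation of that state.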

The realist paradigm PR assumes that A’s input data (observationt) represents the state of the world. In the case of a robotic agent, the robot’s algorithm A exploits the hypothesis that sensor data represents the state of the world. In the case of a simulated agent, both A and W build upon this paradigm: the state w of W represents the state of the world; Wout generates the observation as a function of w; the observation is processed by Ain as if it constituted the agent’s perception of the state of the world.

PR is embraced by, for example, Russell and Norvig for whom “the problem of AI is to build agents that receive percepts from the environment and perform actions” (Russell & Norvig 2003: iv). Sutton & Barto (1998) present reinforcement-learning algorithms based on PR. Algorithm W is implemented as a Markov Decision Process (MDP) whose state represents the state of the world. In Partially Observable Markov Decision Processes (POMDPs), Wout is implemented as a stochastic function of the MDP’s state that simulates false observations due to noise. Even though the observation is noisy and partial, the agent’s algorithm still exploits the assumption that it carries an underlying signal representing the state of the world.

PR has been successfully applied to problem-solving and reinforcement-learning algorithms. However, it is uncertain whether this paradigm is suited to design algorithms that can engage in more natural kinds of learning. In particular, constructivist epistemology suggests that PR may not accurately account for the situation of an autonomous being (animal or robot) actively learning from experience. From the constructivist perspective, the ontic nature of reality is inaccessible (Glasersfeld 1995), and the states of the world may not even exist independent of the mind of the observer.

Constructivist algorithms may draw inspiration from authors such as Humberto Maturana, Francisco Varela, Kevin O’Regan and Alva Noë who suggested that perceptions do not consist of interpreting input data supposedly representing the state of the world, but derive from sensorimotor experience. These authors compare cognition with how a submarine navigator safely steers his vessel: irrespective of what is outside of the submarine, all that its navigator needs to do is to maintain a certain dynamic relationship between gauges, control panel lights and levers (Maturana & Varela 1987: 137; O’Regan & Noë 2001: 940). See also Maturana (1978: 42) for a similar analogy using instrumental flight where the pilot “manipulated certain internal relations of the plane in order to obtain a particular sequence of readings in a set of instruments” in order to safely land the plane. This perspective aligns with the “world as a black box” perspective of Ernst von Glasersfeld (1974: 16), “a black box with which we can deal remarkably well.”

In these analogies, the signals received by the navigator (the gauges and lights on the control panel) are not a representation of the state of the submarine and its environment. Instead, the signals are feedback from actions controlled by the navigator. In a given state of the submarine and the environment, the lights may differ according to which buttons the navigator is pressing and which levers he is moving. Inspired by the submarine analogy, an implementation under the constructivist paradigm PC would look as follows:

Script 2:
actiont = Aout(at–1)
wt = Win(actiont, wt–1)
resultt = Wout(actiont, wt–1)
at = Ain(resultt, at–1)
t = t + 1

Figure 2: Example of a constructivist implementation (adapted from Georgeon & Cordier 2014; see also Riegler 2007). Function Aout computes the new actiont from the previous state at–1 and sends it to W (black bullet). The function Win computes the new state wt. The function Wout computes the new resultt from actiont and state wt–1, and sends it to A (black triangle). The function Ain computes the new state at. Note that Figure 2 differs from Figure 1 by the positions of the black bullet and triangle that represent the conceptual beginning and end of the interaction cycle. In the case of a robotic agent, the robot’s algorithm A treats the sensor data (resultt) as feedback from the robot’s actions.
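The constructivist loop of Script 2 can likewise be sketched as a short runnable program. The corridor world, the "step"/"turn" actions, and the policy below are invented toy illustrations; what matters is that the input received by A is feedback from the attempted action, not a snapshot of the world's state:

```python
# Minimal runnable sketch of Script 2 (constructivist implementation).
# Hypothetical toy example: a corridor of positions 0..3 in which a "step"
# may succeed or bump into the end wall, depending on the agent's facing.

def A_out(a):
    # Toy policy: keep stepping until a step results in "bump", then turn.
    return "turn" if a.get("last_result") == "bump" else "step"

def W_in(action, w):
    pos, facing = w
    if action == "turn":
        return (pos, -facing)
    new_pos = pos + facing
    return (new_pos, facing) if 0 <= new_pos <= 3 else w

def W_out(action, w):
    pos, facing = w
    if action == "turn":
        return "turned"
    # The same world state yields different results for different actions:
    # the result is feedback from the attempted step, not a state snapshot.
    return "move" if 0 <= pos + facing <= 3 else "bump"

def A_in(result, a):
    return {"last_result": result}

w, a = (3, +1), {}
results = []
for t in range(4):
    action = A_out(a)               # action_t = Aout(a_{t-1})
    w_prev = w
    w = W_in(action, w_prev)        # w_t = Win(action_t, w_{t-1})
    result = W_out(action, w_prev)  # result_t = Wout(action_t, w_{t-1})
    a = A_in(result, a)             # a_t = Ain(result_t, a_{t-1})
    results.append(result)
```

Here the black bullet of Figure 2 corresponds to sending actiont to W, and the black triangle to returning resultt to A.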

As we can see in Script 2, in a given state wt–1, resultt may vary depending on actiont. This means that algorithm A’s input data (resultt) does not represent the world and does not amount to the agent’s perception; rather, it can be thought of as the result of an experiment initiated by A through actiont. Since A’s input data cannot be considered the agent’s perception, A needs to be designed differently from realist algorithms. This new kind of algorithm should – in O’Regan and Noë’s (2001) words – “master the laws of sensorimotor contingencies.” It may construct perception as an internal data structure derived from the stream of experience of interaction, but it should not treat input data as perception.
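To make this concrete, here is a hypothetical sketch (invented names and toy dynamics) of an agent that gradually masters its sensorimotor contingencies: it learns to predict the result of each action from its own previous (action, result) pair, and its growing predictive accuracy is an intrinsic criterion of success that makes no reference to the world's hidden state:

```python
# Hypothetical sketch: learning sensorimotor contingencies without treating
# input data as a representation of the world's state.
from collections import defaultdict
import random

random.seed(0)

def world(action, state):
    # Hidden toy dynamics, inaccessible to the agent: action "a" toggles a
    # latch and reports the direction of the flip; action "b" probes it.
    if action == "a":
        new = "on" if state == "off" else "off"
        return new, ("flipped-on" if new == "on" else "flipped-off")
    return state, ("high" if state == "on" else "low")

predictions = defaultdict(dict)  # context -> {action: predicted result}
state, context = "off", None
correct, trials = 0, 0
for t in range(200):
    action = random.choice(["a", "b"])
    predicted = predictions[context].get(action)
    state, result = world(action, state)
    trials += 1
    if predicted == result:
        correct += 1
    predictions[context][action] = result  # record the contingency
    context = (action, result)  # agent-internal context, not a world snapshot
```

In this toy setting the agent's prediction accuracy rises toward 100% as the (context, action) contingencies are learned, although its internal context is only ever its own previous experiment and its outcome.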

We shall acknowledge, however, that Scripts 1 and 2 are formally equivalent. Indeed, Script 2 can be transformed into Script 3 by reordering its lines and shifting the time indices accordingly.

Script 3:
resultt = Wout(actiont–1, wt–2)
at = Ain(resultt, at–1)
actiont = Aout(at)
wt = Win(actiont, wt–1)
t = t + 1

Script 3 can be transformed into Script 4 by defining w′t = ⟨wt–1, actiont, wt⟩ and replacing W with a new algorithm W′ whose state is w′t. Note that w′t is merely an abstract construction that does not represent the world in the eye of the programmer.

Script 4:
resultt = W′out(w′t–1)
at = Ain(resultt, at–1)
actiont = Aout(at)
w′t = W′in(actiont, w′t–1)
t = t + 1

Renaming resultt to observationt (and W′ to W) makes Script 4 identical to Script 1, demonstrating that Scripts 1 and 2 are structurally equivalent. This equivalence shows that Script 1 is not intrinsically realist: while W’s output data is necessarily a function of W’s state, W’s state does not have to represent the state of the world (just as w′ does not in Script 4).
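The W′ construction can be illustrated with a toy sketch (hypothetical names and dynamics): a Script-2 world whose output depends on the action is wrapped into an algorithm W′ whose state bundles ⟨wt–1, actiont, wt⟩, so that its output is a function of its own state alone, as in Script 1:

```python
# Hypothetical sketch of the Script 3 -> Script 4 transformation.

def W_in(action, w):
    # Toy world: a counter that saturates between 0 and 2.
    return min(w + 1, 2) if action == "inc" else max(w - 1, 0)

def W_out(action, w):
    # Script-2 style output: the result depends on the attempted action.
    if action == "inc":
        return "moved" if w < 2 else "blocked"
    return "moved" if w > 0 else "blocked"

# W' bundles the previous state, the last action, and the new state,
# i.e. w'_t = <w_{t-1}, action_t, w_t>.
def Wp_in(action, wp):
    _, _, w = wp
    return (w, action, W_in(action, w))

def Wp_out(wp):
    # W''s output is a function of W''s own state alone, as in Script 1 ...
    w_prev, action, _ = wp
    # ... yet it still carries only action feedback, not a world snapshot.
    return W_out(action, w_prev)

wp = (0, "inc", W_in("inc", 0))  # w'_0, seeded with one initial interaction
results = []
for action in ["inc", "inc", "dec"]:
    results.append(Wp_out(wp))  # result_t = W'out(w'_{t-1})
    wp = Wp_in(action, wp)      # w'_t = W'in(action_t, w'_{t-1})
```

The loop has the shape of Script 1, yet nothing in w′ represents the world in the eye of the programmer beyond the history of interaction it accumulates.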

In conclusion, this appendix is intended to draw attention to the fact that an agent’s input data (and a robot’s sensor data) could be something other than an observation of the state of the world. Even though Scripts 1 and 2 are structurally equivalent, PC and PR still differ by the fact that a PC agent should be able to exhibit intelligent behaviours when fed with input data that does not constitute a representation of a pre-given world.


  1. We deliberately refrain from using the highly ambiguous notion of “computation” in the context of constructivism, see Riegler, Stewart & Ziemke (2013).
  2. The term algorithm is used in a general sense. At an underlying level, the algorithm could be designed as a multi-agent system, for example to represent the agent’s neurons and body cells.
  3. This description authorizes the coupling between the agent and its environment to evolve as the agent develops, since the structure of the agent’s “body” within algorithm W may evolve, as well as the data exchanged between algorithms A and W.


Clancey W. J. (1995) A boy scout, Toto, and a bird: How situated cognition is different from situated robotics. In: Steels L. & Brooks R. (eds.) The artificial life route to artificial intelligence: Building situated embodied agents. Lawrence Erlbaum Associates, Hillsdale NJ: 227-236.

Drescher G. L. (1991) Made-up minds: A constructivist approach to artificial intelligence. MIT Press, Cambridge MA.

Foerster H. von (1976) Objects: Tokens for (eigen-)behaviors. ASC Cybernetics Forum 8(3–4): 91–96.

Franchi S. (2013) Homeostats for the 21st century? Simulating Ashby simulating the brain. Constructivist Foundations 9(1): 93–101.

Füllsack M. (2016) Circularity and the micro-macro-difference. Constructivist Foundations 12(1): 1–10.

Georgeon O. & Cordier A. (2014) Inverting the interaction cycle to model embodied agents. Fifth International Conference on Biologically Inspired Cognitive Architectures, Boston MA. Procedia Computer Science 41: 243–248.

Georgeon O., Marshall J. & Manzotti R. (2013) ECA: An enactivist cognitive architecture based on sensorimotor modeling. Biologically Inspired Cognitive Architectures 6: 46–57.

Glasersfeld E. von (1974) Jean Piaget and the radical constructivist epistemology. In: Smock C. D. & Glasersfeld E. von (eds.) Epistemology and education. Follow Through Publications, Athens GA: 1–24.

Glasersfeld E. von (1995) Radical constructivism. Falmer Press, London.

Llinás R. R. (2001) I of the vortex. MIT Press, Cambridge MA.

Maturana H. R. (1978) Biology of language: The epistemology of reality. In: Miller G. & Lenneberg E. (eds.) Psychology and biology of language and thought: Essays in honor of Eric Lenneberg. Academic Press, New York: 27–63.

Maturana H. R. & Varela F. J. (1987) The tree of knowledge: The biological roots of human understanding. Shambhala, Boston.

O’Regan J. K. & Noë A. (2001) A sensorimotor account of vision and visual consciousness. Behavioral and Brain Sciences 24(5): 939–1031.

Oudeyer P.-Y., Kaplan F. & Hafner V. (2007) Intrinsic motivation systems for autonomous mental development. IEEE Transactions on Evolutionary Computation 11(2): 265–286.

Oyama S. (1985) The ontogeny of information: Developmental systems and evolution. Cambridge University Press, Cambridge. Republished in 2000.

Perotto F. S. (2013) A computational constructivist model as an anticipatory learning mechanism for coupled agent–environment systems. Constructivist Foundations 9(1): 46–56.

Porr B., Egerton A. & Wörgötter F. (2006) Towards closed loop information: Predictive information. Constructivist Foundations 1(2): 83–90.

Powers W. T. (1973) Behavior: The control of perception. Aldine de Gruyter, New York.

Riegler A. (2002) When is a cognitive system embodied? Special issue on “Situated and embodied cognition” edited by Tom Ziemke. Cognitive Systems Research 3: 339–348.

Riegler A. (2007) The radical constructivist dynamics of cognition. In: Wallace B. (ed.) The mind, the body and the world: Psychology after cognitivism? Imprint, London: 91–115.

Riegler A., Stewart J. & Ziemke T. (2013) Computation, cognition and constructivism: Introduction to the special issue. Constructivist Foundations 9(1): 1–6.

Roesch E. B., Spencer M., Nasuto S. J., Tanay T. & Bishop J. M. (2013) Exploration of the functional properties of interaction: Computer models and pointers for theory. Constructivist Foundations 9(1): 26–33.

Russell S. & Norvig P. (2003) Artificial intelligence, a modern approach. Prentice Hall, Englewood Cliffs NJ.

Sutton R. & Barto A. (1998) Reinforcement learning: An introduction. MIT Press, Cambridge MA.

Vörös S., Froese T. & Riegler A. (eds.) (2016) Special issue “Exploring the diversity within enactivism and neurophenomenology”. Constructivist Foundations 11(2).

Weng J., McClelland J., Pentland A., Sporns O., Stockman I., Sur M. & Thelen E. (2001) Autonomous mental development by robots and animals. Science 291(5504): 599-600.

About the journal

Constructivist Foundations is a scholarly peer-reviewed e-journal concerned with the critical interdisciplinary study of constructivist and related approaches to science and philosophy.

It is indexed in the ISI Arts & Humanities Citation Index (AHCI) and currently has more than 10,000 subscribers.

See for more information