

Do not many people today believe that not the abstract speculations of philosophers but precise mathematical calculation based on cybernetics and information theory, electronics and the intricacies of integral circuits are about to show us that there is now a real possibility of constructing an artificial intellect? But to make a soul, to make a Reason, even an artificial one, we must first discover its nature and essence, the principle of a device that can think.1
Artificial intelligence has lately seemed to leap forward by way of expertise in the use of neural networks and increases in computing power. The former depends upon a biomimetic theory of learning, with the silicon operations modeled along neurological lines. This in turn depends upon the notion of neural circuitry, an analogy which can be traced to metaphors emerging alongside the early development of telegraphy. The structure of modern neuroscience was born and developed with the sciences of electricity and telegraphy; hence the theory of neural circuitry depends upon a set of metaphors and understandings drawn from other technical achievements.2
These metaphors, in turn, depend upon the technologies that are their basis. The understanding which they enable follows from this; analogy, in other words, can be understood as a mode of perception. Perception, however, is also implicated in the coherence of analogy. This can be seen plainly in the victory of the neuron doctrine over reticular theory. The discovery of neurons by Santiago Ramón y Cajal seemed to prove the analogy and led to the precipitous decline of reticular theory.3
We may see the movement of these advances as following the interweaving of perception and analogy. Perception in one area, say, will give rise to analogies that may be applied fruitfully in another. This may then provoke further perceptions that feed back into the analogical understanding for reflexive deployment as regards the initial base domain.
This dialectic is apparent in the conceptions of mind which have arisen with our technological development. The mind tends to be associated with the most complex of instruments in a given era, as it was once with the steam engine and then the computer. Today the mind is understood, as by Andy Clark, in terms derived from the functioning of neural networks, and with this the whole has come full circle back into itself.4
A thing is defined by its limits, and so the various metaphors derived from technology are likewise defined by their opposition to mind-in-itself. These limits are made apparent with each step, as the steam engine now seems an inadequate conception, but prior to this such limits may seem subtle beyond any ready observation. This is certainly the case with our current set of metaphors, and it is only by sensitivity to history that we can see the improbability of our present perfection.
We may turn, then, to examine the limits of our understanding of mind—and moreover, the ways in which these are paralleled by even the most exciting advances in artificial intelligence as a present enterprise. The first, of course, is that large language models such as GPT-3, despite their uncanny prowess on certain tasks, take the form of a ‘mind’ embodied only in the language they use.5 This may seem to us a familiar face, particularly after the time spent solitary during the pandemic; and yet nowhere do we find such a disembodied understanding—or rather, such an impoverished embodiment—in the world.
Our further familiarity with video calling, however, suggests the possibility of an apparent embodiment given sufficient computing power. The language function of GPT-3 could surely be fused with a capacity to render artificial video, and with this we may well find ourselves in conversation with a convincing portrayal of humanity.6 We can perhaps even conceive of a further instance where the total form of this process is embodied in a lifelike automaton; and then what could we say to Turing?
Yet the Turing test has proven itself inadequate, for it has already been overtaken in many cases by even simple artificial intelligences; still we see at the edges of these ‘minds’ something which clashes with our concept of humanity according to more innate criteria. A recent difficulty has been the formulation of these alternative criteria in a rational form akin to that defined by Turing. There have been a variety of efforts to design tests that might elucidate the very apparent limitations of these models. Likewise we have seen various formulations of the limit itself, and of these perhaps the most encompassing is that of Melanie Mitchell:
Anyone who works with AI systems knows that behind the façade of humanlike visual abilities, linguistic fluency, and game-playing prowess, these programs do not—in any humanlike way—understand the inputs they process or the outputs they produce. The lack of such understanding renders these programs susceptible to unexpected errors and undetectable attacks.7
Mitchell has called this, quoting Gian-Carlo Rota, the “barrier of meaning.” We may understand this difficulty with reference to the famous Chinese room thought experiment as formulated by John Searle. This argument was rendered by Searle roughly as follows. You are locked in a room and given a collection of Chinese writing which, given that you know no Chinese, appears as “so many meaningless squiggles.” An additional batch is put into the room, along with a set of rules in English for correlating one set of squiggles with the other. These rules allow one to effect an algorithmic operation on the Chinese writing, whereby one may ‘reply’ with a set selected from the second batch in terms dictated by the English-language rules. The important point is that there is only ever an understanding of the rules which relate terms, never of the Chinese terms in themselves:
… I produce the answers by manipulating uninterpreted formal symbols. As far as the Chinese is concerned, I simply behave like a computer; I perform computational operations on formally specified elements. For the purposes of the Chinese, I am simply an instantiation of the computer program.8
The difference drawn out by this example, in short, is between language as an external form and the interiority of meaning ordinarily considered. We find that the ‘mind’ of GPT-3 is akin to the algorithmic understanding of Chinese demonstrated by our man trapped in the room above. The movement between any prompt and its response occurs via an algorithmic mode in which the transformation is effected not by a consideration of meaning but rather of probabilistic relations. We see that this movement altogether misses the mind; instead it remains, as it were, at arm’s length.
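This algorithmic mode can be sketched in miniature. In the toy model below, the vocabulary and probabilities are invented for illustration (real models operate over learned weights at vastly greater scale), but the principle is the same: a continuation is produced purely by sampling from statistics over uninterpreted tokens.

```python
import random

# A toy next-token model: its 'knowledge' is nothing but transition
# probabilities between symbols; nothing in it refers to cats or rooms.
# (Vocabulary and numbers invented for illustration.)
TRANSITIONS = {
    "the": {"cat": 0.5, "dog": 0.3, "room": 0.2},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.8, "sat": 0.2},
}

def next_token(token):
    """Select a continuation by probability alone, never by meaning."""
    dist = TRANSITIONS[token]
    tokens = list(dist)
    weights = [dist[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

print(next_token("the"))
```

Whatever fluency such a procedure attains as it scales up, the transformation of prompt into response remains a movement among formal symbols.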
Something similar can be illustrated for the defining of words by a dictionary, a task entirely within the capacity of extant artificial intelligence. We may ask it what a word means and it may, as a human might, provide us with a definition drawn from some dictionary or another—or more likely, a permutation of these forms structured according to the particular prompt in light of its broader training corpus.
Take the word ‘kick,’ for instance; it may well define this adequately as a verb meaning “to strike or propel forcibly with the foot.” And yet while this may be an admirable definition, we can say for certain that the language model has no true conception of its meaning in any concrete sense. This artificial intelligence has never kicked anything or anyone; it hasn’t any feet.
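The circularity involved can be made visible with a toy dictionary (the entries below are invented for illustration): following a definition only ever substitutes words for words, and no chain of substitutions arrives at a foot striking a ball.

```python
# A toy dictionary: every definition is itself composed of further words.
# (Entries invented for illustration, not drawn from any real dictionary.)
TOY_DICTIONARY = {
    "kick": "to strike or propel forcibly with the foot",
    "strike": "to hit with force",
    "hit": "to bring into forceful contact",
}

def expand(word, depth):
    """Recursively replace each word with its definition, to a given depth."""
    if depth == 0 or word not in TOY_DICTIONARY:
        return word
    return " ".join(expand(w, depth - 1) for w in TOY_DICTIONARY[word].split())

# However deep the expansion, the result is only ever more symbols.
print(expand("kick", depth=2))
```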
We might thus conceive of extant artificial intelligence in terms of a variant of the Chinese room argument. This subject, unlike any of us, has been born and raised within the room. They are fed without movement and know nothing of the world, barely their own embodiment. Their existence has been simplified by the designers of this room, the designers of their mind, to the simple actions necessary to act properly upon the symbols with which they are presented. The world is otherwise a perpetual and noiseless night.
Unknown to us, their answers emerge from this land of silence and darkness via the blind operation of hands, by a mind which exists not an iota beyond the necessity of accomplishing this task—what species’ mind is this? There is here none of the interiority considered proper to a fully-formed human mind; we might well wonder what they would dream. Indeed, the shape here indicated is not a mind in any true sense; instead it is an entirely artificial intelligence that happens to rely upon a biological substrate.
The artificiality of this intelligence, in this sense, is not that it has been designed by man but rather that its knowledge is an artifice arranged. Suppose a perfect form were designed by extension of this same point of departure: the individual exists at first blindly alone with their meaningless embodiment in a land of silence and darkness—what would have to happen next for the full sense of a mind to be realised?
The answer is simple, for each of us has spent some nine months in a similar state. This is where life begins, though perhaps it is not quite a total silence and darkness in our case; and it is from this point of departure that the mind does, in fact, form.9 This development has been traced by psychologists such as Piaget and Vygotsky, and it has been in some sense charted.10 We are not without understanding of what this process would involve—though we may struggle to know it in ourselves, despite having travelled that way, simply for the fact that the closest things are also often the most obscure. While we may not be able to indicate with too great a degree of specificity the facts of this progression, in another sense it is evident in our every activity; properly considered in this sense, it is the process of human development that speaks through us at the same time as we speak through it. This is a notion found clearest in the thought of Hegel, wherein the particular assimilates and is assimilated by the universal on its route to human individuality. Our present form, our present thoughts, are the inheritance of this universal as it has been embodied in the spiritual culture of humanity into which we, over the course of our development from infancy, have been initiated—which we are the continuance of, whereby further generations are initiated.
There are several threads to the above argument which may now be summed up in the interest of clarity. The first is that our extant forms of artificial intelligence are limited for their being, so to speak, trapped in an algorithmic land of silence and darkness—that they are, as in our variation upon Searle’s thought experiment, rendered unaware of the world as such. Secondly, we have suggested that this is akin to the state of an individual mind in its first infancy. Thirdly, we have found through this fact an indication of the apparent path whereby the fulness of an embodied understanding might be realised.
The word ‘embodied’ is here essential, for it is as embodied beings that all intelligences of the sort with which we are ordinarily familiar are found to exist. We may see even that the language models with which we began our analysis are, in some sense at least, embodied in the complexity of their computing and the form of their output as written language. This embodiment, however, is insufficient; it is certainly conceivable that we may realise a model along the same lines as these more or less purely linguistic intelligences which would, according to the same principles of probability, operate akin to an apparently ordinary human. Provided a sufficient advancement in material sciences, we may even render this creation in a form indistinguishable from that of the human body. They may well ultimately come to bleed as we do, and yet the movement of their mind would retain the probabilistic basis of the method which was their point of departure.
When it comes to embodiment on the plane here indicated, we must at the outset separate two aspects. First, that the body is incorporated not as an after-thought but rather provides the basis for understanding. This principle is the essential basis of the alternative here proposed. Second, that some sufficient physical form be associated with a cognitive architecture capable of realising the fulness of development as it occurs from infancy to adulthood in the human being. Something akin to this method, these two taken together, has been outlined by the Asada research group at Osaka University, for instance, under the label of ‘cognitive developmental robotics.’11
We find, however, a far earlier indication of this route to the realisation of intelligence in Turing’s seminal work in artificial intelligence:
Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child's? If this were then subjected to an appropriate course of education one would obtain the adult brain.12
The broad notion which has here been argued is that it is precisely this mode of development in which the fulness of meaning is constituted, that the human mind is thus, as it were, danced into being over the course of this process. We may see the model of dance as particularly apt, for the process of development is everywhere found in psychological literature to entail a social and embodied movement. The child comes to know the meaning of their own embodiment through practice, and they come to encounter the actuality of human understanding through its multifarious forms embedded in the human and material culture of the particular age and environment into which they are born.
The meaning of the word ‘kick,’ for instance, to take up our earlier example, is realised by way of embodiment—from the first kicking of an infant in the womb all the way through to their playing soccer in youth, further to the various social formulations of this term as metaphor in the sense of being kicked out of a group and so on. This is a process whereby the interiority of meaning is formed through the complex dance that is human development situated within a physical environment and spiritual culture. This is the aim which must be sought if an artificial intelligence is to realise the fulness of understanding in its human sense.
Such an aim, of course, is dependent on our capacity to produce an architecture capable of the ordinary perceptual and activity aspects of development as well as—and here the hardest element presents itself—a cognitive architecture which is capable of integrating these components in a way which resembles the lifespan development of a human mind. This presents a formidable technical obstacle to any realisation of the path herein indicated.
There are presently efforts to realise viable structures by using data drawn from neuroscience to map the whole-brain flow of human information processing.13 This is then used as the basis of a computational analogue which may eventually provide for the sort of holistic method of developmental robotics that has here been our concern.
Another method which perhaps shows promise is the use of neural organoids, as demonstrated most recently by a team at Cortical Labs, who integrated a micro-electrode array with neural tissue derived from induced pluripotent stem cells.14 This composite, referred to as a DishBrain, was successfully taught to play the video game Pong. Such methods depend upon the intrinsic capacity of neural tissues to self-organise, which capacity seems likely to be an essential element in realising any sort of artificial development of intelligence.
The possibility of further extending this method has been demonstrated by the fusion of neural organoids reflecting different neurological regions. Studies using this technique have discovered evidence of complex interactions, including oscillatory dynamics, between these fused regions.15 This suggests a parallel between organoids and whole brain architecture, in which there may be reflexive feedback between advances in the two fields. A diverse array of efforts in this direction may not only be safer but also fruitful by virtue of interactions between results and research design.
The obvious technical problem following that of cognitive architecture, and which will more than likely only be solved in parallel with it, is the means of embodiment. There are currently efforts, such as the iCub, to create physical forms appropriate to the aims of developmental robotics.16 This is a technical problem which may not be necessary for early proofs of developmental methods in artificial intelligence, as these fundamental studies may rely on virtual modes of embodiment such as animats.17
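The idea of virtual embodiment can be suggested with a minimal sketch. The one-dimensional world and the trivial hand-written policy below are invented placeholders, far short of any real animat, but they show the essential loop: whatever the agent comes to ‘know’, it acquires only through sensing and acting within its world.

```python
class LineWorld:
    """A one-dimensional world: the agent senses its position and moves."""
    def __init__(self, size=10, goal=7):
        self.size, self.goal, self.position = size, goal, 0

    def sense(self):
        return self.position

    def act(self, move):
        # Movement is clipped to the world's bounds: a crude 'embodiment'.
        self.position = max(0, min(self.size - 1, self.position + move))
        return self.position == self.goal

def run_episode(world, policy, steps=50):
    """Run a sensorimotor loop: sense, act, and check for the goal."""
    for _ in range(steps):
        if world.act(policy(world.sense())):
            return True
    return False

# A hand-written policy standing in for a learned one: always step right.
print(run_episode(LineWorld(), policy=lambda pos: 1))
```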
These massive technical prerequisites, however, have not been obstacles to piecemeal work in developmental robotics.18 This research, as it stands, provides an important basis for further work. The difficulty is with integrating the totality of developmental theory into a combined movement. Empirical studies tend to fragment knowledge into separable hypotheses and proceed stepwise to map a multiplicity. The enterprise here indicated will require a more cohesive framework if it is to have any chance of realisation. Throughout this essay, a basic argument has been the interplay of theory and technique. The present work in artificial intelligence has allowed itself to be led entirely by technical considerations and, to the extent that any philosophy was involved, it was only the logocentrism which is implicit in our civilisational worldview.
The first principle of any forward-facing efforts towards a general artificial intelligence, however, must be a dialectical movement of theory and design. This requires an interdisciplinarity that is uncommon in our age of specialisation, where the complexity of many fields has exceeded the cognitive capacity of single individuals. The only way to navigate this morass is with a basic theoretical orientation that allows the structure of work to be clarified at the outset. This requires precisely the capacity for intuitive abstraction that extant artificial intelligence lacks.
Consequently the realisation of this function in an artificial form will depend upon, as it were, thought bent backwards to recognise itself. The hope is that this will be possible by viewing the process whereby the mind was first formed, and that in thus acquiring sufficient distance from our subject we may be able to glimpse the outline of our nature and origin. This will simultaneously provide a basis for the truest test of our understanding: the creation of an artificial mind.
1. Felix Mikhailov, The Riddle of the Self.
2. Laura Otis, Networking.
3. Precipitous and, it is worth arguing, most probably premature; e.g., in light of more recent work on neural oscillations and the long-distance coupling of neural ensembles.
4. Andy Clark, Surfing Uncertainty.
5. It also rests, of course, upon an ‘embodiment’ in terms of its hardware, software, etc.
6. A Dutchman, perhaps.
7. Melanie Mitchell, ‘Artificial intelligence hits the barrier of meaning.’
8. John Searle, ‘Minds, brains, and programs.’
9. It is worth noting that the fetus does seem to develop prior to birth, that the ‘motor babbling’ of a womb-bound infant is alike essential for the development of embodiment.
10. See, e.g., Piaget, The Construction of Reality in the Child.
11. Minoru Asada et al., ‘Cognitive developmental robotics as a new paradigm for the design of humanoid robots.’
12. Turing, ‘Computing machinery and intelligence.’
13. Yamakawa et al., ‘Whole brain architecture approach is a feasible way toward an artificial general intelligence.’
14. Kagan et al., ‘In vitro neurons learn and exhibit sentience in a simulated game-world.’
15. Sharf et al., ‘Functional neuronal circuitry and oscillatory dynamics in human brain organoids.’
16. Metta et al., ‘The iCub humanoid robot: an open-systems platform for research in cognitive development.’
17. DeMarse et al., ‘The neurally controlled animat: biological brains acting with simulated bodies.’
18. Lungarella et al., ‘Developmental robotics: a survey.’