What AI really tells us about the culture war
The Turing model of intelligence falls short of imagining the full scope of what humans are capable of, even as regards their ability to use language...
Written by Michael Weyns.
Aporia magazine recently published a piece on the relationship between developments in Artificial Intelligence and the status of a long-standing tug-of-war in Western thought. While the essay in question offers a profound and thought-provoking meditation on the implications of modern AI for the longevity of philosophical inquiry in general—and questions about the nature of intelligence and human existence in particular—I would argue it ultimately falls short of its intended purpose. In what follows I will attempt, however provisionally, to outline why.
The fundamental objection
The essay seems to claim that, as the repertoire of AI keeps expanding, the Turing model of intelligence comes closer to winning out vis-à-vis its Wittgensteinian alternative. Roughly, what this means is that the culture war represented by these two sides is being decided in favour of Alan Turing, to the great detriment of so-called “linguistic holism”, of which, especially in the later phase of his life, Ludwig Wittgenstein was a primary exponent. To the essay’s credit, the central conflict it has outlined cannot simply be waved away as a case of “tennis without nets”—a competition that is only a sham, insofar as both contending parties refuse to decide on a mutually binding rule set. As is made clear, Turing’s reasons for establishing his famous “Imitation Game” were provoked, at least in part, by Wittgenstein’s own conception of language, deemed to have an intrinsically human dimension. If Wittgenstein were actually on the losing side of the culture war, he would be found losing on the basis of certain criteria to which he himself had subscribed.
The later Wittgenstein believed that “it is only in use that the sentence [Satz] has sense.” Implicit here is that meaning cannot be reduced to the application of deterministic rules, but rather, emerges from a gestalt of human practice. If non-human, computational intelligence could be shown to simulate human-level linguistic ability, then clearly the position that meaning is the exclusive province of irregular human lifeforms would prove untenable.
Turing devised the “Imitation Game” or “Turing Test” (as it is perhaps more commonly known) as a method by which to indirectly verify another (potentially artificial) agent’s ability to think. The Imitation Game does not actually aim to test directly whether the agent being tested can think, only whether it can simulate or, rather, imitate (human) thought. As Turing put it: “Instead of attempting [to define cognition as such] I shall replace the question [‘Can machines think?’] by another, which is closely related to it and is expressed in relatively unambiguous words.” What the Imitation Game sets out to test is whether a machine can make use of language in a way that is sufficiently persuasive to a human interlocutor. Per Wittgenstein’s “Meaning is Use” credo, this should suffice to demonstrate that machines can engage in discourse in a way that is also meaningful to human evaluators.
By replacing the question “Can machines think?” with “Can machines succeed in the Imitation Game?” Turing was doing more than reducing human cognition to the sophisticated use of language (a “behaviouristic” tendency he and Wittgenstein, even in the latter’s “mature” phase, had in common); crucially, he was also performing the inverse operation of reducing language-use to a peculiarly disembodied kind of cognition.
When pre-empting some of the possible objections that might arise in response to his reformulation of the question “Can machines think?” Turing insisted that the new question was particularly useful because it allowed us to draw “a fairly sharp line between the physical and intellectual capacities of man.” He went on to say:
No engineer or chemist claims to be able to produce a material which is indistinguishable from the human skin. It is possible that at some time this might be done, but even supposing this invention available we should feel there was little point in trying to make a "thinking machine" more human by dressing it up in such artificial flesh. The form in which we have set the problem reflects this fact in the condition which prevents the interrogator from seeing or touching the other competitors, or hearing their voices… The question and answer method seems to be suitable for introducing almost any one of the fields of human endeavour that we wish to include. We do not wish to penalise the machine for its inability to shine in beauty competitions, nor to penalise a man for losing in a race against an aeroplane. The conditions of our game make these disabilities irrelevant. The "witnesses" can brag, if they consider it advisable, as much as they please about their charms, strength or heroism, but the interrogator cannot demand practical demonstrations.
The passage is quoted at length because it reveals what I consider to be the fundamental fault in both the Turing and Wittgensteinian models of intelligence. The test an agent must pass in order to prove itself a human-level intelligence assumes that the human being as such can be reduced, qua its essential properties, to a “thinking thing.” To prove one is a human-level intelligence, one must avoid all inter-personal contact with the interlocutor, as well as any “practical demonstrations”, and instead rely only on one’s ability to manipulate words in a sophisticated manner. Passing the Turing Test may indeed suffice to demonstrate that one is equal to a “thinking thing.” Being a “thinking thing”, however, is hardly the same as exhibiting human-level intelligence. And therein lies the rub.
Linguistic holism revisited
It is quite likely that Turing’s formulation of the Imitation Game was influenced, in no small part, by definitions already set out by the father of modern epistemology, René Descartes. When Turing insisted on drawing “a fairly sharp line between the physical and intellectual capacities of man” he was echoing the central aspect of Cartesian dualism, which itself insists on a sharp division between the immaterial res cogitans (“thinking thing”) and the spatiotemporally embedded res extensa (“extended thing”)—roughly, between mind and world.
In the Discourse on the Method, Descartes assumes a mechanistic view of the body, which is likened to “a machine made by the hands of God.” He goes on to argue that “were there such machines exactly resembling the organs and outward form of an ape or any other irrational animal, we could have no means of knowing that they were in any respect of a different nature from these animals.” Animals are thus located wholly in the “extended realm”, bereft of any participation in an inner, mental world. The situation is quite different for humans. Humans are the “thinking things” par excellence: “if there were machines bearing the image of our bodies, and capable of imitating our actions as far as it is morally possible, there would still remain… tests whereby to know that they were not therefore really men.” The existence of man-like machines, “capable of imitating our actions”—i.e., capable of enacting Turing’s “practical demonstrations”—is deemed more plausible than such machines being imbued with the additional ability to “use words or other signs arranged in such a manner as is competent to us in order to declare our thoughts to others.”
Because all living beings (and not just human beings) are equipped with bodies, Turing, like Descartes, downplays the role of the body in human cognition. If a being is capable of acting like a “thinking thing”—i.e., proves itself capable of holding its own in an intelligent conversation, by arranging words “variously so as appositely to reply to what is said in its presence”—then it will have proven itself to be essentially, intellectively human. The Turing Test is based on this fundamental assumption; and the same goes for the central claims of the essay I aim to critique.
In all this, Wittgenstein only counters Turing insofar as he agrees with Descartes that it is implausible that machines will be able to imitate humans’ use of speech. On this front, Turing does at least seem to be triumphing over Wittgenstein (and Descartes). As current Large Language Models (LLMs) like OpenAI’s much-publicized ChatGPT have demonstrated, machines are quite capable of engaging in relatively advanced levels of meaningful discourse insofar as they are able to imitate (“learn”) many of the various, contextually dynamic ways in which humans have historically made use of language. But Wittgenstein and Turing both operate within a Cartesian horizon, and it is just this (beyond any other potential hard limits LLMs or other AI models might still encounter) that, I would argue, attenuates any relevance their claims might have with respect to what is especially human about cognition and language-use.
According to the essay, by invalidating Wittgenstein, Turing’s “linguistic reductionism” also invalidates much of “linguistic holism” as a whole. Continental philosophers like Martin Heidegger, Maurice Merleau-Ponty, and Jacques Derrida, to name only a few, likewise insisted that there was something intrinsically human about language use. How would they contend with the challenges mounted by today’s ascendant AI models?
Heidegger, for example, did indeed “claim that language is tied to ‘existential embeddedness’ and ‘Being-in-the-World’” (as the essay claims). However, as a more comprehensive reading of the philosopher’s oeuvre reveals, this does not entail that “existential embeddedness” is thought to be necessary for one’s ability to hold a conversation (though, in a certain sense, it very well might be). It is rather the case that being imbued with linguistic ability is itself necessary (though perhaps not sufficient) for the realization of something like “existential embeddedness”.
Heidegger famously said: “Speech [Sprache] is the house [Haus] of Being. Within her enclosure [Behausung] dwells [wohnt] the human.” Elsewhere, he echoed a verse by the poet Hölderlin touching on the same theme: “Poetically man dwells [dichterisch wohnet der Mensch].” The idea, crudely summarized, is that human being (Wesen) is governed by what the poets are able to “birth” (in terms of poetic fabulations) by attending to the inchoate pulsations of Being. The existential schemata governing individual and collective action; the deep-rooted habituations which constitute ethics in the broadest sense—these inhere in the speech-patterns we adopt and perpetuate. Speech “primes” our engagement with the world by presupposing a “Grammar of Creation” (George Steiner) through which the whole ontological order of the cosmos is filtered. In short, language encodes the paradigms and models that structure our affordances, our inclinations, our desires. It conditions how we attend to others, how we “care” (a central concept for Heidegger) for the things around us, how we are disposed to the world at large.
Given that the speech-patterns we adopt might controvert our nature as creatures who dwell poetically, language is even capable of obscuring its own centrality in the complex constellation of human being. The language of modernity, inaugurated in some sense by Descartes and his contemporaries, gives clear evidence of this possibility. If we really start believing that we are merely “thinking things,” then speech eventually denatures and turns into a disembodied tool used solely for “communication”, i.e., information exchange. (By contrast, authentic human speech enables what Heidegger calls Erschlossenheit, the “disclosedness” or “disclosure” of a holistically structured world, replete with a complicated set of cohering and competing normative predispositions.) When human beings become dissociated from their nature as creatures who dwell poetically “on the earth,” a parody of speech (mere Gerede, i.e., “chitter-chatter”, “parroting”, “signalling”) is the most likely result. Speech then assumes the qualities implied by a mistranslation of Derrida’s famous phrase, Il n’y a pas de hors-texte (“there is no outside-text or unnumbered page,” but sometimes wrongly rendered in English as “there is nothing outside the text”).
LLMs operate in a way that assumes the scope of language-learning is exhausted by attending to intratextual patterns of usage. For an LLM, it is true that, Il n'y a rien en dehors du texte—that there is effectively nothing outside of the textual corpus to which it has been exposed (or, to paraphrase the early Wittgenstein: as regards an LLM, “the limits of its language are the limits of its world”). Language becomes an echo chamber, feeding on its own self-referentiality. In order to retain a residual connection to Being and the world, LLMs must rely on the fact that much of the language they have been trained on originated in the minds of existentially embedded humans. LLMs have essentially no access to the “Offenbarkeit of Being”; they lack an engaged exposure to the world at large, which is nonetheless necessary to ensure the continued replenishment of our lingual reservoirs.
These days much the same can unfortunately be said for many, if not most, humans. What the success and frantic adoption of LLMs discloses is a human mode of attending to the world Heidegger referred to as das Gestell. The self-conscious and foundational subjectivity of the Cartesian “thinking thing” presupposes a kind of world-disclosure, a conception of the “extended thing” as something that can be “served up” (bestellt) for the sake of individual consumption. In Heidegger’s time das Gestell was mostly limited to the various ways energy could be technologically extracted from the environment. As he put it in The Question Concerning Technology:
The hydroelectric plant is not built into the Rhine River as was the old wooden bridge that joined bank with bank for hundreds of years. Rather the river is dammed up into the power plant. What the river is now, namely, a water power supplier, derives from out of the essence of the power station.
In our own time, the dominion of the self-assertive subject has greatly expanded. With the advent of the Internet and the World Wide Web, more and more of the world is available to us via a unified interface, through which we access a vast multitude of interconnected servers that are more or less always “present-at-hand”. Under the influence of a general push towards a more advanced Industry 4.0, the world increasingly resembles a “metaverse,” essentially mediated through our uses of technology. Indeed, the Internet itself is rapidly being extended to include sensors and actuators of various kinds, thus becoming an Internet of Things and thereby confirming its enmeshment with and encroachment on the physical world. LLMs are likely to slot into this general matrix, further consolidating the world supremacy of das Gestell. As the World Economic Forum recently put it: “LLMs and their natural language interfaces, have vast potential to revolutionise manufacturing efficiency, worker engagement, product quality and adoption.” By relying on advanced AI, the human being qua “thinking thing” merely tightens its hold on the res extensa, and, in the process, is likely to become ever more alienated, uprooted, and existentially dis-embedded—as even many of our most cherished, most “creative” activities and pastimes become appropriated by an inexorable drive towards universal accessibility.
The quarrel between the Ancients and the Moderns
The essay locates the inception of the culture war between science and the “humanities” in 1880, when evolutionary biologist T. H. Huxley wanted to elevate the status of scientific inquiry vis-à-vis the study of literature. In truth, this culture war goes back much further—to the quarrel between the Ancients and the Moderns (as rediscovered, for example, by the philosopher Leo Strauss).
The quarrel between the Ancients and the Moderns, most broadly conceived, asks whether modern conceptions of human being are superior to pre-modern ones, or vice versa. Circumscribing the scope of this quarrel requires, it would seem, a provisional account of what distinguishes modern thought from pre-modern thought—or, at least, a criterion to decide who might count as a modern thinker and who as a pre-modern one. Strauss counted thinkers like Machiavelli, Descartes, and Hobbes as quintessentially modern thinkers, whose philosophies he juxtaposed with those of Plato and Aristotle, prominent among the ancients.
While it would be far beyond the scope of this essay to discuss all the multifarious ways in which these groups differ (often profoundly), suffice it to say that the ancients did not subscribe to Descartes’ conception of humans as “thinking things”. Aristotle, for example, famously characterized the human being as a zōon politikon, a “political, state-building, or communal animal”. To him it was obvious that humans were pre-eminently designated for political organisation, “in greater measure than any bee or any gregarious animal”, because “man alone of the animals possesses speech.” For Aristotle, speech was not merely an instrument with which to think rationally or converse; instead, the use of speech was intimately tied up with the nature of organizing a city: “it is the special property of man in distinction from the other animals that he alone has perception of good and bad and right and wrong and the other moral qualities, and it is the partnership in these things that makes a household and a city-state.”
Only insofar as speech is able to furnish an ethics by which humans can collectively engage in the art of state-building does it set humans who wield it apart from other animals. But this means that merely talking wisely about state-building (as machines might technically be able to do at some point) is only a small part of the equation. What sets humans apart from other “thinking things” is clearly to be found in the embodied modes of being that evolved so that they would actually be able to psychosocially perform all that the “kingly art” (not to mention tactical warfare) requires. While an AI system may eventually be able to pass the Turing Test, it is quite improbable that it would then also be able to pass an Aristotelian one. The answer to the question “Can machines successfully build and manage states?” (or alternatively, “Can machines convincingly imitate political animals?”) is clearly a resounding no. (More concretely: Would a thinking machine ever be able to imitate, say, a Caesar, a Napoleon, a T. E. Lawrence? It seems unlikely.)
The essay claims that “AI's successes are shunting many old questions about human nature away from handwaving in the humanities towards testable claims in the sciences.” The idea seems to be that as AI increasingly accommodates the whole gamut of human capabilities—as the computational model of human intelligence gains currency—the humanities will have been exhausted as a serious body of inquiry. Channelling Heidegger (perhaps heretically), I would argue that what the humanities can still teach us (even if the humanities departments themselves seem to have largely forgotten this) should inevitably relate to the ways man is best able to dwell poetically, so that his essence as a communal animal is optimally harnessed. So long as machines fail to pass the Aristotle Test (and even then), the culture war will rage on, and the humanities should retain their pride of place as inexhaustible repositories of human wisdom. After all, their enduring value lies not so much in how well they are able to explain the neuronal mechanisms behind “matters of mind and behaviour” as in what they are able to elucidate in terms of the broad narrative patterns that govern human conduct.
Whether the computational model of intelligence ultimately holds (even partially), remains an open question. It may one day be possible to bridge the gap between machines and thinking things (though I suspect that day is still further removed than some might hope), or it may not. What I have tried to argue throughout this piece pertains specifically to the ways in which the Turing model of intelligence falls short of imagining the full scope of what humans are capable of, even as regards their ability to use language. While the essay rightly mentions that it is improper to discredit Turing’s contributions because of contingent weaknesses in the original test, it would be equally improper to discount the philosophical presuppositions made by the test—especially insofar as they constrain what might be learned from passing it successfully. It is simply not the case that having a machine pass the Turing Test would mean redrawing “the borders between science and the humanities.” Crucially, linguistic holism, as conceived by Continental philosophers, would not thereby be discredited. The fact that such misapprehensions continue to circulate should encourage us to take the humanities more seriously than we currently do (though certainly we would also benefit from more cross-pollination between the various academic disciplines). This would make us more sensitive to the exact nature of AI, its relation to ourselves, and its implications for the ongoing culture war.
Michael Weyns is an independent writer with an M.Sc. degree in computer science and a background in philosophy. He currently lives and works in Belgium and divides his time between doctoral research and forays into intellectual history.