Theory Brief #13: Artificial Unintelligence (AU) and the Problem of Perpetual Patchability
AI systems are fundamentally dumb, but they're always being improved -- or so their makers say. How to critique an object that is forever being unmade and remade?
It is increasingly recognized that large language models (LLMs) are prone to erroneous outputs, a phenomenon known as hallucinations. As AI spreads ever wider into the technological substrate of late modernity—summarizing news stories, synthesizing search results, condensing research papers, boosting office-worker productivity, and so on—the hallucinatory effects of AI are only likely to grow more consequential.
To give just one relatively innocent example: In 2022, Google announced that it was deploying its Multitask Unified Model (MUM) to produce “snippets,” or brief summaries, of search results. So how’s it faring? Say you want to learn whether Switzerland is a member of the European Union. You might plausibly search for a phrase like “Switzerland EU.” As of May 2024, the phrase results in the following Google-manufactured summary of a Wikipedia page devoted to Switzerland-European Union relations: “Switzerland is a member state of the European Union (EU).”
The only problem, of course, is that this is completely wrong. In fact, the first sentence of the Wikipedia page that Google summarizes states exactly the opposite of Google’s AI-driven summary: “Switzerland is not a member state of the European Union (EU),” Wikipedia writes (emphasis added).
So how did Google’s AI-driven model get it so wrong? How can artificial intelligence be so confidently unintelligent as to omit something as basic as a negation, thereby inverting the original proposition’s meaning and upending its truth-content?
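To see how trivially detectable this kind of inversion is, consider a deliberately naive sketch (hypothetical, and in no way a description of Google’s actual pipeline): a few lines of Python that merely check whether a source sentence and its one-line summary agree on the presence of a simple negation cue.

```python
# A deliberately naive check, not Google's method: does a one-line summary
# agree with its source sentence on the presence of a simple negation cue?
import re

NEGATION_CUES = {"not", "no", "never"}

def has_negation(sentence: str) -> bool:
    """Return True if the sentence contains a crude negation marker."""
    tokens = re.findall(r"[a-z']+", sentence.lower())
    return any(tok in NEGATION_CUES or tok.endswith("n't") for tok in tokens)

source = "Switzerland is not a member state of the European Union (EU)."
summary = "Switzerland is a member state of the European Union (EU)."

if has_negation(source) != has_negation(summary):
    print("Warning: the summary may invert a negation present in the source.")
```

Even so crude a check flags the Switzerland snippet, which is part of what makes the failure feel so confidently unintelligent.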
There are lots of other examples of hallucinations, such as LLMs generating lists of references to works that do not exist, or summarizing books that have never been written, or whose actual contents are completely different from what the program claims.
The makers of AI do not necessarily know why their LLMs produce the outputs they do, a phenomenon known as the “black box problem”: You enter some input X into a program, which produces some output Y, but what happens in the intermediate phase is governed by logics and datasets so complex and convoluted that the mediating link risks turning into a mystery, even unto its own creators.
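That framing of input X, output Y, and an opaque middle can be made concrete with a toy sketch (illustrative only; the sizes and weights below are invented for the example, and real LLMs have billions of parameters rather than a dozen).

```python
# A toy rendering of the black box framing: some input X goes in, some output
# Y comes out, and the mediating step is a pile of learned parameters that can
# be printed but not read as reasons. (Illustrative only; real LLMs have
# billions of parameters, not a dozen.)
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for a trained model's parameters: just arrays of numbers.
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8,))

def black_box(x: np.ndarray) -> float:
    """Map an input vector X to an output Y through opaque learned weights."""
    hidden = np.tanh(x @ W1)   # nothing in this array says "why"
    return float(hidden @ W2)

x = np.array([1.0, 0.0, -1.0, 0.5])  # some input X
y = black_box(x)                     # some output Y
print(y)
print(W1)  # perfectly inspectable numbers, but no human-readable rationale
```

The weight matrices can be printed and inspected, but the numbers do not read as reasons; scaled up by nine or ten orders of magnitude, that is the black box.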
Artificial Unintelligence (AU)
The problem with AI’s growing proliferation today, then, can be summarized in the following formula: hallucinations × black box = Artificial Unintelligence (AU).
1. Hallucinations mean we cannot inherently trust the truth-content of AI outputs. From this basic lack of trustworthiness, it follows that we cannot determine in advance when we might find ourselves in the throes of AI hallucination and when we are being addressed by a more lucid apparatus. The outputs are therefore in essence spoiled, unless we possess the expert knowledge to make reasonably certain determinations about their veracity (but then why would we want or need AI outputs?).
2. Worse still, the black box problem means we do not necessarily know why the AI model is hallucinating. This has led to the rise of the Explainable AI movement (abbreviated to XAI), which advocates constructing AI in such a way that the inscrutable black box might turn into a more transparent “glass box,” satisfying a claimed human “right to explanation” as a basic condition of human oversight of machine learning (a minimal sketch of the glass-box idea follows this list).
3. The compounding of hallucinatory drives by the basic fact of black-box opacity means that, even as the world seems on the verge of a “great leap forward” into the unknown abyss of machine learning (from higher education to journalism, from office productivity to tutoring aids), we are at risk of consigning ourselves to Artificial Unintelligence.
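As promised above, here is a minimal sketch of the “glass box” aspiration behind XAI (illustrative only, and not modeled on any particular toolkit; the feature names and weights are invented): for a simple linear scorer, each feature’s contribution to the output can be read off directly, which is precisely the kind of decomposition that deep, opaque models do not offer.

```python
# A minimal sketch of the "glass box" idea behind XAI (illustrative only; the
# feature names and weights are invented): in a simple linear scorer, each
# feature's contribution to the output is just weight * value, so the
# "explanation" can be read off directly.
features = {"word_count": 120.0, "negation_present": 1.0, "named_entities": 3.0}
weights = {"word_count": 0.01, "negation_present": -2.5, "named_entities": 0.4}

score = sum(weights[name] * value for name, value in features.items())
contributions = {name: weights[name] * value for name, value in features.items()}

print(f"score = {score:.2f}")
for name, contribution in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name:>18}: {contribution:+.2f}")  # a per-feature 'explanation'
```

XAI’s wager is that something like this legibility can be recovered, or at least approximated, for systems vastly more complicated than a weighted sum.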
The Forever Patch
Silicon Valley is, I think, increasingly aware of the risks posed by the two-headed beast that is Artificial Unintelligence, which is at once enormously learned (measured in the vast quantities of text and other data that it has consumed) and strangely idiotic (claiming, for instance, that Switzerland is a part of the EU, or offering summaries of imaginary books).
But Silicon Valley has, intentionally or inadvertently, developed—or hit upon—an ingenious technical-political technique to manage and respond to the problem: perpetual patchability.
An AI model is never “finished,” never actually ready for consumption in a completed state, but exists in an ever-evolving state of development. All we get are versions, never finalized editions: a basic feature of software as such, but one emphasized all the more aggressively in AI marketing. Thus, in early May 2024, OpenAI’s chief operating officer, Brad Lightcap, claimed that ChatGPT, while seemingly impressive today, would be retroactively devalued by the appearance of newer, improved versions of the program: “In the next couple of 12 months, I think the systems that we use today will be laughably bad.” Similarly, in early 2024, Sam Altman, CEO of OpenAI, promised that GPT-5, the model’s next major version, would be a major improvement. The present moment “is the stupidest these models will ever be,” Altman said.
LLMs, then, are said to be in a constant state of improvement, and developers claim the right to make improvements to “the system” in perpetuity, even going so far as to admit that their present-day product is “laughably bad.” This creates a state of permanent disruption, throwing the user—and much of an increasingly AI-fixated society—into an always unraveling present moment that awaits only the next patch.
More importantly, permanent patchability precludes socio-technical critique: If the object is always being unmade and remade, how can critics hope to make their assessments stick to a system that is always-already undergoing transformation (and, so it is claimed, fundamental improvement)? The forever patch, or state of perpetual patchability, is an ingenious technique for regulating and stifling criticism, not through the modernist techniques of brute control, but through the late-modern promise of a permanently disrupted/improved future.
Perpetual patchability may be Silicon Valley’s optimism of the will, embodied and turned into social ontology. But is it enough to resolve the basic problem confronting AI today: its shadowy obverse, the specter of Artificial Unintelligence? I think not. I suspect that throwing more data at the AU will not fundamentally alter the constraints outlined in the formula above: hallucinations × black box = Artificial Unintelligence (AU). In fact, the hallucinatory tendencies may worsen or grow more insidious. The problems of opacity might become more profound as the systems grow in complexity.
But who can really tell? The forever patch means we can never, finally, be certain of what we are dealing with. A perpetually patchable object is one that, therefore, intelligently eludes critique. In this, at least, Silicon Valley demonstrates the not inconsiderable intellectual resources at its command.
And that Switzerland search result? It might already have been patched by some Google engineer.