In the book Field Guide to a Patchy Anthropocene: The New Nature, which I am currently reading, the authors theorize a category of nature that they label “feral.” Feral nature, as they describe it, is distinct from what we typically think of as “the natural world” because it exists, in part, due to human engineering efforts. In their words, it is “the state of nonhuman beings engaged with human projects, but not in the way the makers of those projects designed.” They expand:
Without the concept of the feral, it is too easy to fall into a dichotomy that only includes the wild and the domestic. Wild things are imagined as having nothing to do with humans; domestics are imagined as entirely under human control. In the patchy Anthropocene, many nonhumans (living and nonliving) are responsive to human actions without submitting even slightly to human control. It is this set of beings that we call feral. (p10)
The Anthropocene is our current geologic era; where previous eras have been defined by global conditions, like volcanic eruptions or atmospheric changes, that leave visible geological layers, this era is marked in the geological record by the global outputs of human civilization.
I currently work in the tech industry, a globally ambitious industry that feels like it is being consumed by a mania about “AI”, which (hopefully you know) means “Artificial Intelligence.” (When I first published this piece, I accidentally defined it as “Automated Intelligence” because that’s how I define it in my head. Whoops.) “AI” is inescapable in the industry: every product is adding “AI” features, and seemingly every person with power (those with executive titles or controlling investments in tech companies) is convinced that “AI” has unleashed the next generation of technology, something possibly even more radical than the digital revolution or the internet era, technology that is bringing us into futures that have filled science fiction dreams for decades.
I am rather skeptical of this current “AI” project, and as I read Field Guide to a Patchy Anthropocene I find my thoughts continually turning towards this emerging tech and how we might best make sense of it. I’m going to think with the Field Guide, and a few other books, to try to explore what future I think we are actually building with the current state of “AI”. I promise to swear a bit to make it more entertaining.
I’m also annoyed by putting “AI” in scare quotes so I’ll get a little bit more specific as a starting point. I have seen no proof that the current generation of “AI” is “intelligent”, and so I refuse to seriously call it “AI” because that brings with it all the sci-fi dreams that aren’t yet real. From this point on, instead of “AI” I will refer to this current tech as LLMs—Large Language Models—which are the foundational technology that many people refer to when they speak of “AI” or “Generative AI” in this era. Now I can drop the scare quotes and be specific.
Ok, let’s think with this Feral Nature idea and see what emerges.
What is Ferality?
The authors of Field Guide to a Patchy Anthropocene—Anna Lowenhaupt Tsing, Jennifer Deger, Alder Keleman Saxena and Feifei Zhou—use their concept of ferality to examine the unexpected impacts of terraforming the earth. Specifically, they focus on humans operating under capitalism, which has grown so global in scale that it has touched nearly every part of the natural world in significant ways. The Field Guide authors are interested in better defining how the Anthropocene works and what exactly has changed.
Terraforming is a concept that (I believe—although to be honest I have not deeply researched) comes from science fiction. When we dream about space travel we imagine that there are plenty of earth-like planets that might be almost survivable for humans, if we modified their environment. This is terraforming—remaking the climate and terrain of a planet at global scales. It’s not that common, however, to refer to the civilizational engineering efforts of humans on Earth as terraforming, because the term is typically associated with as-yet-unexplored planets (Mars, and further out). I love that the Field Guide authors use the term; it’s also how I think about our impact on this planet.
It’s a helpful word because the sci-fi version of terraforming is taking something uninhabitable and making it habitable, and though that version of terraforming is still fictional, we are now quite aware in 2025 that it’s entirely possible to take a planet that is already habitable and turn it into something much less habitable (in the current arrangement of the world, it’s quite profitable to do so, as well) (although, as every investment opportunity will remind you, past performance is not indicative of future results). Anyways, I digress. What the authors of Field Guide do is draw our attention to the terraforming of earth, to understand the unintended consequences of colonial and capitalist enterprises.
Ferality, as the authors define it, is not a term intended to be used for value or moral judgements. It can be applied to both things we believe are good and things we believe are bad.
Designation as feral does not indicate whether we approve or disapprove of a particular bit of feral action. Trees that grow up in an abandoned lot are feral—and wonderful for many ecological reasons. Pathogens that evolve resistance to antibiotics are feral—and terrible for the humans likely to die of infections. (p11)
What I find so interesting about their framing and theorizing is how they focus on the terraforming aspect of capitalism as the generator of ferality. One thing that capitalism incentivizes so well is scale: the need to take something useful on a small level, find a way to genericize or standardize it, and scale it up. This can be commodities, like cotton or wheat or raw metals, or it can be outputs of systems, like electrical power or water control systems or international shipping. The authors of the Field Guide point our attention to infrastructure, those somewhat hidden systems that help our economies scale while remaining relatively stable.
Industrial capitalism has become a vast infrastructure-building program. Capitalism makes investments equivalently liquid, whatever their social and ecological effects. Thus, investors are encouraged to terraform distant lands for their projects, entirely disregarding the effects of these projects on local people and ecologies. (p. 103)
Here then is the idea I wish to jump off from in thinking about LLMs. The authors of the Field Guide explore the ways that vast projects of industrial terraforming create unexpectedly ideal environments for various nonhuman actors to thrive, oftentimes despite our best efforts.
They explore this through various examples: the way a plant called water hyacinth flourishes in placid water has become a problem worldwide, because humans have sought to control the movements of water through canal building, damming rivers and straightening them, and other significant engineering. In turn, water hyacinth takes hold and destroys the human initiatives by reducing the flow of water and killing other species, including the fish and rice that humans rely on. Or they give the example of industrial cotton plantations, which allowed the boll weevil to thrive in ways it cannot when cotton is interspersed with other plants (as in non-industrial locations).
Feral nature, as the authors describe it, is made up of the nonhuman species that learn to thrive in the (eco)systems that humans engineer for other purposes, whether or not those species and their behaviors are desirable.
I would like to explore LLMs through this lens of ferality, because even though humans are engineering the tech, I do not think its release into civilization will go the way we want it to. I think LLMs are a technology ripe for going feral, and I don’t know how large our window is (or if it’s even still open) for containing the damage.
LLMs, In Brief, and why they are not “AI”
The math and theory behind LLMs have existed for a few decades, but the current generation of the tech has a much shorter history; a 2017 paper from Google Research called “Attention Is All You Need” introduced the transformer architecture that underlies tools like ChatGPT, Claude, and Gemini, which are currently being widely adopted. The key idea behind these tools is that given a large enough dataset, significant enough computational power, and the right training methods, you could create massive language models that can be prompted with freeform human language and will respond with a relevant, statistically generated output based on the contents of the input.
In short, certain companies spent a shit ton of money building out massive data centers, gathering terabytes of data, and then, using the mathematical models (expressed in code libraries), trained their computers to generate text based directly on the inputs given by a user.
Before the introduction of this generation of LLMs, computers mostly did only what they had been explicitly told to do, via code written by humans. The software that humans used, whether it was apps on a computer or phone, or web-based interfaces like Google, Gmail, and Facebook, all had user interfaces with specific buttons and features that allowed you to perform certain actions; if you tried to do something outside of what had been intentionally designed, you either were shit out of luck, encountered an error, or were a hacker with some hypotheses about loopholes that might exist. In general, though, computers were narrow interfaces that required a user to build a conceptual model of what the interface was for and how to use it, and if the interface could not do what the user wanted, they had to go find a different interface or they were shit out of luck.
Even search engines, best represented by Google Search, weren’t able to do everything. Google was a conduit to the world; you’d input your query and then click links to see if somewhere, someone had put the information you sought on the web. It was possible to strike out with a Google Search, whether it was because you were using the wrong language for the search or because the information simply was not available on the public internet.
Enter ChatGPT and its ilk, which similarly provide you with a text input box. Only with these tools, when you put text into the interface, it doesn’t query a database to find relevant, extant results. Instead, it parses the text of the input and generates a response in real time. It “talks” back to you. And it keeps “talking” if you keep providing more input.
Well, holy shit. Instead of a narrow computing interface, we now have wide interfaces. Wide open spaces, if you will entertain a Chicks reference. A user does not need to know how it works, does not need to understand it at all, a user just needs to input text—a prompt—and in return a tailor-made response is generated just for them.
Alas, however, though the computer can “talk,” though it is very “happy” to help you with that question, it is not conscious; it is not “thinking,” it is instead generating. LLMs are bounded by all the data that went into training their models, and whatever mathematical relationships they “learned” from the training data.
Which means—and this is the hard thing for actual thinking humans to hold on to and grapple with—it does not actually “know” anything, it only has computationally derived associations and mappings between words and phrases that were present in its training data. This means the computer can continually string together words (so many words) but it is not capable, for any standard meaning of the word, of “knowing” if the words are true, or accurate, or helpful, or useful, or right, or wrong. The computer produces text, but the computer has no ability to judge/know/determine if the generated text is the right text. It is just the text that its Large Language Model has determined is the statistically correct output for the given input.
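To make that concrete, here is a toy sketch in Python. It is emphatically not how a real LLM works (real models are giant neural networks trained in those giant data centers); it’s just a tiny word-pair model I made up to illustrate the core move: count which words follow which words in the training text, then generate output by sampling from those counts. Notice that nothing in it ever checks whether the generated text is true.

```python
# A toy "language model": not a transformer, just word-pair counts.
# The point is the mechanism, not the scale: output is sampled from
# statistics of the training text, and nothing ever checks if it's true.
import random
from collections import defaultdict

training_text = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog around the yard ."
)

# "Training": record which words follow which words.
follows = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word].append(next_word)

def generate(prompt: str, length: int = 10) -> str:
    """Generate text by repeatedly sampling a statistically likely next word."""
    output = prompt.split()
    for _ in range(length):
        candidates = follows.get(output[-1])
        if not candidates:
            break  # nothing in the training data ever followed this word
        output.append(random.choice(candidates))
    return " ".join(output)

print(generate("the cat"))  # e.g. "the cat sat on the rug . the dog around the yard"
```

Scale that same move up by trillions of words and billions of parameters and you get something that can “talk,” but the fundamental operation is unchanged: produce the statistically plausible next thing, with no step anywhere that asks whether it is right.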
It doesn’t know anything. It can’t know anything. Which means it also doesn’t know what it doesn’t know—because that would count as knowing something. Which it doesn’t.
But—and this, to me, is the fascinating part about the moment we are currently in—the text generated by these LLMs is “right” a not-insignificant amount of the time. The results aren’t half bad! Well, ok, depending on the prompt and the subject matter, it can be half bad, but the other half is good, and wow, that’s not nothing.
And so, my goodness, this tech is blowing up. It feels unstoppable. Billions of dollars are being invested in it. Everyone with a stake is convinced we’ve entered a new era of human civilization. Which is why I want to think about the near future; about the next few years; about what might happen if the trends we find ourselves living within continue on their current trajectory.
Ferality is a fun word to say
In the Field Guide to a Patchy Anthropocene, Anna Tsing writes a few chapters exploring some feral species, and what she draws the reader’s attention to is how entangled those species are with human engineering efforts. Water hyacinth is a key example; it’s an aquatic plant native to the Amazon area with intriguing flowers that made it interesting for botanical gardens. I will not explore the full history outlined in the book, but the key information is that in its native habitat, it is subject to moving water and regular floods which keep it in check. It has evolved to take advantage of the lulls between floods, when the water is quieter, to quickly grow and establish itself, so that when the next flood comes it can stick around.
In the past few centuries humans have started terraforming the world with large engineering efforts to control the flow of water, which we are so dependent on. We’ve dammed rivers, creating large reservoirs; we’ve straightened them; we’ve built large holding ponds for various needs; we’ve even scaled up rice-growing systems that involve regularly flooding areas with standing water. Each of these engineering efforts around the world has created ideal environments for water hyacinth to flourish, and because we also carried the plant itself around the world for its pretty flowers, it has had the opportunity to establish itself in so many of these places.
From a human and native ecosystem perspective, water hyacinth is a terrible thing to have around. It grows wildly fast, it clones itself, it hybridizes with native species, it removes the oxygen from water, it soaks up water itself and empties ponds and tanks. When water hyacinth encounters human engineering, it does whatever the fuck it wants, and that usually involves doing things that are counter to our purposes for the engineering.
Which is why the authors of Field Guide refer to it as feral. It takes advantage of the things we are doing, for its own purposes; but it never could have achieved those purposes without our help. We are incapable of modifying its purposes; it has evolved too many tools, and everything we might do (poison it, destroy the engineering efforts that host it, move away) leads to other significant consequences that we can’t afford. And so we are stuck dealing with it, as best we can.
I think this is how LLMs are going to work.
Before I can get to that though, I need to turn to the ways that technology is not simply a benign thing that we are fully in control of. I need to explore how new technologies have requirements, and those requirements make demands on how the world is shaped.
Ursula Franklin Strikes Again
When I read books about technology (and I’ve read a good bit of them), one of the first things I do is flip to the bibliography and see if The Real World of Technology by Ursula Franklin is cited. Invariably, it is not. I learned of this book from my friend Mandy Brown, who said she learned of it from Deb Chachra, who wrote an excellent book about infrastructure (which absolutely cites The Real World of Technology). Perhaps that is not relevant to this post, but it matters to me to cite and recognize that I’m always in conversation with others who are far smarter than me, even if they don’t know it.
The Real World of Technology
Copyright 1990, House of Anansi
Tech Industry, Cultural Analysis
(4th time reading)
The Real World of Technology is a book of essays which were first delivered as lectures in 1989. The writer of these lectures/essays is Ursula Franklin, who is a complete badass (the link on her name is her Wikipedia page; it’s worth reading). In this book she outlines how technology functions in the world; the ways that we have a reciprocal relationship with it. We create technology, technology shapes us (and the broader world), we can identify how these patterns work, and, as she elucidates, we can understand those patterns to determine if the technology actually helps us build a world we want to live in.
In her book, she defines technology simply (and radically) as ways of doing something (p6).
One has to keep in mind how much the technology of doing something defines the activity itself, and, by doing so, precludes the emergence of other ways of doing “it,” whatever “it” might be. (…) I think it’s important to realize that technology defined as practice shows us the deep cultural link of technology, and it saves us from thinking that technology is the icing on the cake. Technology is part of the cake itself. (p9)
I read Franklin to be arguing that we must look closely at the technology we use to understand how the world works; that tech is not some independent entity that is entirely optional (the icing on the cake) but rather a fundamental ingredient of the cake itself. Integrating this argument helps me ask much richer questions about the way any task is done, because I see that the technology available changes how I might do that task, and what other ways of doing it might be better or worse, and why. Consider microwaves, that modern convenience that now lives in nearly every kitchen. A microwave is theoretically just a fast way to heat things up, but because microwaves exist, it is much easier to keep fully prepared frozen meals on hand. This means companies can sell frozen meals as easy conveniences, because they can assume that the technology for quickly bringing those frozen meals to edible temperatures is readily available. Microwaves, far from simply being heating devices, have introduced significant impacts into how cooking and eating work for many people, and by naming this we can start to ask whether those impacts are net positive or net negative.
Franklin’s small volume is utterly brilliant; as I skim through my marginalia, there are dozens of points she makes that feel relevant to this essay. Alas, I am trying to stay focused, so if you get this far and her insights sound more interesting than mine, please do go read the book and forget this essay. I won’t complain.
One of the main reasons that Franklin takes on the project of understanding how technology works is to draw our attention to how much progress (the adoption of new technology and practices) happens in a very undemocratic way.
There is a lot of talk about global crises and “our common future.” However, there is far too little discussion of the structuring of the future which global applications of modern technologies carry in their wake. What ought to be of central concern in considering our common future are the aspects of the technological structuring that will inhibit or prevent future changes in social and political relations. (p42)
She argues that all technology “reduce[s] or eliminate[s] reciprocity,” which is “some manner of interactive give and take, a genuine communication of interacting parties.” Reciprocity, in Franklin’s argument, is what in more common parlance we would now call agency. Do all people who are impacted by new technology get a chance to understand the consequences of adoption and weigh in? Is actual consent (with full comprehension of what is being consented to) actually possible?
Franklin argues that “where there is no reciprocity, there is no need for listening. There is then no need to understand or accommodate.” This, from her perspective, has significant impacts on our ability to center our common humanity. With this in mind, let’s turn to another book, all about the worldview of AI.
Knowledge Engineering
Diana E. Forsythe was an anthropologist who focused on AI in the 1980s and 1990s. I told you that LLMs had a longer history than ChatGPT. She did ethnographic fieldwork in the “AI” orgs of that era. Here’s how the editor’s intro describes it:
Studying Those Who Study Us
Copyright 2002, Stanford University
Tech Industry
In order to appreciate better what Diana was up against when she spoke clearly and honestly as an anthropologist, it is important to understand the position of AI during the 1980’s, when she began fieldwork. During this period AI enjoyed a highly privileged position both in the worlds of computer science and in the privileged worlds of defense funding. Millions of dollars were given freely by funding agencies such as DARPA (Defense Advanced Research Projects Agency) or ONR (Office of Naval Research) to researchers in AI, especially at elite schools such as Stanford, Carnegie-Mellon, and MIT. Very often the supported projects were “blue-sky,” that is, unconstrained by deliverable technology or any rigors of development. Under this aegis, many AI researchers were able to enjoy a freedom to range in their questioning across psychology, linguistics, history of science, philosophy, and biology, to name a few disciplines.
Into this elite world, Forsythe embedded herself and took extensive notes. I think the intro frames up the story well, so I’ll keep quoting:
As in other scientific communities, AI researchers tend to think that they do not have a culture. They are instead “purely” technical. Diana insists that the technical is itself cultural and, furthermore, that AI researchers have a special kind of technical culture that is characterized by features such as technical bias; decontextualized thinking; quantitative, formal bias; a preference for explicit models; and a tendency to believe that there is only one correct interpretation (or reality) of events.
This is to say that Forsythe embedded herself in the equivalents of OpenAI or Anthropic of her day, and she explored the belief systems that inform the building of AI systems. In this world, she found that the researchers and thinkers most excited about the potential of AI were also wildly ignorant about the problems they believed they were solving.
The overarching message in Studying Those Who Study Us is that even decades ago it was very clear that AI researchers had extremely strong ideologies about what automation would do, and those ideologies ran completely counter to the way the world actually works. At the time, the AI community felt that the problems they ran into were mostly around the size of datasets and the capacity of computing. They believed strongly that in the future, with larger datasets and scaled-up compute, the obstacles would disappear, and AI would help solve a multitude of issues that were caused by our continued dependence on humans.
In the 80s and 90s, the hardest part about creating “AI” systems was obtaining data. The internet was still very obscure, we were not yet in the world of “big data,” and the digitization of so much of the world had barely begun. So the researchers working in “AI” spaces had to build their training datasets manually, by interviewing experts, creating computer-readable data, and translating interviews and existing textbooks into machine-readable formats. Today, of course, all of the major LLM companies are rapaciously obtaining every single byte of textual data they can, whether by piracy, scraping the web, or any other means available to them. They have automated what was previously, in Forsythe’s time, a very manual process.
I return to Forsythe’s insights today because the behind-the-scenes data acquisition is critically important for determining what LLMs can and cannot possibly do; and the fact that it happens without much public awareness means we miss critical information about the shape of the technology that is rapidly being deployed.
Because the “olden” days of AI training were so labor intensive, Forsythe was able to see exactly the belief systems that were shaping the technology. She recognized that there were significant assumptions being made by researchers about what learning is, what expertise is, and what data is necessary to automate a given task. She first establishes what we know about how human learning works:
First, knowledge is socially and culturally constituted. Second, knowledge is not self-evident, it must be interpreted. Messages are seen as having meaning because the interlocutors share knowledge about the world. Third, people are not completely aware of everything they know, a good deal of knowledge is tacit. Fourth, much knowledge is not in people’s heads at all, but is rather “distributed through the division of labor, the procedures for getting things done, etc.” Fifth, the relation between what people think they do, what they say they do, and what they can be observed to do is highly complex. And sixth, because of all these points, complete and unambiguous knowledge about expert procedures is unlikely to be transmitted through experts’ verbal or written self-reports. (p41)
From here, she contrasts the actions of the AI researchers. In the era before the massive training datasets now available to LLM builders, there was an entire category of AI researcher called “knowledge engineers”: people who were tasked with interviewing experts about their areas of expertise to create “AI” training datasets. Forsythe finds in their way of operating a number of assumptions:
To knowledge engineers, “knowledge” means explicit, globally applicable rules whose relation to each other and to implied action is straight-forward. Knowledge in this sense is a stable entity that can be acquired and transferred. It can be rendered machine-readable and manipulated by a computer program. I believe that in effect “knowledge” has been operationally redefined in AI to mean “what can be programmed into the knowledge base of an expert system.” (p53)
With this insight, she concludes with the point that I think is deeply relevant to the current era:
The ability to decide what will count as knowledge in a particular case is a form of power. (…) The exercise of this power is to some extent invisible. (…) Once an expert system is built, it is all too easy for the user to take it at face value, assuming that what the system says is correct. Since most people who use such systems in business, medicine, or the military know little about how they are produced, they may not question the nature of the knowledge they contain. While system-builders know that every knowledge base has its limitations, they do not appear to be aware that members of the public may not know this. Possible misunderstandings on the part of future users are not viewed as their problems.
I will come back to these problems.
Feral Tech
Technology then, as Franklin helps us see, is not simply the tool we create for a task, but the entire process of doing the task, inclusive of the tool. Right now, we’re in this new era of LLMs, with their wide interfaces and their seeming ability to generate relevant responses to almost any prompt, and what the proponents of “AI” are currently saying is that the tech is now so powerful that it can be used in every domain to change the entire world.
Welcome to the future, they say, where computers can take over so many tasks for us.
I believe that if we apply Diana Forsythe’s insights from an earlier era of “AI” to this current age of LLMs, and we think with the Field Guide author’s concept of “ferality,” we might really hope for the opportunity to deeply consider, in Franklin’s phrasing, “the technological structuring of our common future.” We might want to keep space open, wide open, for futures that are not so “AI” dependent. And from where I stand right now—I don’t think we will get that space by default. We’re going to have to fight for it. Let me unpack my thinking.
Ferality, as the Field Guide authors define it, is the unexpected use of human engineering and infrastructure. They focus on biological entities—plants and animals and fungi—for their examination of ferality.
I think the concept can be slightly bastardized as a way to think about the unintended consequences of broad adoption of technology: not only can human engineering allow biological ferality to emerge, as the Field Guide explores, but technology itself can become feral.
I recognize that I’m bastardizing the concept of ferality from the book because the authors spend a good portion of the book making the case that we must treat nonhuman biological entities as actors in their own right. Nature does not exist in service of humans or only in response to humans; it exists for its own purposes and the many beings we share this planet with have their own agendas and ways of being. I am adamantly and philosophically against imbuing technology with any sort of will or autonomy; that’s part of the reason I use scare quotes around “AI.” LLMs are merely software operating on hardware, and without humans around to build the computers and generate the power for running the computers, LLMs will cease to exist. To put it another way: humans could go extinct and the natural world would continue on fine, probably even improve from some perspectives, but our technology would all fall prey to entropy, as it contains no will or innate biological drive to survive.
Acknowledging this, it is helpful to use Ursula Franklin’s insights to understand that when we create new technology, it reshapes us as well. Technology cannot be introduced without changing the world, and some technologies introduce quite unanticipated demands as we adopt them at scale. We are—in this current era—terraforming the world because we built a bunch of technology and became dependent on it and have in turn found it quite difficult to reduce our dependency on it, even though the demands of the technology are quite literally killing us.
I think fossil fuels are a canonical example of feral tech, as I am conceiving it. Coal, oil, and natural gas are foundational technologies that have helped us build the world we live in. They are the raw ingredients upon which we scaled up an entire electrified society, created plastics and other synthetic materials, and moved from an early industrial era to the globalized, digitized world that we now live in. But these gains have all come with horrible, feral costs, some of which we truly have no idea how to address. With every year, we are seeing climate change milestones arrive more rapidly than worst-case scenarios previously anticipated, and even still we struggle to stop burning fossil fuels. It’s entirely likely that, though coal dust was the original geological marker for the beginning of the Anthropocene, plastic will be the main signifier in the geologic record. We have no idea how to get rid of it, it’s found everywhere on earth from the deepest part of the ocean to inside the placentas of pregnant people, and yet we continue to increase production every year.
Technological progress is thus never a neutral or even purely positive progression; there are costs and those costs can be literally existential if we adopt technology at scale without considering the impacts.
Feral tech is thus—as I conceive of it—technology that A) brings about consequences that would never be justifiable if accurately weighed in a cost/benefits analysis, and yet B) is so integral once adopted at scale that it becomes difficult to imagine a world without it. Feral technology escapes our control and requires us to adapt to its demands, even as we understand that the costs are unjustifiable and existentially threatening.
I believe that LLMs are such a technology, because they aren’t actually the Artificial Intelligence people believe them to be, and if we allow their largest boosters to keep promoting this deceptive framing, we are going to create a new world of technological ferality, even before we have dealt with the current one.
The Future we are Facing
To bring all this together, I want to think with Forsythe’s arguments in Studying Those Who Study Us to try and imagine what the near future looks like if we continue on this path of treating LLMs as the science-fiction version of AI that they are definitively not.
A few decades ago, when AI researchers were building out cutting-edge “AI” tools, the most intense labor in the field was in finding data upon which to build the “expert systems.” Today, AI researchers focus more on the training and design side of things, because we have mountains of available data. The data that goes into the training is gathered from a broad variety of sources: all the books that have been digitized over time, all the Reddit posts ever written, open-source GitHub repos, and so on and so on. The scale of this training data is so large that there is probably no one inside any LLM company who can credibly claim to comprehensively understand the information that is training the models.
This scale of training data—more information than any human could possibly sort through in a lifetime—is the reason that current-generation LLMs feel so dang powerful.
Forsythe’s prescient, important ethnographic work is so useful because it helps us understand the assumptions that AI researchers make when they throw all this training data at the model. It shifts the focus from the outputs (talking computers) to the inputs (textual data available in digital form).
One of the key, hidden assumptions that LLM boosters make is that the information and data available in these vast repositories of training data is a comprehensive representation of human knowledge. You see this in the way the AI company execs talk about knowledge. Sam Altman refers to ChatGPT 5 (the most recent version as of this post) as “talking to an expert, a legitimate PhD-level expert in anything.” Which, in order to be accurate, assumes that all PhD-level information is encoded in ways that have been made available for training the model.
The makers of LLMs, as Forsythe pointed out decades ago, believe that expert knowledge is entirely constituted in the written output of their work. But not just this, they believe it across every domain. This is the reason that LLM products (ChatGPT, Claude, Gemini) all have simple text input boxes as their interface: the designers and producers of these products do not believe that there’s a reason to limit the ways that these tools work.
The assumption of those promoting LLMs, as they are currently and widely being deployed, is that the models have been trained on—and thus can output—comprehensive, expert knowledge across all domains of human knowledge.
This is an extremely questionable claim on its face, and even more so when you consider what I quoted earlier from Forsythe about all the ways we know that learning and knowledge work for humans. A significant chunk of human knowledge exists in realms that live outside what we write or turn into data.
Education on the Edge of Possibility
Copyright 1997, Association for Supervision and Curriculum Development
Cultural Analysis
In the 1997 book Education on the Edge of Possibility, the authors Renate Nummela Caine and Geoffrey Caine try to build a case for changing how our educational system works. In it, they characterize the current educational system as an “industrial model,” one which treats education as a factory process and knowledge as a commodity:
When asked “what is the desired outcome of education?” people almost automatically say something like “well-informed students.” Educators and education have as their primary function the delivery of essential information. The core commodity of education is the information that is to provide a foundation for success in life. In a sense, this information is detached from the minds of people and has an independent existence. Facts and skills are conceived of as owned by the system and warehoused in schools, where they are packaged and then delivered to students. (p43)
This paradigm of education is, they write, fundamentally flawed and completely inaccurate.
Writing nearly 30 years ago, they argued for a wildly different pedagogy: a “brain-based” model of education that treats all humans as active, full people.
In brain-based learning, educators see learners as active participants in the learning process. The teacher is not the deliverer of knowledge, but the facilitator and intelligent guide who engages student interests in learning. Students and teachers become partners in the pursuit of understanding. (…) Brain-based instruction begins with the entire school and the child’s whole being. The brain is not divided into individual segments marked “feelings” or “cognitive development” or “physical activity.” Rather, active learners are totally immersed in their world and learn from their entire experience.
Learning, the acquisition of information and how to exist in the world, is not simply the acquisition of structured facts. It is not a download of information. It is an active process. Think back to Diana Forsythe’s descriptions of how knowledge works; it is embodied, it is complex, it is not simply a set of facts and information, there is a much broader social context that it operates within.
I think this is relevant because, as Forsythe points out, the entire field of “AI research,” including the current generation of LLM training, is premised on the idea that knowledge is a bunch of facts. The current proponents of LLM-based tools are trying to embed them in every single domain of human life that they can, as quickly as possible, because they believe that the tools make humans smarter. There are companies taking millions if not billions in investment, premised on the idea that LLMs are good for education, good for healthcare, good for the legal field, good for engineering, and so on and so forth.
We are scaling up “knowledge” systems that are fundamentally premised on horribly outdated models of knowledge.
Far from being an advancement of human civilization, the people trying to scale up LLM technology for their own benefit are trying to trap us in modes of learning and knowledge that we already know are failures. They are doing this because there’s a lot of profit available to them if they succeed. If they do succeed, which I hope is not guaranteed, I think we will unleash two types of technological ferality that, from my perspective, will cost us as humans far more than we will gain. That’s where I’d like to end this long-ass piece.
Ferality, Toddlers, and Distrust
There are two ways that I think LLMs will become feral, if we let the companies trying to institutionalize them in the world succeed. First, I think they will trap us in an outdated, self-defeating pedagogical model that replaces learning with information. And then, even more alarmingly, they will undermine our ability to trust any information at all.
The companies and orgs that are promoting LLM-based products today are eager to see them used for nearly any and all purposes, in the hopes that some of these uses will turn out to be profitable, or at least lead to enterprise contracts. Because of that, you can ask LLM chatbots just about any question you want, and they will generate a confident response. This can feel like magic. The chatbots almost never say “I don’t know,” because that’s not a statistically probable response to most text in their training datasets.
Suddenly, we find ourselves in a world where anyone can feel like an expert, because the talking computer can provide all the information on demand. We have fucking CEOs of tech startups feeling like they are on the cusp of physics discoveries because the computer is talking to them. Also, we have lots of people suffering “AI psychosis” because LLMs, by their very mathematical nature, respond to inputs with the most probable responses, which often read as sycophantic and/or affirming, no matter what the fuck you put into the text box.
If you dig into pedagogy and learning, as with the above book Education on the Edge of Possibility, one of the things we have lots of evidence for is that learning is an active process, and a significant portion of it is mapping out what is right and accurate by learning to determine, through the slow process of failure and iteration, exactly what is wrong or inaccurate. There’s a both-sides process to learning, and the people we often think of as “geniuses” are actually people who are very good at rapidly integrating new knowledge through small failures (or they are assholes who think that everything they don’t know doesn’t matter; both are possible).
Everything amazing that humans have ever invented, learned, or imagined, has been acquired through some version of this form of learning. We try shit, we fail, we iterate, we fail less. We then are able to teach others, and they can continue the process to keep pushing knowledge forward.
This is how human civilization—the good parts and the bad parts—has slowly come to be.
LLMs fuck up that entire process. The way they are designed currently, the way that the many large companies and powerful people want to deploy them today, delivers the feeling of expertise, the belief that you have acquired knowledge, without any actual learning.
It’s the industrial model of education, writ large, institutionalized in large-scale systems, and cemented into place because it’s profitable.
The ferality of this tech—the undesired and yet completely unavoidable outcomes of the path the tech industry currently has us on—is that we are going to have millions of people operating on information that came from the talking computer but that they are completely incapable of validating. “Expertise” without learning. Information without context.
The best analogy I can think of is a toddler: learning to walk without any knowledge of how to actually navigate the world. If you have ever minded a toddler for a period of time when they were awake and active, what you know is that they have wayyyy more capabilities than they know how to properly use. And so you have to watch them like a hawk, or keep them contained in environments where their risks of harm are minimized. Eventually they will gain enough experience to be trusted with less supervision, but it’s not an automatic process, and until such a time, well, pay attention.
I think the shitty pedagogy and false assumptions that underlie the LLM boosters’ beliefs about how human knowledge operates will have a risk profile similar to unleashing a preschool class without supervision in Times Square. Lots of people will be harmed, in ways that we can’t even imagine, because those little fuckers are creative, fast, and have no idea what dangers they should be watching out for.
Which brings me to the second ferality, and my concluding thoughts.
LLMs, as they operate based on the current and expected near-future state of our tech, will generate confidence in humans who lack the expertise or knowledge to determine if such confidence is actually deserved. Those humans will then make decisions, take action, live their lives, based on the LLM-generated confidence. And unlike the hypothetical toddlers in Times Square, we (the people who share the same space with the newly confident LLM-fueled experts) will not be able to easily identify the rogue actors. The consequence of tons of “experts” taking action based on unfounded confidence is that reality will suddenly become wildly untrustworthy.
This is already happening in smaller forms, but the LLM companies sure are hoping we will treat each case as an individual failing and not see the pattern at large.
Lawyers are submitting briefs referring to non-existent case law. Newspapers are publishing summer reading lists that include non-existent books. These are the examples where we can fact-check and see that the information provided by the LLMs has no basis in reality.
There are going to be millions of cases where we can’t easily see that a person is confidently acting on inaccurate information they obtained from an LLM. They don’t have the expertise to validate the information, because they never actually learned the subject matter. They input a prompt, get back some text, and then make decisions assuming that the talking computer is accurate. Remember what Diana Forsythe said in the fucking 1990s about AI researchers?
While system-builders know that every knowledge base has its limitations, they do not appear to be aware that members of the public may not know this. Possible misunderstandings on the part of future users are not viewed as their problems.
This problem has not been addressed. It has been ignored. And now we are facing a future where not only is this ignored problem still present, it is now scaled up to a global, general-purpose level.
Here, finally, we see feral tech in all its horrors. What does the world look like if every single person has access to a computer that will confidently answer any question on any topic, with neither participant in the transaction (the human, or the computer) being capable of assessing the accuracy of the answer?
I personally think that world looks horrifying. I think it means we will not be able to trust anything. I think—far from being an empowering world of human advancement—it is a disempowering world where everyone is left to their own devices, needing to fact-check all information, to be skeptical of all systems, and to generally live in fear that novel risks are emerging every day because new “experts” are implementing systems flawed in ways they won’t understand until failure occurs.
It feels like dystopia.
Conclusion
This piece is way longer than I ever anticipated, and while I believe strongly this is an accurate read of the world we’re building, I also feel like I need to end on some other note than where I am right now.
So I’ll turn back to Ursula Franklin, because every damn time I pick up The Real World of Technology I find I had forgotten just how prescient she was. It’s a glorious book and—despite the fact that I have read it 6 or 7 times, and thus have “obtained” all the information it contains—I fully expect to continue learning from it throughout life.
In the sixth chapter of the book, she outlines a framework for how we might adopt technology, a checklist for considering it. After that checklist, she says:
In the real world of technology there are also situations in which, in fact, one does not know what to do. With every development new domains of ignorance are discovered which become evident only as the project proceeds. The emergence of domains of ignorance is basically quite inevitable.
Basically, every time we build new technology, we encounter unforeseen consequences. And that’s ok, it’s inevitable. However, it’s also instructive. We know that new technology will change us.
I have argued in this piece that we should think of some technology as feral, because it’s a useful idea for understanding that sometimes technology changes us in really terrible ways that might be nearly impossible to undo. I’d suggest that large-scale systems that engender confidence where none is deserved are one such technology.
Franklin provides a suggestion that I think is very useful for how we can push back on large-scale technological changes.
The initial direct experience of people is an important source of information. To marginalize or discard such direct evidence removes an important source of knowledge from the task of decreasing the domains of ignorance. Possibly even more important is the implicit attempt to keep people from challenging technology by making their direct experience appear marginal and irrelevant. This is a form of disenfranchisement, and I see disenfranchising people as one of the major obstacles to the formation and implementation of public policies that could safeguard the integrity of people and of nature. This disenfranchising has accelerated since the time of the Industrial Revolution as governments have turned their attention to the blind support of technology and its growth at the expense of other obligations.
This feels, to me, like a very astute diagnosis of what is happening around LLMs right now. Yes, the tech can feel magical. Yes, it is cool that you can provide just about any prompt and get some sort of answer.
But the experiences of using the tech at work are mid and we need to say that. We need to push back.
I don’t think this is the full solution, but this essay is so fucking long, and I’d like to conclude it. So what I’d suggest is that you do not let anyone gaslight you into feeling like the tools are better than they are. If you have expertise or skills, take pride in them. If the LLM tools you have to use do a shitty, mediocre job at replicating your skills, please understand that not as a personal failure, but as an illustration of the situation we’re in. You know what good is, and it isn’t that, and that means that the people promoting the tech feel like their potential wealth is far more important than actually creating useful technology.
I think that tech can go feral, but that is not a default state, and I’m not sure that LLMs are yet an example of feral tech. But I do think that’s the future we’re barreling towards, and it’s going to take work to avoid it. I hope that this piece helped you think through what that might look like, and pointed you towards some people who are suggesting other paths. Thanks for reading.