Seeing Like a Tech Company

This week I finished James C. Scott’s book Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed, which I enjoyed and learned a great deal from. The book is an examination of the challenges of administratively organizing a polity, the hubris of men who think they can centrally design more efficient plans for the world, and some possibilities for how we can resist these goals. Taking a variety of examples from the past century, such as the construction of Brasilia, Soviet collectivization under Stalin, forced villagization in Tanzania, and more, Scott is interested in what the stated goals were (typically, state authorities operating with ostensibly benevolent aims), how the plans were implemented, and what happened that led to failure.

In the introduction Scott lays out the four attributes of schemes that he has identified as leading to the largest disasters. A state (1) seeks to organize its administrative aims more efficiently, and sets out to do so (2) by employing men (every single example involves men, and Scott argues that gendered hubris is a key component of these failures) with High Modernist ideological beliefs to optimize and improve the domain in question, who then (3) apply the authoritarian power of the state to implement the plans. In all of the scenarios he examines, (4) civil society is unable to effectively resist the plans, for varying reasons.

The book exposes the hubris of men who have attempted to centrally remake some aspect of a national economy or land distribution with a plan that is ostensibly more rational and efficient than existing systems. Scott anchors this work with an extended metaphor: the development of scientific forestry in Germany in the late 1700s. The German state was interested in accurately predicting the timber yield of a forest for a variety of administrative purposes. Scott explains:

The achievement of German forestry science in standardizing techniques for calculating the sustainable yield of commercial timber and hence revenue was impressive enough. What is decisive for our purposes, however, was the next logical step in forest management. That step was to attempt to create, through careful seeding, planting, and cutting, a forest that was easier for state foresters to count, manipulate, measure, and assess. The fact is that forest science and geometry, backed by state power, had the capacity to transform the real, diverse, and chaotic old-growth forest into a new, more uniform forest that closely resembled the administrative grid of its techniques. To this end, underbrush was cleared, the number of species was reduced (often to monoculture), and plantings were done simultaneously and in straight rows on large tracts. These management practices, as Henry Lowood observes, “produced the monocultural, even-age forests that eventually transformed the Normalbaum from abstraction to reality. The German forest became the archetype for imposing on disorderly nature the neatly arranged constructs of science.” (p. 15, emphasis mine)

High Modernism, as Scott defines it, is the ideological belief that scientific experts in a given field are far better qualified than anyone else to design modern systems, whether that is the layout of a city, the way agriculture works, how humans ought to live in community, or any number of other domains. Seeing Like a State uses the scientific forestry metaphor to help us understand how horrific the consequences can be when we let experts redesign and implement an optimized version of any system that they do not fully understand. In the case of the German forests,

…the negative biological and ultimately commercial consequences of the stripped-down forest became painfully obvious only after the second rotation of conifers had been planted. (…) An exceptionally complex process involving soil building, nutrient uptake, and symbiotic relations among fungi, insects, mammals, and flora—which were, and still are, not entirely understood—was apparently disrupted, with serious consequences. Most of these consequences can be traced to the radical simplicity of the scientific forest. (p. 20)

In every other example we see the same hubris at work: a state seeks to make its people and land more legible, experts are convinced that science will allow them to control and optimize the terrain they are given control of, and the citizenry is incapable of resisting. The experts remake the world to optimize for the metrics they care about, ignoring everything else. Exceptionally complex processes (which Scott explores) are disrupted, with horrific human consequences.

Seeing Like a State is interested in these processes that happen from the top down; states defining their goals and using power to force upon the populace changes that are intended “for their benefit” but ultimately fail and lead to great human suffering. As I was reading, I couldn’t help but see terrifying parallels to the modern tech industry. I’m going to attempt to explore the similarities in this post.

I’m focused on tech companies that hold large, if not monopoly, power in various realms of the world today: Google, with its dominance in search and advertising on the open web; Facebook, with its dominance in online communication and in advertising inside its platforms; Amazon, with its dominance in internet infrastructure and online shopping. Companies such as Apple, Palantir, and Microsoft, perhaps even Squarespace and Spotify, surely belong on the list as well. These are all companies that have immense power over the shape of our modern world. Even though they are not state powers, their ability to influence the lives of individuals is vast, and that power has a quasi-state shape. With that lens, I want to map Scott’s attributes of failed schemes (listed above) onto the tech industry, in reverse order.

A citizenry unable to resist

Scott sees that a necessary component for schemes to be implemented is a weakened citizenry, unable to resist the central authority. In Russia and Tanzania he argues that this was true because the rural populace was mostly self-organized in small collectives, and had no larger organizing power against the state. In the world of tech, there are at least three ways a large portion of the population may be subject to a tech company.

The first way is genuine usefulness. Many of the tech giants gained prominence because they were useful tools for accomplishing specific goals: Google was the most accurate search engine, Facebook made it easy to find your friends and passively keep in touch with them, Amazon made it easy to buy things, Apple gave us computers in our pockets. A large group may become users of a company’s products or services simply because they are genuinely useful.

The second way that a large portion of the population might become subject to a tech company is monopolization of a given realm. Google reached this with search, but also by releasing Gmail, its free email service. YouTube is the dominant video platform online. Facebook had the benefit of a network effect, but it also aggressively bought up competitors and copied features of other platforms, so that all sorts of varied online human communication and interaction could be facilitated via its platforms. Even if these companies could argue that their initial growth was due to innovation or usefulness, at a certain point their power shifted past the point where people were voluntarily seeking out the company; instead, people had few or no alternatives.

The third way a large portion of the population might find themselves subject to a given company is if the company is a service provider for an entity the person has to interact with. Perhaps it’s a student using Google’s educational products, or a person interested in political activism whose group does all its organizing on Facebook. An extremely common case is that the tech company provides a service necessary for accomplishing work. These relationships can be contractual or voluntary, but either way, the person did not choose the platform; it was chosen for them.

In all of these situations there is no national citizenry; instead we have what the tech industry refers to as a userbase, a nice euphemism for “the humans who use the services, products, or platforms of a given company.” The tech industry collectively touches billions of people today, and thus the things these companies build affect more people than most countries have citizens.

The authoritarian power of the state

A national state has a level of power over its citizenry that tech companies do not have over the people who use their products. There can be no conflation of the two. No tech company currently can levy taxes, raise an army from its users, claim carceral power, or exercise any of the other powers we associate with nation-states.

And yet it’s impossible to ignore the power that tech companies do wield. Amazon powers the infrastructure for a large portion of the web, and also controls the algorithms that decide which products surface when a person searches on its site. Google defines the shape of information via its search algorithm and the preferences it encodes. Google also owns YouTube, which is fine-tuned to keep people watching. Facebook is the most obvious example; the platform remains a revolutionary vehicle for amplifying misinformation and helping fringe groups organize and connect.

In each of these cases, and dozens more unmentioned, a private corporation is wielding immense power, not simply over individuals, but also over the shape of human culture at large. Tech companies, when asked to defend themselves, are heavily incentivized to focus on the benefits their products offer to people. But when dealing with this much power, we should not settle for a utilitarian analysis of each company, asking whether the good it brings to the world outweighs the bad. That type of analysis is a flattening that allows the company to demand we make a binary judgment about its value. These companies may very well bring benefits to people, but they also bring harm. And when we allow companies to center the question on some utilitarian calculus, we disregard the much more important question: why is it acceptable for any single entity to wield this much power over so many people?

The necessity of asking this question becomes much stronger when we apply the next of Scott’s components to the industry.

Applying High Modernist ideology to optimize the world

Scott’s definition of High Modernism is helpful here. High Modernism is

best conceived as a strong version of the beliefs in scientific and technical progress (…). At its center was a supreme self-confidence about continued linear progress, the development of scientific and technical knowledge, the expansion of production, the rational design of social order, the growing satisfaction of human needs, and not least, an increasing control over nature (including human nature) commensurate with scientific understanding of natural laws. High modernism is thus a particularly sweeping vision of how the benefits of technical and scientific progress might be applied—usually through the state—in every field of human activity. (…) One might say that the high-modern state began with extensive prescriptions for a new society, and it intended to impose them.

Scott is focused on architects like Le Corbusier, and American agricultural scientists who were convinced that they knew how to improve farming around the world. I find it easy to see the tech industry in this definition. Seeing Like a State argues that High Modernism fails because central planners (the scientists developing the extensive prescriptions) can only define their prescriptions by simplifying the complexities of the systems they are reforming to a few simple variables that are easiest to optimize for. The world is infinitely complex, and it is impossible to centrally and rationally design a vastly complex system, so the planners presume to identify the most important parts of the system, and they optimize for these parts.

Scott identifies numerous examples of this, but perhaps most relevant here are the architect Le Corbusier’s High Modernist theories of urban planning, which were used to design Brasilia, the modern capital of Brazil. Scott contrasts Le Corbusier’s theories, which optimize for aesthetics and for roads that move goods efficiently from point A to point B, against Jane Jacobs’s arguments in The Death and Life of Great American Cities, where she observes thriving city blocks and recognizes that while they can appear disorganized and haphazard from the outside, internally there is a great deal of logic and self-organizing complexity to them. Seeing Like a State compellingly argues that these local, contextual systems are critical to the health of human communities and the many worlds we are building.

The tech industry has no interest in listening to Scott (or Jane Jacobs, or the many others he cites in building his argument). The lure of internet technology is that it is infinitely scalable, borderless, and innovative. Facebook says its mission is “to give people the power to build community and bring the world closer together” and Google seeks “to organize the world’s information and make it universally accessible and useful.” The hubris in these mission statements is an echo of the High Modernist schemes in the book.

Scott argues in Seeing Like a State that we should be wary of High Modernist attempts to reorganize the world, because these schemes are often so revolutionary that they cannot be undone. He argues we should “prefer interventions that can easily be undone if they turn out to be mistakes. Irreversible interventions have irreversible consequences.” This is an excellent suggestion (Ursula Franklin offers a nearly identical suggestion in The Real World of Technology), but it’s not a suggestion that the tech industry cares to abide by.

The past ~20 years have seen a massive effort by the tech industry to make its various products, services, and platforms central to the modern world. Facebook wants to sign up every single person to its platforms, and then mediate all information distribution with its proprietary algorithms. Google desires to organize (which is a form of control) email, news, the mobile web, video distribution, and so much more, convinced that its algorithms are just the ticket for our increasingly connected world. Amazon wants to control the infrastructure of the internet, the way consumers obtain their goods, and more. On and on, the companies presume that they have the authority and expertise to remake the world as they see fit. Since they wield immense power by the nature of all the people using their services, products, and platforms, they are largely free to pursue their goals, consequences be damned.

Organize administrative aims more efficiently

In Seeing Like a State, examples of this occur when a state seeks to make its polity easier to map, or tax, or otherwise take a census of. In the tech industry, the parallel I find is the thing that powers all these tech companies now: data. The internet, more than anything else in the modern age, ushered in the era of big data. We are tracked everywhere on the web, and the large tech companies sort through the immense amounts of data people generate online (and offline) to find patterns, connections, and optimizations.

Data, we are told, is the key to progress and the future. Prior to our burgeoning big data era, humans were limited in how many variables could be taken into account when sorting through complex systems. But now, we can take (humanly) incomprehensibly large data sets and feed them to algorithms that run on massive server farms, and the computers can sort through the data and make sense of it. This will (we are told) lead to an automated world that makes things better for everyone.

A thing that very few people in the tech industry ever stop to consider is what isn’t included in the data, and whether what’s missing is important. Data feels important because it feeds the High Modernist rational impulse: here is a large database full of numbers and events and metrics that a computer can sort through. Patterns can be detected, hypotheses can be vetted, optimizations can be made. But datasets are missing so much. As a small example, think of the way every platform has some “related to” functionality. Facebook has a “People You May Know” feature, generated from a number of factors (people that multiple friends of yours know but you aren’t yet “connected” to, other people’s address books, photo recognition, etc.). Google can suggest searches similar to the one you’re making. Amazon suggests products related to the one you’re looking at.

In each of these cases, the datasets each company uses suggest a correlation, but the data itself does not include why those correlations exist. Facebook cannot determine how you might know someone; in fact, that information is arguably worthless to Facebook. Its goal is merely to build community, and Facebook has defined community as “connections to other people facilitated by Facebook.” But for you, a genuine connection to another person might involve history, relationship, positive or negative memories. There is real-world context and meaning underlying the single data point Facebook cares about, context and meaning that can drastically impact your response to the things the algorithm highlights. Does that matter?
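
To make concrete how thin this data is, here is a minimal sketch of how a “related to” feature can work: a simple co-occurrence count over interaction logs. It assumes nothing about any company’s actual systems, which are proprietary and vastly more sophisticated; the toy data and names are mine. The point is what the data structure contains, and what it doesn’t.

```python
from collections import Counter, defaultdict
from itertools import combinations

# Each "session" is the set of things one person interacted with:
# products viewed together, accounts followed, searches made in a row.
# (Toy data; real systems ingest billions of these events.)
sessions = [
    {"gardening book", "seed catalog", "trowel"},
    {"gardening book", "seed catalog"},
    {"seed catalog", "trowel"},
]

# Count how often each pair of items appears together.
co_counts: dict[str, Counter] = defaultdict(Counter)
for session in sessions:
    for a, b in combinations(sorted(session), 2):
        co_counts[a][b] += 1
        co_counts[b][a] += 1

def related_to(item: str, n: int = 3) -> list[str]:
    """Suggest the items most often seen alongside `item`.

    The table records only *that* items co-occur, never *why*;
    nobody's intent, history, or context is anywhere in the data.
    """
    return [other for other, _ in co_counts[item].most_common(n)]

print(related_to("gardening book"))  # ['seed catalog', 'trowel']
```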


Scott concludes by focusing on a Greek concept named “mētis,” which he explains “represents a wide array of practical skills and acquired intelligence in responding to a constantly changing natural and human environment.” Later, he expands the definition a bit more:

While something can indeed be said about forestry, revolution, urban planning, agriculture, and rural settlement in general, this will take us only so far in understanding this forest, this revolution, this farm. All farming takes place in a unique space (fields, soil, crops) and at a unique time (weather pattern, season, cycle in pest populations) and for unique ends (this family with its needs and tastes). A mechanical application of generic rules that ignores these particularities is an invitation to practical failure, social disillusionment, or most likely both. The generic formula does not and cannot supply the local knowledge that will allow a successful translation of the necessarily crude general understandings to successful, nuanced, local application.

Mētis, in Scott’s formulation, is the local, contextual, ungeneralizable knowledge that humans develop while in community with each other and with the land they inhabit (the actual local terrain, not the general territory of their country). This knowledge escapes codification in datasets; it resists translation for broad application. Communities and human networks built around this type of knowledge may not scale, and they may have unique forms of communication, organization, maintenance needs, infrastructure needs, and more. Scott’s book argues, very effectively to my mind, that dismissing and erasing this knowledge is incredibly harmful for human and ecological health.

Scott, in arguing for preserving mētis, discusses how the close observation and inherent locality of this knowledge mean that people who possess it are well aware of the risks, pitfalls, and likely consequences of their decisions. They are wary of large disruptions or grand proclamations because they understand that complex systems cannot be simplistically reduced. The men who believed in High Modernist ideology had no such concerns, so confident were they that their plans were a scientific, rational, and efficient improvement over the status quo. Recall the German scientific forest: the scientists succeeded at remaking messy, unpredictable, natural forests into orderly, predictable, scientific forests. Unfortunately, to do this they optimized out all the parts of the system that made it healthy and sustainable. They literally missed the forest for the trees, and in so doing they destroyed that which they were tasked with improving.

This is the risk of introducing significant changes to complex systems that are not fully understood; it is impossible to predict the second- and third-order consequences of such changes, and if those consequences are harmful or destructive, it may not be possible to undo the changes. Nearly every tech company mentioned above is actively using its power and technology to remake the world. The past 20 years have seen cultural and societal changes the scale of which we have barely even mapped, let alone understood the consequences of, and the majority of this change was driven by the tech industry. I’ll end by considering one example, although I think there are dozens to choose from.

Facebook, whose mission “is to give people the power to build community and bring the world closer together,” has, in its barely 15 years of existence, worked to become the primary platform for online human interaction. In a pre-Facebook world, it was extremely hard to passively keep in touch with people around the world. There was no easy way to broadcast updates semi-publicly so that everyone you’d ever known could see them. Similarly, the ways people obtained media were varied and disparate: there was TV and print media, but also community groups and neighborhoods and newsletters and myriad small methods of communication. People who lived physically far from each other but had shared interests might never come in contact with each other.

In working towards its goal to “bring the world closer together,” Facebook has introduced numerous systemic disruptions to the things I just described. Facebook is greedy for everyone who has ever met another person to solidify that “friendship” via its platform. In order to help sort through the noise that dozens if not hundreds of friends’ broadcasts create on the platform, Facebook introduced algorithms to show each person the “most interesting” things. Simultaneously, Facebook has sought to become the broadcast platform for myriad different areas of the world: news publishers, non-profits, community organizations, affinity groups, and more. All are invited onto the platform to have “the power to build community,” but this of course creates more noise, so Facebook continues to refine its algorithms for surfacing content, working to give each person the “most relevant” Facebook content for them.

Facebook believes in algorithms; algorithms are the key to sorting through all the noise it invites everyone to create. Here, though, is where Seeing Like a State’s critiques give us great insight. Facebook creates algorithms to optimize for the way it thinks the platform should look. Its ranking algorithm is proprietary and surely includes many signals, but one that we know has been optimized for years is “engagement”: the algorithm treats content that others are interacting with as a positive signal, assuming that since some people liked this content, others probably will too. And so the algorithm boosts the content.
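
To illustrate the shape of this, here is a deliberately crude sketch of engagement-based ranking. Facebook’s real system is proprietary and enormously more complex; the fields and weights below are invented for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    likes: int = 0
    comments: int = 0
    shares: int = 0

def engagement_score(post: Post) -> float:
    # Invented weights: interactions that take more effort count for more.
    return 1.0 * post.likes + 4.0 * post.comments + 8.0 * post.shares

def rank_feed(posts: list[Post]) -> list[Post]:
    # Content that others interact with floats to the top, regardless
    # of *why* they interacted: joy, outrage, and disbelief all
    # produce the same positive signal.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("measured-local-news", likes=120, comments=10, shares=4),
    Post("enraging-rumor", likes=40, comments=85, shares=70),
])
print([p.post_id for p in feed])  # ['enraging-rumor', 'measured-local-news']
```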

But the thing that data cannot know—the local knowledge that is impossible to feed to an algorithm—is why people are interacting with specific content. Interaction data is just the recorded result of many people making choices based on their own reasoning. Facebook cannot interrogate or know the reasoning, despite it being much more helpful information for deciding whether some content is worth boosting or not. Proponents of big data will argue that this is the beauty of data; we don’t need to know the why, only the what, and that’s enough! Algorithms can detect the patterns and make decisions based on their modeling.

You’ll forgive me if I look around today at the rise of conspiratorial thinking, right-wing authoritarianism, a deepening climate crisis, and a media landscape that is impossible to sort through, and come away wondering if perhaps the big data proponents are terribly wrong, and it would be helpful to know the why. We live in a world where, in barely a decade, Facebook has completely remade the way over a third of humanity sends and receives information. Is Facebook the cause of all these trends? It may be an impossible question to answer definitively. But the hubris of Facebook, thinking it can wield its power in this way, thinking that this level of disruption is innately good and helpful and will make the world better, echoes the examples in Seeing Like a State.

Facebook, and the tech industry at large, is continuing to remake the world as it sees fit, building and optimizing on top of a mountain of data. But I wonder if we’re so enamored with data, so excited by the shiny new conveniences, that we are letting these massive corporations optimize the world to their ends, at the expense of all the parts of the world that don’t show up in their datasets. I wonder if everything that doesn’t show up in the datasets is actually the critical knowledge that makes our world sustainable. Have we surrendered the shape of the future to a bunch of engineers who are also missing the forest for the trees? If so, can we undo this, or have we already disrupted the system so much that there’s no easy way of going back? I hope we figure this out soon, because the scale of the experiments that tech companies are running on the world is terrifying.
