New Dark Age

Over the last century, technological acceleration has transformed our planet, our societies, and ourselves, but it has failed to transform our understanding of these things. The reasons for this are complex, and the answers are complex too, not least because we ourselves are utterly enmeshed in technological systems, which shape in turn how we act and how we think. We cannot stand outside them; we cannot think without them. [p2]


Across the sciences and society, in politics and education, in warfare and commerce, new technologies do not merely augment our abilities, but actively shape and direct them, for better and for worse. It is increasingly necessary to be able to think new technologies in different ways, and to be critical of them, in order to meaningfully participate in that shaping and directing. If we do not understand how complex technologies function, how systems of technologies interconnect, and how systems of systems interact, then we are powerless within them, and their potential is more easily captured by selfish elites and inhuman corporations. Precisely because these technologies interact with one another in unexpected and often strange ways, and because we are completely entangled with them, this understanding cannot be limited to the practicalities of how things work: it must be extended to how things came to be, and how they continue to function in the world in ways that are often invisible and interwoven. What is required is not understanding, but literacy. [p2-3]


The second danger of a purely functional understanding of technology is what I call computational thinking. Computational thinking is an extension of what others have called solutionism: the belief that any given problem can be solved by the application of computation. Whatever the practical or social problem we face, there is an app for it. But solutionism is insufficient too; this is one of the things that our technology is trying to tell us. Beyond this error, computational thinking supposes - often at an unconscious level - that the world really is like the solutionists propose. It internalises solutionism to the degree that it is impossible to think or articulate the world in terms that are not computable. Computational thinking is predominant in the world today, driving the worst trends in our societies and interactions, and must be opposed by a real systemic literacy. If philosophy is that fraction of human thought dealing with that which cannot be explained by the sciences, then systemic literacy is the thinking that deals with a world that is not computable, while acknowledging that it is irrevocably shaped and informed by computation. [p4]


Finally, systemic literacy permits, performs, and responds to critique. The systems we will be discussing are too critical to be thought, understood, designed and enacted by the few, especially when those few all too easily align themselves with, or are subsumed by, older elites and power structures. There is a concrete and causal relationship between the complexity of the systems we encounter every day; the opacity with which most of those systems are constructed or described; and fundamental, global issues of inequality, violence, populism and fundamentalism. All too often, new technologies are presented as inherently emancipatory. But this is itself an example of computational thinking, of which we are all guilty. Those of us who have been early adopters and cheerleaders of new technologies, who have experienced their manifold pleasures and benefited from their opportunities, and who have consequently argued, often naively, for their wider implementation, are in no less danger from their uncritical deployment. But the argument for critique cannot be made from individual threats, nor from identification with the less fortunate or less knowledgeable. Individualism and empathy are both insufficient in the network. Survival and solidarity must be possible without understanding. [p5-6]


The greatest carrier wave of progress for the last few centuries has been the central idea of the Enlightenment itself: that more knowledge - more information - leads to better decisions. For which one can, of course, substitute any concept of ‘better’ that one chooses. Despite the assaults of modernity and postmodernity, this core tenet has come to define not merely what is implemented, but what is even considered possible from new technologies. The internet, in its youth, was often referred to as an ‘information superhighway’, a conduit of knowledge that, in the flickering light of fibre-optic cables, enlightens the world. Any fact, any quantum of information, is available at the tap of a keyboard - or so we have led ourselves to believe.

And so we find ourselves today connected to vast repositories of knowledge, and yet we have not learned to think. In fact, the opposite is true: that which was intended to enlighten the world in practice darkens it. The abundance of information and the plurality of worldviews now accessible to us through the internet are not producing a coherent consensus reality, but one riven by fundamentalist insistence on simplistic narratives, conspiracy theories, and post-factual politics. It is on this contradiction that the idea of a new dark age turns: an age in which the value we have placed upon knowledge is destroyed by the abundance of that profitable commodity, and in which we look about ourselves in search of new ways to understand the world. In 1926, H. P. Lovecraft wrote,

The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far. The sciences, each straining in its own direction, have hitherto harmed us little; but some day the piecing together of dissociated knowledge will open up such terrifying vistas of reality, and of our frightful position therein, that we shall either go mad from the revelation or flee from the deadly light into the peace and safety of a new dark age.

How we understand and think our place in the world, and our relation to one another and to machines, will ultimately decide if madness or peace is where our technologies will take us. The darkness I write of is not a literal darkness, nor does it represent an absence or occlusion of knowledge, as the popular idea of a dark age holds. It is not an expression of nihilism or hopelessness. Rather, it refers to both the nature and the opportunity of the present crisis: an apparent inability to see clearly what is in front of us, and to act meaningfully, with agency and justice, in the world - and, through acknowledging this darkness, to seek new ways of seeing by another light. [p10-11]


The argument set out in this book is that, like climate change, the effects of technology are widespread across the globe and are already affecting every area of our lives. These effects are potentially catastrophic, and result from an inability to comprehend the turbulent and networked outputs of our own inventions. As such, they upset what we have naively come to expect as the natural order of things, and they require a radical rethinking of the ways in which we think the world.

But the other thrust of this book is that all is not lost: if we really are capable of thinking in new ways, then we are also capable of rethinking the world, and thus understanding and living differently within it. And just as our current understanding of the world proceeds from our scientific discoveries, so our rethinking of it must emerge from and alongside our technological inventions, which are very real manifestations of the contested, complex, and contradictory state of the world itself. Our technologies are extensions of ourselves, codified in machines and infrastructures, in frameworks of knowledge and action; truly thought, they offer a model of a truer world.

We have been conditioned to think of the darkness as a place of danger, even of death. But the darkness can also be a place of freedom and possibility, a place of equality. For many, what is discussed here will be obvious, because they have always lived in this darkness that seems so threatening to the privileged. We have much to learn about unknowing. Uncertainty can be productive, even sublime. [p15]


Thinking through machines predates the machines themselves. The existence of calculus proves that some problems may be tractable before it is possible to solve them practically. History, viewed as such a problem, might thus be transformed into a mathematical equation that, when solved, would produce the future. This was the belief of the early computational thinkers of the twentieth century, and its persistence, largely unquestioned and even unconscious, into our own time is the subject of this book. Personified today as a digital cloud, the story of computational thinking begins with the weather. [p20]


But what if these stories are the real history of computation: a litany of failures to distinguish between simulation and reality; a chronic failure to identify the conceptual chasm at the heart of computational thinking, of our construction of the world?

We have been conditioned to believe that computers render the world clearer and more efficient, that they reduce complexity and facilitate better solutions to the problems that beset us, and that they expand our agency to address an ever-widening domain of experience. But what if this is not true at all? A close reading of computer history reveals an ever-increasing opacity allied to a concentration of power, and the retreat of that power into ever more narrow domains of experience. By reifying the concerns of the present in unquestionable architectures, computation freezes the problems of the immediate moment into abstract, intractable dilemmas; obsessing over the inherent limitations of a small class of mathematical and material conundrums rather than the broader questions of a truly democratic and egalitarian society. By conflating approximation with simulation, the high priests of computational thinking replace the world with flawed models of itself; and in doing so, as the modellers, they assume control of the world. [p34]


The danger of this emphasis on the coproduction of physical and cultural space by computation is that it in turn occludes the vast inequalities of power that it both relies upon and reproduces. Computation does not merely augment, frame, and shape culture; by operating beneath our everyday, casual awareness of it, it actually becomes culture. That which computation sets out to map and model it eventually takes over. Google set out to index all human knowledge and became the source and arbiter of that knowledge: it became what people actually think. Facebook set out to map the connections between people - the social graph - and became the platform for those connections, irrevocably reshaping societal relationships. Like an air control system mistaking a flock of birds for a fleet of bombers, software is unable to distinguish between its model of the world and reality - and, once conditioned, neither are we.

This conditioning occurs for two reasons: because the combination of opacity and complexity renders much of the computational process illegible; and because computation itself is perceived to be politically and emotionally neutral. Computation is opaque: it takes place inside the machine, behind the screen, in remote buildings - within, as it were, a cloud. Even when this opacity is penetrated, by direct apprehension of code and data, it remains beyond the comprehension of most. The aggregation of complex systems in contemporary networked applications means that no single person ever sees the whole picture. Faith in the machine is a prerequisite for its employment, and this backs up other cognitive biases that see automated responses as inherently more trustworthy than nonautomated ones. [p40-41]


At the foundation of automation bias is a deeper bias, firmly rooted not in technology, but in the brain itself. Confronted with complex problems, particularly under time pressure - and who among us is not under time pressure, all the time? - people try to engage in the least amount of cognitive work they can get away with, preferring strategies that are both easy to follow and easy to justify. Given the option of relinquishing decision making, the brain takes the road of least cognitive effort, the shortest cut, which is presented near-instantaneously by automated assistants. Computation, at every scale, is a cognitive hack, offloading both the decision process and the responsibility onto the machine. As life accelerates, the machine steps in to handle more and more cognitive tasks, reinforcing its authority - regardless of the consequences. We refashion our understanding of the world to better accommodate the constant alerts and cognitive shortcuts provided by automated systems. Computation replaces conscious thought. We think more and more like the machine, or we do not think at all. [p43]


Just as global telecommunications have collapsed time and space, computation conflates past and future. That which is gathered as data is modelled as the way things are, and then projected forward - with the implicit assumption that things will not radically change or diverge from previous experiences. In this way, computation does not merely govern our actions in the present, but constructs a future that best fits its parameters. That which is possible becomes that which is computable. That which is hard to quantify and difficult to model, that which has not been seen before or which does not map onto established patterns, that which is uncertain or ambiguous, is excluded from the field of possible futures. Computation projects a future that is like the past - which makes it, in turn, incapable of dealing with the reality of the present, which is never stable.

Computational thinking underlies many of the most divisive issues of our times; indeed, division, being a computational operation, is its primary characteristic. Computational thinking insists on the easy answer, which requires the least amount of cognitive effort to arrive at. Moreover, it insists that there is an answer - one, inviolable answer that can be arrived at - at all. The ‘debate’ on climate change, where it is not a simple conspiracy of petrocapitalism, is characterised by this computational inability to deal with uncertainty. Uncertainty, mathematically and scientifically understood, is not the same as unknowing. Uncertainty, in scientific, climatological terms, is a measure of precisely what we do know. And as our computational systems expand, they show us ever more clearly how much we do not know.

Computational thinking has triumphed because it has first seduced us with its power, then befuddled us with its complexity, and finally settled into our cortexes as self-evident. Its effects and outcomes, its very way of thinking, are now so much a part of everyday life that it appears as vast and futile to oppose as the weather itself. [p44]
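
A minimal sketch of the projection Bridle describes in the passage above: a model fitted to well-behaved historical data extrapolates confidently, and has no way to represent a future that diverges from that past. All data here is hypothetical, invented for illustration.

```python
# Sketch of the pattern described above: a model trained on the past
# projects a future that resembles the past. Hypothetical data throughout.
import numpy as np

# Ten years of observations following a stable, well-behaved trend.
years = np.arange(2000, 2010)
observed = 2.0 * (years - 2000) + 5.0

# Least-squares fit: the model internalises the past...
slope, intercept = np.polyfit(years, observed, deg=1)

# ...and projects it forward, assuming nothing will diverge.
future = np.arange(2010, 2015)
projected = slope * future + intercept

# Reality, unlike the model, is free to break the pattern.
actual = 2.0 * (future - 2000) + 5.0
actual[2:] -= 15.0  # an abrupt shift absent from the training data

for y, p, a in zip(future, projected, actual):
    print(f"{y}: projected {p:5.1f}, actual {a:5.1f}")
```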


But the second is more existential: it relates to our deep need to discover ever more about the world, to gather and process more data about it, in order that the models that we build of it may be more robust, more accurate, and more useful. But the opposite is occurring: our sources of data are slipping away, and with them the structures by which we have structured the world. The melting of the permafrost is both danger sign and metaphor: an accelerating collapse of both our environmental and our cognitive infrastructure. The certainties of the present are founded on the assumption of ever-increasing, ever-crystallising geologies of knowledge; it is reassuring to imagine a cooling earth, coming into shape, manifesting in distinct and solid forms. But, as in Siberia, the sponging of the Greenlandic landscape reiterates a return to the fluid: the marshy and boggy, the undifferentiated and gaseous. A new dark age will demand more liquid forms of knowing than can be derived from the libraries of the past alone. [p58]


This, then, is the true legacy of Moore’s law: as software centred itself within society, so its ever-rising power curve came to be associated with the idea of progress itself: a future of plenty for which no accommodations in the present need be made. A computing law become an economic law become a moral law - with its own accusations of bloat and decadence. Even Moore appreciated the wider implications of his theory, telling the Economist on the fortieth anniversary of its coinage, ‘Moore’s Law is a violation of Murphy’s Law. Everything gets better and better.’ [p83]
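
The power curve itself is simple to state. A back-of-envelope sketch of Moore's law as it is commonly formulated: doubling every two years from the 1971 Intel 4004 baseline of roughly 2,300 transistors (the exact constants vary by telling).

```python
# Sketch of Moore's law as commonly stated: transistor counts doubling
# roughly every two years from the 1971 baseline (Intel 4004, ~2,300
# transistors). Textbook constants, not a precise fit to industry data.
def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in (1971, 1981, 1991, 2001, 2011, 2021):
    print(f"{year}: ~{transistors(year):,.0f} transistors")
```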


In the age of big data, he argued, ‘correlation is enough. We can stop looking for models.’ This is the magic of big data. You don’t really need to know or understand anything about what you’re studying; you can simply place all of your faith in the emergent truth of digital information. In one sense, the big data fallacy is the logical outcome of scientific reductionism: the belief that complex systems can be understood by dismantling them into their constituent pieces and studying each in isolation. And this reductionist approach would hold if it did in practice keep pace with our experiences; in reality, it is proving to be insufficient. [p84]
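
The trouble with 'correlation is enough' is easy to demonstrate: search any modest pile of unrelated data series and strong correlations appear by chance alone. A minimal sketch with hypothetical random walks:

```python
# Sketch: two independent random walks share no causal connection, yet
# searching a small pile of them turns up strong correlations by chance.
import numpy as np

rng = np.random.default_rng(seed=7)
best = 0.0
for _ in range(200):
    a = np.cumsum(rng.standard_normal(100))
    b = np.cumsum(rng.standard_normal(100))
    best = max(best, abs(np.corrcoef(a, b)[0, 1]))

print(f"strongest |r| between unrelated series: {best:.2f}")
# Given enough data and no model, 'significant' patterns are guaranteed.
```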


The way we think the world is shaped by the tools at our disposal. As the historians of science Albert van Helden and Thomas Hankins put it in 1994, ‘Because instruments determine what can be done, they also determine to some extent what can be thought.’ These instruments include the entire sociopolitical framework that supports scientific investigation, from government funding, academic institutions, and the journal industry, to the construction of technologies and software that vest unparalleled economic power and exclusive knowledge in Silicon Valley and its subsidiaries. There is also a deeper cognitive pressure at work: the belief in the singular, inviolable answer, produced, with or without human intervention, by the alleged neutrality of the machine. As science becomes increasingly technologised, so does every domain of human thought and action, gradually revealing the extent of our unknowing, even as it reveals new possibilities. [p102]


But such a position seems to ignore the fact that the complexity of contemporary technologies is itself a driver of inequality, and that the logic that drives technological deployment might be tainted at the source. It concentrates power into the hands of an ever-smaller number of people who grasp and control these technologies, while failing to acknowledge the fundamental problem with computational knowledge: its reliance on a Promethean extraction of information from the world in order to smelt the one true solution, the answer to rule them all. The result of this wholesale investment in computational processing - of data, of goods, of people - is the elevation of efficiency above all other objectives; what sociologist Deborah Cowen calls ’the tyranny of techne’. [p132]


A hermeneutics, or hermetic understanding, of technology might account for its perceived errors by pointing out that reality is never that simple, that there is always meaning beyond the meaning, that answers can be multiple, contested, and potentially infinite.

When our algorithms fail to converge on ideal situations; when, despite all the information at their disposal, intelligent systems fail to adequately grasp the world; when the fluid and ever-changing nature of personal identities fails to fit inside the neat rows of databases: these are moments of hermeneutic communication. Technology, despite its Epimethean and Promethean claims, reflects the actual world, not an ideal one.

When it crashes, we are capable of thinking clearly; when it is cloudy, we apprehend the cloudiness of the world. Technology, while it often appears as opaque complexity, is in fact attempting to communicate the state of reality. Complexity is not a condition to be tamed, but a lesson to be learned. [p134]


Today, the connectionist model of artificial intelligence reigns supreme again, and its primary proponents are those who, like Hayek, believe that there is a natural order to the world that emerges spontaneously when human bias is absent in our knowledge production. Once again, we see the same claims being made about neural networks as were made by their cheerleaders in the 1950s - but this time, their claims are being put to work in the world more widely. [p139]


This awareness of historic injustice is crucial to understanding the dangers of the mindless implementation of new technologies that uncritically ingest yesterday’s mistakes. We will not solve the problems of the present with the tools of the past. As the artist and critical geographer Trevor Paglen has pointed out, the rise of artificial intelligence amplifies these concerns, because of its utter reliance on historical information as training data: ‘The past is a very racist place. And we only have data from the past to train Artificial Intelligence.’ Walter Benjamin, writing in 1940, phrased the problem even more fiercely: ‘There is no document of civilisation which is not at the same time a document of barbarism.’ To train these nascent intelligences on the remnants of prior knowledge is thus to encode such barbarism into our future.

And these systems are not merely contained in academic papers and consumer cameras - they are already determining the macro scale of people’s daily lives. In particular, the faith placed in intelligent systems has been implemented widely in police and justice systems. Half of police services in the United States are already employing ‘predictive policing’ systems such as PredPol, a software package that uses ‘high-level mathematics, machine learning, and proven theories of crime behaviour’ to predict the most likely times and places that new crimes can be expected to occur: a weather forecast for lawbreaking. [p144]
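
PredPol's actual algorithm is proprietary; the following is only a toy sketch of the feedback loop its critics describe. Patrols are allocated in proportion to historical records, and recorded crime in turn reflects where patrols are looking, so a historical skew confirms itself indefinitely. All numbers are invented.

```python
# Toy model of the feedback loop critics of predictive policing describe -
# NOT PredPol's actual (proprietary) algorithm. Four districts with
# identical true crime rates; district 0 happens to have been over-policed
# in the historical record.
import numpy as np

true_crime = np.array([1.0, 1.0, 1.0, 1.0])
recorded = np.array([5.0, 1.0, 1.0, 1.0])

for _ in range(10):
    # 'Prediction': allocate patrols in proportion to past records.
    patrols = recorded / recorded.sum()
    # Crime is recorded only where police are looking for it.
    recorded += true_crime * patrols * 10.0

print("patrol share per district:", np.round(recorded / recorded.sum(), 2))
# District 0's historical skew never washes out: the model, trained on the
# past, keeps confirming its own past.
```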


Acknowledging the reality of nonhuman intelligence has deep implications for how we act in the world and requires clear thinking about our own behaviours, opportunities, and limitations. While machine intelligence is rapidly outstripping human performance in many disciplines, it is not the only way of thinking, and it is in many fields catastrophically destructive. Any strategy other than mindful, thoughtful cooperation is a form of disengagement: a retreat that cannot hold. We cannot reject contemporary technology any more than we can ultimately and utterly reject our neighbours in society and the world; we are all entangled. An ethics of cooperation in the present need not be limited to machines either: with other nonhuman entities, animate and non-animate, it becomes another form of stewardship, emphasising acts of universal justice not in an unknowable, uncomputable future, but in the here and now. [p160]


What is damaging is the act of leaking itself, not the contents of any specific leak. As Wikileaks entered the public eye and Assange himself became an increasingly powerful and arrogant figure, the organisation became involved in a series of feuds with the intelligence agencies - and ultimately a tool for states to attack one another - and this realisation was lost. What replaced it was a mistaken belief in the power of the ‘smoking gun’: the single source or piece of evidence that would bring down authority.

The problem of the smoking gun besets every strategy that depends on revelation to move opinion. Just as the activities of the intelligence agencies could have been inferred long before the Snowden revelations by multiple reports over decades, so other atrocities are ignored until some particular index of documentary truthfulness is attained. In 2005, Caroline Elkins published a thorough account of British atrocities in Kenya, but her work was widely criticised for its reliance on oral history and eyewitness accounts. It was only when the British government itself released documents that confirmed these accounts that they were accepted, becoming part of an acknowledged history. The testimony of those who suffered was ignored until it conformed to the account offered by their oppressors - a form of evidence that, as we have seen, will never be available for a multitude of other crimes. In the same manner, the cult of the whistle-blower depends upon the changing conscience of those already working for the intelligence services; those outside such organisations are left without agency, waiting helplessly for some unknown servant of government to deign to publish what they know. This is a fundamentally insufficient basis for moral action.

Just as the availability of vast computational power drives the implementation of global surveillance, so its logic has come to dictate how we respond to it, and to other existential threats to our cognitive and physical well-being. The demand for some piece of evidence that will allow us to assert some hypothesis with 100 per cent certainty overrides our ability to act in the present. Consensus - such as the broad scientific agreement around the urgency of the climate crisis - is disregarded in the face of the smallest quantum of uncertainty. We find ourselves locked in a kind of stasis, demanding that Zeno’s arrow hit the target even as the atmosphere before it warms and thickens. The insistence upon some ever-insufficient confirmation creates the deep strangeness of the present moment: everybody knows what’s going on, and nobody can do anything about it.

Reliance on the computational logics of surveillance to derive truth about the world leaves us in a fundamentally precarious and paradoxical position. Computational knowing requires surveillance, because it can only produce its truth from the data available to it directly. In turn, all knowing is reduced to that which is computationally knowable, so all knowing becomes a form of surveillance. Thus computational logic denies our ability to think the situation, and to act rationally in the absence of certainty. It is also purely reactive, permitting action only after sufficient evidence has been gathered and forbidding action in the present, when it is most needed.

The operation of surveillance, and our complicity in it, is one of the most fundamental characteristics of the new dark age, because it insists on a kind of blind vision: everything is illuminated, but nothing is seen. We have become convinced that throwing light upon the subject is the same thing as thinking it, and thus having agency over it. But the light of computation just as easily renders us powerless - either through information overload, or a false sense of security. It is a lie we have been sold by the seductive power of computational thinking. [p183-185]


Richard Hofstadter, writing in 1964, created the term ‘paranoid style’ to characterise American politics. Citing examples ranging from anti-Masonic and anti-Catholic panics in the 1800s to Senator Joe McCarthy’s assertions of high-level government conspiracy in the 1950s, Hofstadter outlined a history of othering: the casting of an invisible enemy as ‘a perfect model of malice, a kind of amoral superman - sinister, ubiquitous, powerful, cruel, sensual, luxury-loving’. The most common attribute of this enemy is their extraordinary power: ‘Unlike the rest of us, the enemy is not caught in the toils of the vast mechanism of history, himself a victim of his past, his desires, his limitations.

He wills, indeed he manufactures, the mechanism of history, or tries to deflect the normal course of history in an evil way.’ In short, the enemy is the other who rises above the convolutions and complexities of the present, who grasps the totality of the situation and is capable of manipulating it in ways the rest of us are not. Conspiracy theories are the extreme resort of the powerless, imagining what it would be to be powerful.

This theme was taken up by Fredric Jameson, when he wrote that conspiracy ‘is the poor person’s cognitive mapping in the postmodern age; it is the degraded figure of the total logic of late capital, a desperate attempt to represent the latter’s system, whose failure is marked by its slippage into sheer theme and content’. Surrounded by evidence of complexity - which for the Marxist historian is emblematic of the generalised alienation produced by capitalism - the individual, however outraged, resorts to ever more simplistic narratives in order to regain some control over the situation. As the technologically augmented and accelerated world trends toward the opposite of simplicity, as it becomes more - and more visibly - complex, conspiracy must of necessity become more bizarre, intricate, and violent to accommodate it. [p205]


Accompanying the violence are untold levels of exploitation: not of children because they are children, but of children because they are powerless. Automated reward systems like YouTube algorithms necessitate exploitation to sustain their revenue, encoding the worst aspects of rapacious, free market capitalism. No controls are possible without collapsing the entire system. Exploitation is encoded into the systems we are building, making it harder to see, harder to think and explain, harder to counter and defend against. What makes it disturbing is that this is not a science fictional exploitative future of AI overlords and fully robot workforces in the factories, but exploitation in the playroom, in the living room, in the home and the pocket, being driven by exactly the same computational mechanisms. And humans are degraded on both sides of the equation: both those who, numbed and terrified, watch the videos; and those who, low paid or unpaid, exploited or abused, make them. In between sit mostly automated corporations, taking the profit from both sides.

These videos, wherever they are made, however they come to be made, and whatever their own conscious intentions, are bred by a system that was consciously intended to show videos to children for profit. The unconsciously generated, emergent outcomes of this are all over the place.

To expose children to this content is abuse. This is not the same as the debatable but undoubtedly real effects of film or video game violence on teenagers, or the effects of pornography or extreme images on young minds. Those are important debates, but they’re not even what is being discussed here. At stake on YouTube are very young children, effectively from birth, being deliberately targeted with content that will traumatise and disturb them, via networks that are extremely vulnerable to exactly this form of abuse. It’s not about intention, but about a kind of violence inherent in the combination of digital systems and capitalist incentives. [p229-230]
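
A toy sketch of the reward dynamic at issue, with entirely hypothetical titles and numbers (this is not YouTube's actual recommender): an optimiser whose only objective is watch time will converge on whatever holds attention longest, with no variable anywhere for harm.

```python
# Toy engagement-only recommender - hypothetical values, not YouTube's
# actual system. The objective contains watch time and nothing else, so
# the optimiser cannot distinguish benign from disturbing content.
import random

random.seed(0)
avg_watch_minutes = {
    "nursery rhyme":         3.0,
    "keyword-mashup upload": 4.5,  # disturbing, but it holds attention longer
}

plays = {title: 0 for title in avg_watch_minutes}
totals = {title: 0.0 for title in avg_watch_minutes}

for _ in range(1000):
    if random.random() < 0.1 or not any(plays.values()):
        choice = random.choice(list(avg_watch_minutes))  # occasional exploration
    else:
        # Greedy: recommend whatever has the best average watch time so far.
        choice = max(plays, key=lambda t: totals[t] / max(plays[t], 1))
    plays[choice] += 1
    totals[choice] += avg_watch_minutes[choice]

print(plays)  # the disturbing upload ends up dominating recommendations
```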


Just as we spent forty-five years locked in a Cold War perpetuated by the spectre of mutually assured destruction, we find ourselves in an intellectual, ontological dead end today.

The primary method we have for evaluating the world - more data - is faltering. It’s failing to account for complex, human-driven systems, and its failure is becoming obvious - not least because we’ve built a vast, planet-spanning information-sharing system for making it obvious to us. The mutually assured privacy meltdown of state surveillance and leak-driven countersurveillance activism is one example of this failure, as is the confusion caused by real-time information overload from surveillance itself. So is the discovery crisis in the pharmacological industry, where billions of dollars in computation are returning exponentially fewer drug breakthroughs. But perhaps the most obvious is that despite the sheer volume of information that exists online - the plurality of moderating views and alternative explanations - conspiracy theories and fundamentalism don’t merely survive, they proliferate. As in the nuclear age, we learn the wrong lesson over and over again. We stare at the mushroom cloud, and see all of this power, and we enter into an arms race all over again.

But what we should be seeing is the network itself, in all of its complexity. The network is only the latest, but certainly the most advanced, civilisation-scale tool for introspection our species has built thus far. To deal with the network is to deal with a Borgesian infinite library and all the inherent contradictions contained within it: a library that will not converge and continually refuses to cohere. Our categories, summaries and authorities are no longer merely insufficient; they are literally incoherent. As H. P. Lovecraft noted in his annunciation of a new dark age, our current ways of thinking about the world can no more survive exposure to this totality of raw information than we can survive exposure to an atomic core. [p248-249]


An atomic understanding of information presents, at the last, such a cataclysmic conception of the future that it forces us to insist upon the present as the only domain for action. In contrast and in opposition to nihilistic accounts of original sins and dys/utopian imaginings of the future, one strand of environmental and atomic activism posits the notion of guardianship. Guardianship takes full responsibility for the toxic products of atomic culture, even and especially when they have been created for our ostensible benefit. It is based on the principles of doing the least harm in the present and of our responsibility to future generations - but does not presume that we can know or control them. As such, guardianship calls for change, while taking on the responsibility of what we have already created, insisting that deep burial of radioactive materials precludes such possibilities and risks widespread contamination. In this, it aligns itself with the new dark age: a place where the future is radically uncertain and the past irrevocably contested, but where we are still capable of speaking directly to what is in front of us, of thinking clearly and acting with justice. Guardianship insists that these principles require a moral commitment that is beyond the abilities of pure computational thinking, but well within, and utterly appropriate to, our darkening reality. [p251-252]

Recently Read

  • Wolf Island

    by L. David Mech

    Copyright 2020, University of Minnesota Press

    Climate Change, Cultural Analysis

  • Artificial Condition

    by Martha Wells

    Copyright 2018, Tordotcom

    Fiction

    (3rd time reading)

  • Theory of Water

    by Leanne Betasamosake Simpson

    Copyright 2025, Haymarket

    Climate Change, Indigenous Issues, Cultural Analysis

    My highlights

  • Adaptable

    by Herman Pontzer, PhD

    Copyright 2025, Avery

    Cultural Analysis