An Anatomical Sketch of Software As A Complex System

As intellectually awkward artifacts that open up new capabilities, that surprise, frustrate and cost us in unexpected ways, and that regularly confound our physical intuitions about their behaviour, software systems meet an everyday language definition of complexity. A more systematic comparison, presented here, shows a significant family resemblance. Complexity science studies techniques common across a number of fields, and using that framework to analyze software engineering could allow a more precise technical understanding of these software problems.

This isn’t a unique thought. Various approaches, such as David Snowden’s Cynefin framework, have used complexity science as a source of insight on software development. Herbert Simon, in works like “The Sciences of the Artificial”, helped build complexity science, with software programs and Good Old Fashioned AI as reference points. Famous papers such as Parnas et al’s “The Modular Structure of Complex Systems” also point the same way. When I was introduced to this material, though, I couldn’t find a recent reference that lined up these features of complex systems with modern software in a brief and systematic way. These notes attempt that, in the form of an anatomical sketch.

This note considers software systems of many internal models and at least thousands of lines, rather than shorter programs analysed in formal detail. This places it more under software engineering than formal computer science, without intending any strict break from the latter. Likewise, by default, it addresses consciously engineered software rather than machine learning. This complexity also differs from the algorithmic time complexity captured by big-O notation, though there may be interesting formal connections to be explored there too.


Anatomical Sketch

Ladyman et al give seven features of complex systems, and I’ve added one more from Crutchfield.

1. Non-linearity

Software exhibits non-linearity in the small and the large. Every ‘if’ condition, implicit or explicit, is a branch point where a small change in input or state can produce a qualitatively different output. This is most obvious in response to unexpected input or state: error and exit, segmentation fault, stack trace, NullPointerException.
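The point can be made with a toy sketch (the function and threshold here are invented for illustration): a single branch is enough to make the output discontinuous in the input.

```python
# A hypothetical pricing rule: one 'if' makes the output non-linear in the input.
def shipping_cost(order_total: float) -> float:
    if order_total >= 50.0:  # the branch point
        return 0.0           # free shipping
    return 7.99              # flat fee

print(shipping_cost(50.00))  # 0.0
print(shipping_cost(49.99))  # 7.99: a one-cent change in input, a large jump in output
```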

2. Feedback

From a use perspective, many software systems are part of a feedback loop, with users and the world, and this feedback can often involve internal software state.

From an engineering perspective, all software systems beyond a trivial size are built in cycles where the current state of a codebase is a rich input into the next cycle of engineering. This is true whether iterative software development methodologies are used or not. For instance, consider bug fixes resulting from a test phase in waterfall.


3. Spontaneous Order

Spontaneous order is not a feature of large software systems. If anything, the usual condition of engineering large software systems is constantly and deliberately working to maintain order against a tendency for these systems to decay into complicated disorder. The ideas of ‘software crisis’ and ‘technical debt’ are both reactions to a perceived lack of order in engineered software.


4. Robustness and lack of central control

In the small, or even at the level of the individual system, software tends to brittleness, as noted above. Robustness, being “stable under perturbations of the system” (Ladyman), must be specifically engineered in, by considering a wide variety of inputs and states and testing the system under those conditions. However, certain software ecosystems, such as the TCP/IP substrate of the Internet, display great robustness. Individual websites go down, but the whole Internet or World Wide Web tends not to. This is related to the choice of a highly distributed architecture based on relatively simple, standard protocols and design guidelines like Postel’s principle (be liberal in what you accept and conservative in what you send). Like a flock of birds, the lack of central control makes the system tolerant of local failure. High availability systems make use of similar principles of redundancy.
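Postel’s principle can be sketched in a few lines (the header format and function name are invented for illustration): accept messy input liberally, emit only a strict canonical form.

```python
# Tolerant reader, strict writer: a toy version of Postel's principle.
def parse_header(line: str) -> tuple:
    """Accept liberal spacing and casing; return a strict, normalized form."""
    name, _, value = line.partition(":")
    return (name.strip().lower(), value.strip())

print(parse_header("Content-Type :  text/html "))  # ('content-type', 'text/html')
```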


5. Emergence

Software systems tend not to exhibit emergent behaviours as highly visible features of the system, in the way, say, a flock of birds assumes a particular overall shape once each bird follows certain simple rules about its position relative to its neighbours. However, certain important non-visible features are emergent. Leveson, in Engineering A Safer World, argues that system safety (including software) is an emergent feature: “Determining whether a plant is acceptably safe is not possible, for example, by examining a single valve in the plant. In fact, statements about the ‘safety of the valve’ without information about the context in which that valve is used are meaningless.” Difficult bugs in established software systems are often multi-causal and emerge from systemic interactions between components rather than isolated failures.

Conway’s law, the observation that a software system’s internal component structure mirrors the team structure of the organisation that created it, describes system shape emerging from social structure without explicit causal rules.

6. Hierarchical organisation

Formal models of computation did not originally differentiate between parts of a program; for instance Turing machines or the Church lambda calculus do not even distinguish between programs and data. Many of the advances in software development have by contrast been tools for structuring programs in hierarchies and differing levels of abstraction. A reasonable history of programming could be told simply through differentiated structure, eg:

  • Turing machines / Church lambda calculus
  • Von Neumann machine: separation of program, data, input, output
  • MIT Summer Session Computer: named instructions
  • Hopper: the A-0 compiler and subroutines
  • Backus: FORTRAN control structures such as IF and DO
  • Parnas: module decomposition through information hiding
  • Smalltalk: object orientation
  • Codd: relational databases
  • GoF: design patterns
  • Beck: xUnit automated unit testing
  • Fowler: refactoring for improved structure
  • Maven: systematic library dependency management

Navigating program hierarchy from user interface through domain libraries to system libraries and services is a significant, even dominant, proportion of modern programming work (from personal observation, though a quantified study should be possible).


7. Numerosity (Many more is different)

The techniques for navigating, designing and changing a codebase of hundreds of classes are different from those for a short script, at least partly due to the limitations of human memory and attention span. An early recognition of this is Benington’s “Production of Large Computer Programs”; a more recent one is Feathers’ Working Effectively With Legacy Code. Feathers states: “As the amount of code in a project grows, it gradually surpasses understanding”.


8. Historical information storage

“Structural complexity is the amount of historical information that a system stores” according to Crutchfield. This is relevant for both use- and engineering-time views of software systems.

In use, the amount of state stored by a software system is historical information in this sense. An example might be a hospital patient record database. A subtlety here: suggested measures of complexity based on amounts of information (such as Kolmogorov complexity) tend to specify the maximally compressed representation, so simply allocating several blank terabytes of disk isn’t enough. This also covers implicit forms of complexity, such as dependencies in code on particular structures in data. Contrast a hospital database alone (just records and basic SQL) with the same database together with software which provides a better user interface and imposes rules on how records may be updated to suit the procedures of the hospital.
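That compression subtlety can be made concrete (a miniature stand-in, with invented data, for the terabytes and records above): blank storage compresses to almost nothing, so it carries almost no historical information, while varied state resists compression.

```python
import os
import zlib

blank = b"\x00" * 1_000_000     # "several blank terabytes", in miniature
varied = os.urandom(1_000_000)  # a stand-in for accumulated, non-redundant records

# Compressed size approximates information content far better than allocated size.
print(len(zlib.compress(blank)))   # around a kilobyte
print(len(zlib.compress(varied)))  # close to the full megabyte
```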

Source control history poses the engineering-time version of the same point. In practice, when extending or maintaining a system, classes are rarely replaced wholesale or deleted. New classes are added, or existing classes modified to add functionality. The existing code is always an input to the new state of the code for the programmer making the change, even if the existing code is left untouched. Welsh even declared, in a paper of that name, that “Software is history!”

The result, regardless, is increasing historical information in a codebase over time, and therefore complexity.


Conway – How Do Committees Invent? Datamation 1968
Crutchfield – Five Questions on Complexity, Responses
Feathers – Working Effectively With Legacy Code, 2004
Ladyman, Lambert, Wiesner – What Is A Complex System?
Leveson – Engineering a Safer World, Chapter 3 p64, 2011
Parnas – On The Criteria To Be Used in Decomposing Systems into Modules, Communications of the ACM, 1972
Parnas, Clements, Weiss – The Modular Structure of Complex Systems, IEEE Transactions on Software Engineering, 1985
Postel – RFC 761, Transmission Control Protocol
Simon – The Sciences of the Artificial
Snowden and Boone – A Leader’s Framework For Decision Making (Cynefin)
Welsh – Software Is History!


Accelerationism: A Brief Taxonomy

It is a moment of pause for the theory of accelerationism. The burst of self-identifying activity over the last few years has cycled into something of a bear market, even as the conceptual toolbox is more powerful than ever in navigating our present. Theorists of acceleration are connoisseurs of vertigo, and will insist any snapshot of their thought is dead or out of date. This taxonomy is both. But it’s short.


Accelerationism: ACC: Capitalism is a feedback cycle of spiralling, ever-increasing power, which it is not possible to comprehend or control from within, and therefore at all. The complexity of this alien system includes hyperstitions and reverse causalities. Capitalism melts and reassembles everything. Fictions become realities through their articulation. Future structures assemble themselves through their conditioning of the past.


  • Deleuze and Guattari, Anti-Oedipus; A Thousand Plateaus
  • Land, Meltdown
  • CCRU – Writings 1997-2003
  • Collapse Journal I-VIII


Right Accelerationism: R#ACC: Capitalism is modernity, science, intelligence. What is powerful in all these things is one identical force. What is best in the world is represented by this force, the product of sharpening by relentless competition, brutal empiricism and blind peer review, the butcher’s yard of evolution. Historically, it was possible to put a defensive brake on capitalism and intelligence. That possibility is fast receding or likely already gone, and was always undesirable. Ethically and therefore politically, we should align ourselves with the emancipation of the means of production. Artificial intelligence, genetic engineering, corporate microstates, and breeding a cyborg elite are all means for achieving this end.

Right accelerationism is entwined with the techno-capitalist thread of neoreaction.



Left Accelerationism: L#ACC: The tremendous productive power of capitalism is a world system that is impossible to fully control, but it may be harnessed and steered for progressive ends. Only with the wealth and productivity of capitalism has fully automated luxury gay space communism become possible, and now that it is within reach, it can be seized. Only through computational power can the relationship between Homo sapiens and its ecosystem be understood and balanced. The great corporations and financial structures of the early twenty-first century are themselves prototypes of platform planned economies leveraging enormous computational power to fulfill billions of needs and desires across society. By accelerating progressive technological invention, reinvigorating the domesticated industrial state as a platform state, nationalizing data utilities, sharing dividends and redefining work, the system may be made sustainable and wealth shared with all according to their need.

After a surge of activity, many left accelerationists rapidly swerved away from the name a few years ago. This was coincident with Srnicek and Williams’ book Inventing the Future, which is all about left accelerationism, without mentioning it once.



Unconditional Accelerationism: U#ACC: To erect any political program that pretends to steer, brake, or accelerate this system is folly and human-centric hubris. The system can be studied as a matter of fascination, and of survival. The only politics that makes sense is to embrace fragmentation and create a safe distance from centralized political power. A patchwork of small communities built across and within networks, societies and geographies are a means for some to survive and thrive. Many small ships can ride through a storm with a few losses, where one giant raft will be destroyed, dooming all.



Blaccelerationism: The separation of human and capital is a power structure shell game. Living capital, speculative value, and accumulated time are stored in the bodies of black already-inhuman (non)subjects.



Gender Accelerationism: G#ACC: Everyone is becoming transsexual lesbian programmer cyborgs. Enjoy it.



Zero Accelerationism: Z#ACC: The world-system is accelerating off a cliff.



Accelerating The Contradictions: Capitalism is riven with conflict and contradictions. Revolutionaries should accelerate this destructive process as it hastens the creation of a system beyond capitalism.

No modern accelerationist group has held this position (D/G: “Nothing ever died of contradictions!”), but it’s a common misunderstanding, or caricature, of Left Accelerationism.



Other introductions: Meta-nomad has a more theory-soaked introduction to accelerationism, which teases out the rhizomatic cross-connections between these threads, and is a good springboard for those diving further down the rabbit hole.

Just Like Reifying A Dinner

Closing the Sorites Door After The Cow Has Ambled

The Last Instance has an interesting, pro-slime response to my recent musings on the sorites paradox. TLI offers a more nuanced herd example in Kotlin, explicitly modelling the particularity of empty herds, herds of one cow, as well as herds of two or more cows, and some good thoughts on what code-wrangling metaphors we should keep to hand.

It’s a better code example, in a number of ways, as it suggests a more deliberate language alignment between a domain jargon and the model captured in code. It includes a compound type with distinct Empty and Singleton subtypes.

But notice that we have re-introduced the sorites paradox by the back-door: the distinction between a proper herd and the degenerate cases represented by the empty and singleton herds is based on a seemingly-arbitrary numeric threshold.

Probably in my rhetorical enthusiasm for the reductive case (herd=[]), the nuance of domain alignment was lost. I don’t agree that this new example brings the sorites paradox in by the back door, though. There is a new ProperHerd type that always has two or more members. Because the threshold is precise, the ambiguity is removed, and the sorites paradox again disappears. Within this code, you can always work out whether something is a Herd, and which subtype (Empty, Singleton, or ProperHerd) it belongs to. It even hangs a lampshade on the philosophical bullet-biting existence of the empty herd.
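TLI’s original is in Kotlin; a rough Python transliteration of the same idea (the names here are my own invention, not TLI’s) shows why membership is always decidable:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Herd:
    cows: tuple

class EmptyHerd(Herd): pass      # zero cows: the degenerate case, made explicit
class SingletonHerd(Herd): pass  # exactly one cow
class ProperHerd(Herd): pass     # two or more cows: a precise, if arbitrary, threshold

def make_herd(*cows) -> Herd:
    if len(cows) == 0:
        return EmptyHerd(cows)
    if len(cows) == 1:
        return SingletonHerd(cows)
    return ProperHerd(cows)

print(type(make_herd()).__name__)                      # EmptyHerd
print(type(make_herd("britney")).__name__)             # SingletonHerd
print(type(make_herd("daisy", "buttercup")).__name__)  # ProperHerd
```

Every value is classifiable with no borderline cases, which is exactly the anti-sorites move.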

Though you can imagine attempts to capture more of this ambiguity in code – overlapping categories of classification, and so on – there would ultimately be some series of perhaps very complicated disambiguating rules for formal symbolic processing to work. Insofar as something like deep learning doesn’t fit that, because it holds a long vector of fractional weights against unlabelled categories, it’s not symbolic processing, even though it may be implemented on top of a programming language.

Team Slime

I don’t think a programmer should take too negative a view of ontological slime. Part of this is practical: it’s basically where we live. Learning to appreciate the morning dew atop a causal thicket, or the waves of rippling ambiguity across a pond of semantic sludge, is surely a useful mental health practice, if nothing else.

Part of the power of Wimsatt’s slime term, to me, is the sense of ubiquity it gives. Especially in software, and its everyday entanglement with human societies and institutions, general rules are the exception. Once you find them, they are one of the easy bits. Software is made of both planes of regularity and vast quantities of ontological slime. I would even say ontological slime is one of Harrison Ainsworth’s computational materials, though laying that out requires a separate post.

Wimsatt’s slime just refers to a region of dense, highly local, causally entangled rules. Code can be like that, even while remaining a symbolic processor. Spaghetti code is slimy, and a causal thicket. Software also can be ontological slime because parts of the world are like slime. Beyond a certain point, a particular software system might just need to suck that up and model a myriad of local rules. As TLI says:

The way forward may be to see slime itself as already code-bearing, rather as one imagines fragments of RNA floating and combining in a primordial soup. Suppose we think of programming as refining slime, making code out of its codes, sifting and synthesizing. Like making bread from sticky dough, or throwing a pot out of wet clay.

And indeed, traditionally female-gendered perspectives might be a better way to understand that. Code can often use mending, stitching, baking, rinsing, plucking, or tidying up. (And perhaps you have to underline your masculinity when explaining the usefulness of this: Uncle Bob Martin and the Boy Scout Rule. Like the performative super-blokiness of TV chefs.) We could assemble a team: as well as Liskov, we could add the cyberfeminist merchants of slime from VNS Matrix, and the great oceanic war machinist herself:

“It’s just like planning a dinner,” explains Dr. Grace Hopper, now a staff scientist in system programming for Univac. (She helped develop the first electronic digital computer, the Eniac, in 1946.) “You have to plan ahead and schedule everything so it’s ready when you need it. Programming requires patience and the ability to handle detail. Women are ‘naturals’ at computer programming.”

Hopper invented the first compiler: an ontology-kneading machine. By providing machine checkable names that correspond to words in natural language, it constructs attachment points for theory construals, stabilizing them, and making it easier for theories to be rebuilt and shared by others working on the same system. Machine code – dense, and full of hidden structure – is a rather slimy artifact itself. Engineering an ontological layer above it – the programming language – is, like the anti-sorites, a slime refinement manoeuvre.

To end on that note seems too neat, though, too much of an Abstraction Whig History. To really find the full programmer toolbox, we need to learn not just reification, decoupling, and anti-sorites, but when and how to blend, complicate and slimify as well.

Heaps of Slime

The sorites paradox is a fancy name for a stupid-sounding problem. It’s a problem of meaning, of the kind software developers have to deal with all the time, and also of the kind software generates all the time. It’s a pervasive, emergent property of formal and informal languages.

You have a heap of sand. One grain of sand is not a heap. You take away one grain of sand. One grain of sand makes little difference – so you still have a heap of sand.

You have a grain of sand. You add another grain. Two grains of sand are surely not a heap. You add another. Three grains of sand are not a heap.

If you add only a grain or take away only a grain of sand, since one grain of sand can hardly make a difference, how do you tell when you have a heap?

That’s the paradox. The Stanford Encyclopedia of Philosophy has a more comprehensive historical overview.


Slime Baking

To make software, you build a machine out of executable formal logic. Let’s call that code a model, including its libraries and compiler, but excluding the software machinic layers below that.

The model has different elements which we represent in programming language structures, usually with names corresponding to our understanding of the domain of the problem. These correspond to phenomena in two ways: parsing, and delegation to an analogue instrument. Parsing is the process of structuring information using formal rules. An analogue instrument from this perspective is a thermostat, a camera, a human user, a rabbit user, or possibly some statistical or computational processes with emergent effects, like Monte Carlo simulations or machine learning autoencoders.

You can imagine any particular software system as a free-floating machine, just taking in inputs and providing outputs over time. Think of a program where all names of classes, functions, variables, button labels, etc. are replaced with arbitrary identifiers like a1, a2, and so on (which does have some correspondence to the processing happening inside a compiler, or during zip compression). We tether this symbolic system to the world by replacing these arbitrary names with ones that have representational meaning in human language, so that users and programmers can navigate the use and internals of the system, and make new versions of it.
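A minimal sketch of that free-floating machine (the interest-calculation domain is invented for illustration): both functions compute exactly the same thing, but only the second is tethered to the world by its names.

```python
def a1(a2, a3):                       # the untethered machine
    return a2 * a3

def monthly_interest(balance, rate):  # the same machine, labelled
    return balance * rate

print(a1(1000, 0.01) == monthly_interest(1000, 0.01))  # True
```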

To make it easier to understand, navigate and change this system, we label its interface and internals with names that have meaning in whatever domain we are using it for. Dairy farm systems will have things named after cows and online bookstores will have data structures representing books.

We have then delegated the problem of representation to the user of the system – a human choosing from a dropdown box, on a web form, for example, does the work of identification for the user+software system. But we run slap bang into the problem of vagueness.

Most of the users of our dairy software will not be on quaint farms in the English countryside owning one cow named Britney, so it will be necessary to represent a herd. How many cows do you need to qualify as a herd? Well, in practice, a programmer will pick a useful bucket data structure, like a set or a list, and name that variable “herd”. Nowadays it would probably be a collection in a standard library, like java.util.HashSet. The concept of an empty collection is familiar to programmers; furthermore, there is a specific object to point to called “herd” (the new variable), so a herd is defined to be a data structure with zero or more (whole) cows. Sorites paradox solved <dusts hands>. And unwittingly too.

herd = []
# I refute it thus!

The loose, informal, family resemblance definition of a concept (herd) gets forced into a symbolic structure, like an everyday Python variable, to treat it as an object in a software system. This identification of a concept with a specific software structure is called reification. In the case of a herd (or a heap of sand) the formalism is a fairly uncontroversial net win; after getting over the slightly weird idea of the empty herd, the language may converge around this new, more formal definition, at least in the context of the system. (Or it may not. It is interesting to note the continuing popularity of the shopping cart usability metaphor, a concrete physical container that can be empty, rather than, say, a pile of books that is allowed to have zero books in it.)

The sorites might be thought of as a limiting case of vagueness, due to the deliberate simplicity of the concept involved (one type of thing, one collection of it). There are much messier cases. Keith Braithwaite points out that software is built on a foundation of universal distinguished types, an approach constantly emphasized in training in science and engineering. People without that training tend instead to organize their thinking around representative examples, and categorize by what Wittgenstein called family resemblance, ie, sharing a number of roughly similar properties. Accordingly, Braithwaite suggests foregrounding examples as a shared artifact for discussion between programmers and users, and using legible, executable examples, as in Behaviour Driven Development (BDD).
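Such a legible, executable example might look like this sketch (the bookstore rule and all names are invented for illustration), with the given/when/then reading placed alongside each assertion:

```python
def bulk_discount(book_count: int) -> float:
    """Orders of ten or more books receive a 10% discount."""
    return 0.10 if book_count >= 10 else 0.0

# Given a customer ordering ten books, when the order is priced,
# then the bulk discount applies:
assert bulk_discount(10) == 0.10
# Given an order of nine books, then no discount applies:
assert bulk_discount(9) == 0.0
```

BDD tools dress this pattern in natural-language scenarios, but the underlying artifact is the same: a shared example made executable.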

Example-driven reasoning is also a survival technique in an environment lacking clearly distinguishable universal rules. Training in the physical sciences emphasizes the wonderful discovery of universal physical laws such as those for gravity or electrical charge. Biologists are more familiar with domains where simple universal laws do not have sufficient explanatory power, and additional, much more local rules are the only navigational aids possible. Which is to say, non-scientific exemplary reasoning was likely rational in the context it evolved in, and additionally, there are many times in science and engineering when we cannot solve problems using universal rules. William Wimsatt names these conditions of highly localized rules “ontological slime”, and the complex feedback mechanisms that accompany them “causal thickets”. He points out that even if you think an elegant theory of everything is somehow possible, we have to deal with the world today, where there definitely isn’t one to hand, but ontological slime everywhere.

Readers who have built software for organizations may see where this is going. It’s not that (fairly) universal rules are unknown to organizations, but that rules run the gamut from wide generality right down to ontological slime, with people in organizations usually navigating vagueness by rule-of-thumb and exemplar-based categories which don’t form distinguished types. Additionally, well-organized domains of knowledge often intersect in organizations in idiosyncratic ways. For example, a hospital has chemical, electrical and water systems, many different medical domains, radioactive and laser equipment, legal and regulatory codes, and financial constraints to work within. And so the work of software development proceeds, one day accidentally solving custom sorites paradoxes, the next breaking everything by squeezing a twenty-nine sided Escher tumbleweed peg into a square hole.



For software applications written for a domain, especially, software acts as a model of the world. This relation even holds for a great deal of “utility” software – software that other software is built on. An operating system needs to both use and provide functions dealing with time, for example, which has a lot more domain quirks than you might at first think.

Model is a specific jargon term in philosophy of science, and the use here is deliberate. For most software, the software : world relation is a close relative of the model : world relation in science. The image above, of code running without labels, untethered to the world, is an adaptation of an image from philosopher Chuang Liu: a map, showing only a selected structure, without labels or legend. We use natural language in all its power and ambiguity to attach labels to structures. This relation is organized according to a theory. Michael Weisberg calls the description, in the light of the theory, of how the world maps and doesn’t map to the model, a construal. Unlike scientific theories, the organizing theory for a software application is rarely carefully stated or specifically taught. So individual users and programmers build their own specific theory of the system as they work, and their own construals to go with them.

Software is not just a model: it’s also an instrument through which users act. The world it models is changed by its use, much more directly than for scientific models. Most observably, the world changes to be more like the model in software. Software also changes frequently. New versions chase changes in the world, including those conditioned by earlier versions of the software, in a feedback spiral. (Donald Mackenzie calls this “Barnesian performativity” when discussing economic models, the CCRU called it “hyperstition” when discussing fiction, and Brian Goetz and friends call it “an adventure in iterative specification discovery” when discussing programming.)

It is this feedback spiral which can eliminate ambiguity in terms by identifying them with exactly their use in software, therefore solving the sorites paradox in a stronger sense. It becomes meaningless to talk about an artifact outside its software context. We don’t argue about whether we have a pile of email, as it is obvious it is a container with one limit at Inbox Zero. This is one sense in which software can be said to be “eating the world”: by realigning the way a community sees and describes it.

There are other forms of software / language / world feedback, including ones that destroy meaning, dissolve formal definitions and create ambiguity. It’s often desirable, but perhaps not always, to collapse definitions into precise model-instrumented formality. Reifying an ambiguous concept by collapsing a sorites paradox into a concrete machine component is simply one process to be aware of when building software; an island of sediment in a river of slime.


Braithwaite – Things: how we think of them, what that means for building systems
Goetz et al – Java Concurrency In Practice
Hyde and Raffman – Sorites Paradox
Liu – Fictionalism, Realism, and Empiricism on Scientific Models
Mackenzie – An Engine, Not A Camera: How Financial Models Shape Markets
Visee – Falsehoods Programmers Believe About Time
Weisberg – Simulation and Similarity
Wimsatt – Re-Engineering Philosophy For Limited Beings

Bardo Birdsong

George Saunders has written a great sentimental inhumanist novel. The book comes at you time-sorted and many-voiced, like the chat room history of channel #civilwargraveyard. In Lincoln in the Bardo, messages come in slices, and names come uncapitalized, like a child’s signature, or a Twitter handle. Even the blocks of interspersed historical (or pretend-historical) text that ground the story have the feel of a link followed, or a long block quote shared as a photo, as those on bookish corners of the platform might recognize.

Amy Ireland has the best description, from 2016, of the sliced up, liminal design affordances of the birdsite, and so this novel:

Twitter is excellent. The botlife runs wild and free, swerving into sheer paranoia-inducing bizarreness at times (Weird Sun Twitter) and there are writers doing really innovative work that engages directly with the unique formal possibilities of the medium (Uel Aramchek’s ‘This Could Be Your Past’ is one of my favourite recent examples). It’s the Arcadia of human/bot collaboration.


Only here we have a scroll updated to capitalise on the possibilities of hypertextuality: it is effectively nonlinear yet accommodates series of interlinked tweets, its citation system harbours abyssal potential for embedded referencing, its search function and the public nature of its contents make for a vast and bizarre dataset […], and it forces the honing of expression to a compact 140 characters per unit of information. […]

During its first exciting moments, Twitter appears as an open horizon for the accumulation of all sorts of gratifying information, […] Nevertheless, the illusion of accumulation inevitably breaks down and it does so in perfect correspondence with the intensity of one’s Twitter habit. Accumulation cycles pathologically into dispersion, and before you recognise what is occurring, the mesmeric infinity of the digital scroll has entirely voided your capacity to focus or reflect. There is nowhere to go but further into the abyss.

If one could allot a genre to the platform as a whole, Twitter would be horror. The interface manifests visually and cognitively as a series of incisions. What begins as a benign mode of textual organisation quickly becomes applicable to human concentration. Its twentieth century prototype can perhaps be found in the mechanical writing/torture machine of Kafka’s In the Penal Colony. Both oversee the virulent machining of the human through text, and both tend towards a similar outcome in which the relentless numerical insistence of machinic agency ultimately succeeds in eradicating the latter.

— Poetry is Cosmic War, AJ Carruthers interview with Amy Ireland

Klee - Die Zwitscher Maschine (Twittering Machine)

The bardo is an intermediate state between death and rebirth in Tibetan Buddhism, a purgatorial place where we are separated from ties to mortal lives. So the Bardo Thodol, the Tibetan Book of the Dead, is more literally translated as Liberation Through Hearing In The Intermediate State. Saunders populates the bardo with ghosts, imprisoned within the frame of a Washington graveyard, lost in scripts of their former lives, niggling at their traumas without accepting the central fact that they are dead.

The story follows the ghost of the boy Willie Lincoln, and the imagined aftermath of his sad death of typhoid at eleven years old. Eleven years old, that transitional age; “A sunny child, dear & direct, abundantly open to the charms of the world.” The talking, however, is largely done by more experienced graveyard spirits. There are quite a number – slave women and plantation owners, soldiers and farmwives – but with three men foregrounded: Hans Vollman, Roger Bevins III, and the Reverend Everly Thomas. Now with ghosts, Buddhism, death, presidents, Christianity, the US civil war, and what not, there’s a vast swathe of cultural allusions you could be drawing from. But I found myself most reminded of Journey To The West 西游记.

Delving into spoilery detail, three imprisoned spirits become disciples to a younger mortal, after a bit of ear-boxing encouragement at the start. Following his teachings and example, they protect him on his long journey, saving him from many demons intent on eating his flesh. Though they possess great magical power, when they get really stuck they need to call on Guanyin 观音, the bodhisattva goddess of mercy, to tip the scales a bit in their favour, and in the end they are released to positions of worth and enlightenment. In this mapping, Willie Lincoln is the monk Xuanzang 玄奘 (Tripitaka), another real historical figure. The three disciples represent different virtues and sins. The Reverend Everly Thomas is the devout and overserious Sandy 沙悟净. Roger Bevins III is Pigsy Eight-precepts 猪八戒, consumed by earthly gluttony and lusts, immersed in senses, always growing new eyes, ears and noses. Hans Vollman is the Monkey King 孙悟空, with a more than usually explicitly phallic giant red staff. ((You can even link the names – Vollman – Full Man – 悟空 – 无空 / Without Space, though I’m not sure if anyone really puns in three languages outside Hong Kong.))

Which makes Abraham Lincoln Guanyin. The One Who Perceives The Sounds of The World. Lincoln, in this mythic shape, is too large to fit onstage for long. We see his shaking grief through the eyes of the spirits, and then he leaves. He is the only character who re-enters and re-exits the graveyard. The Goddess of Mercy. The Great Emancipator.

Of course Lincoln was not just the rail splitter and the breaker of slavechains. He’s also the Doctor Frankenstein of the American body politic, stitching the dismembered states together for reanimation. Both George Saunders and Amy Ireland talk of writing as sampling and reassembling snippets from overwhelming torrents of data. Saunders describes it as curation: “I’d be in my room for six or seven hours, cutting up bits of paper with quotes and arranging them on the floor”, he tells Zadie Smith. Ireland notes that “the diminishment of human authorship plunges the human reader into a problematics of scale. … In response, less linear and sedentary methods of reading start to take precedence – techniques akin to scanning, scrolling, and – for the unashamedly hyperstimulated – spritzing.” In assembling his novel, Saunders does this for us across the corpus of civil war history, Lincoln biography, Sino-Tibetan Buddhism and his own imagination. Yet it still shows the zigzag path across that vast field more honestly and artfully than most novels. The omniscient narrator is replaced with the hyperstimulated archaeologist of the past-saturated present, asynchronously replayed by the reader at a rate just slow enough to allow understanding.

Lincoln’s mutated industrial union doesn’t fit in the novel’s timeline. The reader and the characters are severed from it by a bullet and the matterlightblooming phenomenon of a bound book’s last page. The sensory systems of the brain cut down, sample, pre-process, and outright alter everything we see and hear. Our machines and our spirits do the same. There’s too much data for human consciousness to comprehend. Wasn’t there always?