Stasis Request Form

MegaCorp Industries, Technology Process Technology Department (TPT)

Name of system ______________________
MegaCorp Technology System Identifier (MCTSID) ______________________
Stasis owner name and employee ID ______________________
Dates requested (Note: stasis periods longer than one calendar week require separate forms for each week) ______________________
System criticality rating ______________________
User population ______________________
Justification for stasis exemption (e.g. natural catastrophe, medical emergency impacting > 60% of team, multi-site build chain outage)
Supporting evidence (please link or attach)
Mitigations for risks accumulated during freeze period
Emergency roll forward procedure
Size of development-support teams for system ______________________
Plan for remediation and resumption of normal releases (Please link or attach if space insufficient)
Line of business management approval
Diamond band manager approval
Other notes or comments

Stasis control board approval meetings are scheduled twice weekly. Stasis requests not represented in the meeting will be automatically rejected. A separate form is required for each request.

This was written as

  • A sarcastic reaction to experiences with established change procedures in large bureaucratic organisations.
  • A tool for framing the costs and risks of not releasing software frequently, instead accumulating large changesets so every release is inherently major and complex. Many change release procedures privilege the cost of change over the cost of stasis (for instance, missed security patches). The actual work of filling out and reviewing a change request form is Risk Management Theatre: the point is not the content but the time tax the form imposes on release frequency, in the belief that frequent releases are necessarily rushed and unsafe.
  • A document from a dromological future where continuous delivery techniques such as those used by Google and other firms are widespread through the industry and therefore institutionalized. Consider Facebook changing the way privacy settings work for the seventh time in a year, or a university administration sending one of these forms to Donald Knuth for TeX.

The Bureaucracy of Automatons

An introduction to the notes on Confucian Software.

Software and the Sage

Among the many dissimilarities between software and gentlemen of the classical Chinese Spring and Autumn Period, two in particular stand out. Firstly, one existed in a pre-scientific feudal society on an agricultural technological and economic base, while the other presupposes the scientific method and a modern (or post-modern) industrial base. Secondly, the concept of virtue or potency (德) is central to The Analects, but software artifacts are, in our day and age, non-sentient. Morality requires some degree of self-awareness – of consciousness – and so software does not itself practice virtue any more than a spoon or a lawnmower does.

The immediate relevance of the Analects for a developer lies in the two other grand concerns of Confucius, which are existential fundaments of software. These are names (名) and the rites (礼).

Early Thoughts On Large Scale Scrum

Some people lose their sense of perspective. I’ve lost my sense of scale. – Will Self, Scale

We learn, in computing, that choice of data structure determines the algorithms one can run, their complexity, and the corresponding performance characteristics of an application. To talk of an algorithm without a data structure is to risk meaninglessness, for the two exist in a system, like the hub and spokes of a bicycle wheel. Likewise, Scrum and Scrum-like techniques are software processes, but they are just as fundamentally about changing team structure as about the various meetings and practices then enabled. Waterfall processes are pipelines of handoff between pinned threads organized by a task allocator-manager. Scrum is a common thread pool pulling work from a priority queue, then creating changes to a shared blackboard of state, which a product owner thread reacts to by adding or editing tasks on the queue. It’s model-view-controller: the code is the model, the production system is the view, and the product owner is the controller.
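
The analogy can be made concrete. Here is a minimal, illustrative sketch in Python – all names invented, no Scrum tooling implied – of a single prioritised backlog, a pool of developer “threads” pulling from it, and a product owner who feeds the queue:

    # Illustrative sketch only: one prioritised backlog, a pool of developer
    # "threads" pulling from it, and a shared blackboard of state they modify.
    # All names here are invented; this is the analogy, not any Scrum tool.
    import queue
    import threading

    backlog = queue.PriorityQueue()   # the product backlog: (priority, item)
    shared_state = []                 # the blackboard: the evolving product

    def developer(name):
        while True:
            priority, item = backlog.get()
            if item is None:          # sentinel: nothing left this sprint
                backlog.task_done()
                return
            shared_state.append(f"{name} delivered {item}")
            backlog.task_done()

    def product_owner():
        # The controller: reacts to the product by (re)prioritising the queue.
        for priority, item in [(1, "checkout flow"), (2, "search"), (3, "reporting")]:
            backlog.put((priority, item))

    product_owner()
    workers = [threading.Thread(target=developer, args=(f"dev{i}",)) for i in range(3)]
    for w in workers:
        w.start()
    backlog.join()                    # wait for the backlog to drain
    for i, w in enumerate(workers):
        backlog.put((100 + i, None))  # distinct priorities so sentinels never compare
    for w in workers:
        w.join()
    print(shared_state)

The point is the shape rather than the code: one queue, one owner, interchangeable workers.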

When organizations find success with agile, or Scrum, or something they call by one of those names that improves their delivery, they might reasonably want to expand on that success, simply because many organizations are larger than a team of seven people. Hence the demand for agile processes at scale. The demand, and the criticism that agile techniques ignore large teams, have been there for pretty much the whole history of the movement, but fifteen years in, we now have some road-tested processes that try to address it directly. One of them is Large Scale Scrum (LeSS), and I had the good fortune this year to be introduced to it by one of its creators, Bas Vodde. That three-day course and a bit of reading is the main source for these notes. There’s a lot of good material on the LeSS site.

Vodde and Larman say that Large Scale Scrum is Scrum, with some justification. There are various ways to combine small teams in delivering solutions for a client: you can federate, you can have cross-cutting program managers and architects of some kind, you can create product governance committees, and so on. LeSS chooses to scale beyond one team working on a product by keeping the single product work queue pattern and increasing the thread pool. The worker thread that pulls issues is now a team instead of a developer. That’s it. There is still only one product owner. Everyone in the organization works in the same sprint.

As in Scrum, everything else about the process is a support activity to facilitate humans maintaining that queue data structure. For scrum masters this is usually a combination of presiding over Scrum rituals and malediction of alternative work channels born of earlier software traditions – destruction of Old Ways, like long specification documents or individual users pushing their own priorities on particular developers. Other functions such as managers and sales exist not to direct development tasks but to secure resources from the larger organizational and commercial milieu.

This focus on the product at the heart of the software organization gives a rather existential question great importance: what exactly is the product? Why, exactly, is your organization here? What is the meaning of work life? Vodde and Larman don’t put the questions quite that way, but they do make product definition a key part of LeSS adoption, and strongly favour more expansive definitions based on actual user experience. This may seem obvious on the surface, but it is the sort of obvious that is easily lost in the complex interactions of a large established organization. If your business provides sports news to its users, your real product is probably not a statistics-vending web service with three “internal clients”. Or at least if you define your team’s product that way, you won’t be as effective at helping your users do something; it will at best be a local optimization. Part of the training was trying to define this product boundary for real systems we worked on, and for real organizations. Often the best product definition cuts across senior managerial hierarchies, and this may limit what the practical product definition can be in the near term. Crossing an organizational boundary usually requires coordination overhead that splits the thread pool of development teams, or introduces multiple product owners, both of which muddle the pattern. LeSS then uses the definition of done to ratchet the team closer to a product delivery mindset in every sprint. You are done only when the user experiences a change in the product.

LeSS mandates feature teams, which are fashionable at the moment, for mostly good reasons. This is again to support the work queue data structure and reinforce a whole-product view across the development teams. The argument is that programmers can learn, and that the cost of context switches is worth it for product knowledge, reduced handoff queuing and integration risk, and the flexibility of always having developers working on the most valuable features. Bas in particular emphasized learning as being at the heart of software development. “When you switch components, velocity will go down, and that’s a good thing!” he said at multiple points during the three days. “It means you’re learning.”

Large Scale Scrum owes much of its success to Bas Vodde’s worldview of constructive organizational nihilism. “You have to give up the idea that things happen for a reason,” he says. “Or because someone decides them.” When I described it this way on morning three of the course Bas laughed, and tilted his head; then someone else asked what nihilism was. Study of nineteenth century philosophy isn’t a focus of a typical software engineering education, though Bas as it happens has squeezed some into his schedule around coding, becoming fluent in Chinese, raising a young family and keeping a day job of ripping internal corporate bureaucracies to pieces in the cause of building better software. He hadn’t seen Scrum and nihilism as linked.

Nihilism is the position that things have no inherent meaning, or, for Bas Vodde, that organizations have no inherent meaning. I call it constructive as I think he likes building useful things, and that’s what the most powerful parts of LeSS are about too. LeSS draws on Lean Engineering in its move away from an idea of best practice to one of continuous learning. In Taylorist scientific management, specialist managers discover a best practice and then encode it as a procedure to be followed by the organization. Nietzsche famously said that God was dead, that we had killed him, and set up Truth in his place, as another lie. Taylorism has the same tendency: it sets up procedures, and a hierarchical organizational tree, as an organizational singular Truth which is received by workers and mediated by a priesthood of managers. LeSS is pretty explicit about wanting to tear that down.

Hello? Is it Truth You're Looking For?

Bas himself, responding to an earlier version of this post, says his position is a little more moderate: that organizations usually have a purpose, but the things they actually do often don’t achieve it, or don’t have any purpose at all, if you look for one.

The LeSS training explicitly frames itself against Ford and Taylor – sections contrasting Taylor with the Toyota way were part of the first day – but the argument is strictly pragmatic. Taylor’s techniques suit a world where large numbers of lightly schooled agricultural workers are coming to work in a factory, in industrial capitalism. They are not well suited to knowledge workers building software under cognitive capitalism. You don’t need to discard God or Truth in general, both of which engineers can be fond of, to recognize that they aren’t a good model for software development, or other processes of design invention. An idea of a negotiated social truth, the artifacts and understanding of which mutate over time and depend on context, is rather more recognizable. That’s why the agile manifesto says it favours what it does; it is all about rejecting attempts to fixate on a flakily specified organizational high modernist Truth in favour of grappling with messy user reality. (Since the training I found Paulo Virno and the operaismo thinkers already made this link between nihilism and just-in-time production; e.g. quote and interview.)

Vodde and Larman tear down the idea of a single managerial Truth, then, but they also erect a replacement: the product.

Scrum is a strict process, but productive even in partial use. Ron Jeffries notes this himself when he describes many organizations as not really doing Scrum, and this is echoed by plenty of agile coaches, though by their role they tend to see this as a gap rather than positive variation. Let’s call this use of only some tools from the Scrum toolbox following a Scrum-like process. Scrum isn’t designed to work like this: it’s not a rational unified monstrosity where you are theoretically supposed to start by editing 90% of it out. Teams may also have added extra guff of their own. Many teams running Scrum-likes may be doing it based on misconceptions, they may be Scrum-but, and they may well benefit from coaching. My guess, from the teams I’ve seen, is that most get some benefit from a short defined iteration (i.e. a sprint, though an over in Test match cricket might have been a better metaphor). They also benefit from a retrospective and a short daily standup meeting, even if the rest is a bit of a muddle. Crucially, Scrum-like teams also accommodate experiments and localization while keeping a common jargon.

Large Scale Scrum seems to have less room for this variation, because LeSS is Scrum, but scaled up, so each ritual and artifact bears more process weight. It has to do this because much variation will come from organizational inertia and dysfunction that will destroy the queue-worker thread structure at the heart of Scrum.

Scrum, with its focus on a single product owner and a single team, could conceal its essentially political nature pretty well. It was very directly about work that individuals in a small team do, so it could be introduced organically, at a team level. LeSS is about rebuilding your organization around a product, so it’s too big to hide. It is explicitly revolutionary and constitutional in its impact on office politics. For implementations smaller than nine teams, it even prescribes transitioning in a single day, starting a new organizational epoch with a dramatic software process Year Zero. This new Dictatorship of the Product has a politics that is collaborative and decentralizing, but it shouldn’t be mistaken for something democratic. The rule of the product turns on the fulcrum of the product owner’s vision. Feature team programmer übermensch escape the shackles of component specialization, spreading their influence across technologies and languages. And of course the waterfall hierarchies of Taylorism were even less democratic and collaborative. LeSS might instead, when working well, have the character of an austere republicanism. There is a separation of powers, there is an executive product owner, and the duty to the Product takes the place of duty to the State (or The People).

Scrum, and Agile before it, build in the idea that the business and technology organizations are separate, and that one serves the other. In many industries that’s simply a recognition of organizational reality. The Scrum community aren’t deaf to it, and they tend to translate it into advice on hunting down, catching and breaking in a product owner. It’s pretty decent advice on the whole, if that’s what you want. In something like a startup, that may not make sense. Perhaps the lead developer has the best idea of the product, so role separation is meaningless. If, like Giles Bowkett, you want Scrum to die in a fire, I’d guess you’d want Large Scale Scrum to die in a fire too; maybe a larger one.

The most passionate argument in the three days of the course – and there were a few good ones – wasn’t about turning org charts upside down, or sidelining managers, or me calling Bas a nihilist, or what a product really is. It was about Jira. Bas recommends using a simple tool for the feature backlog, preferably a spreadsheet, and almost certainly not Jira. The argument, which has something to it, is that Jira is lightweight enough to be used, but heavyweight enough to destroy team ownership and collaborative planning. I have seen this too: it gets worse the more you type in meetings and the more workflow you add.

Say what you will about the tenets of National Socialism, Dude, at least it has traceability.

The Jira users were ready to circle the wagons and have it prised from their cold dead hands. I had forgotten how important the immediate tactility of sticky notes on boards is to a certain type of agilist. “Our ‘process’ is mainly github”, Bowkett says. The toolset for remote collaboration is much richer now, and the assumed developer toolset is a much stronger foundation. (Are you old enough to have had a serious argument about whether to use source control in your career? How about unit tests? With a senior developer?) Both Vodde and Bowkett are right to say tools shape processes. We default to their defaults. It made me realize we had been conflating the feature backlog and the sprint tasks for years, because that’s what’s easy in Jira. We also use Jira to coordinate across multiple cities, sometimes within teams – but I already knew that wasn’t Scrum.

One of the strengths of Scrum is its clarity of definition: you can talk clearly about whether or not you are following it, and it doesn’t pretend to fit existing teams without change. The same is true of LeSS. Vodde and Larman’s insight that making better software in large organizations requires refactoring the organization is spot on. LeSS is an organizational pattern for the product development interface. It’s not the only such pattern. MVC isn’t the only user interface pattern either, but its variants dominate the design space for a reason. If you don’t have the problems of product definition, flexibility and responsiveness in large organizations that LeSS was designed to address, and you already ship useful features at high quality every week, don’t use it. For myself, and I suspect many software development organizations, those problems are of daily relevance. Software politics is the art of the possible. A software process coup d’état isn’t possible over here tonight; I’ll probably mangle and cajole a while instead.

Where In The Hyperreal Is Carmen Sandiego?

Capital In The Twenty-First Century – Thomas Piketty
Kentucky Route Zero – Jake Elliott, Tamas Kemenczy, Ben Babbitt
Manakamana – Stephanie Spray, Pacho Velez
Alien Phenomenology, or What It’s Like To Be A Thing – Ian Bogost

Unlike redwoods and lichen and salamanders, computers don’t carry the baggage of vivacity. – Ian Bogost, Alien Phenomenology

There is a dog, wearing a straw hat, riding through the night in a delivery truck.

It’s Hamlet’s dog.

They are driving through Kentucky with a TV repairwoman, trying to deliver a package to a lost mathematician.

This Hamlet doesn't talk so much. He likes to talk to his dog, sometimes. He has a lot on his mind. He has his ghosts. He has his debts to pay.

You can control Hamlet, but only in the following way: you can control how he moves and where he turns, but not his destination. You can drive right or left, but he will end up on a boat to England. You can't change the future but you can change the past. When you are playing Hamlet, you can't change the story, but you can say what it means.

There is something astounding in Kentucky Route Zero, the magic realist adventure game from Cardboard Computer. It is not perhaps a masterpiece. It still feels just a little rough. But it is very good, and sometimes like a secret door opening.

There is a very specific mechanic behind this effect, combined with the redneck leprechaun setting. In the usual ludic meter of an adventure game you navigate a graph. At each node you are given a menu of choices. And so it is here, yet your choices include how to explain yourself to other people and the audience. You can drive around exploring the map, but new places only exist when the story needs them. This is your direction, your interpretation of the role. It impacts your backstory, fleshing out the shape of your tragedy. You get different choices for how to explain yourself depending on what you answered before. But the seams are hidden behind the form of a puzzle. The pinnacle of this technique, three acts in, is the song It’s Too Late To Love You Now, where you choose the lyrics sung to a suddenly open star-filled sky. It is a sublime moment of shifted perspective; I don’t think I have ever felt it so intensely in a game before.

“Master, does Emacs have the Buddha nature?” the novice asked.

The Master Programmer thought for several minutes before replying: “I don’t see why not. It’s bloody well got everything else.” – archaic computing koan

There’s an impromptu jam session in the middle of the movie Manakamana, high above the earth, suspended in a Nepalese cable car. By this time, the setting is not such a surprise, because every scene is set on a Nepalese cable car. A scene is simply the ten minutes it takes to ride the car up or down the mountain: two seats on the cable car, filmed by a single fixed camera on the opposite seat. These scenes have the quality of both a scientific sample and a stanza: it is a cinematic documentary-poem. The filmmakers are anthropologists.

It’s hard work watching real people this way, especially today; the urge to check smartphones and slip into continuous partial attention is strong, and most of the small and presumably sympathetic audience I saw it with succumbed at one time or another. What I found, when trying to stick with it, is a small sense of what it feels like to be that person, at that time, in that place. To be a grandmother with ice cream dripping down your fingers, or a musician having an impromptu airborne jam for the camera; but at the same time to be aware of not being that person, of not having your second ice cream at age seventy, of not having a clue how to play the sarangi, of not being a goat.

In the famous Thomas Nagel essay What Is It Like To Be A Bat? he teases out the unknowability of another creature’s internal experience, by using the alienness of a bat’s sonar as his key example. We are doomed to incomplete knowledge because we must anthropomorphise things, simply by being a human, thinking. Ian Bogost, in Alien Phenomenology, acknowledges this, but argues it’s not a bug but a feature: you can sidle up to alien experience by analogy.

In a literal sense, the only way to perform alien phenomenology is by analogy: the bat, for example, operates like a submarine. – Ian Bogost, ibid

For Bogost and the other proponents of Object Oriented Ontology, this analogical understanding is imperfect but valuable. It can even be generalized: that this is how every thing relates to every other thing, sapient, material, or conceptual. Bats, submarines, the planet Venus, forced labour, the theory of phlogiston, roti prata, breathing. This knowledge by layers of imperfect storytelling has been put under the banner of Speculative Realism, but professional philosophers, well trained in defense against irony, cannot agree on whether it exists.

When, about halfway through Manakamana, the light faded in from the dark of the cable car station, as it had half a dozen times before, but now revealed a car full of tied-down goats, my first reaction was laughter. It was funny due to raw absurdity, and then it was funny because the filmmakers really were asking you to watch a goat’s arse for the next ten minutes, barefacedly indifferent to your comfort as a viewer. Yet there is method in it; you see the nervous bleating give way to more relaxed sightseeing. You wonder if they are going up the mountain as potential milk or potential meat. In one sense the goats are a mental palate cleanser for the humans in the later scenes, but in another sense I could identify with them. The fixed viewpoint and the familiarity of the repeated cable car setting lulled me into unconscious sympathy. It is a sympathy that can be found in computer games. Play Frogger intensely and you start to see the world as a digital frog dodging traffic. Play Tetris intensely and tile floors become suddenly filled with intuitive meaning.

What is it like to be a bat? I don’t know. What is it like to be a goat riding the Manakamana cable car? I feel I know, but I can’t truthfully say.

((If the car had been empty, would it have cleared my head in the same way? Would I have identified with the cable car itself? Is the player of a train simulator playing the driver or the train?))

It wouldn’t be surprising to see a cable car in the forthcoming Act IV of Kentucky Route Zero. It shows its nostalgia for the vacuum tubes, filing cabinets and combustion engines of last century in every scene. It is about giant machines that smash your leg when they fail. It’s about old trucks, and whiskey, mechanical men, and people entangled in debt to drug companies. It’s built on a Shakespearean frame – players, dog-soliloquies, mini-game, boat trip and all – but that frame is well hidden. Technology, repair and debt are in the foreground. Above all it is about decay. It is a tragedy of depreciation.

Piketty is well aware that the model he proposes would only work if enforced globally, beyond the confines of nation-states (otherwise capital would flee to the states with lower taxes); such a global measure requires an already existing global power with the strength and authority to enforce it. However, such a global power is unimaginable within the confines of today’s global capitalism and the political mechanisms it implies. – Slavoj Zizek

Thomas Piketty has run a decade-long research program on wealth and capital, a ruthlessly empirical effort which uncovered masses of new data. This feeds into a model for the behaviour of large pools of wealth over time: that in the absence of massive shocks like world wars, private wealth accumulates at a rate greater than background economic growth (r>g), tending to increasing inequality without limit. Then he wrote an introduction for the technical layman, dense enough to be serious about the topic, light enough to be illustrated with cultural examples.
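
A rough back-of-the-envelope illustration of why r > g matters, with assumed figures of the general magnitude the book discusses rather than anything from its tables:

    # Illustrative arithmetic only, not Piketty's model: compound a stock of
    # wealth at r and national income at g, and watch the ratio drift upward.
    r, g = 0.05, 0.015              # assumed return on capital and growth rate
    wealth, income = 300.0, 100.0   # start at a wealth-to-income ratio of 3

    for years in (10, 50, 100):
        w = wealth * (1 + r) ** years
        y = income * (1 + g) ** years
        print(f"after {years} years the wealth-to-income ratio is {w / y:.1f}")
    # roughly 4.2 after 10 years, 16.3 after 50, and 89 after 100

The real dynamics involve savings rates, labour income, taxes and shocks; the sketch only shows the compounding gap that the inequality r > g opens up over time.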

Like many histories, Capital In The Twenty-First Century ends up spending more time on the preceding era than its ostensible topic. To make his projections and suggestions on the 21st century he needs to explore the 19th and 20th. This is where the literary examples come in, particularly those nineteenth century ones where the definition of rich is very numerically precise. It’s possible that Piketty intended those much discussed literary diversions as nothing more than a hook to make the book more accessible. After all, he includes examples from not just Balzac and Austen but also Disney’s The Aristocats. Yet they serve two deeper purposes. Firstly they are qualitative data supplementing his systemic data from national accounts, building his historical case from a second point of view. Secondly, they are fragments of the capitalist imaginary, answering questions of partially alien experience. What is it like to live in Belle Époque capitalism? What is it like to live in capitalism today? Eventually, we come to questions unanswerable directly: What is it like to be capital? What is it like to be a capitalism?

Piketty advocates tilting policy back to 20th century welfare capitalism, by means of a small wealth tax on the rich, arguing that extreme inequality creates a power disparity that undermines democracy. Such an effort would be an extension of the state project of legibility and control that James C Scott has shown extends back to the formation of states themselves. One of Scott’s books even has the Nagelian title Seeing Like A State, though Piketty’s readership may be dismayed by its subtitle, How Certain Schemes To Improve The Human Condition Have Failed.

Piketty is a social democrat as well as a bourgeois capitalist economist, and at this point in history there is really no contradiction in that. We are not choosing between capitalism and Something Else; capitalism is the situation of a society with industrial capital and market pricing. Ultimately he’s saying that key aspects of 20th century capitalism were pretty good, and certainly better than what we’ll get in the 21st without political action.

This utopian conservatism is closer to Nicholas Stern, Francis Fukuyama or Paul Krugman than a revolutionary like Marx, though the shrill response to Piketty’s proposed wealth tax shows it hit a nerve. Indeed Piketty’s polite impatience towards Marx’s verbosity and looseness with data is another amusing Easter egg in the book, though it doesn’t stop him analyzing by class and superstructure elsewhere.

Piketty seems to have spawned two serious technical arguments among economists: one existential, revisiting the Cambridge Capital Controversy, and one science-fictional, on the elasticity of capital-labour substitution. The existential question, on whether the rate of profit is a price or a systemic effect in time, is the sleepy feeling of drifting off while two people riding the Manakamana cable car describe how this ten-minute ride used to be a two-day hike through the Nepalese foothills. The science fiction is Piketty’s measurements saying that in the twenty-first century, robots are a somewhat better investment than employees; the sinister mechanical men in the caves of Kentucky Route Zero come to clean the black grime again, scaring another batch of terrified researchers away.

Symbiotic Design

Do we build code, or grow it? I was fortunate enough to attend a Michael Feathers workshop called Symbiotic Design earlier in the year, organized by the good people at Agile Singapore, in which he is ploughing biological and psychological sources for ideas on how to manage codebases. There are also some links to Naur’s and Simondon’s ideas on technical objects and programming that weren’t in the workshop but are meanderings of mine.

Ernst Haeckel – Trachomedusae, 1904 (wiki commons)

Feathers literally wrote the book on legacy code, and he’s worked extensively on techniques for improving the design of code at the line-of-code level. Other days in the week focused on those How techniques; this session was about why codebases change the way they do (i.e. often by decaying), and techniques for managing the structures of a large codebase. He was pretty clear these ideas were still a work in progress for him, but they are already pretty rich sources of inspiration.

I found the workshop flowed from two organizing concepts: that a codebase is an organic-like system that needs conscious gardening, and Melvin Conway’s observation that the communication structure of an organization determines the shape of the systems its people design and maintain (Conway’s Law). The codebase and the organization are the symbionts of the workshop title. Some slides from an earlier session give the general flavour.

Feathers has used biological metaphors before, like in the intro to Working Effectively With Legacy Code:

You can start to grow areas of very good high-quality code in legacy code bases, but don’t be surprised if some of the steps you take to make changes involve making some code slightly uglier. This work is like surgery. We have to make incisions, and we have to move through the guts and suspend some aesthetic judgment. Could this patient’s major organs and viscera be better than they are? Yes. So do we just forget about his immediate problem, sew him up again, and tell him to eat right and train for a marathon? We could, but what we really need to do is take the patient as he is, fix what’s wrong, and move him to a healthier state.

The symbiotic design perspective sheds light on arguments like feature teams versus component teams. Feature teams are a new best practice, and for good reasons – they eliminate queuing between teams, work against narrow specialization, and promote a user or whole-product view over a component view. They do this by establishing the codebase as a commons shared by many feature teams. So one great question raised is “how large can the commons of a healthy codebase be?” There is, for example, the well-known economic effect of the tragedy of the commons, and the complex history of the enclosure movement behind it. I found it easy to think of examples from my work where I had seen changes made in an essentially extractive or short-term way that degraded a common codebase. How might this relate to human social dynamics, effects like Dunbar’s number? Presumably it takes a village to raise a codebase.

Feathers didn’t pretend to have precise answers when, as a profession, we are just starting to ask these questions, but he did say he thought the limit could vary wildly based on the context of a particular project. In fact that particularity and skepticism of top-down solutions kept coming up as part of his approach in general, and it definitely appeals to my own anti-high-modernist tendency. I think of it in terms of the developers’ theory of the codebase, because as Naur said, programming is theory building. How large a codebase can you have a deep understanding of? Beyond that point the risk of hacks is high, and people need help to navigate and design in a healthy way.

((You could, perhaps, view Conway’s Law through the lens of Michel Foucault, also writing around 1968: the communication lines of the organization become a historical a priori for the system it produces, so developers promulgate that structure without discussing it explicitly. That discussion deserves space of its own.))

Coming back to feature teams, not because the whole workshop was about them, but because they’re a great example: if you accept an organizational limit on codebase size, feature versus component teams becomes a spectrum, not good versus evil. You might even, Feathers suggests, strategically create a component team to help create an architectural boundary. After all, you are inevitably going to impact systems with your organizational design. You may as well do it consciously.

A discussion point was a recent reaction to all of these dynamics: the microservices approach of radically shrinking the size of system codebases, multiplying their number and decentralizing their governance. If one component needs changes, the cost of understanding it is not large, and you can, according to proponents, just rewrite it. The organizational complement of this is Fred George’s programmer anarchy (video). At first listen, it sounds like a senior manager read the old Politics Oriented Software Development article and then went nuts with an organizational machete. I suspect that where that can work, it probably works pretty well, and where it can’t, you get mediocre programmers rewriting stuff for kicks while the business paying them drowns in a pool of its own blood.

Another architectural strategy discussed was an explicitly evolutionary one: progressively splitting a codebase as it grows. This is a technique Feathers has used in anger, with the obvious biological metaphors being cell meiosis and mitosis, or jellyfish reproduction.

The focus on codebases and the teams who work on them brings me back to Gilbert Simondon’s idea of the “theatre of reciprocal causality”. Simondon notes that technical objects’ design improvements have to be imagined as if from the future. They don’t evolve in a pure Darwinian sense of random mutations being winnowed down by environmental survival. Instead, when they improve, especially when they improve by simplification and improved interaction of their components, they do so by steps towards a potential future design, which, after the steps are executed, they become. This is described in the somewhat mindbending terms of the potential shape of the future design exerting a reverse causal influence on the present: hence the components interact in a “theatre of reciprocal causality”.

This is exactly what programmers do when they refactor legacy code. Maybe you see some copy-pasted boilerplate in four or five places in a class. So you extract it as a method, add some unit tests, and clean up the original callers. You delete some commented-out code. Then you notice that the new method is dealing with a concept you’ve seen somewhere else in the codebase. There’s a shared concept there, maybe an existing class, or maybe a new class that needs to exist, that will tie the different parts of the system together better.

That’s the theatre of reciprocal causality. The future class calling itself into existence.
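
A toy sketch of that move, with invented names and nothing from any codebase discussed in the workshop:

    # Step 1: the copy-pasted formatting boilerplate is extracted into one
    # method, the callers are cleaned up, and a test pins the behaviour down.
    class InvoiceReport:
        def line_total(self, qty, unit_price):
            return self._format_money(qty * unit_price)

        def discount(self, total, rate):
            return self._format_money(total * rate)

        def _format_money(self, amount):
            return f"${round(amount, 2):,.2f}"

    # Step 2: the extracted method turns out to be a concept the rest of the
    # system also needs, so it migrates into a class of its own.
    class Money:
        @staticmethod
        def format(amount):
            return f"${round(amount, 2):,.2f}"

    assert Money.format(1234.5) == "$1,234.50"

The hypothetical Money class was already pulling on InvoiceReport before it existed; that is the reverse causal influence in miniature.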

So, given the symbiosis between organization and codebase, the question is: who and what is in the theatre? Which components and which people? Those are the ones that have the chance to evolve together into improved forms. If it gets too large, it’s a stadium where no one knows what’s going on, one team is filming a reality TV show about teddy bears and another is trying to stage a production of The Monkey King Wreaks Havoc In Heaven. One of the things I’ve seen with making the theatre very small, like some sort of Edinburgh Fringe Festival production with an audience of two in the back of an old Ford Cortina, is that you keep that component understandable, but cut off its chance at technical evolution, improvement and consolidation. I’m not sure how that works with microservices. Perhaps the evolution happens through other channels: feature teams working on both sides of a service API, or on opportunistically shared libraries. Or perhaps teams in developer anarchy rewrite so fast they can discard technical evolution. Breeding is such a drag when drivethru immaculate conception is available at bargain basement prices.