Lehman on Software, Models and Change

The modeled and evolving quality of software comes to the surface when thinking about software maintenance. A classic paper on this is Lehman’s 1980 paper Programs, life cycles, and laws of software evolution, which lays out the territory with great clarity, then confidently strides off in the completely wrong direction.

Model

Lehman introduces a distinction between 

  • S-programs, which implement a formal mathematical (s)pecification, such as the travelling salesman problem;
  • P-programs, which solve some messy (p)roblem arising in the world, such as actually scheduling real salespeople, with their ambiguities and partly-known preferences; and
  • E-programs, those (e)mbedded in and changing the world they directly model, such as in air-traffic control.

For P-programs and E-programs, “the acceptability of a solution is determined by the environment in which it is embedded.” The distinction is in the program’s relationship with its origin story: between top-down and bottom-up; transcendence and immanence.

Lehman goes on to note that P-programs are also in a feedback loop arising from their use in the world. Their execution is observed, even lived, by their users, and this results in demand for change.

This is a cybernetic view, though Lehman doesn’t use the terminology. The paper sketches some more complicated loops, particularly where a specification intermediates between the P-program and the world. It is that intermediation, rather than feedback, that is foregrounded in the difficult and famous statement on code:world relations:

Any program is a model of a model within a theory of a model of an abstraction of some portion of the world or of some universe of discourse.

Lehman drops this on page two, before defining S-, P- or E-programs, and never gets around to defining theory or model, or otherwise directly elaborating; disconnected sagelike pronouncements were an expected feature of software engineering papers of the time. Cook (and a team including Lehman) later linked this to the social process of Kuhn’s paradigm shifts – renaming P-programs to (p)aradigm-programs and E-programs to (e)volving-programs.

Weisberg’s work on the crucial role of models in science could also help. For Weisberg, a theory maps a model to the world through (mostly explicit) construals. This plays a similar role to “abstraction” in Lehman’s definition.

It’s also worth throwing Naur’s “Programming as Theory Building” into the mix, though his paper does not make much distinction between model-as-code and theory-as-code.

Lehman also introduces “laws” of software evolution, which did have some empirical basis, but appear hard to reproduce. They might be compared to more recent work on meaningful descriptive code metrics, or properties of software as a material.


The Rivers and The Lakes That You’re Used To

After accelerating through insight after insight into the fluid and evolving nature of software, Lehman closes off the theory section by casually inventing microservices (in 1980), then taking a screaming left turn at the Process Street T-Junction, crashing through a railing and tumbling over a cliff. For over that cliff flows a process waterfall, and in the structured programming era, there’s nothing more attractive.

Like the rest of the structured programming crowd, he has factory envy: “An assembly line manufacturing process is possible when a system can be partitioned into subsystems that are simply coupled and without invisible links … Unfortunately, present day programming is not like this.” Lehman goes on to emphasize the care and structure needed when writing separate elaborate requirement and technical specifications. You get the idea. The remaining process recommendations I’m just going to skip.

It is easy to be wise after the fact in 2019. Agile practices and literal miniature software assembly lines (continuous build infra) now exist, and have made us much more sensitive to the damage done by scope size and delivery delay in large software systems. Trying to solve complex problems with more upfront planning was a high modernist worldview going out of fashion, but still very much in the intellectual water in 1980: Lehman gave a lecture in 1974 referencing both city planning and the Club of Rome report Limits to Growth. Perhaps it would be fairer to point out that thinkers who advocated short simple changes as a response to complex systems – like Jane Jacobs, or John Boyd and his OODA loop – were always tacking into the intellectual wind.

References

Cook, S., Harrison, R., Lehman, M.M. and Wernick, P. (2006). Evolution in software systems: foundations of the SPE classification scheme. Journal of Software Maintenance and Evolution: Research and Practice, 18(1), 1-35.
Lehman, M.M. (1974). Programs, cities, students – Limits to growth? Inaugural Lecture, May 14, 1974, ICST Inaugural Lecture Series, vol. 2, pp. 147-163; reprinted in Gries, D. (Ed.), Programming Methodology. New York: Springer-Verlag, 1979, pp. 42-69.
Lehman, M.M. (1980). Programs, life cycles, and laws of software evolution. Proceedings of the IEEE, 68(9), 1060-1076.
Naur, P. (1985). Programming as theory building. Microprocessing and Microprogramming, 15(5), 253-261.
Weisberg, M. (2013). Simulation and Similarity: Using Models to Understand the World. Oxford University Press.

The Consensus Reality Based Community


1. There’s a concept from science fiction criticism which has become a favourite of mine; indeed it seems fundamental to this 21st-century glocal postmodernity of ours: the concept of consensus reality.
1.1 It is worth remembering that this consensus often refers to the beliefs of the society in the work under criticism, in which marmalade may be money, spaceships may fly faster than light, and handheld communicators with vid screens may be ubiquitous.

2. The idea of consensus reality neatly captures several insights.
2.1 Reality proper, what Kant called the unsynthesized manifold, is unavoidably mediated by our senses and brain.
2.2 Our model of the world is socially constructed by a group we live in.
2.3 Powerful institutions of mainstream thought – like large newspapers – work within certain parameters of perception.
2.3.1 The first page of search engine results is representative. Search engines are consensus reality engines. Common sense engines, in Bruce Sterling’s words.
2.4 Something in the consensus is inevitably and always wrong.
2.4.1 The consensus contains arguments with known for and against positions.
2.4.1.1 The argument itself can be wrong, irrelevant, a meaningless side effect, not resolvable as either pro or con, etc.
2.5 Broad consensus realities often have enduring correlations with events.
2.6 Consensus is reinforced by breadth.

3. Kuhn’s concept of a scientific paradigm resembles a consensus reality, but is far more systematic.
3.1 Consensus reality includes cultural convention and everyday discussion, including obvious internal logical contradictions.
3.2 Consensus reality is intuitive.
3.3 Consensus reality may be surprising – chance events – but not unanticipated ones.
3.3.1 “Black swans” are demonstrations of consensus reality.
3.3.2 Commuting to work is also demonstrative.

4. A reality based community responds to empirical sense-data.
4.1 Measures.
4.2 Adjusts in response to changes in data.
4.3 Follows technique.
4.3.1 Technique may be systematic. It may have a model.
4.3.1.1 The model may be tested empirically and systematically.
4.3.1.2 One might use a randomised controlled trial, or survey, or historical data source, or blind peer review.
4.4 Reality based communities survive by adaptation.
4.5 Strongly reality based communities would necessarily be scientific communities.
4.5.1 No serious political community today is also a scientific community.
4.5.1.1 Establishing professional pools of expertise for these processes is necessary but not sufficient.
4.5.1.1.1 Any such group analysing a public problem is inherently political.
4.5.1.1.2 This is technocracy.

5. The consensus reality based community is always broad, often well-established and always vulnerable to disruption of its reality.
5.1 This is the nature of Karl Rove’s insult.
5.1.1 By always anchoring themselves in well established consensus reality, Rove’s opponents fail to react to events initiated by his faction which change the broad understanding of reality.
5.1.2 Rove’s faction has since, with amusing consistency, repeatedly shown themselves not to be reality based.
5.1.2.1 This faction acts as an alternative consensus reality based community.
5.1.3 In rejecting the dominant consensus reality, and its rhetoric of objective evaluation, they went straight on and also rejected a reality base for their community.
5.1.3.1 This is not a survival technique.
5.1.3.2 On the day of the 2012 US Presidential election, both major parties expected to win.
5.2 The consensus reality based community may even tacitly acknowledge it is not reality based.
5.2.1 This is a society in which the consensus ritual detaches from its social meaning.
5.2.2 Incongruence between political consensus reality and reality manifests in scandal.
5.2.2.1 Fin de siècle Vienna.
5.2.2.2 Late Ming China.
5.2.3 Incongruence between social consensus reality and geophysics and biology manifests in natural disaster.
5.2.3.1 The Aral Sea.
5.2.4 Incongruence between financial consensus reality and economic and psychological reality manifests in financial crisis.
5.2.4.1 CDOs and CDSs.
5.2.4.2 South Sea Bubble.
5.2.4.3 Louisiana.
5.2.4.4 Tulips.

6. The siblings of consensus reality are the consensus future and the consensus past.
6.1 Revision is the change of the consensus past.
6.2 Changes to the consensus future feel like betrayal or relief.

Dismal, But Scientists Nonetheless

A continuing critique of economists throughout the global financial crisis has been one of tunnel vision: ideological, free-market blinkers meant economists missed the inflating bubble and fixated on irrelevancies while sailing us happily over a cliff. A fair sample of it can be found in this New York Times article from March 2009, quoting that old favourite, JK Galbraith:

“It’s business as usual,” he said. “I’m not conscious that there is a fundamental re-examination going on in journals.”

John has pointed out that this is more of an open academic secret than a staggering revelation, with Willem Buiter’s summary of the shabby state of the art serving as an example. Buiter used to be on the Monetary Policy Committee of the Bank of England; he’s pretty Establishment so far as economics goes, and you can see him merrily poking holes in both Keynesian and neoclassical models.

These gaps, or gaping holes, do indeed make policy development horribly difficult, and the lives of politicians harder. This melds with old critiques of economists as somehow wooly or less than scientific. If you use Kuhn as a starting point, though, this pigheaded devotion to a model until its contradictions with reality become unmanageable is not a bug, but a feature, and a feature of science, what’s more.

Kuhn provided the terms paradigm and paradigm shift to the history of science (and a thousand failed dot com business plans), to describe the dominant worldview of normal science and the process by which it changes. A paradigm encompasses theory, conventional practice, instrumentation, and a domain of set problems and unsolved problems for the field. Different paradigms are not just competing theories but competing worldviews because they are in some sense incommensurable; proponents will often argue past each other.

It is the narrowing of focus provided by a successful paradigm that makes the activity of normal science so productive. With a professional consensus on worthwhile problems, tremendous attention can be brought to bear and progress made on those problems very rapidly. Elements widely outside those areas come to be seen as philosophical, or at least as part of a neighbouring academic discipline rather than the discipline defined by the paradigm.

Kuhn also points to why the neoclassical model is not yet academically dead. In his analysis, paradigms are always replaced by one or sometimes two victorious alternatives. Economics today (I would assert) is at a stage of one hundred flowers blooming; alternative paradigms are propagating but they are fairly wishy-washy for the most part. In part this is because some of them – Post-Autistic Economics comes to mind – explicitly reject a quantitative or model-centric worldview. Such a school might be an interesting and successful policy or philosophical school, but it is unlikely to meet with scientific success because it is not scientific. The trigger might be theoretical – some new technique to deal with the nasty math behind rational agents and complete markets, perhaps. Or the trigger might be empirical – the wealth of data coming out of computational sociology from social networking sites, perhaps. I’m too far away from the field to really pick a winner. But until there is a killer new paradigm which lets technical economists address a new range of technical issues or get a different traction on reality with them, I’d suggest the New Classical Model will continue to prevail.