When was the hand axe invented?
Answer: it wasn't.
Although the hand axe appeared in the Lower Paleolithic, it would be presuming too much to say that it was invented. The same could be said for most of the innovations that have characterized human culture for thousands of years: music, language, baby talk, death rituals, mealtimes, proper names, rhythm, fire, and gossip did not originate from the creativity of any individual. Like the biological adaptation of morning sickness, human cultural adaptations arose gradually, necessarily in coevolution with our genetics.
It worked this way - slowly - because that's what works, in the long run. The shock of a genuine innovation would very likely be more than a fragile early human system could bear. We are all aware that most genetic mutations are either neutral or harmful; the fraction of genetic innovations that are beneficial is vanishingly small. Organisms have developed the conservative process of DNA repair to protect against these likely-harmful shocks to the system. Organisms arise naturally, without conscious invention, and processes exist to prevent innovations (mutations) from harming functioning organisms. In human culture, from the perspective of the individual, things are just done a certain way, and conservative cultural processes exist to prevent cultural innovation from harming a functioning group. The only time a limited cultural change may be welcome is when the existing system ceases to function; otherwise, innovation is (mostly correctly) regarded as dangerous.
Christopher Alexander, in his Notes on the Synthesis of Form, refers to the process by which simple societies slowly change as unconscious design. The extreme conservatism of simple, pre-industrial societies protects their functioning systems (from social organization to food production to shelter-building) from the danger of innovation.
The Burden of Complexity
Unconscious design and protective conservatism work well - until the burden of complexity overwhelms these simple mechanisms. Unconscious design processes (like biological evolution) cannot keep up with change beyond a certain level of complexity. (That's why massive extinctions frequently follow major environmental change.)
Alexander asks us to visualize, as a stand-in for a given human system, a ten-by-ten (say) panel of light-emitting diodes, connected to one another in various ways. When a diode is "on," this symbolizes a bad fit - analogous to discomfort, pain, human misery, poor functioning, etc. When a light is "off," this symbolizes good fit (the absence of a problem). We want all the lights to be off - then we will have solved the design problem.
The probability of finding a solution to the problem depends on the density of interconnection between the diodes (the complexity of the system): the more densely the diodes are connected, the harder the problem is to solve.
In real-life systems, changing one thing can change a lot of other things, too. Increasing the capacity of a teakettle may also increase its weight and cost, for instance.[1] This is the essentially conservative message of all those fairy stories about making wishes.
Analogously, diodes in Christopher Alexander's imaginary diode box may be connected to each other such that turning one diode on or off turns one or more other diodes on or off. If only a few diodes are connected to each other, we have a pretty good shot at solving the problem just by dumb luck - turn off lights at random and see what happens, and very likely a solution will emerge.
However, when the diodes become sufficiently entangled, it becomes impossible to blindly tinker our way to a solution. If every diode is connected to every other diode, for instance, achieving a solution in this way is impossible.
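To make the intuition concrete, here is a toy simulation. It is my own sketch, not Alexander's formal treatment: the panel size, the connection probability, and the rule that fixing a light disturbs each of its neighbors with probability one-half are all assumptions made purely for illustration.

```python
import random

def tinker(n_lights=100, connection_prob=0.05, max_steps=100_000, seed=0):
    """Randomly tinker with a panel of interconnected lights.

    Each light is 'on' (a misfit) or 'off' (a good fit). A tinkering step
    turns one randomly chosen lit light off, but each light connected to it
    is disturbed and switches on with probability 0.5 (an illustrative
    assumption, not Alexander's own model). Returns the number of steps
    needed to reach the all-off state, or None if the budget runs out.
    """
    rng = random.Random(seed)
    # Random undirected interconnection graph over the lights.
    neighbors = {i: set() for i in range(n_lights)}
    for i in range(n_lights):
        for j in range(i + 1, n_lights):
            if rng.random() < connection_prob:
                neighbors[i].add(j)
                neighbors[j].add(i)

    lights = [True] * n_lights               # start with every misfit present
    for step in range(max_steps):
        lit = [i for i, on in enumerate(lights) if on]
        if not lit:
            return step                      # all lights off: problem solved
        chosen = rng.choice(lit)
        lights[chosen] = False               # fix one misfit...
        for j in neighbors[chosen]:          # ...at the cost of disturbing its neighbors
            if rng.random() < 0.5:
                lights[j] = True
    return None                              # too entangled: tinkering never converged

for p in (0.0, 0.01, 0.05, 0.2):
    print(f"connection density {p}: steps to all-off = {tinker(connection_prob=p)}")
```

With numbers like these, the unconnected and barely connected panels should settle within a few hundred steps, while at densities of a few percent and above the random walk typically stalls far from the all-off state - which is the point of Alexander's illustration.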
Systems with dense, complex interaction among sub-parts may, from time to time, "hit on" a solution that functions for a while. But this is not stable. Any change in the environment that destroys this lucky "fit" cannot be remedied by a simple change to the system, because any change to one part will affect other parts, likely inflicting damage.
The more complex (interconnected) the system, the more incapable it is of successfully responding to environmental change.
Simple systems are stable, even given environmental change. Complex systems aren't stable in a changing environment. Beyond a given level of complexity, conservatism is a losing strategy.
Toward Conscious Design: Big Independent Parts
"Keep things as they are if they work, tinker and hope if they break" does not work to fix big, complex problems arising from a change in environment. We must instead approach big, complex design problems consciously.
Alexander's mathematical approach to complex design problems is to analyze the interconnections between the parts of the problem (the diodes, above), with the goal of identifying big independent parts. If we can identify a part of a design problem that doesn't interact much with the other parts of the problem, we can solve that, and then move on to the next piece.
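As a sketch of what identifying "big independent parts" might look like mechanically: Alexander's actual procedure is a hierarchical partition of the interaction graph that cuts as few links as possible, but even the crude step below - pulling out the pieces of the graph that share no links at all - shows the shape of the idea. The requirement names and interaction pairs are invented for illustration.

```python
from collections import defaultdict

def independent_parts(requirements, interactions):
    """Group design requirements into clusters that do not interact.

    `requirements` is a list of requirement names; `interactions` is a list
    of (a, b) pairs meaning "a and b conflict or reinforce each other".
    Returns the connected components of the interaction graph - a crude
    stand-in for Alexander's 'big independent parts'.
    """
    graph = defaultdict(set)
    for a, b in interactions:
        graph[a].add(b)
        graph[b].add(a)

    unvisited, parts = set(requirements), []
    while unvisited:
        frontier = [unvisited.pop()]
        component = set(frontier)
        while frontier:
            node = frontier.pop()
            for nbr in graph[node]:
                if nbr in unvisited:
                    unvisited.remove(nbr)
                    component.add(nbr)
                    frontier.append(nbr)
        parts.append(component)
    return parts

# Hypothetical teakettle-ish requirements, purely for illustration.
reqs = ["capacity", "weight", "cost", "handle comfort", "spout drip", "lid fit"]
links = [("capacity", "weight"), ("weight", "cost"), ("capacity", "cost"),
         ("spout drip", "lid fit")]
print(independent_parts(reqs, links))
# -> e.g. [{'capacity', 'weight', 'cost'}, {'spout drip', 'lid fit'}, {'handle comfort'}]
```

The capacity/weight/cost cluster can then be settled as one sub-problem without disturbing the spout or the lid, which is the whole payoff of decomposition.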
One major problem with this method is that our language does not necessarily correspond to the "big independent parts" that are so important to identify if we are to have any hope of solving big design problems. It is highly unlikely that a word happens to correspond to a big independent part of the design problem - especially since societies complex enough to require conscious design are much, much newer than language.
How Conservatism Ensures Misfit
There are two ways in which cultural conservatism ensures a bad fit between design and environment. First, conservatism functions in simple societies to preserve a good fit; for a conservative process to be useful, a good fit must already be present. Because conservatism presupposes this good fit, culturally conservative humans insist that one exists even when the fit is in reality very bad indeed. Second, conservatism obviously functions to prevent the implementation of innovative solutions to problems. In these ways, conservatism perversely functions to preserve a bad fit.
I do not mean here to draw a line between modern political conservatives and modern political liberals, except perhaps connotatively. To varying degrees, we all have tendencies toward conservatism as part of our cultural and genetic legacy. This is expressed generally in the status quo bias and its near relatives (or perhaps subspecies), the endowment effect (loss aversion), risk aversion, and shame from norm violations.
We do not have a choice as to whether we feel the status quo bias; it is a part of who we are. We can, however, decide whether to instantiate it.
Jonathan Haidt and others have found that modern political liberals and modern political conservatives rely on different "moral foundations" in doing ethics. Conservatives rely more on what Haidt terms "authority/respect," "purity/sanctity," and "ingroup/loyalty" than do liberals; liberals rely more on "harm/care" and "fairness/reciprocity." Both respect for authority and preservation of purity or sacredness are essentially conservative functions in the sense outlined above: they function to blindly preserve the status quo, with no analysis of the goodness of the status quo.
The design problem for large, densely interconnected human systems has no chance of solution if the previously adaptive human tendency to conservatism is allowed to control the design process. Our tendency to conservatism is, rather, a part of the design problem that must be solved.
1. Alexander's example throughout the book is the design of a teakettle, although by the end he's designing simple villages. However, his model is of extremely broad applicability - software designers are at least as gay for Alexander's thinking and methods as are urban design nerds like me. The person who gave me my copy works on the search algorithm at Google.