Nothing in this article is really new, but I needed a document to which I could point people whenever I make use of Predictive-vs-Reactive terminology.
The Agile-vs-Waterfall debate is old and, arguably, has been won (depending on who you ask).
However, I like to frame this dichotomy in other terms, which, I believe, offer both a superior perspective and (Hegel-like) the opportunity for synthesis.
Predictive (Waterfall) vs Reactive (Agile)
Traditional management techniques put the emphasis on predictive management, so that power may be consolidated in the hands of the decision maker. Planning and specification activities are important, leading naturally to a waterfall-style, gated development process. This is not so much a development methodology as a manifestation of the exercise of dominant political power within an organization.
The variance-minimizing aspects of predictive control trace their roots back to Deming's teachings on factory management, and are expressed through the Toyota Production System (TPS), Six Sigma, Lean Sigma and so on, although in some situations it can be argued that this philosophy is abused (more on this later).
Agile management techniques, on the other hand, put the emphasis on reactive management and feedback loops: the devolution of power and responsibility to the individual developer, and the consequent restructuring of information flows so that the organization can react to new information as it is discovered and to new events as they happen.
With predictive management, everything is focussed on a small number of decision points and authority figures. A manager with authority over the project will either stop it or give the "green light" for work to continue at one of a handful of project gates. Predictive management requires extensive plans, forecasts and specifications to inform high-impact, long-lasting decisions. Predictive control is sensitive to changing environments, unexpected events, poor planning capacity, behavioral biases, and incorrect assumptions. Although appropriate in some situations, it is fragile and error-prone, and where it does occur, the guiding rationale is normally the consolidation of political or financial power.
With reactive management, the number and frequency of decision points is increased, so that work is planned out over short time scales only (anywhere from a few hours to a few weeks). Each decision is low-impact and remains in force for only a short time. Sensitivity to changing environments and unexpected events is reduced, as is sensitivity to poor planning, behavioral biases and incorrect assumptions (thanks to empirical feedback). Additionally, since decisions are greater in number and lower in impact, it becomes advantageous to devolve decision-making authority to avoid creating information-flow bottlenecks.
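To make the contrast concrete, here is a deliberately crude toy model (a Python sketch with made-up parameters, not data from any real project): the "true" requirement drifts as a random walk, and the only difference between the two strategies is how often the plan is refreshed against reality.

```python
import random

random.seed(42)

STEPS = 50    # length of the project, in arbitrary work increments
DRIFT = 0.3   # how far the "true" requirement wanders per increment

def simulate(replan_every):
    """One project: the target drifts as a random walk, and the plan is only
    refreshed every `replan_every` increments.  Returns the mean absolute gap
    between the (possibly stale) plan and the real target."""
    target, plan, gap = 0.0, 0.0, 0.0
    for step in range(STEPS):
        if step % replan_every == 0:
            plan = target                    # decision point: re-plan against current reality
        target += random.gauss(0.0, DRIFT)   # requirements drift between decisions
        gap += abs(target - plan)
    return gap / STEPS

def mean_gap(replan_every, trials=2000):
    """Average the gap over many simulated projects."""
    return sum(simulate(replan_every) for _ in range(trials)) / trials

# Predictive: one big up-front plan.  Reactive: frequent, low-impact re-plans.
print("predictive (plan once):    ", round(mean_gap(STEPS), 2))
print("reactive   (re-plan often):", round(mean_gap(5), 2))
```

The numbers themselves mean nothing; the structural point is that frequent, low-impact decisions keep the accumulated gap between plan and reality bounded, whereas a single up-front decision lets it grow without correction.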
One of the primary advantages of the reactive control approach is the opportunity that it offers to incorporate timely and relevant empirical information into the decision-making process; the ability to seek feedback, to make mistakes, and to recover (and learn) from them. Indeed, a properly functioning reactive control system does not seek to avoid mistakes, but rather to make them quickly and learn from them ("move quickly and break things"), although we often call the mistake-making process "experimentation" to disguise its nature from those who, for political reasons, demand preternatural levels of perfection and clairvoyance from those around them.
The key property to look for, of course, is the flow of lesson-bearing information through the decision-making cycle. Error feedback requires errors. An interesting comparison is to be had between this and the CFAR (Constant False Alarm Rate) approach to adaptive signal processing: if you do not get any errors or make any mistakes, you are not trying hard enough!
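For the curious, a cell-averaging CFAR detector looks something like the following sketch (a toy Python version; the window sizes, false-alarm rate and injected "targets" are arbitrary illustrations rather than parameters from any real system). The threshold is continually re-estimated from the surrounding noise so that a fixed, non-zero rate of false alarms is accepted by design.

```python
import random

random.seed(0)

def ca_cfar(signal, num_train=16, num_guard=2, pfa=1e-2):
    """Minimal cell-averaging CFAR: for each cell under test, estimate the local
    noise power from the surrounding training cells and declare a detection
    whenever the cell exceeds a threshold scaled to hold the false-alarm
    rate (pfa) constant."""
    half_train = num_train // 2
    half_guard = num_guard // 2
    # Threshold multiplier for square-law-detected (exponentially distributed) noise.
    alpha = num_train * (pfa ** (-1.0 / num_train) - 1.0)
    detections = []
    for i in range(half_train + half_guard, len(signal) - half_train - half_guard):
        leading = signal[i - half_train - half_guard : i - half_guard]
        trailing = signal[i + half_guard + 1 : i + half_guard + 1 + half_train]
        noise_estimate = sum(leading + trailing) / num_train
        if signal[i] > alpha * noise_estimate:
            detections.append(i)
    return detections

# Exponentially distributed noise with two injected "targets".
samples = [random.expovariate(1.0) for _ in range(200)]
samples[60] += 25.0
samples[140] += 25.0
print(ca_cfar(samples))   # expect hits near 60 and 140, plus the odd false alarm
```

The design choice worth noticing is that the false-alarm rate is a tuning parameter rather than something to be driven to zero; driving it to zero would also drive the detection rate to zero. The organizational analogue is a team whose error rate is exactly zero: it is almost certainly setting its threshold far too conservatively.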
Returning, for a moment, to Deming, variance minimization, Lean and Six Sigma: Deming's argument is essentially the same as mine. To manage effectively, you need to incorporate empirical feedback, and to empower individuals to act together in common cause. However, manufacturing is a highly controlled environment, where variance can be modelled as Gaussian (hence six sigma) and unexpected "Black Swan" or "unknown unknown" events can be omitted from the process control model. Software development (and many other business activities), on the other hand, operates in a very different environment, where the Gaussian is a misleading and dangerous noise model. We still need empirical and quantitative feedback, but we are no longer measuring variance in a simple, low-dimensional space, so what we can do with it is quite different. However, the concept of the feedback loop remains valid, and the organizational psychology is the same: empirical feedback frees people from organizational politics, "gamifies" the work experience, and empowers individuals to work together for a common cause.
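As a small illustration of how misleading the Gaussian assumption can be outside the factory, the following Python sketch simply counts how often a value exceeds the Gaussian's "six sigma" mark under two noise models (the heavy-tailed stand-in here is a Student-t with two degrees of freedom, chosen arbitrarily for illustration).

```python
import random

random.seed(1)

N = 200_000
THRESHOLD = 6.0   # a "six sigma" excursion, measured in Gaussian standard deviations

def tail_fraction(samples):
    """Fraction of samples whose magnitude exceeds the threshold."""
    return sum(1 for x in samples if abs(x) > THRESHOLD) / len(samples)

def student_t2():
    """Crude heavy-tailed stand-in: Student-t with 2 degrees of freedom,
    generated as a standard normal divided by sqrt(chi-squared(2) / 2)."""
    z = random.gauss(0.0, 1.0)
    chi2 = random.gauss(0.0, 1.0) ** 2 + random.gauss(0.0, 1.0) ** 2
    return z / ((chi2 / 2.0) ** 0.5)

gaussian = [random.gauss(0.0, 1.0) for _ in range(N)]
heavy    = [student_t2() for _ in range(N)]

print("P(|x| > 6) under the Gaussian model:    ", tail_fraction(gaussian))
print("P(|x| > 6) under the heavy-tailed model:", tail_fraction(heavy))
```

Under the Gaussian model a six-sigma excursion is roughly a two-in-a-billion event, so the first count is essentially always zero; under even a modestly heavy-tailed model, such excursions turn up a few percent of the time. When the tails are that fat, counting defects against a Gaussian-derived control limit stops being a meaningful exercise, which is why it is the feedback loop, rather than the variance statistic, that transfers from the factory to software.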