Friday, 28 December 2012
The Political Economics of the Singularity
"The singularity", in some sense at least, is already happening, and has been for the past couple of years.
Look, if you will, at the disconnect between the technology sector (booming: "talent" in exceedingly short supply, salaries rocketing) and the rest of the economy (tanking: labor in surplus, many people out of work, salaries plummeting).
Technology brings many new and unfamiliar nonlinearities into the economy. Access to the mass market does not (in all circumstances) require mass employment. For example, I am one individual, working from my home office, yet with a few keystrokes I can improve and deploy a product used (indirectly) by millions of people: no expensive bureaucracy, no factory, no paperwork, and no infrastructure beyond a handful of laptops, an internet connection, and a few dozen rented Amazon EC2 machines.
The funny thing is this: We all expected the singularity to swing the balance of power firmly towards the side of capital, away from labor. After all, surely capital would simply buy robots instead of employing labor? However, it is not quite panning out as we expected. Developing a new technology product is now ridiculously cheap -- capital costs have all but disappeared. Technology startups now look to investors not so much for capital, but for advice, access to customers, and reputation. Just as the need for labor has (unevenly) diminished, so the need for capital has also (unevenly) diminished.
It is becoming clear that (in some circumstances at least) the old balance of power between labor and capital has been swept aside, with both, in some sense, having been made irrelevant.
What replaces it? The answer to that question is not easy to discern. One thing is clear though: this new world is far more complex, richly textured, baroque and interesting, and the old political battle-lines will need to be redrawn with greater subtlety and nuance than they ever were before.
Wednesday, 26 December 2012
Stupid is better than Smart (A call for humility)
Software development is a funny thing. It is full of nonlinearities and counterintuitive results.
Here is one of them: It is better to think of oneself as stupid (and be right) than it is to think of oneself as smart (and be wrong).
This sounds nonsensical, doesn't it? Surely it is better to be smart than to be stupid, particularly since we spend so much of our time trying to demonstrate to other people just how smart we really are?
Well, once we start thinking of ourselves as smart relative to the rest of the population, it is all too easy to start thinking of ourselves as smart relative to the problems that we are trying to solve.
This is a problem, because *everybody* is pretty stupid in the grand scheme of things, and hubris is dangerous, particularly in the presence of complexity.
Complexity makes systems difficult to understand and manage, and difficult to fix when they go wrong. It is also difficult to gauge complexity, and humans have a consistent tendency to underestimate the complexity of unexplored functionality.
Let us consider a (mis)quote, paraphrasing Brian Kernighan, that illustrates the point:
"Debugging a system is harder than designing it in the first place, so if you are as clever as you possibly can be when you are designing the system, you are (by definition) too stupid to debug it."
The well-known Dunning-Kruger effect applies here too: because we tend to think that we are smarter than we really are, we tend to design systems that are too complex for us to debug and maintain.
In cases such as this, it is helpful to take a broader view. We may call somebody "smart", but the term is defined relative to other humans, not relative to the problems that we need to solve.
We are frequently faced with problems that would be considered difficult by the very best of us. It is not a sign of weakness to acknowledge that, and to treat the problems that we are trying to solve with the deference and respect that they deserve.
BAM! The Combinatorial Explosion and Planning
Solving problems requires understanding.
Understanding is built, in part, by measuring features.
One or two features might be sufficient to describe something simple, but describing something complex often takes many more.
With one or two features, a handful of measurements may suffice to capture the behavior of whatever-it-is that we are trying to understand. As the number of features that we need to measure grows, the number of measurements that we need to properly and comprehensively capture that behavior grows faster than fast: It grows stupendously, ridiculously, unreasonably, explosively fast. It grows so fast that it is akin to ... BAM! ... hitting a brick wall.
This is the Combinatorial Explosion:- Know it, and Fear it, because it is Important.
In other words, it stops being possible, in any reasonable and practicable sense of the word, to understand any system or problem whose description involves more than a small handful of features.
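To make the arithmetic concrete, here is a minimal sketch in Python, assuming (purely for illustration) that each feature can take ten distinct values, so that exhaustively covering every combination of n features takes 10^n measurements:

```python
# A minimal sketch of the combinatorial explosion. The assumption of
# ten distinct values per feature is illustrative only.

def measurements_needed(num_features, values_per_feature=10):
    """Measurements required to cover every combination of feature values."""
    return values_per_feature ** num_features

for n in (1, 2, 5, 10, 20):
    print("{:2d} features -> {:,} measurements".format(n, measurements_needed(n)))

# One feature is trivial (10 measurements); 5 features already need
# 100,000; 20 features need 100,000,000,000,000,000,000 ... BAM.
```

By twenty features, exhaustive measurement is already beyond any realistic budget of time or money.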
This has some terribly important consequences: consequences that, as our world becomes more complex and fills with more and more interconnected systems, we particularly need to understand and internalize, for we ignore them at our peril.
The past becomes less useful as a guide for predicting the future; our intuition becomes less effective; unintended consequences and unusual, exceptional events become more prevalent; our ability to predict what will happen next is weakened to the point where it disappears; and the utility of planning (in the way that we commonly understand it) diminishes dramatically.
No plan has ever survived first contact with the enemy, but as we (consciously or unconsciously) become more liberal with complexity, these traits and characteristics will become more and more prevalent.
We need to adapt, and learn to deal with them.
PS:
This line of thinking is particularly interesting when applied to organizations:
http://daltoncaldwell.com/thoughts-on-organizational-complexity
Such organizations, of course, then go on to produce more complex products:
http://en.wikipedia.org/wiki/Conway's_law
Monday, 17 December 2012
Development Automation - The Revolution
In response to:
http://www.businessweek.com/articles/2012-12-17/google-s-gmail-outage-is-a-sign-of-things-to-come
It is really great to see an article on continuous deployment in the mainstream media, as it is an issue about which I am extremely enthusiastic.
When we delay the release of a piece of our work, the psychological importance that we place on the quality of that work increases, so we spend more time manually finessing and polishing the work (often resulting in more delay, and possibly also raising the psychological barriers to release still higher).
This is all well and good when all testing and quality control must, by necessity, be manual. However, that is less and less true today, as automated testing and deployment practices spread (in the form of Test-Driven Development and Continuous Integration).
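To make this concrete, here is a minimal sketch of an automated release gate in Python; the product logic and the deploy step are placeholders, purely for illustration:

```python
# A minimal sketch of an automated release gate. Every name here
# (apply_discount, deploy) is an illustrative placeholder.
import sys
import unittest

def apply_discount(price, rate):
    """A stand-in for real product logic."""
    return price * (1 - rate)

class TestPricing(unittest.TestCase):
    """A stand-in for a real automated test suite."""
    def test_discount_is_applied(self):
        self.assertEqual(apply_discount(100, 0.2), 80)

    def test_price_never_negative(self):
        self.assertGreaterEqual(apply_discount(100, 1.0), 0)

def deploy():
    # Placeholder: a real pipeline might push a new build to the
    # production fleet here (e.g. a pool of EC2 instances).
    print("All tests passed; deploying to production.")

if __name__ == "__main__":
    result = unittest.main(exit=False).result
    if result.wasSuccessful():
        deploy()
    else:
        sys.exit("Tests failed; release blocked.")
```

The specific tooling does not matter; what matters is that the gate is mechanical, so releasing early and often carries no extra manual cost.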
This (extremely) high level of automation, and the work practices that go with it, together offer a revolutionary step-change in the way that we engineer complex systems:- a revolution that companies like Google and Netflix have embraced; a revolution that the rest of us ignore at our peril.
Instead of simply engineering products, we must engineer organizations and systems that produce world-beating products time and time again.
That is the revolution of DevOps, of Test Engineering, and of Development Automation.
And that is the only sane way to go about making complex systems today.