Wednesday 18 December 2013
Low overhead trace-logging
Scatter trace-points throughout the code. Each trace-point is identified by a single unique integer (macro magic?). The last n (64? 256? 1024?) trace-points are stored in a global buffer, which gets flushed to disk when an anomalous situation is encountered (e.g. an exception is thrown).
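A minimal sketch of the idea in Python (the original notion is clearly aimed at C/C++ and macros, but the mechanics are the same): a 256-entry ring buffer of trace-point IDs, flushed via sys.excepthook when an unhandled exception escapes. The buffer size, dump file name and trace IDs are arbitrary choices for the example.

    import sys
    import time
    from collections import deque

    _TRACE_BUFFER = deque(maxlen=256)   # only the most recent trace-points survive

    def trace(point_id):
        """Record a trace-point: just a timestamp and its unique integer ID."""
        _TRACE_BUFFER.append((time.time(), point_id))

    def _flush_on_anomaly(exc_type, exc_value, exc_traceback):
        """Dump the ring buffer to disk, then hand over to the default handler."""
        with open("trace_dump.log", "w") as f:
            for timestamp, point_id in _TRACE_BUFFER:
                f.write("%.6f %d\n" % (timestamp, point_id))
        sys.__excepthook__(exc_type, exc_value, exc_traceback)

    sys.excepthook = _flush_on_anomaly

    # Usage: sprinkle trace(<unique integer>) calls through the code.
    def risky_division(x):
        trace(101)
        result = 1.0 / x        # a ZeroDivisionError here triggers the flush
        trace(102)
        return result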
Friday 13 December 2013
Strategic vs. tactical programming
The great thing about programming is that you can do it at any level of abstraction -- from the minutest down-in-the-guts detail, to broad strategic strokes of business logic.
There is absolutely no reason why a fully automated business should not have strategic programming roles as well as detailed "tactical" programming roles.
Monday 25 November 2013
Reasoning about state evolution using classes
In response to:
http://hadihariri.com/2013/11/24/refactoring-to-functionalwhy-class
Thinking about how application state evolves over time is difficult, and is responsible for all sorts of pernicious bugs. If you can avoid statefulness, you should ... but sometimes you just can't. In these cases, Classes give us a way to limit and control how state can change, making it easier to reason about.
In other words, placing state in a private variable, and limiting the possible state transitions with a restricted set of public methods (public setters don't count) can dramatically limit the combinatorial explosion that makes reasoning about state evolution so difficult. To the extent that this explosion is limited, classes help us to think and reason about how the state of our application evolves over time, reducing the occurrence and severity of bugs in our system.
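As a minimal sketch of what this looks like in practice (the Order class and its states are purely hypothetical): the state lives in a private attribute, and the public methods enumerate the only legal transitions.

    class Order:
        """Hypothetical example: state can only evolve along a small,
        explicitly enumerated set of transitions."""

        _ALLOWED = {
            "new":       {"submitted", "cancelled"},
            "submitted": {"shipped", "cancelled"},
            "shipped":   set(),
            "cancelled": set(),
        }

        def __init__(self):
            self._state = "new"      # private: no public setter

        @property
        def state(self):
            return self._state       # read-only view for callers

        def _transition(self, new_state):
            if new_state not in self._ALLOWED[self._state]:
                raise ValueError("cannot go from %s to %s" % (self._state, new_state))
            self._state = new_state

        # The public methods below are the *only* ways the state can change,
        # so reasoning about state evolution reduces to reading them.
        def submit(self):
            self._transition("submitted")

        def ship(self):
            self._transition("shipped")

        def cancel(self):
            self._transition("cancelled")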
The discussion of using classes as an OO domain modelling language is a separate question, and is (as the above article insinuates) bedevilled with domain-model impedance mismatches, resulting in unsatisfactory levels of induced accidental complexity.
Friday 22 November 2013
20 percent together
20% time is a beguiling idea. Productivity is a function of passion; and nothing fuels passion like ownership. The problem with 20% time stems from the blurring of organisational focus and the diffusion of collective action that results from it.
So ... the problem remains: How to harness the passion of self-ownership, and steer it so that it is directed in line with the driving force and focus of the entire company ... so the group retains its focus, and the individual retains his (or her) sense of ownership.
I can well imagine that the solution to this strategic challenge is going to be idiosyncratic to the business or industry in question ... but for many types of numerical software development, there is a real need for good tools and automation, so why not make a grand bargain: give employees some (limited) freedom to choose what they work on, but mandate that it should be in support of (well-advertised) organisational strategic objectives.
Friday 15 November 2013
Continuous Collaboration: Using Git to meld continuous integration with collaborative editing.
Written in response to:
http://www.grahamlea.com/2013/11/git-mercurial-anti-agile-continuous-integration/
I strongly agree with the fundamental point that the author is making.
However, there are nuances. A lot of this depends on the type of development that you are doing.
For example, most of my day-to-day work is done in very small increments. Minor bug-fixes, incremental classifier performance improvements, parameter changes, and so on. Only rarely will I work on a feature that is so significant in its impact that the work-in-progress causes the branch to spend several days in a broken / non-working state. I also work in fairly small teams, so the rate of pushes to Gerrit is quite low: only around a dozen pushes per day or so. This means that integration is pretty easy, and that our CI server gives us value & helps with our quality gating. We can follow a single-branch development path with little to no pain, and because both our software and the division of labour in the team are fairly well organised, conflicts very, very seldom occur when merging (even when using suboptimal tools to perform the merges).
This state of affairs probably does not hold for all developers, but it holds for me, and for most of the people that I work with. As a result, we can happily work without feature branches (most of the time), and lean on the CI process to keep ourselves in sync & to measure the performance of our classifiers & other algorithms.
Now, don't get me wrong, I think that Git is great. I am the nominated Git expert in my team, and spend a lot of time helping other team members navigate the nuances of using Git with Gerrit, but for most people it is yet another tool to learn in an already over-complex development environment. Git gives us the flexibility to do what we need to in the environment that we have; but it is anything but effortless and transparent, which is what it really needs to be.
Software development is about developing software. Making systems that work. Not wrangling branches in Git.
My ideal tool would be the bastard son of Git and a real-time collaborative editor. My unit tests should be able to report when my local working copy is in a good state. Likewise, my unit tests should be able to report whether a merge or rebase has succeeded or failed. Why can I not then fully automate the process of integrating my work with that of my colleagues? Indeed, my work should be integrated & shared whenever the following two conditions are met: 1) My unit tests pass on my local working copy, and 2) My unit tests pass on the fully integrated copy. These are the same criteria that I would use when doing the process manually ... so why do it manually? Why not automate it?
Triggered by every save, the resulting process would create the appearance of an almost-real-time collaborative working environment, opening up the possibility for new forms of close collaboration and team-working that are simply not possible with current tools. A source file would be a shared document that updates almost in real time. (If it is only comments that are being edited, then there is no reason why the updating could not actually be in real time.)
This means that you could discuss a change with a colleague, IRC-style, in the comments of a source document, and make the change in the source file *at the same time*, keeping a record not only of the logic change, but also of the reasoning that led to it. (OK, this might cause too much noise, but with comment-folding, that might not matter too much.)
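As a rough sketch of how little machinery the two-condition rule actually needs (the "origin"/"master" names and the pytest invocation are placeholders; a real tool would also have to cope with uncommitted work, notifications and flaky tests):

    import subprocess

    def run(*cmd):
        """Run a command, returning True if it exits successfully."""
        return subprocess.call(list(cmd)) == 0

    def integrate_if_green():
        # Condition 1: the local working copy is in a good state.
        if not run("python", "-m", "pytest"):
            return "local tests failed; nothing shared"

        # Pull in everybody else's work (assumes a clean working copy).
        if not run("git", "pull", "--rebase", "origin", "master"):
            run("git", "rebase", "--abort")
            return "rebase conflict; manual attention needed"

        # Condition 2: the fully integrated copy is also in a good state.
        if not run("python", "-m", "pytest"):
            return "integrated tests failed; not pushing"

        run("git", "push", "origin", "master")
        return "integrated and shared"

    if __name__ == "__main__":
        print(integrate_if_green())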
Having said all of that, branches are still useful, as are commit messages, so we would still want something like Git to keep a record of significant changes, and to isolate incompatible works-in-progress in separate branches; but there is no reason why we cannot separate out the "integration" use case and the "collaboration" use case from the "version control" and "record keeping" use cases.
Wednesday 13 November 2013
Specification and Risk
When using a general purpose library for a specific application, we generally only use a tiny subset of the functionality that the library provides. As a result, it is often reasonable to wrap that library in a simplified, special-purpose API, tailored to the specific needs of the application under development. This simplifies the interface, and in reducing the number of ways that it can be used, we also reduce the number of ways that it can go wrong, reducing the cognitive burden on the developer, simplifying the system, and reducing both cost and risk.
In this way, a restriction in what we expect the system to do results in a beneficial simplification; reduced costs and reduced risk.
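For a concrete (if toy) illustration: suppose an application only ever needs to persist a handful of key/value settings. Wrapping Python's general-purpose sqlite3 module behind a two-method API hides cursors, transactions and SQL entirely; the class and table names here are invented for the example.

    import sqlite3

    class SettingsStore:
        """Hypothetical app-specific facade: the application only ever needs
        get/put, so that is all the wrapper exposes."""

        def __init__(self, path):
            self._conn = sqlite3.connect(path)
            self._conn.execute(
                "CREATE TABLE IF NOT EXISTS settings (key TEXT PRIMARY KEY, value TEXT)"
            )

        def put(self, key, value):
            self._conn.execute(
                "INSERT OR REPLACE INTO settings (key, value) VALUES (?, ?)",
                (key, value),
            )
            self._conn.commit()

        def get(self, key, default=None):
            row = self._conn.execute(
                "SELECT value FROM settings WHERE key = ?", (key,)
            ).fetchone()
            return row[0] if row else default

    # Usage: the rest of the application never sees sqlite3 at all.
    store = SettingsStore("settings.db")
    store.put("theme", "dark")
    print(store.get("theme"))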
It is possible to go too far with this, though, and there are two deleterious effects that immediately spring to mind:
Firstly, there is the obvious risk of a bug in the specification - that proposed restrictions may be inappropriate or incompatible with the primary task of the system, or with the political needs of various stakeholders.
Secondly, and more insidiously, there is the risk that excessive restrictions become positive specification items; moving from "the system does not have to handle X" to "the system must check for X and explicitly error in this way". Whilst this seems ridiculous, it is a surprisingly easy psychological trap to fall into, and, of course, it increases system complexity and cost rather than reducing it.
Consistently striking the right balance is primarily a function of development-organisation culture, and is another reason why businesses need to pay attention to this critical (but frequently overlooked) aspect of their internal organisation.
Tuesday 12 November 2013
Qualitative changes in privacy induced by quantitative changes in social-communications network topology
How do we form opinions and make judgements of other people?
Can we understand this process in terms of the flow of information through a network/graph?
Can we use this model to understand the impact of changes in the network? (Both structural/topological changes in connectivity and quantitative changes to flow along existing graph edges).
Can we use this approach to predict what will happen as our privacy is increasingly eroded by technology?
I.e.: Do we see some relationship between the structure of a social/communication graph and the notion of "privacy"? (Temporal aspects might also be important?)
If the graph changes, does the quality of "privacy" change in different ways, and how does that impact the way that we make judgements about other people, and the way that other people make judgements about us?
What does that mean for the nature of society going forwards; particularly as developments in personal digital technologies mean that increasing amounts of highly intimate, personal information are captured, stored and disseminated in an increasingly uncontrolled, (albeit sparsely distributed) manner.
The sparsity of the distribution might be important - in terms of the creation of novel/disruptive power/influence networks.
Wednesday 6 November 2013
The software conspiracy: Maintaining a software-developer-biased imbalance in human-machine labour arbitrage.
Have you ever wondered why software engineering tools are so terrible?
Perhaps there is an implicit/unspoken conspiracy across our profession?
After all, we software developers are working away at our jobs to automate various economic activities; the inevitable result of which is to force workers in other industries out of their jobs and their livelihoods.
A claim can be made that technological developments create new, less rote and routine roles, with more intellectual challenge and greater responsibility -- but there is no real evidence that this outcome will necessarily always hold; indeed, there is some empirical evidence to suggest that this pattern is even now beginning to fail.
We are not stupid. Indeed, we are well aware of the effects of our actions on those workers' individual welfare and security, so why should we bring the same calamity upon ourselves? Perhaps we should keep our software tools in their present primitive state, to ensure job security for ourselves just as we undermine it for others?
Friday 25 October 2013
The insanity!
People often choose to use C++ because of its performance, willingly sacrificing programmer productivity on the altar of language, tool-chain, and application complexity to do so.
Then they inexplicably decide to use strings everywhere, with bazillions of dynamic allocations and memory copies, recklessly discarding any shred of hope that they might have had for, y'know, performance.
If you want to do a ton of unnecessary string manipulations & memcpys, your application is going to be slower than it could be, regardless of the language it is written in. If you want performance, the programming language that you use is literally the least significant choice that you can make. What you choose to do is far more important than the language that you use to express those operations.
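To make the point in a deliberately non-C++ way, here is a quick Python sketch (the buffer size and loop counts are arbitrary): slicing a bytes object copies the data every time, while slicing a memoryview over the same data copies nothing, and the copies are where the time goes.

    import timeit

    data = bytes(10 * 1000 * 1000)      # a ~10 MB buffer of zeroes

    def copy_per_slice():
        chunk = data
        for _ in range(100):
            chunk = chunk[1:]           # each slice allocates and memcpys ~10 MB

    def zero_copy():
        chunk = memoryview(data)
        for _ in range(100):
            chunk = chunk[1:]           # each slice is just a view; nothing is copied

    print("copying slices:", timeit.timeit(copy_per_slice, number=5))
    print("memoryviews:   ", timeit.timeit(zero_copy, number=5))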
Also, the word "fast" is totally and utterly meaningless; to the point of being dangerous. What a database developer means by "blazingly fast" is totally different from what an embedded engineer, or a VHDL developer means by the same term.
Biological opposition
http://www.theatlantic.com/politics/archive/2013/10/can-your-genes-predict-whether-youll-be-a-conservative-or-a-liberal/280677/
Is political orientation influenced by personality traits? Seems plausible.
Are personality traits influenced by genetic factors? Seems plausible.
Can we conclude that political affiliation is influenced by genetic factors? I suppose, although both the causal link and the degree of influence could be very weak.
Should this influence how political campaigns operate? Possibly, although the implications might be a little scary if people took the idea too seriously.
Does this mean that it is intrinsically very difficult (perhaps impossible) to see issues from alternative points of view? Possibly, and this is (I feel) an interesting question to contemplate.
How can we effectively communicate with other people if we face biological barriers when trying to empathise with them?
Does this mean that empathy is sometimes impossible, or merely that it can sometimes be a stressful, energy-intensive mental exercise?
Does this raise other issues?
What do you think?
Thursday 24 October 2013
Intellectual diversification
Does tailored search force us into an individual intellectual rut? Is this a threat?
To really understand the world through the eyes of others, it is not sufficient to randomise the input that we receive.
The existence of a rut is not necessarily a problem, although an inability or an unwillingness to break out of it may be.
Perhaps, as Google builds an increased understanding of us, it might be willing to share that understanding more widely, so that we in turn may start to understand others; both as individuals and as social groups. (For those that are willing to invest the time).
I also hope that over time, the distribution of human culture and practice will change, with an increase in variance, but a reduction in extreme outliers; so we can enhance our diversity whilst placing an upper bound on the risk of misunderstanding and conflict.
As complexity increases, phase transitions are observed
Once you get beyond a certain level of complexity, technology stops behaving like a deliberately engineered thing, and starts to take on other, less familiar characteristics.
Wednesday 23 October 2013
Oil and water: Mixing agility and government IT.
The below is a response to this Washington Post article:
http://www.washingtonpost.com/blogs/wonkblog/wp/2013/10/21/the-way-government-does-tech-is-outdated-and-risky/
Governments have a fiduciary responsibility to look after taxpayer money. When dealing with the private sector, this is normally interpreted to mean an obligation to issue a tender, and to take the lowest (viable) bid that meets the specification. This works fairly well when the scope of the work is known beforehand, when costs can be predicted, and when a schedule of work can be drawn up.
However, as the complexity of a system increases, making any sort of prediction about that system becomes exponentially more difficult. This means that the specification for a complex system must be exponentially longer than a specification for a simple system, with an exponentially greater risk of errors. Making time & cost estimates becomes exponentially more difficult, and the error in those estimates becomes exponentially greater. The number of unknowns that must be researched grows exponentially with complexity also.
The term "Agile" is a bit of a buzzword, and has attracted more than it's fair share of snake-oil salesmen, but what it comes down to is, essentially, throwing the towel in and admitting defeat. When you *cannot* make predictions about your system, what do you do? You need to find another way of managing costs and reducing project risk.
Unfortunately, because of the fiduciary responsibilities named above, these options are not open to Government contract mechanisms. There is a fundamental conflict that cannot be resolved. As a result, government cannot (should not?) attempt to implement anything other than the very simplest of systems in the traditional, mandated manner.
How then, can complex information systems be developed for public benefit? The philanthropy of private individuals & organisations is one solution; whether through crowd-funding, or open source initiatives. Political leadership and coordination of such activities is something that could easily fall into government remit, without the significant legal hurdles that contracting work out imposes.
Tuesday 22 October 2013
Learning abstractions
The more restricted and limited you make something, the easier it is to use.
Remove options, reduce flexibility and the device becomes simpler and easier. Good for the novice, bad for everybody else.
The real trick is to gradually surface functionality: provide powerful abstractions & flexible conceptual tools, but then hide them so that they only surface when they are needed, and, more importantly, when the user is able to cope with them.
So, by all means hide files, and other abstractions that serve to complicate the interaction & confuse the novice, but keep them in the background, waiting until they are needed; and provide neat ways of discovering and learning the conceptual tools required to use our devices to their full potential.
User interface design is about planning the user's learning process; guiding them on their journey from novice to expert.
Friday 18 October 2013
A failure to interoperate; or; the power struggle between tools.
I have spent the morning cursing and swearing as I try to get CMake and Eclipse to play nicely with one another.
This is a frustration that I feel again and again. So many tools try to take control; to impose their own view of how things "should be" in the world - and those views often conflict.
Both CMake and I agree that out-of-source builds are the only sane thing to do. Why take the risk of having the build process screw up the source tree in some subtle way? This also means that we can easily create multiple builds from the same source - and we are not limited to just Debug & Release builds, either. We can have builds that just generate documentation, or builds that just perform static analysis & style checking, or builds for different platforms, different targets, different levels of optimisation; all completely configuration-managed and under version control, yet totally independent of the actual content of the source tree.
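For what it is worth, the whole idea fits in a few lines of glue; the directory names and cache options below are invented for the example, but the point is that every build directory lives outside the source tree, which is never written to:

    import os
    import subprocess

    SOURCE_DIR = os.path.abspath("my_project")        # hypothetical source tree

    BUILDS = {
        "build-debug":    ["-DCMAKE_BUILD_TYPE=Debug"],
        "build-release":  ["-DCMAKE_BUILD_TYPE=Release"],
        "build-analysis": ["-DCMAKE_CXX_FLAGS=-Wall -Wextra"],   # e.g. extra checking
    }

    for build_dir, options in BUILDS.items():
        os.makedirs(build_dir, exist_ok=True)
        # Classic out-of-source invocation: run cmake from inside the build
        # directory, pointing it back at the (untouched) source tree.
        subprocess.check_call(["cmake", SOURCE_DIR] + options, cwd=build_dir)
        subprocess.check_call(["make"], cwd=build_dir)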
Yet why do so many IDEs take a totally divergent view on how the build should be organized? Why must the project file live in the root directory of the source tree? Why must I jump through hoops to build my software outside of the source tree?
Why is it that using an IDE automatically means that I have to use a brain-dead and limited concept of project organisation?
Come back make, come back gcc, come back vi. All is forgiven.
I am all for the idea that some tools should be "opinionated" - but you do need to be prepared for pain if you want to use two different "opinionated" tools together.
For that reason, we should think carefully about developing more tools that are less all-encompassing, that are more humble, more secular, and more flexible in their approach.
Wednesday 16 October 2013
Software security and reliability is a public interest issue
In response to complaints that there are not enough computer-security-trained professionals in the market today:
http://www.reuters.com/article/2013/10/14/security-internet-idUSL6N0I30AW20131014
It is not just a question of skills and human resources: We also need better defensive tools and techniques, which need to be made freely available, ideally packaged with common software development tools like GCC or the LLVM compiler framework (and switched on by default)!
It would be a terrible waste if these "cyber warriors" (what a ludicrous title) all sat in isolated silos, tasked with protecting individual organizations, when the (stupendous) cost of this defensive work could be easily amortized across companies (with incredible cost savings as a result).
We need better tools to analyse complex systems, particularly software systems, for security vulnerabilities, so that those vulnerabilities can be closed. This includes static analysis tools; fuzz testing tools and vulnerability analysis tools.
We need better professional licensing and certification processes, so that we can better control the quality & reduce the vulnerability of the systems on which we all rely.
We need security-oriented programming conventions, and software registers, so that security software can do its job more easily and more effectively.
We ALL have an interest in the reliability and trustworthiness of the complex systems that we rely on to power our infrastructure; our financial system; our workplaces. Nobody wants to gain competitive advantage because a competitor was targeted by a criminal. In a world dominated by unreliability and insecurity, it is only the criminals that win.
There is a HUGE incentive to work together here. In the public interest, let's do so.
Thursday 3 October 2013
Hubris and the complexity trap
I have been blessed with the opportunity to work with some fantastically bright people. A surprisingly large number of them become unstuck for the same reason: Hubris.
Overconfident in their own abilities, they engineer systems of dazzling cleverness; the very complexity of which turns them into snares that confound, befuddle, and ultimately humble their creators.
The true value of a person then becomes apparent. It is not intelligence, per se, but the humility to confront and acknowledge failure.
As Brian Kernighan once said: "Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it."
Simplicity is of paramount importance in a world that grows ever more complex, day by day.
Sometimes, you need a self-aware simple-mindedness to achieve the simplest, and therefore best, result.
Wednesday 2 October 2013
All power to the developer
The typing pool has been superseded by the word processor; clerical workers by the spreadsheet and the database. What used to require the management of men, now requires the management of machines.
Two very different tasks indeed.
Man-management is a very interesting, very challenging job. Marshalling and aligning a group of humans to meet a common objective requires the ability to handle the vagaries of human relationships: interpersonal conflicts, political machinations, uncertain and conflicting motivations, forgetfulness, imperfect communication and understanding ... the challenges that must be overcome form a long, long list. Indeed, one might be forgiven if the challenge of getting other people to do what you want rather masks the extent to which our objectives are woolly and ill-defined in the first place.
By contrast, the programmer's minions are infinitely more compliant: patient, predictable and indefatigable. The orders that we issue will be followed to the minutest detail. Indeed, the orders that we issue must be *specified* in the minutest detail.
Therein lies the rub.
The professions of programming and man-management both require the professional to make decisions and to delegate tasks. For the man-manager, the central challenge is to ensure that the tasks are completed; for the developer, the central challenge is to ensure that the tasks (and their consequences) are well enough understood in the first place.
This is surprisingly hard; especially the consequences bit, and especially in the face of complexity.
To what extent can our brains accommodate externally-directed actions without recognizing them as such?
The human brain is parallel and decentralized, yet we somehow manage to maintain the illusion that we are a single conscious entity, experiencing life in a sequential stream of thoughts and experiences.
This is clearly a lie that we tell ourselves. Various independent bits of brain make decisions and initiate actions entirely on their own, and somehow we manage to rationalize and confabulate and merrily deceive ourselves (after the fact) that we had some sort of explicit, sequential, conscious plan all along.
Whilst this model of human behavior is disturbing on a number of levels, it does have some interesting consequences when we consider the near-future technology of brain augmentation.
It is plausible that we could embed a bit of electronics into our brains; integrated so tightly that it is able to make decisions for us; to control our actions and influence our perceptions.
Would we feel that we were being controlled? Or would we integrate those decisions into our stream of consciousness: to confabulate a reality in which these decisions really were made by our conscious free will?
Will the perceptual pathways responsible for our self-perception bend to accommodate outside influences; to stretch the notion of self to accommodate the other? To allow us to believe that we initiated acts that were (in reality) initiated by others?
GUI-NG Desiderata
The command line may be powerful and flexible, but it is ugly to look at and difficult to learn. WIMP style graphical user interfaces look nicer, and are easier to pick up and use, but they don't really help you do anything other than the most basic and pedestrian of tasks.
C'mon guys, we should be able to do better than this. We should be able to design beautiful, powerful tools with a UI concept that combines the best of both worlds.
A beautiful, well-designed UI for the power user: the design sense of Apple, the good looks of Light Table and Sublime Text; the power and flexibility of Unix.
The Acme editor from Plan9 shows us how radical we can be. IPython also makes a decent enough effort in something approaching the right direction, but does not go nearly far enough.
The (Unix) command-line points the way: the idea of the OS as a programming and data analysis environment above all else; the ability to perform complex tasks by streaming data from one small program to another; the use of the keyboard as the primary user input mechanism. (Mice are an ergonomic disaster zone).
From WIMP graphical user interfaces, we should prioritize menus above all else. Menus help the motivated novice to discover tools and functionality in a way that the command line simply cannot. In today's time-constrained world, RTFM is a condescending cop-out. Nobody has the time. Discoverability is everything; the requirement for arcane knowledge (and the time required to achieve it) is both obsolescent and wasteful.
Windows, Icons and Pointers are secondary considerations - The command line and menus should take priority. They could (and arguably should) still exist; but they do not have to dominate the interaction paradigm in the way that they have done in the past, and should be there to support other functionality, not to provide a framework within which other functionality must fit.
Speaking of paradigms: the "desktop metaphor" that is so baked into the conventional user interface -- it is a redundant piece of skeuomorphism that has outlived its usefulness. Sure, we can keep the notion of "depth", and layer things over the top of one another in the UI, but the flat, 2D relationships between items -- connectedness, proximity, above/below, left/right -- these notions need to take priority and need to be given consistent semantic meanings in the "pattern language" that defines the UI that we create.
What does this all mean in practice?
Well, how about this: A primarily command-line driven interface, but with embedded widgets and graphics. I.e. emphatically NOT a tty - or anything resembling one. Something somewhat akin to tmux can act as a "window manager" - although it would have to be tmux on steroids. The edges and corners of the screen -- key pieces of real estate, all -- should harbour hierarchical menus galore: some static, always where you expect them, some dynamic, perhaps responding to a search, perhaps responding to recently run applications.
Above all else, applications need to work together. Textual commands help us to quickly and easily glue functionality together, pipes help us to move data around the systems that we build, and the graphical capabilities of our user interface allow us to diagram and explore the data-flow of our scripts and systems.
Friday 20 September 2013
Software & Hardware - How complexity and risk define engineering practice and culture.
Striking cultural differences exist between companies that specialise in software development and those that specialise in hardware development & manufacturing.
The key to understanding the divergence of these two cultures may be found in the differing approaches that each takes to risk and failure; driven both by a difference in the expected cost of failure and a difference in the cost associated with predicting failure.
One approach seeks to mitigate the impact of adverse events by emphasising flexibility and agility; the other approach seeks to minimise the chances that adverse events occur at all, by emphasising predictability and control.
In other words, do you design so you can fix your system quickly when it breaks (at the expense of having it break often), or do you design your system so that it very rarely breaks (but when it does, it is more expensive to fix)?
The answer to this question not only depends on how safety-critical the system is, but how complex it is too. The prediction-and-control approach rapidly becomes untenable when you have systems that reach a certain level of complexity -- the cost of accurately predicting when failures are going to occur rapidly becomes larger than the cost of the failure itself. As the complexity of the system under development increases, the activity looks less like development and more like research. The predictability of disciplined engineering falls apart in the face of sufficient complexity. Worse; complexity increases in a combinatorial manner, so we can very easily move from a predictable system to an unpredictable one with the addition of only a small number of innocuous looking components.
Most forms of mechanical engineering emphasise the second (predictive) approach, particularly for safety critical equipment, since the systems are simple (compared to a lot of software) and the costs of failure are high. On the other hand, a lot of software development emphasises the first (agile/reactive) approach, because the costs associated with failures are (normally) a lot less than the costs associated with development.
Of course, a lot of pejorative terms get mixed up in this -- "sloppy engineering" and "cowboy developers" vs "expensive failures" and "moribund bureaucracy" -- but really, these approaches are just the result of the same cost/benefit analysis producing different answers given different input conditions.
Problems mainly arise when you use the wrong risk-management approach for the wrong application; or for the wrong *part* of the application. Things can get quite subtle quite quickly, and managers really need to be on top of their game to succeed.
One of the challenges in developing automotive ADAS systems is that a lot of the software is safety critical, and therefore very expensive to write, because of all of the (necessary) bureaucratic support that the OEMs require for traceability and accountability.
Equally, a lot of the functionality for machine vision / Radar / Lidar signal processing is very advanced, and (unfortunately) has a lot of necessary complexity. As a result it is very, very costly to develop when using the former approach; yet may be involved in safety-critical functions.
This is not by any means a solved problem, and very much requires detailed management on a case-by-case basis.
Certainly testing infrastructure becomes much more important as the sensor systems that we develop become more complex. (Disclaimer: my area of interest). Indeed, my experience indicates that for sophisticated sensor systems well over 80% of the effort (measured both in hours of development & in the size of the code-base) is associated with test infrastructure, and less than 20% with the software that ends up in the vehicle.
Perhaps the word "test" is a misnomer here; since the role of this infrastructure is not so much to do V&V on the completed system, but to help to develop the system's requirements -- to do the "Data Science" and analytics that are needed to understand the operating environment well enough that you can correctly specify the behaviour of the application.
The key to understanding the divergence of these two cultures may be found in the differing approaches that each takes to risk and failure; driven both by a difference in the expected cost of failure and a difference in the cost associated with predicting failure.
One approach seeks to mitigate the impact of adverse events by emphasising flexibility and agility; the other approach seeks to minimise the chances that adverse events occur at all, by emphasising predictability and control.
In other words, do you design so you can fix your system quickly when it breaks (at the expense of having it break often), or do you design your system so that it very rarely breaks (but when it does, it is more expensive to fix)?
The answer to this question not only depends on how safety-critical the system is, but how complex it is too. The prediction-and-control approach rapidly becomes untenable when you have systems that reach a certain level of complexity -- the cost of accurately predicting when failures are going to occur rapidly becomes larger than the cost of the failure itself. As the complexity of the system under development increases, the activity looks less like development and more like research. The predictability of disciplined engineering falls apart in the face of sufficient complexity. Worse; complexity increases in a combinatorial manner, so we can very easily move from a predictable system to an unpredictable one with the addition of only a small number of innocuous looking components.
Monday 16 September 2013
Dstillery
Media 6 Degrees (the new owner of my former employer, EveryScreen Media) has changed its name to "Dstillery".
http://www.fastcompany.com/3017495/dstillery-is-picasso-in-the-dark-art-of-digital-advertising
This Fast Company article does a remarkably good job of explaining what the technology delivers. (Disclosure: I implemented the early mobile-phone IP-tracking algorithms when I worked at EveryScreen Media).
Whilst I do believe (from what I saw) that ESM/M6D/Dstillery take privacy very seriously; and will (continue to) behave in a responsible manner, I still feel a sense of unease when I think about the extent to which participants in the advertising industry are able to peer into people's personal lives.
To balance this (sort-of) criticism, I feel I should emphasise the fact that the advertising industry is, in general, a pile 'em high sell 'em cheap kind of affair, where advertising impressions are bought and sold by the million; where performance is measured in statistical terms; and where the idea of paying close attention to any one individual would be laughed off as a ludicrous waste of time.
However, it is possible that not all participants will feel that way, just as it is possible that not all participants will be as motivated to act responsibly as my former employers were.
There has been a lot of debate recently about NSA spying on what we read on-line and using our mobile phones to track where we go in the world. Well, you don't need anything like the NSA's multi-billion dollar budgets and mastery of (de)cryptography to do something that feels (like it could be) similarly invasive. (The advertising industry is an order of magnitude less creepy, but it is facilitated by the same social & technological developments, and is still heading in a very similar direction).
I think that this is something that we should think very carefully about, and just as we seek to find better ways to regulate our security services, so too we should (carefully; deliberately; deliberatively) seek to find ways to regulate the flow of personal information around our advertising ecosystem.
--
Edit: One thing that does bear mentioning -- the link up between M6D & ESM really is a smart move. From a data point of view, it is a marriage made in heaven (a bit of a no-brainer, actually); and I think that the insights that result from combining their respective data-streams will yield genuine benefits for their clients & the brands that they represent.
Wednesday 4 September 2013
Antisocial development
Human beings are social creatures; to the extent that our sanity can be undermined by isolation.
Much of our behaviour is guided and controlled by observing others around us. We instinctively imitate our peers, and follow normative standards of behaviour. As a result, I believe that the most effective learning occurs in a social context; with the presence of peers to shape and guide behaviours.
I also believe that learning is at the centre of what we do as software developers. Functioning software is seemingly more a by-product of that learning than the other way around.
It is therefore a great pity that modern development methods seem tailor-made to encourage developers to work alone; to minimize the need for contact with their peers and colleagues, and to reduce the need for social interaction.
For example, part of the appeal of the modern distributed version control system is that it allows individuals to work independently and alone, without requiring coordination or synchronisation.
It is possible that this has a rational basis. After all, Fred Brooks' analysis in "The Mythical Man-Month" suggests that the optimally sized team is a single, solitary individual, and Conway's Law also seems to suggest that maximum modularity is achieved by distributed teams.
Perhaps developer solitude really is the global optimum towards which we, as an industry, are headed. However, this clearly neglects the important social aspects of human behaviour, as well as our need to learn as a group.
I wonder if we will ever see a resurgence of tools that support and encourage face-to-face social interaction and learning, rather than obviate the need for it.
Friday 16 August 2013
Managing Complexity
Quoting from Dijkstra's "Go To Statement Considered Harmful" essay:
"My second remark is that our intellectual powers are rather geared to master static relations and that our powers to visualize processes evolving in time are relatively poorly developed."And from Moseley and Marks' "Out of the Tar Pit" essay:
"We believe that the major contributor to this complexity in many systems is the handling of state and the burden that this adds when trying to analyse and reason about the system."We find it difficult to reason about how system state changes over time, particularly when lots of things can modify that state. It therefore makes sense to (mentally) partition our systems into stateless and stateful parts.
If we can shovel most of the essential complexity into the stateless parts, we can make sure that this complexity is (at least) easily testable.
Equally, if we can keep the stateful parts of the program relatively free of complexity, then we can (at least) reason about (and control) how the state can change.
This argues for a system architecture that has parts that are programmed in an Object-Oriented manner as well as parts that are programmed in a pure-functional manner.
Objects enable us to make state private, and to limit and control how that state is manipulated (through public interfaces).
The use of pure functions makes it easier to reason about and test the more complex parts of system functionality.
It is when we mix statefulness and complexity that we really come unstuck.
Of course, it goes without saying that if you can avoid state you should, and if you can avoid complexity, you should ... but in my experience at least a little of both is usually required as a cost of doing business.
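To make the idea concrete, here is a minimal sketch in Python (a made-up order-pricing example, not anything from a real system): the fiddly logic lives in pure, stateless functions that are trivial to test, while the only mutable state is hidden behind a small class with a deliberately narrow set of public methods.

    # Stateless part: the complex logic lives in pure functions.
    def subtotal(items):
        """Sum of quantity * unit price over all line items."""
        return sum(qty * price for qty, price in items)

    def discount(amount):
        """Tiered discount rules: the fiddly, easily-testable complexity."""
        if amount >= 1000:
            return amount * 0.10
        if amount >= 500:
            return amount * 0.05
        return 0.0

    def order_total(items):
        """Pure composition of the pure parts."""
        gross = subtotal(items)
        return gross - discount(gross)

    # Stateful part: kept simple; state is private, transitions are limited.
    class Order(object):

        def __init__(self):
            self._items = []  # private; no public setter

        def add_item(self, qty, price):
            if qty <= 0 or price < 0:
                raise ValueError("invalid line item")
            self._items.append((qty, price))

        def total(self):
            # Delegate all of the complexity to the stateless functions.
            return order_total(self._items)

The stateless functions can be exercised exhaustively with plain asserts; reasoning about the Order object then reduces to reasoning about a single append-only list.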
Wednesday 10 July 2013
Google Logic - The Challenge of Agility
Here is an interesting Guardian article examining Google's management culture. It starts off by contrasting traditional long-term business planning and culture with the more "agile" approach that is exemplified by Google.
The article correctly notes that this approach to product development requires a culture that is fundamentally different from the culture and practices found at traditional (long-term-plan oriented) organizations.
These traditional cultures struggle to deal with shifting goals and a quickly changing competitive landscape.
In addition to these criticisms, I would like to add another of my own:
I do not think that these prediction & planning-oriented approaches are any good at dealing with complexity. This is particularly clear when faced with the complexity involved in the development of modern software systems.
The reasons are forehead-slappingly obvious:
Long-term planning, or indeed any planning at all, relies on making predictions. Making predictions is difficult. Traditional companies respond by trying harder, by limiting the scope of what they try to predict, or by deluding themselves that they are better at it than they are. The longer your time-frame and the more complex your plan or product, the worse your predictions become. Indeed, the difficulty of making predictions increases catastrophically quickly with the complexity of your product, making planning for all but the simplest product development little better than a wild and irresponsible gamble. Various cognitive biases compound this difficulty, making it hard for us to measure or even acknowledge complexity that we cannot see.
This is a very real and very serious multi-billion-dollar crisis, and its impact extends far beyond the field of software development, although this is where its effects are most easily and most keenly felt. How to escape from this tar-pit? How to deal with an unpredictable, complex, and fundamentally "unmanageable" world?
This is not easy.
It is clear that "agile/reactive" approaches are going to be a key part of the solution, but experience has shown that implementing an agile culture and achieving success is far from easy; it remains a long way from the turn-key exercise that we might wish it to be. Even Google, which the article holds up as the champion of this approach, is hardly the shining beacon that it is made out to be. Good enough, perhaps, but hardly exemplary.
The agile culture needs to be able to plan to reduce the impact of adverse events, without knowing in advance what they will be. Similarly, it needs to plan to increase the impact of favourable events, again without knowing in advance what they will be. At the same time, it needs to balance this with the need to keep on top of and maintain a complex and growing inventory of capabilities and skills; technical and human capital, all of which is susceptible to the forces of entropy and decay.
It has been said that developing software (in the large) is akin to growing a plant - it requires tending & guiding as it grows, rather than planning and "engineering" (in the bridge-building sense). I imagine very much that the senior leadership of Google feel the same way.
Friday 28 June 2013
The Next Big Thing
You might not have noticed it yet, but the leaves on the grapevine are rustling, and there is a whispering breeze in your ear. This, my friends, is the sound of the winds of change: gathering, growing, building momentum. There is a storm brewing, friends, and the game: It is afoot!
So many people: working quietly, diligently, are spear-heading that change, and so few of us realize that it is even happening, let alone comprehend the profound; monumental; world-changing implications of the technological tidal-wave that is coming our way.
Automotive industry dollars are funding this work; the dream of safer roads, of self-driving cars is driving it forwards, but this dream on four wheels is merely the tip of the iceberg: the gateway drug that leads us to an intoxicating, and perhaps slightly scary future.
What are self-driving cars but mass-produced autonomous robots? Autonomous vehicles and robots will be to my daughters' generation what computers and the internet were to mine; except this change will be bigger and far more wide-reaching, as the machines step off the desktop and climb out of the server-room, and march onto the streets; as our mobile phones and tablets sprout wheels and wings and legs and arms -- as our algorithms and neural networks and distributed systems stop passively sucking in click-stream data and twitter sentiment scores, and start actively participating in a world that they can see, hear and touch.
The physical world that my parents inhabited is not so very different from the physical world that I inherited, but the world that I pass on to my daughters will change so quickly, so rapidly, that I cannot begin to imagine what it will look like when they are my age, and working away on shaping a future for their children.
What a singularly monumental change is afoot! What a wonderful, exhilarating time to be alive! And how exciting to be a part of it!
http://bits.blogs.nytimes.com/2013/07/07/disruptions-how-driverless-cars-could-reshape-cities/?smid=tw-nytimes
Wednesday 26 June 2013
Thoughts on Compressed Sensing
Imagine that you have a sparsely sampled signal (with many missing measurements), and you want to recover a densely sampled version of the same signal.
If you can find a function that is capable of transforming the original signal into a representation that is also sparse (with many coefficients that are zero or negligibly small), then you have a good heuristic that you can use to find an approximation of the original signal: maximising sparsity in the transform domain. This is really just Occam's razor: if you have good reason to suppose that the data can be fit by a simple model, then choosing the simplest model that fits the data is an optimal guessing strategy.
Good examples of such functions include the Fourier transform or various wavelet transforms - indeed, any parametric approximation or model would probably do the job.
If the transform domain really is sparse, and each coefficient in the transform domain impacts multiple measurements in the original domain, then (with high probability) you will be able to guess the "right" coefficients in the transform domain by simply picking the set of coefficients in the transform domain which both reproduces the original signal and minimizes the number of non-zero components in the transform domain.
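As a rough illustration of that heuristic, here is a minimal numerical sketch (my own toy example, not taken from any particular paper), assuming a DCT sparsifying transform and the usual L1 relaxation of the sparsity objective, solved with plain iterative soft-thresholding rather than anything clever:

    import numpy as np
    from scipy.fftpack import dct, idct

    def soft_threshold(x, thresh):
        """Proximal operator of the L1 norm."""
        return np.sign(x) * np.maximum(np.abs(x) - thresh, 0.0)

    def recover(y, mask, n, lam=0.05, n_iter=500):
        """Estimate a length-n signal that is sparse in the DCT domain from
        the samples y observed at the positions flagged by the boolean mask,
        by approximately minimising
            0.5 * ||subsample(idct(c)) - y||^2 + lam * ||c||_1
        using iterative soft-thresholding (ISTA)."""
        coeffs = np.zeros(n)
        for _ in range(n_iter):
            # Forward model: dense signal from coefficients, then subsample.
            residual = idct(coeffs, norm='ortho')[mask] - y
            # Adjoint: embed the residual at the sampled positions, then
            # apply the analysis transform to get the gradient.
            grad = np.zeros(n)
            grad[mask] = residual
            grad = dct(grad, norm='ortho')
            # A unit step size is safe: the measurement operator has norm <= 1.
            coeffs = soft_threshold(coeffs - grad, lam)
        return idct(coeffs, norm='ortho')

    # Toy usage: two cosines, observed at roughly a quarter of the samples.
    n = 256
    t = np.arange(n)
    signal = np.cos(2 * np.pi * 7 * t / n) + 0.5 * np.cos(2 * np.pi * 31 * t / n)
    mask = np.random.RandomState(0).rand(n) < 0.25
    estimate = recover(signal[mask], mask, n)

Nothing here is specific to the DCT; any sparsifying transform with a computable adjoint can be slotted in.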
However, choosing the most parsimonious model is only one kind of regularization that we can (and should) deploy. We should also think about the set of coefficients that represents the most physically and statistically plausible model.
Thus, when choosing our approach to regularization, we need to consider three things:
* Parsimony (As represented by sparsity, approximated by the L0 and L1 norms).
* Physical plausibility (A transform domain closely related to a physical model is needed here).
* Statistical plausibility (A transform domain where coefficients are as independent as possible is needed here - to reduce the number and dimensionality of the joint distributions that we need to calculate).
These three requirements are not necessarily harmonious. Does that mean we need three different transforms? How do we reconcile differences when they crop up?
Friday 14 June 2013
Code review tool affordances
I have had a bit of experience using different code review tools: Gerrit most recently, and Atlassian's FishEye in the roles preceding that.
Strangely, the most effective code reviews that I have participated in happened at Sophos, where no tool at all was used: the rule was simply to get another developer to sit next to you when you did a commit, so you could talk him through the code line by line before submitting the change.
The experience there was that most (if not all) of the problems were spotted by the original developer, not by the reviewer, who frequently lacked the contextual awareness to identify anything deeper or more significant than the usual picayune formatting and layout errors. The real value came from being forced to re-examine and explain your work, controlling the searchlight of your attention into a more disciplined and fine-grained search pattern than is possible alone.
The other types of review to which I have been party:- asynchronous, distributed reviews mediated by a web tool of some sort, as well as formal half-dozen-people-in-a-room-with-slides style reviews have, in my experience, proven far less effective.
So, I sit here wondering if we can rescue the asynchronous distributed code review tool, either through an alternative approach or the application of a formal and disciplined approach of some sort ... or if it is doomed to more-than-uselessness?
Monday 3 June 2013
Dual functional & OO interfaces for program decomposition into stateless & stateful parts.
If you expose complex logic through a functional interface, it is (much) easier to test due to its statelessness and lack of access controls.
On the other hand, parts of the program that are necessarily stateful are best handled via OO interfaces, as this allows us to limit the number of possible state transitions. Access to the stateful variables themselves is restricted (as private), and state transitions are allowed only via a limited number of (public) methods.
Management of finite resources (file handles, memory etc...) is a simple stateful part of the program, and is handled neatly by RAII.
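In Python (the sketch below is a made-up example, and the with-statement is of course only an analogue of C++ RAII rather than the real thing), the same effect comes from tying a resource's lifetime to a lexical scope via the context-manager protocol, so the stateful part stays small and predictable:

    class ResultsFile(object):
        """Owns a file handle; acquisition and release are tied to a scope."""

        def __init__(self, path):
            self._path = path
            self._handle = None  # private state

        def __enter__(self):
            self._handle = open(self._path, 'w')
            return self

        def __exit__(self, exc_type, exc_value, traceback):
            self._handle.close()  # released even if an exception was thrown
            self._handle = None
            return False          # do not swallow exceptions

        def write_record(self, record):
            self._handle.write('%s\n' % record)

    # Usage: the handle cannot outlive the block, by construction.
    with ResultsFile('results.txt') as results:
        results.write_record('run-001 passed')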
Tuesday 28 May 2013
Types, Tensions and Risk.
The dichotomy between strongly typed languages (such as C) & dynamic scripting languages (such as Python) has already been debated & explored ad nauseam. This contribution, akin to flogging a horse long since dead and buried, will be of little interest to anybody, but it does help me clarify my own position on the matter, so here is my tuppence worth anyway:
Strong typing is supposed to buy you (the developer) a degree of protection from certain types of error. However, that protection comes at a significant cost, and really only provides a worthwhile benefit in a limited set of circumstances.
First of all you do not really reap the benefits of it *just* by using a strongly typed language. You really have to go out of your way to make use of the type system to get the benefits. You have to define your own types so that the system can enforce the non-convertibility of mutually incompatible data types. You also lose a helluva lot of flexibility, because every data item must be specifically typed.
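By way of a toy illustration (a sketch of the idea, not a recommendation): the point of defining your own types is to make incompatible quantities non-convertible; a static type system does this checking at compile time, while the equivalent trick in a dynamic language (shown here in Python) only fails at run-time.

    class Metres(object):
        """A distance in metres; deliberately not interchangeable with Feet."""

        def __init__(self, value):
            self.value = float(value)

        def __add__(self, other):
            if not isinstance(other, Metres):
                raise TypeError("cannot add %s to Metres" % type(other).__name__)
            return Metres(self.value + other.value)

    class Feet(object):
        """A different unit: mixing the two is an error, not a silent bug."""

        def __init__(self, value):
            self.value = float(value)

    runway = Metres(3200) + Metres(150)   # fine
    # Metres(3200) + Feet(150)            # raises TypeError instead of
    #                                     # silently producing nonsense

Every one of those little wrapper types is extra ceremony, of course, which is exactly the flexibility cost described above.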
Indeed, developers working with strongly typed languages waste a lot of time writing "generic" frameworks that do nothing else other than circumvent the type system; simultaneously losing the benefit of strong typing whilst also adding to the complexity and cognitive burden of others working on the same system.
Languages with weaker type constraints gain a *lot* of productivity simply by avoiding this whole "let's write something generic" song-and-dance routine. It also focuses the mind on alternative approaches to QA & testing. However, this additional flexibility & productivity often results in a *lot* more code being written (which must be maintained), a tendency which can quickly become self-defeating. (Sometimes what looks initially like exceptional productivity is just exceptional waste in disguise).
Strong typing, used properly, *does* provide value, under certain circumstances, in forcing decisions about data types to be made up-front, rather than deferred to a later stage. It also helps to document interfaces and to make them more specific.
It seems to me that weak typing, and the generic interfaces that we end up with as a result, are most useful when we have to defer decisions about the specific data types travelling through those interfaces until later in the development process. Strong typing, leading to the development of specific interfaces, is most useful when we are able to specify exactly what data types will be passing through our interfaces early on in the development process.
I.e. strong typing is more congruent with a predictive approach to risk-management, whereas weak typing is more congruent with a reactive / agile approach to risk-management. Of course, this is not a hard-and-fast rule, and successful adoption of either risk-management approach requires high levels of discipline.
Anyway, it is interesting to note the interplay between detailed technical decisions (Strong vs Weakly typed programming languages) and broader business decisions (Approach to risk management & decision scheduling).
Friday 10 May 2013
How to engineer an Artificial Intelligence
Intelligence is a defining human characteristic, so it is somewhat doubtful that we would ever be able to pin down what it is precisely enough to make an artificial version of it ... we will always try to define and redefine the term "intelligence" in such a manner that humans remain separated and protected as a separate and distinct class of entities ... our collective ego demands no less.
However, if we put our collective ego to one side, we can surmise that intelligent behaviour consists of some general purpose learning and behaviour-directing mechanisms sitting on top of a whole heap of special-purpose sensorimotor mechanisms.
Engineering an artificial system to emulate some or all of that behaviour is a monumental engineering task. Irrespective of the fundamental breakthroughs in learning, generalisation and planning that may or may not be required, we are still left with an exceedingly large and complex software engineering challenge.
In my mind, this represents the primary obstacle to the development of a true Artificial Intelligence: Not better science, but better software engineering practices and (particularly) better software engineering tools to help manage, visualise, understand and communicate complicated software systems.
Now, this is an interesting problem, because software engineering at a large scale is much less concerned with hard technical challenges and much more concerned with "soft" human challenges of communication and politics. The same can be said for any engineering challenge where the complexity of the subject matter, combined with its lack of visibility, makes communication and documentation a key technical challenge.
Let us look at the communication challenge first: One thing that is becoming more apparent in our increasingly information-rich environment is that communication is constrained much more by the time-management and motivation-management of the reader than by the availability of the information.
In other words, it is not enough just to make information available; you also have to manage the relationship between the reader and the information, so that information is not simply dropped in front of the reader, but presented in a way that supports learning and effective decision making.
How might one achieve this?
Sunday 14 April 2013
Procyclic
The Carnot cycle provides us with a fun little analogy that we can abuse and overextend in any number of ways:-
Energy is turned into work by harnessing and controlling a cycle of heating and cooling; expansion and contraction; increasing and decreasing pressure.
One side without the other provides only transient and worthless motion.
Brought together in opposition; made to alternate; managed and controlled in a reciprocating framework, these transient forces may be tamed and made to perform useful and beneficial work over an extended period of time.
Creative Cycles
In the same way, our innovative capacity does not produce new ideas in a single unbroken stream of creativity, but rather in fits and starts.
Contemplation and intensity must be brought together in alternating opposition:
Contemplation so that the mind may range wide, gathering inputs from disparate sources as one might pick blackberries from a brambly hedgerow.
Intensity so that the mind may delve deeply into the detail, working through the consequences of each decision; picking apart the idea to find its essence, and turning the idea over and over to find the right terminology, the right language, the right conceptual framework within which it may be best expressed and exploited.
One phase without the other is unproductive. Bring the two together in alternation, and you get productive work. Manage that alternation, and you can start to exploit our real potential for creativity.
Communication Cycles
There has been some discussion about the utilization and design of shared workspaces to maximise spontaneous communication (prompted by Marissa Mayer's edict banning remote working at Yahoo). In my mind, this focuses on only half of the solution: perhaps unsurprisingly, in light of the theme of this piece, I believe that communication and isolation must alternate.
When working in a totally isolated manner, one cannot determine what is needed; one does not know what is important. When sitting in the middle of a bustling open plan office, one often knows what is important, but one lacks the space and calm isolation needed to actually do anything about it. It is noteworthy that many effective organisations mandate specific periods, specific opportunities, for interaction, typically involving food or drink; perhaps we could also mandate specific periods for isolation and calm introspection, a "quiet time" each day free from phone calls, emails and other forms of communication.
This may be unnecessary, however, since the ubiquitous headphone-wearing developer sitting in an open-plan office provides a very flexible compromise, offering opportunities both for focussed work and for collaboration.
Economic Cycles
As with the small, so with the large. "The era of boom and bust economics is over" was a statement that ran not only counter to common sense, but possibly counter to some sort of law of statistical physics.
To believe that we have any real positive control over the economy is a dangerous conceit. A vague and sloppy influence, perhaps, but not control in any recognisable meaning of the word.
The presence of investment cycles for individual businesses as well as individual people is a well understood fact, as is the web of interactions that exists between those economic actors, and (so I would suppose) the presence of effects that promote the synchronisation of these individual cycles into macroeconomic feedback loops.
Why attempt to eliminate these macro-scale cycles by controlling government spending? Why not, instead, as our analogy suggests, control their amplitude and frequency so that the harm that they do is minimised?
Indeed, as Nassim Taleb points out, a small "bust" can fend off a larger and more damaging one. Targeted cyclical, and perhaps aperiodic, variations in the tax burden might help to stress (or threaten to stress) economic actors in a manner that promotes decoupling and desynchronisation, and inhibits catastrophic modes-of-failure; but, as stated above, it would be dangerously arrogant to assume that we really know what is going on, and what the effect of our actions is, or will be.
So we have to exert control, not in a direct and forceful manner, but in a way that tends to the ecosystem without seeking to dominate it.
Anyway, I am straying dangerously far from my area of expertise, and my intuition oft runs astray, so let me leave things here before I go too far off the beaten track.
Friday 5 April 2013
Order of magnitude variations in developer productivity .... for the same person!
I have been in situations where I was (very nearly) 10x more productive than anybody else in the team, as well as in situations where I was (frustratingly) considerably less productive than those around me. Looking back at the last decade or so, I can definitely see periods where my productivity dipped, as well as periods where I was able to maintain consistently outstanding results. The variance is astonishing and shocking.
Over a shorter time-scale, programming, like any other creative endeavour, has tremendous temporal performance-volatility. Performance "in the zone", when I am in a mental flow-state of high concentration is orders-of-magnitude better than when sitting in the doldrums, unable to perceive or engage with the natural semantics of the problem domain. Writers block (to an extent) happens to programmers, too.
There are a number of reasons for these variations:
Firstly, over the long term, sampling effects play a part in (relative) performance. You can expect the quality and dedication of the team that you are working with to vary significantly, so as you move from team to team your baseline for comparison swings all over the place.
Secondly, experience and tools. You are never going to perform as well when you are learning as you go - I am easily an order of magnitude more productive when using tools with which I am familiar than when trying to learn something new. (But, as per the technologist's typical Catch-22, you need to always be learning something new to stay relevant)...
Thirdly, personal circumstances:- commuting distance, family obligations, (small children), illness, living conditions. All of these have an impact on day-to-day mental alertness and ability to get "into the zone", although perhaps to a lesser extent than the other factors in this list.
Fourth, team dynamics. Some of my highest levels of productivity have been when operating as part of creative, collaborative partnerships, with another highly engaged team member to bounce ideas off, and to debate the merits of various approaches. This produces a creative dynamism that both improves the quality of the end product, and promotes active engagement in the process by both members of the "dynamic duo".
Finally, the big one: enthusiasm and engagement. This is really about organizational dynamics, leadership and psychology and is perhaps the hardest aspect to understand and control. For a programming task where attention to detail and mental engagement with complex systems is critical, the level of enthusiasm and engagement in the problem domain is critical for performance. In those roles where I have "lived the job", and spent every waking moment turning the problem-at-hand around in my head, dreaming about it when I fall asleep at night, I have obviously and significantly outperformed, in comparison to those roles where the job feels like an endless (and pointless) grind with no end in sight. You have to believe in the mission to perform, and that is as much an (external) function of leadership as your own (internal) reserves of fortitude and grit.
In summary, the biggest effect on (long term) performance is probably the existence of total, absolute and life-consuming dedication to the task at hand, as it promotes rapid learning and extended periods in flow-state. Inspirational leadership and creative partnerships do an enormous amount to encourage and support this level of engagement in the work environment, whilst suitable domain expertise and knowledge helps to reduce frustrations and remove barriers to progress. Finally, an absence of confounding factors and distractions in the out-of-work environment also helps.
So, a lot of it is how the job, the organization and the developer fit together. The 10x thing is not about innate skill (except in a statistical sense). As a leader, there are definitely a large number of things that you can do to increase the probability that members of your team will perform at their peak.
Thursday 28 March 2013
Wednesday 27 March 2013
On the importance of attention-to-detail in software tools development.
As the Apple example demonstrates, attention to detail, in aggregate, results in a superior product, which enables you to justify charging a premium price.
This truism is particularly apt when it comes to tools, because the affordances and micro-features that one's day-to-day tools offer have a huge impact on the way that we do our day-to-day job. The extent to which we developers follow the path of least resistance is, in many ways, quite sad, but this most human of characteristics places incredible power in the hands of the tool-makers. Best practice is often defined by the tools that we use. As a crude example, witness how Hungarian notation fell out of favour as soon as popular IDEs started to display "type hints" when the mouse hovered over the variable.
Make it slightly easier to do something, or to arrange text in a particular way, and developers will change their behaviour in response.
For many years (2001-2011) I used TextPad on Windows, and came to particularly appreciate its block select mode when managing vertically aligned characters. The shortcut for this feature (Ctrl+Q, B) has been burned irreversibly into my motor cortex. I used this feature not only to manage vertically-aligned assignment statements, but also to quickly add blocks of end-of-line comments, and to support a declarative programming style that relied heavily on in-source tables of parameters & other data.
This declarative approach to software construction had a particularly pleasing synergy with the MATLAB language and the uncomplicated, linear, loop-free aesthetic which is made possible by its impressive collection of array-oriented library functions.
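For what it is worth, the style I mean looks something like the sketch below (a made-up example, and in Python rather than MATLAB): the behaviour lives in a vertically aligned, in-source table, and the logic that consumes it stays short.

    # Channel name      gain    offset   enabled
    CHANNELS = (
        ('front_left',   1.00,    0.0,    True),
        ('front_right',  1.00,    0.0,    True),
        ('rear_left',    0.85,   -0.5,    True),
        ('rear_right',   0.85,   -0.5,    False),
    )

    def calibrate(name, raw):
        """Apply the per-channel calibration declared in the table above."""
        for channel, gain, offset, enabled in CHANNELS:
            if channel == name and enabled:
                return raw * gain + offset
        return None

Block select (and the whitespace discipline it encourages) is what makes tables like this cheap to keep aligned as they grow.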
More recently, (2011-2013) I have been using Sublime Text as my editor-of-choice. (As well as a little vim, when I have no choice). Whilst the multiple selection feature of Sublime Text is super-duper-awesome in its own way, I still miss TextPad's block select feature, particularly its ability to automatically insert whitespace at the end of lines, allowing one to easily reclaim blocks of space to the right of the logic for descriptive commentary. (80 column limits be damned as an onions-in-the-varnish anachronistic throwback).
Anyway, seemingly small things, but still important: what seem like trivial affordances really do shape the way that one thinks and works. Over the past year I switched away from TextPad + MATLAB towards Sublime Text & vim + Python, and the combined strictures of PEP8 & PyLint, together with the subtly different affordances of Sublime Text vs. TextPad, steered my development style away from the declarative parameter-driven approach towards a more "traditional" programming style, to the moderate detriment of the "quality" of the applications that I produced.
Despite Python's numerous, obvious and justifiable claims to superiority, there exists an ineffable and emergent quality that language and tools together supply; a quality that was not (quite) captured in the tools that I was using for the past year.
Another observation: When moving the caret quickly along a line of text using the arrow keys with the "ctrl" key pressed down, the cursor in the MATLAB editor stops very slightly more frequently than other editors, resulting in a slightly "sluggish" feel to the navigation. Paying attention to matters such as this is the equivalent, to a tool-maker, of the auto-maker paying attention to the "thunk" noise that the car door makes when it closes. A feature that operates almost on a subliminal level to create the impression of quality, and, to the tool-user, the feeling of "floating" through the text file making changes, rather than the feeling of wading through a muddy swamp.
Another observation: Little bits of automation to make life easier. When I highlight a word in Sublime Text and press the open-parentheses "(" key, the word is surrounded in parentheses. When I do the same in the MATLAB editor, the word is deleted and replaced by the open parenthesis character.
I am a craftsman. I want to love my tools. MATLAB has elements of greatness, but a large number of flaws as well. Some of these flaws cannot be addressed without making the tool something other than what it is.
Fair enough.
But some can be addressed with passion, attention-to-detail, design sense, and strong, detailed technical leadership.
This truism is particularly apt when it comes to tools, because the affordances and micro-features that one's day-to-day tools offer have a huge impact on the way that we do our day-to-day job. The extent to which we developers follow the path of least resistance is, in many ways, quite sad, but this most human of characteristics places incredible power in the hands of the tool-makers. Best practice is often defined by the tools that we use. As a crude example, witness how Hungarian notation fell out of favour as soon as popular IDEs started to display "type hints" when the mouse hovered over the variable.
Make it slightly easier to do something, or to arrange text in a particular way, and developers will change their behaviour in response.
For many years (2001-2011) I used TextPad on windows, and came to particularly appreciate it's block select mode when managing vertically aligned characters. The shortcut for this feature (Ctrl+Q, B) has been burned irreversibly into my motor cortex. I used this feature not only to manage vertically-aligned assignment statements, but also to quickly add blocks of end-of-line comments, and to support a declarative programming style that relied heavily on in-source tables of parameters & other data.
This declarative approach to software construction had a particularly pleasing synergy with the MATLAB language and the uncomplicated, linear loop-free aesthetic which is made possible by it's impressive collection of array-oriented library functions.
More recently, (2011-2013) I have been using Sublime Text as my editor-of-choice. (As well as a little vim, when I have no choice). Whilst the multiple selection feature of Sublime text is super-duper-awesome in it's own way, I still miss TextPad's block select feature, particularly it's ability to automatically insert whitespace at the end of lines, allowing one to easily reclaim blocks of space to the right of the logic for descriptive commentary. (80 column limits be damned as an onions-in-the-varnish anachronistic throwback).
Anyway, seemingly small things, but still important: What seem like trivial affordances really do shape the way that one thinks and works. For the past year I switched away from Textpad + MATLAB towards Sublime Text & vim + Python, and the combined strictures of PEP8 & PyLint, together with the subtly different affordances of the Sublime Text vs. TextPad steered my development style away from the declarative parameter-driven approach towards a more "traditional" programming style, to the moderate detriment of the "quality" of the applications that I produced.
Despite Python's numerous, obvious and justifiable claims to superiority, there exists an ineffable and emergent quality that language and tools together supply, a quality that was not (quite) captured in the tools that I have been using for the past year.
Another observation: when moving the caret quickly along a line of text using the arrow keys with the "ctrl" key held down, the cursor in the MATLAB editor stops very slightly more frequently than it does in other editors, resulting in a slightly "sluggish" feel to the navigation. Paying attention to matters such as this is, for a tool-maker, the equivalent of the auto-maker paying attention to the "thunk" that the car door makes when it closes: a feature that operates almost on a subliminal level to create the impression of quality, and, for the tool-user, the feeling of "floating" through the text file making changes, rather than the feeling of wading through a muddy swamp.
Another observation: little bits of automation to make life easier. When I highlight a word in Sublime Text and press the open-parenthesis "(" key, the word is surrounded in parentheses. When I do the same in the MATLAB editor, the word is deleted and replaced by the open-parenthesis character.
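Sublime Text provides this behaviour out of the box; what strikes me is how little logic the affordance actually requires. A toy plugin along the following lines (a sketch only, not how the editor actually implements it) would do much the same job:

```python
import sublime_plugin


class WrapSelectionInParensCommand(sublime_plugin.TextCommand):
    """Replace each non-empty selection with the same text wrapped in ()."""

    def run(self, edit):
        # Work backwards through the selections so that earlier replacements
        # (which each add two characters) do not shift the offsets of the
        # regions still to be processed.
        for region in reversed(list(self.view.sel())):
            if not region.empty():
                text = self.view.substr(region)
                self.view.replace(edit, region, "(" + text + ")")
```

A few lines of automation, yet it is exactly this sort of detail that separates tools one merely uses from tools one loves.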
I am a craftsman. I want to love my tools. MATLAB has elements of greatness, but a large number of flaws as well. Some of these flaws cannot be addressed without making the tool something other than what it is.
Fair enough.
But some can be addressed with passion, attention-to-detail, design sense, and strong, detailed technical leadership.
Confidence and Competence and Information Overload
The internet is a wonderful thing.
Every problem has a thousand solutions: documented; discussed; dissected.
This sumptuous surfeit of suitable solutions seems satisfactory; superb even, save for one small snag:
My effective competence, my ability to work quickly and solve problems, is driven largely by my confidence in my own abilities, my own evaluation of the completeness of my knowledge versus the task at hand.
When I was younger, the arrogance of youth made this easy. I plowed on ahead, ignorant of my own ignorance; I performed, gained accolades, and life was good.
However, faced with the ever-rising flood-waters issuing from the fire-hose of information that is the internet today, it is all too easy to start comparing what I know with what I could know, which, expressed as a ratio, is always going to be depressingly close to zero. With self-confidence thus undermined, the inevitable consequence is that I lose sight of the fact that I have enough information at hand to get on with it, without worry or concern, and instead get dragged down into the rabbit-hole.
This problem is compounded when one tries to pick up a new tool or technology, an activity that (given the pace of technological advance) one needs to do more-or-less continuously.
Now, I love learning, but I cannot resist the temptation to compare my knowledge and level of expertise in the latest tool with my knowledge and level of expertise in the last, and, in the worst case, one or two weeks' worth of experience is never going to compare favourably with six or seven years.
In situations such as this, I have to keep telling myself to HTFU and demilquetoastify my attitude.
Tuesday 19 February 2013
Batteries Included
Software developers seem to be significantly more productive today than they were even a few years ago. What has happened? Have techniques improved? Have skill levels increased? Are best-practices being more closely adhered to?
One cannot easily isolate any one factor as bearing sole responsibility for the performance improvements, but I suspect that a great deal of our gratitude should be directed towards improvements in the availability, quality and performance of software libraries, together with growing communities around those libraries, promoting education and adoption in the wider engineering community.
After all, the one thing that makes software development so different from other disciplines is the tremendous cost saving available through reuse. The less logic your organization is responsible for, the lower the expenditure on development, documentation and maintenance. The main expense incurred when using tools and libraries is in education and training. This is perhaps not something that we as an industry have fully adjusted to: neither in the way that we (as professionals) specialize, nor in the way that we (as organizations) recruit new talent or nurture existing talent.
In fact, the notion of talent as something that is exclusively innate and inimitable is a dangerous one. When one makes an experienced hire, the commodity that one is purchasing is (primarily) knowledge, rather than the commission of work or effort to be expended; more precisely, the knowledge required to not have to do the work at all. Perhaps we need to introduce new work and employment patterns beyond the traditional notions of contractor and permanent employee?
Tuesday 15 January 2013
Personal Statement
My passion for machine vision, machine learning and statistical pattern recognition is longstanding, having started over 10 years ago and continuing today.
Most of my undergraduate AI degree was oriented towards logic, theorem proving, and computational linguistics, which was fascinating in its own right, but did not strike me as a particularly realistic or pragmatic way of dealing with the messiness and complexity of the real world. As a result, I latched on to the (at the time) less mainstream "soft" computing approaches with enthusiasm, devouring the content of the machine learning, machine vision and neural networks modules avidly. I saw these approaches as a pragmatic alternative to the hard and inflexible grammar-based approaches to Natural Language Processing espoused by the main body of the department.
This view of machine learning as a pragmatic tool, at odds with ivory-tower academicism has stuck with me ever since, even as the subject has become more mainstream (and more academic and mathematically sophisticated). As a result, I tend to focus on simple techniques that work, rather than techniques which demonstrate mathematical chops and academic sophistication. I am fortunate in this regard, because, paradoxically, the solutions to difficult problems are often conceptually simpler and mathematically less sophisticated than the "optimum" solutions to simple problems. Perhaps a little bit of the Yorkshire/Lancashire culture of engineering pragmatism rubbed off on me during my time in Manchester.
Another thing that was dawning on me as I finished my undergraduate degree was the importance of scale. As I attempted to find datasets for my hobby projects (far harder back then than it is today), I began to suspect that scale, rather than any qualitative leap in understanding, was going to be a key factor in the development of genuinely interesting artificial intelligence techniques. From this came my interest in machine vision, which I saw as a key "gateway" technique for the collection of data -- to help the machine build an understanding of the world around it and "bootstrap" itself to a more interesting level.
I was lucky with my first employer, Cambridge Research Systems, where I had the opportunity to work with some very talented people, both within the company and across our customer community. From that experience, and the abortive neuroscience PhD that I started, I learned a lot about the neuroscience of biological visual systems, particularly the older, lower-level pathways that go not to the primary visual cortex, but to the evolutionarily older "reptilian" parts of the brainstem. In contrast with the "general purpose" and "reconfigurable" nature of the cortex, these older pathways consist of a large number of (less flexible) special-purpose circuits handling things like eye movements and attention-directing mechanisms. Crucially, these lower-level circuits enable our visual system to stabilise, normalise and "clean" the data that we present to our higher-level cortical mechanisms. This insight carries across well to more commercial work, where solid groundwork (data quality, normalization and sampling) can make or break a machine learning implementation. I was also fortunate enough to pick up some signal processing and FIR filter design fundamentals, as I was writing software to process biological time-series signals (EOG) to identify and isolate events like saccades and blinks.
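For flavour, the shape of that kind of processing can be sketched in a few lines of Python/SciPy. This is not the original code; the filter length, cutoff and velocity threshold below are placeholder values chosen purely for illustration, but the structure (FIR smoothing, then differentiation, then a velocity threshold) is the essence of a simple saccade detector.

```python
import numpy as np
from scipy.signal import firwin, filtfilt


def detect_saccade_samples(eog, fs, velocity_threshold=30.0):
    """Mark samples where the smoothed EOG velocity exceeds a threshold.

    eog: 1-D array of eye-position samples; fs: sampling rate in Hz
    (assumed comfortably above twice the filter cutoff). The tap count,
    cutoff and threshold are illustrative placeholders, not tuned values.
    """
    # Low-pass FIR filter to suppress measurement noise before differentiating.
    taps = firwin(numtaps=51, cutoff=30.0, fs=fs)
    smoothed = filtfilt(taps, [1.0], eog)

    # Differentiate to estimate velocity (units per second), then threshold.
    # A real pipeline would also handle blinks, drift and edge effects.
    velocity = np.gradient(smoothed) * fs
    return np.abs(velocity) > velocity_threshold
```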
At around this time, I was starting to become aware of the second important thread in my intellectual development: the incredible slowness of software development, and the difficulty and cost that we would incur in trying to implement the large number of lower-level stabilization mechanisms that would be required.
I left Cambridge Research Systems specifically to broaden my real-world, commercial software development experience, working at a larger scale than was possible at CRS. Again, I was lucky to find a role with Sophos, where I learned a great deal from a large group of very talented C++ developers doing Test Driven Development in the highest-functioning Agile team I have yet encountered. Here, I started to think seriously about the role of communication and human factors in software development, as well as the role that tools play in guiding development culture, always with an eye to how we might go about developing those special purpose data processing functions.
Following a relocation closer to London (for family reasons), I left Sophos and started working for Thales Optronics. Again fortunate, I found myself working on (very) large scale machine vision applications. Here, during a joyous three year period, I was able to put much of my previous intellectual development and thinking into practice, developing not only the in-flight signal processing, tracking and classification algorithms, but more significantly, the petabyte-scale data handling systems needed to train, test and gain confidence in them. In addition to the technical work, I worked to encourage a development culture conducive to the development of complex systems. This was the most significant, successful and rewarding role I have had to date.
Unfortunately, budgetary constraints led Thales to close their office in Staines, and rather than transferring to the new office, I chose to "jump ship" and join Fidelity Asset Managers in the City of London, partly in an attempt to defeat some budgetary constraints of my own, and partly out of an awareness of the potential non-transferability of defense industry expertise, made more pressing by an impending overseas relocation.
At Fidelity, I used my knowledge of the MATLAB distributed computing toolbox to act as the High-Performance Computing expert in the "quant" team. I gained exposure to a very different development culture, and learned a lot about asset management and quantitative investing, gaining some insight into the accounting and management factors that drive development culture in other organizations. I particularly valued my exposure to the insights that Fidelity had, as an institutional investor, into what makes a successful organization, as well as its attempts to apply those insights to itself.
Finally, in 2011, my family's long-expected overseas posting came. Yet again we were incredibly lucky, and got to spend a wonderful year and a half living in the middle of New York City. I was fortunate, and managed to get a job at an incredible Silicon Alley startup, EveryScreen Media, which was riding the wave of interest in mobile advertising that was just beginning to ramp up in 2011 and 2012. Again finding myself working with incredibly talented and passionate colleagues, I was given the opportunity to broaden my skills once more, picking up Python and Unix development skills, becoming immersed in (back-end) web development, and building out the data science infrastructure in an early-stage startup. From that year and a half, I particularly value what I learned about how to develop large-scale, scalable, distributed real-time data processing systems and the effective use of modern internet "web" technology.
Now, back in the UK, I am in search of the next step on my journey of learning and discovery. My focus is, and remains, on the pragmatics of developing complex statistical data processing systems: how to create and curate large data-sets, and how to integrate them into the development process, so that continuous integration, continuous testing, visualisation and monitoring help the development team to understand and communicate the system that they are building, as well as the data that feeds it; to respond to unexpected behaviours; and to steer the product and the project to success, all while helping to ensure that the organization remains rightly confident in the team and in the system.
Thursday 10 January 2013
A Network Model for Interpersonal Communication
Modeling interpersonal communication within an organization as a network of reconfigurable topology, composed of high-capacity data stores connected by limited-bandwidth communication channels.
The Model:
The amount that we know on any given topic of interest vastly outweighs our practical ability to communicate that information in a reasonable time-frame. We simply do not have the time or the available bandwidth to communicate everything that we need to in the detail that the subject deserves. Our model reflects this: the data storage capacity at each node is immense, and contrasts sharply with the exceedingly limited bandwidth available for communication between nodes. The difference between the two is many orders of magnitude. For a visual analogy, we should not picture buckets connected by hosepipes, but rather half-million-ton supertankers connected by thin cocktail straws. Transmitting even a gallon of knowledge is a challenge.
Chinese Whispers:
When communicating with distant nodes with messages routed through intermediary nodes, the information being transmitted is compressed to an incredibly high degree with a very lossy and low-quality compression algorithm. The poor quality of the communications channel is particularly evident when the network encompasses a diverse range of backgrounds, cultures and terminological-linguistic subtypes. In many such cases the intent of the message can easily be inverted as relevant details are either dropped or misinterpreted in transmission.
Systematic Factors impacting efficacy of communication:
The options available for compression are greater when two neighboring nodes already have a great deal in common: shared datasets, terminology, mental models and approaches to communication can be used to elide parts of the message, reducing bandwidth requirements and allowing communication that is both more reliable and more rapid. As a result, communication within an organization that has a strong, unified "culture" (common knowledge, terminology and practices) will be far more effective than communication within an organization whose "culture" is less cohesive, purely because the options for message compression are greater, irrespective of any other measures that the organization might put in place to improve the available bandwidth. It is worth noting that, whilst a strong culture improves the situation considerably, the problem itself is fundamental and always presents a significant challenge.
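A toy simulation makes both points concrete: every intermediary hop loses information, and shared context reduces how much must be squeezed through the channel in the first place. The numbers below are invented purely to illustrate the direction of the effect; they are not empirical.

```python
def transmit(message_bits, channel_capacity_bits, shared_context=0.0):
    """Fidelity of a message after one hop over a narrow channel.

    shared_context in [0, 1) models how much of the message can be elided
    because sender and receiver already share datasets, terminology and
    mental models. Whatever does not fit through the channel is simply lost.
    """
    must_send = message_bits * (1.0 - shared_context)
    delivered = min(must_send, channel_capacity_bits)
    return delivered / must_send if must_send else 1.0


def relay(message_bits, hops, channel_capacity_bits, shared_context=0.0):
    """Fidelity after passing through a chain of intermediary nodes."""
    fidelity = 1.0
    for _ in range(hops):
        fidelity *= transmit(message_bits, channel_capacity_bits, shared_context)
    return fidelity


# One direct hop vs. three hops, with and without a strong shared culture.
print(relay(1000, hops=1, channel_capacity_bits=200))                      # ~0.20
print(relay(1000, hops=3, channel_capacity_bits=200))                      # ~0.008
print(relay(1000, hops=3, channel_capacity_bits=200, shared_context=0.7))  # ~0.30
```

Crude as it is, the sketch shows why fewer hops and more shared context both matter, and why the two approaches described in the next section work.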
Organizational Optimization for Effective Communication:
Given that there is nothing inherently fixed about the topology of the communications network within which we are embedded, one simple response to this problem is to remove the intermediary steps between source and destination nodes, allowing the source node to connect directly with the destination node and permitting a direct and relatively high-bandwidth exchange. This is even more effective if the exchange is bidirectional, permitting in-situ error correction. This argues for a collaborative approach to organization, with technical experts communicating with one another directly, with no intermediary communications or management specialists.
Another approach is to optimize for effective compression of the messages being transmitted through the network. As noted above, this relies on a common terminology, a common knowledge base, and a common set of practices and approaches to problems. In other words, most of the communication is moved out-of-band -- communicated through various channels before the point at which it is actually needed. Again, this is well aligned with the collaborative approach to organization, where the technical experts within an organization continually educate one another in their own areas of expertise so that when they need to communicate quickly, they can do so both effectively and reliably.
A Common Antipattern: Spin Doctors and Office Politicians:
Of course, the approaches outlined above are not the only responses to this fundamental problem. However, I argue that at least some of the alternatives are anti-patterns, detrimental to the long-term health and development of the organization.
Many individuals craft the messages they send extremely carefully to minimize the probability of corruption or misinterpretation en route, partly by reducing the information content of the message, and partly by crafting the emotional tone to remove ambiguity. This process is colloquially known as "spin", or "crafting the message".
In some situations this approach may even be appropriate, for example: where the network is large and cannot be reconfigured so that messages must be transmitted over more than one "hop"; where the environment for communication is particularly disadvantageous, with little common culture, terminology, or background technical knowledge; and finally, where long-term systematic improvements must be subordinated to short-term goals.
However, there are a couple of significant drawbacks to this approach. Firstly, by restricting knowledge transfer, opportunities to grow a base of common knowledge and understanding are squandered. Secondly, this approach is in direct conflict with cultural norms that emphasize honesty and transparency in interpersonal communication, the adherence to which builds a basis of trust and mutual understanding that further enhances communication.