Wednesday 27 June 2012

Apple and strategic capability development

I just finished reading Someone is coming to eat you by Rands in Repose.

It is an interesting analysis of Apple's strategic stance. In particular, the comment about Tim Cook and the Apple operations team resonates strongly with me.

Perhaps I am reading too much into it, but here I imagine that I can see an organization that, yes, has a legendary focus on product design, but also has the ability and the intelligence to stand back and design an organizational capability that acts as a strategic competitive differentiator.

I think that there are two complementary strands at work here. The first is simplicity. The organization acknowledges that simplicity is hard, and works hard to achieve it, not only in product design but also in the number of products and (presumably) in organizational structure.

Having achieved that level of simplicity, the organization can then stand back from the coal face and engineer itself into a better competitive condition by creating one or more world-class capabilities as organizational differentiators.

In the article, the operations team is picked out as an example. To me, this has a little of the flavor of the discussion around "product line" engineering, but it feels more fundamental: bigger in scope, and reaching far further into the heart of the company.

Prompted initially by Conway's law, many years ago, I have become increasingly convinced that we need to think of the structure, organization and culture of the enterprise as the foundation upon which the design of its products is laid, and that a single vision needs to shine through all of these aspects, from management and financial reporting through to organizational culture, so that together they build a set of strategic capabilities that set the enterprise apart.

Behavioral aspects of Software Engineering

I recently started reading "Thinking, Fast and Slow" by Daniel Kahneman. This book, aside from being awesome, describes a lucid framework for analyzing human thought.

Although I have so far digested only the first couple of chapters, I feel compelled to write about the parts of Kahneman's thesis that resonate with me the most.

His basic premise is that we can better understand human thinking if we decompose our abilities into two categories. "System 1", the "fast" system, corresponds loosely to "gut instinct", whereas "System 2", the "slow" system, corresponds loosely to "concentrated reasoning".

Immediately, I have a mental image of the fast System 1 as a highly parallel, wide-but-deep, horizontally-scaled collection of special-purpose systems: a classic winner-takes-all architecture.

In contrast, the slow system seems rather less easily understood. I hesitate to de-emphasize parallelism, because virtually everything in biological neural systems is highly parallel, but it seems, intuitively, to have a fundamentally iterative aspect, involving some kind of interplay between working memory and decision-making/pattern-recognition systems.


The fast System 1 operates more often than we think it does and consumes much less energy, but has a tendency towards bias and over-simplification of the world. The slow System 2 operates much less frequently than we think it does, consumes much more energy, and requires mental effort and concentration, but is able to perform rather more complex, sequential, algorithmic tasks than the fast System 1.


Anyway, my excitement does not derive from this indulgence of my mechanomorphistic instincts, but rather from the application of this analysis to my favorite topic: Software development and developer productivity. 


My basic thesis is that highly skilled, highly productive developers make more extensive use of the fast System 1, while less productive developers rely more heavily on the slow System 2.


The degree to which System 1 is preferred is a function both of experience and of the tools that are available to the developer. When faced with a new task or when one finds oneself in a new environment, frequently one has to concentrate very hard to get anything done; performing novel tasks requires a high level of slow System 2 involvement. As one acclimatizes oneself to the new problem or environment, one builds up "muscle memory", or the cognitive equivalent, and System 1 becomes gradually trained to take over more and more of the task.


To reiterate a point from one of my previous posts: software engineering is fundamentally a learning process. One comes across new problem domains, languages, tools, datasets and so on all the time. A really good set of developer tools will revolve around supercharging our ability to learn: to absorb new systems, new libraries, new concepts and new mathematics; to understand legacy code; to see patterns in data.


Personally, I have a very visual brain; I have always enjoyed drawing and painting, and have found pleasure in the intricacies of models and 3D structures. I always studied for exams by drawing diagrams, because I could remember pictures far more easily than words. For me, the ideal developer tools will take the semantic content of the subject under study and translate it into rich, intricate visual images.

Not everybody has such a focus on visual information (although many do), so visualization is only one possible approach out of many.

The overarching motivation behind tools like these is, in my mind, to translate concepts into a form that enables the developer to engage his instincts, his fast System 1 thinking, to learn the necessary domain-specific skills quickly and frictionlessly.

What representations do you like? What conditions enable you to quickly learn to perform at a high level? How do you make yourself better at what you do?

Sunday 24 June 2012

OOP Product vs Product-Line Modeling

In response to:

http://www.jmolly.com/2012/06/24/functional-and-object-oriented-programming.html

Everything that jmolly says is sincere, and rings true enough, but decoupling data and functional behavior is only part of it. Indeed, OOP was designed specifically to couple data and behavior together, so some people (at least) think that this is a good thing.

In my mind, the difficulty lies not with OOP per se, but in the thought-patterns and practices that have grown up around modern OOP, and in the mental confusion that we have between the construction of models, the design of algorithms, and the creation of machines to help solve a particular problem. To a large extent, we lack the intellectual tools required to understand the problem; or, at least, the use and understanding of those tools is not widespread.

Personally, I suspect that there is a significant difference between product-centric thinking (design the product, ignore all else), and product-line-centric (or capability-centric) thinking (design the machinery that will make it trivially easy to design the product). This, for me, is a more important distinction than OOP vs FP, or any other dichotomy that you might care to mention.
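
To make the distinction concrete, here is a deliberately toy sketch in Python (all of the names are hypothetical, invented purely for illustration): the product-centric model hard-codes a single product, while the product-line-centric model builds the machinery from which each concrete product is just a small declarative variation.

    # Product-centric: one concrete product; variations are an afterthought.
    class PdfReportGenerator:
        def generate(self, data):
            return "PDF report for " + str(data)

    # Product-line-centric: the "machinery" is a small family of composable
    # parts; a concrete product is just a configuration of them.
    RENDERERS = {
        "pdf":  lambda body: "PDF(" + body + ")",
        "html": lambda body: "<html>" + body + "</html>",
    }

    def make_report_product(renderer, sections):
        """Build one member of the product family from a declarative spec."""
        def generate(data):
            body = "; ".join(s + ": " + str(data[s]) for s in sections)
            return RENDERERS[renderer](body)
        return generate

    # Two "products" from the same machinery, each specified declaratively.
    weekly_pdf = make_report_product("pdf", ["sales", "costs"])
    daily_html = make_report_product("html", ["sales"])
    print(weekly_pdf({"sales": 10, "costs": 4}))
    print(daily_html({"sales": 10}))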

When we focus our OOP modeling efforts on the product itself, it becomes easy to fail to model potential variations in that product. To compensate, we attempt to use "best practices" and "design patterns" to ensure that we produce software that is flexible and malleable, but because we do so without any concrete reference point, we end up lost, making software that is too abstract and too complex.

We need to take a step back, and realize that software development is not abstract. It is concrete and is embedded in a social, political and business environment. It only appears to be abstract because there is so little communication between the software itself and the other stakeholders in the environment.

Rightly, we do not start our engineering effort by writing code. We start by engineering the social conditions and the organizational structures first.

Once this is done, and the goals and objectives of the organization are well aligned with those of the development effort, writing the software becomes rather more straightforward.

See also Conway's Law.

I am not suggesting that I am right here, mind you: this is just the way that I feel; my gut instinct ... so perhaps some Socratic questioning might be in order here:

Let me start.

What problem, as software developers, are we actually trying to solve?

What I learned from Prolog

Prolog gives me headaches in places that other programming languages cannot reach, and I would not for an instant want to try to use it to create any significant sort of application; but it does teach a very fundamental and very important lesson.

Programs should, as much as is practicable, try to function as executable specifications, and to state their purpose in declarative form.

To achieve this, we can write somewhat general purpose machines, the parameters for which serve as the behavior specification.

In Prolog, the inference engine is the general purpose machine, and the rule database is the executable specification.

For the majority of real-world projects that I have encountered, it has been possible to write libraries that together constitute a set of (somewhat less) general purpose machines, the configuration for which, in declarative form, serves as the "executable" specification of the problem.

All of my work therefore tends to be divided into two parts: A set of general purpose libraries that provide some configurable functionality, and a set of products that combine and configure those libraries, doing double duty as specification-documentation and top-level executable.
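
As a minimal sketch of what I mean, assuming a hypothetical image-processing product (none of these names come from a real project): the small library below plays the role of the general purpose machine, and PIPELINE is the declarative configuration that doubles as the specification of what the product actually does.

    # The "library": a couple of configurable, general-purpose operations.
    def threshold(image, level):
        return [[1 if px >= level else 0 for px in row] for row in image]

    def invert(image):
        return [[1 - px for px in row] for row in image]

    STEPS = {"threshold": threshold, "invert": invert}

    # The declarative, "executable" specification of one product.
    PIPELINE = [
        ("threshold", {"level": 128}),
        ("invert", {}),
    ]

    def run(pipeline, image):
        """The general-purpose machine: interpret the specification."""
        for name, params in pipeline:
            image = STEPS[name](image, **params)
        return image

    print(run(PIPELINE, [[0, 200], [130, 90]]))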

From Prolog comes the feeling that the declarative form is more suitable than the imperative form for the specification and communication of program function; and therefore the imperative parts of the program should be subsumed under a declarative layer.

Of course, none of this is new. Also, this notion does not by itself guarantee success, and must be tempered by simplicity and common sense.

Towards Semantic Code Search

In this discussion I will use the term "unit" to generalize over functions, classes, methods, procedures, macros and so on.

As I sit and write code, I would like to have a background process quietly searching open source repositories for units that are semantically similar to the ones that I am currently engaged in creating and/or modifying; these can then be presented as options for re-use; to limit the re-invention of the wheel, or to help identify and give advance warning of potential pitfalls and/or bugs.

To achieve this, we need to create some sort of metric space wherein similar units sit close together and dissimilar units sit far apart. I would like this similarity metric to take more than two values, so that we do not limit ourselves to binary matching, but can tune the sensitivity of the algorithm. This approach is suitable for exploratory work, because it gives us a tool that we can use to build an intuitive understanding of the problem.

The algorithm can follow the prototypical pattern-recognition architecture: a collection of 2-6 feature-extraction algorithms, each of which extracts a structural feature of the code under search. These structural features shall be designed so that their outputs are invariant under certain non-significant transformations of the source unit (e.g. arbitrary renaming of variables, non-significant re-ordering of operations, and so on).


These feature-extraction algorithms could equally well be called normalization algorithms.
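
As an illustration only, here is one possible normalization, assuming the unit under consideration is a fragment of Python source (the function names are mine, not part of any existing tool): identifiers are replaced by canonical placeholders, so the extracted feature is invariant under arbitrary renaming of variables.

    import ast

    def rename_invariant_form(source):
        """Normalize a Python unit so that variable names do not matter."""
        tree = ast.parse(source)
        names = {}

        class Canonicalise(ast.NodeTransformer):
            def visit_Name(self, node):
                names.setdefault(node.id, "v%d" % len(names))
                node.id = names[node.id]
                return node

            def visit_arg(self, node):
                names.setdefault(node.arg, "v%d" % len(names))
                node.arg = names[node.arg]
                return node

        return ast.dump(Canonicalise().visit(tree))

    # These two functions differ only in naming, so they normalize identically.
    a = "def f(x):\n    return x + 1\n"
    b = "def f(total):\n    return total + 1\n"
    print(rename_invariant_form(a) == rename_invariant_form(b))  # True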


Together, the outputs of the feature-extraction algorithms will define a point in some feature space. This point will be represented either by a numeric vector (if all the feature-extraction algorithms have scalar-numeric outputs) or by something more complex, in the highly probable event that one or more of them produces non-scalar-numeric output (e.g. a tree structure).


Once we can construct such feature "vectors"/structures, we need a metric that we can use to measure similarity/dissimilarity between pairs of them. If all the features end up being scalar-numeric, and the resulting point in feature space is a numeric vector, then a wide range of possible metrics is immediately available, and something very simple will probably do the trick. If, on the other hand, the features end up being partially or wholly non-numeric, then the computation of the metric may end up being more involved.
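
A sketch of what such a composite metric might look like, assuming two entirely hypothetical features per unit (a small numeric vector plus a punctuation "signature" string); the weights are arbitrary tuning knobs, not anything I am prescribing.

    import math

    def numeric_distance(u, v):
        # Euclidean distance over the scalar-numeric part of the features.
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

    def signature_distance(s, t):
        # Jaccard distance over the sets of characters in the signatures.
        a, b = set(s), set(t)
        if not a and not b:
            return 0.0
        return 1.0 - len(a & b) / len(a | b)

    def unit_distance(unit_a, unit_b, w_numeric=0.5, w_signature=0.5):
        return (w_numeric * numeric_distance(unit_a["vector"], unit_b["vector"])
                + w_signature * signature_distance(unit_a["signature"], unit_b["signature"]))

    a = {"vector": (12, 3), "signature": "{;;}{;}"}
    b = {"vector": (11, 4), "signature": "{;}{;;}"}
    print(unit_distance(a, b))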


If this all sounds too complex, then perhaps an example of a possible (non scalar-numeric) feature extraction algorithm will bring the discussion back down to earth and make it more real: http://c2.com/doc/SignatureSurvey/
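
As I understand it, the Signature Survey boils each source file down to just its structural punctuation, so that the visual "shape" of the code survives while everything else falls away; a toy rendering of the idea (the character set is my own guess at a reasonable choice):

    def signature(source, keep="{};"):
        # Keep only the characters that carry structural "shape".
        return "".join(ch for ch in source if ch in keep)

    print(signature("int f(int x) { if (x > 0) { return x; } return -x; }"))
    # -> "{{;};}"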

The number of features should be strictly limited by the number of samples that are available to search. Given repositories like GitHub and SourceForge, somewhere around 3 features would be appropriate.

It is worth noting that we are not just looking for ways of identifying functional equivalence in programs. Functional equivalence is an important feature, but similarities in presentation are important also; so for some features, choice of function / class / variable name may not be important, but for others, it may be significant.


What types of normalization might be useful? What features of the source unit should we consider to be significant, and what features should we ignore?


Suggestions in comments gladly received.


Note: There exists some prior literature in this field. I have not yet started to explore it, so the above passages illustrate my uninformed initial thoughts on the problem.
http://www.cs.brown.edu/~spr/research/s6.html
http://adamldavis.com/post/24918802271/a-modest-proposal-for-open-source-project-hashes

Friday 22 June 2012

How did I get here?


When I was young, I dreamt of becoming a physicist, but I very quickly discovered that I did not have the mathematical literacy required for that particular career choice.


(In retrospect, perhaps I should have just dug my heels in and persisted with it)


Well, if I cannot do the mathematics myself, I thought, perhaps I can program a computer to help me? This thought led, eventually, to my doing an undergraduate degree in Artificial Intelligence.


It was during this course that I began to think of reasoning as a process driven predominantly by knowledge, and about the problem of acquiring the vast amounts of knowledge that would so obviously be required to do any sort of useful reasoning in the real world. 


I was particularly taken by the potential for machine vision systems to help build knowledge bases such as these, and so I developed an enduring interest in machine vision (and statistical pattern recognition more generally).


In my early career, I was fortunate enough to work with scientists studying human perception, which deepened my nascent interest in perceptual processes; a filter through which I still perceive many technical problems.


It has become clear, however, that the main barrier standing in the way of developing sophisticated software systems that can reason about the world and help us to understand our universe is the paucity and limited capability of the software development tools that we have at our disposal.


The latter half of my career has therefore largely turned towards improving the software tools available to the academics, scientists and engineers that I have been privileged to work with over the years.

Thursday 7 June 2012

Credo

I believe that development cost control and cost amortization through software reuse need to be explicitly factored into the organization's management and financial structure, through the use of product-line-centric organizational structures.

I also believe that the structure of the source document repository used by the organization is a quick and easy way to communicate organizational structures and axes of reuse.
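
Purely as a hypothetical illustration (none of these directory names come from a real project), a repository laid out along its axes of reuse might look something like this:

    repository/
        libraries/      shared, reusable machinery; the axis of cost amortization
            imaging/
            reporting/
        products/       thin product definitions that configure the libraries
            product_a/
            product_b/
        tools/          build and development automation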

This is largely an extension of Conway's law.

Tuesday 5 June 2012

Pushing complexity

"The simple rests upon the difficult" - Theodore Ayrault Dodge

I have often observed that many software engineering techniques or methods that aim to simplify merely push complexity around, rather than actually resolving anything.


As Fred Brooks has already told us: There is no silver bullet.


For example, I once came across a software development process that mandated the creation of elaborate and detailed specification documents together with an extremely formal and rigid process for translating those specifications into executable source documents (code).


The author of the process proudly claimed that his development method would eliminate all coding errors (implying that the majority of bugs are mere typos in the transliteration of specification to application logic).


To me, this seemed like hubris, as it pushes the burden from the software engineer (who in this scheme is reduced to a mere automaton) onto the individual writing the specification, who in the process takes on the role of developer, but without the tools and feedback needed to actually do the job well.


Thus, whilst approaches like this might help bloat costs and fuel the specsmanship games that blight certain (nameless) industrial sectors, they do nothing whatsoever to help developers produce a higher quality product with less effort.


We do not make progress by pushing complexity around. We make progress by consuming it and taming it. We need to do the work and tackle the problem to make things happen.


Rather than focus on the silver bullet of simplification, we would be better served by processes, methods and tools that explicitly acknowledge the development feedback loop, and aim to tighten and broaden it through automation.

Code considered harmful

As developers, we often glibly talk of code and coding.

These words do us a tremendous disservice, as they imply that the source documents that we write are in some way encoded, or obfuscated, and so only interpretable by the high priests of the technocracy.

This might give us a moment of egotistical warmth, and provide fuel to our collective superiority complex, but in the long run it does a tremendous amount of harm to the industry.

We would be better served if we insisted (as is indeed the case) that a source document, well written in a high-level programming language, is entirely legible to the intelligent layperson.

Whatever view you take on linguistic relativity, whether you believe that our choice of words actually affects our attitudes or not, I think (even from a purely aesthetic point of view) that the word "code" is as ugly as it is arrogant.

Perhaps, rather than talking about "code", we should talk about source documents, process descriptions, executable specifications, procedures or even just logic.

Let us acknowledge, in the words that we choose, that a central part of our jobs is to craft formal descriptions that are as easily interpretable by the human mind as by the grinding of an automaton.

Monday 4 June 2012

Delays and Rates


From a recent post on news.ycombinator.com, vertically aligned for ease of comparison, with corresponding rates to better understand the implications:

(Edit: Expanded with numbers from a recent Ars Technica article on SSDs)

Register                                << 1 ns
L1 cache reference (lower bound)         < 1 ns 2,000,000,000 Hz
L1 cache reference (upper bound)           3 ns   333,333,333 Hz
Branch mispredict                          5 ns   200,000,000 Hz
L2 cache reference                         7 ns   142,857,143 Hz
L3 cache reference                        20 ns    50,000,000 Hz
Mutex lock/unlock                         25 ns    40,000,000 Hz
Main memory reference                    100 ns    10,000,000 Hz
Compress 1K bytes with Zippy           3,000 ns       333,333 Hz
Send 2K bytes over 1 Gbps network     20,000 ns        50,000 Hz
Read 1 MB sequentially from memory   250,000 ns         4,000 Hz
Round trip within same datacenter    500,000 ns         2,000 Hz
Disk seek (lower bound)            3,000,000 ns           333 Hz
Disk seek (upper bound)           10,000,000 ns           100 Hz
Read 1 MB sequentially from disk  20,000,000 ns            50 Hz
Send packet CA->Netherlands->CA  150,000,000 ns           < 7 Hz
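
For reference, the rate column is simply the reciprocal of the delay; a one-liner to regenerate it for any latency figure:

    def rate_hz(latency_ns):
        return 1e9 / latency_ns   # nanoseconds -> operations per second

    print(rate_hz(100))        # main memory reference -> 10,000,000.0 Hz
    print(rate_hz(20000000))   # read 1 MB sequentially from disk -> 50.0 Hz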

Surprises and take home lesson(s):

1. Data intensive (I/O bound) systems are REALLY slow compared to the raw CPU grunt that is available.
2. Within-datacenter Network I/O is faster than disk I/O.
3. It makes sense to think about network I/O in the same way as we used to think about the SIMD/AltiVec/CUDA tradeoff. The payoff has to be worthwhile, because the packaging/transfer operations are expensive.
4. Branch mis-prediction is actually pretty expensive compared to an L1 cache reference. For CPU-bound inner-loop code, it makes sense to spend a bit more time trying to avoid branching.

Here is the table from Ars Technica:

Level                Access time    Typical size
Registers        "instantaneous"    under 1 KB
Level 1 Cache                1-3 ns      64 KB per core
Level 2 Cache               3-10 ns     256 KB per core
Level 3 Cache              10-20 ns    2-20 MB per chip
Main Memory                30-60 ns    4-32 GB per system
Hard Disk   3,000,000-10,000,000 ns

Friday 1 June 2012

Recent Developments: Developing Future Development

In response to recent buzz around the idea of instant feedback in development environments:
Anything that tightens the feedback loop will increase velocity, so approaches like the ones espoused by Bret & Chris will definitely have a positive impact, although we may need to develop more advanced visualization techniques to help when the state of the system is not naturally visual. (http://tenaciousc.com/)

I am also convinced that there are a few more steps (in addition to instant feedback) that we need to take as well:

Firstly, the development environment needs to encourage (or enforce) a range of always-on continuous development automation, including unit testing, static analysis, linting, documentation generation and so on. This should include automated meta-tests, such as mutation-based fuzz testing, so that the unit-test coverage is itself tested. This helps give us confidence that no regressions have crept in unnoticed. (To compensate for our inability to pay attention to everything all of the time.)
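
To illustrate the mutation-testing idea, here is a minimal sketch (the add() function and its single test are hypothetical placeholders, and ast.unparse needs Python 3.9 or later): we deliberately break the code and check that the tests notice.

    import ast

    SOURCE = "def add(a, b):\n    return a + b\n"

    def tests_pass(source):
        namespace = {}
        exec(source, namespace)                  # load the (possibly mutated) unit
        try:
            assert namespace["add"](2, 3) == 5   # the unit test
            return True
        except AssertionError:
            return False

    class AddToSub(ast.NodeTransformer):
        """Mutant generator: flip every '+' into '-'."""
        def visit_BinOp(self, node):
            self.generic_visit(node)
            if isinstance(node.op, ast.Add):
                node.op = ast.Sub()
            return node

    mutant = ast.unparse(AddToSub().visit(ast.parse(SOURCE)))
    print("original passes tests:", tests_pass(SOURCE))       # expect True
    print("mutant killed by tests:", not tests_pass(mutant))   # expect True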

Secondly, refactoring tools need to be supported, so that code can be mutated easily and the solution-space explored in a semi-automated manner. (To compensate for the fact that we can only type at a limited speed).

Thirdly, we need to start using pattern recognition to find similarities in the code that people write, so we can be guided to find other people and other software projects so that we can either re-use code if appropriate, or share experiences and lessons learned otherwise. (To compensate for the fact that we know a vanishingly small part of what is knowable).