Thursday 27 February 2014

Taste on the frontier of complexity

Issues of taste and aesthetics are not normal fodder for engineers ... but when you make something sufficiently complex, you are by necessity operating on the frontier, far away from the well-trod paths of tradition and precedent.

Out here, survival is not so much a matter of rules as a matter of gut instinct. Of taste. Of elegance. Of aesthetics and style.

Here, where the consequences of your decisions can be both impenetrable and dire, vigorous debate and discussion are essential; ego and stubbornness are both fatal disorders, and a shared sense of taste is an incredible boon.

This is where leadership at its most refined can shine: the creation of a group aesthetic; a culture and identity oriented around matters of taste.

Wednesday 26 February 2014

The role of Machine Vision as a Rosetta Stone for Artificial Intelligence.

My life has not followed a straight path. There have been many detours and deviations. Nevertheless, if I turn my head to one side, turn around twice, squint through one eye, and am very selective about what I pick from my memories, I can piece together a narrative that sort-of makes sense.

During my teenage years, I had (typically for a teenager) overwrought and unrealistic ambitions. Driven by a somewhat mystical view of science and mathematics, I harboured ambitions to be a physicist. I wanted to explain and understand a frequently baffling world, as well as (perhaps) to find a way to escape to a place of abstract elegance: A way to remove myself from the social awkwardness and profound embarrassment that plagued my life at that time. I was, after all, a socially inept and acne-ridden nerd with sometimes erratic emotional self-control.

It was in this context that my belief in the ethical supremacy of abstract (and disciplined) reasoning over unreliable and sometimes destructive emotional intuition was founded: a concept that forms one of the prime narrative threads binding this story together.

To me, the abstract world was the one thing that held any hope of making consistent sense, and it provided (now as then) the ultimate avenue for a near-perpetual state of denial. I have not been terribly successful in my quest (by the overwrought standards of my teenage ambitions, at least), but the role of science & technology "groupie" seems to have served me and my career reasonably well so far, and has cemented a view of life as a tragedy in which abstract intellectualism serves as a platonic ideal towards which we forever strive, but which we are cursed never to achieve.

In any case, I quickly came to the conclusion that my intellectual faculties were completely insufficient to grasp the mathematics that my aspirations required. In retrospect, this was less a victory of insight than the sort of failure that teaches us that excessive perfectionism, when coupled with a lack of discipline and determination, inevitably leads to self-imposed failure.

So, I drifted for a few years before discovering Artificial Intelligence, reasoning that if I was not bright enough to be a physicist in my own right, I should at least be able to get some assistance in understanding the world from a computer: an understanding that might even extend to the intricacies of my own unreliable brain. Indeed, my own (possibly narcissistic) quest to improve my understanding both of my own nature and that of the wider world is another key thread that runs through this narrative.

A good part of my motivation at the time came from my popular science reading list. Books on chaos theory and non-linear dynamics had a great impact on me in those years; from these, and from the notions of emergence that they introduced, I felt that we were only beginning to scratch the surface of the potential that general-purpose computing machines offered us.

My (eventual) undergraduate education in AI was (initially) a bit of a disappointment. Focused on "good old fashioned" AI and computational linguistics, the majority of the modules took an intellectual approach oriented around theorem proving and rule-based systems: a heady mix of Noam Chomsky and Prolog programming. This classical, logical approach to understanding the world was really an extension of the philosophy of logic to the computer age; a singularly unimaginative act of intellectual inertia that left little room for the messiness, complexity and chaos that characterised my understanding of the world, and that confirmed my view that the tantalising potential of general-purpose computation was destined to remain untapped. More than this, the presumption that the world could be described and understood in terms of absolutist rules struck me as an essentially arrogant act. However, I was still strongly attracted to the notion of logic as the study of how we "ought" to think; the study of thought in the abstract, divorced from the messy imperfections of the real world. Bridging this gap seemed to me an activity of paramount importance, but an exercise that could only realistically begin at one end: the end grounded in messy realities rather than head-in-the-clouds abstraction.

As a result of this, I gravitated strongly towards the machine learning, neural networks and machine vision modules that became available towards the end of my undergraduate education. These captured my attention and my imagination in a way that the pseudo-intellectualism of computational linguistics and formal logic could not.

My interest in neural networks was tempered somewhat by my continuing interest in "hard" science & engineering, and by the lingering suspicion that much "soft" (and biologically inspired) computing was a bit of an intellectual cop-out; a view that has been confirmed a couple of times in my career. (Never trust individuals who propose either neural networks or genetic algorithms without first thoroughly exploring the alternatives!).

On the other hand, machine learning and statistical pattern recognition seemed particularly attractive to my 20-something-year-old mind, combining a level of mathematical rigour that appealed to my ego and my sense of aesthetics with a readily available geometric interpretation that appealed to my predilection for visual and spatial reasoning. The fact that it readily acknowledged the inherent complexity and practical uncertainty involved in any realistic "understanding" of the world struck a chord with me too: it appeared to be a more intellectually honest and humble practitioners' approach than the "high church" of logic and linguistics, and it made me re-appraise the A-level statistics that I had shunned a few years earlier. (Honestly, the way that we teach statistics is just horrible, and most introductory statistics textbooks do irreparable damage to an essential and brilliant subject).

The humility and honesty were an important component. Most practitioners that I met in those days talked about pattern recognition as a "dark art", with an emphasis on exploratory data analysis and an intuitive understanding of the dataset. Notably absent was the arrogance and condescension that seems to characterise the subject now that "Big Data" and "Data Science" have become oh-so-trendy; attracting the mayflies and the charlatans by the boatload.

In any case, then as now, statistical pattern recognition is a means to an end: an engineering solution that bridges the gap between the messy realities of an imperfect world (low-level learning and data analysis) and the platonic world of abstract thought and logic. This view was reinforced by the approach taken by the MIT COG team: in order to learn how to behave in the world, the robot needs a body with sensors and effectors, so that it can learn to make sense of the world in a directed way.

I didn't have a robot, but I could get data. Well, sort of. At that point in time, datasets were actually quite hard to get hold of. The biggest dataset that I could easily lay my hands on (as an impoverished undergraduate) was the collection of text files from Project Gutenberg; and since my mind (incorrectly) equated natural language with grammars and parsing, rather than statistics and machine learning, my attention turned elsewhere.

That elsewhere was image data. In my mind (influenced by the MIT COG approach), we needed to escape from the self-referential bubble of natural language by pinning abstract concepts to real-world physical observations. Text alone was not enough. Machine Vision would be the Rosetta Stone that would enable us to unlock its potential. If we taught a machine to look at the world of objects, it could teach itself to understand the world of men.

One of my fellow students actually had (mirabile dictu!) a digital camera, which stored its images on a Zip disk (about the size of a 3.5 inch floppy disk), and took pictures that (if I recall correctly) were about 800x600 in resolution. I borrowed this camera and made my first (abortive) attempts to study natural image statistics; an attempt that continued as I entered my final year as an undergraduate and took on my final year project: tracing bundles of nerves through light microscopy images of serial ultra-microtome sections of drosophila ganglia. As ever, the scope of the project rather outstripped my time and my abilities, but some important lessons were nonetheless learned.
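For anyone wondering what "natural image statistics" involves in practice, here is a minimal sketch of the sort of first exercise I was groping towards (the libraries and the filename below are illustrative assumptions, not what I actually used at the time): histogram the differences between neighbouring pixel intensities, and observe the sharply peaked, heavy-tailed distribution that natural images characteristically produce.

    # Illustrative sketch only: first-order natural image statistics.
    # Assumes numpy and Pillow are installed, and that "photo.jpg" is any
    # natural photograph (the filename is a placeholder).
    import numpy as np
    from PIL import Image

    # Load the image as a greyscale array of floats.
    image = np.asarray(Image.open("photo.jpg").convert("L"), dtype=float)

    # Differences between horizontally neighbouring pixels.
    differences = image[:, 1:] - image[:, :-1]

    # Natural images typically give a sharply peaked, heavy-tailed histogram here.
    counts, bin_edges = np.histogram(differences, bins=101, range=(-50, 50))
    for edge, count in zip(bin_edges, counts):
        print(f"{edge:6.1f} {count}")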

... To be continued.

Software sculpture

Developing software is a craft that aspires to be an art.

It is both additive and subtractive. As we add words and letters to our formal documents, we build up declarations and relations; descriptions of logic and process. As this happens, we carve away at the world of possibilities: we make some potentialities harder to reach. The subtractive process is *implied* by the additive process, rather than directly specified by it.
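As a minimal, hypothetical sketch of the idea in Python (the names below are invented purely for illustration): every line adds text to the document, and every line simultaneously carves possibilities away.

    # Illustrative only: each addition below subtracts from the space of possible states.
    from enum import Enum

    class OrderState(Enum):
        # Adding this type removes every state that is not one of these three values.
        PENDING = "pending"
        SHIPPED = "shipped"
        CANCELLED = "cancelled"

    def cancel(state: OrderState) -> OrderState:
        # Adding this guard removes the possibility of cancelling a shipped order.
        if state is OrderState.SHIPPED:
            raise ValueError("cannot cancel an order that has already shipped")
        return OrderState.CANCELLED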

If this subtractive process proceeds too slowly, we end up operating in a space with too many degrees of freedom: difficult to describe; difficult to reason about; and with too many ways that the system can go wrong.

If the subtractive process proceeds too quickly, we end up eliminating potentialities which we need, eventually, to realise. This results in prohibitively expensive amounts of rework.

The balance is a delicate one, and it involves intangible concepts that are not directly stated in the formal documents that we write; only indirectly implied.

Thursday 20 February 2014

The cost of complexity in software engineering is like the sound barrier. How can we break it?

In response to: http://www.pistoncloud.com/2014/02/do-successful-programmers-need-to-be-introverted/

Q: Do successful programmers need to be introverted?
A: It depends.

One unpleasant consequence of network effects is that the cost of communication grows super-linearly with system complexity. Fred Brooks observed that the communication overhead for large (or growing) teams can become significant enough to stop work in its tracks. As the team grows beyond a certain point, the cost quickly shoots up to an infeasible level. By analogy with the sound barrier, I call this the communications barrier, because both present a seemingly insurmountable wall blocking our ability to further improve our performance.
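A rough illustration of why the cost shoots up (the figures below are purely illustrative): if every pair of people on a team needs a direct communication channel, the number of channels grows as n(n-1)/2, which is quadratic in team size.

    # Illustrative sketch: pairwise communication channels for a team of n people.
    def channels(n):
        return n * (n - 1) // 2

    for n in (2, 5, 10, 50, 100):
        print(n, channels(n))  # -> 1, 10, 45, 1225, 4950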

This analysis argues for small team sizes, perhaps as small as n=1. Clearly introversion is an asset under these circumstances.

However, irrespective of their innate efficiency, there are obvious limits to what a small team can produce. Building a system beyond a certain level of complexity requires a large team, and large teams need to break the communications barrier to succeed. Excellent, highly disciplined and highly effective communications skills are obviously very important under these circumstances, which calls for a certain type of (disciplined) extroversion; perhaps in the form of visionary leadership?

My intuition tells me that breaking the communications barrier is a matter of organisation and detail-oriented thinking. Unfortunately, I have yet to observe it being done both effectively and systematically by any of the organisations that I have worked for.

Has anybody else seen or worked with an organisation that has successfully broken the communications barrier? If so, how did they do it?

Wednesday 19 February 2014

Computer security in the machine age.

The complexity of modern technology makes it terribly vulnerable to abuse and exploitation. So many devices have been attacked and compromised that when faced with any given piece of technology, the safest assumption to make is that it has been or will be subverted.

For today's devices, the consequences don't intrude so much into the physical world. Some money may get stolen; a website may be defaced; some industrial (or military) secrets may be taken; but (Stuxnet aside) the potential for physical death, damage & destruction is limited.

For tomorrow's devices, the story is quite terrifying. A decade from now, the world will be filled with thousands upon thousands of autonomous cars and lorries, domestic robots and UAVs.

Today, criminals use botnets to send spam and commit advertising fraud. Tomorrow, what will malicious hackers do when their botnet contains tens of thousands of robot cars and trucks?

What can we do to change the trajectory of this story? What can we do to alter its conclusion?

Sunday 16 February 2014

Performance

The internet is just terrible for our sense of self-worth. We can reach out and connect to the very best and most talented individuals in the world; we can read what they write, and even converse with them if we choose. It is only natural that we should compare ourselves to them and find ourselves wanting.

It is easy to retreat from this situation with a sense of despair and self-pity, but there is another thought that occurs to me also, and that thought is this: Role models are a red herring. Does your role model have a role model of his own? Probably not. You don't get to be an outstanding performer by emulating somebody else, nor (just) by competing with your peers, nor (just) by following your passion, nor (just) by putting in your 10,000 hours. True performance comes from owning an area of expertise; from living and breathing it. From having your identity and sense of self so totally wrapped up in it that you can do nothing else.

Clearly, this sucks for everybody else around you, so it is a path that not many people should follow ... which makes me feel a bit better about my own inadequacies.

So there.

Friday 14 February 2014

Side channel attacks on silos

Silos of knowledge and expertise build up all too easily, not only as a result of the human tendency towards homophily, but also because of more fundamental bandwidth limitations.

As with most human failings, we look to simple technological fixes to resolve them.

One frequently overlooked technology is the use of ambient "side" channels to encourage or enforce communication:

1. Human environment. (Who do you work with?)
2. Built environment. (Who do you sit next to?)
3. Informational environment. (Where do your documents live?)

Every act of sensory perception during the work day is an opportunity for meaningful communication.

Thursday 13 February 2014

Code as haiku

Response to: http://www.ex-parrot.com/~pete/but-it-doesnt-mean-anything.html


"Code" is a horrible word.

I prefer "source documents", or, if pressed, "Formal descriptions of the program".

Using the word "code" implies that the document is "encoded" somehow, which is plainly undesirable and wrong.

With some notable (*) exceptions, the primary consumer of a "source document" is a human being, not a machine.

The machine's purpose is to ensure the formal validity and correctness of the document - but the majority of the informational content of the document (variable names, structure, comments) is exclusively directed to human developers.

We will never program in a "wild" natural language, but many programming languages (**) make a deliberate effort to support expressions which simulate or approximate natural language usage, albeit restricted to a particular idiomatic form.
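A minimal sketch of what I mean (the names below are invented for illustration): the penultimate line is rigidly formal Python, yet it reads almost as an English sentence.

    # Hypothetical illustration: formal Python that approximates natural language.
    from collections import namedtuple

    Person = namedtuple("Person", ["name", "age"])

    people = [Person("Ada", 36), Person("Tim", 9)]

    # "The name of every person in people whose age is at least eighteen."
    adults = [person.name for person in people if person.age >= 18]

    print(adults)  # ['Ada']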

There will always be a tension between keeping a formal language simple enough to reason about and permitting free, naturalistic expression - but this is the same tension that makes poetry and haiku so appealing as an art form.

So many source documents appear to be "in code", not because this is a necessary part of programming, but because it is very, very difficult to write something that combines sufficient simplicity for easy understanding with a correct representation of a difficult and complex problem. In most of these cases, clear understanding is thwarted more by the complexity of the real world than by the nature of the programming language itself.

The rigidly deterministic nature of the computer forces the programmer to deal with a myriad of inconsistencies and complications that the non-programmer is able to elide or gloss over with linguistic and social gymnastics. The computer forces us to confront these complications, and to account for them. Legal drafting faces similar (albeit less extreme) challenges.

In the same way that Mathematics isn't really about numbers, but about the skill and craftsmanship of disciplined thought, programming isn't really about computers, but about what happens when you can no longer ignore the details within which the devil resides.

(*) Assembler & anything involving regular expressions.
(**) Python