Friday 25 October 2013

The insanity!

People often choose to use C++ because of its performance, willingly sacrificing programmer productivity on the altar of language, tool-chain, and application complexity to do so.

Then they inexplicably decide to use strings everywhere, with bazillions of dynamic allocations and memory copies, recklessly discarding any shred of hope that they might have had for, y'know, performance.

If you want to do a ton of unnecessary string manipulations and memcpys, your application is going to be slower than it could be, regardless of the language it is written in. If you want performance, the programming language that you use is just about the least significant choice that you can make. What you choose to do is far more important than the language that you use to express those operations.
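
As a minimal C++ sketch of the point (the function names and data are invented purely for illustration, not taken from any real code base), compare a naive report builder that copies its inputs and rebuilds the result string on every iteration with one that takes references and reserves its buffer once. Both produce identical output; the first just pays for a stream of needless allocations and copies.

```cpp
// Hypothetical example: two ways of joining lines into a report.
#include <cstddef>
#include <string>
#include <vector>

// Naive: pass by value and concatenate with operator+, so every step
// may allocate a fresh buffer and copy the accumulated text again.
std::string build_report_naive(std::vector<std::string> lines)   // copies the whole vector
{
    std::string report;
    for (std::string line : lines)                               // copies each string
    {
        report = report + line + "\n";                           // builds temporaries, reallocates
    }
    return report;
}

// More careful: take const references, reserve once, append in place.
std::string build_report_careful(const std::vector<std::string>& lines)
{
    std::size_t total = 0;
    for (const std::string& line : lines)
    {
        total += line.size() + 1;                                // room for the line plus '\n'
    }

    std::string report;
    report.reserve(total);                                       // one allocation up front
    for (const std::string& line : lines)
    {
        report += line;                                          // appends in place
        report += '\n';
    }
    return report;
}
```

The second version is not clever; it simply avoids doing work that never needed doing, which is usually where the real performance lives.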

Also, the word "fast" is totally and utterly meaningless, to the point of being dangerous. What a database developer means by "blazingly fast" is totally different from what an embedded engineer or a VHDL developer means by the same term.

Biological opposition

http://www.theatlantic.com/politics/archive/2013/10/can-your-genes-predict-whether-youll-be-a-conservative-or-a-liberal/280677/

Is political orientation influenced by personality traits? Seems plausible.

Are personality traits influenced by genetic factors? Seems plausible.

Can we conclude that political affiliation is influenced by genetic factors? I suppose so, although the causal link could be tenuous and the degree of influence very weak.

Should this influence how political campaigns operate? Possibly, although the implications might be a little scary if people took the idea too seriously.

Does this mean that it is intrinsically very difficult (perhaps impossible) to see issues from alternative points of view? Possibly, and this is (I feel) an interesting question to contemplate.

How can we effectively communicate with other people if we face biological barriers when trying to empathise with them?

Does this mean that empathy is sometimes impossible, or merely that it can sometimes be a stressful, energy-intensive mental exercise?

Does this raise other issues?

What do you think?

Thursday 24 October 2013

Intellectual diversification

Does tailored search force us into an individual intellectual rut? Is this a threat?

To really understand the world through the eyes of others, it is not sufficient to randomise the input that we receive.

The existence of a rut is not necessarily a problem, although an inability or an unwillingness to break out of it may be.

Perhaps, as Google builds an increased understanding of us, it might be willing to share that understanding more widely, so that we in turn may start to understand others, both as individuals and as social groups (for those who are willing to invest the time).

I also hope that, over time, the distribution of human culture and practice will change, with an increase in variance but a reduction in extreme outliers, so that we can enhance our diversity whilst placing an upper bound on the risk of misunderstanding and conflict.

As complexity increases, phase transitions are observed

Once you get beyond a certain level of complexity, technology stops behaving like a deliberately engineered thing and starts to take on other, less familiar characteristics.

Wednesday 23 October 2013

Oil and water: Mixing agility and government IT.

The following is a response to this Washington Post article:

http://www.washingtonpost.com/blogs/wonkblog/wp/2013/10/21/the-way-government-does-tech-is-outdated-and-risky/

Governments have a fiduciary responsibility to look after taxpayer money. When dealing with the private sector, this is normally interpreted to mean an obligation to issue a tender, and to take the lowest (viable) bid that meets the specification. This works fairly well when the scope of the work is known beforehand, when costs can be predicted, and when a schedule of work can be drawn up.

However, as the complexity of a system increases, making any sort of prediction about that system becomes exponentially more difficult. This means that the specification for a complex system must be exponentially longer than the specification for a simple system, with an exponentially greater risk of errors. Making time & cost estimates becomes exponentially more difficult, and the errors in those estimates become exponentially greater. The number of unknowns that must be researched also grows exponentially with complexity.

The term "Agile" is a bit of a buzzword, and has attracted more than its fair share of snake-oil salesmen, but what it comes down to is, essentially, throwing in the towel and admitting defeat. When you *cannot* make predictions about your system, what do you do? You need to find another way of managing costs and reducing project risk.

Unfortunately, because of the fiduciary responsibilities described above, these options are not open to government contracting mechanisms. There is a fundamental conflict here that cannot be resolved. As a result, government cannot (should not?) attempt to implement anything other than the very simplest of systems in the traditional, mandated manner.

How, then, can complex information systems be developed for public benefit? The philanthropy of private individuals & organisations is one solution, whether through crowd-funding or open-source initiatives. Political leadership and coordination of such activities is something that could easily fall within government's remit, without the significant legal hurdles that contracting work out imposes.

Tuesday 22 October 2013

Learning abstractions

The more restricted and limited you make something, the easier it is to use.

Remove options, reduce flexibility, and the device becomes simpler and easier to use. Good for the novice, bad for everybody else.

The real trick is to gradually surface functionality: provide powerful abstractions & flexible conceptual tools, but then hide them so that they only surface when they are needed, and, more importantly, when the user is able to cope with them.

So, by all means hide files and other abstractions that serve to complicate the interaction & confuse the novice, but keep them in the background, waiting until they are needed, and provide neat ways of discovering and learning the conceptual tools required to use our devices to their full potential.

User interface design is about planning the user's learning process; guiding them on their journey from novice to expert.

Friday 18 October 2013

A failure to interoperate; or, the power struggle between tools.

I have spent the morning cursing and swearing as I try to get CMake and Eclipse to play nicely with one another.

This is a frustration that I feel again and again. So many tools try to take control; to impose their own view of how things "should be" in the world - and those views often conflict.

Both CMake and I agree that out-of-source builds are the only sane thing to do. Why take the risk of having the build process screw up the source tree in some subtle way? This also means that we can easily create multiple builds from the same source - and we are not limited to just Debug & Release builds, either. We can have builds that just generate documentation, or builds that just perform static analysis & style checking, or builds for different platforms, different targets, different levels of optimisation; all completely configuration-managed and under version control, yet totally independent of the actual content of the source tree.

Yet why do so many IDEs take a totally divergent view on how the build should be organized? Why must the project file live in the root directory of the source tree? Why must I jump through hoops to build my software outside of the source tree?

Why is it that using an IDE automatically means that I have to use a brain-dead and limited concept of project organisation?

Come back make, come back gcc, come back vi. All is forgiven.

I am all for the idea that some tools should be "opinionated" - but you do need to be prepared for pain if you want to use two different "opinionated" tools together.

For that reason, we should think carefully about developing more tools that are less all-encompassing, that are more humble, more secular, and more flexible in their approach.

Wednesday 16 October 2013

Software security and reliability is a public interest issue

In response to complaints that there are not enough computer-security-trained professionals in the market today:

http://www.reuters.com/article/2013/10/14/security-internet-idUSL6N0I30AW20131014

It is not just a question of skills and human resources: We also need better defensive tools and techniques, which need to be made freely available, ideally packaged with common software development tools like GCC or the LLVM compiler framework (and switched on by default)!

It would be a terrible waste if these "cyber warriors" (what a ludicrous title) all sat in isolated silos, tasked with protecting individual organizations, when the (stupendous) cost of this defensive work could be easily amortized across companies (with incredible cost savings as a result).

We need better tools to analyse complex systems, particularly software systems, for security vulnerabilities, so that those vulnerabilities can be closed. This includes static analysis tools, fuzz-testing tools, and vulnerability analysis tools.
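
As a toy illustration of what fuzz testing means, stripped down to its crudest form, the C++ sketch below hammers a parser with random inputs and asserts that its invariants hold. The parse_message() function here is a hypothetical stand-in for whatever code is under test, not any real API; real fuzzing tools add coverage guidance, corpus management, and crash triage, and are usually run together with memory-error detectors.

```cpp
// Minimal random-input fuzz loop (illustrative only).
#include <cassert>
#include <cstdint>
#include <random>
#include <vector>

// Hypothetical function under test: returns the number of fields parsed,
// or -1 to reject the input. It must never crash or read out of bounds.
int parse_message(const std::vector<std::uint8_t>& data)
{
    if (data.empty() || data[0] != 0x7E)
    {
        return -1;
    }
    int fields = 0;
    for (std::size_t i = 1; i < data.size(); ++i)
    {
        if (data[i] == ';')
        {
            ++fields;
        }
    }
    return fields;
}

int main()
{
    std::mt19937 rng(12345);                                   // fixed seed: reproducible runs
    std::uniform_int_distribution<int> byte(0, 255);
    std::uniform_int_distribution<std::size_t> length(0, 4096);

    for (int iteration = 0; iteration < 100000; ++iteration)
    {
        std::vector<std::uint8_t> input(length(rng));
        for (auto& b : input)
        {
            b = static_cast<std::uint8_t>(byte(rng));
        }

        int result = parse_message(input);
        assert(result >= -1);                                  // invariant: no other return values
    }
    return 0;
}
```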

We need better professional licensing and certification processes, so that we can better control the quality & reduce the vulnerability of the systems on which we all rely.

We need security-oriented programming conventions, and software registers, so that security software can do its job more easily and more effectively.

We ALL have an interest in the reliability and trustworthiness of the complex systems that we rely on to power our infrastructure, our financial system, and our workplaces. Nobody wants to gain competitive advantage because a competitor was targeted by a criminal. In a world dominated by unreliability and insecurity, it is only the criminals that win.

There is a HUGE incentive to work together here. In the public interest, let's do so.

Thursday 3 October 2013

Hubris and the complexity trap

I have been blessed with the opportunity to work with some fantastically bright people. A surprisingly large number of them come unstuck for the same reason: hubris.

Overconfident in their own abilities, they engineer systems of dazzling cleverness, the very complexity of which turns them into snares that confound, befuddle, and ultimately humble their creators.

The true value of a person then becomes apparent. It is not intelligence, per se, but the humility to confront and acknowledge failure.

As Brian Kernighan once said: "Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it."

Simplicity is of paramount importance in a world that grows ever more complex, day by day.

Sometimes, you need a self-aware simple-mindedness to achieve the simplest, and therefore best, result.

Wednesday 2 October 2013

All power to the developer

The typing pool has been superseded by the word processor; clerical workers by the spreadsheet and the database. What used to require the management of men now requires the management of machines.

Two very different tasks indeed.

Man-management is a very interesting, very challenging job. Marshalling and aligning a group of humans to meet a common objective requires the ability to handle the vagaries of human relationships: interpersonal conflicts, political machinations, uncertain and conflicting motivations, forgetfulness, imperfect communication and understanding ... the challenges that must be overcome form a long, long list. Indeed, one might be forgiven if the challenge of getting other people to do what you want rather masks the extent to which our objectives are woolly and ill-defined in the first place.

By contrast, a programmer's minions are infinitely more compliant: patient, predictable, and indefatigable. The orders that we issue will be followed to the minutest detail. Indeed, the orders that we issue must be *specified* in the minutest detail.

Therein lies the rub.

The professions of programming and man-management both require the professional to make decisions and to delegate tasks. For the man-manager, the central challenge is to ensure that the tasks are completed; for the developer, the central challenge is to ensure that the tasks (and their consequences) are well enough understood in the first place.

This is surprisingly hard, especially the consequences bit, and especially in the face of complexity.

To what extent can our brains accommodate externally-directed actions without recognizing them as such?

The human brain is parallel and decentralized, yet we somehow manage to maintain the illusion that we are a single conscious entity, experiencing life in a sequential stream of thoughts and experiences.

This is clearly a lie that we tell ourselves. Various independent bits of brain make decisions and initiate actions entirely on their own, and somehow we manage to rationalize and confabulate and merrily deceive ourselves (after the fact) that we had some sort of explicit, sequential, conscious plan all along.

Whilst this model of human behavior is disturbing on a number of levels, it does have some interesting consequences when we consider the near-future technology of brain augmentation.

It is plausible that we could embed a bit of electronics into our brains, integrated so tightly that it is able to make decisions for us, to control our actions, and to influence our perceptions.

Would we feel that we were being controlled? Or would we integrate those decisions into our stream of consciousness: to confabulate a reality in which these decisions really were made by our conscious free will?

Will the perceptual pathways responsible for our self-perception bend to accommodate outside influences, stretching the notion of self to accommodate the other? Will they allow us to believe that we initiated acts that were (in reality) initiated by others?

GUI-NG Desiderata

The command line may be powerful and flexible, but it is ugly to look at and difficult to learn. WIMP-style graphical user interfaces look nicer, and are easier to pick up and use, but they don't really help you do anything other than the most basic and pedestrian of tasks.

C'mon guys, we should be able to do better than this. We should be able to design beautiful, powerful tools with a UI concept that combines the best of both worlds.

A beautiful, well-designed UI for the power user: the design sense of Apple, the good looks of Light Table and Sublime Text, the power and flexibility of Unix.

The Acme editor from Plan9 shows us how radical we can be. IPython also makes a decent enough effort in something approaching the right direction, but does not go nearly far enough.

The (Unix) command-line points the way: the idea of the OS as a programming and data analysis environment above all else; the ability to perform complex tasks by streaming data from one small program to another; the use of the keyboard as the primary user input mechanism. (Mice are an ergonomic disaster zone).
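
To make the "small programs joined by streams" idea concrete, here is a minimal C++ sketch of a filter in that mould. The upper-casing transformation is arbitrary and the program name below is hypothetical; the point is the composable stdin-to-stdout shape.

```cpp
// A tiny Unix-style filter: read lines from stdin, transform them,
// write them to stdout, so the program can sit anywhere in a pipeline.
#include <cctype>
#include <iostream>
#include <string>

int main()
{
    std::string line;
    while (std::getline(std::cin, line))            // read one line at a time from the pipe
    {
        for (char& c : line)
        {
            c = static_cast<char>(std::toupper(static_cast<unsigned char>(c)));
        }
        std::cout << line << '\n';                  // pass the result downstream
    }
    return 0;
}
```

Compiled to a single executable (call it "upcase", say), it slots into a pipeline alongside any other tool: grep error log.txt | ./upcase | sort | uniq -c.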

From WIMP graphical user interfaces, we should prioritize menus above all else. Menus help the motivated novice to discover tools and functionality in a way that the command line simply cannot. In today's time-constrained world, RTFM is a condescending cop-out. Nobody has the time. Discoverability is everything; the requirement for arcane knowledge (and the time required to acquire it) is both obsolescent and wasteful.

Windows, Icons and Pointers are secondary considerations: the command line and menus should take priority. They could (and arguably should) still exist, but they do not have to dominate the interaction paradigm in the way that they have done in the past; they should support other functionality, not provide a framework within which other functionality must fit.

Speaking of paradigms: the "desktop metaphor" that is so baked into the conventional user interface is a redundant piece of skeuomorphism that has outlived its usefulness. Sure, we can keep the notion of "depth", and layer things over the top of one another in the UI, but the flat, 2D relationships between items (connectedness, proximity, above/below, left/right) need to take priority, and need to be given consistent semantic meanings in the "pattern language" that defines the UI that we create.

What does this all mean in practice?

Well, how about this: a primarily command-line-driven interface, but with embedded widgets and graphics. I.e. emphatically NOT a tty, or anything resembling one. Something somewhat akin to tmux could act as a "window manager", although it would have to be tmux on steroids. The edges and corners of the screen, key pieces of real estate all, should harbour hierarchical menus galore: some static, always where you expect them; some dynamic, perhaps responding to a search, perhaps responding to recently run applications.

Above all else, applications need to work together. Textual commands help us to quickly and easily glue functionality together, pipes help us to move data around the systems that we build, and the graphical capabilities of our user interface allow us to diagram and explore the data-flow of our scripts and systems.