You might not have noticed it yet, but the leaves on the grapevine are rustling, and there is a whispering breeze in your ear. This, my friends, is the sound of the winds of change: gathering, growing, building momentum. There is a storm brewing, and the game is afoot!

So many people, working quietly and diligently, are spearheading that change, and so few of us realize that it is even happening, let alone comprehend the profound, monumental, world-changing implications of the technological tidal wave that is coming our way.

Automotive-industry dollars are funding this work, and the dream of safer roads and self-driving cars is driving it forwards; but this dream on four wheels is merely the tip of the iceberg: the gateway drug that leads us to an intoxicating, and perhaps slightly scary, future.

What are self-driving cars but mass-produced autonomous robots? Autonomous vehicles and robots will be to my daughter's generation what computers and the internet were to mine; except this change will be bigger and far more wide-reaching, as the machines step off the desktop and climb out of the server room, and march onto the streets; as our mobile phones and tablets sprout wheels and wings and legs and arms -- as our algorithms and neural networks and distributed systems stop passively sucking in click-stream data and Twitter sentiment scores, and start actively participating in a world that they can see, hear and touch.

The physical world that my parents inhabited is not so very different from the physical world that I inherited, but the world that I pass on to my daughters will change so quickly, so rapidly, that I cannot begin to imagine what it will look like when they are my age, and working away on shaping a future for their children.

What a singularly monumental change is afoot! What a wonderful, exhilarating time to be alive! And how exciting to be a part of it!

http://bits.blogs.nytimes.com/2013/07/07/disruptions-how-driverless-cars-could-reshape-cities/?smid=tw-nytimes

## Friday, 28 June 2013

## Wednesday, 26 June 2013

### Thoughts on Compressed Sensing

Imagine that you have a sparsely sampled signal (with many missing measurements), and you want to recover a densely sampled version of the same signal.

If you can find a function that transforms the original signal into a representation that is also sparse (with many coefficients that are zero or negligibly small), then you have a good heuristic for finding an approximation of the original signal: maximise sparsity in the transform domain. This is really just Occam's razor - if you have good reason to suppose that the data can be fit by a simple model, then choosing the simplest model that fits the data is an optimal guessing strategy.

Good examples of such functions include the Fourier transform and various wavelet transforms - indeed, any parametric approximation or model would probably do the job.

If the transform domain really is sparse, and each coefficient in the transform domain impacts multiple measurements in the original domain, then (with high probability) you will be able to guess the "right" coefficients by simply picking the set of coefficients which both reproduces the original signal and minimises the number of non-zero components in the transform domain.
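A minimal sketch of this procedure in Python. Everything in the setup - the DCT basis, the sparsity level, the random sampling pattern, and the choice of ISTA (soft-thresholded gradient descent on the L1-regularised least-squares objective) with its parameters - is an illustrative assumption, not something prescribed by the post:

```python
import numpy as np

rng = np.random.default_rng(0)

# A length-128 signal that is sparse in the DCT domain (3 nonzero cosine
# coefficients), observed at only 40 random sample positions.
n, m = 128, 40
k = np.arange(n)
# Orthonormal DCT-II basis, built directly with numpy.
basis = np.cos(np.pi * (k[:, None] + 0.5) * k[None, :] / n) * np.sqrt(2.0 / n)
basis[:, 0] /= np.sqrt(2.0)

coeffs_true = np.zeros(n)
coeffs_true[[3, 17, 40]] = [1.0, -0.7, 0.5]
signal = basis @ coeffs_true

rows = rng.choice(n, size=m, replace=False)
A = basis[rows, :]        # sampling operator: keep only the observed rows
y = signal[rows]          # the sparse set of measurements

# ISTA: a standard solver for  min ||A c - y||^2 + lam * ||c||_1,
# i.e. maximising sparsity in the transform domain subject to fitting the data.
lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1 / Lipschitz constant of the gradient
c = np.zeros(n)
for _ in range(2000):
    c = c - step * (A.T @ (A @ c - y))                       # gradient step
    c = np.sign(c) * np.maximum(np.abs(c) - step * lam, 0.0) # soft threshold

recovered = basis @ c
err = np.linalg.norm(recovered - signal) / np.linalg.norm(signal)
```

With far fewer measurements than samples, the sparsity-maximising guess reconstructs the full signal to within a small relative error.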

However, choosing the most parsimonious model is only one kind of regularization that we can (and should) deploy. We should also think about the set of coefficients that represents the most physically and statistically plausible model.

Thus, when choosing our approach to regularization, we need to consider three things:

* Parsimony (represented by sparsity, approximated by the L0 and L1 norms).

* Physical plausibility (this calls for a transform domain that is closely related to a physical model).

* Statistical plausibility (this calls for a transform domain in which the coefficients are as independent as possible - to reduce the number and dimensionality of the joint distributions that we need to calculate).

These three requirements are not necessarily harmonious. Does that mean we need three different transforms? How do we reconcile differences when they crop up?
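One way to reconcile the criteria is to weight them against each other in a single objective. The sketch below is hypothetical: the operators, the weights, and the use of a first-difference smoothness penalty as a stand-in for a "physical" prior are all illustrative assumptions:

```python
import numpy as np

def composite_loss(coeffs, A, y, lam_sparse=0.1, lam_smooth=0.05):
    """Illustrative regularised objective combining the three criteria:

    - ||A c - y||^2          : fidelity - reproduce the observed measurements
    - lam_sparse * ||c||_1   : parsimony - convex surrogate for the L0 norm
    - lam_smooth * ||D c||^2 : a stand-in physical prior (here: smoothness,
      penalising first differences between neighbouring coefficients)
    """
    fidelity = np.sum((A @ coeffs - y) ** 2)
    parsimony = lam_sparse * np.sum(np.abs(coeffs))
    smoothness = lam_smooth * np.sum(np.diff(coeffs) ** 2)
    return fidelity + parsimony + smoothness

# Toy check: the sparse model that generated the data scores lower than
# a noisy perturbation of it.
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 50))
c_sparse = np.zeros(50)
c_sparse[5] = 1.0
y = A @ c_sparse
c_noisy = c_sparse + 0.1 * rng.standard_normal(50)
```

The weights encode how the tension between the criteria is resolved; tuning them is the hard, problem-specific part.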


## Friday, 14 June 2013

### Code review tool affordances

I have had a bit of experience using different code review tools: Gerrit most recently, and Atlassian's FishEye in the roles preceding that.

Strangely, the most effective code reviews that I have participated in happened at Sophos, where no tool at all was used: the rule was simply to get another developer to sit next to you when you made a commit, so you could talk them through the code line by line before submitting the change.

The experience there was that most (if not all) of the problems were spotted by the original developer, not by the reviewer, who frequently lacked the contextual awareness to identify anything deeper or more significant than the usual picayune formatting and layout errors. The real value came from being forced to re-examine and explain your own work - directing the searchlight of your attention into a more disciplined and fine-grained search pattern than is possible alone.

The other types of review to which I have been party - asynchronous, distributed reviews mediated by a web tool of some sort, as well as formal half-dozen-people-in-a-room-with-slides reviews - have, in my experience, proven far less effective.

So I sit here wondering whether we can rescue the asynchronous, distributed code review tool, either through an alternative approach or through the application of some formal discipline ... or whether it is doomed to be worse than useless.


## Monday, 3 June 2013

### Dual functional & OO interfaces for program decomposition into stateless & stateful parts.

If you expose complex logic through a functional interface, it is (much) easier to test, due to its statelessness and its lack of access controls.

On the other hand, parts of the program that are necessarily stateful are best handled via OO interfaces, as this allows us to limit the number of possible state transitions. Access to the stateful variables themselves is restricted (as private), and state transitions are allowed only via a limited number of (public) methods.
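A minimal Python sketch of this dual decomposition (the banking domain and all names are illustrative): the complex logic lives in a pure, stateless function that a test can call directly, while the stateful part hides its variable behind a small set of public transitions.

```python
def accrued_interest(balance: float, rate: float, days: int) -> float:
    """Complex logic as a pure function: stateless, no access controls,
    trivially testable in isolation."""
    return balance * rate * days / 365.0


class Account:
    """Stateful part behind an OO interface: the balance is private, and
    the only legal state transitions are the public methods below."""

    def __init__(self) -> None:
        self._balance = 0.0   # private by convention; no direct mutation

    def deposit(self, amount: float) -> None:
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

    def apply_interest(self, rate: float, days: int) -> None:
        # The stateful transition delegates the hard logic to the pure function.
        self._balance += accrued_interest(self._balance, rate, days)

    @property
    def balance(self) -> float:
        return self._balance
```

The pure function carries the logic worth testing exhaustively; the class carries only the (small) set of transitions worth testing for sequencing.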

Management of finite resources (file handles, memory, etc.) is a simple stateful part of the program, and is handled neatly by RAII.
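RAII is a C++ idiom; the closest Python analogue is the context manager, which likewise ties a resource's lifetime to a scope. A small sketch (the temporary-file resource is an illustrative choice):

```python
import os
import tempfile


class TempWorkFile:
    """Plays the role RAII plays in C++: the resource is released when
    control leaves the scope, even if an exception is raised."""

    def __enter__(self) -> str:
        fd, self.path = tempfile.mkstemp()
        os.close(fd)
        return self.path

    def __exit__(self, exc_type, exc, tb) -> bool:
        os.remove(self.path)   # guaranteed cleanup
        return False           # do not swallow exceptions


with TempWorkFile() as path:
    created = os.path.exists(path)
# By here the resource's lifetime has ended and the file is gone.
released = not os.path.exists(path)
```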

