<b>Idle Conjectures in Search of Refutation</b><br />
A mixed bag of rants, raves, ideas and opinions. By Will (http://www.blogger.com/profile/16325301752282552046).<br />
<br />
<b>What are the biggest factors influencing developer productivity?</b> (2016-12-02)<br />
<br />
Working extra hours increases productivity by (at most) a small linear factor. All other things being equal, increasing from a 40-hour week to an 80-hour week will at most double productivity.<br />
<br />
Increasing the rate of production increases productivity by (at most) a moderate linear factor. Crunch time may increase productivity by a factor of three or four at most.<br />
<br />
<br />
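The argument of this post (linear gains from extra hours, super-linearly growing error costs) can be made concrete with a toy model. Every number below is an illustrative assumption, not a measurement:

```python
# Toy model: gross output grows linearly with hours worked, while the
# cost of remediating fatigue-induced errors grows super-linearly
# (cubic, here). All parameters are assumptions chosen for illustration.

def net_output(hours, cost_scale=40.0, ref_hours=80.0):
    gross = hours                                        # linear gains
    error_cost = cost_scale * (hours / ref_hours) ** 3   # super-linear losses
    return gross - error_cost

# The "optimal frontier": the working week that maximises net output.
best_hours = max(range(1, 121), key=net_output)
```

Under these assumed parameters, doubling from 40 to 80 hours raises net output only from 35 to 40, and the model peaks at 65 hours. The point is the shape of the curve, not the particular numbers.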
The cost of errors grows (in practice) super-linearly with the error rate. The remediation of errors can (and often does) cost hundreds or thousands of times as much as was saved by the practices that led to their introduction.<br /><br />Controlling the error rate is therefore the primary lever available to us for improving productivity.<br /><br />In particular, any increase in working hours or in the rate of production which also increases the rate at which errors are introduced is likely to be self-defeating in terms of overall productivity.<br /><br />There exists an optimal frontier of working hours and rate of production which balances the cost due to errors against individual productivity.<br /><br />This optimal frontier will be more strongly determined by factors which influence the cost due to errors than by factors which influence the rate of production.<br /><br />The factors which influence the cost due to errors can be split into two categories.<br /><br />The first category comprises factors which influence the shape of the (super-linear) cost function; i.e. things which limit the impact of errors to make them less expensive. (Architecture; quality systems; testing).<br /><br />The second category comprises factors which reduce the initial error introduction rate. 
(Architecture; quality systems; level of fatigue; ability to concentrate).<br /><br />Architecture and quality systems are obviously of critical importance because they influence both the cost of each error and the rate of error introduction.<br /><br />Human factors such as working environment, cardiovascular fitness, length of commute, quality of accommodation and family situation will all influence fatigue and concentration, and so impact the rate of error introduction.<br /><br />Architecture and quality systems look like the big hitters in this analysis, but I would suggest that monitoring for indicators of fatigue is probably also worth doing, as is encouraging cardiovascular health amongst developers.<br />
<br />
<b>Brooks & Conway in a discussion around CI & development automation</b> (2014-07-21)<br />
Discussion arising from this HBR blog post discussing CI:<br />
<br />
<a href="http://blogs.hbr.org/2014/07/speed-up-your-product-development-without-losing-control/">http://blogs.hbr.org/2014/07/speed-up-your-product-development-without-losing-control/</a><br />
<b>William Payne:</b><br />
<br />
<b>"</b>One of the reasons that the "Agile" movement has lost credibility in recent years is because many of the consultants selling "Scrum" and similar processes failed to emphasize the fact that a significant investment in automated "testing" and continuous integration is prerequisite for the success of these approaches.<br />
<br />
A big barrier to adoption seems to be related to the use of the word "testing". In a modern and effective development process such as those mentioned in the article, the focus of the "test" function isn't really (only) about quality any more ... it is about maximizing the pace of development within an environment that does not tolerate preventable faults.<br />
<br />
As a result of this, within my sphere of influence, I have tried to promote the notion of "Development Automation" as an umbrella term that captures the automation of the software build process, module & integration testing, deployment and configuration control, documentation generation and management information reporting ... a term that may help to speed adoption of the techniques mentioned in the article above.<br />
<br />
In many ways "Development Automation" is to product development what "DevOps" is to SAAS systems development: Promoting the use of integrated cross-functional systems of automation for the testing and deployment of software and other complex systems.<br />
<br />
Indeed, as products become more complex, the importance of automation becomes greater and more critical, and the requirement for a carefully considered, well planned, and aggressively automated integration, test and configuration management strategy becomes a prerequisite for success.<br />
<br />
Nowhere is this more apparent than in my field of expertise: the production of machine vision and other sensor systems for deployment into uncontrolled and outdoor environments, systems where specification and test pose a set of unique challenges with considerable knock-on impacts on the system design and choice of integration strategy.<b>"</b><br />
<br />
<b>Bradford Power:</b><br />
<br />
<b>"</b>Do you break down the product into small modules and have small teams that are responsible for design, deployment, AND testing? Do you use simulations to shrink the cycles on new product tests?<b>"</b><br />
<br />
<b>William Payne:</b><br />
<br />
<b>"</b>It depends.<br />
<br />
Taken together, Fred Brooks & Melvin Conway have the answer.<br />
<br />
Firstly, Brooks' "No Silver Bullet" tells us that we cannot drive down development costs forever. Complexity costs money.<br />
<br />
Since we can't meaningfully reduce the cost of complexity, we either have to maximize the top line, or amortize that cost across multiple products. This is product-line engineering taken to the extreme.<br />
<br />
Secondly, Conway's law tells us that our team structure will become our de-facto system architecture. Complex systems development is primarily a learning activity and the fundamental unit of reuse is the individual engineer.<br />
<br />
Team structure therefore has to be organized around the notion that team expertise will be reused across products within one or more product lines, and the more reuse we have, the more we amortize the cost of development and the more profitable we become.<br />
<br />
Whether this means small teams or large teams really depends on the industry and the nature of the product. Similarly, the notion of what constitutes a "module" varies widely, sometimes even within the same organization.<br />
<br />
However, in order to facilitate this, you need a reasonably disciplined approach, together with a shared commitment to stick to the discipline.<br />
<br />
Finally, and most importantly, none of this works unless you can 100% rely upon your automated tests to tell you if you have broken your product or not. This is absolutely critical and is the keystone without which the whole edifice crumbles.<br />
<br />
You can't modify a single component that goes into a dozen different products unless you are totally confident in your testing infrastructure, and in the ability of your tests to catch failures.<br />
<br />
I have spoken to Google test engineers, and they have that confidence. I have got close in the past, and it is a transformative experience, giving you (as an individual developer) the confidence to proceed with a velocity and a pace that is otherwise impossible to achieve.<br />
<br />
Separate test teams have a role to play, particularly when safety standards such as ASIL and/or SIL mandate their use. Equally, simulations have a role to play, although this depends a lot on the nature of the product and the time and engineering cost required to implement the simulation.<br />
<br />
The key point is that there is no silver bullet that will make product development cheaper on a per-unit-of-complexity basis ... only a pragmatic, rigorous, courageous and detail-oriented approach to business organization that acknowledges that cost and is willing to pay for it.<b>"</b><br />
<br />
<b>Andy Singleton:</b><br />
<br />
<b>"</b>Yes, I think that Conway's law is very relevant here. We are trying to build a system as multiple independent services, and we use separate service teams to build, release and maintain them.<br />
<br />
Yes, complexity will always cost money and time. However, I think that Brooks' "Mythical Man-Month" observations are obsolete. He had a 40-year run of amazing insights about managing big projects. During this time, it was generally true that large projects were inefficient or even prone to "failure", and no silver bullet was found. Things have changed in the last few years. Companies like Amazon and Google have blasted through the size barrier.<br />
<br />
They did it with a couple of tactics:<br />
1) Using independent service teams. These teams communicate peer-to-peer to get what they need and resolve dependencies.<br />
2) Using a continuous integration machine that finds problems in the dependencies of one team on another through automated testing, and notifies both teams. This is BRILLIANT, because it replaces the most difficult part of human project management with a machine.<br />
<br />
The underlying theory behind this goes directly against Brooks' theory. He theorized that the problem is communication: with N participants, the number of communication channels grows as N^2, which causes work and confusion. If you believe this, you organize hierarchically to contain the communications. ACTUALLY, the most scalable projects (such as Linux) have the most open communications.<br />
<br />
I think that the real problem with big projects is dependencies. If you have one person, he is never waiting for himself. If you have 100 people, it's pretty common for 50 people to be waiting for something. The solution to this is actually more open communication that allows those 50 people to fix problems themselves.<br />
<br />
I have written several blog articles challenging the analysis in the Mythical Man Month, if you are interested.<b>"</b><br />
<br />
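Andy Singleton's point above, about a CI machine policing cross-team dependencies, can be sketched in miniature. The component names and the dependency table below are purely illustrative:

```python
# Sketch of the CI mechanism described above: when one component changes,
# the CI machine runs the tests of every component that depends on it,
# transitively, and notifies the affected teams. The components and the
# dependency table are hypothetical examples, not a real system.

DEPENDS_ON = {
    "checkout": {"payments", "catalog"},
    "payments": {"ledger"},
    "catalog": set(),
    "ledger": set(),
}

def components_to_retest(changed):
    """Return the changed component plus everything that (transitively)
    depends on it, i.e. every test suite the CI machine must run."""
    affected = {changed}
    grew = True
    while grew:
        grew = False
        for component, deps in DEPENDS_ON.items():
            if component not in affected and deps & affected:
                affected.add(component)
                grew = True
    return affected
```

A change to "ledger" triggers the test suites of "payments" and "checkout" as well; the dependency graph, not a human project manager, decides who needs to know.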
<b>William Payne:</b><br />
<br />
<b>"</b>What you say is very very interesting indeed.<br />
<br />
I agree particularly strongly with what you say about using your CI & build tools to police dependencies. This is key. However, I am a little less convinced that "peer-to-peer" communication quite represents the breakthrough that you suggest. Peer-to-peer communication is unquestionably more efficient than hierarchical communication, with its inbuilt game of Chinese whispers and proliferation of choke-points. However, simply communicating in a peer-to-peer manner does not by itself sidestep the fundamental (physical) problem. You still need to communicate, and that still costs time and attention.<br />
<br />
IMHO, organizing and automating away the need for communication is absolutely the best (only?) way to improve productivity when working on complex systems. This is achieved either by shifting communication from in-band to out-of-band through appropriate organizational structures, or by setting (automatically enforced) policy that removes the need for communication (aka standardisation).<br />
<br />
These are things that I have tried very hard to build into the automated build/test systems that I am responsible for, but it is still a very difficult "sell" to make to professionals without the requisite software engineering background.<b>"</b><br />
<br />
<b>In defense of defaults</b> (2014-03-30)<br />
<br />
It is necessary, but not sufficient, for something to be possible: the possibilities also need to be communicated, and we have limited capacity for in-band communication. Default settings are a great way to communicate desiderata in a way that sidesteps our ubiquitously crippling bandwidth limitations.<br />
<br />
<b>Social network topology and interpersonal bandwidth as a predictor for conflict</b> (2014-03-28)<br />
<br />
Response to: http://squid314.livejournal.com/329561.html?view=11210073#t11210073<br />
<br />
A lot of arguments and disagreements come about because of misunderstood terminology: words and phrases that evoke different imagery and have varying definitions between different groups of people.<br />
<br />
These differences come about because of uneven diffusion of information through the social network, and restrictions in interpersonal bandwidth more generally.<br />
<br />
We can reduce the long-term aggregate severity of arguments & disagreements (at the expense of some short-term pain) if we increase our communications bandwidth, both individually and in aggregate, and act to bridge the gaps between super-clusters in the social network.<br />
<br />
<b>Accidental and essential complexity</b> (2014-03-25)<br />
<br />
Response to: http://250bpm.com/blog:36<br />
<br />
This sort of reminds me of Fred Brooks' commentary on essential and accidental complexity. Once you have got rid of the accidental complexity, you are left with the essential complexity, which cannot be further reduced without compromising on functionality.<br />
<br />
You can shovel the essential complexity around your system design all you like, but the overall quantity of complexity still remains constant. A practical consequence of this is that you can move the complexity away from the 1500 line function by splitting it up into smaller functions, but that complexity remains embedded in the implicit relationship that exists between all of your 50 line functions. (Plus some additional overhead from the function declarations themselves).<br />
<br />
Of course, splitting up the large function into smaller ones gives you other benefits: the ability to test each small function in isolation, and (more importantly) the ability to read and understand the scope and purpose of each small function without being confused by irrelevant details.<br />
<br />
Personally, I would like to have the ability to give names to chunks of code within a large function - and to write tests for those chunks without having to extract them into separate smaller functions.<br />
<br />
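Python offers a partial approximation of the wish above: local functions can give names to the chunks of a larger function while keeping them inside its scope. A hypothetical sketch (the order-processing example and its field names are invented for illustration):

```python
# Named "chunks": local functions label each stage of a larger function
# without leaving its context. Note that, as the paragraph above laments,
# these chunks still cannot be unit-tested from outside without being
# extracted; only the enclosing function is directly testable.

def process_order(order):
    def validate(o):            # named chunk: input checking
        return bool(o.get("items")) and o.get("total", 0) >= 0

    def apply_discount(o):      # named chunk: pricing rule
        if o["total"] > 100:
            return {**o, "total": o["total"] * 0.9}
        return o

    if not validate(order):
        raise ValueError("invalid order")
    return apply_discount(order)
```

The chunk names document intent in-place; the trade-off is that tests can only exercise them through `process_order` itself, which is exactly the limitation the post complains about.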
I would also like to see better tools (and widespread usage of those that exist) for building various graphs and diagrams showing the call-graph and other relationships between functions and objects, so that we can explore, understand, and communicate the implicit complexity encoded in program structure more easily.<br />
<br />
<b>Fundamental prerequisites for development</b> (2014-03-12)<br />
<br />
Great technical leadership guides developers to move in the same direction by articulating a clear (and simple) technical philosophy and encouraging the formation of a strong and consistent culture.<br />
<br />
Well thought through infrastructure and high levels of automation allow high quality work to be performed at pace.<br />
<br />
Intelligent and passionate developers are required so that work is not pulled backwards by inconsistencies introduced because of insufficient levels of attention and concentration.<br />
<br />
Once all this (and more) has been achieved, only then is it worthwhile thinking about whether "Agile" makes sense as a risk management and/or stakeholder communications approach.<br />
<br />
<b>The expectation of perfection is poison</b> (2014-03-09)<br />
<br />
If you create an expectation for flawless perfection, you are setting yourself up to be lied to.<br />
<br />
It is a distressingly rare thing to find an organisational culture that <i>successfully</i> puts a premium on humility, public acknowledgement of one's own flaws, and the learning of lessons for the future.<br />
<br />
I am particularly reminded of my experience whilst working for Fidelity. As a company, they tried very hard to create a culture that stood apart from the financial industry mainstream: one of maturity, professionalism, introspection and self-awareness. Yet the testosterone-stewed aggression of some individuals, combined with industry behavioural norms, rapidly undid all that good work.<br />
<br />
In time, the company's self-regulating mechanisms kicked in and the offenders were shown the door, but the experience shows how quickly and how easily a shallow message backed by aggression beats a deep message backed by considered thought.<br />
<br />
<b>Towards a declaration of independence for the internet?</b> (2014-03-09)<br />
<br />
Shortly after I first heard of bitcoin back in 2011, my first thought was that it could be used to implement a sort of peer-to-peer exchange in which the instruments being traded would not be stocks and shares in the conventional sense, but voting rights in committees that made decisions controlling shared interests. I envisaged that this would operate as a sort of virtual crypto-corporation controlling some real-world assets.<br />
<br />
Since then, others have come up with ways of using the bitcoin infrastructure to provide a general-purpose computing resource, so one could, in principle, replace the virtual crypto-corporation and the real-world assets with some source code and a computing resource; raising the possibility of truly autonomous, distributed, financially independent digital entities: which is about as close to a declaration of independence as the internet is ever going to be able to give.<br />
<br />
<b>Lessons from computer games</b> (2014-03-09)<br />
<br />
Operation Flashpoint & Armed Assault:<br />
<br />
<ul>
<li>In a serious gunfight, pretty much everybody dies.</li>
<li>Don't join the army: It's a dumb idea.</li>
</ul>
<br />
Total Annihilation:<br />
<br />
<ul>
<li>There is no such thing as too much.</li>
<li>People set themselves up for failure by limiting the scope of their imagination.</li>
</ul>
<br />
<div>
Planetary Annihilation:</div>
<div>
<ul>
<li>Attention is the most critical resource.</li>
<li>Play the person, not the game.</li>
</ul>
</div>
<b>Computer and network security in the robot age</b> (2014-03-07)<br />
<br />
In response to: http://www.theatlantic.com/technology/archive/2014/03/theres-no-real-difference-between-online-espionage-and-online-attack/284233/<br />
<br />
It doesn't matter if you are "just" eavesdropping or if you are trying to cause damage directly. If you are trying to take control over somebody else's property; trying to make it do things that the owner of that property did not intend and does not want, then surely that is a form of theft?<br />
<br />
Possession isn't just about holding something in your hand: it is also about power and control.<br />
<br />
The implications of this might be a little hard to see, because right now computers don't have very much direct interaction with the "real" world ... but it won't be like that forever.<br />
<br />
Take me, for example. I am working on systems to control autonomous vehicles: networked computers in charge of a car or a truck.<br />
<br />
This is just the beginning. In a decade or more, it won't just be cars and trucks on the road that drive themselves: airborne drones and domestic robots of every size and shape will be everywhere you look.<br />
<br />
What would the world look like if we allowed a party or parties unknown to seize control of all these computers? What kind of chaos and carnage could a malicious actor cause?<br />
<br />
We have an opportunity right now. A tremendous gift. We need to put in place the infrastructure that will make this sort of wholesale subversion impossible; or, at the very least, very much harder than it is today, and we need to do it before the stakes are raised to a dangerous degree.<br />
<b>Nature vs Nurture; Talent vs Practice</b> (2014-03-01)<br />
<br />
In response to: <a href="http://www.bbc.co.uk/news/magazine-26384712">http://www.bbc.co.uk/news/magazine-26384712</a><br />
<br />
Practice is the immediate (proximal) cause of high performance at a particular task. The notion that anybody has evolved an "innate" talent at something as specific to the modern world as playing a violin is obviously laughable.<br />
<br />
<b><i>However:</i></b> The consistently high levels of motivation that an individual needs in order to practice a skill for the requisite length of time are very much a function of innate characteristics, particularly personality traits: notably those associated with conditions such as GAD, OCD & ASPD.<br />
<br />
Of course, the extent to which a borderline personality disorder can be harnessed to support the acquisition of extremely high levels of skill and performance is very much a function of the environment. For example:<br />
<br />
1. The level of stress that the individual is subjected to.<br />
2. The built environment within which they live.<br />
3. The social culture that they are part of.<br />
4. The support that they get from family and friends.<br />
<br />
In summary: To exhibit high levels of performance you <i>do</i> need some innate characteristics, (although not necessarily the innate "talents" that we typically associate with skill), but those characteristics need to be shaped and formed in the right environment: both built and social.<br />
<br />
It should be possible to engineer a higher incidence of high levels of performance in selected individuals, but I suspect that the interaction between personality and environment is sufficiently subtle that we would not be able to guarantee an outcome in anything other than statistical terms.<br />
<br />
<b>Taste on the frontier of complexity</b> (2014-02-27)<br />
<br />
Issues of taste and aesthetics are not normal fodder for engineers ... but when you make something sufficiently complex, you are by necessity operating on the frontier, far away from the well-trodden paths of tradition and precedent.<br />
<br />
Out here, survival is not so much a matter of rules, but a matter of gut instinct: Of taste. Of elegance. Of aesthetics and style.<br />
<br />
Here, where the consequences of your decisions can be both impenetrable and dire, vigorous debate and discussion are essential; ego and stubbornness are both fatal disorders; and a shared sense of taste is an incredible boon.<br />
<br />
This is where leadership at its most refined can shine: the creation of a group aesthetic; a culture and identity that is oriented around matters of taste.<br />
<br />
<b>The role of Machine Vision as a Rosetta Stone for Artificial Intelligence</b> (2014-02-26)<br />
<div>
My life has not followed a straight path. There have been many detours and deviations. Nevertheless, if I turn my head to one side, turn around twice, squint through one eye, and am very selective about what I pick from my memories, I can piece together a narrative that sort-of makes sense.</div>
<div>
<br /></div>
During my teenage years, I had (typically for a teenager) overwrought and unrealistic ambitions. Driven by a somewhat mystical view of science and mathematics, I harboured ambitions to be a physicist. I wanted to explain and understand a frequently baffling world, as well as (perhaps) to find a way to escape to a place of abstract elegance: A way to remove myself from the social awkwardness and profound embarrassment that plagued my life at that time. I was, after all, a socially inept and acne-ridden nerd with sometimes erratic emotional self-control.<br />
<br />
It was in this context that my belief in the ethical supremacy of abstract (and disciplined) reasoning over unreliable and sometimes destructive emotional intuition was founded: a concept that forms one of the prime narrative threads that bind this story together.<br />
<br />
<div>
To me, the abstract world was the one thing that held any hope of making consistent sense; and provided (now as then) the ultimate avenue for a near-perpetual state of denial. Not that I have been terribly successful in my quest (by the overwrought standards of my teenage ambitions at least), but the role of science & technology "groupie" seems to have served me and my career reasonably well so far, and has cemented a view of life as a tragedy in which abstract intellectualism serves as a platonic ideal towards which we forever strive, but are cursed never to achieve.<br />
<br />
In any case, I quickly came to the conclusion that my intellectual faculties were completely insufficient to grasp the mathematics that my aspirations required. In retrospect, this was less a victory of insight than the sort of failure that teaches us that excessive perfectionism, when coupled with a lack of discipline and determination, will inevitably lead to self-imposed failure.<br />
<div>
<br />
So, I drifted for a few years before discovering Artificial Intelligence, reasoning that if I was not bright enough to be a physicist in my own right, I should at least be able to get some assistance in understanding the world from a computer: an understanding that might even extend to the intricacies of my own unreliable brain. Indeed, my own (possibly narcissistic) quest to improve my understanding both of my own nature and that of the wider world is another key thread that runs through this narrative.<br />
<br />
A good part of my motivation at the time came from my popular science reading list. Books on Chaos theory and non-linear dynamics had a great impact on me in those years, and from these, and the notions of emergence that they introduced, I felt that we were only beginning to scratch the surface of the potential that general purpose computing machines offered us.</div>
</div>
<div>
<br /></div>
<div>
My (eventual) undergraduate education in AI was (initially) a bit of a disappointment. Focusing on "good old-fashioned" AI and computational linguistics, the intellectual approach that the majority of the modules took was oriented around theorem proving and rule-based systems: a heady mix of Noam Chomsky and Prolog programming.<br /><br />This classical and logical approach to understanding the world was really an extension of the philosophy of logic to the computer age; a singularly unimaginative act of intellectual inertia that left little room for the messiness, complexity and chaos that characterised my understanding of the world, whilst similarly confirming my view that the tantalising potential of general-purpose computation was inevitably destined to remain untapped. More than this, the presumption that the world could be described and understood in terms of absolutist rules struck me as an essentially arrogant act.<br /><br />However, I was still strongly attracted to the notion of logic as the study of how we "ought" to think, or the study of thought in the abstract, divorced from the messy imperfections of the real world. Bridging this gap, it seemed to me, was an activity of paramount importance, but an exercise that could only realistically begin at one end: the end grounded in messy realities rather than head-in-the-clouds abstraction.<br />
<br />
As a result of this, I gravitated strongly towards the machine learning, neural networks and machine vision modules that became available towards the end of my undergraduate education. These captured my attention and my imagination in a way that the pseudo-intellectualism of computational linguistics and formal logic could not.<br />
<br />
My interest in neural networks was tempered somewhat by my continuing interest in "hard" science &amp; engineering, and by the lingering suspicion that much "soft" (and biologically inspired) computing was a bit of an intellectual cop-out; a view that has been confirmed a couple of times in my career. (Never trust individuals who propose either neural networks or genetic algorithms without first thoroughly exploring the alternatives!)<br />
<br />
On the other hand, machine learning and statistical pattern recognition seemed particularly attractive to my 20-something-year-old mind, combining a level of mathematical rigour that appealed to my ego and my sense of aesthetics with a readily available geometric interpretation that appealed to my predilection for visual and spatial reasoning. The fact that it readily acknowledged the inherent complexity and practical uncertainty involved in any realistic "understanding" of the world also struck a chord with me: it appeared a more intellectually honest and humble practitioners' approach than the "high church" of logic and linguistics, and made me re-appraise the A-level statistics that I had shunned a few years earlier. (Honestly, the way that we teach statistics is just horrible, and most introductory statistics textbooks do irreparable damage to an essential and brilliant subject.)<br />
<br />
The humility and honesty were an important part of the appeal. Most practitioners that I met in those days talked about pattern recognition as a "dark art", with an emphasis on exploratory data analysis and an intuitive understanding of the dataset. Notably absent was the arrogance and condescension that seems to characterise the subject now that "Big Data" and "Data Science" have become oh-so-trendy; attracting the mayflies and the charlatans by the boatload.<br />
<br />
In any case, then as now, statistical pattern recognition is a means to an end: an engineering solution to bridge the gap between the messy realities of an imperfect world (low-level learning and data analysis) and the platonic world of abstract thought and logic. This view was reinforced by the approach taken by the MIT COG team: reasoning that in order to learn how to behave in the world, the robot needs a body with sensors and effectors, so that it can learn to make sense of the world in a directed way.<br />
<br />
I didn't have a robot, but I could get data. Well, sort of. At that point in time, data-sets were actually quite hard to get hold of. The biggest dataset that I could easily lay my hands on (as an impoverished undergraduate) was the collection of text files from Project Gutenberg; and since my mind (incorrectly) equated natural language with grammars and parsing, rather than statistics and machine learning, my attention turned elsewhere.<br />
<br />
That elsewhere was image data. In my mind (influenced by the MIT COG approach), we needed to escape from the self-referential bubble of natural language by pinning abstract concepts to real-world physical observations. Text alone was not enough. Machine Vision would be the Rosetta Stone that would enable us to unlock its potential. By teaching a machine to look at the world of objects, it could teach itself to understand the world of men.<br />
<br />
One of my fellow students actually had (mirabile dictu!) a digital camera, which stored its images on a zip-disk (about the size of a 3.5 inch floppy disk), and took pictures that (if I recall correctly) were about 800x600 in resolution. I borrowed this camera and made my first (abortive) attempts to study natural image statistics; an attempt that continued as I entered my final year as an undergraduate and took on my final-year project: tracing bundles of nerves through light microscopy images of serial ultra-microtome sections of Drosophila ganglia. As ever, the scope of the project rather outstripped my time and my abilities, but some important lessons were nonetheless learned.<br />
<br /></div>
<div>
... To be continued.</div>
Software sculpture (2014-02-26)<br />
<br />
Developing software is a craft that aspires to be an art.<br />
<br />
It is both additive and subtractive. As we add words and letters to our formal documents, we build up declarations and relations; descriptions of logic and process. As this happens, we carve away at the world of possibilities: we make some potentialities harder to reach. The subtractive process is *implied* by the additive process, rather than directly specified by it.<br />
<br />
If this subtractive process proceeds too slowly, we end up operating in a space with too many degrees of freedom: difficult to describe; difficult to reason about; and with too many ways that the system can go wrong.<br />
<br />
If the subtractive process proceeds too quickly, we end up eliminating potentialities which we need, eventually, to realise. This results in prohibitively expensive amounts of rework.<br />
<br />
The balance is a delicate one, and it involves intangible concepts that are not directly stated in the formal documents that we write; they are only indirectly implied.<br />
<br />
The cost of complexity in software engineering is like the sound barrier. How can we break it? (2014-02-20)<br />
<br />
<span style="color: #3f4549; font-family: Georgia, Times, serif;"><span style="line-height: 20.98958396911621px;">In response to: http://www.pistoncloud.com/2014/02/do-successful-programmers-need-to-be-introverted/</span></span><br />
<span style="color: #3f4549; font-family: Georgia, Times, serif;"><span style="line-height: 20.98958396911621px;"><br /></span></span>
<span style="color: #3f4549; font-family: Georgia, Times, serif;"><span style="line-height: 20.98958396911621px;">Q: Do successful programmers need to be introverted?</span></span><br />
<span style="color: #3f4549; font-family: Georgia, Times, serif;"><span style="line-height: 20.98958396911621px;">A: It depends.</span></span><br />
<span style="color: #3f4549; font-family: Georgia, Times, serif;"><span style="line-height: 20.98958396911621px;"><br /></span></span>
<span style="color: #3f4549; font-family: Georgia, Times, serif;"><span style="line-height: 20.98958396911621px;">One unpleasant consequence of network effects is that the cost of communication has a super-linear relationship with system complexity. Fred Brooks indicates that the communication overhead for large (or growing) teams can become significant enough to stop work in its tracks. As the team grows beyond a certain point, the cost quickly shoots up to an infeasible level. By analogy with the sound barrier, I call this the communications barrier, because both present a seemingly insurmountable wall blocking our ability to further improve our performance.</span></span><br />
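Brooks's point can be made concrete with his intercommunication formula: n people have n(n-1)/2 potential pairwise channels, so the overhead grows quadratically rather than linearly with team size. A tiny illustration:

```python
# Brooks's intercommunication count: n team members have n*(n-1)/2 potential
# pairwise communication channels, so coordination cost grows super-linearly
# (quadratically) with team size.

def channels(n):
    return n * (n - 1) // 2

for n in (2, 5, 10, 50):
    print(n, channels(n))  # doubling the team far more than doubles the channels
```

Going from 10 people to 50 multiplies the headcount by five but the potential channels by more than twenty-seven.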
<span style="color: #3f4549; font-family: Georgia, Times, serif;"><span style="line-height: 20.98958396911621px;"><br /></span></span>
<span style="color: #3f4549; font-family: Georgia, Times, serif;"><span style="line-height: 20.98958396911621px;">This analysis argues for small team sizes, perhaps as small as n=1. Clearly introversion is an asset under these circumstances.</span></span><br />
<span style="color: #3f4549; font-family: Georgia, Times, serif;"><span style="line-height: 20.98958396911621px;"><br /></span></span>
<span style="color: #3f4549; font-family: Georgia, Times, serif;"><span style="line-height: 20.98958396911621px;">However, irrespective of their innate efficiency, there are obvious limits to what a small team can produce. Building a system beyond a certain level of complexity requires a large team, and large teams need to break the communications barrier to succeed. Excellent, highly disciplined and highly effective communications skills are obviously very important under these circumstances, which calls for a certain type of (disciplined) extroversion; perhaps in the form of visionary leadership?</span></span><br />
<span style="color: #3f4549; font-family: Georgia, Times, serif;"><span style="line-height: 20.98958396911621px;"><br /></span></span>
<span style="color: #3f4549; font-family: Georgia, Times, serif;"><span style="line-height: 20.98958396911621px;">My intuition tells me that breaking the communications barrier is a matter of organisation and detail-oriented thinking. Unfortunately, I have yet to observe it being done both effectively and systematically by any of the organisations that I have worked for.</span></span><br />
<span style="color: #3f4549; font-family: Georgia, Times, serif;"><span style="line-height: 20.98958396911621px;"><br /></span></span>
<span style="color: #3f4549; font-family: Georgia, Times, serif;"><span style="line-height: 20.98958396911621px;">Has anybody else seen or worked with an organisation that has successfully broken the communications barrier? If so, how did they do it?</span></span><br />
<br />
Computer security in the machine age (2014-02-19)<br />
<br />
The complexity of modern technology makes it terribly vulnerable to abuse and exploitation. So many devices have been attacked and compromised that, when faced with any given piece of technology, the safest assumption is that it has been or will be subverted.<div>
<br /></div>
<div>
For today's devices, the consequences don't intrude so much into the physical world. Some money may get stolen, a website may be defaced, some industrial (or military) secrets may be taken, but (Stuxnet aside) the potential for physical death, damage and destruction is limited.</div>
<div>
<br /></div>
<div>
For tomorrow's devices, the story is quite terrifying. A decade from now, the world will be filled with thousands upon thousands of autonomous cars and lorries, domestic robots and UAVs.</div>
<div>
<br /></div>
<div>
Today, criminals use botnets to send spam and commit advertising fraud. Tomorrow, what will malicious hackers do when their botnet contains tens of thousands of robot cars and trucks?</div>
<div>
<br /></div>
<div>
What can we do to change the trajectory of this story? What can we do to alter its conclusion?</div>
Performance (2014-02-16)<br />
<br />
The internet is just terrible for our sense of self-worth. We can reach out and connect to the very best and most talented individuals in the world; we can read what they write, and even converse with them if we choose. It is only natural that we should compare ourselves to them and find ourselves wanting.<br />
<br />
It is easy to retreat from this situation with a sense of despair and self-pity, but there is another thought that occurs to me also, and that thought is this: Role models are a red herring. Does your role model have a role model of his own? Probably not. You don't get to be an outstanding performer by emulating somebody else, nor (just) by competing with your peers, nor (just) by following your passion, nor (just) by putting in your 10,000 hours. True performance comes from owning an area of expertise; from living and breathing it. From having your identity and sense of self so totally wrapped up in it that you can do nothing else.<br />
<br />
Clearly, this sucks for everybody else around you, so it is a path that not many people should follow ... which makes me feel a bit better about my own inadequacies.<br />
<div>
<br /></div>
<div>
So there.</div>
Side channel attacks on silos (2014-02-14)<br />
<br />
Silos of knowledge and expertise build up all too easily, not only as a result of the human tendency towards homophily, but also because of more fundamental bandwidth limitations.<br />
<br />
As with most human failings, we look to simple technological fixes to resolve them.<br />
<br />
One frequently overlooked technology is the use of ambient "side" channels to encourage or enforce communication:<br />
<br />
1. Human environment. (Who do you work with?)<br />
2. Built environment. (Who do you sit next to?)<br />
3. Informational environment. (Where do your documents live?)<br />
<br />
Every act of sensory perception during the work day is an opportunity for meaningful communication.<br />
<br />
Code as haiku (2014-02-13)<br />
<br />
Response to: <a href="http://www.ex-parrot.com/~pete/but-it-doesnt-mean-anything.html">http://www.ex-parrot.com/~pete/but-it-doesnt-mean-anything.html</a><br />
<br />
<br />
"Code" is a horrible word.<br />
<br />
I prefer "source documents", or, if pressed, "Formal descriptions of the program".<br />
<br />
Using the word "code" implies that the document is "encoded" somehow, which is plainly undesirable and wrong.<br />
<br />
With some notable (*) exceptions, the primary consumer of a "source document" is a human being, not a machine.<br />
<br />
The machine's purpose is to ensure the formal validity and correctness of the document - but the majority of the informational content of the document (variable names, structure, comments) is exclusively directed to human developers.<br />
<br />
We will never program in a "wild" natural language, but many programming languages (**) make a deliberate effort to support expressions which simulate or approximate natural language usage, albeit restricted to a particular idiomatic form.<br />
<br />
There will always be a tension between keeping a formal language simple enough to reason about and permitting free, naturalistic expression - but this is the same tension that makes poetry and haiku so appealing as an art form.<br />
<br />
So many source documents appear to be "in code", not because this is a necessary part of programming, but because it is very very difficult to write things which combine sufficient simplicity for easy understanding, and the correct representation of a difficult and complex problem. In most of these cases, clear understanding is baffled more by the complexity of the real world than by the nature of the programming language itself.<br />
<br />
The rigidly deterministic nature of the computer forces the programmer to deal with a myriad of inconsistencies and complications that the non-programmer is able to elide or gloss over with linguistic and social gymnastics. The computer forces us to confront these complications, and to account for them. Legal drafting faces similar (albeit less extreme) challenges.<br />
<br />
In the same way that Mathematics isn't really about numbers, but about the skill and craftsmanship of disciplined thought, programming isn't really about computers, but about what happens when you can no longer ignore the details within which the devil resides.<br />
<br />
(*) Assembler & anything involving regular expressions.<br />
(**) Python<br />
<br />
The UX of large-scale online education (2014-01-09)<br />
<br />
Written in response to: http://www.fastcompany.com/3021473/udacity-sebastian-thrun-uphill-climb<br />
<br />
The number of students that complete the course may not be the right metric to look at. However, there are a number of steps that you could take that I think might improve completion rates.<br />
<br />
Human beings are a pretty undisciplined bunch, as a rule. We dislike rigour and crave immediate gratification. Our Puritan work-ethic may predispose us to look down upon such human foibles, but there is no shame in exploiting them in the pursuit of the expansion of learning and the spread of knowledge.<br />
<br />
Most of the following suggestions are oriented around giving students more fine-grained control over the timing and sequencing of their studies, as well as increasing the frequency and substance of the feedback. To complement this, some innovation may be required to come up with mechanisms that encourage and support the development of discipline without penalising those who simply cannot fit regular study around their other life commitments.<br />
<br />
1. Recognise that study is a secondary or tertiary activity for many students: Study may be able to trump entertainment or leisure, but work and family will always come first.<br />
<br />
2. Break the course materials up into tiny workshop-sized modules that can be completed in less than two weeks of part-time study. About one weekend's worth should be right, allowing "sprints" of study to be interspersed and balanced with a healthy and proper commitment to family life.<br />
<br />
3. Each module does not have to stand alone. It can build on prerequisites taught in other modules, but those prerequisites should be documented and suggested rather than enforced programmatically.<br />
<br />
4. Assessments within and at the end of each of these micro-modules should be for the benefit of the student, and should not count towards any sort of award or certification.<br />
<br />
5. The scheduling of study over the calendar year should be optional. One or more group schedules may be suggested, so collections of students can progress through the material together and interact online and in meet-ups, but others should be allowed to take a more economical pace, each according to their budget of time and attention.<br />
<br />
6. Final exams can still be scheduled once or twice per year, coincident with the end of one or more suggested schedules. Students pacing their own study may need to wait a while before exam-time comes around, but the flexibility in study more than compensates for any disadvantage that they may have in the exam.<br />
<br />
These suggestions should help lower barriers for students with otherwise packed calendars. In addition, it may be worthwhile experimenting with various techniques to grab students' attention and re-focus it back on their learning objectives: ideas from gamification point to frequent feedback and frequent small rewards to encourage attention and deep concentration. Also from the gaming world, sophisticated algorithms exist that are designed to match players of similar ability in online matches. The same algorithms can be used to match students of similar ability for competitive assessments and quizzes. In addition to gamification techniques, it should be possible to explore different schedules for "pushing" reminders and messages to students, or other prompts for further study. For example, you could send out emails with a few quiz questions that require just one more video to be watched. Finally, you can get people to pledge / commit to a short-term goal, for example, to reach a certain point in the module by a certain point in time (e.g. the end of the weekend).<br />
<br />
Low overhead trace-logging (2013-12-18)<br />
<br />
Scatter trace-points throughout the code. Each trace-point is identified by a single unique integer (macro magic?). The last n (64? 256? 1024?) trace-points are stored in a global ring buffer, which gets flushed to disk when an anomalous situation is encountered (e.g. an exception is thrown).<br />
<br />
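A minimal sketch of the trace-logging scheme in Python (the original envisages C macros; the function names, buffer size and example trace-point IDs here are purely illustrative):

```python
# Ring-buffer trace-logging: each trace-point is a unique integer, only the
# last N survive, and the buffer is dumped only when something anomalous
# happens. Appending to a bounded deque keeps the per-trace-point cost tiny.

from collections import deque

_TRACE = deque(maxlen=256)   # ring buffer: only the last 256 trace-points kept


def trace(point_id):
    # Called at each trace-point scattered through the code.
    _TRACE.append(point_id)


def dump_trace():
    # Flush on an anomaly (returned here; in practice, written to disk).
    return list(_TRACE)


def risky_operation(x):
    trace(1)
    if x < 0:
        trace(2)
        raise ValueError("negative input")
    trace(3)
    return x * 2
```

An exception handler around `risky_operation` would call `dump_trace()` to record the recent execution path; because the deque is bounded, tracing stays cheap no matter how chatty the trace-points are.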
Strategic vs. tactical programming (2013-12-13)<br />
<div style="background-color: white; border: 0px; box-sizing: border-box; color: rgba(29, 47, 58, 0.701961); font-family: 'Helvetica Neue', helvetica, arial, sans-serif; font-size: 14px; line-height: 19px; padding: 0px;">
The great thing about programming is that you can do it at any level of abstraction -- from the minutest down-in-the guts detail, to broad strategic strokes of business logic.</div>
<div style="background-color: white; border: 0px; box-sizing: border-box; color: rgba(29, 47, 58, 0.701961); font-family: 'Helvetica Neue', helvetica, arial, sans-serif; font-size: 14px; line-height: 19px; padding: 0px;">
<br style="box-sizing: border-box;" /></div>
<div style="background-color: white; border: 0px; box-sizing: border-box; color: rgba(29, 47, 58, 0.701961); font-family: 'Helvetica Neue', helvetica, arial, sans-serif; font-size: 14px; line-height: 19px; padding: 0px;">
There is absolutely no reason why a fully automated business should not have strategic programming roles as well as detailed "tactical" programming roles.</div>
Reasoning about state evolution using classes (2013-11-25)<br />
<br />
In response to:<br />
<br />
<a href="http://hadihariri.com/2013/11/24/refactoring-to-functionalwhy-class">http://hadihariri.com/2013/11/24/refactoring-to-functionalwhy-class</a><br />
<br />
Thinking about how application state evolves over time is difficult, and is responsible for all sorts of pernicious bugs. If you can avoid statefulness, you should ... but sometimes you just can't. In these cases, Classes give us a way to limit and control how state can change, making it easier to reason about.<br />
<br />
In other words, placing state in a private variable, and restricting the possible state transitions to a small set of public methods (public setters don't count), can dramatically limit the combinatorial explosion that makes reasoning about state evolution so difficult. To the extent that this explosion is limited, classes help us to think and reason about how the state of our application evolves over time, reducing the occurrence and severity of bugs in our system.<br />
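As a sketch of the idea, consider a hypothetical connection object whose state can only move along the transitions its two public methods permit; a reader need only inspect those two methods to know every way the state can evolve:

```python
# State limited to CLOSED <-> OPEN: the state lives in a "private" attribute,
# there is no public setter, and the only legal transitions are those
# implemented by open() and close().

class Connection:
    def __init__(self):
        self._state = "CLOSED"   # private by convention; no public setter

    def open(self):
        if self._state != "CLOSED":
            raise RuntimeError("already open")
        self._state = "OPEN"

    def close(self):
        if self._state != "OPEN":
            raise RuntimeError("not open")
        self._state = "CLOSED"

    @property
    def state(self):
        return self._state       # read-only view for callers
```

With two states and two guarded transitions there are only four cases to reason about, however many call sites exist; with a public setter, every call site could put the object into any state at any time.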
<br />
The discussion of using classes as an OO domain-modelling language is a separate question, and is (as the above article insinuates) bedevilled with domain-model impedance mismatches, resulting in unsatisfactory levels of induced accidental complexity.<br />
<br />
20 percent together (2013-11-22)<br />
<br />
<span style="background-color: white; color: #404040; font-family: Roboto, arial, sans-serif; font-size: 13.333333969116211px; line-height: 20px;">20% time is a beguiling idea. Productivity is a function of passion, and nothing fuels passion like ownership. The problem with 20% time stems from the blurring of organisational focus and the diffusion of collective action that results from it. </span><br style="background-color: white; color: #404040; font-family: Roboto, arial, sans-serif; font-size: 13.333333969116211px; line-height: 20px;" /><br style="background-color: white; color: #404040; font-family: Roboto, arial, sans-serif; font-size: 13.333333969116211px; line-height: 20px;" /><span style="background-color: white; color: #404040; font-family: Roboto, arial, sans-serif; font-size: 13.333333969116211px; line-height: 20px;">So ... the problem remains: how to harness the passion of self-ownership, and steer it so that it is directed in line with the driving force and focus of the entire company ... 
so the group retains its focus, and the individual retains his (or her) sense of ownership.</span><br style="background-color: white; color: #404040; font-family: Roboto, arial, sans-serif; font-size: 13.333333969116211px; line-height: 20px;" /><br style="background-color: white; color: #404040; font-family: Roboto, arial, sans-serif; font-size: 13.333333969116211px; line-height: 20px;" /><span style="background-color: white; color: #404040; font-family: Roboto, arial, sans-serif; font-size: 13.333333969116211px; line-height: 20px;">I can well imagine that the solution to this strategic challenge is going to be idiosyncratic to the business or industry in question ... but for many types of numerical software development there is a real need for good tools and automation, so why not make a grand bargain: give employees some (limited) freedom to choose what they work on, but mandate that it should be in support of (well-advertised) organisational strategic objectives.</span><br />
<br />
Continuous Collaboration: Using Git to meld continuous integration with collaborative editing (2013-11-15)<br />
<br />
Written in response to:<br />
<br />
http://www.grahamlea.com/2013/11/git-mercurial-anti-agile-continuous-integration/<br />
<br />
I strongly agree with the fundamental point that the author is making.<br />
<br />
However, there are nuances. A lot of this depends on the type of development that you are doing.<br />
<br />
For example, most of my day-to-day work is done in very small increments. Minor bug-fixes, incremental classifier performance improvements, parameter changes, and so on. Only rarely will I work on a feature that is so significant in its impact that the work-in-progress causes the branch to spend several days in a broken / non-working state. I also work in fairly small teams, so the rate of pushes to Gerrit is quite low: only around a dozen pushes per day or so. This means that integration is pretty easy, and that our CI server gives us value &amp; helps with our quality gating. We can follow a single-branch development path with little to no pain, and because both our software and the division of labour in the team are fairly well organised, conflicts very seldom occur when merging (even when using suboptimal tools to perform the merges).<br />
<br />
This state of affairs probably does not hold for all developers, but it holds for me, and for most of the people that I work with. As a result, we can happily work without feature branches (most of the time), and lean on the CI process to keep ourselves in sync & to measure the performance of our classifiers & other algorithms.<br />
<br />
Now, don't get me wrong, I think that Git is great. I am the nominated Git expert in my team, and spend a lot of time helping other team members navigate the nuances of using Git with Gerrit, but for most people it is yet another tool to learn in an already over-complex development environment. Git gives us the flexibility to do what we need to in the environment that we have; but it is anything but effortless and transparent, which is what it really needs to be.<br />
<br />
Software development is about developing software. Making systems that work. Not wrangling branches in Git.<br />
<br />
My ideal tool would be the bastard son of Git and a real-time collaborative editor. My unit tests should be able to report when my local working copy is in a good state. Likewise, my unit tests should be able to report whether a merge or rebase has succeeded or failed. Why can I not then fully automate the process of integrating my work with that of my colleagues? Indeed, my work should be integrated & shared whenever the following two conditions are met: 1) My unit tests pass on my local working copy, and 2) My unit tests pass on the fully integrated copy. These are the same criteria that I would use when doing the process manually ... so why do it manually? Why not automate it? Triggered by every save, the resulting process would create the appearance of an almost-real-time collaborative working environment, opening up the possibility for new forms of close collaboration and team-working that are simply not possible with current tools. A source file would be a shared document that updates almost in real time. (If it is only comments that are being edited, then there is no reason why the updating could not actually be in real time). This means that you could discuss a change with a colleague, IRC-style, in the comments of a source document, and make the change in the source file *at the same time*, keeping a record not only of the logic change, but also of the reasoning that led to it. (OK, this might cause too much noise, but with comment-folding, that might not matter too much).<br />
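The two integration conditions above can be sketched as a save-triggered hook. In this sketch, `run_tests`, `sync_with_upstream` and `publish` are hypothetical stand-ins for the project's test runner, a `git pull --rebase`, and a `git push` respectively:

```python
# Sketch of the save-triggered auto-integration loop described above: share
# work iff (1) the unit tests pass on the local working copy, and (2) they
# still pass on the fully integrated copy.

def auto_integrate(run_tests, sync_with_upstream, publish):
    """Called on every save; returns a status string describing what happened."""
    if not run_tests():           # condition 1: local working copy is green
        return "local-red"
    if not sync_with_upstream():  # pull colleagues' work in (rebase/merge)
        return "merge-conflict"
    if not run_tests():           # condition 2: integrated copy is green
        return "integration-red"
    publish()                     # e.g. push to the shared branch
    return "shared"
```

Wired to an editor's save event, this would approximate the almost-real-time collaborative working environment described above, with the version-control bookkeeping hidden behind the hooks.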
<br />
Having said all of that, branches are still useful, as are commit messages, so we would still want something like Git to keep a record of significant changes, and to isolate incompatible works-in-progress in separate branches; but there is no reason why we cannot separate out the "integration" use case and the "collaboration" use case from the "version control" and "record keeping" use cases.