Sunday, 9 March 2014

Lessons from computer games

Operation Flashpoint & Armed Assault:

  • In a serious gunfight, pretty much everybody dies.
  • Don't join the army: It's a dumb idea.

Total Annihilation:

  • There is no such thing as too much.
  • People set themselves up for failure by limiting the scope of their imagination.

Planetary Annihilation:

  • Attention is the most critical resource.
  • Play the person, not the game.

Friday, 7 March 2014

Computer and network security in the robot age

In response to: http://www.theatlantic.com/technology/archive/2014/03/theres-no-real-difference-between-online-espionage-and-online-attack/284233/

It doesn't matter whether you are "just" eavesdropping or trying to cause damage directly. If you are trying to take control of somebody else's property, trying to make it do things that the owner of that property did not intend and does not want, then surely that is a form of theft?

Possession isn't just about holding something in your hand: It is also about power and control.

The implications of this might be a little hard to see, because right now computers don't have very much direct interaction with the "real" world... but it won't be like that forever.

Take me, for example. I am working on systems to control autonomous vehicles: Networked computers in charge of a car or a truck.

This is just the beginning. In a decade or more, it won't just be cars and trucks on the road that drive themselves: airborne drones and domestic robots of every size and shape will be everywhere you look.

What would the world look like if we allowed a party or parties unknown to seize control of all these computers? What kind of chaos and carnage could a malicious actor cause?

We have an opportunity right now. A tremendous gift. We need to put in place the infrastructure that will make this sort of wholesale subversion impossible, or at the very least very much harder than it is today, and we need to do it before the stakes are raised to a dangerous degree.

Saturday, 1 March 2014

Nature vs Nurture; Talent vs Practice

In response to: http://www.bbc.co.uk/news/magazine-26384712

Practice is the immediate (proximal) cause of high performance at a particular task. The notion that anybody has evolved an "innate" talent at something as specific to the modern world as playing a violin is obviously laughable.

However: the consistently high levels of motivation that an individual needs in order to practice a skill for the requisite length of time are very much a function of innate characteristics, particularly personality traits, notably those associated with disorders such as GAD, OCD and ASPD.

Of course, the extent to which a borderline personality disorder can be harnessed to support the acquisition of extremely high levels of skill and performance is very much a function of the environment. For example:

1. The level of stress that the individual is subjected to.
2. The built environment within which they live.
3. The social culture that they are part of.
4. The support that they get from family and friends.

In summary: To exhibit high levels of performance you do need some innate characteristics, (although not necessarily the innate "talents" that we typically associate with skill), but those characteristics need to be shaped and formed in the right environment: both built and social.

It should be possible to engineer a higher incidence of high performance in selected individuals, but I suspect that the interaction between personality and environment is sufficiently subtle that we would not be able to guarantee an outcome in anything other than statistical terms.

Thursday, 27 February 2014

Taste on the frontier of complexity

Issues of taste and aesthetics are not normal fodder for engineers ... but when you make something sufficiently complex, you are by necessity operating on the frontier, far away from the well-trod paths of tradition and precedent.

Out here, survival is not so much a matter of rules, but a matter of gut instinct: Of taste. Of elegance. Of aesthetics and style.

Here, where the consequences of your decisions can be both impenetrable and dire, vigorous debate and discussion are essential, ego and stubbornness are both fatal disorders, and a shared sense of taste is an incredible boon.

This is where leadership at its most refined can shine: the creation of a group aesthetic; a culture and identity oriented around matters of taste.

Wednesday, 26 February 2014

The role of Machine Vision as a Rosetta Stone for Artificial Intelligence.

My life has not followed a straight path. There have been many detours and deviations. Nevertheless, if I turn my head to one side, turn around twice, squint through one eye, and am very selective about what I pick from my memories, I can piece together a narrative that sort-of makes sense.

During my teenage years, I had (typically for a teenager) overwrought and unrealistic ambitions. Driven by a somewhat mystical view of science and mathematics, I harboured ambitions to be a physicist. I wanted to explain and understand a frequently baffling world, as well as (perhaps) to find a way to escape to a place of abstract elegance: A way to remove myself from the social awkwardness and profound embarrassment that plagued my life at that time. I was, after all, a socially inept and acne-ridden nerd with sometimes erratic emotional self-control.

It was in this context that the ethical supremacy of abstract (and disciplined) reasoning over unreliable and sometimes destructive emotional intuition was founded: A concept that forms one of the prime narrative threads that bind this story together.

To me, the abstract world was the one thing that held any hope of making consistent sense; and provided (now as then) the ultimate avenue for a near-perpetual state of denial. Not that I have been terribly successful in my quest (by the overwrought standards of my teenage ambitions at least), but the role of science & technology "groupie" seems to have served me and my career reasonably well so far, and has cemented a view of life as a tragedy in which abstract intellectualism serves as a platonic ideal towards which we forever strive, but are cursed never to achieve.

In any case, I quickly came to the conclusion that my intellectual faculties were completely insufficient to grasp the mathematics that my aspirations required. In retrospect this was less a victory of insight than the sort of failure that teaches us that excessive perfectionism, when coupled with a lack of discipline and determination, will inevitably lead to self-imposed failure.

So, I drifted for a few years before discovering Artificial Intelligence, reasoning that if I was not bright enough to be a physicist in my own right, I should at least be able to get some assistance in understanding the world from a computer: an understanding that might even extend to the intricacies of my own unreliable brain. Indeed, my own (possibly narcissistic) quest to improve my understanding both of my own nature and that of the wider world is another key thread that runs through this narrative.

A good part of my motivation at the time came from my popular science reading list. Books on Chaos theory and non-linear dynamics had a great impact on me in those years, and from these, and the notions of emergence that they introduced, I felt that we were only beginning to scratch the surface of the potential that general purpose computing machines offered us.

My (eventual) undergraduate education in AI was (initially) a bit of a disappointment. Focusing on "good old fashioned" AI and computational linguistics, the intellectual approach that the majority of the modules took was oriented around theorem proving and rule-based systems: A heady mix of Noam Chomsky and Prolog programming. This classical and logical approach to understanding the world was really an extension of the philosophy of logic to the computer age; a singularly unimaginative act of intellectual inertia that left little room for the messiness, complexity and chaos that characterised my understanding of the world, whilst similarly confirming my view that the tantalising potential of general-purpose computation was inevitably destined to remain untapped. More than this, the presumption that the world could be described and understood in terms of absolutist rules struck me as an essentially arrogant act. However, I was still strongly attracted to the notion of logic as the study of how we "ought" to think, or the study of thought in the abstract; divorced from the messy imperfections of the real world. Bridging this gap, it seemed to me, was an activity of paramount importance, but an exercise that could only realistically begin at one end: the end grounded in messy realities rather than head-in-the-clouds abstraction.

As a result of this, I gravitated strongly towards the machine learning, neural networks and machine vision modules that became available towards the end of my undergraduate education. These captured my attention and my imagination in a way that the pseudo-intellectualism of computational linguistics and formal logic could not.

My interest in neural networks was tempered somewhat by my continuing interest in "hard" science & engineering, and the lingering suspicion that much "soft" (and biologically inspired) computing was a bit of an intellectual cop-out. A view that has been confirmed a couple of times in my career. (Never trust individuals that propose either neural networks or genetic algorithms without first thoroughly exploring the alternatives!).

On the other hand, machine learning and statistical pattern recognition seemed particularly attractive to my 20-something-year-old mind, combining a level of mathematical rigour which appealed to my ego and my sense of aesthetics with a readily available geometric interpretation which appealed to my predilection for visual and spatial reasoning. The fact that it readily acknowledged the inherent complexity and practical uncertainty involved in any realistic "understanding" of the world also struck a chord with me: it appeared to me as a more intellectually honest and humble practitioners' approach than the "high church" of logic and linguistics, and made me re-appraise the A-level statistics that I had shunned a few years earlier. (Honestly, the way that we teach statistics is just horrible, and most introductory statistics textbooks do irreparable damage to an essential and brilliant subject).

The humility and honesty were an important component. Most practitioners that I met in those days talked about pattern recognition being a "dark art", with an emphasis on exploratory data analysis and an intuitive understanding of the dataset. Notably absent was the arrogance and condescension that seems to characterise the subject now that "Big Data" and "Data Science" have become oh-so-trendy, attracting the mayflies and the charlatans by the boatload.

In any case, then as now, statistical pattern recognition is a means to an end: an engineering solution to bridge the gap between the messy realities of an imperfect world (low-level learning and data analysis) and the platonic world of abstract thought and logic. This view was reinforced by the approach taken by the MIT COG team: reasoning that in order to learn how to behave in the world, the robot needs a body with sensors and effectors, so that it can learn to make sense of the world in a directed way.

I didn't have a robot, but I could get data. Well, sort of. At that point in time, data-sets were actually quite hard to get hold of. The biggest dataset that I could easily lay my hands on (as an impoverished undergraduate) was the collection of text files from Project Gutenberg; and since my mind (incorrectly) equated natural language with grammars and parsing, rather than statistics and machine learning, my attention turned elsewhere.

That elsewhere was image data. In my mind (influenced by the MIT COG approach), we needed to escape from the self-referential bubble of natural language by pinning abstract concepts to real-world physical observations. Text alone was not enough. Machine Vision would be the Rosetta Stone that would enable us to unlock its potential. By teaching a machine to look at the world of objects, it could teach itself to understand the world of men.

One of my fellow students actually had (mirabile dictu!) a digital camera, which stored its images on a Zip disk (about the size of a 3.5 inch floppy disk), and took pictures that (if I recall correctly) were about 800x600 in resolution. I borrowed this camera and made my first (abortive) attempts to study natural image statistics; an attempt that continued as I entered my final year as an undergraduate and took on my final year project: tracing bundles of nerves through light microscopy images of serial ultra-microtome sections of Drosophila ganglia. As ever, the scope of the project rather outstripped my time and my abilities, but some important lessons were nonetheless learned.

... To be continued.

Software sculpture

Developing software is a craft that aspires to be an art.

It is both additive and subtractive. As we add words and letters to our formal documents, we build up declarations and relations; descriptions of logic and process. As this happens, we carve away at the world of possibilities: we make some potentialities harder to reach. The subtractive process is *implied* by the additive process, rather than directly specified by it.

If this subtractive process proceeds too slowly, we end up operating in a space with too many degrees of freedom: difficult to describe; difficult to reason about; and with too many ways that the system can go wrong.

If the subtractive process proceeds too quickly, we end up eliminating potentialities which we need, eventually, to realise. This results in prohibitively expensive amounts of rework.

The balance is a delicate one, and it involves intangible concepts that are not directly stated in the formal documents that we write; only indirectly implied.

Thursday, 20 February 2014

The cost of complexity in software engineering is like the sound barrier. How can we break it?

In response to: http://www.pistoncloud.com/2014/02/do-successful-programmers-need-to-be-introverted/

Q: Do successful programmers need to be introverted?
A: It depends.

One unpleasant consequence of network effects is that the cost of communication has a super-linear relationship with system complexity. Fred Brooks indicates that the communication overhead for large (or growing) teams can become significant enough to stop work in its tracks. As the team grows beyond a certain point, the cost quickly shoots up to an infeasible level. By analogy with the sound barrier, I call this the communications barrier, because both present a seemingly insurmountable wall blocking our ability to further improve our performance.
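To make the super-linearity concrete (a crude pairwise-channel model, with arbitrary team sizes chosen purely for illustration):

    # Pairwise communication channels grow quadratically with team size,
    # n * (n - 1) / 2, while the team's raw capacity grows only linearly.
    # The team sizes below are arbitrary illustrations, not measurements.

    def channels(n):
        """Number of pairwise communication channels in a team of n people."""
        return n * (n - 1) // 2

    for n in (2, 5, 10, 50, 100):
        print(f"team of {n:3d}: {channels(n):5d} channels")

    # A team of 10 has 45 channels to maintain; a team of 100 has 4950.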

This analysis argues for small team sizes, perhaps as small as n=1. Clearly introversion is an asset under these circumstances.

However, irrespective of their innate efficiency, there are obvious limits to what a small team can produce. Building a system beyond a certain level of complexity requires a large team, and large teams need to break the communications barrier to succeed. Excellent, highly disciplined and highly effective communications skills are obviously very important under these circumstances, which calls for a certain type of (disciplined) extroversion; perhaps in the form of visionary leadership?

My intuition tells me that breaking the communications barrier is a matter of organisation and detail-oriented thinking. Unfortunately, I have yet to observe it being done both effectively and systematically by any of the organisations that I have worked for.

Has anybody else seen or worked with an organisation that has successfully broken the communications barrier? If so, how did they do it?

Wednesday, 19 February 2014

Computer security in the machine age.

The complexity of modern technology makes it terribly vulnerable to abuse and exploitation. So many devices have been attacked and compromised that when faced with any given piece of technology, the safest assumption to make is that it has been or will be subverted.

For today's devices, the consequences don't intrude so much into the physical world. Some money may get stolen, a website may be defaced, some industrial (or military) secrets stolen, but (Stuxnet aside) the potential for physical death, damage and destruction is limited.

For tomorrow's devices, the story is quite terrifying. A decade from now, the world will be filled with thousands upon thousands of autonomous cars and lorries, domestic robots and UAVs.

Today, criminals use botnets to send spam and commit advertising fraud. Tomorrow, what will malicious hackers do when their botnet contains tens of thousands of robot cars and trucks?

What can we do to change the trajectory of this story? What can we do to alter its conclusion?

Sunday, 16 February 2014

Performance

The internet is just terrible for our sense of self-worth. We can reach out and connect to the very best and most talented individuals in the world; we can read what they write, and even converse with them if we choose. It is only natural that we should compare ourselves to them and find ourselves wanting.

It is easy to retreat from this situation with a sense of despair and self-pity, but there is another thought that occurs to me also, and that thought is this: Role models are a red herring. Does your role model have a role model of his own? Probably not. You don't get to be an outstanding performer by emulating somebody else, nor (just) by competing with your peers, nor (just) by following your passion, nor (just) by putting in your 10,000 hours. True performance comes from owning an area of expertise; from living and breathing it. From having your identity and sense of self so totally wrapped up in it that you can do nothing else.

Clearly, this sucks for everybody else around you, so it is a path that not many people should follow .... which makes me feel a bit better about my own inadequacies.

So there.

Friday, 14 February 2014

Side channel attacks on silos

Silos of knowledge and expertise build up all too easily, not only as a result of the human tendency towards homophily, but also because of more fundamental bandwidth limitations.

As with most human failings, we look to simple technological fixes to resolve them.

One frequently overlooked technology is the use of ambient "side" channels to encourage or enforce communication:

1. Human environment. (Who do you work with?)
2. Built environment. (Who do you sit next to?)
3. Informational environment. (Where do your documents live?)

Every act of sensory perception during the work day is an opportunity for meaningful communication.

Thursday, 13 February 2014

Code as haiku

Response to: http://www.ex-parrot.com/~pete/but-it-doesnt-mean-anything.html


"Code" is a horrible word.

I prefer "source documents", or, if pressed, "Formal descriptions of the program".

Using the word "code" implies that the document is "encoded" somehow, which is plainly undesirable and wrong.

With some notable (*) exceptions, the primary consumer of a "source document" is a human being, not a machine.

The machine's purpose is to ensure the formal validity and correctness of the document - but the majority of the informational content of the document (variable names, structure, comments) is exclusively directed to human developers.

We will never program in a "wild" natural language, but many programming languages (**) make a deliberate effort to support expressions which simulate or approximate natural language usage, albeit restricted to a particular idiomatic form.

There will always be a tension between keeping a formal language simple enough to reason about and permitting free, naturalistic expression - but this is the same tension that makes poetry and haiku so appealing as an art form.

So many source documents appear to be "in code", not because this is a necessary part of programming, but because it is very very difficult to write things which combine sufficient simplicity for easy understanding, and the correct representation of a difficult and complex problem. In most of these cases, clear understanding is baffled more by the complexity of the real world than by the nature of the programming language itself.

The rigidly deterministic nature of the computer forces the programmer to deal with a myriad of inconsistencies and complications that the non-programmer is able to elide or gloss over with linguistic and social gymnastics. The computer forces us to confront these complications, and to account for them. Legal drafting faces similar (albeit less extreme) challenges.

In the same way that Mathematics isn't really about numbers, but about the skill and craftsmanship of disciplined thought, programming isn't really about computers, but about what happens when you can no longer ignore the details within which the devil resides.

(*) Assembler & anything involving regular expressions.
(**) Python

Thursday, 9 January 2014

The UX of large-scale online education.

Written in response to: http://www.fastcompany.com/3021473/udacity-sebastian-thrun-uphill-climb

The number of students that complete the course may not be the right metric to look at. However, there are a number of steps that you could take that I think might improve completion rates.

Human beings are a pretty undisciplined bunch, as a rule. We dislike rigour and crave immediate gratification. Our Puritan work-ethic may predispose us to look down upon such human foibles, but there is no shame in exploiting them in the pursuit of the expansion of learning and the spread of knowledge.

Most of the following suggestions are oriented around giving students more fine-grained control over the timing and sequencing of their studies, as well as increasing the frequency and substance of the feedback. To complement this, some innovation may be required to come up with mechanisms that encourage and support the development of discipline without penalising those who simply cannot fit regular study around their other life commitments.

1. Recognise that study is a secondary or tertiary activity for many students: Study may be able to trump entertainment or leisure, but work and family will always come first.

2. Break the course materials up into tiny workshop-sized modules that can be completed in less than two weeks of part-time study. One weekend's worth should be about right, allowing "sprints" of study to be interspersed and balanced with a healthy and proper commitment to family life.

3. Each module does not have to stand alone. It can build on prerequisites taught in other modules, but those prerequisites should be documented and suggested rather than enforced programmatically.

4. Assessments within and at the end of each of these micro-modules should be for the benefit of the student, and should not count towards any sort of award or certification.

5. The scheduling of study over the calendar year should be optional. One or more group schedules may be suggested, so that collections of students can progress through the material together and interact online and in meet-ups, but others should be allowed to take a more economical pace, each according to their budget of time and attention.

6. Final exams can still be scheduled once or twice per year, coincident with the end of one or more suggested schedules. Students pacing their own study may need to wait a while before exam-time comes around, but the flexibility in study more than compensates for any disadvantage that they may have in the exam.

These suggestions should help lower barriers for students with otherwise packed calendars. In addition, it may be worthwhile experimenting with various techniques to grab students' attention and re-focus it back on their learning objectives. Ideas from gamification point to frequent feedback and frequent small rewards to encourage attention and deep concentration. Also from the gaming world, sophisticated algorithms exist that are designed to match players of similar ability in online matches; the same algorithms can be used to match students of similar ability for competitive assessments and quizzes (a sketch follows below).

Beyond gamification techniques, it should be possible to explore different schedules for "pushing" reminders and messages to students, or other prompts for further study. For example, you could send out emails with a few quiz questions that require just one more video to be watched. Finally, you can get people to pledge or commit to a short-term goal, for example, to reach a certain point in the module by a certain point in time (e.g. the end of the weekend).
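As a sketch of the matchmaking idea: Elo-style ratings are one of the simpler schemes used in online games; the K-factor, starting ratings and student names below are invented for illustration, not recommendations.

    # Minimal Elo-style sketch for pairing students of similar ability.
    K = 32  # conventional K-factor: how quickly ratings respond to results

    def expected_score(rating_a, rating_b):
        """Probability that A beats B under the Elo model."""
        return 1.0 / (1.0 + 10.0 ** ((rating_b - rating_a) / 400.0))

    def update(rating_a, rating_b, score_a):
        """Update both ratings after a head-to-head quiz (score_a is 0, 0.5 or 1)."""
        e_a = expected_score(rating_a, rating_b)
        return (rating_a + K * (score_a - e_a),
                rating_b + K * ((1.0 - score_a) - (1.0 - e_a)))

    def pair_students(ratings):
        """Greedily pair students whose current ratings are closest."""
        ordered = sorted(ratings, key=ratings.get)
        return list(zip(ordered[::2], ordered[1::2]))

    ratings = {"ann": 1020, "bob": 980, "cem": 1210, "dee": 1195}
    print(pair_students(ratings))        # [('bob', 'ann'), ('dee', 'cem')]

    ratings["ann"], ratings["bob"] = update(ratings["ann"], ratings["bob"], 1.0)

This is only the pairing and rating core; a real system would also need to cope with odd numbers of students, cold-start ratings and drop-outs.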

Wednesday, 18 December 2013

Low overhead trace-logging

Scatter trace-points throughout the code. Each trace-point is identified by a single unique integer (macro magic?). The last n (64? 256? 1024?) trace-points are stored in a global buffer, which gets flushed to disk when an anomalous situation is encountered (e.g. an exception is thrown).
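A minimal sketch of the idea (in Python rather than the C or C++ that the "macro magic" hints at; the trace-point ids, buffer size and file name are arbitrary):

    import collections
    import json

    # Ring buffer holding the last n trace-point ids: cheap to append to,
    # and only written out when something anomalous happens.
    TRACE_BUFFER = collections.deque(maxlen=256)

    def trace(point_id):
        """Record that the trace-point with this unique integer id was hit."""
        TRACE_BUFFER.append(point_id)

    def flush_trace(path="trace.json"):
        """Dump the recent trace history to disk (e.g. from an exception handler)."""
        with open(path, "w") as f:
            json.dump(list(TRACE_BUFFER), f)

    def risky_operation():
        trace(101)  # entering risky_operation
        trace(102)  # about to do the dangerous bit
        raise RuntimeError("anomalous situation encountered")

    try:
        risky_operation()
    except RuntimeError:
        flush_trace()  # the last n trace-points land on disk for post-mortem analysis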

Friday, 13 December 2013

Strategic vs. tactical programming

The great thing about programming is that you can do it at any level of abstraction -- from the minutest down-in-the-guts detail, to broad strategic strokes of business logic.

There is absolutely no reason why a fully automated business should not have strategic programming roles as well as detailed "tactical" programming roles.

Monday, 25 November 2013

Reasoning about state evolution using classes

In response to:

http://hadihariri.com/2013/11/24/refactoring-to-functionalwhy-class

Thinking about how application state evolves over time is difficult, and is responsible for all sorts of pernicious bugs. If you can avoid statefulness, you should ... but sometimes you just can't. In these cases, Classes give us a way to limit and control how state can change, making it easier to reason about.

In other words, placing state in a private variable, and limiting the possible state transitions with a restricted set of public methods (public setters don't count) can dramatically limit the combinatorial explosion that makes reasoning about state evolution so difficult. To the extent that this explosion is limited, classes help us to think and reason about how the state of our application evolves over time, reducing the occurrence and severity of bugs in our system.
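A small sketch of what I mean (the order-processing domain is invented purely for illustration): the state lives in a private attribute and can only move along a handful of legal transitions, so the number of reachable states stays small.

    class Order:
        """Holds its state privately; only a restricted set of transitions exists.

        The only legal paths are open -> submitted -> shipped, or open -> cancelled.
        There is no public setter, so reasoning about how the state can evolve
        means reasoning about three small methods rather than arbitrary mutation.
        """

        def __init__(self):
            self._state = "open"

        @property
        def state(self):
            return self._state

        def submit(self):
            if self._state != "open":
                raise ValueError(f"cannot submit from state {self._state!r}")
            self._state = "submitted"

        def ship(self):
            if self._state != "submitted":
                raise ValueError(f"cannot ship from state {self._state!r}")
            self._state = "shipped"

        def cancel(self):
            if self._state != "open":
                raise ValueError(f"cannot cancel from state {self._state!r}")
            self._state = "cancelled"

    order = Order()
    order.submit()
    order.ship()
    print(order.state)  # "shipped"; order.cancel() would now raise rather than corrupt state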

The discussion of using classes as an OO domain modelling language is a separate question, and (as the above article insinuates) is bedevilled with domain-model impedance mismatches, resulting in unsatisfactory levels of induced accidental complexity.

Friday, 22 November 2013

20 percent together

20% time is a beguiling idea. Productivity is a function of passion; and nothing fuels passion like ownership. The problem with 20% time stems from the blurring of organisational focus and the diffusion of collective action that results from it. 

So ... the problem remains: how to harness the passion of self-ownership, and steer it so that it is directed in line with the driving force and focus of the entire company ... so that the group retains its focus, and the individual retains his (or her) sense of ownership.

I can well imagine that the solution to this strategic challenge is going to be idiosyncratic to the business or industry in question ... but for many types of numerical software development, there is a real need for good tools and automation, so why not make a grand bargain: give employees some (limited) freedom to choose what they work on, but mandate that it should be in support of (well-advertised) organisational strategic objectives.

Friday, 15 November 2013

Continuous Collaboration: Using Git to meld continuous integration with collaborative editing.

Written in response to:

http://www.grahamlea.com/2013/11/git-mercurial-anti-agile-continuous-integration/

I strongly agree with the fundamental point that the author is making.

However, there are nuances. A lot of this depends on the type of development that you are doing.

For example, most of my day-to-day work is done in very small increments: minor bug-fixes, incremental classifier performance improvements, parameter changes, and so on. Only rarely will I work on a feature that is so significant in its impact that the work-in-progress causes the branch to spend several days in a broken / non-working state. I also work in fairly small teams, so the rate of pushes to Gerrit is quite low: only around a dozen pushes per day or so. This means that integration is pretty easy, and that our CI server gives us value and helps with our quality gating. We can follow a single-branch development path with little to no pain, and because both our software and the division of labour in the team are fairly well organised, conflicts very seldom occur when merging (even when using suboptimal tools to perform the merges).

This state of affairs probably does not hold for all developers, but it holds for me, and for most of the people that I work with. As a result, we can happily work without feature branches (most of the time), and lean on the CI process to keep ourselves in sync & to measure the performance of our classifiers & other algorithms.

Now, don't get me wrong, I think that Git is great. I am the nominated Git expert in my team, and spend a lot of time helping other team members navigate the nuances of using Git with Gerrit, but for most people it is yet another tool to learn in an already over-complex development environment. Git gives us the flexibility to do what we need to in the environment that we have; but it is anything but effortless and transparent, which is what it really needs to be.

Software development is about developing software. Making systems that work. Not wrangling branches in Git.

My ideal tool would be the bastard son of Git and a real-time collaborative editor. My unit tests should be able to report when my local working copy is in a good state. Likewise, my unit tests should be able to report whether a merge or rebase has succeeded or failed. Why can I not then fully automate the process of integrating my work with that of my colleagues? Indeed, my work should be integrated and shared whenever the following two conditions are met: 1) my unit tests pass on my local working copy, and 2) my unit tests pass on the fully integrated copy. These are the same criteria that I would use when doing the process manually ... so why do it manually? Why not automate it?

Triggered by every save, the resulting process would create the appearance of an almost-real-time collaborative working environment, opening up the possibility for new forms of close collaboration and team-working that are simply not possible with current tools. A source file would be a shared document that updates almost in real time. (If it is only comments that are being edited, then there is no reason why the updating could not actually be in real time.) This means that you could discuss a change with a colleague, IRC-style, in the comments of a source document, and make the change in the source file *at the same time*, keeping a record not only of the logic change, but also of the reasoning that led to it. (OK, this might cause too much noise, but with comment-folding, that might not matter too much.)
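A rough sketch of the loop I have in mind, written as a stand-alone script rather than anything built into Git or Gerrit (the use of pytest, the "master" branch name and the commit message are all assumptions about a particular project's setup):

    import subprocess

    def run(*cmd):
        """Run a command, returning True if it exited successfully."""
        return subprocess.run(cmd).returncode == 0

    def tests_pass():
        # Assumes the project's unit tests can be driven by pytest;
        # substitute whatever test runner the project actually uses.
        return run("pytest", "-q")

    def try_to_integrate():
        """Triggered on every save: share work only when both conditions hold."""
        # 1) My unit tests pass on my local working copy.
        if not tests_pass():
            return False
        run("git", "add", "--all")
        run("git", "commit", "-m", "wip: automatic integration checkpoint")
        # Bring in colleagues' work; back out cleanly if the rebase conflicts.
        if not run("git", "pull", "--rebase", "origin", "master"):
            run("git", "rebase", "--abort")
            return False
        # 2) My unit tests pass on the fully integrated copy.
        if not tests_pass():
            return False
        return run("git", "push", "origin", "master")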

Having said all of that, branches are still useful, as are commit messages, so we would still want something like Git to keep a record of significant changes, and to isolate incompatible works-in-progress in separate branches; but there is no reason why we cannot separate out the "integration" use case and the "collaboration" use case from the "version control" and "record keeping" use cases.

Wednesday, 13 November 2013

Specification and Risk

When using a general purpose library for a specific application, we generally only use a tiny subset of the functionality that the library provides. As a result, it is often reasonable to wrap that library in a simplified, special-purpose API, tailored to the specific needs of the application under development. This simplifies the interface, and in reducing the number of ways that it can be used, we also reduce the number of ways that it can go wrong, reducing the cognitive burden on the developer, simplifying the system, and reducing both cost and risk.

In this way, a restriction in what we expect the system to do results in a beneficial simplification; reduced costs and reduced risk.
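For example (a deliberately tiny, invented wrapper around the real requests library, assuming the application only ever needs to fetch JSON records from one internal service; the URL and timeout are made up):

    import requests

    # The application needs exactly one thing from the HTTP library: fetch a JSON
    # record from our own service, with a sane timeout. Sessions, streaming, auth
    # schemes and redirects are deliberately not exposed, so they cannot be misused.
    _BASE_URL = "http://internal-service.example.com"   # invented for illustration
    _TIMEOUT_SECONDS = 5.0

    def fetch_record(record_id):
        """Return the JSON record with the given id, raising on any failure."""
        response = requests.get(
            f"{_BASE_URL}/records/{record_id}",
            timeout=_TIMEOUT_SECONDS,
        )
        response.raise_for_status()
        return response.json()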

It is possible to go too far with this, though, and there are two deleterious effects that immediately spring to mind:

Firstly, there is the obvious risk of a bug in the specification - that proposed restrictions may be inappropriate or incompatible with the primary task of the system, or with the political needs of various stakeholders.

Secondly, and more insidiously, there is the risk that excessive restrictions become positive specification items; moving from "the system does not have to handle X" to "the system must check for X and explicitly error in this way". Whilst this seems ridiculous, it is a surprisingly easy psychological trap to fall into, and, of course, it increases system complexity and cost rather than reducing it.

Consistently being able to strike the right balance is primarily a function of development organisation culture; and another reason why businesses need to pay attention to this critical (but frequently overlooked) aspect of their internal organisation.

Tuesday, 12 November 2013

Qualitative changes in privacy induced by quantitative changes in social-communications network topology

How do we form opinions and make judgements of other people?

Can we understand this process in terms of the flow of information through a network/graph?

Can we use this model to understand the impact of changes in the network? (Both structural/topological changes in connectivity and quantitative changes to flow along existing graph edges).

Can we use this approach to predict what will happen as our privacy is increasingly eroded by technology?

I.e.: Do we see some relationship between the structure of a social/communication graph and the notion of "privacy"? (Temporal aspects might also be important.)

If the graph changes, does the quality of "privacy" change in different ways, and how does that impact the way that we make judgements about other people, and the way that other people make judgements about us?

What does that mean for the nature of society going forwards, particularly as developments in personal digital technologies mean that increasing amounts of highly intimate, personal information are captured, stored and disseminated in an increasingly uncontrolled (albeit sparsely distributed) manner?

The sparsity of the distribution might be important - in terms of the creation of novel/disruptive power/influence networks.

Wednesday, 6 November 2013

The software conspiracy: Maintaining a software-developer biased imbalance in human-machine labour arbitrage.

Have you ever wondered why software engineering tools are so terrible?

Perhaps there is an implicit/unspoken conspiracy across our profession?

After all, we software developers are working away at our jobs to automate various economic activities; the inevitable result of which is to force workers in other industries out of their jobs and their livelihoods.

A claim can be made that technological developments create new, less rote and routine roles, with more intellectual challenge and greater responsibility -- but there is no real evidence that this outcome will necessarily always hold; indeed, there is some empirical evidence to suggest that this pattern is even now beginning to fail.

We are not stupid. Indeed, we are well aware of the effects of our actions on the individual welfare and security of those workers, so why should we bring the same calamity upon ourselves? Perhaps we should keep our software tools in their present primitive state, to ensure job security for ourselves just as we undermine it for others?