My passion for machine vision, machine learning and statistical pattern recognition is longstanding: it began over ten years ago and continues today.
Most of my undergraduate AI degree was oriented towards logic, theorem proving, and computational linguistics, which was fascinating in its own right, but did not strike me as a particularly realistic or pragmatic way of dealing with the messiness and complexity of the real world. As a result, I latched on to the (at the time) less mainstream "soft" computing approaches with enthusiasm, devouring the content of the machine learning, machine vision and neural networks modules. I saw these approaches as a pragmatic alternative to the hard and inflexible grammar-based approaches to Natural Language Processing espoused by the main body of the department.
This view of machine learning as a pragmatic tool, at odds with ivory-tower academicism, has stuck with me ever since, even as the subject has become more mainstream (and more academic and mathematically sophisticated). As a result, I tend to focus on simple techniques that work, rather than techniques which demonstrate mathematical chops and academic sophistication. I am fortunate in this regard because, paradoxically, the solutions to difficult problems are often conceptually simpler and mathematically less sophisticated than the "optimum" solutions to simple problems. Perhaps a little of the Yorkshire/Lancashire culture of engineering pragmatism rubbed off on me during my time in Manchester.
Another thing that was dawning on me as I finished my undergraduate degree was the importance of scale. As I attempted to find datasets for my hobby projects (far harder back then than today), I began to suspect that scale, rather than any qualitative leap in understanding, was going to be a key factor in the development of genuinely interesting artificial intelligence techniques. From this came my interest in machine vision, which I saw as a key "gateway" technique for the collection of data -- to help the machine build an understanding of the world around it and to "bootstrap" itself to a more interesting level.
I was lucky with my first employer, Cambridge Research Systems, where I had the opportunity to work with some very talented people, both within the company and across our customer community. From that experience, and the abortive neuroscience PhD that I started, I learned a lot about the neuroscience of biological visual systems, particularly the older, lower-level pathways that go, not to the primary visual cortex, but to the evolutionarily older "reptilian" parts of the brainstem. In contrast with the "general purpose" and "reconfigurable" nature of the cortex, these older pathways consist of a large number of (less flexible) special-purpose circuits handling things like eye movements and attention-directing mechanisms. Crucially, these lower-level circuits enable our visual system to stabilise, normalise and "clean" the data that we present to our higher-level cortical mechanisms. This insight carries over well to commercial work, where solid groundwork (data quality, normalisation and sampling) can make or break a machine learning implementation. I was also fortunate enough to pick up some signal processing and FIR filter design fundamentals, as I was writing software to process biological time-series signals (EOG) to identify and isolate events like saccades and blinks.
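The event-isolation step can be sketched roughly as follows. This is a minimal, hypothetical illustration (not the original CRS code): it smooths a synthetic EOG trace with a simple FIR moving-average filter, then flags saccade-like events with a velocity threshold. The sample rate, tap count and 30 deg/s threshold are all assumed values chosen for the toy example.

```python
# Illustrative sketch only: FIR smoothing + velocity-threshold saccade
# detection on a synthetic EOG trace. All parameters are assumptions.
import numpy as np

FS = 500.0                       # assumed sample rate (Hz)
t = np.arange(0.0, 2.0, 1.0 / FS)

# Synthetic EOG: steady fixation, then a rapid 10-degree saccade at t = 1.0 s,
# with a little additive measurement noise.
signal = np.where(t < 1.0, 0.0, 10.0)
signal = signal + 0.2 * np.random.default_rng(0).standard_normal(t.size)

# Simple FIR low-pass filter: a 25-tap moving average (boxcar) kernel.
taps = np.ones(25) / 25.0
smoothed = np.convolve(signal, taps, mode="same")

# Eye velocity (deg/s) from the first derivative of the smoothed trace.
velocity = np.gradient(smoothed, 1.0 / FS)

# Flag samples whose speed exceeds an assumed 30 deg/s saccade threshold.
is_saccade = np.abs(velocity) > 30.0
onset = t[is_saccade].min() if is_saccade.any() else None
print(f"saccade detected near t = {onset:.3f} s")
```

In practice a plain boxcar would likely be replaced by a properly designed low-pass filter, and blink artefacts would need separate handling, but the shape of the pipeline -- filter, differentiate, threshold -- is the essential idea.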
At around this time, I was starting to become aware of the second important thread in my intellectual development: the incredible slowness of software development, and the difficulty and cost we would incur in implementing the large number of low-level stabilisation mechanisms that would be required.
I left Cambridge Research Systems specifically to broaden my real-world, commercial software development experience, working at a larger scale than was possible at CRS. Again, I was lucky to find a role with Sophos, where I learned a great deal from a large group of very talented C++ developers doing Test Driven Development in the highest-functioning Agile team I have yet encountered. Here, I started to think seriously about the role of communication and human factors in software development, as well as the role that tools play in guiding development culture, always with an eye to how we might go about developing those special purpose data processing functions.
Following a relocation closer to London (for family reasons), I left Sophos and started working for Thales Optronics. Again fortunate, I found myself working on (very) large scale machine vision applications. Here, during a joyous three-year period, I was able to put much of my previous intellectual development and thinking into practice, developing not only the in-flight signal processing, tracking and classification algorithms, but, more significantly, the petabyte-scale data handling systems needed to train, test and gain confidence in them. In addition to the technical work, I worked to encourage a culture suited to building complex systems. This was the most significant, successful and rewarding role I have had to date.
Unfortunately, budgetary constraints led Thales to close their office in Staines, and rather than transferring to the new office, I chose to "jump ship" and join Fidelity Asset Managers in the City of London, partly in an attempt to defeat some budgetary constraints of my own, and partly out of an awareness of the potential non-transferability of defence industry expertise, made more pressing by an impending overseas relocation.
At Fidelity, I used my knowledge of the MATLAB distributed computing toolbox to act as the High-Performance Computing expert in the "quant" team. I gained exposure to a very different development culture, and learned a lot about asset management and quantitative investing, gaining some insight into the accounting and management factors that drive development culture in other organisations. I particularly valued my exposure to the insights that Fidelity had, as an institutional investor, into what makes a successful organisation, as well as its attempts to apply those insights to itself.
Finally, in 2011, my family's long-expected overseas posting came. Yet again we were incredibly lucky, and got to spend a wonderful year and a half living in the middle of New York City. I was fortunate, and managed to get a job at an incredible Silicon Alley startup, EveryScreen Media, which was riding the wave of interest in mobile advertising that was just beginning to ramp up in 2011 and 2012. Again finding myself working with incredibly talented and passionate colleagues, I was given the opportunity to broaden my skills once more, picking up Python and Unix development skills, becoming immersed in (back-end) web development, and building out the data science infrastructure in an early-stage startup. From this year and a half, I particularly value what I learned about developing large scale, scalable, distributed real-time data processing systems and about the effective use of modern internet "web" technology.
Now, back in the UK, I am in search of the next step on my journey of learning and discovery. My focus remains on the pragmatics of developing complex statistical data processing systems: how to create and curate large datasets, and how to integrate them into the development process, so that continuous integration, continuous testing, visualisation and monitoring help the development team to understand and communicate both the system they are building and the data that feeds it; to respond to unexpected behaviours; to steer the product and the project to success; and, moreover, to help ensure that the organisation remains rightly confident in the team and in the system.