My undergraduate computer science courses taught me the importance of efficiency: time and space complexity, average- and worst-case analyses, caching, sorting, and so on. I was also taught about correctness: specifications, requirements, policies, and, more formally, proofs and theorems. Both are important, but I was not taught much about how to balance the two when they come into conflict, and they do conflict in some practical scenarios. One quote I recall from my software engineering instructor (paraphrasing): no one cares how fast and low-cost your program is if it does the wrong thing. What I’ve come to realize, though, is that in constructing software, for anything non-trivial, there are quite a few “right” answers: implementations that meet requirements but are nonetheless unacceptable solutions because they are too difficult to use, especially because they are too slow.
Of course, I’m not the first person to notice that a “need for speed” is baked into efforts to meet requirements. Indeed, latency targets typically appear on requirements lists, especially for performance-critical applications. We also have, famously, the notion of “eventual consistency” touted by Amazon, and the more general family of optimistic models for concurrency control and distributed commit protocols that trade consistency among processes to avoid latency (among other things). Still, beyond that, I’ve been thinking about how the efficiency of a process can move from a difference in degree to a difference in kind. This distinction between degree and kind, which I first heard phrased this way in a discussion on the relationship of complex symbol processing to conscious thought (maybe a later blog post), names a phenomenon where changing a property of some entity in a given direction can, within some bounded range of values, change only the relative intensity of that property’s effect (e.g., hot water gets hotter), while for values outside that range, the entity becomes, for all practical purposes, a different kind of thing altogether (e.g., liquid water boils into steam). I observe these kinds of phase transitions in software frequently, and I find them interesting because they have the potential to prompt changes to requirements (especially if they are overly prescriptive) and to implementations, changing what the “right” thing might be by showing what more might be possible.
A salient example of this kind of efficiency-determined phase transition is linting. More complex software projects typically employ a linter, and, depending on the project, developers will look at its output and even address the recommendations therein. One factor that affects whether developers actually address a linter’s recommendations is where in the development life-cycle it runs. If it runs only on a CI server, developers typically won’t see the issues until they have done a sizeable portion of the work: to end up there, a developer has generally gotten the work to a point where they expect it won’t break anyone else’s code, with unit tests, documentation, and the like already in place. Now, especially with the unit tests in place, this may seem a prime time to handle the fixes and refactors the linter suggests. The problem is that, across teams, software developers are more likely to feel that the work is done once it gets to this point: it passes the tests, it’s to spec, and schedule pressure seems to justify triaging the fifty-odd nitpicks and remote edge cases from the linter into a bucket of issues to be cleared later. If, on the other hand, the linter runs on the developer’s own machine as part of a local build, then they see the issues earlier, alongside their unit test passes and failures. In this tighter cycle, the developer is more inclined to see fixing the issues as part of the main stage of software construction rather than as a later refinement or “cleanup” stage. Further, if the linter runs in the editor or IDE on every keystroke or file save, its suggestions have the potential to prompt better designs earlier, in addition to reducing the number of salient recommendations that are ignored or deferred for the sake of the schedule.
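To make the “runs on every save” placement concrete, here is a minimal Python sketch. The function names and the single line-length rule are invented for illustration; a real linter runs many rules, and an editor plugin would use file-system events rather than polling, but the feedback loop is the same.

```python
import os
import time

def lint_source(source):
    """A trivial stand-in check: flag lines longer than 79 characters.

    A real linter would run many such rules; this one exists only to
    give the watch loop below something to report.
    """
    return [
        f"line {n}: longer than 79 characters"
        for n, line in enumerate(source.splitlines(), start=1)
        if len(line) > 79
    ]

def watch(path, poll_seconds=0.5):
    """The 'on every save' placement: re-lint whenever the file's
    modification time changes, printing fresh results immediately."""
    last_mtime = None
    while True:
        mtime = os.stat(path).st_mtime
        if mtime != last_mtime:
            with open(path) as f:
                for msg in lint_source(f.read()):
                    print(msg)
            last_mtime = mtime
        time.sleep(poll_seconds)
```

The cheaper the check, the tighter a loop like this can be: a rule that finishes in milliseconds can run on every save (or keystroke), while one that takes minutes cannot.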
The key discriminator in my linter example, when the analysis happens, typically depends on where it happens, and where it happens typically depends on performance requirements. Although a number of deployment models could serve all three of the life-cycle placements I describe above, the simplest deployment for all three is a single program running on the same machine as the code to be analyzed, with the report output on that same machine. For the sake of this simple model, the linter should run as efficiently as possible, for, especially on the developer’s machine, resources are finite, both in terms of the machine itself and of developer time and attention. Here, then, is the tension and the opportunity I introduced earlier: our linter may offer a choice of analyses for detecting out-of-bounds accesses, some of which demand greater resources. If choosing a cheaper analysis means our linter can run on every keystroke in the developer’s IDE rather than on the CI server, then we’ve got a radically different solution from a starting point of post-commit analysis on a dedicated server.
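As a toy illustration of that trade, here is a sketch of a lint driver that picks the deepest out-of-bounds analysis that fits a latency budget. Everything here is invented for the example: the syntactic heuristic, the 500 ms threshold, and the simulated cost of the deep pass are assumptions, not any real tool’s behavior.

```python
import re
import time

def quick_scan(source):
    """Cheap, purely syntactic pass: flag any indexing expression whose
    literal subscript has four or more digits as a possible out-of-bounds
    access. Microseconds per file, so feasible on every keystroke."""
    return [m.start() for m in re.finditer(r"\[\s*\d{4,}\s*\]", source)]

def deep_scan(source):
    """Stand-in for an expensive analysis (say, abstract interpretation
    tracking value ranges across statements). Its cost is simulated with
    a sleep; a pass like this fits a CI run, not a keystroke."""
    time.sleep(0.5)
    return quick_scan(source)  # placeholder findings, at a higher cost

def lint(source, budget_ms):
    """Run the deepest analysis that fits the latency budget."""
    return deep_scan(source) if budget_ms >= 500 else quick_scan(source)
```

With a 10 ms budget (the IDE placement) only the syntactic pass runs; with a multi-second budget (the CI placement) the same driver can afford the deeper pass. The point is not this particular heuristic but that the budget decides which analysis, and therefore which placement, is feasible.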
Again, I don’t think these ideas are new at all; the reasons I give are, at least in part, why we already have linters running in the IDE, and tools like LiveReload or LightTable. Nonetheless, the idea that merely improving efficiency can lead to better and simpler solutions is a shift for me. It cues me to see the potential for phase changes that can make products more delightful and easier to use. It also gives me a language for explaining to other engineers the value of pursuing greater efficiency in areas that were considered “good enough,” and for suggesting to customers how a given requirement (e.g., for an estimated time to complete) may be obviated by the efficiency of another process.