The Future of Software Engineering
There comes a time in every technical blog when the author must write about the future of the industry. Consider this the obligatory post. Where is this all going?
The first of the two waves of the future was offered by my former professor, Larry Bernstein:
|We are moving from compile-link-run to find-build-verify.|
Survey says: partially.
The soul of productivity in this industry, and probably any other, is abstraction. In normal-person terms: less work. I have seen time and time again the exponential gains in productivity that came from the first assembly languages, the first C compilers, and the introduction of garbage-collected languages.
I will say that find-build-verify is part of the future of this line of work when it comes to accidental complexity.
|The complexity of software is an essential property, not an accidental one. Hence, descriptions of a software entity that abstract away its complexity often abstract away its essence.|
That is from, of course, Fred Brooks. There is no silver bullet. I’m old school. I don’t think essential complexity can be tackled by abstracting it away into a blurry mess of ambiguous boxes and arrows.
Even the best and brightest individuals and teams in this industry have trouble expressing this essential complexity in code. In my past, I was not immune to an Anemic Domain Model, and on my current poster-child-in-a-good-way agile team at work, an Anemic Domain Model was a conscious design decision. Before we can manufacture essential complexity away, we would first have to agree on how to express it. Such an agreement is likely not possible.
This is why I find claims of people building products that allow non-programmers to create their own applications without writing a line of code to be dubious. It’s not that I doubt the claim. I doubt the scope and complexity of the system that can be created with such a tool.
The accidental complexities of building software are ready targets for find-build-verify, which I will now capitalize as a proper noun. Find-Build-Verify. That’s better. Anyway, user interface, persistence, O/RM, application containers, inversion of control containers, hot swapping, load balancing, and so on are already on this path.
A fine example is UI: by the time I was ready to move beyond the command line interface as a programmer, I never had to write a single line of code to draw rectangles on the screen and detect mouse clicks. All I had to write was what I wanted to happen and when.
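The shape of that abstraction can be sketched in a few lines. This is a toy, hypothetical widget class (not any real toolkit's API): the framework owns the rectangles, the hit testing, and the event loop, and all the application programmer declares is what should happen and when.

```python
class Button:
    """Hypothetical widget: drawing and mouse-click detection live in the
    framework; application code never touches them."""

    def __init__(self, label):
        self.label = label
        self._handlers = []

    def on_click(self, handler):
        # Declare WHAT should happen; the framework decides WHEN.
        self._handlers.append(handler)

    def click(self):
        # Stands in for the framework detecting a real mouse click.
        for handler in self._handlers:
            handler()

clicks = []
save = Button("Save")
save.on_click(lambda: clicks.append("saved"))
save.click()
print(clicks)  # → ['saved']
```

All of the accidental complexity (pixels, event queues, hit rectangles) sits behind `Button`; the one line the programmer writes is pure intent.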
Shouldn’t some huge company, like SAP, have already written the ultimate generalized inventory management solution? Why was I hired to write this system?
And it’s not just me. I’ve talked to at least three different co-workers who all had to either create or work on custom inventory systems as well. If it were my money, I would gladly spring for a cheaper but harder-to-learn off-the-shelf system rather than shell out the time and money for arduous custom development. There has to be something intrinsic about the problem itself that causes managers to make the opposite choice.
While I do think the amount of code we’ll write in the future will decrease, I don’t think there is any way to manufacture useful abstractions of essential complexity.
Perhaps a better name would be existential complexity: it exists before its essence, and no amount of recasting its essence will make it go away.
The other wave of our future is a little less philosophical and more rooted in hardware.
Driven by fundamental limits of physics, Moore’s law won’t continue forever. CPU manufacturers can only stretch the limits so far. To improve performance, they have already been pushing multi-core processors as the newest speed demons. Dual-cores are old news. Quad-core is now starting to hit the desktop market, and is coming to a laptop near you soon.
(multiple cores? I can’t even make a pot of spaghetti)
The second wave of the future:
|Due to hardware trends, concurrent programming will loom in importance in the future, and our concurrent programming models are poorly suited for the challenge.|
(if you call these lines “concurrent lines”, back away from the IDE)
Survey says: partially.
To paraphrase the famous words of Knuth: 97% of the time, you aren’t gonna need it.
This goes back to the first point about essential complexity. Most of the problems we will have will not be about how to get sixteen cores synchronized when loading the list of customers from the database. They will be about which customers you are loading and why.
And we will be focusing more and more on the latter question because of Find-Build-Verify, whose future depends on taking advantage of concurrency.
Accidental complexity’s future definitely depends on concurrent programming. Here, automation is king, efficiency is queen, and her crown is made of concurrency. For the march of Find-Build-Verify to continue, our software will have to make use of the new hardware that will soon become widespread, and to do that, someone will have to do something about our dreadful thread model.
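One plausible direction for fixing that thread model is the same trick as everywhere else: abstraction. As a sketch (using Python's `concurrent.futures`, purely illustrative of the pattern, not a prediction of any specific technology), the pool hides thread creation, synchronization, and joining, leaving only the domain question of which customers to load:

```python
from concurrent.futures import ThreadPoolExecutor

def load_customer(customer_id):
    # Stand-in for a per-customer fetch; a real system would hit a database.
    return {"id": customer_id, "name": f"Customer {customer_id}"}

# No explicit threads, locks, or joins: the executor owns the dreadful parts.
with ThreadPoolExecutor(max_workers=4) as pool:
    customers = list(pool.map(load_customer, range(4)))

print(len(customers))  # → 4
```

The accidental complexity of fan-out and synchronization disappears into the pool; the essential complexity, deciding what `load_customer` should actually do and for whom, remains squarely the programmer's problem.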
In summary: we are headed towards a world where the code for components will focus more and more on the multiple cores they are running on, while the rest of us are left to ponder the essential complexity that runs on top of and alongside those components.
Maybe this explains why I’m so in love with domain-driven design: it’s a long-term relationship, not a one-night stand with a silver bullet.