Over the last few months, I’ve been involved in a lot of discussions about how to make software systems more efficient.
When we look at making software go faster, there are three basic approaches:
- Pick a better algorithm (see the sketch just after this list).
- Rearchitect the software to take advantage of the hardware.
- Write more efficient software.
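To make the first approach concrete, here is a minimal Go sketch; the function names are mine and purely illustrative. The same membership question is answered by a linear scan and, on sorted input, by the standard library's binary search. The other two approaches get their own sketches later in the post.

```go
package main

import (
	"fmt"
	"sort"
)

// containsLinear scans every element: O(n) comparisons.
func containsLinear(xs []int, target int) bool {
	for _, x := range xs {
		if x == target {
			return true
		}
	}
	return false
}

// containsBinary exploits sorted input via the standard library's
// binary search: O(log n) comparisons.
func containsBinary(xs []int, target int) bool {
	i := sort.SearchInts(xs, target)
	return i < len(xs) && xs[i] == target
}

func main() {
	xs := []int{1, 3, 5, 7, 9, 11}
	fmt.Println(containsLinear(xs, 7)) // true
	fmt.Println(containsBinary(xs, 7)) // true
}
```

Same answer, same correctness; the only thing that changed is the algorithm.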
From about 1974, when Intel introduced the original 8080, up until 2004, conventional wisdom was that writing more efficient software was a losing proposition: by the time the more efficient software was written, Intel's next-generation processor would have been released, improving your code's performance for free. The time you spent making your software go faster represented a lost opportunity to add features.
As a result, a generation of software engineers was taught that premature optimization is the root of all evil.
Textbooks and teachers routinely admonished their students to write correct code, not efficient and correct code.
Starting in 2005, with the shift to multi-core processors, making software go fast became a matter of taking advantage of multiple cores.
Software developers had to adapt their systems to be multi-threaded.
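In today's terms, a minimal Go sketch of that adaptation might look like the following (the data and the work are made up for illustration): a serial loop split into per-core goroutines whose partial results are combined at the end.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

func main() {
	data := make([]int, 1_000_000)
	for i := range data {
		data[i] = 1
	}

	workers := runtime.NumCPU() // one goroutine per core
	chunk := (len(data) + workers - 1) / workers
	partial := make([]int, workers) // one slot per worker: no shared writes

	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		lo, hi := w*chunk, (w+1)*chunk
		if hi > len(data) {
			hi = len(data)
		}
		wg.Add(1)
		go func(w, lo, hi int) {
			defer wg.Done()
			for _, v := range data[lo:hi] {
				partial[w] += v
			}
		}(w, lo, hi)
	}
	wg.Wait()

	total := 0
	for _, s := range partial {
		total += s
	}
	fmt.Println(total) // 1000000
}
```

Giving each worker its own output slot avoids locks entirely; the only synchronization is the final WaitGroup barrier.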
At the same time, software developers noticed that the number of cores per system was limited; to get ever-increasing scale, they had to be able to leverage multiple systems.
And thus the era of scale-out distributed architectures began.
In this era, software engineers had to create new algorithms and new software architectures, but writing efficient code was still not viewed as an important part of delivering ever-faster software.
Notice that from 1974 to 2015, the name of the game was to use more and more hardware to make your software go faster, without any consideration for how efficient the software was. From 1974 to 2004, you just waited for the next processor. From 2004 to 2015, you re-architected your software to take advantage of more cores, and later to scale out to more systems.
And by 2012, writing large-scale distributed systems had become easy: a combination of established frameworks and patterns let you build a system that scaled across hundreds of machines.
Software engineering had discovered the magic elixir of ever-increasing performance. We could harness an ever-larger number of systems, combined with multi-threaded code, to get seemingly infinite performance.
If the 1974-2004 era made writing efficient code of dubious value, the scale-out age made it even more questionable, because you could just add more systems to improve performance.
High-level languages, coupled with clever system architectures, let anyone deliver an application at scale with minimal effort.
Was this the end of history?
No.
It turns out that large scale-out systems are expensive. Much as processors hit a power wall, massive data centers hit a cost wall: they consume huge amounts of energy, and that energy is expensive. Companies started to wonder: how do I reduce the power bill?
And the answer was to make the code more efficient. We saw things like HipHop emerge, and Rust. HipHop (Facebook's PHP-to-C++ compiler) optimized existing code; Rust tries to provide a better language for writing efficient code in the first place. In parallel, we saw languages and runtimes like Node.js and Go become popular in part because they allow for more efficient services.
Software efficiency has become valuable again. The third pillar of software performance, after a 40-year wait, is the belle of the ball.
And what is interesting is that the software systems of the last 40 years are ridiculously inefficient. Software engineers assumed hardware was free, and large chunks of software were written under that assumption.
The challenge facing our industry is that, to improve the efficiency of software, we will either have to rewrite it or figure out how to improve its performance automatically, without relying on new hardware. No white knight is coming to save us.
And we are now looking at a world where performance and scale are not just going to be a function of the algorithms and the architectures, but of the constants: the constant factors that big-O analysis hides. In this brave new world, writing efficient and correct code will be the name of the game.
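To make "the constants" concrete, here is a small, hedged Go sketch (the function names are mine): both versions are O(n), yet the second does measurably less work because it allocates its result once instead of repeatedly growing and copying the backing array.

```go
package main

import "fmt"

// squaresGrowing appends into a slice that starts empty, so the runtime
// must repeatedly allocate a larger backing array and copy the old one.
func squaresGrowing(n int) []int {
	var out []int
	for i := 0; i < n; i++ {
		out = append(out, i*i)
	}
	return out
}

// squaresPreallocated does the same O(n) work with one up-front
// allocation: same big-O, smaller constant.
func squaresPreallocated(n int) []int {
	out := make([]int, 0, n)
	for i := 0; i < n; i++ {
		out = append(out, i*i)
	}
	return out
}

func main() {
	fmt.Println(squaresGrowing(5))      // [0 1 4 9 16]
	fmt.Println(squaresPreallocated(5)) // [0 1 4 9 16]
}
```

A `go test -bench` comparison of the two makes the gap visible without changing a line of the algorithm.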
We will not only have to scale out and up; we will also have to do so efficiently.
Put differently, perhaps there is no longer such a thing as premature optimization?