In 2003, I stopped being a systems software engineer and joined the Manageability team at NetApp. My then-boss, Nawaf Bitar, had orchestrated a re-org to absorb that team under him, and he saw this as an attractive opportunity for me. The team was small, understaffed, and very talented.
And I was an ambitious, obnoxious twenty-something determined to make my mark, who was encouraged to go blow up the existing software architecture and build something new.
There were a lot of lessons I learned during that time. And many of them people lessons. And I’ll get to them in time.
But there was one that is particularly relevant to my day today, so I’ll repeat it here.
At the time, the storage management product was called “Data Fabric Manager.” The basic architecture, and I am going from memory here, was a monitor service that polled the infrastructure, an embedded database, and an eventing and alarm service that sent out SNMP traps or emails based on what the monitor service uncovered.
The DFM CLI was a program that executed as a CGI-bin script inside of an Apache web server. The same program implemented a web UI, a CLI command set, and an XML input interface.
The problem with the technology was that in 2004, the kind of UI you could build in a web browser was quite limited.
At the time I believed that to build a slick performance monitoring tool, you wanted a thick-client and that a web-UI wasn’t going to cut it.
The team agreed to build a new thick client, backed by a new API service called Acropolis that the thick client would use.
Later on, we built Protection Manager on top of this new architecture, and Protection Manager required a lot of APIs.
And then there was a debate over whether the APIs would be public or not.
As we were building the UI, one of our most talented engineers observed that he could be 5x more efficient if there were a private API that the UI engineers could use. His point was that the UI needed a lot of APIs, and not all of these APIs were going to be useful to anyone but the UI.
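The tension he was describing can be sketched in miniature: a UI wants one convenient aggregate call, while a public surface favors granular ones. A minimal illustration of how the aggregate can itself stay public, composed from the granular calls (all names and data here are hypothetical, not from the actual DFM/Acropolis API):

```python
# Hypothetical sketch -- illustrative names, not the real DFM API.

# Stand-in for the monitored infrastructure.
VOLUMES = {"vol1": "online", "vol2": "offline"}

def list_volumes():
    """Granular public API: enumerate volumes."""
    return sorted(VOLUMES)

def get_volume_status(volume_id):
    """Granular public API: status of one volume."""
    return VOLUMES[volume_id]

# The UI wants one call instead of N round trips. The private-API
# shortcut would hide this aggregate inside the UI layer; keeping
# everything public means it is just another documented API,
# composed from the granular ones, available to any integrator.
def dashboard_summary():
    """Aggregate public API: everything the UI's dashboard needs."""
    return [{"volume": v, "status": get_volume_status(v)}
            for v in list_volumes()]

print(dashboard_summary())
```

The extra cost of the public route is mostly documentation and commitment: the aggregate has to be named, versioned, and supported like everything else.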
And it was an interesting point. Here I was advocating for a public API at a cost to development at a time when no one was integrating into management systems.
And after some thought, I made the call that all APIs should be public.
And the rationale was the following:
- We had no idea what someone would use the APIs for
- We had no idea when the APIs would be used
- It’s practically impossible to justify investing in APIs except when your products need them.
And it became a mantra of mine: when you create an API, make it public. Because making the API public later is very hard to prioritize and get resources for.
Yes, your API is probably not the world’s best API. But then again neither was MS-DOS or the x86 instruction set.
So what happened next…
Later on, when we needed to do integrations we didn’t anticipate, that decision paid some dividends.
The existence of the APIs made it possible to have partnership discussions that centered around extending or improving the API instead of “whether APIs exist”. And because all of the functionality was exposed, the partner could play with the totality of the functionality even if we didn’t have everything they wanted. And more importantly, this allowed us to discuss how we could evolve the whole system to do the right thing.