As is tradition!
In 1978, I read a book about the Holocaust. The book was in my elementary school library, at a school with a large Jewish population. And I was exposed to horrors that profoundly shook my faith in humanity. Read a book that describes the horrors of Nazi Germany at six, and your world gets warped.
In 1988, a Chemistry teacher at Campion School told my classmates and me that we were dead men walking. The human species was ultimately going to destroy the planet, and our civilization was done. We were 14 years old, and we were dead before our lives had even started.
In 1994, a very senior professor of CS walked into a room of CS majors and told us that our jobs were going to go away. That Indian outsourcing was going to eliminate our jobs. Only 13 people graduated in 1996 with a degree in CS because the rest of my peers took his warning seriously.
From about the mid-1990’s, a growing understanding of global warming made me appreciate that our actions had doomed our current civilization. Either a gentle transition or a massive collapse would happen. And my understanding of the human condition, from my reading about the Holocaust, made me bet on the massive collapse.
Now, in 2017, with many of the predictions about the global climate coming true, I look at children and wonder what kind of hellscape they will inherit.
And you think to yourself: if you can’t do anything and you are fucked, then you might as well drink the coffee, hug the wife, play with the kid, and sing gospels or chant Orthodox prayers.
The fact that this kind of despair has permeated my life makes me wonder why I am still alive. What force has propelled me to keep living?
Only one: hopelessness is not interesting.
Working in the tech industry, I have learned that you cannot motivate people with hopelessness. If you walk into a meeting and tell a bunch of execs that we are going out of business, they are not interested in what you have to say. If you say that unless we do X or Y or Z we are going out of business, they are still not interested.
Because every company goes out of business.
You are telling your business leaders that the sun will also rise, that the Universe will end and that they will be forgotten. You are giving them no new information.
And the reasons you can go out of business are so broad, varied, and complex that yours is just one threat in a spectrum of threats that affect them.
Strategic Software Architects must be about hope. Our job is not to find the millions of reasons we will die; our job is to find the one way we can win.
And what makes the job so very hard is that we have to create the circumstances that allow us to win.
And here’s why I believe that.
In 2006, I was working at NetApp, and I was asked to produce an analysis of data center technology trends and application trends. And I walked into a meeting with my peers and observed that multi-core systems had created a strategic dead end for NetApp. The value of external storage was to improve the performance of applications, and in particular to deal with the fact that storage consumed a lot of CPU and memory. By moving storage off the server, you could improve the overall performance of your system.
Unfortunately, server utilization was going down (2-10%), and as a result, it was increasingly obvious that running the storage on the local system made sense. At the time, Oracle and Microsoft were pushing hard for clustered file systems and databases that, they felt, didn’t need external storage arrays.
And I remember, saying in that meeting: We’re fucked, this was a nice company, time for us to look for a job.
A few months later, Tom Georgens asked Dave Kresse and me to study VMware and see what we could do with them. And I came to the same meeting and said: We’re saved! It turns out that VMware has figured out how to make this utilization problem go away.
And then that begat the question: how the hell do we sell NetApp storage into EMC accounts?
And I remember sitting in a meeting with every business leader at the company explaining a very me-too product strategy. And I remember everyone just staring at their laptops. Ten years later, I would have seen a whole bunch of LinkedIn updates.
And somebody asked me: Hey did you see how this guy saved 90% storage using NetApp dedupe?
And it clicked for us all. We had this feature, called dedupe, that allowed us to deduplicate data on primary storage. And VMware had a problem: they needed shared storage to store identical images.
And what was incredible is that dedupe was the feature we kept trying to kill. Originally imagined as an answer to Data Domain, or perhaps a generalization of snapshots, for years teams tried to kill it, and somehow it survived. This piece of unwanted technology transformed our company.
We shut down releases, redid roadmaps to take a piece of technology that barely worked and made it the centerpiece of everything we did.
We convinced the world that deduplication on primary storage was the right thing to do. Dedupe was a technology no one else had, because it was insane: intentionally introduce fragmentation. Dedupe on primary workloads was a crazy, stupid proposition for storage.
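The core mechanism is simple to sketch. Below is a toy illustration of block-level deduplication (my own sketch, not NetApp’s implementation): identical blocks are detected by content and stored once, and each logical block points at the shared physical copy.

```cpp
#include <cstddef>
#include <string>
#include <unordered_map>
#include <vector>

// A toy block store that deduplicates fixed-size blocks.
// Real systems fingerprint blocks with a hash and verify on collision;
// this sketch keys directly on block contents for clarity.
class DedupStore {
public:
    // Write a logical block; returns the physical block index it maps to.
    std::size_t write(const std::string& block) {
        auto it = index_.find(block);
        if (it != index_.end()) {
            logical_.push_back(it->second);  // duplicate: share storage
            return it->second;
        }
        std::size_t phys = physical_.size();
        physical_.push_back(block);          // unique: store a new copy
        index_.emplace(block, phys);
        logical_.push_back(phys);
        return phys;
    }
    std::size_t logical_blocks() const { return logical_.size(); }
    std::size_t physical_blocks() const { return physical_.size(); }

private:
    std::vector<std::string> physical_;                    // unique block data
    std::vector<std::size_t> logical_;                     // logical -> physical map
    std::unordered_map<std::string, std::size_t> index_;   // contents -> physical
};
```

A datastore full of near-identical VM boot images collapses to a small set of physical blocks, which is why the feature fit VMware workloads so well; the price is that logically sequential data is no longer physically sequential.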
And we won and lived for another decade.
The morality play, in my head, was the following: we could have curled up and died. I could have taken that interview at Google, or I could have kept looking. And I chose to keep looking.
And because we kept looking, we found something, and we survived and thrived.
If you want to inspire people, don’t tell them they are dead. Stare into the abyss and say: we will find a portal out of here.
And you know what, you may find a way out of the abyss, and trying to find a way out is always way more interesting.
Once you become an operational or strategic architect, programming languages become an option in the toolbox. And then the question becomes which one to pick.
The most important considerations when I started my career were:
1. Can the language interface with pre-existing code?
2. How mature and stable is the programming language?
3. How many programmers can you hire that know the language?
4. Does the language have a debugger and a profiler?
5. What tradeoffs does the language impose regarding performance, safety, and portability?
6. What specific libraries, tools, and constructs does the language provide for making the project go faster?
With large monolithic systems of the 1990’s, #1 forced you to keep the same programming language indefinitely. Unless someone signed up for a rewrite, you had no choice. Even in the case of a rewrite, you always wanted to leverage some of the pre-existing code.
And in the 1990’s the most important piece of code you had to leverage was OS system services.
Microsoft and other programming language vendors attempted to invent programming language technologies that allowed applications written in different languages to call one another, but the tools didn’t quite work, and they locked you into a specific vendor and OS. The first C compiler I bought from Microsoft, in 1987, had a long discussion of how you could get BASIC and C to work together.
And you still had the whole problem of cross-OS portability.
Attempts at standardizing libraries through things like POSIX didn’t work at all.
What changed in the 2000’s was the movement to multi-process application architectures that used databases as the mechanism to exchange data. The database was cross-platform, vendor-neutral, and language-agnostic. And all of a sudden, choosing a language became an option.
And more importantly, C++ was a real option because it was designed to solve #1 and #5.
In 2004, when I had an opportunity to pick a language as the operational architect for the NetApp Performance Advisor, I chose C++.
I agonized over the decision for a month because, based on my prior experience, this was a once-in-a-decade decision.
And the reasons for C++ were:
- C++ could call into all of our C code.
- C++ was much more mature than Java at the time
- C++ programmers were easy to hire, and C++ programmers could work on the C parts of our system.
- Working debuggers and profilers
- Allowed us to trade some performance for safety (a string class instead of char*, and reference-counted pointers) with no loss of portability across the platforms we cared about.
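That last tradeoff is easy to see in code. The sketch below is illustrative (the type names are hypothetical, not from the DFM code base), and it uses std::shared_ptr, the modern standard-library descendant of the hand-rolled reference-counted pointers of that era:

```cpp
#include <memory>
#include <string>

// A std::string member owns its buffer: no strcpy/free pairing, no
// buffer-size arithmetic, at the cost of some allocations and copies.
struct Volume {
    std::string name;
};

// Reference-counted ownership: the Volume lives until the last holder
// lets go, eliminating a whole class of use-after-free and double-free
// bugs for the price of a reference-count update on each copy.
std::shared_ptr<Volume> make_volume(const std::string& name) {
    return std::make_shared<Volume>(Volume{name});
}
```

Both facilities are plain ISO C++, so the safety costs nothing in portability, and the same code can still call directly into existing C functions.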
My old-school thinking decided that the Java Native Interface (JNI) was just too clunky as a mechanism to leverage our existing C code base. And I had spent a lot of time in college writing C++ wrappers and had no time to do that for the huge Data Fabric Manager code base.
Furthermore, I remember thinking that C# would crush Java because it made calling code from C# into C/C++ easier…
Java, in the end, won, because service architectures were just a better way to write software, and using the database as a way to get different parts of the system to talk to each other made it easy to add Java to a system.
Applications no longer needed to call into libraries; they could use the database to share information. I believed that this was a dead-end architecture, because the database would increasingly become a bottleneck, and that would lead to large monolithic Java systems, and that would lead to a stable, Java-dominated ecosystem. Except…
The emergence of SOAP and JSON and REST made it even easier to combine programming languages and circumvented the DB bottleneck.
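The reason these protocols dissolve the problem is that they reduce cross-language calls to text over a socket: the two sides agree on a wire format instead of a binary ABI. A toy sketch (hand-rolled serialization purely for illustration; a real service would use an HTTP stack and a JSON library):

```cpp
#include <string>

// Serialize a flat record to JSON by hand. The point is the contract:
// what the two services share is the wire format, not linked libraries,
// so the caller can be written in PHP, Java, or anything else.
std::string to_json(const std::string& service, int uptime_seconds) {
    return "{\"service\":\"" + service +
           "\",\"uptime_seconds\":" + std::to_string(uptime_seconds) + "}";
}
```

A PHP or Java peer parses that payload with its own JSON library, and neither side knows or cares what language the other is written in.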
In the 1990’s, picking a language was a once-in-a-decade decision; now it is a pretty routine one. And that leads to a new problem for strategic software architects: given that operational architects can pick any language at any time, what guidance do you give them?
In short, the basic model still holds, except for one minor tweak:
1. Can the language interface with pre-existing code?
2. How mature and stable is the programming language?
3. How many programmers can you hire that know the language, and how easily can they move between languages?
4. Does the language have a debugger and a profiler?
5. What tradeoffs does the language impose regarding performance, safety, and portability?
And that tweak is hugely important. Each service has a life span, and engineers have to move between services as business priorities change. Having different languages creates friction in your ability to move people.
At Zynga, Cadir was adamant that we support only a small, fixed set of backend programming languages (C, PHP, and Java) because he placed a very high value on the ability to move engineers easily.
I personally loathed PHP, found C too low-level, and could never quite get over my initial interaction with Java in 1994. But he had a good point, and I fully supported his decisions. One time, I forced a team to rewrite their Ruby code in PHP because of our policies.
And Cadir’s decision was a huge win, because people could easily move across the company, and sharing code was very easy.
And this leads me to the conclusion that the real list now is:
- How mature and stable is the language, including debuggers and profilers?
- How easily can programmers move between services?
- What specific tools or constructs does the language provide for the domain problem?
You’ll notice that the most important property in 2004 got dropped:
- Can the language interface with pre-existing code
Because that’s basically a solved problem.
In a fun thread on Facebook, where a bunch of my fellow architects and I interacted, a question came up:
What is this operational thing you keep talking about?
And well, I thought it might be worthwhile to define.
I borrowed the terms from military games and my long-standing fascination with the First and Second World Wars. Generally speaking, a tactical engagement is one involving a small number of soldiers as part of some battle. An operational engagement is a large battle, like the Battle of Kursk or the invasion of Normandy. A strategic engagement is the liberation of Europe or, grander still, the defeat of the Axis. And the lines are not as cleanly drawn as I just made them sound.
For me, the key points are that the bigger the scope:
- the more resources are available,
- the more impact decisions of the past have in the present,
- the bigger the stakes.
Using that mental model:
- Tactical software architecture is a bug or a feature spanning a single release
- Operational software architecture is about a product spanning multiple releases
- Strategic software architecture is about multiple products spanning multiple releases.
Using that mental model, we can answer the question that was posed as a follow on:
When do I pick a new language or stack?
Definitely out of scope for a tactical software architecture. You’re incrementally improving the product within a bigger operational software architecture, and you should use the tools provided.
That’s an operational software architecture question. The overall strategic software architecture may constrain some choices. However, an operational software architect should have some degrees of freedom.
At the strategic software architecture level, there is never one language or stack. Instead, there are questions of how many, and what are the preferred and what are the rules for adding or removing, and how do they inter-operate.
And on-prem vs. SaaS?
From a strategic software architect’s point of view, on-prem software limits architectural choices at the operational level. Each new stack has to be as mature as every pre-existing stack, and that means you keep using the same stuff someone picked years ago. The difficulty of adding a new stack is the underlying reason why on-prem software suffers from periodic rewrititis.
For SaaS, on the other hand, adding new technology stacks is easier. Unlike on-prem, where the entire product suffers if a service is bad, in SaaS the impact is more contained. And this gives operational software architects more authority and strategic software architects more freedom. That flexibility is why SaaS platforms don’t go through wholesale rewrites, although individual services do.
(1) The last great operational tank battle, the Battle of Kursk. Map By Bundesarchiv, Bild 101III-Zschaeckel-206-35 / Zschäckel, Friedrich / CC-BY-SA 3.0, CC BY-SA 3.0 de, https://commons.wikimedia.org/w/index.php?curid=5414021
Over the last 20 years, I have been involved with or led several rewrites of large systems. And that experience has taught me some basic rules for how to do a rewrite successfully. And they are pretty simple.
- The business reasons have to make sense to every engineer and every business team, and everyone has to believe them.
- The business leaders have to be committed to the rewrite. The bigger the scope of the rewrite, the more senior the business leader. If you are rewriting the entire product, then the CEO has to be committed.
- Once the plan is agreed upon, the entire team has to work on the plan, and you have to deliver business value as soon as possible.
- Optimize for the right strategic software architecture over the right operational or tactical software architecture.
The first rewrite I was peripherally involved in was a project at SGI called Cellular IRIX. Cellular IRIX was an attempt to build a multi-kernel single memory address space system to support SGI’s new cc-NUMA architecture.
The rewrite failed because there were too many constituencies who opposed it and too many business leaders who didn’t understand its value, leaving too many engineers confused about why they should sign up for it.
When SGI imploded and some key engineering directors got fired, the project died a miserable death.
The second rewrite I was involved in as an individual contributor was the rewrite of the NetCache product and that was a successful rewrite.
The original version of NetCache was a port of software written by Internet Middleware Corporation. The system suffered from flaws that were very hard to fix. If my memory serves me well, it was built on a callback-based system with unclear and uncertain rules about how resources were managed. If-statements were littered throughout the code, and function writers were expected to understand every path by which a message, with its state, could arrive at their function. In effect, to write a leaf function you needed to understand all possible states of the global system.
The code base was a technical mess, but that has little to do with why the rewrite succeeded. The technical mess, and the near impossibility of fixing it, were not why we successfully rewrote NetCache; they were why we wanted to rewrite NetCache.
We were able to rewrite NetCache successfully because performance, availability, and feature velocity were a serious business problem and there were no alternatives on the table. And yet that wasn’t enough.
The real reasons were that:
- Everyone was working on the rewrite.
- We knew that our new hardware could not be fully utilized using the old architecture
- There were no constituencies in favor of fixing the old code base. The product managers didn’t want it, the engineers didn’t want it, and the customers didn’t want it.
- Everyone from the GM down was committed to the rewrite
- The code base caused business problems everyone understood.
- The investment in the rewrite was 2x the total man years in the original code base.
After the NetCache rewrite, the next failed rewrite was of ONTAP following the Spinnaker acquisition.
NetApp decided to buy a great company with a great engineering organization, called Spinnaker. Spinnaker had a clustered file system with embedded namespace virtualization, and it worked. Unfortunately, post-acquisition, we failed to deliver a properly integrated product. Instead, we delivered a new version of ONTAP, ONTAP GX, that was not compatible with ONTAP. And then, after we delivered that product, we started another attempt to re-integrate the Spinnaker technology into ONTAP, an effort that became known as ONTAP 8.0.
There were a lot of reasons why this effort failed, and I was on the periphery of it. It’s tempting to point fingers at people I respect a lot, and so I won’t. Tactical and operational explanations of the failure are of no interest to this blog. What is of interest is the strategic software context that doomed the effort from the get-go.
At the time of the acquisition, the kind of performance and scale Spinnaker offered was of interest to a tiny part of the overall market. More importantly, the kind of virtualization Spinnaker offered, namely global namespaces, was of even less interest.
By 2004, the entire storage industry had decided that FC and its evil stepchild iSCSI were the most important protocols on the planet, because structured data was the growth business. And structured data lived in databases.
And NetApp’s business problem was to figure out how to address FC and iSCSI, not how to insert namespace virtualization. More importantly, at the time, there was little interest in core storage innovation. Dave Hitz, EVP and founder of NetApp, wrote a future-history paper that said as much.
NetApp’s business problem was how do they sell more of what they had to more different customers, not a new storage array.
And every engineering manager, director, and engineer knew this. And so a constituency developed at NetApp that said: sure, I’ll invest in this new system, and at the same time, I will keep investing in the old system. I know about this alternative plan because, as an architect for the storage management team, I was 100% aligned to wait for the ONTAP rewrite to fail.
The original plan called for all of the engineering team to pivot to the rewrite. And then the pivot didn’t happen. And the pivot didn’t happen because when problems occurred with our core business, investing in a rewrite that wasn’t business critical became a luxury. And so the rewrite got starved of resources.
At the core, if I can use 20/20 hindsight, the mistake from a strategic software architecture perspective was the decision to have two teams, one working on the original product and one working on the rewrite. The core team felt that they were stuck doing sustaining work, and the rewrite team didn’t have enough resources to compete.
And the core manifestation of this separation was that the new ONTAP had a different build system and source code repository than the original ONTAP.
Much like with Cellular IRIX, there were too many people looking in from the outside, trying to find ways to make it fail.
The company more or less figured this out after ONTAP GX failed in the market. They realized that the only way forward was one ONTAP, owned by the entire company, made progressively more cluster-aware. And it didn’t hurt that, by then, people felt that the kind of storage system ONTAP 8.0 was building could be market leading.
The rest, as they say, is history.
The next rewrite I was involved in was the rewrite of Zynga’s backend, something I led.
The business goal of the rewrite was quite clear. Zynga wanted to be in the 3rd-party market, and to do that we needed to offer 3rd-party game services. We had services, but the API for using them was a set of PHP libraries, and that wouldn’t work. So we had to offer services that developers could consume via APIs from their apps, apps that didn’t have to be in our data center.
And so we decided to take all of our systems and export them over a network API to the world. This became Zynga’s 3rd-party platform, which lived a short life, but the API infrastructure turned out to be very valuable for something else: mobile gaming. Later on, what we built got heavily modified and re-architected, and I like to think that the initial effort, a project called Darwin, led the way.
From ONTAP and NetCache, I took three key lessons:
- There had to be a compelling business value that was non-technical.
- The business leaders had to be committed to the project, not the technologists. There could be no way for someone to appeal to some exec higher in the chain to reverse course.
- Once the plan was decided, we had to go very hard very fast with the entire team and deliver business value as soon as possible.
And that’s more or less what we did. I remember sitting in a room with Cadir, the CTO and owner of all of central engineering, where we made the decision to do this. I told him I needed 18 months, and he told me I had six. And then I remember telling him we needed everyone to work on this or we shouldn’t bother at all, and he said: let’s do this. I continue to admire him for that level of commitment.
And to make sure everyone knew he was personally committed, Cadir led an all-hands where he said that this new effort to rearchitect the backend was something he was committing the entire team to.
We then had a series of architectural meetings where we figured out how to solve the fundamental problems of making our services available to the internet. At the time, we chose systems that were already in existence over writing new ones. Our goal was to have a working system in a quarter and something we could sell in six months.
Those meetings included every architect and constituency, and ultimately decisions were made that pissed people off. Some of those decisions were painful for me later in my career, because I ran over people when I could have just listened and taken them along for the journey.
We did succeed in an environment where people were quitting post-IPO and then quitting because our business was suffering.
And by success, I don’t mean what we shipped was the right software; I mean we shipped the right software architecture. And the organization was reorganized around the idea of delivering services through APIs that were centrally managed and accounted for.
And that brings me to rule #4: optimize for architecture, not for software implementation.
Because Cadir forced me to go fast, we couldn’t figure out the right thing to do in all cases. He forced me to prioritize, and what we needed to accomplish was getting the APIs on the internet fast, not having the best pieces of software behind them.
The point he taught me, and made clear, is that we can always improve a shipping product; we can’t improve a project that got canceled. And if the architecture is put together right, then the bits can be enhanced over time.
And because architecture and org structure are the same thing, once you reorganize to implement an architecture, you can quit the company. The company will naturally improve the architecture if the business problem remains. And that is what happened.
My favorite song in Hamilton is “The Room Where It Happens.” And one part of the song, in particular, has stuck with me.
And the reason is that I had a similar experience in my career, one that served to drive all of the rest of my professional success.
It was in 2002, at NetApp. Chris Wagner, the CTO of the NetCache product group, called a meeting of all of the senior engineers who worked on NetCache. And I wasn’t invited. And I remember standing outside of that room, looking in and wanting to be there, inside.
And for the next several years, I struggled to figure out how to get into that room. I succeeded, and only after I succeeded did I realize that the room where it happens wasn’t the room I wanted to be in.
What I wanted to be was George Washington.
See, when Thomas Jefferson, James Madison, and Alexander Hamilton walk into the room, they are debating options that George Washington was okay with. George Washington had created a strategic framework that they had to operate within.
For example, if George Washington had wanted Alexander’s plan to come to fruition, he would have pushed for the plan himself instead of sending his annoying right-hand man to negotiate with James Madison and Jefferson.
Similarly, he didn’t care if the capital was in New York or Virginia. If he had cared, then the topic would have been resolved much earlier, with Washington’s intervention.
In short, George Washington gave the folks in the room where it happens a set of choices they could make, and he was okay with any decision they made.
Strategic software architecture done right ensures that any tactical or operational software decision is immaterial to the long-term strategy, allowing individuals to make choices on their own that still ultimately produce the right final outcomes. In effect, every decision that gets made is one you are okay with, so you don’t have to be in the room where it happens.
In 2003, I stopped being a systems software engineer and joined the Manageability team at NetApp. My then boss, Nawaf Bitar, had orchestrated a re-org to absorb that team under him and he saw this as an attractive opportunity for me. The team was very small and understaffed and very talented.
And I was an ambitious, obnoxious 20-something determined to make my mark, who was encouraged to go blow up the current software architecture and build something new.
There were a lot of lessons I learned during that time. And many of them were people lessons. And I’ll get to them in time.
But there is one that is particularly relevant to my work today, so I’ll repeat it here.
At the time, the storage management product was called “Data Fabric Manager.” The basic architecture (and I am going from memory) was a monitor service that polled the infrastructure, an embedded database, and an eventing and alarm service that sent out SNMP traps or emails based on what the monitor service uncovered.
The DFM CLI was a program that executed as a CGI-bin script inside an Apache web server. It implemented a web UI and a CLI command set, and it had an XML input interface.
The problem with the technology was that in 2004, the kind of UI you could build in a web browser was quite limited.
At the time I believed that to build a slick performance monitoring tool, you wanted a thick-client and that a web-UI wasn’t going to cut it.
The team agreed to build a new thick client that would have a new API service, called Acropolis that the thick client would use.
Later on, we built Protection Manager on top of this new architecture, and Protection Manager required a lot of APIs.
And then there was a debate over whether the APIs would be public or not.
As we were building the UI, one of our most talented engineers observed that he could be 5x more efficient if there were a private API the UI engineers could use. His point was that the UI needed a lot of APIs, and not all of them were going to be useful to anyone but the UI.
And it was an interesting point. Here I was, advocating for a public API at a cost to development, at a time when no one was integrating with management systems.
And after some thought, I made the call that all APIs should be public.
And the rationale was the following:
- We had no idea what someone would use the APIs for
- We had no idea when the APIs would be used
- It’s practically impossible to justify investing in APIs except when your products need them.
And it became a mantra of mine: when you create an API, make it public. Because making an API public later is very hard to prioritize and get resources for.
Yes, your API is probably not the world’s best API. But then again neither was MS-DOS or the x86 instruction set.
So what happened next…
Later on, when we needed to do integrations, integrations we didn’t anticipate, that decision paid dividends.
The existence of the APIs made it possible to have partnership discussions that centered around extending or improving the API instead of “whether APIs exist”. And because all of the functionality was exposed, the partner could play with the totality of the functionality even if we didn’t have everything they wanted. And more importantly, this allowed us to discuss how we could evolve the whole system to do the right thing.
My career has been a lot of fun. And as a career, I hope the best is yet to come.
There are three major pieces of technology that have shaped my view on software architecture.
The first was the delivery of a streaming media cache at NetApp. The streaming media cache was a feature of the NetCache product line. On this project, I was an individual contributor working with some of the best software architects I have had the good fortune to work with. What we built was a system that sat between a Windows Media server, a QuickTime server, or a RealNetworks server and delivered media at a lower cost than a bunch of dedicated servers.
The second was the set of products I delivered as part of the storage management team at NetApp. The first was Performance Advisor, a performance monitoring tool that was NetApp’s first client-server product. The second was a radical data protection tool that used policy-driven data management and was too far ahead of its time. And the third was the first solution to integrate storage and virtual machine management.
The third was my time at Zynga. What I did at Zynga was create a centralized operations team that delivered a shared back-end with a large collection of services, reducing the OPEX and CAPEX of managing web-scale infrastructure while simultaneously improving uptime. Some of the stuff we did was open-sourced, like zperfmon and zbase. Our infrastructure and team were so amazing that, after a mistake and a bug in a partner product, we were able to restore a 12-million-DAU game with several thousand servers in less than 12 hours. The capstone of my time there was the delivery of an API infrastructure that took all of Zynga’s different backend services and put them behind a REST API, making it significantly easier to deliver features and services and to enable 3rd-party feature development.
Since then, I have had the good fortune to work at VMware, and a lot of what I am doing there I can’t talk about, so I will not. One thing I can mention from the 6.5 release is the architectural review board that I chaired, which reviewed many vCenter features.
The theme of my career, and what I view as strategic software architecture, is that I didn't architect a single product. Most of the architects were people whom I advised. What I did was to create the context that allowed them to deliver miracles. There were times, of course, when I had to step in, and advising requires technical understanding, but it wasn't about me writing all, or even any, of the code.
And this is where the confusion happens. I remember sitting in a room with a manager at NetApp. And she was screaming at me, telling me that I was a good-for-nothing, worthless software architect. That a real software architect built working prototypes, like her husband did. She was pissed that she was forced to work with me.
Given that it was early in my career, I freaked out.
And I get her reaction a lot. Every time someone looks at me and what I do, they think: he doesn't write code, so he can't be any good. He just talks and talks and talks. And he doesn't understand what's really going on.
And then, three years later, they are like that woman who screamed at me. After we shipped the first version of the product, and more and more of the product line started to adopt that unifying vision, she realized that I actually added a lot of value. That the whole org of over 200 people was moving in the same direction even though they were working on a variety of products. That what I did was create a context that enabled very large teams to work together well.
All that talking and probing and pushing and getting people aligned and providing the technical depth that allowed people to get unstuck produced a staggering amount of software, more than any one human being could write in a year.
Because what I do is figure out where the strategic lever points are, and use those lever points to move the world.
About a month ago, my wife and I attended a performance of Hamilton in San Francisco.
And afterward, as I was sitting in the car, I realized that there was a story I wanted to tell in defense of software architecture.
More precisely, a story that was a defense of the kind of software architecture I do. Because it's a kind of software architecture that is extraordinarily valuable, and extremely undervalued: strategic software architecture. What strategic software architecture is and is not will be something this set of essays will attempt to define, describe, characterize, and explain. In a nutshell, the central thesis is that for any business, the strategic 5-to-7-year question of how to marshal people and technology and product to deliver outsized business results is actually a software architecture problem: one that tries to impose structure on a chaotic and fluid situation while providing flexibility in the choice of tactics.
A mouthful indeed.
And in honor of Alexander Hamilton and the Founding Fathers, I decided to write a series of essays, titled the Architecturalist Papers, a pompous homage to the Federalist Papers, that try to explain and defend strategic software architecture.
The struggle I faced in putting finger to keyboard was the daunting task of doing research. After all, shouldn't I do some survey of the state of the art, and show where my thoughts fit into the general understanding of software?
Thankfully, a friend of mine remarked that many researchers in her field are not empiricists. First I had to look up the word empiricist and discovered that it meant:
a person who supports the theory that all knowledge is based on experience derived from the senses.
And it became apparent that I had a lot of experience in this space, that there was a lot of knowledge to be derived from it, and that more abstract thinkers could then figure out general models that were even more valuable.
What clinched the deal was another exchange with my wife about common sense. Common sense, I had read somewhere, is defined as the set of accumulated wisdom from experience. When we say someone lacks common sense, what we mean is that they lack the accumulated experience, or don't have access to that experience, and so make poor decisions.
This set of essays is an attempt to share my collected experiences, allowing others and myself to derive knowledge from them, and hopefully to pass along some common-sense ideas that have proven very useful over the years.
The next essay will be a survey of the things I have been involved in that form the basis of my experience.
Because some Americans thought it was okay to elect a p**y grabber.
And then millions of women decided to say no.
And then hundreds of people decided to tell advertisers that advertising on Breitbart wasn’t okay.
And then an NPS ranger decided to say no when Trump told her to shut up.
And then when Bannon told the press to shut up, the press said no.
And when Miller told us that our brothers couldn't come from abroad, the ACLU and hundreds of people showed up to say no.
And when Paul Ryan wanted to give me a tax cut, a cut I didn’t want, thousands reached out to their representatives and said no.
And so when Bill O’Reilly got exposed as the ass-hat that he apparently is, we were all very used to saying no.
Or perhaps, in terms Mr. O’Reilly understands: if we liberals are a bunch of snowflakes, enjoy the winter.