
57 architecturalist papers: generated software and created software

March 26, 2023 by kostadis roussos

Over the years of writing and architecting software systems, I saw the emergence of automatically generated layers of software that were nominally supposed to eliminate a class of software development or even software developers as an employment category.

Borland C++ 3.1

For example, in the 1990s, higher-level programming languages and dynamic compilation of code were supposed to eliminate the need for low-level programming. I remember taking classes in the early 1990s where we were told that garbage collection and object-oriented programming would unburden software developers from resource management.

At the same time, we had the emergence of frameworks that promised further productivity gains, like the Distributed Computing Environment and CORBA for distributed systems. As for building Graphical User Interfaces, we had the proliferation of toolkits like OpenView, X-Motif, Borland’s Application Framework, Java Swing, Microsoft .NET Framework, etc.

Over the next 30 years, the story stayed more or less the same. Every year there would be some new technology that would, once again, unburden the software developer from toil.

And, of course, if you happen to be the person paying the software developer, make it possible to have fewer of those developers working for you.

I recall that in 1994, a Professor at Brown University shared with his computer science students his concern that there would not be enough high-paying jobs in the field. The emergence of OO programming, and the ability to have people work remotely, would reduce the value of a degree in computer science over the long haul.

And he was right, but not quite in the way he anticipated.

All those technologies did reduce the need for highly skilled technology experts to build much of the world’s software. In 2003, I had to write my own time-series database for a product. In 2022, I would spend more time deciding which one to use.

The available software building blocks are more powerful, flexible, and compelling than at any point in the history of computer science.

And that has enabled more people to write software, in shockingly short time frames, that I could not have conceived of.

That point was brought home to me in 2009 when a small team of engineers at Zynga, leveraging AWS, PHP, and a home-grown NoSQL database, built a game on a distributed systems architecture with over 2000 servers. And none of them had a Computer Science degree.

And yet we still invest in software, building new products, and hiring new engineers.

Why?

Competition. It turns out that when everyone can make a scalable game, the magic is not in the scalability but in the game itself. And when there is a large pool of competing choices, and success only goes to the best game, marginal and incremental improvements result in huge wins.

This brings me to my somewhat non-obvious conclusion: a software product's success is a function of how much easier it makes the lives of people who do something useful. The problem is that when anyone can do something, it has no commercial value.

For example, when everyone can build a scalable game, the value of scalability is zero. It's like with special effects; when everyone can have great special effects, the value of special effects in a movie is zero.

So what did the frameworks and toolkits make easier? They made the expected stuff easier, so you can work on the element differentiating your product or game. You still need to invent a new game mechanic, and that new game mechanic, because it is new, requires new code. And, here's where it gets fun; eventually, you find new game mechanics that require new hardware, which requires new frameworks, which require…

And if the demand for new games is significant, then the need for new game mechanics is large, and the demand for new software is strong. We all wonder why we are not more productive while playing games that are more beautiful, engaging, and deeper than ever before.

We live in a world where the amount of software is growing. And more people are writing software than ever before.

And that’s a good thing. But the need for new innovative software remains. And that innovative software is not generated; it is created. And that new software must be created when the boundary between the digital world and the human or physical world changes.


Filed Under: Architecturalist Papers

56 architecturalist papers: what if the different people never came?

February 20, 2023 by kostadis roussos

In my post yesterday, I attempted a rebuttal of Mr. Jassy's arguments about the superiority of in-place work.

My meta-argument was that a diverse workforce has a diverse set of needs. And that what works for some does not work for others.

A great meta-question is – so what? Does it matter to an organization if some folks are excluded, and some are not? Is cultural homogeneity an asset or a disadvantage?

When a company asserts that specific lived experiences are valid and others are not, it explicitly articulates that specific human experiences suit its culture and others are wrong.

Those whose experiences fit into the wrong category self-select against that company. And over time, if it is successful, the company views its choices as correct because those who don’t fit in never show up.

They never bothered to come and talk to you because they saw that you thought they were terrible and that the rejection of their existence meant there was no space for them there. But it's just working from home? Right? Except it isn't. Where does that end when you assert your lived experience is correct and the other person's is wrong? Why should it stop at how we work together? Why not how we dress? How we address each other?

“We chose to do it this way, and we are successful, therefore we were right.”

What was a choice, a preference, becomes, because of the group's success, a criterion for articulating what is generally correct and generally incorrect for all human beings.

And again, so what?

Diversity and diverse perspectives matter because otherwise, you suffer from groupthink. If everyone does it one way, and all successful people do it that way, and no one objects to how we are doing it, then it must be the right way to do things. And what happens, invariably, is that someone else figures out a better way to do it, and then the group collapses as the new way triumphs over the old.

Several years ago, I wrote about Usain Bolt. Usain Bolt was too tall for a sprinter. That was the orthodoxy. But a coach decided to ignore orthodoxy, and guess what? Usain became the greatest sprinter of all time.

Or Earvin Johnson was too tall to be a point guard. But another coach saw his passing skills and decided that the advantages he brought to the game because of his vision, height, and reach from the point position outweighed the benefits he would bring to his team as a front-court player.

History is littered with companies, teams, and organizations convinced they were doing things the right way. By asserting that their lived experience was correct over other people’s lived experiences, they drove talent away. That talent then thrived elsewhere.


Filed Under: Architecturalist Papers

55 architecturalist papers: Facebook v Zynga Microchannel, the Apple Store, D&D, and killing platforms

January 11, 2023 by kostadis roussos

IBM PS/2 Models 60 and 80 side-by-side

In 1981, IBM introduced a revolutionary computer that radically transformed the tech industry: the IBM PC.

What made the IBM PC revolutionary was its open-system architecture and the royalty-free nature of its use.

Specifically, anyone could create an IBM clone, and anyone could develop software for said IBM clone and make money without paying a dime to IBM.

IBM hated that. So, in 1987, their solution was to create the Micro Channel PS/2. The intent was to create a new, divergent market of PCs that no one could make clones of.

What happened was that the market split between PS/2 and EISA, and the PS/2 became a failed computer system, a historical artifact, and a warning to those who would want to close an open platform.

In June 2008, Apple introduced the App Store. The Apple App Store was the first massive commercial success of a cell phone app store. And it created a standard for how to take royalties from creators. And it was a huge success. So much so that we have had lawsuits that are still going through the court systems to determine the boundaries and limits of the market rules Apple can enforce in the market they created.

In June 2010, Facebook saw the success of the Apple App Store and decided they wanted a piece of the Zynga action. Facebook had created an Open Platform, and that Open Platform enabled Zynga to grow like wildfire. But Facebook wasn't making any money from Zynga. And so, there was a stand-off between Zynga and Facebook. Zynga and Facebook signed a deal that required Zynga to hand over 30% of its revenue to Facebook. That 30% revenue haircut killed Zynga.

Facebook’s thesis was that by creating a new currency, Facebook Credits, that Zynga customers would use, they would increase the total number of people who had digital money, and thus more money would be spent. The thesis failed, and Zynga suffered.

In both Apple's and Facebook's cases, they saw the intellectual property creators who used their platforms as freeloaders. In Apple's case, the tax was declared up-front, so you knew what you were getting into. In Facebook's case, it was an after-the-fact revision that destroyed a business.

But what happened to said intellectual property creators? As my son recently said – “why does mobile gaming suck?”

Gaming is a hit-driven business. One hit makes all the money, allowing you to make the next game. 30% is a massive tax, restricting the money you have to make further investments. And so the gaming industry has moved back to open platforms, ironically, the Windows PC.

This now brings us to the recent decision by Hasbro to introduce OGL 1.1. A lawyer covered it well here – https://medium.com/@MyLawyerFriend/lets-take-a-minute-to-talk-about-d-d-s-open-gaming-license-ogl-581312d48e2f

If you parse the lawyer's responses, it boils down to trying to create an Apple-like App Store for D&D content. Essentially, you hand over your financials and content, for a smaller slice of the pie, for the right to play.

In effect, it destroys the open ecosystem that D&D had. For example, suppose I have a website with a random generator of D&D content. Under OGL 1.1, that random generator becomes illegal.

Now Hasbro is betting that people play D&D and don’t care about the open content and that the creators will have to suck it up and deal.

Except, and this is a big exception, that isn’t true.

The iPhone was a singular technology that could not be replaced. Folks used Facebook because they wanted to connect with friends. Those platforms had value outside of the gaming industry. D&D, by contrast, is a game and a platform.

But it’s an extraordinary game where the player and the GM create content while playing. And the GM can adapt content from other gaming systems to their game. And the GM can adapt rules to their game.

In short, I expect the TTRPG community to discover the power of system-neutral gaming. And the internet will fill with systems that allow you to convert content to the gaming system of your choice. Except for the new, restricted one.

My take, and it’s hopeful, is that D&D will continue, but tabletop role-playing will finally escape the long shadow of its creator and his original game.

But going back to software architecture and platforms, it’s always tempting to control a market, and there is a lot of value in doing that. But when you extract a lot of money from a market, you eventually kill the market. And over time, those creators whose businesses are hits will move to open platforms.


Filed Under: Architecturalist Papers, Facebook, Zynga

54 architecturalist papers: career progression for those of us who are under-represented

December 1, 2022 by kostadis roussos

I intend to write a lot about why you should go and read this https://hackingcapitalism.io/

And then I realized I could write very little.

You should read Kris Nóva's Hacking Capitalism because it is an exceptionally well-written document by an outsider on how to function and succeed in the alien tech world. You should read it because Kris Nóva (@nova@hachyderm.io) is a fantastic, impressive person who distills a lifetime of insight into a tight, actionable document.

And that’s the only reason you should read it. And if you need me to tell you why beyond that, here’s my life story.

At NetApp circa 2002, a great manager named Jonathan Crowther took me to lunch to discuss my career. It was the first time any manager had done that.

I was frustrated because my career had stalled out. And I didn’t understand why.

And he smiled and said that the problem was that I only approached people as functional units that needed to solve my problems, or whose problems I needed to solve. That I never treated people as people.

And I remember staring at him, and then he explained, “when you come in on Monday, you never say – how was your weekend? It’s always – I need this.”

I stared at him. I suspect I am somewhere on the spectrum. And it was a moment of clear revelation.

I spent the entire weekend thinking about this one conversation. A key benefit I had was that I had done a lot of acting, enjoyed D&D, and had an absurdly high executive function. And so I came in on Monday morning and acted. I have a YouTube video where I talk about this here.

What resonates in Ms. Nóva's document is that she detaches herself from needing to understand why, accepts that things are the way they are, and devises rules on how to win based on that.

I lack her precision of English, wit, insight, and, to be quite frank, experience. And I wish she had given me this document when I was 22.

So who should read it? If you were not born in the United States, if you are not a white cisgendered man, and if you are not privileged, you should read this document.

The United States is an alien country. Its rules are strange. And its culture derives from 300+ years of interaction among property rights, the evil belief that labor should be enslaved, extreme Christian dogmas, and a sense of supreme righteousness and contempt.

For example, let’s consider ownership.

A year ago, I had an opportunity to talk to a Chinese woman from mainland China. She was talking about ownership and how she was struggling with her career because of it. And I realized she was talking about ownership in the way a colorblind person talks about colors.

And then we talked about "what is ownership in the USA." In the USA, ownership of property is a sacred right. When you own something, you own it. You can do whatever you want to it. And nobody can tell you otherwise. A considerable amount of the conflict in this country is about putting boundaries on the ownership of people by others and the limits of ownership of common goods by individuals at the expense of the group. The idea that the country protected property rights from others and from the government was eye-opening to her. Her lived experience of property rights in Xi's China was not that. And as we talked about ownership, she said – "oh, so that's what ownership means."


I love this country, and as a Greek from a village, and a Canadian from Montreal surrounded by other small-town Greeks who fled Greece due to a civil war, a world war, and a junta, the culture and beliefs of the United States frequently leave me perplexed.

Hacking Capitalism is a guide to making this system work for you without understanding the why, just its weak points.


Filed Under: Architecturalist Papers

51 architecturalist papers: transactions and arrangements and architecture reviews

September 22, 2021 by kostadis roussos

One of the best program managers I have had the chance to work with observed that companies have challenges when transactional-based decision-making and arrangement-based decision-making come into conflict.

Transactional-based decision-making relies on strict enumeration and documentation in the form of a contract. Once both sides agree on the document, then even if people change, the decision remains. The trust is in the record and in the change controls surrounding the document. This kind of process depersonalizes the decision. These decisions tend to be transparent.

Arrangement-based decision-making relies on trust between the two parties. The parties establish trust in various ways, but typically it's about making sure that the personal goals of both leaders align. Arrangements are remarkably durable and transportable, meaning that once you establish an arrangement with someone, trust becomes the basis of subsequent decisions much faster. It also means that decisions can be made without going through a complex process. The problem is that these decisions tend to be opaque.

Both models taken to extremes can be a disaster. I have worked in both. And they both suck.

The law is THE example of a transactional system. Overly transactional environments turn into bureaucratic, legalistic environments.

A pure transactional system can turn into a totalitarian state. A great example is – “the only way to change this spec is to call a meeting of the change control board and submit a request.”

A pure arrangement system can turn into a cult where belief in the great leader is paramount, the phrase "blah said" is used to justify or denounce everything, and disagreement with the great leader is an unforgivable sin. A great example of this is when someone says – "but SR ARCHITECT FOO said". Or, alternatively, an old-boys network that is impenetrable.

I’ll also observe that both approaches feel natural to different people. My personal experience is being Greek and Canadian and living in the USA. As a Greek, I believe that laws are suggestions. That relationships, and in particular, family relationships, trump everything. As someone who lives in Sunnyvale, CA, I have learned that in the USA, laws matter, but that the laws are structured to satisfy arrangements among the wealthy.

To someone like me, a transactional system feels like a spectacular waste of time because personal relationships trump everything and, ultimately, transactional systems are a facade for arrangements.

But I have learned that that is naive. The critical flaw of arrangements is the opacity of building trust and the boundaries of trust. Transactional-based decision-making creates a public record of the moment trust was built and describes the trust boundary. Without such a record, trust-building naturally devolves to family, culture, background, and other attributes.

This brings me to how I think about architecture reviews. The purpose of the review is to create a trust boundary between the approver and the author. The more high-level the spec, the more trust has to exist. That trust is typically built over a series of smaller successes or previous professional successes. The more detailed the spec, the less trust needs to exist. A key trust moment as an architect is when you are willing to stop being the sole approver of an area.

The key illusion in all of this is that, as an architect, your spec actually defines what is built. The reality is that unless you are writing the software, some other human being is going to take that document and do what they think is the right thing. My job is to make sure that they are thinking about the problem in a way that aligns with a reasonable solution.

Decrying arrangements because they are old-boys networks is wrong. Decrying transactional-based systems because they are Process is also wrong. Like most everything in life, there is a balance. And like most everything in life, navigating that balance is the art of living and of being a software architect.


Filed Under: Architecturalist Papers

50 architecturalist papers: portability and the multi-cloud

July 18, 2021 by kostadis roussos

Over the last 15 years, I’ve been noodling about portability. And the more I noodle about portability, the more I think it’s very poorly defined.

So, let me try.

In the context of computers, there are two flavors of portability: data path portability and control plane portability. The data path is the bits of code that do real work. The control path is the code that sets up the data path, monitors the data path, and takes corrective actions.

In that weird way that computers work, the control path itself is just another data path. And so – in a bizarre way – there is only data path portability. Simplistically, the control path is just instructions that execute on processors and store data in memory and storage.
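To make that split concrete, here is a minimal sketch in Python; all names are hypothetical, and it compresses both roles into a few lines:

```python
# Minimal sketch: the data path does the real work; the control path
# sets it up, watches it, and takes corrective action. Note that the
# control path is itself just ordinary instructions executing on a
# processor -- "just another data path."

def data_path(request: bytes) -> bytes:
    """The bits of code that do real work."""
    return request.upper()

class ControlPath:
    """Sets up the data path, monitors it, and takes corrective action."""

    def __init__(self) -> None:
        self.healthy = True

    def run(self, requests: list[bytes]) -> list[bytes]:
        results, errors = [], 0
        for req in requests:
            try:
                results.append(data_path(req))
            except Exception:
                errors += 1
        # Corrective action: a control decision driven by observed behavior.
        self.healthy = errors == 0
        return results

if __name__ == "__main__":
    print(ControlPath().run([b"hello", b"world"]))  # [b'HELLO', b'WORLD']
```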

And although Paris Kanellakis would be pleased to know I learned something in CS051, things are not quite that simple in the world of commerce that I live in.

Except when they are.

A VM encapsulates a traditional single-VM application's control and data path. The VM, as a result, is a highly portable abstraction because the hypervisor can ignore which instructions are from the control path and which are from the data path; to it, they are all data path instructions. And we know that if you don't care about performance, any data path can simulate any other data path.

So far, so good.

VMware demonstrated that the portability of a VM is very valuable because you can move it between generations of hardware, move it around to utilize the efficiency of your infrastructure better, and because it’s a convenient way to manage workloads.

And so, the VM abstraction and its portability had another interesting effect. You can hand over the infrastructure operation to another person.

But while I was at NetApp and Zynga, there were all sorts of other kinds of portability that people discussed. Data path portability was nice, but it wasn't enough if you couldn't encapsulate the entire app into a single VM.

So sure, I could move the VM around, but if I didn’t have the control path, what value was the VM?

And so there was this thought process that said, "VMs are not that interesting."

And I looked into all sorts of interesting programming languages and toolchains.

And yet, along the way, something funny happened. If I cared about performance, I cared a lot about the hardware I was running on. And all of a sudden, the behavior of the hypervisor mattered a lot.

At Zynga, when Amazon changed their NIC, our costs skyrocketed. Their decision to optimize their business almost tanked ours.

But performance, who cares about performance? Right? Except, here's the rub: when you don't have pricing power for a feature, and you sell the hardware that the feature runs on, the only way the feature can be worth implementing is to implement it without meaningfully increasing the hardware costs.

And then the performance of the feature matters a lot. And some features don’t get implemented because there isn’t an efficient way to do that.

But if you can’t charge more money, why implement the feature? Because – here’s the rub – your competition is adding features very fast. And if you don’t stay ahead, the whole SaaS model suggests that eventually, your customers will move. So as a vendor, you have to keep adding features while keeping the cost of the features in check.

Hoom. So – wait, if I want to grow my business and add more services, and can’t meaningfully increase the price, then, erm, the performance of the feature matters a lot.

And yes, you can make code more efficient and thoughtful, but at some point, you start caring about whether the disk is an SSD or whether the NIC has enough buffers, or what exactly is the CPU you are running on.

I mean, if people didn’t care about these things, those things wouldn’t exist.

Okay, so?

Well, once you start getting to the point that you care about performance that much, then the hypervisor details matter a lot. But surely you jest? Hypervisors make choices, and those choices impact the way your applications run. And if you design an application to run on a particular hypervisor, then the portability of your application is the portability of that hypervisor. If the hypervisor only runs on hardware X or Y, or Z, any performance improvement locks you into that vendor.

To put it differently, if the control path of the hardware is in the hypervisor, and you need to control the hardware, you need to control the hypervisor.

So what? It turns out that the hardware vendors are adding new hardware faster than any integrated solution vendor can expose it. And so, being able to have some degree of portability to run on other hardware will matter a lot.

Because why? Because the more it costs to run a feature, the less money you have and the more money the other person has.

Hoom.

Baremetal! BAREMETAL!

I hear you say. But I am being cheeky. See, every modern OS is a hypervisor of some kind. And when you run on a public cloud, you can't control the hypervisor.

And in some ways, not to be annoying, running your app in a VM on a cloud is like trying to tune your Java code – some optimizations can be done, or you can use C or Rust or C++.

So when I hear bare metal, I don't hear "no hypervisor"; I hear – run a hypervisor of your choice on hardware.

BUT BAREMETAL IS HARD. I agree.

So?

Well, here's where things get very interesting. Suppose a portable hypervisor existed, and you didn't have to do the hard part of managing the hardware but could control said hypervisor to your heart's content?

Why then, for those applications where the data path was critical, you wouldn’t be tied to the cloud vendor’s hardware choices.

Data path portability is about being able to run your VM anywhere you want and be able to control the hypervisor.

AHA!

So for the essential thing that can move my costs, I need to move my VM to whatever hardware I want and tweak the hypervisor any way I want.

But then why has the public cloud won so much business?

Well, for the same reason Java has won over C++.

It’s just got a better control plane for application development. Java makes so many things easier.

And really, that portable hypervisor was painful to use compared to the cloud.

So hypervisor portability was nice, but it was uninteresting if you couldn't run the control plane. And you couldn't, because it took a lot of time and money to build, and the value of doing that work was marginal. Except when it wasn't, as for Dropbox. Or at Zynga, where we designed our control plane to be portable and had options when Amazon decided to optimize their business at our expense.

But the control planes aren’t tied to the hardware platforms as much.

What does that mean?

Well, it means that for some workloads where the costs matter, using non-proprietary control planes so I can use the portable hypervisor may be a better choice.

And well, a portable control plane is – to a hypervisor – just another data path.

REPATRIATION!

No.

No software service that exists and increases in value hasn’t been optimized and improved over time. The original S3 relied on MySQL databases, for crying out loud, and Facebook used a single NetApp filer.

Thinking about this as "repatriation" is the wrong mental model. The right mental model is that critical services will get optimized, and at some point, the optimizations will care about the hardware. The existence of portable control planes and portable data planes will make that moment fascinating.

But I have to build my data center!

Nope.

See, those portable hypervisors are pretty much available as a service now.

And that leads me to the final round of thinking.

When application control and data plane portability is possible, will the ability to cram a small number of hardware configurations in data centers be that valuable?


Filed Under: Architecturalist Papers

48 architecturalist papers: system efficiency vs features and the multi-cloud debate

June 20, 2021 by kostadis roussos

Over the years of my career, I have seen the tension between "MOAR FEATURES" vs. "MOAR EFFICIENCY".

The more features camp tends to view everything through the prism of – add more features to find more customers. Therefore, every business problem is a feature problem. And any activity that doesn’t result in more features is a waste of time.

The more efficient camp tends to view everything through the prism of – if the software and the construction of the software were more efficient, we could save money and be more productive.

The pro-features folks tend to argue that growth is the solution to every efficiency problem. If you have more money, you can spend your way out of any problem. And so the goal is always to be making more money. The pro-features camp also argues that there are many ways to be more efficient, including hiring cheaper engineers.

This Manichean view of the universe is how we arrive at Product Manager type CEOs arguing that we shouldn’t invest in anything that detracts from feature velocity.

Strategic software architecture must take a different point of view.

Until a product has achieved product-market fit, viability past 18 months isn't assured, and any investment that pays out after 18 months is of questionable value.

Once product-market fit has been achieved and viability past 18 months is assured, efficiency investments effectively extend the business's runway, return margin to the business, and allow the company to generate more cash per dollar spent.

And the point at which you pivot investment is a delicate one.

The pro-feature camp controls the agenda. And they will push hard against any notion that efficiency investment should trump features.

The pro-efficiency camp, having been overruled for so long, will either not exist or have no ability to drive investment decisions.

Worse, the pro-feature camp, by its nature, tends to view any outcome that is further out than 18 months as very speculative. And the pro-efficiency camp views the value of its work on a 36-month time horizon.

In this battle, the short-term always wins.

In this struggle and debate, my attitude has always been that efficiency must have its own agenda but must sell it to the pro-feature camp. In effect, every efficiency improvement is really a feature.

But to do that, you have to sacrifice the big bang for small incremental wins along the way. In effect, when the business wants a feature, you write code that also improves the efficiency of the system over time.

And, of course, it’s not that simple.

You also have to pick the right technologies. To deliver the biggest wins, you need to be able to use best-of-breed technologies where there is a competitive market, and you need to be able to swap out technologies easily.

Huh?

Let’s take multi-cloud, for example. One argument is that making your application portable is foolish because it detracts from the core business, which is feature delivery. The argument is typically presented in the form – “private cloud very hard, public cloud easy, single cloud easier.”

And there is some truth to that. And the ease comes at a cost. And debates ensue.

I see it very differently. There is this massive competitive market of infrastructure providers. That infrastructure is a commodity except in the rarest of cases. As a technologist, I enable the business to be more efficient by allowing it to take advantage of that commodity infrastructure through my software choices. Customers of infrastructure will choose because of externalities—things like laws, vendor relationships, and personal taste.

Pre-k8s, this argument was slightly harder to make because there was no obvious API layer that would allow you to decouple your application layer from the common infrastructure, and the market wasn’t very competitive.

In fact, in 2015, at VMware, I argued that such a layer would have to come into existence, or AWS would own the world. So our job at VMware was to stay alive until such a layer emerged. I remember saying to folks – this thing would emerge miraculously, and when it did, we would need to jump all over it. I remember folks looking at me as if I was delusional.

Well, k8s arrived and was exactly that layer, and we jumped all over it. K8s was breathtakingly brilliant because it was the right choice for developers and infrastructure providers.

And the reality is that just picking k8s already makes your SaaS system more portable. And it’s an easy choice and gives you portability from day 0.

But it’s more than just k8s. Picking a managed Postgres over a custom DB makes your application more portable.
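As a sketch of why that is (the connection strings and names here are hypothetical, not the author's code): the application speaks the standard Postgres wire protocol through a standard driver, so moving between providers is mostly a configuration change.

```python
# A sketch of why a managed Postgres keeps you portable: the application
# speaks the standard wire protocol, so moving from RDS to Cloud SQL to
# an on-prem cluster is a config change. The DSNs are hypothetical.

import os
import psycopg2  # standard PostgreSQL driver

# Same application code against any provider: only the connection
# string changes.
dsn = os.environ.get(
    "DATABASE_URL",
    "postgresql://app:secret@db.internal:5432/appdb",  # hypothetical default
)

conn = psycopg2.connect(dsn)
with conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone())
conn.close()
```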

Incremental choices add up over time, so that when you want to make bigger bets, the cost of making those bets is small.

And the real bet is that, over time, because the market is now competitive, the infrastructure vendors will create an ecosystem that makes both multi-cloud and private cloud easier.

In fact, I believe that single-cloud strategy advocates ignore how k8s is creating an API layer that enables innovation and competition in all layers simultaneously. And it's not the only layer. Amazon, for example, with i3.metal instances, created a competitive market for hypervisors in AWS.

The anti-multi-cloud folks point to Dropbox as a potential failure because of the complexity. On the contrary, I would argue Dropbox is an example of how hard multi-cloud is without the right layers creating an opportunity for competition everywhere.

The anti-multi-cloud folks' argument also turns to – well, Java. The argument is that Java attempted to create a sealed abstraction that hid all infrastructure from the applications and failed.

And they have a point. But the goal isn’t the ability to run the application without change on any cloud; the goal is to make the cost of doing that so small that the efficiency gains can be realized. The goal is to make it easy to force the infrastructure vendors to give up some of their margins to the application.


In this picture, you can see what I mean. In the initial state, the application is using only proprietary APIs. Over time, the applications that are being developed have access to standard APIs. The existence of those standard APIs and a competitive market gives you choices. The proprietary APIs never go away; they just become a smaller part of the application. And the most important part is that the goal is not to avoid custom APIs; the goal is to not let custom APIs hold you hostage to a vendor. And the other point is that it's multi-cloud; if you want to use custom APIs, use them.
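A small sketch of the standard-API idea (the endpoints and names below are hypothetical assumptions, not recommendations): code written against the S3 protocol via boto3 can point at AWS or at any S3-compatible backend, so the proprietary part shrinks to configuration.

```python
# Sketch: one standard API, many possible backends. The endpoint URLs
# below are hypothetical examples, not recommendations.
import boto3

def make_store(endpoint_url=None):
    # endpoint_url=None talks to AWS itself; any other URL talks to an
    # S3-compatible service (e.g., MinIO or Ceph RGW on a private cloud).
    return boto3.client("s3", endpoint_url=endpoint_url)

aws_store = make_store()                                 # public cloud
onprem_store = make_store("http://minio.internal:9000")  # hypothetical on-prem

def save(store, bucket: str, key: str, data: bytes) -> None:
    # Application code is identical regardless of which backend it talks to.
    store.put_object(Bucket=bucket, Key=key, Body=data)
```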

The emergence of that competitive market is creating innovation right now.

And as a result, forward-thinking venture capitalists and forward-thinking architects will choose technologies that will facilitate the emergence of multi-cloud because it is in their selfish interest.

Filed Under: Architecturalist Papers

47 architecturalist papers: talent and team over location, always

June 12, 2021 by kostadis roussos

The recent backtracking of Google, Amazon, and Apple on remote work is fascinating. For companies that are supposed to be on the bleeding edge of corporate governance, the decision not to allow remote work makes me want to ask – "show me the work that led to your decision."

In that spirit, let me show mine.

In 1994, Andries Van Dam got in front of an undergraduate computer science class and announced that the idea of all software engineering remaining in the USA was foolish. As software became more about composing components than building everything from scratch, and because any software engineer anywhere could construct a component that any other engineer could use, jobs would be offshored. In effect, he argued that there would be fewer jobs in the USA.

It's easy to laugh at Andy now. But in 1994, the bulk of software was for military-industrial uses. The World Wide Web had barely come into existence. And thanks to the peace dividend, jobs were being lost everywhere in tech.

Many years later, he and I caught up, and we remarked that he was right; what he didn't account for was an ever-increasing pie. In other words, both statements could be true: global teams delivering components would happen, and more jobs writing software would exist.

Post-dot-com boom, the fetish in the tech industry was this concept called off-shoring. The thesis was that companies could find good-enough talent in lower-cost locales and thus reduce their costs.

I remember a meeting in which some GMs declared that we would only be hiring in India; it convinced me that Andy's apocalyptic vision was right and that I needed a backup plan for my software career.

Waiting on a Green Card and having no other realistic choice as the nuclear winter of 2001-2009 took hold in the tech space, I had to learn how to work with remote teams.

In 2002, I made my first recruiting trip to India, where we brought up a team whose job was to take on NetCache. It was really a case of training my replacements. Being young and foolish, I don’t think I really understood what was going on, but it was pretty grim.

In 2003, another GM showed up in a meeting and explained that we needed to offshore more work to India, and that my project on storage management was to be sent there.

But then something funny happened.

We started hiring in the USA again.

And then we acquired other companies in other parts of the USA.

And then, I suddenly found myself leading teams in Bangalore, Palo Alto, North Carolina, and Pittsburgh.

At one point, I was on a project where everyone was remote, and I wondered why the heck I was coming into work.

From about 2004-2009, every meeting had someone dialing in remotely. Every project had a remote engineering team.

Then I moved to Zynga, and I swore I would never work with remote teams again, and yep, I was working with teams all over the world.

Then I arrived at VMware in 2015, where remote work had not been widely adopted across the company.

But then, from 2015 to 2019, my partner in crime and I decided that talent, not location, mattered. So we hired folks anywhere. The architect for our biggest technical achievement was on an island in Puget Sound, and the other was in Humboldt County.

The location didn’t matter. We would hire anyone anywhere.

After the Heptio acquisition, I remember talking to some VP, and he said – "well, it's going to be hard to meet because the Heptio folks aren't in PA." And I was like, "oh, if they were, that would suck because the guys they need to talk to are in Humboldt County and at a beach house in Washington State."

If I looked at my team, I had tech leads in Bangalore, Austin, Washington State, Sofia (Bulgaria), and Humboldt County.

We succeeded as a team because we didn’t care where you lived. And we figured out how to work together even if we didn’t have physical proximity.

And so over 17 years, I have been working with remote teams, and those teams have delivered staggering business value across three different companies and three different industries, including a hardware company.

The idea that remote work is worse than local work is a failure of the imagination of business leaders and management leaders.

But why?

Because here's what I learned: if X units of work need to get done, and individual Y wants to do them in location Z, then X units of work get done if you let Y do them from location Z. If you don't, the X units of work don't get done until you find an individual i in the desired location R.

So the loss of productivity is measured in the following way:

For every day that some engineer E is not working, x units of work per day (x/d) do not get done, or P(E) = x/d.

If D(i) is the number of days it takes to find the imaginary individual i, then the lost productivity is P(i) = (x/d) × D(i).
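To make the arithmetic concrete, here is a tiny worked instance; the numbers are invented for illustration, not data:

```python
# A worked instance of the lost-productivity formula above.
# The numbers are invented for illustration.
x_per_day = 5        # x/d: units of work engineer E delivers per day
days_to_find_i = 90  # D(i): days to find the imaginary individual i in location R

lost = x_per_day * days_to_find_i  # P(i) = (x/d) * D(i)
print(lost)  # 450 units of work that never got done while you searched
```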

If time to market matters, and days are precious, then the value of the lost days is huge.

So when I hear leaders tell me that agility matters and speed matters, and they are unwilling to let people work from where they want, I must admit I have a hard time believing their math. And, like my wife, who is a math teacher, I want to say – "show me your work."

But then I hear startups or small teams say – "but we're a small team."

And that got me thinking.

In 2003, when my team was only 8 engineers, we hired a team in Bangalore.

In 2010, when Zynga was growing, I couldn't get recruiting to give me the time of day. I had 5 engineers in San Francisco, and the Chief People Officer, Colleen McCreary, asked if I was willing to hire in Seattle. And so we grew that 5-person team very quickly, and they built the infrastructure that Zynga used over the next decade.

In 2015, when hiring was very hard at VMware, I had nobody in Seattle, and when a very senior engineer showed up there, and others were wondering how to make him productive, I said – yes.

Every single time, finding talent was the hard part. Dealing with remote locations was easy. And then, because I was willing to go to remote locations, finding more talent became easier.

Talent matters. Location doesn’t.

And that leads me to the following conclusion – if you can’t work with remote teams, then you might want to look very carefully at how your team works.


Filed Under: Architecturalist Papers

46 architecturalist papers: the hybrid cloud

June 8, 2021 by kostadis roussos

In 2013, after being pushed out of my role at Zynga, I was noodling on what to do next. I was thinking about working on hybrid cloud technologies, but I figured that, with everything Zynga had achieved, this was an area that was well-trodden and not one where a new startup was welcome.

Wrong.

So let me rewind the clock.

Here’s what I had learned at Zynga.

  1. The public cloud is costly, and a well-run private cloud can save you tons of cash when you are short on cash.
  2. The public cloud is a very efficient way to get something started.

In many ways, it's the basic tension between using a prototyping environment or language like Python versus a more statically typed language like C++.

And that creates a fundamental dilemma: how do you get to private cloud economics on a public cloud?

Let me start by saying the non-obvious: in 2013, running on our own cloud was 1/3 the cost of running on AWS, with everything priced in.

If Zynga had tried in 2013 when our business was imploding to get to a private cloud, we would have never done it. But in 2009, our prescient CTO Cadir Lee made the strategic bet that we needed to have our own cloud. And by 2012, we had made the transition, allowing us in 2013 to manage our cash very effectively and survive a very long downturn without bleeding cash.

It's important to realize that in 2011, I was in charge of what we would call DevOps / SRE / AppOps. And I loved AWS. And I was not interested in moving to zCloud. And then we launched CityVille. And the game was crashing, and we had to increase our server count by a factor of three. And when we begged for insight from Amazon, we learned that the problem was that the AWS team had switched to NICs that had fewer IO buffers, which forced us to get more x86 servers to handle the IO load. That decision ate into our margins decisively, consumed lots of engineering cycles, and convinced me that we had to get off AWS ASAP.

I figured that this knowledge was so obvious that somebody somewhere was working on it as we spoke. And so, I didn’t bother looking for employment in that area.

The other thing I learned at Zynga was about the mobile business and software stack.

I learned at Zynga that vendors like Apple and Google make a lot of money adding software layers on top of Arm instructions to make it hard to write x-platform code.

Games are weird. They are very heavy in terms of graphics and very light in terms of UI. And so, a game can benefit significantly from being written in an x-platform toolkit because most of the time, you are playing on the game board.

And so, I pursued this strategy of trying to enable x-platform development, even to the point where we created our own programming language, playscript.

But it turns out that the x-platform strategy was a losing strategy because if the UI wasn't native, the experience was marginally worse than some other game's. The liquidity of games was so high that it made more sense to use those native features.

And so, the x-platform game strategy wasn’t quite as successful as I would have liked.

But I also learned that whereas native integration matters a lot for the UX, deep native integration with the back-end systems matters far less.

Customers do not use the back-end systems. Engineers use them. And those engineers – especially the software variety – are surprisingly willing to rebuild or change things.

And a game could be easily transferred from AWS to an on-prem data center if you just organized your system correctly and were prescient.

So how did the system have to be organized?

From my mobile experience, I realized that at the end of the day – every time you used something other than an x86 server, you were paying for someone else’s retirement. And every time you used some service, you were taking a dependency on someone else’s business plan.

Many companies tried to build meta-management layers that spanned the cloud and were a layer above the cloud API. I viewed that as a mistake. The only sane strategy was to have the same x86 server running in the cloud as you had on-prem.

And the only way to achieve that was to use the same kind of hypervisor.

At Zynga, we deliberately chose to run the same kind of Xen hypervisor that AWS did for that reason.

When I left Zynga, I was determined to make things like zCloud easy to build. And through a series of convoluted decisions, I ended up at VMware.

Now, Kit Colbert talks a lot about multi-cloud in this post, which I urge you to read.

My personal spin is that ultimately having a common layer that you can reliably deploy anywhere you want is critical. And the companies that last are the ones that can move their workloads and take advantage of the multi-cloud.

In 2013, I had not realized how special the Zynga team was. The guy who built our data centers was Allan Leinwand (currently SVP at Slack). The guy who built our storage system layer, Vinay YS, became the chief architect at Flipkart. A key player on the infrastructure side of the house was Nick Tornow (VP of Engineering at Twitter).

The talent we had was staggering.

What VMware is doing is making the ability to run in multiple clouds something that can be done by small teams easily.

It's a journey and a long one, but one that I am excited about working on.

Filed Under: Architecturalist Papers

45 architecturalist papers: a great software architect is a shepherd

May 24, 2021 by kostadis roussos

Photo by Biegun Wschodni on Unsplash

A friend and colleague sent me a note today saying that his job now feels more like a shepherd's than anything else. He has to look at what people are doing and keep telling them "not good enough," with the hope that it gets good enough at some point. This contrasts with the model in his head of the software architect as gatekeeper.

Having worked with him for many years and seen his growth as a person and a software architect, I think he articulated what I have been noodling about for many years.

The traditional mental model of a software architect is what several open-source projects implement. Every change is ultimately approved by one or more contributors who judge every change for clarity and quality.

That model makes the architect the gatekeeper of every coding decision.

That model might work well, but it does not work for me.

My model is more that of a shepherd. I define in my head what is unacceptable. Unacceptable is anything I don't understand, and anything that will break the customer's deployment of the current product.

A breaking change need not break the current deployment of the existing product. It is always possible to transition to a new version of a system. A truly breaking change is – in my mind – a different product.

What about things I don’t understand? Understanding doesn’t mean agreement. And most importantly, it doesn’t mean code. It means the person sharing the technical data has given me enough details that I am confident I can understand what is being proposed and what it might manifest as, and I am okay with that.

To borrow from Gandalf, until I have understanding and believe this won’t break customers, “You shall not pass.”

Then the next set of questions is about confidence that what you are building is being built well. My basic strategy is to hire great engineers who build great software. I assume the great engineers are going to produce great code. The real tricky bit is to make sure that the engineers believe that what they are building is the right thing for the right reasons. For all of us, it's often easier to say "Yes" to the wrong answer than to communicate "No." Not every battle is worth fighting.

My view is that as a software architect, I am required to say, “No.” And that my job is to discover if the engineers think what they are building is correct. And if it is not, be the fulcrum and lever that they need to get the space and time to figure out the right answer.

The other point is that once we know the correct answer, we can then agree to do the wrong thing for time-to-market reasons. That is an explicit tradeoff, not a consequence of never even considering the correct answer. And the ability to choose to do the wrong thing allows flexibility and enables the right long-term thing to be done. Without that ability, I have seen standoffs between engineering management, product management, and architects. The architects don't trust the managers and the PMs to do the right thing, so they only propose the right long-term solution, pushing out timelines, etc.

Again it’s about shepherding people through a process, not telling them what to do, and not reviewing everything they produce.

Why am I okay with this?

Look, I have 10 fingers, and yet I do more check-ins than any person in the company, because I have a team of hundreds that does hundreds of check-ins per day, and they can operate independently, looking at the correct data. By leveraging their collective intelligence, we can go faster and innovate faster than with any other leadership model.

Is there a risk that things will go badly? Yes. But I prefer to deal with the failures and then understand how to mitigate them going forward. To innovate is to fail. To do new things is to fail. To encourage people is to fail. I would rather fail that way than any other way. Because when you innovate, do new things, and encourage individuals and teams, greatness happens.


Filed Under: Architecturalist Papers
