wrong tool

You are finite. Zathras is finite. This is wrong tool.


nutanixist 30: the limits of desired state and why imperative is better but harder

March 3, 2026 by kostadis roussos



OpenShift has adopted a radical proposal to move k8s beyond application orchestration into infrastructure orchestration.

Let me observe that k8s, as an orchestration layer, is a fabulous piece of technology, and it relies on a desired-state design pattern, where different controllers are asked to do something, and any errors are expected to be resolved out of band.

And my criticism is not of the desired state or of k8s, but of the assumption that it is the best or even desirable way to manage Infrastructure.

What is Infrastructure? Infrastructure should just work. When it fails, it should be obvious why and easy to correct.

For example, when I create a VM, I expect the VM to be created. If the VM creation requires updating the network infrastructure or changing the storage infrastructure, the operation completes with a specific error message indicating what failed, or with success.

With the desired state, the operation doesn’t complete; instead, the operator has to determine which part of the workflow is stalled and fix it until the next error occurs.
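To make the contrast concrete, here is a minimal Python sketch of the two styles. All names are hypothetical and illustrative, not a real Nutanix or Kubernetes API: one call completes with success or a specific error, the other merely records intent and leaves errors to surface out of band.

```python
# Hypothetical sketch of imperative vs desired-state creation.

class NetworkError(Exception):
    """Raised when a dependent infrastructure change fails."""

def create_vm_imperative(name, network_ok=True):
    # Imperative: the call completes with success or a specific error.
    if not network_ok:
        raise NetworkError(f"VM '{name}': attaching NIC failed (hypothetical error)")
    return {"name": name, "state": "RUNNING"}

def create_vm_desired_state(store, name):
    # Desired state: the call only records intent. Errors surface later,
    # out of band, as a stalled object the operator must go find and inspect.
    store[name] = {"spec": {"state": "RUNNING"}, "status": {"state": "PENDING"}}
    return store[name]  # "succeeds" even if reconciliation never completes
```

The imperative caller learns what failed at the call site; the desired-state caller only learns that intent was recorded.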

While at VMware, I assumed the desired state was the only answer. The reason is that the VMware SDDC is built from a series of stateful applications that share and overlap state. The reality is that without digging into the underlying stateful service, it was impractical to categorize an error completely. And because stateful services could be restored independently of each other, and there were no transactional services, errors could occur that required a human to intervene.

At Nutanix, I discovered, to my amusement, that the v3 APIs adopted the k8s style of desired-state management. And that, while using them, Nutanix engineers and customers found that the desired state got in the way of error propagation, making the system less usable.

And so Nutanix rejected them for imperative APIs and thus introduced the v4 APIs.

In short, Nutanix said that “Infrastructure either works as an operation or it doesn’t. And if it doesn’t, we should give you an immediate and clear error. We shouldn’t make it your problem to go debug the Infrastructure.”

To do that, you need a strictly consistent database that all infrastructure control planes use, a single entity model, and a stateless hypervisor and file system.

Without that, the OpenShift model is the only path forward.


Filed Under: nutanixist

the nutanixist 27: the UI Fallacy at the heart of the control plane paradox – we built a database for the operator’s brain.

November 12, 2025 by kostadis roussos

I recently read some material on VCF 9.0, and it discussed how one of the major improvements of VCF 9.0 was that it created a single UI and thus a single control plane.

Photo by Sasha Mk on Unsplash

As a control and management plane architect, I found those discussions and proposals infuriating. The idea that the problem with an extensive, complex system’s operation was just about providing more convenient dials and knobs struck me as absurd.

I could not figure out why PMs, GMs, and vendors pushed this approach until I met a set of customers.

To a software engineer, the control plane is the thing that reads from a database and updates the datapath. The part that updates the database, reads from it, and returns information to the user is the management plane.

But when I spoke to the product operators while I was an architect at VMware, customers said the control plane was the UI.

At first, I found that odd, but then upon deeper reflection, I realized that they were right from their point of view. They saw themselves as the control plane, controlling the system.

The database was in their heads, and they used the UI to configure the system.

That aha was profound because it highlights the foundational tension in IT and infrastructure: where is the boundary between the human control plane and the computer control plane?

My observation was that the more you can push to the computer, the more robust and reliable your infrastructure is. You can do more, react faster, and provide better reliability if the computer is in control.

Building a UI improves human productivity if you believe the gating productivity factor is the human. If you believe that the system is optimal and that the only path forward is to improve human productivity, improving the UI is the right answer.

I find the idea that we have the optimal infrastructure architecture absurd.

Saying to your business partner, "This is wrong!", having them ask, "So what do we do?", and answering, "I dunno," while demanding nothing be done is absurd. In the absence of any other option, you do the ridiculous thing. You optimize what you can. Fixing the UIs was the best answer, because the other answers weren't that much better.

For years, I didn’t know how to build the right control plane that eliminated UI workflows relying on a human to be the control plane. And like most things, the answer stared me in the face.

It’s so absurdly obvious, it hurts to say it: to build an automated control plane, the control plane must be able to control and configure an entity fully, AND when the entity cannot be controlled by the control plane, it must stop within a guaranteed, bounded time frame.

Because for non-Nutanix customers, such a control plane doesn’t exist, and manual steps are necessary to handle the AND clause, it is fair to say that a single UI is an essential step to improve productivity. But it is an incremental step. A tiny, incremental step.


Filed Under: Architecturalist Papers, nutanixist

the nutanixist 26: the unbounded step – how desired state systems create dual-database drift

November 10, 2025 by kostadis roussos

Photo by Bruno Figueiredo on Unsplash

After I wrote about single databases and control planes, and their importance, folks observed that OpenShift and kubevirt use a single database.

And what I realized was that there was a deeper point being made: the Nutanix system guarantees that the host is under the control system’s authority, or it will automatically stop running.

The complex challenge of ensuring that the host runs only what you want is the first step, but the next — making sure it runs only when you can control it — is even more fundamental.

If the host doesn’t stop running automatically, the VM is both running and not running for some period of time. During that time, there are effectively two databases.

Because manual intervention is necessary, a third database comes into play: the operations database that detects this condition, and the system that actually reboots the machine or tracks its reboot.

The Kubernetes desired state system allows for this manual intervention by essentially waiting for the state to converge.

Desired-state systems enable these manual procedures, but the drawback is that they permit an implicitly unbounded step. Another system must address that unbounded step.
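The unbounded step can be made concrete with a toy reconciler. This is illustrative only: a converge-forever loop versus a variant that surfaces an error within a known bound.

```python
# Sketch of the "implicitly unbounded step" in desired-state systems.

def reconcile_unbounded(converged, step):
    # If `converged` never becomes true, this loop never returns:
    # some other system (often a human) must notice the stall.
    while not converged():
        step()

def reconcile_bounded(converged, step, max_attempts=3):
    # Bounded variant: the caller gets a clear error within a known bound.
    for _ in range(max_attempts):
        if converged():
            return "converged"
        step()
    if converged():
        return "converged"
    raise TimeoutError("did not converge within bound")
```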

What makes this reboot more complicated is that, for stateful services, rebooting the host is expensive. The service may experience downtime or degraded performance. So when the reboot will happen, or whether it will, is unknowable. So the control plane that wants to take action on the VM or machine in this quantum state will have to stall.

Because rebooting a stateful service is not free, relying on the control plane connectivity to make the decision is fraught with operational challenges. Do you trust the control plane’s correctness more than the database or the OS that the control plane is running on? In other words, is the connectivity failure due to the control plane itself, or something else?

More specifically, is the problem in kubernetes? Or is the problem in the underlying host and VM? To reboot the host automatically, you have to believe that the control plane, in this case, kubernetes, is more reliable than any single host.

Brian Oki, a colleague of mine at VMware and the author of the original Viewstamped Replication work, made the point to me early on, and it took me a while to appreciate it: guaranteeing that the host is only running when it is connected to the control plane is a tough, but essential, property.


Filed Under: Architecturalist Papers, nutanixist

the nutanixist 25: why nutanix’s single database approach eliminates SDDC orchestration

November 8, 2025 by kostadis roussos

Over the past 14 years, across three companies, I have been trying to figure out how to build a deterministic SDDC. An SDDC that, when you reconfigured it, stayed reconfigured and didn’t require any human intervention.

I failed. I had to come to Nutanix to see what I was missing.

What’s the problem?

To control an SDDC, you must interact with a software control system.

And what is a control system?

Logically, every software control system has three elements: an API, a database, and the control software.

The API/UI/CLI updates the database, and then the control system reads the database state and updates the datapath.

But what if you have multiple control systems?

And that’s where I failed.

See, the original idea I had was that you could solve the problem by controlling the control systems via their APIs.

We call that orchestration.

But that didn’t work. Because

1) Each control system has its own database, meaning it can get out of sync with the orchestration layer.

2) When an operation needs to be performed by multiple control systems on the same entity, drift is inevitable because they have numerous databases

The first point is obvious: the controller’s API can reconfigure the control system, bypassing the orchestration layer. And thus break the orchestration layer’s model.

The second point is less obvious. If two controllers need to operate on the same entity and that entity appears in two different databases, unless you are using transactional updates, each control system is working on a similar but not identical entity.

And so what happens is that inevitably both (1) and (2) require a human to figure things out. Which means the orchestration fails, and now someone has to debug it.
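Point (2) can be shown in a few lines. This is an illustrative sketch with made-up entity fields: two controllers, each holding its own copy of the same entity, drift apart without transactional updates, while a single shared database cannot drift by construction.

```python
# Two-database world: each controller mutates only its own copy.
net_db = {"vm1": {"vlan": 100, "disks": 1}}
storage_db = {"vm1": {"vlan": 100, "disks": 1}}

net_db["vm1"]["vlan"] = 200       # network controller applies a VLAN change
storage_db["vm1"]["disks"] = 2    # storage controller attaches a disk
# Each copy is now missing the other's update; a human must reconcile them.
drifted = net_db["vm1"] != storage_db["vm1"]

# Single-database world: both controllers mutate one shared record.
shared_db = {"vm1": {"vlan": 100, "disks": 1}}
shared_db["vm1"]["vlan"] = 200
shared_db["vm1"]["disks"] = 2
consistent = shared_db["vm1"] == {"vlan": 200, "disks": 2}
```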

And maybe I stumbled on that, but what I completely missed is the database for the Hypervisor/OS and the server image.

See, the Hypervisor has a database, the local configuration, and the workloads it was running. It also has an image.

And if the image and the configuration are not controlled by the control plane, then, by definition, the Hypervisor can also be out of sync.

So what do you do?

Well, you create a stateless hypervisor that relies on the control plane to tell it what to do.

Which brings me back to non-Nutanix systems like OpenShift, VCF, and Proxmox. None of them controls the entire system. They control parts of the system because the Hypervisor has its own database—the configuration files and the list of processes it starts up on reboot.

So I wrote that statement about databases, and Vytautas Jankevičius was right to say I was wrong. The way I wrote it isn't accurate. For example, OpenShift has a single database that pushes configuration down. What I meant was something a lot more abstract: that there was a persistent universe where, depending on who you asked, you got a different answer.

For example, with OpenShift, if the machine doesn't reboot, the VM will keep running. During that time, depending on who you ask, you will get different answers. And it's that "different answers until human intervention" that creates a split-brain universe. To resolve that inconsistency, you need a third system (or an orchestration) and a human.

In my mind, if I can get two answers, and the only way to fix it is to have a human intervene, then there exist two databases that are only sometimes in sync.

My point was too abstract, and I phrased it — well, wrong. In my defense, I was trying to squeeze this into the limited text window of LinkedIn.com …

Nutanix took a very radical approach: What if there was precisely one database? If there is exactly one database, every control system has the same view of the entity. And therefore every control system sees the same updates in the same order. Nutanix maintains that property by having the hypervisor shut down if it can’t reach the control plane. In other words, there is no arbitrary period during which the hypervisor is running and the control plane can’t control it.

Obviously, the next challenge is scaling that system. And for that, I recommend The Nutanix Bible.


Filed Under: nutanixist

nutanixist 24: how nutanix made k8s persistent volume provisioning more reliable and available

October 29, 2025 by kostadis roussos

One of the core impedance mismatches between k8s control planes and compute control planes is how disks are attached and the constraints thereon.

Why does it matter?

Photo by John Barkiple on Unsplash

Whereas with VMs, adding a disk is a relatively rare day 2 operation, in a k8s environment, attaching a disk is part of restarting a pod that failed.

In a previous post, I wrote about how the hypervisor’s host control plane prevents adding a disk to a VM while the VM is being moved.

And how that fundamentally affects the availability of the application that runs in kubernetes.

Now I want to talk about another challenge.

To create a virtual disk via the CSI, you must interact with the infrastructure control plane.

Now the performance, availability, and location of the infrastructure control plane matter.

With Nutanix, you can configure the CSI system to communicate directly with the PE. When you do that, our CSI provider provisions a virtual disk, and the CSI interacts with the underlying PE control plane on the same host as the kubelet. What's important is that if the VM is running, then the PE control plane is accessible, because an endpoint exists on the same physical host.


If you do not use the Nutanix CSI in PE mode, the CSI provider must communicate with the PC. This can lead to issues where the kubelet is unable to provision a disk because it depends on an external system.

The VCF 9.0 product documentation includes an excellent illustration of this architecture.

This leads to an availability mismatch, which adds complexity. The external control plane must be more available than any host that creates a pod. The network must be designed to support that level of availability. While this is achievable, it introduces additional tradeoffs.
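The availability tradeoff reduces to a small predicate. This is an illustrative sketch, not the Nutanix CSI driver's logic: with a co-located (PE-style) endpoint, a running host can always reach its provisioner, while a remote (PC-style) endpoint is an independent failure domain.

```python
# Sketch: can the kubelet on a given host provision a disk?

def can_provision_disk(host_up, remote_endpoint_reachable, endpoint_is_local):
    if not host_up:
        return False  # no kubelet running, nothing to provision for
    if endpoint_is_local:
        return True   # PE mode: the endpoint lives on the same physical host
    return remote_endpoint_reachable  # PC mode: depends on an external system
```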

What I like about the Nutanix platform is the choice it offers. And depending on the tradeoffs that matter for you, you can make different choices.


Filed Under: nutanixist

nutanixist 23: overcoming kubernetes and vm storage limitations

October 29, 2025 by kostadis roussos

Virtualization offers workload isolation and separates infrastructure and application teams. This separation allows operations such as vMotion to proceed without coordination, enabling host maintenance and infrastructure rebalancing to proceed seamlessly.



One notable limitation of Kubernetes (k8s) and virtual machines (VMs) is the interplay between pod deployment and persistent volumes. Platform engineers want the ability to deploy pods and create storage on demand quickly. However, the virtual machine abstraction complicates this, making pod deployment more challenging and negatively affecting application availability.

For instance, when a virtual machine has a single virtual disk and needs to attach another, this operation is blocked during mobility tasks with hypervisor-attached storage.

Now, in a traditional VM environment, adding a virtual disk isn’t too big a deal. Adding another virtual disk is not a typical day 2 operation.

But in k8s, whenever I deploy a new pod to a VM and want persistent disks, I have to add another virtual disk.

So now, whenever the infrastructure admin wants to perform a rebalancing or maintenance, they must coordinate with the platform engineering team or the application team.

The whole point of virtualization is to provide isolation, yet because of this behavior, you lose it.

So coordinate!

Except coordination breaks down in one critical use case, "High Availability," where VMs are rebalanced both before and after a server failure. So when a server fails, a pod fails, and your VMs are being rebalanced, your pod restart can hang for an indeterminate amount of time. And if it hangs, then your application either runs in a degraded mode or doesn't run at all.

This limitation exists for all KVM-based hypervisors, to the extent I am aware, and for VMware hypervisor-attached storage.

Nutanix, however, offers another class of storage, called a “volume group,” that has been available for 5+ years and allows a guest to attach to a virtual disk via iSCSI.

Nutanix calls that a “guest attached” volume group.

There is a trade-off, of course, in using this iSCSI layer. The Nutanix CSI driver handles the details.

In a vSphere world, you could use iSCSI to an external storage array from the guest, which introduces another set of trade-offs. It also complicates the environment’s operations. vVols tried to make that better.

With Nutanix, the nice property of the volume group is that I can attach multiple virtual disks and apply data management policies to the volume group, such as snapshots and DR, so that as new disks are created, they inherit those policies.
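The inheritance property can be sketched as follows. Class and policy names here are hypothetical, not the Nutanix API: disks added to a volume group pick up the group's data-management policies at creation.

```python
# Sketch: new virtual disks inherit the volume group's policies.

class VolumeGroup:
    def __init__(self, policies):
        self.policies = set(policies)  # e.g. snapshot schedules, DR policies
        self.disks = []

    def add_disk(self, name):
        # A new virtual disk inherits the group's current policies.
        disk = {"name": name, "policies": set(self.policies)}
        self.disks.append(disk)
        return disk
```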

And so I get the simplicity and flexibility of virtual disks, without any of the day-2 headaches of hypervisor-attached storage.


Filed Under: nutanixist

nutanixist 22: the impact of system consistency in ahv vs esx and other systems

September 27, 2025 by kostadis roussos

AHV prioritizes system consistency over workloads, whereas ESX and every other OS prioritize workloads over system consistency.

Photo by Maria Teneva on Unsplash


If you examine the fundamental difference between AHV and ESX, once you set aside the features, APIs, and opinions, the most basic question is: “When is the host down?”

ESX asserts that as long as the kernel is running, the host remains up because a workload may be running or about to start. Even if the kernel is unreachable from the outside, ESX continues to run. The only person who can decide the host is down, therefore, is a human.

AHV, on the other hand, believes that once it is no longer part of the quorum, the host is down.
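The quorum rule is mechanical, which is the point: no human is in the loop. A minimal sketch, illustrative rather than AHV's actual implementation:

```python
# Sketch: a host declares itself down the moment it can no longer see a
# strict majority of the cluster.

def has_quorum(visible_peers, cluster_size):
    # The host counts itself plus the peers it can currently reach.
    return (visible_peers + 1) > cluster_size // 2

def host_state(visible_peers, cluster_size):
    return "UP" if has_quorum(visible_peers, cluster_size) else "DOWN"
```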

Both approaches have value, but they yield different outcomes.

With ESX, the human has complete control over deciding when to restart a host. Because only the human knows whether the host is running, each additional component must continue functioning until instructed otherwise and must keep operating even if other system parts are down.

It’s why, for example, with vSphere HA, even if the network is partitioned, all hosts will run workloads.

Until the human indicates that ESX is down, all system components should assume a workload is either running on the ESX host or may start running, so they must try to keep running as well.

The difficulty is that a malfunctioning piece of software can appear just like one that is very slow.

Therefore, each layer advances without knowing if another will do the same later, which can result in incorrect decisions.

A trivial way to prove this is with backup and restore. When you restore a system from a backup, to an outside observer, that's indistinguishable from a very slow system. The stale, restored system must now catch up with the current state of the world. To do that, it has to be able to read the current state, but there is no precise current state. So at some point, a human must be involved to resolve inconsistencies. It's why restoring VCF is so painful.

The benefit of this approach is that it allows surprisingly fast delivery of components, as long as integration and consistency are less critical than the speed of feature delivery for each one.

The downside is that when two systems need to agree on the system’s status, they cannot. Because only a human knows if the system is up, down, or slow, any software trying to coordinate between two components can only make an educated guess about what’s happening.

To handle this, you need to invest in more tools, monitoring, and observability. But it’s always a guess.

The alternative approach of AHV has the property that the software systems are aware when the host is down, since the computer makes the decision independently of whether workloads are running on the AHV host.

More importantly, any workload on that AHV host will not run until the cluster control plane reinstalls it on the host.

As a result, any layered system knows exactly when to stop.

Consequently, all layered systems are aware of each other’s state.

And all parts of the system agree on the state of the workloads.

The upside of this approach is that the system is correct, scales better, and is simpler to operate and develop against. The downside is that unless the quorum system is more reliable than a single kernel, your system is less reliable.

What AHV has done is make its clustered system as reliable as a single kernel. And that is an astonishing achievement.

Once that is achieved, and if overall system behavior is more important than any one single system, then the simplicity of the AHV approach allows for faster feature delivery, because integration stays simple.


Filed Under: ahv, nutanixist

the nutanixist 20: how to build an AZ using soft transactions, a clustered IO path, and a stateless hypervisor without a hyperscaler cloud network

September 21, 2025 by kostadis roussos

I’ve been pondering the problem of making infrastructure transactional for 20 years.  

The one paper I wrote (https://www.usenix.org/legacy/event/lisa07/tech/full_papers/holl/holl.pdf) is an early attempt to get desired-state systems to work.

You can read the paper, but the critical idea (and it’s an ancient one) was that you take all of the control plane code and put it in the central system. 

The problem with that approach (and why the product failed) was availability.

The thing we built had the nice property of simplicity of management. It had the unfortunate property of being less available than what it tried to replace. What do I mean? Our solution required a single centralized control plane. If that control plane failed, then snapshots, mirrors, and backups failed. Without our control plane, each NetApp Filer managed its own schedule and failed independently.

Storage administrators barfed all over it. They rejected the product and the architecture.

Then I went to Zynga. And there I took another stab at the problem of managing systems at scale. And there we built some pretty slick management software that allowed Zynga to scale to 100 million MAU for CityVille, on what was basically the flakiest infrastructure I have ever used. The critical insight I had at Zynga was that since transactional systems at scale didn't work with a centralized database, you needed to build something that relied on eventual consistency.

Then I came to VMware and decided to tackle the problem of deterministic infrastructure at scale again. That’s when I realized there wasn’t really a solution to my problem. 

Photo by Milad Fakurian on Unsplash

What was my problem:

I had several hundred distributed databases (one per cluster), and I wanted to manage particular semantics that didn’t quite fit into a cluster’s semantics. For example, networking spans clusters. 

And I failed to come up with an answer. 

What do I mean? The current system requires manual intervention to keep running. The new eventually consistent system also required manual intervention to keep running because it wasn’t deterministic.

So what was the win? Unclear. But there was a win around per-cluster state, and so we decided to solve that. Working with Brian Oki, who did most of the heavy lifting, we devised a plan to make forward progress. We decided to push the cluster state into the cluster.

We began working on an internal project called Bauhaus, despite not having a definitive answer on how to approach networking. Bauhaus was about moving some of the cluster state into the cluster using a distributed KV store to simplify recovery and improve resiliency. 

The critical insight I didn't have was the "AZ."

An AZ is one of those concepts that practitioners of distributed systems have spectacularly failed to define; it is among the most fluid terms in the field.

Ask 50 practitioners and you get 50 answers. 

And because of that, it’s too amorphous to build systems with. 

But there is a crucial insight about an AZ: 

An AZ is a control plane such that, when it fails, the hardware it manages becomes unusable, even if that hardware is powered on.

An AZ from the outside observer’s perspective is one thing. 

But the critical activity in cloud engineering is “how do I build an AZ so it appears to be one thing, but is actually built from many things.” 

The thing that’s not obvious to folks who don’t spend too much time puzzling this problem is how the network is built in the cloud.

If you examine the cloud, the critical aspect of their systems is a highly redundant, high-bandwidth inter- and intra-data-center network.

Every cloud has its own proprietary networking stack, which, when you interact with it (from the underlay, not the overlay), requires a significant amount of bridging magic. Those underlay networks do not have all of the semantics or properties of traditional IP networks.

It’s the existence of those networks that allows for the cloud to provide a transactional system behavior. 

So let me be precise: 

In the cloud, I can assert that if I can’t reach a node, the node is down. 

If I can’t reach the AZ, it’s down. 

And if a VM was created in AZ 1, it's either running in AZ 1 or not running in AZ 1. It cannot exist outside of AZ 1.

Without the cloud networks and the fact that every part of the system was engineered around this principle, building an AZ-like construct on premises was very difficult without extensive investment in network and hardware design. 

What these Nutanix guys did is figure out how to work around this using a custom data path and soft transactions. 

Rather than relying on the network connectivity to determine if a VM is running or not, they used the IO data path and a stateless OS. 

The IO data path guarantees that any hypervisor that boots cannot access any state that the clustered control plane doesn’t want it to access. 

The stateless OS allows the cluster control plane to program the OS to its new state trivially. 

The existence of a clustered IO path and a stateless hypervisor allows the cluster to control what state is being modified and which workloads are running. In effect, the clustered I/O path and stateless hypervisor enable the cluster as a whole to operate as a single entity.
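The fencing guarantee of the IO path can be sketched in a few lines. The names here are illustrative, not Nutanix's implementation: a booted hypervisor can only touch state the clustered IO path has granted it, so an unauthorized host is harmless even while powered on.

```python
# Sketch: the clustered IO path as the single gatekeeper for storage access.

class ClusteredIOPath:
    def __init__(self):
        self.grants = {}  # host_id -> set of volumes the host may access

    def grant(self, host_id, volume):
        self.grants.setdefault(host_id, set()).add(volume)

    def revoke(self, host_id):
        # e.g. when the cluster decides the host has failed or lost contact
        self.grants.pop(host_id, None)

    def read(self, host_id, volume):
        if volume not in self.grants.get(host_id, set()):
            raise PermissionError(f"host {host_id} is fenced off volume {volume}")
        return ("data", volume)
```

Because every IO flows through grants the cluster controls, revoking a host is equivalent to stopping its workloads, regardless of what the host itself believes.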

As I mentioned earlier, soft transactions and a distributed database are what enable this scalability.

In this incredibly long and complex journey, I was fortunate to work with some brilliant people, but a critical person was Dahlia Malkhi, who, when I hit a brick wall, made it possible for me to see the path around it. I call her out because she was a researcher, and we may have interacted on a technical topic 2 or 3 times, and each time was seminal.


Filed Under: nutanixist

the nutanixist 21: architecture is why Pure and nutanix could deliver a great solution in record time

September 15, 2025 by kostadis roussos

One of Pure Storage’s more remarkable achievements was its integration with vVOLs. They spent years integrating and making vVOLs work. And, without a doubt, a significant part of the reason this was easy was due to the work the Pure team did. 

Photo by Priscilla Du Preez 🇨🇦 on Unsplash

But this is why it was easy for Nutanix and Pure. 

vVols was VMware's answer to the question of how to enable storage vendors to provide VM data management integrated into VMware's policy-based management framework.

That’s a mouthful.

In 2008, NetApp introduced a product called SnapManager for Virtual Infrastructure, which revolutionized how people discussed storage integration with VMware. Instead of seeing storage as independent of compute, it was presented as an integrated operational workflow. The VI admin using SMVI could directly integrate with NetApp storage to take snapshots and provision storage. 

In 2011, Nutanix introduced HCI, which provided VM-level data management that bypassed the operational concerns of storage administrators by removing them from the equation entirely.  

In 2012, VMware introduced policy-based storage management, and in 2015 it shipped the first incarnation of vVOLs to enable policy-based management of storage.

What VMware aimed to do was enable the entire storage ecosystem to integrate with the vSphere control plane, providing the operational value of VM data management in a consistent, vendor-agnostic manner.

Effectively, the goal was to outflank competitors like Nutanix and NetApp by commoditizing VM data management, making vSphere the way you manage data and relegating storage vendors to the role of providers.

It was a good idea, and it was on the cusp of greatness, but for what I can only imagine were misguided, petty reasons, VMware canceled it.

Many of the challenges of vVOL were inherent to vSphere, making integration very difficult. 

vSphere doesn’t have a cluster control plane, and VMFS does not have a single control point for I/O; the VMFS IO path is in the kernel. 

So, what were vVOLs? Without getting too deep into the weeds, what VMware did for vSAN was add a new path in the core storage stack of vSphere. That layer was then integrated into the vSAN cluster control plane. That same interface was then externalized to the partners. 

And that was the problem. 

The storage partner was tasked with the complicated problem of building a clustered storage control plane. Why? Because vSphere, as I have explained elsewhere, doesn’t have a clustered control plane and allows independent hosts to make independent decisions that the control plane must react to. 

When vMotion occurred, the VASA provider was involved in the operation as it had to unmount a LUN from an ESXi host and then remount it on another host. 

But it was messier. Because vSphere cannot guarantee the number of hosts that will connect to a storage array or the number of LUNs that will be mapped, the VASA provider had to manage any limits itself.

And then, finally, due to VMFS constraints, the number of vVOLs that could be connected to any host was capped.

For Nutanix, these problems didn’t exist. Due to the clustered control plane, we could ensure that the number of LUNs connected to a storage controller remained within the limits agreed upon by Pure and Nutanix. Because our I/O path was in user space, we could mount and map every virtual disk on every host. And because of our clustered control plane, during a Live Migration (Nutanix’s vMotion), we could handle the re-routing of the I/Os without requiring the external storage provider to do the fencing for us.

Unlike vVOLs, which required the storage vendor to build a clustered control plane from the basic primitives of a per-node file system, with Nutanix the storage vendor integrates with an existing cluster control plane and operates on cluster-level semantics.

That is why our integration was fast. 

And more importantly, why our integration delivers more value and better availability than the dearly departed vVOLs. 


Filed Under: nutanixist, Storage

the architecturalist 63: nutanix was the correct answer

September 13, 2025 by kostadis roussos 1 Comment


In 2012, while at Zynga, I had a moment of clarity: the way we had thought about infrastructure up to that point was wrong, and our focus on making a single node more and more available was a dead end.

I wrote about this on Quora, and it was picked up by Forbes, which gave me 1 minute of fame.

And I wrote this:


NetApp’s engineering spent a lot of time worrying about hardware availability and making hardware appear to be much more resilient than it actually was.

And yet, these guys like Facebook, Twitter, and Google didn’t think that was important.

Which was mind-boggling. How else can you write software if the infrastructure isn’t perfect? “What were you people doing?” I thought.

So what drove me to find another job was that somehow, people were building meaningful applications that didn’t need component-level availability. Something was changing…

Which brings me to what was changing.

What was changing, and this only became obvious after I joined Zynga, was that the old model was dead.

In a world where you have thousands of servers and depend on services that change all of the time, the notion that the application can be provided the illusion of perfect availability is, well, foolish.

In fact, applications have to be architected to understand failures. Failures are now as important to software as thinking about CPU and Memory and Storage. Your application has to be aware of how things fail and respond to those failures intelligently.

I believe that the next generation of software systems will be built around how you reason about failure, just as the last was built around how you reason about CPU, memory, and storage.

For the last 13 years, I have been wondering what the correct answer is. One school of thought believed that the correct answer was to treat everything as a database transaction. What if we made infrastructure transactional?

As a result, numerous attempts were made to develop management applications that updated the model of the world in a database and attempted to force the real world to conform to that model. I even invented one and published a paper that described such a system.

And they kind of worked.

The general idea was that you had an API that updated a database, and then a set of controllers that would go and modify the world to conform to the database. And if they ever detected an inconsistency between the world and the database, they would go and correct the system to conform to the database.
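As a sketch of that idea, here is a minimal, hypothetical reconcile loop in Python. None of the names (`reconcile`, `controller_loop`, the VM states) come from a real system; they only illustrate the "converge the world to the database" pattern described above.

```python
# Hypothetical sketch of the desired-state pattern: a database holds the
# model of the world, and a controller nudges the world to match it.

def reconcile(desired: dict, actual: dict) -> list:
    """Compute the changes needed to make `actual` match `desired`."""
    changes = []
    for key, want in desired.items():
        if actual.get(key) != want:
            changes.append((key, want))
    return changes

def controller_loop(database: dict, world: dict, iterations: int = 3) -> dict:
    """Repeatedly correct any detected inconsistency between world and database."""
    for _ in range(iterations):
        for key, want in reconcile(database, world):
            world[key] = want  # "go and modify the world to conform"
        # A real controller would sleep, re-poll, and run forever.
    return world

db = {"vm1": "powered_on", "vm2": "powered_off"}
world = {"vm1": "powered_off"}
controller_loop(db, world)
# world now matches db -- until something outside the controller changes it,
# which is exactly the divergence problem discussed next.
```

The loop looks reassuring, but note that nothing stops an outside actor from mutating `world` between iterations, which is the crux of the argument that follows.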

And those systems failed to deliver on transactional infrastructure.

When you invoke an API, the database gets updated, and the world converges, but here’s the rub: the world can diverge. And you wouldn’t know.

Let me provide an example from vCenter, a product with which I am very familiar.

Let me be specific – you tell vCenter to power on a VM. vCenter updates its database, then communicates with ESXi, and the VM is powered on.

But is the VM powered on?

You don’t know, because a user can log into ESXi and power off the VM.

In effect, ESXi has its own database and API. And that API and database can be used to change the state of the system.

To make matters worse, if a network partition occurs, the VM may well be powered on, but vCenter cannot determine whether it is powered on or not.

Therefore, any piece of code written must account for three states: “Yes, No, and I don’t know.”

Now, if it’s only one client calling vCenter and doing one thing at a time, that’s manageable. However, if you are working with workflows that depend on the VM being powered on, for example, powering on the VM, moving it, and so on, then for every step, you must account for the possibilities of ‘yes’, ‘no’, and ‘maybe’. And handling all the different kinds of ‘maybe’ is what makes writing the control plane tricky.
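To make the three-outcome problem concrete, here is an illustrative Python sketch. The function names and the partition flag are hypothetical, not vCenter APIs; the point is only that every step must branch three ways, and the ‘maybe’ branch has no clean answer.

```python
# Illustrative sketch: every step against a non-transactional control plane
# must handle three outcomes, not two. All names are hypothetical.
from enum import Enum

class Outcome(Enum):
    YES = "yes"
    NO = "no"
    UNKNOWN = "unknown"  # e.g. a network partition hid the real state

def power_on_vm(partitioned: bool) -> Outcome:
    """Pretend control-plane call: definitive unless a partition occurs."""
    if partitioned:
        return Outcome.UNKNOWN  # the manager cannot tell if the host acted
    return Outcome.YES

def next_step(result: Outcome) -> str:
    """What the workflow must do after each step."""
    if result is Outcome.YES:
        return "proceed"
    if result is Outcome.NO:
        return "report error"
    # The painful branch: re-verify, retry, or escalate to a human.
    return "reconcile state first"

print(next_step(power_on_vm(partitioned=False)))  # proceed
print(next_step(power_on_vm(partitioned=True)))   # reconcile state first
```

Multiply that third branch across every step of a multi-step workflow and the combinatorics of ‘maybe’ handling quickly dominate the control-plane code.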

And when I was at Zynga, I would like to believe I had identified this problem, but I had no idea how to solve it.

For years, I thought the only path forward was the desired state. In short, you express an intent, and the system converges to that intent. But the problem with that model is that expressing things as a sequence of operations is more convenient than simply describing intent. The problem with intent is that if you need to express two different contingent intents, how do you do that? And yes, you could, but pretty soon, you have one massive intent that describes the entire universe.

And so the approach, although promising, never materialized.

And then I ended up at Nutanix. I have also noted that Nutanix has a distributed database at its core, which is part of the puzzle. However, as I mentioned earlier, it’s only a part.

There were four more.

The second was the ability to have a parent database with multiple child databases, with a guarantee that the parent database would always receive updates in the correct write order.

The third was soft transactions. This is critical because the system must perform reliably and be able to tolerate failures.

But the pieces of the puzzle that eluded me were two magical pieces of technology: the first was AHV, a stateless operating system, and the other was Stargate, a clustered I/O path.

What Stargate guarantees is that the cluster knows which disk is being connected to, and it provides a point of control for the disk. It is not possible to change the state without Stargate knowing. And so, for a cluster, Stargate can prevent anyone from accessing disks and assert who is accessing them.

The second is AHV, which, when it reboots, doesn’t remember what it was doing before it rebooted. Therefore, AHV cannot run any workload without the cluster knowing what the workload is.

When you combine all five pieces of technology, you have the answer to the question I posed.

The infrastructure, by design of the datapath and system components, has only two answers to any operation: “Yes, I completed” and “No, I didn’t.” And either is definitive. There is no other possible answer to the question.

Once you have such a system, it becomes possible to implement two services, one controlling the OS and one controlling the datapath, that can assume the behavior of the infrastructure is binary.

And once you do that, you can build a system of APIs that always return yes or no to any question.

This then allows you to combine APIs into workflows that can be trivially designed. What do I mean?

Suppose I have a workflow that must call 5 APIs. We model this as a single workflow comprising five tasks.

In transactional infrastructure, after each API returns a response, I know what the environment must be. And therefore, if it says “Yes”, I can advance to the next step knowing that it is “Yes.” In other words, if Task 1 is completed successfully, I can easily advance to Task 2.
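The composition argument can be sketched in a few lines of Python. This is not the Nutanix API; the task names and the tuple result are made up to show why definitive yes/no answers make workflows trivial to chain.

```python
# Sketch of why definitive binary answers make workflows easy to compose.
# Each task returns True (done) or False (failed), never "maybe".

def run_workflow(tasks) -> tuple:
    """Run tasks in order; stop at the first definitive failure.
    Because every answer is definitive, the workflow never has to wonder
    whether the environment silently changed between steps."""
    for i, task in enumerate(tasks, start=1):
        if not task():
            return ("failed", i)  # we know exactly which step failed
    return ("succeeded", len(tasks))

tasks = [
    lambda: True,   # Task 1: power on VM
    lambda: True,   # Task 2: attach network
    lambda: False,  # Task 3: attach disk (fails definitively)
]
print(run_workflow(tasks))  # ('failed', 3)
```

Contrast this with the tri-state world, where after every task the workflow would also have to handle an “I don’t know” result before it could advance.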

So let’s consider the alternative. Task 1 is to power on a VM. Task 2 is to attach a network to the VM. If Task 1 declares success, Task 2 might fail because someone behind the scenes shut down the VM. Now, Task 2 must handle an error. But what does this mean for the workflow? Did the workflow fail? Well, it didn’t. What happened was that the environment changed in a way that the workflow was unaware of.

So let’s look at the workflow state –
Task 1 – power on VM – success
Task 2 – Attach Network – Failure because the VM is not powered on.

This is a contradiction. How could Task 1 succeed and Task 2 fail? It is a contradiction because the workflow didn’t account for another system changing the state of the VM behind the scenes. And because the change occurred outside of the system, the program interacting with the APIs cannot determine why it has hit a contradiction.

To understand what happened, you need to build yet another system that monitors both the workflow and the system that can be changed outside the workflow’s control.

Intent-based systems attempted to work around this by retrying, but, as I mentioned, they had their own issues, the most significant being an infinite retry loop.

Ultimately, the only solution was to make it impossible for the system to be changed outside the control of the control plane.

And that’s what the folks at Nutanix did.


Filed Under: Architecturalist Papers, nutanixist
