
Virtualization offers workload isolation and separates the concerns of infrastructure and application teams. This separation allows operations such as vMotion to proceed without coordination, enabling host maintenance and infrastructure rebalancing to happen seamlessly.
One notable limitation of running Kubernetes (k8s) on virtual machines (VMs) is the interplay between pod deployment and persistent volumes. Platform engineers want to deploy pods and create storage on demand, quickly. The virtual machine abstraction, however, complicates this, making pod deployment more challenging and hurting application availability.
For instance, with hypervisor-attached storage, a virtual machine that has a single virtual disk and needs to attach another is blocked from doing so while a mobility operation, such as a live migration, is in flight.
Now, in a traditional VM environment, this isn’t too big a deal, mostly because adding another virtual disk is not a typical day-2 operation.
But in k8s, whenever I deploy a new pod to a VM and that pod needs a persistent volume, another virtual disk has to be attached to the VM.
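Concretely, it is the pod’s claim that drives the disk attach. A minimal sketch of the pattern (the names, storage class, and image below are illustrative, not from any particular environment):

```yaml
# Hypothetical PVC: dynamically provisioning this claim is what causes
# the CSI driver to create a new virtual disk and attach it to the VM
# (the k8s node) that ends up running the pod.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data                           # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: hypervisor-attached    # assumed storage class name
---
# Pod that consumes the claim; scheduling it triggers the attach.
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: nginx                         # placeholder image
      volumeMounts:
        - mountPath: /data
          name: data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data
```

Every new pod of this shape means another attach operation against the VM, which is exactly what collides with mobility operations.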
So now, whenever the infrastructure admin wants to perform a rebalancing or maintenance, they must coordinate with the platform engineering team or the application team.
The whole point of virtualization is to provide isolation, yet because of this behavior, you lose it.
So co-ordinate!
Except coordination breaks down in one critical use case: high availability, where VMs are rebalanced both before and after a server failure. So when a server fails, a pod fails with it, and if your VMs are being rebalanced at that moment, the pod restart can hang for an indeterminate amount of time because the disk attach is blocked. And while it hangs, your application either runs in a degraded mode or doesn’t run at all.
As far as I am aware, this limitation exists for all KVM-based hypervisors, and for VMware hypervisor-attached storage.
Nutanix, however, offers another class of storage, called a “volume group,” that has been available for 5+ years and allows a guest to attach virtual disks via iSCSI.
Nutanix calls that a “guest attached” volume group.
There is a trade-off, of course, in adding this iSCSI layer, but the Nutanix CSI driver handles the details.
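For illustration, a StorageClass backed by guest-attached Nutanix volume groups might look like the sketch below. The provisioner and parameter names reflect my understanding of the Nutanix CSI driver; treat the specifics as assumptions and verify them against the driver documentation for the version you run:

```yaml
# Sketch of a StorageClass for guest-attached (iSCSI) volume groups.
# Provisioner and parameter names are assumptions, not verified config.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nutanix-volume
provisioner: csi.nutanix.com
parameters:
  storageType: NutanixVolumes        # guest-attached volume group via iSCSI
  csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Delete
allowVolumeExpansion: true
```

Because the attach happens in the guest over iSCSI rather than through the hypervisor, it does not collide with VM mobility operations.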
In a vSphere world, you could use iSCSI to an external storage array from the guest, which introduces another set of trade-offs. It also complicates the environment’s operations. vVols tried to make that better.
With Nutanix, the nice property of the volume group is that I can attach multiple virtual disks and apply data management policies to the volume group, such as snapshots and DR, so that as new disks are created, they inherit those policies.
And so I get the simplicity and flexibility of virtual disks, without any of the day-2 headaches of hypervisor-attached storage.
