
nutanixist 23: overcoming kubernetes and vm storage limitations

October 29, 2025 by kostadis roussos

Virtualization offers workload isolation and separates infrastructure teams from application teams. This separation allows operations such as vMotion to proceed without coordination, so host maintenance and infrastructure rebalancing happen seamlessly.

One notable limitation of Kubernetes (k8s) on virtual machines (VMs) is the interplay between pod deployment and persistent volumes. Platform engineers want to deploy pods quickly and provision storage on demand. However, the virtual machine abstraction complicates this, making pod deployment more challenging and negatively affecting application availability.

For instance, when a virtual machine with a single hypervisor-attached virtual disk needs to attach another, that operation is blocked while a mobility task, such as a live migration, is in flight.

Now, in a traditional VM environment, this isn't a big deal: adding another virtual disk is not a typical day-2 operation.

But in k8s, whenever I deploy a new pod to a VM and it needs persistent storage, another virtual disk has to be added to that VM.
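Concretely, each pod that needs persistent storage declares a PersistentVolumeClaim, and with a hypervisor-attached CSI driver each bound claim typically materializes as a new virtual disk hot-added to the node VM. A minimal sketch (resource names are hypothetical; the resource shapes are standard Kubernetes):

```yaml
# Every claim like this one becomes another virtual disk on the node VM
# when the storage class is backed by hypervisor-attached storage.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data            # hypothetical name
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app                 # hypothetical name
spec:
  containers:
    - name: app
      image: nginx          # placeholder image
      volumeMounts:
        - name: data
          mountPath: /var/lib/app
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data
```

Deploy ten stateful pods to the same node and you have performed ten disk-attach operations on that VM, each of which can collide with a mobility task.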

So now, whenever the infrastructure admin wants to perform a rebalancing or maintenance, they must coordinate with the platform engineering team or the application team.

The whole point of virtualization is to provide isolation, yet because of this behavior, you lose it.

So coordinate!

Except coordination breaks down in one critical use case: high availability, where VMs are rebalanced both before and after a server failure. When a server fails, pods fail with it, and while the surviving VMs are being rebalanced, your pod restart can hang for an indeterminate amount of time. And if it hangs, your application either runs in a degraded mode or doesn't run at all.

As far as I am aware, this limitation exists for all KVM-based hypervisors, and for VMware with hypervisor-attached storage.

Nutanix, however, offers another class of storage, called a "volume group," that has been available for 5+ years and allows a guest to attach a virtual disk via iSCSI.

Nutanix calls that a “guest attached” volume group.

There is a trade-off, of course, in introducing this iSCSI layer, but the Nutanix CSI driver handles the details.
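To a platform engineer, this usually surfaces as a StorageClass that routes provisioning through the Nutanix CSI driver's volume-group path instead of hypervisor-attached disks. A hedged sketch; the provisioner and parameter names below are from my understanding of the driver and should be verified against the current Nutanix CSI documentation:

```yaml
# Sketch only: verify provisioner and parameters against the
# Nutanix CSI driver documentation for your release.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nutanix-volume-group     # hypothetical name
provisioner: csi.nutanix.com     # Nutanix CSI driver
parameters:
  storageType: NutanixVolumes    # iSCSI-backed volume groups
reclaimPolicy: Delete
allowVolumeExpansion: true
```

With a class like this, PVC creation and attachment go through iSCSI inside the guest, so they no longer contend with the hypervisor's disk-attach control path during mobility tasks.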

In a vSphere world, you could run iSCSI from the guest to an external storage array, which introduces another set of trade-offs and complicates the environment's operations. vVols tried to make that better.

With Nutanix, the nice property of the volume group is that I can attach multiple virtual disks to it and apply data management policies, such as snapshots and DR, at the volume-group level, so that as new disks are created, they inherit those policies.

And so I get the simplicity and flexibility of virtual disks, without any of the day-2 headaches of hypervisor-attached storage.
