212 changes: 212 additions & 0 deletions website/blog/2026-01-30-kubevirt-on-aks/index.md
@@ -0,0 +1,212 @@
---
title: "Deploying KubeVirt on AKS"
date: "2026-01-30"
description: "Learn how to deploy KubeVirt on Azure Kubernetes Service (AKS) to run and manage virtual machines alongside containerized applications using Kubernetes orchestration"
authors: ["jack-jiang", "harshit-gupta"]
tags: ["kubevirt", "general", "operations"]
---
> **Copilot AI (Jan 23, 2026), on lines +1 to +7:** The blog post date is in the future (2026-01-30), but the front matter is missing `draft: true`. According to the repository guidelines, Docusaurus publishes all posts immediately when deployed, regardless of date. To prevent premature publishing during PR review, add `draft: true` to the front matter. This should be removed before the intended publication date.

Kubernetes adoption continues to grow, but not every workload can be redesigned for containers right away. Many organizations still depend on virtual machine (VM) based deployments for technical, regulatory, or operational reasons.

[KubeVirt](https://github.com/kubevirt/kubevirt) is a [Cloud Native Computing Foundation (CNCF) incubating](https://www.cncf.io/projects/kubevirt/) open-source project that allows users to run, deploy, and manage VMs in their Kubernetes clusters.

In this post, you will learn how KubeVirt lets you run, deploy, and manage VMs on Kubernetes, alongside your containerized applications, using Kubernetes as the orchestrator.
<!-- truncate -->

## Why KubeVirt matters

KubeVirt can help organizations that are in various stages of their Kubernetes journey manage their infrastructure more effectively. It allows customers to manage legacy VM workloads alongside containerized applications using the same Kubernetes API.

VMs deployed on KubeVirt behave much like traditionally deployed VMs, but they run alongside containerized applications and can be managed with standard Kubernetes tooling. Capabilities like scheduling that users know and love on Kubernetes also apply to these VMs.

Management of these otherwise disparate deployments can be simplified and unified. This unified management can help teams avoid the sprawl that would otherwise come with managing multiple platforms.

The ability to mix and match workloads in a "hybrid" setting also lets organizations with complex, legacy VM-based applications transition to containers incrementally, or keep mission-critical legacy applications as they are.

## Deploying KubeVirt

Today, users can self-deploy KubeVirt on AKS clusters using VM SKUs that support nested virtualization.

### Creating an AKS cluster

:::note
When you select a `--node-vm-size`, use a VM SKU that supports nested virtualization. You can confirm support in the "Feature support" section of the VM size's Microsoft Learn page, such as the one for [Standard_D4s_v5](https://learn.microsoft.com/azure/virtual-machines/sizes/general-purpose/dv5-series?tabs=sizebasic#feature-support).

![Screenshot of Azure VM SKU page showing nested virtualization support in the Feature support section](nested-virt-example.png)
:::
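As an extra sanity check (a general Linux technique, not an official AKS procedure), you can count the `vmx`/`svm` CPU flags on a host; a nonzero count means hardware virtualization is exposed:

```bash
# vmx = Intel VT-x, svm = AMD-V; a count of 0 means nested
# virtualization is not exposed on this host.
grep -E -c 'vmx|svm' /proc/cpuinfo || true
```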

1. Create your AKS cluster.

```bash
az aks create --resource-group <resource-group> --name <cluster-name> --node-vm-size Standard_D4s_v5
```

> **Review comment (Member):** Probably not the ideal way, we likely want to separate the system pool from a user nodepool that will run VMs
>
> **Reply (jakjang, Contributor/Author, Jan 23, 2026):** Do the reasons to run it like this (e.g., keeping a small/static "system" node pool for components like CoreDNS or the metrics server, and scaling separate "user" node pools to accommodate VM scale-up) seem worth covering in this demo? I figured I'd ideally like users to get set up with as little complexity as possible. Alternatively, we could add a "best practices" section at the end for real deployments that covers this type of setup. Otherwise, if it's just the system vs. user split, I understand that "normal" deployments (without any specific tolerations applied) should avoid the system pools.
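As a sketch of the reviewer's suggestion above (pool names, sizes, and counts are illustrative assumptions, not recommendations from the post), you could keep a small system pool for cluster components and add a `User`-mode pool of nested-virt-capable nodes for VMs. The commands are echoed so the sketch stays inert; remove `echo` to run them:

```bash
RG="<resource-group>"; CLUSTER="<cluster-name>"
# Small system pool for cluster components (CoreDNS, metrics server, ...)
echo az aks create --resource-group "$RG" --name "$CLUSTER" \
  --node-count 2 --node-vm-size Standard_D2s_v5
# Separate user pool of nested-virt-capable nodes for KubeVirt VMs
echo az aks nodepool add --resource-group "$RG" --cluster-name "$CLUSTER" \
  --name vmpool --mode User --node-vm-size Standard_D4s_v5
```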

2. After your cluster is up and running, get the access credentials for the cluster.

```bash
az aks get-credentials --resource-group <resource-group> --name <cluster-name>
```

### Installing KubeVirt

1. Install the KubeVirt operator.

```bash
# Get the latest release
export RELEASE=$(curl https://storage.googleapis.com/kubevirt-prow/release/kubevirt/kubevirt/stable.txt)

# Deploy the KubeVirt operator
kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-operator.yaml
```

1. Install the KubeVirt custom resource.

```bash
# Requires the yq CLI to be installed
curl -L https://github.com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-cr.yaml \
  | yq '.spec.infra.nodePlacement={}' \
  | kubectl apply -f -
```

> **Copilot AI (Jan 23, 2026), on lines +57 to +68:** The installation commands fetch and apply KubeVirt manifests directly from third-party URLs (`stable.txt`, `kubevirt-operator.yaml`, and `kubevirt-cr.yaml`) without pinning to an immutable version or verifying integrity. An attacker who compromises the KubeVirt release pipeline or those hosting locations could change these artifacts and gain cluster-level control when users run `kubectl apply` as written. To reduce this supply-chain risk, pin installs to a specific trusted release (or digest) and verify integrity instead of relying on the mutable stable pointer and unauthenticated remote YAML.

Notice the empty `nodePlacement: {}` field. By default, KubeVirt sets node affinity so that its control-plane components run on control-plane nodes. Because AKS control-plane nodes are fully managed by Azure and unavailable for scheduling workloads, clearing `nodePlacement` avoids scheduling failures.
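One way to address the review feedback about the mutable `stable.txt` pointer is to pin `RELEASE` to an explicit, vetted version (the version below is only an illustrative assumption, not a recommendation):

```bash
# Pin an explicit KubeVirt release instead of resolving stable.txt.
RELEASE="v1.2.0"  # example only; choose a version you have vetted
echo "https://github.com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-operator.yaml"
```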

### Confirm the KubeVirt pods are up and running on the cluster

Once installation completes, you can quickly check that all the KubeVirt components are up and running in your cluster:

```bash
kubectl get pods -n kubevirt -o wide
```

You should see something like this:

```bash
NAME                               READY   STATUS    RESTARTS   AGE     IP             NODE                                NOMINATED NODE   READINESS GATES
virt-api-7f7d56bbc5-s9nr4          1/1     Running   0          4m10s   10.244.0.174   aks-nodepool1-26901818-vmss000000   <none>           <none>
virt-controller-7c5744f574-56dd5   1/1     Running   0          3m39s   10.244.0.204   aks-nodepool1-26901818-vmss000000   <none>           <none>
virt-controller-7c5744f574-ftz6z   1/1     Running   0          3m39s   10.244.0.120   aks-nodepool1-26901818-vmss000000   <none>           <none>
virt-handler-dlkxf                 1/1     Running   0          3m39s   10.244.0.52    aks-nodepool1-26901818-vmss000000   <none>           <none>
virt-operator-7c8bdfb574-54cs6     1/1     Running   0          9m38s   10.244.0.87    aks-nodepool1-26901818-vmss000000   <none>           <none>
virt-operator-7c8bdfb574-wzdxt     1/1     Running   0          9m38s   10.244.0.153   aks-nodepool1-26901818-vmss000000   <none>           <none>
```

> **Review comment (Contributor), on lines +85 to +90:** continuing on Jorge's comment above, are these running in the user pool or system pool
>
> **Reply (jakjang, Contributor/Author):** In the demo, they're all running in the user node pool. There shouldn't be any modifications in the deployment that would otherwise target these pods to run in the system pool.

### Creating VirtualMachineInstance resources in KubeVirt

With KubeVirt installed on your cluster, you can now create your VirtualMachineInstance (VMI) resources.

1. Create your VMI. Save the following YAML, which will create a VMI based on Fedora OS, as `vmi-fedora.yaml`.

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  labels:
    special: vmi-fedora
  name: vmi-fedora
spec:
  domain:
    devices:
      disks:
        - disk:
            bus: virtio
          name: containerdisk
        - disk:
            bus: virtio
          name: cloudinitdisk
      interfaces:
        - masquerade: {}
          name: default
      rng: {}
    memory:
      guest: 1024M
    resources: {}
  networks:
    - name: default
      pod: {}
  terminationGracePeriodSeconds: 0
  volumes:
    - containerDisk:
        image: quay.io/kubevirt/fedora-with-test-tooling-container-disk:devel
      name: containerdisk
    - cloudInitNoCloud:
        userData: |-
          #cloud-config
          password: fedora
          chpasswd: { expire: False }
      name: cloudinitdisk
```

> **Copilot AI (Jan 23, 2026), on lines +129 to +131:** The example VMI uses a third-party container image `quay.io/kubevirt/fedora-with-test-tooling-container-disk:devel`, which is a mutable development tag. If that image tag is ever replaced with a malicious build (e.g., registry compromise), users following this guide would run untrusted code in their clusters. Prefer referencing a specific, trusted version or image digest rather than a mutable `:devel` tag to limit supply chain and integrity risks.

1. Deploy the VMI in your cluster.

```bash
kubectl apply -f vmi-fedora.yaml
```

If successful, you should see an output similar to `virtualmachineinstance.kubevirt.io/vmi-fedora created`.

### Check out the created VMI

1. Verify that the VMI was created and is running with `kubectl get vmi`. You should see output similar to:

```bash
NAME         AGE   PHASE     IP             NODENAME                            READY
vmi-fedora   85s   Running   10.244.0.213   aks-nodepool1-26901818-vmss000000   True
```

1. Connect to the newly created VMI and inspect it.

Before you use the `virtctl` command-line tool, install it on your workstation. Install the [`krew` plugin manager](https://krew.sigs.k8s.io/docs/user-guide/setup/install/) if you don't have it, then run:

```bash
kubectl krew install virt
kubectl virt console vmi-fedora
```

When prompted for credentials, the default username and password are `fedora`/`fedora`.

```bash
vmi-fedora login: fedora
Password:
```

Once logged in, run `cat /etc/os-release` to display the OS details.

```bash
[fedora@vmi-fedora ~]$ cat /etc/os-release
NAME=Fedora
VERSION="32 (Cloud Edition)"
ID=fedora
VERSION_ID=32
VERSION_CODENAME=""
PLATFORM_ID="platform:f32"
PRETTY_NAME="Fedora 32 (Cloud Edition)"
ANSI_COLOR="0;34"
LOGO=fedora-logo-icon
CPE_NAME="cpe:/o:fedoraproject:fedora:32"
HOME_URL="https://fedoraproject.org/"
DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f32/system-administrators-guide/"
SUPPORT_URL="https://fedoraproject.org/wiki/Communicating_and_getting_help"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Fedora"
REDHAT_BUGZILLA_PRODUCT_VERSION=32
REDHAT_SUPPORT_PRODUCT="Fedora"
REDHAT_SUPPORT_PRODUCT_VERSION=32
PRIVACY_POLICY_URL="https://fedoraproject.org/wiki/Legal:PrivacyPolicy"
VARIANT="Cloud Edition"
VARIANT_ID=cloud
```
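As a side note on the `virtctl` install step above (a sketch; the release version and asset naming are assumptions based on KubeVirt's GitHub releases page), the standalone binary can also be downloaded directly instead of via krew:

```bash
# Build the download URL for a standalone virtctl binary.
RELEASE="v1.2.0"    # example version, not a recommendation
ARCH="linux-amd64"  # adjust for your workstation
URL="https://github.com/kubevirt/kubevirt/releases/download/${RELEASE}/virtctl-${RELEASE}-${ARCH}"
echo "$URL"
# curl -L -o virtctl "$URL" && chmod +x virtctl && ./virtctl console vmi-fedora
```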

## Converting your VMs

At this point, you should have KubeVirt up and running in your AKS cluster and a VMI deployed. KubeVirt can help with a [plethora of scenarios](https://kubevirt.io/) that operational teams may run into. Migrating legacy VMs to KubeVirt can be an involved process, however. [Doing it manually](https://www.spectrocloud.com/blog/how-to-migrate-your-vms-to-kubevirt-with-forklift) involves steps like converting the VM's disk, persisting the disk in the cluster, and creating a VM template.

Tools like [Forklift](https://github.com/kubev2v/forklift) can automate some of the complexity involved with the migration. Forklift allows VMs to be migrated at scale to KubeVirt. The migration can be done by installing Forklift custom resources and setting up their respective configs in the target cluster. Some great walkthroughs of VM migration can be found in these videos [detailing how Forklift helps deliver a better UX when importing VMs to KubeVirt](https://www.youtube.com/watch?v=S7hVcv2Fu6I) and [breaking down everything from the architecture to a demo of Forklift 2.0](https://www.youtube.com/watch?v=-w4Afj5-0_g).

## Share your feedback

If you're using KubeVirt on AKS or are interested in trying it, we'd love to hear from you! Your feedback will help the AKS team plan how to best support these types of workloads on our platform. Share your thoughts in our [GitHub Issue](https://github.com/Azure/AKS/issues/5445).

## Resources

- [What is KubeVirt?](https://www.redhat.com/topics/virtualization/what-is-kubevirt)
- [KubeVirt user guides](https://kubevirt.io/user-guide/)
*(Binary file added: `nested-virt-example.png`; image cannot be displayed.)*
9 changes: 9 additions & 0 deletions website/blog/authors.yml
@@ -418,3 +418,12 @@ thilo-fromm:
socials:
linkedin: thilo-fromm-b6832a56
github: t-lo

harshit-gupta:
name: Harshit Gupta
title: Senior Software Engineer at Microsoft
url: https://www.linkedin.com/in/hagupta/
socials:
linkedin: hagupta
github: harshitgupta1337

7 changes: 6 additions & 1 deletion website/blog/tags.yml
@@ -187,10 +187,15 @@ kube-fleet:
permalink: /kube-fleet
description: Using Kube-Fleet for multi-cluster application deployment and management with AKS.

kubevirt:
label: KubeVirt
permalink: /kubevirt
description: Using KubeVirt to run VMs with Kubernetes and AKS.

kueue:
label: Kueue
permalink: /kueue
-  description: Kueue workload queueing and scheduling for batch AI/ML jobs on AK
+  description: Kueue workload queueing and scheduling for batch AI/ML jobs on AKS.

kro:
label: KRO