---
title: "Deploying KubeVirt on AKS"
date: "2026-01-30"
description: "Learn how to deploy KubeVirt on Azure Kubernetes Service (AKS) to run and manage virtual machines alongside containerized applications using Kubernetes orchestration"
authors: ["jack-jiang", "harshit-gupta"]
tags: ["kubevirt", "general", "operations"]
---

Kubernetes adoption continues to grow, but not every workload can be redesigned for containers right away. Many organizations still depend on virtual machine (VM) based deployments for technical, regulatory, or operational reasons.

[KubeVirt](https://github.com/kubevirt/kubevirt) is a [Cloud Native Computing Foundation (CNCF) incubating](https://www.cncf.io/projects/kubevirt/) open-source project that allows users to run, deploy, and manage VMs in their Kubernetes clusters.

In this post, you will learn how KubeVirt lets you run, deploy, and manage VMs on Kubernetes, alongside your containerized applications, using Kubernetes as the orchestrator.

<!-- truncate -->

## Why KubeVirt matters

KubeVirt can help organizations at various stages of their Kubernetes journey manage their infrastructure more effectively. It allows customers to manage legacy VM workloads alongside containerized applications using the same Kubernetes API.

VMs deployed on KubeVirt behave much the same way as VMs deployed in more traditional environments, but they run alongside containerized applications and are managed with the same Kubernetes tools. Capabilities that users know and love on Kubernetes, like scheduling, can also be applied to these VMs.

Management of these otherwise disparate deployments is thereby simplified and unified, helping teams avoid the sprawl that comes with operating multiple platforms.

The ability to mix and match workloads in this "hybrid" setting also lets organizations with complex, legacy VM-based applications transition to containers incrementally, or keep mission-critical legacy applications as they are.

## Deploying KubeVirt

Today, users can self-deploy KubeVirt on AKS clusters by choosing VM SKUs that support nested virtualization.

### Creating an AKS cluster

:::note
When you select a `--node-vm-size`, use a VM SKU that supports nested virtualization. You can confirm support on the VM size's Microsoft Learn page: using [Standard_D4s_v5](https://learn.microsoft.com/azure/virtual-machines/sizes/general-purpose/dv5-series?tabs=sizebasic#feature-support) as an example, the "Feature support" section on the SKU page shows whether or not nested virtualization is supported.
:::

1. Create your AKS cluster.

   ```bash
   az aks create --resource-group <resource-group> --name <cluster-name> --node-vm-size Standard_D4s_v5
   ```

2. After your cluster is up and running, get the access credentials for the cluster.

   ```bash
   az aks get-credentials --resource-group <resource-group> --name <cluster-name>
   ```
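
For a production-style setup, you may want to keep the default system node pool reserved for cluster components and run VMs on a separate user node pool that can scale independently. A minimal sketch, assuming the same resource group and cluster as above (`vmpool` is an illustrative name):

```bash
# Add a dedicated user node pool for VM workloads; the system pool stays
# reserved for components like CoreDNS and the metrics server
az aks nodepool add \
  --resource-group <resource-group> \
  --cluster-name <cluster-name> \
  --name vmpool \
  --node-vm-size Standard_D4s_v5 \
  --mode User
```

For simplicity, the rest of this walkthrough assumes the single default node pool created above.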

### Installing KubeVirt

1. Install the KubeVirt operator.

   ```bash
   # Get the latest release
   export RELEASE=$(curl https://storage.googleapis.com/kubevirt-prow/release/kubevirt/kubevirt/stable.txt)

   # Deploy the KubeVirt operator
   kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-operator.yaml
   ```
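   Optionally, you can block until the operator is up before moving on. This sketch assumes the release manifest creates a `virt-operator` deployment in the `kubevirt` namespace, which current releases do:

   ```bash
   # Wait for the virt-operator deployment to finish rolling out
   kubectl -n kubevirt rollout status deployment/virt-operator --timeout=5m
   ```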

2. Install the KubeVirt custom resource.

   ```bash
   curl -L https://github.com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-cr.yaml \
     | yq '.spec.infra.nodePlacement={}' \
     | kubectl apply -f -
   ```

   Notice the empty `nodePlacement: {}` field. By default, KubeVirt sets the node affinity of its control-plane components to control-plane nodes. Because AKS control-plane nodes are fully managed by Azure and inaccessible to KubeVirt, clearing `nodePlacement` avoids potential scheduling failures.
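
For reference, the manifest applied by the pipeline above looks roughly like this after the `yq` edit (reproduced from the upstream release manifest at the time of writing; fields may vary between releases):

```yaml
apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
spec:
  certificateRotateStrategy: {}
  configuration:
    developerConfiguration:
      featureGates: []
  customizeComponents: {}
  imagePullPolicy: IfNotPresent
  # Cleared so KubeVirt's control-plane pods can schedule on AKS worker nodes
  infra:
    nodePlacement: {}
  workloadUpdateStrategy: {}
```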

### Confirm the KubeVirt pods are up and running on the cluster

Once all the components are installed, you can quickly check that all the KubeVirt components are up and running properly in your cluster:

```bash
kubectl get pods -n kubevirt -o wide
```

You should see something like this:

```bash
NAME                               READY   STATUS    RESTARTS   AGE     IP             NODE                                NOMINATED NODE   READINESS GATES
virt-api-7f7d56bbc5-s9nr4          1/1     Running   0          4m10s   10.244.0.174   aks-nodepool1-26901818-vmss000000   <none>           <none>
virt-controller-7c5744f574-56dd5   1/1     Running   0          3m39s   10.244.0.204   aks-nodepool1-26901818-vmss000000   <none>           <none>
virt-controller-7c5744f574-ftz6z   1/1     Running   0          3m39s   10.244.0.120   aks-nodepool1-26901818-vmss000000   <none>           <none>
virt-handler-dlkxf                 1/1     Running   0          3m39s   10.244.0.52    aks-nodepool1-26901818-vmss000000   <none>           <none>
virt-operator-7c8bdfb574-54cs6     1/1     Running   0          9m38s   10.244.0.87    aks-nodepool1-26901818-vmss000000   <none>           <none>
virt-operator-7c8bdfb574-wzdxt     1/1     Running   0          9m38s   10.244.0.153   aks-nodepool1-26901818-vmss000000   <none>           <none>
```
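
For scripting, rather than eyeballing pod status, you can also wait on the KubeVirt custom resource itself (`kv` is the registered short name for the `kubevirt` resource type):

```bash
# Block until KubeVirt reports itself fully deployed and available
kubectl -n kubevirt wait kv kubevirt --for condition=Available --timeout=10m
```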

### Creating VirtualMachineInstance resources in KubeVirt

With KubeVirt installed on your cluster, you can now create your VirtualMachineInstance (VMI) resources.

1. Create your VMI. Save the following YAML, which will create a VMI based on Fedora OS, as `vmi-fedora.yaml`.

   ```yaml
   apiVersion: kubevirt.io/v1
   kind: VirtualMachineInstance
   metadata:
     labels:
       special: vmi-fedora
     name: vmi-fedora
   spec:
     domain:
       devices:
         disks:
         - disk:
             bus: virtio
           name: containerdisk
         - disk:
             bus: virtio
           name: cloudinitdisk
         interfaces:
         - masquerade: {}
           name: default
         rng: {}
       memory:
         guest: 1024M
       resources: {}
     networks:
     - name: default
       pod: {}
     terminationGracePeriodSeconds: 0
     volumes:
     - containerDisk:
         image: quay.io/kubevirt/fedora-with-test-tooling-container-disk:devel
       name: containerdisk
     - cloudInitNoCloud:
         userData: |-
           #cloud-config
           password: fedora
           chpasswd: { expire: False }
       name: cloudinitdisk
   ```

2. Deploy the VMI in your cluster.

   ```bash
   kubectl apply -f vmi-fedora.yaml
   ```

   If successful, you should see output similar to `virtualmachineinstance.kubevirt.io/vmi-fedora created`.
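   If you're scripting this, you can also block until the VMI reports Ready rather than polling (an optional step; the `Ready` condition is the same one surfaced in the `READY` column shown below):

   ```bash
   # Wait for the VMI's Ready condition before connecting to it
   kubectl wait vmi vmi-fedora --for=condition=Ready --timeout=5m
   ```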

### Check out the created VMI

1. Verify that the VMI is created and running via `kubectl get vmi`. You should see a result similar to:

   ```bash
   NAME         AGE   PHASE     IP             NODENAME                            READY
   vmi-fedora   85s   Running   10.244.0.213   aks-nodepool1-26901818-vmss000000   True
   ```

2. Connect to the newly created VMI and inspect it.

   Before you use the `virtctl` command-line tool, install it on your workstation. Install the [`krew` plugin manager](https://krew.sigs.k8s.io/docs/user-guide/setup/install/) if you don't have it, then run:

   ```bash
   kubectl krew install virt
   kubectl virt console vmi-fedora
   ```

   When prompted for credentials, the default username/password is `fedora`/`fedora`.

   ```bash
   vmi-fedora login: fedora
   Password:
   ```
   Once logged in, run `cat /etc/os-release` to display the OS details.

   ```bash
   [fedora@vmi-fedora ~]$ cat /etc/os-release
   NAME=Fedora
   VERSION="32 (Cloud Edition)"
   ID=fedora
   VERSION_ID=32
   VERSION_CODENAME=""
   PLATFORM_ID="platform:f32"
   PRETTY_NAME="Fedora 32 (Cloud Edition)"
   ANSI_COLOR="0;34"
   LOGO=fedora-logo-icon
   CPE_NAME="cpe:/o:fedoraproject:fedora:32"
   HOME_URL="https://fedoraproject.org/"
   DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f32/system-administrators-guide/"
   SUPPORT_URL="https://fedoraproject.org/wiki/Communicating_and_getting_help"
   BUG_REPORT_URL="https://bugzilla.redhat.com/"
   REDHAT_BUGZILLA_PRODUCT="Fedora"
   REDHAT_BUGZILLA_PRODUCT_VERSION=32
   REDHAT_SUPPORT_PRODUCT="Fedora"
   REDHAT_SUPPORT_PRODUCT_VERSION=32
   PRIVACY_POLICY_URL="https://fedoraproject.org/wiki/Legal:PrivacyPolicy"
   VARIANT="Cloud Edition"
   VARIANT_ID=cloud
   ```
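
If you'd rather reach the VM over SSH than through the serial console, the same `virt` plugin can expose the VMI behind a Service. A sketch, where `vmi-fedora-ssh` is a name chosen for this example:

```bash
# Expose the VMI's SSH port through a ClusterIP Service
kubectl virt expose vmi vmi-fedora --name=vmi-fedora-ssh --port=22 --type=ClusterIP
```

From another pod in the cluster, `ssh fedora@vmi-fedora-ssh.default` would then reach the VM.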

## Converting your VMs

At this point, you should have KubeVirt up and running in your AKS cluster and a VMI deployed. KubeVirt can help with a [plethora of scenarios](https://kubevirt.io/) that operational teams may run into. Migrating legacy VMs to KubeVirt can be an involved process, however. [Doing it manually](https://www.spectrocloud.com/blog/how-to-migrate-your-vms-to-kubevirt-with-forklift) involves steps like converting the VM's disk, persisting that disk in the cluster, and creating a VM template.

Tools like [Forklift](https://github.com/kubev2v/forklift) can automate some of the complexity involved in the migration, allowing VMs to be migrated to KubeVirt at scale. The migration is driven by installing Forklift's custom resources and setting up their respective configs in the target cluster. Some great walkthroughs of VM migration can be found in these videos [detailing how Forklift helps deliver a better UX when importing VMs to KubeVirt](https://www.youtube.com/watch?v=S7hVcv2Fu6I) and [breaking down everything from the architecture to a demo of Forklift 2.0](https://www.youtube.com/watch?v=-w4Afj5-0_g).
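
To give a flavor of what that setup involves, the source environment is registered with a `Provider` custom resource. The sketch below is illustrative only, not a tested configuration: the fields follow `forklift.konveyor.io/v1beta1` at the time of writing, and the vCenter URL and secret name are placeholders.

```yaml
# Hypothetical Forklift source provider pointing at an existing vSphere environment
apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: vsphere-source
  namespace: konveyor-forklift
spec:
  type: vsphere
  url: https://vcenter.example.com/sdk   # placeholder vCenter SDK endpoint
  secret:
    name: vsphere-credentials            # Secret holding the vCenter credentials
    namespace: konveyor-forklift
```

A `Plan` resource then maps source VMs to the destination cluster, and a `Migration` resource kicks off the transfer; see the Forklift documentation for the full workflow.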

## Share your feedback

If you're using KubeVirt on AKS or are interested in trying it, we'd love to hear from you! Your feedback will help the AKS team plan how to best support these types of workloads on our platform. Share your thoughts in our [GitHub Issue](https://github.com/Azure/AKS/issues/5445).

## Resources

- [What is KubeVirt?](https://www.redhat.com/topics/virtualization/what-is-kubevirt)
- [KubeVirt user guides](https://kubevirt.io/user-guide/)