vSphere 7 as a Nested Cluster within CentOS 8 and KVM

What is a Nested VM?

Nested virtualization refers to virtualization that runs inside an already virtualized environment. In other words, it’s the ability to run a hypervisor inside of a virtual machine (VM), which itself runs on a hypervisor.

Why do this?

As for me, I have an R440 with a single 10-core/20-thread CPU and 64G of memory. I have plenty of room for vertical scalability, which is why I bought this server. It gives me the ability to leverage KVM on CentOS and build mini labs to test and train on multiple components. Prior to this I had three Dell T110s, which have run vSphere 6.5, OracleVM, Kubernetes, and Docker at one time or another. Now, with more resources and newer versions of VMware, I have more flexibility to have what I want at any time. Also because I can 🙂

Pre-requisites

I used several resources to help point me in the right direction, including the CentOS documentation, but this resource was the most handy:

https://fabianlee.org/2018/09/19/kvm-deploying-a-nested-version-of-vmware-esxi-6-7-inside-kvm/

First and foremost, make sure you have a Linux OS already built and running on your platform of choice. I am using CentOS 8, but you can use any vendor you like. Additional requirements are as follows:

Additional information on how to verify or change these requirements follows this section.

  • Hardware resources such as CPU and memory. vSphere requires a minimum of 4 vCPUs and 12G of memory. You will also need raw volumes; I am using 50G volumes for the vSphere install and a 500G iSCSI volume for vCenter.
  • Enable VT-x: You need to make sure your CPU is capable of VT-x (virtualization acceleration), and then that your BIOS has VT-x enabled.  Many computers have it disabled by default in the BIOS.
  • Configure VT within the host: You will need to configure VT inside of Linux, in this case CentOS.
  • Install KVM components

Configure VT-x in OS

In this section, the example from the resource I referenced used Ubuntu, not CentOS. For my install I did not need to do this, and it worked fine as a nested VM. If you run into issues or are using Ubuntu, you can enable this.

On Ubuntu, in addition to enabling VT-x at the BIOS level, you will also need to configure it at the OS level. In the file "/etc/modprobe.d/qemu-system-x86.conf", set the following lines.

Note: I did not do this for CentOS.
# options kvm ignore_msrs=1 report_ignored_msrs=0
# options kvm_intel nested=1 enable_apicv=0 ept=1

Reboot the host, and then run the following commands.

On CentOS 8 I have the following by default:
# cat /sys/module/kvm/parameters/ignore_msrs
 N
# cat /sys/module/kvm_intel/parameters/nested
 1
# cat /sys/module/kvm_intel/parameters/ept
 Y

Installing KVM

  1. As mentioned above, we need to check if our hardware supports VT-x by running the following:
# grep -e 'vmx' /proc/cpuinfo #Intel systems 
# grep -e 'svm' /proc/cpuinfo #AMD systems

Also, confirm that KVM modules are loaded in the kernel (they should be, by default).

# lsmod | grep kvm

In our case we will be using not only the CLI but also Cockpit, which is a web frontend for managing the OS and its services such as KVM. It comes preinstalled, and when you first log in it will display a message telling you how to start and enable it. What is not part of the default install is the piece that manages virtual machines, which we will install below.

The cockpit-machines extension should be installed to manage VMs based on Libvirt.

# dnf install cockpit cockpit-machines

2. When the package installation is complete, start the cockpit socket, enable it to auto-start at system boot and check its status to confirm that it is up and running.

# systemctl start cockpit.socket
# systemctl enable cockpit.socket
# systemctl status cockpit.socket

3. Next, add the cockpit service to the system firewall (which is enabled by default) using the firewall-cmd command, and reload the firewall configuration to apply the changes.

# firewall-cmd --add-service=cockpit --permanent
# firewall-cmd --reload

4. To access the Cockpit web console, open a web browser and navigate to one of the following URLs.

https://FQDN:9090/
OR
https://SERVER_IP:9090/

Cockpit uses a self-signed certificate to enable HTTPS; simply proceed with the connection when you get a warning from the browser. At the login page, use your server user account credentials.

Next, install the virtualization module and other virtualization packages as follows. The virt-install package provides a tool for installing virtual machines from the command-line interface, and virt-viewer is used to view virtual machines.

# dnf module install virt 
# dnf install virt-install virt-viewer

5. Next, run the virt-host-validate command to validate if the host machine is set up to run libvirt hypervisor drivers.

# virt-host-validate
QEMU: Checking for hardware virtualization : PASS
QEMU: Checking if device /dev/kvm exists : PASS
QEMU: Checking if device /dev/kvm is accessible : PASS
QEMU: Checking if device /dev/vhost-net exists : PASS
QEMU: Checking if device /dev/net/tun exists : PASS
QEMU: Checking for cgroup 'cpu' controller support : PASS
QEMU: Checking for cgroup 'cpuacct' controller support : PASS
QEMU: Checking for cgroup 'cpuset' controller support : PASS
QEMU: Checking for cgroup 'memory' controller support : PASS
QEMU: Checking for cgroup 'devices' controller support : PASS
QEMU: Checking for cgroup 'blkio' controller support : PASS
QEMU: Checking for device assignment IOMMU support : PASS
QEMU: Checking if IOMMU is enabled by kernel : WARN (IOMMU appears to be disabled in kernel. Add intel_iommu=on to kernel cmdline arguments)
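
That IOMMU warning only matters if you plan on PCI device passthrough, which this nested build does not use, so it is safe to leave alone. If you do want to clear it on CentOS 8, a minimal sketch is to add the kernel argument it suggests with grubby and reboot:

# grubby --update-kernel=ALL --args="intel_iommu=on"
# reboot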

6. Next, start the libvirtd daemon (libvirtd) and enable it to start automatically on each boot. Then check its status to confirm that it is up and running.

# systemctl start libvirtd.service
# systemctl enable libvirtd.service
# systemctl status libvirtd.service

Download vSphere 7

This is not free, but you can do a trial for 60 days. You can also check out VMUG and join the user group to get yearly evals.

https://my.vmware.com/

https://www.vmug.com/home

Creating a Storage Pool

I ended up using local storage; I had some extra 4TB disks that I mirrored. I am not going to get into software mirroring in CentOS; there is plenty out there if you don't know how to go about it. So after I set up my internal mirror, I used Cockpit to create the storage pool. You can do this from the CLI as well, but it's super easy to use the web management tool. Log in to the web interface:

  1. https://ip_address:9090
  2. Then proceed to the Virtual Machines > Storage Pool section
  3. Click on Create Storage Pool
    1. Make sure your storage is available. In my case I used local storage, so after creating the mirror I made the filesystem and mounted it at /images, which is what I used for the storage pool.

Once you create it, you should see something like this:
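
If you prefer the CLI, here is a minimal virsh sketch of the same thing, assuming a directory-backed pool named images on the /images mount described above (adjust the name and path to your setup):

# virsh pool-define-as images dir --target /images
# virsh pool-build images
# virsh pool-start images
# virsh pool-autostart images
# virsh pool-list --all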

Creating Disks

I like to create the disks ahead of time. In Cockpit you can certainly create them within the VM config instead, if that is what you want to do. A CLI sketch follows the steps below.

  1. Login Cockpit
  2. Go to Virtual Machines -> Storage Pools
  3. Click on the Pool you created in the previous steps
    1. Click on Storage Volumes
    2. Click on Create Volume
      1. Enter the name, size (I am starting with 50G), and format (qcow2) for Node1 and Node2
Create volume example
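
The volumes can also be created from the CLI. A sketch, assuming the pool from the previous section is named images and using hypothetical volume names (swap in your own):

# virsh vol-create-as images esxi-node1.qcow2 50G --format qcow2
# virsh vol-create-as images esxi-node2.qcow2 50G --format qcow2
# virsh vol-list images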

Creating Networks

You need to create a bridge that VMware will be able to see and use to boot over PXE. You will still need to do this regardless of whether you do a network install or use an ISO. If it's not set up correctly, you will not be able to install VMware successfully, or you may be able to install but not access the VMware web interface. You can create the bridge via Cockpit or the CLI, whatever works for you; I will show both configs.

Create Networks Cockpit

  1. Login Cockpit
  2. Go to Networking
  3. Click on Add Bridge
    1. Enter a name and select one or more interfaces. They need to be devices or a team.

4. After you create it, you should be able to access it and see data coming across it.

5. I also created a virbr0 interface for traffic within the VM. This was done at the VM creation level in the next section. The br0.20 is a VLAN 20 bridge.
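
Create Networks CLI

If you would rather build the bridge from the CLI, here is a minimal nmcli sketch. It assumes NetworkManager is managing the NICs and that the physical interface is named eno1 (substitute your own device name); it creates br0 and enslaves the NIC to it:

# nmcli con add type bridge ifname br0 con-name br0
# nmcli con add type bridge-slave ifname eno1 master br0
# nmcli con up br0
# nmcli con show           #verify br0 and its slave are active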

Create a KVM VM

Here is the script I used to build my VMs. In my case I am doing a PXE boot and installing VMware over the network. I will also post the version for using an ISO image.

  • vCPUs 4
  • Memory: 12G
  • cpu: host-passthrough
  • Disk: none (Will add this later)
  • Network: bridge to br0

Using PXE to Install

esxi2_pxe_create.sh

 virt-install --virt-type=kvm --name=esxi1 \
 --cpu host-passthrough \
 --ram 12288 --vcpus=4  \
 --hvm \
 --network bridge:br0 --pxe \
 --graphics vnc --video qxl  \
 --disk none 

I like to set up the disks ahead of time and then assign them manually, so I left the disk as none and added it before I installed, which I will go over in a bit. This example will inherit the host CPU virtualization environment. If you are doing a network install, you can skip ahead to the Adding Disk to VM section.

Using an ISO instead of PXE

As promised, here is the version for using an ISO. In this example you add the disk at creation time rather than later; I wanted to provide an example of that so you have both options available to you:

esxi2_create.sh

virt-install --virt-type=kvm --name=esxi1 \
--cpu host-passthrough \
--ram 12288 --vcpus=4 \
--hvm \
--cdrom ~/Downloads/VMware-VMvisor-Installer-7.0U1-16850804.x86_64.iso \
--network bridge:br0 \
--graphics vnc --video qxl \
--disk pool=default,size=40,sparse=true,bus=sata,format=qcow2 \
--boot cdrom,hd --noautoconsole --force

You need to have the CDROM connected after power up. Depending on the domain XML, you may have to attach it as device "hdb" or "hdc", so look at the <target dev="hd?"> label of the CDROM device.

$ cdrom=`virsh domblklist esxi1 --details | grep cdrom | awk '{print $3}'`

$ virsh attach-disk esxi1 ~/Downloads/VMware-VMvisor-Installer-7.0U1-16850804.x86_64.iso $cdrom --type cdrom --mode readonly

With this connection to the CDROM, go ahead and reset the machine so it can use it for booting.

$ virsh reset esxi1

Build the VM

Using the script from the prior steps, we will build the VM.

  1. ssh root@HOST_IP
  2. cd /SCRIPT_LOCATION
  3. Run the script
    1. sh esxi2_pxe_create.sh
  4. Verify the build: go to Virtual Machines
    1. Check that your VM is present and matches what the script specified
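
You can also verify from the CLI before opening Cockpit; these are standard virsh commands using the esxi1 name from the script:

# virsh list --all            #the new VM should show up here
# virsh dominfo esxi1         #vCPUs and memory should match the script
# virsh domiflist esxi1       #confirms the br0 bridge interface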

Adding Disk to VM

  1. Go to Virtual Machines -> VM -> Disks
  2. Click on Add Disk
    1. Click on use existing (assumes you created it as in previous steps)
      1. Leave cache as default
      2. Set Bus as SATA

You should have something similar to the below once added.
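
If you would rather skip the Cockpit dialog, virsh can attach an existing volume as a SATA disk. A sketch, assuming the qcow2 volume you created earlier lives at /images/esxi-node1.qcow2 (use your own path):

# virsh attach-disk esxi1 /images/esxi-node1.qcow2 sda --targetbus sata --driver qemu --subdriver qcow2 --persistent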

Verify Network Interfaces or Add if Necessary

As before, from the Virtual Machine view, verify that your bridge is there; if it isn't, go ahead and add it.

Take a look at the settings, or if you need to create it, see the config below:

  1. Set the Interface Type to Bridge to LAN and use the bridge (br0) you created earlier.
  2. Set the Model to e1000e; e1000 Legacy was used in other examples I saw, but those were for older releases, not 7.0.

I have found that you need to add two network bridge interfaces or vSphere won't install properly. Just repeat the step above and call it "br1".
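
The CLI equivalent, assuming you have already created the second host bridge (br1) the same way you created br0:

# virsh attach-interface esxi1 bridge br1 --model e1000e --persistent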

Optional: a virtual network interface (virbr0) for localized traffic. This can also be added in the build script if you like. I did this later, as I am progressing through the build as a working example.

Final Check before Install

If everything looks OK, then we can proceed with installing vSphere 7.0.x. I am only going to show the network install. The config I use for achieving this is below. The contents of the VMware 7.0 image were copied to the tftp directory on my Kickstart server. I am not going to go over how to set up a Kickstart server; there is plenty of information out there for that.

  1. Create a directory under the tftpboot tree and copy the ISO contents there
    1. /var/lib/tftpboot/vmware7
  2. You will also need to create a PXE boot config or add it to a default menu if you do it this way.
    1. This is what I used for PXE boot. I am using a serial console, but if you are not, use the commented-out line instead. Put this in /var/lib/tftpboot/pxelinux.cfg/MAC_ADDRESS, or add it to the default file as a menu entry (a fuller menu-entry sketch follows this list).
kernel vmware7/mboot.c32
 append -c vmware7/boot.cfg gdbPort=none logPort=none tty2Port=com1 allowLegacyCPU=true
 #append -c vmware7/boot.cfg
 ipappend 2
  3. If you are using a serial console, you will need to edit the VMware boot.cfg as well; if not, you can skip this step.
    1. vi /var/lib/tftpboot/vmware7/boot.cfg
    2. append the kernelopt line with: text nofb com1_baud=115200 com1_Port=0x3f8 tty2Port=com1 gdbPort=none logPort=none
kernelopt=runweasel text nofb com1_baud=115200 com1_Port=0x3f8 tty2Port=com1 gdbPort=none logPort=none
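
For reference, here is a minimal sketch of a complete menu entry in /var/lib/tftpboot/pxelinux.cfg/default with the lines above folded in. The DEFAULT/TIMEOUT values and the vmware7 label are assumptions (and menu.c32 must be present in your tftpboot directory); adjust to your own menu and drop the serial-console options if you are not using one:

DEFAULT menu.c32
TIMEOUT 100

LABEL vmware7
  MENU LABEL Install VMware ESXi 7.0
  KERNEL vmware7/mboot.c32
  APPEND -c vmware7/boot.cfg gdbPort=none logPort=none tty2Port=com1 allowLegacyCPU=true
  IPAPPEND 2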

Now we can finally boot it and install VMware.

Install VMware 7 via VM Console

  1. Go to Cockpit and login
  2. Click on Virtual Machines
    1. Click on VM
    2. Click on Overview
      1. Click on Boot Order and add Network (52:54:00:21:51:6e, which is my bridge) or the CD
  3. Click Run
  4. Click on Consoles
    1. VNC
      1. This is my menu from my Kickstart server

If it finds your PXE info correctly then it will start to boot the installer:

You will get the normal install messages; accept them and you will end up at the disk selection screen. If you don't see anything, check whether your disk is attached to the VM and what the bus type is. The default is virtio, which won't work; change it to SATA if that is the case.

Keep going through the setup just as you normally would.

Reboot the server. The boot order will change and put the disk first, but I always disconnect the network used for PXE boot to avoid any mishaps. Just go to the VM and Overview as before and click on Boot Order.
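
If you prefer the CLI for this, one way is to take the link down on the PXE NIC with virsh; the MAC below is the one from my boot-order step, so substitute your own:

# virsh domif-setlink esxi1 52:54:00:21:51:6e down          #take the PXE NIC offline
# virsh domif-setlink esxi1 52:54:00:21:51:6e up            #bring it back later if needed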

Once rebooted, you will end up with the typical management screen. At this point you can treat it as any other VMware server. I will also install the vCenter virtual appliance, but I won't cover that here. Thanks and have fun!
