Rocky Linux Install 2022 with KVM support


Originally I attempted an in-place upgrade on a Dell 2950 III server from Rocky Linux 8 to 9, only to end up with “Fatal glibc error: CPU does not support x86-64-v2”. Basically I fried my entire system, as this CPU does not support the newer instruction-set level that Rocky Linux 9 requires.

Today I am going to walk you through a complete Rocky Linux install. Rocky Linux is the CentOS replacement from the original CentOS creator. I tried three different USB sticks containing Oracle Linux, Rocky Linux and CentOS Stream to see if I could install over my existing partitions without wiping out the FreeBSD KVM guest on my LVM. Sadly, none of the installers could see the LVM partitions, so I am forced to reinstall Rocky Linux, and all of my guests, from scratch.

I am going to walk you through a safer way to install Rocky Linux, to future-proof yourself against reinstalls or problems.

Rocky Linux 8 vs 9:

Run the following shell script:

pico <script> (pick any filename, then add the following)

#!/usr/bin/awk -f
BEGIN {
    while (!/flags/) if ((getline < "/proc/cpuinfo") != 1) exit 1
    if (/lm/&&/cmov/&&/cx8/&&/fpu/&&/fxsr/&&/mmx/&&/syscall/&&/sse2/) level = 1
    if (level == 1 && /cx16/&&/lahf/&&/popcnt/&&/sse4_1/&&/sse4_2/&&/ssse3/) level = 2
    if (level == 2 && /avx/&&/avx2/&&/bmi1/&&/bmi2/&&/f16c/&&/fma/&&/abm/&&/movbe/&&/xsave/) level = 3
    if (level == 3 && /avx512f/&&/avx512bw/&&/avx512cd/&&/avx512dq/&&/avx512vl/) level = 4
    if (level > 0) { print "CPU supports x86-64-v" level; exit level + 1 }
    exit 1
}

Save it, then make it executable and run it:

chmod +x <script>
./<script>

host:~ # ./<script>
CPU supports x86-64-v1
host:~ #

Now if you get output like the above with only version 1, you can only install Rocky Linux 8; if you get version 2 or higher, you can install Rocky Linux 9. I believe RHEL made this change so that glibc could be built against the newer, faster CPU instructions.
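If you would rather spot-check the flags behind that result by hand, here is a small sketch (the helper name and this flag list are my own; x86-64-v2 roughly corresponds to the extra flags cx16, lahf_lm, popcnt, sse4_1, sse4_2 and ssse3 on top of the baseline):

```shell
# Hypothetical helper: check a /proc/cpuinfo flags line against
# the extra flags x86-64-v2 needs on top of the x86-64-v1 baseline.
check_v2_flags() {
  flags=" $1 "
  missing=""
  for f in cx16 lahf_lm popcnt sse4_1 sse4_2 ssse3; do
    case "$flags" in
      *" $f "*) ;;                  # flag present
      *) missing="$missing $f" ;;   # collect missing flags
    esac
  done
  if [ -z "$missing" ]; then
    echo "x86-64-v2: OK"
  else
    echo "x86-64-v2: missing$missing"
  fi
}

# On the live system:
# check_v2_flags "$(awk '/^flags/ {print; exit}' /proc/cpuinfo)"
```

On the Dell 2950 III above, this would report several of those flags missing, which is exactly why only Rocky Linux 8 will run on it.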

INSTALL Rocky Linux:

Download the latest Rocky Linux ISO and write it to a USB stick with Rufus. I am not going to cover the basics of the graphical installer, but I do want to cover partitioning your drive. Go to “Installation Destination” in the installer. To prevent any future wipeouts of LVMs by installers, we are going to use standard partitions and only create our LVM manually afterwards. This way, if we ever run into a situation again where an installer cannot see into our LVM setup, it will definitely see our standard partitions and we won’t have to wipe out our LVMs ever again.

Here are my recommendations for /boot, / and swap. For /boot, from experience, the commonly suggested 1 GB is not enough; once you install enough kernels, or start adding custom kernels to support things like ZFS or ksmbd, it adds up quickly. So my recommendation for /boot is 3 GB. For swap the rule of thumb is 20% of your memory; I have 32 GB in this server, but I am going to go with 8 GB, as I rarely like going under that these days. For the / partition you will want at least 80-100 GB. At some point you are going to run out of space un-tarring kernel sources, running 4k file tests, or storing ISOs, so give yourself some breathing room on your main OS!
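To put numbers on that rule of thumb, here is a tiny sketch (the function name is my own) that applies 20% of RAM with the 8 GB floor I use:

```shell
# Suggest a swap size: 20% of RAM in whole GB, but never below 8 GB.
suggest_swap() {
  mem_kb=$1                                # MemTotal in kB
  swap_gb=$(( mem_kb / 1024 / 1024 / 5 ))  # 20% of RAM, rounded down to GB
  if [ "$swap_gb" -lt 8 ]; then swap_gb=8; fi
  echo "${swap_gb}G"
}

suggest_swap 33554432   # 32 GB of RAM -> prints 8G
# On the live system:
# suggest_swap "$(awk '/^MemTotal/ {print $2}' /proc/meminfo)"
```

With 32 GB of RAM, 20% is only 6 GB, so the 8 GB floor kicks in, which matches the sizing above.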

(all ext4 standard partitions setup as follows)

/boot – sda1 – 3GB
swap – sda2 – 8GB
/ – sda3 – 80GB

Save this setup and finish the rest of your install: packages, network, password and so on. We are going to set up sda4 manually after the install for our LVM. Double-check everything; make sure they are all ext4 partitions, and there is no LVM anywhere!

Post Install Tasks — setting up for LVM, KVM, wireguard:

Let us start by upgrading the system, setting up KVM, and upgrading the kernel from the stock default. Remember, we cannot run a lot of things on anything older than a 5.15.x kernel, and if you get into things like ZFS you currently need to be on a 5.15.x kernel exactly. For our purposes we will just use kernel-ml, and we can downgrade to 5.15.x for ZFS later if we choose to compile our own kernels manually.

dnf update
shutdown -r now (reboot with updated system)
cat /proc/cpuinfo | egrep "vmx|svm"  (check we have virtualization enabled)
dnf install @virt virt-top libguestfs-tools virt-install virt-manager xauth virt-viewer

systemctl enable --now libvirtd (everything working? ifconfig -a)
#get rid of virbr0 in ifconfig
virsh net-destroy default
virsh net-undefine default
service libvirtd restart
(pick the elrepo-release package matching Rocky 8 or 9 below)
#we will not be able to install wireguard with less than a 5.15 kernel
yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm (use elrepo-release-9.el9.elrepo.noarch.rpm on Rocky 9)
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
dnf makecache
dnf --enablerepo="elrepo-kernel" install -y kernel-ml
#let's setup our br0 bridge before we reboot
cd /etc/sysconfig/network-scripts
nano ifcfg-enp10s0f0 (your <device name>)
#add "BRIDGE=br0" at end of this file
nano ifcfg-br0
#create this new file; it should match much of the file above
#change the UUID to a new one and put your own IPs in
shutdown -r now (reboot with new kernel)
ifconfig -a (check br0 is there, then all good)
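For reference, the two network-scripts files might end up looking something like this; everything here (device name, UUID, addresses) is a placeholder you must replace with your own values:

```
# ifcfg-enp10s0f0 (existing file, with the BRIDGE line added)
TYPE=Ethernet
BOOTPROTO=none
NAME=enp10s0f0
DEVICE=enp10s0f0
ONBOOT=yes
BRIDGE=br0

# ifcfg-br0 (new file - generate a fresh UUID with "uuidgen")
TYPE=Bridge
BOOTPROTO=none
NAME=br0
DEVICE=br0
UUID=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
ONBOOT=yes
IPADDR=192.168.1.10
PREFIX=24
GATEWAY=192.168.1.1
DNS1=192.168.1.1
```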

Ok now we have new kernel and br0 is setup for KVM guests, let’s move on to LVM.

Creating our LVM for our KVM guests:

#make sure we have right device
fdisk -l /dev/sda 
fdisk /dev/sda
n (new partition; accept the defaults to create partition 4 from the rest of the disk)
t (change the type of partition 4)
8e (set the type to Linux LVM)
w (write the changes to the partition table)
partprobe /dev/sda (inform OS of partition changes)
pvcreate /dev/sda4 (now we have an LVM physical volume - check with "pvs")
vgcreate vps /dev/sda4 (creating our volume group - "vgdisplay")
#now we are all setup, we can create as many KVM guests as we want
#for example give 70G to one guest and give remaining space to a devel guest
lvcreate -n cappy -L 70G vps (create a 70G guest - "lvdisplay")
lvcreate -n devel -l 100%FREE vps (give remaining space to this guest)
#can always delete it later
pvdisplay (check we used up all the space on vps)
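As an illustration of pointing a guest at one of these LVs, a virt-install invocation might look like the sketch below; the memory, vcpus, ISO path and os-variant are all placeholders of mine, not values from the original setup:

```
#hypothetical example - adjust every value for your own guest
virt-install \
  --name cappy \
  --memory 4096 --vcpus 2 \
  --disk path=/dev/vps/cappy \
  --cdrom /root/guest-installer.iso \
  --network bridge=br0 \
  --graphics vnc \
  --os-variant generic
```

Passing the raw LV as --disk gives the guest the whole 70G volume with no image file in between.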
#let's make sure guests can suspend and resume on host reboots:
pico /etc/sysconfig/libvirt-guests 
systemctl start libvirt-guests
systemctl enable libvirt-guests
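In /etc/sysconfig/libvirt-guests, the relevant settings look something like this (my suggested values, assuming you want guests suspended on host shutdown and resumed on boot):

```
ON_BOOT=start
ON_SHUTDOWN=suspend
PARALLEL_SHUTDOWN=0
SHUTDOWN_TIMEOUT=300
```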

Congratulations on your new install.