Rocky Linux Install 2022 with KVM support

INTRO:

Originally I had tried an in-place upgrade on a Dell PowerEdge 2950 III server from Rocky Linux 8 to 9, only to end up with “Fatal glibc error: CPU does not support x86-64-v2”. Basically I fried my entire system, as this CPU cannot support the newer instructions Rocky Linux 9 requires.

Today I am going to walk you through a complete 2022 Rocky Linux install. Rocky Linux is the CentOS replacement from the original CentOS founder. Today I put in three different USB sticks containing Oracle Linux, Rocky Linux and CentOS Stream to see if I could install over my existing partitions, so as not to wipe out my FreeBSD KVM guest on the LVM. Sadly, none of the installers could see the LVM partitions, so I am forced to reinstall Rocky Linux as well as all my guests from scratch.

I am going to walk you through a safer way to install Rocky Linux, to future-proof yourself in case of reinstalls or problems.

Rocky Linux 8 vs 9:

Run the following shell script:

pico glibc_check.sh (add the following)

#!/usr/bin/awk -f
# Read /proc/cpuinfo until we reach the "flags" line, then test for the
# CPU feature sets that define each x86-64 microarchitecture level.
BEGIN {
    while (!/flags/) if ((getline < "/proc/cpuinfo") != 1) exit 1
    if (/lm/&&/cmov/&&/cx8/&&/fpu/&&/fxsr/&&/mmx/&&/syscall/&&/sse2/) level = 1
    if (level == 1 && /cx16/&&/lahf/&&/popcnt/&&/sse4_1/&&/sse4_2/&&/ssse3/) level = 2
    if (level == 2 && /avx/&&/avx2/&&/bmi1/&&/bmi2/&&/f16c/&&/fma/&&/abm/&&/movbe/&&/xsave/) level = 3
    if (level == 3 && /avx512f/&&/avx512bw/&&/avx512cd/&&/avx512dq/&&/avx512vl/) level = 4
    if (level > 0) { print "CPU supports x86-64-v" level; exit level + 1 }
    exit 1
}

Save it then:

chmod +x glibc_check.sh; ./glibc_check.sh

host:~ # ./glibc_check.sh
CPU supports x86-64-v1
host:~ #

Now if your output only shows version 1 like the above, you can only install Rocky Linux 8; if you have version 2 or higher you can install Rocky Linux 9. I believe the reason RHEL made this change was so glibc can use newer CPU instructions and run faster.
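If the machine is already running a newer glibc (2.33 or later; note Rocky 8 ships 2.28, so this will not work there), you can also ask the dynamic linker directly which microarchitecture levels it will use:

/lib64/ld-linux-x86-64.so.2 --help | grep supported

Levels the CPU can actually run are listed as “supported, searched”.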

INSTALL Rocky Linux:

Download the latest Rocky Linux ISO and write it to a USB stick with Rufus. I am not going to cover the simple graphical installer, but I do want to cover partitioning your drive, so let’s go to “Installation Destination” in the installer. To prevent any future wipeouts of LVMs by installers, we are going to use standard partitions and only create our LVM manually afterwards. This way, if we ever run into a situation again where an installer cannot see into our LVM setup, it will definitely see our standard partitions and we won’t have to wipe out our LVMs ever again.

Here are my recommendations for /boot, / and swap. For /boot, from experience the usual 1 GB recommendation is not enough: after you install enough kernels, or start adding custom kernels to support things like ZFS or Ksmbd, it adds up quickly. So my recommendation for /boot is 3 GB. For swap, the rule of thumb is 20% of your memory; I have 32 GB on this server, which works out to about 6.4 GB, but I am going to go with 8 GB, as I rarely like going under that these days. For the / partition we will want at least 80-100 GB. At some point you are going to run out of space un-tarring kernel sources, running 4k file tests, or storing ISOs, so you need some breathing room on your main OS!

(all ext4 standard partitions, set up as follows)

/boot – sda1 – 3GB
swap – sda2 – 8GB
/ – sda3 – 80GB

Save this setup and finish your install: packages, network, password and so on. We will set up sda4 manually for our LVM after the install. Double-check everything, make sure they are all ext4 partitions, and that there is no LVM anywhere!!!

Post Install Tasks (setting up LVM, KVM, and WireGuard):

Let us start by upgrading the system, setting up KVM, and upgrading the kernel from the stock default. Remember, we cannot run a lot of things with less than a 5.15.x kernel, and if you start getting into things like ZFS you currently need to be exactly on a 5.15.x kernel. For our purposes we will just use kernel-ml, and we can downgrade to 5.15.x for ZFS later if we choose to compile our own kernels manually.
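You can confirm which kernel you are actually running before and after the upgrade below:

uname -r (stock Rocky 8 reports a 4.18.x kernel; after installing kernel-ml and rebooting it should report a much newer one)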

dnf update
shutdown -r now (reboot with updated system)
cat /proc/cpuinfo | egrep "vmx|svm"  (check we have virtualization enabled)
dnf install @virt virt-top libguestfs-tools virt-install virt-manager xauth virt-viewer

systemctl enable --now libvirtd (everything working? ifconfig -a)
#get rid of virbr0 in ifconfig
virsh net-destroy default
virsh net-undefine default
service libvirtd restart
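#optional check: the default NAT network should now be gone
virsh net-list --all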
(check https://www.elrepo.org/ for the matching install command below for Rocky 8 or 9)
#the stock Rocky 8 kernel is too old for in-kernel wireguard (it was mainlined upstream in 5.6)
yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
dnf makecache
dnf --enablerepo="elrepo-kernel" install -y kernel-ml
#let's setup our br0 bridge before we reboot
cd /etc/sysconfig/network-scripts
nano ifcfg-enp10s0f0 (your <device name>)
#add "BRIDGE=br0" at end of this file
nano ifcfg-br0
#add something like this:
#should match a lot of your file above
STP=no
TYPE=Bridge
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
IPADDR=192.168.0.2
GATEWAY=192.168.0.1
PREFIX=24
DNS1=192.168.0.3
DNS2=8.8.8.8
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
NAME=br0
UUID=36e9e2ce-47b1-4e02-ab76-34772d136a21
DEVICE=br0
ONBOOT=yes
AUTOCONNECT_SLAVES=yes
IPV6INIT=no
#change the UUID above to something unique (run uuidgen to generate one) and put in your own IPs
shutdown -r now (reboot with new kernel)
ifconfig -a (check br0 is there, then all good)

Ok, now we have the new kernel and br0 is set up for KVM guests, so let’s move on to LVM.

Creating our LVM for our KVM guests:

#make sure we have right device
fdisk -l /dev/sda 
fdisk /dev/sda
p (print the current partition table)
n (new partition - accept the defaults to create partition 4 from the rest of the disk space)
t (change partition type - select partition 4)
8e (change type from Linux to Linux LVM)
w (write changes to the partition table)
partprobe /dev/sda (inform OS of partition changes)
pvcreate /dev/sda4 (now we have it as a LVM-can check with "pvs")
vgcreate vps /dev/sda4 (creating our volume group - "vgdisplay")
#now we are all setup, we can create as many KVM guests as we want
#for example give 70G to one guest and give remaining space to a devel guest
lvcreate -n cappy -L 70G vps (create a 70G guest - "lvdisplay")
lvcreate -n devel -l 100%FREE vps (give remaining space to this guest)
#can always delete it later
pvdisplay (check we used up all the space on vps)
#let's make sure guests can suspend and resume on host reboots:
pico /etc/sysconfig/libvirt-guests 
"ON_SHUTDOWN=suspend"
systemctl start libvirt-guests
systemctl enable libvirt-guests
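Now that storage and networking are ready, here is a minimal sketch of creating the first guest on the cappy volume we made above, attached to the br0 bridge. The ISO path, memory, vCPU count and os-variant are example values only; point --cdrom at your actual install media:

virt-install \
  --name cappy \
  --memory 4096 \
  --vcpus 2 \
  --disk path=/dev/vps/cappy,bus=virtio \
  --network bridge=br0,model=virtio \
  --cdrom /path/to/install.iso \
  --os-variant generic \
  --graphics vnc

Once it is running, virsh list should show the guest, and virt-viewer cappy (or any VNC client) will get you to its console.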

Congratulations on your new install.

Dan.

KVM – Adding Space To FreeBSD 10 zfs on root guest on a Centos 6.5 LVM host

Intro:
I decided to write this after I could not find any documentation on the internet about how to easily add space to a FreeBSD 10 guest installed with ZFS on root. This should show you how to do it quickly and easily. It is important because we may need to add more space from our LVM to our FreeBSD guest at some point, and we need to know exactly how to do that.

Pre-requisites:
I assume you have a Centos host running KVM guests, as well as gdisk installed.

The first thing we want to do is extend the size of our guest:

lvextend -L +10G /dev/vps/sunsaturn (add 10G to sunsaturn)
gdisk /dev/vps/sunsaturn (we let gdisk fix partition table or we will not be able to add new space)

Command (? for help): w
Warning! Secondary header is placed too early on the disk! Do you want to
correct this problem? (Y/N): Y
Have moved second header and partition table to correct location.

Just save and exit to fix our partition tables.
Now let us run gdisk again to add the new space. Currently we have the following:

gdisk -l /dev/vps/sunsaturn
Number  Start (sector)    End (sector)  Size       Code  Name
   1              34            1057   512.0 KiB   A501  gptboot0
   2            1058         8389665   4.0 GiB     A502  swap0
   3         8389666       335544286   156.0 GiB   A504  zfs0
virt-filesystems --long --parts --blkdevs -h -a /dev/vps/sunsaturn
Name       Type       MBR  Size  Parent
/dev/sda1  partition  -    512K  /dev/sda
/dev/sda2  partition  -    4.0G  /dev/sda
/dev/sda3  partition  -    156G  /dev/sda
/dev/sda   device     -    160G  -

Our last partition can easily be expanded by simply deleting the last partition and adding it back with the new space included.

gdisk /dev/vps/sunsaturn (now delete the last partition (3) and add it back, setting the code and name back to A504 and zfs0 - a sample session is sketched below)
partprobe /dev/vps/sunsaturn
(if for any reason you cannot see the space just run:)
gdisk /dev/vps/sunsaturn (then hit "w")
partprobe /dev/vps/sunsaturn
(now you should be able to do above)
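For reference, the delete-and-recreate step looks roughly like the session below. The prompts are paraphrased and the sector numbers are taken from the listing above; the important parts are re-using the old start sector (8389666 here) so the ZFS data still lines up, letting the last sector default to the end of the disk, and restoring the A504 type code and zfs0 name:

Command (? for help): d
Partition number (1-3): 3
Command (? for help): n
Partition number (3-128, default 3): 3
First sector: 8389666 (the old start sector from the gdisk -l output)
Last sector: <enter> (default - end of disk)
Hex code or GUID: a504
Command (? for help): c
Partition number (1-3): 3
Enter name: zfs0
Command (? for help): w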

Alright, we have expanded our LVM; now we need to restart the guest and enable the new space within the guest.

virsh shutdown sunsaturn (actually wait till it is shutdown, till "virsh list" shows it gone)
virsh start sunsaturn
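If you would rather script that wait than keep re-running "virsh list" by hand, a small loop like this works between the shutdown and the start (assuming the guest name sunsaturn as above):

while virsh list --name | grep -q '^sunsaturn$'; do sleep 2; done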

IN THE GUEST:

zpool status (find out device and put it below)
zpool online -e zroot vtbd0p3 (zfs will now grab the additional space)
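A quick way to confirm ZFS actually picked up the extra room (assuming the pool is named zroot as above):

zpool list zroot (the SIZE column should have grown by roughly the 10G we added)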

That’s it, now you know how to add space on the fly to any FreeBSD guest with ZFS on root.

Dan.

Update: a few months after writing this I came across another article that covers the same procedure well.