I ran into an interesting situation: why not take an old Dell R720 or similar, install a PCIe card with an NVMe on it, and boot from that for our core OS and virtual machines? The catch, as we know, is that older machines do not support booting directly from NVMe.
What I thought would take no more than an hour turned into a whole-day ordeal fighting with Clover to boot it! The idea was simple: put Clover on a USB stick, configure my Dell R720 to boot only from that, and have the USB stick boot the NVMe with FreeBSD on it. I managed it in the end, but it was not fun! Was it worth it? Damn right: 3500MB/s from an NVMe means our virtual machines should fly, especially “make installworld” inside FreeBSD virtual machines! We also know bhyve is faster than even Linux KVM when using nvme as the disk backend, so I will show you how to do it, and hopefully this only takes an hour out of your day 🙂
Problems I ran into:
Just getting Clover onto a USB stick on Windows 11 and then modifying the files proved troublesome; if I had to do it over again, I’d use one of the two better ways I’ll show you below. Fighting with Windows 11 just for access to the USB stick’s EFI partition, versus simply mounting it with FreeBSD and modifying it from there, turned into a real headache. Also, I originally installed FreeBSD on the NVMe first, and then Clover, no matter what I did, would not see it; in the end, installing FreeBSD while booted from Clover did the trick.
Hardware list (home server):
a) Dell R720 – if I had to pick an old Dell machine to do this again, I would pick a Dell R730 instead. The reason is that the R730 and newer have PCIe bifurcation support you can enable in the BIOS. That would let you get a single PCIe card and put multiple NVMe drives on it; without bifurcation support you’re limited to just one NVMe per card. As for the R710 and older, avoid them like the plague: they still have PCIe version 2 slots in the back, which would limit you to about 1500MB/s instead of the full 3500MB/s you get from PCIe version 3 on the R720 and up. If price is a factor, a single NVMe in a Dell R720 is a good option as well. If you cannot afford a PCIe version 4 or 5 build, a Dell R730 stacked with 3 NVMe drives on a PCIe card is the best bang for your buck for commercial use, or a single NVMe in a Dell R720 for home use.
b) Samsung 970 EVO Plus – these are cheaper these days in the 1 and 2TB options. I chose the 1TB option, as I still don’t like the fact that I can’t get a 20TB NVMe for 300 bucks at the end of 2022. It should be enough space to run a lot of VMs. Look at eBay, Kijiji, and Craigslist for used ones first; if you can’t find one, Amazon is not bad for them these days. If you’re going commercial, pick something off eBay like the WD SN720s Netflix has used in the past, and put four of them on a PCIe carrier like the “ASUS Hyper M.2 X16 PCIe 3.0 X4 Expansion Card V2 Supports 4 NVMe M.2” ( https://www.amazon.ca/gp/product/B07NQBQB6Z ). With 4 of those in a ZFS stripe on a FreeBSD install, you should get more IO speed than a single PCIe 4 system with one NVMe at 7000MB/s read/write, and have more longevity. I’ll sketch what that stripe looks like right after this list.
c) PCIe card – keeping it cheap and simple, I chose an M.2 NVMe to PCIe 3.0 x4 adapter with an aluminum heatsink: https://www.amazon.ca/dp/B07JJTVGZM
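For reference, here’s a minimal sketch of that 4-drive stripe, assuming FreeBSD sees the drives as nvd0 through nvd3 (the device names and the pool name “vmpool” are placeholders, not from an actual build):
zpool create vmpool nvd0 nvd1 nvd2 nvd3
zpool status vmpool (confirm all four drives show up in the stripe)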
So for this upgrade, the Samsung NVMe and the PCIe card cost me maybe 100 dollars or less; the real challenge now is to make it bootable 🙂 Before this I was just using a 480GB 2.5 inch SSD, stuck in an Icy Dock and tossed in along with the other drives on the R720’s 3.5 inch backplane I use for SSD storage. Two other spinning-rust 16TB drives are in there on a BYOD controller I use for ZFS storage. The core OS and VMs will get the new shiny 1TB NVMe. For backups I have a similar system I bring online only for backups, so while everything in this system is raid0, it’s technically raid1 with that second machine holding backups whenever I decide to bring it online through the DRAC controller.
To put this in perspective, I should go from roughly 350MB/s on that 2.5 inch SSD to 3500MB/s on this new NVMe. What a difference that is going to make running Windows Server in a VM, or “make installworld” in a FreeBSD VM, all for under 100 dollars. That is the real reason to do it, because obviously for Samba and the like we will be serving from the spinning-rust drives, which at best will saturate only half of a 10 gigabit network card at maybe 250-500MB/s for movies and such. Also remember that even sockets are, at the end of the day, just files, so we should see an increase in socket performance too.
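If you want to sanity-check numbers like that once FreeBSD is up, diskinfo has a quick built-in transfer test (nvd0 is an assumption here; use whatever device name your NVMe shows up as):
diskinfo -t /dev/nvd0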
Clover USB stick:
The simplest way to install the latest Clover is a Mac utility someone wrote called BDUtility, which also works on Windows 11: literally download the program, stick in a USB stick, and you’re golden. Get it from: https://cvad-mac.narod.ru/index/bootdiskutility_exe/0-5
The second way, on Windows 11, is to grab the latest ISO file and burn it with Rufus. You can find the latest Clover ISO at: https://github.com/CloverHackyColor/CloverBootloader/releases (you should just be able to uncompress it with 7-Zip and you’re good to go).
If you went the first way, I didn’t have any issue accessing the files on Windows 11. If you went the Rufus way, we are going to need a program called “Explorer++” from explorerplusplus.com. Download it if you need it, and make sure to run the program as administrator or it won’t work.
Now, if for any reason you go to “This PC” and the USB stick is not there after installing Clover on it, you’ll have to mount it manually. For that, run “cmd” as administrator and type “mountvol” to see the volumes with no letter, then manually mount one with “mountvol M: <long-ass GUID volume path>”. You may have to mount a few of them with different letters to find that Clover partition.
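For example (the Volume{...} path is a placeholder; copy the real one from the mountvol listing):
mountvol
mountvol M: \\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\
(and “mountvol M: /D” removes the letter again when you’re done)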
Now that we have access to the Clover USB stick, via Explorer++ or regular access on the BDUtility route, go into the EFI -> CLOVER -> drivers directory. There will be a directory called something like “off”. We need to copy the file NvmExpressDxe.efi from there into the other folders in the drivers directory, usually called BIOS and UEFI (see the layout sketch after the list below). Another FreeBSD user’s blog said he copied the following as well to make it work with FreeBSD, so copy them all if you wish…
- AudioDxe.efi
- CsmVideoDxe.efi
- DataHubDxe.efi
- EmuVariableUefi.efi
- FSInject.efi
- Fat.efi
- NvmExpressDxe.efi
- PartitionDxe.efi
- SMCHelper.efi
- UsbKbDxe.efi
- UsbMouseDxe.efi
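However you copy them, the goal is for NvmExpressDxe.efi (plus any of the extras above) to end up in each active driver folder. The exact folder names vary by Clover release (older versions used drivers64 and drivers64UEFI), so treat this layout as illustrative:
EFI/CLOVER/drivers/off/NvmExpressDxe.efi (the source copy)
EFI/CLOVER/drivers/BIOS/NvmExpressDxe.efi
EFI/CLOVER/drivers/UEFI/NvmExpressDxe.efi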
Now we should be able to boot off the USB stick to test it. Once in Clover we can hit “F3” to reveal any hidden EFI partitions we can boot from, and we can hit “F2” to create a misc directory on the USB stick that will contain the GUID we need later to customize the CLOVER->config.plist file for auto-booting.
And so began the fight to auto-boot that NVMe, which cost me a day. Here is the way I got it to work: create another USB stick with the FreeBSD installer on it for that NVMe. We are going to put the Clover and FreeBSD USB sticks into the server at the same time. When we boot up the machine, we hit “F11” (on Dell servers) to get into the boot menu and select the Clover USB. Once in Clover, we hit “F3” to find the FreeBSD install USB, boot it, then just install FreeBSD normally onto the NVMe drive. Once done, we remove the FreeBSD USB stick and configure the BIOS to boot only the Clover USB stick. Now when we boot it and hit F3, we should have the option to boot our new FreeBSD install on the NVMe drive!
So at this point we are golden, but obviously we do not want to have to go into Clover by hand on every reboot to boot the NVMe drive; it would be nice if we could reboot the machine and have it boot the NVMe automatically. Since we now have FreeBSD installed on the NVMe, let’s just boot it and mount the Clover USB stick from there for the modifications. Make sure to hit “F2” beforehand so Clover creates a log file in the misc directory on the USB stick, which we will need next. Then hit F3, select the NVMe, and boot it.
Automating Clover Booting FreeBSD:
Normally when we install FreeBSD with ZFS, FreeBSD mounts its own EFI partition automatically for us at /boot/efi. What we want is to modify /etc/fstab so we also have access to the Clover EFI partition at /boot/efi2 on demand. First do:
mkdir /boot/efi2
then nano /etc/fstab and add the following, assuming da0 is your Clover USB stick (check with “gpart show da0”):
/dev/da0s1 /boot/efi2 msdosfs rw 2 2
Now a simple “mount /boot/efi2” should do it, and on reboots we will always have it available.
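You can confirm it landed where we expect with:
df -hT /boot/efi2 (should show an msdosfs filesystem on /dev/da0s1)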
cd /boot/efi2/EFI/CLOVER/; cat misc/preboot.log
mv config.plist config.plist.old; nano config.plist
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Boot</key>
    <dict>
        <key>Timeout</key>
        <integer>0</integer>
        <key>DefaultVolume</key>
        <string>D8189770-86A8-11ED-B706-E4434BF65F00</string>
    </dict>
    <key>GUI</key>
    <dict>
        <key>TextOnly</key>
        <true/>
        <key>Custom</key>
        <dict>
            <key>Entries</key>
            <array>
                <dict>
                    <key>Hidden</key>
                    <false/>
                    <key>Volume</key>
                    <string>D8189770-86A8-11ED-B706-E4434BF65F00</string>
                    <key>Disabled</key>
                    <false/>
                    <key>Type</key>
                    <string>Linux</string>
                    <key>Title</key>
                    <string>DELL R720 NVMe boot</string>
                </dict>
            </array>
        </dict>
    </dict>
</dict>
</plist>
Now you will notice the 2 lines where I have “D8189770-86A8-11ED-B706-E4434BF65F00”. Replace this with the GUID of your own NVMe drive, which you got from “cat misc/preboot.log”. Save the file and we are almost done!
If you have problems locating it, look for a line like this one from my log:
38:018 0:000 - [07]: Volume: PciRoot(0x2)\Pci(0x2,0x0)\Pci(0x0,0x0)\NVMe(0x1,0D-B9-9E-01-5B-38-25-00)\HD(2,GPT,D8189770-86A8-11ED-B706-E4434BF65F00,0x82800,0x1000000)
38:019 0:000 Result of bootcode detection: bootable unknown (legacy)
From here I can see that D8189770-86A8-11ED-B706-E4434BF65F00 is the GUID I need, since it sits in the HD(...) portion of the entry with NVMe in its name. Think of it like all those Linux /dev/disk/by-whatever GUID disk names you’d use in /etc/fstab or for passing disks through to KVM on Linux.
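If the log is long, grep narrows it down fast, and sed can swap my GUID for yours in one shot (YOUR-GUID-HERE is a placeholder, and you kept the config.plist.old backup from earlier just in case):
grep -i nvme misc/preboot.log
sed -i '' 's/D8189770-86A8-11ED-B706-E4434BF65F00/YOUR-GUID-HERE/g' config.plist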
Alright, you’d think we’d be done and everything would work properly, right? Nope; for me it got to the FreeBSD bootloader and stalled there. To fix this, I added the following on FreeBSD to skip the FreeBSD boot menu:
nano /boot/loader.conf
(and add)
autoboot_delay="-1"
Save and exit. This setting makes the FreeBSD loader boot immediately instead of waiting at its menu, so nothing Clover sends can interrupt it. I’m sure I could spend another day playing with all the Clover settings to find something better, but I’m not going to; this is good enough. Change it to 5 if you ever need the boot menu again down the road.
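If you’d rather skip the editor next time, sysrc can write the same setting directly (it edits key="value" files like loader.conf just fine):
sysrc -f /boot/loader.conf autoboot_delay="-1"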
Now we can reboot the machine any time and it boots straight into our NVMe drive!
RECAP:
While this was a real pain in the butt to get working, the pros outweigh the cons. We can continue life as normal, rebooting the machine at will for updates. We are using an actual NVMe and getting all its performance benefits. Now go set all your VMs to use “nvme” instead of “virtio-blk” on all your vm-bhyve ZFS datasets and enjoy using bhyve as it was intended 🙂 You’re now faster than Linux KVM, now that you’re not using virtio-blk anymore 🙂
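As a sketch, with vm-bhyve that’s a one-line change per guest (“freebsd-vm” is a placeholder guest name):
vm configure freebsd-vm (opens that guest’s config in your editor)
disk0_type="nvme" (replacing the old disk0_type="virtio-blk" line)
Restart the guest afterwards so bhyve brings the disk up as an emulated NVMe device.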