All posts by SunSaturn

FreeBSD Bhyve: why use it? A Bhyve suspend/resume howto + disaster recovery with UFS and ZFS

INTRO:

Why FreeBSD Bhyve for virtualization? Sure, there are other virtualization packages out there, but you already have FreeBSD! Why not use ESXi? From experience, any closed-source product generally has a maximum lifetime of about 10 years before it becomes abandonware, and open source will generally overtake it by then. Also, anything you don't fully control from a root prompt, with source code you can read, will be a nightmare to administer.

What about Linux and KVM? Yes, before Bhyve they were the best; the virt-manager and virsh/virt-viewer commands were unbeatable. Here is the thing with Red Hat, however: over the course of just two years they have made most of the sysadmins on the internet angry. First they removed the MegaRAID drivers from the kernel (yes, people still run Dell R710s and the like as file servers), which caused a lot of people grief. Then the upgrade to Red Hat 9 killed off a lot of Dell servers as well: it shipped a glibc that did not work on many people's systems, so those who upgraded had to reinstall with Rocky Linux 8. And to top it off, they killed off CentOS by moving it to CentOS Stream, angering the rest of the internet. So everyone moved to Rocky Linux (from the original creator of CentOS) to keep their hosts in production.

The worst thing about Linux as a virtualization host to begin with is that every two years, when a new major release comes out, most people have to take time out of their day to reinstall the OS. A core OS! With FreeBSD you can update to new major releases with a simple freebsd-update command, or by rebuilding from source with "make buildworld". No downtime reinstalling, no host to replace: a sysadmin's dream.

I’ve had fights with programmers over the years about Linux versus FreeBSD, especially when it came to libevent and libev, where people swore by epoll over kqueue. From experience, put FreeBSD under load and then a Linux server under the same load: your load will be better served on FreeBSD with less CPU utilization. The network stack is better and more efficient; even Netflix uses it.

And now I know what everyone is going to say: I’ve been using Linux/KVM and virt-manager for years, and FreeBSD doesn’t even have stable suspend/resume. I’m not losing my 20 SSH connections to my guests just because I had to reboot for a kernel update. I can’t disagree with this. I’d hate to be four hours into a script on one of them, reboot for a kernel update, and lose all my work because I forgot to save on a guest and the host didn’t resume its guests properly after the reboot.

So why use Linux at all? I think Linux is fine as a guest, just not in the front line of battle, for the reasons I mentioned above. There are still lots of things that will only run on Linux: if you're a Flutter programmer, the Dart language still has not been ported to FreeBSD, so your apps will still need a Linux guest for test builds. If I were considering working on an app, I would most likely install an Ubuntu guest for that: do all your Dart/C/Java there along with Windows 11, and your Python FastAPI services interfacing with MySQL on Linux and/or FreeBSD.

Honestly, for the server side of your app I would use a FreeBSD guest from the start to host FastAPI. I will show you how you can pass a second disk through to the guest with a proper 16k block size for maximum MySQL performance. The biggest reason to use FreeBSD up front: you can be stuck six months to a year working on an app; do you really need a week of downtime because a new Linux release came out and you have to reinstall? I didn’t think so. Let FreeBSD "shine bright like a diamond", as Rihanna would sing.

So today I am going to take on what is the most important and exciting #1 feature of the FreeBSD 14 release: suspend and resume. I will be installing FreeBSD CURRENT to test it. I will also show how FreeBSD is a force to be reckoned with as the core OS for virtualization.

Test case scenario:

Dell R720, with an Intel enterprise 480GB SSD slapped into an icy dock as the core OS drive, along with two 12TB spinning-rust drives for a separate ZFS backup pool, all on a MegaRAID JBOD controller. FreeBSD CURRENT, built with production flags and with the experimental suspend and resume features enabled in kernel and userland.

For the main OS we will use the default ZFS install. For guest tests we will install two FreeBSD guests, one with ZFS and one with UFS. We will suspend them both, then resume them and see if we lose our SSH connections :) I will most likely do a part 2 on installing Linux and Windows guests, as I can see this article getting pretty lengthy with just these two…

For third-party packages to interface with suspend and resume, we can’t use any: we will have to do everything manually, and I will keep it simple with plain bash scripts. Maybe I’ll code an advanced interface with Python and asyncio in the future. One third-party package I did run across and kind of liked was vm-bhyve. To be honest, the only thing I liked about it was its directory layout, so that is the only thing I will keep consistent in our scenario.

Bhyve still needs a good interface. I think the best fit would be a Python FastAPI or C/Rust kqueue server on the back end. The front end is hands down Flutter: one code base that builds for Linux, apache/nginx web, Android, iOS, Windows, and even the TVs in our living rooms, and natively on FreeBSD once Dart is ported to it. I wouldn’t even bother with Java or Kotlin, for those reasons alone. I think if that were built for a year, it would turn FreeBSD into everyone’s favorite virtualization OS.

Back to sysadmin stuff. Let’s set this up once, really well, and we should be good for the next 5-10 years; just clone the drive if moving to a PCIe 4.0 or 5.0 machine. This is not Linux :)

Setup Bhyve:

Let’s start with something simple: set up our directory structure like vm-bhyve does, modify what we need, and uninstall the package. I’m going to assume at this point you just have a FreeBSD release like 13.1 installed, so let’s begin….

https://github.com/churchers/vm-bhyve

pkg install vm-bhyve bhyve-firmware
zfs create -o mountpoint=/vm zroot/vm
sysrc vm_enable="YES"
sysrc vm_dir="zfs:zroot/vm"
vm init
cp /usr/local/share/examples/vm-bhyve/* /vm/.templates/

OK, great, now we have a directory structure to work with. The vm-bhyve author likes to keep everything guest-related in /vm/<guestname>, so let’s keep that directory layout idea and delete the package now.
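For reference, the layout we are keeping looks roughly like this ("vm init" created the dot-directories; the per-guest directories will be created by our scripts later):

/vm/.templates/   # the sample guest configs copied above
/vm/.iso/         # install ISOs (we download one here later)
/vm/asterisk/     # one directory per guest, holds suspend checkpoints
/vm/asterisk2/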

pkg delete vm-bhyve
nano /etc/rc.conf
(now edit /etc/rc.conf and remove/comment out the vm-bhyve sysrc lines)
(while we're in here, let's add support for 4 guest tap interfaces)

#support for 4 test guests - substitute "ix0" for your own interface
cloned_interfaces="bridge0 tap0 tap1 tap2 tap3"
ifconfig_bridge0_name="br0"
ifconfig_br0="addm ix0 addm tap0 addm tap1 addm tap2 addm tap3"
(exit /etc/rc.conf)
#now let's create by hand on the command line what those rc.conf lines do at boot
ifconfig bridge create
ifconfig tap0 create
ifconfig tap1 create
ifconfig tap2 create
ifconfig tap3 create
ifconfig bridge0 addm ix0 addm tap0 addm tap1 addm tap2 addm tap3
ifconfig bridge0 name br0
ifconfig br0 up
#damn starting to like /etc/rc.conf better already :)
(alright now let's edit /etc/sysctl.conf)
nano /etc/sysctl.conf (add following:)
#BHYVE
net.link.tap.up_on_open=1
#BHYVE + PF nat
net.inet.ip.forwarding=1
net.inet6.ip6.forwarding=1
(exit /etc/sysctl.conf)
nano /boot/loader.conf
(have it look like this:)
autoboot_delay="5"
kernels="kernel kernel.old"
boot_serial="YES"
#stuff added by install
kern.geom.label.disk_ident.enable="0"
kern.geom.label.gptid.enable="0"
cryptodev_load="YES"
zfs_load="YES"
#bhyve
vmm_load="YES"
nmdm_load="YES"
if_bridge_load="YES"
if_tap_load="YES"
(exit /boot/loader.conf)

At this point we should be able to do an “ifconfig” and see our bridge set up properly for 4 possible guests; just add more tap interfaces for additional guests. Now let’s reboot and make sure our bridge and friends come back:

shutdown -r now
ifconfig -a
#let's install a few packages to help us
pkg install screen tightvnc git bash
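Since the sysctl block above enables forwarding for PF NAT, here is a minimal /etc/pf.conf sketch for guests on a private network behind the host (if your guests are bridged straight onto the LAN as above, you don't need NAT at all). The interface name and guest subnet are assumptions; adjust to your own setup:

#/etc/pf.conf - minimal NAT sketch (assumes ix0 uplink, 10.0.0.0/24 guest net)
ext_if="ix0"
guests="10.0.0.0/24"
set skip on lo0
nat on $ext_if inet from $guests to any -> ($ext_if)
pass all

Enable it with sysrc pf_enable="YES" and service pf start.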

Moving to FreeBSD CURRENT with experimental suspend/resume support (MOVIE TIME!):

OK, the time-consuming part; hope you have a lot of cores/fast CPUs :)

Alright, even if you have CURRENT installed you still will not have snapshot support enabled, so we are going to recompile the kernel and userland for it. Honestly, this is the way to upgrade FreeBSD from source anytime; the only difference on stable/release is that I would check out a different branch with git and copy from GENERIC instead of GENERIC-NODEBUG. You may be thinking you’ll just check out a release branch and compile in support. I tried that; it won’t work properly. This is your only way to test it before the 14 release, where it actually works for the most part.

The make buildworld, make buildkernel, and make installworld commands will take a long time. I recommend going to play your favorite video game or watch a movie after executing them, returning every so often to check in. Honestly, once you have buildworld out of the way it’s not that bad after. I’d recommend running everything in a screen session as well; that way, if something happens, you can log back in and do a “screen -r”. After you’re done, return tomorrow and we will continue on with testing suspend and resume :)

screen
git clone https://git.FreeBSD.org/src.git /usr/src
cd /usr/src/sys/amd64/conf
cp GENERIC-NODEBUG MYKERNEL (then edit MYKERNEL and add: options         BHYVE_SNAPSHOT)
cd /usr/src
#(find the number of CPUs and adjust -j below - "dmesg|grep SMP")
make -j12 buildworld -DWITH_BHYVE_SNAPSHOT -DWITH_MALLOC_PRODUCTION
make -j12 buildkernel KERNCONF=MYKERNEL
make installkernel KERNCONF=MYKERNEL
shutdown -r now
cd /usr/src; make installworld
shutdown -r now
etcupdate -B
pkg bootstrap -f #if new freebsd version
pkg upgrade -f   #if new freebsd version
shutdown -r now

Preliminary Thoughts on Creating Guests:

We are back and ready to roll! Creating guests is probably something every sysadmin sits and thinks about for hours upon hours. What performance improvements could I make? Will I be able to mount the /etc and /boot directories of an offline guest if anything goes wrong or I screw something up on the guest? How will I do backups for them, and how will I recover if disaster strikes? These guests, once set up, can run 5-10 years; there is no room for error, and everything has to be accounted for.

I will give you my take on this from decades of experience. When you are starting out fresh, your main concern is that the main virtualization host does nothing but virtualization and routing. You don’t want Apache, email servers, or any other server processes running on it that are better suited to guests, which can crash if need be without affecting the main host and the other guests running on it. You want the main host rock stable at all times. You can play with the guests and give them the memory and CPUs they need to run what they need.

On the main FreeBSD host: virtualization, DNS, isc-dhcpd (v4 and v6), radvd, the PF firewall, and at most backups/ZFS snapshots. Over the years, before FreeBSD had Bhyve, I offloaded all backups to a ZFS guest as well, using rsync or zfs send/recv. The choice is yours, but my recommendation is to run as little as possible on the main host in the way of internet services.

If you did it all right, you’ll find you’re barely doing anything on the main host and are always logged into guests instead; then you know you did everything right. You might edit the PF firewall to block something from all guests from time to time, or do updates, and that’s about it. Remember that your main host will save you with disaster recovery on guests, create new guests, and basically be the blood and soul of your system, and this is where FreeBSD shines.

If you have the ability to run FreeBSD as the main host at all, you’ll save yourself years of headaches. While every Linux sysadmin is reinstalling a new release on their main host every 2-4 years, you did a freebsd-update or rebuilt world and went and watched a movie, while the other sysadmins were pulling their hair out all week, wishing they had documented their configs better :)

What about mission critical? In that situation you’re going to learn a lot about ZFS and send/recv to clone guests on the fly. For every other situation, a simple rsync once a night of the /etc, /usr/local/etc, /boot, /root, and /home directories is all you need; why waste space? I’m not going to clone a 100GB guest byte for byte: if something happens to that guest, I have all its configs and I’m good to go. Install a simple rsyncd.conf on each guest to back up its configs each night (a sketch follows below). Every host is in charge of backing up all its guests to a directory; then it’s your decision to do offline backups with rsync or zfs send/recv from that host.
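As a sketch of that nightly config backup (module name, host IP, and paths are assumptions, adjust to your own network), on each FreeBSD guest do a "pkg install rsync", drop something like this into /usr/local/etc/rsyncd.conf, then "sysrc rsyncd_enable=YES && service rsyncd start":

uid = root
gid = wheel
[configs]
   path = /
   read only = yes
   hosts allow = 192.168.0.2   # the main host only

Then on the main host, a nightly cron job pulls just the config directories into the guest's directory, something like:

#crontab -e on the host
30 3 * * * rsync -aR asterisk::configs/etc asterisk::configs/usr/local/etc asterisk::configs/boot asterisk::configs/root asterisk::configs/home /vm/asterisk/backup/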

So I know all the worries; been there, done that. As I move through this guide, I’m going to show you disaster mitigation techniques on the host as well, so you’re well prepared if disaster ever strikes a guest.

MY RESEARCH:

To run FreeBSD effectively as a main host in production, a lot has to be accounted for, and I will list it here:

  1. All guests can be resumed quickly after the main host suspends them and reboots for a kernel update. No SSH connections to guests are lost, and no forgotten unsaved work bites you when guests resume.
  2. If a guest fails to boot at any time, the ability to mount the guest's /etc, /boot, and other directories on the host to fix problems is mandatory.
  3. If a guest runs out of space, the ability to resize the guest on the host and grow the filesystem on the guest afterwards, with as little downtime as possible.
  4. The ability to suspend a guest quickly to fix any problems. With this, you typically want to reboot the guest properly and fix the issues from the host afterwards.
  5. FreeBSD only: the ability to ZFS-snapshot guests and roll them back to any previous snapshot if needed.

Bugs I found and possible remedies:

In my preliminary research I found that what bhyve actually does is run itself in a while loop: if it exits with exit code 0, bhyve is run again, which makes a “shutdown -r now” inside the guest work properly. If it exits with any other code, the loop is broken and everything can be cleaned up after the guest. The problem I found with suspend/resume is that a suspended guest also exits with code 0; there really should be a distinct exit code for it. A progress bar is written to STDOUT before the guest shuts down, and currently the only way to differentiate the two cases is to capture that output. This is something better left to a Python asyncio/FastAPI server process keeping all guests in a loop and telling a suspend apart from a clean exit; a separate command-line utility talking to that server's APIs would probably be the best solution right now. That is a coding exercise in Python that shouldn’t take more than a week. A 6-12 month Flutter front end with a GUI on top of that would probably be best overall, supporting the most platforms from one code base.
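Until bhyve grows a distinct exit code, a rough workaround is to capture bhyve's output and grep it before deciding whether to restart. A minimal bash sketch, assuming a marker string; match whatever your bhyve build actually prints during a checkpoint:

#!/bin/bash
#Sketch: loop a guest like bhyve's reboot behavior, but stop looping when an
#exit code 0 was really a suspend. The "Suspend" marker string is an assumption.
BHYVE_CMD="bhyve -c 8 -m 8G -w -H -A -s 0:0,hostbridge -s 4:0,virtio-blk,/dev/zvol/zroot/asterisk -s 5:0,virtio-net,tap0 -s 31,lpc -l com1,/dev/nmdm_asteriskA -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd asterisk"
LOG="/vm/asterisk/last_run.log"
while true; do
   $BHYVE_CMD 2>&1 | tee "$LOG"
   rc=${PIPESTATUS[0]}                 #bhyve's own exit code, not tee's
   [ "$rc" -ne 0 ] && break            #error or poweroff: stop the loop
   grep -q "Suspend" "$LOG" && break   #exit code 0, but it was a suspend
   #plain exit code 0 = guest rebooted, loop around and start it again
done

In the test scripts below I simply disable the restart loop instead.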

On further research, there are mainly three ways to pass disks to guests: virtio-blk, virtio-scsi, and nvme. In my tests of suspend/resume, virtio-blk was stable every time I ran it. With virtio-scsi I had issues: on resume I could do an “ls -al” and see the filesystem properly, but after doing anything else, like a “df -k” or logging into the system, it would hang and eventually crash with exit code 139. I reported this to the freebsd-current mailing list, and hopefully someone gets around to looking at it.

Further research on virtio-blk has shown that it is slowly being phased out in favor of virtio-scsi. Apparently this is the newer way of passing disks and will become more common in the future; the reasoning is that the virtio-blk code is too hard to rework, while virtio-scsi has more features and the ability to pass far more disks from a host to guests. As for nvme, I did not have any to test with. Regardless of SSD type, I believe the move will be to fold all SSD/NVMe passthrough into one virtio-scsi configuration in the future, so users should embrace virtio-scsi once it is stable with suspend/resume on FreeBSD.

Upon further testing, attempting to mount a ZFS guest on the main host directly from the zvol proved unstable, with no plans to fix it. When I contacted the freebsd-current mailing list about this, I was informed it causes deadlocks due to lock recursion, which is why the sysctl vfs.zfs.vol.recursive is turned off by default. The suggestion was to use SCSI instead, which in my testing did mount the ZFS guest without issues: a further hint that virtio-scsi is the future.

Examining the C source code of the virtio-scsi driver shows that it creates /dev/cam/* devices, which let the host pass targets, with their configured LUNs, through to guests.

router:/root # ls -al /dev/cam/ctl*
crw------- 1 root operator 0, 164 Nov 10 02:29 /dev/cam/ctl
crw------- 1 root operator 0, 167 Nov 10 08:14 /dev/cam/ctl1.0
crw------- 1 root operator 0, 168 Nov 10 08:14 /dev/cam/ctl2.0
router:/root #

What happens here is that a port is created which can be used on the virtio-scsi line of a guest’s config to pass devices through. Upon attaching to the targets, the LUNs create /dev/da* devices.

router:/root # ls -al /dev/da*
crw-r----- 1 root operator 0, 134 Nov 10 02:29 /dev/da0
crw-r----- 1 root operator 0, 135 Nov 10 02:29 /dev/da0s1
crw-r----- 1 root operator 0, 136 Nov 10 02:29 /dev/da0s2
crw-r----- 1 root operator 0, 138 Nov 10 02:29 /dev/da0s2a
crw-r----- 1 root operator 0, 181 Nov 10 08:14 /dev/da1
crw-r----- 1 root operator 0, 183 Nov 10 08:14 /dev/da1p1
crw-r----- 1 root operator 0, 184 Nov 10 08:14 /dev/da1p2
crw-r----- 1 root operator 0, 185 Nov 10 08:14 /dev/da1p3
crw-r----- 1 root operator 0, 182 Nov 10 08:14 /dev/da2
crw-r----- 1 root operator 0, 186 Nov 10 08:14 /dev/da2p1
crw-r----- 1 root operator 0, 187 Nov 10 08:14 /dev/da2p2
crw-r----- 1 root operator 0, 188 Nov 10 08:14 /dev/da2p3
crw-r----- 1 root operator 0, 206 Nov 10 08:14 /dev/da3
crw-r----- 1 root operator 0, 207 Nov 10 08:14 /dev/da3p1
crw-r----- 1 root operator 0, 208 Nov 10 08:14 /dev/da3p2
router:/root #

The first one, /dev/da0, is reserved for SCSI itself; every other LUN created in a target creates a new /dev/da* device, beginning at /dev/da1 and so forth.

For my purposes I had the ZFS guest as the first LUN in my test and was able to manipulate /dev/da1 successfully to mount the ZFS guest. I could also pass a target to a guest with as many disks/CDROMs/ISOs (LUNs) as I wanted, just by passing the /dev/cam/ctl1.0 target.

Let’s do a quick illustration of passing zvols through SCSI in FreeBSD instead; you could also pass disk images if you wanted:

nano /etc/ctl.conf:
(add following:)
portal-group pg0 {
        discovery-auth-group no-authentication
        listen 127.0.0.1:3260
}
target iqn.2005-02.com.sunsaturn:target0 {
        auth-group no-authentication
        portal-group pg0
#bhyve virtio-scsi disk - /dev/cam/ctl1.0
        port ioctl/1
        lun 0 {
                path /dev/zvol/zroot/asterisk
                #blocksize 128
                serial 000c2937247001
                device-id "iSCSI Disk 000c2937247001"
                option vendor "FreeBSD"
                option product "iSCSI Disk"
                option revision "0123"
                option insecure_tpc on
        }
}
target iqn.2005-02.com.sunsaturn:target1 {
        auth-group no-authentication
        portal-group pg0
#bhyve virtio-scsi disk - /dev/cam/ctl2.0
        port ioctl/2

        lun 0 {
                path /dev/zvol/zroot/asterisk2
                #blocksize 128
                serial 000c2937247002
                device-id "iSCSI Disk 000c2937247002"
                option vendor "FreeBSD"
                option product "iSCSI Disk"
                option revision "0123"
                option insecure_tpc on
        }
        lun 1 {
                path /vm/.iso/FreeBSD-14.0-CURRENT-amd64-20221103-5cc5c9254da-259005-disc1.iso
#bhyve seems to just hang when I set it to an actual CDROM so let it default to type 0
                #device-type 5
                serial 000c2937247003
                device-id "iSCSI CDROM ISO 000c2937247003"
                option vendor "FreeBSD CDROM"
                option product "iSCSI CDROM"
                option revision "0123"
                option insecure_tpc on
        }
}

(close and exit)
nano /etc/iscsi.conf
(add following:)
t0 {
        TargetAddress   = 127.0.0.1:3260
        TargetName      = iqn.2005-02.com.sunsaturn:target0
}
t1 {
        TargetAddress   = 127.0.0.1:3260
        TargetName      = iqn.2005-02.com.sunsaturn:target1
}
(close and exit)
nano /etc/rc.conf
(add following:)
#ISCSI - service ctld start && service iscsid start
#server
ctld_enable="YES"          #load /etc/ctl.conf
iscsid_enable="YES"        #start iscsid process to connect to ctld
#client - service iscsictl start
iscsictl_enable="YES"      #connect to all targets in /etc/iscsi.conf
iscsictl_flags="-Aa"

(close and exit)
(now let's create some zvols to install guests on)
zfs create -V30G -o volmode=dev zroot/asterisk
zfs create -V30G -o volmode=dev zroot/asterisk2
cd /vm/.iso
wget https://download.freebsd.org/snapshots/amd64/amd64/ISO-IMAGES/14.0/FreeBSD-14.0-CURRENT-amd64-20221103-5cc5c9254da-259005-disc1.iso
(let's start scsi manually)
service ctld start && service iscsid start && service iscsictl start

Now we can see all those /dev/da* devices for us to manipulate on the host, for mounting shut-down guests, as well as the /dev/cam/* devices for passing to guests.
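Before moving on, it's worth sanity-checking both sides (output will vary with your setup):

ctladm portlist              #the CTL ports backing /dev/cam/ctl*
iscsictl -L                  #session list, shows which da* each target got
ls -al /dev/cam/ctl* /dev/da*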

Next let’s actually install the 2 test guests, asterisk and asterisk2. For the asterisk guest I will install FreeBSD on UFS, and for asterisk2 I will install a ZFS guest and also pass the FreeBSD ISO through to it.

To make this easier, let’s use a bash script I quickly coded to test the suspend/resume capabilities of each guest; we’ll call the two copies asterisk.sh and asterisk2.sh:

cd /root
nano -w asterisk.sh
(add following)
#!/bin/bash
#
# General script to test bhyve suspend/resume features
#
# Requirements: FreeBSD current
# screen
# git clone https://git.FreeBSD.org/src.git /usr/src
# cd /usr/src/sys/amd64/conf; cp GENERIC-NODEBUG MYKERNEL (edit MYKERNEL, add: options         BHYVE_SNAPSHOT)
# cd /usr/src
# (find the number of CPUs and adjust -j below - "dmesg|grep SMP")
# make -j12 buildworld -DWITH_BHYVE_SNAPSHOT -DWITH_MALLOC_PRODUCTION
# make -j12 buildkernel KERNCONF=MYKERNEL
# make installkernel KERNCONF=MYKERNEL
# shutdown -r now
# cd /usr/src; make installworld
# shutdown -r now
# etcupdate -B
# pkg bootstrap -f #if new freebsd version
# pkg upgrade -f   #if new freebsd version 
#
# Report anomalies to dan@sunsaturn.com

##############EDIT ME#####################


HOST="127.0.0.1"                        # vncviewer 127.0.0.1:5900 - pkg install tightvnc
PORT="5900"
WIDTH="800"
HEIGHT="600"
VMNAME="asterisk"
ISO="/vm/.iso/FreeBSD-14.0-CURRENT-amd64-20221103-5cc5c9254da-259005-disc1.iso"
DIR="/vm/asterisk"                      # Used to hold files when guest suspended
SERIAL="/dev/nmdm_asteriskA"           # For "screen /dev/nmdm_asteriskB" - pkg install screen
TAP="tap0"
CPU="8"
RAM="8G"

#For testing virtio-scsi
STORAGE="/dev/cam/ctl1.0"               # port from /etc/ctl.conf(port ioctl/1) - core dumping on resume
DEVICE="virtio-scsi"

#for testing virtio-blk                 # Comment out above 2 lines if using these
#DEVICE="virtio-blk"                    
#STORAGE="/dev/zvol/zroot/asterisk"     # Standard zvol
#STORAGE="/dev/da1"                     # Block device created from iscsictl

#########################################

usage() {
   echo "Usage: $1 start    (Start the guest: $VMNAME)"; 
   echo "Usage: $1 stop     (Stop the guest: $VMNAME)"; 
   echo "Usage: $1 resume   (Resume the guest from last suspend: $VMNAME)"; 
   echo "Usage: $1 suspend  (Suspend the guest: $VMNAME)"; 
   echo "Usage: $1 install  (Install new guest: $VMNAME)"; 
   exit
}

if [ ! -d "$DIR" ]; then 
   mkdir -p $DIR
fi

#if [ -z "$2" ]; then
#   usage
#else
#   VMNAME=$2
#fi


if [ "$1" == "install" ]; then
   #Kill it before starting it
   echo "Execute: screen $SERIAL"
   bhyvectl --destroy --vm=$VMNAME
   bhyve -c $CPU -m $RAM -w -H -A \
      -s 0:0,hostbridge \
      -s 3:0,ahci-cd,$ISO \
      -s 4:0,$DEVICE,$STORAGE  \
      -s 5:0,virtio-net,$TAP \
      -s 29,fbuf,tcp=$HOST:$PORT,w=$WIDTH,h=$HEIGHT \
      -s 30,xhci,tablet \
      -s 31,lpc -l com1,stdio \
      -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
      $VMNAME
   #kill it after 
   bhyvectl --destroy --vm=$VMNAME
elif [ "$1" == "start" ]; then 
   while true
   do
      echo "Starting $VMNAME -s 29,fbuf,tcp=$HOST:$PORT,w=$WIDTH,h=$HEIGHT"
      #Kill it before starting it
      bhyvectl --destroy --vm=$VMNAME > /dev/null 2>&1
      bhyve -c $CPU -m $RAM -w -H -A \
         -s 0:0,hostbridge \
         -s 4:0,$DEVICE,$STORAGE  \
         -s 5:0,virtio-net,$TAP \
         -s 29,fbuf,tcp=$HOST:$PORT,w=$WIDTH,h=$HEIGHT \
         -s 30,xhci,tablet \
         -s 31,lpc -l com1,$SERIAL \
         -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
         $VMNAME
      #DISABLING REBOOT LOOP AS SUSPEND RETURNS ERROR CODE 0 AS WELL
      #if [ "$?" != 0 ];
      #then
      #   echo "The exit code was not reboot code 0!: $?"
      #   exit
      #fi
      echo "The exit code was : $?"
      exit
   done
elif [ "$1" == "resume" ]; then 
   while true
   do
      echo "Starting $VMNAME -s 29,fbuf,tcp=$HOST:$PORT,w=$WIDTH,h=$HEIGHT"
      #Kill it before starting it
      bhyvectl --destroy --vm=$VMNAME > /dev/null 2>&1
      if [ -f "$DIR/default.ckp" ]; then
         bhyve -c $CPU -m $RAM -w -H -A \
            -s 0:0,hostbridge \
            -s 4:0,$DEVICE,$STORAGE  \
            -s 5:0,virtio-net,$TAP \
            -s 29,fbuf,tcp=$HOST:$PORT,w=$WIDTH,h=$HEIGHT \
            -s 30,xhci,tablet \
            -s 31,lpc -l com1,$SERIAL \
            -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
            -r $DIR/default.ckp \
            $VMNAME
      else
         echo "Guest was never suspended"
         exit
      fi
      #DISABLING REBOOT LOOP AS SUSPEND RETURNS ERROR CODE 0 AS WELL
      #if [ "$?" != 0 ];
      #then
      #   echo "The exit code was not reboot code 0!: $?"
      #   exit
      #fi
      echo "The exit code was : $?"
      exit
   done
elif [ "$1" == "suspend" ];
then 
   bhyvectl --suspend $DIR/default.ckp --vm=$VMNAME

elif [ "$1" == "stop" ]; then 
   bhyvectl --destroy --vm=$VMNAME 
else 
   usage
fi

Let’s also create asterisk2.sh:

#!/bin/bash
#
# General script to test bhyve suspend/resume features
#
# Requirements: FreeBSD current
# screen
# git clone https://git.FreeBSD.org/src.git /usr/src
# cd /usr/src/sys/amd64/conf; cp GENERIC-NODEBUG MYKERNEL (edit MYKERNEL, add: options         BHYVE_SNAPSHOT)
# cd /usr/src
# (find the number of CPUs and adjust -j below - "dmesg|grep SMP")
# make -j12 buildworld -DWITH_BHYVE_SNAPSHOT -DWITH_MALLOC_PRODUCTION
# make -j12 buildkernel KERNCONF=MYKERNEL
# make installkernel KERNCONF=MYKERNEL
# shutdown -r now
# cd /usr/src; make installworld
# shutdown -r now
# etcupdate -B
# pkg bootstrap -f #if new freebsd version
# pkg upgrade -f   #if new freebsd version 
#
# Report anomalies to dan@sunsaturn.com

##############EDIT ME#####################


HOST="127.0.0.1"                        # vncviewer 127.0.0.1:5900 - pkg install tightvnc
PORT="5901"
WIDTH="800"
HEIGHT="600"
VMNAME="asterisk2"
ISO="/vm/.iso/FreeBSD-14.0-CURRENT-amd64-20221103-5cc5c9254da-259005-disc1.iso"
DIR="/vm/asterisk2"                      # Used to hold files when guest suspended
SERIAL="/dev/nmdm_asterisk2A"           # For "screen /dev/nmdm_asterisk2B" - pkg install screen
TAP="tap1"
CPU="8"
RAM="8G"

#For testing virtio-scsi
STORAGE="/dev/cam/ctl2.0"               # port from /etc/ctl.conf(port ioctl/2) - core dumping on resume
DEVICE="virtio-scsi"

#for testing virtio-blk                 # Comment out above 2 lines if using these
#DEVICE="virtio-blk"                    
#STORAGE="/dev/zvol/zroot/asterisk2"     # Standard zvol
#STORAGE="/dev/da2"                     # Block device created from iscsictl

#########################################

usage() {
   echo "Usage: $1 start    (Start the guest: $VMNAME)"; 
   echo "Usage: $1 stop     (Stop the guest: $VMNAME)"; 
   echo "Usage: $1 resume   (Resume the guest from last suspend: $VMNAME)"; 
   echo "Usage: $1 suspend  (Suspend the guest: $VMNAME)"; 
   echo "Usage: $1 install  (Install new guest: $VMNAME)"; 
   exit
}

if [ ! -d "$DIR" ]; then 
   mkdir -p $DIR
fi

#if [ -z "$2" ]; then
#   usage
#else
#   VMNAME=$2
#fi


if [ "$1" == "install" ]; then
   #Kill it before starting it
   echo "Execute: screen $SERIAL"
   bhyvectl --destroy --vm=$VMNAME
   bhyve -c $CPU -m $RAM -w -H -A \
      -s 0:0,hostbridge \
      -s 3:0,ahci-cd,$ISO \
      -s 4:0,$DEVICE,$STORAGE  \
      -s 5:0,virtio-net,$TAP \
      -s 29,fbuf,tcp=$HOST:$PORT,w=$WIDTH,h=$HEIGHT \
      -s 30,xhci,tablet \
      -s 31,lpc -l com1,stdio \
      -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
      $VMNAME
   #kill it after 
   bhyvectl --destroy --vm=$VMNAME
elif [ "$1" == "start" ]; then 
   while true
   do
      echo "Starting $VMNAME -s 29,fbuf,tcp=$HOST:$PORT,w=$WIDTH,h=$HEIGHT"
      #Kill it before starting it
      bhyvectl --destroy --vm=$VMNAME > /dev/null 2>&1
      bhyve -c $CPU -m $RAM -w -H -A \
         -s 0:0,hostbridge \
         -s 4:0,$DEVICE,$STORAGE  \
         -s 5:0,virtio-net,$TAP \
         -s 29,fbuf,tcp=$HOST:$PORT,w=$WIDTH,h=$HEIGHT \
         -s 30,xhci,tablet \
         -s 31,lpc -l com1,$SERIAL \
         -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
         $VMNAME
      #DISABLING REBOOT LOOP AS SUSPEND RETURNS ERROR CODE 0 AS WELL
      #if [ "$?" != 0 ];
      #then
      #   echo "The exit code was not reboot code 0!: $?"
      #   exit
      #fi
      echo "The exit code was : $?"
      exit
   done
elif [ "$1" == "resume" ]; then 
   while true
   do
      echo "Starting $VMNAME -s 29,fbuf,tcp=$HOST:$PORT,w=$WIDTH,h=$HEIGHT"
      #Kill it before starting it
      bhyvectl --destroy --vm=$VMNAME > /dev/null 2>&1
      if [ -f "$DIR/default.ckp" ]; then
         bhyve -c $CPU -m $RAM -w -H -A \
            -s 0:0,hostbridge \
            -s 4:0,$DEVICE,$STORAGE  \
            -s 5:0,virtio-net,$TAP \
            -s 29,fbuf,tcp=$HOST:$PORT,w=$WIDTH,h=$HEIGHT \
            -s 30,xhci,tablet \
            -s 31,lpc -l com1,$SERIAL \
            -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
            -r $DIR/default.ckp \
            $VMNAME
      else
         echo "Guest was never suspended"
         exit
      fi
      #DISABLING REBOOT LOOP AS SUSPEND RETURNS ERROR CODE 0 AS WELL
      #if [ "$?" != 0 ];
      #then
      #   echo "The exit code was not reboot code 0!: $?"
      #   exit
      #fi
      echo "The exit code was : $?"
      exit
   done
elif [ "$1" == "suspend" ];
then 
   bhyvectl --suspend $DIR/default.ckp --vm=$VMNAME

elif [ "$1" == "stop" ]; then 
   bhyvectl --destroy --vm=$VMNAME 
else 
   usage
fi

Great, now let’s install the guests:

chmod 755 *.sh
./asterisk.sh install
(at this point just install FreeBSD with the default UFS options; I actually chose to delete the swap and root partitions at the end of partitioning and add them back with swap 2nd and the root partition last, to avoid headaches growing the disk later on. I suggest you do the same)
./asterisk2.sh install (install with GPT+UEFI)
(install FreeBSD again, this time with the default ZFS install; no headaches here, the default zroot partition layout is perfect)

(At this point I may edit /etc/fstab to change whatever hardcoded device names are in there to GPT label names from /dev/gpt/*, so that it won't matter when we switch between virtio-scsi and virtio-blk devices. For example, switch swap to /dev/gpt/swap0 instead. If you ever want to see the GPT label names, run "gdisk -l <device>" or "gpart show -l <device>" to figure out what to put into /etc/fstab. If the partitions have no labels, give them some, as in the example below.)
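For example, to label the UFS guest's swap and root partitions from the host (the partition indexes match the "gpart show /dev/da1" output we'll see later; indexes and label names here are assumptions, check your own layout first):

gpart modify -i 2 -l swap0 da1     #partition 2 (freebsd-swap) -> /dev/gpt/swap0
gpart modify -i 3 -l rootfs0 da1   #partition 3 (freebsd-ufs)  -> /dev/gpt/rootfs0

Then the guest's /etc/fstab can reference /dev/gpt/swap0 and /dev/gpt/rootfs0, no matter which driver the disk comes in on.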

#now let's start them both after the install
./asterisk.sh start
#another terminal
./asterisk2.sh start
#another terminal
screen /dev/nmdm_asteriskB
#another terminal
screen /dev/nmdm_asterisk2B
#another terminal
./asterisk.sh suspend
./asterisk.sh resume
#another terminal
./asterisk2.sh suspend
./asterisk2.sh resume

Now you will notice in your screen session that, after resuming the virtio-scsi guest, “ls” and such work, but as soon as we do anything else, like “df -h”, it hangs, and after about a minute it core dumps.

router:/root # ./asterisk2.sh resume
Starting asterisk2 -s 29,fbuf,tcp=127.0.0.1:5901,w=800,h=600
fbuf frame buffer base: 0x229792600000 [sz 16777216]
Pausing pci devs...
pci_pause: no such name: virtio-blk
pci_pause: no such name: ahci
pci_pause: no such name: ahci-hd
pci_pause: no such name: ahci-cd
Restoring vm mem...
[8192.000MiB / 8192.000MiB] |################################################################################################################################################|
Restoring pci devs...
vm_restore_user_dev: Device size is 0. Assuming virtio-blk is not used
vm_restore_user_dev: Device size is 0. Assuming virtio-rnd is not used
vm_restore_user_dev: Device size is 0. Assuming e1000 is not used
vm_restore_user_dev: Device size is 0. Assuming ahci is not used
vm_restore_user_dev: Device size is 0. Assuming ahci-hd is not used
vm_restore_user_dev: Device size is 0. Assuming ahci-cd is not used
Restoring kernel structs...
Resuming pci devs...
pci_resume: no such name: virtio-blk
pci_resume: no such name: ahci
pci_resume: no such name: ahci-hd
pci_resume: no such name: ahci-cd
./asterisk2.sh: line 145: 10883 Segmentation fault      (core dumped) bhyve -c $CPU -m $RAM -w -H -A -s 0:0,hostbridge -s 4:0,$DEVICE,$STORAGE -s 5:0,virtio-net,$TAP -s 29,fbuf,tcp=$HOST:$PORT,w=$WIDTH,h=$HEIGHT -s 30,xhci,tablet -s 31,lpc -l com1,$SERIAL -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd -r $DIR/default.ckp $VMNAME
The exit code was : 139
router:/root # 

Great, so SCSI is unstable with suspend/resume. Now let’s modify the asterisk2.sh script to use virtio-blk with /dev/da2 instead and run the same tests:

#For testing virtio-scsi
#STORAGE="/dev/cam/ctl2.0"              # port from /etc/ctl.conf(port ioctl/2) - core dumping on resume
#DEVICE="virtio-scsi"

#for testing virtio-blk                 # Comment out above 2 lines if using these
DEVICE="virtio-blk"
#STORAGE="/dev/zvol/zroot/asterisk2"    # Standard zvol
STORAGE="/dev/da2"                      # Block device created from iscsictl

You may be wondering how I know which /dev/da* asterisk2 is using. I simply run:

iscsictl
#This gives you a list of who is on what; it can be completely random.
#Always check this list before mounting a guest, so you don't mount the wrong one.
#Typically everything on lun0 is numbered first, then lun1, but that is not
#always the case, so make sure to run it.
#On a non-testing host it may look something like this:
router:/root # iscsictl 
Target name                          Target portal    State
iqn.com.sunsaturn.asterisk:target1   127.0.0.1:3260   Connected: da1 da5 
iqn.com.sunsaturn.rocky:target2      127.0.0.1:3260   Connected: da2 da6 
iqn.com.sunsaturn.ubuntu:target3     127.0.0.1:3260   Connected: da3 da8 
iqn.com.sunsaturn.windows:target4    127.0.0.1:3260   Connected: da4 da7 
router:/root # 

Here is something good to add to your .bash_profile; then you never have to
think about it again:

if [ "$HOSTNAME" == "test.test.com" ]; then
   echo "Checking which devices guests connected to:"
   echo "#######################################################"
   iscsictl
   echo "#######################################################"
fi

I personally have a second root account that I set to bash, so I don't touch the default root
shell; you can use toor if you like. I suggest everyone do that: there will come a day when you realize you can't remember your root password anymore, because you've been using SSH keys for so long. Yes, you have to su to root every time, but I even automate that these days.

Now let’s run it:

./asterisk2.sh start
#another terminal (make sure you're watching the screen session while running these)
./asterisk2.sh suspend
./asterisk2.sh resume

WORKS PERFECTLY.
Now go uncomment the STORAGE line with:
STORAGE="/dev/zvol/zroot/asterisk2"
and comment out:
#STORAGE="/dev/da2"
WORKS PERFECTLY TOO.

So what can we see about suspend/resume stability? It works perfectly on virtio-blk, no matter whether I use the /dev/da* devices from SCSI or the zvols directly.

What about importing the ZFS pool from asterisk2? Sure, let’s do it on the host; shut the guest down first:

router:/root # gpart show /dev/da2
=>      40  62914480  da2  GPT  (30G)
        40    532480    1  efi  (260M)
    532520      2008       - free -  (1.0M)
    534528  16777216    2  freebsd-swap  (8.0G)
  17311744  45600768    3  freebsd-zfs  (22G)
  62912512      2008       - free -  (1.0M)

router:/root # gdisk -l /dev/da2
GPT fdisk (gdisk) version 1.0.9

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/da2: 62914560 sectors, 30.0 GiB
Sector size (logical): 512 bytes
Disk identifier (GUID): 8ACD2112-5EBF-11ED-8F56-00A098E3C14E
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 40, last usable sector is 62914519
Partitions will be aligned on 8-sector boundaries
Total free space is 4016 sectors (2.0 MiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1              40          532519   260.0 MiB   EF00  efiboot0
   2          534528        17311743   8.0 GiB     A502  swap0
   3        17311744        62912511   21.7 GiB    A504  zfs0
router:/root # 

So we can see this is definitely our ZFS guest, and we already know we cannot manipulate /dev/zvol/zroot/asterisk2 directly because of the recursion issue, but we can manipulate /dev/da2 through SCSI just fine:

#Import the ZFS guest's pool under /mnt (shut down the guest first)
#Use "-t" to import it under a temporary name, so the rename is not saved
#when we export it afterwards
zpool import (we should see asterisk2's zroot ready for import)
zpool import -fR /mnt -t zroot testing
zfs mount -o mountpoint=/mnt testing/ROOT/default
(go ahead and fix /mnt/etc and /mnt/boot problems)
zpool export testing
rm -rf /mnt/* (clean up leftover directories)
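Wrapped up as a tiny helper script on the host, reusing the exact commands above (the pool names "zroot" and "testing" are just this example's):

#!/bin/bash
#mount/unmount a shut-down ZFS guest's pool under /mnt for repairs
case "$1" in
   mount)  zpool import -fR /mnt -t zroot testing
           zfs mount -o mountpoint=/mnt testing/ROOT/default ;;
   umount) zpool export testing
           rm -rf /mnt/*   #clean up leftover directories
           ;;
   *)      echo "Usage: $0 mount|umount" ;;
esac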

#hell, let's mount the UFS asterisk guest on /mnt as well
#I did crash it a few times, so it probably needs recovery
router:/root # gpart show /dev/da1
=>       40  104857520  da1  GPT  (50G)
         40         24       - free -  (12K)
         64     532480    1  efi  (260M)
     532544   16777216    2  freebsd-swap  (8.0G)
   17309760   87547776    3  freebsd-ufs  (42G)
  104857536         24       - free -  (12K)

router:/root # mount -t ufs /dev/da1p3 /mnt
mount: /dev/da1p3: R/W mount of / denied. Filesystem is not clean - run fsck. Forced mount will invalidate journal contents: Operation not permitted
router:/root # fsck /dev/da1p3 
** /dev/da1p3
** SU+J Recovering /dev/da1p3

USE JOURNAL? [yn] y

** Reading 182419456 byte journal from inode 4.

RECOVER? [yn] y

** Building recovery table.
** Resolving unreferenced inode list.
** Processing journal entries.

WRITE CHANGES? [yn] y


***** FILE SYSTEM IS CLEAN *****
** 96 journal records in 6656 bytes for 46.15% utilization
** Freed 27 inodes (4 dirs) 0 blocks, and 20 frags.

***** FILE SYSTEM MARKED CLEAN *****
router:/root # mount -t ufs /dev/da1p3 /mnt
router:/root # ls /mnt
bin  boot  COPYRIGHT  dev  entropy  etc  home  lib  libexec  media  mnt  net  proc  rescue  root  sbin  sys  tmp  usr  var
router:/root # 

Summary:

Suspend/resume does work, just not when passing disks in with virtio-scsi and those /dev/cam/* devices. It seems we can do “ls” just fine, so something else is going on.

For now you can just use virtio-blk devices until virtio-scsi works. In fact, I would still set things up through CTL/iSCSI as we did, since we already know virtio-blk is on its way out; plus it was so much nicer on asterisk2 to pass a CDROM, or as many disks as we want, to the guest in just one line. We learned we can use the /dev/da* devices to import a ZFS guest’s pool directly for disaster recovery, and for anything that isn’t ZFS we could also use the /dev/zvol/zroot/* devices directly if we wanted. It is a cool workaround from the FreeBSD team, and it sets you up on virtio-scsi now.

If you do not care about suspend/resume right now, you could continue using virtio-scsi to future-proof yourself against the virtio-blk phase-out; or you could pass virtio-blk devices temporarily until it is fixed and use suspend/resume all you want. My personal preference, until this gets sorted out, is to leave the iSCSI processes running and use virtio-blk directly against /dev/zvol/zroot/<guestname>; you can switch later. That way you can use suspend/resume all you want and still mount any ZFS guest through the /dev/da* devices anytime you need, so you’re at least future-proofed. If you don’t want to use the /dev/da* devices:

zfs set volmode=full zroot/asterisk
(if you want to be able to mount non-ZFS guests directly at /dev/zvol/zroot/asterisk, for instance: what this does is expose asterisk's partitions as asteriskp1, asteriskp2, asteriskp3, etc. for your mounting purposes, as shown below)
#Just get used to SCSI already :) I know old habits die hard and no one wants
#to give up NIS either and learn LDAP; hell, neither do I, whatever works :)
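For example, with volmode=full (device names from this article's setup; the guest should be shut down, and the change may need the zvol re-opened before the partition nodes appear):

ls /dev/zvol/zroot/
#asterisk  asteriskp1  asteriskp2  asteriskp3 ...
fsck -t ufs /dev/zvol/zroot/asteriskp3
mount -t ufs /dev/zvol/zroot/asteriskp3 /mnt   #p3 was the freebsd-ufs partition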

Hope you enjoyed this article on experimental suspend/resume support for FreeBSD. Perhaps in another article I will show you Linux and Windows guests, and how we can do disaster recovery on them as well, like we did for the UFS and ZFS guests today. Till then….

Shine bright like a diamond,

Dan.

PCIe 5.0: is it worth it?

If you just built a new PCIe 4.0 system, is PCIe 5.0 worth it? At this time, with a recession looming and food and gas prices high, I would say definitely not. Your money would be better spent on solar panels, lithium batteries, and inverters, something you could eventually get your money back on.

The CONS:

So just how good is PCIe 5.0? It doubles the lane speeds once again, but here is the thing: no one has even put out a graphics card that can saturate a PCIe 3.0 x16 slot, and your PC won’t boot any faster on PCIe 5.0. Here is the other thing: the 2nm fabs are set to go into production at the end of 2024, and Intel says it will have 1.5nm at that time; we will see. I think a better upgrade date would be the end of 2025 if you already have a PCIe 4.0 system, giving motherboards and chips on the 2nm fabrication process enough time to come out.

Also, the good motherboards for PCIe 5.0 are not out yet; I expect they will start showing up around the end of 2023.

The PROS:

An extra 1GHz of CPU speed may help people’s compile times for things like app development and Flutter. On a desktop, the extra SSD speed won’t even be noticeable compared to your PCIe 4.0 system. Where it would make sense is for companies that depend on databases to serve their customers; the speedup in searching millions of rows would be a welcome improvement. For gamers, graphics cards can’t even saturate PCIe 4.0 yet, so it is not worth it other than for bragging rights.

So basically: for a server, always; for a desktop, it just isn’t there yet. Video card manufacturers cared more about crypto miners and their bottom line during the pandemic; now they have some catching up, and some price drops, to do for their gaming clients to make up for it.

RECAP:

All in all, if you have money to blow, it would definitely be a fun upgrade for the desktop, but that’s all it would be. I’d wait it out till the 2nm fabrication process arrives; Taiwan is expected to get one of the ASML machines for it sometime in 2024. Also, I expect in the next year or two we might finally have a quantum PCIe card to try.

The quantum world isn’t even going to start until they can get a quantum card into everyone’s computer, so that developers around the world can start programming for them. Engineers are going to have to step it up a notch in the next couple of years and get a product to market, as the technology will be useless for a year or two until developers around the world can make it useful.

I also think the way artificial intelligence is done now, a really lame quantum-physics-style algorithm using neural nets to do probabilities, will be a thing of the past as well, redone with quantum for real AI. It really is such a joke that even physicists put funny notes on their doors saying, “You are probably here”. I admit it does crack me up a bit, if you understand how physicists think in observables and probabilities :) I just can’t see people in 2030 still doing dumb things like multiplying matrices together to do AI, or it would be a sad world.

I think one of my saddest and happiest moments this year was talking with NASA and telling them how behind they are: they had not even come up with a theory for faster-than-light travel yet. Granted, a month after I did that, a theory now exists, but that is all it will remain for a while. Honestly, what they do is difficult; I’d need to smoke a lot of weed to come up with a practical way to solve that one, but at least they are somewhat on the right track now.

Unfortunately, NASA is now preoccupied with certain asteroids that could end humanity. Understandable; I would be too! Here is the thing, though: without faster-than-light travel, is humanity really ever going to make it to a new exoplanet light-years away? There will always be another asteroid out there coming to wipe us out like the dinosaurs. I think our best solution as a human race is yes, take out the asteroids that may wipe us out in the next century, but at the same time develop the technology to travel faster than light; otherwise we will just always be fighting the inevitable.

Another thing we have to consider in 2022, with the new James Webb telescope, is that the big bang theory is now disproven. Yes, you heard that right. So back to the drawing board for astrophysicists.

2020-2030 is probably the best decade to be alive right now; we will see more innovation in this time period than humanity has ever seen, as long as we don’t have to keep fighting with Intel to keep Moore’s law going :) At this point I think we are all tired of companies like ASML taking years to build a machine to keep that law going; hopefully quantum hits soon. I mean, what are they going to do after the 2nm fabrication process, go to picometer units? It will never end; bring quantum out!

Speaking of quantum, I do see a downside already. The benefit will be enormous for humanity as a whole, but we need to regulate those pharmaceutical companies. I do not want to see humanity suffering because a bunch of biochemists played with quantum computers and CRISPR, then charged a fortune for their cures. That is like stealing from the open source world and not giving back; they never would even have had those quantum computers if it wasn’t for everyone who built them in the first place. I think the world needs to start an open source project ASAP to combat this. Governments should pay them a one-time fee for the formula/algorithm for producing it in their own labs, and governments should hand that over to the open source project.

I really see a world in the future where everyone is trained in physics, engineering, programming, and biochemistry with CRISPR. These should be the school curriculum for kids these days. When that happens, and you get sick one day, even your friend next door could formulate the drugs to keep you alive or give you a CRISPR needle to modify your DNA so you are cured.

Make it fun for them: teach them how to set up a solar panel, teach them to program an app they would love the world to have, or have them make a robot do something. Teach them about DNA, RNA, and CRISPR in a fun way. One day a generation is going to have to be smart enough to go underground with geothermal energy when the Sun is ready to engulf the Earth, or when an asteroid hits first (most likely). The best thing we can do is give them the knowledge we should all have right now, but most don’t. A professor at a university once told me, “Never spoon-feed anyone; only give them the tools and let them figure it out on their own.” Sound advice for the future: YouTube, free library books on their tablets, online shopping, the world is their oyster. I’ve done that as a university teacher myself in the past, given back to them. They loved it! Need any help rewriting their curriculums? Give me a shout.

Shine bright like a diamond,

SunSaturn.

Rocky Linux Install 2022 with KVM support

INTRO:

Originally I had tried a software upgrade from Rocky Linux 8 to 9 on a Dell 2950 III server, only to end up with the Rocky Linux “glibc error: cpu does not support x86-64-v2”. Basically I fried my entire system, as this CPU cannot support the new instructions glibc now calls.

Today I am going to walk you through a complete 2022 Rocky Linux install; Rocky now replaces CentOS and comes from the original CentOS creator. I put in three different USB sticks, containing Oracle Linux, Rocky Linux, and CentOS Stream, to see if I could install over my existing partitions without wiping out the FreeBSD KVM guest on my LVM. Sadly, none of the installers see the LVM partitions, so I am forced to reinstall Rocky Linux, as well as my guests, all over again.

I am going to walk you through a safer way to install Rocky Linux (the CentOS replacement), to future-proof yourself against reinstalls or problems.

Rocky Linux 8 vs 9:

Create and run the following script:

pico glibc_check.sh (add following)

#!/usr/bin/awk -f

BEGIN { while (!/flags/) if (getline < "/proc/cpuinfo" != 1) exit 1

if (/lm/&&/cmov/&&/cx8/&&/fpu/&&/fxsr/&&/mmx/&&/syscall/&&/sse2/) level = 1

if (level == 1 && /cx16/&&/lahf/&&/popcnt/&&/sse4_1/&&/sse4_2/&&/ssse3/) level = 2

if (level == 2&&/avx/&&/avx2/&&/bmi1/&&/bmi2/&&/f16c/&&/fma/&&/abm/&&/movbe/&&/xsave/) level = 3

if (level == 3 && /avx512f/&&/avx512bw/&&/avx512cd/&&/avx512dq/&&/avx512vl/) level = 4

if (level > 0) { print "CPU supports x86-64-v" level; exit level + 1 } exit 1 }

Save it then:

chmod +x glibc_check.sh; ./glibc_check.sh

host:~ # ./glibc_check.sh
CPU supports x86-64-v1
host:~ #

Now, if you get version 1 only, like the above, you can install only Rocky Linux 8; if you get version 2 or higher, you can install Rocky Linux 9. I believe the reason RHEL made this change was to make glibc calls faster.

INSTALL Rocky Linux:

Download the latest Rocky Linux ISO and write it to a USB stick with rufus. I am not going to cover the simple graphical installer, but I do want to cover partitioning your drive, so let’s go to “Installation Destination” in the installer. To prevent any future wipeouts of LVMs by installers, we are going to use standard partitions and only create our LVM manually afterwards. This way, if we ever run into a situation again where an installer cannot see into our LVM setup, it will definitely see our standard partitions, and we won’t have to wipe out our LVMs ever again.

Here are my recommendations for /boot, /, and swap. For /boot, from experience, 1 GB is not enough like they say: after you install enough kernels, or start adding custom kernels to support things like ZFS or ksmbd, it adds up quickly, so my recommendation for /boot is 3 GB. For swap, the usual rule is about 20% of your memory; I have 32 GB in this server, but I am going with 8 GB, as I rarely like going under that these days. For the / partition we will want at least 80-100 GB. At some point you are going to run out of space untarring kernel sources, doing 4k file tests, or storing ISOs; you need some breathing room on your main OS!

(all ext4 standard partitions setup as follows)

/boot – sda1 – 3GB
swap – sda2 – 8GB
/ – sda3 – 80GB

Save this setup and finish your install: packages, network, password, and so on. We are going to set up sda4 manually for our LVM after the install. Double-check everything: make sure the partitions are all ext4 and there is no LVM anywhere!!!

Post Install Tasks — setting up for LVM, KVM, wireguard:

Let us start by upgrading the system, setting up KVM, and upgrading the kernel from the stock default. Remember, we cannot run a lot of things on anything less than a 5.15.x kernel, and if you get into things like ZFS you currently need to be on exactly a 5.15.x kernel. For our purposes we will just use kernel-ml; we can downgrade to 5.15.x for ZFS later if we choose to compile our own kernels manually.

dnf update
shutdown -r now (reboot with updated system)
cat /proc/cpuinfo | egrep "vmx|svm"  (check we have virtualization enabled)
dnf install @virt virt-top libguestfs-tools virt-install virt-manager xauth virt-viewer

systemctl enable --now libvirtd (everything working? ifconfig -a)
#get rid of virbr0 in ifconfig
virsh net-destroy default
virsh net-undefine default
service libvirtd restart
(check https://www.elrepo.org/ for below install command for rocky 8 or 9)
#we will not be able to install wireguard with a kernel older than 5.15
yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
dnf makecache
dnf --enablerepo="elrepo-kernel" install -y kernel-ml
#let's setup our br0 bridge before we reboot
cd /etc/sysconfig/network-scripts
nano ifcfg-enp10s0f0 (your <device name>)
#add "BRIDGE=br0" at end of this file
nano ifcfg-br0
#add something like this:
#should match a lot of your file above
STP=no
TYPE=Bridge
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
IPADDR=192.168.0.2
GATEWAY=192.168.0.1
PREFIX=24
DNS1=192.168.0.3
DNS2=8.8.8.8
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
NAME=br0
UUID=36e9e2ce-47h1-4e02-ab76-34772d136a21
DEVICE=br0
ONBOOT=yes
AUTOCONNECT_SLAVES=yes
IPV6INIT=no
#change the UUID to something different above and put your own IPs in
shutdown -r now (reboot with new kernel)
ifconfig -a (check br0 is there, then all good)

OK, now we have the new kernel and br0 is set up for KVM guests; let’s move on to the LVM.

Creating our LVM for our KVM guests:

#make sure we have the right device
fdisk -l /dev/sda
fdisk /dev/sda
p (print the current partition table)
n (new partition 4; accept the defaults to give it the rest of the disk)
t (change partition 4's type)
8e (Linux LVM)
w (write changes to the partition table)
partprobe /dev/sda (inform the OS of partition changes)
pvcreate /dev/sda4 (now we have it as an LVM physical volume - check with "pvs")
vgcreate vps /dev/sda4 (create our volume group - "vgdisplay")
#now we are all set up; we can create as many KVM guests as we want
#for example, give 70G to one guest and the remaining space to a devel guest
lvcreate -n cappy -L 70G vps (create a 70G guest - "lvdisplay")
lvcreate -n devel -l 100%FREE vps (give the remaining space to this guest)
#can always delete it later
pvdisplay (check we used up all the space on vps)
#let's make sure guests suspend and resume across host reboots:
pico /etc/sysconfig/libvirt-guests (set:)
ON_SHUTDOWN=suspend
systemctl start libvirt-guests
systemctl enable libvirt-guests
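To actually drop a guest onto one of those LVs, a virt-install invocation along these lines should do it. The ISO path, sizes, and os-variant here are assumptions ("osinfo-query os" lists valid variants):

virt-install --name cappy --memory 8192 --vcpus 4 \
  --disk path=/dev/vps/cappy --cdrom /root/FreeBSD-13.1-RELEASE-amd64-disc1.iso \
  --network bridge=br0 --graphics vnc,listen=127.0.0.1 \
  --os-variant freebsd13.1 --noautoconsole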

Congratulations on your new install.

Dan.

Batteries LiFePO4 – Lithium Iron Phosphate upgrade and why

The past setup:

Well, the two marine batteries I had been using as backup for my servers died. What I had done for my servers at home was modify a UPS and hook up two 100Ah marine batteries in parallel for 200Ah of capacity. It was a great cheap solution at the time: the marine batteries were 100 dollars each, and a UPS modified to run off them seemed like a great idea.

New tech in batteries – LiFePO4 (Lithium Iron Phosphate)

I was using lead-acid 12V batteries back then, but now, with the new contender on the market, LiFePO4 (short for Lithium Iron Phosphate), we really need to look at why they are so much better for our homes than the old lead-acid batteries. I had to watch countless YouTube videos on electrical engineering to get to the bottom of this. To sum it up, say we have one 12-volt 100Ah (amp-hour) car battery and one 12-volt 100Ah LiFePO4 battery. Lead-acid batteries need to be recharged once they are down to 50% capacity; LiFePO4 batteries can be discharged almost completely and still run the loads they need to.

The recommendation is to recharge LiFePO4 batteries by the time they hit 20%, so you don’t drain them completely. Also, lead-acid batteries die after about 500 cycles, whereas the new LiFePO4 standard can last as long as 6000 cycles! So LiFePO4 batteries will not only last you longer, you can also run them down further between charges, and as an added bonus they are safe in your house, because they won’t cause fires the way other battery chemistries can!

This all basically means a 200 AH lead-acid battery equals the usable capacity of a 100 AH LiFePO4 battery, and the LiFePO4 will last far longer than 500 cycles. Definitely worth twice the cost of a lead-acid battery then.

Current pricing of LiFePO4:

Alright, that is all nice and all, but let's look at real world prices for replacing my lead-acid 200 AH setup. The current best priced 100 AH LiFePO4 battery on amazon is:

This one

WOW, $570 + tax CDN!!! So around $650 CDN just for a 100 AH battery! And it gets worse as we go up: 200 AH batteries at 1k or more, and 300 AH batteries at nearly 2k it seems! Ok, this is completely unacceptable, we are being robbed in the USA and Canada on these China-made cells, so what if we build our own?

What are current stats and prices from China?:

Check out the following link from a reputable supplier in China on Alibaba:

https://szluyuan.en.alibaba.com/productgrouplist-916502401/Non_grade_A_lifepo4_battery.html?spm=a2700.shop_co.88.17

The forum thread where people had good experiences with this supplier (Amy) is here:

https://diysolarforum.com/threads/where-to-buy-from-these-days.43265/

China sells them by the cell, at 3.2 volts (nominal) and a certain amount of AH (amp hours) per cell. So a quick lesson in electrical engineering is in order to understand some of this. There are two ways to hook batteries up to each other: in series or in parallel. If we put 4 of these 3.2 volt China cells in series, the voltages add together and the amp hours stay the same, so connecting 4 of them gives us the "12 volt" battery we need. If we instead hooked them up in parallel, the volts stay the same but the amp hours add, so in parallel we would only have 3.2 volts, not what we want. We want to start with a 12 volt battery and get the most AH for our money.

So what we need is 4 of these cells to build a 12 volt battery; in the future we can build another one and hook the two up in parallel to increase capacity if we want. For a series connection, all we do is connect each cell negative to positive with cables; for parallel, we connect the terminals negative to negative and positive to positive with cables or bus bars, whatever you prefer. Easy enough, right? China generally supplies bus bars, so we are good there; we don't have to go off and buy 2 gauge cables off amazon or anything.
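
To make that concrete with the 3.2 volt, 280 AH cells we will be looking at below:

4 cells in series: 4 x 3.2 V = 12.8 V at 280 AH (our "12 volt" battery, 12.8 V nominal)
4 cells in parallel: still 3.2 V, but 4 x 280 AH = 1120 AH (wrong voltage for us)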

Another quick lesson: the total power of a battery always follows the formula:

Watts = Volts x Amps, so to get any one of those values all we need is the other two, i.e. to get amps knowing watts and volts, we just do watts / volts = amps, and so on.
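
For example, with the 12.8 V, 280 AH battery we will be building (illustrative numbers, ignoring inverter losses):

Stored energy: 12.8 V x 280 AH = 3584 watt-hours (about 3.6 kWh)
A 1000 watt load: 1000 W / 12.8 V = about 78 amps drawn from the battery
Runtime at that load: 3584 Wh / 1000 W = roughly 3.5 hours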

Grade A vs Grade B cells from China:

People all over the internet will argue over Grade A vs Grade B EVE cells. Basically, with Grade A someone put in the effort to fully test them, discharging and charging each cell, and will charge you double for that. With Grade B they do the same tests but don't go through the effort of fully discharging and charging them; it is from the same production line though, so for cost effectiveness we obviously want to go with Grade B and take our chances. They should all be at the same voltage when we get them; the only thing they didn't test is the capacity, by fully charging and discharging them, so we could even end up with a better battery, lol. So if it is for home use, use Grade B for price alone; if for a business, go for Grade A.

Let’s pick a cell from China:

If you look at that Alibaba webpage from above, we know Amy is reputable; it may take us two months to get our batteries, but cost wise it will be better. Let's take a quote from the forum link above about someone's experience:

 “I placed my order of 4 x EVE LF280K with Amy on June 7th. Shipment was two boxes (2 batteries per box) and box#1 arrived July 28th and box#2 followed the next day on July 29th (52 days delivery time to Toronto Canada).

The cells were described by Amy as “LF280K, brand new. Grade B - the voltage and internal resistance are matched, the capacity is not. The actual capacity is 275AH-284AH. QR code has B stamp, $111/pcs”.

Shipping was $140 and final delivery made via UPS. Tracking number was provided when order was placed, but package could not be tracked until it arrived in Canada (by sea).

Transaction was super smooth and went without a hitch. Thanks to members of this forum for recommending Amy as a reliable and trustworthy source. Will definitely order from her again.

I’m a degenerate gambler, so I’m looking forward to running capacity tests on each cell to see if I got a good batch or not “

Wow he paid a mere $650 CDN total for a 280AH battery! That would cost us 2k on amazon or anywhere else!

Is that all or do we need more?:

Technically you could just do that and be fine, but there are actually two more things needed for a LiFePO4 battery: a BMS and a charger.

A BMS (Battery Management System) is a board you buy online that has little wires hooked up to all the cell terminals; think of it as something that protects the battery from overcharging, over-discharging, temperature extremes and so forth. It is basically a good thing to put on the battery. They run cheap for the crappy ones, and a hundred or two for the good ones that even come with a Bluetooth app for your phone, giving you detailed stats about the new LiFePO4 battery you put together. It is a good idea and I would recommend it. The last thing we need is a charger meant for LiFePO4, and we are set.

In all honesty, I think a good quality BMS, a charger, and perhaps a box to put the battery in when you're finished is a good idea. If the BMS dies, the charger dies, or a battery cell dies, it's as easy as pulling it out and replacing it, unlike a normal battery where they put so much glue and stuff on it to prevent you from opening it that you'd have to toss the whole thing out.

Putting it all together:

Ok, this is definitely not for the faint of heart: in order to change our setup from a modified UPS using 2 lead-acid batteries, we have to rethink the whole setup for LiFePO4. The UPS will need to be removed, and we will need the following in CDN dollars (take off about 25% for USD):

  • a) Inverter (1500 watts or more) $400
  • b) Automatic Transfer Switch $200
  • c) 280 AH LiFePO4 battery $650-800 + $200-300 for a BMS card and charger

So is it even worth it? Our upfront costs are already $1500-2k to replace our simple setup; why not just go to Costco and pay $400 for 2 more batteries that will die on us again? And why not just get a UPS that supports LiFePO4 instead of going the inverter/transfer switch route?

Here is why: a UPS that supports LiFePO4 is very expensive. I believe Tripp Lite makes one; I think their top model supports around 750 watts, and you're going to pay 1k for something that can't even handle a microwave oven or several servers running at the same time? What if the power grid goes out? I would definitely need to run around 1000 watts safely for servers, routers, internet, TV etc. in the living room. Of course you could get a top of the line one that supports more, but then you're paying thousands for a rackmount option, not worth it.

How about we keep our inverter separate from the rest of the system? That way we can upgrade it anytime we want, and also add more LiFePO4 batteries in parallel in the future if we want more capacity. This one 12V battery at 280 AH is like 6 lead-acid batteries in parallel and is lighter to boot! At $200 a piece from Costco, that would be $1200 in lead-acid batteries that would just die on you again!

Keep in mind we will stay at 12V with our 1500 watt or even a 2000 watt inverter; it is only when you start scaling up to 3000 watt inverters that we would have to scale our batteries to 24V, or even 48V if you wanted to run a whole house.
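
The reason is simple amp math; cable and fuse sizes are driven by amps:

1500 W / 12 V = 125 amps (manageable cables)
3000 W / 12 V = 250 amps (huge, expensive cables and fuses)
3000 W / 24 V = 125 amps (back to sane cable sizes)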

I left the best thing for last:

This new setup would allow you to do so much more: we could toss a solar charge controller (a couple hundred dollars) and some used solar panels into the mix later and get free energy if we wanted. We could swap our automatic transfer switch for one that uses the batteries when solar has charged them up, and switches back to grid when the batteries hit 20% discharge! I bet that sounds like fun, paying less in electricity bills just because you did a proper setup. Also, you can check battery status from your phone at all times, vs buying one off amazon that includes nothing!

Let’s get to some links for our parts list already!:

a) Inverter:

https://www.voltworks.cc/collections/used-products

Let’s start with 1500 watt inverter, this will be cheapest we will find for good quality, even if we go new still be cheaper than amazon. There are cheaper 1200 watt inverters on amazon, but I’d feel more comfortable with 1500 watts. If you have a lot of money go with an expertpower expensive inverter, they have transfer switch and charger built into them already, but I don’t think it’s worth it, what if your inverter dies, that is an expensive replacement then, but if money is no issue get a 3000 watt inverter from them. Why did I pick this one? Because you find a 1500 watt inverter with a pure sine wave not modified sine wave(the former protects your electronics) for a better price and quality parts πŸ™‚

b) Automatic Transfer Switch:

This will allow us to use the grid and switch to the inverter batteries if the grid goes down, so our servers/PCs stay on, plus whatever else you want, up to the 1500 watts of the inverter we picked. Keep in mind you have to change a jumper setting for an instant switchover, as it comes set for a generator with a time delay. They also have a pre-wired version on the US amazon site so you don't have to hack up extension cords if you choose. I believe this one will support up to a 2000 watt inverter, and they have a 50 amp version as well.

This is just a smart choice: if our inverter ever dies or we want to upgrade to 2000 watts, it is a quick swap out, and we don't have to pay for an expensive inverter with a transfer switch already built in, saving us money again. For reference, $300 CDN vs $700-800 is a big deal, especially if it dies one day; I'd rather put that money towards an inverter that has more watts.

c) LiFePO4 280 AH battery cells:

https://www.alibaba.com/product-detail/Luyuan-4pcs-280AH-LiFePO4-LFP-3_62583909993.html?spm=a2700.shop_plgr.41413.33.3b044a44HTxwRK

Talk to Amy on Alibaba about some Grade B cells; also explore her links, maybe get a JBD BMS (I hear the Overkill BMS is just a rebranded JBD anyways) and a battery box to put everything in. Lots of YouTube videos show how to hook up a BMS to these cells; pick one you like, preferably with a phone app. Treat yourself here.

What if you want to expand the setup to solar?

Swap out the Automatic Transfer Switch above with this instead:

Then get yourself a solar charge controller (stick to one rated for the same watts as your inverter) and some used solar panels, and you're good to go! This thing will feed off your batteries when solar has charged them up, then switch back to grid when your batteries are low. Free energy; why not put the battery you built to use 🙂

I’ll add more links and information to this setup once I get all these parts, but this is how I would go about it for best setup for your home. Start small with 1500 watt inverter, scale up as needed, maybe one day you want to run your fridge, deepfreeze and air conditioner as well, that would be a good time to scale up to say a 3000 watt inverter and solar charge controller.

I would recommend you build your own battery with a BMS and try a setup like this. Not only will you be happier in the end, and not just for the cost savings: the pure knowledge you gain from learning to set it all up from YouTube videos will make you much better at going green, and at the same time give you the confidence to go completely off grid one day if you choose. That knowledge is invaluable.

Until next time….

Dan.

Exciting Update: SunSaturn will build Crystal Tunes!

As my first app release, I will be building a music player. It was a late winter night, and an idea popped into my head: "There are no good music apps out there for sorting and playing the songs you own that work on every device".

So I thought, how about we build something that works on the web, phones, laptops, tablets, Windows, MacOS and even your TV, then also make it so we can store the songs online, so any device you pick up, your phone, tablet, TV etc., can just sync those songs or play them right over the internet. Kinda important, don't you think, for your car, home, gym or wherever you play music. Offline syncing would also be important in case you're not in internet range.

So where did the Crystal Tunes name come from? You will laugh at this: my character name in Final Fantasy 14 is Crystal Sight. I joined a party one night and mentioned I was building a music player, and someone said Crystal Tunes, so I guess it just kinda stuck. And it does sound futuristic in my opinion, considering quantum computing has had breakthroughs using crystals lately.

How far have I gotten as of May 21st? Halfway is my honest answer. For many months I have retrained myself in new programming languages and frameworks: Python, Flutter, occasional Java and JavaScript. I will be using Flutter for the client end and Python for the server end. Flutter is much more difficult to work with than Python, so it has been taking some time working in 2 languages all the time.

So far I have tested it working on Windows, Android phones, Linux, web, and TV. The TV was probably the most difficult, as I have to work within the constraints of a TV remote, which changes a lot of what I can and cannot use. Rest assured, however, when I am finished it will support every device out there. So if you typically use a Windows computer, Android tablets/phones, and your TV, it will work there, and I will deal with Macs and iPhones when I can get my hands on them to test that it works there too.

All in all, not a bad idea for a first app attempt. When I am finished I will release it as version 1.0 for every device you can download to, and I will also upload it to the Google Play store for Android phones and TVs.

So stay tuned….. version 1.0 coming this year 🙂 Feel free to email me if you have any feature requests or design ideas. I will base this on my own creativity, then run some design aspects past the youngest generation at the end to cover all bases.

So what advanced features can we build in in 2020? Working on every device on the internet is definitely the toughest one; I have to build in different C libraries for different platforms, and that has been a real challenge. Passwords? Let's do away with them. Aren't we tired enough of having a million passwords everywhere and these password managers storing all our passwords in the cloud! 2020 is about OAuth2 security and Signal-style security. To build it cross platform we will use OAuth2, so let's ditch passwords completely and use quick click email links or Google logins to authenticate the world. Much more convenient, especially for logging in on your TV.

So my idea on security is no passwords ever again, a token revocation list, and also being emailed about any new devices added to your account. I don't think it gets much better than this for the convenience of never having to use a password again. The only issue we might face is if we want other people to access our personal accounts: without access to your email account, they won't be able to, so perhaps we can add subaccounts down the road if people need this. I don't think it is ever a good idea to give anyone access to your main account, only to the things you want them to access.

Playlists:

This is a hot topic with everyone. Everyone wants a drag and drop interface; here is the problem with that: it won't work on a TV with a remote control. We need a better concept, or we enable a drag and drop interface on phones/desktops to configure playlists for your TV, which might be a better plan. What we also need is the ability to see the next songs playing, especially on a TV. So we are going to do something different here than any other music player: we are going to update the playlist with what is playing next for every song. This will be important; it's more work and more code, but it will look better on TV.

Uploads:

This is probably the most difficult one, because everyone has to pay for storage space in this world; it doesn't just come free. I really want to make it free to everyone, but if we get to the point where we start running out of disk space because too many people use the app, then we will have to offer a premium membership to buy more disk space. My thought is we just buy as many drives as we can in a raid 5 setup and keep going. My servers are hosted at USA datacenters with dual internet link redundancy, so this enables us to write our own custom code for the server end to do whatever advanced features we want for our app, instead of relying on things like Firebase with its limited functionality from Google like most developers use. In fact we will bypass Google completely and write everything custom so we can build as many advanced features into the app as we want! We will set a callback for them with Google login, but that is as much as we will do with them; we will also have a custom login in case people don't want to log in with Google.

Streaming only online option:

This is what the web version will have to do. There is no way to access all your songs from a browser; otherwise any website could look at all the files on your hard drive, and that would be a massive security breach.

I am going to build it so you can stream all your files from the internet at all times if needed, and for the web there is no choice. What we should all ultimately be doing is syncing songs to whatever device we pick up: phone, tablet, desktop or TV. But then again, people have massive song libraries, 20 thousand songs or more is not uncommon these days, and they may not want to sync them to every device for disk space reasons, which makes sense.

Streaming online only all the time requires a dedicated process, like a web server process or an asynchronous API. My thoughts on this: async is good, but for something like streaming files it makes less sense, because the process will be busy the entire time just streaming the file, depending on how fast your internet connection is. We will try both and see how it goes. I can code the async backend for streaming files, but we should probably start with a simple web process handling the file instead, to free up the API itself for handling requests; we will build both in and see which performs better at a larger scale.

If too many people abuse this feature, however, we may need to upgrade the hardware and offer a premium membership for it, because everyone on the internet pays for bandwidth, and we don't want to flood the datacenter's bandwidth where we don't need to; otherwise we'd get a hefty bill in the mail! And that's fine: if your goal is to upload a hundred thousand songs and never sync them to any device, just play them online all the time, then just help with the bandwidth costs through a premium membership and it should all work out and you're golden as well. I will try to accommodate everyone's needs.

Future of Crystal Tunes:

I hope Google buys it one day for everyone; till then, we will keep working to make it better and better till they do 🙂 I cannot tell the future, but maybe if we integrate YouTube and everything else into it, they will buy it then.

In retrospect:

I guess one of the nicest things about building something like this is giving the world something everyone in it can use every day. I hope you will enjoy it!

Dan.

UPDATE:

While I had worked hard on the app with Flutter and a REST API in Python, I have not had a chance to get back to it; during the winter months, I hope.

New PC Build 2021 is it worth it?

Is a new PC worth building in 2021? I would say YES this year if you mean a desktop. I would say no if you mean a server.

2020 brought us some great things: new CPUs, new motherboards, new graphics cards and so on. To start off, the reason it is not worth it for a server is that PCIE 5.0 is around the corner, and servers would benefit from faster cores, memory and drive speeds more than a desktop would. Also, most of the world will feel more comfortable running drives at PCIE 5.0 speeds instead of PCIE 4.0 speeds because of the large amount of data in databases and the numerous blockchains they will need to run in the future, which is really drive intensive. Three words: "PCIE 5.0 Drive". Wait for it; EPYC chips and IPMI will be your friends.

Intro

So the MAIN reason 2021 is worth building a new desktop PC is simple: the Samsung 980 Pro drive with 7000 MBps read speeds, DROOL! It is the only reason; even a graphics card in 2021 cannot saturate a PCIE 3.0 x16 lane, never mind PCIE 4.0. What do faster drive speeds do for your PC experience? Absolutely everything: faster loading times, faster game play, faster browsing on the internet, and for programmers, faster compile times! Absolutely everything is supercharged; everyone can thank Samsung and AMD for this. To build a new PC, assuming you already have one and can re-use your case and power supply, let's focus on the main parts we need: CPU, motherboard, graphics card, memory and drive. So let's look at these 5 parts to help you make an informed buying decision for upgrading.

CPU

What, then, would be the best and most cost effective build for your home desktop computer? Firstly, supporting PCIE 4.0 is where it starts: the cheapest Zen 3 CPU with 6 cores and 12 threads is all you need for a desktop, so essentially the 5600X would be enough. We don't need more than 6-8 cores for a desktop; we care more about clock speed with the latest Zen 3 series. But what about older Zen 2 and Zen 1? Not worth it: Zen 1 and Zen 2 would make us run our DDR4 memory at lower speeds. The lowest cost Zen 3 CPU is my recommendation:

https://www.newegg.ca/amd-ryzen-5-5600x/p/N82E16819113666?Item=N82E16819113666

MOTHERBOARD

For the motherboard, this is where it gets tricky and you could literally spend days. In order to support a Zen 3 CPU (an AM4 socket), we have two options: the X570 boards, or the budget B-series boards. What's the difference? You get way less from the budget boards than from a real X570 board. This is where you should spend a good amount of money: if a motherboard were a human, the motherboard would be your body. You need a good body for everything else to run right. The CPU would be like your brain, but the brain can only function well if you take good care of your body, right?

When considering a motherboard there are many things we have to look at, especially when buying new. First and foremost, the current USB spec is USB 3.2 GEN 2. If we skimp on motherboard cost we are shooting ourselves in the foot by not getting enough of these ports. The budget boards, for instance, use the older USB 3.2 GEN 1; not worth it, especially with Thunderbolt/USB 4 on PCIE 5.0 right around the corner, so the fewer USB 3.2 GEN 1 ports and the more USB 3.2 GEN 2 ports, the better. Your mouse, your keyboard, your phone, your webcam, your USB sticks, your printer….. the list goes on. You plug everything into these, so make sure they are USB 3.2 GEN 2, as many ports as you can get! If you are an app developer you'll want to take advantage of a USB-C 3.2 GEN 2 port for your phone for 20 Gbps: a normal USB 3.2 GEN 2 port will give you a max of 10 Gbps, but that spec in a USB-C port (GEN 2x2) is 20 Gbps. Same as charging cables: USB slow, USB-C fast.

The next thing we have to look at is how many PCIE 4.0 lanes we can get without spending a grand on a board. Around the $399 USD / $500 CDN range, we will try to get a motherboard with all the features of a thousand dollar board, with as many PCIE 4.0 lanes as we can get. You're going to need 1 x16 lane for your graphics card, 1 or 2 NVME slots for your drive, as well as a spare PCIE 4.0 slot for future upgrades. I would say what works out best is having 1 extra x4 slot for a future 10 gigabit network card, and 1 extra x16 lane for either a second graphics card or another upgrade of your choice. It would also be nice to have 1 extra NVME slot for another fast drive if needed.

After a lot of searching, here is your best choice right now: many of the latest GEN 2 USB ports, and it even has WIFI AX and a 2.5 Gbps LAN port. We get 2 NVME slots, the ability to run 2 graphics cards or 1 plus something down the road, and an x4 slot for a 10 gigabit network card if we ever dish out for an expensive 10 gigabit switch. There seems to be a newer version of this 2019 board for an extra 30 bucks, but as of March 2021 it is out of stock everywhere, so my pick for right now is as follows:

https://www.newegg.ca/asus-rog-crosshair-viii-hero/p/N82E16813119109

Graphics Card

Graphics cards as of March 2021 are sold out everywhere because of the rise of Bitcoin to 50-60k USD. Everyone wants them for mining, so short of ordering a pre-built system with one in it, there is no chance of getting one currently. So what did I get? After careful consideration of what is available, what makes the most sense until these are back in stock is a card with multiple monitor support that can handle 4k resolution. My decision was what stock traders use: a 6 port mini DP graphics card that supports 4k for a couple hundred dollars would be perfect. If it were not for the current shortage of graphics cards, my pick would be an AMD 6000 series card to fit your budget: 6700, 6800 or 6900 series, from least to most expensive. Also, since we are doing an AMD build, AMD graphics cards do get a speedup with AMD chips, so it's a no-brainer. Without further ado, here is a decent graphics card you can use with 4k and 6 monitors till you can get your hands on those other ones:

https://www.newegg.ca/visiontek-radeon-hd-7750-900614/p/N82E16814129274?Item=N82E16814129274

As another bonus, I looked around for a 4k monitor that supports the max resolution of one of these ports (4k@60hz) and found a nice one with a great stand, very affordable, without spending close to a grand. Since I normally run 3 27 inch monitors, I definitely don't ever want to spend more than 500 for one, so here is a good one at those specs:

DRIVE

The whole reason we're doing this upgrade, right here folks: the Samsung 980 Pro 1 terabyte drive! The specs on this drive are out of this world. Remember, it was not long ago people were on PCIE 3.0 systems getting max 500 MBps reads from SATA SSDs; then this drive comes out at 7000 MBps! It is blazing fast; not many PCIE 4.0 drives even compete with it. And we have a trusted name like Samsung for our data, which is important. DO NOT go less than a 1 terabyte drive: I have used 250GB and 500GB drives and always run out of space. Even if you don't think you will now, wait a few years and you will wish you had listened to me. Without further ado, here is the drive!:

https://www.newegg.ca/samsung-1tb-980-pro/p/N82E16820147790

MEMORY

There is a reason I left this one till last: it is really difficult to pick memory with so many choices out there. You also have to understand concepts like XMP 2.0, overclocking memory, CAS latency, how fast a memory kit you can get, B-die, single rank vs dual rank, and the list goes on; it gets very complicated very fast. So let me break it down for you.

DDR4 on Zen 3 performs best around the 3600 MHz mark, with 32GB of memory and the lowest CAS timings you can get. Now we factor in price: price is driven by CAS latency, with CAS 14 being really good and expensive right now, and CAS 18 the highest I would go. As the years go on, CAS will get better and memory cheaper, so do we really want to spend 500+ dollars just to get CAS 14 or better on 3600 MHz 32 gig sticks? Hell no; let's pick the middle mark of CAS 16, and down the road you can upgrade if it gets cheaper. We already spent a lot on the other parts, so a good option for us is as follows, 32GB for 200 bucks CDN:

RECAP WHY I CHOSE THESE PARTS

Before this upgrade I was running an old PCIE 2.0 system: 10 years, and all I get is a 1 GHz faster clock speed, disappointing really, but where this shines is the drive speeds and support for newer graphics cards. For my own purposes I use ssh, play with app development, Flutter, Android Studio, run a million tabs in Chrome etc. One screen usually for ssh or emulators, one screen for coding, one screen for YouTube or going through a million Chrome tabs. I have background WSL2 sessions running, and I'm using up almost all 12 gigs of my memory, even causing my PC to swap and freeze if I overdo it. Flutter compile times are longer than I would like; it was just time, I couldn't wait anymore. And the world has moved to 4k, so let's leave the 1080p world behind and move to 4k as well, as it won't be long before 8k arrives.

Personally I rarely game on my PC, I have a Playstation for that, but if I were gaming, I would modify the build as follows: when I could get my hands on a newer graphics card I would toss it in, then I would look for a monitor that runs 4k@240hz, then I would switch my other 2 monitors off and game on that one 🙂

Then there are going to be some of you who say: why not wait for PCIE 5.0 and USB 4.0? Because you are going to be waiting a long time! It will be years after release before they come down in price enough that you would consider buying them. Now, remember that extra x16 PCIE 4.0 lane we picked on our motherboard? You could use it as an expansion port for USB 4.0 down the road; like I said, future proof 🙂 Other than the fact that Samsung will probably release a PCIE 5.0 drive eventually, do you want to wait that long, or are these drive speeds great right now? I thought so.

I hope this helps you with your new build. Total cost 2k CDN, a very good price for a completely new, future proof build that will last you a long time.

Enjoy,

Dan.

FreeBSD certbot wildcard automatic renewals with bind

As many have experienced, wildcard automatic renewals are not working: where "certbot renew" used to just take care of everything, it no longer does.

My goal, then, is to have it work again without touching our current cronjobs, so let's get started.

Let's Encrypt wildcard instructions

pkg install py37-certbot-dns-rfc2136
tsig-keygen -a HMAC-SHA512 acme-update

add the key block from the above command's output to named.conf

EXAMPLE: named.conf

key "acme-update" {
        algorithm hmac-sha512;
        secret "my long ass secret with double quotes";
};

//test.com
zone "test.com" {
        type master;
        file "master/test.com";
        update-policy {
                grant "acme-update" name _acme-challenge.test.com TXT;
        };
};
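
Before involving certbot at all, you can sanity-check the update-policy with nsupdate; the IP and secret below are placeholders, use your own values from the tsig-keygen output:

nsupdate -y 'hmac-sha512:acme-update:PASTE_YOUR_BASE64_SECRET_HERE'
> server 5.5.5.5
> zone test.com
> update add _acme-challenge.test.com 300 TXT "test123"
> send
> quit
#then confirm the TXT record is visible (and delete it the same way after):
dig @5.5.5.5 _acme-challenge.test.com TXT +short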

pico /usr/local/etc/letsencrypt/rfc2136.ini (add the following for certbot)

dns_rfc2136_server = 5.5.5.5  # put your nameserver's IP address here
dns_rfc2136_name = acme-update
dns_rfc2136_secret = mylongasssecret
dns_rfc2136_algorithm = HMAC-SHA512

chmod 600 /usr/local/etc/letsencrypt/rfc2136.ini

certbot certonly --dns-rfc2136 --dns-rfc2136-credentials /usr/local/etc/letsencrypt/rfc2136.ini --server https://acme-v02.api.letsencrypt.org/directory --email admin@test.com --agree-tos --no-eff-email --domain 'test.com' --domain '*.test.com'

Congratulations, from now on the normal "certbot renew" command in your cronjob will work like it did before.
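
For reference, a minimal root cronjob would look something like this; the deploy hook is just an example, reload whatever daemon actually serves the certificates:

#crontab -e
0 3 * * * /usr/local/bin/certbot renew --quiet --deploy-hook "service apache24 reload"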

2021 biggest upgrade year in decades, 5G, HDMI 2.1, PCIE 4.x. Where do I start?

2021 will put everything we currently own in the antique shops! Upgrading our PCs…

Let us start with HDMI 2.1. This technology promises 48 Gbps! What does this mean? Get ready to throw out all your existing monitors for the new 4k 120 fps monitors ASUS is currently releasing. Of course we will need new graphics cards to match our typical 3 27 inch monitor setups on our desks.

What does this all mean? We will need new PCIE 4.x motherboards, CPUs, DDR5 memory, NVME SSDs and everything else to match! There are currently not enough motherboards out yet, so I would estimate everyone should start their builds fall 2021 or early 2022. The ultimate new build would be a CPU with overclocking ability past 5 GHz with many cores, the fastest DDR5 memory you can get, and a high end graphics card supporting 3 HDMI 2.1 outs for the 3 27 inch monitors at 4k 120fps+. There is no reason to have a 1080p monitor on your desk anymore; they belong in antique shops as of 2021, especially with the ps5 supporting 8k!

The most depressing part about CPU technology is that clock speeds have stalled for nearly 2 decades. We should be way beyond 5 GHz base clocks by now, but the industry chose to add more cores/threads instead. If you remember the blue screens of death from Windows 3.1/Windows XP back in the 90s onward, a lot of that pain was eased by adding cores to CPUs: if some process ran out of control maxing a core out at 100%, your PC would no longer freeze.

So why did they continue with cores only? Well, the internet developed into virtual hosting, things like KVM on Redhat/Centos, which let you add more operating systems to existing systems without requiring a whole new PC/server. If you had a lot of memory in your system (32 or 64+ gigs), you could run many different operating systems like Linux, FreeBSD, Windows, Solaris, Irix and many other unix flavors.

But isn’t increasing core speed a good thing? Of course it is, better the core speed, faster the CPU can run return to us to do other things. Transcoding for example requires a high clock speed, IE converting a video stream from one format to another on the fly. The reason we have stalled so much is adding transistors to CPU chips is not an easy task, they can only get so small. GPU’s( graphic card processors) actually are faster, because this CPU industry has stalled so much on increasing clock speeds. Reality is with only INTEL and AMD for competition, there just wasn’t enough competition, we should be at 10ghz base clock speeds by now. We are more likely to see quantum computer chips now before we can expect INTEL or AMD to actually produce a 10ghz chip. Then with invention of smart phones it stalled it even more as we went back in time once again using old school ARM CPU chips to increase battery life in phones instead of developing better battery technology.

5g smart phones

All our pixel 4XL’s down or anyone with a phone not made fall of 2020 on belong in antique shops as of 2021! What is a 5G phone? It’s the next version of cell phone tower technology. We had 3g, 4g, LTE and now 5g. Why would you want to have a phone that is stuck back on 3g technology!

The good thing about 5G is that it will empower self-driving cars, AI, and faster, more reliable internet on our phones. The only case I can make for not upgrading is if you live in a rural area; then don't bother till 2022 when your phone contract runs out. Every phone that year will support it, so the Google Pixel 6 XL should be your first upgrade!

Why do I prefer Google Pixels? Longer support, quicker updates, more secure (Google makes Android, so do you really want 2 companies running around your phone, or just Google?), and if you ever plan on making an Android app in your life, that is the phone you need. Have fun learning Java! Ohh god, back to the 90s again! Why don't we just program them in assembly language already instead of using Perl or Python lol. Don't worry, when quantum chips arrive we will finally have updated programming languages. Programmers can just go to sleep for the next 5 years till physicists release a chip with 2000 qubits. Look on the bright side: at least you'll have a ps5 to tide you over, along with an 8k TV, before that happens.

Laptops with PCIE 4.x, HDMI 2.1 support

These will command a premium price tag; you're better off building a desktop off newegg.com/ca at the prices they will charge. I would not waste my money on a laptop, period, without HDMI 2.1 out support to a 4k TV, or you would be stuck watching a movie in 1080p in your hotel room; what a disgrace that would be.

These days laptops are fast enough for all your daily needs. To buy new, I would wait for graphics cards that support HDMI 2.1 out to 4k TVs so you can watch a good quality movie. They have those right now, but I would wait till they have PCIE 4.x motherboards in them, so you're future proofed for a while and your SSDs run blazing fast in them.

Home Living Room Setup

NOTE: I am using the ps5 and the new xbox releasing late 2020 as the minimum test scenarios we need to stay compatible with when upgrading old technology.

This is tricky. The first thing we need, before any upgrades to our existing living rooms, is a receiver capable of HDMI 2.1 that is compatible with the latest ps5/xbox etc. and supports 6+ HDMI ports. Currently there are only a couple available, costing thousands. I would wait till Black Friday 2021 and see if you can snap one up for $350 from Best Buy; then the upgrades can begin. After finally buying the receiver, head over to monoprice.com and pick up as many HDMI 2.1 cables as you need. Then we upgrade the TV to support the latest ps5/xbox specs. As of today's date there are only 2 models available.

For our television upgrade, size etc. is personal choice. One thing you absolutely need to make sure of is that it is ps5 etc. compatible. I would say a 4k TV is the better choice right now, as the new consoles have lower frame rates at 8k, which is not very good for gaming. Another thing to consider is gamer input lag: the lower the input lag, the better the TV. This does not mean don't get an 8k TV if you have the money; I certainly would. I would just play games at 4k on the 8k TV, and I'd still be able to watch movies at 8k whenever they come out plentiful like 4k movies are now.

My recommendation for price/performance is the largest TV you can get, 65+ inches, at 4k with ps5 support and 3D support, because new Avatar movies will be releasing in the next few years and 3D only looks good on 65 inch+ TVs. I prefer the 73 inch range. If on a really tight budget, just snag a 55 inch 4k TV when you can, but who likes sitting that close!

For non-gamers you won’t have to worry about gamer input lag as much as someone raiding online say in Final Fantasy 14 on ps5, but my advice on getting a large enough TV for new console support with 3D as large as you can stands. For non-gamers the fastest possible Netflix experience you will get will be running netflix etc on ps5/xbox new consoles when they come out late 2020, no other device will match it’s speed, not even your brand new smart TV.

Technology updates in retrospect

Gamer is a funny term to begin with; in reality, all humans are gamers. You're just an extension of your smart phone these days, till maybe one day you're able to use Elon Musk's Neuralink to even come close to being as smart as AI. As much as you may not think you are a gamer, the reality is that on consoles you get many lives, in real life you only get 1, and one day you will be walking around with virtual reality glasses before you die whether you like it or not; resistance is futile.

If you choose to resist new technologies, you will never be a part of the real resistance team against AI and quantum computers one day; knowledge of underlying technologies is #1 in 2020. Even Bill Gates predicted we will all work from home one day in virtual reality; the only resistance currently is governments not wanting to give everyone a universal income, so there will be a SARS version 3, 4, 5, who knows, till this happens. With new technologies like CRISPR, anyone could release a virus on the world anytime they want now. Over the next few years we will see governments adopt cryptocurrencies; currently central banks have it, and a couple of countries, and we know central banks control the US/Canada anyways, so it's just a matter of time. I predict the USA will adopt it first, they are always behind Europe lately, and Canada is so passive they only do things after the USA does them, so they will follow suit, just like the legalization of marijuana.

What if you are young in your 20s or 30s?

If you are just coming into all this new technology, follow Elon Musk and see where the world is going. Understand physics; understand his projects Starlink and Neuralink, and what is involved in inhabiting Mars. The Mars mission that has been worked on for a very long time just launched this year and should reach Mars by February.

What should you study in school? Physics, right now, is #1. To put it simply, my parents came from the industrial age. The baby boomers grew up in an age where they could work the same job all their life in some store or factory, and only 1 income was required to support a family. The industrial age meant women stayed home, men worked, women took care of the kids, and a family could survive off just one income, unlike today.

The next stage after the industrial age was the computer age. This is my age; I'm currently 44 years old. I like that number a lot, it was the number of Stephane Richer when the Montreal Canadiens won the Stanley Cup 🙂 Anyways, since the 90s computers have evolved so much; my generation improved computer technology greatly, so much in fact that it gave you the smart phones you can't get off these days.

The age we are currently coming into is the space age. Your age group. Physics and engineering are #1 right now. More and more missions are going to Mars, quantum computers are becoming a thing, and artificial intelligence will mean everyone works from home one day; as much as governments resist giving people a universal income, rest assured it will happen as AI replaces 80% of jobs in the next 10 years.

This is a great time to become an engineer. Engineers will always have a job. While my background was computer science from the 90s, if I were your age I would instead be studying physics and getting an engineering degree. You can still pursue computers, but do it from the engineering side. Besides, their side is much more fun: you can build robots, while all computer scientists get to build is cryptography. I would pick the robot side 🙂 It will give you the base foundation for understanding more about the space age as it becomes relevant to you. I hear Elon Musk even set up a school; explore that idea as well.

But what if you like to help people? One word: "CRISPR". Study biochemistry and learn CRISPR as soon as you can; even get petri dishes at home and learn from the online kits available. This is the future of medicine. If you want to help people, use it for good, like curing cancer, not releasing viruses on the world. A word of warning: we do not know enough about the human body right now to be playing god with CRISPR. A researcher in China, for example, ended up in jail for modifying embryos with it so they could supposedly never get HIV. Here is the problem: we do NOT understand enough about DNA to just randomly snip at it. If you take CRISPR and modify a DNA strand so someone cannot get HIV, you may make that person more susceptible to other diseases; it's like you shut one thing off and accidentally turn another thing on. I don't think we will have a better understanding of DNA until quantum computers arrive and can analyze it, so be prudent, don't play God, learn from your teachers and classmates, and join mailing lists for researchers.

But what if you want to do nothing, maybe be a pornstar? Remember, no matter how attractive you think you or someone else is, everyone gets old! Relationships are built on trust and having things in common; this should be the basis for everything you do in life. Find a passion and explore it. It takes 10k hours to master anything in life, which is why you need to find your passion and put the time in. Go game on Twitch if you need to, just find that passion; 10k is a lot of hours and you need to put them in somewhere. Trust me, time goes by quickly if you find people who share your interest.

Before you leave this world, you want to leave something behind that future generations can carry on; if you do not, your life will feel empty all the time. You always want to be accomplishing something that betters humanity, and physics/science will give you the basis to do this.

Develop the mentality: "Nothing is impossible, it just hasn't been done yet". Throughout my programming years, all I heard from developers was "That's impossible". And the more they thought it was impossible, the more it challenged me to make it possible. I did a lot of late night coding sessions, even slept on possible algorithms in my head before bed and woke up with the answer, then implemented it and proved everyone wrong. Nothing is impossible; it's terminology for lazy people unwilling to do any research or testing. The key is to know every available tool for the job and try them in different ways. Don't be afraid to fail: I needed to fail 100 times at everything I tried at first to finally have something that worked. Never give up! Sure, give up on bad partners or people from non-science fields, but never give up on science! There is always a way!

Remember to always be nice to non-science people; people fear what they do not understand. Don't spend a lot of your time hanging around non-science people, because you'll waste so much time on their fears and conspiracy theories; let someone who is bored do that 🙂 Get on IRC, get on mailing lists, and always talk to people smarter than you, or you will never learn anything. If you're having creativity issues, go teach at a university for a while and then return to your field later; students can teach you a lot and give you your passion for the field back. I think the saying "Those who can, do; those who can't, teach" only applies to Arts people anyways, because you can get research grants and further your research when you get your creative itch back.

If you ever get a chance to run a business, the right business partner is key. I must have run 5 different companies in my life, and the right partner is the hardest thing to find. You need to find someone who has all your weaknesses as their strengths; maybe that's marketing, sales, accounting, programming, taking a company public on the stock market or whatever. You can't do it all! My average was one good partner every 3 tries. Treat your partner with utmost trust; have the mentality that you want to have the exact same money as them and be sipping martinis with them in the Caribbean one day. Never do anything without integrity when running a company, and never skim off the top; you're only hurting your partner and you sharing that martini! You will never find him/her on the first try; you have to understand your own weaknesses and personality well first.

Treat your sales team like gold, they pay your programmers' wages: take their families out on trips, make it fun, that is your blood line! You will suffer mental collapse many times and feel like throwing in the towel; that is the time to push harder to succeed. The difference between people who succeed and people who fail is that the ones who succeed pushed through it when it happened. I do not wish a startup on anyone, it's mentally exhausting, and I suffered programmer burnout many times doing this; remember, balance is key. Have a whiteboard and draw out ideas, it helps to keep things in perspective.

My goal is not to discourage you from running a company; there are many positives. You will learn so much about how valuable interpersonal relationships are, and you will become very wise in dealing with people, merchant accounts, marketing, sales, programming, or whatever your strengths are. Just have a marketing plan upfront: no product is worth anything if no one knows about it, so if internet marketing is not one of your strengths, find a partner.

When I started out the first time in my 20s, I was weak at sales, lacked people skills, and was weak at marketing. I was strong at programming and the technical side of things, but it wasn't enough of a skillset. So I became stronger at marketing and people skills. With a strong tech background, people skills and marketing abilities, I was then able to get a partner with good people skills and strong sales skills. That partnership worked really well: I focused more time programming for marketing purposes, and my partner would close the deals, as sales just wasn't my thing. It was a beautiful partnership and we complemented each other's strengths and weaknesses perfectly. I developed enough people skills to talk tech in terms people understood, I was contributing on marketing, and that helped my partner close sales. That's how you do it; the motivation you give each other is what keeps things going, and that, my friend, is why finding the right partner is important.

The coolest part about it is that it's the only job in the world where you want to make yourself dispensable, so you can run off and open other companies down the road 🙂 Put the programming time in, build it and they will come 🙂 Or if you're on the other end, find a programmer and support them by contributing ideas.

On a funny note: when I was in university, science students would run over to the Arts faculties wearing shirts that said, "Friends don't let friends take Arts". I hope this makes sense to you one day… (Would you rather marry a doctor or a lawyer? Would you like to have a BA and work at a gas station?)

Always remember: intelligent people talk about ideas; others talk about other people constantly. Use this knowledge to help you pick a good partner one day, and don't stress yourself out with people around whom you feel you need to defend yourself all the time. Stress will kill you faster than anything on this planet.

Good Luck,

Dan@SunSaturn

FreeBSD 12.1 + Alpine with GPG

Intro:
I decided to install GPG on FreeBSD with alpine. What does this do? It's the old days: using PGP to encrypt your email before sending. This is a howto so everyone can start encrypting their emails. Why do it? Back in the 90s when I was sitting in computer science class, it was common courtesy and etiquette to always provide people with your PGP key when sending emails; not providing your PGP key was considered disrespectful among computer professionals. This is a tribute to my old classmates Isaac Eaglestone and Jason Barlow, I wish I could find them again. Especially Isaac, who would bitch me out every other day for not using it 🙂 To be fair, it was a headache to get anything working with an email client back then, considering all we had to work with was Slackware Linux, so I decided: let's go through FreeBSD, pull our hair out fixing any errors that pop up, and get a reliable alpine + GPG setup going!

Can this stop quantum computers?

By now we all know Shor's algorithm is set to break all asymmetric encryption. So what we will do is use the best encryption we can with GPG, using symmetric encryption where possible; GPG supports AES256, so we will use that, along with RSA for compatibility. For all intents and purposes we will use the strongest settings that make sense while staying compatible with other people's keys.

Why use alpine?

If you ssh into systems on a regular basis, it makes no sense to download your email to an insecure device at home. If you're using openvpn to download over VPN to a client such as Kmail on your Google Pixel phone, it should be ok. What makes a phone insecure is trusting too many app developers. Facebook, for instance, has been known to go behind people's backs and upload your contacts to their servers; Facebook also owns WhatsApp. If you want to keep your phone secure, don't put these on your phone.

FreeBSD Prerequisites: PART 1

Firstly, I prefer using alpine with Postfix and Maildir support, and since the Maildir patch is not available with the standard pkg system, off to the ports we go:

Let’s install alpine from ports, lock it from package manager updating it, install alpine gpg addon from pkg system, and see what directories it used for installing it.

cd /usr/ports/mail/alpine
make config #(Select Maildir patch)
make
make install
pkg lock alpine
pkg install ez-pine-gpg 
pkg list ez-pine-gpg

(Assuming you're using bash for your shell and nano for your editor.)
Next we want to get rid of any "pinentry" errors that may come up, the first problem I ran into; the following will solve it on next login:

alias pico='nano -w'
pico ~/.bash_profile #(add the following next line,save and exit)
export GPG_TTY=$(tty)

At this point, run alpine at least once to get your .pinerc created if it isn't already, then let's open .pinerc and REPLACE display-filters and sending-filters with the following:

# This variable takes a list of programs that message text is piped into
# after MIME decoding, prior to display.
display-filters=_BEGINNING("-----BEGIN PGP")_ /usr/local/bin/ez-pine-gpg-incoming

# This defines a program that message text is piped into before MIME
# encoding, prior to sending
sending-filters=/usr/local/bin/ez-pine-gpg-sign-and-encrypt _INCLUDEALLHDRS_ _RECIPIENTS_,
        /usr/local/bin/ez-pine-gpg-encrypt _RECIPIENTS_,
        /usr/local/bin/ez-pine-gpg-symmetric _RECIPIENTS_,
        /usr/local/bin/ez-pine-gpg-sign _INCLUDEALLHDRS_

Alright, we are getting closer; now we want to actually create our gpg key if you don't have one already. Here we run into the ssh X11 forwarding headache: if you have it enabled and su to another user, the tty is still owned by the user who logged in on the tty device, and permissions are generally 600 on it, so key generation can fail with end-of-file errors. You can get around it by chowning the tty device or using tmux, but honestly, why go through the trouble? Just ssh to localhost (without X) as the user you want to create the key with:

ssh -x localhost #(disable X forwarding for this user)
gpg --full-generate-key

Enable encrypted swap space if you have newer hardware and your CPU supports AES-NI; here is a quick test:

swapoff -a
kldload -n aesni
swapon -a
dmesg #this should show if CPU supports it or not

If the above is supported, put aesni_load="YES" in /boot/loader.conf and append the ".eli" suffix to all swap devices. Enjoy your encrypted swap space.
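
For example, a typical swap line in /etc/fstab changes like this (ada0p3 is just an example device, check yours with "swapinfo"):

#before:
/dev/ada0p3    none    swap    sw    0    0
#after:
/dev/ada0p3.eli    none    swap    sw    0    0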

Ok now let’s create our gpg.conf file, so we can remove unsecure memory errors if you don’t have secure memory space and set some defaults for encryption. You remembered to run gpg at least once so ~/.gnupg directory got created right? Just type in random crap and hit CTRL-D.

pico ~/.gnupg/gpg.conf #add following to the file, save and exit:
no-secmem-warning
cipher-algo AES256
personal-cipher-preferences AES256 AES192 AES CAST5
personal-digest-preferences SHA512 SHA384 SHA256 SHA224
default-preference-list SHA512 SHA384 SHA256 SHA224 AES256 AES192 AES CAST5 ZLIB BZIP2 ZIP Uncompressed
cert-digest-algo SHA512
s2k-digest-algo SHA512
s2k-cipher-algo AES256
keyid-format 0xlong
with-fingerprint
use-agent
charset utf-8

Now let’s edit our keyserver daemon’s config file, this always goes through tor, so unless you want to be waiting 30 seconds for it to timeout all the time, just install tor! Yes, you could just set option for it not to use tor, but only thing that is going to do is make it start faster, after process forks it will try tor anyways, so trust me save yourself the hours of headache and just install tor so your not always waiting on dirmngr to timeout using tor, unless of course you enjoy sitting there waiting 30+ seconds for a simple command like gpg –search-key <KEYID>. This dirmngr.conf file and daemon is only used when dealing with public internet keyservers.

There is nothing wrong with using Tor, just as long as you aren't an exit node; if you can run one, awesome! Is Tor secure? Absolutely not: the NSA pulled 2 people out of a DefCon conference when 2 university researchers found a way to exploit TLS in there, and they have been in talks with the Tor source code developers as well. If that is not enough, they are known to run honeypots all over the system. If you want secure, you're best to run a VPN, then run that through Tor. Personally, I think of having Tor on my system like enabling IPV6 on it: just another network I can talk to. For our purposes we are going to use it just so dirmngr doesn't piss us off 🙂 Besides, we can just do "service tor stop" anytime we're not talking to a keyserver if we like.

pico ~/.gnupg/dirmngr.conf #add following to the file, save and exit:
keyserver hkps://keys.openpgp.org

#now setup Tor if you haven't already
pkg install tor
pico /etc/rc.conf  #add the following, save and exit
tor_enable="YES"

#I am not going through configuring tor in this article; at least go into
#/usr/local/etc/tor and configure torrc and torsocks.conf so port 9050 is
#working, just make sure you're not running an exit node!

service tor start  #start tor
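
A quick sanity check that tor is actually listening on its SOCKS port (sockstat is in the FreeBSD base system):

sockstat -4 -l | grep 9050 #should show tor bound to 127.0.0.1:9050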

Now you may be asking why I am using keys.openpgp.org instead of the SKS keyservers. The reason is simple: there is a DOS attack, known for decades, that still works on all internet keyservers. The DOS is quite simple: sign someone's key 150k+ times and upload it to a keyserver, effectively destroying their gpg installation once they refresh their keys. The keyserver I have picked for us today is the only one on the internet, as of this date, that at least attempts to mitigate this attack. So do yourself a favor and use it! As of recent versions of gpg, all keyserver lines go in this file now, not in gpg.conf anymore. As of this date I am using version 2.2.21.

Great, now let's get on with the real stuff: playing with gpg itself. First we want to generate our key. I suggest leaving things at the defaults, but you can set the RSA key to 4096; there's really not much extra security over 2048, maybe 40 bits according to the gpg website, and the performance trade-off is not really worth it.

I am saying use defaults just to be compatible with other people. In the future what we really want to do is replace RSA with the elliptic curve Bitcoin uses, by selecting secp256k1 in expert mode down the road (i.e. selecting 10 and then 9 with the next command). Let's not do that now. We are still using AES256; even a quantum computer with 4096 qubits would only knock that down to the equivalent of AES128, and by the time they have that 1 million qubit computer hopefully the post-quantum algorithm will be out. Pick a good password! With today's technology they can brute force 2.8 billion tries a second, which is enough to try every lowercase a-z 10 character password in about a day! Mix it up with uppercase, numbers, special characters and use 4 words if you can!

Back in the 90s I used sentences with pgp, then I always forgot what they were, because I wasn't using them as frequently as a password you log in to email or ssh with, so keep it to something you can remember!

gpg --full-generate-key --expert #generate our key!

You will notice it put a revocation certificate in ~/.gnupg/openpgp-revocs.d/
You will need this to revoke your key down the road if you lose it.
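
If you ever do need it, the procedure is roughly this; note that recent gpg versions insert a leading colon into the .rev file as a safety catch, and the comment inside the file tells you to remove it before importing:

gpg --import ~/.gnupg/openpgp-revocs.d/<FINGERPRINT>.rev  #marks your key as revoked
gpg --send-key <KEYID>  #push the revoked key back out so others see it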

Ok, at this point go test it: go into alpine, send an email to yourself, hit CTRL-x like usual to send, but before typing "Y" to send, hit CTRL-p instead to scroll through the sending filters, select something like sign and encrypt, then hit "Y" to send.

If all goes well, you should be prompted for your password. gpg-agent will then cache this password in shared memory for a set amount of time, which you can specify in its config file (sketched below), or until the server is rebooted or gpg-agent is killed and restarted. You'll notice in "ps aux", every time you deal with gpg in any way, that gpg-agent is running.
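
The cache lifetime lives in ~/.gnupg/gpg-agent.conf; here is a minimal sketch, where the 7200 second (2 hour) values are just an example but both options are standard gpg-agent settings:

pico ~/.gnupg/gpg-agent.conf #add following to the file, save and exit:
default-cache-ttl 7200
max-cache-ttl 7200

gpgconf --reload gpg-agent #make the agent pick up the change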

This is how gpg works. Try thinking of gpg as a key-ring management tool. Every time we use gpg we are mostly just a client talking to that gpg-agent server process. We can list the keys in there, tell it to remember our password in shared memory for X amount of time, sign things, encrypt and decrypt files, and much more. Now, just sending email to ourselves is not very useful.

At this point I want you to add another user to your system and repeat all these steps for this second user. Have him send email to himself, and when you have that working and are ready to send between the two users, let's continue… Remember, when using su - $USER you cannot create a key because he does not have his own tty; make sure to ssh -x $USER@localhost so he gets his own tty and you have no issues!

PART 2 – Actually working with GPG (our key manager)

OK, if you made it this far, congratulations! The hard part is done! You completed the setup! Give yourself a pat on the back. The only thing we are going to do now is play with the gpg command itself and learn what cool things we can do with it.

Let's start off by continuing where we left off: I told you to create another user on the system and set up his .gnupg directory. So at this point let's send our practice emails back and forth between the two users.

First Step: Let’s export our public keys for each of these accounts:

cd /tmp
gpg --armor --output user1@example.com.public-key.gpg --export user1@example.com
#now for other user:
cd /tmp
gpg --armor --output user2@example.com.public-key.gpg --export user2@example.com
#I like to keep copies of these in my .gnupg directory so let's do that
#for each user do following:
cd ~/.gnupg
cp /tmp/user1@example.com.public-key.gpg .
cp /tmp/user2@example.com.public-key.gpg .
#now for user1:
gpg --import user2@example.com.public-key.gpg
gpg --sign-key user2@example.com
#now for user2:
gpg --import user1@example.com.public-key.gpg
gpg --sign-key user1@example.com
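#you can confirm the import and your signature took, as each user
#(both are standard gpg commands):
gpg --list-keys                    #the other user's key should now show up
gpg --check-sigs user2@example.com #your signature should be listed under it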

Ok great, what we did was import each key, then sign it for each user because we trust them. Now jump into alpine as each user and send emails back and forth with the CTRL-p filters. Play with it for a bit; you will notice the gpg-agent daemon starts asking you for your password. The pinentry program runs here to ask for it, which on my system is:

sunsaturn:~/.gnupg # ls -al /usr/local/bin/pinentry
lrwxr-xr-x 1 root wheel 12 Oct 11  2019 /usr/local/bin/pinentry -> pinentry-tty
sunsaturn:~/.gnupg #

gpg-agent keeps your password in shared memory for approximately 2 hours, unless you change that in the config file or restart gpg-agent. You can kill the gpg-agent and dirmngr daemons anytime you want with "gpgconf --kill all", or the old school reliable way: "ps aux" and "kill -9 <pid1> <pid2>".

Wonderful, you have done it! But wait, we probably want to submit our own key to the internet keyservers! We don't have to, but it would be nice to link our email address to a PGP key on the internet so people can find us easily.

Ok, what we will do is submit our key to the SKS keyservers as well as our default openpgp keyserver. Then we will advertise our key in our .signature file in alpine so the whole world knows we have a PGP key. We will even put a link to a copy of our public PGP key in our .signature file so people can grab it anytime through a website. Sound cool? Great, let's do it…


gpg --list-keys #let's start by listing the keys and find our <KEYID>

pub   rsa2048/0xFF6F49977311C386 2020-07-17 [SC]
      Key fingerprint = A1A7 6E84 FB0B 8994 C3B5  A1BA FF6F 4997 7311 C386
uid                   [ultimate] Dan The Man (Dan @ SunSaturn)

For example, above is my key; my <KEYID> is the hex string after "pub rsa2048/0x". So here my <KEYID> is FF6F49977311C386. The reason we have 0x in front of it is because in our gpg.conf file we have "keyid-format 0xlong"; it's just to prevent ambiguity really. Had I used just "keyid-format long" it would not have the "0x" in front. You can also see the fingerprint of my public key. Since we are using 0xlong, I can use 0xFF6F49977311C386 as my <KEYID> here.

Alright, let's submit our key to keys.openpgp.org; since that is the keyserver in our dirmngr.conf file, it is what we will default to.

gpg --send-key 0xFF6F49977311C386     #use your KEYID!
gpg --search-key 0xFF6F49977311C386   #use your KEYID!

Great, if all went well we submitted our key to keys.openpgp.org and then searched for it and got it back. Now wouldn't it be cool to search by our email address instead? Go in your browser to https://keys.openpgp.org and follow the instructions there and in your email to verify your email address, so people can search for your key by email address. Once that is done, awesome, let's see if it worked:

gpg --search-key user@domain.com #use your email now!
gpg: data source: https://keys.openpgp.org:443
(1) Dan The Man (Dan @ SunSaturn) 
2048 bit RSA key 0xFF6F49977311C386, created: 2020-07-17

Good job, now let’s submit our key to SKS servers as well:

gpg --keyserver pool.sks-keyservers.net --send-key 0xFF6F49977311C386
gpg --search-key 0xFF6F49977311C386 #use your KEYID for both!
gpg --search-key user@domain.com    #or use your email if you like

Now you have to realize pool.sks-keyservers.net is a pool of addresses and it may take time for them all to sync. If you run "host -t A pool.sks-keyservers.net" you can see the IP address rotates each time; if you ran those 2 commands above quickly you may have hit the same IP address twice, which is why the search succeeded immediately. Don't worry about this, check back in 24 hours. One good thing is we don't have to do any email verification to list our key on the SKS servers, so we are done. For a list of pools visit: https://sks-keyservers.net/overview-of-pools.php

Almost there. The last thing we want to do is tell the world, in our .signature file in alpine, that we can use PGP/GPG, in case people wish to add our public key to their keyring to encrypt emails/files to us. For that we want our public key on our website somewhere, and we want the fingerprint of the key so people can check it was not tampered with.

gpg --armor --output /path/to/website/root/pgp.txt --export user@example.com
gpg --list-keys

In the first command above we exported our public key to our website's directory; just copy pgp.txt to your website on another server if needed. In the second command we're looking for the "Key fingerprint" line, so in my case:

gpg --list-keys
pub rsa2048/0xFF6F49977311C386 2020-07-17 [SC]
Key fingerprint = A1A7 6E84 FB0B 8994 C3B5 A1BA FF6F 4997 7311 C386

My fingerprint is "A1A7 6E84 FB0B 8994 C3B5 A1BA FF6F 4997 7311 C386". By putting a link to the pgp.txt file and this fingerprint in our .signature file, people now have 3 ways to find us: through the openpgp keyserver, through the SKS keyservers, and through our emails. So let's edit our .signature:

pico ~/.signature #add something as follows:
PGP Key: https://SunSaturn.com/pgp.txt
A1A7 6E84 FB0B 8994 C3B5 A1BA FF6F 4997 7311 C386

For reference, here is my .signature with my email address/phone number removed for this blog; obviously put your pgp.txt and fingerprint in its place 🙂


Dan The Man
CEO & Founder
Websites, Domains and Everything else
PGP Key: https://SunSaturn.com/pgp.txt
A1A7 6E84 FB0B 8994 C3B5 A1BA FF6F 4997 7311 C386

That's it, we are done! Congratulations for doing the entire setup! From now on all you will ever have to do is remember to hit CTRL-p to send with PGP/GPG, and your password. I hope to god you can remember your password; place a file somewhere giving you hints if needed.

Closing Thoughts:

Keep your private key secure. We all know by now intelligence agencies store encrypted data to decrypt at a later date when technology gets better. That being said, your emails should be fine until they have quantum computers with millions of qubits. If you're really paranoid, create an advanced key with elliptic curves from this setup; it just won't be compatible with most people at this point, as many still run older versions of gpg. For any important files you need to encrypt, always use the best you can. When post-quantum algorithms come out, decrypt your files, then encrypt them again with the newest standard. If your key ever becomes compromised, revoke the keys on both keyservers we submitted to and go through creating a new key again. You can also share a password between the two of you using one of the sending filters, cool right?

If you have really sensitive information to send someone, both of you can agree to access the PGP file over a vpn/ssh connection to download it. That gives you a double layer of protection. Even better yet, ssh over a VPN connection for a third layer 🙂 Someone storing the encrypted data would have to break your VPN key, your SSH key and then your PGP key; you'll most likely be dead by then, so you should be good 🙂 For advanced users: create a cronjob that rotates your vpn key and ssh secret/public keys at regular intervals, ultimate protection. Store your encrypted files in an encrypted filesystem, preferably on an SSD with 4 or more bits per cell like QLC, with trim support; the more levels per cell, the harder it is for forensics teams to recover deleted data, and they will give up. Personally I don't have any sensitive data, but if I did, those are the avenues I would use. For me, I use VPNs for what they were made for: accessing internal machines on remote servers like I was there.
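
Here is a minimal sketch of the key-rotation idea for the ssh side; the script name and server are hypothetical, and you would want similar treatment for your VPN keys:

#!/bin/sh
#rotate-ssh-key.sh - hypothetical sketch, run from cron e.g. weekly
NEW="$HOME/.ssh/id_ed25519.new"
ssh-keygen -q -t ed25519 -N "" -f "$NEW"          #generate the new keypair
ssh-copy-id -i "$NEW.pub" user@server.example.com #push it while old key still works
mv "$NEW" "$HOME/.ssh/id_ed25519"                 #swap the new key into place
mv "$NEW.pub" "$HOME/.ssh/id_ed25519.pub"
#left as an exercise: remove the old public key from authorized_keys on the server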

Encrypt files: (create files with .gpg ending)

#have a friend's public key imported?
#with this method you cannot decrypt the .gpg file afterwards
#only his secret key can
gpg --encrypt --recipient myfriend@gmail.com myfile.txt
ls -al myfile.txt* #decide what to do with myfile.txt
#even better: encrypt to both your and your friend's public keys
#this way you can both decrypt myfile.txt.gpg
gpg --encrypt --recipient myemail@domain.com --recipient myfriend@gmail.com myfile.txt
rm -f myfile.txt #send him myfile.txt.gpg
#or share same password between you both
gpg --symmetric myfile.txt
rm -f myfile.txt #send him myfile.txt.gpg

Decrypt file:

gpg -d myfile.txt.gpg > myfile.txt
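
Since we went to the trouble of making a signing key, the file-signing counterparts are worth knowing too (both standard gpg commands):

gpg --armor --detach-sign myfile.txt   #creates signature file myfile.txt.asc
gpg --verify myfile.txt.asc myfile.txt #anyone with your public key can verify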

Enjoy your new setup!

Dan.

Updates 2020- PART 1

Introduction

I think I'm about due for an update, as it is SunSaturn's 15 year anniversary since it was first created. HAPPY BIRTHDAY SUNSATURN! I personally took a lot of time over the last 5 years soul searching, but in the end I've decided SunSaturn is the way to go. SunSaturn will now be taking a new approach in 2020, offering hosting and VPN service, as well as a few new services by the end of 2022. I have a lot of catching up to do! The world has changed a lot: 2020 marks the era of 5G networks, artificial intelligence, smart homes, robots, drones, flying cars (300k for one! Time to get a pilot's license to fly to my brother's place!), virtual reality, Starlink internet from Elon Musk, and every country in the world soon implementing their own digital currencies like Bitcoin! Ok, that is a very long list, so let's break down each one and see where people should be focusing for 2021 and 2022!

Quantum computers and new encryption

In a previous article I talked a lot about Shor's algorithm being able to break all current public-key encryption. So what has been done about it? The only thing they could do, of course: increase the bits in encryption keys. Will it help? Not if someone builds a 1 million qubit computer like a company is currently promising the US Government by 2022. A quantum-resistant algorithm is being worked on now, so in the meantime the best bet is still symmetric encryption. Algorithms like AES256 should be fine, backed up with elliptic curve instead of RSA/DSA routines.
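
In gpg terms that just means preferring symmetric mode for anything long-lived, for example:

gpg --symmetric --cipher-algo AES256 secrets.txt #prompts for a passphrase, writes secrets.txt.gpg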

What about email? The world has now moved to the open source project GnuPG, so if you haven't already, set up gpg for your email client and submit your public key to the internet keyservers. This will be the first step for anything in 2021/2022, so I will write a separate article on this alone and how I implement it, as it is way too in-depth to set up for the first time. But rest assured, once it is set up, people will finally take you seriously on the internet if you can sign your own emails. It's important! Plus, as an added bonus, GPG finally supports elliptic curves, so you may actually learn a thing or two about how Bitcoin works in the process!

5G networks and virtual reality

Here we go, what everyone truly wants to know about. All the 5G propaganda during COVID-19, where people thought it caused the virus and even started burning down 5G towers over it, was completely hilarious. 5G does NOT hurt you in any way or cause COVID lol. The old saying is people fear what they do not understand; rest assured, it was complete nonsense.

Think of 5G like when we went from 3G to 4G to LTE. What they are doing is just paying for another spectrum so they can offer faster internet in BIG CITIES. Yes, generally it will only be offered in BIG CITIES, because this new spectrum has limited range. Think of how your router at home was once A, B or N and then went to AC. We get good range on 2.4GHz routers at home, but when we're close to the 5GHz connection it is way faster (e.g. transferring an 8 gig movie to your phone from a computer on your home network).

What about people not in big cities? This is where Elon Musk comes in with Starlink. If you have looked up at the sky at all in the last year you might have seen them: satellites 500km above the earth that promise high speed internet for rural people. Starlink cannot serve big cities, because it would flood the available bandwidth. So if you're in an urban setting, look for fiber or 5G networks.

The PROS of 5G: virtual reality will become a thing! Google has acquired a new google glass company; perhaps glasses won't be $1500 like when they flopped last time. Other companies have them for $1000, but that price tag will eventually come down. Google needs to take a hit on glasses for developers worldwide, because if developers all over the world don't own a pair, there is no point offering them to the public with nothing built for them. Google can't do this alone, it needs worldwide developer support. So google, offer them to developers worldwide for $300, then sell to the general public for 1k down the road!

Self-driving cars will become a thing, as well as drones flying around cities; these are 2 things that would absolutely need 5G to be effective. Another PRO is cell phone companies may compete for your home internet business, giving you more options for internet in the city. Yes, this means all our current phones belong in an antique shop. Yes, even my Google Pixel 4XL belongs there within 2 years, as they are only making phones with 5G chips now. If you live in a rural area, no big rush.

The CONS of 5G: government control, of course. Before the pandemic happened, protesters in the UK were already upset with facial recognition and painting their faces. I will say this again and again: stop putting your pictures on FACEBOOK and STOP USING IT! Everyone knows by now the NSA has backdoor access to all your pictures on there; they will use them to facial ID you. Has Edward Snowden still not gotten through to everyone? He had to become a fugitive of the USA because he cared enough about you to let you know what was happening, and you're still not listening to him?

Artificial intelligence is here. If you need to post your picture/data publicly all the time, use SnapChat or Signal; for the love of god just put them anywhere where the company won't throw your data into an AI database! They should be paying you to share your data; you're just giving them free data to feed their artificial intelligence programs and they aren't giving you a cent! If you absolutely need to use Facebook, run your pictures through an image manipulation program first to throw off their AI. If they give you money, then maybe consider submitting your unedited pictures. Click like on things you hate, dislike on things you love, screw them right up. Post messages encrypted if you can, and give your friends the keys to unlock your messages; this is how the internet was meant to be! If google started sharing your data, we wouldn't buy android anymore. We shouldn't even be using our real names on the internet. The internet was made for freedom of speech, and that is why anonymity was needed in the first place; people will fight with their last dying breath to protect that. In the future governments will be crying otherwise, as artificial intelligence programs begin talking with each other, and they will have no way to stop it after. Programmers control the internet, not governments, always remember that.

Social Media Internet Draft Proposal

I propose an internet draft for social media, where everyone has 2 copies of their own blockchain they created: one for public keys and storage of media, and the other for private keys to decrypt each message on the public chain. I propose every post be encrypted with a new key. I propose a new algorithm where, as on unix servers, we have permissions such as:

-rwxrwxrwx   1 dan  dan    380 Jul 16 04:05 file.conf

so that at any time a given owner can allow Owner, Group or Others to have access to a given post. Service providers such as FaceBook or Google, if forced to remove a post, can turn the "Other" rwx (read, write, execute) flags off, and will only be able to see the message if the owner does in fact set the "Other" bit flags. The "Group" flag can cover everyone in the owner's circle of friends, by encrypting the post with each friend's public key. If the "Group" flag is removed from a post, the owner can use their secret key to remove the friends' public keys from the post, with a mechanism to add the keys back if the "Group" flag is set again.
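
In gpg terms, the "Group" flag is essentially the multi-recipient encryption we did earlier; a rough sketch of one post, with placeholder addresses:

#"Group" bit set: post readable by owner and both friends
gpg --encrypt --recipient me@example.com --recipient friend1@example.com \
    --recipient friend2@example.com post.txt
#"Group" bit cleared: re-encrypt so only the owner can read it
gpg --encrypt --recipient me@example.com post.txt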

The idea here is that social media providers may not view posts when only the group and owner flags are set, as we have seen how destructive the internet becomes when that happens: the many privacy breaches of normal users, governments' backdoors into these systems, and companies profiting by putting users' data into their artificial intelligence programs. The providers may continue profiting on ads beside each post, but may no longer retain the data of normal users. I also propose a mechanism where, for any photos posted online, users have the option of tossing them through a program that mangles the bits in the image enough to defeat AI facial recognition.

The social media companies MUST use a decentralized blockchain to pull in data; normal users would feed data into the provider's decentralized blockchain from the user's public blockchain with the appropriate bits set. This is important, as otherwise providers could read and steal the data. Terms of Service must be in place so that they do not store the data when the "Other" bit is set. Social media companies could compete to mine blocks with other miners or join a mining pool. I propose mining be done by the IP addresses connected for the longest amount of time: mine a block, then get thrown to the bottom of the list, so everyone in the world can take part in the mining process. This ensures that if a provider abandons the chain, another one can start up in its place with all data intact. We would, say, encrypt the next block in the chain with the next X IP addresses that have been hosting the chain the longest. I propose the blockchain itself be split into A-Z, a-z, 0-9 (8 character usernames) for anonymous usernames. This allows miners to host only a part of the data for rewards, as storage may become expensive if the blockchain grows too big.

As the world currently stands, blockchains are worthless unless they contain data that people feel is valuable, allowing investors to invest in them. The exception is Bitcoin itself, the grandfather of digital currency, set to be the new gold if any government digital currencies fail. IF a social media company built a nice interface to a social media blockchain, investors would have a digital currency worth investing in. It is then not just a bunch of useless numbers, but actual data they could download to mine blocks or day trade. It then has a purpose, not just a bunch of numbers and a promise for a company to do well. Prices on the cryptocurrency would then reflect the size of the blockchain, the current price of the storage medium, and the cost of the hardware to pull queries from that storage. This is a long-term solution to this problem in the world.

What will make people switch to government digital currencies in the long term is exactly that: the more people a country has, the bigger its blockchain(s) will be, making them more valuable to investors. So think USD and China. If someone were actually able to crack an address on such a blockchain, everything from social security numbers to passports could be exposed, which is why it would have so much value in the future. Governments could protect against this by forcing end users to change their password every month, issuing a new one and re-encrypting that block on the chain, keeping the chain brute-force proof even from its own employees.

What is disconcerting about governments having centralized blockchains is, let's say you have a million dollars: they could essentially just break the chain like it never existed. That is why regulations must be in place through central banks. IF they are able to simply reverse transactions on the blockchain, the blockchain will have less value to investors on the exchanges, as was the case with Ethereum. Currently central banks are using digital currency, as are Russia and a few other countries, so it's just a matter of time before Mastercard and Visa are on the chain, making this a reality. They are already hiring blockchain developers as we speak.

Profit: social media companies, after establishing a blockchain, could in fact profit more with this model. Users are not afraid to use their systems, and providers are no longer afraid of governments. Providers get more user retention, instead of users running to a competitor every time there is a security breach. Providers could apply for an ICO with cryptocurrency exchanges, hold shares in the digital currency, and offer users that cryptocurrency in exchange for another digital currency at current exchange rates through cryptocurrency providers such as Binance.

This is enough for Part 1, until next time….

Dan.