Category Archives: linux

Using WSL as regular Linux host on same network with 10 gigabit and NFS

What if you could use WSL as a regular Linux host on your network? For developers this would be big, right? Think of app developers who test apps on Windows, Linux, Android, the web and so on: you could do it all from one place, with the convenience of mounting your NAS on your WSL machine over NFS to back up the WSL files you have been working on, whether you work with Flutter, Flet, or whatever your preference is. Or maybe you just like to ssh into your WSL machine from your laptop to run Unix commands on your Windows desktop…

Alright, let’s assume we are a developer and not a hacker; we will want the latest Ubuntu LTS release (developers test against this distro more than anything else), but feel free to install Kali instead if that’s your thing…

Let’s do it. What we will accomplish here is: install Ubuntu-22.04 on WSL2, start this instance in the background when we log in to Windows so we can ssh into it, and mount an NFS directory from our NAS so we can easily copy anything we want to back up to it. So let’s say I do a lot of work in my /home/dan directory; I will mount the NAS on /data, and anytime I want to back something up I’ll copy it there. We could get more advanced and mount our /home/dan directory from a FreeBSD ZFS pool and revert to snapshots anytime we want 🙂 , but we will keep it simple…

On the security end of things, you should only NFS-mount on WSL what you would normally map with Samba from your NAS. If you ever get malicious code on your Windows computer, you don’t want to open your whole NAS to it. Worst case scenario, you lose two days of your life reinstalling Windows, restoring from your private NAS backups and rolling back snapshots on your ZFS system.

Notes on backups

I personally have a second server I leave offline and only bring online occasionally to back up the first NAS. Blame SSD corporate greed: spinning drives are cheaper, but they can only spin 3-6 years before they fail, so this ensures I don’t have two drives fail at the same time on each, if they don’t rust first, lol. NVMe drives are still mainly used as OS drives, which is quite sad really. When I worked for Fortune 500 companies they backed up regularly to tape drives, and every few months sent that backup offsite as well. If their colocation caught fire, they didn’t lose everything this way. So while it’s a good idea to have an onsite solution like mine, it’s also a good idea to rent a VPS/dedicated/colocated server and back things up there occasionally as well.

In the 90s I used to program in C; I kicked my PC one too many times and the drive failed, and I lost a whole year’s worth of C programming libraries I had made. I never touched it again and went into Perl/PHP/Python after that. Another example: on one backup server I once forgot to create ZFS snapshots, so one day when the first server had a drive fail, I replaced it with a blank slate; the backup server then rsync’d an empty drive one night before I got to it, and I lost almost everything. So the moral of the story is: a) snapshots, b) a second NAS to back up the first, c) copy things you want to keep offsite in case of a fire. You decide, but I know how long apps take to develop when you’re new at it; I’d hate to see you give up on a project because you didn’t use GitHub or a VPS.

Notes on FreeBSD

I can’t stress enough how your servers should be running FreeBSD as the main host. By utilizing bhyve virtualization your virtual hosts will be rock solid, and the ports collection is enormous; it is essentially a sysadmin’s dream. See my previous posts on running FreeBSD with vm-bhyve. Beyond having more packages available to you than almost any other OS, the main reason is ZFS. Linus Torvalds has already stated he won’t allow ZFS in the Linux kernel, and people forever hacking it into their Linux distros is a real waste of time; for backing up data, there is no better OS. At home my FreeBSD servers do everything: WireGuard, DNS, DHCP, backups, snapshots, virtual hosts, 10 gigabit fiber, you name it. That’s something you can’t do with prebuilt solutions like FreeNAS, Proxmox, etc. that take the control away from you. Take control back and you won’t regret it. I even demonstrated how to run Home Assistant with it without needing to buy a Raspberry Pi with inferior hardware 🙂

Getting started with WSL prerequisites

The first thing we need is a bridge; this will allow WSL to be on the same network as our regular LAN for DHCP etc. If you don’t have 10 gigabit, skip step 2. Replace the username dan with whatever your C:\Users\$username is…

1) Create a bridge:
Search for Hyper-V in Windows features; you should see something letting you add that add-on to Windows. Add it, then restart the PC.
Now open Hyper-V Manager, select <your PC name>, and select "Virtual Switch Manager".
Now create a new virtual switch that is "External" and call it "WSL Bridge".
Check that it is bound to our external 10GbE network card.
2) Enable Jumbo Packet on the new virtual switch:
Control Panel -> Network and Sharing Center -> "vEthernet (WSL Bridge)" -> Properties -> Configure -> Advanced
Set Jumbo Packet to "9014"
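Once the switch is up, you can sanity-check that jumbo frames actually make it end to end. The largest ICMP payload that fits a 9000-byte MTU is the MTU minus the IPv4 and ICMP headers; a quick sketch (the NAS IP below is a placeholder):

```shell
# 9000 - 20 (IPv4 header) - 8 (ICMP header) = 8972 bytes of ICMP payload
payload=$((9000 - 20 - 8))
echo "$payload"          # prints: 8972
# from WSL, ping with "don't fragment" set (replace 192.0.2.10 with your NAS):
# ping -M do -s "$payload" 192.0.2.10
```

If the ping succeeds at that size but fails one byte larger, jumbo frames are working the whole way.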

Now we need to edit the main wsl file to use this bridge, again replace dan with your username:

3) In WSL:
nano /mnt/c/Users/dan/.wslconfig OR just edit the C:\Users\dan\.wslconfig file from Windows, and add under the [wsl2] section:
[wsl2]
networkingMode = bridged
vmSwitch = "WSL Bridge"

PLEASE NOTE: I could not get X11 forwarding working without IPv6 enabled;
if you disable IPv6, the workaround is to change "AddressFamily any" to "AddressFamily inet" in /etc/ssh/sshd_config.
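For reference, this is the sshd_config line in question (a fragment, not a full config):

```
# /etc/ssh/sshd_config
# listen on IPv4 only -- the default "any" also tries IPv6
AddressFamily inet
```

Remember to restart sshd after changing it.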

Let’s actually install WSL

wsl --list --online
wsl --install -d Ubuntu-22.04

Now let’s create a .bat file that runs when we login to start WSL automatically, in your C:\Users\$USER directory create a file called “WSL_start.bat” and let’s add to it:

:: Start WSL in background
TITLE Starting WSL

:: Section 1: Starting WSL in background
set OS=Ubuntu-22.04
ECHO ==========================
ECHO Starting WSL %OS% in background, please wait...
ECHO ============================

wsl -d %OS% --exec dbus-launch true

Close the file, remembering to replace OS= with whatever distribution you want to start. You may ask why not just use that one wsl line directly: well, when you reboot your PC and log in for the first time, you would just see a blank terminal executing something with no idea what it is. This way it is informative and you always know what is running, especially if you don’t have a superfast PCIe 5 PC; I’m on PCIe 4 and it still takes a few seconds.

Now let’s add that .bat file to “Task Scheduler”. Search for it and open it.

Go to “Create Task”. In “General”, make sure “Run only when user is logged on” is checked. In “Triggers”, create a trigger with “Begin the task” set to “At log on”, for the specific user, which should be yourself. In “Actions”, click “Browse” and point it at the .bat file we created.

And that’s it we have a task created to start our WSL anytime we login, perfect.

Let’s start WSL

Open PowerShell and start typing .\WSL; hitting <tab> should autocomplete the .bat file for you, then hit enter:

PS C:\Users\dan> .\WSL_start.bat
Starting WSL Ubuntu-22.04 in background, please wait...
PS C:\Users\dan>

Great, now just log in with “wsl” or “bash”. You can then check with “ifconfig -a” or “ip a” that you’re on the same subnet as your regular LAN.

sudo su
apt install openssh-server nfs-common
systemctl enable --now ssh

You should now be able to log in with ssh, good to go. Go ahead and restart WSL and make sure everything is working alright:
wsl --shutdown
nano /etc/wsl.conf
#ADD the following:
[network]
hostname = wsl
generateHosts = false

What we are doing here is setting our hostname and saying don’t overwrite /etc/hosts; that way you can add your IPs to it without them getting overwritten. There is another option, “generateResolvConf = false”, to stop WSL overwriting /etc/resolv.conf; personally I feed that info from my DHCP server, so I like leaving it alone. If you set this option you need to restart WSL, and it will completely nuke /etc/resolv.conf the first time, so it’s best to make a copy before rebooting and then copy it back. Doing this, however, is the only reliable way to use hostnames in your /etc/fstab instead of IP addresses. Personally I feed the host its static IP address, MTU, DNS etc. from my DHCP server, keyed on its MAC address, so that works for me. If you want an example from my dhcpd.conf on FreeBSD, I currently have the following in isc-dhcpd for WSL:

subnet netmask {
       default-lease-time 259200;
       max-lease-time 432000;
       option broadcast-address;
       option domain-name "";
       option routers;
       option domain-name-servers,;
}

host wsl {
  hardware ethernet 5e:bb:f6:9e:ee:fa;
  option interface-mtu 9000;
}
This is a more convenient way to do it; that way you control WSL and everything on your network from one file. Obviously remove the MTU line if you don't have 10 gigabit :) If you want to get fancier, also edit your /etc/hosts, and add the host to your reverse DNS zone in BIND as well, but for the most part this will suffice. I'm generally lazy about editing the BIND reverse zone file; dhcpd.conf and /etc/hosts on the main host is fine.

This is what my /etc/hosts file looks like on wsl:
::1             localhost       localhost    wsl

Now let’s set up NFS. Please note: with NFS, use an IP address and not a hostname; the only reliable way to use hostnames with NFS in /etc/fstab is if “generateResolvConf = false” is set. If that is something you really want, then this is how I would do it:

cp /etc/resolv.conf /etc/resolv.conf.old
wsl --shutdown (windows)
wsl -d Ubuntu-22.04 --exec dbus-launch true (windows)
cp /etc/resolv.conf.old /etc/resolv.conf; pico /etc/resolv.conf

But I don’t; this is just here for reference.

NFS setup:

Add your mount to /etc/fstab. On your NAS, make sure to allow the WSL IP in /etc/exports and restart mountd so WSL can access it, then:

mkdir /data
pico /etc/fstab
#ADD following: /data nfs vers=4,auto,noatime,nolock,tcp 0 0

Use your NAS IP and folder, but that’s it; now try it out:

mount /data

If everything went alright and you can see your files with “ls -al /data”, you’re good to reboot and check that the NFS mount comes up on startup:

wsl --shutdown

And that’s it, you have NFS.

Final Thoughts

Microsoft should be adding ways to start WSL headless for us, and also a way to access a serial console. This, however, should get you going. Until next time…..


Rocky Linux Install 2022 with KVM support


Originally I had tried a software update on a Dell 2950 III server from Rocky Linux 8 to 9, only to end up with the Rocky Linux error “glibc error: cpu does not support x86-64-v2”. Basically I fried my entire system, as this CPU cannot support these newer CPU instructions.

Today I am going to walk you through a complete 2022 Rocky Linux install; Rocky Linux now replaces CentOS and comes from the original CentOS creator. Today I put in three different USB sticks containing Oracle Linux, Rocky Linux and CentOS Stream to see if I could install over my existing partitions so as to not wipe out my FreeBSD KVM guest on the LVM. Sadly, the installers do not see the LVM partitions, so I am forced to reinstall Rocky Linux, as well as all my guests, all over again.

I am going to walk you through a safer way to install Rocky Linux, to future-proof yourself in case of reinstalls or problems.

Rocky Linux 8 vs 9:

Run the following shell script:

pico (add following)

#!/usr/bin/awk -f

BEGIN {
    while (!/flags/) if (getline < "/proc/cpuinfo" != 1) exit 1
    if (/lm/&&/cmov/&&/cx8/&&/fpu/&&/fxsr/&&/mmx/&&/syscall/&&/sse2/) level = 1
    if (level == 1 && /cx16/&&/lahf/&&/popcnt/&&/sse4_1/&&/sse4_2/&&/ssse3/) level = 2
    if (level == 2 && /avx/&&/avx2/&&/bmi1/&&/bmi2/&&/f16c/&&/fma/&&/abm/&&/movbe/&&/xsave/) level = 3
    if (level == 3 && /avx512f/&&/avx512bw/&&/avx512cd/&&/avx512dq/&&/avx512vl/) level = 4
    if (level > 0) { print "CPU supports x86-64-v" level; exit level + 1 }
    exit 1
}

Save it then:

chmod +x; ./

host:~ # ./
CPU supports x86-64-v1
host:~ #

Now if, like the above, you only get version 1, you can only install Rocky Linux 8; if you have version 2 or higher you can install Rocky Linux 9. I believe the reason RHEL made this change was to make glibc calls faster.
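If you’d rather not save a script, the same idea can be sketched inline in plain shell. This is a rough check of the v2 level only (note /proc/cpuinfo spells the lahf flag as lahf_lm, so I leave it out here):

```shell
# rough x86-64-v2 check: these flags must all appear in /proc/cpuinfo
flags=$(grep -m1 '^flags' /proc/cpuinfo)
ok=yes
for f in cx16 popcnt sse4_1 sse4_2 ssse3; do
  case " $flags " in
    *" $f "*) ;;   # flag present, keep going
    *) ok=no ;;    # flag missing -> not x86-64-v2
  esac
done
echo "x86-64-v2 capable: $ok"
```

The awk script above remains the more complete test, since it covers v1 through v4.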

INSTALL Rocky Linux:

Download the latest Rocky Linux ISO and write it to a USB stick with Rufus. I am not going to cover the simple graphical installer, but I do want to cover partitioning your drive. Let’s go to “Installation Destination” in the installer. To prevent any future wipeouts of LVMs by installers, we are going to use standard partitions and only set up our LVM afterwards, manually. This way, if we ever run into a situation again where an installer cannot see into our LVM, it will definitely see our standard partitions and we won’t have to wipe out our LVMs ever again.

Here are my recommendations for /boot, / and swap. For /boot, from experience, 1 GB is not enough like they say; after you install enough kernels, or start adding custom kernels to support things like ZFS or ksmbd, it adds up quickly. So my recommendation for /boot is 3 GB. For swap the rule of thumb is 20% of your memory; I have 32 GB on this server, but I am going to go with 8 GB, as I rarely like going under that these days. For the / partition we will want at least 80-100 GB. At some point you are going to run out of space un-tarring kernel sources, doing 4k file tests, or storing ISOs; you need some breathing room on your main OS!

(all ext4 standard partitions setup as follows)

/boot – sda1 – 3GB
swap – sda2 – 8GB
/ – sda3 – 80GB

Save this setup and finish your install, completing your package selection, network, password and so on. We will set up sda4 manually after the install for our LVM. Double-check everything: make sure the partitions are all ext4, and there is no LVM anywhere!!!

Post Install Tasks — setting up for LVM, KVM, wireguard:

Let us start by upgrading the system, setting up for KVM, and upgrading the kernel from the stock default. Remember we cannot run a lot of things with less than a 5.15.x kernel, and if you start getting into things like ZFS we would currently need to be exactly on a 5.15.x kernel. For our purposes we will just use kernel-ml, and we can downgrade to 5.15.x for ZFS later if we choose to compile our own kernels manually.

dnf update
shutdown -r now (reboot with updated system)
cat /proc/cpuinfo | egrep "vmx|svm"  (check we have virtualization enabled)
dnf install @virt virt-top libguestfs-tools virt-install virt-manager xauth virt-viewer

systemctl enable --now libvirtd (everything working? ifconfig -a)
#get rid of virbr0 in ifconfig
virsh net-destroy default
virsh net-undefine default
service libvirtd restart
(check that the install command below is right for Rocky 8 or 9)
#we will not be able to install wireguard with less than 5.15 kernel
yum install
rpm --import
dnf makecache
dnf --enablerepo="elrepo-kernel" install -y kernel-ml
#let's setup our br0 bridge before we reboot
cd /etc/sysconfig/network-scripts
nano ifcfg-enp10s0f0 (your <device name>)
#add "BRIDGE=br0" at end of this file
nano ifcfg-br0
#add something like this:
#should match a lot of your file above
#change UID to something different above and put your own IPs in
shutdown -r now (reboot with new kernel)
ifconfig -a (check br0 is there, then all good)
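The ifcfg-br0 contents got cut off above, so here is a minimal sketch; every value below is a placeholder (use your own addresses, and generate a fresh UUID with `uuidgen`):

```
# /etc/sysconfig/network-scripts/ifcfg-br0 -- sketch only, replace all values
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
DELAY=0
IPADDR=192.0.2.10
PREFIX=24
GATEWAY=192.0.2.1
DNS1=192.0.2.1
```

The physical NIC’s ifcfg file keeps its HWADDR/UUID but loses its IP settings, which move to br0, plus the BRIDGE=br0 line we added.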

Ok now we have new kernel and br0 is setup for KVM guests, let’s move on to LVM.

Creating our LVM for our KVM guests:

#make sure we have right device
fdisk -l /dev/sda 
fdisk /dev/sda
n (new partition; accept the defaults to create partition 4 from the rest of the disk)
t (toggle the type of partition 4 - change from Linux to LVM)
8e (the Linux LVM type code)
w (write changes to the partition table)
partprobe /dev/sda (inform OS of partition changes)
pvcreate /dev/sda4 (now we have it as a LVM-can check with "pvs")
vgcreate vps /dev/sda4 (creating our volume group - "vgdisplay")
#now we are all setup, we can create as many KVM guests as we want
#for example give 70G to one guest and give remaining space to a devel guest
lvcreate -n cappy -L 70G vps (create a 70G guest - "lvdisplay")
lvcreate -n devel -l 100%FREE vps (give remaining space to this guest)
#can always delete it later
pvdisplay (check we used up all the space on vps)
#let's make sure guests can suspend and resume on host reboots:
pico /etc/sysconfig/libvirt-guests 
systemctl start libvirt-guests
systemctl enable libvirt-guests
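For reference, these are the settings I would look at in /etc/sysconfig/libvirt-guests so guests suspend on host shutdown and come back on boot; treat the values as a suggested sketch, not gospel:

```
# /etc/sysconfig/libvirt-guests -- suggested sketch
ON_BOOT=start
ON_SHUTDOWN=suspend
PARALLEL_SHUTDOWN=0
SHUTDOWN_TIMEOUT=300
```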

Congratulations on your new install.


Linux DHCP IPV6 Host Server

I will do a very basic walkthrough of how to set up a Linux server to act as a DHCPv6 server for your network. Before we begin, we need to understand a few things that are different from IPv4. First, we cannot send a gateway with DHCPv6.
Second, we can only send the IP address and DNS servers with DHCPv6. So to accomplish both, we use radvd along with DHCP: the former sends the gateway, the latter sends the IP address and DNS servers to the client. I will assume here you know how to install radvd and dhcp on Linux, so I won’t get into Linux server administration. In order for DHCPv6 to be stateful, so we can assign addresses, both the M and O flags need to be set to 1 in the radvd advertisement so clients know to go get their IP address from the DHCPv6 server. So for radvd our objective is simply to turn advertisements on and set the M and O flag bits.

My /etc/radvd.conf contains following:

interface br0 {
    AdvSendAdvert on;
    AdvManagedFlag on;
    AdvOtherConfigFlag on;
};

This is all you need. We are advertising, and setting the M and O bits here. Now radvd will send our clients our link-local gateway and tell them to go get their IPv6 information from DHCP. This is probably the most confusing part of this setup: there is NO way to send our real IPv6 gateway; clients only get the LINK-LOCAL gateway and from that must be able to get out to the internet. AGAIN I WILL REPEAT: they get your “link-local” gateway, i.e. “fe80::226:5aff:fe6b:ca8d”, not your real “2001:aaaa:bbbb::1” gateway. This is a limitation of the protocol, but it is not a big deal; we can still forward clients out a link-local gateway.

OK, now clients have our router’s link-local gateway; next we can set up our dhcpd6.conf, and perhaps assign some static IPv6 addresses to some DHCP clients too, since we like to know who is who. The only issue with IPv6 and static addresses is we can no longer use the MAC address; we need to use the DUID of the client. This is also problematic since the DUID is the same for all ethernet cards on a host. To solve that problem you can look into using the DHCPv6 IAID, but since we only have one ethernet card per client, we will only focus on the DUID. Let us assume we have a 2001:aaaa:bbbb::/48 to assign to clients.

Let us look at the bottom of my /etc/dhcp/dhcpd6.conf:


subnet6 2001:aaaa:bbbb::/48 {
  #lets range the last hextet from decimal 1000-65535, which in hex is: 3e8-ffff
  range6 2001:aaaa:bbbb::3e8 2001:aaaa:bbbb::ffff;
  option dhcp6.name-servers 2001:aaaa:bbbb::3,2001:aaaa:bbbb::4;
  option dhcp6.domain-search "";
}

#you get this by typing "ipconfig /all" on a windows machine and looking for "DHCPv6 Client DUID"
#just separate with : instead of -
host dandesktop { #unfortunately, the client-id is the same for every ethernet card in a host, so only 1 card will get an IPV6 address here
  host-identifier option dhcp6.client-id 00:01:00:01:1B:67:B6:C3:58:5B:39:45:07:90;
  fixed-address6 2001:aaaa:bbbb::5;
}
host laptop {
  host-identifier option dhcp6.client-id 00:01:00:01:1A:F5:AF:22:48:5B:39:3A:06:38;
  fixed-address6 2001:aaaa:bbbb::17;
}

So what I start with is a standard catch-all block, setting DNS servers and handing out addresses in the range 2001:aaaa:bbbb::3e8 – 2001:aaaa:bbbb::ffff to clients I did not assign statically.

Then I assign two static IPv6 addresses, to my desktop and my laptop. I ran “ipconfig /all” on the two Windows 8.1 machines and collected their DUIDs, then used search and replace on each DUID to change all “-” characters to “:” characters to match the format in the dhcpd6.conf file.
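Two little conversions come up here: the range6 bounds in hex, and reformatting the DUID. Both can be done from a shell, as a quick sketch:

```shell
# decimal -> hex for the range6 bounds (1000-65535 -> 3e8-ffff)
printf '%x\n' 1000     # prints: 3e8
printf '%x\n' 65535    # prints: ffff
# swap the dashes Windows shows in "DHCPv6 Client DUID" for colons
echo "00-01-00-01-1B-67-B6-C3-58-5B-39-45-07-90" | tr -- '-' ':'
```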

Now after we start dhcpd, make sure it is running:

router:/etc/dhcp# ps aux|grep dhcpd6
dhcpd    19531  0.0  0.0  47252  2640 ?        Ss   May04   0:00 /usr/sbin/dhcpd -6 -user dhcpd -group dhcpd -cf /etc/dhcp/dhcpd6.conf
root     22152  0.0  0.0 105304   880 pts/1    S+   00:05   0:00 grep dhcpd6

Now if all goes well, clients will get the link-local “fe80::226:5aff:fe6b:ca8d” gateway from radvd, then fetch an IP address and the DNS servers over DHCPv6 (UDP, client port 546) per our dhcpd6.conf file, and voila, we are done! If you have issues with clients, please check out my other howto on setting up a Windows DHCP client.

Until Next Time,


KVM – Adding Space To FreeBSD 10 zfs on root guest on a Centos 6.5 LVM host

I decided to write this after I could not find any documentation on the internet about how to easily add space to a FreeBSD 10 guest with a ZFS-on-root install. This should show you how to do it easily and quickly. It is important, as we may need to add more space from our LVM to our FreeBSD guest at some point, and we need to know exactly how to do that.

I assume you have a Centos host running KVM guests, as well as gdisk installed.

The first thing we want to do is extend the size of our guest:

lvextend -L +10G /dev/vps/sunsaturn (add 10G to sunsaturn)
gdisk /dev/vps/sunsaturn (we let gdisk fix partition table or we will not be able to add new space)

Command (? for help): w
Warning! Secondary header is placed too early on the disk! Do you want to
correct this problem? (Y/N): Y
Have moved second header and partition table to correct location.

Just save and exit to fix our partition tables.
Now let us run gdisk again to add the new space; currently we have the following:

gdisk -l /dev/vps/sunsaturn
Number  Start (sector)    End (sector)  Size       Code  Name
   1              34            1057   512.0 KiB   A501  gptboot0
   2            1058         8389665   4.0 GiB     A502  swap0
   3         8389666       335544286   156.0 GiB   A504  zfs0
virt-filesystems --long --parts --blkdevs -h -a /dev/vps/sunsaturn
Name       Type       MBR  Size  Parent
/dev/sda1  partition  -    512K  /dev/sda
/dev/sda2  partition  -    4.0G  /dev/sda
/dev/sda3  partition  -    156G  /dev/sda
/dev/sda   device     -    160G  -

Our last partition can easily be expanded by just deleting the last partition and adding it back with the new space.

gdisk /dev/vps/sunsaturn (now delete last partition(3) and add it back, set code and name back to A504 and zfs0)
partprobe /dev/vps/sunsaturn
(if for any reason you cannot see the space just run:)
gdisk /dev/vps/sunsaturn (then hit "w")
partprobe /dev/vps/sunsaturn
(now you should be able to do above)

Alright we have expanded our LVM, now we need to restart the guest and enable new space within the guest.

virsh shutdown sunsaturn (actually wait till it is shutdown, till "virsh list" shows it gone)
virsh start sunsaturn


zpool status (find out device and put it below)
zpool online -e zroot vtbd0p3 (zfs will now grab the additional space)

That’s it; now you know how to add space on the fly to any FreeBSD guest with ZFS on root.


Update: a few months after writing this I came across this article, which gives a good overview of the same process.

Authenticating users with freeradius on Centos

You may want to authenticate users with RADIUS at some point; perhaps your backend stores all your users there, or perhaps you do not want to log in to many boxes to change the password for the same user. I will describe here how to authenticate almost any service against RADIUS.

First set up the appropriate repository, depending on whether you’re running 64-bit or not:
#64 Bit

rpm -Uvh

#32 Bit

rpm -Uvh

Install it and configure it:

yum install pam_radius
alias pico='nano -w'
pico /etc/pam_radius.conf

Set up your RADIUS details here:
#server[:port] shared_secret timeout(s)
<your.radius.server> your_radius_secret_password 3

Add radius authentication to SSH

cd /etc/pam.d
pico sshd

#now for any services you want to authenticate, just toss the following line in as the second line of the service file
auth sufficient debug

Just open any service file under /etc/pam.d, add that line, and it authenticates against RADIUS.
IMPORTANT NOTE: Do NOT think you can just add users to RADIUS and log in; you must actually create the local user first! This is not LDAP; we are simply providing another place to store passwords for users, nothing more. You can lock out the account's system password and still log in with the user's RADIUS password.

To add a user is simple as : adduser username
Delete a user just as simple : userdel -r username

Verify everything is ok:

ssh -l radius_user localhost 
tail -100 /var/log/secure

You should hopefully see something like the following:
pam_radius_auth: Got RADIUS response code 2

Exactly what we want; response code 2 from RADIUS is Access-Accept, so we typed the right password and should have been logged in.

Try some other services, pop open dovecot for instance:

pico /etc/pam.d/dovecot (add the same line)
telnet localhost 110
user radius_user
pass radius_pass
retr 1

You can do this for all your services.

Until Next Time,

Centos OpenVPN Setup

A friend asked me to set up OpenVPN on an OpenVZ VPS he had. The first thing we needed to do, which he had already done, is contact the host to add support for tun/tap devices; for an idea of what the host had to do, here it is: Host Setup. We will use his website’s domain in the examples; obviously change it to your own domain. We will be doing this for CentOS 6, 64-bit.
Basically, a quick rundown of what to do on the CentOS host:

echo "modprobe tun" >> /etc/rc.d/rc.local; modprobe tun
echo "modprobe ipt_mark" >> /etc/rc.d/rc.local; modprobe ipt_mark
echo "modprobe ipt_MARK" >> /etc/rc.d/rc.local; modprobe ipt_MARK
CTID=101 (change to your CTID)
vzctl set $CTID --devnodes net/tun:rw --save
vzctl set $CTID --devices c:10:200:rw --save
vzctl set $CTID --capability net_admin:on --save
vzctl exec $CTID mkdir -p /dev/net
vzctl exec $CTID chmod 600 /dev/net/tun
vzctl restart $CTID

#Now in container lets run these 2 lines as well as add them to startup

myip= (change to your ip address)
iptables -t nat -A POSTROUTING -o venet0 -j SNAT --to-source $myip
iptables -t nat -A POSTROUTING -s -j SNAT --to-source $myip
echo "iptables -t nat -A POSTROUTING -o venet0 -j SNAT --to-source $myip" >> /etc/rc.d/rc.local
echo "iptables -t nat -A POSTROUTING -s -j SNAT --to-source $myip" >> /etc/rc.d/rc.local

#enable forwarding

sysctl -w net.ipv4.ip_forward=1
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf

#We need to enable right repository for this OS install
#Remember to change to right rpm for your OS

rpm -Uvh
cd /etc/yum.repos.d
yum install gcc make rpm-build autoconf.noarch zlib-devel pam-devel openvpn openssl-devel -y
cp -R /usr/share/doc/openvpn*/easy-rsa/ /etc/openvpn/
cd /etc/openvpn/easy-rsa/2.0
chmod 755 *
nano vars

#change following:

export KEY_SIZE=4096
export KEY_CITY="SanFrancisco"
export KEY_ORG="LeadingVPN"
export KEY_EMAIL=""
#export KEY_CN=changeme
#export KEY_NAME=changeme
#export KEY_OU=changeme
#export PKCS11_MODULE_PATH=changeme
#export PKCS11_PIN=1234

#run the ./build-dh command before going for lunch (a 4096-bit DH parameter takes a while to generate)

source ./vars
./pkitool --initca
./pkitool --server
openvpn --genkey --secret keys/ta.key

#build as many client certificates as needed(if you need to add more just cd into here and add another vpn to sign against the above)


#make directories and copy directory files

rm -rf /etc/openvpn/secure /etc/openvpn/client1 /etc/openvpn/client2 
mkdir /etc/openvpn/secure    (server file directory)
mkdir /etc/openvpn/client1  (client1 directory)
mkdir /etc/openvpn/client2     (client2 directory)
cd /etc/openvpn/easy-rsa/2.0/keys
cp ca.crt ca.key ta.key dh4096.pem /etc/openvpn/secure  (files needed for server; ca.key only if you want to be the CA too)
cp ca.crt ta.key /etc/openvpn/client1 (client1 files)
cp ca.crt ta.key /etc/openvpn/client2 (client2 files)
cd /etc/openvpn
nano secure.conf
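The server config itself got cut off here, so below is a minimal sketch of what secure.conf could look like. The cert/key filenames and the VPN subnet are assumptions; adjust them to whatever name you passed to pkitool --server and whatever subnet you SNAT’d above:

```
# /etc/openvpn/secure.conf -- minimal server sketch (filenames/subnet are assumptions)
port 1194
proto udp
dev tun
ca /etc/openvpn/secure/ca.crt
cert /etc/openvpn/secure/server.crt    # name from pkitool --server
key /etc/openvpn/secure/server.key
dh /etc/openvpn/secure/dh4096.pem
tls-auth /etc/openvpn/secure/ta.key 0
server          # VPN subnet handed to clients
push "redirect-gateway def1"
keepalive 10 120
persist-key
persist-tun
```

Then start it with "service openvpn start" and hand client1/client2 their ca.crt, ta.key and their own cert/key.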


Linux ISCSI with FreeBSD ZFS

How many of you would love to use ZFS on Linux for its excellent snapshot features, but cannot? How about we set up a FreeBSD host with ZFS, then on Linux mount the ZFS volume over iSCSI and format it as ext4; then we can snapshot that partition anytime we want and roll it back anytime we want. I have been using FreeBSD with ZFS for years, with over 10TB of data; it is very stable on this OS. Let’s see how I’d go about keeping ZFS for backup snapshots:

#FreeBSD - let's set up the server portion of iSCSI:
#notice how df will not show data/iscsitest, as we are creating a block device

zfs destroy data/iscsitest
zfs create -V 20G data/iscsitest
zfs list -o name,used,avail,volsize
cd /usr/ports/net/istgt; make install
cd /usr/local/etc/istgt/
cp auth.conf.sample auth.conf
cp istgt.conf.sample istgt.conf
cp istgtcontrol.conf.sample istgtcontrol.conf
pico istgt.conf


Portal DA1 …


Netmask …


LUN0 Storage /dev/zvol/data/iscsitest Auto
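The istgt.conf lines above lost most of their values, so here is a hedged sketch of the sections involved; the portal address, netmask and target name are placeholders to replace with your own:

```
# istgt.conf fragment -- sketch only, replace addresses/names with your own
[PortalGroup1]
  Portal DA1 192.0.2.20:3260
[InitiatorGroup1]
  Netmask
[LogicalUnit1]
  TargetName iscsitest
  Mapping PortalGroup1 InitiatorGroup1
  UnitType Disk
  LUN0 Storage /dev/zvol/data/iscsitest Auto
```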

/usr/local/etc/rc.d/istgt start

#Linux - now let's set up the client (initiator) portion:
yum install iscsi-initiator-utils
service iscsi start

iscsiadm -m discovery -t sendtargets -p
fdisk -l (find the new device…perhaps it is /dev/sdf)
gdisk /dev/sdf (partition it…maybe sdf1 as ext4)
mkfs.ext4 /dev/sdf1
mkdir /mnt; mount -t ext4 /dev/sdf1 /mnt

(toss in /etc/fstab if you like as: /dev/sdf1 /mnt ext4 _netdev 0 0) #notice the _netdev option is needed
chkconfig iscsi on

#OPTIONAL: How would we change the size of the partition? (you can delete old targets with: iscsiadm -m node -p <portal> --op=delete)

zfs set volsize=40G data/iscsitest
zfs list -o name,volsize
/usr/local/etc/rc.d/istgt restart

(umount any iscsi partitions)
service iscsi restart
fdisk -l (check which /dev/sdX it is and the new size, to confirm)
gdisk /dev/sdf (delete all partitions and save)
gdisk /dev/sdf (recreate it with max size and save)
resize2fs /dev/sdf1
(go ahead and mount it again, should have new size)

#FreeBSD snapshot test:
zfs snapshot data/iscsitest@test

(now time goes on and data gets written to /mnt on linux box, but something happens and we want to roll back)

/usr/local/etc/rc.d/istgt stop
zfs rollback data/iscsitest@test
/usr/local/etc/rc.d/istgt start

umount /mnt
mount /mnt
(should be recovered)

#make your life easier crontab the snapshots Daily and Monthly:
21 4 * * * (/usr/local/etc/rc.d/istgt stop; zfs destroy data/iscsitest@`/bin/date +\%A`; zfs snapshot data/iscsitest@`/bin/date +\%A`; /usr/local/etc/rc.d/istgt start) > /dev/null 2>&1
0 0 1 * * (/usr/local/etc/rc.d/istgt stop; zfs destroy data/iscsitest@`/bin/date +\%B`; zfs snapshot data/iscsitest@`/bin/date +\%B`; /usr/local/etc/rc.d/istgt start) > /dev/null 2>&1
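The rotation trick above works because the snapshot name comes from the date command: `%A` is the weekday name and `%B` the month name, so each daily snapshot is overwritten once a week and each monthly one once a year. A quick illustration (GNU date syntax shown; the crontab on FreeBSD just uses plain `date +%A` for the current day):

```shell
# %A = weekday name, %B = month name -- so snapshot names recycle
LC_ALL=C date -d 2024-01-01 +%A   # prints: Monday  (2024-01-01 was a Monday)
LC_ALL=C date -d 2024-01-01 +%B   # prints: January
```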

Until next time,


DKIM and postfix setup on centos 6.3

This is meant as a quick 5-minute setup and a 5-minute test to get DKIM going. I do recommend actually reading the man pages etc. if you have extra time, but this guide should get you going.

Install EPEL repository:
64 bit:
# rpm -Uvh
32 bit:
# rpm -Uvh

Install DKIM:
# yum install opendkim
# export domain=<your domain>
# mkdir /etc/opendkim/keys/$domain
# cd /etc/opendkim/keys/$domain
# opendkim-genkey -d $domain -s default
# chown -R opendkim:opendkim /etc/opendkim/keys/$domain
# echo "default._domainkey.$domain $domain:default:/etc/opendkim/keys/$domain/default.private" >> /etc/opendkim/KeyTable
# echo "*@$domain default._domainkey.$domain" >> /etc/opendkim/SigningTable

If you have internal hosts relaying through that you want to sign mail for to:
# echo "" >> /etc/opendkim/TrustedHosts

Edit DNS:
# cat /etc/opendkim/keys/$domain/default.txt >> /var/named/master/YOUR_DOMAIN_DNS_ZONE_FILE
(what I normally do at this point is increment serial number in DNS zone file, login to slaves, delete their zone files and restart named there to get it going quickly)
# nano -w /etc/opendkim.conf
set "Mode sv"
uncomment everything except KeyFile
(Find this line: "SigningTable /etc/opendkim/SigningTable" and change it to:
"SigningTable refile:/etc/opendkim/SigningTable" to enable regex wildcards on the SigningTable)
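After those edits the relevant opendkim.conf lines end up looking roughly like this; treat it as a sketch (paths match the KeyTable/SigningTable/TrustedHosts files we touched above):

```
# /etc/opendkim.conf -- relevant lines after editing (sketch)
Mode                  sv
Socket                inet:8891@localhost
KeyTable              /etc/opendkim/KeyTable
SigningTable          refile:/etc/opendkim/SigningTable
ExternalIgnoreList    refile:/etc/opendkim/TrustedHosts
InternalHosts         refile:/etc/opendkim/TrustedHosts
```

The Socket line is what the Postfix milter settings below connect to.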

Configure Postfix
# nano -w /etc/postfix/ (add the following)
# opendkim setup
smtpd_milters = inet:localhost:8891
non_smtpd_milters = inet:localhost:8891
milter_default_action = accept

Restart Services
# service opendkim start
# service postfix restart
# service named reload
# chkconfig opendkim on

Test our setup
# echo "DKIM Test" | mail -s "DKIM Testing"
# tail -100 /var/log/maillog

Now make sure the maillog shows the message was signed, then check the headers of the email you sent in Gmail and confirm everything passes.
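In Gmail's "show original" view, a passing signature shows up in the Authentication-Results header, roughly like this (domain hypothetical; the exact wording varies):

```
Authentication-Results: mx.google.com;
       dkim=pass header.i=@yourdomain.com
```

If you see dkim=fail or no dkim= entry at all, re-check the DNS record and that opendkim is actually listening on port 8891.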


How to partition an openvz host to take snapshots

/dev/sda1 - /boot - 250 MB (because kernel cannot be on LVM)
/dev/sda2 - LVM - rest of space
volume group=vps
root - / - 102400 MB (100 GB) (logical volume name "root")
swap - N/A - 4096 MB (4 GB) (logical volume name "swap")
/vz - /vz - remaining space minus snapshot reserve (logical volume name "vz")
(take the remaining space and subtract the amount you want to reserve for snapshots)

Now you have three LVs in /dev/vps/*, with the remaining space left unallocated for snapshots. Keep at least 10GB free for snapshots, depending on how much your data changes; if a lot changes, you would be better off with 100GB free. In the examples below I assume I left 10GB free for snapshots.

A note on how snapshots work:
# snapshots are copy-on-write: they only require enough space for how much data you think will be changed
# the longer a snapshot sits there, the more it may grow as changes accumulate
# e.g.: you create a new 2G file after the snapshot is taken, and the snapshot grows 2G so it can still present the volume as it was without that file
Let's begin:
Let's say I want to take a snapshot of /vz (where all my OpenVZ hosts reside).
I know I have 10GB of free space left. To check: run "pvdisplay" and note the "PV Size", then run "lvdisplay", which shows all LVs and their sizes; add up the "LV Size" of each and subtract the total from the "PV Size" to get the space remaining for snapshots. WARNING: never attempt to shrink an LV to get extra space for snapshots; instead back up the LV, delete it, recreate it smaller, and copy the data back.
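The subtraction just described can be sketched with hypothetical numbers (these sizes are made up for illustration; your pvdisplay/lvdisplay figures will differ):

```shell
# Hypothetical sizes read off pvdisplay ("PV Size") and lvdisplay
# ("LV Size" of each LV), normalized to MB for the arithmetic.
pv_size_mb=235520                                      # 230 GB physical volume
lv_root_mb=102400; lv_swap_mb=4096; lv_vz_mb=118784    # root, swap, vz
free_mb=$((pv_size_mb - lv_root_mb - lv_swap_mb - lv_vz_mb))
echo "${free_mb} MB free for snapshots"                # 10240 MB = 10 GB
```

On most LVM versions, `vgs -o vg_free vps` will report this remaining space directly, which saves the manual arithmetic.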
# lvcreate --snapshot --name snap --size 10G /dev/vps/vz
(in this example I just created a snapshot volume /dev/vps/snap that is good for 10GB worth of changes to the /dev/vps/vz logical volume)
(Now let’s mount it for example if we wanted to back it up)
# mkdir -p /mnt; mount /dev/vps/snap /mnt; ls -al /mnt
(now we have /mnt available for a backup with e.g. rsync)
(to unmount and remove snapshot to get our 10GB of free space back for another snapshot:)
# umount /mnt; lvremove /dev/vps/snap
(simple as that, now you are an expert at snapshots)

Now let's get more advanced. Let's say I want to use that 10GB of space to take daily and monthly snapshots of /dev/vps/vz, as well as a monthly snapshot of the host itself (/dev/vps/root), so that we have the ability to get back anything we accidentally delete anywhere.

Let's set up a cron job:
# crontab -e
(let's add the following:)
#10 Gigabytes available for snapshots: 7 days in a week = 7GB + 2GB monthly snapshot + 1GB host snapshot = 10GB
#daily snapshots
0 0 * * * (lvremove -f /dev/vps/snap-`/bin/date +\%A`; lvcreate --snapshot --name snap-`/bin/date +\%A` --size 1G /dev/vps/vz) > /dev/null 2>&1
#monthly snapshot
0 0 1 * * (lvremove -f /dev/vps/snap-Monthly; lvcreate --snapshot --name snap-Monthly --size 2G /dev/vps/vz) > /dev/null 2>&1
#host snapshot once a month
0 0 1 * * (lvremove -f /dev/vps/snap-host-Monthly; lvcreate --snapshot --name snap-host-Monthly --size 1G /dev/vps/root) > /dev/null 2>&1

(and there you have it: every day we get a day-of-week snapshot such as /dev/vps/snap-Thursday, plus one monthly snapshot and one monthly snapshot of the host itself, any of which we can mount at /mnt anytime to do recovery or backups)

My recommendation, if you have the space, is to keep a backup LV the same size as /vz. My personal preference for backups is the ZFS filesystem on FreeBSD, which you can easily install as a VPS under KVM virtualization: share the disks to the VM, then have FreeBSD build a ZFS raid on them. But that's for another article.

As far as LVM goes, what I personally do is rsync the /vz directory each night to a ZFS filesystem, and take snapshots there, ZFS is much nicer to work with when dealing with snapshots, but where LVM comes in handy is to snapshot the LVM(because we use ext4 on linux) for something like a mysql database, mount it, then rsync it off to a ZFS filesystem.