Category Archives: linux

Rocky Linux Install 2022 with KVM support

INTRO:

Originally I had tried an upgrade on a Dell 2950 III server from Rocky Linux 8 to 9, only to end up with the error "Fatal glibc error: CPU does not support x86-64-v2". Basically I fried my entire system, as this CPU cannot support the newer instruction set these glibc builds require.

Today I am going to walk you through a complete 2022 Rocky Linux install. Rocky Linux now replaces CentOS and comes from the original CentOS creator. I put in three different USB sticks containing Oracle Linux, Rocky Linux and CentOS Stream to see if I could install over my existing partitions without wiping out my FreeBSD KVM guest on the LVM. Sad to say, none of the installers could see the LVM partitions, so I am forced to reinstall Rocky Linux as well as all my guests from scratch.

I am going to walk you through a safer way to install Rocky Linux, to future-proof yourself in case of reinstalls or problems.

Rocky Linux 8 vs 9:

Run the following shell script:

pico glibc_check.sh (add following)

#!/usr/bin/awk -f
BEGIN {
    while (!/flags/) if (getline < "/proc/cpuinfo" != 1) exit 1
    if (/lm/&&/cmov/&&/cx8/&&/fpu/&&/fxsr/&&/mmx/&&/syscall/&&/sse2/) level = 1
    if (level == 1 && /cx16/&&/lahf/&&/popcnt/&&/sse4_1/&&/sse4_2/&&/ssse3/) level = 2
    if (level == 2 && /avx/&&/avx2/&&/bmi1/&&/bmi2/&&/f16c/&&/fma/&&/abm/&&/movbe/&&/xsave/) level = 3
    if (level == 3 && /avx512f/&&/avx512bw/&&/avx512cd/&&/avx512dq/&&/avx512vl/) level = 4
    if (level > 0) { print "CPU supports x86-64-v" level; exit level + 1 }
    exit 1
}

Save it then:

chmod +x glibc_check.sh; ./glibc_check.sh

host:~ # ./glibc_check.sh
CPU supports x86-64-v1
host:~ #

Now if you get output like the above, showing only version 1, you can only install Rocky Linux 8; if you get version 2 or higher you can install Rocky Linux 9. I believe the reason RHEL made this change was so glibc can be built against newer CPU instructions for faster calls.
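As a cross-check, on a box already running glibc 2.33 or newer the dynamic loader itself will report which micro-architecture levels it supports. This is just an alternative to the awk script above (it will not work on the older glibc that ships with Rocky 8):

/lib64/ld-linux-x86-64.so.2 --help | grep x86-64-v   (levels marked "supported, searched" are usable)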

INSTALL Rocky Linux:

Download the latest Rocky Linux ISO and write it to a USB stick with Rufus. I am not going to cover the simple graphical installer itself, but I do want to cover partitioning your drive. Go to "Installation Destination" in the installer. To prevent installers from ever wiping out our LVMs again, we are going to use standard partitions and only create our LVM manually afterwards. This way, if we ever run into a situation again where an installer cannot see into our LVM, it will definitely see our standard partitions and we won't have to wipe out our LVM volumes ever again.

Here are my recommendations for /boot, / and swap. For /boot, 1 GB is not enough, whatever the docs say: once you install enough kernels, or start adding custom kernels to support things like ZFS or ksmbd, it adds up quickly. So my recommendation for /boot is 3 GB. For swap, the usual guideline is around 20% of your memory; I have 32 GB on this server, but I am going with 8 GB, as I rarely like going under that these days. For the / partition we will want at least 80-100 GB. At some point you are going to run out of space un-tarring kernel sources, running 4K file tests, or storing ISOs; you need some breathing room on your main OS!

(all ext4 standard partitions, set up as follows)

/boot – sda1 – 3GB
swap – sda2 – 8GB
/ – sda3 – 80GB

Save this layout and finish your install: packages, network, password and so on. We will set up sda4 manually after the install for our LVM. Double-check everything, make sure these are all ext4 standard partitions, and that there is no LVM anywhere!!!
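Once the install finishes and you boot for the first time, it is worth a quick sanity check that the layout really is plain partitions (assuming the disk is /dev/sda as above):

lsblk -f /dev/sda   (expect ext4 on sda1 and sda3, swap on sda2, and no lvm entries anywhere)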

Post-Install Tasks — setting up LVM, KVM, and WireGuard:

Let us start by updating the system, setting up KVM, and upgrading the kernel from the stock default. Remember, a lot of things will not run on less than a 5.15.x kernel, and if you get into things like ZFS you currently need to be on exactly a 5.15.x kernel. For our purposes we will just use kernel-ml, and we can downgrade to 5.15.x for ZFS later if we choose to compile our own kernels manually.

dnf update
shutdown -r now (reboot with updated system)
cat /proc/cpuinfo | egrep "vmx|svm"  (check we have virtualization enabled)
dnf install @virt virt-top libguestfs-tools virt-install virt-manager xauth virt-viewer

systemctl enable --now libvirtd (everything working? ifconfig -a)
#get rid of virbr0 in ifconfig
virsh net-destroy default
virsh net-undefine default
service libvirtd restart
(check https://www.elrepo.org/ for below install command for rocky 8 or 9)
#we will not be able to install wireguard with a kernel older than 5.15
yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
dnf makecache
dnf --enablerepo="elrepo-kernel" install -y kernel-ml
#let's setup our br0 bridge before we reboot
cd /etc/sysconfig/network-scripts
nano ifcfg-enp10s0f0 (your <device name>)
#add "BRIDGE=br0" at end of this file
nano ifcfg-br0
#add something like this:
#should match a lot of your file above
STP=no
TYPE=Bridge
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
IPADDR=192.168.0.2
GATEWAY=192.168.0.1
PREFIX=24
DNS1=192.168.0.3
DNS2=8.8.8.8
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
NAME=br0
UUID=36e9e2ce-47h1-4e02-ab76-34772d136a21
DEVICE=br0
ONBOOT=yes
AUTOCONNECT_SLAVES=yes
IPV6INIT=no
#change the UUID above to something unique (uuidgen will generate one) and put your own IPs in
shutdown -r now (reboot with new kernel)
ifconfig -a (check br0 is there, then all good)

OK, now that we have the new kernel and br0 set up for KVM guests, let's move on to the LVM.

Creating our LVM for our KVM guests:

#make sure we have right device
fdisk -l /dev/sda 
fdisk /dev/sda
n (new partition)
<enter> through the defaults (adds partition 4 using the rest of the disk space)
p (print the table and confirm partition 4 is there)
t (toggle partition 4 - change from linux to LVM)
8e (change to LVM)
w (write changes to partition tables)
partprobe /dev/sda (inform OS of partition changes)
pvcreate /dev/sda4 (now we have it as a LVM-can check with "pvs")
vgcreate vps /dev/sda4 (creating our volume group - "vgdisplay")
#now we are all setup, we can create as many KVM guests as we want
#for example give 70G to one guest and give remaining space to a devel guest
lvcreate -n cappy -L 70G vps (create a 70G guest - "lvdisplay")
lvcreate -n devel -l 100%FREE vps (give remaining space to this guest)
#can always delete it later
pvdisplay (check we used up all the space on vps)
#let's make sure guests can suspend and resume on host reboots:
pico /etc/sysconfig/libvirt-guests 
"ON_SHUTDOWN=suspend"
systemctl start libvirt-guests
systemctl enable libvirt-guests
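With br0 and the volume group in place, here is a minimal sketch of how a guest could be created on the cappy volume we made above; the ISO path, memory size, and --os-variant value are placeholders you will want to adjust for your own guest OS:

virt-install --name cappy --memory 4096 --vcpus 2 \
  --disk path=/dev/vps/cappy,bus=virtio \
  --network bridge=br0,model=virtio \
  --cdrom /path/to/guest-install.iso \
  --os-variant generic --graphics vnc

Once it is installed, "virsh list" should show it running.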

Congratulations on your new install.

Dan.

Linux DHCP IPV6 Host Server

I will do a very basic walkthrough of how to set up a Linux server to act as a DHCPv6 server for your network. Before we begin, we need to understand a few things that are different from IPv4. The first thing is we cannot send a gateway with DHCPv6.
Second, we can only send IP addresses and DNS servers with DHCPv6. So to accomplish both, we use radvd along with DHCP: the former sends the gateway, the latter sends the IP address and DNS servers to the client. I will assume here you know how to install radvd and dhcp on Linux, so I won't get into Linux server administration. In order to be DHCPv6 stateful, so we can assign addresses, both the M and O flags need to be set to 1 in the radvd advertisement so clients know to go get their IP address from the DHCPv6 server. So for radvd our objective is simply to turn advertisements on and set the M and O flag bits.

My /etc/radvd.conf contains the following:

interface br0
{
    AdvSendAdvert on;
    AdvManagedFlag on;
    AdvOtherConfigFlag on;
};

This is all you need. We are advertising, and setting the M and O bits here. Now radvd will send our clients our link-local gateway and tell them to go get their IPv6 information from DHCP. This is probably the most confusing part about this setup: there is NO way to send our real IPv6 gateway; clients only get the LINK-LOCAL gateway and from that must be able to get out to the internet. AGAIN I WILL REPEAT, they get your "Link-Local" gateway, ie: "fe80::226:5aff:fe6b:ca8d", not your real "2001:aaaa:bbbb::1" gateway. This is a limitation of the protocol, but it is not a big deal; we can still forward clients out a link-local gateway.
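You can see this on a Linux client after it processes the router advertisement; the default route will point at the router's link-local address (output trimmed, and your interface name will vary):

ip -6 route show default
default via fe80::226:5aff:fe6b:ca8d dev eth0 proto ra metric 1024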

Ok, now that clients have our router's link-local gateway, we can set up our dhcpd6.conf, and perhaps assign some static IPv6 addresses to some DHCP clients too, since we like to know who is who. The only issue with IPv6 and static addresses is we can no longer use the MAC address; we need to use the DUID of the client. This is also problematic since the DUID is the same for all ethernet cards on a host. To solve that problem you can look into using the DHCPv6 IAID, but since we only have 1 ethernet card per client, we will focus on the DUID alone. Let us assume we have 2001:aaaa:bbbb::/48 to assign to clients.

Let us look at the bottom of my /etc/dhcp/dhcpd6.conf:

authoritative;

subnet6 2001:aaaa:bbbb::/48 {
  #lets range the last 16 bits from decimal 1000-65535, which in hex is: 3e8-ffff
  range6 2001:aaaa:bbbb::3e8 2001:aaaa:bbbb::ffff;
  option dhcp6.name-servers 2001:aaaa:bbbb::3,2001:aaaa:bbbb::4;
  option dhcp6.domain-search "sunsaturn.com";
} 

#you get this by typing "ipconfig /all" on the windows machine and looking for "DHCPv6 Client DUID"
#just separate with : instead of -        
host dandesktop { #unfortunately, same client-id for each ethernet card in same host, so only 1 will get an IPV6 address here
  host-identifier option dhcp6.client-id 00:01:00:01:1B:67:B6:C3:58:5B:39:45:07:90;
  fixed-address6 2001:aaaa:bbbb::5;
} 
host laptop { #unfortunately, same client-id for each ethernet card in same host, so only 1 will get an IPV6 address here
  host-identifier option dhcp6.client-id 00:01:00:01:1A:F5:AF:22:48:5B:39:3A:06:38;
  fixed-address6 2001:aaaa:bbbb::17; 
} 

So what I start with is a standard catch-all block, setting DNS servers and IPv6 addresses for clients I did not assign statically, giving them an address in the range 2001:aaaa:bbbb::3e8 – 2001:aaaa:bbbb::ffff.

Then I assign 2 static IPv6 addresses to my desktop and my laptop. I ran "ipconfig /all" on the two Windows 8.1 machines and collected their DUIDs, then did a search and replace on each DUID to change all "-" characters to ":" characters to match the format in the dhcpd6.conf file.
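If you would rather do that reformatting in a shell, tr handles it in one line; the DUID here is just the desktop example from the config above:

echo "00-01-00-01-1B-67-B6-C3-58-5B-39-45-07-90" | tr '-' ':'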

Now after we start dhcpd, make sure it is running:

router:/etc/dhcp# ps aux|grep dhcpd6
dhcpd    19531  0.0  0.0  47252  2640 ?        Ss   May04   0:00 /usr/sbin/dhcpd -6 -user dhcpd -group dhcpd -cf /etc/dhcp/dhcpd6.conf
root     22152  0.0  0.0 105304   880 pts/1    S+   00:05   0:00 grep dhcpd6
router:/etc/dhcp# 

Now if all goes well, from radvd the clients will get the link-local "fe80::226:5aff:fe6b:ca8d" gateway, then fetch an IP address and the DNS servers from our dhcpd6.conf settings over DHCPv6 (UDP port 546 on the client side), and voila, we are done! If you have issues with clients, please check out my other how-to on setting up a Windows DHCP client.

Until Next Time,

Dan.

KVM – Adding Space To FreeBSD 10 zfs on root guest on a Centos 6.5 LVM host

Intro:
I decided to write this after I could not find any documentation on the internet about how to easily add space to a FreeBSD 10 guest with a ZFS-on-root install. This should show you how to do it easily and quickly. It is important, as we may need to add more space from our LVM to our FreeBSD guest at some point, and we need to know exactly how to do that.

Pre-requisites:
I assume you have a Centos host running KVM guests, as well as gdisk installed.

The first thing we want to do is extend the size of our guest:

lvextend -L +10G /dev/vps/sunsaturn (add 10G to sunsaturn)
gdisk /dev/vps/sunsaturn (we let gdisk fix the partition table or we will not be able to add the new space)

Command (? for help): w
Warning! Secondary header is placed too early on the disk! Do you want to
correct this problem? (Y/N): Y
Have moved second header and partition table to correct location.

Just save and exit to fix our partition tables.
Now let us run gdisk again to add the new space. Currently we have the following:

gdisk -l /dev/vps/sunsaturn
Number  Start (sector)    End (sector)  Size       Code  Name
   1              34            1057   512.0 KiB   A501  gptboot0
   2            1058         8389665   4.0 GiB     A502  swap0
   3         8389666       335544286   156.0 GiB   A504  zfs0
virt-filesystems --long --parts --blkdevs -h -a /dev/vps/sunsaturn
Name       Type       MBR  Size  Parent
/dev/sda1  partition  -    512K  /dev/sda
/dev/sda2  partition  -    4.0G  /dev/sda
/dev/sda3  partition  -    156G  /dev/sda
/dev/sda   device     -    160G  -

Our last partition can easily be expanded by deleting the last partition and adding it back with the new space.

gdisk /dev/vps/sunsaturn (now delete the last partition (3) and add it back; set code and name back to A504 and zfs0)
partprobe /dev/vps/sunsaturn
(if for any reason you cannot see the space just run:)
gdisk /dev/vps/sunsaturn (then hit "w")
partprobe /dev/vps/sunsaturn
(now you should be able to do above)

Alright, we have expanded our LVM; now we need to restart the guest and enable the new space within the guest.

virsh shutdown sunsaturn (wait until it is actually shut down, i.e. until "virsh list" shows it gone)
virsh start sunsaturn

IN THE GUEST:

zpool status (find out device and put it below)
zpool online -e zroot vtbd0p3 (zfs will now grab the additional space)
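To confirm the pool actually grabbed the space, check the pool size afterwards (a quick check, assuming the pool is named zroot as above):

zpool list zroot (SIZE and FREE should reflect the extra 10G)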

That’s it, now you know how to add space on the fly to any FreeBSD guest with ZFS on root.

Dan.

Update: a few months after writing this I came across this article, which covers the same ground well.

Authenticating users with freeradius on Centos

You may want to authenticate users with RADIUS at some point; perhaps your backend stores all your users there, or perhaps you do not want to log in to many boxes to change the password for the same user. I will describe here how to authenticate users with almost any service.

First, set up the EPEL repository, depending on whether you are running 64-bit or not:
#64 Bit

rpm -Uvh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-7.noarch.rpm

#32 Bit

rpm -Uvh http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-7.noarch.rpm

Install it and configure it:

yum install pam_radius
alias pico='nano -w'
pico /etc/pam_radius.conf

Set up your RADIUS details here:
#server[:port] shared_secret timeout (s)
127.0.0.1 your_radius_secret_password 3

Add RADIUS authentication to SSH:

cd /etc/pam.d
pico sshd

#now for any service you want to authenticate against radius, just toss the following line in as the second line
auth sufficient pam_radius_auth.so debug
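For example, the top of /etc/pam.d/sshd would end up looking something like this on CentOS 6 (only the pam_radius_auth line is new; the others are the stock entries):

#%PAM-1.0
auth       sufficient   pam_radius_auth.so debug
auth       required     pam_sepermit.so
auth       include      password-auth
...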

Edit any other service file in /etc/pam.d the same way and it will authenticate off RADIUS.
IMPORTANT NOTE: Do NOT think you can just add users to RADIUS and log in, you must actually create the user first! This is not LDAP; we are simply providing another place to store passwords for users, nothing more. You can lock out the account on the system and still log in with the user's RADIUS password.

To add a user is as simple as: adduser username
Deleting a user is just as simple: userdel -r username

Verify everything is ok:

ssh -l radius_user localhost 
exit
tail -100 /var/log/secure

Hopefully you see something like the following:
pam_radius_auth: Got RADIUS response code 2

Exactly what we want; response code 2 from RADIUS is Access-Accept, so we typed in the right password and should have been logged in.

Try some other services; pop open dovecot, for instance:

pico /etc/pam.d/dovecot (add the same line)
telnet localhost 110
user radius_user
pass radius_pass
retr 1

You can do this for all your services,

Until Next Time,

SunSaturn.com

Centos OpenVPN Setup

A friend asked me to set up OpenVPN on an OpenVZ VPS he had. The first thing we needed to do, which he had already done, was contact the host to add support for tun/tap devices; the host setup below gives an idea of what the host had to do. His website is leadingvpn.com, so let's use that domain for this; obviously change it to your own. We will be doing this for CentOS 6 64-bit.
Basically, a quick rundown of what to do on the CentOS host:

echo "modprobe tun" >> /etc/rc.d/rc.local; modprobe tun
echo "modprobe ipt_mark" >> /etc/rc.d/rc.local; modprobe ipt_mark
echo "modprobe ipt_MARK" >> /etc/rc.d/rc.local; modprobe ipt_MARK
CTID=101 (change to your CTID)
vzctl set $CTID --devnodes net/tun:rw --save
vzctl set $CTID --devices c:10:200:rw --save
vzctl set $CTID --capability net_admin:on --save
vzctl exec $CTID mkdir -p /dev/net
vzctl exec $CTID chmod 600 /dev/net/tun
vzctl restart $CTID

#Now in the container let's run these two iptables rules, as well as add them to startup

myip=1.1.1.1 (change to your ip address)
iptables -t nat -A POSTROUTING -o venet0 -j SNAT --to-source $myip
iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -j SNAT --to-source $myip
echo "iptables -t nat -A POSTROUTING -o venet0 -j SNAT --to-source $myip" >> /etc/rc.d/rc.local
echo "iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -j SNAT --to-source $myip" >> /etc/rc.d/rc.local

#enable forwarding

sysctl -w net.ipv4.ip_forward=1
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf

#We need to enable the right repository for this OS install
#Remember to change to right rpm for your OS

rpm -Uvh http://pkgs.repoforge.org/rpmforge-release/rpmforge-release-0.5.2-2.el6.rf.x86_64.rpm
cd /etc/yum.repos.d
wget http://repos.openvpn.net/repos/yum/conf/repos.openvpn.net-CentOS6-snapshots.repo
yum install gcc make rpm-build autoconf.noarch zlib-devel pam-devel openvpn openssl-devel -y
cp -R /usr/share/doc/openvpn*/easy-rsa/ /etc/openvpn/
cd /etc/openvpn/easy-rsa/2.0
chmod 755 *
nano vars

#change following:

export KEY_SIZE=4096
export KEY_COUNTRY="US"
export KEY_PROVINCE="CA"
export KEY_CITY="SanFrancisco"
export KEY_ORG="LeadingVPN"
export KEY_EMAIL="admin@leadingvpn.com"
#export KEY_CN=changeme
#export KEY_NAME=changeme
#export KEY_OU=changeme
#export PKCS11_MODULE_PATH=changeme
#export PKCS11_PIN=1234

#run the ./build-dh step before going for lunch, 4096-bit DH params take a while

source ./vars
./clean-all
./build-dh
./pkitool --initca
./pkitool --server secure.leadingvpn.com
openvpn --genkey --secret keys/ta.key
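Before handing anything out, it does not hurt to verify the freshly signed server certificate against the CA; a quick sanity check with openssl, run from the same 2.0 directory:

openssl verify -CAfile keys/ca.crt keys/secure.leadingvpn.com.crt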

#build as many client certificates as needed (if you need more later, just cd back in here and sign another one against the CA above)

./pkitool client1.leadingvpn.com
./pkitool client2.leadingvpn.com

#make directories and copy directory files

rm -rf /etc/openvpn/secure /etc/openvpn/client1 /etc/openvpn/client2 
mkdir /etc/openvpn/secure    (server file directory)
mkdir /etc/openvpn/client1  (client1 directory)
mkdir /etc/openvpn/client2     (client2 directory)
cd /etc/openvpn/easy-rsa/2.0/keys
cp ca.crt ca.key ta.key dh4096.pem secure.leadingvpn.com.crt secure.leadingvpn.com.key /etc/openvpn/secure  (files needed for the server; ca.key only if you want it to act as the CA too)
cp ca.crt ta.key client1.leadingvpn.com.crt client1.leadingvpn.com.key /etc/openvpn/client1 (client1 client1.leadingvpn.com)
cp ca.crt ta.key client2.leadingvpn.com.crt client2.leadingvpn.com.key /etc/openvpn/client2 (client2 client2.leadingvpn.com)
cd /etc/openvpn
nano secure.conf

TO BE CONTINUED….

Linux ISCSI with FreeBSD ZFS

How many of you would love to use ZFS on Linux for its excellent snapshot features, but cannot? How about we set up a FreeBSD host with ZFS, then on Linux mount the ZFS volume as iSCSI, format it as ext4, and then snapshot that partition anytime we want and roll it back anytime we want? I have been using FreeBSD with ZFS for years, with over 10TB of data; it is very stable on this OS. Let's see how I'd go about keeping ZFS for backup snapshots:

#ISCSI
#Freebsd - let's set up the server portion of ISCSI:
#notice how df will not show data/iscsitest, as we are creating a block device

zfs destroy data/iscsitest
zfs create -V 20G data/iscsitest
zfs list -o name,used,avail,volsize
cd /usr/ports/net/istgt; make install
cd /usr/local/etc/istgt/
cp auth.conf.sample auth.conf
cp istgt.conf.sample istgt.conf
cp istgtcontrol.conf.sample istgtcontrol.conf
pico istgt.conf

[PortalGroup1]
  Portal DA1 192.168.0.3:3269 …
[InitiatorGroup1]
  Netmask 192.168.0.0/24 …
[LogicalUnit1]
  LUN0 Storage /dev/zvol/data/iscsitest Auto

/usr/local/etc/rc.d/istgt start

#Linux:
yum install iscsi-initiator-utils
service iscsi start

iscsiadm -m discovery -t sendtargets -p 192.168.0.3
fdisk -l (find the new device…perhaps it is /dev/sdf)
gdisk /dev/sdf (partition it…maybe sdf1 as ext4)
mkfs.ext4 /dev/sdf1
mkdir /mnt; mount -t ext4 /dev/sdf1 /mnt

(toss in /etc/fstab if you like as: /dev/sdf1 /mnt ext4 _netdev 0 0) #notice the _netdev option is needed
chkconfig iscsi on
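If the device does not show up after discovery, log in to the target explicitly and confirm the session is up; both are standard iscsiadm calls:

iscsiadm -m node --login
iscsiadm -m session (should list the 192.168.0.3 target)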

#OPTIONAL: How would we change the size of the partition? (you can delete old targets with: iscsiadm -m node -p 192.168.0.3 --op=delete)
Freebsd:

zfs set volsize=40G data/iscsitest
zfs list -o name,volsize
/usr/local/etc/rc.d/istgt restart

Linux:
(umount any iscsi partitions)
service iscsi restart
fdisk -l (check what /dev/sdaX it is and new size to confirm)
gdisk /dev/sdf (delete all partitions and save)
gdisk /dev/sdf (recreate it with max size and save)
resize2fs /dev/sdf1
(go ahead and mount it again, should have new size)

#FreeBSD snapshot test:
zfs snapshot data/iscsitest@test

(now time goes on and data gets written to /mnt on linux box, but something happens and we want to roll back)

#rollback:
Freebsd:
/usr/local/etc/rc.d/istgt stop
zfs rollback data/iscsitest@test
/usr/local/etc/rc.d/istgt start

Linux:
umount /mnt
mount /mnt
(should be recovered)

#make your life easier: crontab the snapshots daily and monthly
21 4 * * * (/usr/local/etc/rc.d/istgt stop; zfs destroy data/iscsitest@`/bin/date +\%A`; zfs snapshot data/iscsitest@`/bin/date +\%A`; /usr/local/etc/rc.d/istgt start) > /dev/null 2>&1
0 0 1 * * (/usr/local/etc/rc.d/istgt stop; zfs destroy data/iscsitest@`/bin/date +\%B`; zfs snapshot data/iscsitest@`/bin/date +\%B`; /usr/local/etc/rc.d/istgt start) > /dev/null 2>&1

Until next time,

Dan.

DKIM and postfix setup on centos 6.3

This is meant as a quick 5-minute setup, plus a quick 5-minute test, to get DKIM going. I do recommend actually reading the man pages etc. if you have extra time, but this guide should get you going.

Install EPEL repository:
64 bit:
# rpm -Uvh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-7.noarch.rpm
32 bit:
# rpm -Uvh http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-7.noarch.rpm

Install DKIM:
# yum install opendkim
# export domain=YOURDOMAIN.com
# mkdir /etc/opendkim/keys/$domain
# cd /etc/opendkim/keys/$domain
# opendkim-genkey -d $domain -s default
# chown -R opendkim:opendkim /etc/opendkim/keys/$domain
# echo "default._domainkey.$domain $domain:default:/etc/opendkim/keys/$domain/default.private" >> /etc/opendkim/KeyTable
# echo "*@$domain default._domainkey.$domain" >> /etc/opendkim/SigningTable

If you have internal hosts relaying through this server that you want to sign mail for too:
# echo "192.168.0.0/24" >> /etc/opendkim/TrustedHosts

Edit DNS:
# cat /etc/opendkim/keys/$domain/default.txt >> /var/named/master/YOUR_DOMAIN_DNS_ZONE_FILE
(what I normally do at this point is increment the serial number in the DNS zone file, log in to the slaves, delete their zone files and restart named there to get it going quickly)
# nano -w /etc/opendkim.conf
Mode sv
Domain YOURDOMAIN.com
(uncomment everything except KeyFile)
(Find this line: SigningTable /etc/opendkim/SigningTable and change it to:
SigningTable refile:/etc/opendkim/SigningTable to enable regex wildcards on the SigningTable)

Configure Postfix
# nano -w /etc/postfix/main.cf (add following)
# opendkim setup
smtpd_milters = inet:localhost:8891
non_smtpd_milters = inet:localhost:8891
milter_default_action = accept

Restart Services
# service opendkim start
# service postfix restart
# service named reload
# chkconfig opendkim on

Test our setup
# echo "DKIM Test" | mail -s "DKIM Testing" SOMEUSER@gmail.com
# tail -100 /var/log/maillog

Now make sure the maillog shows the message was signed, then check the headers of the email you sent to Gmail and make sure everything passes fine.
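You can also check the DNS half directly from the box with opendkim-testkey, which ships with the opendkim package; with $domain still exported from above:

# opendkim-testkey -d $domain -s default -vvv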


Dan

How to partition an openvz host to take snapshots

#Partitioning:
/dev/sda1 - /boot - 250 MB (because kernel cannot be on LVM)
/dev/sda2 - LVM - rest of space
volume group=vps
root - / - 102400 MB (100 GB) (logical volume name "root")
swap - N/A - 4096 MB (4 GB) (logical volume name "swap")
/vz - /vz - (logical volume name "vz")
(give this the remaining space, minus the amount of space you want to keep free for snapshots)

Now you have 3 LVs in /dev/vps/*, and you have remaining space for snapshots. You should keep at least 10GB free for snapshots, depending on how much your data changes. If a lot changes, you would be better off with 100GB of free space. In my snapshot examples I am going to assume I left 10GB of space free for snapshots.

History on snapshots:
#snapshot–snapshots only require enough space for however much data you think will be changed (copy on write)
# –the longer a snapshot is left sitting there, the more it may grow with changes to back out
# –ie: you create a new 2G file after the snapshot is taken, the snapshot grows 2G so it can back that file out afterwards
Let's begin:
Let's say I want to take a snapshot of /vz (where all my openvz hosts reside).
I know I have 10GB of free space left. A way to check: "pvdisplay" shows the "PV Size"; "lvdisplay" shows all the LVs and their sizes. Add up the "LV Size" of each, then subtract the total from the "PV Size" to get the space remaining for snapshots. (WARNING: never attempt to shrink an LV to get extra space for snapshots; back up the LV, delete it, recreate it, and copy the data back.)
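A quicker way to see the same number, assuming your volume group is named vps as in the partitioning above, is to ask the volume group directly; the VFree column is exactly the space left for snapshots:

# vgs vps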
# lvcreate --snapshot --name snap --size 10GB /dev/vps/vz
(in this example I just created /dev/vps/snap, which will be good for 10GB worth of changes to the /dev/vps/vz logical volume.)
(Now let’s mount it for example if we wanted to back it up)
# mkdir /mnt; mount /dev/vps/snap /mnt; ls -al /mnt
(now we have /mnt available for a backup with ie: rsync)
(to unmount and remove snapshot to get our 10GB of free space back for another snapshot:)
# umount /mnt; lvremove /dev/vps/snap
(simple as that, now you are an expert at snapshots)

Now let's get more advanced. Let's say I want to take that 10GB of space and take daily and monthly snapshots of /dev/vps/vz, as well as a monthly snapshot of the host itself (/dev/vps/root), so that we have the ability to get back anything we accidentally deleted anywhere.

Let's set up a cronjob:
#crontab -e
(let's add the following:)
#10 Gigabytes available for snapshots, 7 days in a week=7GB+2GB monthly snapshot+1GB host snapshot=10GB
#daily snapshots
0 0 * * * (lvremove -f /dev/vps/snap-`/bin/date +\%A`; lvcreate --snapshot --name snap-`/bin/date +\%A` --size 1GB /dev/vps/vz) > /dev/null 2>&1
#monthly snapshot
0 0 1 * * (lvremove -f /dev/vps/snap-Monthly; lvcreate --snapshot --name snap-Monthly --size 2GB /dev/vps/vz) > /dev/null 2>&1
#host snapshot once a month
0 0 1 * * (lvremove -f /dev/vps/snap-host-Monthly; lvcreate --snapshot --name snap-host-Monthly --size 1GB /dev/vps/root) > /dev/null 2>&1

(and there you have it: every day we will get day-of-week snapshots, /dev/vps/snap-Thursday for example, plus 1 monthly snapshot, and 1 monthly snapshot of the host itself, any of which we can mount anytime to /mnt to do any recovery or backups needed)

My recommendation, if you have the space, is to have a backup LVM the same size as /vz. My preference for backups personally is the ZFS filesystem using FreeBSD, which you can easily install as a VPS under KVM virtualization: share the disks to the VM, then have FreeBSD do a ZFS raid on them. But that's for another article.

As far as LVM goes, what I personally do is rsync the /vz directory each night to a ZFS filesystem and take snapshots there; ZFS is much nicer to work with when dealing with snapshots. Where LVM comes in handy is snapshotting the LV (because we use ext4 on Linux) for something like a MySQL database, mounting it, then rsyncing it off to a ZFS filesystem.

Dan