This page contains my installation notes for a Proxmox 7 virtualisation server with two network interfaces. This is the network scenario we are going to work with:
      {LAN 192.168.0.0/24}
                ||
+===========================+
| [LAN IF 192.168.0.250]    |
|      ProxMox 7 Host       |
|         pve.lan           |
| [DMZ IF 192.168.178.250]  |
+===========================+
                ||
     {DMZ 192.168.178.0/24}
                ||
+==============================+
| [Internal IF 192.168.178.1]  |
|      Shitty ISP Router       |
|         router.dmz           |
| [External IF DHCP IPv4+IPv6] |
+==============================+
                ||
      {Internet IPv4 + IPv6}
cd Downloads

# NOTE: in my case my thumb drive
# has been assigned the /dev/sdx
# device name.
sudo dd if=/path/to/proxmox7.iso of=/dev/sdx
firefox https://pve.lan:8006/
ssh root@pve.lan
# type yes to confirm

# fix the "perl: warning: Setting locale failed" problem
locale-gen

# make a backup first
cp -v /etc/network/interfaces{,.$(date +%F)}
# WARNING: will replace file!
cat << 'EOF' > /etc/network/interfaces
auto lo
iface lo inet loopback

# physical internal LAN interface
iface enp1s0 inet manual

# physical external DMZ interface
iface enp5s8 inet manual

# virtual bridged interface 0 (internal LAN)
auto vmbr0
iface vmbr0 inet static
    address 192.168.0.250/24
    bridge-ports enp1s0
    bridge-stp off
    bridge-fd 0
    post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up   iptables -t nat -A POSTROUTING -s '192.168.0.0/24' -o vmbr1 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s '192.168.0.0/24' -o vmbr1 -j MASQUERADE
    post-up   iptables -t raw -I PREROUTING -i fwbr+ -j CT --zone 1
    post-down iptables -t raw -D PREROUTING -i fwbr+ -j CT --zone 1

# virtual bridged interface 1 (external DMZ)
auto vmbr1
iface vmbr1 inet static
    address 192.168.178.250/24
    gateway 192.168.178.1
    bridge-ports enp5s8
    bridge-stp off
    bridge-fd 0
EOF
# make a backup first
cp -v /etc/resolv.conf{,.$(date +%F)}

# replace /etc/resolv.conf with my config:
cat << 'EOF' > /etc/resolv.conf
search lan
nameserver 2001:4860:4860::8888
nameserver 8.8.8.8
EOF
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list
cat << EOF > /etc/apt/sources.list.d/pve-no-subs.list
# according to ...
# https://pve.proxmox.com/wiki/Package_Repositories
#
# PVE pve-no-subscription repository provided by proxmox.com,
# NOT recommended for production use
deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription
EOF
ifreload -a
# or
ifreload -a -v   # VERY VERBOSE
apt update && apt dist-upgrade -yy && reboot
Since Google decided to drop support for classic SMTP AUTH (it only supports OAUTH2 these days), most tutorials and instructions you can find on the web that still use GMAIL are outdated, and I was unable to find an easy way to adapt Postfix SMTP under Debian 11 (Proxmox 7) to it. So I decided to look for another free email provider (https://www.sendinblue.com/) that is still able to serve me plain SMTP AUTH to forward at least "some" notification mails.
This is my attempt to solve this.
It's important to set up automatic email forwarding early on, so the system can notify us about any problem it detects. That is why we entered an email address in the graphical installer. This is OK, but it may not be enough to enable proper outbound email.
Since GMAIL failed us and tools like Postfix are not "OAUTH2" compliant, I found (in Oct. 2022) a friendly French email provider called BREVO that seems to offer plain old SMTP AUTH for up to 300 emails per month for free. And for now it seems to accept pretty much any FROM address without hassle.
apt install -y libsasl2-modules sasl2-bin swaks

# list all installed/available SASL plugins
saslpluginviewer
cp -v /etc/postfix/main.cf{,.$(date +%F)}
postconf 'relayhost = [smtp-relay.sendinblue.com]:587'
postconf 'smtp_use_tls = yes'
postconf 'smtp_tls_security_level = encrypt'
postconf 'smtp_tls_CApath = /etc/ssl/certs/'
postconf 'smtp_sasl_auth_enable = yes'
postconf 'smtp_sasl_security_options ='
postconf 'smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd'
postconf 'smtp_sasl_mechanism_filter ='

systemctl restart postfix
systemctl status postfix
# NOTE THE LEADING SPACE in the next line!
# Prevents it from being saved in bash history
 echo '[smtp-relay.sendinblue.com]:587 axel.werner.1973@gmail.com:seecreetPazzword' > /etc/postfix/sasl_passwd

chmod 600 /etc/postfix/sasl_passwd
postmap hash:/etc/postfix/sasl_passwd
tail -n0 -f /var/log/messages /var/log/syslog /var/log/mail.* &
msg="testies $(date)" ; echo "$msg" | /usr/bin/pvemailforward
Since my host system contains multiple terabytes of disk space, I would like to use some sort of redundant storage configuration (RAID5-like). ZFS would be a cool choice. However, every manual I read notes that I would need at least 1 GB of RAM per TB of disk space, which I don't have. So using ZFS is out of the window for now.
I have 8GB RAM and roughly 3x 6TB HDD ( /dev/sd[bcd] )
So I guess Linux's 'md' (mdadm RAID) it will be.
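A quick back-of-the-envelope check of the numbers above, assuming the common "1 GB RAM per TB of disk" ZFS rule of thumb from those manuals:

```shell
# 3x 6 TB in RAID5 leaves (n-1) disks of usable capacity, while the
# ZFS rule of thumb would call for ~1 GB RAM per TB of raw disk.
DISKS=3
DISK_TB=6
USABLE_TB=$(( (DISKS - 1) * DISK_TB ))   # -> 12 TB usable
ZFS_RAM_GB=$(( DISKS * DISK_TB ))        # -> ~18 GB RAM suggested
echo "usable: ${USABLE_TB} TB / ZFS rule of thumb: ~${ZFS_RAM_GB} GB RAM"
```

With only 8 GB of RAM installed, that rule of thumb alone rules ZFS out here.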
apt install -y mdadm sysstat
wipefs --all --force /dev/sd[bcd]
lsblk -o PATH,MODEL,SERIAL,STATE,ROTA,TYPE,size | grep disk
The MODEL and SERIAL column values are used in the "by-id" device names, so make a note of them!
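As a sketch of how those columns end up in the by-id names (the MODEL and SERIAL values below are made-up examples, and the `ata-` prefix depends on the bus type):

```shell
# udev composes the /dev/disk/by-id/ symlink name from the bus prefix,
# the MODEL (spaces become underscores) and the SERIAL.
MODEL="WDC WD60EFRX-68L"     # hypothetical model string from lsblk
SERIAL="WD-WX11D1234567"     # hypothetical serial
BYID="ata-$(echo "$MODEL" | tr ' ' '_')_${SERIAL}"
echo "$BYID"                 # prints "ata-WDC_WD60EFRX-68L_WD-WX11D1234567"

# compare against the real thing with:
# ls -l /dev/disk/by-id/
```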
for drv in /dev/sd[bcd] ; do
    echo "Deploying GPT on ${drv}"
    echo 'label:gpt' | sfdisk -q "${drv}"
    echo "Partitioning ${drv}"
    echo '1M,+,R' | sfdisk -q "${drv}"
    sfdisk -l "${drv}"
done
modprobe -a linear multipath raid0 raid1 raid5 raid6 raid10 dm-mod

mdadm --create --run \
    --verbose \
    --level=5 \
    --raid-devices=3 \
    --consistency-policy=ppl \
    --chunk=256 \
    /dev/md/raid5 \
    /dev/sd[bcd]1
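A quick sketch of what `--chunk=256` implies on this 3-disk RAID5: one disk's worth of every stripe holds parity, so a full-stripe write carries two data chunks. This number comes back later for the XFS alignment.

```shell
# --chunk=256 on a 3-disk RAID5: 2 data chunks + 1 parity chunk
# per stripe, so a full-stripe write moves 512 KiB of data.
CHUNK_KB=256
RAID_DEVICES=3
DATA_DISKS=$(( RAID_DEVICES - 1 ))
STRIPE_KB=$(( CHUNK_KB * DATA_DISKS ))
echo "full stripe: ${STRIPE_KB} KiB over ${DATA_DISKS} data disks"
```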
watch cat /proc/mdstat
# or
mdadm --detail /dev/md/raid5
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
mdadm --monitor --scan --oneshot --test
pvcreate -v /dev/md/raid5

# check details with
pvs
# or
pvdisplay
vgcreate pvedata /dev/md/raid5

# check/list with
vgs
# or
vgdisplay
lvcreate --verbose --extents 100%VG --name lvraid5 pvedata

# FIXME ?
# lvcreate --autobackup y --extents 100%VG --name lvraid5 --readahead auto pvedata

# check with
lvs
# or
lvdisplay
FIXME
# lookup device path
lsblk -p

# make xfs filesystem
# - this might take a while on large volumes

# need this for XFS alignment calculations
# take it from your mdadm details
export RAID_DEVICE=/dev/mapper/pvedata-lvraid5
export CHUNK_SZ_KB=256
export PARITY_DRIVE_COUNT=1
export NON_PARITY_DRIVE_COUNT=2

mkfs.xfs \
    -L raid5lv \
    -f \
    -l lazy-count=1 \
    -d sunit=$(($CHUNK_SZ_KB*2)) \
    -d swidth=$(($CHUNK_SZ_KB*2*$NON_PARITY_DRIVE_COUNT)) \
    $RAID_DEVICE

# Check Result / Details:
xfs_info /dev/mapper/pvedata-lvraid5
# meta-data=/dev/mapper/pvedata-lvraid5 isize=512    agcount=32, agsize=91568576 blks
#          =                            sectsz=4096  attr=2, projid32bit=1
#          =                            crc=1        finobt=1, sparse=1, rmapbt=0
#          =                            reflink=1    bigtime=0
# data     =                            bsize=4096   blocks=2930193408, imaxpct=5
#          =                            sunit=64     swidth=128 blks
# naming   =version 2                   bsize=4096   ascii-ci=0, ftype=1
# log      =internal log                bsize=4096   blocks=521728, version=2
#          =                            sectsz=4096  sunit=1 blks, lazy-count=1
# realtime =none                        extsz=4096   blocks=0, rtextents=0

# FIXME
# mkfs.xfs -L raid5lv /dev/mapper/pvedata-lvraid5

FIXME
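For reference, the arithmetic behind those sunit/swidth values: mkfs.xfs expects them in 512-byte sectors, which is where the "*2" conversion from KiB comes from.

```shell
# Convert the mdadm chunk size (KiB) into mkfs.xfs sunit/swidth,
# which are given in 512-byte sectors (1 KiB = 2 sectors).
CHUNK_SZ_KB=256
NON_PARITY_DRIVE_COUNT=2
SUNIT=$(( CHUNK_SZ_KB * 2 ))                           # 256 KiB chunk -> 512 sectors
SWIDTH=$(( CHUNK_SZ_KB * 2 * NON_PARITY_DRIVE_COUNT )) # full stripe   -> 1024 sectors
echo "sunit=${SUNIT} swidth=${SWIDTH}"
```

This matches the xfs_info output above: sunit=64 and swidth=128 there are in 4096-byte blocks, i.e. the same 256 KiB and 512 KiB.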
mkdir -vp /raid5lv
mount -v /dev/mapper/pvedata-lvraid5 /raid5lv
mount | grep raid
ls -la /raid5lv
touch /raid5lv/Welcome_to_raid5lv
ls -la /raid5lv
umount -v /raid5lv
ls -la /raid5lv
cat << 'EOF' >> /etc/fstab
LABEL=raid5lv  /raid5lv  xfs  defaults  0  2
EOF
mount | grep raid
mount -a
mount | grep raid
# creates a SUBDIR proxmox/ so we can share
# the top level directory with the host
# or other things without mixup.
#
pvesm add dir raid5lv --path /raid5lv/proxmox/ --content iso,vztmpl,backup,rootdir,images,snippets
# FIXME
lvremove /dev/pve/data
# lvresize --resizefs --extents +100%FREE /dev/pve/root
df -h / /raid*
pveperf            # root fs
pveperf /raid5lv/

root@pve:~# pveperf   # single ssd
CPU BOGOMIPS:      19956.04
REGEX/SECOND:      1060547
HD SIZE:           109.80 GB (/dev/mapper/pve-root)
BUFFERED READS:    268.40 MB/sec
AVERAGE SEEK TIME: 0.14 ms
FSYNCS/SECOND:     480.64
DNS EXT:           536.55 ms
DNS INT:           539.24 ms (lan)

root@pve:~# pveperf /raid5lv/   # xfs on lvm on dm raid
CPU BOGOMIPS:      19956.04
REGEX/SECOND:      1316398
HD SIZE:           11175.81 GB (/dev/mapper/pvedata-lvraid5)
BUFFERED READS:    334.68 MB/sec   <<< SUPER
AVERAGE SEEK TIME: 14.22 ms        <<< to be expected
FSYNCS/SECOND:     23.77           <<<<<<<<<<<<<<<<<<<<< WAY DOWN :(
DNS EXT:           530.12 ms
DNS INT:           546.22 ms (lan)
For whatever reason the company behind Proxmox changed the .bashrc of the root user. Debian's bashrc template is /etc/skel/.bashrc, which contains important environment variables that configure the bash history. However, they are missing in root's version, resulting in duplicate and possibly security-threatening entries.
To fix it I did this:
- Extract the history settings and append them to root's .bashrc:
grep HIST /etc/skel/.bashrc >> /root/.bashrc
In theory the mdadm monitor "should" already send email to the root account in case of a problem with the MD RAID. However, I prefer to add another layer of safety and make it audible when a state is detected that is not expected.
Some simple beeps would be sufficient, but sending an S.O.S. in morse code is more like it. And it can easily be extended to do more things (like sending another mail) or to send a whole message with details via morse code. It's up to you.
This is how I did it:
apt install beep
modprobe pcspkr
beep
beep
It still works without "modprobing".

Next, create the script /usr/local/sbin/check-raid-status.sh:

#!/bin/bash
# CHANGE LOG:
#
# 2021-08-01 A.Werner ADD: wall + console output + new string
#                          clean,checking
#
# 2021-09-05 A.Werner ADD: new OK string added
#
function dit {
    beep -f 2750 -l 75 -d 50
}
function dah {
    beep -f 2750 -l 175 -d 50
}
function spc {
    sleep .1
}
function s {
    dit ; dit ; dit ; spc
}
function o {
    dah ; dah ; dah ; spc
}
function morse_sos {
    s ; o ; s
    sleep .5
}

mdState=$( /usr/sbin/mdadm --detail /dev/md127 \
    | grep "State :" | cut -d: -f2 | tr -d ' ' )

case "$mdState" in
    active|active,checking|clean|clean,checking)
        : # nop
        ;;
    *)
        morse_sos
        echo "$0 WARNING: mdadm reports md0 status: '$mdState' on $(date)" >&2
        echo "$0 WARNING: mdadm reports md0 status: '$mdState' on $(date)" > /dev/console
        wall "$0 WARNING: mdadm reports md0 status: '$mdState' on $(date)"
        ;;
esac
chmod +x /usr/local/sbin/check-raid-status.sh
cat <<'EOF' > /etc/cron.d/raid-monitor-md127
#
# Regular cron jobs to audibly alert admin if
# md (mdadm) raid changes state from "clean"
# using morse code
#
#m h     dom mon dow user command
5  06-22 *   *   *   root /usr/local/sbin/check-raid-status.sh
EOF
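The dit/dah pattern in the script can be sanity-checked without a PC speaker by swapping beep for printf. This dry-run sketch just prints the symbols:

```shell
# Dry-run version of the script's morse functions: print symbols
# instead of beeping, to verify the S.O.S. pattern on any terminal.
dit() { printf '.'; }
dah() { printf '-'; }
spc() { printf ' '; }
s()   { dit; dit; dit; }
o()   { dah; dah; dah; }
morse_sos() { s; spc; o; spc; s; printf '\n'; }

morse_sos   # prints "... --- ..."
```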
FIXME