it-artikel:linux:proxmox-7-installation-and-configuration-with-two-network-interfaces [2023-08-05 14:01] (current)
Line 9:
      {LAN 192.168.0.0/24}
               ||
  +==========================+
  | [LAN IF 192.168.0.250]   |
  |     ProxMox 7 Host       |
  |        pve.lan           |
  | [DMZ IF 192.168.178.250] |
  +==========================+
               ||
     {DMZ 192.168.178.0/24}
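The actual network configuration happens later in the article (it is not part of this excerpt). Purely for orientation, a two-bridge /etc/network/interfaces setup matching the diagram could look roughly like the sketch below; the NIC names enp1s0/enp2s0, the bridge names vmbr0/vmbr1 and the gateway placement are assumptions, not taken from the article: <code>
# /etc/network/interfaces (sketch only, adjust NIC names and gateway)
auto lo
iface lo inet loopback

iface enp1s0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.0.250/24
        bridge-ports enp1s0
        bridge-stp off
        bridge-fd 0
        # gateway 192.168.0.1   <- assumption, depends on which network has the uplink

iface enp2s0 inet manual

auto vmbr1
iface vmbr1 inet static
        address 192.168.178.250/24
        bridge-ports enp2s0
        bridge-stp off
        bridge-fd 0
</code>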
Line 39:
# has been assigned the /dev/sdx
# device name.
sudo dd if=/path/to/proxmox7.iso of=/dev/sdx
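# flush caches so the image is fully written before unplugging
# (addition, not in the original article)
sync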
</code>
  - Unplug the USB drive and plug it into the Proxmox server. Boot from the USB drive. A somewhat graphical GRUB boot menu should appear. **Choose Install**
Line 156:
It's important to set up automatic email forwarding early on, so the system can notify us about any problem it detects. That's why we entered an email address in the graphical installer. This is fine, but it may not be enough to get proper outbound email working.
  
Since Gmail failed us and tools like Postfix are not OAuth2 capable, in Oct. 2022 I found a friendly French email provider called [[https://www.brevo.com/free-smtp-server/|BREVO]] that seems to offer plain old **SMTP AUTH** for up to 300 emails per month for free. And for now it seems to accept pretty much any FROM address without hassle.
  
  - Make sure you entered a valid (destination) email address during the Proxmox installation. If you need to change it, you can do that in the web UI. To do so ...
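Not part of the original article: since Proxmox 7 ships Postfix as its MTA, a classic SMTP-AUTH relay setup looks roughly like the sketch below. The relay host, port and credential placeholders are assumptions and have to be taken from the provider's SMTP settings: <code>
# /etc/postfix/main.cf (excerpt) - relay all outbound mail through an authenticated smarthost
relayhost = [smtp-relay.brevo.com]:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_tls_security_level = encrypt

# /etc/postfix/sasl_passwd (chmod 600), one line:
# [smtp-relay.brevo.com]:587 SMTP-LOGIN:SMTP-KEY

# activate
postmap /etc/postfix/sasl_passwd
systemctl reload postfix
</code>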
Line 238:
  
</code> The MODEL and SERIAL column information is used in the "by-id" names! So make note of those!
  - According to the Arch Linux people it's best practice to put a single partition of type 'Linux RAID' on every drive, even if it isn't strictly needed. It's supposed to help later when a failed drive has to be replaced one day, and it also makes it easy to see that these disks are actually "in use". So this is how I create a single partition of type "RAID" with maximum size on every drive: <code>
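# (the author's exact commands are outside this excerpt; one common way to do it
#  with sgdisk - the device name below is a placeholder, use your /dev/disk/by-id/ name)
#
#   sgdisk --zap-all /dev/disk/by-id/ata-EXAMPLE-DISK
#   sgdisk --new=1:0:0 --typecode=1:fd00 /dev/disk/by-id/ata-EXAMPLE-DISK
#   sgdisk --print /dev/disk/by-id/ata-EXAMPLE-DISK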
  
Line 249:
  
</code>
  - Create a 3-drive RAID5 array using mdadm: <code>
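# (the author's exact command is outside this excerpt; a typical invocation
#  for a 3-disk RAID5 - the by-id names are placeholders)
#
#   mdadm --create /dev/md/raid5 --level=5 --raid-devices=3 \
#         /dev/disk/by-id/ata-DISK1-part1 \
#         /dev/disk/by-id/ata-DISK2-part1 \
#         /dev/disk/by-id/ata-DISK3-part1
#
# the initial sync can be watched with:
#   cat /proc/mdstat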
  
Line 292:
  
</code>
  - Next we create a single LVM volume group **pvedata** and add **/dev/md/raid5** as its first physical volume. That way we can later grow the VG or "move" it to another drive for easier maintenance. <code>
vgcreate pvedata /dev/md/raid5
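# check the result (addition, not in the original excerpt):
pvs
vgs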
Line 302:
  - Since I would like to "share" the whole free disk space between the host OS AND the Proxmox guests, AND I want to be able to store "anything" (any Proxmox content/data type) on this pool, I am going to create ONE large logical volume (LV) now and add it to Proxmox as a "directory" storage pool later: <code>
  
lvcreate --verbose --extents 100%VG --name lvraid5 pvedata

# alternative invocation with explicit autobackup and readahead options:
# lvcreate --autobackup y --extents 100%VG --name lvraid5 --readahead auto pvedata

# check the result with
lvs
# or
lvdisplay
  
</code>
  - FIXME <code>FIXME</code>
  - FIXME <code>FIXME</code>
  - FIXME <code>FIXME</code>
  - FIXME <code>FIXME</code>
  - FIXME <code>FIXME</code>
  - FIXME <code>FIXME</code>
  - FIXME <code>FIXME</code>
  - Finally we put a filesystem on the LV: <code>
  
Line 332:
#   - this might take a while
#     on large volumes

# needed for the XFS alignment calculations;
# take the chunk size from your mdadm details
# (sunit/swidth are given in 512-byte sectors, hence the *2)
#
export RAID_DEVICE=/dev/mapper/pvedata-lvraid5
export CHUNK_SZ_KB=256
export PARITY_DRIVE_COUNT=1
export NON_PARITY_DRIVE_COUNT=2

mkfs.xfs \
  -L raid5lv \
  -f \
  -l lazy-count=1 \
  -d sunit=$(($CHUNK_SZ_KB*2)) \
  -d swidth=$(($CHUNK_SZ_KB*2*$NON_PARITY_DRIVE_COUNT)) \
  $RAID_DEVICE

# simple variant without stripe alignment options:
# mkfs.xfs -L raid5lv /dev/mapper/pvedata-lvraid5


# Check Result / Details:

xfs_info /dev/mapper/pvedata-lvraid5

#  meta-data=/dev/mapper/pvedata-lvraid5 isize=512    agcount=32, agsize=91568576 blks
#                                 sectsz=4096  attr=2, projid32bit=1
#                                 crc=1        finobt=1, sparse=1, rmapbt=0
#                                 reflink=1    bigtime=0
#  data                           bsize=4096   blocks=2930193408, imaxpct=5
#                                 sunit=64     swidth=128 blks
#  naming   =version 2            bsize=4096   ascii-ci=0, ftype=1
#  log      =internal log         bsize=4096   blocks=521728, version=2
#                                 sectsz=4096  sunit=1 blks, lazy-count=1
#  realtime =none                 extsz=4096   blocks=0, rtextents=0
  
</code>
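Not part of this excerpt: before Proxmox can use the new filesystem as a "directory" storage pool, it has to be mounted somewhere, e.g. under /raid5lv (that mount point shows up again in the performance test below). A rough sketch, assuming the storage name raid5lv: <code>
mkdir -p /raid5lv
echo '/dev/mapper/pvedata-lvraid5  /raid5lv  xfs  defaults  0  2' >> /etc/fstab
mount /raid5lv

# register it in Proxmox as a directory storage pool
pvesm add dir raid5lv --path /raid5lv --content images,rootdir,iso,vztmpl,backup,snippets
</code>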
Line 436:
  
df -h / /raid*

</code>
  - Finally (after the RAID has finished syncing) let's do a little performance test (with PVE on-board tools) and compare the SSD boot device with the LVM-on-mdadm RAID: <code>
pveperf # root fs

pveperf /raid5lv/


root@pve:~# pveperf # single ssd

CPU BOGOMIPS:      19956.04
REGEX/SECOND:      1060547
HD SIZE:           109.80 GB (/dev/mapper/pve-root)
BUFFERED READS:    268.40 MB/sec
AVERAGE SEEK TIME: 0.14 ms
FSYNCS/SECOND:     480.64
DNS EXT:           536.55 ms
DNS INT:           539.24 ms (lan)


root@pve:~# pveperf /raid5lv/ # xfs on lvm on dm raid

CPU BOGOMIPS:      19956.04
REGEX/SECOND:      1316398
HD SIZE:           11175.81 GB (/dev/mapper/pvedata-lvraid5)
BUFFERED READS:    334.68 MB/sec <<< SUPER
AVERAGE SEEK TIME: 14.22 ms <<< to be expected
FSYNCS/SECOND:     23.77 <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< WAY DOWN :(
DNS EXT:           530.12 ms
DNS INT:           546.22 ms (lan)
</code>
Line 550:
}
  
mdState=$( /usr/sbin/mdadm --detail /dev/md127 | grep "State :" | cut -d: -f2 | tr -d ' ' )
  
 case "$mdState" in case "$mdState" in
Line 570:
  - Make the shell script executable: <code>chmod +x /usr/local/sbin/check-raid-status.sh</code>
  - Activate a cron job that runs the check script hourly (at minute 5) during the daytime: <code>
cat <<'EOF' > /etc/cron.d/raid-monitor-md127
#
# Regular cron jobs to audibly alert admin if
Line 578:
#
#m h dom mon dow user command
5 06-22 * * * root /usr/local/sbin/check-raid-status.sh
  
EOF
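# (addition, not in the original) run the check once by hand to make
# sure the script and the audible alert actually work:
/usr/local/sbin/check-raid-status.sh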