it-artikel:linux:proxmox-7-installation-and-configuration-with-two-network-interfaces · Last modified: 2023-08-05 14:01 by axel.werner.1973@gmail.com
     {LAN 192.168.0.0/24}
              ||
  +==========================+
  | [LAN IF 192.168.0.250]   |
  |     ProxMox 7 Host       |
  |        pve.lan           |
  | [DMZ IF 192.168.178.250] |
  +==========================+
              ||
    {DMZ 192.168.178.0/24}
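On the Proxmox host the two NICs typically end up as two Linux bridges, so both guests and the host can use them. As a rough sketch of an ''/etc/network/interfaces'' matching the diagram above; the physical NIC names (''enp1s0''/''enp2s0'') and the LAN gateway address are assumptions for illustration, adjust them to your hardware:

```
# /etc/network/interfaces (sketch -- NIC names and gateway are assumptions)
auto lo
iface lo inet loopback

auto vmbr0
iface vmbr0 inet static
    address 192.168.0.250/24
    gateway 192.168.0.1
    bridge-ports enp1s0
    bridge-stp off
    bridge-fd 0

auto vmbr1
iface vmbr1 inet static
    address 192.168.178.250/24
    bridge-ports enp2s0
    bridge-stp off
    bridge-fd 0
```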
# has been assigned the /dev/sdx
# device name.
sudo dd if=/path/to/proxmox7.iso of=/dev/sdx bs=4M status=progress
</code>
  - Unplug the USB drive and plug it into the Proxmox server. Boot from the USB drive. A somewhat graphical GRUB boot menu should be visible. **Choose install**
It's important to set up automatic email forwarding early on, so the system can notify us about any problem it detects. For this reason we entered an email address in the graphical installer. This is fine, but it may not be enough to enable proper outbound email.

Since GMAIL failed us and tools like postfix etc. are not "oauth2" compliant, in Oct. 2022 I found a friendly French email provider called [[https://www.brevo.com/free-smtp-server/|BREVO]] that seems to offer plain old **SMTP AUTH** for up to 300 emails per month for free. And for now it seems to accept pretty much any FROM address without hassle.

  - Make sure you entered a valid (destination) email address during the Proxmox installation. If you need to change it, you can do so using the Web UI. Therefore ...
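For postfix, the relay setup boils down to a few lines in ''main.cf'' plus a credentials file. A rough sketch; the relay hostname and port are assumptions (verify both in your Brevo account), and the login/key in ''sasl_passwd'' are hypothetical placeholders:

```
# /etc/postfix/main.cf -- relay sketch (hostname/port are assumptions,
# check them in your Brevo account settings)
relayhost = [smtp-relay.brevo.com]:587
smtp_use_tls = yes
smtp_sasl_auth_enable = yes
smtp_sasl_security_options = noanonymous
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd

# /etc/postfix/sasl_passwd (hypothetical credentials):
#   [smtp-relay.brevo.com]:587 your-login@example.com:YOUR-SMTP-KEY
# afterwards:
#   postmap /etc/postfix/sasl_passwd && systemctl reload postfix
```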
#   - this might take a while
#     on large volumes

# needed for the XFS alignment calculations;
# take the values from your mdadm details
export RAID_DEVICE=/dev/mapper/pvedata-lvraid5
export CHUNK_SZ_KB=256
export PARITY_DRIVE_COUNT=1
export NON_PARITY_DRIVE_COUNT=2

mkfs.xfs \
  -L raid5lv \
  -f \
  -l lazy-count=1 \
  -d sunit=$(($CHUNK_SZ_KB*2)) \
  -d swidth=$(($CHUNK_SZ_KB*2*$NON_PARITY_DRIVE_COUNT)) \
  $RAID_DEVICE


# Check Result / Details:

xfs_info /dev/mapper/pvedata-lvraid5

# meta-data=/dev/mapper/pvedata-lvraid5 isize=512  agcount=32, agsize=91568576 blks
#          =                       sectsz=4096  attr=2, projid32bit=1
#          =                       crc=1        finobt=1, sparse=1, rmapbt=0
#          =                       reflink=1    bigtime=0
# data     =                       bsize=4096   blocks=2930193408, imaxpct=5
#          =                       sunit=64     swidth=128 blks
# naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
# log      =internal log           bsize=4096   blocks=521728, version=2
#          =                       sectsz=4096  sunit=1 blks, lazy-count=1
# realtime =none                   extsz=4096   blocks=0, rtextents=0

</code>
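As a quick plausibility check, the ''sunit''/''swidth'' values that ''xfs_info'' reports (in 4 KiB filesystem blocks) can be derived by hand from the chunk size. A small sketch using the same numbers as above:

```shell
# mkfs.xfs takes sunit/swidth in 512-byte sectors:
CHUNK_SZ_KB=256
NON_PARITY_DRIVE_COUNT=2
BLOCK_SZ=4096                                              # fs block size per xfs_info

SUNIT_SECTORS=$(($CHUNK_SZ_KB*2))                          # 256 KiB chunk = 512 sectors
SWIDTH_SECTORS=$(($SUNIT_SECTORS*$NON_PARITY_DRIVE_COUNT))

# xfs_info prints the same values converted to filesystem blocks:
echo "sunit=$(($SUNIT_SECTORS*512/$BLOCK_SZ)) swidth=$(($SWIDTH_SECTORS*512/$BLOCK_SZ)) blks"
# -> sunit=64 swidth=128 blks  (matches the xfs_info output above)
```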
  
df -h / /raid*

</code>
  - Finally (after the raid has finished syncing) let's do a little performance test (with pve on-board tools) and compare the SSD boot device with the LVM on the mdadm raid:<code>
pveperf           # root fs
pveperf /raid5lv/ # xfs on lvm on md raid


root@pve:~# pveperf # single ssd
CPU BOGOMIPS:      19956.04
REGEX/SECOND:      1060547
HD SIZE:           109.80 GB (/dev/mapper/pve-root)
BUFFERED READS:    268.40 MB/sec
AVERAGE SEEK TIME: 0.14 ms
FSYNCS/SECOND:     480.64
DNS EXT:           536.55 ms
DNS INT:           539.24 ms (lan)


root@pve:~# pveperf /raid5lv/ # xfs on lvm on md raid
CPU BOGOMIPS:      19956.04
REGEX/SECOND:      1316398
HD SIZE:           11175.81 GB (/dev/mapper/pvedata-lvraid5)
BUFFERED READS:    334.68 MB/sec <<< SUPER
AVERAGE SEEK TIME: 14.22 ms <<< to be expected
FSYNCS/SECOND:     23.77 <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< WAY DOWN :(
DNS EXT:           530.12 ms
DNS INT:           546.22 ms (lan)

</code>
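The collapsed FSYNCS/SECOND number is the classic raid5 penalty: every small synchronous write forces a read-modify-write of a whole stripe. One cheap, system-specific knob to experiment with is md's stripe cache; the array name (''md127'', as on our box) and the value ''8192'' are assumptions, the setting does not survive a reboot, and it mainly helps streaming writes, so re-run ''pveperf'' to see whether your workload benefits at all:

```
# enlarge the raid5 stripe cache (pages per device, default 256);
# md127 and 8192 are example values -- adjust to your system
echo 8192 > /sys/block/md127/md/stripe_cache_size
cat /sys/block/md127/md/stripe_cache_size
```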
}

mdState=$( /usr/sbin/mdadm --detail /dev/md127 | grep "State :" | cut -d: -f2 | tr -d ' ' )

case "$mdState" in
  - Make the shell script executable:<code>chmod +x /usr/local/sbin/check-raid-status.sh</code>
  - Activate a cronjob that runs the check script once per hour during the daytime:<code>
cat <<'EOF' > /etc/cron.d/raid-monitor-md127
#
# Regular cron jobs to audibly alert admin if
# the raid array reports a problem
#
#m h dom mon dow user command
5 06-22 * * * root /usr/local/sbin/check-raid-status.sh

EOF