====== EXPERIMENTAL HA Kubernetes Cluster for SOHO use on UBUNTU + k3sup + k3s base ======


===== Project Goal: =====

  - self-hosted, HA, fault-tolerant (multi-master) Kubernetes cluster
  - NO SINGLE POINT OF FAILURE!
  - no "
  - using 3 UBUNTU Server VMs
  - easy and fast to set up
  - one site only
  - FIXME shared storage/

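The goal of three master VMs follows from quorum arithmetic: k3s in ''--cluster'' mode runs an embedded etcd, and etcd stays writable only while a majority of members is up. A tiny script (generic math, not tied to any k3s command) illustrates why 3 is the smallest useful HA count:

```shell
#!/usr/bin/env bash
# etcd needs a majority (quorum) of members to accept writes.
# n members give a quorum of floor(n/2)+1 and tolerate
# floor((n-1)/2) failures: 3 nodes survive 1 failure, 2 survive none.
quorum()    { echo $(( $1 / 2 + 1 )); }
tolerated() { echo $(( ($1 - 1) / 2 )); }

for n in 1 2 3 4 5; do
  printf 'members=%d quorum=%d tolerated_failures=%d\n' \
    "$n" "$(quorum "$n")" "$(tolerated "$n")"
done
# members=3 quorum=2 tolerated_failures=1
```

Note that 4 members tolerate no more failures than 3, which is why odd member counts are the usual choice.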
===== Kinda important stuff to read and know about: =====

  * UBUNTU Server 20.04
  * ssh passwordless public key authentication
  * VirtualBox or KVM virtualisation platform, or bare metal if available
  * k3s
  * [[https://
  * [[https://
  * [[https://
  * [[https://
  * k3sup [[https://
  * Docker registry:
  * [[https://
  * [[https://

===== Installation / Setup: =====

  - Set up 2-3 UBUNTU Server 20.04 VMs that can be reached via ssh. They must have unique hostnames and IP addresses, of course. Make sure your IP addresses are FIXED and not dynamic, because later we will require a DNS record for all fixed IP addresses of the cluster.
  - Configure your VMs' static IP addresses. In my scenario I will use **.101** for the first master node, **.102** for the second node, etc.: <code bash>
# write the netplan config (the file name below is an example;
# adjust it to match the existing file on your system)
cat << 'EOF' > /etc/netplan/00-installer-config.yaml
network:
  version: 2
  ethernets:
    enp0s3:
      addresses:
        - 192.168.0.101/24
      gateway4: 192.168.0.1
      nameservers:
        addresses:
          - 192.168.0.1
        search:
          - lan
EOF

reboot   # then test network connectivity

</code>
  - Set up passwordless ssh login to those VMs from your admin workstation.
  - Optional: have an FQDN available for each VM instead of plain IP addresses.
  - On the 1st VM (k3s-master1), login as root: <code bash>
# generate a ssh key pair (path and empty passphrase are examples)
ssh-keygen -f /root/.ssh/id_rsa -N ''

# distribute the ssh public key to every other master VM

# copy to ourselves too
ssh-copy-id -o StrictHostKeyChecking=no root@k3s-master1.lan

ssh-copy-id -o StrictHostKeyChecking=no root@k3s-master2.lan
ssh-copy-id -o StrictHostKeyChecking=no root@k3s-master3.lan
</code>
  - Prepare data directories (local storage) for **k3s (alias rancher)** and **kubelet** below **/data/** on every node, since we don't want them to fill up our ROOTFS. That only helps if you mounted another volume under **/data/**, of course: <code bash>
# NOTE: the /var/lib paths below are the k3s/kubelet defaults;
# verify them on your nodes first, since the rm is destructive
for n in 1 2 3 ; do \
ssh root@k3s-master$n.lan '\
rm -rv /var/lib/kubelet ; \
rm -rv /var/lib/rancher ; \
mkdir -vp /data/kubelet /data/k3s ; \
ln -sv /data/kubelet/ /var/lib/kubelet ; \
ln -sv /data/k3s/ /var/lib/rancher ; \
# disable all swap ; \
swapoff --all ; \
sed -i "/ swap /s/^/#/" /etc/fstab ; \
' ; \
done # prepare data dir
</code>
  - Prepare a directory for placing local copies of git repos and binaries on the admin's workstation: <code bash>
cd
mkdir -p install/k3sup
cd !$
</code>
  - Clone the k3sup git repo locally, so we keep the sources and docs of what we use next: <code bash>
export GITREPO=alexellis/k3sup

# resolve the latest release tag from the redirect target
version=$(curl -sI https://github.com/$GITREPO/releases/latest | awk -F/ 'tolower($0) ~ /^location:/ {print $NF}' | tr -d '\r')

git clone https://github.com/$GITREPO.git $version
cd $version

# show latest tag/version available in local copy
git describe --abbrev=0 --tags

# manually save the matching pre-compiled binary with it
wget https://github.com/$GITREPO/releases/download/$version/k3sup
chmod -c u+x ./k3sup

# OPTIONAL: install the binary if preferred
cp -v ./k3sup /usr/local/bin/

# show version
k3sup version
</code>
  - Optional: if you want to save the git repo of the k3s version which has been used here, try this: <code bash>
cd
mkdir -p install/k3s
cd !$

export GITREPO=k3s-io/k3s

# resolve the latest release tag from the redirect target
version=$(curl -sI https://github.com/$GITREPO/releases/latest | awk -F/ 'tolower($0) ~ /^location:/ {print $NF}' | tr -d '\r')

git clone https://github.com/$GITREPO.git $version
cd $version

# show latest tag/version available in local copy
git describe --abbrev=0 --tags

# download the required images tar ball that
# matches the version for later "airgap" installs
wget https://github.com/$GITREPO/releases/download/$version/k3s-airgap-images-amd64.tar

# manually install the required images where k3s expects them
# (the standard k3s air-gap images directory):
mkdir -p /var/lib/rancher/k3s/agent/images/
cp -v k3s-airgap-images-amd64.tar /var/lib/rancher/k3s/agent/images/

# manually save the matching pre-compiled binary with it
wget https://github.com/$GITREPO/releases/download/$version/k3s
chmod -c u+x ./k3s

# show version of binary
./k3s --version

# install binary
cp -v ./k3s /usr/local/bin/
</code>
  - Now let k3sup do its magic and install the 1st master node (run this on k3s-master1 itself): <code bash>
cd

k3sup install \
--print-command \
--tls-san cloud.lan \
--cluster \
--host $(hostname -f) \
--host-ip $(hostname -i) \
--k3s-extra-args '--cluster-domain cloud.lan'
</code>
  - Let's test if our first master node is up and running: <code bash>
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
kubectl config set-context default
kubectl get node -o wide

# example of expected output (versions/ages will differ):
#
# Context "default" modified.
#
# NAME          STATUS   ROLES                       AGE   VERSION
# k3s-master1   Ready    control-plane,etcd,master   2m    v1.24.x+k3s1
</code>
  - For convenience, let's install some BASH completion code to use with **kubectl**: <code bash>
# repeat on every node you want it to be available on later;
# it takes effect on the next login/shell

kubectl completion bash > /etc/bash_completion.d/kubectl
</code>
  - Install and add more master nodes to the existing master/cluster: <code bash>
k3sup join --server --server-host k3s-master1.lan --host k3s-master2.lan
k3sup join --server --server-host k3s-master1.lan --host k3s-master3.lan

# check nodes status with
kubectl get node
</code>
  - Check the default "kubernetes" service and its endpoints. The example output below is from a healthy 3-node cluster; your IPs, ages, and versions will differ: <code bash>
kubectl get rc,services

# example of expected result:
#
# NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
# service/kubernetes   ClusterIP   10.43.0.1    <none>        443/TCP   15m

# show cluster "kubernetes" endpoints (one address per master node)

kubectl describe endpoints

# Name:         kubernetes
# Namespace:    default
# Labels:       endpointslice.kubernetes.io/skip-mirror=true
# Annotations:  <none>
# Subsets:
#   Addresses:          192.168.0.101,192.168.0.102,192.168.0.103
#   NotReadyAddresses:  <none>
#   Ports:
#     Name   Port  Protocol
#     ----   ----  --------
#     https  6443  TCP
#
# Events:  <none>

kubectl get node

# NAME          STATUS   ROLES                       AGE   VERSION
# k3s-master1   Ready    control-plane,etcd,master   15m   v1.24.x+k3s1
# k3s-master2   Ready    control-plane,etcd,master   10m   v1.24.x+k3s1
# k3s-master3   Ready    control-plane,etcd,master   9m    v1.24.x+k3s1
</code>
  - So our Kubernetes Cluster is running now.

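The per-node netplan files from the installation steps differ only in the final host octet (**.101**, **.102**, **.103**), so generating them can be sketched as below. The interface name, subnet, gateway, and search domain are the assumptions from the example above; the files land in a scratch directory for review, not in ''/etc/netplan/'':

```shell
#!/usr/bin/env bash
# Sketch: generate one netplan YAML per master node into a temp dir.
# ASSUMPTIONS: interface enp0s3, subnet 192.168.0.0/24, gateway and
# DNS on .1, search domain "lan" -- adjust to your LAN.
set -eu
OUTDIR=$(mktemp -d)

gen_netplan() {  # gen_netplan <node-index>; node N gets host .10N
local n=$1
local ip="192.168.0.$((100 + n))"
cat > "$OUTDIR/00-k3s-master$n.yaml" <<EOF
network:
  version: 2
  ethernets:
    enp0s3:
      addresses:
        - $ip/24
      gateway4: 192.168.0.1
      nameservers:
        addresses:
          - 192.168.0.1
        search:
          - lan
EOF
}

for n in 1 2 3; do gen_netplan "$n"; done
echo "generated in: $OUTDIR"
```

After review, copy each node's file to ''/etc/netplan/'' on that VM and run ''netplan apply'' (or reboot, as in the step above).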
===== Monitoring and navigating our Kubernetes cluster =====

^ Task ^ Command ^
| Check CPU and MEMORY (load) usage across the cluster / all nodes: | <code>kubectl top node</code> |
| Check CPU and MEMORY usage of PODS: | <code>
# across all namespaces
kubectl top pod -A

# in default namespace only
kubectl top pod
</code> |
| Get an overview of what's going on and what's already installed on your Kubernetes cluster. Again **-A** means **across ALL namespaces**: | <code>kubectl get all -A</code> |
| FIXME: | |

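Because ''kubectl top'' emits plain columnar text, the usual shell filters apply to it. The sketch below flags nodes above a memory threshold; it runs against canned sample output (the node names match this article, the numbers are made up) rather than a live cluster, so on a real cluster you would pipe ''kubectl top node'' in instead:

```shell
#!/usr/bin/env bash
# Sketch: filter `kubectl top node` output for memory-hungry nodes.
# SAMPLE stands in for live output; values are invented for illustration.
set -eu
SAMPLE='NAME          CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k3s-master1   171m         8%     1318Mi          68%
k3s-master2   140m         7%     903Mi           46%
k3s-master3   122m         6%     1501Mi          77%'

# print names of nodes whose MEMORY% (column 5) exceeds the threshold
mem_hogs() {  # mem_hogs <threshold-percent>
  awk -v t="$1" 'NR > 1 { p = $5; sub(/%/, "", p); if (p + 0 > t) print $1 }'
}

printf '%s\n' "$SAMPLE" | mem_hogs 60
# prints: k3s-master1
#         k3s-master3
```

On a live cluster the equivalent would be ''kubectl top node | mem_hogs 60''.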

----

{{tag>

it-artikel/linux/experimental-ha-kubernetes-cluster-for-soho-use-on-ubuntu-k3sup-k3s-base.txt · Last modified: 2022-08-31 12:30 by 127.0.0.1