Ubuntu Multipass: Part Deux (k3s and PVC)

K3s on Multipass is pretty nice, but we have to add storage to make it useful. Let’s add StorageOS to provide a Persistent Volume system.


We can follow this guide (https://docs.storageos.com/docs/platforms/kubernetes/install/1.15), starting with the cluster operator:

$ kubectl create -f https://github.com/storageos/cluster-operator/releases/download/1.4.0/storageos-operator.yaml
customresourcedefinition.apiextensions.k8s.io/storageosclusters.storageos.com created
customresourcedefinition.apiextensions.k8s.io/storageosupgrades.storageos.com created
customresourcedefinition.apiextensions.k8s.io/jobs.storageos.com created
customresourcedefinition.apiextensions.k8s.io/nfsservers.storageos.com created
namespace/storageos-operator created
clusterrole.rbac.authorization.k8s.io/storageos-operator created
serviceaccount/storageoscluster-operator-sa created
clusterrolebinding.rbac.authorization.k8s.io/storageoscluster-operator-rolebinding created
deployment.apps/storageos-cluster-operator created
$ kubectl -n storageos-operator get pod
NAME                                          READY   STATUS              RESTARTS   AGE
storageos-cluster-operator-57fc9c468f-f69xf   0/1     ContainerCreating   0          9s

Next we need to create a secret for StorageOS to use.

$ echo mySecret | base64
bXlTZWNyZXQK
$ vi myStorageSecret.yaml
$ cat myStorageSecret.yaml 
apiVersion: v1
kind: Secret
metadata:
  name: "storageos-api"
  namespace: "storageos-operator"
  labels:
    app: "storageos"
type: "kubernetes.io/storageos"
data:
  apiUsername: bXlTZWNyZXQK
  apiPassword: bXlTZWNyZXQK

$ kubectl create -f myStorageSecret.yaml 
secret/storageos-api created
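A quick aside on that base64 step: plain echo appends a newline, so the value above actually encodes "mySecret" plus a trailing newline. That works as long as the same value is used everywhere, but if the API credentials are ever rejected, re-encoding without the newline is the usual fix:

$ echo -n mySecret | base64
bXlTZWNyZXQ=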

Now, let's create a small cluster with a 512Mi memory request:

$ kubectl apply -f myClusterDefn.yaml 
storageoscluster.storageos.com/example-storageos created
$ cat myClusterDefn.yaml 
apiVersion: "storageos.com/v1"
kind: StorageOSCluster
metadata:
  name: "example-storageos"
  namespace: "storageos-operator"
spec:
  secretRefName: "storageos-api" # Reference the Secret created in the previous step
  secretRefNamespace: "storageos-operator"  # Namespace of the Secret
  k8sDistro: "kubernetes"
  images:
    nodeContainer: "storageos/node:1.4.0" # StorageOS version
  resources:
    requests:
      memory: "512Mi"

The "memory" setting at the bottom sets the size.

Once we've applied the definition we can check that the pods came up.  They will be responsible for distributing the storage across the nodes.

$ kubectl -n storageos get pods -w
NAME                                   READY   STATUS              RESTARTS   AGE
storageos-daemonset-hgv9v              0/1     Init:0/1            0          41s
storageos-daemonset-qg4vz              0/1     Init:0/1            0          41s
storageos-daemonset-qhhkx              0/1     Init:0/1            0          41s
storageos-scheduler-67d54d4f45-cl4kv   0/1     ContainerCreating   0          41s

One problem: our cluster just doesn't have nodes that can satisfy these requirements…

NAME                                   READY   STATUS              RESTARTS   AGE
storageos-daemonset-hgv9v              0/1     PodInitializing     0          2m36s
storageos-daemonset-qg4vz              0/1     PodInitializing     0          2m36s
storageos-daemonset-qhhkx              0/1     PodInitializing     0          2m36s
storageos-scheduler-67d54d4f45-5nwlh   0/1     Evicted             0          33s
storageos-scheduler-67d54d4f45-8x62v   0/1     Evicted             0          25s
storageos-scheduler-67d54d4f45-97ngn   0/1     Evicted             0          28s
storageos-scheduler-67d54d4f45-bmqhl   0/1     Evicted             0          31s
storageos-scheduler-67d54d4f45-cl4kv   0/1     Evicted             0          2m36s
storageos-scheduler-67d54d4f45-d2dcq   0/1     Evicted             0          31s
storageos-scheduler-67d54d4f45-ds5mf   0/1     Evicted             0          33s
storageos-scheduler-67d54d4f45-gjjqt   0/1     Evicted             0          24s
storageos-scheduler-67d54d4f45-hgxkk   0/1     Evicted             0          29s
storageos-scheduler-67d54d4f45-hvgb9   0/1     Evicted             0          26s
storageos-scheduler-67d54d4f45-kcbmc   0/1     Evicted             0          32s
storageos-scheduler-67d54d4f45-ps2rw   0/1     Evicted             0          27s
storageos-scheduler-67d54d4f45-sq8fx   0/1     Evicted             0          30s
storageos-scheduler-67d54d4f45-vf4hp   0/1     ContainerCreating   0          22s
storageos-scheduler-67d54d4f45-xh7dc   0/1     Evicted             0          33s

In fact, I'm certain our little three-node cluster from the first guide is having a hard time keeping up…

$ kubectl get pods --all-namespaces
NAMESPACE            NAME                                          READY   STATUS              RESTARTS   AGE
kube-system          calico-kube-controllers-586d5d67cb-26dxm      0/1     Evicted             0          3m15s
kube-system          calico-kube-controllers-586d5d67cb-gvvnd      0/1     Evicted             0          18h
kube-system          calico-kube-controllers-586d5d67cb-hfpx2      0/1     Pending             0          11s
kube-system          calico-kube-controllers-586d5d67cb-rtl8g      0/1     Evicted             0          3m54s
kube-system          calico-node-hwwdx                             1/1     Running             0          91s
kube-system          calico-node-ndqsh                             1/1     Running             0          18h
kube-system          calico-node-zhl8g                             1/1     Running             0          18h
kube-system          coredns-58687784f9-6lns4                      0/1     Pending             0          3m20s
kube-system          coredns-58687784f9-7dw4l                      0/1     Evicted             0          18h
kube-system          coredns-58687784f9-d4vvx                      0/1     Evicted             0          4m29s
kube-system          coredns-58687784f9-jfvsp                      0/1     Evicted             0          18h
kube-system          coredns-58687784f9-kggg5                      0/1     Evicted             0          5m10s
kube-system          coredns-58687784f9-ks2pv                      0/1     Pending             0          45s
kube-system          dns-autoscaler-79599df498-76rd6               0/1     Evicted             0          18h
kube-system          dns-autoscaler-79599df498-fgtcw               0/1     Pending             0          8s
kube-system          dns-autoscaler-79599df498-w2hsl               0/1     Evicted             0          3m52s
kube-system          kube-apiserver-node1                          1/1     Running             0          18h
kube-system          kube-controller-manager-node1                 1/1     Running             1          18h
kube-system          kube-proxy-sfg6t                              1/1     Running             0          59s
kube-system          kube-proxy-tcdpj                              1/1     Running             0          18h
kube-system          kube-proxy-tmdvf                              1/1     Running             0          18h
kube-system          kube-scheduler-node1                          1/1     Running             1          18h
kube-system          kubernetes-dashboard-556b9ff8f8-nxvpf         0/1     Evicted             0          18h
kube-system          kubernetes-dashboard-556b9ff8f8-pqlc9         0/1     Pending             0          81s
kube-system          nginx-proxy-node2                             1/1     Running             0          18h
kube-system          nginx-proxy-node3                             1/1     Running             0          18h
kube-system          nodelocaldns-7fl6d                            0/1     ContainerCreating   0          77s
kube-system          nodelocaldns-87rbr                            1/1     Running             0          17s
kube-system          nodelocaldns-vmw8w                            0/1     Running             0          5s
storageos-operator   storageos-cluster-operator-57fc9c468f-46hfv   0/1     Evicted             0          2m43s
storageos-operator   storageos-cluster-operator-57fc9c468f-4z9kx   0/1     Evicted             0          2m39s
storageos-operator   storageos-cluster-operator-57fc9c468f-5zqpm   0/1     Evicted             0          2m45s
storageos-operator   storageos-cluster-operator-57fc9c468f-65jnb   0/1     Evicted             0          2m40s
storageos-operator   storageos-cluster-operator-57fc9c468f-8qz62   0/1     Evicted             0          2m41s
storageos-operator   storageos-cluster-operator-57fc9c468f-b49d6   0/1     Evicted             0          2m44s
storageos-operator   storageos-cluster-operator-57fc9c468f-bf8cv   0/1     Evicted             0          2m38s
storageos-operator   storageos-cluster-operator-57fc9c468f-c5lmn   0/1     Evicted             0          2m41s
storageos-operator   storageos-cluster-operator-57fc9c468f-dgrg2   0/1     Evicted             0          2m45s
storageos-operator   storageos-cluster-operator-57fc9c468f-f69xf   0/1     Evicted             0          10m
storageos-operator   storageos-cluster-operator-57fc9c468f-jpmqg   0/1     Evicted             0          2m39s
storageos-operator   storageos-cluster-operator-57fc9c468f-jrvx5   0/1     Evicted             0          2m45s
storageos-operator   storageos-cluster-operator-57fc9c468f-ltvtm   0/1     Evicted             0          2m44s
storageos-operator   storageos-cluster-operator-57fc9c468f-n5fkc   0/1     Evicted             0          2m45s
storageos-operator   storageos-cluster-operator-57fc9c468f-nn5bn   0/1     Evicted             0          2m43s
storageos-operator   storageos-cluster-operator-57fc9c468f-q6gjj   0/1     Evicted             0          2m42s
storageos-operator   storageos-cluster-operator-57fc9c468f-txf54   0/1     Pending             0          2m37s
storageos            storageos-daemonset-7rlv5                     0/1     Evicted             0          48s
storageos            storageos-daemonset-hgv9v                     0/1     ImagePullBackOff    0          6m3s
storageos            storageos-daemonset-qg4vz                     0/1     ImagePullBackOff    0          6m3s
storageos            storageos-scheduler-67d54d4f45-5nwlh          0/1     Evicted             0          4m
storageos            storageos-scheduler-67d54d4f45-8x62v          0/1     Evicted             0          3m52s
storageos            storageos-scheduler-67d54d4f45-97ngn          0/1     Evicted             0          3m55s
storageos            storageos-scheduler-67d54d4f45-bmqhl          0/1     Evicted             0          3m58s
storageos            storageos-scheduler-67d54d4f45-cl4kv          0/1     Evicted             0          6m3s
storageos            storageos-scheduler-67d54d4f45-d2dcq          0/1     Evicted             0          3m58s
storageos            storageos-scheduler-67d54d4f45-ds5mf          0/1     Evicted             0          4m
storageos            storageos-scheduler-67d54d4f45-gjjqt          0/1     Evicted             0          3m51s
storageos            storageos-scheduler-67d54d4f45-hgxkk          0/1     Evicted             0          3m56s
storageos            storageos-scheduler-67d54d4f45-hvgb9          0/1     Evicted             0          3m53s
storageos            storageos-scheduler-67d54d4f45-kcbmc          0/1     Evicted             0          3m59s
storageos            storageos-scheduler-67d54d4f45-ps2rw          0/1     Evicted             0          3m54s
storageos            storageos-scheduler-67d54d4f45-sq8fx          0/1     Evicted             0          3m57s
storageos            storageos-scheduler-67d54d4f45-ssg2m          0/1     Pending             0          114s
storageos            storageos-scheduler-67d54d4f45-vf4hp          0/1     Evicted             0          3m49s
storageos            storageos-scheduler-67d54d4f45-xh7dc          0/1     Evicted             0          4m
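All those Evicted pods are the kubelet shedding load under resource pressure. A quick way to confirm which kind (memory vs. disk) is to look at the node conditions and recent events:

$ kubectl describe node node1 | grep -i pressure
$ kubectl get events --all-namespaces | grep -i evict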

Using K3s with larger Nodes

Let’s launch a few larger Xenial (16.04) instances.

$ multipass launch -c 2 -d 15G -m 2G -n k3s01 xenial
Retrieving image: 9% 

After the first one, the rest launch fast (the image is now cached):

$ multipass launch -c 2 -d 15G -m 2G -n k3s01 xenial
Launched: k3s01                                                                 
$ multipass launch -c 2 -d 15G -m 2G -n k3s02 xenial
Launched: k3s02                                                                 
$ multipass launch -c 2 -d 15G -m 2G -n k3s03 xenial
Launched: k3s03      
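If you want to double-check what each VM actually got, multipass info reports its state, IP, and disk/memory usage:

$ multipass info k3s01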

Let’s now hop into the first node and launch k3s under root:

$ multipass shell k3s01
Welcome to Ubuntu 16.04.6 LTS (GNU/Linux 4.4.0-165-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

0 packages can be updated.
0 updates are security updates.

New release '18.04.2 LTS' available.
Run 'do-release-upgrade' to upgrade to it.


To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

multipass@k3s01:~$ sudo su - 
root@k3s01:~# curl -sfL https://get.k3s.io | sh -
[INFO]  Finding latest release
[INFO]  Using v0.9.1 as release
[INFO]  Downloading hash https://github.com/rancher/k3s/releases/download/v0.9.1/sha256sum-amd64.txt
[INFO]  Downloading binary https://github.com/rancher/k3s/releases/download/v0.9.1/k3s
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Creating /usr/local/bin/ctr symlink to k3s
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink from /etc/systemd/system/multi-user.target.wants/k3s.service to /etc/systemd/system/k3s.service.
[INFO]  systemd: Starting k3s

Then grab the k8s config:

root@k3s01:~# cat /etc/rancher/k3s/k3s.yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJWakNCL3FBREFnRUNBZ0VBTUFvR0NDcUdTTTQ5QkFNQ01DTXhJVEFmQmdOVkJBTU1HR3N6Y3kxelpYSjIKWlhJdFkyRkFNVFUzTURRd09UYzJNREFlRncweE9URXdNRGN3TURVMk1EQmFGdzB5T1RFd01EUXdNRFUyTURCYQpNQ014SVRBZkJnTlZCQU1NR0dzemN5MXpaWEoyWlhJdFkyRkFNVFUzTURRd09UYzJNREJaTUJNR0J5cUdTTTQ5CkFnRUdDQ3FHU000OUF3RUhBMElBQk9YaEhkVCtaMzNSYUVwU2MxUmdRTkhRTHVrTG5ycjJVMmlZZGZqN1djL08KNkhhb09zYWhQQkNkN0E1TEplZ25Pd3M4RkNYbUpjeG1yRWM1NEk4MXRTcWpJekFoTUE0R0ExVWREd0VCL3dRRQpBd0lDcERBUEJnTlZIUk1CQWY4RUJUQURBUUgvTUFvR0NDcUdTTTQ5QkFNQ0EwY0FNRVFDSUZNaGVYYTE3M285CnlwQ3JiSStOQlhTZTUyUGRDeUZvdndVTktaOFJMdDFmQWlBY241OFM1YktIQzJnYWtaeTdTcFZWbG9Hek1CQTAKYVVZS1VrUHpwc3plQ2c9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://127.0.0.1:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    password: 13ce1f2b78925ec83bf707d1827067fc
    username: admin
root@k3s01:~# cat /var/lib/rancher/k3s/server/node-token
K10c5795c31973084adf816154d5ac70d85b83c6336c9f76d6accbf5b8df590e6ab::node:eb0b4dcfe8dad3df47361ce12d0988ed
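At this point the node's own kubectl (the symlink the installer just created) should already see a single-node cluster, which makes for a quick sanity check:

root@k3s01:~# kubectl get nodes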

Make sure to get the node's IP and change the server: https://127.0.0.1:6443 line above before using the config locally:

$ multipass list
Name                    State             IPv4             Image
k3s03                   Running           192.168.64.14    Ubuntu 16.04 LTS
k3s01                   Running           192.168.64.12    Ubuntu 16.04 LTS
k3s02                   Running           192.168.64.13    Ubuntu 16.04 LTS
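For example, pulling the config down to the host and pointing it at k3s01 might look like this (a sketch; the sed in-place flag behaves slightly differently on macOS and Linux):

$ multipass exec k3s01 -- sudo cat /etc/rancher/k3s/k3s.yaml > k3s.yaml
$ sed -i.bak 's/127.0.0.1/192.168.64.12/' k3s.yaml
$ export KUBECONFIG=$PWD/k3s.yaml
$ kubectl get nodes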

Before adding more nodes, let's check that the master came up cleanly:

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                         READY   STATUS      RESTARTS   AGE
kube-system   coredns-66f496764-2cz8f      1/1     Running     0          2m32s
kube-system   helm-install-traefik-5hn5g   0/1     Completed   0          2m32s
kube-system   svclb-traefik-4r6tr          3/3     Running     0          2m18s
kube-system   traefik-d869575c8-58nrn      1/1     Running     0          2m18s

Now let’s add our other nodes

$ multipass shell k3s02
Welcome to Ubuntu 16.04.6 LTS (GNU/Linux 4.4.0-165-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

0 packages can be updated.
0 updates are security updates.

New release '18.04.2 LTS' available.
Run 'do-release-upgrade' to upgrade to it.


To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

multipass@k3s02:~$ sudo su -
root@k3s02:~# curl -sfL https://get.k3s.io | K3S_URL=https://192.168.64.12:6443 K3S_TOKEN=K10c5795c31973084adf816154d5ac70d85b83c6336c9f76d6accbf5b8df590e6ab::node:eb0b4dcfe8dad3df47361ce12d0988ed sh -
[INFO]  Finding latest release
[INFO]  Using v0.9.1 as release
[INFO]  Downloading hash https://github.com/rancher/k3s/releases/download/v0.9.1/sha256sum-amd64.txt
[INFO]  Downloading binary https://github.com/rancher/k3s/releases/download/v0.9.1/k3s
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Creating /usr/local/bin/ctr symlink to k3s
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-agent-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s-agent.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s-agent.service
[INFO]  systemd: Enabling k3s-agent unit
Created symlink from /etc/systemd/system/multi-user.target.wants/k3s-agent.service to /etc/systemd/system/k3s-agent.service.
[INFO]  systemd: Starting k3s-agent

And do the same for k3s03. Then check that all three nodes joined:

$ kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
k3s02    Ready    worker   34m   v1.15.4-k3s.1
k3s01    Ready    master   40m   v1.15.4-k3s.1
k3s03c   Ready    worker   10s   v1.15.4-k3s.1

Setting up StorageOS

$ kubectl create -f https://github.com/storageos/cluster-operator/releases/download/1.4.0/storageos-operator.yaml
customresourcedefinition.apiextensions.k8s.io/storageosclusters.storageos.com created
customresourcedefinition.apiextensions.k8s.io/storageosupgrades.storageos.com created
customresourcedefinition.apiextensions.k8s.io/jobs.storageos.com created
customresourcedefinition.apiextensions.k8s.io/nfsservers.storageos.com created
namespace/storageos-operator created
clusterrole.rbac.authorization.k8s.io/storageos-operator created
serviceaccount/storageoscluster-operator-sa created
clusterrolebinding.rbac.authorization.k8s.io/storageoscluster-operator-rolebinding created
deployment.apps/storageos-cluster-operator created

Now check the operator pod:

$ kubectl -n storageos-operator get pod
NAME                                          READY   STATUS    RESTARTS   AGE
storageos-cluster-operator-6bb7ccc597-5fzgm   1/1     Running   0          112s
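Since this is a fresh cluster, the API secret and StorageOSCluster definition from the first attempt need to be created again (the same files as before):

$ kubectl create -f myStorageSecret.yaml
$ kubectl apply -f myClusterDefn.yaml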

And then we can check the StorageOS pods:

$ kubectl -n storageos get pods
NAME                                   READY   STATUS                  RESTARTS   AGE
storageos-scheduler-78b68c74cb-fx5lk   1/1     Running                 0          21m
storageos-daemonset-d997r              1/1     Running                 0          21m
storageos-daemonset-vz5sv              0/1     Init:CrashLoopBackOff   6          10m
storageos-daemonset-bmsm6              0/1     Init:CrashLoopBackOff   8          21m
storageos-daemonset-ll7ml              0/1     Init:CrashLoopBackOff   8          21m
storageos-daemonset-mlvbk              1/1     Running                 0          4m47s
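kubectl describe on one of the stuck pods is the quickest way to see which init container is failing and why:

$ kubectl -n storageos describe pod storageos-daemonset-vz5sv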

I discovered that StorageOS fails on Xenial (16.04) and does better with the standard Bionic (18.04) image.

I recreated the cluster using standard images and tried again…

$ multipass list --format table
Name                    State             IPv4             Image
k3s003                  Running           192.168.64.21    Ubuntu 18.04 LTS
k3s002                  Running           192.168.64.20    Ubuntu 18.04 LTS
k3s001                  Running           192.168.64.19    Ubuntu 18.04 LTS
$ kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
k3s002   Ready    worker   24h   v1.15.4-k3s.1
k3s003   Ready    worker   24h   v1.15.4-k3s.1
k3s001   Ready    master   24h   v1.15.4-k3s.1

The k3s install (again):

multipass@k3s001:~$ sudo su -
root@k3s001:~# curl -sfL https://get.k3s.io | sh -
[INFO]  Finding latest release
[INFO]  Using v0.9.1 as release
[INFO]  Downloading hash https://github.com/rancher/k3s/releases/download/v0.9.1/sha256sum-amd64.txt
[INFO]  Downloading binary https://github.com/rancher/k3s/releases/download/v0.9.1/k3s
root@k3s001:~# curl -sfL https://get.k3s.io | sh -
[INFO]  Finding latest release
[INFO]  Using v0.9.1 as release
[INFO]  Downloading hash https://github.com/rancher/k3s/releases/download/v0.9.1/sha256sum-amd64.txt
[INFO]  Downloading binary https://github.com/rancher/k3s/releases/download/v0.9.1/k3s
root@k3s001:~# curl -sfL https://get.k3s.io | sh -
[INFO]  Finding latest release
[INFO]  Using v0.9.1 as release
[INFO]  Downloading hash https://github.com/rancher/k3s/releases/download/v0.9.1/sha256sum-amd64.txt
[INFO]  Downloading binary https://github.com/rancher/k3s/releases/download/v0.9.1/k3s
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Creating /usr/local/bin/ctr symlink to k3s
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO]  systemd: Starting k3s
root@k3s001:~# cat /etc/rancher/k3s/k3s.yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJWekNCL3FBREFnRUNBZ0VBTUFvR0NDcUdTTTQ5QkFNQ01DTXhJVEFmQmdOVkJBTU1HR3N6Y3kxelpYSjIKWlhJdFkyRkFNVFUzTURReE5Ea3hNREFlRncweE9URXdNRGN3TWpJeE5UQmFGdzB5T1RFd01EUXdNakl4TlRCYQpNQ014SVRBZkJnTlZCQU1NR0dzemN5MXpaWEoyWlhJdFkyRkFNVFUzTURReE5Ea3hNREJaTUJNR0J5cUdTTTQ5CkFnRUdDQ3FHU000OUF3RUhBMElBQkxmQ2FPREVyQVlYMGdkSXNZVlNuYXFCSDloTGdHL0svVXpiNEN2c0p1YnMKQ0dXYXkyOUNJQlB3TmhucVVxYjlCbGZORXpJZzBTYjNpTWEvbmVVU0NoaWpJekFoTUE0R0ExVWREd0VCL3dRRQpBd0lDcERBUEJnTlZIUk1CQWY4RUJUQURBUUgvTUFvR0NDcUdTTTQ5QkFNQ0EwZ0FNRVVDSUdYMDR6RlIvcXVwCmJtMnBjYVArQ1k2RFBwNHRUaEowQ3o3SS94RVIvK3NnQWlFQS9rSm1wNHE1STM1RnBObmtXM1FzNVdLa1o3SFoKOHZKbmFzeGV5MU5nT1hVPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://127.0.0.1:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    password: 2b3bb9fdc218d5b6d0d0f4d68039764b
    username: admin
root@k3s001:~# cat /var/lib/rancher/k3s/server/node-token
K10ccd856eec2f143409ec885b18d5a8f5a91d13ed645d5d16e49fe1c8e64095891::node:1e58d200c6c2107c43cf1dbd27142e7d
root@k3s001:~# 

As before, set the server in the local config to 192.168.64.19.
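Joining k3s002 and k3s003 is the same agent install as the Xenial round, just pointed at the new master and its token:

root@k3s002:~# curl -sfL https://get.k3s.io | K3S_URL=https://192.168.64.19:6443 K3S_TOKEN=K10ccd856eec2f143409ec885b18d5a8f5a91d13ed645d5d16e49fe1c8e64095891::node:1e58d200c6c2107c43cf1dbd27142e7d sh -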

Now let’s check our storage class, then make it the default:

$ kubectl get sc
NAME   PROVISIONER               AGE
fast   kubernetes.io/storageos   6h8m
$ kubectl patch storageclass fast -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
storageclass.storage.k8s.io/fast patched
$ kubectl get sc
NAME             PROVISIONER               AGE
fast (default)   kubernetes.io/storageos   6h12m

Setting it as the default is an important step. If you don't, you'll have to specify the storage class explicitly in every deploy, whether it's YAML- or Helm-based.
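To prove the default gets picked up, a throwaway PVC with no storageClassName should bind on its own (a minimal sketch; test-pvc.yaml is just an illustrative name):

$ cat test-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
$ kubectl apply -f test-pvc.yaml
$ kubectl get pvc test-pvc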

Installing Helm/Tiller

$ cat /Users/johnsi10/Documents/rbac-config.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
$ kubectl create -f /Users/johnsi10/Documents/rbac-config.yaml
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created
$ helm init --service-account tiller --history-max 200
$HELM_HOME has been configured at /Users/johnsi10/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation

Now that we have Tiller and a default storage class, charts that create PersistentVolumeClaims should just work.
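For instance, a chart that provisions a volume would now get one from StorageOS without any extra values (a hypothetical smoke test, not run as part of this writeup):

$ helm install stable/postgresql --name testdb
$ kubectl get pvc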

Adding a dashboard:

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created

And we need a dashboard admin user to use it:

$ cat dashboard-user.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
$ kubectl apply -f dashboard-user.yaml 
serviceaccount/admin-user created
$ cat dashboard-binding.yaml 
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
$ kubectl apply -f dashboard-binding.yaml 
clusterrolebinding.rbac.authorization.k8s.io/admin-user created

Next we’ll need the token for the dashboard

$ kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-xrlcg
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: e8da6cd7-bd58-4b8d-9cd4-98fabbd0e15b

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     526 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXhybGNnIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJlOGRhNmNkNy1iZDU4LTRiOGQtOWNkNC05OGZhYmJkMGUxNWIiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.s8sJf_6Uy1VvuMo_9n0fAv9K8o8U-Nm9RMohaK-agjMYvB_GG32SKFBnRcdPyqGdZu9Wt0huHI_rssw6Dy5v-jONPlZMMM_HInEqQU4O9f7FN4EMF5oX5CtqVhp-iwpES55RyyxPy_vlwMDUFbJK3GEggzEdk9l3sXtg95Kl6-e_VTrCOKbLkh4s6krw106aik4y8SJVNZsBDHtaDDOsVqQB59eLnLVY73JEcTc-RZ-3HtrPbmtnI-Z2PEHG8sD45QGuKK6m4mugGQxZeQ9-i-bYhuOsPiTGPM_L7TSK1GlEHIA9tLVPtMwsny47ORDC7QYEuK2ldlFatfXVbIDMGA

And use port-forward to reach the dashboard:

$ kubectl port-forward kubernetes-dashboard-7d75c474bb-rb2tr -n kube-system 8443:8443
Forwarding from 127.0.0.1:8443 -> 8443
Forwarding from [::1]:8443 -> 8443
Handling connection for 8443
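The pod name changes on every deploy; looking it up by label saves the copy-paste (assuming the labels from the v1.10.1 manifest):

$ kubectl -n kube-system get pods -l k8s-app=kubernetes-dashboard

Then browse to https://localhost:8443 and sign in with the token from the previous step.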

And we can see our StorageOS resources there as well.

Networking.

The part I'm working on next (and had been holding this blog entry for, but can't delay any further) is load balancing and networking.

I tried just installing MetalLB (https://metallb.universe.tf/installation/)

$ kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.8.1/manifests/metallb.yaml
namespace/metallb-system created
podsecuritypolicy.policy/speaker created
serviceaccount/controller created
serviceaccount/speaker created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
role.rbac.authorization.k8s.io/config-watcher created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
rolebinding.rbac.authorization.k8s.io/config-watcher created
daemonset.apps/speaker created
deployment.apps/controller created


$ kubectl apply -f layer2-config.yaml 
configmap/config created
$ cat layer2-config.yaml 
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.42.5.240-10.42.5.250

And we could see that LoadBalancer requests were indeed satisfied by MetalLB.

However, they were completely unroutable. So yes, MetalLB handed out IPs, but they were about as useful as a poopy-flavoured lollipop. I've found a guide that uses Vagrant, Flannel, and MetalLB to launch a cluster in VirtualBox. I'm still working through this networking area and would welcome insights (isaac.johnson@gmail.com).
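In hindsight, one likely culprit (not re-tested here): the 10.42.5.x pool above falls inside 10.42.0.0/16, which is k3s's default pod CIDR, so those addresses never exist on the Multipass host network at all. MetalLB's layer2 mode expects addresses from the same subnet as the nodes, which in this setup would be something in 192.168.64.x (assuming the range sits outside what Multipass hands out via DHCP):

$ cat layer2-config.yaml 
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.64.240-192.168.64.250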

Summary:

After a bit of trial and error, we used Multipass and Bionic (18.04) images to make a great k3s cluster with working persistent storage.  However, we stopped short of solving our networking and routing.