This guide targets Ubuntu 20.04 on the amd64 architecture.
1. Pre-req Checking
Turn off swap:

```bash
sudo swapoff -a
```
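Note that `swapoff -a` only lasts until the next reboot; to keep swap off permanently (kubeadm's preflight checks fail when swap is on), the swap entries in `/etc/fstab` should be commented out as well, e.g.:

```bash
# comment out every swap entry so swap stays off across reboots
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab
```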
Check disk space:

```bash
df -h
```
Other checks (unique hostname and MAC address per node, required ports open, etc.) follow the kubeadm installation docs.
2. Install Docker
Note that `/etc/apt/keyrings` already exists on this machine, so the reminder in the k8s docs to create it can be ignored. Starting with Ubuntu 22.04, the directory ships by default and this step is no longer needed at all.
```bash
sudo apt-get update
```
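The block above is cut off after its first line; the rest presumably follows the standard apt-repository install from the Docker docs, roughly (package list per docs.docker.com, so treat the exact set as an assumption):

```bash
sudo apt-get install -y ca-certificates curl gnupg
# add Docker's official GPG key; /etc/apt/keyrings already exists here (see above)
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
# set up the apt repository
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
  https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
```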
3. Install cri-dockerd for Container Runtime Interface
Note that installing Docker also installs containerd, which leaves the machine with two candidate CRIs for k8s.
- Change shell

The Go install script doesn't support tcsh; switch to a POSIX shell first.

```bash
sudo passwd <username>
```
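The block above is truncated; presumably the password is set so that `chsh` can prompt for it, followed by something like:

```bash
# switch the login shell to bash, a POSIX shell
chsh -s /bin/bash
```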
- Install Go first

```bash
curl -OL https://go.dev/dl/go1.20.4.linux-amd64.tar.gz
```
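The rest of the block presumably mirrors the official install steps from go.dev, unpacking the tarball to `/usr/local`:

```bash
# remove any previous Go tree and unpack the new toolchain
sudo rm -rf /usr/local/go
sudo tar -C /usr/local -xzf go1.20.4.linux-amd64.tar.gz
```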
Add Go to the PATH for all users:

```bash
sudo vi /etc/profile
```

Add the following line:

```bash
export PATH=$PATH:/usr/local/go/bin
```

Then:

```bash
. /etc/profile
```

Do this also for `~/.profile`. Run `go version` to verify the install.
- Install cri-dockerd

```bash
git clone https://github.com/Mirantis/cri-dockerd.git
cd cri-dockerd
mkdir bin
go build -o bin/cri-dockerd
```
Below, using sudo:

```bash
mkdir -p /usr/local/bin
```
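The remaining steps, per the cri-dockerd README (a sketch; the unit-file paths may differ between versions of the repo):

```bash
# install the freshly built binary
sudo install -o root -g root -m 0755 bin/cri-dockerd /usr/local/bin/cri-dockerd
# install the systemd units shipped in the repo and point them at /usr/local/bin
sudo cp -a packaging/systemd/* /etc/systemd/system
sudo sed -i -e 's,/usr/bin/cri-dockerd,/usr/local/bin/cri-dockerd,' /etc/systemd/system/cri-docker.service
sudo systemctl daemon-reload
sudo systemctl enable --now cri-docker.socket
```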
4. Install kubeadm, kubelet, kubectl
Don't create `/etc/apt/keyrings` yourself here and `chmod 744` it; mode 744 makes the package manager mistrust the public key and refuse to update.
```bash
sudo apt-get update
```
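The block is truncated here; the rest presumably follows the kubeadm install docs, roughly as below. The `pkgs.k8s.io` repo and the `v1.28` channel are assumptions; substitute whatever minor version the cluster should run.

```bash
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
# put the signing key into the pre-existing /etc/apt/keyrings (and do not chmod 744 it)
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | \
  sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | \
  sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
# pin the versions; upgrades are done manually via kubeadm
sudo apt-mark hold kubelet kubeadm kubectl
```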
5. Init Control Plane Node (with a flaw)
- TL;DR

```bash
sudo kubeadm init --cri-socket unix:///var/run/cri-dockerd.sock --pod-network-cidr 10.244.0.0/16
```
- What happened in reality

Since there are two CRIs on the machine, cri-dockerd has to be selected explicitly:

```bash
sudo kubeadm init --cri-socket unix:///var/run/cri-dockerd.sock
```
- For non-root kubectl usage:

```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
6. Add Pod Network Add-on
I am not sure why flannel in particular; it just came up in some random threads I browsed on the Internet.
install flannel

```bash
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
```
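To watch the add-on come up (and to see the failure modes described next):

```bash
# flannel runs in kube-flannel, CoreDNS in kube-system
kubectl get pods --all-namespaces -o wide
```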
troubleshooting

CoreDNS stays Pending (it should be Running), and flannel keeps hitting CrashLoopBackOff.
Some searching on the Web led to this document.
Root cause: flannel requires explicitly setting `--pod-network-cidr`.
Tearing down:

```bash
kubectl drain node0.qmcurtis-158673.nyu-netsec-pg0.utah.cloudlab.us --delete-emptydir-data --force --ignore-daemonsets
sudo kubeadm reset
sudo sh -c "iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X"
sudo ipvsadm -C  # not useful
```

Re-launching:

```bash
sudo kubeadm init --cri-socket unix:///var/run/cri-dockerd.sock --pod-network-cidr 10.244.0.0/16
```
7. Join Worker Node
An example command (from the output of `kubeadm init`):

```bash
sudo kubeadm join <IP:port> --cri-socket unix:///var/run/cri-dockerd.sock --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```
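After joining, the new worker should show up from the control-plane node:

```bash
kubectl get nodes -o wide
```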
8. Make Control Plane Node Schedulable (optional)

```bash
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
```
9. Install Helm
```bash
curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
```
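The rest of the block presumably follows the apt instructions from helm.sh, roughly:

```bash
sudo apt-get install apt-transport-https --yes
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | \
  sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm
```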
10. Deploy Jitsi
Add the helm repo:

```bash
helm repo add jitsi https://jitsi-contrib.github.io/jitsi-helm/
```
Modifications

Configuration goes through `jitsi_jvb.yaml` (values explained later):

```yaml
publicURL: <URL>
```
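Only the first value survives above. Given the notes in section 11 below, the file presumably ends up looking something like this; the `prosody.persistence` keys follow those notes, and the storage class name is a stand-in for the one defined later:

```bash
# hypothetical sketch of jitsi_jvb.yaml; only publicURL is from the original
cat > jitsi_jvb.yaml <<'EOF'
publicURL: <URL>
prosody:
  persistence:
    storageClassName: local-path   # must name an existing StorageClass (see section 11)
EOF
```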
- Deploy

```bash
helm install myjitsi -f jitsi_jvb.yaml jitsi/jitsi-meet --namespace jit --create-namespace
```
Note: this alone won't work; the web app comes up, but you cannot actually join a meeting.
11. Jitsi Troubleshooting and Installing a PVC Provisioner
Dump the Jicofo logs:

```bash
kubectl logs <jicofo-pod-name> -n jit
```

This showed that Jicofo cannot establish communication with prosody.
Check prosody's status:

```bash
kubectl describe pod <prosody> -n jit
```

It is Pending, and so is its PVC.
Thanks to this issue.
- Install local-path-provisioner

```bash
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.24/deploy/local-path-storage.yaml
```

Or make some changes through `kustomize`; e.g., I need a different storage path since my extra FS is mounted at `/mydata`. A sketch follows below.
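One way such a kustomization could look, assuming v0.0.24's layout where the paths live in the `local-path-config` ConfigMap (the ConfigMap name, namespace, and `nodePathMap` structure are taken from that manifest as I recall it, so verify against the real file):

```bash
# fetch the upstream manifest locally so kustomize can patch it
curl -OL https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.24/deploy/local-path-storage.yaml
cat > kustomization.yaml <<'EOF'
resources:
  - local-path-storage.yaml
patches:
  - patch: |-
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: local-path-config
        namespace: local-path-storage
      data:
        config.json: |-
          {
            "nodePathMap": [
              {
                "node": "DEFAULT_PATH_FOR_NON_LISTED_NODES",
                "paths": ["/mydata/local-path-provisioner"]
              }
            ]
          }
EOF
kubectl apply -k .
```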
- Self-defined Storage Class

This leaves room for further extension. Apply this YAML:

```yaml
apiVersion: storage.k8s.io/v1
```
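The YAML is cut off after its first line. Based on the issues listed next, the full version would presumably look something like this (the class name `local-path-ext` is made up here):

```bash
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path-ext                     # hypothetical name
provisioner: rancher.io/local-path         # must be this exact name, see below
volumeBindingMode: WaitForFirstConsumer    # Immediate fails with "node not specified"
reclaimPolicy: Delete
EOF
```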
Some Issues
- On Jitsi's side

A `storageClassName` must be specified in `prosody.persistence`. And remember to remove the PVC if you reconfigure and re-install; the PVC won't be automatically deleted or updated.
- Provisioner name

The provisioner must be `rancher.io/local-path`; `cluster.local/local-path-provisioner` will not work and leaves the PVC waiting for provisioning forever.
- VolumeBindingMode

Don't use `Immediate`; use `WaitForFirstConsumer`, otherwise the PVC gets reported as "node not specified".
Fault Injection

Investigating `kube-monkey`, TBA.