---
id: "creation"
description: "Build a 3-node kubeadm cluster from scratch."
title: "Create a cluster"
weight: 2
---

This section guides you through creating a 3-node Kubernetes cluster using the [kubeadm](https://kubernetes.io/docs/reference/setup-tools/kubeadm/) bootstrapping tool. This is an important step, as you will use this cluster throughout the workshop.

The cluster you'll create is composed of 3 Nodes named **controlplane**, **worker1**, and **worker2**. The controlplane Node runs the cluster components (API Server, Controller Manager, Scheduler, etcd), while worker1 and worker2 are the worker Nodes in charge of running the containerized workloads.



## Provisioning VMs
---

Before creating a cluster, you need to provision the infrastructure (bare metal servers or virtual machines). You can create the 3 VMs on your local machine or on a cloud provider (though the latter option comes with a small cost). Name those VMs **controlplane**, **worker1**, and **worker2** for consistency throughout the workshop. Also ensure each VM has at least 2 vCPUs and 2 GB of RAM so it meets the [prerequisites](https://bit.ly/kubeadm-prerequisites).

If you want to create those VMs on your local machine, we recommend using [Multipass](https://multipass.run), a tool from [Canonical](https://canonical.com/). Multipass makes creating local VMs a breeze. Once you have installed Multipass, create the VMs as follows.

```bash
multipass launch --name controlplane --memory 2G --cpus 2 --disk 10G
multipass launch --name worker1 --memory 2G --cpus 2 --disk 10G
multipass launch --name worker2 --memory 2G --cpus 2 --disk 10G
```
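
Before going further, you can check that the three VMs are up and running (and open a shell on any of them with `multipass shell <name>`).

```bash
# Verify the three VMs are in the Running state
multipass list
```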



## Cluster initialization
---

Now that the VMs are created, you need to install some dependencies on each of them (a few packages, including **kubectl**, **containerd**, and **kubeadm**). To simplify this process, we provide scripts that do this job for you.

First, SSH into the controlplane VM and install those dependencies using the following command.

```bash
curl https://luc.run/kubeadm/controlplane.sh | VERSION="1.32" sh
```
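
To confirm the script completed successfully, you can check that the main tools it installs are available; the versions reported should match the `VERSION` value you passed.

```bash
# Sanity check: the script should have installed these binaries
kubeadm version
kubectl version --client
containerd --version
```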

Next, still from the controlplane VM, initialize the cluster.

```bash
sudo kubeadm init
```

The initialization should take a few tens of seconds. The list below shows all the steps it performs.

```
preflight                      Run pre-flight checks
certs                          Certificate generation
  /ca                            Generate the self-signed Kubernetes CA to provision identities for other Kubernetes components
  /apiserver                     Generate the certificate for serving the Kubernetes API
  /apiserver-kubelet-client      Generate the certificate for the API server to connect to kubelet
  /front-proxy-ca                Generate the self-signed CA to provision identities for front proxy
  /front-proxy-client            Generate the certificate for the front proxy client
  /etcd-ca                       Generate the self-signed CA to provision identities for etcd
  /etcd-server                   Generate the certificate for serving etcd
  /etcd-peer                     Generate the certificate for etcd nodes to communicate with each other
  /etcd-healthcheck-client       Generate the certificate for liveness probes to healthcheck etcd
  /apiserver-etcd-client         Generate the certificate the apiserver uses to access etcd
  /sa                            Generate a private key for signing service account tokens along with its public key
kubeconfig                     Generate all kubeconfig files necessary to establish the control plane and the admin kubeconfig file
  /admin                         Generate a kubeconfig file for the admin to use and for kubeadm itself
  /super-admin                   Generate a kubeconfig file for the super-admin
  /kubelet                       Generate a kubeconfig file for the kubelet to use *only* for cluster bootstrapping purposes
  /controller-manager            Generate a kubeconfig file for the controller manager to use
  /scheduler                     Generate a kubeconfig file for the scheduler to use
etcd                           Generate static Pod manifest file for local etcd
  /local                         Generate the static Pod manifest file for a local, single-node local etcd instance
control-plane                  Generate all static Pod manifest files necessary to establish the control plane
  /apiserver                     Generates the kube-apiserver static Pod manifest
  /controller-manager            Generates the kube-controller-manager static Pod manifest
  /scheduler                     Generates the kube-scheduler static Pod manifest
kubelet-start                  Write kubelet settings and (re)start the kubelet
upload-config                  Upload the kubeadm and kubelet configuration to a ConfigMap
  /kubeadm                       Upload the kubeadm ClusterConfiguration to a ConfigMap
  /kubelet                       Upload the kubelet component config to a ConfigMap
upload-certs                   Upload certificates to kubeadm-certs
mark-control-plane             Mark a node as a control-plane
bootstrap-token                Generates bootstrap tokens used to join a node to a cluster
kubelet-finalize               Updates settings relevant to the kubelet after TLS bootstrap
  /enable-client-cert-rotation   Enable kubelet client certificate rotation
addon                          Install required addons for passing conformance tests
  /coredns                       Install the CoreDNS addon to a Kubernetes cluster
  /kube-proxy                    Install the kube-proxy addon to a Kubernetes cluster
show-join-command              Show the join command for control-plane and worker node
```
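
Each entry in this list is also exposed as a standalone `kubeadm init phase` subcommand, which is handy when debugging or customizing a single step. For example, you can re-run just the pre-flight checks:

```bash
# Re-run only the pre-flight checks phase (for illustration)
sudo kubeadm init phase preflight
```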

Several commands are returned at the end of the initialization process, which you'll use in the next parts.
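
If you lose track of the join command, there is no need to re-initialize anything: kubeadm can create a fresh bootstrap token and print the corresponding join command at any time.

```bash
# Run on the controlplane Node to get a new worker join command
sudo kubeadm token create --print-join-command
```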



## Retrieving the kubeconfig file
---

The first set of commands returned during the initialization step configures kubectl for the current user. Run those commands from a shell on the controlplane Node.

```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
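
You can verify that kubectl is correctly configured by querying the API Server; the URL in the output reflects your controlplane's IP address.

```bash
# Confirm kubectl can reach the API Server using the new kubeconfig
kubectl cluster-info
```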

You can now list the Nodes. Only the controlplane appears, as you haven't added the worker Nodes yet.

```bash
$ kubectl get no
NAME           STATUS     ROLES           AGE    VERSION
controlplane   NotReady   control-plane   5m4s   v1.32.4
```

## Adding the first worker Node
---

As you did for the controlplane, use the following command to install the dependencies (kubectl, containerd, kubeadm) on worker1.

```bash
curl https://luc.run/kubeadm/worker.sh | VERSION="1.32" sh
```

Then, run the join command returned during the initialization step; it adds a worker Node to the cluster. The IP address, token, and hash below come from our environment, so use the values returned by your own `kubeadm init`.

```bash
sudo kubeadm join 10.81.0.174:6443 --token kolibl.0oieughn4y03zvm7 \
    --discovery-token-ca-cert-hash sha256:a1d26efca219428731be6b62e3298a2e5014d829e51185e804f2f614b70d933d
```

## Adding the second worker Node
---

You need to do the same on worker2. First, install the dependencies.

```bash
curl https://luc.run/kubeadm/worker.sh | VERSION="1.32" sh
```

Then, run the join command to add this Node to the cluster.

```bash
sudo kubeadm join 10.81.0.174:6443 --token kolibl.0oieughn4y03zvm7 \
    --discovery-token-ca-cert-hash sha256:a1d26efca219428731be6b62e3298a2e5014d829e51185e804f2f614b70d933d
```

You now have a cluster with 3 Nodes.



## Status of the Nodes
---

List the Nodes and notice they are all in NotReady status.

```bash
$ kubectl get nodes
NAME           STATUS     ROLES           AGE     VERSION
controlplane   NotReady   control-plane   9m58s   v1.32.4
worker1        NotReady   <none>          58s     v1.32.4
worker2        NotReady   <none>          55s     v1.32.4
```

If you go one step further and describe the controlplane Node, you'll see why the Nodes are not ready yet.

```bash
$ kubectl describe node controlplane
…
KubeletNotReady   container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
```

## Installing a network plugin
---

Run the following commands from the controlplane Node to install [Cilium](https://cilium.io), a network plugin, in your cluster.

```bash
OS="$(uname | tr '[:upper:]' '[:lower:]')"
ARCH="$(uname -m | sed -e 's/x86_64/amd64/' -e 's/\(arm\)\(64\)\?.*/\1\2/' -e 's/aarch64$/arm64/')"
curl -L --remote-name-all https://github.com/cilium/cilium-cli/releases/latest/download/cilium-$OS-$ARCH.tar.gz{,.sha256sum}
# Verify the downloaded archive against its checksum before extracting
sha256sum --check cilium-$OS-$ARCH.tar.gz.sha256sum
sudo tar xzvfC cilium-$OS-$ARCH.tar.gz /usr/local/bin
cilium install
```
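
Optionally, you can have the Cilium CLI wait until the plugin reports a healthy status before checking the Nodes.

```bash
# Block until the Cilium agent and operator report ready
cilium status --wait
```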

After a few tens of seconds, you'll see your cluster is ready.

```bash
$ kubectl get nodes
NAME           STATUS   ROLES           AGE     VERSION
controlplane   Ready    control-plane   13m     v1.32.4
worker1        Ready    <none>          4m28s   v1.32.4
worker2        Ready    <none>          4m25s   v1.32.4
```
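
If you want extra confidence in the network setup, the Cilium CLI also ships an end-to-end connectivity test that deploys probe workloads in the cluster; note it takes several minutes to complete.

```bash
# Optional: run Cilium's built-in connectivity checks
cilium connectivity test
```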

## Getting the kubeconfig on the host machine
---

To avoid connecting to the controlplane Node every time you need to run kubectl commands, copy the kubeconfig file from the controlplane to the host machine. Make sure to copy it to `$HOME/.kube/config` so kubectl picks it up automatically.

If you've created your VMs with Multipass, you can copy the kubeconfig file using the following commands.

```bash
multipass transfer controlplane:/home/ubuntu/.kube/config config
mkdir -p $HOME/.kube
mv config $HOME/.kube/config
```
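
If a kubeconfig file already exists on your host, merge the transferred file into it instead of overwriting it. A minimal sketch using kubectl's built-in flattening, assuming the transferred file is still named `config` in the current directory:

```bash
# Merge the existing kubeconfig with the transferred one, then install the result
KUBECONFIG=$HOME/.kube/config:$PWD/config kubectl config view --flatten > /tmp/merged-config
mv /tmp/merged-config $HOME/.kube/config
```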

You should now be able to list the Nodes directly from the host machine.

```bash
$ kubectl get nodes
NAME           STATUS   ROLES           AGE     VERSION
controlplane   Ready    control-plane   13m     v1.32.4
worker1        Ready    <none>          4m28s   v1.32.4
worker2        Ready    <none>          4m25s   v1.32.4
```