---
docType: "Chapter"
id: "creation"
chapterTitle: "Create a cluster"
description: "Build a 3-node kubeadm cluster from scratch."
lectures: 10
title: "Create a cluster"
weight: 2
---

{{< chapterstyle >}}

<p>This section guides you through creating a 3-node Kubernetes cluster using the <a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/">kubeadm</a> bootstrapping tool. This is an important step, as you will use this cluster throughout the workshop.</p>

<p>The cluster you'll create is composed of 3 Nodes named <strong>controlplane</strong>, <strong>worker1</strong>, and <strong>worker2</strong>. The controlplane Node runs the cluster components (API Server, Controller Manager, Scheduler, etcd), while worker1 and worker2 are the worker Nodes in charge of running the containerized workloads.</p>

{{< image src="/images/learning-path/cka/creation/objectives.png" width="100%" align="center" alt="" >}}

<h2>Provisioning VMs</h2>

<p>Before creating a cluster, you need to provision the infrastructure (bare-metal servers or virtual machines). You can create the 3 VMs on your local machine or on a cloud provider (the latter option comes with a small cost). Make sure you name those VMs <strong>controlplane</strong>, <strong>worker1</strong>, and <strong>worker2</strong> to keep the naming consistent throughout the workshop. Also ensure each VM has at least 2 vCPUs and 2 GB of RAM so it meets the <a href="https://bit.ly/kubeadm-prerequisites">prerequisites</a>.</p>

<p>If you want to create those VMs on your local machine, we recommend using <a href="https://multipass.run">Multipass</a>, a tool from <a href="https://canonical.com/">Canonical</a>. Multipass makes creating local VMs a breeze. Once you have installed Multipass, create the VMs as follows.</p>

```bash
multipass launch --name controlplane --memory 2G --cpus 2 --disk 10G
multipass launch --name worker1 --memory 2G --cpus 2 --disk 10G
multipass launch --name worker2 --memory 2G --cpus 2 --disk 10G
```

{{< image src="/images/learning-path/cka/creation/step-1.png" width="100%" align="center" alt="" >}}
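
<p>Before going further, you can check that the VMs are up and have the expected resources. For example:</p>

```bash
# List the VMs along with their state and IP addresses
multipass list

# Show the resources allocated to a VM (CPUs, memory, disk)
multipass info controlplane

# Open a shell inside a VM
multipass shell controlplane
```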

<h2>Cluster initialization</h2>

<p>Now that the VMs are created, you need to install some dependencies on each of them (a couple of packages including <strong>kubectl</strong>, <strong>containerd</strong>, and <strong>kubeadm</strong>). To simplify this process, we provide scripts that do this job for you.</p>

<p>First, SSH into the controlplane VM and install those dependencies using the following command.</p>

```bash
curl https://luc.run/kubeadm/controlplane.sh | VERSION="1.32" sh
```
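
<p>You can quickly verify that these tools are in place before going further. For example:</p>

```bash
# Check the versions of the components installed by the script
kubeadm version
kubectl version --client
containerd --version
```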

<p>Next, still from the controlplane VM, initialize the cluster.</p>

```bash
sudo kubeadm init
```

<p>The initialization should take a few tens of seconds. The list below shows all the steps it takes.</p>

<pre>
preflight Run pre-flight checks
certs Certificate generation
  /ca Generate the self-signed Kubernetes CA to provision identities for other Kubernetes components
  /apiserver Generate the certificate for serving the Kubernetes API
  /apiserver-kubelet-client Generate the certificate for the API server to connect to kubelet
  /front-proxy-ca Generate the self-signed CA to provision identities for front proxy
  /front-proxy-client Generate the certificate for the front proxy client
  /etcd-ca Generate the self-signed CA to provision identities for etcd
  /etcd-server Generate the certificate for serving etcd
  /etcd-peer Generate the certificate for etcd nodes to communicate with each other
  /etcd-healthcheck-client Generate the certificate for liveness probes to healthcheck etcd
  /apiserver-etcd-client Generate the certificate the apiserver uses to access etcd
  /sa Generate a private key for signing service account tokens along with its public key
kubeconfig Generate all kubeconfig files necessary to establish the control plane and the admin kubeconfig file
  /admin Generate a kubeconfig file for the admin to use and for kubeadm itself
  /super-admin Generate a kubeconfig file for the super-admin
  /kubelet Generate a kubeconfig file for the kubelet to use *only* for cluster bootstrapping purposes
  /controller-manager Generate a kubeconfig file for the controller manager to use
  /scheduler Generate a kubeconfig file for the scheduler to use
etcd Generate static Pod manifest file for local etcd
  /local Generate the static Pod manifest file for a local, single-node local etcd instance
control-plane Generate all static Pod manifest files necessary to establish the control plane
  /apiserver Generates the kube-apiserver static Pod manifest
  /controller-manager Generates the kube-controller-manager static Pod manifest
  /scheduler Generates the kube-scheduler static Pod manifest
kubelet-start Write kubelet settings and (re)start the kubelet
upload-config Upload the kubeadm and kubelet configuration to a ConfigMap
  /kubeadm Upload the kubeadm ClusterConfiguration to a ConfigMap
  /kubelet Upload the kubelet component config to a ConfigMap
upload-certs Upload certificates to kubeadm-certs
mark-control-plane Mark a node as a control-plane
bootstrap-token Generates bootstrap tokens used to join a node to a cluster
kubelet-finalize Updates settings relevant to the kubelet after TLS bootstrap
  /enable-client-cert-rotation Enable kubelet client certificate rotation
addon Install required addons for passing conformance tests
  /coredns Install the CoreDNS addon to a Kubernetes cluster
  /kube-proxy Install the kube-proxy addon to a Kubernetes cluster
show-join-command Show the join command for control-plane and worker node
</pre>
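
<p>Each of these steps is also exposed as a kubeadm phase, which is handy if you ever need to re-run a single step. For example:</p>

```bash
# List the init phases and their descriptions
sudo kubeadm init --help

# Re-run only the preflight checks
sudo kubeadm init phase preflight
```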

<p>Several commands are returned at the end of the installation process; you'll use them in the following parts.</p>

{{< image src="/images/learning-path/cka/creation/step-2.png" width="100%" align="center" alt="" >}}

<h2>Retrieving the kubeconfig file</h2>

<p>The first set of commands returned during the initialization step configures kubectl for the current user. Run those commands from a shell on the controlplane Node.</p>

```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
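
<p>As the initialization output also mentions, if you're running commands as the root user you can simply point kubectl at the admin kubeconfig instead:</p>

```bash
export KUBECONFIG=/etc/kubernetes/admin.conf
```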

<p>You can now list the Nodes. You'll only get one Node, as you haven't added the worker Nodes yet.</p>

```bash
$ kubectl get no
NAME           STATUS     ROLES           AGE    VERSION
controlplane   NotReady   control-plane   5m4s   v1.32.4
```

<h2>Adding the first worker Node</h2>

<p>As you've done for the controlplane, use the following command to install the dependencies (kubectl, containerd, kubeadm) on worker1.</p>

```bash
curl https://luc.run/kubeadm/worker.sh | VERSION="1.32" sh
```

<p>Then, run the join command returned during the initialization step. This command adds a worker Node to the cluster.</p>

```bash
sudo kubeadm join 10.81.0.174:6443 --token kolibl.0oieughn4y03zvm7 \
  --discovery-token-ca-cert-hash sha256:a1d26efca219428731be6b62e3298a2e5014d829e51185e804f2f614b70d933d
```
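
<p>If you no longer have the join command at hand (the token it contains expires after 24 hours by default), you can generate a new one from the controlplane Node:</p>

```bash
sudo kubeadm token create --print-join-command
```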

<h2>Adding the second worker Node</h2>

<p>You need to do the same on worker2. First, install the dependencies.</p>

```bash
curl https://luc.run/kubeadm/worker.sh | VERSION="1.32" sh
```

<p>Then, run the join command to add this Node to the cluster.</p>

```bash
sudo kubeadm join 10.81.0.174:6443 --token kolibl.0oieughn4y03zvm7 \
  --discovery-token-ca-cert-hash sha256:a1d26efca219428731be6b62e3298a2e5014d829e51185e804f2f614b70d933d
```

<p>You now have a cluster with 3 Nodes.</p>

{{< image src="/images/learning-path/cka/creation/step-3.png" width="100%" align="center" alt="" >}}

<h2>Status of the Nodes</h2>

<p>List the Nodes and notice they are all in the NotReady status.</p>

```bash
$ kubectl get nodes
NAME           STATUS     ROLES           AGE     VERSION
controlplane   NotReady   control-plane   9m58s   v1.32.4
worker1        NotReady   <none>          58s     v1.32.4
worker2        NotReady   <none>          55s     v1.32.4
```

<p>If you go one step further and describe the controlplane Node, you'll see why the cluster is not ready yet.</p>
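
<p>For instance, the following command displays the Node's conditions and events; the relevant part of the output is shown below.</p>

```bash
kubectl describe node controlplane
```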

<pre>
…
KubeletNotReady container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
</pre>

<h2>Installing a network plugin</h2>

<p>The Nodes will remain NotReady until a network plugin (CNI) is installed. Run the following commands from the controlplane Node to install Cilium in your cluster.</p>

```bash
OS="$(uname | tr '[:upper:]' '[:lower:]')"
ARCH="$(uname -m | sed -e 's/x86_64/amd64/' -e 's/\(arm\)\(64\)\?.*/\1\2/' -e 's/aarch64$/arm64/')"
curl -L --remote-name-all https://github.com/cilium/cilium-cli/releases/latest/download/cilium-$OS-$ARCH.tar.gz{,.sha256sum}
sudo tar xzvfC cilium-$OS-$ARCH.tar.gz /usr/local/bin
cilium install
```
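
<p>You can follow the deployment of Cilium with its CLI; the following command waits until all the components report a ready status:</p>

```bash
cilium status --wait
```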

<p>After a few tens of seconds, you'll see that your cluster is ready.</p>

```bash
$ kubectl get nodes
NAME           STATUS   ROLES           AGE     VERSION
controlplane   Ready    control-plane   13m     v1.32.4
worker1        Ready    <none>          4m28s   v1.32.4
worker2        Ready    <none>          4m25s   v1.32.4
```

<h2>Get the kubeconfig on the host machine</h2>

<p>To avoid having to connect to the controlplane Node each time you want to run kubectl commands, copy the kubeconfig file from the controlplane to the host machine. Make sure to copy this file to <code>$HOME/.kube/config</code> so kubectl picks it up automatically.</p>

<p>If you've created your VMs with Multipass, you can copy the kubeconfig file using the following commands.</p>

```bash
multipass transfer controlplane:/home/ubuntu/.kube/config config
mkdir -p $HOME/.kube
mv config $HOME/.kube/config
```
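
<p>Note that these commands overwrite any kubeconfig file already present on the host. If you have one you want to keep, a possible approach (a minimal sketch; back up your existing file and adjust the paths to your setup) is to merge both files instead:</p>

```bash
# Keep a backup of the existing kubeconfig
cp $HOME/.kube/config $HOME/.kube/config.bak

# Merge the backup with the file retrieved from the controlplane (./config)
# and write the flattened result as the new kubeconfig
KUBECONFIG=$HOME/.kube/config.bak:./config kubectl config view --flatten > $HOME/.kube/config
```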

<p>You should now be able to directly list the Nodes from the host machine.</p>

```bash
$ kubectl get nodes
NAME           STATUS   ROLES           AGE     VERSION
controlplane   Ready    control-plane   13m     v1.32.4
worker1        Ready    <none>          4m28s   v1.32.4
worker2        Ready    <none>          4m25s   v1.32.4
```

{{< /chapterstyle >}}