Commit d58fb49 — Merge pull request #44 from ShashaankS/CKA-prep
Update: CKA Preparation
2 parents: 488aed7 + 46d773b

49 files changed: +2306 −0 lines

---
title: "Certified Kubernetes Administrator (CKA) Preparation"
description: "This learning path prepares you for the Certified Kubernetes Administrator (CKA) exam, covering essential topics such as cluster architecture, installation, configuration, and troubleshooting."
banner: ""
courses: 9
weight: 8
---

---
id: "certifications"
description: "Get an overview of the existing Kubernetes certifications and what you need to learn for the CKA."
title: "Certifications"
weight: 1
---

Get an overview of the existing Kubernetes certifications and what you need to learn for the CKA.

## Several certifications available
---

| Certification | Type | Badge |
|-------------------------------------------------------|-----------|-------|
| Kubernetes and Cloud Native Associate (KCNA) | MCQ | ![kcna](kcna.png) |
| Kubernetes and Cloud Native Security Associate (KCSA) | MCQ | ![kcsa](kcsa.png) |
| Certified Kubernetes Application Developer (CKAD) | Practice | ![ckad](ckad.png) |
| Certified Kubernetes Administrator (CKA) | Practice | ![cka](cka.png) |
| Certified Kubernetes Security Specialist (CKS) | Practice* | ![cks](cks.png) |

\* *passing the CKA is required before you can take the CKS*

If you pass all these certifications, you become a [Kubestronaut](https://www.cncf.io/training/kubestronaut/).

## Expectations for the CKA
---

The following table summarizes the distribution of the CKA questions across 5 main subjects.

| Subject | % |
|----------------------------------------------------|-----|
| Cluster Architecture, Installation & Configuration | 25% |
| Workloads & Scheduling | 15% |
| Services & Networking | 20% |
| Storage | 10% |
| Troubleshooting | 30% |

## CKA environment
---

The CKA is a 2-hour exam. It contains 15 to 20 questions and requires a score of at least 66% to pass. The exam is remotely proctored, so you can take it from home (or any other quiet location) at a time that best suits your schedule.

Before launching the exam, which you do via your [Linux Foundation Training Portal](https://trainingportal.linuxfoundation.org/access/saml/login), you need to complete a couple of prerequisites, including making sure the PSI Browser works correctly on your environment. This browser gives you access to the remote desktop you'll use during the exam.

![psi-browser](psi-browser.png)

## Tips & tricks
---

### Tools

Make sure you have a basic knowledge of:

- **vim**
- **openssl**

```bash
# Visualize the content of a certificate
openssl x509 -in cert.crt -noout -text
```

- **systemd / systemctl / journalctl**

```bash
# Restart kubelet
systemctl restart kubelet

# Check kubelet logs
journalctl -u kubelet
```

### Aliases

Defining a couple of aliases and environment variables at the very beginning of the exam can save time.

```bash
alias k=kubectl
export dr="--dry-run=client -o yaml"
export fd="--grace-period=0 --force"
```

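A quick sketch of how these shortcuts are typically combined (assuming the alias and variables above are set in the current shell):

```shell
# Generate a Pod manifest without creating the Pod
k run nginx --image=nginx $dr > pod.yaml

# Force-delete a Pod without waiting for the grace period
k delete po nginx $fd
```
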
### Imperative commands

Don't write resource manifests from scratch; instead, generate them with `--dry-run=client -o yaml`, as in these examples.

```bash
k run nginx --image=nginx:1.20 --dry-run=client -o yaml > pod.yaml
k create deploy www --image=nginx:1.20 --replicas=3 --dry-run=client -o yaml > deploy.yaml
k create role create-pod --verb=create --resource=pods --dry-run=client -o yaml > role.yaml
```

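The generated manifest is only a starting point: a common workflow is to edit the file to match the question's requirements, then create the resource from it.

```shell
# Tweak the generated manifest, then create the resource from it
vim pod.yaml
k apply -f pod.yaml
```
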
Quickly change the current Namespace.

```bash
k config set-context --current --namespace=dev
```

Don't wait for the grace period to get rid of a Pod.

```bash
k delete po nginx --force --grace-period=0
```

### Reference guide

The [Kubectl quick reference guide](https://kubernetes.io/docs/reference/kubectl/quick-reference/) is a must-read.

107+
108+
### Access to exam simulator
109+
110+
Registering for the CKA gives you access to two sessions of the official Exam simulator. I highly recommend using these sessions once you're almost ready.
---
id: "creation"
description: "Build a 3-node kubeadm cluster from scratch."
title: "Create a cluster"
weight: 2
---

This section guides you through creating a 3-node Kubernetes cluster using the [kubeadm](https://kubernetes.io/docs/reference/setup-tools/kubeadm/) bootstrapping tool. This is an important step, as you will use this cluster throughout this workshop.

The cluster you'll create is composed of 3 Nodes named **controlplane**, **worker1**, and **worker2**. The controlplane Node runs the cluster components (API Server, Controller Manager, Scheduler, etcd), while worker1 and worker2 are the worker Nodes in charge of running the containerized workloads.

![objectives](objectives.png)

## Provisioning VMs
---

Before creating a cluster, you need to provision the infrastructure (bare-metal servers or virtual machines). You can create the 3 VMs on your local machine or on a cloud provider (the latter option comes with a small cost). Make sure you name those VMs **controlplane**, **worker1**, and **worker2** to stay consistent with the rest of the workshop. Also ensure each VM has at least 2 vCPUs and 2 GB of RAM so it meets the [prerequisites](https://bit.ly/kubeadm-prerequisites).

If you want to create those VMs on your local machine, we recommend using [Multipass](https://multipass.run), a tool from [Canonical](https://canonical.com/). Multipass makes creating local VMs a breeze. Once you have installed Multipass, create the VMs as follows.

```bash
multipass launch --name controlplane --memory 2G --cpus 2 --disk 10G
multipass launch --name worker1 --memory 2G --cpus 2 --disk 10G
multipass launch --name worker2 --memory 2G --cpus 2 --disk 10G
```

![step-1](step-1.png)

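If you went the Multipass route, you can verify the three VMs are up and open a shell inside one of them:

```shell
# Check that the VMs are running and note their IP addresses
multipass list

# Open a shell inside a VM (here, the future control plane)
multipass shell controlplane
```
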
## Cluster initialization
---

Now that the VMs are created, you need to install some dependencies on each of them (a couple of packages including **kubectl**, **containerd**, and **kubeadm**). To simplify this process, we provide scripts that do this job for you.

First, SSH into the controlplane VM and install those dependencies using the following command.

```bash
curl https://luc.run/kubeadm/controlplane.sh | VERSION="1.32" sh
```

Next, still from the controlplane VM, initialize the cluster.

```bash
sudo kubeadm init
```

The initialization should take a few tens of seconds. The list below shows all the steps it performs.

```
preflight                       Run pre-flight checks
certs                           Certificate generation
  /ca                           Generate the self-signed Kubernetes CA to provision identities for other Kubernetes components
  /apiserver                    Generate the certificate for serving the Kubernetes API
  /apiserver-kubelet-client     Generate the certificate for the API server to connect to kubelet
  /front-proxy-ca               Generate the self-signed CA to provision identities for front proxy
  /front-proxy-client           Generate the certificate for the front proxy client
  /etcd-ca                      Generate the self-signed CA to provision identities for etcd
  /etcd-server                  Generate the certificate for serving etcd
  /etcd-peer                    Generate the certificate for etcd nodes to communicate with each other
  /etcd-healthcheck-client      Generate the certificate for liveness probes to healthcheck etcd
  /apiserver-etcd-client        Generate the certificate the apiserver uses to access etcd
  /sa                           Generate a private key for signing service account tokens along with its public key
kubeconfig                      Generate all kubeconfig files necessary to establish the control plane and the admin kubeconfig file
  /admin                        Generate a kubeconfig file for the admin to use and for kubeadm itself
  /super-admin                  Generate a kubeconfig file for the super-admin
  /kubelet                      Generate a kubeconfig file for the kubelet to use *only* for cluster bootstrapping purposes
  /controller-manager           Generate a kubeconfig file for the controller manager to use
  /scheduler                    Generate a kubeconfig file for the scheduler to use
etcd                            Generate static Pod manifest file for local etcd
  /local                        Generate the static Pod manifest file for a local, single-node local etcd instance
control-plane                   Generate all static Pod manifest files necessary to establish the control plane
  /apiserver                    Generates the kube-apiserver static Pod manifest
  /controller-manager           Generates the kube-controller-manager static Pod manifest
  /scheduler                    Generates the kube-scheduler static Pod manifest
kubelet-start                   Write kubelet settings and (re)start the kubelet
upload-config                   Upload the kubeadm and kubelet configuration to a ConfigMap
  /kubeadm                      Upload the kubeadm ClusterConfiguration to a ConfigMap
  /kubelet                      Upload the kubelet component config to a ConfigMap
upload-certs                    Upload certificates to kubeadm-certs
mark-control-plane              Mark a node as a control-plane
bootstrap-token                 Generates bootstrap tokens used to join a node to a cluster
kubelet-finalize                Updates settings relevant to the kubelet after TLS bootstrap
  /enable-client-cert-rotation  Enable kubelet client certificate rotation
addon                           Install required addons for passing conformance tests
  /coredns                      Install the CoreDNS addon to a Kubernetes cluster
  /kube-proxy                   Install the kube-proxy addon to a Kubernetes cluster
show-join-command               Show the join command for control-plane and worker node
```

Several commands are returned at the end of the initialization process; you'll use them in the next parts.

![step-2](step-2.png)

## Retrieving the kubeconfig file
---

The first set of commands returned during the initialization step configures kubectl for the current user. Run those commands from a shell on the controlplane Node.

```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

You can now list the Nodes. You'll get only one Node, as you haven't added the worker Nodes yet.

```bash
$ kubectl get no
NAME           STATUS     ROLES           AGE    VERSION
controlplane   NotReady   control-plane   5m4s   v1.32.4
```

## Adding the first worker Node
---

As you did for the controlplane, use the following command to install the dependencies (kubectl, containerd, kubeadm) on worker1.

```bash
curl https://luc.run/kubeadm/worker.sh | VERSION="1.32" sh
```

Then, run the join command returned during the initialization step. This command adds worker Nodes to the cluster (your API server address, token, and CA cert hash will differ from the example below).

```bash
sudo kubeadm join 10.81.0.174:6443 --token kolibl.0oieughn4y03zvm7 \
  --discovery-token-ca-cert-hash sha256:a1d26efca219428731be6b62e3298a2e5014d829e51185e804f2f614b70d933d
```

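If you've lost the join command (or the bootstrap token has expired; tokens are valid for 24 hours by default), you can generate a fresh one from the controlplane Node:

```shell
# Print a new, ready-to-use join command for worker nodes
sudo kubeadm token create --print-join-command
```
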
## Adding the second worker Node
---

You need to do the same on worker2. First, install the dependencies.

```bash
curl https://luc.run/kubeadm/worker.sh | VERSION="1.32" sh
```

Then, run the join command to add this Node to the cluster.

```bash
sudo kubeadm join 10.81.0.174:6443 --token kolibl.0oieughn4y03zvm7 \
  --discovery-token-ca-cert-hash sha256:a1d26efca219428731be6b62e3298a2e5014d829e51185e804f2f614b70d933d
```

You now have a cluster with 3 Nodes.

![step-3](step-3.png)

## Status of the Nodes
---

List the Nodes and notice they are all in NotReady status.

```bash
$ kubectl get nodes
NAME           STATUS     ROLES           AGE     VERSION
controlplane   NotReady   control-plane   9m58s   v1.32.4
worker1        NotReady   <none>          58s     v1.32.4
worker2        NotReady   <none>          55s     v1.32.4
```

If you go one step further and describe the controlplane Node, you'll see why the cluster is not ready yet.

```
KubeletNotReady container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
```

## Installing a network plugin
---

Run the following commands from the controlplane Node to install Cilium in your cluster.

```bash
OS="$(uname | tr '[:upper:]' '[:lower:]')"
ARCH="$(uname -m | sed -e 's/x86_64/amd64/' -e 's/\(arm\)\(64\)\?.*/\1\2/' -e 's/aarch64$/arm64/')"
curl -L --remote-name-all https://github.com/cilium/cilium-cli/releases/latest/download/cilium-$OS-$ARCH.tar.gz{,.sha256sum}
sudo tar xzvfC cilium-$OS-$ARCH.tar.gz /usr/local/bin
cilium install
```

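Before listing the Nodes again, you can optionally watch the rollout with the Cilium CLI:

```shell
# Block until Cilium reports that all its components are ready
cilium status --wait
```
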
After a few tens of seconds, you'll see that your cluster is ready.

```bash
$ kubectl get nodes
NAME           STATUS   ROLES           AGE     VERSION
controlplane   Ready    control-plane   13m     v1.32.4
worker1        Ready    <none>          4m28s   v1.32.4
worker2        Ready    <none>          4m25s   v1.32.4
```

## Get the kubeconfig on the host machine
---

To avoid connecting to the controlplane Node every time you want to run kubectl commands, copy the kubeconfig file from the controlplane to the host machine. Make sure to copy this file to `$HOME/.kube/config` so it automatically configures kubectl.

If you've created your VMs with Multipass, you can copy the kubeconfig file using the following commands.

```bash
multipass transfer controlplane:/home/ubuntu/.kube/config config
mkdir -p $HOME/.kube
mv config $HOME/.kube/config
```

You should now be able to list the Nodes directly from the host machine.

```bash
$ kubectl get nodes
NAME           STATUS   ROLES           AGE     VERSION
controlplane   Ready    control-plane   13m     v1.32.4
worker1        Ready    <none>          4m28s   v1.32.4
worker2        Ready    <none>          4m25s   v1.32.4
```