# 8. Scale up your app

Estimated time: 18 min remaining

One of the powerful features offered by Kubernetes is how easy it is to scale your application. Suppose you suddenly need more capacity for your application; you can simply tell the replication controller to manage a new number of replicas for your pod:
```sh
$ kubectl scale rc hello-node --replicas=3
$ kubectl get pods
NAME               READY     STATUS    RESTARTS   AGE
hello-node-6uzt8   1/1       Running   0          8m
hello-node-gxhty   1/1       Running   0          34s
hello-node-z2odh   1/1       Running   0          34s
```
You now have three replicas of your application, each running independently on the cluster, with the load balancer you created earlier serving traffic to all of them.
```sh
$ kubectl get rc hello-node
CONTROLLER   CONTAINER(S)   IMAGE(S)                    SELECTOR         REPLICAS
hello-node   hello-node     gcr.io/..../hello-node:v1   run=hello-node   3
```
Note the **declarative approach** here - rather than starting or stopping new instances, you declare how many instances you want to be running. Kubernetes reconciliation loops simply make sure that reality matches what you requested and take action if needed.
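
To make the declarative model concrete, here is a sketch of what the `hello-node` replication controller manifest behind this state might look like. The image project path and container port are assumptions based on the earlier steps (the actual project ID is elided in the output above), so treat this as illustrative rather than the exact manifest:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: hello-node
spec:
  replicas: 3                 # the desired state; the reconciliation loop maintains it
  selector:
    run: hello-node           # pods matching this label are counted toward replicas
  template:
    metadata:
      labels:
        run: hello-node
    spec:
      containers:
      - name: hello-node
        image: gcr.io/PROJECT_ID/hello-node:v1   # PROJECT_ID is a placeholder
        ports:
        - containerPort: 8080                    # assumed port for the hello-node app
```

If you deleted one of the three pods, the controller would notice that only two pods match the selector and start a replacement to restore the declared count of three.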

Here’s a diagram summarizing the state of our Kubernetes cluster:

![gke-diagram](https://codelabs.developers.google.com/codelabs/hello-kubernetes/img/img-13.png)

#### [Go to step 9](step9.md)
#### [Go back to step 7](step7.md)