This repository was archived by the owner on Jan 9, 2023. It is now read-only.

Commit 627ae29

Fix docs

Signed-off-by: Mattias Gees <mattias.gees@gmail.com>
1 parent 0aa2cfe commit 627ae29

docs/examples/horizontal-pod-autoscaling.rst
1 file changed: 33 additions & 28 deletions
@@ -25,7 +25,7 @@ Prometheus
 You need Prometheus to scrape and store metrics from applications in its
 time series database. These metrics will be used by the HPA to decide if it
-has to scale the application. You can use an already running Prometheus in
+has to scale the application. You can use an already running Prometheus in
 your environment or opt to set one up with the following steps.

 .. warning::
@@ -48,7 +48,7 @@ Prometheus, Alertmanager and configure ServiceMonitors.
     helm install coreos/prometheus-operator --name prometheus-operator --namespace application-monitoring

 After installing the Prometheus operator you can install Prometheus. To accomplish
-this you use an yaml values file. This yaml file defines you want 2 Prometheus
+this you use a YAML values file. This file specifies that you want 2 Prometheus
 pods running, that data is kept for 14 days, and that each Prometheus pod has a
 persistent volume of 20 GB. You can tweak the Prometheus install further
 by following the official `documentation <https://github.com/coreos/prometheus-operator/tree/master/helm/prometheus>`__.
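For reference, such a values file might look like the following sketch. The value names (``replicaCount``, ``retention``, ``storageSpec``) are assumptions based on the coreos/prometheus chart and should be verified against that chart's ``values.yaml``:

.. code-block:: yaml

    # Hypothetical prometheus-values.yaml -- key names are assumptions,
    # verify them against the chart's values.yaml before use.
    replicaCount: 2          # run 2 Prometheus pods
    retention: 14d           # keep metric data for 14 days
    storageSpec:
      volumeClaimTemplate:
        spec:
          resources:
            requests:
              storage: 20Gi  # persistent volume per Prometheus pod

You would then pass it to Helm with ``-f prometheus-values.yaml`` when installing the Prometheus chart.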
@@ -88,22 +88,27 @@ You need to create a CA and SSL cert to validate your APIService with Kubernetes
     openssl req -x509 -sha256 -new -nodes -days 365 -newkey rsa:2048 -keyout ${PURPOSE}-ca.key -out ${PURPOSE}-ca.crt -subj "/CN=ca"
     echo '{"signing":{"default":{"expiry":"43800h","usages":["signing","key encipherment","'${PURPOSE}'"]}}}' > "${PURPOSE}-ca-config.json"

-    export SERVICE_NAME=prometheus-adapter-prometheus-adapter
-    export ALT_NAMES='"prometheus-adapter-prometheus-adapter.application-monitoring","prometheus-adapter-prometheus-adapter.application-monitoring.svc"'
+    export SERVICE_NAME=prometheus-adapter
+    export ALT_NAMES='"prometheus-adapter.application-monitoring","prometheus-adapter.application-monitoring.svc"'
     echo '{"CN":"'${SERVICE_NAME}'","hosts":['${ALT_NAMES}'],"key":{"algo":"rsa","size":2048}}' | cfssl gencert -ca=server-ca.crt -ca-key=server-ca.key -config=server-ca-config.json - | cfssljson -bare apiserver

+.. warning::
+    Make sure the ``SERVICE_NAME`` and ``ALT_NAMES`` match your application release
+    name and the namespace where it is deployed.
+
 Now create a ``prometheus-adapter.yaml`` file with the following content:

 .. code-block:: yaml

     tls:
       enable: true
       ca: |-
-        <content of server-ca.crt>
+        <replace with content of server-ca.crt>
       key: |-
-        <content of apiserver-key.pem>
+        <replace with content of apiserver-key.pem>
       certificate: |-
-        <content of apiserver.pem>
+        <replace with content of apiserver.pem>

     # Change URL and port if you setup your own Prometheus server.
     prometheus:
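The ``prometheus-adapter.yaml`` above is truncated at the ``prometheus:`` section in this diff. Separately, if the adapter's default discovery rules do not expose your application metric, the prometheus-adapter chart accepts custom discovery rules in its values. A hedged sketch follows; the metric name ``http_requests_total`` is hypothetical and the rule should be adapted to your own metric:

.. code-block:: yaml

    # Hypothetical addition to prometheus-adapter.yaml: a custom rule that
    # turns a Prometheus counter into a per-second custom metric.
    rules:
      custom:
      - seriesQuery: 'http_requests_total{namespace!="",pod!=""}'
        resources:
          overrides:
            namespace: {resource: "namespace"}
            pod: {resource: "pod"}
        name:
          matches: "^(.*)_total$"
          as: "${1}_per_second"
        metricsQuery: 'sum(rate(<<.Series>>{<<.LabelMatchers>>}[2m])) by (<<.GroupBy>>)'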
@@ -117,15 +122,16 @@ Install the Prometheus Adapter:
 .. code-block:: bash

     helm repo add incubator https://kubernetes-charts-incubator.storage.googleapis.com/
-    helm install incubator/prometheus-adapter --name prometheus-adapter --namespace application-monitoring -f prometheus-adapter.yaml
+    helm install stable/prometheus-adapter --name prometheus-adapter --namespace application-monitoring -f prometheus-adapter.yaml

-Now you can test if it works by running the following command against your
+You can test whether the custom metrics API works by running the following command against your
 Kubernetes cluster.

 .. code-block:: bash

     kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1
+
     {"kind":"APIResourceList","apiVersion":"v1","groupVersion":"custom.metrics.k8s.io/v1beta1","resources":[]}

 Usage
@@ -144,7 +150,7 @@ your application with Prometheus.
     kind: ServiceMonitor
     metadata:
       name: <example>
-      namespace: <example_namespace>
+      namespace: application-monitoring
       labels:
         prometheus: prometheus-applications
     spec:
@@ -159,32 +165,31 @@ your application with Prometheus.
       matchLabels:
         <key>: <value that matches your application>

-When adding the following ``ServiceMonitor``, make sure to keep ``prometheus``
-as an key in labels, this is how Prometheus discovers the different
-ServiceMonitors.
+When adding the ``ServiceMonitor``, make sure to keep ``prometheus`` as a key
+in the labels; that is how Prometheus discovers the different ServiceMonitors.
+The ``ServiceMonitor`` has to be deployed in the same namespace as your Prometheus.

 After applying the ``ServiceMonitor``, Prometheus should start discovering
 all your application pods and start to monitor them.

-Now you can find the correct metric and you can get it out of the custom.metrics
-API endpoint.
+You can find the correct metric by querying the custom.metrics API endpoint.

 .. code-block:: bash

     kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1 | jq
-    kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/requestcount" | jq
-    kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1/namespaces/default/service/example/requestcount | jq
+    kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/<application_namespace>/pods/*/<metric_name>" | jq
+    kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1/namespaces/<application_namespace>/service/<service_name>/<metric_name> | jq

-When you found the correct metric to scale on, you can create your
+Once you have found the correct metric to scale on, you can create your
 ``HorizontalPodAutoscaler``.

 .. code-block:: yaml

     kind: HorizontalPodAutoscaler
     apiVersion: autoscaling/v2beta1
     metadata:
-      name: example
-      namespace: default
+      name: <example>
+      namespace: <example_namespace>
     spec:
       scaleTargetRef:
         apiVersion: apps/v1beta2
@@ -193,15 +198,12 @@ Once you have found the correct metric to scale on, you can create your
       minReplicas: 2
       maxReplicas: 4
       metrics:
-      - type: Object
-        object:
-          target:
-            kind: Service
-            name: example
-          metricName: requestcount
-          targetValue: 30
+      - type: Pods
+        pods:
+          metricName: <metric_name>
+          targetAverageValue: <metric_value>

-Now you watch the horizontal pod autoscaler:
+Watch the horizontal pod autoscaler:

 .. code-block:: bash
@@ -212,3 +214,6 @@ More examples can be found in the kubernetes `documentation <https://kubernetes.

 .. warning::
     Be sure to take a look at the ``Object`` and ``Pods`` types for HPA based on custom metrics.
+
+A fully worked-out example with an example application that exposes metrics can be found in
+the `luxas repo <https://github.com/luxas/kubeadm-workshop/blob/master/demos/monitoring/sample-metrics-app.yaml#L51>`__.
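Putting the fragments from this diff together, a complete ``HorizontalPodAutoscaler`` for the ``Pods`` metric type would look roughly like the sketch below. The body of ``scaleTargetRef`` is not shown in the diff, so ``kind: Deployment`` and the target name are assumptions:

.. code-block:: yaml

    kind: HorizontalPodAutoscaler
    apiVersion: autoscaling/v2beta1
    metadata:
      name: <example>
      namespace: <example_namespace>
    spec:
      scaleTargetRef:
        apiVersion: apps/v1beta2
        kind: Deployment      # assumption: target kind is elided in the diff
        name: <example>       # assumption: target name is elided in the diff
      minReplicas: 2
      maxReplicas: 4
      metrics:
      - type: Pods
        pods:
          metricName: <metric_name>
          targetAverageValue: <metric_value>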
