Commit 127b7bc

Complete re-write of the workshop so it follows the Katacoda scenario
1 parent 2df3725 commit 127b7bc

35 files changed: +843 −1472 lines

.workshop/setup

Lines changed: 1 addition & 1 deletion

@@ -7,4 +7,4 @@
 # the steps having already been run. This is because this script will be
 # run a second time if the container were restarted for some reason.

-envsubst < exercise/image-pipeline-resource.yaml.in > exercise/image-pipeline-resource.yaml
+envsubst < resources/image-pipeline-resource.yaml.in > resources/image-pipeline-resource.yaml

File renamed without changes.

workshop/content/01install-op.adoc

Lines changed: 75 additions & 0 deletions
OpenShift Pipelines is provided as an OpenShift add-on that can be installed via an operator available in the OpenShift OperatorHub.

Operators may be installed into a single namespace and monitor only the resources in that namespace, but the OpenShift Pipelines Operator installs globally on the cluster and monitors and manages pipelines for every user in the cluster.

To install the operator globally, you need to be a cluster administrator. In this workshop environment, the operator has already been installed for you. Nevertheless, this is the process we followed in order to install the operator. **These instructions are for reference only: your user lacks the required privileges, so you will not be able to see these screens in the embedded console in the workshop.**

== Install process

To install the OpenShift Pipelines Operator on an OpenShift 4 cluster, you would go to the **Catalog > OperatorHub** tab in the OpenShift web console. There you can see the list of available operators for OpenShift provided by Red Hat as well as by a community of partners and open-source projects.

Next, you would click on the **Integration & Delivery** category to find the **OpenShift Pipelines Operator** as shown below:

image:images/operatorhub.png[OpenShift OperatorHub]

Click on **OpenShift Pipelines Operator**, **Continue**, and then **Install** as shown below:

image:images/operator-install-1.png[OpenShift Pipelines Operator]

Leave the default values after clicking **Install**. The operator installs globally and runs in the `openshift-operators` project, as this is the pre-configured project for all global operators. Click on **Subscribe** to subscribe to the installation and update channels as shown below:

image:images/operator-install-2.png[OpenShift Pipelines Operator]

The operator is installed when the status changes from `1 installing` to `1 installed`, as shown in the image below:

image:images/operator-install-3.png[OpenShift Pipelines Operator]

This operator automates the installation and updates of OpenShift Pipelines on the cluster, as well as applying all required configuration.

== Verify installation

The OpenShift Pipelines Operator provides all its resources under a single API group: `tekton.dev`. You can see the new resources by running:

[source,bash,role=execute]
----
oc api-resources --api-group=tekton.dev
----

=== Verify user roles

To validate that your user has the appropriate roles, you can use the `oc auth can-i` command to see whether you can create the Kubernetes custom resources of the kinds needed by the OpenShift Pipelines Operator.

The custom resource you need to create an OpenShift Pipelines pipeline is of the kind `pipeline.tekton.dev` in the `tekton.dev` API group. To check that you can create it, run:

[source,bash,role=execute]
----
oc auth can-i create pipeline.tekton.dev
----

Or you can use the simplified form:

[source,bash,role=execute]
----
oc auth can-i create Pipeline
----

If the response is `yes`, you have the appropriate access.

Verify that you can create the rest of the Tekton custom resources needed for this workshop by running the commands below. All of the commands should respond with `yes`.

[source,bash,role=execute]
----
oc auth can-i create Task
----

[source,bash,role=execute]
----
oc auth can-i create PipelineResource
----

[source,bash,role=execute]
----
oc auth can-i create PipelineRun
----
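The individual checks above can also be run in one loop. A small convenience sketch — it includes a hypothetical stub so the loop also runs on a machine without `oc`:

```shell
# Run `oc auth can-i create <Kind>` for every Tekton kind used in this workshop.
if ! command -v oc >/dev/null 2>&1; then
  # Hypothetical stub: on a machine without oc, simply answer "yes"
  # for every check so the sketch still runs.
  oc() { echo yes; }
fi

for kind in Pipeline Task PipelineResource PipelineRun; do
  printf '%s: %s\n' "$kind" "$(oc auth can-i create "$kind")"
done
```

In the workshop environment, every line should end in `yes`.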

Now that we have verified that you can create the required resources, let's start the workshop.
Lines changed: 20 additions & 0 deletions
In this workshop, the pipeline you create uses tools such as https://github.com/openshift/source-to-image[s2i] and https://buildah.io/[Buildah] to build a container image for an application.

Building container images with build tools such as s2i, Buildah, and Kaniko requires privileged access to the cluster. OpenShift's default security settings do not allow running privileged containers unless explicitly configured.

The operator has created a `ServiceAccount` with the required permissions to run privileged pods for building images. The name of this service account is easy to remember: it is named _pipeline_.

You can verify that the _pipeline_ service account has been created by running the following command:

[source,bash,role=execute]
----
oc get serviceaccount pipeline
----

In addition to the privileged security context constraint (SCC), the _pipeline_ service account also has the `edit` role. This set of permissions allows _pipeline_ to push a container image to OpenShift's internal image registry.

_pipeline_ is only able to push to the section of OpenShift's internal image registry that corresponds to your OpenShift project namespace. This namespacing helps to separate projects on an OpenShift cluster.

The _pipeline_ service account executes PipelineRuns on your behalf. You will see an explicit reference to this service account when you trigger a pipeline run later in this workshop.

In the next section, you will set up the sample application on OpenShift that is deployed in this workshop.
Lines changed: 68 additions & 0 deletions
For this tutorial, you're going to use a simple Node.js application that interacts with a MongoDB database. This application needs to be deployed in a new project (i.e. Kubernetes namespace). This project has already been created for you, and you can see which project you are currently in by running:

[source,bash,role=execute]
----
oc project
----

You will deploy the _nodejs-ex_ sample application from the https://github.com/sclorg[sclorg] repository.

To prepare for _nodejs-ex_'s eventual deployment, you create Kubernetes objects that are supplementary to the application, such as a route (i.e. URL). The deployment will not complete, since no container image has been built for the _nodejs-ex_ application yet. You will complete this deployment in the following sections through a CI/CD pipeline.

Create the supplementary Kubernetes objects by running the command below:

[source,bash,role=execute]
----
oc create -f sampleapp/sampleapp.yaml
----

_nodejs-ex_ also needs a MongoDB database. You can deploy a container with MongoDB to your OpenShift project by running the following command:

[source,bash,role=execute]
----
oc new-app centos/mongodb-36-centos7 -e MONGODB_USER=admin MONGODB_DATABASE=mongodb MONGODB_PASSWORD=secret MONGODB_ADMIN_PASSWORD=super-secret
----

You should see `--> Success` in the output of the command, which verifies the successful deployment of the container image.

The command above uses a container image with a CentOS 7 operating system and MongoDB 3.6 installed. It also sets environment variables with the `-e` option. MongoDB needs these environment variables for its deployment: the username, database name, password, and admin password.

A service is an abstract way to expose an application running on a set of pods as a network service. Using a service name allows _nodejs-ex_ to reference a consistent endpoint even when the pod hosting your MongoDB container is replaced, for example by scaling pods up or down or by redeploying your MongoDB container image with updates.

You can see all the services in your OpenShift project, including the one for _nodejs-ex_, by running the following command:

[source,bash,role=execute]
----
oc get services
----

Now that you are familiar with Kubernetes services, go ahead and connect _nodejs-ex_ to the MongoDB. To do this, set the connection string in an environment variable by running the following command:

[source,bash,role=execute]
----
oc set env dc/nodejs-ex MONGO_URL="mongodb://admin:secret@mongodb-36-centos7:27017/mongodb"
----
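The connection string above is simply the standard MongoDB URL scheme assembled from the values passed to `oc new-app` earlier, plus the service name and MongoDB's default port. A sketch of how it is composed:

```shell
# Assemble the MongoDB connection URL from the values used with oc new-app.
MONGODB_USER=admin
MONGODB_PASSWORD=secret
MONGODB_SERVICE=mongodb-36-centos7   # the Kubernetes service name created by oc new-app
MONGODB_PORT=27017                   # MongoDB's default port
MONGODB_DATABASE=mongodb

echo "mongodb://${MONGODB_USER}:${MONGODB_PASSWORD}@${MONGODB_SERVICE}:${MONGODB_PORT}/${MONGODB_DATABASE}"
# -> mongodb://admin:secret@mongodb-36-centos7:27017/mongodb
```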

== Verify the deployment

To verify the creation of the resources needed to support _nodejs-ex_ and the MongoDB, you can head over to the OpenShift web console.

You can get to the web console by clicking on the Console tab next to the Terminal tab at the top center of the workshop in your browser.

You need to log in with username `admin` and password `admin`.

Make sure the Developer option is selected in the dropdown at the top left corner of the web console, as shown below:

image:images/developer-view.png[Developer View]

Next, open the Project dropdown menu and choose the project namespace you have been working in (_lab-tekton_).

Then click on the Topology tab on the left side of the web console if you don't already see what's in the image below. Once in the Topology view, you can see the deployment configs for the _nodejs-ex_ application and the MongoDB, which should look similar to the image below:

image:images/topology-view.png[Topology View]

You'll notice the white circle around the _nodejs-ex_ deployment config. This highlight means that the _nodejs-ex_ application isn't running yet: no container hosting the application has been built and deployed so far.

The _mongodb-36-centos7_ deployment config has a dark blue circle around it, meaning that a pod is running with a MongoDB container in it. The MongoDB should be all set to support the _nodejs-ex_ application at this point.

In the next section, you'll learn how to use Tekton tasks.
Lines changed: 148 additions & 0 deletions
A `Task` consists of a series of steps that are executed sequentially. Each step is executed in a separate container within the same task pod. Tasks can have inputs and outputs so that they can interact with other `Tasks` as part of a pipeline.

For this exercise, you will create the _s2i-nodejs_ task from the catalogue repositories using `oc`. This is the first of two tasks you add to your pipeline in this workshop.

The _s2i-nodejs_ task has been broken into pieces below to help highlight its key aspects.

_s2i-nodejs_ starts by defining a property called `inputs`, as shown below. Underneath `inputs`, a `resources` property specifies that a resource of type _git_ is required. This indicates that the task takes a git repository as an input.

[source,yaml]
----
spec:
  inputs:
    resources:
    - name: source
      type: git
----

The `params` property below defines the fields that must be specified when using the task (e.g. the version of Node.js to use).

[source,yaml]
----
    params:
    - name: VERSION
      description: The version of the nodejs
      default: '12'
    - name: PATH_CONTEXT
      description: The location of the path to run s2i from.
      default: .
    - name: TLSVERIFY
      description: Verify the TLS on the registry endpoint (for push/pull to a non-TLS registry)
      default: "true"
----
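When the task is later referenced from a pipeline, these defaults can be overridden. As an illustrative sketch only — the task reference name and values below are hypothetical, not taken from the workshop's pipeline definition:

[source,yaml]
----
# Hypothetical fragment of a Pipeline spec overriding the s2i-nodejs defaults.
tasks:
- name: build-app
  taskRef:
    name: s2i-nodejs
  params:
  - name: VERSION
    value: '10'
  - name: TLSVERIFY
    value: 'false'
----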

There is also an `outputs` property, shown below, used to specify that something is output as a result of running this task. The type of the output is _image_: this task creates an image from the git repository provided as input.

Many resource types are available, not only git and image. You can find out more about the possible resource types in the https://github.com/tektoncd/pipeline/blob/master/docs/resources.md#resource-types[Tekton documentation].

[source,yaml]
----
  outputs:
    resources:
    - name: image
      type: image
----
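At run time, the git input and image output are bound to concrete locations through PipelineResource objects. A sketch of what the git resource could look like — the resource name is illustrative, and the workshop's actual resource files live under `resources/`:

[source,yaml]
----
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: app-git
spec:
  type: git
  params:
  - name: url
    value: https://github.com/sclorg/nodejs-ex
----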

A `steps` property defines the steps that will run during this task. Each step is denoted by its name. _s2i-nodejs_ has three steps:

generate

[source,yaml]
----
  - name: generate
    image: quay.io/openshift-pipeline/s2i
    workingdir: /workspace/source
    command: ['s2i', 'build', '$(inputs.params.PATH_CONTEXT)', 'centos/nodejs-$(inputs.params.VERSION)-centos7', '--as-dockerfile', '/gen-source/Dockerfile.gen']
    volumeMounts:
    - name: gen-source
      mountPath: /gen-source
    resources:
      limits:
        cpu: 500m
        memory: 1Gi
      requests:
        cpu: 500m
        memory: 1Gi
----

build

[source,yaml]
----
  - name: build
    image: quay.io/buildah/stable
    workingdir: /gen-source
    command: ['buildah', 'bud', '--tls-verify=$(inputs.params.TLSVERIFY)', '--layers', '-f', '/gen-source/Dockerfile.gen', '-t', '$(outputs.resources.image.url)', '.']
    volumeMounts:
    - name: varlibcontainers
      mountPath: /var/lib/containers
    - name: gen-source
      mountPath: /gen-source
    resources:
      limits:
        cpu: 500m
        memory: 1Gi
      requests:
        cpu: 500m
        memory: 1Gi
    securityContext:
      privileged: true
----

push

[source,yaml]
----
  - name: push
    image: quay.io/buildah/stable
    command: ['buildah', 'push', '--tls-verify=$(inputs.params.TLSVERIFY)', '$(outputs.resources.image.url)', 'docker://$(outputs.resources.image.url)']
    volumeMounts:
    - name: varlibcontainers
      mountPath: /var/lib/containers
    resources:
      limits:
        cpu: 500m
        memory: 1Gi
      requests:
        cpu: 500m
        memory: 1Gi
    securityContext:
      privileged: true
----

Each step above runs serially in its own container. Since the generate step uses an s2i command to generate a Dockerfile from the source code in the git repository input, the image used for its container has s2i installed.

The build and push steps both use a Buildah image: one to build the Dockerfile created by the generate step, the other to push the resulting image to an image registry (i.e. the output of the task).

You can see the image used by each of these steps via the `image` property of the step.

The order of the steps above (i.e. 1. generate, 2. build, 3. push) specifies when the steps should run. For _s2i-nodejs_, this means _generate_ runs first, followed by _build_, and the _push_ step executes last.

Under the `resources` property of each step, you can define the amount of resources needed for the container in terms of CPU and memory.

[source,yaml]
----
    resources:
      limits:
        cpu: 500m
        memory: 1Gi
      requests:
        cpu: 500m
        memory: 1Gi
----

You can view the full definition of this task in the https://github.com/openshift/pipelines-catalog/blob/master/s2i-nodejs/s2i-nodejs-task.yaml[OpenShift Pipelines Catalog GitHub repository] or by using:

[source,bash,role=execute]
----
cat ./tektontasks/s2i-nodejs-task.yaml
----

Create the _s2i-nodejs_ task, which defines how to build a container image for the _nodejs-ex_ application and push the resulting image to an image registry:

[source,bash,role=execute]
----
oc create -f tektontasks/s2i-nodejs-task.yaml
----

In the next section, you will examine the second task definition needed for our pipeline.
Lines changed: 52 additions & 0 deletions
The _openshift-client_ task you will create is simpler, as shown below:

[source,yaml]
----
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: openshift-client
spec:
  inputs:
    params:
    - name: ARGS
      description: The OpenShift CLI arguments to run
      default: help
  steps:
  - name: oc
    image: quay.io/openshiftlabs/openshift-cli-tekton-workshop:2.0
    command: ["/usr/local/bin/oc"]
    args:
    - "$(inputs.params.ARGS)"
----

_openshift-client_ doesn't have any input or output resources associated with it, only the `ARGS` parameter. It also has just one step, named oc.

This step uses an image with `oc` installed and runs the `oc` root command along with any arguments passed to the step under the `args` property. This task allows you to run any command with `oc`. You will use it to deploy the image created by the _s2i-nodejs_ task to OpenShift. You will see how this takes place in the next section.
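As an illustrative sketch of how the `ARGS` parameter gets used — the task name and argument value below are hypothetical, not the workshop's actual pipeline definition:

[source,yaml]
----
# Hypothetical fragment of a Pipeline spec invoking the openshift-client task.
tasks:
- name: deploy
  taskRef:
    name: openshift-client
  params:
  - name: ARGS
    value: "rollout latest nodejs-ex"
----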

Create the _openshift-client_ task that will deploy the image created by _s2i-nodejs_ as a container on OpenShift:

[source,bash,role=execute]
----
oc create -f tektontasks/openshift-client-task.yaml
----

*Note*: For convenience, the tasks have been copied from their original locations in the Tekton and OpenShift catalogue git repositories into the workshop.

You can take a look at the list of tasks using the Tekton CLI (`tkn`):

[source,bash,role=execute]
----
tkn task ls
----

You should see output similar to this:

[source,bash]
----
NAME               AGE
openshift-client   58 seconds ago
s2i-nodejs         3 minutes ago
----

In the next section, you will create a pipeline that uses the _s2i-nodejs_ and _openshift-client_ tasks.
