Commit 4074768

athavr and Raj Athavale authored
update: YAML file explanations: Security Labs #1394 (#1588)
Co-authored-by: Raj Athavale <athavr@amazon.com>
1 parent 986037a commit 4074768

7 files changed: +55 −95 lines changed

website/docs/security/cluster-access-management/kubernetes-rbac.md

Lines changed: 11 additions & 8 deletions

@@ -7,17 +7,20 @@ As previously mentioned, the cluster access management controls and associated A

 In this section of the lab, we'll show how to configure access entries with granular permissions using Kubernetes groups. This is useful when the pre-defined access policies are too permissive. As part of the lab setup, we created an IAM role named `eks-workshop-carts-team`. In this scenario, we'll demonstrate how to use that role to provide a team that only works on the **carts** service with permissions that allow them to view all resources in the `carts` namespace, but also delete pods.

- First, let's create the Kubernetes objects that model our required permissions. This `Role` provides the permissions we outlined above:
+ First, let's create the Kubernetes objects that model our required permissions. This Role provides the permissions we outlined above:

- ```file
- manifests/modules/security/cam/rbac/role.yaml
- ```
+ ::yaml{file="manifests/modules/security/cam/rbac/role.yaml" paths="metadata.namespace,rules.0,rules.1"}
+
+ 1. Restrict the Role permissions to apply only to the `carts` namespace
+ 2. This rule allows read-only operations (`verbs: ["get", "list", "watch"]`) on all resources (`resources: ["*"]`)
+ 3. This rule allows delete operations (`verbs: ["delete"]`) on pods only (`resources: ["pods"]`)

- And this `RoleBinding` will map the role to a group named `carts-team`:
-
- ```file
- manifests/modules/security/cam/rbac/rolebinding.yaml
- ```
+ And this `RoleBinding` will map the Role to a Group named `carts-team`:
+
+ ::yaml{file="manifests/modules/security/cam/rbac/rolebinding.yaml" paths="roleRef,subjects.0"}
+
+ 1. `roleRef` references the `carts-team-role` Role we created earlier
+ 2. `subjects` specifies that a Group named `carts-team` will get the permissions associated with the Role
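
For reference, the two manifests referenced by the directives above plausibly look like the sketch below. This is a reconstruction from the numbered annotations, not the actual repository files; the binding name and the `apiGroups` values are assumptions:

```yaml
# Sketch of manifests/modules/security/cam/rbac/role.yaml (reconstructed)
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: carts-team-role
  namespace: carts # (1) permissions apply only within the carts namespace
rules:
  - apiGroups: ["*"]
    resources: ["*"] # (2) read-only access to all resources
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["pods"] # (3) delete allowed for pods only
    verbs: ["delete"]
---
# Sketch of manifests/modules/security/cam/rbac/rolebinding.yaml (reconstructed)
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: carts-team-role-binding # name assumed for illustration
  namespace: carts
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: carts-team-role # references the Role above
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: carts-team # the Group that receives the Role's permissions
```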

 Let's apply these manifests:

website/docs/security/guardduty/log-monitoring/privileged_container_mount.md

Lines changed: 6 additions & 4 deletions

@@ -7,11 +7,13 @@ In this lab you will be creating a container with `privileged` Security Context,

 This exercise will generate two different findings: `PrivilegeEscalation:Kubernetes/PrivilegedContainer`, which indicates that a container was launched with privileged permissions, and `Persistence:Kubernetes/ContainerWithSensitiveMount`, which indicates a sensitive external host path mounted inside the container.

- To simulate the finding you'll be using a pre-configure manifest with some specific parameters already set, `SecurityContext: privileged: true` and also the `volume` and `volumeMount` options, mapping the `/etc` host directory to `/host-etc` Pod volume mount.
+ To simulate the findings you'll be using a pre-configured manifest with some specific parameters already set:

- ```file
- manifests/modules/security/Guardduty/mount/privileged-pod-example.yaml
- ```
+ ::yaml{file="manifests/modules/security/Guardduty/mount/privileged-pod-example.yaml" paths="spec.containers.0.securityContext,spec.containers.0.volumeMounts.0.mountPath,spec.volumes.0.hostPath.path"}
+
+ 1. Setting `securityContext.privileged: true` grants full root privileges to the Pod
+ 2. `mountPath: /host-etc` specifies that the mapped host volume will be accessible inside the container at `/host-etc`
+ 3. `path: /etc` specifies that the `/etc` directory from the host system will be the source directory for the mount
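
The referenced manifest presumably resembles the sketch below, reconstructed from the annotations above; the Pod name, container name, and image are assumptions for illustration:

```yaml
# Sketch of privileged-pod-example.yaml (reconstructed, not the actual file)
apiVersion: v1
kind: Pod
metadata:
  name: privileged-pod-example # name assumed
spec:
  containers:
    - name: privileged-container # name assumed
      image: public.ecr.aws/docker/library/busybox:latest # image assumed
      securityContext:
        privileged: true # (1) full root privileges; triggers the PrivilegedContainer finding
      volumeMounts:
        - name: host-etc
          mountPath: /host-etc # (2) host volume visible here inside the container
  volumes:
    - name: host-etc
      hostPath:
        path: /etc # (3) sensitive host directory as the mount source
```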

 Apply the manifest shown above with the following command:

website/docs/security/kyverno/baseline-pss.md

Lines changed: 5 additions & 4 deletions

@@ -22,11 +22,12 @@ To prevent such escalated privileged capabilities and avoid unauthorized use of

 The baseline profile of the Pod Security Standards is a collection of the most fundamental and crucial steps that can be taken to secure Pods. Starting from Kyverno 1.8, an entire profile can be assigned to the cluster through a single rule. To learn more about the privileges blocked by the Baseline Profile, please refer to the [Kyverno documentation](https://kyverno.io/policies/#:~:text=Baseline%20Pod%20Security%20Standards,cluster%20through%20a%20single%20rule).

- ```file
- manifests/modules/security/kyverno/baseline-policy/baseline-policy.yaml
- ```
+ ::yaml{file="manifests/modules/security/kyverno/baseline-policy/baseline-policy.yaml" paths="spec.background,spec.validationFailureAction,spec.rules.0.match,spec.rules.0.validate"}

- Note that the above policy is in `Enforce` mode and will block any requests to create privileged Pods.
+ 1. `background: true` applies the policy to existing resources in addition to new ones
+ 2. `validationFailureAction: Enforce` blocks non-compliant Pods from being created
+ 3. `match.any.resources.kinds: [Pod]` applies the policy to all Pod resources cluster-wide
+ 4. `validate.podSecurity` enforces the Kubernetes Pod Security Standards at the `baseline` level, with moderate security restrictions, using the `latest` standards version
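
A policy matching those annotations might look roughly like this sketch (a reconstruction using Kyverno's built-in `podSecurity` subrule; the policy and rule names are assumptions):

```yaml
# Sketch of baseline-policy.yaml (reconstructed, not the actual file)
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: baseline-policy # name assumed
spec:
  background: true # (1) also evaluate resources that already exist
  validationFailureAction: Enforce # (2) block, don't just report
  rules:
    - name: baseline # name assumed
      match:
        any:
          - resources:
              kinds:
                - Pod # (3) all Pods cluster-wide
      validate:
        podSecurity: # (4) entire PSS profile applied via a single rule
          level: baseline
          version: latest
```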

 Go ahead and apply the Baseline Policy:

website/docs/security/kyverno/creating-policy.md

Lines changed: 13 additions & 20 deletions

@@ -3,24 +3,18 @@ title: "Creating a Simple Policy"
 sidebar_position: 71
 ---

- To gain an understanding of Kyverno policies, we'll start our lab with a simple Pod label requirement. As you may know, labels in Kubernetes are used to tag resources in the cluster.
+ Kyverno has two kinds of Policy resources: **ClusterPolicy**, used for cluster-wide resources, and **Policy**, used for namespaced resources. To gain an understanding of Kyverno policies, we'll start our lab with a simple Pod label requirement. As you may know, labels in Kubernetes are used to tag resources in the cluster.

- Below is a sample policy requiring a Label `CostCenter`:
+ Below is a sample `ClusterPolicy` which will block any Pod creation that doesn't have the label `CostCenter`:

- ```file
- manifests/modules/security/kyverno/simple-policy/require-labels-policy.yaml
- ```
-
- Kyverno has two kinds of Policy resources: **ClusterPolicy** used for Cluster-Wide Resources and **Policy** used for Namespaced Resources. The example above shows a ClusterPolicy. Take some time to examine the following details in the configuration:
+ ::yaml{file="manifests/modules/security/kyverno/simple-policy/require-labels-policy.yaml" paths="spec.validationFailureAction,spec.rules,spec.rules.0.match,spec.rules.0.validate,spec.rules.0.validate.message,spec.rules.0.validate.pattern"}

- - Under the `spec` section of the Policy, there's an attribute `validationFailureAction`. It tells Kyverno if the resource being validated should be allowed but reported (`Audit`) or blocked (`Enforce`). The default is `Audit`, but our example is set to `Enforce`.
- - The `rules` section contains one or more rules to be validated.
- - The `match` statement sets the scope of what will be checked. In this case, it's any `Pod` resource.
- - The `validate` statement attempts to positively check what is defined. If the statement, when compared with the requested resource, is true, it's allowed. If false, it's blocked.
- - The `message` is what gets displayed to a user if this rule fails validation.
- - The `pattern` object defines what pattern will be checked in the resource. In this case, it's looking for `metadata.labels` with `CostCenter`.
-
- This example policy will block any Pod creation that doesn't have the label `CostCenter`.
+ 1. `spec.validationFailureAction` tells Kyverno whether the resource being validated should be allowed but reported (`Audit`) or blocked (`Enforce`). The default is `Audit`, but in our example it is set to `Enforce`
+ 2. The `rules` section contains one or more rules to be validated
+ 3. The `match` statement sets the scope of what will be checked. In this case, it's any Pod resource
+ 4. The `validate` statement attempts to positively check what is defined. If the statement, when compared with the requested resource, is true, it's allowed. If false, it's blocked
+ 5. The `message` is what gets displayed to a user if this rule fails validation
+ 6. The `pattern` object defines what pattern will be checked in the resource. In this case, it's looking for `metadata.labels` with `CostCenter`
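
The referenced policy presumably looks something like this sketch, reconstructed from the annotations above (the policy name, rule name, and message wording are assumptions):

```yaml
# Sketch of require-labels-policy.yaml (reconstructed, not the actual file)
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-labels # name assumed
spec:
  validationFailureAction: Enforce # (1) block non-compliant requests
  rules: # (2) one rule in this policy
    - name: check-costcenter-label # name assumed
      match: # (3) any Pod resource
        any:
          - resources:
              kinds:
                - Pod
      validate: # (4) positive check against the pattern below
        message: "The label 'CostCenter' is required." # (5) shown on failure
        pattern: # (6) metadata.labels must contain a non-empty CostCenter
          metadata:
            labels:
              CostCenter: "?*"
```

In Kyverno patterns, `"?*"` matches any non-empty string, so any value for `CostCenter` satisfies the rule while a missing label fails it.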

 Create the policy using the following command:

@@ -106,13 +100,12 @@ As you can see, the admission webhook successfully validated the Policy and the

 In the above examples, you checked how Validation Policies work in their default behavior defined in `validationFailureAction`. However, Kyverno can also be used to manage mutating rules within a Policy, modifying API requests to satisfy or enforce the specified requirements on Kubernetes resources. Resource mutation occurs before validation, so the validation rules will not contradict the changes performed by the mutation section.

- Below is a sample Policy with a mutation rule defined, which will be used to automatically add our label `CostCenter=IT` as default to any `Pod`:
+ Below is a sample Policy with a mutation rule defined:

- ```file
- manifests/modules/security/kyverno/simple-policy/add-labels-mutation-policy.yaml
- ```
+ ::yaml{file="manifests/modules/security/kyverno/simple-policy/add-labels-mutation-policy.yaml" paths="spec.rules.0.match,spec.rules.0.mutate"}

- Notice the `mutate` section under the ClusterPolicy `spec`.
+ 1. `match.any.resources.kinds: [Pod]` targets this `ClusterPolicy` at all Pod resources cluster-wide
+ 2. `mutate` modifies resources during creation (whereas `validate` blocks or allows them). `patchStrategicMerge.metadata.labels.CostCenter: IT` automatically adds the `CostCenter: IT` label to every Pod
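
A mutation policy of that shape might be sketched as follows (a reconstruction from the annotations; the policy and rule names are assumptions):

```yaml
# Sketch of add-labels-mutation-policy.yaml (reconstructed, not the actual file)
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-labels # name assumed
spec:
  rules:
    - name: add-costcenter-label # name assumed
      match: # (1) all Pods cluster-wide
        any:
          - resources:
              kinds:
                - Pod
      mutate: # (2) rewrite the API request before validation runs
        patchStrategicMerge:
          metadata:
            labels:
              CostCenter: IT # default label merged into every Pod
```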

 Go ahead and create the above Policy using the following command:

website/docs/security/kyverno/restricting-images.md

Lines changed: 6 additions & 3 deletions

@@ -26,9 +26,12 @@ To implement best practices, we'll define a policy that restricts the use of una

 For this lab, we'll use the [Amazon ECR Public Gallery](https://public.ecr.aws/) as our trusted registry, blocking any containers that use images hosted in other registries. Here's a sample Kyverno policy to restrict image pulling for this use case:

- ```file
- manifests/modules/security/kyverno/images/restrict-registries.yaml
- ```
+ ::yaml{file="manifests/modules/security/kyverno/images/restrict-registries.yaml" paths="spec.validationFailureAction,spec.background,spec.rules.0.match,spec.rules.0.validate.pattern"}
+
+ 1. `validationFailureAction: Enforce` blocks non-compliant Pods from being created
+ 2. `background: true` applies the policy to existing resources in addition to new ones
+ 3. `match.any.resources.kinds: [Pod]` applies the policy to all Pod resources cluster-wide
+ 4. `validate.pattern` enforces that all container images must originate from the `public.ecr.aws/*` registry, blocking any images from unauthorized registries
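
Such a registry-restriction policy might be sketched like this (a reconstruction from the annotations; the policy name, rule name, and failure message are assumptions):

```yaml
# Sketch of restrict-registries.yaml (reconstructed, not the actual file)
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-image-registries # name assumed
spec:
  validationFailureAction: Enforce # (1) block non-compliant Pods
  background: true # (2) also evaluate existing resources
  rules:
    - name: validate-registries # name assumed
      match: # (3) all Pods cluster-wide
        any:
          - resources:
              kinds:
                - Pod
      validate: # (4) every container image must come from the trusted registry
        message: "Images may only come from public.ecr.aws." # message assumed
        pattern:
          spec:
            containers:
              - image: "public.ecr.aws/*"
```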

 > Note: This policy doesn't restrict the usage of InitContainers or Ephemeral Containers to the referred repository.

website/docs/security/secrets-management/secrets-manager/ascp.md

Lines changed: 6 additions & 32 deletions

@@ -35,43 +35,17 @@ pod/secrets-store-csi-driver-provider-aws-dzg9r 1/1 Running 0 2

 To provide access to secrets stored in AWS Secrets Manager via the CSI driver, you'll need a `SecretProviderClass` - a namespaced custom resource that provides driver configurations and parameters matching the information in AWS Secrets Manager.

- ```file
- manifests/modules/security/secrets-manager/secret-provider-class.yaml
- ```
+ ::yaml{file="manifests/modules/security/secrets-manager/secret-provider-class.yaml" paths="spec.provider,spec.parameters.objects,spec.secretObjects.0"}
+
+ 1. `provider: aws` specifies the AWS Secrets Store CSI driver
+ 2. `parameters.objects` defines the AWS `secretsmanager` source secret named `$SECRET_NAME` and uses [jmesPath](https://jmespath.org/) to extract the specific `username` and `password` fields into named aliases for Kubernetes consumption
+ 3. `secretObjects` creates a standard `Opaque` Kubernetes Secret named `catalog-secret` that maps the extracted `username` and `password` fields to secret keys
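
Pulling the annotated fields together, the referenced `SecretProviderClass` plausibly takes this form (a sketch assembled from the annotations and the resource dumps elsewhere in this diff; `$SECRET_NAME` is substituted by `envsubst` at apply time):

```yaml
# Sketch of secret-provider-class.yaml (reconstructed, not the actual file)
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: catalog-spc
  namespace: catalog
spec:
  provider: aws # (1) AWS Secrets Store CSI driver
  parameters:
    objects: | # (2) source secret plus jmesPath field extraction
      - objectName: "$SECRET_NAME"
        objectType: "secretsmanager"
        jmesPath:
          - path: username
            objectAlias: username
          - path: password
            objectAlias: password
  secretObjects: # (3) sync extracted fields into a Kubernetes Secret
    - secretName: catalog-secret
      type: Opaque
      data:
        - key: username
          objectName: username
        - key: password
          objectName: password
```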

- Let's create this resource and examine its two main configuration sections:
+ Let's create this resource:

 ```bash
 $ cat ~/environment/eks-workshop/modules/security/secrets-manager/secret-provider-class.yaml \
   | envsubst | kubectl apply -f -
 ```

- First, the `objects` parameter points to a secret named `$SECRET_NAME` that we created in AWS Secrets Manager in the previous step. Note that we're using [jmesPath](https://jmespath.org/) to extract specific key-value pairs from the JSON-formatted secret:
-
- ```bash
- $ kubectl get secretproviderclass -n catalog catalog-spc -o yaml | yq '.spec.parameters.objects'
-
- - objectName: "eks-workshop-catalog-secret-WDD8yS"
-   objectType: "secretsmanager"
-   jmesPath:
-     - path: username
-       objectAlias: username
-     - path: password
-       objectAlias: password
- ```
-
- Second, the `secretObjects` section defines how to create and sync a Kubernetes Secret with data from the AWS Secrets Manager secret. When mounted to a Pod, the SecretProviderClass will create a Kubernetes Secret (if it doesn't exist) named `catalog-secret` and sync the values from AWS Secrets Manager:
-
- ```bash
- $ kubectl get secretproviderclass -n catalog catalog-spc -o yaml | yq '.spec.secretObjects'
-
- - data:
-     - key: username
-       objectName: username
-     - key: password
-       objectName: password
-   secretName: catalog-secret
-   type: Opaque
- ```

 The Secret Store CSI Driver acts as an intermediary between Kubernetes and external secrets providers like AWS Secrets Manager. When configured with a SecretProviderClass, it can both mount secrets as files in Pod volumes and create synchronized Kubernetes Secret objects, providing flexibility in how applications consume these secrets.

website/docs/security/secrets-management/secrets-manager/external-secrets.md

Lines changed: 8 additions & 24 deletions

@@ -24,37 +24,21 @@ $ kubectl -n external-secrets describe sa external-secrets-sa | grep Annotations
 Annotations: eks.amazonaws.com/role-arn: arn:aws:iam::1234567890:role/eks-workshop-external-secrets-sa-irsa
 ```

- We need to create a `ClusterSecretStore` resource - this is a cluster-wide SecretStore that can be referenced by ExternalSecrets from any namespace:
+ We need to create a `ClusterSecretStore` resource - this is a cluster-wide SecretStore that can be referenced by ExternalSecrets from any namespace. Let's inspect the file we'll use to create this `ClusterSecretStore`:

- ```file
- manifests/modules/security/secrets-manager/cluster-secret-store.yaml
- ```
+ ::yaml{file="manifests/modules/security/secrets-manager/cluster-secret-store.yaml" paths="spec.provider.aws.service,spec.provider.aws.region,spec.provider.aws.auth.jwt"}

- ```bash
- $ cat ~/environment/eks-workshop/modules/security/secrets-manager/cluster-secret-store.yaml \
-   | envsubst | kubectl apply -f -
- ```
+ 1. Set `service: SecretsManager` to use AWS Secrets Manager as the secret source
+ 2. Use the `$AWS_REGION` environment variable to specify the AWS region where secrets are stored
+ 3. `auth.jwt` uses IRSA to authenticate via the `external-secrets-sa` service account in the `external-secrets` namespace, which is linked to an IAM role with AWS Secrets Manager permissions
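
Based on the annotations and the `yq` spec dump removed elsewhere in this diff, the file plausibly looks like this sketch (the resource name and `apiVersion` are assumptions; `$AWS_REGION` is substituted by `envsubst` before apply):

```yaml
# Sketch of cluster-secret-store.yaml (reconstructed, not the actual file)
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: cluster-secret-store
spec:
  provider:
    aws:
      service: SecretsManager # (1) AWS Secrets Manager as the source
      region: $AWS_REGION # (2) replaced by envsubst at apply time
      auth:
        jwt: # (3) IRSA: JWT auth via the annotated service account
          serviceAccountRef:
            name: external-secrets-sa
            namespace: external-secrets
```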

- Let's examine the specifications of this newly created resource:
+ Let's use this file to create the ClusterSecretStore resource:

 ```bash
- $ kubectl get clustersecretstores.external-secrets.io
- NAME                   AGE   STATUS   CAPABILITIES   READY
- cluster-secret-store   81s   Valid    ReadWrite      True
- $ kubectl get clustersecretstores.external-secrets.io cluster-secret-store -o yaml | yq '.spec'
- provider:
-   aws:
-     auth:
-       jwt:
-         serviceAccountRef:
-           name: external-secrets-sa
-           namespace: external-secrets
-     region: us-west-2
-     service: SecretsManager
+ $ cat ~/environment/eks-workshop/modules/security/secrets-manager/cluster-secret-store.yaml \
+   | envsubst | kubectl apply -f -
 ```

- The ClusterSecretStore uses a [JSON Web Token (JWT)](https://jwt.io/) referenced to our ServiceAccount to authenticate with AWS Secrets Manager.

 Next, we'll create an `ExternalSecret` that defines what data should be fetched from AWS Secrets Manager and how it should be transformed into a Kubernetes Secret. We'll then update our `catalog` Deployment to use these credentials:

 ```kustomization
