website/docs/security/cluster-access-management/kubernetes-rbac.md (+11 additions, 8 deletions)
@@ -7,17 +7,20 @@ As previously mentioned, the cluster access management controls and associated A
In this section of the lab, we'll show how to configure access entries with granular permissions using Kubernetes groups. This is useful when the pre-defined access policies are too permissive. As part of the lab setup, we created an IAM role named `eks-workshop-carts-team`. In this scenario, we'll demonstrate how to use that role to provide a team that only works on the **carts** service with permissions that allow them to view all resources in the `carts` namespace, as well as delete pods.
First, let's create the Kubernetes objects that model our required permissions. This `Role` provides the permissions we outlined above:
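The manifest itself isn't shown in this diff; a minimal sketch of a `Role` matching the permissions described above (read-only access to everything in the `carts` namespace, plus the ability to delete pods) could look like the following. The role name `carts-team` is an assumption, not taken from the lab files:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: carts-team # hypothetical name for illustration
  namespace: carts
rules:
  # View (but not modify) all resources in the carts namespace
  - apiGroups: ["*"]
    resources: ["*"]
    verbs: ["get", "list", "watch"]
  # Additionally allow deleting pods
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["delete"]
```

A `RoleBinding` would then tie this `Role` to the Kubernetes group configured on the access entry for the `eks-workshop-carts-team` IAM role.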
website/docs/security/guardduty/log-monitoring/privileged_container_mount.md (+6 additions, 4 deletions)
@@ -7,11 +7,13 @@ In this lab you will be creating a container with `privileged` Security Context,
This exercise will generate two different findings: `PrivilegeEscalation:Kubernetes/PrivilegedContainer`, which indicates that a container was launched with privileged permissions, and `Persistence:Kubernetes/ContainerWithSensitiveMount`, which indicates that a sensitive external host path was mounted inside the container.
To simulate the finding you'll be using a pre-configured manifest with some specific parameters already set: `securityContext: privileged: true`, plus `volume` and `volumeMount` options mapping the `/etc` host directory to a `/host-etc` Pod volume mount.
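The pre-configured manifest isn't reproduced in this diff; a minimal sketch of a Pod combining those parameters (pod and container names are assumptions for illustration) might look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: privileged-demo # hypothetical name
spec:
  containers:
    - name: privileged-demo
      image: public.ecr.aws/docker/library/busybox:latest # placeholder image
      command: ["sleep", "3600"]
      securityContext:
        privileged: true # triggers PrivilegeEscalation:Kubernetes/PrivilegedContainer
      volumeMounts:
        - name: host-etc
          mountPath: /host-etc # triggers Persistence:Kubernetes/ContainerWithSensitiveMount
  volumes:
    - name: host-etc
      hostPath:
        path: /etc # sensitive host directory mounted into the container
```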
website/docs/security/kyverno/baseline-pss.md (+5 additions, 4 deletions)
@@ -22,11 +22,12 @@ To prevent such escalated privileged capabilities and avoid unauthorized use of
The baseline profile of the Pod Security Standards is a collection of the most fundamental and crucial steps that can be taken to secure Pods. Starting from Kyverno 1.8, an entire profile can be assigned to the cluster through a single rule. To learn more about the privileges blocked by the Baseline Profile, please refer to the [Kyverno documentation](https://kyverno.io/policies/#:~:text=Baseline%20Pod%20Security%20Standards,cluster%20through%20a%20single%20rule).
Note that the above policy is in `Enforce` mode and will block any requests to create privileged Pods.

1. `background: true` applies the policy to existing resources in addition to new ones
2. `validationFailureAction: Enforce` blocks non-compliant Pods from being created
3. `match.any.resources.kinds: [Pod]` applies the policy to all Pod resources cluster-wide
4. `validate.podSecurity` enforces Kubernetes Pod Security Standards at the `baseline` level with moderate security restrictions using the `latest` standards version
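The policy itself is elided from this diff; a minimal sketch of a Kyverno ClusterPolicy putting those four settings together (the policy and rule names are assumptions) could be:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: baseline-pod-security # hypothetical name
spec:
  background: true                    # also report on existing resources
  validationFailureAction: Enforce    # block, rather than just audit
  rules:
    - name: baseline
      match:
        any:
          - resources:
              kinds:
                - Pod                 # applies cluster-wide to all Pods
      validate:
        podSecurity:
          level: baseline             # Pod Security Standards baseline profile
          version: latest             # latest standards version
```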
website/docs/security/kyverno/creating-policy.md (+13 additions, 20 deletions)
@@ -3,24 +3,18 @@ title: "Creating a Simple Policy"
sidebar_position: 71
---
Kyverno has two kinds of Policy resources: **ClusterPolicy**, used for cluster-wide resources, and **Policy**, used for namespaced resources. To gain an understanding of Kyverno policies, we'll start our lab with a simple Pod label requirement. As you may know, labels in Kubernetes are used to tag resources in the cluster.
Below is a sample `ClusterPolicy` which will block any Pod creation that doesn't have the label `CostCenter`:
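The sample policy body is not included in this diff; based on the attributes discussed in this section (`validationFailureAction: Enforce`, a `match` on Pods, and a `pattern` requiring `metadata.labels` with `CostCenter`), a sketch of such a ClusterPolicy could look like the following. The policy name, rule name, and message text are assumptions:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-costcenter-label # hypothetical name
spec:
  validationFailureAction: Enforce # block non-compliant Pods (default is Audit)
  rules:
    - name: check-costcenter
      match:
        any:
          - resources:
              kinds:
                - Pod              # scope: any Pod resource
      validate:
        message: "The label 'CostCenter' is required." # shown on validation failure
        pattern:
          metadata:
            labels:
              CostCenter: "?*"     # label must exist with a non-empty value
```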
Take some time to examine the following details in the configuration:

1. `spec.validationFailureAction` tells Kyverno if the resource being validated should be allowed but reported (`Audit`) or blocked (`Enforce`). The default is `Audit`, but in our example it is set to `Enforce`
2. The `rules` section contains one or more rules to be validated
3. The `match` statement sets the scope of what will be checked. In this case, it's any `Pod` resource
4. The `validate` statement attempts to positively check what is defined. If the statement, when compared with the requested resource, is true, it's allowed. If false, it's blocked
5. The `message` is what gets displayed to a user if this rule fails validation
6. The `pattern` object defines what pattern will be checked in the resource. In this case, it's looking for `metadata.labels` with `CostCenter`
Create the policy using the following command:
@@ -106,13 +100,12 @@ As you can see, the admission webhook successfully validated the Policy and the
In the above examples, you checked how Validation Policies work in their default behavior defined in `validationFailureAction`. However, Kyverno can also be used to manage Mutating rules within the Policy, to modify any API Requests to satisfy or enforce the specified requirements on the Kubernetes resources. The resource mutation occurs before validation, so the validation rules will not contradict the changes performed by the mutation section.
Below is a sample Policy with a mutation rule defined, which will automatically add our label `CostCenter=IT` as a default to any `Pod`:
Notice the `mutate` section under the ClusterPolicy `spec`:

1. `match.any.resources.kinds: [Pod]` targets this `ClusterPolicy` to all Pod resources cluster-wide
2. `mutate` modifies resources during creation (vs. `validate`, which blocks or allows). `patchStrategicMerge.metadata.labels.CostCenter: IT` automatically adds the `CostCenter: IT` label to every Pod
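The mutation policy body is also elided from this diff; a sketch assembling the fields described above (policy and rule names assumed) could be:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-costcenter-label # hypothetical name
spec:
  rules:
    - name: add-costcenter
      match:
        any:
          - resources:
              kinds:
                - Pod              # applies to all Pods cluster-wide
      mutate:
        patchStrategicMerge:       # strategic-merge patch applied at admission
          metadata:
            labels:
              CostCenter: IT       # default label added to every Pod
```

Because mutation runs before validation, Pods created without the label would be patched with `CostCenter: IT` and then pass the earlier validation rule.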
Go ahead and create the above Policy using the following command:
website/docs/security/kyverno/restricting-images.md (+6 additions, 3 deletions)
@@ -26,9 +26,12 @@ To implement best practices, we'll define a policy that restricts the use of una
For this lab, we'll use the [Amazon ECR Public Gallery](https://public.ecr.aws/) as our trusted registry, blocking any containers that use images hosted in other registries. Here's a sample Kyverno policy to restrict image pulling for this use case:

1. `validationFailureAction: Enforce` blocks non-compliant Pods from being created
2. `background: true` applies the policy to existing resources in addition to new ones
3. `match.any.resources.kinds: [Pod]` applies the policy to all Pod resources cluster-wide
4. `validate.pattern` enforces that all container images must originate from the `public.ecr.aws/*` registry, blocking any images from unauthorized registries
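The sample policy isn't reproduced in this diff; a sketch matching the four points above (policy name, rule name, and message are assumptions) could be:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-image-registries # hypothetical name
spec:
  validationFailureAction: Enforce # block non-compliant Pods
  background: true                 # also report on existing resources
  rules:
    - name: validate-registries
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Images may only come from the Amazon ECR Public Gallery."
        pattern:
          spec:
            containers:
              - image: "public.ecr.aws/*" # every container image must match
```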
> Note: This policy doesn't restrict the usage of InitContainers or Ephemeral Containers to the referred repository.
To provide access to secrets stored in AWS Secrets Manager via the CSI driver, you'll need a `SecretProviderClass` - a namespaced custom resource that provides driver configurations and parameters matching the information in AWS Secrets Manager.
1. `provider: aws` specifies the AWS Secrets Store CSI driver
2. `parameters.objects` defines the AWS `secretsmanager` source secret named `$SECRET_NAME` and uses [jmesPath](https://jmespath.org/) to extract specific `username` and `password` fields into named aliases for Kubernetes consumption
3. `secretObjects` creates a standard `Opaque` Kubernetes secret named `catalog-secret` that maps the extracted `username` and `password` fields to secret keys
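The `SecretProviderClass` manifest itself is elided from this diff; a sketch combining the three pieces above (the resource name and namespace are assumptions, and `$SECRET_NAME` is left as the placeholder used in the lab) could be:

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: catalog-spc # hypothetical name
  namespace: catalog
spec:
  provider: aws # AWS Secrets Store CSI driver
  parameters:
    objects: |
      - objectName: "$SECRET_NAME"      # secret created in the previous step
        objectType: "secretsmanager"
        jmesPath:                       # extract fields from the JSON secret
          - path: username
            objectAlias: username
          - path: password
            objectAlias: password
  secretObjects: # sync the extracted fields into a Kubernetes Secret
    - secretName: catalog-secret
      type: Opaque
      data:
        - objectName: username
          key: username
        - objectName: password
          key: password
```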
The `$SECRET_NAME` secret referenced here is the one we created in AWS Secrets Manager in the previous step.
The Secret Store CSI Driver acts as an intermediary between Kubernetes and external secrets providers like AWS Secrets Manager. When configured with a SecretProviderClass, it can both mount secrets as files in Pod volumes and create synchronized Kubernetes Secret objects, providing flexibility in how applications consume these secrets.
We need to create a `ClusterSecretStore` resource - this is a cluster-wide SecretStore that can be referenced by ExternalSecrets from any namespace. Let's inspect the file we will use to create this `ClusterSecretStore`:
1. Set `service: SecretsManager` to use AWS Secrets Manager as the secret source
2. Use the `$AWS_REGION` environment variable to specify the AWS region where secrets are stored
3. `auth.jwt` uses IRSA to authenticate via the `external-secrets-sa` service account in the `external-secrets` namespace, which is linked to an IAM role with AWS Secrets Manager permissions
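The file contents aren't shown in this diff; a sketch of a `ClusterSecretStore` matching those three points (the name `cluster-secret-store` is taken from the command output later in this section, and `$AWS_REGION` is assumed to be substituted before applying) could be:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: cluster-secret-store
spec:
  provider:
    aws:
      service: SecretsManager # use AWS Secrets Manager as the source
      region: $AWS_REGION     # substituted from the environment before apply
      auth:
        jwt:                  # IRSA: JWT of the linked service account
          serviceAccountRef:
            name: external-secrets-sa
            namespace: external-secrets
```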
Let's use this file to create the `ClusterSecretStore` resource.
```bash
$ kubectl get clustersecretstores.external-secrets.io
NAME                   AGE   STATUS   CAPABILITIES   READY
cluster-secret-store   81s   Valid    ReadWrite      True
$ kubectl get clustersecretstores.external-secrets.io cluster-secret-store -o yaml | yq '.spec'
```

The ClusterSecretStore uses a [JSON Web Token (JWT)](https://jwt.io/) associated with our ServiceAccount to authenticate with AWS Secrets Manager.
Next, we'll create an `ExternalSecret` that defines what data should be fetched from AWS Secrets Manager and how it should be transformed into a Kubernetes Secret. We'll then update our `catalog` Deployment to use these credentials:
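The `ExternalSecret` manifest is not included in this diff; a sketch showing the general shape of such a resource referencing the `cluster-secret-store` above (the resource name, namespace, refresh interval, and the use of `dataFrom.extract` are assumptions) could be:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: catalog-external-secret # hypothetical name
  namespace: catalog
spec:
  refreshInterval: 1h           # how often to re-sync from Secrets Manager
  secretStoreRef:
    name: cluster-secret-store  # the ClusterSecretStore created above
    kind: ClusterSecretStore
  target:
    name: catalog-secret        # Kubernetes Secret to create/update
  dataFrom:
    - extract:
        key: $SECRET_NAME       # pull all key/value pairs from the JSON secret
```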