Merged
29 commits
5fcbfce
apply link changes for 8.6
leiicamundi Mar 4, 2025
dcbaece
wip files
leiicamundi Mar 5, 2025
97e4b9c
update links
leiicamundi Mar 5, 2025
1a335c5
re-integrate proper link structure
leiicamundi Mar 6, 2025
23421bc
fix blob
leiicamundi Mar 6, 2025
0cc3779
fix 8.6 broker
leiicamundi Mar 6, 2025
6177ba0
extract check topology command
leiicamundi Mar 6, 2025
35319a4
doc as code inc
leiicamundi Mar 6, 2025
260afa1
update instructions
leiicamundi Mar 7, 2025
6888e8f
update links for 8.7
leiicamundi Mar 11, 2025
77b7fd2
fix 8.8 links
leiicamundi Mar 12, 2025
1747e9d
fix links
leiicamundi Mar 12, 2025
a7d9319
fix link
leiicamundi Mar 12, 2025
7b39ff1
fix some broken links
leiicamundi Mar 12, 2025
9004290
restore Modeler check section
leiicamundi Mar 12, 2025
7eda986
update link for module
leiicamundi Mar 12, 2025
3831488
update link for module
leiicamundi Mar 12, 2025
97d5639
fix indent
leiicamundi Mar 12, 2025
ea64c64
remove dates from openshift support
leiicamundi Mar 12, 2025
379daa6
fix indentation
leiicamundi Mar 12, 2025
2535089
update 8.7 openshift
leiicamundi Mar 12, 2025
f69d026
update 8.7 openshift
leiicamundi Mar 12, 2025
3a81d92
remove outdate
leiicamundi Mar 12, 2025
01a9573
Merge branch 'main' into feature/integrate-reference-arch-changes
leiicamundi Mar 12, 2025
aaa55f7
fix the token
leiicamundi Mar 12, 2025
3d37b3c
update 8.6 links
leiicamundi Mar 13, 2025
67f6144
Merge branch 'main' into feature/integrate-reference-arch-changes
leiicamundi Mar 13, 2025
7d0bf37
update feature/rosa-8.7 to main
leiicamundi Mar 13, 2025
4df965f
Merge branch 'main' into feature/integrate-reference-arch-changes
leiicamundi Mar 13, 2025
165 changes: 57 additions & 108 deletions docs/self-managed/setup/deploy/amazon/amazon-eks/eks-helm.md
leaving this as a single comment for the file to switch it to main for the next version.

@@ -408,12 +408,8 @@ identity:

Once you've prepared the `values.yml` file, run the following `envsubst` command to substitute the environment variables with their actual values:

```bash
# generate the final values
envsubst < values.yml > generated-values.yml

# print the result
cat generated-values.yml
```bash reference
https://github.com/camunda/camunda-deployment-references/blob/feature/rosa-8.8/generic/kubernetes/single-region/procedure/assemble-envsubst-values.sh
```

Next, store various passwords in a Kubernetes secret, which will be used by the Helm chart. Below is an example of how to set up the required secret. You can use `openssl` to generate random secrets and store them in environment variables:
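The secret-creation commands themselves sit outside this diff hunk; the following is a minimal sketch of the idea, with placeholder variable, namespace, and secret names rather than the exact ones used by the chart:

```bash
# Generate random passwords and keep them in environment variables (placeholder names).
export KEYCLOAK_ADMIN_PASSWORD="$(openssl rand -hex 16)"
export POSTGRES_PASSWORD="$(openssl rand -hex 16)"

# Store them in a Kubernetes secret that the Helm values can reference (placeholder secret and key names).
kubectl create secret generic camunda-credentials \
  --namespace camunda \
  --from-literal=keycloak-admin-password="$KEYCLOAK_ADMIN_PASSWORD" \
  --from-literal=postgres-password="$POSTGRES_PASSWORD"
```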
@@ -452,17 +448,8 @@ This guide uses `helm upgrade --install` as it runs install on initial deploymen

You can track the progress of the installation using the following command:

```bash
watch -n 5 '
kubectl get pods -n camunda --output=wide;
if [ $(kubectl get pods -n camunda --field-selector=status.phase!=Running -o name | wc -l) -eq 0 ] &&
[ $(kubectl get pods -n camunda -o json | jq -r ".items[] | select(.status.containerStatuses[]?.ready == false)" | wc -l) -eq 0 ];
then
echo "All pods are Running and Healthy - Installation completed!";
else
echo "Some pods are not Running or Healthy";
fi
'
```bash reference
https://github.com/camunda/camunda-deployment-references/blob/feature/rosa-8.8/generic/kubernetes/single-region/procedure/check-deployment-ready.sh
```

<details>
@@ -622,6 +609,9 @@ Console:

### Use the token

<Tabs groupId="c8-connectivity">
<TabItem value="rest-api" label="REST API" default>

For a detailed guide on generating and using a token, please consult the relevant documentation on [authenticating with the REST API](./../../../../../apis-tools/camunda-api-rest/camunda-api-rest-authentication.md?environment=self-managed).

<Tabs groupId="domain">
@@ -654,20 +644,10 @@ export ZEEBE_AUTHORIZATION_SERVER_URL=http://localhost:18080/auth/realms/camunda

</Tabs>

Generate a temporary token to access the REST API, then capture the value of the `access_token` property and store it as your token.

```shell
export TOKEN=$(curl --location --request POST "${ZEEBE_AUTHORIZATION_SERVER_URL}" \
--header "Content-Type: application/x-www-form-urlencoded" \
--data-urlencode "client_id=${ZEEBE_CLIENT_ID}" \
--data-urlencode "client_secret=${ZEEBE_CLIENT_SECRET}" \
--data-urlencode "grant_type=client_credentials" | jq '.access_token' -r)
```

Use the stored token, in our case `TOKEN`, to use the REST API to print the cluster topology.
Generate a temporary token to access the REST API, then capture the value of the `access_token` property and store it as your token. Use the stored token (referred to as `TOKEN` in this case) to interact with the REST API and display the cluster topology:

```shell
curl --header "Authorization: Bearer ${TOKEN}" "${ZEEBE_ADDRESS_REST}/v2/topology"
```bash reference
https://github.com/camunda/camunda-deployment-references/blob/feature/rosa-8.8/generic/kubernetes/single-region/procedure/check-zeebe-cluster-topology.sh
```

...and results in the following output:
@@ -676,89 +656,58 @@
<summary>Example output</summary>
<summary>

```shell
{
"brokers": [
{
"nodeId": 0,
"host": "camunda-zeebe-0.camunda-zeebe",
"port": 26501,
"partitions": [
{
"partitionId": 1,
"role": "leader",
"health": "healthy"
},
{
"partitionId": 2,
"role": "follower",
"health": "healthy"
},
{
"partitionId": 3,
"role": "follower",
"health": "healthy"
}
],
"version": "8.6.0"
},
{
"nodeId": 1,
"host": "camunda-zeebe-1.camunda-zeebe",
"port": 26501,
"partitions": [
{
"partitionId": 1,
"role": "follower",
"health": "healthy"
},
{
"partitionId": 2,
"role": "leader",
"health": "healthy"
},
{
"partitionId": 3,
"role": "follower",
"health": "healthy"
}
],
"version": "8.6.0"
},
{
"nodeId": 2,
"host": "camunda-zeebe-2.camunda-zeebe",
"port": 26501,
"partitions": [
{
"partitionId": 1,
"role": "follower",
"health": "healthy"
},
{
"partitionId": 2,
"role": "follower",
"health": "healthy"
},
{
"partitionId": 3,
"role": "leader",
"health": "healthy"
}
],
"version": "8.6.0"
}
],
"clusterSize": 3,
"partitionsCount": 3,
"replicationFactor": 3,
"gatewayVersion": "8.6.0"
}
```json reference
https://github.com/camunda/camunda-deployment-references/blob/feature/rosa-8.8/generic/kubernetes/single-region/procedure/check-zeebe-cluster-topology-output.json
```

</summary>
</details>

</TabItem>
<TabItem value="modeler" label="Desktop Modeler">

Follow our existing [Modeler guide on deploying a diagram](/self-managed/modeler/desktop-modeler/deploy-to-self-managed.md). Below are the values required to configure the connection in Modeler:

<Tabs groupId="domain" defaultValue="with" queryString values={
[
{label: 'With domain', value: 'with' },
{label: 'Without domain', value: 'without' },
]}>

<TabItem value="with">

The following values are required for OAuth authentication:

- **Cluster endpoint:** `https://zeebe.$DOMAIN_NAME`, replacing `$DOMAIN_NAME` with your domain
- **Client ID:** Retrieve the client ID value from the Identity page of the M2M application you created
- **Client Secret:** Retrieve the client secret value from the Identity page of the M2M application you created
- **OAuth Token URL:** `https://$DOMAIN_NAME/auth/realms/camunda-platform/protocol/openid-connect/token`, replacing `$DOMAIN_NAME` with your domain
- **Audience:** `zeebe-api`, the default for Camunda 8 Self-Managed

</TabItem>

<TabItem value="without">

This requires port-forwarding the Zeebe Gateway to be able to connect to the cluster:

```shell
kubectl port-forward services/camunda-zeebe-gateway 26500:26500 --namespace camunda
```

The following values are required for OAuth authentication:

- **Cluster endpoint:** `http://localhost:26500`
- **Client ID:** Retrieve the client ID value from the Identity page of the M2M application you created
- **Client Secret:** Retrieve the client secret value from the Identity page of the M2M application you created
- **OAuth Token URL:** `http://localhost:18080/auth/realms/camunda-platform/protocol/openid-connect/token`
- **Audience:** `zeebe-api`, the default for Camunda 8 Self-Managed

</TabItem>
</Tabs>

</TabItem>
</Tabs>

## Test the installation with payment example application

To test your installation with the deployment of a sample application, refer to the [installing payment example guide](../../../guides/installing-payment-example.md).
2 changes: 1 addition & 1 deletion docs/self-managed/setup/deploy/amazon/aws-ec2.md
@@ -55,7 +55,7 @@ Alternatively, the same setup can run with a single AWS EC2 instance, but be awa

- An AWS account to create any resources within AWS.
- At a high level, permissions are required at the **ec2**, **iam**, **elasticloadbalancing**, **kms**, **logs**, and **es** level.
- For a more fine-grained view of the permissions, check this [example policy](https://github.com/camunda/camunda-deployment-references/blob/main/aws/ec2/example/policy.json).
- For a more fine-grained view of the permissions, check this [example policy](https://github.com/camunda/camunda-deployment-references/tree/feature/rosa-8.8/aws/ec2/example/policy.json).
- Terraform (1.7+)
- Unix based Operating System (OS) with ssh and sftp
- Windows may be used with [Cygwin](https://www.cygwin.com/) or [Windows WSL](https://learn.microsoft.com/en-us/windows/wsl/install) but has not been tested
@@ -161,7 +161,7 @@ This module sets up the foundational configuration for ROSA HCP and Terraform us

We will leverage [Terraform modules](https://developer.hashicorp.com/terraform/language/modules), which allow us to abstract resources into reusable components, simplifying infrastructure management.

The [Camunda-provided module](https://github.com/camunda/camunda-tf-rosa) is publicly available and serves as a starting point for deploying Red Hat OpenShift clusters on AWS using a Hosted Control Plane.
The [Camunda-provided module](https://github.com/camunda/camunda-deployment-references/tree/feature/rosa-8.8/aws/openshift/rosa-hcp-dual-region/) is publicly available and serves as a starting point for deploying Red Hat OpenShift clusters on AWS using a Hosted Control Plane.
It is highly recommended to review this module before implementation to understand its structure and capabilities.

Please note that this module is based on the official [ROSA HCP Terraform module documentation](https://docs.openshift.com/rosa/rosa_hcp/terraform/rosa-hcp-creating-a-cluster-quickly-terraform.html).
@@ -287,21 +287,21 @@ this guide uses a dedicated [aws terraform provider](https://registry.terraform.
This configuration will use the previously created S3 bucket for storing the Terraform state file:

```hcl reference
https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp-dual-region/terraform/clusters/config.tf
https://github.com/camunda/camunda-deployment-references/blob/feature/rosa-8.8/aws/openshift/rosa-hcp-dual-region/terraform/clusters/config.tf
```

5. Create a file named `cluster_region_1.tf` in the same directory as your `config.tf`.
This file describes the cluster in region 1:

```hcl reference
https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp-dual-region/terraform/clusters/cluster_region_1.tf
https://github.com/camunda/camunda-deployment-references/blob/feature/rosa-8.8/aws/openshift/rosa-hcp-dual-region/terraform/clusters/cluster_region_1.tf
```

6. Create a file named `cluster_region_2.tf` in the same directory as your `config.tf`.
This file describes the cluster in region 2:

```hcl reference
https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp-dual-region/terraform/clusters/cluster_region_2.tf
https://github.com/camunda/camunda-deployment-references/blob/feature/rosa-8.8/aws/openshift/rosa-hcp-dual-region/terraform/clusters/cluster_region_2.tf
```

7. After setting up the Terraform files and ensuring your AWS authentication is configured, initialize your Terraform project to configure the backend and download the necessary provider plugins:
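The command itself is collapsed in this diff hunk; as a minimal sketch of the step, assuming the Terraform files above sit in the current working directory:

```bash
# Download provider plugins and configure the S3 backend declared in config.tf.
terraform init

# Optional sanity check: review the planned resources before applying.
terraform plan
```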
@@ -334,13 +334,13 @@

1. Configure user access to the clusters. By default, the user who creates an OpenShift cluster has administrative access. If you want to grant access to other users, follow the [Red Hat documentation for granting admin rights to users](https://docs.openshift.com/rosa/cloud_experts_tutorials/cloud-experts-getting-started/cloud-experts-getting-started-admin-rights.html) once the cluster has been created (an illustrative CLI command follows the caution note below).

1. Customize the clusters setup. The module offers various input options that allow you to further customize the cluster configuration. For a comprehensive list of available options and detailed usage instructions, refer to the [ROSA module documentation](https://github.com/camunda/camunda-tf-rosa/blob/v2.0.0/modules/rosa-hcp/README.md).
1. Customize the clusters setup. The module offers various input options that allow you to further customize the cluster configuration. For a comprehensive list of available options and detailed usage instructions, refer to the [ROSA module documentation](https://github.com/camunda/camunda-deployment-references/blob/feature/rosa-8.8/aws/modules/rosa-hcp/README.md).

:::caution Camunda Terraform module

This ROSA module is based on the [official Red Hat Terraform module for ROSA HCP](https://registry.terraform.io/modules/terraform-redhat/rosa-hcp/rhcs/latest). Please be aware of potential differences and choices in implementation between this module and the official one.

We invite you to consult the [Camunda ROSA module documentation](https://github.com/camunda/camunda-tf-rosa/blob/v2.0.0/modules/rosa-hcp/README.md) for more information.
We invite you to consult the [Camunda ROSA module documentation](https://github.com/camunda/camunda-deployment-references/tree/feature/rosa-8.8/aws/modules/rosa-hcp/README.md) for more information.

:::
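The guide drives the admin-rights step through the linked Red Hat documentation; purely as an illustration, one way to grant admin rights is the `rosa` CLI — the user and cluster names below are placeholders:

```bash
# Grant the cluster-admin role to an existing IdP user on one of the clusters (placeholder names).
rosa grant user cluster-admin --user=my-idp-user --cluster=my-rosa-cluster-region-1
```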

@@ -417,13 +417,13 @@ We'll re-use the previously configured S3 bucket to store the state of the peeri
Begin by setting up the `config.tf` file to use the S3 backend for managing the Terraform state:

```hcl reference
https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp-dual-region/terraform/peering/config.tf
https://github.com/camunda/camunda-deployment-references/blob/feature/rosa-8.8/aws/openshift/rosa-hcp-dual-region/terraform/peering/config.tf
```

Alongside the `config.tf` file, create a file called `peering.tf` to reference the peering configuration:

```hcl reference
https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp-dual-region/terraform/peering/peering.tf
https://github.com/camunda/camunda-deployment-references/blob/feature/rosa-8.8/aws/openshift/rosa-hcp-dual-region/terraform/peering/peering.tf
```

One cluster will be referenced as the **owner**, and the other as the **accepter**.
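Purely as an illustration of that owner/accepter relationship (the guide itself establishes the peering through the Terraform files above), the equivalent AWS CLI flow looks roughly like this; all IDs and regions below are placeholders:

```bash
# The "owner" side requests the peering towards the other cluster's VPC (placeholder IDs and regions).
aws ec2 create-vpc-peering-connection \
  --vpc-id vpc-0123456789abcdef0 \
  --peer-vpc-id vpc-0fedcba9876543210 \
  --peer-region eu-west-2

# The "accepter" side approves the request in its own region.
aws ec2 accept-vpc-peering-connection \
  --vpc-peering-connection-id pcx-0123456789abcdef0 \
  --region eu-west-2
```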
@@ -497,13 +497,13 @@ We'll re-use the previously configured S3 bucket to store the state of the backu
Begin by setting up the `config.tf` file to use the S3 backend for managing the Terraform state:

```hcl reference
https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp-dual-region/terraform/backup_bucket/config.tf
https://github.com/camunda/camunda-deployment-references/blob/feature/rosa-8.8/aws/openshift/rosa-hcp-dual-region/terraform/backup_bucket/config.tf
```

Finally, create a file called `backup_bucket.tf` to reference the Elasticsearch backup bucket configuration:

```hcl reference
https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp-dual-region/terraform/backup_bucket/backup_bucket.tf
https://github.com/camunda/camunda-deployment-references/blob/feature/rosa-8.8/aws/openshift/rosa-hcp-dual-region/terraform/backup_bucket/backup_bucket.tf
```

This bucket configuration follows [multiple best practices](https://docs.aws.amazon.com/AmazonS3/latest/userguide/security-best-practices.html).
@@ -568,7 +568,7 @@ The `BACKUP_BUCKET_REGION` will define the region of the bucket, you can pick on

### Reference files

You can find the reference files used on [this page](https://github.com/camunda/camunda-deployment-references/tree/main/aws/rosa-hcp-dual-region/terraform)
You can find the reference files used on this page in the [camunda-deployment-references repository](https://github.com/camunda/camunda-deployment-references/tree/feature/rosa-8.8/aws/openshift/rosa-hcp-dual-region/).

## 2. Preparation for Camunda 8 installation
