diff --git a/docs/self-managed/setup/deploy/amazon/amazon-eks/eks-helm.md b/docs/self-managed/setup/deploy/amazon/amazon-eks/eks-helm.md index 09064e6cc56..9e290665d77 100644 --- a/docs/self-managed/setup/deploy/amazon/amazon-eks/eks-helm.md +++ b/docs/self-managed/setup/deploy/amazon/amazon-eks/eks-helm.md @@ -408,12 +408,8 @@ identity: Once you've prepared the `values.yml` file, run the following `envsubst` command to substitute the environment variables with their actual values: -```bash -# generate the final values -envsubst < values.yml > generated-values.yml - -# print the result -cat generated-values.yml +```bash reference +https://github.com/camunda/camunda-deployment-references/blob/feature/rosa-8.8/generic/kubernetes/single-region/procedure/assemble-envsubst-values.sh ``` Next, store various passwords in a Kubernetes secret, which will be used by the Helm chart. Below is an example of how to set up the required secret. You can use `openssl` to generate random secrets and store them in environment variables: @@ -452,17 +448,8 @@ This guide uses `helm upgrade --install` as it runs install on initial deploymen You can track the progress of the installation using the following command: -```bash -watch -n 5 ' - kubectl get pods -n camunda --output=wide; - if [ $(kubectl get pods -n camunda --field-selector=status.phase!=Running -o name | wc -l) -eq 0 ] && - [ $(kubectl get pods -n camunda -o json | jq -r ".items[] | select(.status.containerStatuses[]?.ready == false)" | wc -l) -eq 0 ]; - then - echo "All pods are Running and Healthy - Installation completed!"; - else - echo "Some pods are not Running or Healthy"; - fi -' +```bash reference +https://github.com/camunda/camunda-deployment-references/blob/feature/rosa-8.8/generic/kubernetes/single-region/procedure/check-deployment-ready.sh ```
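The referenced `check-deployment-ready.sh` replaces the readiness loop that used to be inlined on this page. For readers who want the gist without opening the repository, the following sketch reproduces the previously inlined commands; the script in the repository may differ in its details:

```bash
# Poll every 5 seconds and report once every pod in the camunda namespace
# is both Running and passing its readiness checks.
watch -n 5 '
  kubectl get pods -n camunda --output=wide;
  if [ $(kubectl get pods -n camunda --field-selector=status.phase!=Running -o name | wc -l) -eq 0 ] &&
     [ $(kubectl get pods -n camunda -o json | jq -r ".items[] | select(.status.containerStatuses[]?.ready == false)" | wc -l) -eq 0 ];
  then
    echo "All pods are Running and Healthy - Installation completed!";
  else
    echo "Some pods are not Running or Healthy";
  fi
'
```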
@@ -622,6 +609,9 @@ Console:
 
 ### Use the token
 
+
+
+
 For a detailed guide on generating and using a token, please consult the relevant documentation on [authenticating with the REST API](./../../../../../apis-tools/camunda-api-rest/camunda-api-rest-authentication.md?environment=self-managed).
 
 
@@ -654,20 +644,10 @@ export ZEEBE_AUTHORIZATION_SERVER_URL=http://localhost:18080/auth/realms/camunda
 
 
 
-Generate a temporary token to access the REST API, then capture the value of the `access_token` property and store it as your token.
-
-```shell
-export TOKEN=$(curl --location --request POST "${ZEEBE_AUTHORIZATION_SERVER_URL}" \
---header "Content-Type: application/x-www-form-urlencoded" \
---data-urlencode "client_id=${ZEEBE_CLIENT_ID}" \
---data-urlencode "client_secret=${ZEEBE_CLIENT_SECRET}" \
---data-urlencode "grant_type=client_credentials" | jq '.access_token' -r)
-```
-
-Use the stored token, in our case `TOKEN`, to use the REST API to print the cluster topology.
+Generate a temporary token to access the REST API, then capture the value of the `access_token` property and store it as your token. Use the stored token (referred to as `TOKEN` in this case) to interact with the REST API and display the cluster topology:
 
-```shell
-curl --header "Authorization: Bearer ${TOKEN}" "${ZEEBE_ADDRESS_REST}/v2/topology"
+```bash reference
+https://github.com/camunda/camunda-deployment-references/blob/feature/rosa-8.8/generic/kubernetes/single-region/procedure/check-zeebe-cluster-topology.sh
 ```
 
 ...and results in the following output:
 
@@ -676,89 +656,58 @@ curl --header "Authorization: Bearer ${TOKEN}" "${ZEEBE_ADDRESS_REST}/v2/topolog
 
 Example output
 
-```shell
-{
-  "brokers": [
-    {
-      "nodeId": 0,
-      "host": "camunda-zeebe-0.camunda-zeebe",
-      "port": 26501,
-      "partitions": [
-        {
-          "partitionId": 1,
-          "role": "leader",
-          "health": "healthy"
-        },
-        {
-          "partitionId": 2,
-          "role": "follower",
-          "health": "healthy"
-        },
-        {
-          "partitionId": 3,
-          "role": "follower",
-          "health": "healthy"
-        }
-      ],
-      "version": "8.6.0"
-    },
-    {
-      "nodeId": 1,
-      "host": "camunda-zeebe-1.camunda-zeebe",
-      "port": 26501,
-      "partitions": [
-        {
-          "partitionId": 1,
-          "role": "follower",
-          "health": "healthy"
-        },
-        {
-          "partitionId": 2,
-          "role": "leader",
-          "health": "healthy"
-        },
-        {
-          "partitionId": 3,
-          "role": "follower",
-          "health": "healthy"
-        }
-      ],
-      "version": "8.6.0"
-    },
-    {
-      "nodeId": 2,
-      "host": "camunda-zeebe-2.camunda-zeebe",
-      "port": 26501,
-      "partitions": [
-        {
-          "partitionId": 1,
-          "role": "follower",
-          "health": "healthy"
-        },
-        {
-          "partitionId": 2,
-          "role": "follower",
-          "health": "healthy"
-        },
-        {
-          "partitionId": 3,
-          "role": "leader",
-          "health": "healthy"
-        }
-      ],
-      "version": "8.6.0"
-    }
-  ],
-  "clusterSize": 3,
-  "partitionsCount": 3,
-  "replicationFactor": 3,
-  "gatewayVersion": "8.6.0"
-}
+```json reference
+https://github.com/camunda/camunda-deployment-references/blob/feature/rosa-8.8/generic/kubernetes/single-region/procedure/check-zeebe-cluster-topology-output.json
 ```
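The referenced `check-zeebe-cluster-topology.sh` is expected to perform the equivalent of the commands this page previously inlined. A sketch based on those removed commands, which may differ from the script in the repository:

```bash
# Request a client-credentials token from the authorization server and
# extract the access_token field with jq.
export TOKEN=$(curl --location --request POST "${ZEEBE_AUTHORIZATION_SERVER_URL}" \
  --header "Content-Type: application/x-www-form-urlencoded" \
  --data-urlencode "client_id=${ZEEBE_CLIENT_ID}" \
  --data-urlencode "client_secret=${ZEEBE_CLIENT_SECRET}" \
  --data-urlencode "grant_type=client_credentials" | jq '.access_token' -r)

# Use the bearer token to query the cluster topology via the REST API.
curl --header "Authorization: Bearer ${TOKEN}" "${ZEEBE_ADDRESS_REST}/v2/topology"
```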
+
+
+
+Follow our existing [Modeler guide on deploying a diagram](/self-managed/modeler/desktop-modeler/deploy-to-self-managed.md). Below are the helper values you need to enter in Modeler:
+
+
+
+
+
+The following values are required for OAuth authentication:
+
+- **Cluster endpoint:** `https://zeebe.$DOMAIN_NAME`, replacing `$DOMAIN_NAME` with your domain
+- **Client ID:** Retrieve the client ID value from the Identity page of your created M2M application
+- **Client Secret:** Retrieve the client secret value from the Identity page of your created M2M application
+- **OAuth Token URL:** `https://$DOMAIN_NAME/auth/realms/camunda-platform/protocol/openid-connect/token`, replacing `$DOMAIN_NAME` with your domain
+- **Audience:** `zeebe-api`, the default for Camunda 8 Self-Managed
+
+
+
+
+
+This requires port-forwarding the Zeebe Gateway so you can connect to the cluster:
+
+```shell
+kubectl port-forward services/camunda-zeebe-gateway 26500:26500 --namespace camunda
+```
+
+The following values are required for OAuth authentication:
+
+- **Cluster endpoint:** `http://localhost:26500`
+- **Client ID:** Retrieve the client ID value from the Identity page of your created M2M application
+- **Client Secret:** Retrieve the client secret value from the Identity page of your created M2M application
+- **OAuth Token URL:** `http://localhost:18080/auth/realms/camunda-platform/protocol/openid-connect/token`
+- **Audience:** `zeebe-api`, the default for Camunda 8 Self-Managed
+
+
+
+
+
+
+
 ## Test the installation with payment example application
 
 To test your installation with the deployment of a sample application, refer to the [installing payment example guide](../../../guides/installing-payment-example.md).
diff --git a/docs/self-managed/setup/deploy/amazon/aws-ec2.md b/docs/self-managed/setup/deploy/amazon/aws-ec2.md
index 7b15c2c474f..5142fd4911c 100644
--- a/docs/self-managed/setup/deploy/amazon/aws-ec2.md
+++ b/docs/self-managed/setup/deploy/amazon/aws-ec2.md
@@ -55,7 +55,7 @@ Alternatively, the same setup can run with a single AWS EC2 instance, but be awa
 
 - An AWS account to create any resources within AWS.
   - On a high level, permissions are required on the **ec2**, **iam**, **elasticloadbalancing**, **kms**, **logs**, and **es** level.
-  - For a more fine-grained view of the permissions, check this [example policy](https://github.com/camunda/camunda-deployment-references/blob/main/aws/ec2/example/policy.json).
+  - For a more fine-grained view of the permissions, check this [example policy](https://github.com/camunda/camunda-deployment-references/tree/main/aws/ec2/example/policy.json).
 - Terraform (1.7+)
 - Unix based Operating System (OS) with ssh and sftp
   - Windows may be used with [Cygwin](https://www.cygwin.com/) or [Windows WSL](https://learn.microsoft.com/en-us/windows/wsl/install) but has not been tested
diff --git a/docs/self-managed/setup/deploy/amazon/openshift/terraform-setup-dual-region.md b/docs/self-managed/setup/deploy/amazon/openshift/terraform-setup-dual-region.md
index 2b8c9770e9d..1af50bb3321 100644
--- a/docs/self-managed/setup/deploy/amazon/openshift/terraform-setup-dual-region.md
+++ b/docs/self-managed/setup/deploy/amazon/openshift/terraform-setup-dual-region.md
@@ -161,7 +161,7 @@ This module sets up the foundational configuration for ROSA HCP and Terraform us
 
 We will leverage [Terraform modules](https://developer.hashicorp.com/terraform/language/modules), which allow us to abstract resources into reusable components, simplifying infrastructure management.
-The [Camunda-provided module](https://github.com/camunda/camunda-tf-rosa) is publicly available and serves as a starting point for deploying Red Hat OpenShift clusters on AWS using a Hosted Control Plane. +The [Camunda-provided module](https://github.com/camunda/camunda-deployment-references/tree/feature/rosa-8.8/aws/openshift/rosa-hcp-dual-region/) is publicly available and serves as a starting point for deploying Red Hat OpenShift clusters on AWS using a Hosted Control Plane. It is highly recommended to review this module before implementation to understand its structure and capabilities. Please note that this module is based on the official [ROSA HCP Terraform module documentation](https://docs.openshift.com/rosa/rosa_hcp/terraform/rosa-hcp-creating-a-cluster-quickly-terraform.html). @@ -287,21 +287,21 @@ this guide uses a dedicated [aws terraform provider](https://registry.terraform. This configuration will use the previously created S3 bucket for storing the Terraform state file: ```hcl reference - https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp-dual-region/terraform/clusters/config.tf + https://github.com/camunda/camunda-deployment-references/blob/feature/rosa-8.8/aws/openshift/rosa-hcp-dual-region/terraform/clusters/config.tf ``` 5. Create a file named `cluster_region_1.tf` in the same directory as your `config.tf`. This file describes the cluster of the region 1: ```hcl reference - https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp-dual-region/terraform/clusters/cluster_region_1.tf + https://github.com/camunda/camunda-deployment-references/blob/feature/rosa-8.8/aws/openshift/rosa-hcp-dual-region/terraform/clusters/cluster_region_1.tf ``` 6. Create a file named `cluster_region_2.tf` in the same directory as your `config.tf`. This file describes the cluster of the region 2: ```hcl reference - https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp-dual-region/terraform/clusters/cluster_region_2.tf + https://github.com/camunda/camunda-deployment-references/blob/feature/rosa-8.8/aws/openshift/rosa-hcp-dual-region/terraform/clusters/cluster_region_2.tf ``` 7. After setting up the terraform files and ensuring your AWS authentication is configured, initialize your Terraform project, then, initialize Terraform to configure the backend and download necessary provider plugins: @@ -334,13 +334,13 @@ this guide uses a dedicated [aws terraform provider](https://registry.terraform. 1. Configure user access to the clusters. By default, the user who creates an OpenShift cluster has administrative access. If you want to grant access to other users, follow the [Red Hat documentation for granting admin rights to users](https://docs.openshift.com/rosa/cloud_experts_tutorials/cloud-experts-getting-started/cloud-experts-getting-started-admin-rights.html) when the cluster will be created. -1. Customize the clusters setup. The module offers various input options that allow you to further customize the cluster configuration. For a comprehensive list of available options and detailed usage instructions, refer to the [ROSA module documentation](https://github.com/camunda/camunda-tf-rosa/blob/v2.0.0/modules/rosa-hcp/README.md). +1. Customize the clusters setup. The module offers various input options that allow you to further customize the cluster configuration. 
For a comprehensive list of available options and detailed usage instructions, refer to the [ROSA module documentation](https://github.com/camunda/camunda-deployment-references/blob/feature/rosa-8.8/aws/modules/rosa-hcp/README.md). :::caution Camunda Terraform module This ROSA module is based on the [official Red Hat Terraform module for ROSA HCP](https://registry.terraform.io/modules/terraform-redhat/rosa-hcp/rhcs/latest). Please be aware of potential differences and choices in implementation between this module and the official one. -We invite you to consult the [Camunda ROSA module documentation](https://github.com/camunda/camunda-tf-rosa/blob/v2.0.0/modules/rosa-hcp/README.md) for more information. +We invite you to consult the [Camunda ROSA module documentation](https://github.com/camunda/camunda-deployment-references/tree/feature/rosa-8.8/aws/modules/rosa-hcp/README.md) for more information. ::: @@ -417,13 +417,13 @@ We'll re-use the previously configured S3 bucket to store the state of the peeri Begin by setting up the `config.tf` file to use the S3 backend for managing the Terraform state: ```hcl reference -https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp-dual-region/terraform/peering/config.tf +https://github.com/camunda/camunda-deployment-references/blob/feature/rosa-8.8/aws/openshift/rosa-hcp-dual-region/terraform/peering/config.tf ``` Alongside the `config.tf` file, create a file called `peering.tf` to reference the peering configuration: ```hcl reference -https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp-dual-region/terraform/peering/peering.tf +https://github.com/camunda/camunda-deployment-references/blob/feature/rosa-8.8/aws/openshift/rosa-hcp-dual-region/terraform/peering/peering.tf ``` One cluster will be referenced as the **owner**, and the other as the **accepter**. @@ -497,13 +497,13 @@ We'll re-use the previously configured S3 bucket to store the state of the backu Begin by setting up the `config.tf` file to use the S3 backend for managing the Terraform state: ```hcl reference -https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp-dual-region/terraform/backup_bucket/config.tf +https://github.com/camunda/camunda-deployment-references/blob/feature/rosa-8.8/aws/openshift/rosa-hcp-dual-region/terraform/backup_bucket/config.tf ``` Finally, create a file called `backup_bucket.tf` to reference the elastic backup bucket configuration: ```hcl reference -https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp-dual-region/terraform/backup_bucket/backup_bucket.tf +https://github.com/camunda/camunda-deployment-references/blob/feature/rosa-8.8/aws/openshift/rosa-hcp-dual-region/terraform/backup_bucket/backup_bucket.tf ``` This bucket configuration follows [multiple best practices](https://docs.aws.amazon.com/AmazonS3/latest/userguide/security-best-practices.html). @@ -568,7 +568,7 @@ The `BACKUP_BUCKET_REGION` will define the region of the bucket, you can pick on ### Reference files -You can find the reference files used on [this page](https://github.com/camunda/camunda-deployment-references/tree/main/aws/rosa-hcp-dual-region/terraform) +You can find the reference files used on [this page](https://github.com/camunda/camunda-deployment-references/tree/feature/rosa-8.8/aws/openshift/rosa-hcp-dual-region/) ## 2. 
Preparation for Camunda 8 installation
diff --git a/docs/self-managed/setup/deploy/amazon/openshift/terraform-setup.md b/docs/self-managed/setup/deploy/amazon/openshift/terraform-setup.md
index 60fc898ed65..cf826484922 100644
--- a/docs/self-managed/setup/deploy/amazon/openshift/terraform-setup.md
+++ b/docs/self-managed/setup/deploy/amazon/openshift/terraform-setup.md
@@ -73,6 +73,16 @@ Following this tutorial and steps will result in:
 
 ## 1. Configure AWS and initialize Terraform
 
+### Obtain a copy of the reference architecture
+
+The first step is to download a copy of the reference architecture from the [GitHub repository](https://github.com/camunda/camunda-deployment-references/tree/feature/rosa-8.8/aws/openshift/rosa-hcp-single-region/). This archive will be used throughout the rest of this documentation; the reference architectures are versioned using the same Camunda versions (`stable/8.x`).
+
+```bash reference
+https://github.com/camunda/camunda-deployment-references/blob/feature/rosa-8.8/aws/openshift/rosa-hcp-single-region/procedure/get-your-copy.sh
+```
+
+With the reference architecture downloaded and extracted, you can proceed with the remaining steps outlined in this documentation. Ensure that you are in the correct directory before continuing with further instructions.
+
 ### Terraform prerequisites
 
 To manage the infrastructure for Camunda 8 on AWS using Terraform, we need to set up Terraform's backend to store the state file remotely in an S3 bucket. This ensures secure and persistent storage of the state file.
@@ -151,12 +161,12 @@ Now, follow these steps to create the S3 bucket with versioning enabled:
 
 This S3 bucket will now securely store your Terraform state files with versioning enabled.
 
-#### Create a `config.tf` with the following setup
+#### Edit the `config.tf` file with the following setup
 
 Once the S3 bucket is created, configure your `config.tf` file to use the S3 backend for managing the Terraform state:
 
 ```hcl reference
-https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp/camunda-versions/8.7/config.tf
+https://github.com/camunda/camunda-deployment-references/blob/feature/rosa-8.8/aws/openshift/rosa-hcp-single-region/config.tf
 ```
 
 #### Initialize Terraform
@@ -181,7 +191,7 @@ This module sets up the foundational configuration for ROSA HCP and Terraform us
 
 We will leverage [Terraform modules](https://developer.hashicorp.com/terraform/language/modules), which allow us to abstract resources into reusable components, simplifying infrastructure management.
 
-The [Camunda-provided module](https://github.com/camunda/camunda-tf-rosa) is publicly available and serves as a robust starting point for deploying a Red Hat OpenShift cluster on AWS using a Hosted Control Plane. It is highly recommended to review this module before implementation to understand its structure and capabilities.
+The [Camunda-provided module](https://github.com/camunda/camunda-deployment-references/tree/feature/rosa-8.8/aws/openshift/rosa-hcp-single-region/) is publicly available and serves as a robust starting point for deploying a Red Hat OpenShift cluster on AWS using a Hosted Control Plane. It is highly recommended to review this module before implementation to understand its structure and capabilities.
 
 Please note that this module is based on the official [ROSA HCP Terraform module documentation](https://docs.openshift.com/rosa/rosa_hcp/terraform/rosa-hcp-creating-a-cluster-quickly-terraform.html). It is presented as an example for running Camunda 8 in ROSA.
For advanced use cases or custom setups, we encourage you to use the official module, which includes vendor-supported features. @@ -259,8 +269,7 @@ To set up a ROSA cluster, certain prerequisites must be configured on your AWS a #### Set up the ROSA cluster module -1. Create a `cluster.tf` file in the same directory as your `config.tf` file. -2. Add the following content to your newly created `cluster.tf` file to utilize the provided module: +1. Edit the `cluster.tf` file in the same directory as your `config.tf` file: :::note Configure your cluster @@ -274,26 +283,26 @@ To set up a ROSA cluster, certain prerequisites must be configured on your AWS a ::: ```hcl reference - https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp/camunda-versions/8.7/cluster.tf + https://github.com/camunda/camunda-deployment-references/blob/feature/rosa-8.8/aws/openshift/rosa-hcp-single-region/cluster.tf ``` :::caution Camunda Terraform module This ROSA module is based on the [official Red Hat Terraform module for ROSA HCP](https://registry.terraform.io/modules/terraform-redhat/rosa-hcp/rhcs/latest). Please be aware of potential differences and choices in implementation between this module and the official one. - We invite you to consult the [Camunda ROSA module documentation](https://github.com/camunda/camunda-tf-rosa/blob/v2.0.0/modules/rosa-hcp/README.md) for more information. + We invite you to consult the [Camunda ROSA module documentation](https://github.com/camunda/camunda-deployment-references/blob/feature/rosa-8.8/aws/modules/rosa-hcp/README.md) for more information. ::: -3. [Initialize](#initialize-terraform) Terraform for this module using the following Terraform command: +2. [Initialize](#initialize-terraform) Terraform for this module using the following Terraform command: ```bash terraform init -backend-config="bucket=$S3_TF_BUCKET_NAME" -backend-config="key=$S3_TF_BUCKET_KEY" ``` -4. Configure user access to the cluster. By default, the user who creates the OpenShift cluster has administrative access. If you want to grant access to other users, follow the [Red Hat documentation for granting admin rights to users](https://docs.openshift.com/rosa/cloud_experts_tutorials/cloud-experts-getting-started/cloud-experts-getting-started-admin-rights.html) when the cluster is created. +3. Configure user access to the cluster. By default, the user who creates the OpenShift cluster has administrative access. If you want to grant access to other users, follow the [Red Hat documentation for granting admin rights to users](https://docs.openshift.com/rosa/cloud_experts_tutorials/cloud-experts-getting-started/cloud-experts-getting-started-admin-rights.html) when the cluster is created. -5. Customize the cluster setup. The module offers various input options that allow you to further customize the cluster configuration. For a comprehensive list of available options and detailed usage instructions, refer to the [ROSA module documentation](https://github.com/camunda/camunda-tf-rosa/blob/v2.0.0/modules/rosa-hcp/README.md). +4. Customize the cluster setup. The module offers various input options that allow you to further customize the cluster configuration. For a comprehensive list of available options and detailed usage instructions, refer to the [ROSA module documentation](https://github.com/camunda/camunda-deployment-references/blob/feature/rosa-8.8/aws/modules/rosa-hcp/README.md). 
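If you want to see the effect of such customizations before committing to them, a plan-only run is a low-risk way to preview the resulting changes. A minimal sketch; the `-var` name below is an illustrative placeholder, not an input taken from the module:

```bash
# Initialize with the remote state backend (same command as shown above).
terraform init -backend-config="bucket=$S3_TF_BUCKET_NAME" -backend-config="key=$S3_TF_BUCKET_KEY"

# Preview the changes without applying them; replace the placeholder variable
# with a real module input from the ROSA module documentation.
terraform plan -var "rosa_cluster_name=my-rosa-cluster"
```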
### Define outputs
@@ -329,7 +338,7 @@ Terraform will now create the OpenShift cluster with all the necessary configura
 
 Depending on the installation path you have chosen, you can find the reference files used on this page:
 
-- **Standard installation:** [Reference Files](https://github.com/camunda/camunda-deployment-references/tree/feature/openshift-ra-standard/aws/rosa-hcp/camunda-versions/8.7)
+- **Standard installation:** [Reference Files](https://github.com/camunda/camunda-deployment-references/tree/feature/rosa-8.8/aws/openshift/rosa-hcp-single-region/)
 
 ## 2. Preparation for Camunda 8 installation
@@ -339,11 +348,8 @@ You can access the created OpenShift cluster using the following steps:
 
 Set up the required environment variables:
 
-```shell
-export CLUSTER_NAME="$(terraform console <<Example Submariner check successful output
 
 ```text reference
-https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp-dual-region/procedure/submariner/output.txt
+https://github.com/camunda/camunda-deployment-references/blob/feature/rosa-8.8/aws/openshift/rosa-hcp-dual-region/procedure/submariner/output.txt
 ```
 
@@ -323,7 +323,7 @@ For more comprehensive details regarding the verification tests for Submariner u
 
 **Debugging the Submariner setup:**
 
-If you are experiencing connectivity issues, we recommend spawning a pod in the `default` namespace that contains networking debugging tools. You can find an [example here](https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp-dual-region/procedure/submariner/debug-utils-submariner.yml).
+If you are experiencing connectivity issues, we recommend spawning a pod in the `default` namespace that contains networking debugging tools. You can find an [example here](https://github.com/camunda/camunda-deployment-references/blob/feature/rosa-8.8/aws/openshift/rosa-hcp-dual-region/procedure/submariner/debug-utils-submariner.yml).
 
 With this pod, you will be able to check flow openings, service resolution, and other network-related aspects. Troubleshooting requires examining all the underlying mechanisms of Submariner. Therefore, we also encourage you to read the [Submariner troubleshooting guide](https://submariner.io/operations/troubleshooting/).
@@ -337,7 +337,7 @@ Before proceeding with the installation, ensure the required information is avai
 
 Review and adjust the following environment script to match your specific configuration:
 
 ```bash reference
-https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp-dual-region/procedure/camunda/8.7/export_environment_prerequisites.sh
+https://github.com/camunda/camunda-deployment-references/blob/feature/rosa-8.8/aws/openshift/rosa-hcp-dual-region/procedure/export_environment_prerequisites.sh
 ```
 
 _If you are unsure about the values of the backup bucket, please refer to the [S3 backup bucket module setup](/self-managed/setup/deploy/amazon/openshift/terraform-setup-dual-region.md#s3-backup-bucket-module-setup) as a reference for implementation._
@@ -363,7 +363,7 @@ The Elasticsearch backup [bucket is tied to a specific region](https://docs.aws.
 
 The following script will create the required namespaces and secrets used to reference the bucket access.
```bash reference -https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp-dual-region/procedure/camunda/8.7/setup_ns_secrets.sh +https://github.com/camunda/camunda-deployment-references/blob/feature/rosa-8.8/aws/openshift/rosa-hcp-dual-region/procedure/setup_ns_secrets.sh ``` Save it as `setup_ns_secrets.sh` and execute it: @@ -380,7 +380,7 @@ Throughout this guide, you will add and merge values into these files to configu - Save the following file as both `values-region-1.yml` and `values-region-2.yml` to serve as the base configuration: ```yaml reference - https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp-dual-region/procedure/camunda/8.7/helm-values/values-base.yml + https://github.com/camunda/camunda-deployment-references/blob/feature/rosa-8.8/aws/openshift/rosa-hcp-dual-region/procedure/helm-values/values-base.yml ``` :::warning Merging YAML files @@ -396,12 +396,12 @@ Set up the region ID using a unique integer for each region: - Add the following YAML configuration to your `values-region-1.yml`: ```yaml reference - https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp-dual-region/procedure/camunda/8.7/helm-values/values-region-1.yml + https://github.com/camunda/camunda-deployment-references/blob/feature/rosa-8.8/aws/openshift/rosa-hcp-dual-region/procedure/helm-values/values-region-1.yml ``` - Add the following YAML configuration to your `values-region-2.yml`: ```yaml reference - https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp-dual-region/procedure/camunda/8.7/helm-values/values-region-2.yml + https://github.com/camunda/camunda-deployment-references/blob/feature/rosa-8.8/aws/openshift/rosa-hcp-dual-region/procedure/helm-values/values-region-2.yml ``` **Security Context Constraints (SCCs)** @@ -416,7 +416,7 @@ For custom configurations or specific requirements, please refer to the [install Before deploying, some values in the value files need to be updated. To assist with generating these values, save the following Bash script as `generate_zeebe_helm_values.sh`: ```bash reference -https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp-dual-region/procedure/camunda/8.7/generate_zeebe_helm_values.sh +https://github.com/camunda/camunda-deployment-references/blob/feature/rosa-8.8/aws/openshift/rosa-hcp-dual-region/procedure/generate_zeebe_helm_values.sh ``` Then, source the output of the script. By doing so, we can reuse the values later for substitution, instead of manually adjusting the values files. 
You will be prompted to specify the number of Zeebe brokers (total number of Zeebe brokers in both Kubernetes clusters), for a dual-region setup we recommend `8`, resulting in four brokers per region: @@ -438,7 +438,7 @@ Make sure that the variable `CLUSTER_1_NAME` is set to the name of your first cl Once you've prepared each region's value file (`values-region-1.yml` and `values-region-2.yml`) file, run the following `envsubst` command to substitute the environment variables with their actual values: ```bash reference -https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp-dual-region/procedure/camunda/8.7/generate_helm_values.sh +https://github.com/camunda/camunda-deployment-references/blob/feature/rosa-8.8/aws/openshift/rosa-hcp-dual-region/procedure/generate_helm_values.sh ``` ### Install Camunda 8 using Helm @@ -446,7 +446,7 @@ https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp- With the value files for each region configured, you can now install Camunda 8 using Helm. Execute the following commands: ```bash reference -https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp-dual-region/procedure/camunda/8.7/install_chart.sh +https://github.com/camunda/camunda-deployment-references/blob/feature/rosa-8.8/aws/openshift/rosa-hcp-dual-region/procedure/install_chart.sh ``` This command: @@ -468,7 +468,7 @@ This guide uses `helm upgrade --install` as it runs install on initial deploymen Once Camunda is deployed across the two clusters, the next step is to expose each service to Submariner so it can be resolved by the other cluster: ```bash reference -https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp-dual-region/procedure/camunda/8.7/export_services_submariner.sh +https://github.com/camunda/camunda-deployment-references/blob/feature/rosa-8.8/aws/openshift/rosa-hcp-dual-region/procedure/export_services_submariner.sh ``` Alternatively, you can manage each service individually using the `ServiceExport` Custom Resource Definition (CRD). @@ -489,13 +489,13 @@ metadata: For each cluster, verify the status of the exported services with this script: ```bash reference -https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp-dual-region/procedure/camunda/8.7/verify_exported_services.sh +https://github.com/camunda/camunda-deployment-references/blob/feature/rosa-8.8/aws/openshift/rosa-hcp-dual-region/procedure/verify_exported_services.sh ``` To monitor the progress of the installation, save and execute the following script: ```bash reference -https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp-dual-region/procedure/camunda/8.7/verify_installation_completed.sh +https://github.com/camunda/camunda-deployment-references/blob/feature/rosa-8.8/aws/openshift/rosa-hcp-dual-region/procedure/verify_installation_completed.sh ``` Save it as `verify_installation_completed.sh`, make it executable, and run it: @@ -509,9 +509,9 @@ chmod +x verify_installation_completed.sh 1. Open a terminal and port-forward the Zeebe Gateway via `oc` from one of your clusters. Zeebe is stretching over both clusters and is `active-active`, meaning it doesn't matter which Zeebe Gateway to use to interact with your Zeebe cluster. 
-```shell -oc --context "$CLUSTER_1_NAME" -n "$CAMUNDA_NAMESPACE_1" port-forward "services/$HELM_RELEASE_NAME-zeebe-gateway" 8080:8080 -``` + ```shell + oc --context "$CLUSTER_1_NAME" -n "$CAMUNDA_NAMESPACE_1" port-forward "services/$HELM_RELEASE_NAME-zeebe-gateway" 8080:8080 + ``` 2. Open another terminal and use e.g. `cURL` to print the Zeebe cluster topology: @@ -525,7 +525,7 @@ oc --context "$CLUSTER_1_NAME" -n "$CAMUNDA_NAMESPACE_1" port-forward "services/ Example output ```text reference - https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp-dual-region/procedure/camunda/8.7/zeebe-http-output.txt + https://github.com/camunda/camunda-deployment-references/blob/feature/rosa-8.8/aws/openshift/rosa-hcp-dual-region/procedure/zeebe-http-output.txt ``` diff --git a/docs/self-managed/setup/deploy/openshift/redhat-openshift.md b/docs/self-managed/setup/deploy/openshift/redhat-openshift.md index 8b98b2b1a12..e1c5a14cf00 100644 --- a/docs/self-managed/setup/deploy/openshift/redhat-openshift.md +++ b/docs/self-managed/setup/deploy/openshift/redhat-openshift.md @@ -29,12 +29,13 @@ If you need to set up an OpenShift cluster on a cloud provider, we recommend our We conduct testing and ensure compatibility against the following OpenShift versions: -| OpenShift Version | [End of Support Date](https://access.redhat.com/support/policy/updates/openshift) | -| ----------------- | --------------------------------------------------------------------------------- | -| 4.17.x | June 27, 2025 | -| 4.16.x | December 27, 2025 | -| 4.15.x | August 27, 2025 | -| 4.14.x | May 1, 2025 | +| OpenShift Version | +| ----------------- | +| 4.18.x | +| 4.17.x | +| 4.16.x | +| 4.15.x | +| 4.14.x | :::caution Versions compatibility @@ -66,7 +67,7 @@ Over this guide, you will add and merge values in this file to configure your de You can find a reference example of this file here: ```yaml reference -https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp/camunda-versions/8.7/procedure/install/helm-values/base.yml +https://github.com/camunda/camunda-deployment-references/blob/feature/rosa-8.8/generic/openshift/single-region/helm-values/base.yml ``` :::danger Merging YAML files @@ -95,18 +96,10 @@ To use these routes for the Zeebe Gateway, configure this through Ingress as wel The route created by OpenShift will use a domain to provide access to the platform. By default, you can use the OpenShift applications domain, but any other domain supported by the router can also be used. -To retrieve the OpenShift applications domain (used as an example here), run the following command: +To retrieve the OpenShift applications domain (used as an example here), run the following command and define the route domain that will be used for the Camunda 8 deployment: -```bash -export OPENSHIFT_APPS_DOMAIN=$(oc get ingresses.config.openshift.io cluster -o jsonpath='{.spec.domain}') -``` - -Next, define the route domain that will be used for the Camunda 8 deployment. For example: - -```bash -export DOMAIN_NAME="camunda.$OPENSHIFT_APPS_DOMAIN" - -echo "Camunda 8 will be reachable from $DOMAIN_NAME" +```bash reference +https://github.com/camunda/camunda-deployment-references/blob/feature/rosa-8.8/generic/openshift/single-region/procedure/setup-application-domain.sh ``` If you choose to use a custom domain instead, ensure it is supported by your router configuration and replace the example domain with your desired domain. 
For more details on configuring custom domains in OpenShift, refer to the official [custom domain OpenShift documentation](https://docs.openshift.com/dedicated/applications/deployments/osd-config-custom-domains-applications.html). @@ -123,12 +116,8 @@ oc get ingresses.config/cluster -o json | jq '.metadata.annotations."ingress.ope Alternatively, if you use a dedicated IngressController for the deployment: -```bash -# List your IngressControllers -oc -n openshift-ingress-operator get ingresscontrollers - -# Replace with your IngressController name -oc -n openshift-ingress-operator get ingresscontrollers/ -o json | jq '.metadata.annotations."ingress.operator.openshift.io/default-enable-http2"' +```bash reference +https://github.com/camunda/camunda-deployment-references/blob/feature/rosa-8.8/generic/openshift/single-region/procedure/get-ingress-http2-status.sh ``` - If the output is `"true"`, it means HTTP/2 is enabled. @@ -141,8 +130,8 @@ If HTTP/2 is not enabled, you can enable it by running the following command: **IngressController configuration:** -```bash -oc -n openshift-ingress-operator annotate ingresscontrollers/ ingress.operator.openshift.io/default-enable-http2=true +```bash reference +https://github.com/camunda/camunda-deployment-references/blob/feature/rosa-8.8/generic/openshift/single-region/procedure/enable-ingress-http2.sh ``` **Global cluster configuration:** @@ -186,10 +175,10 @@ Additionally, the Zeebe Gateway should be configured to use an encrypted connect - We mount the **Service Certificate Secret** (`camunda-platform-internal-service-certificate`) to the Core pod and configure a secure TLS connection. - Update your `values.yml` file with the following: + Update your `values.yml` file with the following: ```yaml reference - https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp/camunda-versions/8.7/procedure/install/helm-values/core-route.yml + https://github.com/camunda/camunda-deployment-references/blob/feature/rosa-8.8/generic/openshift/single-region/helm-values/core-route.yml ``` The actual configuration properties can be reviewed: @@ -201,7 +190,7 @@ Additionally, the Zeebe Gateway should be configured to use an encrypted connect 2. **Connectors:** update your `values.yml` file with the following: ```yaml reference -https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp/camunda-versions/8.7/procedure/install/helm-values/connectors-route.yml +https://github.com/camunda/camunda-deployment-references/blob/feature/rosa-8.8/generic/openshift/single-region/helm-values/connectors-route.yml ``` The actual configuration properties can be reviewed [in the Connectors configuration documentation](/self-managed/connectors-deployment/connectors-configuration.md#zeebe-broker-connection). @@ -211,7 +200,7 @@ The actual configuration properties can be reviewed [in the Connectors configura 1. Set up the global configuration to enable the single Ingress definition with the host. Update your configuration file as shown below: ```yaml reference -https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp/camunda-versions/8.7/procedure/install/helm-values/domain.yml +https://github.com/camunda/camunda-deployment-references/blob/feature/rosa-8.8/generic/openshift/single-region/helm-values/domain.yml ``` @@ -244,7 +233,7 @@ However, you can use `kubectl port-forward` to access the Camunda platform witho To make this work, you will need to configure the deployment to reference `localhost` with the forwarded port. 
Update your `values.yml` file with the following: ```yaml reference -https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp/camunda-versions/8.7/procedure/install/helm-values/no-domain.yml +https://github.com/camunda/camunda-deployment-references/blob/feature/rosa-8.8/generic/openshift/single-region/helm-values/no-domain.yml ``` @@ -264,7 +253,7 @@ The `global.compatibility.openshift.adaptSecurityContext` variable in your value - `disabled`: The `runAsUser` and `fsGroup` values will not be modified (default). ```yaml reference -https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp/camunda-versions/8.7/procedure/install/helm-values/scc.yml +https://github.com/camunda/camunda-deployment-references/blob/feature/rosa-8.8/generic/openshift/single-region/helm-values/scc.yml ``` @@ -273,7 +262,7 @@ https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp/ To use permissive SCCs, simply install the charts as they are. Follow the [general Helm deployment guide](/self-managed/setup/install.md). ```yaml reference -https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp/camunda-versions/8.7/procedure/install/helm-values/no-scc.yml +https://github.com/camunda/camunda-deployment-references/blob/feature/rosa-8.8/generic/openshift/single-region/helm-values/no-scc.yml ``` @@ -287,18 +276,14 @@ Some components are not enabled by default in this deployment. For more informat Once you've prepared the `values.yml` file, run the following `envsubst` command to substitute the environment variables with their actual values: -```bash -# generate the final values -envsubst < values.yml > generated-values.yml - -# print the result -cat generated-values.yml +```bash reference +https://github.com/camunda/camunda-deployment-references/blob/feature/rosa-8.8/generic/openshift/single-region/procedure/assemble-envsubst-values.sh ``` Next, store various passwords in a Kubernetes secret, which will be used by the Helm chart. Below is an example of how to set up the required secret. You can use `openssl` to generate random secrets and store them in environment variables: ```bash reference -https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp/camunda-versions/8.7/procedure/install/generate-passwords.sh +https://github.com/camunda/camunda-deployment-references/blob/feature/rosa-8.8/generic/openshift/single-region/procedure/generate-passwords.sh ``` Use these environment variables in the `kubectl` command to create the secret. @@ -306,7 +291,7 @@ Use these environment variables in the `kubectl` command to create the secret. - The `smtp-password` should be replaced with the appropriate external value ([see how it's used by Web Modeler](/self-managed/modeler/web-modeler/configuration/configuration.md#smtp--email)). 
```bash reference -https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp/camunda-versions/8.7/procedure/install/create-identity-secret.sh +https://github.com/camunda/camunda-deployment-references/blob/feature/rosa-8.8/generic/openshift/single-region/procedure/create-identity-secret.sh ``` ### Install Camunda 8 using Helm @@ -316,13 +301,13 @@ Now that the `generated-values.yml` is ready, you can install Camunda 8 using He The following are the required environment variables with some example values: ```bash reference -https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp/camunda-versions/8.7/procedure/install/chart-env.sh +https://github.com/camunda/camunda-deployment-references/blob/feature/rosa-8.8/generic/openshift/single-region/procedure/chart-env.sh ``` Then run the following command: ```bash reference -https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp/camunda-versions/8.7/procedure/install/install-chart.sh +https://github.com/camunda/camunda-deployment-references/blob/feature/rosa-8.8/generic/openshift/single-region/procedure/install-chart.sh ``` This command: @@ -339,17 +324,8 @@ This guide uses `helm upgrade --install` as it runs install on initial deploymen You can track the progress of the installation using the following command: -```bash -watch -n 5 ' - kubectl get pods -n camunda --output=wide; - if [ $(kubectl get pods -n camunda --field-selector=status.phase!=Running -o name | wc -l) -eq 0 ] && - [ $(kubectl get pods -n camunda -o json | jq -r ".items[] | select(.status.containerStatuses[]?.ready == false)" | wc -l) -eq 0 ]; - then - echo "All pods are Running and Healthy - Installation completed!"; - else - echo "Some pods are not Running or Healthy"; - fi -' +```bash reference +https://github.com/camunda/camunda-deployment-references/blob/feature/rosa-8.8/generic/kubernetes/single-region/procedure/check-deployment-ready.sh ``` ## Verify connectivity to Camunda 8 diff --git a/versioned_docs/version-8.6/self-managed/setup/deploy/amazon/amazon-eks/eks-helm.md b/versioned_docs/version-8.6/self-managed/setup/deploy/amazon/amazon-eks/eks-helm.md index 750709f7b3e..f4eaa1ace64 100644 --- a/versioned_docs/version-8.6/self-managed/setup/deploy/amazon/amazon-eks/eks-helm.md +++ b/versioned_docs/version-8.6/self-managed/setup/deploy/amazon/amazon-eks/eks-helm.md @@ -407,12 +407,8 @@ identity: Once you've prepared the `values.yml` file, run the following `envsubst` command to substitute the environment variables with their actual values: -```bash -# generate the final values -envsubst < values.yml > generated-values.yml - -# print the result -cat generated-values.yml +```bash reference +https://github.com/camunda/camunda-deployment-references/blob/stable/8.6/generic/kubernetes/single-region/procedure/assemble-envsubst-values.sh ``` Next, store various passwords in a Kubernetes secret, which will be used by the Helm chart. Below is an example of how to set up the required secret. 
You can use `openssl` to generate random secrets and store them in environment variables: @@ -451,17 +447,8 @@ This guide uses `helm upgrade --install` as it runs install on initial deploymen You can track the progress of the installation using the following command: -```bash -watch -n 5 ' - kubectl get pods -n camunda --output=wide; - if [ $(kubectl get pods -n camunda --field-selector=status.phase!=Running -o name | wc -l) -eq 0 ] && - [ $(kubectl get pods -n camunda -o json | jq -r ".items[] | select(.status.containerStatuses[]?.ready == false)" | wc -l) -eq 0 ]; - then - echo "All pods are Running and Healthy - Installation completed!"; - else - echo "Some pods are not Running or Healthy"; - fi -' +```bash reference +https://github.com/camunda/camunda-deployment-references/blob/stable/8.6/generic/kubernetes/single-region/procedure/check-deployment-ready.sh ```
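If you prefer a blocking one-liner over a polling loop, `kubectl wait` offers an alternative readiness check. A sketch, assuming all pods live in the `camunda` namespace as in this guide:

```bash
# Block until every pod in the namespace reports Ready, or time out after 10 minutes.
kubectl wait --for=condition=Ready pod --all -n camunda --timeout=10m
```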
@@ -621,6 +608,9 @@ Console:
 
 ### Use the token
 
+
+
+
 For a detailed guide on generating and using a token, please consult the relevant documentation on [authenticating with the REST API](./../../../../../apis-tools/camunda-api-rest/camunda-api-rest-authentication.md?environment=self-managed).
 
 
@@ -653,20 +643,10 @@ export ZEEBE_AUTHORIZATION_SERVER_URL=http://localhost:18080/auth/realms/camunda
 
 
 
-Generate a temporary token to access the REST API, then capture the value of the `access_token` property and store it as your token.
-
-```shell
-export TOKEN=$(curl --location --request POST "${ZEEBE_AUTHORIZATION_SERVER_URL}" \
---header "Content-Type: application/x-www-form-urlencoded" \
---data-urlencode "client_id=${ZEEBE_CLIENT_ID}" \
---data-urlencode "client_secret=${ZEEBE_CLIENT_SECRET}" \
---data-urlencode "grant_type=client_credentials" | jq '.access_token' -r)
-```
-
-Use the stored token, in our case `TOKEN`, to use the REST API to print the cluster topology.
+Generate a temporary token to access the REST API, then capture the value of the `access_token` property and store it as your token. Use the stored token (referred to as `TOKEN` in this case) to interact with the REST API and display the cluster topology:
 
-```shell
-curl --header "Authorization: Bearer ${TOKEN}" "${ZEEBE_ADDRESS_REST}/v2/topology"
+```bash reference
+https://github.com/camunda/camunda-deployment-references/blob/stable/8.6/generic/kubernetes/single-region/procedure/check-zeebe-cluster-topology.sh
 ```
 
 ...and results in the following output:
 
@@ -675,89 +655,58 @@ curl --header "Authorization: Bearer ${TOKEN}" "${ZEEBE_ADDRESS_REST}/v2/topolog
 
 Example output
 
-```shell
-{
-  "brokers": [
-    {
-      "nodeId": 0,
-      "host": "camunda-zeebe-0.camunda-zeebe",
-      "port": 26501,
-      "partitions": [
-        {
-          "partitionId": 1,
-          "role": "leader",
-          "health": "healthy"
-        },
-        {
-          "partitionId": 2,
-          "role": "follower",
-          "health": "healthy"
-        },
-        {
-          "partitionId": 3,
-          "role": "follower",
-          "health": "healthy"
-        }
-      ],
-      "version": "8.6.0"
-    },
-    {
-      "nodeId": 1,
-      "host": "camunda-zeebe-1.camunda-zeebe",
-      "port": 26501,
-      "partitions": [
-        {
-          "partitionId": 1,
-          "role": "follower",
-          "health": "healthy"
-        },
-        {
-          "partitionId": 2,
-          "role": "leader",
-          "health": "healthy"
-        },
-        {
-          "partitionId": 3,
-          "role": "follower",
-          "health": "healthy"
-        }
-      ],
-      "version": "8.6.0"
-    },
-    {
-      "nodeId": 2,
-      "host": "camunda-zeebe-2.camunda-zeebe",
-      "port": 26501,
-      "partitions": [
-        {
-          "partitionId": 1,
-          "role": "follower",
-          "health": "healthy"
-        },
-        {
-          "partitionId": 2,
-          "role": "follower",
-          "health": "healthy"
-        },
-        {
-          "partitionId": 3,
-          "role": "leader",
-          "health": "healthy"
-        }
-      ],
-      "version": "8.6.0"
-    }
-  ],
-  "clusterSize": 3,
-  "partitionsCount": 3,
-  "replicationFactor": 3,
-  "gatewayVersion": "8.6.0"
-}
+```json reference
+https://github.com/camunda/camunda-deployment-references/blob/stable/8.6/generic/kubernetes/single-region/procedure/check-zeebe-cluster-topology-output.json
 ```
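To verify the topology programmatically rather than by eye, you can assert on the fields shown in the example output above. A sketch, assuming the three-broker, three-partition setup from this guide and the `TOKEN` and `ZEEBE_ADDRESS_REST` variables exported earlier:

```bash
# Fetch the topology and check the cluster size plus per-partition health with jq;
# jq -e sets the exit code based on the boolean result.
curl -s --header "Authorization: Bearer ${TOKEN}" "${ZEEBE_ADDRESS_REST}/v2/topology" \
  | jq -e '.clusterSize == 3 and ([.brokers[].partitions[].health] | all(. == "healthy"))' \
  && echo "Cluster is healthy" || echo "Cluster is degraded"
```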
+
+
+
+Follow our existing [Modeler guide on deploying a diagram](/self-managed/modeler/desktop-modeler/deploy-to-self-managed.md). Below are the helper values you need to enter in Modeler:
+
+
+
+
+
+The following values are required for OAuth authentication:
+
+- **Cluster endpoint:** `https://zeebe.$DOMAIN_NAME`, replacing `$DOMAIN_NAME` with your domain
+- **Client ID:** Retrieve the client ID value from the Identity page of your created M2M application
+- **Client Secret:** Retrieve the client secret value from the Identity page of your created M2M application
+- **OAuth Token URL:** `https://$DOMAIN_NAME/auth/realms/camunda-platform/protocol/openid-connect/token`, replacing `$DOMAIN_NAME` with your domain
+- **Audience:** `zeebe-api`, the default for Camunda 8 Self-Managed
+
+
+
+
+
+This requires port-forwarding the Zeebe Gateway so you can connect to the cluster:
+
+```shell
+kubectl port-forward services/camunda-zeebe-gateway 26500:26500 --namespace camunda
+```
+
+The following values are required for OAuth authentication:
+
+- **Cluster endpoint:** `http://localhost:26500`
+- **Client ID:** Retrieve the client ID value from the Identity page of your created M2M application
+- **Client Secret:** Retrieve the client secret value from the Identity page of your created M2M application
+- **OAuth Token URL:** `http://localhost:18080/auth/realms/camunda-platform/protocol/openid-connect/token`
+- **Audience:** `zeebe-api`, the default for Camunda 8 Self-Managed
+
+
+
+
+
+
+
 ## Test the installation with payment example application
 
 To test your installation with the deployment of a sample application, refer to the [installing payment example guide](../../../guides/installing-payment-example.md).
diff --git a/versioned_docs/version-8.6/self-managed/setup/deploy/amazon/openshift/terraform-setup.md b/versioned_docs/version-8.6/self-managed/setup/deploy/amazon/openshift/terraform-setup.md
index 6244bd4fc07..4d81b307c73 100644
--- a/versioned_docs/version-8.6/self-managed/setup/deploy/amazon/openshift/terraform-setup.md
+++ b/versioned_docs/version-8.6/self-managed/setup/deploy/amazon/openshift/terraform-setup.md
@@ -77,6 +77,16 @@ Following this tutorial and steps will result in:
 
 ## 1. Configure AWS and initialize Terraform
 
+### Obtain a copy of the reference architecture
+
+The first step is to download a copy of the reference architecture from the [GitHub repository](https://github.com/camunda/camunda-deployment-references/tree/stable/8.6/aws/openshift/rosa-hcp-single-region/). This archive will be used throughout the rest of this documentation; the reference architectures are versioned using the same Camunda versions (`stable/8.x`).
+
+```bash reference
+https://github.com/camunda/camunda-deployment-references/blob/stable/8.6/aws/openshift/rosa-hcp-single-region/procedure/get-your-copy.sh
+```
+
+With the reference architecture downloaded and extracted, you can proceed with the remaining steps outlined in this documentation. Ensure that you are in the correct directory before continuing with further instructions.
+
 ### Terraform prerequisites
 
 To manage the infrastructure for Camunda 8 on AWS using Terraform, we need to set up Terraform's backend to store the state file remotely in an S3 bucket. This ensures secure and persistent storage of the state file.
@@ -155,12 +165,12 @@ Now, follow these steps to create the S3 bucket with versioning enabled:
 
 This S3 bucket will now securely store your Terraform state files with versioning enabled.
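Before pointing Terraform at the bucket, it is worth confirming that versioning is actually enabled. A quick check with the AWS CLI, assuming `S3_TF_BUCKET_NAME` is exported as in the steps above:

```bash
# Expect "Status": "Enabled" in the output.
aws s3api get-bucket-versioning --bucket "$S3_TF_BUCKET_NAME"
```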
-#### Create a `config.tf` with the following setup
+#### Edit the `config.tf` file with the following setup
 
 Once the S3 bucket is created, configure your `config.tf` file to use the S3 backend for managing the Terraform state:
 
 ```hcl reference
-https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp/camunda-versions/8.6/config.tf
+https://github.com/camunda/camunda-deployment-references/blob/stable/8.6/aws/openshift/rosa-hcp-single-region/config.tf
 ```
 
 #### Initialize Terraform
@@ -185,7 +195,7 @@ This module sets up the foundational configuration for ROSA HCP and Terraform us
 
 We will leverage [Terraform modules](https://developer.hashicorp.com/terraform/language/modules), which allow us to abstract resources into reusable components, simplifying infrastructure management.
 
-The [Camunda-provided module](https://github.com/camunda/camunda-tf-rosa) is publicly available and serves as a robust starting point for deploying a Red Hat OpenShift cluster on AWS using a Hosted Control Plane. It is highly recommended to review this module before implementation to understand its structure and capabilities.
+The [Camunda-provided module](https://github.com/camunda/camunda-deployment-references/tree/stable/8.6/aws/openshift/rosa-hcp-single-region) is publicly available and serves as a robust starting point for deploying a Red Hat OpenShift cluster on AWS using a Hosted Control Plane. It is highly recommended to review this module before implementation to understand its structure and capabilities.
 
 Please note that this module is based on the official [ROSA HCP Terraform module documentation](https://docs.openshift.com/rosa/rosa_hcp/terraform/rosa-hcp-creating-a-cluster-quickly-terraform.html). It is presented as an example for running Camunda 8 in ROSA. For advanced use cases or custom setups, we encourage you to use the official module, which includes vendor-supported features.
@@ -263,8 +273,7 @@ To set up a ROSA cluster, certain prerequisites must be configured on your AWS a
 
 #### Set up the ROSA cluster module
 
-1. Create a `cluster.tf` file in the same directory as your `config.tf` file.
-2. Add the following content to your newly created `cluster.tf` file to utilize the provided module:
+1. Edit the `cluster.tf` file in the same directory as your `config.tf` file:
 
   :::note Configure your cluster
 
@@ -278,26 +287,27 @@ To set up a ROSA cluster, certain prerequisites must be configured on your AWS a
 
  :::
 
   ```hcl reference
-  https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp/camunda-versions/8.6/cluster.tf
+  https://github.com/camunda/camunda-deployment-references/blob/stable/8.6/aws/openshift/rosa-hcp-single-region/cluster.tf
+  ```
 
  :::caution Camunda Terraform module
 
  This ROSA module is based on the [official Red Hat Terraform module for ROSA HCP](https://registry.terraform.io/modules/terraform-redhat/rosa-hcp/rhcs/latest). Please be aware of potential differences and choices in implementation between this module and the official one.
 
-  We invite you to consult the [Camunda ROSA module documentation](https://github.com/camunda/camunda-tf-rosa/blob/v2.0.0/modules/rosa-hcp/README.md) for more information.
+  We invite you to consult the [Camunda ROSA module documentation](https://github.com/camunda/camunda-deployment-references/blob/stable/8.6/aws/modules/rosa-hcp/README.md) for more information.
 
  :::
 
-3. [Initialize](#initialize-terraform) Terraform for this module using the following Terraform command:
+2.
[Initialize](#initialize-terraform) Terraform for this module using the following Terraform command:
 
   ```bash
   terraform init -backend-config="bucket=$S3_TF_BUCKET_NAME" -backend-config="key=$S3_TF_BUCKET_KEY"
   ```
 
-4. Configure user access to the cluster. By default, the user who creates the OpenShift cluster has administrative access, if you want to grant access to other users, please follow the [Red Hat documentation for granting admin rights to users](https://docs.openshift.com/rosa/cloud_experts_tutorials/cloud-experts-getting-started/cloud-experts-getting-started-admin-rights.html) when the cluster is created.
+3. Configure user access to the cluster. By default, the user who creates the OpenShift cluster has administrative access. If you want to grant access to other users, follow the [Red Hat documentation for granting admin rights to users](https://docs.openshift.com/rosa/cloud_experts_tutorials/cloud-experts-getting-started/cloud-experts-getting-started-admin-rights.html) when the cluster is created.
 
-5. Customize the cluster setup. The module offers various input options that allow you to further customize the cluster configuration. For a comprehensive list of available options and detailed usage instructions, refer to the [ROSA module documentation](https://github.com/camunda/camunda-tf-rosa/blob/v2.0.0/modules/rosa-hcp/README.md).
+4. Customize the cluster setup. The module offers various input options that allow you to further customize the cluster configuration. For a comprehensive list of available options and detailed usage instructions, refer to the [ROSA module documentation](https://github.com/camunda/camunda-deployment-references/blob/stable/8.6/aws/modules/rosa-hcp/README.md).
 
 ### Define outputs
@@ -333,7 +343,7 @@ Terraform will now create the OpenShift cluster with all the necessary configura
 
 Depending on the installation path you have chosen, you can find the reference files used on this page:
 
-- **Standard installation:** [Reference Files](https://github.com/camunda/camunda-deployment-references/tree/main/aws/rosa-hcp/camunda-versions/8.6)
+- **Standard installation:** [Reference Files](https://github.com/camunda/camunda-deployment-references/tree/stable/8.6/aws/openshift/rosa-hcp-single-region/)
 
 ## 2. Preparation for Camunda 8 installation
@@ -343,11 +353,8 @@ You can access the created OpenShift cluster using the following steps:
 
 Set up the required environment variables:
 
-```shell
-export CLUSTER_NAME="$(terraform console << with your IngressController name
-oc -n openshift-ingress-operator get ingresscontrollers/ -o json | jq '.metadata.annotations."ingress.operator.openshift.io/default-enable-http2"'
+```bash reference
+https://github.com/camunda/camunda-deployment-references/blob/stable/8.6/generic/openshift/single-region/procedure/get-ingress-http2-status.sh
 ```
 
 - If the output is `"true"`, it means HTTP/2 is enabled.
@@ -137,8 +126,8 @@ If HTTP/2 is not enabled, you can enable it by running the following command: **IngressController configuration:** -```bash -oc -n openshift-ingress-operator annotate ingresscontrollers/ ingress.operator.openshift.io/default-enable-http2=true +```bash reference +https://github.com/camunda/camunda-deployment-references/blob/stable/8.6/generic/openshift/single-region/procedure/enable-ingress-http2.sh ``` **Global cluster configuration:** @@ -176,11 +165,12 @@ Additionally, the Zeebe Gateway should be configured to use an encrypted connect - The second TLS secret is used on the exposed route, referenced as `camunda-platform-external-certificate`. For example, this would be the same TLS secret used for Ingress. We also configure the Zeebe Gateway Ingress to create a [Re-encrypt Route](https://docs.openshift.com/container-platform/latest/networking/routes/route-configuration.html#nw-ingress-creating-a-route-via-an-ingress_route-configuration). - Finally, we mount the **Service Certificate Secret** (`camunda-platform-internal-service-certificate`) to the Zeebe Gateway Pod. + Finally, we mount the **Service Certificate Secret** (`camunda-platform-internal-service-certificate`) to the Zeebe Gateway Pod and the Zeebe Pod to configure both [broker security](/self-managed/zeebe-deployment/configuration/broker.md#zeebebrokernetworksecurity) and gateway security. + Update your `values.yml` file with the following: ```yaml reference - https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp/camunda-versions/8.6/procedure/install/helm-values/zeebe-gateway-route.yml + https://github.com/camunda/camunda-deployment-references/blob/stable/8.6/generic/openshift/single-region/helm-values/zeebe-gateway-route.yml ``` The domain used by the Zeebe Gateway for gRPC is `zeebe-$DOMAIN_NAME` which different from the one used for the rest, namely `$DOMAIN_NAME`, to avoid any conflicts. It is also important to note that the port used for gRPC is `443`. @@ -190,7 +180,7 @@ Additionally, the Zeebe Gateway should be configured to use an encrypted connect Update your `values.yml` file with the following: ```yaml reference -https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp/camunda-versions/8.6/procedure/install/helm-values/operate-route.yml +https://github.com/camunda/camunda-deployment-references/blob/stable/8.6/generic/openshift/single-region/helm-values/operate-route.yml ``` The actual configuration properties can be reviewed [in the Operate configuration documentation](/self-managed/operate-deployment/operate-configuration.md#zeebe-broker-connection). @@ -200,7 +190,7 @@ The actual configuration properties can be reviewed [in the Operate configuratio Update your `values.yml` file with the following: ```yaml reference -https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp/camunda-versions/8.6/procedure/install/helm-values/tasklist-route.yml +https://github.com/camunda/camunda-deployment-references/blob/stable/8.6/generic/openshift/single-region/helm-values/tasklist-route.yml ``` The actual configuration properties can be reviewed [in the Tasklist configuration documentation](/self-managed/tasklist-deployment/tasklist-configuration.md#zeebe-broker-connection). @@ -208,7 +198,7 @@ The actual configuration properties can be reviewed [in the Tasklist configurati 1. 
**Connectors:** update your `values.yml` file with the following: ```yaml reference -https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp/camunda-versions/8.6/procedure/install/helm-values/connectors-route.yml +https://github.com/camunda/camunda-deployment-references/blob/stable/8.6/generic/openshift/single-region/helm-values/connectors-route.yml ``` The actual configuration properties can be reviewed [in the Connectors configuration documentation](/self-managed/connectors-deployment/connectors-configuration.md#zeebe-broker-connection). @@ -218,7 +208,7 @@ The actual configuration properties can be reviewed [in the Connectors configura 1. Set up the global configuration to enable the single Ingress definition with the host. Update your configuration file as shown below: ```yaml reference -https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp/camunda-versions/8.6/procedure/install/helm-values/domain.yml +https://github.com/camunda/camunda-deployment-references/blob/stable/8.6/generic/openshift/single-region/helm-values/domain.yml ``` @@ -251,7 +241,7 @@ However, you can use `kubectl port-forward` to access the Camunda platform witho To make this work, you will need to configure the deployment to reference `localhost` with the forwarded port. Update your `values.yml` file with the following: ```yaml reference -https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp/camunda-versions/8.6/procedure/install/helm-values/no-domain.yml +https://github.com/camunda/camunda-deployment-references/blob/stable/8.6/generic/openshift/single-region/helm-values/no-domain.yml ``` @@ -271,7 +261,7 @@ The `global.compatibility.openshift.adaptSecurityContext` variable in your value - `disabled`: The `runAsUser` and `fsGroup` values will not be modified (default). ```hcl reference -https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp/camunda-versions/8.6/procedure/install/helm-values/scc.yml +https://github.com/camunda/camunda-deployment-references/blob/stable/8.6/generic/openshift/single-region/helm-values/scc.yml ``` @@ -280,7 +270,7 @@ https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp/ To use permissive SCCs, simply install the charts as they are. Follow the [general Helm deployment guide](/self-managed/setup/install.md). ```hcl reference -https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp/camunda-versions/8.6/procedure/install/helm-values/no-scc.yml +https://github.com/camunda/camunda-deployment-references/blob/stable/8.6/generic/openshift/single-region/helm-values/no-scc.yml ``` @@ -294,27 +284,23 @@ Some components are not enabled by default in this deployment. For more informat Once you've prepared the `values.yml` file, run the following `envsubst` command to substitute the environment variables with their actual values: -```bash -# generate the final values -envsubst < values.yml > generated-values.yml - -# print the result -cat generated-values.yml +```bash reference +https://github.com/camunda/camunda-deployment-references/blob/stable/8.6/generic/openshift/single-region/procedure/assemble-envsubst-values.sh ``` Next, store various passwords in a Kubernetes secret, which will be used by the Helm chart. Below is an example of how to set up the required secret. 
You can use `openssl` to generate random secrets and store them in environment variables: ```bash reference -https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp/camunda-versions/8.6/procedure/install/generate-passwords.sh +https://github.com/camunda/camunda-deployment-references/blob/stable/8.6/generic/openshift/single-region/procedure/generate-passwords.sh ``` Use these environment variables in the `kubectl` command to create the secret. - The `smtp-password` should be replaced with the appropriate external value ([see how it's used by Web Modeler](/self-managed/modeler/web-modeler/configuration/configuration.md#smtp--email)). -```bash reference -https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp/camunda-versions/8.6/procedure/install/create-identity-secret.sh -``` + ```bash reference + https://github.com/camunda/camunda-deployment-references/blob/stable/8.6/generic/openshift/single-region/procedure/create-identity-secret.sh + ``` ### Install Camunda 8 using Helm @@ -323,13 +309,13 @@ Now that the `generated-values.yml` is ready, you can install Camunda 8 using He The following are the required environment variables with some example values: ```bash reference -https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp/camunda-versions/8.6/procedure/install/chart-env.sh +https://github.com/camunda/camunda-deployment-references/blob/stable/8.6/generic/openshift/single-region/procedure/chart-env.sh ``` Then run the following command: ```bash reference -https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp/camunda-versions/8.6/procedure/install/install-chart.sh +https://github.com/camunda/camunda-deployment-references/blob/stable/8.6/generic/openshift/single-region/procedure/install-chart.sh ``` This command: @@ -346,17 +332,8 @@ This guide uses `helm upgrade --install` as it runs install on initial deploymen You can track the progress of the installation using the following command: -```bash -watch -n 5 ' - kubectl get pods -n camunda --output=wide; - if [ $(kubectl get pods -n camunda --field-selector=status.phase!=Running -o name | wc -l) -eq 0 ] && - [ $(kubectl get pods -n camunda -o json | jq -r ".items[] | select(.status.containerStatuses[]?.ready == false)" | wc -l) -eq 0 ]; - then - echo "All pods are Running and Healthy - Installation completed!"; - else - echo "Some pods are not Running or Healthy"; - fi -' +```bash reference +https://github.com/camunda/camunda-deployment-references/blob/stable/8.6/generic/kubernetes/single-region/procedure/check-deployment-ready.sh ``` ## Verify connectivity to Camunda 8 diff --git a/versioned_docs/version-8.6/self-managed/zeebe-deployment/configuration/broker.md b/versioned_docs/version-8.6/self-managed/zeebe-deployment/configuration/broker.md index 561206466e5..eb77072c974 100644 --- a/versioned_docs/version-8.6/self-managed/zeebe-deployment/configuration/broker.md +++ b/versioned_docs/version-8.6/self-managed/zeebe-deployment/configuration/broker.md @@ -156,11 +156,11 @@ network: ### zeebe.broker.network.security -| Field | Description | Example Value | -| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------- | -| enabled | Enables TLS authentication between this gateway and other nodes in the cluster. 
This setting can also be overridden using the environment variable `ZEEBE_BROKER_NETWORK_SECURITY_ENABLED`. | false | -| certificateChainPath | Sets the path to the certificate chain file. This setting can also be overridden using the environment variable `ZEEBE_BROKER_NETWORK_SECURITY_CERTIFICATECHAINPATH`. | | -| privateKeyPath | Sets the path to the private key file location. This setting can also be overridden using the environment variable `ZEEBE_BROKER_NETWORK_SECURITY_PRIVATEKEYPATH`. | | +| Field | Description | Example Value | +| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------- | +| enabled | Enables TLS authentication between this gateway and other nodes in the cluster. This setting can also be overridden using the environment variable `ZEEBE_BROKER_NETWORK_SECURITY_ENABLED`. | false | +| certificateChainPath | Sets the path to the certificate chain file. This setting can also be overridden using the environment variable `ZEEBE_BROKER_NETWORK_SECURITY_CERTIFICATECHAINPATH`. | | +| privateKeyPath | Sets the path to the private key file location. This setting can also be overridden using the environment variable `ZEEBE_BROKER_NETWORK_SECURITY_PRIVATEKEYPATH`. | | | keyStore | Configures the keystore file containing both the certificate chain and the private key; currently only supports PKCS12 format. | | | keyStore.filePath | The path for keystore file; This setting can also be overridden using the environment variable `ZEEBE_BROKER_NETWORK_SECURITY_KEYSTORE_FILEPATH`. | /path/key.pem | | keyStore.password | Sets the password for the keystore file, if not set it is assumed there is no password; This setting can also be overridden using the environment variable `ZEEBE_BROKER_NETWORK_SECURITY_KEYSTORE_PASSWORD` | changeme | diff --git a/versioned_docs/version-8.6/self-managed/zeebe-deployment/configuration/gateway.md b/versioned_docs/version-8.6/self-managed/zeebe-deployment/configuration/gateway.md index 6f48838aed6..82d326c850d 100644 --- a/versioned_docs/version-8.6/self-managed/zeebe-deployment/configuration/gateway.md +++ b/versioned_docs/version-8.6/self-managed/zeebe-deployment/configuration/gateway.md @@ -218,11 +218,11 @@ You can read more about intra-cluster security on [its dedicated page](../securi ::: -| Field | Description | Example value | -| -------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------- | -| enabled | Enables TLS authentication between this gateway and other nodes in the cluster. This setting can also be overridden using the environment variable `ZEEBE_GATEWAY_CLUSTER_SECURITY_ENABLED`. | false | -| certificateChainPath | Sets the path to the certificate chain file. This setting can also be overridden using the environment variable `ZEEBE_GATEWAY_CLUSTER_SECURITY_CERTIFICATECHAINPATH`. | | -| privateKeyPath | Sets the path to the private key file location. This setting can also be overridden using the environment variable `ZEEBE_GATEWAY_CLUSTER_SECURITY_PRIVATEKEYPATH`. 
| | +| Field | Description | Example value | +| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------- | +| enabled | Enables TLS authentication between this gateway and other nodes in the cluster. This setting can also be overridden using the environment variable `ZEEBE_GATEWAY_CLUSTER_SECURITY_ENABLED`. | false | +| certificateChainPath | Sets the path to the certificate chain file. This setting can also be overridden using the environment variable `ZEEBE_GATEWAY_CLUSTER_SECURITY_CERTIFICATECHAINPATH`. | | +| privateKeyPath | Sets the path to the private key file location. This setting can also be overridden using the environment variable `ZEEBE_GATEWAY_CLUSTER_SECURITY_PRIVATEKEYPATH`. | | | keyStore | Configures the keystore file containing both the certificate chain and the private key; currently only supports PKCS12 format. | | | keyStore.filePath | The path for keystore file; This setting can also be overridden using the environment variable `ZEEBE_GATEWAY_CLUSTER_SECURITY_KEYSTORE_FILEPATH`. | /path/key.pem | | keyStore.password | Sets the password for the keystore file, if not set it is assumed there is no password; This setting can also be overridden using the environment variable `ZEEBE_GATEWAY_CLUSTER_SECURITY_KEYSTORE_PASSWORD` | changeme | diff --git a/versioned_docs/version-8.7/self-managed/setup/deploy/amazon/amazon-eks/eks-helm.md b/versioned_docs/version-8.7/self-managed/setup/deploy/amazon/amazon-eks/eks-helm.md index bec4632b128..419c5f0e332 100644 --- a/versioned_docs/version-8.7/self-managed/setup/deploy/amazon/amazon-eks/eks-helm.md +++ b/versioned_docs/version-8.7/self-managed/setup/deploy/amazon/amazon-eks/eks-helm.md @@ -408,12 +408,8 @@ identity: Once you've prepared the `values.yml` file, run the following `envsubst` command to substitute the environment variables with their actual values: -```bash -# generate the final values -envsubst < values.yml > generated-values.yml - -# print the result -cat generated-values.yml +```bash reference +https://github.com/camunda/camunda-deployment-references/blob/main/generic/kubernetes/single-region/procedure/assemble-envsubst-values.sh ``` Next, store various passwords in a Kubernetes secret, which will be used by the Helm chart. Below is an example of how to set up the required secret. You can use `openssl` to generate random secrets and store them in environment variables: @@ -452,17 +448,8 @@ This guide uses `helm upgrade --install` as it runs install on initial deploymen You can track the progress of the installation using the following command: -```bash -watch -n 5 ' - kubectl get pods -n camunda --output=wide; - if [ $(kubectl get pods -n camunda --field-selector=status.phase!=Running -o name | wc -l) -eq 0 ] && - [ $(kubectl get pods -n camunda -o json | jq -r ".items[] | select(.status.containerStatuses[]?.ready == false)" | wc -l) -eq 0 ]; - then - echo "All pods are Running and Healthy - Installation completed!"; - else - echo "Some pods are not Running or Healthy"; - fi -' +```bash reference +https://github.com/camunda/camunda-deployment-references/blob/main/generic/kubernetes/single-region/procedure/check-deployment-ready.sh ```
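For reference, the inline loop replaced by this script is equivalent to the following (it assumes the `camunda` namespace and that `watch` and `jq` are installed):

```bash
# Poll every 5 seconds until every pod in the camunda namespace is both
# Running and reporting all of its containers ready.
watch -n 5 '
  kubectl get pods -n camunda --output=wide;
  if [ $(kubectl get pods -n camunda --field-selector=status.phase!=Running -o name | wc -l) -eq 0 ] &&
     [ $(kubectl get pods -n camunda -o json | jq -r ".items[] | select(.status.containerStatuses[]?.ready == false)" | wc -l) -eq 0 ];
  then
    echo "All pods are Running and Healthy - Installation completed!";
  else
    echo "Some pods are not Running or Healthy";
  fi
'
```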
@@ -622,6 +609,9 @@ Console: ### Use the token + + + For a detailed guide on generating and using a token, please conduct the relevant documentation on [authenticating with the REST API](./../../../../../apis-tools/camunda-api-rest/camunda-api-rest-authentication.md?environment=self-managed). @@ -654,20 +644,10 @@ export ZEEBE_AUTHORIZATION_SERVER_URL=http://localhost:18080/auth/realms/camunda -Generate a temporary token to access the REST API, then capture the value of the `access_token` property and store it as your token. - -```shell -export TOKEN=$(curl --location --request POST "${ZEEBE_AUTHORIZATION_SERVER_URL}" \ ---header "Content-Type: application/x-www-form-urlencoded" \ ---data-urlencode "client_id=${ZEEBE_CLIENT_ID}" \ ---data-urlencode "client_secret=${ZEEBE_CLIENT_SECRET}" \ ---data-urlencode "grant_type=client_credentials" | jq '.access_token' -r) -``` - -Use the stored token, in our case `TOKEN`, to use the REST API to print the cluster topology. +Generate a temporary token to access the REST API, then capture the value of the `access_token` property and store it as your token. Use the stored token (referred to as `TOKEN` in this case) to interact with the REST API and display the cluster topology: -```shell -curl --header "Authorization: Bearer ${TOKEN}" "${ZEEBE_ADDRESS_REST}/v2/topology" +```bash reference +https://github.com/camunda/camunda-deployment-references/blob/main/generic/kubernetes/single-region/procedure/check-zeebe-cluster-topology.sh ``` ...and results in the following output: @@ -676,88 +656,56 @@ curl --header "Authorization: Bearer ${TOKEN}" "${ZEEBE_ADDRESS_REST}/v2/topolog Example output -```shell -{ - "brokers": [ - { - "nodeId": 0, - "host": "camunda-zeebe-0.camunda-zeebe", - "port": 26501, - "partitions": [ - { - "partitionId": 1, - "role": "leader", - "health": "healthy" - }, - { - "partitionId": 2, - "role": "follower", - "health": "healthy" - }, - { - "partitionId": 3, - "role": "follower", - "health": "healthy" - } - ], - "version": "8.6.0" - }, - { - "nodeId": 1, - "host": "camunda-zeebe-1.camunda-zeebe", - "port": 26501, - "partitions": [ - { - "partitionId": 1, - "role": "follower", - "health": "healthy" - }, - { - "partitionId": 2, - "role": "leader", - "health": "healthy" - }, - { - "partitionId": 3, - "role": "follower", - "health": "healthy" - } - ], - "version": "8.6.0" - }, - { - "nodeId": 2, - "host": "camunda-zeebe-2.camunda-zeebe", - "port": 26501, - "partitions": [ - { - "partitionId": 1, - "role": "follower", - "health": "healthy" - }, - { - "partitionId": 2, - "role": "follower", - "health": "healthy" - }, - { - "partitionId": 3, - "role": "leader", - "health": "healthy" - } - ], - "version": "8.6.0" - } - ], - "clusterSize": 3, - "partitionsCount": 3, - "replicationFactor": 3, - "gatewayVersion": "8.6.0" -} +```json reference +https://github.com/camunda/camunda-deployment-references/blob/main/generic/kubernetes/single-region/procedure/check-zeebe-cluster-topology-output.json ```
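For reference, the inline commands replaced by this script are equivalent to the following sketch, assuming the `ZEEBE_*` variables exported above are set and `jq` is installed:

```bash
# Request a client-credentials token from the OIDC endpoint and keep only
# the access_token value.
export TOKEN=$(curl --location --request POST "${ZEEBE_AUTHORIZATION_SERVER_URL}" \
  --header "Content-Type: application/x-www-form-urlencoded" \
  --data-urlencode "client_id=${ZEEBE_CLIENT_ID}" \
  --data-urlencode "client_secret=${ZEEBE_CLIENT_SECRET}" \
  --data-urlencode "grant_type=client_credentials" | jq '.access_token' -r)

# Call the Camunda 8 REST API with the bearer token to print the topology.
curl --header "Authorization: Bearer ${TOKEN}" "${ZEEBE_ADDRESS_REST}/v2/topology"
```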
+ + + +Follow our existing [Modeler guide on deploying a diagram](/self-managed/modeler/desktop-modeler/deploy-to-self-managed.md). Below are the helper values required to be filled in Modeler: + + + + + +The following values are required for the OAuth authentication: + +- **Cluster endpoint:** `https://zeebe.$DOMAIN_NAME`, replacing `$DOMAIN_NAME` with your domain +- **Client ID:** Retrieve the client ID value from the identity page of your created M2M application +- **Client Secret:** Retrieve the client secret value from the Identity page of your created M2M application +- **OAuth Token URL:** `https://$DOMAIN_NAME/auth/realms/camunda-platform/protocol/openid-connect/token`, replacing `$DOMAIN_NAME` with your domain +- **Audience:** `zeebe-api`, the default for Camunda 8 Self-Managed + + + + + +This requires port-forwarding the Zeebe Gateway to be able to connect to the cluster: + +```shell +kubectl port-forward services/camunda-zeebe-gateway 26500:26500 --namespace camunda +``` + +The following values are required for OAuth authentication: + +- **Cluster endpoint:** `http://localhost:26500` +- **Client ID:** Retrieve the client ID value from the identity page of your created M2M application +- **Client Secret:** Retrieve the client secret value from the Identity page of your created M2M application +- **OAuth Token URL:** `http://localhost:18080/auth/realms/camunda-platform/protocol/openid-connect/token` +- **Audience:** `zeebe-api`, the default for Camunda 8 Self-Managed + + + + + + ## Test the installation with payment example application diff --git a/versioned_docs/version-8.7/self-managed/setup/deploy/amazon/openshift/terraform-setup-dual-region.md b/versioned_docs/version-8.7/self-managed/setup/deploy/amazon/openshift/terraform-setup-dual-region.md index 2b8c9770e9d..dbc457330fd 100644 --- a/versioned_docs/version-8.7/self-managed/setup/deploy/amazon/openshift/terraform-setup-dual-region.md +++ b/versioned_docs/version-8.7/self-managed/setup/deploy/amazon/openshift/terraform-setup-dual-region.md @@ -161,7 +161,7 @@ This module sets up the foundational configuration for ROSA HCP and Terraform us We will leverage [Terraform modules](https://developer.hashicorp.com/terraform/language/modules), which allow us to abstract resources into reusable components, simplifying infrastructure management. -The [Camunda-provided module](https://github.com/camunda/camunda-tf-rosa) is publicly available and serves as a starting point for deploying Red Hat OpenShift clusters on AWS using a Hosted Control Plane. +The [Camunda-provided module](https://github.com/camunda/camunda-deployment-references/tree/main/aws/openshift/rosa-hcp-dual-region/) is publicly available and serves as a starting point for deploying Red Hat OpenShift clusters on AWS using a Hosted Control Plane. It is highly recommended to review this module before implementation to understand its structure and capabilities. Please note that this module is based on the official [ROSA HCP Terraform module documentation](https://docs.openshift.com/rosa/rosa_hcp/terraform/rosa-hcp-creating-a-cluster-quickly-terraform.html). @@ -287,21 +287,21 @@ this guide uses a dedicated [aws terraform provider](https://registry.terraform. 
This configuration will use the previously created S3 bucket for storing the Terraform state file: ```hcl reference - https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp-dual-region/terraform/clusters/config.tf + https://github.com/camunda/camunda-deployment-references/blob/main/aws/openshift/rosa-hcp-dual-region/terraform/clusters/config.tf ``` 5. Create a file named `cluster_region_1.tf` in the same directory as your `config.tf`. This file describes the cluster of the region 1: ```hcl reference - https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp-dual-region/terraform/clusters/cluster_region_1.tf + https://github.com/camunda/camunda-deployment-references/blob/main/aws/openshift/rosa-hcp-dual-region/terraform/clusters/cluster_region_1.tf ``` 6. Create a file named `cluster_region_2.tf` in the same directory as your `config.tf`. This file describes the cluster of the region 2: ```hcl reference - https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp-dual-region/terraform/clusters/cluster_region_2.tf + https://github.com/camunda/camunda-deployment-references/blob/main/aws/openshift/rosa-hcp-dual-region/terraform/clusters/cluster_region_2.tf ``` 7. After setting up the terraform files and ensuring your AWS authentication is configured, initialize your Terraform project, then, initialize Terraform to configure the backend and download necessary provider plugins: @@ -334,13 +334,13 @@ this guide uses a dedicated [aws terraform provider](https://registry.terraform. 1. Configure user access to the clusters. By default, the user who creates an OpenShift cluster has administrative access. If you want to grant access to other users, follow the [Red Hat documentation for granting admin rights to users](https://docs.openshift.com/rosa/cloud_experts_tutorials/cloud-experts-getting-started/cloud-experts-getting-started-admin-rights.html) when the cluster will be created. -1. Customize the clusters setup. The module offers various input options that allow you to further customize the cluster configuration. For a comprehensive list of available options and detailed usage instructions, refer to the [ROSA module documentation](https://github.com/camunda/camunda-tf-rosa/blob/v2.0.0/modules/rosa-hcp/README.md). +1. Customize the clusters setup. The module offers various input options that allow you to further customize the cluster configuration. For a comprehensive list of available options and detailed usage instructions, refer to the [ROSA module documentation](https://github.com/camunda/camunda-deployment-references/tree/main/aws/modules/rosa-hcp/README.md). :::caution Camunda Terraform module This ROSA module is based on the [official Red Hat Terraform module for ROSA HCP](https://registry.terraform.io/modules/terraform-redhat/rosa-hcp/rhcs/latest). Please be aware of potential differences and choices in implementation between this module and the official one. -We invite you to consult the [Camunda ROSA module documentation](https://github.com/camunda/camunda-tf-rosa/blob/v2.0.0/modules/rosa-hcp/README.md) for more information. +We invite you to consult the [Camunda ROSA module documentation](https://github.com/camunda/camunda-deployment-references/tree/main/aws/modules/rosa-hcp/README.md) for more information. 
:::

@@ -417,13 +417,13 @@ We'll re-use the previously configured S3 bucket to store the state of the peeri

Begin by setting up the `config.tf` file to use the S3 backend for managing the Terraform state:

```hcl reference
-https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp-dual-region/terraform/peering/config.tf
+https://github.com/camunda/camunda-deployment-references/blob/main/aws/openshift/rosa-hcp-dual-region/terraform/peering/config.tf
```

Alongside the `config.tf` file, create a file called `peering.tf` to reference the peering configuration:

```hcl reference
-https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp-dual-region/terraform/peering/peering.tf
+https://github.com/camunda/camunda-deployment-references/blob/main/aws/openshift/rosa-hcp-dual-region/terraform/peering/peering.tf
```

One cluster will be referenced as the **owner**, and the other as the **accepter**.

@@ -497,13 +497,13 @@ We'll re-use the previously configured S3 bucket to store the state of the backu

Begin by setting up the `config.tf` file to use the S3 backend for managing the Terraform state:

```hcl reference
-https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp-dual-region/terraform/backup_bucket/config.tf
+https://github.com/camunda/camunda-deployment-references/blob/main/aws/openshift/rosa-hcp-dual-region/terraform/backup_bucket/config.tf
```

Finally, create a file called `backup_bucket.tf` to reference the elastic backup bucket configuration:

```hcl reference
-https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp-dual-region/terraform/backup_bucket/backup_bucket.tf
+https://github.com/camunda/camunda-deployment-references/blob/main/aws/openshift/rosa-hcp-dual-region/terraform/backup_bucket/backup_bucket.tf
```

This bucket configuration follows [multiple best practices](https://docs.aws.amazon.com/AmazonS3/latest/userguide/security-best-practices.html).

@@ -568,7 +568,7 @@ The `BACKUP_BUCKET_REGION` will define the region of the bucket, you can pick on

### Reference files

-You can find the reference files used on [this page](https://github.com/camunda/camunda-deployment-references/tree/main/aws/rosa-hcp-dual-region/terraform)
+You can find the reference files used on [this page](https://github.com/camunda/camunda-deployment-references/tree/main/aws/openshift/rosa-hcp-dual-region/)

## 2. Preparation for Camunda 8 installation

diff --git a/versioned_docs/version-8.7/self-managed/setup/deploy/amazon/openshift/terraform-setup.md b/versioned_docs/version-8.7/self-managed/setup/deploy/amazon/openshift/terraform-setup.md
index ad771e0e119..3b76d99eaf2 100644
--- a/versioned_docs/version-8.7/self-managed/setup/deploy/amazon/openshift/terraform-setup.md
+++ b/versioned_docs/version-8.7/self-managed/setup/deploy/amazon/openshift/terraform-setup.md
@@ -73,6 +73,16 @@ Following this tutorial and steps will result in:

## 1. Configure AWS and initialize Terraform

+### Obtain a copy of the reference architecture
+
+The first step is to download a copy of the reference architecture from the [GitHub repository](https://github.com/camunda/camunda-deployment-references/blob/main/aws/openshift/rosa-hcp-single-region/). This archive will be used throughout the rest of this documentation; the reference architectures are versioned using the same Camunda versions (`stable/8.x`).
+ +```bash reference +https://github.com/camunda/camunda-deployment-references/blob/main/aws/openshift/rosa-hcp-single-region/procedure/get-your-copy.sh +``` + +With the reference architecture downloaded and extracted, you can proceed with the remaining steps outlined in this documentation. Ensure that you are in the correct directory before continuing with further instructions. + ### Terraform prerequisites To manage the infrastructure for Camunda 8 on AWS using Terraform, we need to set up Terraform's backend to store the state file remotely in an S3 bucket. This ensures secure and persistent storage of the state file. @@ -151,12 +161,12 @@ Now, follow these steps to create the S3 bucket with versioning enabled: This S3 bucket will now securely store your Terraform state files with versioning enabled. -#### Create a `config.tf` with the following setup +#### Edit the `config.tf` with the following setup Once the S3 bucket is created, configure your `config.tf` file to use the S3 backend for managing the Terraform state: ```hcl reference -https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp/camunda-versions/8.7/config.tf +https://github.com/camunda/camunda-deployment-references/blob/main/aws/openshift/rosa-hcp-single-region/config.tf ``` #### Initialize Terraform @@ -181,7 +191,7 @@ This module sets up the foundational configuration for ROSA HCP and Terraform us We will leverage [Terraform modules](https://developer.hashicorp.com/terraform/language/modules), which allow us to abstract resources into reusable components, simplifying infrastructure management. -The [Camunda-provided module](https://github.com/camunda/camunda-tf-rosa) is publicly available and serves as a robust starting point for deploying a Red Hat OpenShift cluster on AWS using a Hosted Control Plane. It is highly recommended to review this module before implementation to understand its structure and capabilities. +The [Camunda-provided module](https://github.com/camunda/camunda-deployment-references/tree/main/aws/openshift/rosa-hcp-single-region) is publicly available and serves as a robust starting point for deploying a Red Hat OpenShift cluster on AWS using a Hosted Control Plane. It is highly recommended to review this module before implementation to understand its structure and capabilities. Please note that this module is based on the official [ROSA HCP Terraform module documentation](https://docs.openshift.com/rosa/rosa_hcp/terraform/rosa-hcp-creating-a-cluster-quickly-terraform.html). It is presented as an example for running Camunda 8 in ROSA. For advanced use cases or custom setups, we encourage you to use the official module, which includes vendor-supported features. @@ -259,8 +269,7 @@ To set up a ROSA cluster, certain prerequisites must be configured on your AWS a #### Set up the ROSA cluster module -1. Create a `cluster.tf` file in the same directory as your `config.tf` file. -2. Add the following content to your newly created `cluster.tf` file to utilize the provided module: +1. 
Edit the `cluster.tf` file in the same directory as your `config.tf` file: :::note Configure your cluster @@ -274,26 +283,26 @@ To set up a ROSA cluster, certain prerequisites must be configured on your AWS a ::: ```hcl reference - https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp/camunda-versions/8.7/cluster.tf + https://github.com/camunda/camunda-deployment-references/blob/main/aws/openshift/rosa-hcp-single-region/cluster.tf ``` :::caution Camunda Terraform module This ROSA module is based on the [official Red Hat Terraform module for ROSA HCP](https://registry.terraform.io/modules/terraform-redhat/rosa-hcp/rhcs/latest). Please be aware of potential differences and choices in implementation between this module and the official one. - We invite you to consult the [Camunda ROSA module documentation](https://github.com/camunda/camunda-tf-rosa/blob/v2.0.0/modules/rosa-hcp/README.md) for more information. + We invite you to consult the [Camunda ROSA module documentation](https://github.com/camunda/camunda-deployment-references/tree/main/aws/modules/rosa-hcp/README.md) for more information. ::: -3. [Initialize](#initialize-terraform) Terraform for this module using the following Terraform command: +2. [Initialize](#initialize-terraform) Terraform for this module using the following Terraform command: ```bash terraform init -backend-config="bucket=$S3_TF_BUCKET_NAME" -backend-config="key=$S3_TF_BUCKET_KEY" ``` -4. Configure user access to the cluster. By default, the user who creates the OpenShift cluster has administrative access. If you want to grant access to other users, follow the [Red Hat documentation for granting admin rights to users](https://docs.openshift.com/rosa/cloud_experts_tutorials/cloud-experts-getting-started/cloud-experts-getting-started-admin-rights.html) when the cluster is created. +3. Configure user access to the cluster. By default, the user who creates the OpenShift cluster has administrative access. If you want to grant access to other users, follow the [Red Hat documentation for granting admin rights to users](https://docs.openshift.com/rosa/cloud_experts_tutorials/cloud-experts-getting-started/cloud-experts-getting-started-admin-rights.html) when the cluster is created. -5. Customize the cluster setup. The module offers various input options that allow you to further customize the cluster configuration. For a comprehensive list of available options and detailed usage instructions, refer to the [ROSA module documentation](https://github.com/camunda/camunda-tf-rosa/blob/v2.0.0/modules/rosa-hcp/README.md). +4. Customize the cluster setup. The module offers various input options that allow you to further customize the cluster configuration. For a comprehensive list of available options and detailed usage instructions, refer to the [ROSA module documentation](https://github.com/camunda/camunda-deployment-references/tree/main/aws/modules/rosa-hcp/README.md). ### Define outputs @@ -329,7 +338,7 @@ Terraform will now create the OpenShift cluster with all the necessary configura Depending on the installation path you have chosen, you can find the reference files used on this page: -- **Standard installation:** [Reference Files](https://github.com/camunda/camunda-deployment-references/tree/main/aws/rosa-hcp/camunda-versions/8.7) +- **Standard installation:** [Reference Files](https://github.com/camunda/camunda-deployment-references/tree/main/aws/openshift/rosa-hcp-single-region/) ## 2. 
Preparation for Camunda 8 installation

@@ -339,11 +348,8 @@ You can access the created OpenShift cluster using the following steps:

Set up the required environment variables:

-```shell
-export CLUSTER_NAME="$(terraform console <<Example Submariner check successful output

```text reference
-https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp-dual-region/procedure/submariner/output.txt
+https://github.com/camunda/camunda-deployment-references/blob/main/aws/openshift/rosa-hcp-dual-region/procedure/submariner/output.txt
```

@@ -323,7 +323,7 @@ For more comprehensive details regarding the verification tests for Submariner u

**Debugging the Submariner setup:**

-If you are experiencing connectivity issues, we recommend spawning a pod in the `default` namespace that contains networking debugging tools. You can find an [example here](https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp-dual-region/procedure/submariner/debug-utils-submariner.yml).
+If you are experiencing connectivity issues, we recommend spawning a pod in the `default` namespace that contains networking debugging tools. You can find an [example here](https://github.com/camunda/camunda-deployment-references/blob/main/aws/openshift/rosa-hcp-dual-region/procedure/submariner/debug-utils-submariner.yml).

With this pod, you will be able to check flow openings, service resolution, and other network-related aspects. Troubleshooting requires examining all the underlying mechanisms of Submariner. Therefore, we also encourage you to read the [Submariner troubleshooting guide](https://submariner.io/operations/troubleshooting/).

@@ -337,7 +337,7 @@ Before proceeding with the installation, ensure the required information is avai

Review and adjust the following environment script to match your specific configuration:

```bash reference
-https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp-dual-region/procedure/camunda/8.7/export_environment_prerequisites.sh
+https://github.com/camunda/camunda-deployment-references/blob/main/aws/openshift/rosa-hcp-dual-region/procedure/export_environment_prerequisites.sh
```

_If you are unsure about the values of the backup bucket, please refer to the [S3 backup bucket module setup](/self-managed/setup/deploy/amazon/openshift/terraform-setup-dual-region.md#s3-backup-bucket-module-setup) as a reference for implementation._

@@ -363,7 +363,7 @@ The Elasticsearch backup [bucket is tied to a specific region](https://docs.aws.

The following script will create the required namespaces and secrets used to reference the bucket access.
```bash reference -https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp-dual-region/procedure/camunda/8.7/setup_ns_secrets.sh +https://github.com/camunda/camunda-deployment-references/blob/main/aws/openshift/rosa-hcp-dual-region/procedure/setup_ns_secrets.sh ``` Save it as `setup_ns_secrets.sh` and execute it: @@ -380,7 +380,7 @@ Throughout this guide, you will add and merge values into these files to configu - Save the following file as both `values-region-1.yml` and `values-region-2.yml` to serve as the base configuration: ```yaml reference - https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp-dual-region/procedure/camunda/8.7/helm-values/values-base.yml + https://github.com/camunda/camunda-deployment-references/blob/main/aws/openshift/rosa-hcp-dual-region/procedure/helm-values/values-base.yml ``` :::warning Merging YAML files @@ -396,12 +396,12 @@ Set up the region ID using a unique integer for each region: - Add the following YAML configuration to your `values-region-1.yml`: ```yaml reference - https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp-dual-region/procedure/camunda/8.7/helm-values/values-region-1.yml + https://github.com/camunda/camunda-deployment-references/blob/main/aws/openshift/rosa-hcp-dual-region/procedure/helm-values/values-region-1.yml ``` - Add the following YAML configuration to your `values-region-2.yml`: ```yaml reference - https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp-dual-region/procedure/camunda/8.7/helm-values/values-region-2.yml + https://github.com/camunda/camunda-deployment-references/blob/main/aws/openshift/rosa-hcp-dual-region/procedure/helm-values/values-region-2.yml ``` **Security Context Constraints (SCCs)** @@ -416,7 +416,7 @@ For custom configurations or specific requirements, please refer to the [install Before deploying, some values in the value files need to be updated. To assist with generating these values, save the following Bash script as `generate_zeebe_helm_values.sh`: ```bash reference -https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp-dual-region/procedure/camunda/8.7/generate_zeebe_helm_values.sh +https://github.com/camunda/camunda-deployment-references/blob/main/aws/openshift/rosa-hcp-dual-region/procedure/generate_zeebe_helm_values.sh ``` Then, source the output of the script. By doing so, we can reuse the values later for substitution, instead of manually adjusting the values files. 
You will be prompted to specify the number of Zeebe brokers (total number of Zeebe brokers in both Kubernetes clusters), for a dual-region setup we recommend `8`, resulting in four brokers per region: @@ -438,7 +438,7 @@ Make sure that the variable `CLUSTER_1_NAME` is set to the name of your first cl Once you've prepared each region's value file (`values-region-1.yml` and `values-region-2.yml`) file, run the following `envsubst` command to substitute the environment variables with their actual values: ```bash reference -https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp-dual-region/procedure/camunda/8.7/generate_helm_values.sh +https://github.com/camunda/camunda-deployment-references/blob/main/aws/openshift/rosa-hcp-dual-region/procedure/generate_helm_values.sh ``` ### Install Camunda 8 using Helm @@ -446,7 +446,7 @@ https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp- With the value files for each region configured, you can now install Camunda 8 using Helm. Execute the following commands: ```bash reference -https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp-dual-region/procedure/camunda/8.7/install_chart.sh +https://github.com/camunda/camunda-deployment-references/blob/main/aws/openshift/rosa-hcp-dual-region/procedure/install_chart.sh ``` This command: @@ -468,7 +468,7 @@ This guide uses `helm upgrade --install` as it runs install on initial deploymen Once Camunda is deployed across the two clusters, the next step is to expose each service to Submariner so it can be resolved by the other cluster: ```bash reference -https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp-dual-region/procedure/camunda/8.7/export_services_submariner.sh +https://github.com/camunda/camunda-deployment-references/blob/main/aws/openshift/rosa-hcp-dual-region/procedure/export_services_submariner.sh ``` Alternatively, you can manage each service individually using the `ServiceExport` Custom Resource Definition (CRD). @@ -489,13 +489,13 @@ metadata: For each cluster, verify the status of the exported services with this script: ```bash reference -https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp-dual-region/procedure/camunda/8.7/verify_exported_services.sh +https://github.com/camunda/camunda-deployment-references/blob/main/aws/openshift/rosa-hcp-dual-region/procedure/verify_exported_services.sh ``` To monitor the progress of the installation, save and execute the following script: ```bash reference -https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp-dual-region/procedure/camunda/8.7/verify_installation_completed.sh +https://github.com/camunda/camunda-deployment-references/blob/main/aws/openshift/rosa-hcp-dual-region/procedure/verify_installation_completed.sh ``` Save it as `verify_installation_completed.sh`, make it executable, and run it: @@ -509,9 +509,9 @@ chmod +x verify_installation_completed.sh 1. Open a terminal and port-forward the Zeebe Gateway via `oc` from one of your clusters. Zeebe is stretching over both clusters and is `active-active`, meaning it doesn't matter which Zeebe Gateway to use to interact with your Zeebe cluster. -```shell -oc --context "$CLUSTER_1_NAME" -n "$CAMUNDA_NAMESPACE_1" port-forward "services/$HELM_RELEASE_NAME-zeebe-gateway" 8080:8080 -``` + ```shell + oc --context "$CLUSTER_1_NAME" -n "$CAMUNDA_NAMESPACE_1" port-forward "services/$HELM_RELEASE_NAME-zeebe-gateway" 8080:8080 + ``` 2. 
Open another terminal and use e.g. `cURL` to print the Zeebe cluster topology: @@ -525,7 +525,7 @@ oc --context "$CLUSTER_1_NAME" -n "$CAMUNDA_NAMESPACE_1" port-forward "services/ Example output ```text reference - https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp-dual-region/procedure/camunda/8.7/zeebe-http-output.txt + https://github.com/camunda/camunda-deployment-references/blob/main/aws/openshift/rosa-hcp-dual-region/procedure/zeebe-http-output.txt ``` diff --git a/versioned_docs/version-8.7/self-managed/setup/deploy/openshift/redhat-openshift.md b/versioned_docs/version-8.7/self-managed/setup/deploy/openshift/redhat-openshift.md index 8b98b2b1a12..7569cc0fbef 100644 --- a/versioned_docs/version-8.7/self-managed/setup/deploy/openshift/redhat-openshift.md +++ b/versioned_docs/version-8.7/self-managed/setup/deploy/openshift/redhat-openshift.md @@ -29,12 +29,13 @@ If you need to set up an OpenShift cluster on a cloud provider, we recommend our We conduct testing and ensure compatibility against the following OpenShift versions: -| OpenShift Version | [End of Support Date](https://access.redhat.com/support/policy/updates/openshift) | -| ----------------- | --------------------------------------------------------------------------------- | -| 4.17.x | June 27, 2025 | -| 4.16.x | December 27, 2025 | -| 4.15.x | August 27, 2025 | -| 4.14.x | May 1, 2025 | +| OpenShift Version | +| ----------------- | +| 4.18.x | +| 4.17.x | +| 4.16.x | +| 4.15.x | +| 4.14.x | :::caution Versions compatibility @@ -66,7 +67,7 @@ Over this guide, you will add and merge values in this file to configure your de You can find a reference example of this file here: ```yaml reference -https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp/camunda-versions/8.7/procedure/install/helm-values/base.yml +https://github.com/camunda/camunda-deployment-references/blob/main/generic/openshift/single-region/helm-values/base.yml ``` :::danger Merging YAML files @@ -95,18 +96,10 @@ To use these routes for the Zeebe Gateway, configure this through Ingress as wel The route created by OpenShift will use a domain to provide access to the platform. By default, you can use the OpenShift applications domain, but any other domain supported by the router can also be used. -To retrieve the OpenShift applications domain (used as an example here), run the following command: +To retrieve the OpenShift applications domain (used as an example here), run the following command and define the route domain that will be used for the Camunda 8 deployment: -```bash -export OPENSHIFT_APPS_DOMAIN=$(oc get ingresses.config.openshift.io cluster -o jsonpath='{.spec.domain}') -``` - -Next, define the route domain that will be used for the Camunda 8 deployment. For example: - -```bash -export DOMAIN_NAME="camunda.$OPENSHIFT_APPS_DOMAIN" - -echo "Camunda 8 will be reachable from $DOMAIN_NAME" +```bash reference +https://github.com/camunda/camunda-deployment-references/blob/main/generic/openshift/single-region/procedure/setup-application-domain.sh ``` If you choose to use a custom domain instead, ensure it is supported by your router configuration and replace the example domain with your desired domain. For more details on configuring custom domains in OpenShift, refer to the official [custom domain OpenShift documentation](https://docs.openshift.com/dedicated/applications/deployments/osd-config-custom-domains-applications.html). 
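If you keep the default applications domain, the inline commands replaced by the referenced `setup-application-domain.sh` script are shown below for reference:

```bash
# Read the OpenShift applications domain of the cluster...
export OPENSHIFT_APPS_DOMAIN=$(oc get ingresses.config.openshift.io cluster -o jsonpath='{.spec.domain}')

# ...and derive the route domain used for the Camunda 8 deployment.
export DOMAIN_NAME="camunda.$OPENSHIFT_APPS_DOMAIN"

echo "Camunda 8 will be reachable from $DOMAIN_NAME"
```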
@@ -123,12 +116,8 @@ oc get ingresses.config/cluster -o json | jq '.metadata.annotations."ingress.ope Alternatively, if you use a dedicated IngressController for the deployment: -```bash -# List your IngressControllers -oc -n openshift-ingress-operator get ingresscontrollers - -# Replace with your IngressController name -oc -n openshift-ingress-operator get ingresscontrollers/ -o json | jq '.metadata.annotations."ingress.operator.openshift.io/default-enable-http2"' +```bash reference +https://github.com/camunda/camunda-deployment-references/blob/main/generic/openshift/single-region/procedure/get-ingress-http2-status.sh ``` - If the output is `"true"`, it means HTTP/2 is enabled. @@ -141,8 +130,8 @@ If HTTP/2 is not enabled, you can enable it by running the following command: **IngressController configuration:** -```bash -oc -n openshift-ingress-operator annotate ingresscontrollers/ ingress.operator.openshift.io/default-enable-http2=true +```bash reference +https://github.com/camunda/camunda-deployment-references/blob/main/generic/openshift/single-region/procedure/enable-ingress-http2.sh ``` **Global cluster configuration:** @@ -159,9 +148,9 @@ This will add the necessary annotation to [enable HTTP/2 for Ingress in your Ope Additionally, the Zeebe Gateway should be configured to use an encrypted connection with TLS. In OpenShift, the connection from HAProxy to the Zeebe Gateway service can use HTTP/2 only for re-encryption or pass-through routes, and not for edge-terminated or insecure routes. -1. **Core Pod:** two [TLS secrets](https://kubernetes.io/docs/concepts/configuration/secret/#tls-secrets) for the Zeebe Gateway are required, one for the **service** and the other one for the **route**: +1. **Zeebe Gateway:** two [TLS secrets](https://kubernetes.io/docs/concepts/configuration/secret/#tls-secrets) for the Zeebe Gateway are required, one for the **service** and the other one for the **route**: - - The first TLS secret is issued to the Zeebe Gateway Service Name. This must use the [PKCS #8 syntax](https://en.wikipedia.org/wiki/PKCS_8) or [PKCS #1 syntax](https://en.wikipedia.org/wiki/PKCS_1) as Zeebe only supports these, referenced as `camunda-platform-internal-service-certificate`. This certificate is also use in the other components such as Operate, Tasklist. + - The first TLS secret is issued to the Zeebe Gateway Service Name. This must use the [PKCS #8 syntax](https://en.wikipedia.org/wiki/PKCS_8) or [PKCS #1 syntax](https://en.wikipedia.org/wiki/PKCS_1) as Zeebe only supports these, referenced as `camunda-platform-internal-service-certificate`. In the example below, a TLS certificate is generated for the Zeebe Gateway service with an [annotation](https://docs.openshift.com/container-platform/latest/security/certificates/service-serving-certificate.html). The generated certificate will be in the form of a secret. @@ -182,26 +171,43 @@ Additionally, the Zeebe Gateway should be configured to use an encrypted connect To configure a Zeebe cluster securely, it's essential to set up a secure communication configuration between pods: - - We enable gRPC ingress for the Core pod, which sets up a secure proxy that we'll use to communicate with the Zeebe cluster. To avoid conflicts with other services, we use a specific domain (`zeebe-$DOMAIN_NAME`) for the gRPC proxy, different from the one used by other services (`$DOMAIN_NAME`). We also note that the port used for gRPC is `443`. 
+ - We enable gRPC ingress for the ZeebeGateway pod, which sets up a secure proxy that we'll use to communicate with the Zeebe cluster. To avoid conflicts with other services, we use a specific domain (`zeebe-$DOMAIN_NAME`) for the gRPC proxy, different from the one used by other services (`$DOMAIN_NAME`). We also note that the port used for gRPC is `443`.

-   We mount the **Service Certificate Secret** (`camunda-platform-internal-service-certificate`) to the Core pod and configure a secure TLS connection.
+   Finally, we mount the **Service Certificate Secret** (`camunda-platform-internal-service-certificate`) to the Zeebe Gateway Pod and the Zeebe Pod to configure both [broker security](/self-managed/zeebe-deployment/configuration/broker.md#zeebebrokernetworksecurity) and gateway security.

-   Update your `values.yml` file with the following:
+   Update your `values.yml` file with the following:

   ```yaml reference
-  https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp/camunda-versions/8.7/procedure/install/helm-values/core-route.yml
+  https://github.com/camunda/camunda-deployment-references/blob/main/generic/openshift/single-region/helm-values/zeebe-gateway-route.yml
   ```

-   The actual configuration properties can be reviewed:
+   The domain used by the Zeebe Gateway for gRPC is `zeebe-$DOMAIN_NAME`, which is different from the one used for the rest, namely `$DOMAIN_NAME`, to avoid any conflicts. It is also important to note that the port used for gRPC is `443`.

-   - [in the Operate configuration documentation](/self-managed/operate-deployment/operate-configuration.md#zeebe-broker-connection),
-   - [in the Tasklist configuration documentation](/self-managed/tasklist-deployment/tasklist-configuration.md#zeebe-broker-connection),
-   - [in the Zeebe Gateway configuration documentation](/self-managed/zeebe-deployment/configuration/gateway.md).
+2. **Operate:** mount the **Service Certificate Secret** to the Operate pod and configure the secure TLS connection. Here, only the `tls.crt` file is required.

-2. **Connectors:** update your `values.yml` file with the following:
+Update your `values.yml` file with the following:

```yaml reference
-https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp/camunda-versions/8.7/procedure/install/helm-values/connectors-route.yml
+https://github.com/camunda/camunda-deployment-references/blob/main/generic/openshift/single-region/helm-values/operate-route.yml
+```
+
+The actual configuration properties can be reviewed [in the Operate configuration documentation](/self-managed/operate-deployment/operate-configuration.md#zeebe-broker-connection).
+
+1. **Tasklist:** mount the **Service Certificate Secret** to the Tasklist pod and configure the secure TLS connection. Here, only the `tls.crt` file is required.
+
+   Update your `values.yml` file with the following:
+
+```yaml reference
+https://github.com/camunda/camunda-deployment-references/blob/main/generic/openshift/single-region/helm-values/tasklist-route.yml
+```
+
+The actual configuration properties can be reviewed [in the Tasklist configuration documentation](/self-managed/tasklist-deployment/tasklist-configuration.md#zeebe-broker-connection).
+
+1. 
**Connectors:** update your `values.yml` file with the following: + +```yaml reference +https://github.com/camunda/camunda-deployment-references/blob/main/generic/openshift/single-region/helm-values/connectors-route.yml ``` The actual configuration properties can be reviewed [in the Connectors configuration documentation](/self-managed/connectors-deployment/connectors-configuration.md#zeebe-broker-connection). @@ -211,7 +217,7 @@ The actual configuration properties can be reviewed [in the Connectors configura 1. Set up the global configuration to enable the single Ingress definition with the host. Update your configuration file as shown below: ```yaml reference -https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp/camunda-versions/8.7/procedure/install/helm-values/domain.yml +https://github.com/camunda/camunda-deployment-references/blob/main/generic/openshift/single-region/helm-values/domain.yml ``` @@ -244,7 +250,7 @@ However, you can use `kubectl port-forward` to access the Camunda platform witho To make this work, you will need to configure the deployment to reference `localhost` with the forwarded port. Update your `values.yml` file with the following: ```yaml reference -https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp/camunda-versions/8.7/procedure/install/helm-values/no-domain.yml +https://github.com/camunda/camunda-deployment-references/blob/main/generic/openshift/single-region/helm-values/no-domain.yml ``` @@ -264,7 +270,7 @@ The `global.compatibility.openshift.adaptSecurityContext` variable in your value - `disabled`: The `runAsUser` and `fsGroup` values will not be modified (default). ```yaml reference -https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp/camunda-versions/8.7/procedure/install/helm-values/scc.yml +https://github.com/camunda/camunda-deployment-references/blob/main/generic/openshift/single-region/helm-values/scc.yml ``` @@ -273,7 +279,7 @@ https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp/ To use permissive SCCs, simply install the charts as they are. Follow the [general Helm deployment guide](/self-managed/setup/install.md). ```yaml reference -https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp/camunda-versions/8.7/procedure/install/helm-values/no-scc.yml +https://github.com/camunda/camunda-deployment-references/blob/main/generic/openshift/single-region/helm-values/no-scc.yml ``` @@ -287,27 +293,23 @@ Some components are not enabled by default in this deployment. For more informat Once you've prepared the `values.yml` file, run the following `envsubst` command to substitute the environment variables with their actual values: -```bash -# generate the final values -envsubst < values.yml > generated-values.yml - -# print the result -cat generated-values.yml +```bash reference +https://github.com/camunda/camunda-deployment-references/blob/main/generic/openshift/single-region/procedure/assemble-envsubst-values.sh ``` Next, store various passwords in a Kubernetes secret, which will be used by the Helm chart. Below is an example of how to set up the required secret. 
Next, store various passwords in a Kubernetes secret, which will be used by the Helm chart. Below is an example of how to set up the required secret. You can use `openssl` to generate random secrets and store them in environment variables:

```bash reference
-https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp/camunda-versions/8.7/procedure/install/generate-passwords.sh
+https://github.com/camunda/camunda-deployment-references/blob/main/generic/openshift/single-region/procedure/generate-passwords.sh
```

Use these environment variables in the `kubectl` command to create the secret.

- The `smtp-password` should be replaced with the appropriate external value ([see how it's used by Web Modeler](/self-managed/modeler/web-modeler/configuration/configuration.md#smtp--email)).

-```bash reference
-https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp/camunda-versions/8.7/procedure/install/create-identity-secret.sh
-```
+  ```bash reference
+  https://github.com/camunda/camunda-deployment-references/blob/main/generic/openshift/single-region/procedure/create-identity-secret.sh
+  ```

### Install Camunda 8 using Helm

@@ -316,13 +318,13 @@ Now that the `generated-values.yml` is ready, you can install Camunda 8 using He

The following are the required environment variables with some example values:

```bash reference
-https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp/camunda-versions/8.7/procedure/install/chart-env.sh
+https://github.com/camunda/camunda-deployment-references/blob/main/generic/openshift/single-region/procedure/chart-env.sh
```

Then run the following command:

```bash reference
-https://github.com/camunda/camunda-deployment-references/blob/main/aws/rosa-hcp/camunda-versions/8.7/procedure/install/install-chart.sh
+https://github.com/camunda/camunda-deployment-references/blob/main/generic/openshift/single-region/procedure/install-chart.sh
```

This command:

@@ -339,24 +341,13 @@ This guide uses `helm upgrade --install` as it runs install on initial deploymen

You can track the progress of the installation using the following command:

-```bash
-watch -n 5 '
-  kubectl get pods -n camunda --output=wide;
-  if [ $(kubectl get pods -n camunda --field-selector=status.phase!=Running -o name | wc -l) -eq 0 ] &&
-  [ $(kubectl get pods -n camunda -o json | jq -r ".items[] | select(.status.containerStatuses[]?.ready == false)" | wc -l) -eq 0 ];
-  then
-  echo "All pods are Running and Healthy - Installation completed!";
-  else
-  echo "Some pods are not Running or Healthy";
-  fi
-'
+```bash reference
+https://github.com/camunda/camunda-deployment-references/blob/main/generic/kubernetes/single-region/procedure/check-deployment-ready.sh
```

## Verify connectivity to Camunda 8

-Please follow our [guide to verify connectivity to Camunda 8](/self-managed/setup/deploy/amazon/amazon-eks/eks-helm.md#verify-connectivity-to-camunda-8).
-
-The username of the first user is `demo`, the password is the one generated previously and stored in the environment variable `FIRST_USER_PASSWORD`.
+Please follow our [guide to verify connectivity to Camunda 8](/self-managed/setup/deploy/amazon/amazon-eks/eks-helm.md#verify-connectivity-to-camunda-8).
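+
+As an additional smoke test, you can exercise the gRPC endpoint exposed through the dedicated route configured earlier. This is a sketch that assumes `zbctl` is installed and that the OAuth client variables used in that guide (`ZEEBE_CLIENT_ID`, `ZEEBE_CLIENT_SECRET`, `ZEEBE_AUTHORIZATION_SERVER_URL`) are exported:
+
+```bash
+# Illustrative gRPC check against the dedicated Zeebe route on port 443;
+# prints the cluster topology on success (zbctl and OAuth variables assumed).
+zbctl status \
+  --address "zeebe-${DOMAIN_NAME}:443" \
+  --clientId "${ZEEBE_CLIENT_ID}" \
+  --clientSecret "${ZEEBE_CLIENT_SECRET}" \
+  --authzUrl "${ZEEBE_AUTHORIZATION_SERVER_URL}"
+```
+
+If the route's certificate is not in your local trust store, you may also need to pass it explicitly via `--certPath`.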
:::caution Domain name for gRPC Zeebe

diff --git a/versioned_docs/version-8.7/self-managed/zeebe-deployment/configuration/broker.md b/versioned_docs/version-8.7/self-managed/zeebe-deployment/configuration/broker.md
index 9254532ca2d..d6cca0ccbb6 100644
--- a/versioned_docs/version-8.7/self-managed/zeebe-deployment/configuration/broker.md
+++ b/versioned_docs/version-8.7/self-managed/zeebe-deployment/configuration/broker.md
@@ -160,11 +160,11 @@ network:

### zeebe.broker.network.security

-| Field                | Description                                                                                                                                                                                  | Example Value |
-| -------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------- |
-| enabled              | Enables TLS authentication between this gateway and other nodes in the cluster. This setting can also be overridden using the environment variable `ZEEBE_BROKER_NETWORK_SECURITY_ENABLED`. | false         |
-| certificateChainPath | Sets the path to the certificate chain file. This setting can also be overridden using the environment variable `ZEEBE_BROKER_NETWORK_SECURITY_CERTIFICATECHAINPATH`.       |               |
-| privateKeyPath       | Sets the path to the private key file location. This setting can also be overridden using the environment variable `ZEEBE_BROKER_NETWORK_SECURITY_PRIVATEKEYPATH`.          |               |
+| Field                | Description                                                                                                                                                                                 | Example Value |
+| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------- |
+| enabled              | Enables TLS authentication between this broker and other nodes in the cluster. This setting can also be overridden using the environment variable `ZEEBE_BROKER_NETWORK_SECURITY_ENABLED`. | false         |
+| certificateChainPath | Sets the path to the certificate chain file. This setting can also be overridden using the environment variable `ZEEBE_BROKER_NETWORK_SECURITY_CERTIFICATECHAINPATH`.      |               |
+| privateKeyPath       | Sets the path to the private key file. This setting can also be overridden using the environment variable `ZEEBE_BROKER_NETWORK_SECURITY_PRIVATEKEYPATH`.                  |               |
| keyStore             | Configures the keystore file containing both the certificate chain and the private key; currently only supports PKCS12 format.                                              |               |
| keyStore.filePath    | The path for keystore file; This setting can also be overridden using the environment variable `ZEEBE_BROKER_NETWORK_SECURITY_KEYSTORE_FILEPATH`.                           | /path/key.pem |
| keyStore.password    | Sets the password for the keystore file, if not set it is assumed there is no password; This setting can also be overridden using the environment variable `ZEEBE_BROKER_NETWORK_SECURITY_KEYSTORE_PASSWORD` | changeme |
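
For reference, the same broker settings can be applied entirely through the environment variables documented above. A minimal sketch, with placeholder certificate paths:

```bash
# Enable intra-cluster TLS on the broker via the documented variables.
# The certificate chain and private key paths below are placeholders.
export ZEEBE_BROKER_NETWORK_SECURITY_ENABLED=true
export ZEEBE_BROKER_NETWORK_SECURITY_CERTIFICATECHAINPATH=/path/to/chain.pem
export ZEEBE_BROKER_NETWORK_SECURITY_PRIVATEKEYPATH=/path/to/key.pem
```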
diff --git a/versioned_docs/version-8.7/self-managed/zeebe-deployment/configuration/gateway.md b/versioned_docs/version-8.7/self-managed/zeebe-deployment/configuration/gateway.md
index 83d9ade6eb9..087763d0f48 100644
--- a/versioned_docs/version-8.7/self-managed/zeebe-deployment/configuration/gateway.md
+++ b/versioned_docs/version-8.7/self-managed/zeebe-deployment/configuration/gateway.md
@@ -242,11 +242,11 @@ You can read more about intra-cluster security on [its dedicated page](../securi

:::

-| Field                | Description                                                                                                                                                                                   | Example value |
-| -------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------- |
-| enabled              | Enables TLS authentication between this gateway and other nodes in the cluster. This setting can also be overridden using the environment variable `ZEEBE_GATEWAY_CLUSTER_SECURITY_ENABLED`. | false         |
-| certificateChainPath | Sets the path to the certificate chain file. This setting can also be overridden using the environment variable `ZEEBE_GATEWAY_CLUSTER_SECURITY_CERTIFICATECHAINPATH`.       |               |
-| privateKeyPath       | Sets the path to the private key file location. This setting can also be overridden using the environment variable `ZEEBE_GATEWAY_CLUSTER_SECURITY_PRIVATEKEYPATH`.          |               |
+| Field                | Description                                                                                                                                                                                   | Example value |
+| -------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------- |
+| enabled              | Enables TLS authentication between this gateway and other nodes in the cluster. This setting can also be overridden using the environment variable `ZEEBE_GATEWAY_CLUSTER_SECURITY_ENABLED`. | false         |
+| certificateChainPath | Sets the path to the certificate chain file. This setting can also be overridden using the environment variable `ZEEBE_GATEWAY_CLUSTER_SECURITY_CERTIFICATECHAINPATH`.       |               |
+| privateKeyPath       | Sets the path to the private key file. This setting can also be overridden using the environment variable `ZEEBE_GATEWAY_CLUSTER_SECURITY_PRIVATEKEYPATH`.                   |               |
| keyStore             | Configures the keystore file containing both the certificate chain and the private key; currently only supports PKCS12 format.                                               |               |
| keyStore.filePath    | The path for keystore file; This setting can also be overridden using the environment variable `ZEEBE_GATEWAY_CLUSTER_SECURITY_KEYSTORE_FILEPATH`.                           | /path/key.pem |
| keyStore.password    | Sets the password for the keystore file, if not set it is assumed there is no password; This setting can also be overridden using the environment variable `ZEEBE_GATEWAY_CLUSTER_SECURITY_KEYSTORE_PASSWORD` | changeme |
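
Likewise for the gateway, the keystore-based variant can be configured through the environment variables documented above instead of separate PEM files. A minimal sketch using the example values from the table:

```bash
# Configure gateway intra-cluster TLS from a PKCS12 keystore; the file
# path and password mirror the example values given in the table.
export ZEEBE_GATEWAY_CLUSTER_SECURITY_ENABLED=true
export ZEEBE_GATEWAY_CLUSTER_SECURITY_KEYSTORE_FILEPATH=/path/key.pem
export ZEEBE_GATEWAY_CLUSTER_SECURITY_KEYSTORE_PASSWORD=changeme
```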