68 commits
b24a532
wip
leiicamundi Feb 25, 2025
18e72d6
fix workflow
leiicamundi Feb 25, 2025
b067507
Squash work
leiicamundi Mar 5, 2025
b6ae8ef
Merge branch 'camunda/8.6' into feature/integrate-tests-rosa
leiicamundi Mar 5, 2025
00165c5
fix golden plan
leiicamundi Mar 5, 2025
dcd5d60
fix fresh namespace
leiicamundi Mar 5, 2025
48f9565
re-sort
leiicamundi Mar 5, 2025
3e9c964
fix sort of json
leiicamundi Mar 5, 2025
0041f22
fix
leiicamundi Mar 5, 2025
2aee69d
final order
leiicamundi Mar 5, 2025
22510de
fix working dir
leiicamundi Mar 5, 2025
47a90c2
trigger delta
leiicamundi Mar 5, 2025
653c7a8
revert changes
leiicamundi Mar 5, 2025
3c886fd
update branches names and fix timeout
leiicamundi Mar 6, 2025
a277b29
up
leiicamundi Mar 6, 2025
b24aaa5
add golden files
leiicamundi Mar 6, 2025
f31f9a2
update action version
leiicamundi Mar 6, 2025
db60d70
cleanup old clusters
leiicamundi Mar 6, 2025
e5450fe
revert cleanup
leiicamundi Mar 6, 2025
8714538
fix ordering and cluster refusing permissions
leiicamundi Mar 6, 2025
0bbc13a
fix error prone cmd
leiicamundi Mar 6, 2025
58f2183
fix gobin
leiicamundi Mar 6, 2025
4a006b0
fix retry
leiicamundi Mar 6, 2025
8f2c5fa
add retry step
leiicamundi Mar 7, 2025
6eaa36c
integrate console and webmodeler in the tests
leiicamundi Mar 7, 2025
cc9a9b7
add login instructions
leiicamundi Mar 7, 2025
5847cc9
fix postgresql not enabled for webModeler
leiicamundi Mar 7, 2025
4ec4c07
include assemble as part of generic
leiicamundi Mar 7, 2025
c5064e5
add get copy
leiicamundi Mar 7, 2025
a9293f6
cleanup clusters
leiicamundi Mar 7, 2025
e981f3d
simplify copy
leiicamundi Mar 7, 2025
5edba4b
fix copy
leiicamundi Mar 7, 2025
332097e
simplify the get
leiicamundi Mar 7, 2025
e555e42
fix console domain
leiicamundi Mar 7, 2025
fa6dee5
re-enable ec2
leiicamundi Mar 10, 2025
62ad96a
fix some reported issues
leiicamundi Mar 10, 2025
a5e4be1
update asdf install
leiicamundi Mar 10, 2025
a1e2af9
revert delete all
leiicamundi Mar 10, 2025
cb301c2
new env
leiicamundi Mar 10, 2025
b259edb
add todos
leiicamundi Mar 11, 2025
7b20c04
revert cleanup date
leiicamundi Mar 11, 2025
edfc00b
update to 8.7
leiicamundi Mar 11, 2025
b0b4ea6
up
leiicamundi Mar 11, 2025
f29edcd
merge
leiicamundi Mar 11, 2025
9826e0f
fix missing
leiicamundi Mar 11, 2025
ff95ff0
fix installation step
leiicamundi Mar 11, 2025
d14a6b3
Merge branch 'main' into feature/rosa-8.7
leiicamundi Mar 11, 2025
098711d
remove .tool-versions for each ref arch
leiicamundi Mar 11, 2025
fbb3148
apply various feedbacks
leiicamundi Mar 11, 2025
5cdf315
add prefix to state
leiicamundi Mar 11, 2025
52bdb06
fix github.ref_name to ref
leiicamundi Mar 11, 2025
fb7ac22
fix matrix
leiicamundi Mar 11, 2025
1ef980d
fix filter
leiicamundi Mar 11, 2025
10d2eb6
fix
leiicamundi Mar 11, 2025
74ec203
don't fail on folder
leiicamundi Mar 11, 2025
38636cf
temporarly disable connectors
leiicamundi Mar 11, 2025
26ac018
fix camunda version for unreleased
leiicamundi Mar 11, 2025
69aed86
feat(rosa): integrate 8.8
leiicamundi Mar 11, 2025
e16bf68
remove sed
leiicamundi Mar 11, 2025
0a077ad
update Maintenance
leiicamundi Mar 11, 2025
5272562
fix 8.8 values
leiicamundi Mar 12, 2025
10aef95
fix missing secrets
leiicamundi Mar 12, 2025
22cd312
fix missing cat
leiicamundi Mar 12, 2025
023b652
move dual region
leiicamundi Mar 12, 2025
1c42f47
update README
leiicamundi Mar 12, 2025
fd4cec2
remove patch version
leiicamundi Mar 12, 2025
00c4022
use z for the pathc
leiicamundi Mar 12, 2025
b1f3fbf
clean all
leiicamundi Mar 12, 2025
1 change: 1 addition & 0 deletions .camunda-version
@@ -0,0 +1 @@
alpha-8.8
@@ -0,0 +1,71 @@
# Delete AWS ROSA HCP Single Region Clusters

## Description

This GitHub Action automates the deletion of `aws/openshift/rosa-hcp-single-region` reference architecture clusters using a shell script.

## Inputs

| name | description | required | default |
| --- | --- | --- | --- |
| `tf-bucket` | <p>Bucket containing the cluster states</p> | `true` | `""` |
| `tf-bucket-region` | <p>Region of the bucket containing the resource states; if not set, falls back to AWS_REGION</p> | `false` | `""` |
| `tf-bucket-key-prefix` | <p>Key prefix of the bucket containing the resource states. It must end with a '/', e.g. 'my-prefix/'.</p> | `false` | `""` |
| `max-age-hours-cluster` | <p>Maximum age of clusters in hours</p> | `false` | `20` |
| `target` | <p>Specify an ID to destroy specific resources or "all" to destroy all resources</p> | `false` | `all` |
| `rosa-cli-version` | <p>Version of the ROSA CLI to use</p> | `false` | `latest` |
| `openshift-version` | <p>Version of OpenShift to install</p> | `true` | `4.17.16` |


## Runs

This action is a `composite` action.

## Usage

```yaml
- uses: camunda/camunda-deployment-references/.github/actions/aws-openshift-rosa-hcp-single-region-cleanup@main
  with:
    tf-bucket:
    # Bucket containing the cluster states
    #
    # Required: true
    # Default: ""

    tf-bucket-region:
    # Region of the bucket containing the resource states; if not set, falls back to AWS_REGION
    #
    # Required: false
    # Default: ""

    tf-bucket-key-prefix:
    # Key prefix of the bucket containing the resource states. It must end with a '/', e.g. 'my-prefix/'.
    #
    # Required: false
    # Default: ""

    max-age-hours-cluster:
    # Maximum age of clusters in hours
    #
    # Required: false
    # Default: 20

    target:
    # Specify an ID to destroy specific resources or "all" to destroy all resources
    #
    # Required: false
    # Default: all

    rosa-cli-version:
    # Version of the ROSA CLI to use
    #
    # Required: false
    # Default: latest

    openshift-version:
    # Version of OpenShift to install
    #
    # Required: true
    # Default: 4.17.16
```
@@ -0,0 +1,72 @@
---
name: Delete AWS ROSA HCP Single Region Clusters

description: |
  This GitHub Action automates the deletion of aws/openshift/rosa-hcp-single-region reference architecture clusters using a shell script.

inputs:
  tf-bucket:
    description: Bucket containing the cluster states
    required: true

  tf-bucket-region:
    description: Region of the bucket containing the resource states; if not set, falls back to AWS_REGION

  tf-bucket-key-prefix:
    description: Key prefix of the bucket containing the resource states. It must end with a '/', e.g. 'my-prefix/'.
    default: ''

  max-age-hours-cluster:
    description: Maximum age of clusters in hours
    default: '20'

  target:
    description: Specify an ID to destroy specific resources or "all" to destroy all resources
    default: all

  rosa-cli-version:
    description: Version of the ROSA CLI to use
    default: latest

  openshift-version:
    description: Version of OpenShift to install
    required: true
    # renovate: datasource=custom.rosa-camunda depName=red-hat-openshift versioning=semver
    default: 4.17.16

runs:
  using: composite
  steps:

    - name: Install asdf tools with cache
      uses: camunda/infraex-common-config/./.github/actions/asdf-install-tooling@6dc218bf7ee3812a4b6b13c305bce60d5d1d46e5 # 1.3.1

    - name: Install ROSA CLI
      shell: bash
      run: |
        curl -LO "https://mirror.openshift.com/pub/openshift-v4/clients/rosa/${{ inputs.rosa-cli-version }}/rosa-linux.tar.gz"
        tar -xvf rosa-linux.tar.gz
        sudo mv rosa /usr/local/bin/rosa
        chmod +x /usr/local/bin/rosa
        rm -f rosa-linux.tar.gz
        rosa version

    - name: Install CLI tools from OpenShift Mirror
      uses: redhat-actions/openshift-tools-installer@144527c7d98999f2652264c048c7a9bd103f8a82 # v1
      with:
        oc: ${{ inputs.openshift-version }}

    - name: Delete clusters
      id: delete_clusters
      shell: bash
      run: |
        if [ -n "${{ inputs.tf-bucket-region }}" ]; then
          export AWS_S3_REGION="${{ inputs.tf-bucket-region }}"
        fi

        # Use the repo .tool-versions as the global version
        cp .tool-versions ~/.tool-versions

        ${{ github.action_path }}/scripts/destroy-clusters.sh "${{ inputs.tf-bucket }}" \
          ${{ github.action_path }}/../../../aws/openshift/rosa-hcp-single-region/ /tmp/cleanup/ \
          ${{ inputs.max-age-hours-cluster }} ${{ inputs.target }} ${{ inputs.tf-bucket-key-prefix }}
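
With illustrative inputs (`tf-bucket: my-state-bucket`, `tf-bucket-region: eu-west-3`, and the defaults elsewhere), the `Delete clusters` step above roughly expands to the following shell call; this is a sketch only, with `GITHUB_ACTION_PATH` standing in for the composite action's own directory:

```bash
# Sketch of the expanded "Delete clusters" step with example inputs.
export AWS_S3_REGION="eu-west-3"   # only exported when tf-bucket-region is set

# Positional arguments: BUCKET MODULES_DIR TEMP_DIR_PREFIX MIN_AGE_IN_HOURS ID_OR_ALL [KEY_PREFIX]
"$GITHUB_ACTION_PATH/scripts/destroy-clusters.sh" "my-state-bucket" \
    "$GITHUB_ACTION_PATH/../../../aws/openshift/rosa-hcp-single-region/" /tmp/cleanup/ \
    20 all
```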
@@ -0,0 +1,182 @@
#!/bin/bash

set -o pipefail

# Description:
# This script performs a Terraform destroy operation for clusters defined in an S3 bucket.
# It copies the Terraform module directory to a temporary location, initializes Terraform with
# the appropriate backend configuration, and runs `terraform destroy`. If the destroy operation
# is successful, it removes the corresponding S3 objects.
#
# Usage:
# ./destroy-clusters.sh <BUCKET> <MODULES_DIR> <TEMP_DIR_PREFIX> <MIN_AGE_IN_HOURS> <ID_OR_ALL> [KEY_PREFIX]
#
# Arguments:
# BUCKET: The name of the S3 bucket containing the cluster state files.
# MODULES_DIR: The directory containing the Terraform modules.
# TEMP_DIR_PREFIX: The prefix for the temporary directories created for each cluster.
# MIN_AGE_IN_HOURS: The minimum age (in hours) of clusters to be destroyed.
# ID_OR_ALL: The specific ID suffix to filter objects, or "all" to destroy all objects.
# KEY_PREFIX (optional): A prefix (with a '/' at the end) for filtering objects in the S3 bucket.
#
# Example:
# ./destroy-clusters.sh tf-state-rosa-ci-eu-west-3 ./modules/rosa-hcp/ /tmp/rosa/ 24 all
# ./destroy-clusters.sh tf-state-rosa-ci-eu-west-3 ./modules/rosa-hcp/ /tmp/rosa/ 24 rosa-cluster-2883
# ./destroy-clusters.sh tf-state-rosa-ci-eu-west-3 ./modules/rosa-hcp/ /tmp/rosa/ 24 all my-prefix/
#
# Requirements:
# - AWS CLI installed and configured with the necessary permissions to access and modify the S3 bucket.
# - Terraform installed and accessible in the PATH.


# Check for required arguments
if [ "$#" -lt 5 ] || [ "$#" -gt 6 ]; then
echo "Usage: $0 <BUCKET> <MODULES_DIR> <TEMP_DIR_PREFIX> <MIN_AGE_IN_HOURS> <ID_OR_ALL> [KEY_PREFIX]"
exit 1
fi

# Check if required environment variables are set
if [ -z "$RHCS_TOKEN" ]; then
echo "Error: The environment variable RHCS_TOKEN is not set."
exit 1
fi

if [ -z "$AWS_REGION" ]; then
echo "Error: The environment variable AWS_REGION is not set."
exit 1
fi
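# Note: RHCS_TOKEN is the Red Hat Cloud Services (OpenShift Cluster Manager) API token,
# presumably consumed by the ROSA/RHCS Terraform provider during the destroy; AWS_REGION
# is used by the AWS CLI and Terraform AWS provider calls below.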

# Variables
BUCKET=$1
MODULES_DIR=$2
TEMP_DIR_PREFIX=$3
MIN_AGE_IN_HOURS=$4
ID_OR_ALL=$5
KEY_PREFIX=${6:-""} # Key prefix is optional
FAILED=0
CURRENT_DIR=$(pwd)
AWS_S3_REGION=${AWS_S3_REGION:-$AWS_REGION}


# Detect operating system and set the appropriate date command
if [[ "$(uname)" == "Darwin" ]]; then
date_command="gdate"
else
date_command="date"
fi
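# Note: GNU date is needed because `date -d "<timestamp>" +%s` used below is a GNU
# extension; on macOS this means installing coreutils, which provides `gdate`.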

# Function to perform terraform destroy
destroy_cluster() {
    local cluster_id=$1
    local cluster_folder="$KEY_PREFIX$2"
    # We must add two directory levels to replicate the "source = ../../modules" relative path used in the module
    local temp_dir="${TEMP_DIR_PREFIX}${cluster_id}/1/2"
    local temp_generic_modules_dir="${TEMP_DIR_PREFIX}${cluster_id}/modules/"
    local source_generic_modules="$MODULES_DIR/../../modules/"

    echo "Copying generic modules $source_generic_modules into $temp_generic_modules_dir"

    mkdir -p "$temp_generic_modules_dir" || return 1
    cp -a "$source_generic_modules." "$temp_generic_modules_dir" || return 1

    tree "$source_generic_modules" "$temp_generic_modules_dir" || return 1

    echo "Copying $MODULES_DIR into $temp_dir"

    mkdir -p "$temp_dir" || return 1
    cp -a "$MODULES_DIR." "$temp_dir" || return 1

    tree "$MODULES_DIR" "$temp_dir" || return 1

    cd "$temp_dir" || return 1

    tree "." || return 1

    echo "tf state: bucket=$BUCKET key=${cluster_folder}/${cluster_id}.tfstate region=$AWS_S3_REGION"

    if ! terraform init -backend-config="bucket=$BUCKET" -backend-config="key=${cluster_folder}/${cluster_id}.tfstate" -backend-config="region=$AWS_S3_REGION"; then return 1; fi

    # Set the cluster name in the Terraform configuration to the one being destroyed
    sed -i -e "s/\(rosa_cluster_name\s*=\s*\"\)[^\"]*\(\"\)/\1${cluster_id}\2/" cluster.tf

    if ! terraform destroy -auto-approve; then return 1; fi

    # Cleanup S3
    echo "Deleting s3://$BUCKET/$cluster_folder"
    if ! aws s3 rm "s3://$BUCKET/$cluster_folder" --recursive; then return 1; fi
    if ! aws s3api delete-object --bucket "$BUCKET" --key "$cluster_folder/"; then return 1; fi

    cd - || return 1
    rm -rf "$temp_dir" || return 1
}

# List objects in the S3 bucket and parse the cluster IDs
all_objects=$(aws s3 ls "s3://$BUCKET/$KEY_PREFIX")
aws_exit_code=$?

# Don't fail when the prefix folder is absent: in that case the command exits non-zero with empty output
if [ "$aws_exit_code" -ne 0 ] && [ "$all_objects" != "" ]; then
    echo "Error executing the aws s3 ls command (Exit Code: $aws_exit_code):" >&2
    exit 1
fi

if [ "$ID_OR_ALL" == "all" ]; then
clusters=$(echo "$all_objects" | awk '{print $2}' | sed -n 's#^tfstate-\(.*\)/$#\1#p')
else
clusters=$(echo "$all_objects" | awk '{print $2}' | grep "tfstate-$ID_OR_ALL/" | sed -n 's#^tfstate-\(.*\)/$#\1#p')
fi
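# Example: an `aws s3 ls` line such as "PRE tfstate-rosa-cluster-2883/" becomes
# "tfstate-rosa-cluster-2883/" after awk and "rosa-cluster-2883" after sed.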

if [ -z "$clusters" ]; then
echo "No objects found in the S3 bucket. Exiting script." >&2
exit 0
fi

current_timestamp=$($date_command +%s)

for cluster_id in $clusters; do
    cd "$CURRENT_DIR" || exit 1

    cluster_folder="tfstate-$cluster_id"
    echo "Checking cluster $cluster_id in $cluster_folder"

    last_modified=$(aws s3api head-object --bucket "$BUCKET" --key "$KEY_PREFIX$cluster_folder/${cluster_id}.tfstate" --output json | grep LastModified | awk -F '"' '{print $4}')
    if [ -z "$last_modified" ]; then
        echo "Error: Failed to retrieve last modified timestamp for cluster $cluster_id"
        exit 1
    fi

    last_modified_timestamp=$($date_command -d "$last_modified" +%s)
    if [ -z "$last_modified_timestamp" ]; then
        echo "Error: Failed to convert last modified timestamp to seconds since epoch for cluster $cluster_id"
        exit 1
    fi
    echo "Cluster $cluster_id last modification: $last_modified ($last_modified_timestamp)"

    file_age_hours=$(( (current_timestamp - last_modified_timestamp) / 3600 ))
    if [ -z "$file_age_hours" ]; then
        echo "Error: Failed to calculate file age in hours for cluster $cluster_id"
        exit 1
    fi
    echo "Cluster $cluster_id is $file_age_hours hours old"

    if [ $file_age_hours -ge "$MIN_AGE_IN_HOURS" ]; then
        echo "Destroying cluster $cluster_id in $cluster_folder"

        if ! destroy_cluster "$cluster_id" "$cluster_folder"; then
            echo "Error destroying cluster $cluster_id"
            FAILED=1
        fi
    else
        echo "Skipping cluster $cluster_id as it does not meet the minimum age requirement of $MIN_AGE_IN_HOURS hours"
    fi
done

# Exit with the appropriate status
if [ $FAILED -ne 0 ]; then
    echo "One or more operations failed."
    exit 1
else
    echo "All operations completed successfully."
    exit 0
fi
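
For local testing, a minimal sketch of a direct invocation, assuming the repository layout implied by the action above and illustrative values for the bucket, age threshold, and key prefix:

```bash
# Run the cleanup script from the repository root against a state bucket.
export RHCS_TOKEN="<your Red Hat Cloud Services token>"
export AWS_REGION="eu-west-3"
# AWS_S3_REGION is optional and falls back to AWS_REGION.

.github/actions/aws-openshift-rosa-hcp-single-region-cleanup/scripts/destroy-clusters.sh \
    tf-state-rosa-ci-eu-west-3 \
    aws/openshift/rosa-hcp-single-region/ \
    /tmp/cleanup/ \
    20 all my-prefix/
```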