Helm chart for deploying Codefresh On-Premises to Kubernetes.
- Prerequisites
- Get Repo Info
- Install Chart
- Chart Configuration
- Installing on OpenShift
- Firebase Configuration
- Additional configuration
- Configuring OIDC Provider
- Maintaining MongoDB indexes
- Upgrading
- Rollback
- Troubleshooting
- Values
Since version 2.1.7, the chart is pushed only to the OCI registry at
oci://quay.io/codefresh/codefresh
Versions prior to 2.1.7 are still available in ChartMuseum at
http://chartmuseum.codefresh.io/codefresh
- Kubernetes >= 1.28 && <= 1.32 (supported versions are those the installation has been verified on; it may also work on older Kubernetes versions)
- Helm 3.8.0+
- PV provisioner support in the underlying infrastructure (with resizing available)
- Minimum 4 vCPUs and 8Gi memory available in the cluster (for production usage, the recommended minimum cluster capacity is 12 vCPUs and 36Gi memory)
- GCR Service Account JSON `sa.json` (provided by Codefresh; contact support@codefresh.io)
- Firebase Realtime Database URL with legacy token. See Firebase Configuration
- Valid TLS certificates for Ingress
- When external PostgreSQL is used, the `pg_cron` and `pg_partman` extensions must be enabled for analytics to work (see AWS RDS example). For Azure Database for PostgreSQL, the `pg_cron` extension must be version 1.4 or higher.
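On an external instance, enabling those extensions typically means running the statements below yourself (a sketch; on AWS RDS and Azure the extensions must also be allow-listed first via the DB parameter group / server parameters, which is provider-specific):

```sql
CREATE EXTENSION IF NOT EXISTS pg_cron;
CREATE EXTENSION IF NOT EXISTS pg_partman;
```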
helm show all oci://quay.io/codefresh/codefresh

Important: only Helm 3.8.0+ is supported
⚠️ The default chart configuration with embedded databases is not intended for production usage! You should use Cloud PaaS for MongoDB, PostgreSQL, Redis and RabbitMQ. See Configuring external services section for details.
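At a glance, pointing the chart at external services means disabling the bundled subcharts and setting the corresponding `global` values — a minimal sketch (hostnames are examples; see the external-services sections below for the full set of keys and secret references):

```yaml
mongodb:
  enabled: false
postgresql:
  enabled: false
redis:
  enabled: false
rabbitmq:
  enabled: false
global:
  mongodbHost: my-mongodb.example.com:27017
  postgresHostname: my-postgres.example.com
  redisUrl: my-redis.example.com
  rabbitmqHostname: my-rabbitmq.example.com:5672
```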
Edit the default values.yaml or create an empty cf-values.yaml
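One of the values below is the GCR service-account JSON, which must be supplied as a single line. Flattening it might look like this (a sketch; the stand-in file contents are ours — use the real `sa.json` from Codefresh support; `tr` only removes formatting newlines and leaves the two-character `\n` escape sequences inside `private_key` intact):

```shell
# Stand-in contents for the demo; use the real sa.json from Codefresh
printf '{\n  "type": "service_account"\n}\n' > sa.json
# Collapse to a single line for .Values.imageCredentials.password
tr -d '\n' < sa.json
```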
- Pass `sa.json` (as a single line) to `.Values.imageCredentials.password`
# -- Credentials for Image Pull Secret object
imageCredentials:
  registry: us-docker.pkg.dev
  username: _json_key
  password: '{ "type": "service_account", "project_id": "codefresh-enterprise", "private_key_id": ... }'

- Specify `.Values.global.appUrl`, `.Values.global.firebaseUrl`, `.Values.global.firebaseSecret`, `.Values.global.env.MONGOOSE_AUTO_INDEX`, `.Values.global.env.MONGO_AUTOMATIC_INDEX_CREATION`
global:
  # -- Application root url. Will be used in Ingress as hostname
  appUrl: onprem.mydomain.com
  # -- Firebase URL for logs streaming.
  firebaseUrl: <>
  # -- Firebase URL for logs streaming from existing secret
  firebaseUrlSecretKeyRef: {}
  # E.g.
  # firebaseUrlSecretKeyRef:
  #   name: my-secret
  #   key: firebase-url
  # -- Firebase Secret.
  firebaseSecret: <>
  # -- Firebase Secret from existing secret
  firebaseSecretSecretKeyRef: {}
  # E.g.
  # firebaseSecretSecretKeyRef:
  #   name: my-secret
  #   key: firebase-secret
  # -- Enable index creation in MongoDB
  # This is required for first-time installations!
  # Before usage in Production, you must set it to `false` or remove it!
  env:
    MONGOOSE_AUTO_INDEX: "true"
    MONGO_AUTOMATIC_INDEX_CREATION: "true"
- Specify `.Values.ingress.tls.cert` and `.Values.ingress.tls.key` OR `.Values.ingress.tls.existingSecret`
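The `cert` and `key` values must be base64 encoded. Producing single-line values from PEM files might look like this (a sketch; the stand-in file contents are ours — use your real certificate and key files):

```shell
# Stand-in for the real PEM certificate (do the same for the key file)
printf 'dummy-cert' > tls.crt
# Single-line base64 for ingress.tls.cert
base64 < tls.crt | tr -d '\n'
```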
ingress:
  # -- Enable the Ingress
  enabled: true
  # -- Set the ingressClass that is used for the ingress.
  # Default `nginx-codefresh` is created from `ingress-nginx` controller subchart
  # If you specify a different ingress class, disable `ingress-nginx` subchart (see below)
  ingressClassName: nginx-codefresh
  tls:
    # -- Enable TLS
    enabled: true
    # -- Default secret name to be created with provided `cert` and `key` below
    secretName: "star.codefresh.io"
    # -- Certificate (base64 encoded)
    cert: ""
    # -- Private key (base64 encoded)
    key: ""
    # -- Existing `kubernetes.io/tls` type secret with TLS certificates (keys: `tls.crt`, `tls.key`)
    existingSecret: ""
ingress-nginx:
  # -- Enable ingress-nginx controller
  enabled: true

- Or specify your own `.Values.ingress.ingressClassName` (disable the built-in ingress-nginx subchart)
ingress:
  # -- Enable the Ingress
  enabled: true
  # -- Set the ingressClass that is used for the ingress.
  ingressClassName: nginx
ingress-nginx:
  # -- Disable ingress-nginx controller
  enabled: false

- Install the chart
  helm upgrade --install cf oci://quay.io/codefresh/codefresh \
      -f cf-values.yaml \
      --namespace codefresh \
      --create-namespace \
      --debug \
      --wait \
      --timeout 15m

Once your Codefresh On-Prem instance is installed, configured, and confirmed to be ready for production use, the following variables must be set to `false` or removed:
global:
  env:
    MONGOOSE_AUTO_INDEX: "false"
    MONGO_AUTOMATIC_INDEX_CREATION: "false"

See Customizing the Chart Before Installing. To see all configurable options with detailed comments, visit the chart's values.yaml, or run this command:
helm show values oci://quay.io/codefresh/codefresh

Codefresh relies on several persistent services to store its data:
- MongoDB: Stores all account data (account settings, users, projects, pipelines, builds etc.)
- PostgreSQL: Stores data about events for the account (pipeline updates, deletes, etc.). The audit log uses the data from this database.
- Redis: Used for caching, and as a key-value store for cron trigger manager.
- RabbitMQ: Used for message queueing.
The following table reflects the recommended and supported versions of these databases for different Codefresh releases:
| Codefresh version | MongoDB | PostgreSQL | Redis | RabbitMQ | 
|---|---|---|---|---|
| 2.9.x | >=4.2 <=7.x Recommended: 7.x (featureCompatibilityVersion: 7.0) | >= 16.x <= 17.x Recommended: 17.x | >= 7.0.x <= 7.4.x Recommended: 7.4.x | 3.13.x / 4.0.x / 4.1.x Recommended: 4.1.x |
| 2.8.x | >=4.2 <=7.x Recommended: 7.x (featureCompatibilityVersion: 6.0) | >= 13.x <= 17.x Recommended: 16.x / 17.x | >= 7.0.x <= 7.4.x Recommended: 7.4.x | 3.13.x / 4.0.x / 4.1.x Recommended: 4.0.x |
| 2.7.x | >=4.2 <=6.x Recommended: 6.x ( featureCompatibilityVersion: 6.0) | 13.x | 7.0.x | 3.13.x | 
| 2.6.x | >=4.2 <=6.x Recommended: 6.x ( featureCompatibilityVersion: 5.0) | 13.x | 7.0.x | 3.13.x | 
Running on netfs (nfs, cifs) is not recommended.
The Docker daemon (`cf-builder` stateful set) can run on block storage only.
All of these services can be externalized. See the next sections.
The chart contains the required dependencies for the corresponding services.
To use external services like MongoDB Atlas Database or Amazon RDS for PostgreSQL you need to adjust the values accordingly:
⚠️ Important! If you use MongoDB Atlas, you must create a user with Write permissions before installing Codefresh:
Then, provide the user credentials in the chart values at
- `.Values.global.mongodbUser` / `.Values.global.mongodbUserSecretKeyRef`
- `.Values.global.mongodbPassword` / `.Values.global.mongodbPasswordSecretKeyRef`
- `.Values.seed.mongoSeedJob.mongodbRootUser` / `.Values.seed.mongoSeedJob.mongodbRootUserSecretKeyRef`
- `.Values.seed.mongoSeedJob.mongodbRootPassword` / `.Values.seed.mongoSeedJob.mongodbRootPasswordSecretKeyRef`
Ref:
Create Users in Atlas
values.yaml for external MongoDB:
seed:
  mongoSeedJob:
    # -- Enable mongo seed job. Seeds the required data (default idp/user/account), creates cfuser and required databases.
    enabled: true
    # -- Root user in plain text (required ONLY for seed job!).
    mongodbRootUser: "root"
    # -- Root user from existing secret
    mongodbRootUserSecretKeyRef: {}
    # E.g.
    # mongodbRootUserSecretKeyRef:
    #   name: my-secret
    #   key: mongodb-root-user
    # -- Root password in plain text (required ONLY for seed job!).
    mongodbRootPassword: "password"
    # -- Root password from existing secret
    mongodbRootPasswordSecretKeyRef: {}
    # E.g.
    # mongodbRootPasswordSecretKeyRef:
    #   name: my-secret
    #   key: mongodb-root-password
global:
  # -- LEGACY (but still supported) - Use `.global.mongodbProtocol` + `.global.mongodbUser/mongodbUserSecretKeyRef` + `.global.mongodbPassword/mongodbPasswordSecretKeyRef` + `.global.mongodbHost/mongodbHostSecretKeyRef` + `.global.mongodbOptions` instead
  # Default MongoDB URI. Will be used by ALL services to communicate with MongoDB.
  # Ref: https://www.mongodb.com/docs/manual/reference/connection-string/
  # Note! `defaultauthdb` is omitted on purpose (i.e. mongodb://.../[defaultauthdb]).
  mongoURI: ""
  # E.g.
  # mongoURI: "mongodb://cfuser:mTiXcU2wafr9@cf-mongodb:27017/"
  # -- Set mongodb protocol (`mongodb` / `mongodb+srv`)
  mongodbProtocol: mongodb
  # -- Set mongodb user in plain text
  mongodbUser: "cfuser"
  # -- Set mongodb user from existing secret
  mongodbUserSecretKeyRef: {}
  # E.g.
  # mongodbUserSecretKeyRef:
  #   name: my-secret
  #   key: mongodb-user
  # -- Set mongodb password in plain text
  mongodbPassword: "password"
  # -- Set mongodb password from existing secret
  mongodbPasswordSecretKeyRef: {}
  # E.g.
  # mongodbPasswordSecretKeyRef:
  #   name: my-secret
  #   key: mongodb-password
  # -- Set mongodb host in plain text
  mongodbHost: "my-mongodb.prod.svc.cluster.local:27017"
  # -- Set mongodb host from existing secret
  mongodbHostSecretKeyRef: {}
  # E.g.
  # mongodbHostSecretKeyRef:
  #   name: my-secret
  #   key: mongodb-host
  # -- Set mongodb connection string options
  # Ref: https://www.mongodb.com/docs/manual/reference/connection-string/#connection-string-options
  mongodbOptions: "retryWrites=true"
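  # For reference, the chart assembles the values above into a connection string roughly like:
  #   mongodb://cfuser:password@my-mongodb.prod.svc.cluster.local:27017/?retryWrites=true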
mongodb:
  # -- Disable mongodb subchart installation
  enabled: false

To use mTLS (mutual TLS) for MongoDB, you need to:
- Create a K8S secret that contains the certificate (certificate file and private key).
The K8S secret should have one `ca.pem` key.
cat cert.crt > ca.pem
cat cert.key >> ca.pem
kubectl create secret generic my-mongodb-tls --from-file=ca.pem

- Add `.Values.global.volumes` and `.Values.global.volumeMounts` to mount the secret into all the services.
global:
  volumes:
    mongodb-tls:
      enabled: true
      type: secret
      nameOverride: my-mongodb-tls
      optional: true
  volumeMounts:
    mongodb-tls:
      path:
      - mountPath: /etc/ssl/mongodb/ca.pem
        subPath: ca.pem
  env:
    MONGODB_SSL_ENABLED: true
    MTLS_CERT_PATH: /etc/ssl/mongodb/ca.pem
    RUNTIME_MTLS_CERT_PATH: /etc/ssl/mongodb/ca.pem
    RUNTIME_MONGO_TLS: "true"
    # Set these env vars to 'false' if self-signed certificate is used to avoid x509 errors
    RUNTIME_MONGO_TLS_VALIDATE: "false"
    MONGO_MTLS_VALIDATE: "false"

seed:
  postgresSeedJob:
    # -- Enable postgres seed job. Creates required user and databases.
    enabled: true
    # -- (optional) "postgres" admin user in plain text (required ONLY for seed job!)
    # Must be a privileged user allowed to create databases and grant roles.
    # If omitted, username and password from `.Values.global.postgresUser/postgresPassword` will be used.
    postgresUser: "postgres"
    # -- (optional) "postgres" admin user from exising secret
    postgresUserSecretKeyRef: {}
    # E.g.
    # postgresUserSecretKeyRef:
    #   name: my-secret
    #   key: postgres-user
    # -- (optional) Password for "postgres" admin user (required ONLY for seed job!)
    postgresPassword: "password"
    # -- (optional) Password for "postgres" admin user from existing secret
    postgresPasswordSecretKeyRef: {}
    # E.g.
    # postgresPasswordSecretKeyRef:
    #   name: my-secret
    #   key: postgres-password
global:
  # -- Set postgres user in plain text
  postgresUser: cf_user
  # -- Set postgres user from existing secret
  postgresUserSecretKeyRef: {}
  # E.g.
  # postgresUserSecretKeyRef:
  #   name: my-secret
  #   key: postgres-user
  # -- Set postgres password in plain text
  postgresPassword: password
  # -- Set postgres password from existing secret
  postgresPasswordSecretKeyRef: {}
  # E.g.
  # postgresPasswordSecretKeyRef:
  #   name: my-secret
  #   key: postgres-password
  # -- Set postgres service address in plain text.
  postgresHostname: "my-postgres.domain.us-east-1.rds.amazonaws.com"
  # -- Set postgres service from existing secret
  postgresHostnameSecretKeyRef: {}
  # E.g.
  # postgresHostnameSecretKeyRef:
  #   name: my-secret
  #   key: postgres-hostname
  # -- Set postgres port number
  postgresPort: 5432
  # -- Set postgres schema name for audit database in plain text.
  auditPostgresSchemaName: "public"
  # -- Disables saving events from eventbus into postgres.
  # When it is set to "false", all events (workflows, jobs, users, etc.) from the eventbus are saved to postgres, and the following services (charts-manager, cluster-providers, context-manager, cfapi, cf-platform-analytics, gitops-dashboard-manager, pipeline-manager, kube-integration, tasker-kubernetes, runtime-environment-manager) start requiring a postgres connection.
  disablePostgresForEventbus: "true"
postgresql:
  # -- Disable postgresql subchart installation
  enabled: false

Provide the following env vars to enforce an SSL connection to PostgreSQL:
global:
  env:
    # More info in the official docs: https://www.postgresql.org/docs/current/libpq-envars.html
    PGSSLMODE: "require"
helm-repo-manager:
  env:
    POSTGRES_DISABLE_SSL: "false"
⚠️ Important!
We do not support custom CA configuration for PostgreSQL, including self-signed certificates. This may cause incompatibility with some providers' default configurations.
In particular, Amazon RDS for PostgreSQL version 15 and later requires SSL encryption by default (ref).
We recommend disabling SSL on the provider side in such cases or using the following steps to mount custom CA certificates: Mounting private CA certs
global:
  # -- Set redis password in plain text
  redisPassword: password
  # -- Set redis service port
  redisPort: 6379
  # -- Set redis password from existing secret
  redisPasswordSecretKeyRef: {}
  # E.g.
  # redisPasswordSecretKeyRef:
  #   name: my-secret
  #   key: redis-password
  # -- Set redis hostname in plain text. Takes precedence over `global.redisService`!
  redisUrl: "my-redis.namespace.svc.cluster.local"
  # -- Set redis hostname from existing secret.
  redisUrlSecretKeyRef: {}
  # E.g.
  # redisUrlSecretKeyRef:
  #   name: my-secret
  #   key: redis-url
redis:
  # -- Disable redis subchart installation
  enabled: false
If ElastiCache is used, set `REDIS_TLS` to `true` in `.Values.global.env`.
⚠️ ElastiCache with Cluster mode is not supported!
global:
  env:
    REDIS_TLS: true

To use mTLS (mutual TLS) for Redis, you need to:
- Create a K8S secret that contains the certificate (ca, certificate and private key).
cat ca.crt tls.crt > bundle.crt
kubectl create secret tls my-redis-tls --cert=bundle.crt --key=tls.key --dry-run=client -o yaml | kubectl apply -f -

- Add `.Values.global.volumes` and `.Values.global.volumeMounts` to mount the secret into all the services.
global:
  volumes:
    redis-tls:
      enabled: true
      type: secret
      # Existing secret with TLS certificates (keys: `ca.crt` , `tls.crt`, `tls.key`)
      nameOverride: my-redis-tls
      optional: true
  volumeMounts:
    redis-tls:
      path:
      - mountPath: /etc/ssl/redis
  env:
    REDIS_TLS: true
    REDIS_CA_PATH: /etc/ssl/redis/ca.crt
    REDIS_CLIENT_CERT_PATH : /etc/ssl/redis/tls.crt
    REDIS_CLIENT_KEY_PATH: /etc/ssl/redis/tls.key
    # Set these env vars as follows if a self-signed certificate is used, to avoid x509 errors
    REDIS_REJECT_UNAUTHORIZED: false
    REDIS_TLS_SKIP_VERIFY: true

global:
  # -- Set rabbitmq protocol (`amqp/amqps`)
  rabbitmqProtocol: amqp
  # -- Set rabbitmq username in plain text
  rabbitmqUsername: user
  # -- Set rabbitmq username from existing secret
  rabbitmqUsernameSecretKeyRef: {}
  # E.g.
  # rabbitmqUsernameSecretKeyRef:
  #   name: my-secret
  #   key: rabbitmq-username
  # -- Set rabbitmq password in plain text
  rabbitmqPassword: password
  # -- Set rabbitmq password from existing secret
  rabbitmqPasswordSecretKeyRef: {}
  # E.g.
  # rabbitmqPasswordSecretKeyRef:
  #   name: my-secret
  #   key: rabbitmq-password
  # -- Set rabbitmq service address in plain text. Takes precedence over `global.rabbitService`!
  rabbitmqHostname: "my-rabbitmq.namespace.svc.cluster.local:5672"
  # -- Set rabbitmq service address from existing secret.
  rabbitmqHostnameSecretKeyRef: {}
  # E.g.
  # rabbitmqHostnameSecretKeyRef:
  #   name: my-secret
  #   key: rabbitmq-hostname
rabbitmq:
  # -- Disable rabbitmq subchart installation
  enabled: false

The chart deploys ingress-nginx and exposes the controller behind a Service of `Type=LoadBalancer`.
All installation options for ingress-nginx are described at Configuration
Relevant examples for Codefresh are below:
Certificate provided from ACM:
ingress-nginx:
  controller:
    service:
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
        service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
        service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '3600'
        service.beta.kubernetes.io/aws-load-balancer-ssl-cert: < CERTIFICATE ARN >
      targetPorts:
        http: http
        https: http
# -- Ingress
ingress:
  tls:
    # -- Disable TLS
    enabled: false

Certificate provided as a base64 string or as an existing k8s secret:
ingress-nginx:
  controller:
    service:
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-type: nlb
        service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
        service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '3600'
        service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: 'true'
# -- Ingress
ingress:
  tls:
    # -- Enable TLS
    enabled: true
    # -- Default secret name to be created with provided `cert` and `key` below
    secretName: "star.codefresh.io"
    # -- Certificate (base64 encoded)
    cert: "LS0tLS1CRUdJTiBDRVJ...."
    # -- Private key (base64 encoded)
    key: "LS0tLS1CRUdJTiBSU0E..."
    # -- Existing `kubernetes.io/tls` type secret with TLS certificates (keys: `tls.crt`, `tls.key`)
    existingSecret: ""Application Load Balancer should be deployed to the cluster
ingress-nginx:
  # -- Disable ingress-nginx subchart installation
  enabled: false
ingress:
  # -- ALB controller ingress class
  ingressClassName: alb
  annotations:
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig":{ "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
    alb.ingress.kubernetes.io/backend-protocol: HTTP
    alb.ingress.kubernetes.io/certificate-arn: <ARN>
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/success-codes: 200,404
    alb.ingress.kubernetes.io/target-type: ip
  services:
    # For ALB /* asterisk is required in path
    internal-gateway:
      - /*
If you install/upgrade Codefresh in an air-gapped environment without access to public registries (e.g. quay.io, docker.io) or the Codefresh Enterprise registry at gcr.io, you will have to mirror the images to your organization's container registry.
- Obtain the image list for the specific release
- Push the images to a private docker registry
- Specify the image registry in values
global:
  imageRegistry: myregistry.domain.com
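The pull/tag/push mechanics of the mirroring step might be scripted like this (a sketch; `images.txt` and the registry names are examples, and the exact target path follows the conversion rules described below — the script defaults to a dry run that only prints the docker commands):

```shell
# Dry-run sketch: set DOCKER=docker (after logging in to both registries) to actually mirror
DOCKER=${DOCKER:-echo docker}
# images.txt: "<source> <target>" pairs, built from the image list for your release
cat > images.txt <<'EOF'
quay.io/codefresh/engine:1.147.8 myregistry.domain.com/codefresh/engine:1.147.8
EOF
while read -r src dst; do
  $DOCKER pull "$src"
  $DOCKER tag "$src" "$dst"
  $DOCKER push "$dst"
done < images.txt
```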
There are 3 types of images. With the values above, image references in the rendered manifests are converted as follows:
non-Codefresh like:
bitnami/mongodb:4.2
registry.k8s.io/ingress-nginx/controller:v1.4.0
postgres:13

converted to:

myregistry.domain.com/bitnami/mongodb:4.2
myregistry.domain.com/ingress-nginx/controller:v1.4.0
myregistry.domain.com/postgres:13

Codefresh public images like:
quay.io/codefresh/dind:20.10.13-1.25.2
quay.io/codefresh/engine:1.147.8
quay.io/codefresh/cf-docker-builder:1.1.14

converted to:
myregistry.domain.com/codefresh/dind:20.10.13-1.25.2
myregistry.domain.com/codefresh/engine:1.147.8
myregistry.domain.com/codefresh/cf-docker-builder:1.1.14

Codefresh private images like:
gcr.io/codefresh-enterprise/codefresh/cf-api:21.153.6
gcr.io/codefresh-enterprise/codefresh/cf-ui:14.69.38
gcr.io/codefresh-enterprise/codefresh/pipeline-manager:3.121.7

converted to:
myregistry.domain.com/codefresh/cf-api:21.153.6
myregistry.domain.com/codefresh/cf-ui:14.69.38
myregistry.domain.com/codefresh/pipeline-manager:3.121.7

Use the example below to override the repository for all templates:
global:
  imagePullSecrets:
    - cf-registry
ingress-nginx:
  controller:
    image:
      registry: myregistry.domain.com
      image: codefresh/controller
mongodb:
  image:
    repository: codefresh/mongodb
postgresql:
  image:
    repository: codefresh/postgresql
consul:
  image:
    repository: codefresh/consul
redis:
  image:
    repository: codefresh/redis
rabbitmq:
  image:
    repository: codefresh/rabbitmq
nats:
  image:
    repository: codefresh/nats
builder:
  container:
    image:
      repository: codefresh/docker
runner:
  container:
    image:
      repository: codefresh/docker
internal-gateway:
  container:
    image:
      repository: codefresh/nginx-unprivileged
helm-repo-manager:
  chartmuseum:
    image:
      repository: myregistry.domain.com/codefresh/chartmuseum
cf-platform-analytics-platform:
  redis:
    image:
      repository: codefresh/redis

The chart installs cf-api as a single deployment. However, at a larger scale, we recommend splitting cf-api into multiple roles (one deployment per role) as follows:
global:
  # -- Change internal cfapi service address
  cfapiService: cfapi-internal
  # -- Change endpoints cfapi service address
  cfapiEndpointsService: cfapi-endpoints
cfapi: &cf-api
  # -- Disable default cfapi deployment
  enabled: false
  # -- (optional) Enable the autoscaler
  # The value will be merged into each cfapi role. So you can specify it once.
  hpa:
    enabled: true
# Enable cf-api roles
cfapi-auth:
  <<: *cf-api
  enabled: true
cfapi-internal:
  <<: *cf-api
  enabled: true
cfapi-ws:
  <<: *cf-api
  enabled: true
cfapi-admin:
  <<: *cf-api
  enabled: true
cfapi-endpoints:
  <<: *cf-api
  enabled: true
cfapi-terminators:
  <<: *cf-api
  enabled: true
cfapi-sso-group-synchronizer:
  <<: *cf-api
  enabled: true
cfapi-buildmanager:
  <<: *cf-api
  enabled: true
cfapi-cacheevictmanager:
  <<: *cf-api
  enabled: true
cfapi-eventsmanagersubscriptions:
  <<: *cf-api
  enabled: true
cfapi-kubernetesresourcemonitor:
  <<: *cf-api
  enabled: true
cfapi-environments:
  <<: *cf-api
  enabled: true
cfapi-gitops-resource-receiver:
  <<: *cf-api
  enabled: true
cfapi-downloadlogmanager:
  <<: *cf-api
  enabled: true
cfapi-teams:
  <<: *cf-api
  enabled: true
cfapi-kubernetes-endpoints:
  <<: *cf-api
  enabled: true
cfapi-test-reporting:
  <<: *cf-api
  enabled: true

The chart installs the non-HA version of Codefresh by default. If you want to run Codefresh in HA mode, use the example values below.
Note! `cronus` is not supported in HA mode; otherwise, builds with CRON triggers will be duplicated.
values.yaml
cfapi:
  hpa:
    enabled: true
    # These are the defaults for all Codefresh subcharts
    # minReplicas: 2
    # maxReplicas: 10
    # targetCPUUtilizationPercentage: 70
argo-platform:
  abac:
    hpa:
      enabled: true
  analytics-reporter:
    hpa:
      enabled: true
  api-events:
    hpa:
      enabled: true
  api-graphql:
    hpa:
      enabled: true
  audit:
    hpa:
      enabled: true
  cron-executor:
    hpa:
      enabled: true
  event-handler:
    hpa:
      enabled: true
  ui:
    hpa:
      enabled: true
cfui:
  hpa:
    enabled: true
internal-gateway:
  hpa:
    enabled: true
cf-broadcaster:
  hpa:
    enabled: true
cf-platform-analytics-platform:
  hpa:
    enabled: true
charts-manager:
  hpa:
    enabled: true
cluster-providers:
  hpa:
    enabled: true
context-manager:
  hpa:
    enabled: true
gitops-dashboard-manager:
  hpa:
    enabled: true
helm-repo-manager:
  hpa:
    enabled: true
hermes:
  hpa:
    enabled: true
k8s-monitor:
  hpa:
    enabled: true
kube-integration:
  hpa:
    enabled: true
nomios:
  hpa:
    enabled: true
pipeline-manager:
  hpa:
    enabled: true
runtime-environment-manager:
  hpa:
    enabled: true
tasker-kubernetes:
  hpa:
    enabled: true
For infra services (MongoDB, PostgreSQL, RabbitMQ, Redis, Consul, Nats, Ingress-NGINX) from built-in Bitnami charts you can use the following example:
Note! Use topologySpreadConstraints for better resiliency
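A hedged example of such a constraint (standard Kubernetes scheduling configuration; whether and where a given subchart exposes `topologySpreadConstraints` in its values depends on the subchart):

```yaml
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        app.kubernetes.io/name: mongodb
```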
values.yaml
global:
  postgresService: postgresql-ha-pgpool
  mongodbHost: cf-mongodb-0,cf-mongodb-1,cf-mongodb-2  # Replace `cf` with your Helm Release name
  mongodbOptions: replicaSet=rs0&retryWrites=true
  redisUrl: cf-redis-ha-haproxy
builder:
  controller:
    replicas: 3
consul:
  replicaCount: 3
cfsign:
  controller:
    replicas: 3
  persistence:
    certs-data:
      enabled: false
  volumes:
    certs-data:
      type: emptyDir
  initContainers:
    volume-permissions:
      enabled: false
ingress-nginx:
  controller:
    autoscaling:
      enabled: true
mongodb:
  architecture: replicaset
  replicaCount: 3
  externalAccess:
    enabled: true
    service:
      type: ClusterIP
nats:
  replicaCount: 3
postgresql:
  enabled: false
postgresql-ha:
  enabled: true
  volumePermissions:
    enabled: true
rabbitmq:
  replicaCount: 3
redis:
  enabled: false
redis-ha:
  enabled: true

global:
  env:
    NODE_EXTRA_CA_CERTS: /etc/ssl/custom/ca.crt
  volumes:
    custom-ca:
      enabled: true
      type: secret
      existingName: my-custom-ca-cert # existing K8s secret object with the CA cert
      optional: true
  volumeMounts:
    custom-ca:
      path:
      - mountPath: /etc/ssl/custom/ca.crt
        subPath: ca.crt

To deploy Codefresh On-Prem on OpenShift, use the following values example:
ingress:
  ingressClassName: openshift-default
global:
  dnsService: dns-default
  dnsNamespace: openshift-dns
  clusterDomain: cluster.local
# Requires privileged SCC.
builder:
  enabled: false
cfapi:
  podSecurityContext:
    enabled: false
cf-platform-analytics-platform:
  redis:
    master:
      podSecurityContext:
        enabled: false
      containerSecurityContext:
        enabled: false
cfsign:
  podSecurityContext:
    enabled: false
  initContainers:
    volume-permissions:
      enabled: false
cfui:
  podSecurityContext:
    enabled: false
internal-gateway:
  podSecurityContext:
    enabled: false
helm-repo-manager:
  chartmuseum:
    securityContext:
      enabled: false
consul:
  podSecurityContext:
    enabled: false
  containerSecurityContext:
    enabled: false
cronus:
  podSecurityContext:
    enabled: false
ingress-nginx:
  enabled: false
mongodb:
  podSecurityContext:
    enabled: false
  containerSecurityContext:
    enabled: false
postgresql:
  primary:
    podSecurityContext:
      enabled: false
    containerSecurityContext:
      enabled: false
redis:
  master:
    podSecurityContext:
      enabled: false
    containerSecurityContext:
      enabled: false
rabbitmq:
  podSecurityContext:
    enabled: false
  containerSecurityContext:
    enabled: false
# Requires privileged SCC.
runner:
  enabled: false

As outlined in the prerequisites, it's required to set up a Firebase database for build logs streaming:
- Create a Database.
- Create a Legacy token for authentication.
- Set the following rules for the database:
{
   "rules": {
       "build-logs": {
           "$jobId":{
               ".read": "!root.child('production/build-logs/'+$jobId).exists() || (auth != null && auth.admin == true) || (auth == null && data.child('visibility').exists() && data.child('visibility').val() == 'public') || ( auth != null && data.child('accountId').exists() && auth.accountId == data.child('accountId').val() )",
               ".write": "auth != null && data.child('accountId').exists() && auth.accountId == data.child('accountId').val()"
           }
       },
       "environment-logs": {
           "$environmentId":{
               ".read": "!root.child('production/environment-logs/'+$environmentId).exists() || ( auth != null && data.child('accountId').exists() && auth.accountId == data.child('accountId').val() )",
               ".write": "auth != null && data.child('accountId').exists() && auth.accountId == data.child('accountId').val()"
           }
       }
   }
}

However, if you're in an air-gapped environment, you can omit this prerequisite and use the built-in logging system (i.e. the OfflineLogging feature flag).
See feature management
With this method, Codefresh by default deletes builds older than six months.
The retention mechanism removes data from the following collections: workflowproccesses, workflowrequests, workflowrevisions
cfapi:
  env:
    # Determines if automatic build deletion through the Cron job is enabled.
    RETENTION_POLICY_IS_ENABLED: true
    # The maximum number of builds to delete by a single Cron job. To avoid database issues, especially when there are large numbers of old builds, we recommend deleting them in small chunks. You can gradually increase the number after verifying that performance is not affected.
    RETENTION_POLICY_BUILDS_TO_DELETE: 50
    # The number of days for which to retain builds. Builds older than the defined retention period are deleted.
    RETENTION_POLICY_DAYS: 180

Configuration for Codefresh On-Prem >= 2.x
The previous configuration example (i.e. `RETENTION_POLICY_IS_ENABLED=true`) is also supported in Codefresh On-Prem >= 2.x.
For existing environments, for the retention mechanism to work, you must first drop the previously created index in the workflowprocesses collection. This requires a maintenance window that depends on the number of builds.
cfapi:
  env:
    # Determines if automatic build deletion is enabled.
    TTL_RETENTION_POLICY_IS_ENABLED: true
    # The number of days for which to retain builds, and can be between 30 (minimum) and 365 (maximum). Builds older than the defined retention period are deleted.
    TTL_RETENTION_POLICY_IN_DAYS: 180

pipeline-manager:
  env:
    # Determines project's pipelines limit (default: 500)
    PROJECT_PIPELINES_LIMIT: 500

cfapi:
  env:
    # Generate a unique session cookie (cf-uuid) on each login
    DISABLE_CONCURRENT_SESSIONS: true
    # Customize cookie domain
    CF_UUID_COOKIE_DOMAIN: .mydomain.com

Note! The ingress host for gitops-runtime and the ingress host for the control plane must share the same root domain (e.g. onprem.mydomain.com and runtime.mydomain.com).
cfapi:
  env:
    # Set value to the `X-Frame-Options` response header. Control the restrictions of embedding Codefresh page into the iframes.
    # Possible values: sameorigin(default) / deny
    FRAME_OPTIONS: sameorigin
cfui:
  env:
    FRAME_OPTIONS: sameorigin

Read more about the header at https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Frame-Options.
CONTENT_SECURITY_POLICY is the string describing content policies. Use semicolons to separate policies. CONTENT_SECURITY_POLICY_REPORT_TO is a comma-separated list of JSON objects. Each object must have a name and an array of endpoints that receive the incoming CSP reports.
For detailed information, see the Content Security Policy article on MDN.
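A hypothetical `CONTENT_SECURITY_POLICY_REPORT_TO` value matching that shape (the group name and endpoint URL are invented examples):

```json
{"name": "csp-reports", "endpoints": [{"url": "https://reports.mydomain.com/csp"}]}
```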
cfui:
  env:
    CONTENT_SECURITY_POLICY: "<YOUR SECURITY POLICIES>"
    CONTENT_SECURITY_POLICY_REPORT_ONLY: "default-src 'self'; font-src 'self'
      https://fonts.gstatic.com; script-src 'self' https://unpkg.com https://js.stripe.com;
      style-src 'self' https://fonts.googleapis.com; 'unsafe-eval' 'unsafe-inline'"
    CONTENT_SECURITY_POLICY_REPORT_TO: "<LIST OF ENDPOINTS AS JSON OBJECTS>"

For detailed information, see Securing your webhooks and Webhooks.
cfapi:
  env:
    USE_SHA256_GITHUB_SIGNATURE: "true"
In Codefresh On-Prem 2.6.x, all Codefresh-owned microservices include image digests in the default subchart values.
For example, default values for cfapi might look like this:
container:
  image:
    registry: us-docker.pkg.dev/codefresh-enterprise/gcr.io
    repository: codefresh/cf-api
    tag: 21.268.1
    digest: "sha256:bae42f8efc18facc2bf93690fce4ab03ef9607cec4443fada48292d1be12f5f8"
    pullPolicy: IfNotPresent

This results in the following image reference in the pod spec:
spec:
  containers:
    - name: cfapi
      image: us-docker.pkg.dev/codefresh-enterprise/gcr.io/codefresh/cf-api:21.268.1@sha256:bae42f8efc18facc2bf93690fce4ab03ef9607cec4443fada48292d1be12f5f8

Note! When the `digest` is provided, the `tag` is ignored! You can omit the digest and use the tag only, as in the following values.yaml example:
cfapi:
  container:
    image:
      tag: 21.268.1
      # -- Set empty tag for digest
      digest: ""
OpenID Connect (OIDC) allows Codefresh builds to access resources in your cloud provider (such as AWS, Azure, or GCP) without needing to store cloud credentials as long-lived pipeline secret variables.
- DNS name for OIDC Provider
- Valid TLS certificates for Ingress
- K8S secret containing JWKS (JSON Web Key Sets). Can be generated at mkjwk.org
- K8S secret containing Client ID (public identifier for the app) and Client Secret (application password; a cryptographically strong random string)
NOTE! For production usage, use External Secrets Operator or HashiCorp Vault to create secrets. The following example uses kubectl for brevity.
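Before creating the Kubernetes secret, you can sanity-check the JWKS file locally. The following is a minimal sketch (not an official Codefresh tool); the field names follow RFC 7517 for a private RSA signing key:

```python
import json
import sys

# Fields a private RSA signing key in the JWKS is expected to carry (RFC 7517/7518).
REQUIRED_FIELDS = {"kty", "alg", "use", "n", "e", "d"}

def check_jwks(doc: dict) -> list:
    """Return a list of problems; an empty list means the JWKS looks usable."""
    problems = []
    keys = doc.get("keys", [])
    if not keys:
        problems.append("JWKS contains no keys")
    for i, key in enumerate(keys):
        missing = sorted(REQUIRED_FIELDS - key.keys())
        if missing:
            problems.append(f"keys[{i}] is missing fields: {missing}")
    return problems

if __name__ == "__main__":
    # e.g. python check_jwks.py cf-oidc-provider-jwks.json
    with open(sys.argv[1]) as f:
        print(check_jwks(json.load(f)) or "JWKS looks OK")
```

Run it against the file before passing it to `kubectl create secret`.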
For JWKS, use the Public and Private Keypair Set (if generated at mkjwk.org), for example:
cf-oidc-provider-jwks.json:
{
    "keys": [
        {
            "p": "...",
            "kty": "RSA",
            "q": "...",
            "d": "...",
            "e": "AQAB",
            "use": "sig",
            "qi": "...",
            "dp": "...",
            "alg": "RS256",
            "dq": "...",
            "n": "..."
        }
    ]
}
# Creating a secret containing the JWKS.
# The secret KEY is `cf-oidc-provider-jwks.json`. It is then referenced in the `OIDC_JWKS_PRIVATE_KEYS_PATH` environment variable in `cf-oidc-provider`.
# The secret NAME is referenced in `.volumes.jwks-file.nameOverride` (the volumeMount is already configured in the chart).
kubectl create secret generic cf-oidc-provider-jwks \
  --from-file=cf-oidc-provider-jwks.json \
  -n $NAMESPACE
# Creating secret containing Client ID and Client Secret
# Secret NAME is `cf-oidc-provider-client-secret`.
# It is then referenced in the `OIDC_CF_PLATFORM_CLIENT_ID` and `OIDC_CF_PLATFORM_CLIENT_SECRET` environment variables in `cf-oidc-provider`
# and in `OIDC_PROVIDER_CLIENT_ID` and `OIDC_PROVIDER_CLIENT_SECRET` in `cfapi`.
kubectl create secret generic cf-oidc-provider-client-secret \
  --from-literal=client-id=codefresh \
  --from-literal=client-secret='verysecureclientsecret' \
  -n $NAMESPACE
values.yaml
global:
  # -- Set OIDC Provider URL
  oidcProviderService: "oidc.mydomain.com"
  # -- Default OIDC Provider service client ID in plain text.
  # Optional! If specified here, no need to specify CLIENT_ID/CLIENT_SECRET env vars in cfapi and cf-oidc-provider below.
  oidcProviderClientId: null
  # -- Default OIDC Provider service client secret in plain text.
  # Optional! If specified here, no need to specify CLIENT_ID/CLIENT_SECRET env vars in cfapi and cf-oidc-provider below.
  oidcProviderClientSecret: null
cfapi:
  # -- Set additional variables for cfapi
  # Reference a secret containing Client ID and Client Secret
  env:
    OIDC_PROVIDER_CLIENT_ID:
      valueFrom:
        secretKeyRef:
          name: cf-oidc-provider-client-secret
          key: client-id
    OIDC_PROVIDER_CLIENT_SECRET:
      valueFrom:
        secretKeyRef:
          name: cf-oidc-provider-client-secret
          key: client-secret
cf-oidc-provider:
  # -- Enable OIDC Provider
  enabled: true
  container:
    env:
      OIDC_JWKS_PRIVATE_KEYS_PATH: /secrets/jwks/cf-oidc-provider-jwks.json
      # -- Reference a secret containing Client ID and Client Secret
      OIDC_CF_PLATFORM_CLIENT_ID:
        valueFrom:
          secretKeyRef:
            name: cf-oidc-provider-client-secret
            key: client-id
      OIDC_CF_PLATFORM_CLIENT_SECRET:
        valueFrom:
          secretKeyRef:
            name: cf-oidc-provider-client-secret
            key: client-secret
  volumes:
    jwks-file:
      enabled: true
      type: secret
      # -- Secret name containing JWKS
      nameOverride: "cf-oidc-provider-jwks"
      optional: false
  ingress:
    main:
      # -- Enable ingress for OIDC Provider
      enabled: true
      annotations: {}
      # -- Set ingress class name
      ingressClassName: ""
      hosts:
        # -- Set OIDC Provider URL
      - host: "oidc.mydomain.com"
        paths:
        - path: /
        # For ALB (Application Load Balancer) /* asterisk is required in path
        # e.g.
        # - path: /*
      tls: []
Deploy the Helm chart with the new values.yaml.
Use https://oidc.mydomain.com/.well-known/openid-configuration to verify the OIDC Provider configuration.
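You can also check the discovery document programmatically. The sketch below is an assumption-laden helper, not part of the chart: the required field names are taken from the standard OIDC discovery metadata plus the claims_supported entry this provider advertises.

```python
import json
import urllib.request

# Fields assumed from standard OIDC discovery metadata; claims_supported is
# advertised by the Codefresh OIDC provider as described in this section.
REQUIRED = ("issuer", "jwks_uri", "claims_supported")

def missing_fields(doc: dict) -> list:
    """Return the required discovery fields absent from the document."""
    return [f for f in REQUIRED if f not in doc]

def fetch_discovery(base_url: str) -> dict:
    # base_url is your .Values.global.oidcProviderService with https:// prefix
    url = f"{base_url}/.well-known/openid-configuration"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

if __name__ == "__main__":
    doc = fetch_discovery("https://oidc.mydomain.com")
    print(missing_fields(doc) or "discovery document looks OK")
```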
To add Codefresh OIDC provider to IAM, see the AWS documentation
- For the provider URL: use the .Values.global.oidcProviderService value with the https:// prefix (i.e. https://oidc.mydomain.com)
- For the Audience: use the .Values.global.appUrl value with the https:// prefix (i.e. https://onprem.mydomain.com)
To configure the role and trust in IAM, see AWS documentation
Edit the trust policy to add the sub field to the validation conditions. For example, use StringLike to allow only builds from a specific pipeline to assume a role in AWS.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Federated": "arn:aws:iam::<ACCOUNT_ID>:oidc-provider/oidc.mydomain.com"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    "oidc.mydomain.com:aud": "https://onprem.mydomain.com"
                },
                "StringLike": {
                    "oidc.mydomain.com:sub": "account:64884faac2751b77ca7ab324:pipeline:64f7232ab698cfcb95d93cef:*"
                }
            }
        }
    ]
}
To see all the claims supported by the Codefresh OIDC provider, see the claims_supported entries at https://oidc.mydomain.com/.well-known/openid-configuration:
"claims_supported": [
  "sub",
  "account_id",
  "account_name",
  "pipeline_id",
  "pipeline_name",
  "workflow_id",
  "initiator",
  "scm_user_name",
  "scm_repo_url",
  "scm_ref",
  "scm_pull_request_target_branch",
  "sid",
  "auth_time",
  "iss"
]
Use the obtain-oidc-id-token and aws-sts-assume-role-with-web-identity steps to exchange the OIDC token (JWT) for a cloud access token.
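When crafting the StringLike condition on the sub claim, it can help to check a pattern against expected sub values locally first. This sketch is an illustration only (IAM evaluates the policy server-side); it mimics the *-wildcard matching with stdlib globbing:

```python
from fnmatch import fnmatchcase

def sub_matches(sub: str, pattern: str) -> bool:
    # fnmatchcase treats * and ? as wildcards, close enough to IAM StringLike
    # for the *-only patterns used here. Case-sensitive, like IAM StringLike.
    return fnmatchcase(sub, pattern)

# Pattern from the trust policy example above: restrict to a single pipeline
pattern = "account:64884faac2751b77ca7ab324:pipeline:64f7232ab698cfcb95d93cef:*"
print(sub_matches(
    "account:64884faac2751b77ca7ab324:pipeline:64f7232ab698cfcb95d93cef:init",
    pattern,
))
```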
Sometimes, in new releases of Codefresh On-Prem, index requirements change. When this happens, it's mentioned in the Upgrading section for the specific release.
Tip
If you're upgrading from version X to version Y, and index requirements were updated in any of the intermediate versions, you only need to align your indexes with the index requirements of version Y. To do that, follow Index alignment instructions.
The required index definitions for each release can be found at the following resources:
- 2.6: https://github.com/codefresh-io/codefresh-onprem-helm/tree/release-2.6/indexes
- 2.7: https://github.com/codefresh-io/codefresh-onprem-helm/tree/release-2.7/indexes
- 2.8: https://github.com/codefresh-io/codefresh-onprem-helm/tree/release-2.8/indexes
- 2.9: https://github.com/codefresh-io/codefresh-onprem-helm/tree/release-2.9/indexes
The index specifications are stored in JSON files. The directory structure is:
indexes
├── <DB_NAME> # MongoDB database name
│   ├── <COLLECTION_NAME>.json # MongoDB indexes for the specified collection
Overview of the index alignment process:
- Identify the differences between the indexes in your MongoDB instance and the required index definitions.
- Create any missing indexes.
- Perform the upgrade of Codefresh On-Prem installation.
- Then remove any unnecessary indexes.
Important
Any changes to indexes should be performed during a defined maintenance window or during periods of lowest traffic to MongoDB.
Building indexes during time periods where the target collection is under heavy write load can result in reduced write performance and longer index builds. (Source: MongoDB official documentation)
Even minor changes to indexes (e.g., index removal) can cause brief but noticeable performance degradation (Source: MongoDB official documentation)
For self-hosted MongoDB, follow the instructions below:
- Connect to the MongoDB server using the mongosh shell. Open your terminal or command prompt and run the following command, replacing <connection_string> with the appropriate MongoDB connection string for your server:
mongosh "<connection_string>"
- Retrieve the list of indexes for a specific collection:
db.getSiblingDB('<db_name>').getCollection('<collection_name>').getIndexes()
- Compare your indexes with the required indexes for the target release, and adjust them by creating any missing indexes or removing any unnecessary ones.
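The comparison can be scripted. Below is a minimal sketch (a hypothetical helper, not shipped with the chart) that diffs index names between a release's indexes/<DB_NAME>/<COLLECTION_NAME>.json definitions and the output of getIndexes(); it assumes each definition carries a "name" field, as createIndexes requires:

```python
def diff_indexes(required: list, existing: list) -> tuple:
    """Return (missing, extra) index names for one collection."""
    required_names = {ix["name"] for ix in required}
    # _id_ is created automatically by MongoDB and never listed in the specs
    existing_names = {ix["name"] for ix in existing} - {"_id_"}
    missing = sorted(required_names - existing_names)  # must be created
    extra = sorted(existing_names - required_names)    # candidates for removal
    return missing, extra
```

Feed it the parsed JSON file and the JSON-exported getIndexes() result for the same collection.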
Index creation
- To create indexes, we recommend using the createIndexes command (ref):
Important
We recommend creating indexes in batches of 3 at a time. However, before creating indexes in a production DB, it's highly recommended to test the performance impact on a staging instance with a production-like amount of data.
Each command should complete before starting the next batch.
db.getSiblingDB('<db_name>').runCommand(
  {
    createIndexes: '<collection_name>',
    indexes: [
        { ... },  // Index definition from the doc above
        { ... },  // Index definition from the doc above
        { ... }   // Index definition from the doc above
    ],
  }
)
After executing the command, you should see a result indicating that the indexes were created successfully.
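The batching recommendation above can be sketched as a small helper (a hypothetical script, not part of the chart) that splits a collection's index definitions into groups of three, each group to be passed to a separate createIndexes command:

```python
def index_batches(indexes: list, size: int = 3):
    """Yield index definitions in batches of `size` for createIndexes."""
    for start in range(0, len(indexes), size):
        yield indexes[start:start + size]

# Example: 7 definitions become batches of 3, 3 and 1
for batch in index_batches([f"index-{i}" for i in range(7)]):
    print(batch)
```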
Index removal
- To remove an index, use the dropIndex() method with <index_name>:
db.getSiblingDB('<db_name>').getCollection('<collection_name>').dropIndex('<index_name>')
If you're hosting MongoDB on Atlas, use the Manage Indexes guide to view, create, or remove indexes.
Important
In Atlas, for production environments, it is recommended to use rolling index builds by enabling the "Build index via rolling process" checkbox. (MongoDB official documentation)
This major chart version change (v1.4.X -> v2.0.0) contains incompatible breaking changes that require manual actions.
Before applying the upgrade, read through this section!
Codefresh 2.0 chart includes additional dependent microservices (charts):
- argo-platform: Main Codefresh GitOps module.
- internal-gateway: NGINX that proxies requests to the correct components (api-graphql, api-events, ui).
- argo-hub-platform: Service for Argo Workflow templates.
- platform-analytics and etl-starter: Services for the Pipelines dashboard
These services require additional databases in MongoDB (audit/read-models/platform-analytics-postgres) and in PostgreSQL (analytics and analytics_pre_aggregations).
The helm chart is configured to re-run seed jobs to create necessary databases and users during the upgrade.
seed:
  # -- Enable all seed jobs
  enabled: true
Starting from version 2.0.0, two new MongoDB indexes have been added that are vital for optimizing database queries and enhancing overall system performance. It is crucial to create these indexes before performing the upgrade to avoid any potential performance degradation.
- account_1_annotations.key_1_annotations.value_1 (db: codefresh; collection: annotations)
{
    "account" : 1,
    "annotations.key" : 1,
    "annotations.value" : 1
}
- accountId_1_entityType_1_entityId_1 (db: codefresh; collection: workflowprocesses)
{
    "accountId" : 1,
    "entityType" : 1,
    "entityId" : 1
}
To prevent potential performance degradation during the upgrade, schedule a maintenance window during a period of low activity or minimal user impact and create the indexes mentioned above before initiating the upgrade process. By proactively creating these indexes, you avoid the application automatically creating them during the upgrade and ensure a smooth transition with optimized performance.
Index Creation
If you're hosting MongoDB on Atlas, use the Create, View, Drop, and Hide Indexes guide to create the indexes mentioned above. It's important to create them in a rolling fashion (i.e. with the Build index via rolling process checkbox enabled) in production environments.
For self-hosted MongoDB, see the following instructions:
- Connect to the MongoDB server using the mongosh shell. Open your terminal or command prompt and run the following command, replacing <connection_string> with the appropriate MongoDB connection string for your server:
mongosh "<connection_string>"
- Once connected, switch to the codefresh database where the index will be located using the use command:
use codefresh
- To create the indexes, use the createIndex() method. The createIndex() method should be executed on the db object:
db.annotations.createIndex({ account: 1, 'annotations.key': 1, 'annotations.value': 1 }, { name: 'account_1_annotations.key_1_annotations.value_1', sparse: true, background: true })
db.workflowprocesses.createIndex({ accountId: 1, entityType: 1, entityId: 1 }, { name: 'accountId_1_entityType_1_entityId_1', background: true })
After executing the createIndex() commands, you should see a result indicating the successful creation of the indexes.
⚠️  Kcfi Deprecation
This major release deprecates kcfi installer. The recommended way to install Codefresh On-Prem is Helm.
Because of that, a Kcfi config.yaml will not be compatible with a Helm-based installation as-is.
You can still reuse the same config.yaml for the Helm chart, but you need to remove (or update) the following sections.
- .Values.metadata is deprecated. Remove it from config.yaml
1.4.x config.yaml
metadata:
  kind: codefresh
  installer:
    type: helm
    helm:
      chart: codefresh
      repoUrl: http://chartmuseum.codefresh.io/codefresh
      version: 1.4.x
- .Values.kubernetes is deprecated. Remove it from config.yaml
1.4.x config.yaml
kubernetes:
  namespace: codefresh
  context: context-name
- .Values.tls (.Values.webTLS) is moved under .Values.ingress.tls. Remove .Values.tls from config.yaml afterwards. See full values.yaml.
1.4.x config.yaml
tls:
  selfSigned: false
  cert: certs/certificate.crt
  key: certs/private.key
2.0.0 config.yaml
# -- Ingress
ingress:
  # -- Enable the Ingress
  enabled: true
  # -- Set the ingressClass that is used for the ingress.
  ingressClassName: nginx-codefresh
  tls:
    # -- Enable TLS
    enabled: true
    # -- Default secret name to be created with provided `cert` and `key` below
    secretName: "star.codefresh.io"
    # -- Certificate (base64 encoded)
    cert: "LS0tLS1CRUdJTiBDRVJ...."
    # -- Private key (base64 encoded)
    key: "LS0tLS1CRUdJTiBSU0E..."
    # -- Existing `kubernetes.io/tls` type secret with TLS certificates (keys: `tls.crt`, `tls.key`)
    existingSecret: ""
- .Values.images is deprecated. Remove .Values.images from config.yaml.
- .Values.images.codefreshRegistrySa is changed to .Values.imageCredentials
- .Values.privateRegistry.address is changed to .Values.global.imageRegistry (no trailing slash / at the end)
1.4.x config.yaml
images:
  codefreshRegistrySa: sa.json
  usePrivateRegistry: true
  privateRegistry:
    address: myprivateregistry.domain
    username: username
    password: password
2.0.0 config.yaml
# -- Credentials for Image Pull Secret object
imageCredentials: {}
# Pass sa.json (as a single line). Obtain GCR Service Account JSON (sa.json) at support@codefresh.io
# E.g.:
# imageCredentials:
#   registry: gcr.io
#   username: _json_key
#   password: '{ "type": "service_account", "project_id": "codefresh-enterprise", "private_key_id": ... }'
2.0.0 config.yaml
global:
  # -- Global Docker image registry
  imageRegistry: "myprivateregistry.domain"
- .Values.dbinfra is deprecated. Remove it from config.yaml
1.4.x config.yaml
dbinfra:
  enabled: false
- .Values.firebaseUrl and .Values.firebaseSecret are moved under .Values.global
1.4.x config.yaml
firebaseUrl: <url>
firebaseSecret: <secret>
newrelicLicenseKey: <key>
2.0.0 config.yaml
global:
  # -- Firebase URL for logs streaming.
  firebaseUrl: ""
  # -- Firebase Secret.
  firebaseSecret: ""
  # -- New Relic Key
  newrelicLicenseKey: ""
- .Values.global.certsJobs and .Values.global.seedJobs are deprecated. Use .Values.seed.mongoSeedJob and .Values.seed.postgresSeedJob. See full values.yaml.
1.4.x config.yaml
global:
  certsJobs: true
  seedJobs: true
2.0.0 config.yaml
seed:
  # -- Enable all seed jobs
  enabled: true
  # -- Mongo Seed Job. Required at first install. Seeds the required data (default idp/user/account), creates cfuser and required databases.
  # @default -- See below
  mongoSeedJob:
    enabled: true
  # -- Postgres Seed Job. Required at first install. Creates required user and databases.
  # @default -- See below
  postgresSeedJob:
    enabled: true
⚠️ Migration to Library Charts
All Codefresh subchart templates (i.e. cfapi, cfui, pipeline-manager, context-manager, etc) have been migrated to use Helm library charts.
That allows unifying the values structure across all Codefresh-owned charts. However, there are some immutable fields in the old charts which cannot be upgraded during a regular helm upgrade, and require additional manual actions.
Run the following commands before applying the upgrade.
- Delete the cf-runner and cf-builder stateful sets:
kubectl delete sts cf-runner --namespace $NAMESPACE
kubectl delete sts cf-builder --namespace $NAMESPACE
- Delete all jobs:
kubectl delete job --namespace $NAMESPACE -l release=cf
- In values.yaml/config.yaml, remove the .Values.nomios.ingress section if you have it:
nomios:
  # Remove ingress section
  ingress:
    ...
Due to the deprecation of the legacy ChartMuseum subchart in favor of the upstream chartmuseum, you need to remove the old deployment before the upgrade due to an immutable matchLabels field change in the deployment spec:
kubectl delete deploy cf-chartmuseum --namespace $NAMESPACE
- If you have .persistence.enabled=true defined and NOT .persistence.existingClaim, like:
helm-repo-manager:
  chartmuseum:
    persistence:
      enabled: true
then you have to back up the content of the old PVC (mounted as /storage in the old deployment) before the upgrade!
POD_NAME=$(kubectl get pod -l app=chartmuseum -n $NAMESPACE --no-headers -o custom-columns=":metadata.name")
kubectl cp -n $NAMESPACE $POD_NAME:/storage $(pwd)/storage
After the upgrade, restore the content into the new deployment:
POD_NAME=$(kubectl get pod -l app.kubernetes.io/name=chartmuseum -n $NAMESPACE --no-headers -o custom-columns=":metadata.name")
kubectl cp -n $NAMESPACE $(pwd)/storage $POD_NAME:/storage
- If you have .persistence.existingClaim defined, you can keep it as is:
helm-repo-manager:
  chartmuseum:
    existingClaim: my-claim-name
- If you have .Values.global.imageRegistry specified, it won't be applied to the new chartmuseum subchart. Add the image registry explicitly for the subchart as follows:
global:
  imageRegistry: myregistry.domain.com
helm-repo-manager:
  chartmuseum:
    image:
      repository: myregistry.domain.com/codefresh/chartmuseum
The values structure for argo-platform images has been changed.
Added registry to align with the rest of the services.
values for <= v2.0.16
argo-platform:
  api-graphql:
    image:
      repository: gcr.io/codefresh-enterprise/codefresh-io/argo-platform-api-graphql
  abac:
    image:
      repository: gcr.io/codefresh-enterprise/codefresh-io/argo-platform-abac
  analytics-reporter:
    image:
      repository: gcr.io/codefresh-enterprise/codefresh-io/argo-platform-analytics-reporter
  api-events:
    image:
      repository: gcr.io/codefresh-enterprise/codefresh-io/argo-platform-api-events
  audit:
    image:
      repository: gcr.io/codefresh-enterprise/codefresh-io/argo-platform-audit
  cron-executor:
    image:
      repository: gcr.io/codefresh-enterprise/codefresh-io/argo-platform-cron-executor
  event-handler:
    image:
      repository: gcr.io/codefresh-enterprise/codefresh-io/argo-platform-event-handler
  ui:
    image:
      repository: gcr.io/codefresh-enterprise/codefresh-io/argo-platform-ui
values for >= v2.0.17
argo-platform:
  api-graphql:
    image:
      registry: gcr.io/codefresh-enterprise
      repository: codefresh-io/argo-platform-api-graphql
  abac:
    image:
      registry: gcr.io/codefresh-enterprise
      repository: codefresh-io/argo-platform-abac
  analytics-reporter:
    image:
      registry: gcr.io/codefresh-enterprise
      repository: codefresh-io/argo-platform-analytics-reporter
  api-events:
    image:
      registry: gcr.io/codefresh-enterprise
      repository: codefresh-io/argo-platform-api-events
  audit:
    image:
      registry: gcr.io/codefresh-enterprise
      repository: codefresh-io/argo-platform-audit
  cron-executor:
    image:
      registry: gcr.io/codefresh-enterprise
      repository: codefresh-io/argo-platform-cron-executor
  event-handler:
    image:
      registry: gcr.io/codefresh-enterprise
      repository: codefresh-io/argo-platform-event-handler
  ui:
    image:
      registry: gcr.io/codefresh-enterprise
      repository: codefresh-io/argo-platform-ui
- Changed default ingress paths. All paths point to internal-gateway now. Remove any overrides at .Values.ingress.services! (updated example for ALB)
- Deprecated global.mongoURI. Supported for backward compatibility!
- Added global.mongodbProtocol/global.mongodbUser/global.mongodbPassword/global.mongodbHost/global.mongodbOptions
- Added global.mongodbUserSecretKeyRef/global.mongodbPasswordSecretKeyRef/global.mongodbHostSecretKeyRef
- Added seed.mongoSeedJob.mongodbRootUserSecretKeyRef/seed.mongoSeedJob.mongodbRootPasswordSecretKeyRef
- Added seed.postgresSeedJob.postgresUserSecretKeyRef/seed.postgresSeedJob.postgresPasswordSecretKeyRef
- Added global.firebaseUrlSecretKeyRef/global.firebaseSecretSecretKeyRef
- Added global.postgresUserSecretKeyRef/global.postgresPasswordSecretKeyRef/global.postgresHostnameSecretKeyRef
- Added global.rabbitmqUsernameSecretKeyRef/global.rabbitmqPasswordSecretKeyRef/global.rabbitmqHostnameSecretKeyRef
- Added global.redisPasswordSecretKeyRef/global.redisUrlSecretKeyRef
- Removed global.runtimeMongoURI (defaults to global.mongoURI or global.mongodbHost/global.mongodbHostSecretKeyRef/etc values)
- Removed global.runtimeMongoDb (defaults to global.mongodbDatabase)
- Removed global.runtimeRedisHost (defaults to global.redisUrl/global.redisUrlSecretKeyRef or global.redisService)
- Removed global.runtimeRedisPort (defaults to global.redisPort)
- Removed global.runtimeRedisPassword (defaults to global.redisPassword/global.redisPasswordSecretKeyRef)
- Removed global.runtimeRedisDb (defaults to the values below)
cfapi:
  env:
    RUNTIME_REDIS_DB: 0
cf-broadcaster:
  env:
    REDIS_DB: 0
Since version 2.1.7 the chart is pushed only to the OCI registry at
oci://quay.io/codefresh/codefresh
Versions prior to 2.1.7 are still available in ChartMuseum at
http://chartmuseum.codefresh.io/codefresh
Codefresh On-Prem 2.2.x uses MongoDB 5.x (4.x is still supported). If you run external MongoDB, it is highly recommended to upgrade it to 5.x after upgrading Codefresh On-Prem to 2.2.x.
If you run external Redis, this is not applicable to you.
Codefresh On-Prem 2.2.x adds (not replaces!) an optional Redis-HA (master/slave configuration with Sentinel sidecars for failover management) instead of a single Redis instance. To enable it, see the following values:
global:
  redisUrl: cf-redis-ha-haproxy # Replace `cf` with your Helm release name
# -- Disable standalone Redis instance
redis:
  enabled: false
# -- Enable Redis HA
redis-ha:
  enabled: true
Migration from GCR (gcr.io) to GAR (us-docker.pkg.dev)
Update .Values.imageCredentials.registry to us-docker.pkg.dev if it's explicitly set to gcr.io in your values file.
Default .Values.imageCredentials for Onprem v2.2.x and below
imageCredentials:
  registry: gcr.io
  username: _json_key
  password: <YOUR_SERVICE_ACCOUNT_JSON_HERE>
Default .Values.imageCredentials for Onprem v2.3.x and above
imageCredentials:
  registry: us-docker.pkg.dev
  username: _json_key
  password: <YOUR_SERVICE_ACCOUNT_JSON_HERE>
Use helm history to determine which release has worked, then use helm rollback to perform a rollback.
When rolling back from 2.x, prune these resources due to immutable field changes:
kubectl delete sts cf-runner --namespace $NAMESPACE
kubectl delete sts cf-builder --namespace $NAMESPACE
kubectl delete deploy cf-chartmuseum --namespace $NAMESPACE
kubectl delete job --namespace $NAMESPACE -l release=$RELEASE_NAME
helm rollback $RELEASE_NAME $RELEASE_NUMBER \
    --namespace $NAMESPACE \
    --debug \
    --wait
A new cfapi-auth role is introduced in 2.4.x.
If you run onprem with a multi-role cfapi configuration, make sure to enable the cfapi-auth role:
cfapi-auth:
  <<: *cf-api
  enabled: true
Since 2.4.x, SYSTEM_TYPE is changed to PROJECT_ONE by default.
If you want to preserve the original CLASSIC value, update cfapi environment variables:
cfapi:
  container:
    env:
      DEFAULT_SYSTEM_TYPE: CLASSIC
⚠️ WARNING! MongoDB indexes changed! Please follow the Maintaining MongoDB indexes guide to meet the index requirements BEFORE the upgrade process.
⚠️ WARNING! MongoDB indexes changed! Please follow the Maintaining MongoDB indexes guide to meet the index requirements BEFORE the upgrade process.
- Added option to provide global tolerations/nodeSelector/affinity for all Codefresh subcharts
Note! These global settings will not be applied to Bitnami subcharts (e.g. mongodb, redis, rabbitmq, postgresql, etc.)
global:
  tolerations:
    - key: "key"
      operator: "Equal"
      value: "value"
      effect: "NoSchedule"
  nodeSelector:
    key: "value"
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: "key"
                operator: "In"
                values:
                  - "value"
⚠️ WARNING! MongoDB indexes changed! Please follow the Maintaining MongoDB indexes guide to meet the index requirements BEFORE the upgrade process.
Default MongoDB image is changed from 6.x to 7.x.
If you run external MongoDB (i.e. Atlas), it is required to upgrade it to 7.x after upgrading Codefresh On-Prem to 2.8.x.
- Before the upgrade, for backward compatibility (in case you need to roll back to 6.x), you should set featureCompatibilityVersion to 6.0 in your values file:
mongodb:
  migration:
    enabled: true
    featureCompatibilityVersion: "6.0"
- Perform the Codefresh On-Prem upgrade to 2.8.x. Make sure all systems are up and running.
- After the upgrade, if all systems are stable, set featureCompatibilityVersion to 7.0 in your values file and re-deploy the chart:
mongodb:
  migration:
    enabled: true
    featureCompatibilityVersion: "7.0"
- Finally, disable the migration mode:
mongodb:
  migration:
    enabled: false
The default PostgreSQL image is changed from 13.x to 17.x.
If you run external PostgreSQL, follow the official instructions to upgrade to 17.x.
⚠️ Important!
The default SSL configuration may change on your provider's side when you upgrade.
Please read the following section before the upgrade: Using SSL with a PostgreSQL
For the built-in bitnami/postgresql subchart, a direct upgrade is not supported due to incompatible breaking changes in the database files. You will see the following error in the logs:
postgresql 17:36:28.41 INFO  ==> ** Starting PostgreSQL **
2025-05-21 17:36:28.432 GMT [1] FATAL:  database files are incompatible with server
2025-05-21 17:36:28.432 GMT [1] DETAIL:  The data directory was initialized by PostgreSQL version 13, which is not compatible with this version 17.2.
You need to back up your data, delete the old PostgreSQL StatefulSet with its PVCs, and restore the data into a new PostgreSQL StatefulSet.
- Before the upgrade, back up your data on a separate PVC.
- Create a PVC with the same or bigger size as your current PostgreSQL PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgresql-dump
spec:
  storageClassName: <STORAGE_CLASS>
  resources:
    requests:
      storage: <PVC_SIZE>
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
- Create a job to dump the data from the old PostgreSQL StatefulSet onto the new PVC:
apiVersion: batch/v1
kind: Job
metadata:
  name: postgresql-dump
spec:
  ttlSecondsAfterFinished: 300
  template:
    spec:
      containers:
      - name: postgresql-dump
        image: quay.io/codefresh/postgresql:17
        resources:
          requests:
            memory: "128Mi"
            cpu: "100m"
          limits:
            memory: "1Gi"
            cpu: "1"
        env:
          - name: PGUSER
            value: "<POSTGRES_USER>"
          - name: PGPASSWORD
            value: "<POSTGRES_PASSWORD>"
          - name: PGHOST
            value: "<POSTGRES_HOST>"
          - name: PGPORT
            value: "<POSTGRES_PORT>"
        command:
          - "/bin/bash"
          - "-c"
          - |
            pg_dumpall --verbose > /opt/postgresql-dump/dump.sql
        volumeMounts:
          - name: postgresql-dump
            mountPath: /opt/postgresql-dump
      securityContext:
        runAsUser: 0
        fsGroup: 0
      volumes:
        - name: postgresql-dump
          persistentVolumeClaim:
            claimName: postgresql-dump
      restartPolicy: Never
- Delete the old PostgreSQL StatefulSet and PVC:
STS_NAME=$(kubectl get sts -n $NAMESPACE -l app.kubernetes.io/instance=$RELEASE_NAME -l app.kubernetes.io/name=postgresql -o jsonpath='{.items[0].metadata.name}')
PVC_NAME=$(kubectl get pvc -n $NAMESPACE -l app.kubernetes.io/instance=$RELEASE_NAME -l app.kubernetes.io/name=postgresql -o jsonpath='{.items[0].metadata.name}')
kubectl delete sts $STS_NAME -n $NAMESPACE
kubectl delete pvc $PVC_NAME -n $NAMESPACE
- Perform the upgrade to 2.8.x with the PostgreSQL seed job enabled to re-create users and databases:
seed:
  postgresSeedJob:
    enabled: true
- Create a job to restore the data from the new PVC into the new PostgreSQL StatefulSet:
apiVersion: batch/v1
kind: Job
metadata:
  name: postgresql-restore
spec:
  ttlSecondsAfterFinished: 300
  template:
    spec:
      containers:
      - name: postgresql-restore
        image: quay.io/codefresh/postgresql:17
        resources:
          requests:
            memory: "128Mi"
            cpu: "100m"
          limits:
            memory: "1Gi"
            cpu: "1"
        env:
          - name: PGUSER
            value: "<POSTGRES_USER>"
          - name: PGPASSWORD
            value: "<POSTGRES_PASSWORD>"
          - name: PGHOST
            value: "<POSTGRES_HOST>"
          - name: PGPORT
            value: "<POSTGRES_PORT>"
        command:
          - "/bin/bash"
          - "-c"
          - |
            psql -f /opt/postgresql-dump/dump.sql
        volumeMounts:
          - name: postgresql-dump
            mountPath: /opt/postgresql-dump
      securityContext:
        runAsUser: 0
        fsGroup: 0
      volumes:
        - name: postgresql-dump
          persistentVolumeClaim:
            claimName: postgresql-dump
      restartPolicy: Never
The default RabbitMQ image is changed from 3.x to 4.0.
If you run external RabbitMQ, follow the official instructions to upgrade to 4.0.
For the built-in bitnami/rabbitmq subchart, a pre-upgrade hook was added to enable all stable feature flags.
- Added option to provide .Values.global.tolerations/.Values.global.nodeSelector/.Values.global.affinity for all Codefresh subcharts
- Changed default location for public images from quay.io/codefresh to us-docker.pkg.dev/codefresh-inc/public-gcr-io/codefresh
- .Values.hooks was split into .Values.hooks.mongodb and .Values.hooks.consul
Warning
BREAKING CHANGES
Default DinD image has been upgraded to 28.x, which removes support for pushing and pulling with legacy image manifest v2 schema 1 (ref).
Before upgrading Codefresh, please follow the instructions in this doc to identify deprecated images, upgrade them, and then proceed with upgrading the platform.
- .Values.runner is removed
Changes in indexes: follow Maintaining MongoDB indexes guide to meet index requirements before the upgrade process.
Changes in collections: the following collections can be safely dropped after the upgrade to 2.9.x if they exist. These collections are no longer used and should be removed to maintain optimal database performance and prevent the accumulation of obsolete data.
- read-models.application-tree
- read-models.<entity>-history — every collection with a -history suffix, such as read-models.applications-history, read-models.services-history, etc.
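Given the collection names from the read-models database (e.g. the output of db.getSiblingDB('read-models').getCollectionNames() in mongosh), the obsolete ones can be picked out with a short helper; this is a hypothetical sketch, not an official tool:

```python
import re

def obsolete_collections(names: list) -> list:
    """Collections in read-models that are droppable after upgrading to 2.9.x."""
    return sorted(
        n for n in names
        if n == "application-tree" or re.fullmatch(r".+-history", n)
    )

print(obsolete_collections([
    "application-tree", "applications-history", "services-history", "applications",
]))
```

Review the resulting list manually before issuing any drop() commands.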
Builds are stuck in pending with Error: Failed to validate connection to Docker daemon; caused by Error: certificate has expired
Reason: Runtime certificates have expired.
To check if runtime internal CA expired:
kubectl -n $NAMESPACE get secret/cf-codefresh-certs-client -o jsonpath="{.data['ca\.pem']}" | base64 -d | openssl x509 -enddate -noout
Resolution: Replace the internal CA and re-issue dind certs for the runtime.
- Delete the k8s secret with the expired certificate:
kubectl -n $NAMESPACE delete secret cf-codefresh-certs-client
- Set .Values.global.gencerts.enabled=true (.Values.global.certsJob=true for onprem < 2.x versions):
# -- Job to generate internal runtime secrets.
# @default -- See below
gencerts:
  enabled: true
- Upgrade the Codefresh On-Prem Helm release. It will recreate the cf-codefresh-certs-client secret:
helm upgrade --install cf codefresh/codefresh \
    -f cf-values.yaml \
    --namespace codefresh \
    --create-namespace \
    --debug \
    --wait \
    --timeout 15m
- Restart the cfapi and cfsign deployments:
kubectl -n $NAMESPACE rollout restart deployment/cf-cfapi
kubectl -n $NAMESPACE rollout restart deployment/cf-cfsign
Case A: Codefresh Runner installed with the Helm chart (charts/cf-runtime)
Re-apply the cf-runtime helm chart. The post-upgrade gencerts-dind helm hook will regenerate the dind certificates using the new CA.
Case B: Codefresh Runner installed with legacy CLI (codefresh runner init)
Delete codefresh-certs-server k8s secret and run ./configure-dind-certs.sh in your runtime namespace.
kubectl -n $NAMESPACE delete secret codefresh-certs-server
./configure-dind-certs.sh -n $RUNTIME_NAMESPACE https://$CODEFRESH_HOST $CODEFRESH_API_TOKEN
Consul Error: Refusing to rejoin cluster because the server has been offline for more than the configured server_rejoin_age_max
After a platform upgrade, Consul fails with the error refusing to rejoin cluster because the server has been offline for more than the configured server_rejoin_age_max - consider wiping your data dir. This is a known issue with hashicorp/consul behaviour. Try wiping or deleting the Consul PV with config data and restarting the Consul StatefulSet.