diff --git a/pages/cockpit/how-to/activate-managed-alerts.mdx b/pages/cockpit/how-to/activate-managed-alerts.mdx
index fbd4895b7e..3ea45887c4 100644
--- a/pages/cockpit/how-to/activate-managed-alerts.mdx
+++ b/pages/cockpit/how-to/activate-managed-alerts.mdx
@@ -4,7 +4,7 @@ description: Learn how to activate preconfigured alerts for your Cockpit resourc
categories:
- observability
dates:
- validation: 2025-07-29
+ validation: 2025-10-22
posted: 2024-04-05
---
import Requirements from '@macros/iam/requirements.mdx'
@@ -19,7 +19,7 @@ This page shows you how to activate [preconfigured alerts](/cockpit/concepts/#pr
- A Scaleway account logged into the [console](https://console.scaleway.com)
- [Owner](/iam/concepts/#owner) status or [IAM permissions](/iam/concepts/#permission) allowing you to perform actions in the intended Organization
- [Enabled](/cockpit/how-to/enable-alert-manager/) the alert manager
- - [Added contacts](/cockpit/how-to/add-contact-points/)
+ - [Added contacts](/cockpit/how-to/enable-alert-manager/#how-to-add-contacts)
## How to activate preconfigured alerts
diff --git a/pages/cockpit/how-to/add-contact-points.mdx b/pages/cockpit/how-to/add-contact-points.mdx
deleted file mode 100644
index 1533107a25..0000000000
--- a/pages/cockpit/how-to/add-contact-points.mdx
+++ /dev/null
@@ -1,50 +0,0 @@
----
-title: How to manage contacts
-description: Learn how to configure and manage contacts to notify when alerts are triggered or resolved using the Scaleway console. Follow the steps to configure contacts, choose whether to be notified when alerts are resolved, and send test alerts.
-categories:
- - observability
-dates:
- validation: 2025-07-29
- posted: 2024-04-05
----
-import Requirements from '@macros/iam/requirements.mdx'
-
-
-This page shows you how to add and manage [contacts](/cockpit/concepts/#contacts) to ensure the right people are notified when alerts are triggered or resolved using the [Scaleway console](https://console.scaleway.com/).
-
-You are prompted to create contacts when [enabling the alert manager](/cockpit/how-to/enable-alert-manager/) for the first time, or when re-enabling it after disabling. However, you can also perform this step independently from the alert manager configuration at any time.
-
-
-
- - A Scaleway account logged into the [console](https://console.scaleway.com)
- - [Owner](/iam/concepts/#owner) status or [IAM permissions](/iam/concepts/#permission) allowing you to perform actions in the intended Organization
- - [Enabled](/cockpit/how-to/enable-alert-manager/) the alert manager
-
-## How to add contacts
-
-1. Click **Cockpit** in the **Monitoring** section of the [console](https://console.scaleway.com/) side menu. The **Cockpit** overview page displays.
-2. Click the **Alerts** tab.
-3. Click the **Region** drop-down and select the desired region.
-
- Make sure that you select the same region as the [data sources](/cockpit/concepts/#data-sources) you want your contacts to be alerted for.
-
-4. Click **Add email** in the **Contacts** section. A pop-up displays.
-5. Enter an email address, then click **+ Add email**. Your email address displays and by default, the **Resolved notifications** checkbox is ticked. This means that you will receive notifications for resolved alerts.
-6. Optionally, enter another email and click **+ Add email** to add another contact.
-7. Click **Add contacts** to confirm. The email addresses appears in the list of your contacts.
-
-## How to manage contacts
-
-1. Click **Cockpit** in the **Monitoring** section of the [console](https://console.scaleway.com/) side menu. The **Cockpit** overview page displays.
-2. Click the **Alerts** tab.
-3. Click the **Region** drop-down and select the desired region.
-
- Make sure that you select the same region as the [data sources](/cockpit/concepts/#data-sources) you want your contacts to be alerted for.
-
-4. Scroll to the **Contacts** section and:
- - click **Send test alert** to ensure that your alerts are sent to your contacts. You **must have [activated preconfigured alerts](/cockpit/how-to/activate-managed-alerts/)** beforehand.
- - clear the checkbox under **Resolved notifications** to **stop receiving resolved notifications**.
- - click the trash icon next to the contact you wish to **delete**, then click **Delete contact** to confirm.
-
- The contact you delete will no longer receive alerts. If this is your only configured contact, alert notifications will stop until you add a new contact.
-
\ No newline at end of file
diff --git a/pages/cockpit/how-to/configure-alerts-for-scw-resources.mdx b/pages/cockpit/how-to/configure-alerts-for-scw-resources.mdx
index cb85c582e8..7b79400ecf 100644
--- a/pages/cockpit/how-to/configure-alerts-for-scw-resources.mdx
+++ b/pages/cockpit/how-to/configure-alerts-for-scw-resources.mdx
@@ -1,8 +1,8 @@
---
-title: How to configure alerts for Scaleway resources in Grafana
-description: Learn how to configure alerts for Scaleway resources in Grafana. Follow the steps to create alert rules, define conditions, and set up notifications for your monitored resources.
+title: How to configure custom alerts in Grafana
+description: Learn how to configure custom alerts for Scaleway resources in Grafana. Follow the steps to create alert rules, define conditions, and set up notifications for your monitored resources.
dates:
- validation: 2025-08-20
+ validation: 2025-10-22
posted: 2023-11-06
---
import Requirements from '@macros/iam/requirements.mdx'
@@ -30,7 +30,7 @@ This page shows you how to create alert rules in Grafana for monitoring Scaleway
- Scaleway resources you can monitor
- [Created Grafana credentials](/cockpit/how-to/retrieve-grafana-credentials/) with the **Editor** role
- [Enabled](/cockpit/how-to/enable-alert-manager/) the Scaleway alert manager in the same region as the resources you want to be alerted for
-- [Added](/cockpit/how-to/add-contact-points/) contacts in the Scaleway console or contact points in Grafana (with the `Scaleway Alerting` alert manager of the same region as your `Scaleway Metrics` data source), otherwise alerts will not be delivered
+- [Added](/cockpit/how-to/enable-alert-manager/#how-to-add-contacts) contacts in the Scaleway console or contact points in Grafana (with the `Scaleway Alerting` alert manager of the same region as your `Scaleway Metrics` data source), otherwise alerts will not be delivered
## Switch to the data source-managed tab
@@ -56,103 +56,108 @@ Data source managed alert rules allow you to configure alerts managed by the dat
Switch between the tabs below to create alerts for a Scaleway Instance, an Object Storage bucket, a Kubernetes cluster Pod, or Cockpit logs.
-
-
- The steps below explain how to create the metric selection and configure an alert condition that triggers when **your Instance consumes more than 10% of a single CPU core over the past 5 minutes.**
-
- 1. In the query field next to the **Loading metrics... >** button, paste the following query. Make sure that the values for the labels you have selected (for example, `resource_id`) correspond to those of the target resource.
- ```bash
- rate(instance_server_cpu_seconds_total{resource_id="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"}[5m]) > 0.1
- ```
-
- The `instance_server_cpu_seconds_total` metric records how many seconds of CPU time your Instance has used in total. It is helpful to detect unexpected CPU usage spikes.
-
- 2. In the **Set alert evaluation behavior** section, specify how long the condition must be met before triggering the alert.
- 3. Enter a name in the **Namespace** and **Group** fields to categorize and manage your alert rules. Rules that share the same group will use the same configuration, including the evaluation interval which determines how often the rule is evaluated (by default: every 1 minute). You can modify this interval later in the group settings.
-
- The evaluation interval is different from the pending period set in step 2. The evaluation interval controls how often the rule is checked, while the pending period defines how long the condition must be continuously met before the alert fires.
-
- 4. In the **Configure labels and notifications** section, click **+ Add labels**. A pop-up appears.
- 5. Enter a label and value name and click **Save**. You can skip this step if you want your alerts to be sent to the contacts you may already have created in the Scaleway console.
-
- In Grafana, notifications are sent by matching alerts to notification policies based on labels. This step is about deciding how alerts will reach you or your team (Slack, email, etc.) based on labels you attach to them. Then, you can set up rules that define who receives notifications in the **Notification policies** page.
- For example, if your alert named `alert-for-high-cpu-usage` has the label `team = instances-team`, you are telling Grafana to send a notification to the Instances team when the alert gets triggered. Find out how to [configure notification policies in Grafana](/tutorials/configure-slack-alerting/#configuring-a-notification-policy).
-
- 6. Click **Save rule and exit** in the top right corner of your screen to save and activate your alert.
- 7. Optionally, check that your configuration works by temporarily lowering the threshold. This will trigger the alert and notify your [contacts](/cockpit/concepts/#contact-points).
-
-
- The steps below explain how to create the metric selection and configure an alert condition that triggers when **the object count in your bucket exceeds a specific threshold**.
-
- 1. In the query field next to the **Loading metrics... >** button, paste the following query. Make sure that the values for the labels you have selected (for example, `resource_id` and `region`) correspond to those of the target resource.
- ```bash
- object_storage_bucket_objects_total{region="fr-par", resource_id="my-bucket"} > 2000
- ```
-
- The `object_storage_bucket_objects_total` metric indicates the total number of objects stored in a given Object Storage bucket. It is useful to monitor and control object growth in your bucket and avoid hitting thresholds.
-
- 2. In the **Set alert evaluation behavior** section, specify how long the condition must be met before triggering the alert.
- 3. Enter a name in the **Namespace** and **Group** fields to categorize and manage your alert rules. Rules that share the same group will use the same configuration, including the evaluation interval which determines how often the rule is evaluated (by default: every 1 minute). You can modify this interval later in the group settings.
-
- The evaluation interval is different from the pending period set in step 2. The evaluation interval controls how often the rule is checked, while the pending period defines how long the condition must be continuously met before the alert fires.
-
- 4. In the **Configure labels and notifications** section, click **+ Add labels**. A pop-up appears.
- 5. Enter a label and value name and click **Save**. You can skip this step if you want your alerts to be sent to the contacts you may already have created in the Scaleway console.
-
- In Grafana, notifications are sent by matching alerts to notification policies based on labels. This step is about deciding how alerts will reach you or your team (Slack, email, etc.) based on labels you attach to them. Then, you can set up rules that define who receives notifications in the **Notification policies** page.
- For example, if an alert has the label `team = object-storage-team`, you are telling Grafana to send a notification to the Object Storage team when your alert is firing. Find out how to [configure notification policies in Grafana](/tutorials/configure-slack-alerting/#configuring-a-notification-policy).
-
- 6. Click **Save rule and exit** in the top right corner of your screen to save and activate your alert.
- 7. Optionally, check that your configuration works by temporarily lowering the threshold. This will trigger the alert and notify your [contacts](/cockpit/concepts/#contact-points).
-
-
- The steps below explain how to create the metric selection and configure an alert condition that triggers when **no new Pod activity occurs, which could mean your cluster is stuck or unresponsive.**
-
- 1. In the query field next to the **Loading metrics... >** button, paste the following query. Make sure that the values for the labels you have selected (for example, `resource_name`) correspond to those of the target resource.
- ```bash
- rate(kubernetes_cluster_k8s_shoot_nodes_pods_usage_total{resource_name="k8s-par-quizzical-chatelet"}[15m]) == 0
- ```
-
- The `kubernetes_cluster_k8s_shoot_nodes_pods_usage_total` metric represents the total number of Pods currently running across all nodes in your Kubernetes cluster. It is helpful to monitor current Pod consumption per node pool or cluster, and help track resource saturation or unexpected workload spikes.
-
- 2. In the **Set alert evaluation behavior** field, specify how long the condition must be true before triggering the alert.
- 3. Enter a name in the **Namespace** and **Group** fields to categorize and manage your alert rules. Rules that share the same group will use the same configuration, including the evaluation interval which determines how often the rule is evaluated (by default: every 1 minute). You can modify this interval later in the group settings.
-
- The evaluation interval is different from the pending period set in step 2. The evaluation interval controls how often the rule is checked, while the pending period defines how long the condition must be continuously met before the alert fires.
-
- 4. In the **Configure labels and notifications** section, click **+ Add labels**. A pop-up appears.
- 5. Enter a label and value name and click **Save**. You can skip this step if you want your alerts to be sent to the contacts you may already have created in the Scaleway console.
-
- In Grafana, notifications are sent by matching alerts to notification policies based on labels. This step is about deciding how alerts will reach you or your team (Slack, email, etc.) based on labels you attach to them. Then, you can set up rules that define who receives notifications in the **Notification policies** page.
- For example, if an alert has the label `team = kubernetes-team`, you are telling Grafana to send a notification to the Kubernetes team when your alert is firing. Find out how to [configure notification policies in Grafana](/tutorials/configure-slack-alerting/#configuring-a-notification-policy).
-
- 6. Click **Save rule and exit** in the top right corner of your screen to save and activate your alert.
- 7. Optionally, check that your configuration works by temporarily lowering the threshold. This will trigger the alert and notify your [contacts](/cockpit/concepts/#contact-points).
-
-
- The steps below explain how to create the metric selection and configure an alert condition that triggers when **no logs are stored for 5 minutes, which may indicate your app or system is broken**.
-
- 1. In the query field next to the **Loading metrics... >** button, paste the following query. Make sure that the values for the labels you have selected (for example, `resource_name`) correspond to those of the target resource.
- ```bash
- observability_cockpit_loki_chunk_store_stored_chunks_total:increase5m{resource_id="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"} == 0
- ```
-
- The `observability_cockpit_loki_chunk_store_stored_chunks_total:increase5m` metric represents the number of chunks (log storage blocks) that have been written over the last 5 minutes for a specific resource. It is useful to monitor log ingestion activity and detect issues such as a crash of the logging agent, or your application not producing logs.
-
- 2. In the **Set alert evaluation behavior** field, specify how long the condition must be true before triggering the alert.
- 3. Enter a name in the **Namespace** and **Group** fields to categorize and manage your alert rules. Rules that share the same group will use the same configuration, including the evaluation interval which determines how often the rule is evaluated (by default: every 1 minute). You can modify this interval later in the group settings.
-
- The evaluation interval is different from the pending period set in step 2. The evaluation interval controls how often the rule is checked, while the pending period defines how long the condition must be continuously met before the alert fires.
-
- 4. In the **Configure labels and notifications** section, click **+ Add labels**. A pop-up appears.
- 5. Enter a label and value name and click **Save**. You can skip this step if you want your alerts to be sent to the contacts you may already have created in the Scaleway console.
-
- In Grafana, notifications are sent by matching alerts to notification policies based on labels. This step is about deciding how alerts will reach you or your team (Slack, email, etc.) based on labels you attach to them. Then, you can set up rules that define who receives notifications in the **Notification policies** page.
- For example, if an alert has the label `team = cockpit-team`, you are telling Grafana to send a notification to the Cockpit team when your alert is firing. Find out how to [configure notification policies in Grafana](/tutorials/configure-slack-alerting/#configuring-a-notification-policy).
-
- 6. Click **Save rule and exit** in the top right corner of your screen to save and activate your alert.
- 7. Optionally, check that your configuration works by temporarily lowering the threshold. This will trigger the alert and notify your [contacts](/cockpit/concepts/#contact-points).
-
+
+
+ The steps below explain how to create the metric selection and configure an alert condition that triggers when **your Instance consumes more than 10% of a single CPU core over the past 5 minutes.**
+
+ 1. In the query field next to the **Loading metrics... >** button, paste the following query. Make sure that the values for the labels you have selected (for example, `resource_id`) correspond to those of the target resource.
+ ```bash
+ rate(instance_server_cpu_seconds_total{resource_id="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"}[5m]) > 0.1
+ ```
+
+ The `instance_server_cpu_seconds_total` metric records how many seconds of CPU time your Instance has used in total. It is helpful to detect unexpected CPU usage spikes.
+
+ 2. In the **Set alert evaluation behavior** section, specify for how long the condition must be met before triggering the alert.
+ 3. Enter a name in the **Namespace** and **Group** fields to categorize and manage your alert rules. Rules that share the same group will use the same configuration, including the evaluation interval which determines how often the rule is evaluated (by default: every 1 minute). You can modify this interval later in the group settings.
+
+ The evaluation interval is different from the pending period set in step 2. The evaluation interval controls how often the rule is checked, while the pending period defines how long the condition must be continuously met before the alert fires.
+
+ 4. In the **Configure labels and notifications** section, click **+ Add labels**. A pop-up appears.
+ 5. Enter a label and value name and click **Save**. You can skip this step if you want your alerts to be sent to the contacts you may already have created in the Scaleway console.
+
+ In Grafana, notifications are sent by matching alerts to notification policies based on labels. This step determines how alerts will reach you or your team (Slack, email, etc.) based on labels you attach to them. Then, you can set up rules that define who receives notifications in the **Notification policies** page.
+ For example, if your alert named `alert-for-high-cpu-usage` has the label `team = instances-team`, you are telling Grafana to send a notification to the Instances team when the alert fires. Find out how to [configure notification policies in Grafana](/tutorials/configure-slack-alerting/#configuring-a-notification-policy).
+
+ 6. Click **Save rule and exit** in the top right corner of your screen to save and activate your alert.
+ 7. Optionally, check that your configuration works by temporarily lowering the threshold. This will trigger the alert and notify your [contacts](/cockpit/concepts/#contact-points).
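+
+ The threshold and time window above are only examples. As an illustrative variant (not a Scaleway-provided default), the same metric and labels can express a stricter condition:
+ ```bash
+ # Illustrative variant of the query from step 1: stricter threshold over a longer window.
+ # Fires when the Instance averages more than 80% of a single CPU core over the past 10 minutes.
+ rate(instance_server_cpu_seconds_total{resource_id="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"}[10m]) > 0.8
+ ```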
+
+
+ The steps below explain how to create the metric selection and configure an alert condition that triggers when **the object count in your bucket exceeds a specific threshold**.
+
+ 1. In the query field next to the **Loading metrics... >** button, paste the following query. Make sure that the values for the labels you have selected (for example, `resource_id` and `region`) correspond to those of the target resource.
+ ```bash
+ object_storage_bucket_objects_total{region="fr-par", resource_id="my-bucket"} > 2000
+ ```
+
+ The `object_storage_bucket_objects_total` metric indicates the total number of objects stored in a given Object Storage bucket. It is useful to monitor and control object growth in your bucket and avoid hitting thresholds.
+
+
+ 2. In the **Set alert evaluation behavior** section, specify for how long the condition must be met before triggering the alert.
+ 3. Enter a name in the **Namespace** and **Group** fields to categorize and manage your alert rules. Rules that share the same group will use the same configuration, including the evaluation interval which determines how often the rule is evaluated (by default: every 1 minute). You can modify this interval later in the group settings.
+
+
+ The evaluation interval is different from the pending period set in step 2. The evaluation interval controls how often the rule is checked, while the pending period defines how long the condition must be continuously met before the alert fires.
+
+
+ 4. In the **Configure labels and notifications** section, click **+ Add labels**. A pop-up appears.
+ 5. Enter a label and value name and click **Save**. You can skip this step if you want your alerts to be sent to the contacts you may already have created in the Scaleway console.
+
+
+ In Grafana, notifications are sent by matching alerts to notification policies based on labels. This step determines how alerts will reach you or your team (Slack, email, etc.) based on labels you attach to them. Then, you can set up rules that define who receives notifications in the **Notification policies** page.
+ For example, if an alert has the label `team = object-storage-team`, you are telling Grafana to send a notification to the Object Storage team when your alert fires. Find out how to [configure notification policies in Grafana](/tutorials/configure-slack-alerting/#configuring-a-notification-policy).
+
+
+ 6. Click **Save rule and exit** in the top right corner of your screen to save and activate your alert.
+ 7. Optionally, check that your configuration works by temporarily lowering the threshold. This will trigger the alert and notify your [contacts](/cockpit/concepts/#contact-points).
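+
+ If you want to monitor every bucket in a region rather than a single one, you can drop the `resource_id` label. This is an illustrative sketch and assumes the metric exposes one series per bucket, as the `resource_id` label used in step 1 suggests:
+ ```bash
+ # Illustrative variant: fires once per bucket in fr-par whose object count exceeds 2000.
+ object_storage_bucket_objects_total{region="fr-par"} > 2000
+ ```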
+
+
+ The steps below explain how to create the metric selection and configure an alert condition that triggers when **no new Pod activity occurs, which could mean your cluster is stuck or unresponsive.**
+
+ 1. In the query field next to the **Loading metrics... >** button, paste the following query. Make sure that the values for the labels you have selected (for example, `resource_name`) correspond to those of the target resource.
+ ```bash
+ rate(kubernetes_cluster_k8s_shoot_nodes_pods_usage_total{resource_name="k8s-par-quizzical-chatelet"}[15m]) == 0
+ ```
+
+ The `kubernetes_cluster_k8s_shoot_nodes_pods_usage_total` metric represents the total number of Pods currently running across all nodes in your Kubernetes cluster. It is helpful for monitoring current Pod consumption per node pool or cluster, and for tracking resource saturation or unexpected workload spikes.
+
+ 2. In the **Set alert evaluation behavior** field, specify for how long the condition must be true before triggering the alert.
+ 3. Enter a name in the **Namespace** and **Group** fields to categorize and manage your alert rules. Rules that share the same group will use the same configuration, including the evaluation interval which determines how often the rule is evaluated (by default: every 1 minute). You can modify this interval later in the group settings.
+
+ The evaluation interval is different from the pending period set in step 2. The evaluation interval controls how often the rule is checked, while the pending period defines how long the condition must be continuously met before the alert fires.
+
+ 4. In the **Configure labels and notifications** section, click **+ Add labels**. A pop-up appears.
+ 5. Enter a label and value name and click **Save**. You can skip this step if you want your alerts to be sent to the contacts you may already have created in the Scaleway console.
+
+ In Grafana, notifications are sent by matching alerts to notification policies based on labels. This step determines how alerts will reach you or your team (Slack, email, etc.) based on labels you attach to them. Then, you can set up rules that define who receives notifications in the **Notification policies** page.
+ For example, if an alert has the label `team = kubernetes-team`, you are telling Grafana to send a notification to the Kubernetes team when your alert fires. Find out how to [configure notification policies in Grafana](/tutorials/configure-slack-alerting/#configuring-a-notification-policy).
+
+ 6. Click **Save rule and exit** in the top right corner of your screen to save and activate your alert.
+ 7. Optionally, check that your configuration works by temporarily lowering the threshold. This will trigger the alert and notify your [contacts](/cockpit/concepts/#contact-points).
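+
+ The query above detects inactivity. As an illustrative variant (the threshold of 100 Pods is arbitrary), the same metric can also watch for saturation. Depending on how the series are labelled, you may need `sum()` to aggregate them per cluster:
+ ```bash
+ # Illustrative variant: fires when the cluster runs more than 100 Pods in total.
+ sum(kubernetes_cluster_k8s_shoot_nodes_pods_usage_total{resource_name="k8s-par-quizzical-chatelet"}) > 100
+ ```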
+
+
+ The steps below explain how to create the metric selection and configure an alert condition that triggers when **no logs are stored for 5 minutes, which may indicate your app or system is broken**.
+
+ 1. In the query field next to the **Loading metrics... >** button, paste the following query. Make sure that the values for the labels you have selected (for example, `resource_name`) correspond to those of the target resource.
+ ```bash
+ observability_cockpit_loki_chunk_store_stored_chunks_total:increase5m{resource_id="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"} == 0
+ ```
+
+ The `observability_cockpit_loki_chunk_store_stored_chunks_total:increase5m` metric represents the number of chunks (log storage blocks) that have been written over the last 5 minutes for a specific resource. It is useful to monitor log ingestion activity and detect issues such as a crash of the logging agent, or your application not producing logs.
+
+ 2. In the **Set alert evaluation behavior** field, specify for how long the condition must be true before triggering the alert.
+ 3. Enter a name in the **Namespace** and **Group** fields to categorize and manage your alert rules. Rules that share the same group will use the same configuration, including the evaluation interval which determines how often the rule is evaluated (by default: every 1 minute). You can modify this interval later in the group settings.
+
+ The evaluation interval is different from the pending period set in step 2. The evaluation interval controls how often the rule is checked, while the pending period defines how long the condition must be continuously met before the alert fires.
+
+ 4. In the **Configure labels and notifications** section, click **+ Add labels**. A pop-up appears.
+ 5. Enter a label and value name and click **Save**. You can skip this step if you want your alerts to be sent to the contacts you may already have created in the Scaleway console.
+
+ In Grafana, notifications are sent by matching alerts to notification policies based on labels. This step determines how alerts will reach you or your team (Slack, email, etc.) based on labels you attach to them. Then, you can set up rules that define who receives notifications in the **Notification policies** page.
+ For example, if an alert has the label `team = cockpit-team`, you are telling Grafana to send a notification to the Cockpit team when your alert fires. Find out how to [configure notification policies in Grafana](/tutorials/configure-slack-alerting/#configuring-a-notification-policy).
+
+ 6. Click **Save rule and exit** in the top right corner of your screen to save and activate your alert.
+ 7. Optionally, check that your configuration works by temporarily lowering the threshold. This will trigger the alert and notify your [contacts](/cockpit/concepts/#contact-points).
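+
+ A complete stop in log ingestion is not the only useful signal. As an illustrative variant (the threshold of 5 chunks is arbitrary), the same recording rule can warn about unusually low log activity:
+ ```bash
+ # Illustrative variant: fires when fewer than 5 chunks were written over the last 5 minutes.
+ observability_cockpit_loki_chunk_store_stored_chunks_total:increase5m{resource_id="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"} < 5
+ ```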
+
**You can configure up to a maximum of 10 alerts** for the `Scaleway Metrics` data source.
diff --git a/pages/cockpit/how-to/enable-alert-manager.mdx b/pages/cockpit/how-to/enable-alert-manager.mdx
index efbb419d64..3ff96ffd77 100644
--- a/pages/cockpit/how-to/enable-alert-manager.mdx
+++ b/pages/cockpit/how-to/enable-alert-manager.mdx
@@ -1,18 +1,18 @@
---
-title: How to enable the alert manager
-description: Learn how to enable Scaleway's regionalized alert manager and add contacts to configure alert notifications for your resources.
+title: How to configure the alert manager
+description: Learn how to configure the Scaleway regionalized alert manager and add contacts to receive alert notifications for your resources.
categories:
- observability
dates:
- validation: 2025-07-29
+ validation: 2025-10-22
posted: 2024-04-05
---
import Requirements from '@macros/iam/requirements.mdx'
-This page shows you how to enable Scaleway's regionalized alert manager, and add notification contacts that will be notified when your alerts are triggered, using the [Scaleway console](https://console.scaleway.com/).
+This page shows you how to enable Scaleway's regionalized alert manager and how to add and manage the contacts that are notified when your alerts are triggered, using the [Scaleway console](https://console.scaleway.com/).
-You can [add](/cockpit/how-to/add-contact-points/) or manage contacts at any time to ensure the right people are notified when alerts fire.
+You can add or manage contacts at any time to ensure the right people are notified when alerts fire.
@@ -40,6 +40,47 @@ Enabling Scaleway's regionalized alert manager allows you configure preconfigure
6. Enter an email address, then click **+ Add email** and **Add contacts**. Your email address displays in the **Contacts** section, and by default, the **Resolved notifications** box is ticked. This means that you will receive notifications for resolved alerts.
7. Optionally, click **Skip for now** if you do not want to add contacts yet.
- You are prompted to create contacts when enabling the alert manager for the first time, or when re-enabling it after disabling. However, you can also [add](/cockpit/how-to/add-contact-points/) or manage them independently from the alert manager configuration at any time.
+ You are prompted to create contacts when enabling the alert manager for the first time, or when re-enabling it after disabling. However, you can also add or manage them independently from the alert manager configuration at any time.
+## How to add contacts
+
+
+1. Click **Cockpit** in the **Monitoring** section of the [console](https://console.scaleway.com/) side menu. The **Cockpit** overview page displays.
+2. Click the **Alerts** tab.
+3. Click the **Region** drop-down and select the desired region.
+
+ Make sure that you select the same region as the [data sources](/cockpit/concepts/#data-sources) you want your contacts to be alerted for.
+
+4. Click **Add email** in the **Contacts** section. A pop-up displays.
+5. Enter an email address, then click **+ Add email**. Your email address displays, and by default, the **Resolved notifications** checkbox is ticked. This means that you will receive notifications for resolved alerts.
+6. Optionally, enter another email and click **+ Add email** to add another contact.
+7. Click **Add contacts** to confirm. The email addresses appear in the list of your contacts.
+
+Refer to the [section below](#how-to-manage-contacts) to find out how to manage your contacts.
+
+## How to manage contacts
+
+1. Click **Cockpit** in the **Monitoring** section of the [console](https://console.scaleway.com/) side menu. The **Cockpit** overview page displays.
+2. Click the **Alerts** tab.
+3. Click the **Region** drop-down and select the desired region.
+
+ Make sure that you select the same region as the [data sources](/cockpit/concepts/#data-sources) you want your contacts to be alerted for.
+
+4. Scroll to the **Contacts** section and:
+ - click **Send test alert** to ensure that your alerts are sent to your contacts. You **must have [activated preconfigured alerts](/cockpit/how-to/activate-managed-alerts/)** beforehand.
+ - clear the checkbox under **Resolved notifications** to **stop receiving resolved notifications**.
+ - click the trash icon next to the contact you wish to **delete**, then click **Delete contact** to confirm.
+
+ The contact you delete will no longer receive alerts. If this is your only configured contact, alert notifications will stop until you add a new contact.
+
+
+## Advanced configuration
+
+
+ Make sure that you use the Scaleway alert manager if you follow the Grafana documentation linked below.
+
+
+- Find out how to configure templates and customize your alert notification messages in the [dedicated Grafana documentation](https://grafana.com/docs/grafana/latest/alerting/fundamentals/templates/)
+
+- Find out how to configure notification policies in the [dedicated Grafana documentation](https://grafana.com/docs/grafana/latest/alerting/configure-notifications/)
\ No newline at end of file
diff --git a/pages/cockpit/menu.ts b/pages/cockpit/menu.ts
index dfba912553..004d7c4087 100644
--- a/pages/cockpit/menu.ts
+++ b/pages/cockpit/menu.ts
@@ -19,32 +19,28 @@ export const cockpitMenu = {
{
items: [
{
- label: 'Retrieve your Grafana credentials',
- slug: 'retrieve-grafana-credentials',
- },
- {
- label: 'Create a token',
- slug: 'create-token',
+ label: 'Configure the alert manager',
+ slug: 'enable-alert-manager',
},
{
- label: 'Enable the alert manager',
- slug: 'enable-alert-manager',
+ label: 'Activate Scaleway preconfigured alerts',
+ slug: 'activate-managed-alerts',
},
{
- label: 'Create and push traces',
- slug: 'activate-push-traces',
+ label: 'Configure alerts for Scaleway resources',
+ slug: 'configure-alerts-for-scw-resources',
},
{
- label: 'Activate Scaleway preconfigured alerts',
- slug: 'activate-managed-alerts',
+ label: 'Retrieve your Grafana credentials',
+ slug: 'retrieve-grafana-credentials',
},
{
- label: 'Manage contacts',
- slug: 'add-contact-points',
+ label: 'Create a token',
+ slug: 'create-token',
},
{
- label: 'Configure alerts for Scaleway resources',
- slug: 'configure-alerts-for-scw-resources',
+ label: 'Create and push traces',
+ slug: 'activate-push-traces',
},
{
label: 'Access Grafana and preconfigured dashboards',
diff --git a/pages/cockpit/reference-content/cockpit-limitations.mdx b/pages/cockpit/reference-content/cockpit-limitations.mdx
index 3cbcd48fa2..1b0f89a93f 100644
--- a/pages/cockpit/reference-content/cockpit-limitations.mdx
+++ b/pages/cockpit/reference-content/cockpit-limitations.mdx
@@ -3,7 +3,7 @@ title: Cockpit capabilities and limits
description: Discover the capabilities and limits of Cockpit, including retention periods, Loki and Mimir limits, and product integrations for comprehensive infrastructure monitoring and management efficiency.
tags: observability cockpit retention metrics logs
dates:
- validation: 2025-08-28
+ validation: 2025-10-22
posted: 2023-09-05
---
@@ -62,45 +62,49 @@ The following table provides information about [Mimir](/cockpit/concepts/#mimir)
The following table provides details about the products that are integrated into Cockpit. This means that you can have metrics and/or logs and/or alerts in your Cockpit for the products mentioned below.
+Some Scaleway resources provide preconfigured alerts to notify you of any abnormal behavior. Refer to the **Alert management** section in the **Alerts** tab of the [Scaleway console](https://console.scaleway.com/cockpit/) to find out which Scaleway resources have integrated alerts.
+
**Sending metrics and logs using an external path is a billable feature**. As such, any additional data that you may push yourself will be billed, even if you send data from Scaleway products that are **integrated into Cockpit**. Refer to the [product pricing](https://www.scaleway.com/en/pricing/managed-services/#cockpit) for more information.
-| **Product Name** | **Metrics** | **Logs** | **Alerts** |
-|----------------------------|-----------------|-----------------|-----------------|
-| CPU & GPU Instances | **Integrated*** | Not integrated | Not integrated |
-| Managed Inference | **Integrated*** | **Integrated*** | Not integrated |
-| Generative APIs | **Integrated*** | Not integrated | Not integrated |
-| Elastic Metal | Not integrated | Not integrated | Not integrated |
-| Apple silicon | Not integrated | Not integrated | Not integrated |
-| Kubernetes Kapsule | **Integrated*** | **Integrated*** | **Integrated** |
-| Kubernetes Kosmos | **Integrated*** | **Integrated*** | **Integrated** |
-| Container Registry | Not integrated | Not integrated | Not integrated |
-| Serverless Containers | **Integrated*** | **Integrated*** | Not integrated |
-| Serverless Functions | **Integrated*** | **Integrated*** | Not integrated |
-| Serverless Jobs | **Integrated*** | **Integrated*** | Not integrated |
-| NATS | **Integrated*** | Not integrated | Not integrated |
-| Queues | **Integrated*** | Not integrated | Not integrated |
-| Topics and Events | **Integrated*** | Not integrated | Not integrated |
-| Block Storage | **Integrated*** | Not integrated | Not integrated |
-| Object Storage | **Integrated*** | **Integrated*** | Not integrated |
-| Database RDB PostgreSQL | **Integrated*** | **Integrated*** | **Integrated** |
-| Database RDB MySQL | **Integrated*** | **Integrated*** | **Integrated** |
-| Serverless SQL Database | **Integrated*** | **Integrated*** | **Integrated** |
-| Redis | **Integrated*** | **Integrated*** | **Integrated** |
-| MongoDB | **Integrated*** | Not integrated | Not integrated |
-| Data Lab (Apache Spark) | **Integrated*** | Not integrated | Not integrated |
-| Clickhouse | Planned | Not integrated | Not integrated |
-| Private Networks | **Integrated*** | Not integrated | Not integrated |
-| Public Gateways | **Integrated*** | **Integrated*** | Not integrated |
-| Load Balancers | **Integrated*** | **Integrated*** | Not integrated |
-| Domains and DNS | Not integrated | Not integrated | Not integrated |
-| Edge Services | **Integrated*** | Not integrated | Not integrated |
-| Transactional Email | **Integrated*** | Not integrated | Not integrated |
-| IoT Hub | **Integrated*** | Not integrated | Not integrated |
-| Web Hosting | Not integrated | Not integrated | Not integrated |
-| Cockpit | **Integrated*** | Not integrated | Not integrated |
-| Audit Trail | Not integrated | Not integrated | Not integrated |
-| IAM | Not integrated | Not integrated | Not integrated |
-| Secret Manager | **Integrated*** | Not integrated | Not integrated |
-| Key Manager | Not integrated | Not integrated | Not integrated |
+| **Product Name** | **Metrics** | **Logs** |
+|----------------------------|-----------------|-----------------|
+| CPU & GPU Instances | **Integrated*** | Not integrated |
+| Managed Inference | **Integrated*** | **Integrated*** |
+| Generative APIs | **Integrated*** | Not integrated |
+| Elastic Metal | Not integrated | Not integrated |
+| Apple silicon | Not integrated | Not integrated |
+| Kubernetes Kapsule | **Integrated*** | **Integrated*** |
+| Kubernetes Kosmos | **Integrated*** | **Integrated*** |
+| Container Registry | Not integrated | Not integrated |
+| Serverless Containers | **Integrated*** | **Integrated*** |
+| Serverless Functions | **Integrated*** | **Integrated*** |
+| Serverless Jobs | **Integrated*** | **Integrated*** |
+| NATS | **Integrated*** | Not integrated |
+| Queues | **Integrated*** | Not integrated |
+| Topics and Events | **Integrated*** | Not integrated |
+| Block Storage | **Integrated*** | Not integrated |
+| Object Storage | **Integrated*** | **Integrated*** |
+| Database RDB PostgreSQL | **Integrated*** | **Integrated*** |
+| Database RDB MySQL | **Integrated*** | **Integrated*** |
+| Serverless SQL Database | **Integrated*** | **Integrated*** |
+| Redis | **Integrated*** | **Integrated*** |
+| MongoDB | **Integrated*** | Not integrated |
+| Data Lab (Apache Spark) | **Integrated*** | Not integrated |
+| Clickhouse | Planned | Not integrated |
+| Private Networks | **Integrated*** | Not integrated |
+| Public Gateways | **Integrated*** | **Integrated*** |
+| Load Balancers | **Integrated*** | **Integrated*** |
+| Domains and DNS | Not integrated | Not integrated |
+| Edge Services | **Integrated*** | Not integrated |
+| Transactional Email | **Integrated*** | Not integrated |
+| IoT Hub | **Integrated*** | Not integrated |
+| Web Hosting | Not integrated | Not integrated |
+| Cockpit | **Integrated*** | Not integrated |
+| Audit Trail | Not integrated | Not integrated |
+| IAM | Not integrated | Not integrated |
+| Secret Manager | **Integrated*** | Not integrated |
+| Key Manager | Not integrated | Not integrated |
+
+
*: Including data and dashboards
diff --git a/pages/serverless-containers/how-to/configure-alerts-containers.mdx b/pages/serverless-containers/how-to/configure-alerts-containers.mdx
index 313456a8a0..359f00b3a5 100644
--- a/pages/serverless-containers/how-to/configure-alerts-containers.mdx
+++ b/pages/serverless-containers/how-to/configure-alerts-containers.mdx
@@ -18,7 +18,7 @@ This page shows you how to configure alerts for Scaleway Serverless Containers u
- Scaleway resources you can monitor
- [Created Grafana credentials](/cockpit/how-to/retrieve-grafana-credentials/) with the **Editor** role
- [Enabled](/cockpit/how-to/enable-alert-manager/) the alert manager
- - [Created](/cockpit/how-to/add-contact-points/) at least one contact point
+ - [Added](/cockpit/how-to/enable-alert-manager/#how-to-add-contacts) at least one contact in the Scaleway console or contact points in Grafana
- Selected the **Scaleway Alerting** alert manager in Grafana
1. [Log in to Grafana](/cockpit/how-to/access-grafana-and-managed-dashboards/) using your credentials.
diff --git a/pages/serverless-functions/how-to/configure-alerts-functions.mdx b/pages/serverless-functions/how-to/configure-alerts-functions.mdx
index dad1248804..91b1ea2402 100644
--- a/pages/serverless-functions/how-to/configure-alerts-functions.mdx
+++ b/pages/serverless-functions/how-to/configure-alerts-functions.mdx
@@ -18,7 +18,7 @@ This page shows you how to configure alerts for Scaleway Serverless Functions us
- Scaleway resources you can monitor
- [Created Grafana credentials](/cockpit/how-to/retrieve-grafana-credentials/) with the **Editor** role
- [Enabled](/cockpit/how-to/enable-alert-manager/) the alert manager
- - [Created](/cockpit/how-to/add-contact-points/) at least one contact point
+ - [Added](/cockpit/how-to/enable-alert-manager/#how-to-add-contacts) at least one contact in the Scaleway console or contact points in Grafana
- Selected the **Scaleway Alerting** alert manager in Grafana
1. [Log in to Grafana](/cockpit/how-to/access-grafana-and-managed-dashboards/) using your credentials.
diff --git a/pages/serverless-jobs/how-to/configure-alerts-jobs.mdx b/pages/serverless-jobs/how-to/configure-alerts-jobs.mdx
index 19cb7a750a..30945570bf 100644
--- a/pages/serverless-jobs/how-to/configure-alerts-jobs.mdx
+++ b/pages/serverless-jobs/how-to/configure-alerts-jobs.mdx
@@ -19,7 +19,7 @@ This page shows you how to configure alerts for Scaleway Serverless Jobs using S
- Scaleway resources you can monitor
- [Created Grafana credentials](/cockpit/how-to/retrieve-grafana-credentials/) with the **Editor** role
- [Enabled](/cockpit/how-to/enable-alert-manager/) the alert manager
- - [Added](/cockpit/how-to/add-contact-points/) at least one contact in the Scaleway console or contact points in Grafana
+ - [Added](/cockpit/how-to/enable-alert-manager/#how-to-add-contacts) at least one contact in the Scaleway console or contact points in Grafana
- Selected the **Scaleway Alerting** alert manager in Grafana
1. [Log in to Grafana](/cockpit/how-to/access-grafana-and-managed-dashboards/) using your credentials.