63 changes: 62 additions & 1 deletion docs/product/explore/trace-explorer/index.mdx
@@ -93,11 +93,72 @@ Dive deeper into your data with aggregation capabilities in Trace Explorer.

<Arcade src="https://demo.arcade.software/rgVB85wJiopGNPO3KFJS?embed" />

## Compare Attributes (Beta)

<Alert level="info">

Compare Attribute Breakdowns is currently in beta. To access this feature, enable the Early Adopter flag in your [organization settings](https://sentry.io/orgredirect/settings/:orgslug/).

</Alert>

When investigating performance issues, you often need to answer: *"What's different about this spike compared to normal behavior?"* Sentry's compare attributes feature lets you analyze attribute distributions between a selected time range and your baseline data without writing additional queries.

This helps you quickly identify whether a latency spike, error surge, or other anomaly is caused by a specific release, geographic region, browser, API endpoint, or any other attribute you've captured in your spans.

### How to Use Attribute Comparison

- On the Trace Explorer page, drag across a chart's timeline to select the time range you want to compare. An option to `Compare Attribute Breakdowns` will appear.

- When you click `Compare Attribute Breakdowns`, the attribute comparison charts below load with both the baseline and the selected time range. Purple bars on the left show values from the selected range; gray bars on the right show baseline values. **Note:** Clicking anywhere else on the page while in *compare attributes* mode stops the comparison and resets to baseline.

![Attribute Comparison Screenshot =800x](./img/attribute_comparison.png)

#### Reading the Bar Charts

Each chart displays two percentages, like `99.8% | 84.6%`:

| Percentage | Meaning |
|-------|---------|
| First percentage (Selection) | What % of spans in your **selected time range** have this attribute |
| Second percentage (Baseline) | What % of spans in the **baseline** have this attribute |

A large difference in these percentages can indicate that the attribute is more (or less) present during the anomaly.

- **Taller bars** = higher frequency of that attribute value within its cohort
- If the distributions differ significantly, that attribute may explain the anomaly
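The two percentages can be thought of as simple coverage ratios over each cohort. Here is a minimal Python sketch of that idea; the span dicts and attribute names below are invented for illustration and are not Sentry's actual data model:

```python
def attribute_coverage(spans, attribute):
    """Fraction of spans in a cohort that carry `attribute` at all."""
    if not spans:
        return 0.0
    return sum(1 for span in spans if attribute in span) / len(spans)

# Spans inside the drag-selected time range (toy data)
selection = [
    {"release": "2.1.0", "browser.name": "Chrome"},
    {"release": "2.1.0"},
    {"release": "2.1.0", "browser.name": "Firefox"},
    {"browser.name": "Chrome"},
]

# Spans outside the selection — the baseline (toy data)
baseline = [
    {"release": "2.0.9", "browser.name": "Chrome"},
    {"release": "2.0.9"},
    {"browser.name": "Safari"},
    {"release": "2.0.9", "browser.name": "Chrome"},
    {"release": "2.0.8"},
]

sel_pct = attribute_coverage(selection, "release") * 100
base_pct = attribute_coverage(baseline, "release") * 100
print(f"{sel_pct:.1f}% | {base_pct:.1f}%")  # → 75.0% | 80.0%
```

Here the two numbers are close, so `release` coverage alone would not explain an anomaly in this toy data.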

Charts are ordered by how often the attribute appears in the selection relative to the baseline, with the most frequently occurring attributes shown first.

### Example: Debugging a Latency Spike

Suppose your p95 duration chart shows a spike from 3s to 12s. To investigate:

1. **Drag-select the spike** on the p95 chart
2. **Click** "Compare Attribute Breakdowns" in the menu that appears
3. **Review the breakdowns** to find attributes with different distributions:

| Attribute | What to Look For |
|-----------|-----------------|
| `release` | Is a new release version overrepresented in the spike? |
| `span.domain` | Is a specific external domain causing slowdowns? |
| `user.geo.country_code` | Are users in certain regions more affected? |
| `browser.name` | Is the issue browser-specific? |
| `http.status_code` | Are there more errors (5xx) in the spike? |
| `effectiveConnectionType` | Are slow network connections involved? |

If, for example, you see that 80% of spans in your selection come from a specific `release` that only accounts for 20% of the baseline, you've likely found the culprit.
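The reasoning in this example can be sketched as a small script that ranks attribute values by how much their share shifts between the two cohorts. This is a hypothetical reconstruction of the comparison, not Sentry's implementation, and the release numbers are made up:

```python
from collections import Counter

def value_distribution(spans, attribute):
    """Share of spans holding each value of `attribute` within a cohort."""
    values = [span[attribute] for span in spans if attribute in span]
    total = len(values) or 1
    return {value: count / total for value, count in Counter(values).items()}

def biggest_shift(selection, baseline, attribute):
    """Attribute value whose share changed most between selection and baseline."""
    sel = value_distribution(selection, attribute)
    base = value_distribution(baseline, attribute)
    return max(
        set(sel) | set(base),
        key=lambda value: abs(sel.get(value, 0.0) - base.get(value, 0.0)),
    )

# During the spike, release 2.1.0 dominates (80%); in the baseline it is rare (20%).
selection = [{"release": "2.1.0"}] * 8 + [{"release": "2.0.9"}] + [{"release": "2.0.8"}]
baseline = [{"release": "2.1.0"}] * 2 + [{"release": "2.0.9"}] * 4 + [{"release": "2.0.8"}] * 4

print(biggest_shift(selection, baseline, "release"))  # → 2.1.0
```

The value with the largest absolute share difference is the strongest lead, which matches how you would read the purple-versus-gray bars by eye.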

### Best Practices

- **Look for the biggest differences**: Attributes where the selection and baseline distributions differ most are your best leads.
- **Check coverage percentages**: Low coverage (e.g., `4.7% | 0.5%`) means the attribute isn't populated for most spans — take findings with caution.
- **Use Group By together**: Combine attribute comparison with the Group By feature in Visualize to see trends broken down by your suspect attribute over time.
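The first two practices can be combined into a single ranking step: keep only attributes with enough coverage, then sort by the size of the gap. A sketch of that triage, where the threshold and the numbers are arbitrary choices for illustration:

```python
def promising_leads(breakdowns, min_coverage=0.05):
    """Rank (attribute, selection_coverage, baseline_coverage) rows by the
    size of the coverage gap, dropping sparsely populated attributes."""
    well_populated = [
        row for row in breakdowns if max(row[1], row[2]) >= min_coverage
    ]
    return sorted(well_populated, key=lambda row: abs(row[1] - row[2]), reverse=True)

breakdowns = [
    ("release", 0.80, 0.20),                    # big gap, well populated
    ("effectiveConnectionType", 0.047, 0.005),  # gap, but barely populated
    ("browser.name", 0.55, 0.52),               # well populated, tiny gap
]

for attribute, sel, base in promising_leads(breakdowns):
    print(f"{attribute}: {sel:.0%} | {base:.0%}")
```

In this toy data, `release` surfaces first and the sparsely populated `effectiveConnectionType` is filtered out, mirroring the caution about low coverage above.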

## Create Alerts and Dashboard Widgets From Queries or Compare Queries

You can create Alerts and Dashboard widgets from your queries by clicking the "Save As" button:

![Trace Explorer Screenshot =800x](./img/trace_explorer_save.png)

You can also run side-by-side comparisons of different queries to analyze changes or differences in span data.

10 changes: 2 additions & 8 deletions docs/product/sentry-basics/performance-monitoring.mdx
@@ -44,15 +44,9 @@ Alongside typical [error issues](/product/issues/issue-details/error-issues/), S

### 3: Trace Explorer: Find root causes & common patterns for performance issues

While Performance Issues and Insights give you a high-level view of performance data, the [Trace Explorer](/product/explore/trace-explorer/) lets you slice and dice all your performance data by any attribute. This is an extremely powerful tool for finding root causes and common patterns for performance regressions or potential optimizations. For instance, you can find the p95 duration of all spans on the `GET /users/:id` endpoint, or find all pages slowing down because of a particular middleware routing.

Trace Explorer also lets you quickly calculate metrics based on your span data, like p50, p95, and p99 durations, and then group them by any attribute. This lets you quickly see whether specific endpoints, versions, user groups, etc. are performing poorly. You can also compare attributes from a zoomed-in time span against your app's baseline data to find the root cause more efficiently. For a deeper dive into how you can use Trace Explorer and Span Metrics, take a look at our most recent [blog post](https://blog.sentry.io/find-and-fix-performance-bottlenecks-with-sentrys-trace-explorer/).


<Arcade src="https://demo.arcade.software/E04YSJpq1w8bpk18Q5Kp" /> <br />