
Conversation

@brunopgalvao
Contributor

📝 Description

Resolves #1085

🔍 Review Preference

Choose one:

  • ✅ I have time to handle formatting/style feedback myself
  • ⚡ Docs team handles formatting (check "Allow edits from maintainers")

🤖 AI-Ready Docs

If content changed, regenerate AI files:

  • ✅ I ran python3 scripts/generate_llms.py
  • ⚡ Docs team will regenerate (check "Allow edits from maintainers")

✅ Checklist

@brunopgalvao brunopgalvao self-assigned this Nov 7, 2025
@brunopgalvao brunopgalvao added the B0 - Needs Review, C1 - Medium, and A0 - New Content labels Nov 7, 2025
@brunopgalvao brunopgalvao marked this pull request as ready for review November 8, 2025 04:17
@brunopgalvao brunopgalvao requested a review from a team as a code owner November 8, 2025 04:17
Collaborator

@nhussein11 nhussein11 left a comment

Interesting read, thank you!

Benchmarking is a critical component of developing efficient and secure blockchain runtimes. In the Polkadot ecosystem, accurately benchmarking your custom pallets ensures that each extrinsic has a precise [weight](/reference/glossary/#weight){target=\_blank}, representing its computational and storage demands. This process is vital for maintaining the blockchain's performance and preventing potential vulnerabilities, such as Denial of Service (DoS) attacks.
Benchmarking is the process of measuring the computational resources (execution time and storage) required by your pallet's extrinsics. Accurate [weight](https://paritytech.github.io/polkadot-sdk/master/frame_support/weights/index.html){target=\_blank} calculations are essential for ensuring your blockchain can process transactions efficiently while protecting against denial-of-service attacks.

This guide continues the pallet development series, building on the [Create a Pallet](/parachains/customize-runtime/pallet-development/create-a-pallet), [Mock Your Runtime](/parachains/customize-runtime/pallet-development/mock-runtime), and [Test Your Pallet](/parachains/customize-runtime/pallet-development/pallet-testing) tutorials. You'll learn how to benchmark the counter pallet extrinsics and integrate the generated weights into your runtime.
Collaborator

These links should have {target=\_blank}

Before you begin, ensure you have:

- Completed the previous pallet development tutorials
- Basic understanding of computational complexity
Collaborator

Maybe add a link for computational complexity in case readers aren't familiar with it. For example:
"basic understanding of computational complexity (check this [page] to understand what it is)"

- Basic understanding of computational complexity
- Familiarity with Rust's testing framework

## Why Benchmark?
Collaborator

I recall that our style guidelines tend to avoid question-style titles whenever possible.

- [`ref_time`](https://paritytech.github.io/polkadot-sdk/master/frame_support/weights/struct.Weight.html#method.ref_time){target=\_blank}: Computational time measured in picoseconds
- [`proof_size`](https://paritytech.github.io/polkadot-sdk/master/frame_support/weights/struct.Weight.html#method.proof_size){target=\_blank}: Storage proof size in bytes
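
For illustration, a weight carrying both components can be constructed and read back like this (the values are arbitrary):

```rust
use frame_support::weights::Weight;

fn main() {
    // Roughly 1.5 ms of execution time (expressed in picoseconds)
    // and 3 KiB of storage proof.
    let weight = Weight::from_parts(1_500_000_000, 3_072);

    assert_eq!(weight.ref_time(), 1_500_000_000);
    assert_eq!(weight.proof_size(), 3_072);
}
```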

## Step 1: Create the Benchmarking Module
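
A minimal sketch of such a module, using the FRAME v2 benchmarking macros, might look like the following; the `increment` extrinsic and `CounterValue` storage item are assumptions carried over from the counter pallet built earlier in this series:

```rust
//! benchmarking.rs: compiled only with the `runtime-benchmarks` feature.
#![cfg(feature = "runtime-benchmarks")]

use super::*;
use frame_benchmarking::v2::*;
use frame_system::RawOrigin;

#[benchmarks]
mod benchmarks {
    use super::*;

    #[benchmark]
    fn increment() {
        let caller: T::AccountId = whitelisted_caller();

        // The call being measured; `_` resolves to the extrinsic with the
        // same name as this benchmark function.
        #[extrinsic_call]
        _(RawOrigin::Signed(caller));

        // Verify the extrinsic had the expected effect
        // (hypothetical storage item for the counter pallet).
        assert_eq!(CounterValue::<T>::get(), 1);
    }

    impl_benchmark_test_suite!(Pallet, crate::mock::new_test_ext(), crate::mock::Test);
}
```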
Collaborator

And I believe we don't use this kind of title with steps; we should use: "## Create the Benchmarking Module"


## Step 3: Add WeightInfo to Config

By making `WeightInfo` an associated type in the `Config` trait, you allow each runtime that uses your pallet to specify which weight implementation to use. Different deployment environments (testnets, production chains, or different hardware configurations) may have different performance characteristics and can use different weight calculations.
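
A sketch of that pattern, with names assumed from the counter pallet series:

```rust
// In the pallet: expose the weight implementation as an associated type.
#[pallet::config]
pub trait Config: frame_system::Config {
    /// Weight functions for this pallet's extrinsics.
    type WeightInfo: crate::weights::WeightInfo;
}

// In a call: charge the benchmarked weight for the extrinsic.
#[pallet::call_index(0)]
#[pallet::weight(T::WeightInfo::increment())]
pub fn increment(origin: OriginFor<T>) -> DispatchResult {
    // ...
    Ok(())
}

// In the runtime: each chain binds the implementation it wants, e.g. the
// weights generated against its own hardware.
impl pallet_custom::Config for Runtime {
    type WeightInfo = pallet_custom::weights::SubstrateWeight<Runtime>;
}
```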
Collaborator

Suggested change
By making `WeightInfo` an associated type in the `Config` trait, you allow each runtime that uses your pallet to specify which weight implementation to use. Different deployment environments (testnets, production chains, or different hardware configurations) may have different performance characteristics and can use different weight calculations.
By making `WeightInfo` an associated type in the `Config` trait, you allow each runtime that uses your pallet to specify which weight implementation to use. Different deployment environments (TestNets, production chains, or different hardware configurations) may have different performance characteristics and can use different weight calculations.


## Step 5: Include the Benchmarking Module

The `#[cfg(feature = "runtime-benchmarks")]` attribute ensures that benchmarking code is only compiled when explicitly needed. This keeps your production runtime lean by excluding benchmarking infrastructure from normal builds, as it's only needed when generating weights.
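
A minimal sketch of that gating in the pallet's `lib.rs`:

```rust
// Compile the benchmarking module only when the feature is enabled,
// e.g. `cargo build --features runtime-benchmarks`.
#[cfg(feature = "runtime-benchmarks")]
mod benchmarking;

// Weights stay available in normal builds so extrinsics can reference them.
pub mod weights;
pub use weights::WeightInfo;
```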
Collaborator

Did you mean to say "This keeps your production runtime clean", or was "lean" the intended word?

### Register Benchmarks

## Benchmarking Process
The `define_benchmarks!` macro creates the infrastructure that allows the benchmarking CLI tool to discover and execute your pallet's benchmarks.
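
A sketch of that registration in the runtime, with pallet and instance names assumed for the counter pallet:

```rust
// runtime/src/lib.rs (or benchmarks.rs), behind the runtime-benchmarks feature.
// `SystemBench` is the usual alias for `frame_system_benchmarking::Pallet`.
#[cfg(feature = "runtime-benchmarks")]
mod benches {
    frame_benchmarking::define_benchmarks!(
        // Pallets already registered by the template.
        [frame_system, SystemBench::<Runtime>]
        [pallet_balances, Balances]
        // The counter pallet from this series (names assumed).
        [pallet_custom, CustomPallet]
    );
}
```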
Collaborator

Suggested change
The `define_benchmarks!` macro creates the infrastructure that allows the benchmarking CLI tool to discover and execute your pallet's benchmarks.
The [`define_benchmarks!`](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/macro.define_benchmarks.html){target=\_blank} macro creates the infrastructure that allows the benchmarking CLI tool to discover and execute your pallet's benchmarks.


Benchmarking on faster hardware than production leads to under-estimated weights - attackers could submit extrinsics that consume more resources than the weights suggest, potentially causing blocks to take longer than expected to produce or even halting the chain. Conversely, benchmarking on slower hardware creates over-estimated weights, resulting in unnecessarily high transaction fees and wasted block capacity.

**Best practices:**
Collaborator

I'd remove best practices

**Note**: The actual numbers will vary based on your hardware and implementation complexity. The [`DbWeight`](https://paritytech.github.io/polkadot-sdk/master/frame_support/weights/struct.RuntimeDbWeight.html){target=\_blank} accounts for database read and write operations.
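
A hedged sketch of how these pieces typically appear in a generated weight function (all numbers are placeholders):

```rust
use frame_support::{
    traits::Get,
    weights::{constants::RocksDbWeight, Weight},
};

/// Placeholder shape of a generated weight for the `increment` call:
/// 8,000,000 picoseconds of ref_time, 1,489 bytes of proof size,
/// plus one storage read and one write priced via the database weights.
fn increment() -> Weight {
    Weight::from_parts(8_000_000, 1_489)
        .saturating_add(RocksDbWeight::get().reads(1))
        .saturating_add(RocksDbWeight::get().writes(1))
}
```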

2. Check your runtime's `lib.rs` file to ensure the `benchmarks` module is imported. The import should look like this:
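
    A hedged sketch of that import (module and path names may differ in your template):

    ```rust
    // runtime/src/lib.rs
    #[cfg(feature = "runtime-benchmarks")]
    mod benchmarks;
    ```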
## Benchmarking Best Practices
Collaborator

I'd remove best practices
