diff --git a/.ai/categories/parachains.md b/.ai/categories/parachains.md index 8dc600b26..8ab22ed04 100644 --- a/.ai/categories/parachains.md +++ b/.ai/categories/parachains.md @@ -970,310 +970,459 @@ For reference, Astar's implementation of [`pallet-contracts`](https://github.com --- -Page Title: Benchmarking FRAME Pallets +Page Title: Benchmark Your Pallet - Source (raw): https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/master/.ai/pages/parachains-customize-runtime-pallet-development-benchmark-pallet.md - Canonical (HTML): https://docs.polkadot.com/parachains/customize-runtime/pallet-development/benchmark-pallet/ -- Summary: Learn how to use FRAME's benchmarking framework to measure extrinsic execution costs and provide accurate weights for on-chain computations. - -# Benchmarking +- Summary: Learn how to benchmark extrinsics in your custom pallet to generate precise weight calculations suitable for production use. ## Introduction -Benchmarking is a critical component of developing efficient and secure blockchain runtimes. In the Polkadot ecosystem, accurately benchmarking your custom pallets ensures that each extrinsic has a precise [weight](/reference/glossary/#weight){target=\_blank}, representing its computational and storage demands. This process is vital for maintaining the blockchain's performance and preventing potential vulnerabilities, such as Denial of Service (DoS) attacks. +Benchmarking is the process of measuring the computational resources (execution time and storage) required by your pallet's extrinsics. Accurate [weight](https://paritytech.github.io/polkadot-sdk/master/frame_support/weights/index.html){target=\_blank} calculations are essential for ensuring your blockchain can process transactions efficiently while protecting against denial-of-service attacks. 
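The introduction above describes weight as a measure of execution time and storage. In FRAME a weight in fact carries two dimensions: `ref_time` (execution time on reference hardware, measured in picoseconds) and `proof_size` (bytes contributed to the state proof). The stand-alone sketch below models that pairing; it is an illustrative approximation only, not the real `frame_support::weights::Weight` type:

```rust
// Illustrative two-dimensional weight model (field names mirror the real
// `frame_support::weights::Weight`, but this is a simplified stand-in).
#[derive(Clone, Copy, Debug, PartialEq)]
struct Weight {
    ref_time: u64,   // execution time on reference hardware
    proof_size: u64, // bytes added to the proof-of-validity
}

impl Weight {
    fn from_parts(ref_time: u64, proof_size: u64) -> Self {
        Weight { ref_time, proof_size }
    }

    // Both dimensions saturate rather than overflow, mirroring
    // `Weight::saturating_add` in FRAME.
    fn saturating_add(self, other: Self) -> Self {
        Weight {
            ref_time: self.ref_time.saturating_add(other.ref_time),
            proof_size: self.proof_size.saturating_add(other.proof_size),
        }
    }
}

fn main() {
    let base = Weight::from_parts(10_000, 0);
    let db_read = Weight::from_parts(25_000, 3_500); // example numbers only
    let total = base.saturating_add(db_read);
    assert_eq!(total, Weight::from_parts(35_000, 3_500));
    println!("{total:?}");
}
```

The two dimensions matter independently: a block can exhaust either its execution-time budget or its proof-size budget, which is why benchmarked weights track both.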
-The Polkadot SDK leverages the [FRAME](/reference/glossary/#frame-framework-for-runtime-aggregation-of-modularized-entities){target=\_blank} benchmarking framework, offering tools to measure and assign weights to extrinsics. These weights help determine the maximum number of transactions or system-level calls processed within a block. This guide covers how to use FRAME's [benchmarking framework](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/v2/index.html){target=\_blank}, from setting up your environment to writing and running benchmarks for your custom pallets. You'll understand how to generate accurate weights by the end, ensuring your runtime remains performant and secure. +This guide demonstrates how to benchmark a pallet and incorporate the resulting weight values. This example uses the custom counter pallet from previous guides in this series, but you can replace it with the code from another pallet if desired. -## The Case for Benchmarking +## Prerequisites -Benchmarking helps validate that the required execution time for different functions is within reasonable boundaries to ensure your blockchain runtime can handle transactions efficiently and securely. By accurately measuring the weight of each extrinsic, you can prevent service interruptions caused by computationally intensive calls that exceed block time limits. Without benchmarking, runtime performance could be vulnerable to DoS attacks, where malicious users exploit functions with unoptimized weights. +Before you begin, ensure you have: -Benchmarking also ensures predictable transaction fees. Weights derived from benchmark tests accurately reflect the resource usage of function calls, allowing fair fee calculation. This approach discourages abuse while maintaining network reliability. +- A pallet to benchmark. 
If you followed the pallet development tutorials, you can use the counter pallet from the [Create a Pallet](/parachains/customize-runtime/pallet-development/create-a-pallet/){target=\_blank} guide. You can also follow these steps to benchmark a custom pallet by updating the `benchmarking.rs` functions, and instances of usage in future steps, to calculate weights using your specific pallet functionality. +- Basic understanding of [computational complexity](https://en.wikipedia.org/wiki/Computational_complexity){target=\_blank}. +- Familiarity with [Rust's testing framework](https://doc.rust-lang.org/book/ch11-00-testing.html){target=\_blank}. +- Familiarity setting up the Polkadot Omni Node and [Polkadot Chain Spec Builder](https://crates.io/crates/staging-chain-spec-builder){target=\_blank}. Refer to the [Set Up a Parachain Template](/parachains/launch-a-parachain/set-up-the-parachain-template/){target=\_blank} guide for instructions if needed. -### Benchmarking and Weight +## Create the Benchmarking Module -In Polkadot SDK-based chains, weight quantifies the computational effort needed to process transactions. This weight includes factors such as: +Create a new file `benchmarking.rs` in your pallet's `src` directory and add the following code: -- Computational complexity. -- Storage complexity (proof size). -- Database reads and writes. -- Hardware specifications. +```rust title="pallets/pallet-custom/src/benchmarking.rs" +#![cfg(feature = "runtime-benchmarks")] -Benchmarking uses real-world testing to simulate worst-case scenarios for extrinsics. The framework generates a linear model for weight calculation by running multiple iterations with varied parameters. These worst-case weights ensure blocks remain within execution limits, enabling the runtime to maintain throughput under varying loads. Excess fees can be refunded if a call uses fewer resources than expected, offering users a fair cost model. 
-
-Because weight is a generic unit of measurement based on computation time for a specific physical machine, the weight of any function can change based on the specifications of hardware used for benchmarking. By modeling the expected weight of each runtime function, the blockchain can calculate the number of transactions or system-level calls it can execute within a certain period.
+use super::*;
+use frame::deps::frame_benchmarking::v2::*;
+use frame::benchmarking::prelude::RawOrigin;

-Within FRAME, each function call that is dispatched must have a `#[pallet::weight]` annotation that can return the expected weight for the worst-case scenario execution of that function given its inputs:
+#[benchmarks]
+mod benchmarks {
+    use super::*;

-```rust hl_lines="2"
-#[pallet::call_index(0)]
-#[pallet::weight(T::WeightInfo::do_something())]
-pub fn do_something(origin: OriginFor<T>) -> DispatchResultWithPostInfo { Ok(()) }
-```
+    #[benchmark]
+    fn set_counter_value() {
+        let new_value: u32 = 100;

-The `WeightInfo` file is automatically generated during benchmarking. Based on these tests, this file provides accurate weights for each extrinsic.
+        #[extrinsic_call]
+        _(RawOrigin::Root, new_value);

-## Benchmarking Process
+        assert_eq!(CounterValue::<T>::get(), new_value);
+    }

-Benchmarking a pallet involves the following steps:
+    #[benchmark]
+    fn increment() {
+        let caller: T::AccountId = whitelisted_caller();
+        let amount: u32 = 50;

-1. Creating a `benchmarking.rs` file within your pallet's structure.
-2. Writing a benchmarking test for each extrinsic.
-3. Executing the benchmarking tool to calculate weights based on performance metrics.
+        #[extrinsic_call]
+        _(RawOrigin::Signed(caller.clone()), amount);

-The benchmarking tool runs multiple iterations to model worst-case execution times and determine the appropriate weight. By default, the benchmarking pipeline is deactivated. To activate it, compile your runtime with the `runtime-benchmarks` feature flag.
+        assert_eq!(CounterValue::<T>::get(), amount);
+        assert_eq!(UserInteractions::<T>::get(caller), 1);
+    }

-### Prepare Your Environment
+    #[benchmark]
+    fn decrement() {
+        // First, set the counter to a non-zero value
+        CounterValue::<T>::put(100);

-Install the [`frame-omni-bencher`](https://crates.io/crates/frame-omni-bencher){target=\_blank} command-line tool:
+        let caller: T::AccountId = whitelisted_caller();
+        let amount: u32 = 30;

-```bash
-cargo install frame-omni-bencher
-```
+        #[extrinsic_call]
+        _(RawOrigin::Signed(caller.clone()), amount);

-Before writing benchmark tests, you need to ensure the `frame-benchmarking` crate is included in your pallet's `Cargo.toml` similar to the following:
+        assert_eq!(CounterValue::<T>::get(), 70);
+        assert_eq!(UserInteractions::<T>::get(caller), 1);
+    }

-```toml title="Cargo.toml"
-frame-benchmarking = { version = "37.0.0", default-features = false }
+    impl_benchmark_test_suite!(Pallet, crate::mock::new_test_ext(), crate::mock::Test);
+}
```

-You must also ensure that you add the `runtime-benchmarks` feature flag as follows under the `[features]` section of your pallet's `Cargo.toml`:
+This module contains all the [benchmarking definitions](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/v2/index.html){target=\_blank} for your pallet. If you are benchmarking a different pallet, update the testing logic as needed to test your pallet's functionality.

-```toml title="Cargo.toml"
-runtime-benchmarks = [
-    "frame-benchmarking/runtime-benchmarks",
-    "frame-support/runtime-benchmarks",
-    "frame-system/runtime-benchmarks",
-    "sp-runtime/runtime-benchmarks",
-]
-```
+## Define the Weight Trait

-Lastly, ensure that `frame-benchmarking` is included in `std = []`:
+Add a `weights` module to your pallet that defines the `WeightInfo` trait using the following code:

-```toml title="Cargo.toml"
-std = [
-    # ...
-]
-```
+```rust title="pallets/pallet-custom/src/weights.rs"
+#[frame::pallet]
+pub mod pallet {
+    use frame::prelude::*;
+    pub use weights::WeightInfo;

-Once complete, you have the required dependencies for writing benchmark tests for your pallet.
+    pub mod weights {
+        use frame::prelude::*;

-### Write Benchmark Tests
+        pub trait WeightInfo {
+            fn set_counter_value() -> Weight;
+            fn increment() -> Weight;
+            fn decrement() -> Weight;
+        }

-Create a `benchmarking.rs` file in your pallet's `src/`. Your directory structure should look similar to the following:
+        impl WeightInfo for () {
+            fn set_counter_value() -> Weight {
+                Weight::from_parts(10_000, 0)
+            }
+            fn increment() -> Weight {
+                Weight::from_parts(15_000, 0)
+            }
+            fn decrement() -> Weight {
+                Weight::from_parts(15_000, 0)
+            }
+        }
+    }
+    // ... rest of pallet
+}
```
-my-pallet/
-├── src/
-│   ├── lib.rs          # Main pallet implementation
-│   └── benchmarking.rs # Benchmarking
-└── Cargo.toml
+
+The `WeightInfo for ()` implementation provides placeholder weights for development. If you are using a different pallet, update the `weights` module to use your pallet's function names.
+
+## Add WeightInfo to Config
+
+Update your pallet's `Config` trait to include `WeightInfo` by adding the following code:
+
+```rust title="pallets/pallet-custom/src/lib.rs"
+#[pallet::config]
+pub trait Config: frame_system::Config {
+    type RuntimeEvent: From<Event<Self>> + IsType<<Self as frame_system::Config>::RuntimeEvent>;
+
+    #[pallet::constant]
+    type CounterMaxValue: Get<u32>;
+
+    type WeightInfo: weights::WeightInfo;
+}
```
-With the directory structure set, you can use the [`polkadot-sdk-parachain-template`](https://github.com/paritytech/polkadot-sdk-parachain-template/tree/master/pallets){target=\_blank} to get started as follows:
+The [`WeightInfo`](https://paritytech.github.io/polkadot-sdk/master/frame_support/weights/trait.WeightInfo.html){target=\_blank} trait provides an abstraction layer that allows weights to be swapped at runtime configuration.
By making `WeightInfo` an associated type in the `Config` trait, you enable each runtime that uses your pallet to specify which weight implementation to use.

-```rust title="benchmarking.rs (starter template)"
-//! Benchmarking setup for pallet-template
-#![cfg(feature = "runtime-benchmarks")]

## Update Extrinsic Weight Annotations

-use super::*;
-use frame_benchmarking::v2::*;

+Replace the placeholder weights in your extrinsics with calls to the `WeightInfo` trait by adding the following code:

-#[benchmarks]
-mod benchmarks {
-    use super::*;
-    #[cfg(test)]
-    use crate::pallet::Pallet as Template;
-    use frame_system::RawOrigin;
-
-    #[benchmark]
-    fn do_something() {
-        let caller: T::AccountId = whitelisted_caller();
-        #[extrinsic_call]
-        do_something(RawOrigin::Signed(caller), 100);
-
-        assert_eq!(Something::<T>::get().map(|v| v.block_number), Some(100u32.into()));
-    }
-
-    #[benchmark]
-    fn cause_error() {
-        Something::<T>::put(CompositeStruct { block_number: 100u32.into() });
-        let caller: T::AccountId = whitelisted_caller();
-        #[extrinsic_call]
-        cause_error(RawOrigin::Signed(caller));
-
-        assert_eq!(Something::<T>::get().map(|v| v.block_number), Some(101u32.into()));
-    }
-
-    impl_benchmark_test_suite!(Template, crate::mock::new_test_ext(), crate::mock::Test);
+```rust title="pallets/pallet-custom/src/lib.rs"
+#[pallet::call]
+impl<T: Config> Pallet<T> {
+    #[pallet::call_index(0)]
+    #[pallet::weight(T::WeightInfo::set_counter_value())]
+    pub fn set_counter_value(origin: OriginFor<T>, new_value: u32) -> DispatchResult {
+        // ... implementation
+    }
+
+    #[pallet::call_index(1)]
+    #[pallet::weight(T::WeightInfo::increment())]
+    pub fn increment(origin: OriginFor<T>, amount: u32) -> DispatchResult {
+        // ... implementation
+    }
+
+    #[pallet::call_index(2)]
+    #[pallet::weight(T::WeightInfo::decrement())]
+    pub fn decrement(origin: OriginFor<T>, amount: u32) -> DispatchResult {
+        // ... implementation
+    }
+}
```

-In your benchmarking tests, employ these best practices:
+By calling `T::WeightInfo::function_name()` instead of using hardcoded `Weight::from_parts()` values, your extrinsics automatically use whichever weight implementation is configured in the runtime. You can switch between placeholder weights for testing and benchmarked weights for production easily, without changing any pallet code.

-- **Write custom testing functions**: The function `do_something` in the preceding example is a placeholder. Similar to writing unit tests, you must write custom functions to benchmark test your extrinsics. Access the mock runtime and use functions such as `whitelisted_caller()` to sign transactions and facilitate testing.
-- **Use the `#[extrinsic_call]` macro**: This macro is used when calling the extrinsic itself and is a required part of a benchmarking function. See the [`extrinsic_call`](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/v2/index.html#extrinsic_call-and-block){target=\_blank} docs for more details.
-- **Validate extrinsic behavior**: The `assert_eq` expression ensures that the extrinsic is working properly within the benchmark context.
+If you are using a different pallet, be sure to update the functions for `WeightInfo` accordingly.

-Add the `benchmarking` module to your pallet. In the pallet `lib.rs` file add the following:
+## Include the Benchmarking Module
+
+At the top of your `lib.rs`, add the module declaration by adding the following code:
+
+```rust title="pallets/pallet-custom/src/lib.rs"
+#![cfg_attr(not(feature = "std"), no_std)]
+
+extern crate alloc;
+use alloc::vec::Vec;
+
+pub use pallet::*;

-```rust
#[cfg(feature = "runtime-benchmarks")]
mod benchmarking;
+
+// Additional pallet code
```

-### Add Benchmarks to Runtime
+The `#[cfg(feature = "runtime-benchmarks")]` attribute ensures that benchmarking code is only compiled when explicitly needed to keep your production runtime efficient.
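The swap-by-type mechanism described above can be reproduced in miniature outside FRAME. In this stand-alone sketch (all names are illustrative stand-ins, not the pallet's real types), a generic function resolves its weight through whichever `WeightInfo` implementation the caller selects, just as the runtime selects one for `T::WeightInfo`:

```rust
// Illustrative sketch of the weight-swapping pattern: the "pallet" side only
// knows the trait, and the "runtime" side picks the implementation.
trait WeightInfo {
    fn increment() -> u64;
}

// Placeholder implementation, like `type WeightInfo = ()` in a mock runtime.
impl WeightInfo for () {
    fn increment() -> u64 {
        15_000
    }
}

// Stand-in for a benchmarked implementation generated into `weights.rs`.
struct Benchmarked;
impl WeightInfo for Benchmarked {
    fn increment() -> u64 {
        12_456
    }
}

// The "pallet" side: generic over the weight source, like `T::WeightInfo`.
fn increment_weight<W: WeightInfo>() -> u64 {
    W::increment()
}

fn main() {
    // Swapping the type parameter swaps the weights without touching
    // `increment_weight` itself.
    assert_eq!(increment_weight::<()>(), 15_000);
    assert_eq!(increment_weight::<Benchmarked>(), 12_456);
}
```

This is the same design choice the `Config` trait makes: the dispatchable code never hardcodes a number, so switching from placeholder to benchmarked weights is purely a runtime-configuration change.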
-Before running the benchmarking tool, you must integrate benchmarks with your runtime as follows:

+## Configure Pallet Dependencies

-1. Navigate to your `runtime/src` directory and check if a `benchmarks.rs` file exists. If not, create one. This file will contain the macro that registers all pallets for benchmarking along with their respective configurations:
+Update your pallet's `Cargo.toml` to enable the benchmarking feature by adding the following code:

-    ```rust title="benchmarks.rs"
-    frame_benchmarking::define_benchmarks!(
-        [frame_system, SystemBench::<Runtime>]
-        [pallet_parachain_template, TemplatePallet]
-        [pallet_balances, Balances]
-        [pallet_session, SessionBench::<Runtime>]
-        [pallet_timestamp, Timestamp]
-        [pallet_message_queue, MessageQueue]
-        [pallet_sudo, Sudo]
-        [pallet_collator_selection, CollatorSelection]
-        [cumulus_pallet_parachain_system, ParachainSystem]
-        [cumulus_pallet_xcmp_queue, XcmpQueue]
-    );
+```toml title="pallets/pallet-custom/Cargo.toml"
+[dependencies]
+codec = { features = ["derive"], workspace = true }
+scale-info = { features = ["derive"], workspace = true }
+frame = { features = ["experimental", "runtime"], workspace = true }
+
+[features]
+default = ["std"]
+runtime-benchmarks = [
+    "frame/runtime-benchmarks",
+]
+std = [
+    "codec/std",
+    "scale-info/std",
+    "frame/std",
+]
+```
+
+The Cargo feature flag system lets you conditionally compile code based on which features are enabled. By defining a `runtime-benchmarks` feature that cascades to FRAME's benchmarking features, you create a clean way to build your pallet with or without benchmarking support, ensuring all necessary dependencies are available when needed but excluded from production builds.
+
## Update Mock Runtime

Add the `WeightInfo` type to your test configuration in `mock.rs` by adding the following code:

```rust title="pallets/pallet-custom/src/mock.rs"
impl pallet_custom::Config for Test {
    type RuntimeEvent = RuntimeEvent;
    type CounterMaxValue = ConstU32<1000>;
    type WeightInfo = ();
}
```

In your mock runtime for testing, use the placeholder `()` implementation of `WeightInfo`, since unit tests focus on verifying functional correctness rather than performance.

## Configure Runtime Benchmarking

To execute benchmarks, your pallet must be integrated into the runtime's benchmarking infrastructure. Follow these steps to update the runtime configuration:

1. **Update `runtime/Cargo.toml`**: Add your pallet to the runtime's `runtime-benchmarks` feature as follows:

    ```toml title="runtime/Cargo.toml"
    runtime-benchmarks = [
        "cumulus-pallet-parachain-system/runtime-benchmarks",
        "hex-literal",
        "pallet-parachain-template/runtime-benchmarks",
        "polkadot-sdk/runtime-benchmarks",
        "pallet-custom/runtime-benchmarks",
    ]
    ```

    When you build the runtime with `--features runtime-benchmarks`, this configuration ensures all necessary benchmarking code across all pallets (including yours) is included.

2. **Update runtime configuration**: Use the placeholder implementation to run development benchmarks as follows:

    ```rust title="runtime/src/configs/mod.rs"
    impl pallet_custom::Config for Runtime {
        type RuntimeEvent = RuntimeEvent;
        type CounterMaxValue = ConstU32<1000>;
        type WeightInfo = ();
    }
    ```
-    For example, to add a new pallet named `pallet_parachain_template` for benchmarking, include it in the macro as shown:
-    ```rust title="benchmarks.rs" hl_lines="3"
-    frame_benchmarking::define_benchmarks!(
3. 
**Register benchmarks**: Add your pallet to the benchmark list in `runtime/src/benchmarks.rs` as follows:

    ```rust title="runtime/src/benchmarks.rs"
    polkadot_sdk::frame_benchmarking::define_benchmarks!(
        [frame_system, SystemBench::<Runtime>]
-        [pallet_parachain_template, TemplatePallet]
        [pallet_balances, Balances]
        // ... other pallets
        [pallet_custom, CustomPallet]
    );
    ```

-    !!!warning "Updating `define_benchmarks!` macro is required"
-        Any pallet that needs to be benchmarked must be included in the [`define_benchmarks!`](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/macro.define_benchmarks.html){target=\_blank} macro. The CLI will only be able to access and benchmark pallets that are registered here.
+    The [`define_benchmarks!`](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/macro.define_benchmarks.html){target=\_blank} macro creates the infrastructure that allows the benchmarking CLI tool to discover and execute your pallet's benchmarks.

-2. Check your runtime's `lib.rs` file to ensure the `benchmarks` module is imported. The import should look like this:

## Test Benchmark Compilation

-    ```rust title="lib.rs"
-    #[cfg(feature = "runtime-benchmarks")]
-    mod benchmarks;
-    ```
+Run the following command to verify your benchmarks compile and run as tests:

-    The `runtime-benchmarks` feature gate ensures benchmark tests are isolated from production runtime code.

```bash
cargo test -p pallet-custom --features runtime-benchmarks
```

-3. Enable runtime benchmarking for your pallet in `runtime/Cargo.toml`:
+You will see terminal output similar to the following as your benchmark tests pass:

-    ```toml
-    runtime-benchmarks = [
-        # ...
-        "pallet_parachain_template/runtime-benchmarks",
-    ]
+ cargo test -p pallet-custom --features runtime-benchmarks + test benchmarking::benchmarks::bench_set_counter_value ... ok + test benchmarking::benchmarks::bench_increment ... ok + test benchmarking::benchmarks::bench_decrement ... ok + +
- ``` +The `impl_benchmark_test_suite!` macro generates unit tests for each benchmark. Running these tests verifies that your benchmarks compile correctly, execute without panicking, and pass their assertions, catching issues early before building the entire runtime. -### Run Benchmarks +## Build the Runtime with Benchmarks -You can now compile your runtime with the `runtime-benchmarks` feature flag. This feature flag is crucial as the benchmarking tool will look for this feature being enabled to know when it should run benchmark tests. Follow these steps to compile the runtime with benchmarking enabled: +Compile the runtime with benchmarking enabled to generate the Wasm binary using the following command: -1. Run `build` with the feature flag included: +```bash +cargo build --release --features runtime-benchmarks +``` - ```bash - cargo build --features runtime-benchmarks --release - ``` +This command produces the runtime WASM file needed for benchmarking, typically located at: `target/release/wbuild/parachain-template-runtime/parachain_template_runtime.wasm` -2. Create a `weights.rs` file in your pallet's `src/` directory. This file will store the auto-generated weight calculations: +The build includes all the benchmarking infrastructure and special host functions needed for measurement. The resulting WASM runtime contains your benchmark code and can communicate with the benchmarking tool's execution environment. You'll create a different build later for operating your chain in production. - ```bash - touch weights.rs - ``` +## Install the Benchmarking Tool -3. Before running the benchmarking tool, you'll need a template file that defines how weight information should be formatted. 
Download the official template from the Polkadot SDK repository and save it in your project folders for future use: +Install the `frame-omni-bencher` CLI tool using the following command: - ```bash - curl https://raw.githubusercontent.com/paritytech/polkadot-sdk/refs/tags/polkadot-stable2412/substrate/.maintain/frame-weight-template.hbs \ - --output ./pallets/benchmarking/frame-weight-template.hbs - ``` +```bash +cargo install frame-omni-bencher --locked +``` + +[`frame-omni-bencher`](https://paritytech.github.io/polkadot-sdk/master/frame_omni_bencher/index.html){target=\_blank} is the official Polkadot SDK tool designed explicitly for FRAME pallet benchmarking. It provides a standardized way to execute benchmarks, measure execution times and storage operations, and generate properly formatted weight files with full integration into the FRAME weight system. + +## Download the Weight Template -4. Run the benchmarking tool to measure extrinsic weights: +Download the official weight template file using the following commands: + +```bash +curl -L https://raw.githubusercontent.com/paritytech/polkadot-sdk/refs/tags/polkadot-stable2412/substrate/.maintain/frame-weight-template.hbs \ +--output ./pallets/pallet-custom/frame-weight-template.hbs +``` + +The weight template is a Handlebars file that transforms raw benchmark data into a correctly formatted Rust source file. It defines the structure of the generated `weights.rs` file, including imports, trait definitions, documentation comments, and formatting. Using the official template ensures your weight files follow the Polkadot SDK conventions and include all necessary metadata, such as benchmark execution parameters, storage operation counts, and hardware information. 
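To make the template's role concrete, here is a toy substitution step in plain Rust — this is not the real Handlebars pipeline, and the numbers and helper name are invented for illustration — showing how measured values slot into a weight-function skeleton:

```rust
// Toy illustration of what the template pass does: substitute measured
// benchmark results into a Rust source skeleton. The real tooling uses the
// `frame-weight-template.hbs` Handlebars file and far more metadata.
fn render_weight_fn(name: &str, ref_time: u64, reads: u64, writes: u64) -> String {
    format!(
        "fn {name}() -> Weight {{\n    Weight::from_parts({ref_time}, 0)\n        .saturating_add(T::DbWeight::get().reads({reads}))\n        .saturating_add(T::DbWeight::get().writes({writes}))\n}}"
    )
}

fn main() {
    // Feed in example measurements and print the generated function body.
    let generated = render_weight_fn("increment", 12_456_000, 2, 2);
    assert!(generated.contains("fn increment() -> Weight"));
    assert!(generated.contains("reads(2)"));
    println!("{generated}");
}
```

The real template additionally emits the trait definition, doc comments recording steps/repeat/hardware, and a `()` fallback implementation, which is why using the official file keeps generated weight modules uniform across pallets.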
+ +## Execute Benchmarks + +Run benchmarks for your pallet to generate weight files using the following commands: + +```bash +frame-omni-bencher v1 benchmark pallet \ + --runtime ./target/release/wbuild/parachain-template-runtime/parachain_template_runtime.wasm \ + --pallet pallet_custom \ + --extrinsic "" \ + --template ./pallets/pallet-custom/frame-weight-template.hbs \ + --output ./pallets/pallet-custom/src/weights.rs +``` + +Benchmarks execute against the compiled WASM runtime rather than native code because WASM is what actually runs in production on the blockchain. WASM execution can have different performance characteristics than native code due to compilation and sandboxing overhead, so benchmarking against the WASM ensures your weight measurements reflect real-world conditions. + +??? note "Additional customization" + + You can customize benchmark execution with additional parameters for more detailed measurements, as shown in the sample code below: ```bash frame-omni-bencher v1 benchmark pallet \ - --runtime INSERT_PATH_TO_WASM_RUNTIME \ - --pallet INSERT_NAME_OF_PALLET \ - --extrinsic "" \ - --template ./frame-weight-template.hbs \ - --output weights.rs + --runtime ./target/release/wbuild/parachain-template-runtime/parachain_template_runtime.wasm \ + --pallet pallet_custom \ + --extrinsic "" \ + --steps 50 \ + --repeat 20 \ + --template ./pallets/pallet-custom/frame-weight-template.hbs \ + --output ./pallets/pallet-custom/src/weights.rs ``` + + - **`--steps 50`**: Number of different input values to test when using linear components (default: 50). More steps provide finer granularity for detecting complexity trends but increase benchmarking time. + - **`--repeat 20`**: Number of repetitions for each measurement (default: 20). More repetitions improve statistical accuracy by averaging out variance, reducing the impact of system noise, and providing more reliable weight estimates. + - **`--heap-pages 4096`**: WASM heap pages allocation. 
Affects available memory during execution. + - **`--wasm-execution compiled`**: WASM execution method. Use `compiled` for performance closest to production conditions. - !!! tip "Flag definitions" - - **`--runtime`**: The path to your runtime's Wasm. - - **`--pallet`**: The name of the pallet you wish to benchmark. This pallet must be configured in your runtime and defined in `define_benchmarks`. - - **`--extrinsic`**: Which extrinsic to test. Using `""` implies all extrinsics will be benchmarked. - - **`--template`**: Defines how weight information should be formatted. - - **`--output`**: Where the output of the auto-generated weights will reside. +## Use Generated Weights -The generated `weights.rs` file contains weight annotations for your extrinsics, ready to be added to your pallet. The output should be similar to the following. Some output is omitted for brevity: +After running benchmarks, a `weights.rs` file is generated containing measured weights based on actual measurements of your code running on real hardware, accounting for the specific complexity of your logic, storage access patterns, and computational requirements. -
- frame-omni-bencher v1 benchmark pallet \ - --runtime INSERT_PATH_TO_WASM_RUNTIME \ - --pallet "INSERT_NAME_OF_PALLET" \ - --extrinsic "" \ - --template ./frame-weight-template.hbs \ - --output ./weights.rs - ... - 2025-01-15T16:41:33.557045Z INFO polkadot_sdk_frame::benchmark::pallet: [ 0 % ] Starting benchmark: pallet_parachain_template::do_something - 2025-01-15T16:41:33.564644Z INFO polkadot_sdk_frame::benchmark::pallet: [ 50 % ] Starting benchmark: pallet_parachain_template::cause_error - ... - Created file: "weights.rs" - -
+Follow these steps to use the generated weights with your pallet: + +1. Integrate the generated weights by adding the weights module to your pallet's `lib.rs` as follows: + + ```rust title="pallets/pallet-custom/src/lib.rs" + #![cfg_attr(not(feature = "std"), no_std)] -#### Add Benchmark Weights to Pallet + extern crate alloc; + use alloc::vec::Vec; -Once the `weights.rs` is generated, you must integrate it with your pallet. + pub use pallet::*; -1. To begin the integration, import the `weights` module and the `WeightInfo` trait, then add both to your pallet's `Config` trait. Complete the following steps to set up the configuration: + #[cfg(feature = "runtime-benchmarks")] + mod benchmarking; - ```rust title="lib.rs" pub mod weights; - use crate::weights::WeightInfo; - - /// Configure the pallet by specifying the parameters and types on which it depends. - #[pallet::config] - pub trait Config: frame_system::Config { - // ... - /// A type representing the weights required by the dispatchables of this pallet. - type WeightInfo: WeightInfo; + + #[frame::pallet] + pub mod pallet { + use super::*; + use frame::prelude::*; + use crate::weights::WeightInfo; + // ... rest of pallet } ``` -2. Next, you must add this to the `#[pallet::weight]` annotation in all the extrinsics via the `Config` as follows: + Unlike the benchmarking module (which is only needed when running benchmarks), the weights module must be available in all builds because the runtime needs to call the weight functions during regular operation to calculate transaction fees and enforce block limits. - ```rust hl_lines="2" title="lib.rs" - #[pallet::call_index(0)] - #[pallet::weight(T::WeightInfo::do_something())] - pub fn do_something(origin: OriginFor) -> DispatchResultWithPostInfo { Ok(()) } +2. 
Update your runtime configuration to use the generated weights instead of the placeholder `()` implementation by adding the following code:

    ```rust title="runtime/src/configs/mod.rs"
    impl pallet_custom::Config for Runtime {
        type RuntimeEvent = RuntimeEvent;
        type CounterMaxValue = ConstU32<1000>;
        type WeightInfo = pallet_custom::weights::SubstrateWeight<Runtime>;
    }
    ```

-3. Finally, configure the actual weight values in your runtime. In `runtime/src/config/mod.rs`, add the following code:

    This change activates your benchmarked weights in the production runtime. Now, when users submit transactions that call your pallet's extrinsics, the runtime will use the actual measured weights to calculate fees and enforce block limits.

??? code "Example generated weight file"

    The generated `weights.rs` file will look similar to this:

    ```rust title="pallets/pallet-custom/src/weights.rs"
    //! Autogenerated weights for `pallet_custom`
    //!
    //! THIS FILE WAS AUTO-GENERATED USING THE SUBSTRATE BENCHMARK CLI VERSION 32.0.0
    //! DATE: 2025-01-15, STEPS: `50`, REPEAT: `20`

    #![cfg_attr(rustfmt, rustfmt_skip)]
    #![allow(unused_parens)]
    #![allow(unused_imports)]
    #![allow(missing_docs)]

    use frame_support::{traits::Get, weights::{Weight, constants::RocksDbWeight}};
    use core::marker::PhantomData;

    pub trait WeightInfo {
        fn set_counter_value() -> Weight;
        fn increment() -> Weight;
        fn decrement() -> Weight;
    }

    pub struct SubstrateWeight<T>(PhantomData<T>);
    impl<T: frame_system::Config> WeightInfo for SubstrateWeight<T> {
        fn set_counter_value() -> Weight {
            Weight::from_parts(8_234_000, 0)
                .saturating_add(T::DbWeight::get().reads(1))
                .saturating_add(T::DbWeight::get().writes(1))
        }

        fn increment() -> Weight {
            Weight::from_parts(12_456_000, 0)
                .saturating_add(T::DbWeight::get().reads(2))
                .saturating_add(T::DbWeight::get().writes(2))
        }

-    ```rust title="mod.rs"
-    // Configure pallet.
-    impl pallet_parachain_template::Config for Runtime {
-        // ...
- type WeightInfo = pallet_parachain_template::weights::SubstrateWeight; + fn decrement() -> Weight { + Weight::from_parts(11_987_000, 0) + .saturating_add(T::DbWeight::get().reads(2)) + .saturating_add(T::DbWeight::get().writes(2)) + } } ``` -## Where to Go Next + The actual numbers in your `weights.rs` file will vary based on your hardware and implementation complexity. The [`DbWeight`](https://paritytech.github.io/polkadot-sdk/master/frame_support/weights/struct.RuntimeDbWeight.html){target=\_blank} accounts for database read and write operations. + +Congratulations, you've successfully benchmarked a pallet and updated your runtime to use the generated weight values. -- View the Rust Docs for a more comprehensive, low-level view of the [FRAME V2 Benchmarking Suite](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/v2/index.html){target=_blank}. -- Read the [FRAME Benchmarking and Weights](https://paritytech.github.io/polkadot-sdk/master/polkadot_sdk_docs/reference_docs/frame_benchmarking_weight/index.html){target=_blank} reference document, a concise guide which details how weights and benchmarking work. +## Related Resources + +- [FRAME Benchmarking Documentation](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/index.html){target=\_blank} +- [Weight Struct Documentation](https://paritytech.github.io/polkadot-sdk/master/frame_support/weights/struct.Weight.html){target=\_blank} +- [Benchmarking v2 API](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/v2/index.html){target=\_blank} +- [frame-omni-bencher Tool](https://paritytech.github.io/polkadot-sdk/master/frame_omni_bencher/index.html){target=\_blank} --- @@ -2061,7 +2210,7 @@ This command validates all pallet configurations and prepares the build for depl ## Run Your Chain Locally -Launch your parachain locally to test the new pallet functionality using the [Polkadot Omni Node](https://crates.io/crates/polkadot-omni-node){target=\_blank}. 
+Launch your parachain locally to test the new pallet functionality using the [Polkadot Omni Node](https://crates.io/crates/polkadot-omni-node){target=\_blank}. For instructions on setting up the Polkadot Omni Node and [Polkadot Chain Spec Builder](https://crates.io/crates/staging-chain-spec-builder){target=\_blank}, refer to the [Set Up a Parachain Template](/parachains/launch-a-parachain/set-up-the-parachain-template/){target=\_blank} guide. ### Generate a Chain Specification @@ -6338,843 +6487,112 @@ This section covers the most common customization patterns you'll encounter: --- -Page Title: Pallet Unit Testing +Page Title: Run a Parachain Network -- Source (raw): https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/master/.ai/pages/parachains-customize-runtime-pallet-development-pallet-testing.md -- Canonical (HTML): https://docs.polkadot.com/parachains/customize-runtime/pallet-development/pallet-testing/ -- Summary: Learn how to write comprehensive unit tests for your custom pallets using mock runtimes, ensuring reliability and correctness before deployment. +- Source (raw): https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/master/.ai/pages/parachains-testing-run-a-parachain-network.md +- Canonical (HTML): https://docs.polkadot.com/parachains/testing/run-a-parachain-network/ +- Summary: Quickly install and configure Zombienet to deploy and test Polkadot-based blockchain networks with this comprehensive getting-started guide. -# Pallet Unit Testing +# Run a Parachain Network Using Zombienet ## Introduction -Unit testing in the Polkadot SDK helps ensure that the functions provided by a pallet behave as expected. It also confirms that data and events associated with a pallet are processed correctly during interactions. With your mock runtime in place from the [previous guide](/parachains/customize-runtime/pallet-development/mock-runtime/), you can now write comprehensive tests that verify your pallet's behavior in isolation. 
- -In this guide, you'll learn how to: - -- Structure test modules effectively. -- Test dispatchable functions. -- Verify storage changes. -- Check event emission. -- Test error conditions. -- Use genesis configurations in tests. +Zombienet is a robust testing framework designed for Polkadot SDK-based blockchain networks. It enables developers to efficiently deploy and test ephemeral blockchain environments on platforms like Kubernetes, Podman, and native setups. With its simple and versatile CLI, Zombienet provides an all-in-one solution for spawning networks, running tests, and validating performance. -## Prerequisites +This guide will outline the different installation methods for Zombienet, provide step-by-step instructions for setting up on various platforms, and highlight essential provider-specific features and requirements. -Before you begin, ensure you: +By following this guide, Zombienet will be up and running quickly, ready to streamline your blockchain testing and development workflows. -- Completed the [Make a Custom Pallet](/parachains/customize-runtime/pallet-development/create-a-pallet/) guide. -- Completed the [Mock Your Runtime](/parachains/customize-runtime/pallet-development/mock-runtime/) guide. -- Configured custom counter pallet with mock runtime in `pallets/pallet-custom`. -- Understood the basics of [Rust testing](https://doc.rust-lang.org/book/ch11-00-testing.html){target=\_blank}. +## Install Zombienet -## Understanding FRAME Testing Tools +Zombienet releases are available on the [Zombienet repository](https://github.com/paritytech/zombienet){target=\_blank}. -[FRAME](/reference/glossary/#frame-framework-for-runtime-aggregation-of-modularized-entities){target=\_blank} provides specialized testing macros and utilities that make pallet testing more efficient: +Multiple options are available for installing Zombienet, depending on the user's preferences and the environment where it will be used. 
The following section will guide you through the installation process for each option. -### Assertion Macros +=== "Use the executable" -- **[`assert_ok!`](https://paritytech.github.io/polkadot-sdk/master/frame_support/macro.assert_ok.html){target=\_blank}** - Asserts that a dispatchable call succeeds. -- **[`assert_noop!`](https://paritytech.github.io/polkadot-sdk/master/frame_support/macro.assert_noop.html){target=\_blank}** - Asserts that a call fails without changing state (no operation). -- **[`assert_eq!`](https://doc.rust-lang.org/std/macro.assert_eq.html){target=\_blank}** - Standard Rust equality assertion. + Install Zombienet using executables by visiting the [latest release](https://github.com/paritytech/zombienet/releases){target=\_blank} page and selecting the appropriate asset for your operating system. You can download the executable and move it to a directory in your PATH. -!!!info "`assert_noop!` Explained" - Use `assert_noop!` to ensure the operation fails without any state changes. This is critical for testing error conditions - it verifies both that the error occurs AND that no storage was modified. + Each release includes executables for Linux and macOS. Executables are generated using [pkg](https://github.com/vercel/pkg){target=\_blank}, which allows the Zombienet CLI to operate without requiring Node.js to be installed. -### System Pallet Test Helpers + Then, ensure the downloaded file is executable: -The [`frame_system`](https://paritytech.github.io/polkadot-sdk/master/frame_system/index.html){target=\_blank} pallet provides useful methods for testing: + ```bash + chmod +x zombienet-macos-arm64 + ``` -- **[`System::events()`](https://paritytech.github.io/polkadot-sdk/master/frame_system/pallet/struct.Pallet.html#method.events){target=\_blank}** - Returns all events emitted during the test. 
-- **[`System::assert_last_event()`](https://paritytech.github.io/polkadot-sdk/master/frame_system/pallet/struct.Pallet.html#method.assert_last_event){target=\_blank}** - Asserts the last event matches expectations. -- **[`System::set_block_number()`](https://paritytech.github.io/polkadot-sdk/master/frame_system/pallet/struct.Pallet.html#method.set_block_number){target=\_blank}** - Sets the current block number. + Finally, you can run the following command to check if the installation was successful. If so, it will display the version of the installed Zombienet: -!!!info "Events and Block Number" - Events are not emitted on block 0 (genesis block). If you need to test events, ensure you set the block number to at least 1 using `System::set_block_number(1)`. + ```bash + ./zombienet-macos-arm64 version + ``` -### Origin Types + If you want to add the `zombienet` executable to your PATH, you can move it to a directory in your PATH, such as `/usr/local/bin`: -- **[`RuntimeOrigin::root()`](https://paritytech.github.io/polkadot-sdk/master/frame_system/enum.RawOrigin.html#variant.Root){target=\_blank}** - Root/sudo origin for privileged operations. -- **[`RuntimeOrigin::signed(account)`](https://paritytech.github.io/polkadot-sdk/master/frame_system/enum.RawOrigin.html#variant.Signed){target=\_blank}** - Signed origin from a specific account. -- **[`RuntimeOrigin::none()`](https://paritytech.github.io/polkadot-sdk/master/frame_system/enum.RawOrigin.html#variant.None){target=\_blank}** - No origin (typically fails for most operations). + ```bash + mv zombienet-macos-arm64 /usr/local/bin/zombienet + ``` -Learn more about origins in the [FRAME Origin reference document](https://paritytech.github.io/polkadot-sdk/master/polkadot_sdk_docs/reference_docs/frame_origin/index.html){target=\_blank}. + Now you can refer to the `zombienet` executable directly. 
-## Create the Tests Module + ```bash + zombienet version + ``` -Create a new file for your tests within the pallet directory: +=== "Use Nix" -1. Navigate to your pallet directory: + For Nix users, the Zombienet repository provides a [`flake.nix`](https://github.com/paritytech/zombienet/blob/main/flake.nix){target=\_blank} file to install Zombienet making it easy to incorporate Zombienet into Nix-based projects. + + To install Zombienet utilizing Nix, users can run the following command, triggering the fetching of the flake and subsequently installing the Zombienet package: ```bash - cd pallets/pallet-custom/src + nix run github:paritytech/zombienet/INSERT_ZOMBIENET_VERSION -- \ + spawn INSERT_ZOMBIENET_CONFIG_FILE_NAME.toml ``` -2. Create a new file named `tests.rs`: + Replace the `INSERT_ZOMBIENET_VERSION` with the desired version of Zombienet and the `INSERT_ZOMBIENET_CONFIG_FILE_NAME` with the name of the configuration file you want to use. + + To run the command above, you need to have [Flakes](https://nixos.wiki/wiki/Flakes#Enable_flakes){target=\_blank} enabled. + Alternatively, you can also include the Zombienet binary in the PATH for the current shell using the following command: + ```bash - touch tests.rs + nix shell github:paritytech/zombienet/INSERT_ZOMBIENET_VERSION ``` -3. Open `src/lib.rs` and add the tests module declaration after the mock module: +=== "Use Docker" - ```rust title="src/lib.rs" - #![cfg_attr(not(feature = "std"), no_std)] + Zombienet can also be run using Docker. The Zombienet repository provides a Docker image that can be used to run the Zombienet CLI. 
To run Zombienet using Docker, you can use the following command: - pub use pallet::*; + ```bash + docker run -it --rm \ + -v $(pwd):/home/nonroot/zombie-net/host-current-files \ + paritytech/zombienet + ``` - #[cfg(test)] - mod mock; + The command above will run the Zombienet CLI inside a Docker container and mount the current directory to the `/home/nonroot/zombie-net/host-current-files` directory. This allows Zombienet to access the configuration file and other files in the current directory. If you want to mount a different directory, replace `$(pwd)` with the desired directory path. - #[cfg(test)] - mod tests; + Inside the Docker container, you can run the Zombienet CLI commands. First, you need to set up Zombienet to download the necessary binaries: - #[frame::pallet] - pub mod pallet { - // ... existing pallet code - } + ```bash + npm run zombie -- setup polkadot polkadot-parachain ``` -## Set Up the Test Module + After that, you need to add those binaries to the PATH: -Open `src/tests.rs` and add the basic structure with necessary imports: + ```bash + export PATH=/home/nonroot/zombie-net:$PATH + ``` -```rust -use crate::{mock::*, Error, Event}; -use frame::deps::frame_support::{assert_noop, assert_ok}; -use frame::deps::sp_runtime::DispatchError; -``` + Finally, you can run the Zombienet CLI commands. For example, to spawn a network using a specific configuration file, you can run the following command: -This setup imports: + ```bash + npm run zombie -- -p native spawn host-current-files/minimal.toml + ``` -- The mock runtime and test utilities from `mock.rs` -- Your pallet's `Error` and `Event` types -- FRAME's assertion macros via `frame::deps` -- `DispatchError` for testing origin checks + The command above mounts the current directory to the `/workspace` directory inside the Docker container, allowing Zombienet to access the configuration file and other files in the current directory. 
If you want to mount a different directory, replace `$(pwd)` with the desired directory path. -???+ code "Complete Pallet Code Reference" - Here's the complete pallet code that you'll be testing throughout this guide: - - ```rust - #![cfg_attr(not(feature = "std"), no_std)] - - pub use pallet::*; - - #[frame::pallet] - pub mod pallet { - use frame::prelude::*; - - #[pallet::pallet] - pub struct Pallet(_); - - #[pallet::config] - pub trait Config: frame_system::Config { - type RuntimeEvent: From> + IsType<::RuntimeEvent>; - - #[pallet::constant] - type CounterMaxValue: Get; - } - - #[pallet::event] - #[pallet::generate_deposit(pub(super) fn deposit_event)] - pub enum Event { - CounterValueSet { - new_value: u32, - }, - CounterIncremented { - new_value: u32, - who: T::AccountId, - amount: u32, - }, - CounterDecremented { - new_value: u32, - who: T::AccountId, - amount: u32, - }, - } - - #[pallet::error] - pub enum Error { - NoneValue, - Overflow, - Underflow, - CounterMaxValueExceeded, - } - - #[pallet::storage] - pub type CounterValue = StorageValue<_, u32, ValueQuery>; - - #[pallet::storage] - pub type UserInteractions = StorageMap< - _, - Blake2_128Concat, - T::AccountId, - u32, - ValueQuery - >; - - #[pallet::genesis_config] - #[derive(DefaultNoBound)] - pub struct GenesisConfig { - pub initial_counter_value: u32, - pub initial_user_interactions: Vec<(T::AccountId, u32)>, - } - - #[pallet::genesis_build] - impl BuildGenesisConfig for GenesisConfig { - fn build(&self) { - CounterValue::::put(self.initial_counter_value); - for (account, count) in &self.initial_user_interactions { - UserInteractions::::insert(account, count); - } - } - } - - #[pallet::call] - impl Pallet { - #[pallet::call_index(0)] - #[pallet::weight(0)] - pub fn set_counter_value(origin: OriginFor, new_value: u32) -> DispatchResult { - ensure_root(origin)?; - ensure!(new_value <= T::CounterMaxValue::get(), Error::::CounterMaxValueExceeded); - CounterValue::::put(new_value); - 
Self::deposit_event(Event::CounterValueSet { new_value }); - Ok(()) - } - - #[pallet::call_index(1)] - #[pallet::weight(0)] - pub fn increment(origin: OriginFor, amount: u32) -> DispatchResult { - let who = ensure_signed(origin)?; - let current_value = CounterValue::::get(); - let new_value = current_value.checked_add(amount).ok_or(Error::::Overflow)?; - ensure!(new_value <= T::CounterMaxValue::get(), Error::::CounterMaxValueExceeded); - CounterValue::::put(new_value); - UserInteractions::::mutate(&who, |count| { - *count = count.saturating_add(1); - }); - Self::deposit_event(Event::CounterIncremented { new_value, who, amount }); - Ok(()) - } - - #[pallet::call_index(2)] - #[pallet::weight(0)] - pub fn decrement(origin: OriginFor, amount: u32) -> DispatchResult { - let who = ensure_signed(origin)?; - let current_value = CounterValue::::get(); - let new_value = current_value.checked_sub(amount).ok_or(Error::::Underflow)?; - CounterValue::::put(new_value); - UserInteractions::::mutate(&who, |count| { - *count = count.saturating_add(1); - }); - Self::deposit_event(Event::CounterDecremented { new_value, who, amount }); - Ok(()) - } - } - } - - ``` - -## Write Your First Test - -Let's start with a simple test to verify the increment function works correctly. - -### Test Basic Increment - -Test that the increment function increases counter value and emits events. 
- -```rust -#[test] -fn increment_works() { - new_test_ext().execute_with(|| { - // Set block number to 1 so events are registered - System::set_block_number(1); - - let account = 1u64; - - // Increment by 50 - assert_ok!(CustomPallet::increment(RuntimeOrigin::signed(account), 50)); - assert_eq!(crate::CounterValue::::get(), 50); - - // Check event was emitted - System::assert_last_event( - Event::CounterIncremented { - new_value: 50, - who: account, - amount: 50, - } - .into(), - ); - - // Check user interactions were tracked - assert_eq!(crate::UserInteractions::::get(account), 1); - }); -} -``` - -Run your first test: - -```bash -cargo test --package pallet-custom increment_works -``` - -You should see: - -``` -running 1 test -test tests::increment_works ... ok -``` - -Congratulations! You've written and run your first pallet test. - -## Test Error Conditions - -Now let's test that our pallet correctly handles errors. Error testing is crucial to ensure your pallet fails safely. - -### Test Overflow Protection - -Test that incrementing at u32::MAX fails with Overflow error. - -```rust -#[test] -fn increment_fails_on_overflow() { - new_test_ext_with_counter(u32::MAX).execute_with(|| { - // Attempt to increment when at max u32 should fail - assert_noop!( - CustomPallet::increment(RuntimeOrigin::signed(1), 1), - Error::::Overflow - ); - }); -} -``` - -Test overflow protection: - -```bash -cargo test --package pallet-custom increment_fails_on_overflow -``` - -### Test Underflow Protection - -Test that decrementing below zero fails with Underflow error. 
- -```rust -#[test] -fn decrement_fails_on_underflow() { - new_test_ext_with_counter(10).execute_with(|| { - // Attempt to decrement below zero should fail - assert_noop!( - CustomPallet::decrement(RuntimeOrigin::signed(1), 11), - Error::::Underflow - ); - }); -} -``` - -Verify underflow protection: - -```bash -cargo test --package pallet-custom decrement_fails_on_underflow -``` - -## Test Access Control - -Verify that origin checks work correctly and unauthorized access is prevented. - -### Test Root-Only Access - -Test that set_counter_value requires root origin and rejects signed origins. - -```rust -#[test] -fn set_counter_value_requires_root() { - new_test_ext().execute_with(|| { - let alice = 1u64; - - // When: non-root user tries to set counter - // Then: should fail with BadOrigin - assert_noop!( - CustomPallet::set_counter_value(RuntimeOrigin::signed(alice), 100), - DispatchError::BadOrigin - ); - - // But root should succeed - assert_ok!(CustomPallet::set_counter_value(RuntimeOrigin::root(), 100)); - assert_eq!(crate::CounterValue::::get(), 100); - }); -} -``` - -Test access control: - -```bash -cargo test --package pallet-custom set_counter_value_requires_root -``` - -## Test Event Emission - -Verify that events are emitted correctly with the right data. - -### Test Event Data - -The [`increment_works`](/parachains/customize-runtime/pallet-development/pallet-testing/#test-basic-increment) test (shown earlier) already demonstrates event testing by: - -1. Setting the block number to 1 to enable event emission. -2. Calling the dispatchable function. -3. Using `System::assert_last_event()` to verify the correct event was emitted with expected data. - -This pattern applies to all dispatchables that emit events. For a dedicated event-only test focusing on the `set_counter_value` function: - -Test that set_counter_value updates storage and emits correct event. 
- -```rust -#[test] -fn set_counter_value_works() { - new_test_ext().execute_with(|| { - // Set block number to 1 so events are registered - System::set_block_number(1); - - // Set counter to 100 - assert_ok!(CustomPallet::set_counter_value(RuntimeOrigin::root(), 100)); - assert_eq!(crate::CounterValue::::get(), 100); - - // Check event was emitted - System::assert_last_event(Event::CounterValueSet { new_value: 100 }.into()); - }); -} -``` - -Run the event test: - -```bash -cargo test --package pallet-custom set_counter_value_works -``` - -## Test Genesis Configuration - -Verify that genesis configuration works correctly. - -### Test Genesis Setup - -Test that genesis configuration correctly initializes counter and user interactions. - -```rust -#[test] -fn genesis_config_works() { - new_test_ext_with_interactions(42, vec![(1, 5), (2, 10)]).execute_with(|| { - // Check initial counter value - assert_eq!(crate::CounterValue::::get(), 42); - - // Check initial user interactions - assert_eq!(crate::UserInteractions::::get(1), 5); - assert_eq!(crate::UserInteractions::::get(2), 10); - }); -} -``` - -Test genesis configuration: - -```bash -cargo test --package pallet-custom genesis_config_works -``` - -## Run All Tests - -Now run all your tests together: - -```bash -cargo test --package pallet-custom -``` - -You should see all tests passing: - -
- $ cargo test --package pallet-custom - running 15 tests - test mock::__construct_runtime_integrity_test::runtime_integrity_tests ... ok - test mock::test_genesis_config_builds ... ok - test tests::decrement_fails_on_underflow ... ok - test tests::decrement_tracks_multiple_interactions ... ok - test tests::decrement_works ... ok - test tests::different_users_tracked_separately ... ok - test tests::genesis_config_works ... ok - test tests::increment_fails_on_overflow ... ok - test tests::increment_respects_max_value ... ok - test tests::increment_tracks_multiple_interactions ... ok - test tests::increment_works ... ok - test tests::mixed_increment_and_decrement_works ... ok - test tests::set_counter_value_requires_root ... ok - test tests::set_counter_value_respects_max_value ... ok - test tests::set_counter_value_works ... ok - - test result: ok. 15 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out -
- -!!!note "Mock Runtime Tests" - You'll notice 2 additional tests from the `mock` module: - - - `mock::__construct_runtime_integrity_test::runtime_integrity_tests` - Auto-generated test that validates runtime construction - - `mock::test_genesis_config_builds` - Validates that genesis configuration builds correctly - - These tests are automatically generated from your mock runtime setup and help ensure the test environment itself is valid. - -Congratulations! You have a well-tested pallet covering the essential testing patterns! - -These tests demonstrate comprehensive coverage including basic operations, error conditions, access control, event emission, state management, and genesis configuration. As you build more complex pallets, you'll apply these same patterns to test additional functionality. - -??? code "Full Test Suite Code" - Here's the complete `tests.rs` file for quick reference: - - ```rust - use crate::{mock::*, Error, Event}; - use frame::deps::frame_support::{assert_noop, assert_ok}; - use frame::deps::sp_runtime::DispatchError; - - #[test] - fn set_counter_value_works() { - new_test_ext().execute_with(|| { - // Set block number to 1 so events are registered - System::set_block_number(1); - - // Set counter to 100 - assert_ok!(CustomPallet::set_counter_value(RuntimeOrigin::root(), 100)); - assert_eq!(crate::CounterValue::::get(), 100); - - // Check event was emitted - System::assert_last_event(Event::CounterValueSet { new_value: 100 }.into()); - }); - } - - #[test] - fn set_counter_value_requires_root() { - new_test_ext().execute_with(|| { - // Attempt to set counter with non-root origin should fail - assert_noop!( - CustomPallet::set_counter_value(RuntimeOrigin::signed(1), 100), - DispatchError::BadOrigin - ); - }); - } - - #[test] - fn set_counter_value_respects_max_value() { - new_test_ext().execute_with(|| { - // Attempt to set counter above max value (1000) should fail - assert_noop!( - CustomPallet::set_counter_value(RuntimeOrigin::root(), 
1001), - Error::::CounterMaxValueExceeded - ); - - // Setting to exactly max value should work - assert_ok!(CustomPallet::set_counter_value(RuntimeOrigin::root(), 1000)); - assert_eq!(crate::CounterValue::::get(), 1000); - }); - } - - #[test] - fn increment_works() { - new_test_ext().execute_with(|| { - // Set block number to 1 so events are registered - System::set_block_number(1); - - let account = 1u64; - - // Increment by 50 - assert_ok!(CustomPallet::increment(RuntimeOrigin::signed(account), 50)); - assert_eq!(crate::CounterValue::::get(), 50); - - // Check event was emitted - System::assert_last_event( - Event::CounterIncremented { - new_value: 50, - who: account, - amount: 50, - } - .into(), - ); - - // Check user interactions were tracked - assert_eq!(crate::UserInteractions::::get(account), 1); - }); - } - - #[test] - fn increment_tracks_multiple_interactions() { - new_test_ext().execute_with(|| { - let account = 1u64; - - // Increment multiple times - assert_ok!(CustomPallet::increment(RuntimeOrigin::signed(account), 10)); - assert_ok!(CustomPallet::increment(RuntimeOrigin::signed(account), 20)); - assert_ok!(CustomPallet::increment(RuntimeOrigin::signed(account), 30)); - - // Check counter value - assert_eq!(crate::CounterValue::::get(), 60); - - // Check user interactions were tracked (should be 3) - assert_eq!(crate::UserInteractions::::get(account), 3); - }); - } - - #[test] - fn increment_fails_on_overflow() { - new_test_ext_with_counter(u32::MAX).execute_with(|| { - // Attempt to increment when at max u32 should fail - assert_noop!( - CustomPallet::increment(RuntimeOrigin::signed(1), 1), - Error::::Overflow - ); - }); - } - - #[test] - fn increment_respects_max_value() { - new_test_ext_with_counter(950).execute_with(|| { - // Incrementing past max value (1000) should fail - assert_noop!( - CustomPallet::increment(RuntimeOrigin::signed(1), 51), - Error::::CounterMaxValueExceeded - ); - - // Incrementing to exactly max value should work - 
assert_ok!(CustomPallet::increment(RuntimeOrigin::signed(1), 50)); - assert_eq!(crate::CounterValue::::get(), 1000); - }); - } - - #[test] - fn decrement_works() { - new_test_ext_with_counter(100).execute_with(|| { - // Set block number to 1 so events are registered - System::set_block_number(1); - - let account = 2u64; - - // Decrement by 30 - assert_ok!(CustomPallet::decrement(RuntimeOrigin::signed(account), 30)); - assert_eq!(crate::CounterValue::::get(), 70); - - // Check event was emitted - System::assert_last_event( - Event::CounterDecremented { - new_value: 70, - who: account, - amount: 30, - } - .into(), - ); - - // Check user interactions were tracked - assert_eq!(crate::UserInteractions::::get(account), 1); - }); - } - - #[test] - fn decrement_fails_on_underflow() { - new_test_ext_with_counter(10).execute_with(|| { - // Attempt to decrement below zero should fail - assert_noop!( - CustomPallet::decrement(RuntimeOrigin::signed(1), 11), - Error::::Underflow - ); - }); - } - - #[test] - fn decrement_tracks_multiple_interactions() { - new_test_ext_with_counter(100).execute_with(|| { - let account = 3u64; - - // Decrement multiple times - assert_ok!(CustomPallet::decrement(RuntimeOrigin::signed(account), 10)); - assert_ok!(CustomPallet::decrement(RuntimeOrigin::signed(account), 20)); - - // Check counter value - assert_eq!(crate::CounterValue::::get(), 70); - - // Check user interactions were tracked (should be 2) - assert_eq!(crate::UserInteractions::::get(account), 2); - }); - } - - #[test] - fn mixed_increment_and_decrement_works() { - new_test_ext_with_counter(50).execute_with(|| { - let account = 4u64; - - // Mix of increment and decrement - assert_ok!(CustomPallet::increment(RuntimeOrigin::signed(account), 25)); - assert_eq!(crate::CounterValue::::get(), 75); - - assert_ok!(CustomPallet::decrement(RuntimeOrigin::signed(account), 15)); - assert_eq!(crate::CounterValue::::get(), 60); - - assert_ok!(CustomPallet::increment(RuntimeOrigin::signed(account), 
10)); - assert_eq!(crate::CounterValue::::get(), 70); - - // Check user interactions were tracked (should be 3) - assert_eq!(crate::UserInteractions::::get(account), 3); - }); - } - - #[test] - fn different_users_tracked_separately() { - new_test_ext().execute_with(|| { - let account1 = 1u64; - let account2 = 2u64; - - // User 1 increments - assert_ok!(CustomPallet::increment(RuntimeOrigin::signed(account1), 10)); - assert_ok!(CustomPallet::increment(RuntimeOrigin::signed(account1), 10)); - - // User 2 decrements - assert_ok!(CustomPallet::decrement(RuntimeOrigin::signed(account2), 5)); - - // Check counter value (10 + 10 - 5 = 15) - assert_eq!(crate::CounterValue::::get(), 15); - - // Check user interactions are tracked separately - assert_eq!(crate::UserInteractions::::get(account1), 2); - assert_eq!(crate::UserInteractions::::get(account2), 1); - }); - } - - #[test] - fn genesis_config_works() { - new_test_ext_with_interactions(42, vec![(1, 5), (2, 10)]).execute_with(|| { - // Check initial counter value - assert_eq!(crate::CounterValue::::get(), 42); - - // Check initial user interactions - assert_eq!(crate::UserInteractions::::get(1), 5); - assert_eq!(crate::UserInteractions::::get(2), 10); - }); - } - ``` - -## Where to Go Next - -
- -- Guide __Add Your Custom Pallet to the Runtime__ - - --- - - Your pallet is tested and ready! Learn how to integrate it into your runtime. - - [:octicons-arrow-right-24: Integrate](/parachains/customize-runtime/pallet-development/add-to-runtime/) - -
- - ---- - -Page Title: Run a Parachain Network - -- Source (raw): https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/master/.ai/pages/parachains-testing-run-a-parachain-network.md -- Canonical (HTML): https://docs.polkadot.com/parachains/testing/run-a-parachain-network/ -- Summary: Quickly install and configure Zombienet to deploy and test Polkadot-based blockchain networks with this comprehensive getting-started guide. - -# Run a Parachain Network Using Zombienet - -## Introduction - -Zombienet is a robust testing framework designed for Polkadot SDK-based blockchain networks. It enables developers to efficiently deploy and test ephemeral blockchain environments on platforms like Kubernetes, Podman, and native setups. With its simple and versatile CLI, Zombienet provides an all-in-one solution for spawning networks, running tests, and validating performance. - -This guide will outline the different installation methods for Zombienet, provide step-by-step instructions for setting up on various platforms, and highlight essential provider-specific features and requirements. - -By following this guide, Zombienet will be up and running quickly, ready to streamline your blockchain testing and development workflows. - -## Install Zombienet - -Zombienet releases are available on the [Zombienet repository](https://github.com/paritytech/zombienet){target=\_blank}. - -Multiple options are available for installing Zombienet, depending on the user's preferences and the environment where it will be used. The following section will guide you through the installation process for each option. - -=== "Use the executable" - - Install Zombienet using executables by visiting the [latest release](https://github.com/paritytech/zombienet/releases){target=\_blank} page and selecting the appropriate asset for your operating system. You can download the executable and move it to a directory in your PATH. - - Each release includes executables for Linux and macOS. 
Executables are generated using [pkg](https://github.com/vercel/pkg){target=\_blank}, which allows the Zombienet CLI to operate without requiring Node.js to be installed. - - Then, ensure the downloaded file is executable: - - ```bash - chmod +x zombienet-macos-arm64 - ``` - - Finally, you can run the following command to check if the installation was successful. If so, it will display the version of the installed Zombienet: - - ```bash - ./zombienet-macos-arm64 version - ``` - - If you want to add the `zombienet` executable to your PATH, you can move it to a directory in your PATH, such as `/usr/local/bin`: - - ```bash - mv zombienet-macos-arm64 /usr/local/bin/zombienet - ``` - - Now you can refer to the `zombienet` executable directly. - - ```bash - zombienet version - ``` - -=== "Use Nix" - - For Nix users, the Zombienet repository provides a [`flake.nix`](https://github.com/paritytech/zombienet/blob/main/flake.nix){target=\_blank} file to install Zombienet making it easy to incorporate Zombienet into Nix-based projects. - - To install Zombienet utilizing Nix, users can run the following command, triggering the fetching of the flake and subsequently installing the Zombienet package: - - ```bash - nix run github:paritytech/zombienet/INSERT_ZOMBIENET_VERSION -- \ - spawn INSERT_ZOMBIENET_CONFIG_FILE_NAME.toml - ``` - - Replace the `INSERT_ZOMBIENET_VERSION` with the desired version of Zombienet and the `INSERT_ZOMBIENET_CONFIG_FILE_NAME` with the name of the configuration file you want to use. - - To run the command above, you need to have [Flakes](https://nixos.wiki/wiki/Flakes#Enable_flakes){target=\_blank} enabled. - - Alternatively, you can also include the Zombienet binary in the PATH for the current shell using the following command: - - ```bash - nix shell github:paritytech/zombienet/INSERT_ZOMBIENET_VERSION - ``` - -=== "Use Docker" - - Zombienet can also be run using Docker. 
The Zombienet repository provides a Docker image that can be used to run the Zombienet CLI. To run Zombienet using Docker, you can use the following command: - - ```bash - docker run -it --rm \ - -v $(pwd):/home/nonroot/zombie-net/host-current-files \ - paritytech/zombienet - ``` - - The command above will run the Zombienet CLI inside a Docker container and mount the current directory to the `/home/nonroot/zombie-net/host-current-files` directory. This allows Zombienet to access the configuration file and other files in the current directory. If you want to mount a different directory, replace `$(pwd)` with the desired directory path. - - Inside the Docker container, you can run the Zombienet CLI commands. First, you need to set up Zombienet to download the necessary binaries: - - ```bash - npm run zombie -- setup polkadot polkadot-parachain - ``` - - After that, you need to add those binaries to the PATH: - - ```bash - export PATH=/home/nonroot/zombie-net:$PATH - ``` - - Finally, you can run the Zombienet CLI commands. For example, to spawn a network using a specific configuration file, you can run the following command: - - ```bash - npm run zombie -- -p native spawn host-current-files/minimal.toml - ``` - - The command above mounts the current directory to the `/workspace` directory inside the Docker container, allowing Zombienet to access the configuration file and other files in the current directory. If you want to mount a different directory, replace `$(pwd)` with the desired directory path. - -## Providers +## Providers Zombienet supports different backend providers for running the nodes. At this moment, [Kubernetes](https://kubernetes.io/){target=\_blank}, [Podman](https://podman.io/){target=\_blank}, and local providers are supported, which can be declared as `kubernetes`, `podman`, or `native`, respectively. 
@@ -8973,6 +8391,142 @@ The system maintains precise conversion mechanisms between: This ensures accurate fee calculation while maintaining compatibility with existing Ethereum tools and workflows. +--- + +Page Title: Unit Test Pallets + +- Source (raw): https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/master/.ai/pages/parachains-customize-runtime-pallet-development-pallet-testing.md +- Canonical (HTML): https://docs.polkadot.com/parachains/customize-runtime/pallet-development/pallet-testing/ +- Summary: Learn how to efficiently test pallets in the Polkadot SDK, ensuring the reliability and security of your pallet's operations. + +# Unit Test Pallets + +## Introduction + +Unit testing in the Polkadot SDK helps ensure that the functions provided by a pallet behave as expected. It also confirms that data and events associated with a pallet are processed correctly during interactions. The Polkadot SDK offers a set of APIs to create a test environment to simulate runtime and mock transaction execution for extrinsics and queries. + +To begin unit testing, you must first set up a mock runtime that simulates blockchain behavior, incorporating the necessary pallets. For a deeper understanding, consult the [Mock Runtime](/parachains/customize-runtime/pallet-development/mock-runtime/){target=\_blank} guide. + +## Writing Unit Tests + +Once the mock runtime is in place, the next step is to write unit tests that evaluate the functionality of your pallet. Unit tests allow you to test specific pallet features in isolation, ensuring that each function behaves correctly under various conditions. These tests typically reside in your pallet module's `tests.rs` file. + +Unit tests in the Polkadot SDK use the Rust testing framework, and the mock runtime you've defined earlier will serve as the test environment. Below are the typical steps involved in writing unit tests for a pallet.
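As context for those steps, the core pattern is a test-externalities object that owns the mock storage and runs your test closure against it. The following is a deliberately simplified, self-contained stand-in for that pattern — in the real framework, `new_test_ext()` builds genesis storage for the mock runtime, and the closure takes no arguments because storage is installed thread-locally:

```rust
use std::collections::HashMap;

// Simplified stand-in for a pallet test environment: it owns mock
// storage and runs a test closure against it.
struct TestExternalities {
    storage: HashMap<String, u32>,
}

impl TestExternalities {
    // Run `f` against the mock storage, returning whatever it returns.
    fn execute_with<R>(&mut self, f: impl FnOnce(&mut HashMap<String, u32>) -> R) -> R {
        f(&mut self.storage)
    }
}

// Stand-in for the mock runtime's `new_test_ext()`: build genesis state.
fn new_test_ext() -> TestExternalities {
    TestExternalities { storage: HashMap::new() }
}

fn main() {
    new_test_ext().execute_with(|storage| {
        // "Dispatch" a call by mutating state, then assert on the result.
        storage.insert("counter".to_string(), 42);
        assert_eq!(storage.get("counter"), Some(&42));
    });
}
```

The real API has the same shape: construct fresh externalities per test, then execute assertions inside `execute_with` so every storage read and write happens against the mock state.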
+ +The tests confirm that: + +- **Pallets initialize correctly**: At the start of each test, the system should initialize with block number 0, and the pallets should be in their default states. +- **Pallets modify each other's state**: The second test shows how one pallet can trigger changes in another pallet's internal state, confirming proper cross-pallet interactions. +- **State transitions between blocks are seamless**: By simulating block transitions, the tests validate that the runtime responds correctly to changes in the block number. + +Testing pallet interactions within the runtime is critical for ensuring the blockchain behaves as expected under real-world conditions. Writing integration tests allows validation of how pallets function together, preventing issues that might arise when the system is fully assembled. + +This approach provides a comprehensive view of the runtime's functionality, ensuring the blockchain is stable and reliable. + +### Test Initialization + +Each test starts by initializing the runtime environment, typically using the `new_test_ext()` function, which sets up the mock storage and environment. + +```rust +#[test] +fn test_pallet_functionality() { + new_test_ext().execute_with(|| { + // Test logic goes here + }); +} +``` + +### Function Call Testing + +Call the pallet's extrinsics or functions to simulate user interaction or internal logic. Use the `assert_ok!` macro to check for successful execution and `assert_err!` to verify that errors are correctly handled. 
+ +```rust +#[test] +fn it_works_for_valid_input() { + new_test_ext().execute_with(|| { + // Call an extrinsic or function + assert_ok!(TemplateModule::some_function(RuntimeOrigin::signed(1), valid_param)); + }); +} + +#[test] +fn it_fails_for_invalid_input() { + new_test_ext().execute_with(|| { + // Call an extrinsic with invalid input and expect an error + assert_err!( + TemplateModule::some_function(RuntimeOrigin::signed(1), invalid_param), + Error::<Test>::InvalidInput + ); + }); +} +``` + +### Storage Testing + +After calling a function or extrinsic in your pallet, it's essential to verify that the state changes in the pallet's storage match the expected behavior to ensure data is updated correctly based on the actions taken. + +The following example shows how to test the storage behavior before and after the function call: + +```rust +#[test] +fn test_storage_update_on_extrinsic_call() { + new_test_ext().execute_with(|| { + // Check the initial storage state (before the call) + assert_eq!(Something::<Test>::get(), None); + + // Dispatch a signed extrinsic, which modifies storage + assert_ok!(TemplateModule::do_something(RuntimeOrigin::signed(1), 42)); + + // Validate that the storage has been updated as expected (after the call) + assert_eq!(Something::<Test>::get(), Some(42)); + }); +} + +``` + +### Event Testing + +It's also crucial to test the events that your pallet emits during execution. By default, events generated in a pallet using the [`#[pallet::generate_deposit]`](https://paritytech.github.io/polkadot-sdk/master/frame_support/pallet_macros/attr.generate_deposit.html){target=\_blank} macro are stored under the system's event storage key (system/events) as [`EventRecord`](https://paritytech.github.io/polkadot-sdk/master/frame_system/struct.EventRecord.html){target=\_blank} entries.
These can be accessed using [`System::events()`](https://paritytech.github.io/polkadot-sdk/master/frame_system/pallet/struct.Pallet.html#method.events){target=\_blank} or verified with specific helper methods provided by the system pallet, such as [`assert_has_event`](https://paritytech.github.io/polkadot-sdk/master/frame_system/pallet/struct.Pallet.html#method.assert_has_event){target=\_blank} and [`assert_last_event`](https://paritytech.github.io/polkadot-sdk/master/frame_system/pallet/struct.Pallet.html#method.assert_last_event){target=\_blank}. + +Here's an example of testing events in a mock runtime: + +```rust +#[test] +fn it_emits_events_on_success() { + new_test_ext().execute_with(|| { + // Call an extrinsic or function + assert_ok!(TemplateModule::some_function(RuntimeOrigin::signed(1), valid_param)); + + // Verify that the expected event was emitted + assert!(System::events().iter().any(|record| { + record.event == RuntimeEvent::TemplateModule(TemplateEvent::SomeEvent) + })); + }); +} +``` + +Some key considerations are: + +- **Block number**: Events are not emitted on the genesis block, so you need to set the block number using [`System::set_block_number()`](https://paritytech.github.io/polkadot-sdk/master/frame_system/pallet/struct.Pallet.html#method.set_block_number){target=\_blank} to ensure events are triggered. +- **Converting events**: Use `.into()` when instantiating your pallet's event to convert it into a generic event type, as required by the system's event storage. + +## Where to Go Next + +- Dive into the full implementation of the [`mock.rs`](https://github.com/paritytech/polkadot-sdk/blob/master/templates/solochain/pallets/template/src/mock.rs){target=\_blank} and [`tests.rs`](https://github.com/paritytech/polkadot-sdk/blob/master/templates/solochain/pallets/template/src/tests.rs){target=\_blank} files in the [Solochain Template](https://github.com/paritytech/polkadot-sdk/tree/master/templates/solochain){target=\_blank}. + +
+ +- Guide __Benchmarking__ + + --- + + Explore methods to measure the performance and execution cost of your pallet. + + [:octicons-arrow-right-24: Reference](/parachains/customize-runtime/pallet-development/benchmark-pallet/) + +
+ + --- Page Title: Unlock a Parachain diff --git a/.ai/pages/parachains-customize-runtime-pallet-development-benchmark-pallet.md b/.ai/pages/parachains-customize-runtime-pallet-development-benchmark-pallet.md index 748981792..28b0799e1 100644 --- a/.ai/pages/parachains-customize-runtime-pallet-development-benchmark-pallet.md +++ b/.ai/pages/parachains-customize-runtime-pallet-development-benchmark-pallet.md @@ -1,305 +1,454 @@ --- -title: Benchmarking FRAME Pallets -description: Learn how to use FRAME's benchmarking framework to measure extrinsic execution costs and provide accurate weights for on-chain computations. +title: Benchmark Your Pallet +description: Learn how to benchmark extrinsics in your custom pallet to generate precise weight calculations suitable for production use. categories: Parachains url: https://docs.polkadot.com/parachains/customize-runtime/pallet-development/benchmark-pallet/ --- -# Benchmarking - ## Introduction -Benchmarking is a critical component of developing efficient and secure blockchain runtimes. In the Polkadot ecosystem, accurately benchmarking your custom pallets ensures that each extrinsic has a precise [weight](/reference/glossary/#weight){target=\_blank}, representing its computational and storage demands. This process is vital for maintaining the blockchain's performance and preventing potential vulnerabilities, such as Denial of Service (DoS) attacks. +Benchmarking is the process of measuring the computational resources (execution time and storage) required by your pallet's extrinsics. Accurate [weight](https://paritytech.github.io/polkadot-sdk/master/frame_support/weights/index.html){target=\_blank} calculations are essential for ensuring your blockchain can process transactions efficiently while protecting against denial-of-service attacks. 
-The Polkadot SDK leverages the [FRAME](/reference/glossary/#frame-framework-for-runtime-aggregation-of-modularized-entities){target=\_blank} benchmarking framework, offering tools to measure and assign weights to extrinsics. These weights help determine the maximum number of transactions or system-level calls processed within a block. This guide covers how to use FRAME's [benchmarking framework](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/v2/index.html){target=\_blank}, from setting up your environment to writing and running benchmarks for your custom pallets. You'll understand how to generate accurate weights by the end, ensuring your runtime remains performant and secure. +This guide demonstrates how to benchmark a pallet and incorporate the resulting weight values. This example uses the custom counter pallet from previous guides in this series, but you can replace it with the code from another pallet if desired. -## The Case for Benchmarking +## Prerequisites -Benchmarking helps validate that the required execution time for different functions is within reasonable boundaries to ensure your blockchain runtime can handle transactions efficiently and securely. By accurately measuring the weight of each extrinsic, you can prevent service interruptions caused by computationally intensive calls that exceed block time limits. Without benchmarking, runtime performance could be vulnerable to DoS attacks, where malicious users exploit functions with unoptimized weights. +Before you begin, ensure you have: -Benchmarking also ensures predictable transaction fees. Weights derived from benchmark tests accurately reflect the resource usage of function calls, allowing fair fee calculation. This approach discourages abuse while maintaining network reliability. +- A pallet to benchmark. 
If you followed the pallet development tutorials, you can use the counter pallet from the [Create a Pallet](/parachains/customize-runtime/pallet-development/create-a-pallet/){target=\_blank} guide. You can also follow these steps to benchmark a custom pallet by updating the `benchmarking.rs` functions, and their usage in later steps, to calculate weights using your specific pallet's functionality. +- Basic understanding of [computational complexity](https://en.wikipedia.org/wiki/Computational_complexity){target=\_blank}. +- Familiarity with [Rust's testing framework](https://doc.rust-lang.org/book/ch11-00-testing.html){target=\_blank}. +- Familiarity with setting up the Polkadot Omni Node and [Polkadot Chain Spec Builder](https://crates.io/crates/staging-chain-spec-builder){target=\_blank}. Refer to the [Set Up a Parachain Template](/parachains/launch-a-parachain/set-up-the-parachain-template/){target=\_blank} guide for instructions if needed. -### Benchmarking and Weight +## Create the Benchmarking Module -In Polkadot SDK-based chains, weight quantifies the computational effort needed to process transactions. This weight includes factors such as: +Create a new file `benchmarking.rs` in your pallet's `src` directory and add the following code: -- Computational complexity. -- Storage complexity (proof size). -- Database reads and writes. -- Hardware specifications. +```rust title="pallets/pallet-custom/src/benchmarking.rs" +#![cfg(feature = "runtime-benchmarks")] -Benchmarking uses real-world testing to simulate worst-case scenarios for extrinsics. The framework generates a linear model for weight calculation by running multiple iterations with varied parameters. These worst-case weights ensure blocks remain within execution limits, enabling the runtime to maintain throughput under varying loads. Excess fees can be refunded if a call uses fewer resources than expected, offering users a fair cost model.
- -Because weight is a generic unit of measurement based on computation time for a specific physical machine, the weight of any function can change based on the specifications of hardware used for benchmarking. By modeling the expected weight of each runtime function, the blockchain can calculate the number of transactions or system-level calls it can execute within a certain period. +use super::*; +use frame::deps::frame_benchmarking::v2::*; +use frame::benchmarking::prelude::RawOrigin; -Within FRAME, each function call that is dispatched must have a `#[pallet::weight]` annotation that can return the expected weight for the worst-case scenario execution of that function given its inputs: +#[benchmarks] +mod benchmarks { + use super::*; -```rust hl_lines="2" -#[pallet::call_index(0)] -#[pallet::weight(T::WeightInfo::do_something())] -pub fn do_something(origin: OriginFor<T>) -> DispatchResultWithPostInfo { Ok(()) } -``` + #[benchmark] + fn set_counter_value() { + let new_value: u32 = 100; -The `WeightInfo` file is automatically generated during benchmarking. Based on these tests, this file provides accurate weights for each extrinsic. + #[extrinsic_call] + _(RawOrigin::Root, new_value); -## Benchmarking Process + assert_eq!(CounterValue::<T>::get(), new_value); + } -Benchmarking a pallet involves the following steps: + #[benchmark] + fn increment() { + let caller: T::AccountId = whitelisted_caller(); + let amount: u32 = 50; -1. Creating a `benchmarking.rs` file within your pallet's structure. -2. Writing a benchmarking test for each extrinsic. -3. Executing the benchmarking tool to calculate weights based on performance metrics. + #[extrinsic_call] + _(RawOrigin::Signed(caller.clone()), amount); -The benchmarking tool runs multiple iterations to model worst-case execution times and determine the appropriate weight. By default, the benchmarking pipeline is deactivated. To activate it, compile your runtime with the `runtime-benchmarks` feature flag.
+ + assert_eq!(CounterValue::<T>::get(), amount); + assert_eq!(UserInteractions::<T>::get(caller), 1); + } -### Prepare Your Environment + #[benchmark] + fn decrement() { + // First, set the counter to a non-zero value + CounterValue::<T>::put(100); -Install the [`frame-omni-bencher`](https://crates.io/crates/frame-omni-bencher){target=\_blank} command-line tool: + let caller: T::AccountId = whitelisted_caller(); + let amount: u32 = 30; -```bash -cargo install frame-omni-bencher -``` + #[extrinsic_call] + _(RawOrigin::Signed(caller.clone()), amount); -Before writing benchmark tests, you need to ensure the `frame-benchmarking` crate is included in your pallet's `Cargo.toml` similar to the following: + assert_eq!(CounterValue::<T>::get(), 70); + assert_eq!(UserInteractions::<T>::get(caller), 1); + } -```toml title="Cargo.toml" -frame-benchmarking = { version = "37.0.0", default-features = false } + impl_benchmark_test_suite!(Pallet, crate::mock::new_test_ext(), crate::mock::Test); +} ``` -You must also ensure that you add the `runtime-benchmarks` feature flag as follows under the `[features]` section of your pallet's `Cargo.toml`: +This module contains all the [benchmarking definitions](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/v2/index.html){target=\_blank} for your pallet. If you are benchmarking a different pallet, update the testing logic as needed to test your pallet's functionality.
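When the benchmarking tool later executes these `#[benchmark]` functions, it times them across a range of input values and fits a linear model, keeping worst-case estimates so blocks never exceed their limits. The following self-contained sketch illustrates that fitting step with hypothetical samples; it is an illustration of the idea, not the framework's actual code:

```rust
// Toy illustration: fit weight(n) = base + slope * n by least squares
// over hypothetical (input size, measured ref_time) samples.
fn fit_linear(samples: &[(u64, u64)]) -> (f64, f64) {
    let n = samples.len() as f64;
    let sx: f64 = samples.iter().map(|&(x, _)| x as f64).sum();
    let sy: f64 = samples.iter().map(|&(_, y)| y as f64).sum();
    let sxx: f64 = samples.iter().map(|&(x, _)| (x as f64) * (x as f64)).sum();
    let sxy: f64 = samples.iter().map(|&(x, y)| (x as f64) * (y as f64)).sum();
    let slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    let base = (sy - slope * sx) / n;
    (base, slope)
}

fn main() {
    // Perfectly linear toy data: weight(n) = 10_000 + 250 * n.
    let samples: [(u64, u64); 4] = [(0, 10_000), (10, 12_500), (20, 15_000), (50, 22_500)];
    let (base, slope) = fit_linear(&samples);
    assert!((base - 10_000.0).abs() < 1e-6);
    assert!((slope - 250.0).abs() < 1e-9);

    // The weight assigned to an extrinsic is the model evaluated at the
    // worst case, e.g. the largest supported input size.
    let worst_case = base + slope * 100.0;
    assert!(worst_case > 0.0);
}
```

This is why benchmarks should exercise the most expensive path of each extrinsic: the fitted, worst-case model is what the runtime will charge for every call.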
+ +## Define the Weight Trait + +Add a `weights` module to your pallet that defines the `WeightInfo` trait using the following code: + +```rust title="pallets/pallet-custom/src/weights.rs" +#[frame::pallet] +pub mod pallet { + use frame::prelude::*; + pub use weights::WeightInfo; + + pub mod weights { + use frame::prelude::*; + + pub trait WeightInfo { + fn set_counter_value() -> Weight; + fn increment() -> Weight; + fn decrement() -> Weight; + } + + impl WeightInfo for () { + fn set_counter_value() -> Weight { + Weight::from_parts(10_000, 0) + } + fn increment() -> Weight { + Weight::from_parts(15_000, 0) + } + fn decrement() -> Weight { + Weight::from_parts(15_000, 0) + } + } + } -```toml title="Cargo.toml" -runtime-benchmarks = [ - "frame-benchmarking/runtime-benchmarks", - "frame-support/runtime-benchmarks", - "frame-system/runtime-benchmarks", - "sp-runtime/runtime-benchmarks", -] + // ... rest of pallet +} ``` -Lastly, ensure that `frame-benchmarking` is included in `std = []`: +The `WeightInfo for ()` implementation provides placeholder weights for development. If you are using a different pallet, update the `weights` module to use your pallet's function names. -```toml title="Cargo.toml" -std = [ - # ... - "frame-benchmarking?/std", - # ... -] -``` +## Add WeightInfo to Config -Once complete, you have the required dependencies for writing benchmark tests for your pallet. +Update your pallet's `Config` trait to include `WeightInfo` by adding the following code: -### Write Benchmark Tests +```rust title="pallets/pallet-custom/src/lib.rs" +#[pallet::config] +pub trait Config: frame_system::Config { + type RuntimeEvent: From<Event<Self>> + IsType<<Self as frame_system::Config>::RuntimeEvent>; -
Your directory structure should look similar to the following: + #[pallet::constant] + type CounterMaxValue: Get; -``` -my-pallet/ -├── src/ -│ ├── lib.rs # Main pallet implementation -│ └── benchmarking.rs # Benchmarking -└── Cargo.toml + type WeightInfo: weights::WeightInfo; +} ``` -With the directory structure set, you can use the [`polkadot-sdk-parachain-template`](https://github.com/paritytech/polkadot-sdk-parachain-template/tree/master/pallets){target=\_blank} to get started as follows: +The [`WeightInfo`](https://paritytech.github.io/polkadot-sdk/master/frame_support/weights/trait.WeightInfo.html){target=\_blank} trait provides an abstraction layer that allows weights to be swapped at runtime configuration. By making `WeightInfo` an associated type in the `Config` trait, you will enable each runtime that uses your pallet to specify which weight implementation to use. -```rust title="benchmarking.rs (starter template)" -//! Benchmarking setup for pallet-template -#![cfg(feature = "runtime-benchmarks")] +## Update Extrinsic Weight Annotations -use super::*; -use frame_benchmarking::v2::*; +Replace the placeholder weights in your extrinsics with calls to the `WeightInfo` trait by adding the following code: -#[benchmarks] -mod benchmarks { - use super::*; - #[cfg(test)] - use crate::pallet::Pallet as Template; - use frame_system::RawOrigin; - - #[benchmark] - fn do_something() { - let caller: T::AccountId = whitelisted_caller(); - #[extrinsic_call] - do_something(RawOrigin::Signed(caller), 100); - - assert_eq!(Something::::get().map(|v| v.block_number), Some(100u32.into())); - } - - #[benchmark] - fn cause_error() { - Something::::put(CompositeStruct { block_number: 100u32.into() }); - let caller: T::AccountId = whitelisted_caller(); - #[extrinsic_call] - cause_error(RawOrigin::Signed(caller)); - - assert_eq!(Something::::get().map(|v| v.block_number), Some(101u32.into())); - } - - impl_benchmark_test_suite!(Template, crate::mock::new_test_ext(), 
crate::mock::Test); } ``` -In your benchmarking tests, employ these best practices: +```rust title="pallets/pallet-custom/src/lib.rs" +#[pallet::call] +impl<T: Config> Pallet<T> { + #[pallet::call_index(0)] + #[pallet::weight(T::WeightInfo::set_counter_value())] + pub fn set_counter_value(origin: OriginFor<T>, new_value: u32) -> DispatchResult { + // ... implementation + } + + #[pallet::call_index(1)] + #[pallet::weight(T::WeightInfo::increment())] + pub fn increment(origin: OriginFor<T>, amount: u32) -> DispatchResult { + // ... implementation + } + + #[pallet::call_index(2)] + #[pallet::weight(T::WeightInfo::decrement())] + pub fn decrement(origin: OriginFor<T>, amount: u32) -> DispatchResult { + // ... implementation + } +} ``` +By calling `T::WeightInfo::function_name()` instead of using hardcoded `Weight::from_parts()` values, your extrinsics automatically use whichever weight implementation is configured in the runtime. You can switch between placeholder weights for testing and benchmarked weights for production easily, without changing any pallet code. + +If you are using a different pallet, be sure to update the functions for `WeightInfo` accordingly. + +## Include the Benchmarking Module + +At the top of your `lib.rs`, add the module declaration by adding the following code: -- **Write custom testing functions**: The function `do_something` in the preceding example is a placeholder. Similar to writing unit tests, you must write custom functions to benchmark test your extrinsics. Access the mock runtime and use functions such as `whitelisted_caller()` to sign transactions and facilitate testing. -- **Use the `#[extrinsic_call]` macro**: This macro is used when calling the extrinsic itself and is a required part of a benchmarking function. See the [`extrinsic_call`](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/v2/index.html#extrinsic_call-and-block){target=\_blank} docs for more details.
-- **Validate extrinsic behavior**: The `assert_eq` expression ensures that the extrinsic is working properly within the benchmark context. +```rust title="pallets/pallet-custom/src/lib.rs" +#![cfg_attr(not(feature = "std"), no_std)] -Add the `benchmarking` module to your pallet. In the pallet `lib.rs` file add the following: +extern crate alloc; +use alloc::vec::Vec; + +pub use pallet::*; -```rust #[cfg(feature = "runtime-benchmarks")] mod benchmarking; + +// Additional pallet code ``` -### Add Benchmarks to Runtime +The `#[cfg(feature = "runtime-benchmarks")]` attribute ensures that benchmarking code is only compiled when explicitly needed to keep your production runtime efficient. -Before running the benchmarking tool, you must integrate benchmarks with your runtime as follows: +## Configure Pallet Dependencies -1. Navigate to your `runtime/src` directory and check if a `benchmarks.rs` file exists. If not, create one. This file will contain the macro that registers all pallets for benchmarking along with their respective configurations: +Update your pallet's `Cargo.toml` to enable the benchmarking feature by adding the following code: - ```rust title="benchmarks.rs" - frame_benchmarking::define_benchmarks!( - [frame_system, SystemBench::<Runtime>] - [pallet_parachain_template, TemplatePallet] - [pallet_balances, Balances] - [pallet_session, SessionBench::<Runtime>] - [pallet_timestamp, Timestamp] - [pallet_message_queue, MessageQueue] - [pallet_sudo, Sudo] - [pallet_collator_selection, CollatorSelection] - [cumulus_pallet_parachain_system, ParachainSystem] - [cumulus_pallet_xcmp_queue, XcmpQueue] - ); +```toml title="pallets/pallet-custom/Cargo.toml" +[dependencies] +codec = { features = ["derive"], workspace = true } +scale-info = { features = ["derive"], workspace = true } +frame = { features = ["experimental", "runtime"], workspace = true } + +[features] +default = ["std"] +runtime-benchmarks = [ + "frame/runtime-benchmarks", +] +std = [ + "codec/std", + "scale-info/std", + 
"frame/std", +] +``` + +The Cargo feature flag system lets you conditionally compile code based on which features are enabled. By defining a `runtime-benchmarks` feature that cascades to FRAME's benchmarking features, you create a clean way to build your pallet with or without benchmarking support, ensuring all necessary dependencies are available when needed but excluded from production builds. + +## Update Mock Runtime + +Add the `WeightInfo` type to your test configuration in `mock.rs` by adding the following code: + +```rust title="pallets/pallet-custom/src/mock.rs" +impl pallet_custom::Config for Test { + type RuntimeEvent = RuntimeEvent; + type CounterMaxValue = ConstU32<1000>; + type WeightInfo = (); +} +``` + +In your mock runtime for testing, use the placeholder `()` implementation of `WeightInfo`, since unit tests focus on verifying functional correctness rather than performance. + +## Configure Runtime Benchmarking + +To execute benchmarks, your pallet must be integrated into the runtime's benchmarking infrastructure. Follow these steps to update the runtime configuration: + +1. **Update `runtime/Cargo.toml`**: Add your pallet to the runtime's `runtime-benchmarks` feature as follows: + + ```toml title="runtime/Cargo.toml" + runtime-benchmarks = [ + "cumulus-pallet-parachain-system/runtime-benchmarks", + "hex-literal", + "pallet-parachain-template/runtime-benchmarks", + "polkadot-sdk/runtime-benchmarks", + "pallet-custom/runtime-benchmarks", + ] ``` - For example, to add a new pallet named `pallet_parachain_template` for benchmarking, include it in the macro as shown: - ```rust title="benchmarks.rs" hl_lines="3" - frame_benchmarking::define_benchmarks!( + When you build the runtime with `--features runtime-benchmarks`, this configuration ensures all necessary benchmarking code across all pallets (including yours) is included. + +2. 
**Update runtime configuration**: Using the placeholder implementation, run development benchmarks as follows: + + ```rust title="runtime/src/configs/mod.rs" + impl pallet_custom::Config for Runtime { + type RuntimeEvent = RuntimeEvent; + type CounterMaxValue = ConstU32<1000>; + type WeightInfo = (); + } + ``` + +3. **Register benchmarks**: Add your pallet to the benchmark list in `runtime/src/benchmarks.rs` as follows: + + ```rust title="runtime/src/benchmarks.rs" + polkadot_sdk::frame_benchmarking::define_benchmarks!( [frame_system, SystemBench::<Runtime>] - [pallet_parachain_template, TemplatePallet] + [pallet_balances, Balances] + // ... other pallets + [pallet_custom, CustomPallet] ); ``` - !!!warning "Updating `define_benchmarks!` macro is required" - Any pallet that needs to be benchmarked must be included in the [`define_benchmarks!`](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/macro.define_benchmarks.html){target=\_blank} macro. The CLI will only be able to access and benchmark pallets that are registered here. + The [`define_benchmarks!`](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/macro.define_benchmarks.html){target=\_blank} macro creates the infrastructure that allows the benchmarking CLI tool to discover and execute your pallet's benchmarks. -2. Check your runtime's `lib.rs` file to ensure the `benchmarks` module is imported. The import should look like this: +## Test Benchmark Compilation - ```rust title="lib.rs" - #[cfg(feature = "runtime-benchmarks")] - mod benchmarks; - ``` +Run the following command to verify your benchmarks compile and run as tests: - The `runtime-benchmarks` feature gate ensures benchmark tests are isolated from production runtime code. +```bash +cargo test -p pallet-custom --features runtime-benchmarks +``` -3.
Enable runtime benchmarking for your pallet in `runtime/Cargo.toml`: +You will see terminal output similar to the following as your benchmark tests pass: - ```toml - runtime-benchmarks = [ - # ... - "pallet_parachain_template/runtime-benchmarks", - ] +
+ cargo test -p pallet-custom --features runtime-benchmarks + test benchmarking::benchmarks::bench_set_counter_value ... ok + test benchmarking::benchmarks::bench_increment ... ok + test benchmarking::benchmarks::bench_decrement ... ok + +
- ``` +The `impl_benchmark_test_suite!` macro generates unit tests for each benchmark. Running these tests verifies that your benchmarks compile correctly, execute without panicking, and pass their assertions, catching issues early before building the entire runtime. -### Run Benchmarks +## Build the Runtime with Benchmarks -You can now compile your runtime with the `runtime-benchmarks` feature flag. This feature flag is crucial as the benchmarking tool will look for this feature being enabled to know when it should run benchmark tests. Follow these steps to compile the runtime with benchmarking enabled: +Compile the runtime with benchmarking enabled to generate the Wasm binary using the following command: -1. Run `build` with the feature flag included: +```bash +cargo build --release --features runtime-benchmarks +``` - ```bash - cargo build --features runtime-benchmarks --release - ``` +This command produces the runtime WASM file needed for benchmarking, typically located at: `target/release/wbuild/parachain-template-runtime/parachain_template_runtime.wasm` -2. Create a `weights.rs` file in your pallet's `src/` directory. This file will store the auto-generated weight calculations: +The build includes all the benchmarking infrastructure and special host functions needed for measurement. The resulting WASM runtime contains your benchmark code and can communicate with the benchmarking tool's execution environment. You'll create a different build later for operating your chain in production. - ```bash - touch weights.rs - ``` +## Install the Benchmarking Tool -3. Before running the benchmarking tool, you'll need a template file that defines how weight information should be formatted. 
Download the official template from the Polkadot SDK repository and save it in your project folders for future use: +Install the `frame-omni-bencher` CLI tool using the following command: - ```bash - curl https://raw.githubusercontent.com/paritytech/polkadot-sdk/refs/tags/polkadot-stable2412/substrate/.maintain/frame-weight-template.hbs \ - --output ./pallets/benchmarking/frame-weight-template.hbs - ``` +```bash +cargo install frame-omni-bencher --locked +``` + +[`frame-omni-bencher`](https://paritytech.github.io/polkadot-sdk/master/frame_omni_bencher/index.html){target=\_blank} is the official Polkadot SDK tool designed explicitly for FRAME pallet benchmarking. It provides a standardized way to execute benchmarks, measure execution times and storage operations, and generate properly formatted weight files with full integration into the FRAME weight system. + +## Download the Weight Template -4. Run the benchmarking tool to measure extrinsic weights: +Download the official weight template file using the following commands: + +```bash +curl -L https://raw.githubusercontent.com/paritytech/polkadot-sdk/refs/tags/polkadot-stable2412/substrate/.maintain/frame-weight-template.hbs \ +--output ./pallets/pallet-custom/frame-weight-template.hbs +``` + +The weight template is a Handlebars file that transforms raw benchmark data into a correctly formatted Rust source file. It defines the structure of the generated `weights.rs` file, including imports, trait definitions, documentation comments, and formatting. Using the official template ensures your weight files follow the Polkadot SDK conventions and include all necessary metadata, such as benchmark execution parameters, storage operation counts, and hardware information. 
+ +## Execute Benchmarks + +Run benchmarks for your pallet to generate weight files using the following commands: + +```bash +frame-omni-bencher v1 benchmark pallet \ + --runtime ./target/release/wbuild/parachain-template-runtime/parachain_template_runtime.wasm \ + --pallet pallet_custom \ + --extrinsic "" \ + --template ./pallets/pallet-custom/frame-weight-template.hbs \ + --output ./pallets/pallet-custom/src/weights.rs +``` + +Benchmarks execute against the compiled WASM runtime rather than native code because WASM is what actually runs in production on the blockchain. WASM execution can have different performance characteristics than native code due to compilation and sandboxing overhead, so benchmarking against the WASM ensures your weight measurements reflect real-world conditions. + +??? note "Additional customization" + + You can customize benchmark execution with additional parameters for more detailed measurements, as shown in the sample code below: ```bash frame-omni-bencher v1 benchmark pallet \ - --runtime INSERT_PATH_TO_WASM_RUNTIME \ - --pallet INSERT_NAME_OF_PALLET \ - --extrinsic "" \ - --template ./frame-weight-template.hbs \ - --output weights.rs + --runtime ./target/release/wbuild/parachain-template-runtime/parachain_template_runtime.wasm \ + --pallet pallet_custom \ + --extrinsic "" \ + --steps 50 \ + --repeat 20 \ + --template ./pallets/pallet-custom/frame-weight-template.hbs \ + --output ./pallets/pallet-custom/src/weights.rs ``` + + - **`--steps 50`**: Number of different input values to test when using linear components (default: 50). More steps provide finer granularity for detecting complexity trends but increase benchmarking time. + - **`--repeat 20`**: Number of repetitions for each measurement (default: 20). More repetitions improve statistical accuracy by averaging out variance, reducing the impact of system noise, and providing more reliable weight estimates. + - **`--heap-pages 4096`**: WASM heap pages allocation. 
Affects available memory during execution.
+    - **`--wasm-execution compiled`**: WASM execution method. Use `compiled` for performance closest to production conditions.

-    !!! tip "Flag definitions"
-        - **`--runtime`**: The path to your runtime's Wasm.
-        - **`--pallet`**: The name of the pallet you wish to benchmark. This pallet must be configured in your runtime and defined in `define_benchmarks`.
-        - **`--extrinsic`**: Which extrinsic to test. Using `""` implies all extrinsics will be benchmarked.
-        - **`--template`**: Defines how weight information should be formatted.
-        - **`--output`**: Where the output of the auto-generated weights will reside.
+## Use Generated Weights

-The generated `weights.rs` file contains weight annotations for your extrinsics, ready to be added to your pallet. The output should be similar to the following. Some output is omitted for brevity:
+After running benchmarks, a `weights.rs` file is generated containing weights based on actual measurements of your code running on real hardware, accounting for the specific complexity of your logic, storage access patterns, and computational requirements.

-
- frame-omni-bencher v1 benchmark pallet \ - --runtime INSERT_PATH_TO_WASM_RUNTIME \ - --pallet "INSERT_NAME_OF_PALLET" \ - --extrinsic "" \ - --template ./frame-weight-template.hbs \ - --output ./weights.rs - ... - 2025-01-15T16:41:33.557045Z INFO polkadot_sdk_frame::benchmark::pallet: [ 0 % ] Starting benchmark: pallet_parachain_template::do_something - 2025-01-15T16:41:33.564644Z INFO polkadot_sdk_frame::benchmark::pallet: [ 50 % ] Starting benchmark: pallet_parachain_template::cause_error - ... - Created file: "weights.rs" - -
+Follow these steps to use the generated weights with your pallet: + +1. Integrate the generated weights by adding the weights module to your pallet's `lib.rs` as follows: -#### Add Benchmark Weights to Pallet + ```rust title="pallets/pallet-custom/src/lib.rs" + #![cfg_attr(not(feature = "std"), no_std)] -Once the `weights.rs` is generated, you must integrate it with your pallet. + extern crate alloc; + use alloc::vec::Vec; -1. To begin the integration, import the `weights` module and the `WeightInfo` trait, then add both to your pallet's `Config` trait. Complete the following steps to set up the configuration: + pub use pallet::*; + + #[cfg(feature = "runtime-benchmarks")] + mod benchmarking; - ```rust title="lib.rs" pub mod weights; - use crate::weights::WeightInfo; - - /// Configure the pallet by specifying the parameters and types on which it depends. - #[pallet::config] - pub trait Config: frame_system::Config { - // ... - /// A type representing the weights required by the dispatchables of this pallet. - type WeightInfo: WeightInfo; + + #[frame::pallet] + pub mod pallet { + use super::*; + use frame::prelude::*; + use crate::weights::WeightInfo; + // ... rest of pallet } ``` -2. Next, you must add this to the `#[pallet::weight]` annotation in all the extrinsics via the `Config` as follows: + Unlike the benchmarking module (which is only needed when running benchmarks), the weights module must be available in all builds because the runtime needs to call the weight functions during regular operation to calculate transaction fees and enforce block limits. - ```rust hl_lines="2" title="lib.rs" - #[pallet::call_index(0)] - #[pallet::weight(T::WeightInfo::do_something())] - pub fn do_something(origin: OriginFor) -> DispatchResultWithPostInfo { Ok(()) } +2. 
Update your runtime configuration to use the generated weights instead of the placeholder `()` implementation by adding the following code:
+
+    ```rust title="runtime/src/configs/mod.rs"
+    impl pallet_custom::Config for Runtime {
+        type RuntimeEvent = RuntimeEvent;
+        type CounterMaxValue = ConstU32<1000>;
+        type WeightInfo = pallet_custom::weights::SubstrateWeight<Runtime>;
+    }
     ```

-3. Finally, configure the actual weight values in your runtime. In `runtime/src/config/mod.rs`, add the following code:
+    This change activates your benchmarked weights in the production runtime. Now, when users submit transactions that call your pallet's extrinsics, the runtime will use the actual measured weights to calculate fees and enforce block limits.
+
+??? code "Example generated weight file"
+
+    The generated `weights.rs` file will look similar to this:

-    ```rust title="mod.rs"
-    // Configure pallet.
-    impl pallet_parachain_template::Config for Runtime {
-        // ...
-        type WeightInfo = pallet_parachain_template::weights::SubstrateWeight;
+    ```rust title="pallets/pallet-custom/src/weights.rs"
+    //! Autogenerated weights for `pallet_custom`
+    //!
+    //! THIS FILE WAS AUTO-GENERATED USING THE SUBSTRATE BENCHMARK CLI VERSION 32.0.0
+    //! 
DATE: 2025-01-15, STEPS: `50`, REPEAT: `20`
+    
+    #![cfg_attr(rustfmt, rustfmt_skip)]
+    #![allow(unused_parens)]
+    #![allow(unused_imports)]
+    #![allow(missing_docs)]
+    
+    use frame_support::{traits::Get, weights::{Weight, constants::RocksDbWeight}};
+    use core::marker::PhantomData;
+    
+    pub trait WeightInfo {
+        fn set_counter_value() -> Weight;
+        fn increment() -> Weight;
+        fn decrement() -> Weight;
+    }
+    
+    pub struct SubstrateWeight<T>(PhantomData<T>);
+    impl<T: frame_system::Config> WeightInfo for SubstrateWeight<T> {
+        fn set_counter_value() -> Weight {
+            Weight::from_parts(8_234_000, 0)
+                .saturating_add(T::DbWeight::get().reads(1))
+                .saturating_add(T::DbWeight::get().writes(1))
+        }
+    
+        fn increment() -> Weight {
+            Weight::from_parts(12_456_000, 0)
+                .saturating_add(T::DbWeight::get().reads(2))
+                .saturating_add(T::DbWeight::get().writes(2))
+        }
+    
+        fn decrement() -> Weight {
+            Weight::from_parts(11_987_000, 0)
+                .saturating_add(T::DbWeight::get().reads(2))
+                .saturating_add(T::DbWeight::get().writes(2))
+        }
+    }
    ```

-## Where to Go Next
+    The actual numbers in your `weights.rs` file will vary based on your hardware and implementation complexity. The [`DbWeight`](https://paritytech.github.io/polkadot-sdk/master/frame_support/weights/struct.RuntimeDbWeight.html){target=\_blank} accounts for database read and write operations.
+
+Congratulations, you've successfully benchmarked a pallet and updated your runtime to use the generated weight values.
+
+## Related Resources

-- View the Rust Docs for a more comprehensive, low-level view of the [FRAME V2 Benchmarking Suite](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/v2/index.html){target=_blank}.
-- Read the [FRAME Benchmarking and Weights](https://paritytech.github.io/polkadot-sdk/master/polkadot_sdk_docs/reference_docs/frame_benchmarking_weight/index.html){target=_blank} reference document, a concise guide which details how weights and benchmarking work. 
+- [FRAME Benchmarking Documentation](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/index.html){target=\_blank} +- [Weight Struct Documentation](https://paritytech.github.io/polkadot-sdk/master/frame_support/weights/struct.Weight.html){target=\_blank} +- [Benchmarking v2 API](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/v2/index.html){target=\_blank} +- [frame-omni-bencher Tool](https://paritytech.github.io/polkadot-sdk/master/frame_omni_bencher/index.html){target=\_blank} diff --git a/.ai/pages/parachains-customize-runtime-pallet-development-create-a-pallet.md b/.ai/pages/parachains-customize-runtime-pallet-development-create-a-pallet.md index 6b36c500f..6dd373bcb 100644 --- a/.ai/pages/parachains-customize-runtime-pallet-development-create-a-pallet.md +++ b/.ai/pages/parachains-customize-runtime-pallet-development-create-a-pallet.md @@ -671,7 +671,7 @@ This command validates all pallet configurations and prepares the build for depl ## Run Your Chain Locally -Launch your parachain locally to test the new pallet functionality using the [Polkadot Omni Node](https://crates.io/crates/polkadot-omni-node){target=\_blank}. +Launch your parachain locally to test the new pallet functionality using the [Polkadot Omni Node](https://crates.io/crates/polkadot-omni-node){target=\_blank}. For instructions on setting up the Polkadot Omni Node and [Polkadot Chain Spec Builder](https://crates.io/crates/staging-chain-spec-builder){target=\_blank}, refer to the [Set Up a Parachain Template](/parachains/launch-a-parachain/set-up-the-parachain-template/){target=\_blank} guide. 
### Generate a Chain Specification
diff --git a/.ai/pages/parachains-customize-runtime-pallet-development-pallet-testing.md b/.ai/pages/parachains-customize-runtime-pallet-development-pallet-testing.md
index 1f61c2c61..213067d7a 100644
--- a/.ai/pages/parachains-customize-runtime-pallet-development-pallet-testing.md
+++ b/.ai/pages/parachains-customize-runtime-pallet-development-pallet-testing.md
@@ -1,11 +1,11 @@
 ---
-title: Pallet Unit Testing
-description: Learn how to write comprehensive unit tests for your custom pallets using mock runtimes, ensuring reliability and correctness before deployment.
+title: Unit Test Pallets
+description: Learn how to efficiently test pallets in the Polkadot SDK, ensuring the reliability and security of your pallets' operations.
 categories: Parachains
 url: https://docs.polkadot.com/parachains/customize-runtime/pallet-development/pallet-testing/
 ---
 
-# Pallet Unit Testing
+# Unit Test Pallets
 
 ## Introduction
diff --git a/.ai/site-index.json b/.ai/site-index.json
index 2fe21850a..c9de19f4e 100644
--- a/.ai/site-index.json
+++ b/.ai/site-index.json
@@ -1361,14 +1361,14 @@
 },
 {
 "id": "parachains-customize-runtime-pallet-development-benchmark-pallet",
-"title": "Benchmarking FRAME Pallets",
+"title": "Benchmark Your Pallet",
 "slug": "parachains-customize-runtime-pallet-development-benchmark-pallet",
 "categories": [
 "Parachains"
 ],
 "raw_md_url": "https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/master/.ai/pages/parachains-customize-runtime-pallet-development-benchmark-pallet.md",
 "html_url": "https://docs.polkadot.com/parachains/customize-runtime/pallet-development/benchmark-pallet/",
-"preview": "Benchmarking is a critical component of developing efficient and secure blockchain runtimes. In the Polkadot ecosystem, accurately benchmarking your custom pallets ensures that each extrinsic has a precise [weight](/reference/glossary/#weight){target=\\_blank}, representing its computational and storage demands. 
This process is vital for maintaining the blockchain's performance and preventing potential vulnerabilities, such as Denial of Service (DoS) attacks.", + "preview": "Benchmarking is the process of measuring the computational resources (execution time and storage) required by your pallet's extrinsics. Accurate [weight](https://paritytech.github.io/polkadot-sdk/master/frame_support/weights/index.html){target=\\_blank} calculations are essential for ensuring your blockchain can process transactions efficiently while protecting against denial-of-service attacks.", "outline": [ { "depth": 2, @@ -1377,52 +1377,92 @@ }, { "depth": 2, - "title": "The Case for Benchmarking", - "anchor": "the-case-for-benchmarking" + "title": "Prerequisites", + "anchor": "prerequisites" }, { - "depth": 3, - "title": "Benchmarking and Weight", - "anchor": "benchmarking-and-weight" + "depth": 2, + "title": "Create the Benchmarking Module", + "anchor": "create-the-benchmarking-module" }, { "depth": 2, - "title": "Benchmarking Process", - "anchor": "benchmarking-process" + "title": "Define the Weight Trait", + "anchor": "define-the-weight-trait" }, { - "depth": 3, - "title": "Prepare Your Environment", - "anchor": "prepare-your-environment" + "depth": 2, + "title": "Add WeightInfo to Config", + "anchor": "add-weightinfo-to-config" }, { - "depth": 3, - "title": "Write Benchmark Tests", - "anchor": "write-benchmark-tests" + "depth": 2, + "title": "Update Extrinsic Weight Annotations", + "anchor": "update-extrinsic-weight-annotations" }, { - "depth": 3, - "title": "Add Benchmarks to Runtime", - "anchor": "add-benchmarks-to-runtime" + "depth": 2, + "title": "Include the Benchmarking Module", + "anchor": "include-the-benchmarking-module" }, { - "depth": 3, - "title": "Run Benchmarks", - "anchor": "run-benchmarks" + "depth": 2, + "title": "Configure Pallet Dependencies", + "anchor": "configure-pallet-dependencies" }, { "depth": 2, - "title": "Where to Go Next", - "anchor": "where-to-go-next" + 
"title": "Update Mock Runtime", + "anchor": "update-mock-runtime" + }, + { + "depth": 2, + "title": "Configure Runtime Benchmarking", + "anchor": "configure-runtime-benchmarking" + }, + { + "depth": 2, + "title": "Test Benchmark Compilation", + "anchor": "test-benchmark-compilation" + }, + { + "depth": 2, + "title": "Build the Runtime with Benchmarks", + "anchor": "build-the-runtime-with-benchmarks" + }, + { + "depth": 2, + "title": "Install the Benchmarking Tool", + "anchor": "install-the-benchmarking-tool" + }, + { + "depth": 2, + "title": "Download the Weight Template", + "anchor": "download-the-weight-template" + }, + { + "depth": 2, + "title": "Execute Benchmarks", + "anchor": "execute-benchmarks" + }, + { + "depth": 2, + "title": "Use Generated Weights", + "anchor": "use-generated-weights" + }, + { + "depth": 2, + "title": "Related Resources", + "anchor": "related-resources" } ], "stats": { - "chars": 14715, - "words": 1879, - "headings": 9, - "estimated_token_count_total": 3338 + "chars": 19780, + "words": 2425, + "headings": 17, + "estimated_token_count_total": 4492 }, - "hash": "sha256:915bc91edd56cdedd516e871dbe450d70c9f99fb467cc00ff231ea3a74f61d96", + "hash": "sha256:cee5050dcc0967da1f1ddaabc769376514223bd453966fb3b6f8322d78755160", "token_estimator": "heuristic-v1" }, { @@ -1568,12 +1608,12 @@ } ], "stats": { - "chars": 26671, - "words": 3041, + "chars": 26958, + "words": 3085, "headings": 26, - "estimated_token_count_total": 6113 + "estimated_token_count_total": 6194 }, - "hash": "sha256:607e283aaa1295de0af191d97de7f6f87afb722c601a447821fde6a09b97f1af", + "hash": "sha256:dad68ea59fd05fd60dc8890c4cf5615243c7ea879830b0dcf3a5e5e53c3ccec7", "token_estimator": "heuristic-v1" }, { @@ -1664,7 +1704,7 @@ }, { "id": "parachains-customize-runtime-pallet-development-pallet-testing", - "title": "Pallet Unit Testing", + "title": "Unit Test Pallets", "slug": "parachains-customize-runtime-pallet-development-pallet-testing", "categories": [ "Parachains" @@ -1780,12 
+1820,12 @@ } ], "stats": { - "chars": 25092, - "words": 2533, - "headings": 21, - "estimated_token_count_total": 5673 + "chars": 6895, + "words": 912, + "headings": 7, + "estimated_token_count_total": 1563 }, - "hash": "sha256:5b6975fc79037690c912a0644a0a438212248e984d7fdb35bd6aea820e637965", + "hash": "sha256:041ccd82f0c1ddfb93be05feb6cf9d7d4a7e37af6caa8fa8fdab5d5538017122", "token_estimator": "heuristic-v1" }, { diff --git a/llms-full.jsonl b/llms-full.jsonl index c59e67fab..e420b2c51 100644 --- a/llms-full.jsonl +++ b/llms-full.jsonl @@ -182,15 +182,29 @@ {"page_id": "parachains-customize-runtime-add-smart-contract-functionality", "page_title": "Add Smart Contract Functionality", "index": 13, "depth": 2, "title": "pallet-contracts (Legacy)", "anchor": "pallet-contracts-legacy", "start_char": 5720, "end_char": 6051, "estimated_token_count": 81, "token_estimator": "heuristic-v1", "text": "## pallet-contracts (Legacy)\n\n[`pallet-contracts`](https://docs.rs/pallet-contracts/latest/pallet_contracts/index.html#contracts-pallet){target=\\_blank} is the original Wasm-based smart contract pallet for Polkadot SDK chains. 
While still functional, it's considered legacy as development efforts have shifted to `pallet-revive`."} {"page_id": "parachains-customize-runtime-add-smart-contract-functionality", "page_title": "Add Smart Contract Functionality", "index": 14, "depth": 3, "title": "Implementation Example", "anchor": "implementation-example", "start_char": 6051, "end_char": 6304, "estimated_token_count": 59, "token_estimator": "heuristic-v1", "text": "### Implementation Example\n\nFor reference, Astar's implementation of [`pallet-contracts`](https://github.com/AstarNetwork/Astar/blob/b6f7a408d31377130c3713ed52941a06b5436402/runtime/astar/src/lib.rs#L693){target=\\_blank} demonstrates production usage."} {"page_id": "parachains-customize-runtime-add-smart-contract-functionality", "page_title": "Add Smart Contract Functionality", "index": 15, "depth": 2, "title": "Where to Go Next", "anchor": "where-to-go-next", "start_char": 6304, "end_char": 6655, "estimated_token_count": 92, "token_estimator": "heuristic-v1", "text": "## Where to Go Next\n\n
\n\n- Guide __Add a Pallet to the Runtime__\n\n ---\n\n Learn the step-by-step process for integrating Polkadot SDK pallets into your blockchain's runtime.\n\n [:octicons-arrow-right-24: Get Started](/parachains/customize-runtime/add-existing-pallets/)\n\n
"} -{"page_id": "parachains-customize-runtime-pallet-development-benchmark-pallet", "page_title": "Benchmarking FRAME Pallets", "index": 0, "depth": 2, "title": "Introduction", "anchor": "introduction", "start_char": 16, "end_char": 1205, "estimated_token_count": 235, "token_estimator": "heuristic-v1", "text": "## Introduction\n\nBenchmarking is a critical component of developing efficient and secure blockchain runtimes. In the Polkadot ecosystem, accurately benchmarking your custom pallets ensures that each extrinsic has a precise [weight](/reference/glossary/#weight){target=\\_blank}, representing its computational and storage demands. This process is vital for maintaining the blockchain's performance and preventing potential vulnerabilities, such as Denial of Service (DoS) attacks.\n\nThe Polkadot SDK leverages the [FRAME](/reference/glossary/#frame-framework-for-runtime-aggregation-of-modularized-entities){target=\\_blank} benchmarking framework, offering tools to measure and assign weights to extrinsics. These weights help determine the maximum number of transactions or system-level calls processed within a block. This guide covers how to use FRAME's [benchmarking framework](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/v2/index.html){target=\\_blank}, from setting up your environment to writing and running benchmarks for your custom pallets. 
You'll understand how to generate accurate weights by the end, ensuring your runtime remains performant and secure."} -{"page_id": "parachains-customize-runtime-pallet-development-benchmark-pallet", "page_title": "Benchmarking FRAME Pallets", "index": 1, "depth": 2, "title": "The Case for Benchmarking", "anchor": "the-case-for-benchmarking", "start_char": 1205, "end_char": 1999, "estimated_token_count": 114, "token_estimator": "heuristic-v1", "text": "## The Case for Benchmarking\n\nBenchmarking helps validate that the required execution time for different functions is within reasonable boundaries to ensure your blockchain runtime can handle transactions efficiently and securely. By accurately measuring the weight of each extrinsic, you can prevent service interruptions caused by computationally intensive calls that exceed block time limits. Without benchmarking, runtime performance could be vulnerable to DoS attacks, where malicious users exploit functions with unoptimized weights.\n\nBenchmarking also ensures predictable transaction fees. Weights derived from benchmark tests accurately reflect the resource usage of function calls, allowing fair fee calculation. This approach discourages abuse while maintaining network reliability."} -{"page_id": "parachains-customize-runtime-pallet-development-benchmark-pallet", "page_title": "Benchmarking FRAME Pallets", "index": 2, "depth": 3, "title": "Benchmarking and Weight", "anchor": "benchmarking-and-weight", "start_char": 1999, "end_char": 3665, "estimated_token_count": 321, "token_estimator": "heuristic-v1", "text": "### Benchmarking and Weight \n\nIn Polkadot SDK-based chains, weight quantifies the computational effort needed to process transactions. This weight includes factors such as:\n\n- Computational complexity.\n- Storage complexity (proof size).\n- Database reads and writes.\n- Hardware specifications.\n\nBenchmarking uses real-world testing to simulate worst-case scenarios for extrinsics. 
The framework generates a linear model for weight calculation by running multiple iterations with varied parameters. These worst-case weights ensure blocks remain within execution limits, enabling the runtime to maintain throughput under varying loads. Excess fees can be refunded if a call uses fewer resources than expected, offering users a fair cost model.\n \nBecause weight is a generic unit of measurement based on computation time for a specific physical machine, the weight of any function can change based on the specifications of hardware used for benchmarking. By modeling the expected weight of each runtime function, the blockchain can calculate the number of transactions or system-level calls it can execute within a certain period.\n\nWithin FRAME, each function call that is dispatched must have a `#[pallet::weight]` annotation that can return the expected weight for the worst-case scenario execution of that function given its inputs:\n\n```rust hl_lines=\"2\"\n#[pallet::call_index(0)]\n#[pallet::weight(T::WeightInfo::do_something())]\npub fn do_something(origin: OriginFor) -> DispatchResultWithPostInfo { Ok(()) }\n```\n\nThe `WeightInfo` file is automatically generated during benchmarking. Based on these tests, this file provides accurate weights for each extrinsic."} -{"page_id": "parachains-customize-runtime-pallet-development-benchmark-pallet", "page_title": "Benchmarking FRAME Pallets", "index": 3, "depth": 2, "title": "Benchmarking Process", "anchor": "benchmarking-process", "start_char": 3665, "end_char": 4208, "estimated_token_count": 98, "token_estimator": "heuristic-v1", "text": "## Benchmarking Process\n\nBenchmarking a pallet involves the following steps: \n\n1. Creating a `benchmarking.rs` file within your pallet's structure.\n2. Writing a benchmarking test for each extrinsic.\n3. 
Executing the benchmarking tool to calculate weights based on performance metrics.\n\nThe benchmarking tool runs multiple iterations to model worst-case execution times and determine the appropriate weight. By default, the benchmarking pipeline is deactivated. To activate it, compile your runtime with the `runtime-benchmarks` feature flag."} -{"page_id": "parachains-customize-runtime-pallet-development-benchmark-pallet", "page_title": "Benchmarking FRAME Pallets", "index": 4, "depth": 3, "title": "Prepare Your Environment", "anchor": "prepare-your-environment", "start_char": 4208, "end_char": 5262, "estimated_token_count": 293, "token_estimator": "heuristic-v1", "text": "### Prepare Your Environment\n\nInstall the [`frame-omni-bencher`](https://crates.io/crates/frame-omni-bencher){target=\\_blank} command-line tool:\n\n```bash\ncargo install frame-omni-bencher\n```\n\nBefore writing benchmark tests, you need to ensure the `frame-benchmarking` crate is included in your pallet's `Cargo.toml` similar to the following:\n\n```toml title=\"Cargo.toml\"\nframe-benchmarking = { version = \"37.0.0\", default-features = false }\n```\n\nYou must also ensure that you add the `runtime-benchmarks` feature flag as follows under the `[features]` section of your pallet's `Cargo.toml`:\n\n```toml title=\"Cargo.toml\"\nruntime-benchmarks = [\n \"frame-benchmarking/runtime-benchmarks\",\n \"frame-support/runtime-benchmarks\",\n \"frame-system/runtime-benchmarks\",\n \"sp-runtime/runtime-benchmarks\",\n]\n```\n\nLastly, ensure that `frame-benchmarking` is included in `std = []`: \n\n```toml title=\"Cargo.toml\"\nstd = [\n # ...\n \"frame-benchmarking?/std\",\n # ...\n]\n```\n\nOnce complete, you have the required dependencies for writing benchmark tests for your pallet."} -{"page_id": "parachains-customize-runtime-pallet-development-benchmark-pallet", "page_title": "Benchmarking FRAME Pallets", "index": 5, "depth": 3, "title": "Write Benchmark Tests", "anchor": "write-benchmark-tests", 
"start_char": 5262, "end_char": 7718, "estimated_token_count": 645, "token_estimator": "heuristic-v1", "text": "### Write Benchmark Tests\n\nCreate a `benchmarking.rs` file in your pallet's `src/`. Your directory structure should look similar to the following:\n\n```\nmy-pallet/\n├── src/\n│ ├── lib.rs # Main pallet implementation\n│ └── benchmarking.rs # Benchmarking\n└── Cargo.toml\n```\n\nWith the directory structure set, you can use the [`polkadot-sdk-parachain-template`](https://github.com/paritytech/polkadot-sdk-parachain-template/tree/master/pallets){target=\\_blank} to get started as follows:\n\n```rust title=\"benchmarking.rs (starter template)\"\n//! Benchmarking setup for pallet-template\n#![cfg(feature = \"runtime-benchmarks\")]\n\nuse super::*;\nuse frame_benchmarking::v2::*;\n\n#[benchmarks]\nmod benchmarks {\n\tuse super::*;\n\t#[cfg(test)]\n\tuse crate::pallet::Pallet as Template;\n\tuse frame_system::RawOrigin;\n\n\t#[benchmark]\n\tfn do_something() {\n\t\tlet caller: T::AccountId = whitelisted_caller();\n\t\t#[extrinsic_call]\n\t\tdo_something(RawOrigin::Signed(caller), 100);\n\n\t\tassert_eq!(Something::::get().map(|v| v.block_number), Some(100u32.into()));\n\t}\n\n\t#[benchmark]\n\tfn cause_error() {\n\t\tSomething::::put(CompositeStruct { block_number: 100u32.into() });\n\t\tlet caller: T::AccountId = whitelisted_caller();\n\t\t#[extrinsic_call]\n\t\tcause_error(RawOrigin::Signed(caller));\n\n\t\tassert_eq!(Something::::get().map(|v| v.block_number), Some(101u32.into()));\n\t}\n\n\timpl_benchmark_test_suite!(Template, crate::mock::new_test_ext(), crate::mock::Test);\n}\n```\n\nIn your benchmarking tests, employ these best practices:\n\n- **Write custom testing functions**: The function `do_something` in the preceding example is a placeholder. Similar to writing unit tests, you must write custom functions to benchmark test your extrinsics. 
Access the mock runtime and use functions such as `whitelisted_caller()` to sign transactions and facilitate testing.\n- **Use the `#[extrinsic_call]` macro**: This macro is used when calling the extrinsic itself and is a required part of a benchmarking function. See the [`extrinsic_call`](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/v2/index.html#extrinsic_call-and-block){target=\\_blank} docs for more details.\n- **Validate extrinsic behavior**: The `assert_eq` expression ensures that the extrinsic is working properly within the benchmark context.\n\nAdd the `benchmarking` module to your pallet. In the pallet `lib.rs` file add the following:\n\n```rust\n#[cfg(feature = \"runtime-benchmarks\")]\nmod benchmarking;\n```"} -{"page_id": "parachains-customize-runtime-pallet-development-benchmark-pallet", "page_title": "Benchmarking FRAME Pallets", "index": 6, "depth": 3, "title": "Add Benchmarks to Runtime", "anchor": "add-benchmarks-to-runtime", "start_char": 7718, "end_char": 9847, "estimated_token_count": 418, "token_estimator": "heuristic-v1", "text": "### Add Benchmarks to Runtime\n\nBefore running the benchmarking tool, you must integrate benchmarks with your runtime as follows:\n\n1. Navigate to your `runtime/src` directory and check if a `benchmarks.rs` file exists. If not, create one. 
This file will contain the macro that registers all pallets for benchmarking along with their respective configurations:\n\n ```rust title=\"benchmarks.rs\"\n frame_benchmarking::define_benchmarks!(\n [frame_system, SystemBench::]\n [pallet_parachain_template, TemplatePallet]\n [pallet_balances, Balances]\n [pallet_session, SessionBench::]\n [pallet_timestamp, Timestamp]\n [pallet_message_queue, MessageQueue]\n [pallet_sudo, Sudo]\n [pallet_collator_selection, CollatorSelection]\n [cumulus_pallet_parachain_system, ParachainSystem]\n [cumulus_pallet_xcmp_queue, XcmpQueue]\n );\n ```\n\n For example, to add a new pallet named `pallet_parachain_template` for benchmarking, include it in the macro as shown:\n ```rust title=\"benchmarks.rs\" hl_lines=\"3\"\n frame_benchmarking::define_benchmarks!(\n [frame_system, SystemBench::]\n [pallet_parachain_template, TemplatePallet]\n );\n ```\n\n !!!warning \"Updating `define_benchmarks!` macro is required\"\n Any pallet that needs to be benchmarked must be included in the [`define_benchmarks!`](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/macro.define_benchmarks.html){target=\\_blank} macro. The CLI will only be able to access and benchmark pallets that are registered here.\n\n2. Check your runtime's `lib.rs` file to ensure the `benchmarks` module is imported. The import should look like this:\n\n ```rust title=\"lib.rs\"\n #[cfg(feature = \"runtime-benchmarks\")]\n mod benchmarks;\n ```\n\n The `runtime-benchmarks` feature gate ensures benchmark tests are isolated from production runtime code.\n\n3. 
Enable runtime benchmarking for your pallet in `runtime/Cargo.toml`:\n\n ```toml\n runtime-benchmarks = [\n # ...\n \"pallet_parachain_template/runtime-benchmarks\",\n ]\n\n ```"} -{"page_id": "parachains-customize-runtime-pallet-development-benchmark-pallet", "page_title": "Benchmarking FRAME Pallets", "index": 7, "depth": 3, "title": "Run Benchmarks", "anchor": "run-benchmarks", "start_char": 9847, "end_char": 14232, "estimated_token_count": 1100, "token_estimator": "heuristic-v1", "text": "### Run Benchmarks\n\nYou can now compile your runtime with the `runtime-benchmarks` feature flag. This feature flag is crucial as the benchmarking tool will look for this feature being enabled to know when it should run benchmark tests. Follow these steps to compile the runtime with benchmarking enabled:\n\n1. Run `build` with the feature flag included:\n\n ```bash\n cargo build --features runtime-benchmarks --release\n ```\n\n2. Create a `weights.rs` file in your pallet's `src/` directory. This file will store the auto-generated weight calculations:\n\n ```bash\n touch weights.rs\n ```\n\n3. Before running the benchmarking tool, you'll need a template file that defines how weight information should be formatted. Download the official template from the Polkadot SDK repository and save it in your project folders for future use:\n\n ```bash\n curl https://raw.githubusercontent.com/paritytech/polkadot-sdk/refs/tags/polkadot-stable2412/substrate/.maintain/frame-weight-template.hbs \\\n --output ./pallets/benchmarking/frame-weight-template.hbs\n ```\n\n4. Run the benchmarking tool to measure extrinsic weights:\n\n ```bash\n frame-omni-bencher v1 benchmark pallet \\\n --runtime INSERT_PATH_TO_WASM_RUNTIME \\\n --pallet INSERT_NAME_OF_PALLET \\\n --extrinsic \"\" \\\n --template ./frame-weight-template.hbs \\\n --output weights.rs\n ```\n\n !!! 
tip \"Flag definitions\"\n - **`--runtime`**: The path to your runtime's Wasm.\n - **`--pallet`**: The name of the pallet you wish to benchmark. This pallet must be configured in your runtime and defined in `define_benchmarks`.\n - **`--extrinsic`**: Which extrinsic to test. Using `\"\"` implies all extrinsics will be benchmarked.\n - **`--template`**: Defines how weight information should be formatted.\n - **`--output`**: Where the output of the auto-generated weights will reside.\n\nThe generated `weights.rs` file contains weight annotations for your extrinsics, ready to be added to your pallet. The output should be similar to the following. Some output is omitted for brevity:\n\n
\n frame-omni-bencher v1 benchmark pallet \\\n --runtime INSERT_PATH_TO_WASM_RUNTIME \\\n --pallet \"INSERT_NAME_OF_PALLET\" \\\n --extrinsic \"\" \\\n --template ./frame-weight-template.hbs \\\n --output ./weights.rs\n ...\n 2025-01-15T16:41:33.557045Z INFO polkadot_sdk_frame::benchmark::pallet: [ 0 % ] Starting benchmark: pallet_parachain_template::do_something\n 2025-01-15T16:41:33.564644Z INFO polkadot_sdk_frame::benchmark::pallet: [ 50 % ] Starting benchmark: pallet_parachain_template::cause_error\n ...\n Created file: \"weights.rs\"\n \n
\n\n#### Add Benchmark Weights to Pallet\n\nOnce the `weights.rs` is generated, you must integrate it with your pallet. \n\n1. To begin the integration, import the `weights` module and the `WeightInfo` trait, then add both to your pallet's `Config` trait. Complete the following steps to set up the configuration:\n\n ```rust title=\"lib.rs\"\n pub mod weights;\n use crate::weights::WeightInfo;\n\n /// Configure the pallet by specifying the parameters and types on which it depends.\n #[pallet::config]\n pub trait Config: frame_system::Config {\n // ...\n /// A type representing the weights required by the dispatchables of this pallet.\n type WeightInfo: WeightInfo;\n }\n ```\n\n2. Next, you must add this to the `#[pallet::weight]` annotation in all the extrinsics via the `Config` as follows:\n\n ```rust hl_lines=\"2\" title=\"lib.rs\"\n #[pallet::call_index(0)]\n #[pallet::weight(T::WeightInfo::do_something())]\n pub fn do_something(origin: OriginFor) -> DispatchResultWithPostInfo { Ok(()) }\n ```\n\n3. Finally, configure the actual weight values in your runtime. 
In `runtime/src/config/mod.rs`, add the following code:\n\n ```rust title=\"mod.rs\"\n // Configure pallet.\n impl pallet_parachain_template::Config for Runtime {\n // ...\n type WeightInfo = pallet_parachain_template::weights::SubstrateWeight<Runtime>;\n }\n ```"}
-{"page_id": "parachains-customize-runtime-pallet-development-benchmark-pallet", "page_title": "Benchmarking FRAME Pallets", "index": 8, "depth": 2, "title": "Where to Go Next", "anchor": "where-to-go-next", "start_char": 14232, "end_char": 14715, "estimated_token_count": 114, "token_estimator": "heuristic-v1", "text": "## Where to Go Next\n\n- View the Rust Docs for a more comprehensive, low-level view of the [FRAME V2 Benchmarking Suite](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/v2/index.html){target=_blank}.\n- Read the [FRAME Benchmarking and Weights](https://paritytech.github.io/polkadot-sdk/master/polkadot_sdk_docs/reference_docs/frame_benchmarking_weight/index.html){target=_blank} reference document, a concise guide which details how weights and benchmarking work."}
+{"page_id": "parachains-customize-runtime-pallet-development-add-pallet-to-runtime", "page_title": "Add Pallets to the Runtime", "index": 0, "depth": 2, "title": "Introduction", "anchor": "introduction", "start_char": 30, "end_char": 866, "estimated_token_count": 192, "token_estimator": "heuristic-v1", "text": "## Introduction\n\nIn previous tutorials, you learned how to [create a custom pallet](/tutorials/polkadot-sdk/parachains/zero-to-hero/build-custom-pallet/){target=\_blank} and [test it](/tutorials/polkadot-sdk/parachains/zero-to-hero/pallet-unit-testing/){target=\_blank}. 
The next step is to include this pallet in your runtime, integrating it into the core logic of your blockchain.\n\nThis tutorial will guide you through adding two pallets to your runtime: the custom pallet you previously developed and the [utility pallet](https://paritytech.github.io/polkadot-sdk/master/pallet_utility/index.html){target=\\_blank}. This standard Polkadot SDK pallet provides powerful dispatch functionality. The utility pallet offers, for example, batch dispatch, a stateless operation that enables executing multiple calls in a single transaction."} +{"page_id": "parachains-customize-runtime-pallet-development-add-pallet-to-runtime", "page_title": "Add Pallets to the Runtime", "index": 1, "depth": 2, "title": "Add the Pallets as Dependencies", "anchor": "add-the-pallets-as-dependencies", "start_char": 866, "end_char": 8510, "estimated_token_count": 1856, "token_estimator": "heuristic-v1", "text": "## Add the Pallets as Dependencies\n\nFirst, you'll update the runtime's `Cargo.toml` file to include the Utility pallet and your custom pallets as dependencies for the runtime. Follow these steps:\n\n1. Open the `runtime/Cargo.toml` file and locate the `[dependencies]` section. Add pallet-utility as one of the features for the `polkadot-sdk` dependency with the following line:\n\n ```toml hl_lines=\"4\" title=\"runtime/Cargo.toml\"\n [dependencies]\n ...\n polkadot-sdk = { workspace = true, features = [\n \"pallet-utility\",\n ...\n ], default-features = false }\n ```\n\n2. In the same `[dependencies]` section, add the custom pallet that you built from scratch with the following line:\n\n ```toml hl_lines=\"3\" title=\"Cargo.toml\"\n [dependencies]\n ...\n custom-pallet = { path = \"../pallets/custom-pallet\", default-features = false }\n ```\n\n3. 
In the `[features]` section, add the custom pallet to the `std` feature list:\n\n ```toml hl_lines=\"5\" title=\"Cargo.toml\"\n [features]\n default = [\"std\"]\n std = [\n ...\n \"custom-pallet/std\",\n ...\n ]\n ```\n\n4. Save the changes and close the `Cargo.toml` file.\n\n Once you have saved your file, it should look like the following:\n\n ???- code \"runtime/Cargo.toml\"\n \n ```toml title=\"runtime/Cargo.toml\"\n [package]\n name = \"parachain-template-runtime\"\n description = \"A parachain runtime template built with Substrate and Cumulus, part of Polkadot Sdk.\"\n version = \"0.1.0\"\n license = \"Unlicense\"\n authors.workspace = true\n homepage.workspace = true\n repository.workspace = true\n edition.workspace = true\n publish = false\n\n [package.metadata.docs.rs]\n targets = [\"x86_64-unknown-linux-gnu\"]\n\n [build-dependencies]\n docify = { workspace = true }\n substrate-wasm-builder = { optional = true, workspace = true, default-features = true }\n\n [dependencies]\n codec = { features = [\"derive\"], workspace = true }\n cumulus-pallet-parachain-system.workspace = true\n docify = { workspace = true }\n hex-literal = { optional = true, workspace = true, default-features = true }\n log = { workspace = true }\n pallet-parachain-template = { path = \"../pallets/template\", default-features = false }\n polkadot-sdk = { workspace = true, features = [\n \"pallet-utility\",\n \"cumulus-pallet-aura-ext\",\n \"cumulus-pallet-session-benchmarking\",\n \"cumulus-pallet-weight-reclaim\",\n \"cumulus-pallet-xcm\",\n \"cumulus-pallet-xcmp-queue\",\n \"cumulus-primitives-aura\",\n \"cumulus-primitives-core\",\n \"cumulus-primitives-utility\",\n \"pallet-aura\",\n \"pallet-authorship\",\n \"pallet-balances\",\n \"pallet-collator-selection\",\n \"pallet-message-queue\",\n \"pallet-session\",\n \"pallet-sudo\",\n \"pallet-timestamp\",\n \"pallet-transaction-payment\",\n \"pallet-transaction-payment-rpc-runtime-api\",\n \"pallet-xcm\",\n \"parachains-common\",\n 
\"polkadot-parachain-primitives\",\n \"polkadot-runtime-common\",\n \"runtime\",\n \"staging-parachain-info\",\n \"staging-xcm\",\n \"staging-xcm-builder\",\n \"staging-xcm-executor\",\n ], default-features = false }\n scale-info = { features = [\"derive\"], workspace = true }\n serde_json = { workspace = true, default-features = false, features = [\n \"alloc\",\n ] }\n smallvec = { workspace = true, default-features = true }\n\n custom-pallet = { path = \"../pallets/custom-pallet\", default-features = false }\n\n [features]\n default = [\"std\"]\n std = [\n \"codec/std\",\n \"cumulus-pallet-parachain-system/std\",\n \"log/std\",\n \"pallet-parachain-template/std\",\n \"polkadot-sdk/std\",\n \"scale-info/std\",\n \"serde_json/std\",\n \"substrate-wasm-builder\",\n \"custom-pallet/std\",\n ]\n\n runtime-benchmarks = [\n \"cumulus-pallet-parachain-system/runtime-benchmarks\",\n \"hex-literal\",\n \"pallet-parachain-template/runtime-benchmarks\",\n \"polkadot-sdk/runtime-benchmarks\",\n ]\n\n try-runtime = [\n \"cumulus-pallet-parachain-system/try-runtime\",\n \"pallet-parachain-template/try-runtime\",\n \"polkadot-sdk/try-runtime\",\n ]\n\n # Enable the metadata hash generation.\n #\n # This is hidden behind a feature because it increases the compile time.\n # The wasm binary needs to be compiled twice, once to fetch the metadata,\n # generate the metadata hash and then a second time with the\n # `RUNTIME_METADATA_HASH` environment variable set for the `CheckMetadataHash`\n # extension.\n metadata-hash = [\"substrate-wasm-builder/metadata-hash\"]\n\n # A convenience feature for enabling things when doing a build\n # for an on-chain release.\n on-chain-release-build = [\"metadata-hash\"]\n\n ```\n\nUpdate your root parachain template's `Cargo.toml` file to include your custom pallet as a dependency. Follow these steps:\n\n1. Open the `./Cargo.toml` file and locate the `[workspace]` section. 
\n \n Make sure the `custom-pallet` is a member of the workspace:\n\n ```toml hl_lines=\"4\" title=\"Cargo.toml\"\n [workspace]\n default-members = [\"pallets/template\", \"runtime\"]\n members = [\n \"node\", \"pallets/custom-pallet\",\n \"pallets/template\",\n \"runtime\",\n ]\n ```\n\n???- code \"./Cargo.toml\"\n\n ```rust title=\"./Cargo.toml\"\n [workspace.package]\n license = \"MIT-0\"\n authors = [\"Parity Technologies \"]\n homepage = \"https://paritytech.github.io/polkadot-sdk/\"\n repository = \"https://github.com/paritytech/polkadot-sdk-parachain-template.git\"\n edition = \"2021\"\n\n [workspace]\n default-members = [\"pallets/template\", \"runtime\"]\n members = [\n \"node\", \"pallets/custom-pallet\",\n \"pallets/template\",\n \"runtime\",\n ]\n resolver = \"2\"\n\n [workspace.dependencies]\n parachain-template-runtime = { path = \"./runtime\", default-features = false }\n pallet-parachain-template = { path = \"./pallets/template\", default-features = false }\n clap = { version = \"4.5.13\" }\n color-print = { version = \"0.3.4\" }\n docify = { version = \"0.2.9\" }\n futures = { version = \"0.3.31\" }\n jsonrpsee = { version = \"0.24.3\" }\n log = { version = \"0.4.22\", default-features = false }\n polkadot-sdk = { version = \"2503.0.1\", default-features = false }\n prometheus-endpoint = { version = \"0.17.2\", default-features = false, package = \"substrate-prometheus-endpoint\" }\n serde = { version = \"1.0.214\", default-features = false }\n codec = { version = \"3.7.4\", default-features = false, package = \"parity-scale-codec\" }\n cumulus-pallet-parachain-system = { version = \"0.20.0\", default-features = false }\n hex-literal = { version = \"0.4.1\", default-features = false }\n scale-info = { version = \"2.11.6\", default-features = false }\n serde_json = { version = \"1.0.132\", default-features = false }\n smallvec = { version = \"1.11.0\", default-features = false }\n substrate-wasm-builder = { version = \"26.0.1\", default-features = 
false }\n frame = { version = \"0.9.1\", default-features = false, package = \"polkadot-sdk-frame\" }\n\n [profile.release]\n opt-level = 3\n panic = \"unwind\"\n\n [profile.production]\n codegen-units = 1\n inherits = \"release\"\n lto = true\n ```"}
+{"page_id": "parachains-customize-runtime-pallet-development-add-pallet-to-runtime", "page_title": "Add Pallets to the Runtime", "index": 2, "depth": 3, "title": "Update the Runtime Configuration", "anchor": "update-the-runtime-configuration", "start_char": 8510, "end_char": 10415, "estimated_token_count": 406, "token_estimator": "heuristic-v1", "text": "### Update the Runtime Configuration\n\nConfigure the pallets by implementing their `Config` trait and update the runtime macro to include the new pallets:\n\n1. Add the `OriginCaller` import:\n\n ```rust title=\"mod.rs\" hl_lines=\"8\"\n // Local module imports\n use super::OriginCaller;\n ...\n ```\n\n2. Implement the [`Config`](https://paritytech.github.io/polkadot-sdk/master/pallet_utility/pallet/trait.Config.html){target=\_blank} trait for both pallets at the end of the `runtime/src/config/mod.rs` file:\n\n ```rust title=\"mod.rs\" hl_lines=\"8-25\"\n ...\n /// Configure the pallet template in pallets/template.\n impl pallet_parachain_template::Config for Runtime {\n type RuntimeEvent = RuntimeEvent;\n type WeightInfo = pallet_parachain_template::weights::SubstrateWeight<Runtime>;\n }\n\n // Configure utility pallet.\n impl pallet_utility::Config for Runtime {\n type RuntimeEvent = RuntimeEvent;\n type RuntimeCall = RuntimeCall;\n type PalletsOrigin = OriginCaller;\n type WeightInfo = pallet_utility::weights::SubstrateWeight<Runtime>;\n }\n // Define counter max value runtime constant.\n parameter_types! {\n pub const CounterMaxValue: u32 = 500;\n }\n\n // Configure custom pallet.\n impl custom_pallet::Config for Runtime {\n type RuntimeEvent = RuntimeEvent;\n type CounterMaxValue = CounterMaxValue;\n }\n ```\n\n3. 
Locate the `#[frame_support::runtime]` macro in the `runtime/src/lib.rs` file and add the pallets:\n\n ```rust hl_lines=\"9-14\" title=\"lib.rs\"\n #[frame_support::runtime]\n mod runtime {\n #[runtime::runtime]\n #[runtime::derive(\n ...\n )]\n pub struct Runtime;\n #[runtime::pallet_index(51)]\n pub type Utility = pallet_utility;\n\n #[runtime::pallet_index(52)]\n pub type CustomPallet = custom_pallet;\n }\n ```"} +{"page_id": "parachains-customize-runtime-pallet-development-add-pallet-to-runtime", "page_title": "Add Pallets to the Runtime", "index": 3, "depth": 2, "title": "Recompile the Runtime", "anchor": "recompile-the-runtime", "start_char": 10415, "end_char": 10864, "estimated_token_count": 89, "token_estimator": "heuristic-v1", "text": "## Recompile the Runtime\n\nAfter adding and configuring your pallets in the runtime, the next step is to ensure everything is set up correctly. To do this, recompile the runtime with the following command (make sure you're in the project's root directory):\n\n```bash\ncargo build --release\n```\n\nThis command ensures the runtime compiles without errors, validates the pallet configurations, and prepares the build for subsequent testing or deployment."} +{"page_id": "parachains-customize-runtime-pallet-development-add-pallet-to-runtime", "page_title": "Add Pallets to the Runtime", "index": 4, "depth": 2, "title": "Run Your Chain Locally", "anchor": "run-your-chain-locally", "start_char": 10864, "end_char": 12339, "estimated_token_count": 365, "token_estimator": "heuristic-v1", "text": "## Run Your Chain Locally\n\nLaunch your parachain locally and start producing blocks:\n\n!!!tip\n Generated chain TestNet specifications include development accounts \"Alice\" and \"Bob.\" These accounts are pre-funded with native parachain currency, allowing you to sign and send TestNet transactions. 
Take a look at the [Polkadot.js Accounts section](https://polkadot.js.org/apps/#/accounts){target=\\_blank} to view the development accounts for your chain.\n\n1. Create a new chain specification file with the updated runtime:\n\n ```bash\n chain-spec-builder create -t development \\\n --relay-chain paseo \\\n --para-id 1000 \\\n --runtime ./target/release/wbuild/parachain-template-runtime/parachain_template_runtime.compact.compressed.wasm \\\n named-preset development\n ```\n\n2. Start the omni node with the generated chain specification:\n\n ```bash\n polkadot-omni-node --chain ./chain_spec.json --dev\n ```\n\n3. Verify you can interact with the new pallets using the [Polkadot.js Apps](https://polkadot.js.org/apps/?rpc=ws%3A%2F%2F127.0.0.1%3A9944#/extrinsics){target=\\_blank} interface. Navigate to the **Extrinsics** tab and check that you can see both pallets:\n\n - Utility pallet\n\n ![](/images/parachains/customize-runtime/pallet-development/add-pallet-to-runtime/add-pallets-to-runtime-01.webp)\n \n\n - Custom pallet\n\n ![](/images/parachains/customize-runtime/pallet-development/add-pallet-to-runtime/add-pallets-to-runtime-02.webp)"} +{"page_id": "parachains-customize-runtime-pallet-development-add-pallet-to-runtime", "page_title": "Add Pallets to the Runtime", "index": 5, "depth": 2, "title": "Where to Go Next", "anchor": "where-to-go-next", "start_char": 12339, "end_char": 13091, "estimated_token_count": 183, "token_estimator": "heuristic-v1", "text": "## Where to Go Next\n\n
\n\n- Tutorial __Deploy on Paseo TestNet__\n\n ---\n\n Deploy your Polkadot SDK blockchain on Paseo! Follow this step-by-step guide for a seamless journey to a successful TestNet deployment.\n\n [:octicons-arrow-right-24: Get Started](/tutorials/polkadot-sdk/parachains/zero-to-hero/deploy-to-testnet/)\n\n- Tutorial __Pallet Benchmarking (Optional)__\n\n ---\n\n Discover how to measure extrinsic costs and assign precise weights to optimize your pallet for accurate fees and runtime performance.\n\n [:octicons-arrow-right-24: Get Started](/tutorials/polkadot-sdk/parachains/zero-to-hero/pallet-benchmarking/)\n\n
"} +{"page_id": "parachains-customize-runtime-pallet-development-benchmark-pallet", "page_title": "Benchmark Your Pallet", "index": 0, "depth": 2, "title": "Introduction", "anchor": "introduction", "start_char": 0, "end_char": 662, "estimated_token_count": 125, "token_estimator": "heuristic-v1", "text": "## Introduction\n\nBenchmarking is the process of measuring the computational resources (execution time and storage) required by your pallet's extrinsics. Accurate [weight](https://paritytech.github.io/polkadot-sdk/master/frame_support/weights/index.html){target=\\_blank} calculations are essential for ensuring your blockchain can process transactions efficiently while protecting against denial-of-service attacks.\n\nThis guide demonstrates how to benchmark a pallet and incorporate the resulting weight values. This example uses the custom counter pallet from previous guides in this series, but you can replace it with the code from another pallet if desired."} +{"page_id": "parachains-customize-runtime-pallet-development-benchmark-pallet", "page_title": "Benchmark Your Pallet", "index": 1, "depth": 2, "title": "Prerequisites", "anchor": "prerequisites", "start_char": 662, "end_char": 1698, "estimated_token_count": 257, "token_estimator": "heuristic-v1", "text": "## Prerequisites\n\nBefore you begin, ensure you have:\n\n- A pallet to benchmark. If you followed the pallet development tutorials, you can use the counter pallet from the [Create a Pallet](/parachains/customize-runtime/pallet-development/create-a-pallet/){target=\\_blank} guide. 
You can also follow these steps with your own custom pallet by updating the `benchmarking.rs` functions (and their usages in later steps) to exercise your pallet's specific functionality.\n- Basic understanding of [computational complexity](https://en.wikipedia.org/wiki/Computational_complexity){target=\_blank}.\n- Familiarity with [Rust's testing framework](https://doc.rust-lang.org/book/ch11-00-testing.html){target=\_blank}.\n- Familiarity with setting up the Polkadot Omni Node and [Polkadot Chain Spec Builder](https://crates.io/crates/staging-chain-spec-builder){target=\_blank}. Refer to the [Set Up a Parachain Template](/parachains/launch-a-parachain/set-up-the-parachain-template/){target=\_blank} guide for instructions if needed."}
+{"page_id": "parachains-customize-runtime-pallet-development-benchmark-pallet", "page_title": "Benchmark Your Pallet", "index": 2, "depth": 2, "title": "Create the Benchmarking Module", "anchor": "create-the-benchmarking-module", "start_char": 1698, "end_char": 3422, "estimated_token_count": 452, "token_estimator": "heuristic-v1", "text": "## Create the Benchmarking Module\n\nCreate a new file `benchmarking.rs` in your pallet's `src` directory and add the following code:\n\n```rust title=\"pallets/pallet-custom/src/benchmarking.rs\"\n#![cfg(feature = \"runtime-benchmarks\")]\n\nuse super::*;\nuse frame::deps::frame_benchmarking::v2::*;\nuse frame::benchmarking::prelude::RawOrigin;\n\n#[benchmarks]\nmod benchmarks {\n use super::*;\n\n #[benchmark]\n fn set_counter_value() {\n let new_value: u32 = 100;\n\n #[extrinsic_call]\n _(RawOrigin::Root, new_value);\n\n assert_eq!(CounterValue::<T>::get(), new_value);\n }\n\n #[benchmark]\n fn increment() {\n let caller: T::AccountId = whitelisted_caller();\n let amount: u32 = 50;\n\n #[extrinsic_call]\n _(RawOrigin::Signed(caller.clone()), amount);\n\n assert_eq!(CounterValue::<T>::get(), amount);\n assert_eq!(UserInteractions::<T>::get(caller), 1);\n }\n\n #[benchmark]\n fn 
decrement() {\n // First, set the counter to a non-zero value\n CounterValue::<T>::put(100);\n\n let caller: T::AccountId = whitelisted_caller();\n let amount: u32 = 30;\n\n #[extrinsic_call]\n _(RawOrigin::Signed(caller.clone()), amount);\n\n assert_eq!(CounterValue::<T>::get(), 70);\n assert_eq!(UserInteractions::<T>::get(caller), 1);\n }\n\n impl_benchmark_test_suite!(Pallet, crate::mock::new_test_ext(), crate::mock::Test);\n}\n```\n\nThis module contains all the [benchmarking definitions](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/v2/index.html){target=\_blank} for your pallet. If you are benchmarking a different pallet, update the testing logic as needed to test your pallet's functionality."}
+{"page_id": "parachains-customize-runtime-pallet-development-benchmark-pallet", "page_title": "Benchmark Your Pallet", "index": 3, "depth": 2, "title": "Define the Weight Trait", "anchor": "define-the-weight-trait", "start_char": 3422, "end_char": 4493, "estimated_token_count": 226, "token_estimator": "heuristic-v1", "text": "## Define the Weight Trait\n\nAdd a `weights` module to your pallet that defines the `WeightInfo` trait using the following code:\n\n```rust title=\"pallets/pallet-custom/src/weights.rs\"\n#[frame::pallet]\npub mod pallet {\n use frame::prelude::*;\n pub use weights::WeightInfo;\n\n pub mod weights {\n use frame::prelude::*;\n\n pub trait WeightInfo {\n fn set_counter_value() -> Weight;\n fn increment() -> Weight;\n fn decrement() -> Weight;\n }\n\n impl WeightInfo for () {\n fn set_counter_value() -> Weight {\n Weight::from_parts(10_000, 0)\n }\n fn increment() -> Weight {\n Weight::from_parts(15_000, 0)\n }\n fn decrement() -> Weight {\n Weight::from_parts(15_000, 0)\n }\n }\n }\n\n // ... rest of pallet\n}\n```\n\nThe `WeightInfo for ()` implementation provides placeholder weights for development. 
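The placeholder-versus-benchmarked swap this trait enables can be sketched in plain Rust outside of FRAME. The `Weight` struct and the `Benchmarked` implementation below are simplified stand-ins for illustration only, not the real `frame_support` types or generated output:

```rust
// Simplified stand-in for frame_support::weights::Weight (illustration only).
#[derive(Clone, Copy, Debug, PartialEq)]
pub struct Weight { pub ref_time: u64, pub proof_size: u64 }

impl Weight {
    pub fn from_parts(ref_time: u64, proof_size: u64) -> Self {
        Self { ref_time, proof_size }
    }
}

pub trait WeightInfo {
    fn increment() -> Weight;
}

// Placeholder implementation, like `impl WeightInfo for ()` in the pallet.
impl WeightInfo for () {
    fn increment() -> Weight { Weight::from_parts(15_000, 0) }
}

// Hypothetical benchmarked implementation, standing in for a generated weights.rs.
pub struct Benchmarked;
impl WeightInfo for Benchmarked {
    fn increment() -> Weight { Weight::from_parts(8_132_000, 3593) }
}

// The pallet only calls through the trait, so the runtime chooses the source.
fn charge_weight<W: WeightInfo>() -> Weight {
    W::increment()
}

fn main() {
    assert_eq!(charge_weight::<()>(), Weight::from_parts(15_000, 0));
    assert_eq!(charge_weight::<Benchmarked>(), Weight::from_parts(8_132_000, 3593));
    println!("placeholder: {:?}", charge_weight::<()>());
}
```

In the pallet itself, the mock runtime's `type WeightInfo = ()` plays the placeholder role, while a production runtime points the same associated type at the benchmarked implementation.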
If you are using a different pallet, update the `weights` module to use your pallet's function names."}
+{"page_id": "parachains-customize-runtime-pallet-development-benchmark-pallet", "page_title": "Benchmark Your Pallet", "index": 4, "depth": 2, "title": "Add WeightInfo to Config", "anchor": "add-weightinfo-to-config", "start_char": 4493, "end_char": 5319, "estimated_token_count": 200, "token_estimator": "heuristic-v1", "text": "## Add WeightInfo to Config \n\nUpdate your pallet's `Config` trait to include `WeightInfo` by adding the following code:\n\n```rust title=\"pallets/pallet-custom/src/lib.rs\"\n#[pallet::config]\npub trait Config: frame_system::Config {\n type RuntimeEvent: From<Event<Self>> + IsType<<Self as frame_system::Config>::RuntimeEvent>;\n\n #[pallet::constant]\n type CounterMaxValue: Get<u32>;\n\n type WeightInfo: weights::WeightInfo;\n}\n```\n\nThe [`WeightInfo`](https://paritytech.github.io/polkadot-sdk/master/frame_support/weights/trait.WeightInfo.html){target=\_blank} trait provides an abstraction layer that allows weight implementations to be swapped in the runtime configuration. By making `WeightInfo` an associated type in the `Config` trait, you enable each runtime that uses your pallet to specify which weight implementation to use."}
+{"page_id": "parachains-customize-runtime-pallet-development-benchmark-pallet", "page_title": "Benchmark Your Pallet", "index": 5, "depth": 2, "title": "Update Extrinsic Weight Annotations", "anchor": "update-extrinsic-weight-annotations", "start_char": 5319, "end_char": 6607, "estimated_token_count": 311, "token_estimator": "heuristic-v1", "text": "## Update Extrinsic Weight Annotations\n\nReplace the placeholder weights in your extrinsics with calls to the `WeightInfo` trait by adding the following code:\n\n```rust title=\"pallets/pallet-custom/src/lib.rs\"\n#[pallet::call]\nimpl<T: Config> Pallet<T> {\n #[pallet::call_index(0)]\n #[pallet::weight(T::WeightInfo::set_counter_value())]\n pub fn set_counter_value(origin: OriginFor<T>, new_value: u32) -> DispatchResult {\n // ... 
implementation\n }\n\n #[pallet::call_index(1)]\n #[pallet::weight(T::WeightInfo::increment())]\n pub fn increment(origin: OriginFor<T>, amount: u32) -> DispatchResult {\n // ... implementation\n }\n\n #[pallet::call_index(2)]\n #[pallet::weight(T::WeightInfo::decrement())]\n pub fn decrement(origin: OriginFor<T>, amount: u32) -> DispatchResult {\n // ... implementation\n }\n}\n```\n\nBy calling `T::WeightInfo::function_name()` instead of using hardcoded `Weight::from_parts()` values, your extrinsics automatically use whichever weight implementation is configured in the runtime. You can switch between placeholder weights for testing and benchmarked weights for production easily, without changing any pallet code.\n\nIf you are using a different pallet, be sure to update the functions for `WeightInfo` accordingly."}
+{"page_id": "parachains-customize-runtime-pallet-development-benchmark-pallet", "page_title": "Benchmark Your Pallet", "index": 6, "depth": 2, "title": "Include the Benchmarking Module", "anchor": "include-the-benchmarking-module", "start_char": 6607, "end_char": 7144, "estimated_token_count": 141, "token_estimator": "heuristic-v1", "text": "## Include the Benchmarking Module\n\nAt the top of your `lib.rs`, add the module declaration by adding the following code:\n\n```rust title=\"pallets/pallet-custom/src/lib.rs\"\n#![cfg_attr(not(feature = \"std\"), no_std)]\n\nextern crate alloc;\nuse alloc::vec::Vec;\n\npub use pallet::*;\n\n#[cfg(feature = \"runtime-benchmarks\")]\nmod benchmarking;\n\n// Additional pallet code\n```\n\nThe `#[cfg(feature = \"runtime-benchmarks\")]` attribute ensures that benchmarking code is only compiled when explicitly needed to keep your production runtime efficient."}
+{"page_id": "parachains-customize-runtime-pallet-development-benchmark-pallet", "page_title": "Benchmark Your Pallet", "index": 7, "depth": 2, "title": "Configure Pallet Dependencies", "anchor": "configure-pallet-dependencies", "start_char": 7144, "end_char": 8054, 
"estimated_token_count": 212, "token_estimator": "heuristic-v1", "text": "## Configure Pallet Dependencies\n\nUpdate your pallet's `Cargo.toml` to enable the benchmarking feature by adding the following code:\n\n```toml title=\"pallets/pallet-custom/Cargo.toml\"\n[dependencies]\ncodec = { features = [\"derive\"], workspace = true }\nscale-info = { features = [\"derive\"], workspace = true }\nframe = { features = [\"experimental\", \"runtime\"], workspace = true }\n\n[features]\ndefault = [\"std\"]\nruntime-benchmarks = [\n \"frame/runtime-benchmarks\",\n]\nstd = [\n \"codec/std\",\n \"scale-info/std\",\n \"frame/std\",\n]\n```\n\nThe Cargo feature flag system lets you conditionally compile code based on which features are enabled. By defining a `runtime-benchmarks` feature that cascades to FRAME's benchmarking features, you create a clean way to build your pallet with or without benchmarking support, ensuring all necessary dependencies are available when needed but excluded from production builds."} +{"page_id": "parachains-customize-runtime-pallet-development-benchmark-pallet", "page_title": "Benchmark Your Pallet", "index": 8, "depth": 2, "title": "Update Mock Runtime", "anchor": "update-mock-runtime", "start_char": 8054, "end_char": 8553, "estimated_token_count": 109, "token_estimator": "heuristic-v1", "text": "## Update Mock Runtime\n\nAdd the `WeightInfo` type to your test configuration in `mock.rs` by adding the following code:\n\n```rust title=\"pallets/pallet-custom/src/mock.rs\"\nimpl pallet_custom::Config for Test {\n type RuntimeEvent = RuntimeEvent;\n type CounterMaxValue = ConstU32<1000>;\n type WeightInfo = ();\n}\n```\n\nIn your mock runtime for testing, use the placeholder `()` implementation of `WeightInfo`, since unit tests focus on verifying functional correctness rather than performance."} +{"page_id": "parachains-customize-runtime-pallet-development-benchmark-pallet", "page_title": "Benchmark Your Pallet", "index": 9, "depth": 2, "title": 
"Configure Runtime Benchmarking", "anchor": "configure-runtime-benchmarking", "start_char": 8553, "end_char": 10338, "estimated_token_count": 382, "token_estimator": "heuristic-v1", "text": "## Configure Runtime Benchmarking\n\nTo execute benchmarks, your pallet must be integrated into the runtime's benchmarking infrastructure. Follow these steps to update the runtime configuration:\n\n1. **Update `runtime/Cargo.toml`**: Add your pallet to the runtime's `runtime-benchmarks` feature as follows:\n\n ```toml title=\"runtime/Cargo.toml\"\n runtime-benchmarks = [\n \"cumulus-pallet-parachain-system/runtime-benchmarks\",\n \"hex-literal\",\n \"pallet-parachain-template/runtime-benchmarks\",\n \"polkadot-sdk/runtime-benchmarks\",\n \"pallet-custom/runtime-benchmarks\",\n ]\n ```\n\n When you build the runtime with `--features runtime-benchmarks`, this configuration ensures all necessary benchmarking code across all pallets (including yours) is included.\n\n2. **Update runtime configuration**: Use the placeholder implementation to run development benchmarks as follows:\n\n ```rust title=\"runtime/src/configs/mod.rs\"\n impl pallet_custom::Config for Runtime {\n type RuntimeEvent = RuntimeEvent;\n type CounterMaxValue = ConstU32<1000>;\n type WeightInfo = ();\n }\n ```\n\n3. **Register benchmarks**: Add your pallet to the benchmark list in `runtime/src/benchmarks.rs` as follows:\n\n ```rust title=\"runtime/src/benchmarks.rs\"\n polkadot_sdk::frame_benchmarking::define_benchmarks!(\n [frame_system, SystemBench::<Runtime>]\n [pallet_balances, Balances]\n // ... 
other pallets\n [pallet_custom, CustomPallet]\n );\n ```\n\n The [`define_benchmarks!`](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/macro.define_benchmarks.html){target=\\_blank} macro creates the infrastructure that allows the benchmarking CLI tool to discover and execute your pallet's benchmarks."} +{"page_id": "parachains-customize-runtime-pallet-development-benchmark-pallet", "page_title": "Benchmark Your Pallet", "index": 10, "depth": 2, "title": "Test Benchmark Compilation", "anchor": "test-benchmark-compilation", "start_char": 10338, "end_char": 11330, "estimated_token_count": 245, "token_estimator": "heuristic-v1", "text": "## Test Benchmark Compilation\n\nRun the following command to verify your benchmarks compile and run as tests:\n\n```bash\ncargo test -p pallet-custom --features runtime-benchmarks\n```\n\nYou will see terminal output similar to the following as your benchmark tests pass:\n\n
\n cargo test -p pallet-custom --features runtime-benchmarks\n test benchmarking::benchmarks::bench_set_counter_value ... ok\n test benchmarking::benchmarks::bench_increment ... ok\n test benchmarking::benchmarks::bench_decrement ... ok\n \n
\n\nThe `impl_benchmark_test_suite!` macro generates unit tests for each benchmark. Running these tests verifies that your benchmarks compile correctly, execute without panicking, and pass their assertions, catching issues early before building the entire runtime."} +{"page_id": "parachains-customize-runtime-pallet-development-benchmark-pallet", "page_title": "Benchmark Your Pallet", "index": 11, "depth": 2, "title": "Build the Runtime with Benchmarks", "anchor": "build-the-runtime-with-benchmarks", "start_char": 11330, "end_char": 12025, "estimated_token_count": 123, "token_estimator": "heuristic-v1", "text": "## Build the Runtime with Benchmarks\n\nCompile the runtime with benchmarking enabled to generate the Wasm binary using the following command:\n\n```bash\ncargo build --release --features runtime-benchmarks\n```\n\nThis command produces the runtime WASM file needed for benchmarking, typically located at: `target/release/wbuild/parachain-template-runtime/parachain_template_runtime.wasm`\n\nThe build includes all the benchmarking infrastructure and special host functions needed for measurement. The resulting WASM runtime contains your benchmark code and can communicate with the benchmarking tool's execution environment. 
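At its core, the measurement the benchmarking machinery performs is repeated timing of the same call, averaging out system noise — the idea behind the `--repeat` option shown later in this guide. A minimal plain-Rust sketch of that idea (not the actual `frame-omni-bencher` internals):

```rust
use std::time::Instant;

// Time a closure `repeats` times and return the average duration in nanoseconds.
// Repetition reduces the impact of scheduler jitter and cache effects.
fn measure<F: FnMut()>(mut f: F, repeats: u32) -> u128 {
    let mut total_nanos = 0u128;
    for _ in 0..repeats {
        let start = Instant::now();
        f();
        total_nanos += start.elapsed().as_nanos();
    }
    total_nanos / repeats as u128
}

fn main() {
    let mut calls = 0u64;
    let avg_nanos = measure(|| calls += 1, 20);
    assert_eq!(calls, 20); // the closure ran once per repetition
    println!("average nanos per call: {}", avg_nanos);
}
```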
You'll create a different build later for operating your chain in production."} +{"page_id": "parachains-customize-runtime-pallet-development-benchmark-pallet", "page_title": "Benchmark Your Pallet", "index": 12, "depth": 2, "title": "Install the Benchmarking Tool", "anchor": "install-the-benchmarking-tool", "start_char": 12025, "end_char": 12590, "estimated_token_count": 121, "token_estimator": "heuristic-v1", "text": "## Install the Benchmarking Tool\n\nInstall the `frame-omni-bencher` CLI tool using the following command:\n\n```bash\ncargo install frame-omni-bencher --locked\n```\n\n[`frame-omni-bencher`](https://paritytech.github.io/polkadot-sdk/master/frame_omni_bencher/index.html){target=\\_blank} is the official Polkadot SDK tool designed explicitly for FRAME pallet benchmarking. It provides a standardized way to execute benchmarks, measure execution times and storage operations, and generate properly formatted weight files with full integration into the FRAME weight system."} +{"page_id": "parachains-customize-runtime-pallet-development-benchmark-pallet", "page_title": "Benchmark Your Pallet", "index": 13, "depth": 2, "title": "Download the Weight Template", "anchor": "download-the-weight-template", "start_char": 12590, "end_char": 13392, "estimated_token_count": 161, "token_estimator": "heuristic-v1", "text": "## Download the Weight Template\n\nDownload the official weight template file using the following commands:\n\n```bash\ncurl -L https://raw.githubusercontent.com/paritytech/polkadot-sdk/refs/tags/polkadot-stable2412/substrate/.maintain/frame-weight-template.hbs \\\n--output ./pallets/pallet-custom/frame-weight-template.hbs\n```\n\nThe weight template is a Handlebars file that transforms raw benchmark data into a correctly formatted Rust source file. It defines the structure of the generated `weights.rs` file, including imports, trait definitions, documentation comments, and formatting. 
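For orientation, the weight functions emitted into `weights.rs` typically combine a measured base execution cost with per-storage-operation costs. The sketch below models that arithmetic in plain Rust; the `Weight` struct and every constant here are illustrative assumptions, not real benchmark output:

```rust
// Simplified stand-in for frame_support::weights::Weight (illustration only).
#[derive(Clone, Copy, Debug, PartialEq)]
struct Weight { ref_time: u64, proof_size: u64 }

impl Weight {
    fn from_parts(ref_time: u64, proof_size: u64) -> Self {
        Self { ref_time, proof_size }
    }
    fn saturating_add(self, other: Weight) -> Weight {
        Weight {
            ref_time: self.ref_time.saturating_add(other.ref_time),
            proof_size: self.proof_size.saturating_add(other.proof_size),
        }
    }
}

// Hypothetical per-operation database costs; real values are chain-specific.
const DB_READ: u64 = 25_000_000;
const DB_WRITE: u64 = 100_000_000;

// Shape of a generated weight: measured base cost plus 2 reads and 2 writes.
fn increment_weight() -> Weight {
    Weight::from_parts(8_000_000, 3593)
        .saturating_add(Weight::from_parts(2 * DB_READ, 0))
        .saturating_add(Weight::from_parts(2 * DB_WRITE, 0))
}

fn main() {
    let w = increment_weight();
    assert_eq!(w.ref_time, 8_000_000 + 2 * 25_000_000 + 2 * 100_000_000);
    assert_eq!(w.proof_size, 3593);
    println!("{:?}", w);
}
```

The real generated code expresses the storage terms through the runtime's configured costs, via `T::DbWeight::get().reads(n)` and `.writes(n)`, rather than hardcoded constants.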
Using the official template ensures your weight files follow the Polkadot SDK conventions and include all necessary metadata, such as benchmark execution parameters, storage operation counts, and hardware information."} +{"page_id": "parachains-customize-runtime-pallet-development-benchmark-pallet", "page_title": "Benchmark Your Pallet", "index": 14, "depth": 2, "title": "Execute Benchmarks", "anchor": "execute-benchmarks", "start_char": 13392, "end_char": 15444, "estimated_token_count": 423, "token_estimator": "heuristic-v1", "text": "## Execute Benchmarks\n\nRun benchmarks for your pallet to generate weight files using the following command:\n\n```bash\nframe-omni-bencher v1 benchmark pallet \\\n --runtime ./target/release/wbuild/parachain-template-runtime/parachain_template_runtime.wasm \\\n --pallet pallet_custom \\\n --extrinsic \"\" \\\n --template ./pallets/pallet-custom/frame-weight-template.hbs \\\n --output ./pallets/pallet-custom/src/weights.rs\n```\n\nBenchmarks execute against the compiled WASM runtime rather than native code because WASM is what actually runs in production on the blockchain. WASM execution can have different performance characteristics than native code due to compilation and sandboxing overhead, so benchmarking against the WASM ensures your weight measurements reflect real-world conditions.\n\n??? note \"Additional customization\"\n\n You can customize benchmark execution with additional parameters for more detailed measurements, as shown in the sample code below:\n\n ```bash\n frame-omni-bencher v1 benchmark pallet \\\n --runtime ./target/release/wbuild/parachain-template-runtime/parachain_template_runtime.wasm \\\n --pallet pallet_custom \\\n --extrinsic \"\" \\\n --steps 50 \\\n --repeat 20 \\\n --template ./pallets/pallet-custom/frame-weight-template.hbs \\\n --output ./pallets/pallet-custom/src/weights.rs\n ```\n \n - **`--steps 50`**: Number of different input values to test when using linear components (default: 50). 
More steps provide finer granularity for detecting complexity trends but increase benchmarking time.\n - **`--repeat 20`**: Number of repetitions for each measurement (default: 20). More repetitions improve statistical accuracy by averaging out variance, reducing the impact of system noise, and providing more reliable weight estimates.\n - **`--heap-pages 4096`**: WASM heap pages allocation. Affects available memory during execution.\n - **`--wasm-execution compiled`**: WASM execution method. Use `compiled` for performance closest to production conditions."} +{"page_id": "parachains-customize-runtime-pallet-development-benchmark-pallet", "page_title": "Benchmark Your Pallet", "index": 15, "depth": 2, "title": "Use Generated Weights", "anchor": "use-generated-weights", "start_char": 15444, "end_char": 19239, "estimated_token_count": 851, "token_estimator": "heuristic-v1", "text": "## Use Generated Weights\n\nAfter running benchmarks, a `weights.rs` file is generated containing measured weights based on actual measurements of your code running on real hardware, accounting for the specific complexity of your logic, storage access patterns, and computational requirements.\n\nFollow these steps to use the generated weights with your pallet:\n\n1. Integrate the generated weights by adding the weights module to your pallet's `lib.rs` as follows:\n\n ```rust title=\"pallets/pallet-custom/src/lib.rs\"\n #![cfg_attr(not(feature = \"std\"), no_std)]\n\n extern crate alloc;\n use alloc::vec::Vec;\n\n pub use pallet::*;\n\n #[cfg(feature = \"runtime-benchmarks\")]\n mod benchmarking;\n\n pub mod weights;\n\n #[frame::pallet]\n pub mod pallet {\n use super::*;\n use frame::prelude::*;\n use crate::weights::WeightInfo;\n // ... 
rest of pallet\n }\n ```\n\n Unlike the benchmarking module (which is only needed when running benchmarks), the weights module must be available in all builds because the runtime needs to call the weight functions during regular operation to calculate transaction fees and enforce block limits.\n\n2. Update your runtime configuration to use the generated weights instead of the placeholder `()` implementation by adding the following code:\n\n ```rust title=\"runtime/src/configs/mod.rs\"\n impl pallet_custom::Config for Runtime {\n type RuntimeEvent = RuntimeEvent;\n type CounterMaxValue = ConstU32<1000>;\n type WeightInfo = pallet_custom::weights::SubstrateWeight<Runtime>;\n }\n ```\n\n This change activates your benchmarked weights in the production runtime. Now, when users submit transactions that call your pallet's extrinsics, the runtime will use the actual measured weights to calculate fees and enforce block limits.\n\n??? code \"Example generated weight file\"\n \n The generated `weights.rs` file will look similar to this:\n\n ```rust title=\"pallets/pallet-custom/src/weights.rs\"\n //! Autogenerated weights for `pallet_custom`\n //!\n //! THIS FILE WAS AUTO-GENERATED USING THE SUBSTRATE BENCHMARK CLI VERSION 32.0.0\n //! 
DATE: 2025-01-15, STEPS: `50`, REPEAT: `20`\n\n #![cfg_attr(rustfmt, rustfmt_skip)]\n #![allow(unused_parens)]\n #![allow(unused_imports)]\n #![allow(missing_docs)]\n\n use frame_support::{traits::Get, weights::{Weight, constants::RocksDbWeight}};\n use core::marker::PhantomData;\n\n pub trait WeightInfo {\n fn set_counter_value() -> Weight;\n fn increment() -> Weight;\n fn decrement() -> Weight;\n }\n\n pub struct SubstrateWeight<T>(PhantomData<T>);\n impl<T: frame_system::Config> WeightInfo for SubstrateWeight<T> {\n fn set_counter_value() -> Weight {\n Weight::from_parts(8_234_000, 0)\n .saturating_add(T::DbWeight::get().reads(1))\n .saturating_add(T::DbWeight::get().writes(1))\n }\n\n fn increment() -> Weight {\n Weight::from_parts(12_456_000, 0)\n .saturating_add(T::DbWeight::get().reads(2))\n .saturating_add(T::DbWeight::get().writes(2))\n }\n\n fn decrement() -> Weight {\n Weight::from_parts(11_987_000, 0)\n .saturating_add(T::DbWeight::get().reads(2))\n .saturating_add(T::DbWeight::get().writes(2))\n }\n }\n ```\n\n The actual numbers in your `weights.rs` file will vary based on your hardware and implementation complexity. 
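As a rough illustration of how such a generated weight is composed, the sketch below reproduces the ref-time arithmetic of `increment()` in plain, standalone Rust. The per-read and per-write costs are assumptions for this example (approximately the common RocksDB defaults of 25 µs and 100 µs, expressed in picoseconds); in a real runtime they come from the configured `DbWeight`, not from these constants.

```rust
// Assumed per-operation costs in picoseconds of ref_time (illustrative;
// a real runtime takes these from its configured DbWeight).
const READ_PS: u64 = 25_000_000; // ~25 µs per storage read
const WRITE_PS: u64 = 100_000_000; // ~100 µs per storage write

// Mirrors Weight::from_parts(base, 0) followed by saturating additions
// for database reads and writes, as in the generated code above.
fn ref_time(base_ps: u64, reads: u64, writes: u64) -> u64 {
    base_ps
        .saturating_add(reads.saturating_mul(READ_PS))
        .saturating_add(writes.saturating_mul(WRITE_PS))
}

fn main() {
    // increment(): base 12_456_000 ps, 2 reads, 2 writes.
    let total = ref_time(12_456_000, 2, 2);
    println!("increment ref_time: {total} ps");
    assert_eq!(total, 262_456_000);
}
```

This covers only the ref-time component; a `Weight` also carries a proof-size component that the generated code accumulates in the same saturating fashion.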
The [`DbWeight`](https://paritytech.github.io/polkadot-sdk/master/frame_support/weights/struct.RuntimeDbWeight.html){target=\\_blank} accounts for database read and write operations.\n\nCongratulations, you've successfully benchmarked a pallet and updated your runtime to use the generated weight values."} +{"page_id": "parachains-customize-runtime-pallet-development-benchmark-pallet", "page_title": "Benchmark Your Pallet", "index": 16, "depth": 2, "title": "Related Resources", "anchor": "related-resources", "start_char": 19239, "end_char": 19780, "estimated_token_count": 153, "token_estimator": "heuristic-v1", "text": "## Related Resources\n\n- [FRAME Benchmarking Documentation](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/index.html){target=\\_blank}\n- [Weight Struct Documentation](https://paritytech.github.io/polkadot-sdk/master/frame_support/weights/struct.Weight.html){target=\\_blank}\n- [Benchmarking v2 API](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/v2/index.html){target=\\_blank}\n- [frame-omni-bencher Tool](https://paritytech.github.io/polkadot-sdk/master/frame_omni_bencher/index.html){target=\\_blank}"} {"page_id": "parachains-customize-runtime-pallet-development-create-a-pallet", "page_title": "Create a Custom Pallet", "index": 0, "depth": 2, "title": "Introduction", "anchor": "introduction", "start_char": 26, "end_char": 847, "estimated_token_count": 167, "token_estimator": "heuristic-v1", "text": "## Introduction\n\n[Framework for Runtime Aggregation of Modular Entities (FRAME)](https://paritytech.github.io/polkadot-sdk/master/polkadot_sdk_docs/polkadot_sdk/frame_runtime/index.html){target=\\_blank} provides a powerful set of tools for blockchain development through modular components called [pallets](https://paritytech.github.io/polkadot-sdk/master/polkadot_sdk_docs/polkadot_sdk/frame_runtime/pallet/index.html){target=\\_blank}. 
These Rust-based runtime modules allow you to build custom blockchain functionality with precision and flexibility. While FRAME includes a library of pre-built pallets, its true strength lies in creating custom pallets tailored to your specific needs.\n\nIn this guide, you'll learn how to build a custom counter pallet from scratch that demonstrates core pallet development concepts."} {"page_id": "parachains-customize-runtime-pallet-development-create-a-pallet", "page_title": "Create a Custom Pallet", "index": 1, "depth": 2, "title": "Prerequisites", "anchor": "prerequisites", "start_char": 847, "end_char": 1217, "estimated_token_count": 99, "token_estimator": "heuristic-v1", "text": "## Prerequisites\n\nBefore you begin, ensure you have:\n\n- [Polkadot SDK dependencies installed](/parachains/install-polkadot-sdk/){target=\\_blank}.\n- A [Polkadot SDK Parachain Template](/parachains/launch-a-parachain/set-up-the-parachain-template/){target=\\_blank} set up locally.\n- Basic familiarity with [FRAME concepts](/parachains/customize-runtime/){target=\\_blank}."} {"page_id": "parachains-customize-runtime-pallet-development-create-a-pallet", "page_title": "Create a Custom Pallet", "index": 2, "depth": 2, "title": "Core Pallet Components", "anchor": "core-pallet-components", "start_char": 1217, "end_char": 2092, "estimated_token_count": 193, "token_estimator": "heuristic-v1", "text": "## Core Pallet Components\n\nAs you build your custom pallet, you'll work with these key sections:\n\n- **Imports and dependencies**: Bring in necessary FRAME libraries and external modules.\n- **Runtime configuration trait**: Specify types and constants for pallet-runtime interaction.\n- **Runtime events**: Define signals that communicate state changes.\n- **Runtime errors**: Define error types returned from dispatchable calls.\n- **Runtime storage**: Declare on-chain storage items for your pallet's state.\n- **Genesis configuration**: Set initial blockchain state.\n- **Dispatchable functions 
(extrinsics)**: Create callable functions for user interactions.\n\nFor additional macros beyond those covered here, refer to the [pallet_macros](https://paritytech.github.io/polkadot-sdk/master/frame_support/pallet_macros/index.html){target=\\_blank} section of the Polkadot SDK Docs."} @@ -211,12 +225,12 @@ {"page_id": "parachains-customize-runtime-pallet-development-create-a-pallet", "page_title": "Create a Custom Pallet", "index": 17, "depth": 3, "title": "Add to Runtime Construct", "anchor": "add-to-runtime-construct", "start_char": 22324, "end_char": 23326, "estimated_token_count": 214, "token_estimator": "heuristic-v1", "text": "### Add to Runtime Construct\n\nIn the `runtime/src/lib.rs` file, locate the [`#[frame_support::runtime]`](https://paritytech.github.io/polkadot-sdk/master/frame_support/attr.runtime.html){target=\\_blank} section and add your pallet with a unique `pallet_index`:\n\n```rust title=\"runtime/src/lib.rs\"\n#[frame_support::runtime]\nmod runtime {\n #[runtime::runtime]\n #[runtime::derive(\n RuntimeCall,\n RuntimeEvent,\n RuntimeError,\n RuntimeOrigin,\n RuntimeTask,\n RuntimeFreezeReason,\n RuntimeHoldReason,\n RuntimeSlashReason,\n RuntimeLockId,\n RuntimeViewFunction\n )]\n pub struct Runtime;\n\n #[runtime::pallet_index(0)]\n pub type System = frame_system;\n\n // ... other pallets\n\n #[runtime::pallet_index(51)]\n pub type CustomPallet = pallet_custom;\n}\n```\n\n!!!warning\n Each pallet must have a unique index. Duplicate indices will cause compilation errors. 
Choose an index that doesn't conflict with existing pallets."} {"page_id": "parachains-customize-runtime-pallet-development-create-a-pallet", "page_title": "Create a Custom Pallet", "index": 18, "depth": 3, "title": "Configure Genesis for Your Runtime", "anchor": "configure-genesis-for-your-runtime", "start_char": 23326, "end_char": 23824, "estimated_token_count": 100, "token_estimator": "heuristic-v1", "text": "### Configure Genesis for Your Runtime\n\nTo set initial values for your pallet when the chain starts, you'll need to configure the genesis in your chain specification. Genesis configuration is typically done in the `node/src/chain_spec.rs` file or when generating the chain specification.\n\nFor development and testing, you can use the default values provided by the `#[derive(DefaultNoBound)]` macro. For production networks, you'll want to explicitly set these values in your chain specification."} {"page_id": "parachains-customize-runtime-pallet-development-create-a-pallet", "page_title": "Create a Custom Pallet", "index": 19, "depth": 3, "title": "Verify Runtime Compilation", "anchor": "verify-runtime-compilation", "start_char": 23824, "end_char": 24047, "estimated_token_count": 41, "token_estimator": "heuristic-v1", "text": "### Verify Runtime Compilation\n\nCompile the runtime to ensure everything is configured correctly:\n\n```bash\ncargo build --release\n```\n\nThis command validates all pallet configurations and prepares the build for deployment."} -{"page_id": "parachains-customize-runtime-pallet-development-create-a-pallet", "page_title": "Create a Custom Pallet", "index": 20, "depth": 2, "title": "Run Your Chain Locally", "anchor": "run-your-chain-locally", "start_char": 24047, "end_char": 24235, "estimated_token_count": 47, "token_estimator": "heuristic-v1", "text": "## Run Your Chain Locally\n\nLaunch your parachain locally to test the new pallet functionality using the [Polkadot Omni 
Node](https://crates.io/crates/polkadot-omni-node){target=\\_blank}."} -{"page_id": "parachains-customize-runtime-pallet-development-create-a-pallet", "page_title": "Create a Custom Pallet", "index": 21, "depth": 3, "title": "Generate a Chain Specification", "anchor": "generate-a-chain-specification", "start_char": 24235, "end_char": 24644, "estimated_token_count": 92, "token_estimator": "heuristic-v1", "text": "### Generate a Chain Specification\n\nCreate a chain specification file with the updated runtime:\n\n```bash\nchain-spec-builder create -t development \\\n--relay-chain paseo \\\n--para-id 1000 \\\n--runtime ./target/release/wbuild/parachain-template-runtime/parachain_template_runtime.compact.compressed.wasm \\\nnamed-preset development\n```\n\nThis command generates a `chain_spec.json` that includes your custom pallet."} -{"page_id": "parachains-customize-runtime-pallet-development-create-a-pallet", "page_title": "Create a Custom Pallet", "index": 22, "depth": 3, "title": "Start the Parachain Node", "anchor": "start-the-parachain-node", "start_char": 24644, "end_char": 24827, "estimated_token_count": 44, "token_estimator": "heuristic-v1", "text": "### Start the Parachain Node\n\nLaunch the parachain:\n\n```bash\npolkadot-omni-node --chain ./chain_spec.json --dev\n```\n\nVerify the node starts successfully and begins producing blocks."} -{"page_id": "parachains-customize-runtime-pallet-development-create-a-pallet", "page_title": "Create a Custom Pallet", "index": 23, "depth": 2, "title": "Interact with Your Pallet", "anchor": "interact-with-your-pallet", "start_char": 24827, "end_char": 25599, "estimated_token_count": 234, "token_estimator": "heuristic-v1", "text": "## Interact with Your Pallet\n\nUse the Polkadot.js Apps interface to test your pallet:\n\n1. Navigate to [Polkadot.js Apps](https://polkadot.js.org/apps/?rpc=ws%3A%2F%2F127.0.0.1%3A9944#/extrinsics){target=\\_blank}.\n\n2. 
Ensure you're connected to your local node at `ws://127.0.0.1:9944`.\n\n3. Go to **Developer** > **Extrinsics**.\n\n4. Locate **customPallet** in the pallet dropdown.\n\n5. You should see the available extrinsics:\n\n - **`increment(amount)`**: Increase the counter by a specified amount.\n - **`decrement(amount)`**: Decrease the counter by a specified amount.\n - **`setCounterValue(newValue)`**: Set counter to a specific value (requires sudo/root).\n\n![](/images/parachains/customize-runtime/pallet-development/create-a-pallet/create-a-pallet-01.webp)"} -{"page_id": "parachains-customize-runtime-pallet-development-create-a-pallet", "page_title": "Create a Custom Pallet", "index": 24, "depth": 2, "title": "Key Takeaways", "anchor": "key-takeaways", "start_char": 25599, "end_char": 26322, "estimated_token_count": 129, "token_estimator": "heuristic-v1", "text": "## Key Takeaways\n\nYou've successfully created and integrated a custom pallet into a Polkadot SDK-based runtime. You have now successfully:\n\n- Defined runtime-specific types and constants via the `Config` trait.\n- Implemented on-chain state using `StorageValue` and `StorageMap`.\n- Created signals to communicate state changes to external systems.\n- Established clear error handling with descriptive error types.\n- Configured initial blockchain state for both production and testing.\n- Built callable functions with proper validation and access control.\n- Added the pallet to a runtime and tested it locally.\n\nThese components form the foundation for developing sophisticated blockchain logic in Polkadot SDK-based chains."} -{"page_id": "parachains-customize-runtime-pallet-development-create-a-pallet", "page_title": "Create a Custom Pallet", "index": 25, "depth": 2, "title": "Where to Go Next", "anchor": "where-to-go-next", "start_char": 26322, "end_char": 26671, "estimated_token_count": 86, "token_estimator": "heuristic-v1", "text": "## Where to Go Next\n\n
\n\n- Guide __Mock Your Runtime__\n\n ---\n\n Learn to create a mock runtime environment for testing your pallet in isolation before integration.\n\n [:octicons-arrow-right-24: Continue](/parachains/customize-runtime/pallet-development/mock-runtime/)\n\n
"} +{"page_id": "parachains-customize-runtime-pallet-development-create-a-pallet", "page_title": "Create a Custom Pallet", "index": 20, "depth": 2, "title": "Run Your Chain Locally", "anchor": "run-your-chain-locally", "start_char": 24047, "end_char": 24522, "estimated_token_count": 128, "token_estimator": "heuristic-v1", "text": "## Run Your Chain Locally\n\nLaunch your parachain locally to test the new pallet functionality using the [Polkadot Omni Node](https://crates.io/crates/polkadot-omni-node){target=\\_blank}. For instructions on setting up the Polkadot Omni Node and [Polkadot Chain Spec Builder](https://crates.io/crates/staging-chain-spec-builder){target=\\_blank}, refer to the [Set Up a Parachain Template](/parachains/launch-a-parachain/set-up-the-parachain-template/){target=\\_blank} guide."} +{"page_id": "parachains-customize-runtime-pallet-development-create-a-pallet", "page_title": "Create a Custom Pallet", "index": 21, "depth": 3, "title": "Generate a Chain Specification", "anchor": "generate-a-chain-specification", "start_char": 24522, "end_char": 24931, "estimated_token_count": 92, "token_estimator": "heuristic-v1", "text": "### Generate a Chain Specification\n\nCreate a chain specification file with the updated runtime:\n\n```bash\nchain-spec-builder create -t development \\\n--relay-chain paseo \\\n--para-id 1000 \\\n--runtime ./target/release/wbuild/parachain-template-runtime/parachain_template_runtime.compact.compressed.wasm \\\nnamed-preset development\n```\n\nThis command generates a `chain_spec.json` that includes your custom pallet."} +{"page_id": "parachains-customize-runtime-pallet-development-create-a-pallet", "page_title": "Create a Custom Pallet", "index": 22, "depth": 3, "title": "Start the Parachain Node", "anchor": "start-the-parachain-node", "start_char": 24931, "end_char": 25114, "estimated_token_count": 44, "token_estimator": "heuristic-v1", "text": "### Start the Parachain Node\n\nLaunch the 
parachain:\n\n```bash\npolkadot-omni-node --chain ./chain_spec.json --dev\n```\n\nVerify the node starts successfully and begins producing blocks."} +{"page_id": "parachains-customize-runtime-pallet-development-create-a-pallet", "page_title": "Create a Custom Pallet", "index": 23, "depth": 2, "title": "Interact with Your Pallet", "anchor": "interact-with-your-pallet", "start_char": 25114, "end_char": 25886, "estimated_token_count": 234, "token_estimator": "heuristic-v1", "text": "## Interact with Your Pallet\n\nUse the Polkadot.js Apps interface to test your pallet:\n\n1. Navigate to [Polkadot.js Apps](https://polkadot.js.org/apps/?rpc=ws%3A%2F%2F127.0.0.1%3A9944#/extrinsics){target=\\_blank}.\n\n2. Ensure you're connected to your local node at `ws://127.0.0.1:9944`.\n\n3. Go to **Developer** > **Extrinsics**.\n\n4. Locate **customPallet** in the pallet dropdown.\n\n5. You should see the available extrinsics:\n\n - **`increment(amount)`**: Increase the counter by a specified amount.\n - **`decrement(amount)`**: Decrease the counter by a specified amount.\n - **`setCounterValue(newValue)`**: Set counter to a specific value (requires sudo/root).\n\n![](/images/parachains/customize-runtime/pallet-development/create-a-pallet/create-a-pallet-01.webp)"} +{"page_id": "parachains-customize-runtime-pallet-development-create-a-pallet", "page_title": "Create a Custom Pallet", "index": 24, "depth": 2, "title": "Key Takeaways", "anchor": "key-takeaways", "start_char": 25886, "end_char": 26609, "estimated_token_count": 129, "token_estimator": "heuristic-v1", "text": "## Key Takeaways\n\nYou've successfully created and integrated a custom pallet into a Polkadot SDK-based runtime. 
You have now successfully:\n\n- Defined runtime-specific types and constants via the `Config` trait.\n- Implemented on-chain state using `StorageValue` and `StorageMap`.\n- Created signals to communicate state changes to external systems.\n- Established clear error handling with descriptive error types.\n- Configured initial blockchain state for both production and testing.\n- Built callable functions with proper validation and access control.\n- Added the pallet to a runtime and tested it locally.\n\nThese components form the foundation for developing sophisticated blockchain logic in Polkadot SDK-based chains."} +{"page_id": "parachains-customize-runtime-pallet-development-create-a-pallet", "page_title": "Create a Custom Pallet", "index": 25, "depth": 2, "title": "Where to Go Next", "anchor": "where-to-go-next", "start_char": 26609, "end_char": 26958, "estimated_token_count": 86, "token_estimator": "heuristic-v1", "text": "## Where to Go Next\n\n
\n\n- Guide __Mock Your Runtime__\n\n ---\n\n Learn to create a mock runtime environment for testing your pallet in isolation before integration.\n\n [:octicons-arrow-right-24: Continue](/parachains/customize-runtime/pallet-development/mock-runtime/)\n\n
"} {"page_id": "parachains-customize-runtime-pallet-development-mock-runtime", "page_title": "Mock Your Runtime", "index": 0, "depth": 2, "title": "Introduction", "anchor": "introduction", "start_char": 21, "end_char": 806, "estimated_token_count": 158, "token_estimator": "heuristic-v1", "text": "## Introduction\n\nTesting is a critical part of pallet development. Before integrating your pallet into a full runtime, you need a way to test its functionality in isolation. A mock runtime provides a minimal, simulated blockchain environment where you can verify your pallet's logic without the overhead of running a full node.\n\nIn this guide, you'll learn how to create a mock runtime for the custom counter pallet built in the [Make a Custom Pallet](/parachains/customize-runtime/pallet-development/create-a-pallet/){target=\\_blank} guide. This mock runtime will enable you to write comprehensive unit tests that verify:\n\n- Dispatchable function behavior.\n- Storage state changes.\n- Event emission.\n- Error handling.\n- Access control and origin validation.\n- Genesis configuration."} {"page_id": "parachains-customize-runtime-pallet-development-mock-runtime", "page_title": "Mock Your Runtime", "index": 1, "depth": 2, "title": "Prerequisites", "anchor": "prerequisites", "start_char": 806, "end_char": 1203, "estimated_token_count": 108, "token_estimator": "heuristic-v1", "text": "## Prerequisites\n\nBefore you begin, ensure you have:\n\n- Completed the [Make a Custom Pallet](/parachains/customize-runtime/pallet-development/create-a-pallet/){target=\\_blank} guide.\n- The custom counter pallet from the Make a Custom Pallet guide. 
Available in `pallets/pallet-custom`.\n- Basic understanding of [Rust testing](https://doc.rust-lang.org/book/ch11-00-testing.html){target=\\_blank}."} {"page_id": "parachains-customize-runtime-pallet-development-mock-runtime", "page_title": "Mock Your Runtime", "index": 2, "depth": 2, "title": "Understand Mock Runtimes", "anchor": "understand-mock-runtimes", "start_char": 1203, "end_char": 1737, "estimated_token_count": 90, "token_estimator": "heuristic-v1", "text": "## Understand Mock Runtimes\n\nA mock runtime is a minimal implementation of the runtime environment that:\n\n- Simulates blockchain state to provide storage and state management.\n- Satisfies your pallet's `Config` trait requirements.\n- Allows isolated testing without external dependencies.\n- Supports genesis configuration to set initial blockchain state for tests.\n- Provides instant feedback on code changes for a faster development cycle.\n\nMock runtimes are used exclusively for testing and are never deployed to a live blockchain."} @@ -230,27 +244,13 @@ {"page_id": "parachains-customize-runtime-pallet-development-mock-runtime", "page_title": "Mock Your Runtime", "index": 10, "depth": 2, "title": "Verify Mock Compilation", "anchor": "verify-mock-compilation", "start_char": 7931, "end_char": 10853, "estimated_token_count": 564, "token_estimator": "heuristic-v1", "text": "## Verify Mock Compilation\n\nBefore proceeding to write tests, ensure your mock runtime compiles correctly:\n\n```bash\ncargo test --package pallet-custom --lib\n```\n\nThis command compiles the test code (including the mock and genesis configuration) without running tests yet. Address any compilation errors before continuing.\n\n??? 
code \"Complete mock runtime script\"\n\n Here's the complete `mock.rs` file for reference:\n\n ```rust title=\"src/mock.rs\"\n use crate as pallet_custom;\n use frame::{\n deps::{\n frame_support::{ derive_impl, traits::ConstU32 },\n sp_io,\n sp_runtime::{ traits::IdentityLookup, BuildStorage },\n },\n prelude::*,\n };\n\n type Block = frame_system::mocking::MockBlock;\n\n // Configure a mock runtime to test the pallet.\n frame::deps::frame_support::construct_runtime!(\n pub enum Test\n {\n System: frame_system,\n CustomPallet: pallet_custom,\n }\n );\n\n #[derive_impl(frame_system::config_preludes::TestDefaultConfig)]\n impl frame_system::Config for Test {\n type Block = Block;\n type AccountId = u64;\n type Lookup = IdentityLookup;\n }\n\n impl pallet_custom::Config for Test {\n type RuntimeEvent = RuntimeEvent;\n type CounterMaxValue = ConstU32<1000>;\n }\n\n // Build genesis storage according to the mock runtime.\n pub fn new_test_ext() -> sp_io::TestExternalities {\n let mut t = frame_system::GenesisConfig::::default().build_storage().unwrap();\n\n (pallet_custom::GenesisConfig:: {\n initial_counter_value: 0,\n initial_user_interactions: vec![],\n })\n .assimilate_storage(&mut t)\n .unwrap();\n\n t.into()\n }\n\n // Helper function to create a test externalities with a specific initial counter value\n pub fn new_test_ext_with_counter(initial_value: u32) -> sp_io::TestExternalities {\n let mut t = frame_system::GenesisConfig::::default().build_storage().unwrap();\n\n (pallet_custom::GenesisConfig:: {\n initial_counter_value: initial_value,\n initial_user_interactions: vec![],\n })\n .assimilate_storage(&mut t)\n .unwrap();\n\n t.into()\n }\n\n // Helper function to create a test externalities with initial user interactions\n pub fn new_test_ext_with_interactions(\n initial_value: u32,\n interactions: Vec<(u64, u32)>\n ) -> sp_io::TestExternalities {\n let mut t = frame_system::GenesisConfig::::default().build_storage().unwrap();\n\n 
(pallet_custom::GenesisConfig::<Test> {\n initial_counter_value: initial_value,\n initial_user_interactions: interactions,\n })\n .assimilate_storage(&mut t)\n .unwrap();\n\n t.into()\n }\n ```"}
\n\n- Guide __Pallet Unit Testing__\n\n ---\n\n Learn to write comprehensive unit tests for your pallet using the mock runtime you just created.\n\n [:octicons-arrow-right-24: Continue](/parachains/customize-runtime/pallet-development/pallet-testing/)\n\n
"} -{"page_id": "parachains-customize-runtime-pallet-development-pallet-testing", "page_title": "Pallet Unit Testing", "index": 0, "depth": 2, "title": "Introduction", "anchor": "introduction", "start_char": 23, "end_char": 686, "estimated_token_count": 129, "token_estimator": "heuristic-v1", "text": "## Introduction\n\nUnit testing in the Polkadot SDK helps ensure that the functions provided by a pallet behave as expected. It also confirms that data and events associated with a pallet are processed correctly during interactions. With your mock runtime in place from the [previous guide](/parachains/customize-runtime/pallet-development/mock-runtime/), you can now write comprehensive tests that verify your pallet's behavior in isolation.\n\nIn this guide, you'll learn how to:\n\n- Structure test modules effectively.\n- Test dispatchable functions.\n- Verify storage changes.\n- Check event emission.\n- Test error conditions.\n- Use genesis configurations in tests."} -{"page_id": "parachains-customize-runtime-pallet-development-pallet-testing", "page_title": "Pallet Unit Testing", "index": 1, "depth": 2, "title": "Prerequisites", "anchor": "prerequisites", "start_char": 686, "end_char": 1149, "estimated_token_count": 123, "token_estimator": "heuristic-v1", "text": "## Prerequisites\n\nBefore you begin, ensure you:\n\n- Completed the [Make a Custom Pallet](/parachains/customize-runtime/pallet-development/create-a-pallet/) guide.\n- Completed the [Mock Your Runtime](/parachains/customize-runtime/pallet-development/mock-runtime/) guide.\n- Configured custom counter pallet with mock runtime in `pallets/pallet-custom`.\n- Understood the basics of [Rust testing](https://doc.rust-lang.org/book/ch11-00-testing.html){target=\\_blank}."} -{"page_id": "parachains-customize-runtime-pallet-development-pallet-testing", "page_title": "Pallet Unit Testing", "index": 2, "depth": 2, "title": "Understanding FRAME Testing Tools", "anchor": "understanding-frame-testing-tools", 
"start_char": 1149, "end_char": 1389, "estimated_token_count": 51, "token_estimator": "heuristic-v1", "text": "## Understanding FRAME Testing Tools\n\n[FRAME](/reference/glossary/#frame-framework-for-runtime-aggregation-of-modularized-entities){target=\\_blank} provides specialized testing macros and utilities that make pallet testing more efficient:"} -{"page_id": "parachains-customize-runtime-pallet-development-pallet-testing", "page_title": "Pallet Unit Testing", "index": 3, "depth": 3, "title": "Assertion Macros", "anchor": "assertion-macros", "start_char": 1389, "end_char": 2134, "estimated_token_count": 203, "token_estimator": "heuristic-v1", "text": "### Assertion Macros\n\n- **[`assert_ok!`](https://paritytech.github.io/polkadot-sdk/master/frame_support/macro.assert_ok.html){target=\\_blank}** - Asserts that a dispatchable call succeeds.\n- **[`assert_noop!`](https://paritytech.github.io/polkadot-sdk/master/frame_support/macro.assert_noop.html){target=\\_blank}** - Asserts that a call fails without changing state (no operation).\n- **[`assert_eq!`](https://doc.rust-lang.org/std/macro.assert_eq.html){target=\\_blank}** - Standard Rust equality assertion.\n\n!!!info \"`assert_noop!` Explained\"\n Use `assert_noop!` to ensure the operation fails without any state changes. 
This is critical for testing error conditions - it verifies both that the error occurs AND that no storage was modified."} -{"page_id": "parachains-customize-runtime-pallet-development-pallet-testing", "page_title": "Pallet Unit Testing", "index": 4, "depth": 3, "title": "System Pallet Test Helpers", "anchor": "system-pallet-test-helpers", "start_char": 2134, "end_char": 3130, "estimated_token_count": 279, "token_estimator": "heuristic-v1", "text": "### System Pallet Test Helpers\n\nThe [`frame_system`](https://paritytech.github.io/polkadot-sdk/master/frame_system/index.html){target=\\_blank} pallet provides useful methods for testing:\n\n- **[`System::events()`](https://paritytech.github.io/polkadot-sdk/master/frame_system/pallet/struct.Pallet.html#method.events){target=\\_blank}** - Returns all events emitted during the test.\n- **[`System::assert_last_event()`](https://paritytech.github.io/polkadot-sdk/master/frame_system/pallet/struct.Pallet.html#method.assert_last_event){target=\\_blank}** - Asserts the last event matches expectations.\n- **[`System::set_block_number()`](https://paritytech.github.io/polkadot-sdk/master/frame_system/pallet/struct.Pallet.html#method.set_block_number){target=\\_blank}** - Sets the current block number.\n\n!!!info \"Events and Block Number\"\n Events are not emitted on block 0 (genesis block). 
If you need to test events, ensure you set the block number to at least 1 using `System::set_block_number(1)`."} -{"page_id": "parachains-customize-runtime-pallet-development-pallet-testing", "page_title": "Pallet Unit Testing", "index": 5, "depth": 3, "title": "Origin Types", "anchor": "origin-types", "start_char": 3130, "end_char": 3921, "estimated_token_count": 230, "token_estimator": "heuristic-v1", "text": "### Origin Types\n\n- **[`RuntimeOrigin::root()`](https://paritytech.github.io/polkadot-sdk/master/frame_system/enum.RawOrigin.html#variant.Root){target=\\_blank}** - Root/sudo origin for privileged operations.\n- **[`RuntimeOrigin::signed(account)`](https://paritytech.github.io/polkadot-sdk/master/frame_system/enum.RawOrigin.html#variant.Signed){target=\\_blank}** - Signed origin from a specific account.\n- **[`RuntimeOrigin::none()`](https://paritytech.github.io/polkadot-sdk/master/frame_system/enum.RawOrigin.html#variant.None){target=\\_blank}** - No origin (typically fails for most operations).\n\nLearn more about origins in the [FRAME Origin reference document](https://paritytech.github.io/polkadot-sdk/master/polkadot_sdk_docs/reference_docs/frame_origin/index.html){target=\\_blank}."} -{"page_id": "parachains-customize-runtime-pallet-development-pallet-testing", "page_title": "Pallet Unit Testing", "index": 6, "depth": 2, "title": "Create the Tests Module", "anchor": "create-the-tests-module", "start_char": 3921, "end_char": 4528, "estimated_token_count": 166, "token_estimator": "heuristic-v1", "text": "## Create the Tests Module\n\nCreate a new file for your tests within the pallet directory:\n\n1. Navigate to your pallet directory:\n\n ```bash\n cd pallets/pallet-custom/src\n ```\n\n2. Create a new file named `tests.rs`:\n\n ```bash\n touch tests.rs\n ```\n\n3. 
Open `src/lib.rs` and add the tests module declaration after the mock module:\n\n ```rust title=\"src/lib.rs\"\n #![cfg_attr(not(feature = \"std\"), no_std)]\n\n pub use pallet::*;\n\n #[cfg(test)]\n mod mock;\n\n #[cfg(test)]\n mod tests;\n\n #[frame::pallet]\n pub mod pallet {\n // ... existing pallet code\n }\n ```"} -{"page_id": "parachains-customize-runtime-pallet-development-pallet-testing", "page_title": "Pallet Unit Testing", "index": 7, "depth": 2, "title": "Set Up the Test Module", "anchor": "set-up-the-test-module", "start_char": 4528, "end_char": 9206, "estimated_token_count": 1011, "token_estimator": "heuristic-v1", "text": "## Set Up the Test Module\n\nOpen `src/tests.rs` and add the basic structure with necessary imports:\n\n```rust\nuse crate::{mock::*, Error, Event};\nuse frame::deps::frame_support::{assert_noop, assert_ok};\nuse frame::deps::sp_runtime::DispatchError;\n```\n\nThis setup imports:\n\n- The mock runtime and test utilities from `mock.rs`\n- Your pallet's `Error` and `Event` types\n- FRAME's assertion macros via `frame::deps`\n- `DispatchError` for testing origin checks\n\n???+ code \"Complete Pallet Code Reference\"\n Here's the complete pallet code that you'll be testing throughout this guide:\n\n ```rust\n #![cfg_attr(not(feature = \"std\"), no_std)]\n\n pub use pallet::*;\n\n #[frame::pallet]\n pub mod pallet {\n use frame::prelude::*;\n\n #[pallet::pallet]\n pub struct Pallet<T>(_);\n\n #[pallet::config]\n pub trait Config: frame_system::Config {\n type RuntimeEvent: From<Event<Self>> + IsType<<Self as frame_system::Config>::RuntimeEvent>;\n\n #[pallet::constant]\n type CounterMaxValue: Get<u32>;\n }\n\n #[pallet::event]\n #[pallet::generate_deposit(pub(super) fn deposit_event)]\n pub enum Event<T: Config> {\n CounterValueSet {\n new_value: u32,\n },\n CounterIncremented {\n new_value: u32,\n who: T::AccountId,\n amount: u32,\n },\n CounterDecremented {\n new_value: u32,\n who: T::AccountId,\n amount: u32,\n },\n }\n\n #[pallet::error]\n pub enum Error<T> {\n NoneValue,\n Overflow,\n 
Underflow,\n CounterMaxValueExceeded,\n }\n\n #[pallet::storage]\n pub type CounterValue<T> = StorageValue<_, u32, ValueQuery>;\n\n #[pallet::storage]\n pub type UserInteractions<T: Config> = StorageMap<\n _,\n Blake2_128Concat,\n T::AccountId,\n u32,\n ValueQuery\n >;\n\n #[pallet::genesis_config]\n #[derive(DefaultNoBound)]\n pub struct GenesisConfig<T: Config> {\n pub initial_counter_value: u32,\n pub initial_user_interactions: Vec<(T::AccountId, u32)>,\n }\n\n #[pallet::genesis_build]\n impl<T: Config> BuildGenesisConfig for GenesisConfig<T> {\n fn build(&self) {\n CounterValue::<T>::put(self.initial_counter_value);\n for (account, count) in &self.initial_user_interactions {\n UserInteractions::<T>::insert(account, count);\n }\n }\n }\n\n #[pallet::call]\n impl<T: Config> Pallet<T> {\n #[pallet::call_index(0)]\n #[pallet::weight(0)]\n pub fn set_counter_value(origin: OriginFor<T>, new_value: u32) -> DispatchResult {\n ensure_root(origin)?;\n ensure!(new_value <= T::CounterMaxValue::get(), Error::<T>::CounterMaxValueExceeded);\n CounterValue::<T>::put(new_value);\n Self::deposit_event(Event::CounterValueSet { new_value });\n Ok(())\n }\n\n #[pallet::call_index(1)]\n #[pallet::weight(0)]\n pub fn increment(origin: OriginFor<T>, amount: u32) -> DispatchResult {\n let who = ensure_signed(origin)?;\n let current_value = CounterValue::<T>::get();\n let new_value = current_value.checked_add(amount).ok_or(Error::<T>::Overflow)?;\n ensure!(new_value <= T::CounterMaxValue::get(), Error::<T>::CounterMaxValueExceeded);\n CounterValue::<T>::put(new_value);\n UserInteractions::<T>::mutate(&who, |count| {\n *count = count.saturating_add(1);\n });\n Self::deposit_event(Event::CounterIncremented { new_value, who, amount });\n Ok(())\n }\n\n #[pallet::call_index(2)]\n #[pallet::weight(0)]\n pub fn decrement(origin: OriginFor<T>, amount: u32) -> DispatchResult {\n let who = ensure_signed(origin)?;\n let current_value = CounterValue::<T>::get();\n let new_value = current_value.checked_sub(amount).ok_or(Error::<T>::Underflow)?;\n CounterValue::<T>::put(new_value);\n 
UserInteractions::<T>::mutate(&who, |count| {\n *count = count.saturating_add(1);\n });\n Self::deposit_event(Event::CounterDecremented { new_value, who, amount });\n Ok(())\n }\n }\n }\n\n ```"} -{"page_id": "parachains-customize-runtime-pallet-development-pallet-testing", "page_title": "Pallet Unit Testing", "index": 8, "depth": 2, "title": "Write Your First Test", "anchor": "write-your-first-test", "start_char": 9206, "end_char": 9314, "estimated_token_count": 22, "token_estimator": "heuristic-v1", "text": "## Write Your First Test\n\nLet's start with a simple test to verify the increment function works correctly."} -{"page_id": "parachains-customize-runtime-pallet-development-pallet-testing", "page_title": "Pallet Unit Testing", "index": 9, "depth": 3, "title": "Test Basic Increment", "anchor": "test-basic-increment", "start_char": 9314, "end_char": 10392, "estimated_token_count": 238, "token_estimator": "heuristic-v1", "text": "### Test Basic Increment\n\nTest that the increment function increases counter value and emits events.\n\n```rust\n#[test]\nfn increment_works() {\n new_test_ext().execute_with(|| {\n // Set block number to 1 so events are registered\n System::set_block_number(1);\n\n let account = 1u64;\n\n // Increment by 50\n assert_ok!(CustomPallet::increment(RuntimeOrigin::signed(account), 50));\n assert_eq!(crate::CounterValue::<Test>::get(), 50);\n\n // Check event was emitted\n System::assert_last_event(\n Event::CounterIncremented {\n new_value: 50,\n who: account,\n amount: 50,\n }\n .into(),\n );\n\n // Check user interactions were tracked\n assert_eq!(crate::UserInteractions::<Test>::get(account), 1);\n });\n}\n```\n\nRun your first test:\n\n```bash\ncargo test --package pallet-custom increment_works\n```\n\nYou should see:\n\n```\nrunning 1 test\ntest tests::increment_works ... ok\n```\n\nCongratulations! 
You've written and run your first pallet test."} -{"page_id": "parachains-customize-runtime-pallet-development-pallet-testing", "page_title": "Pallet Unit Testing", "index": 10, "depth": 2, "title": "Test Error Conditions", "anchor": "test-error-conditions", "start_char": 10392, "end_char": 10537, "estimated_token_count": 28, "token_estimator": "heuristic-v1", "text": "## Test Error Conditions\n\nNow let's test that our pallet correctly handles errors. Error testing is crucial to ensure your pallet fails safely."} -{"page_id": "parachains-customize-runtime-pallet-development-pallet-testing", "page_title": "Pallet Unit Testing", "index": 11, "depth": 3, "title": "Test Overflow Protection", "anchor": "test-overflow-protection", "start_char": 10537, "end_char": 11052, "estimated_token_count": 113, "token_estimator": "heuristic-v1", "text": "### Test Overflow Protection\n\nTest that incrementing at u32::MAX fails with Overflow error.\n\n```rust\n#[test]\nfn increment_fails_on_overflow() {\n new_test_ext_with_counter(u32::MAX).execute_with(|| {\n // Attempt to increment when at max u32 should fail\n assert_noop!(\n CustomPallet::increment(RuntimeOrigin::signed(1), 1),\n Error::<Test>::Overflow\n );\n });\n}\n```\n\nTest overflow protection:\n\n```bash\ncargo test --package pallet-custom increment_fails_on_overflow\n```"} -{"page_id": "parachains-customize-runtime-pallet-development-pallet-testing", "page_title": "Pallet Unit Testing", "index": 12, "depth": 3, "title": "Test Underflow Protection", "anchor": "test-underflow-protection", "start_char": 11052, "end_char": 11564, "estimated_token_count": 105, "token_estimator": "heuristic-v1", "text": "### Test Underflow Protection\n\nTest that decrementing below zero fails with Underflow error.\n\n```rust\n#[test]\nfn decrement_fails_on_underflow() {\n new_test_ext_with_counter(10).execute_with(|| {\n // Attempt to decrement below zero should fail\n assert_noop!(\n CustomPallet::decrement(RuntimeOrigin::signed(1), 11),\n 
Error::<Test>::Underflow\n );\n });\n}\n```\n\nVerify underflow protection:\n\n```bash\ncargo test --package pallet-custom decrement_fails_on_underflow\n```"} -{"page_id": "parachains-customize-runtime-pallet-development-pallet-testing", "page_title": "Pallet Unit Testing", "index": 13, "depth": 2, "title": "Test Access Control", "anchor": "test-access-control", "start_char": 11564, "end_char": 11668, "estimated_token_count": 17, "token_estimator": "heuristic-v1", "text": "## Test Access Control\n\nVerify that origin checks work correctly and unauthorized access is prevented."} -{"page_id": "parachains-customize-runtime-pallet-development-pallet-testing", "page_title": "Pallet Unit Testing", "index": 14, "depth": 3, "title": "Test Root-Only Access", "anchor": "test-root-only-access", "start_char": 11668, "end_char": 12433, "estimated_token_count": 164, "token_estimator": "heuristic-v1", "text": "### Test Root-Only Access\n\nTest that set_counter_value requires root origin and rejects signed origins.\n\n```rust\n#[test]\nfn set_counter_value_requires_root() {\n new_test_ext().execute_with(|| {\n let alice = 1u64;\n\n // When: non-root user tries to set counter\n // Then: should fail with BadOrigin\n assert_noop!(\n CustomPallet::set_counter_value(RuntimeOrigin::signed(alice), 100),\n DispatchError::BadOrigin\n );\n\n // But root should succeed\n assert_ok!(CustomPallet::set_counter_value(RuntimeOrigin::root(), 100));\n assert_eq!(crate::CounterValue::<Test>::get(), 100);\n });\n}\n```\n\nTest access control:\n\n```bash\ncargo test --package pallet-custom set_counter_value_requires_root\n```"} -{"page_id": "parachains-customize-runtime-pallet-development-pallet-testing", "page_title": "Pallet Unit Testing", "index": 15, "depth": 2, "title": "Test Event Emission", "anchor": "test-event-emission", "start_char": 12433, "end_char": 12520, "estimated_token_count": 16, "token_estimator": "heuristic-v1", "text": "## Test Event Emission\n\nVerify that events are emitted correctly with 
the right data."} -{"page_id": "parachains-customize-runtime-pallet-development-pallet-testing", "page_title": "Pallet Unit Testing", "index": 16, "depth": 3, "title": "Test Event Data", "anchor": "test-event-data", "start_char": 12520, "end_char": 13700, "estimated_token_count": 264, "token_estimator": "heuristic-v1", "text": "### Test Event Data\n\nThe [`increment_works`](/parachains/customize-runtime/pallet-development/pallet-testing/#test-basic-increment) test (shown earlier) already demonstrates event testing by:\n\n1. Setting the block number to 1 to enable event emission.\n2. Calling the dispatchable function.\n3. Using `System::assert_last_event()` to verify the correct event was emitted with expected data.\n\nThis pattern applies to all dispatchables that emit events. For a dedicated event-only test focusing on the `set_counter_value` function:\n\nTest that set_counter_value updates storage and emits correct event.\n\n```rust\n#[test]\nfn set_counter_value_works() {\n new_test_ext().execute_with(|| {\n // Set block number to 1 so events are registered\n System::set_block_number(1);\n\n // Set counter to 100\n assert_ok!(CustomPallet::set_counter_value(RuntimeOrigin::root(), 100));\n assert_eq!(crate::CounterValue::<Test>::get(), 100);\n\n // Check event was emitted\n System::assert_last_event(Event::CounterValueSet { new_value: 100 }.into());\n });\n}\n```\n\nRun the event test:\n\n```bash\ncargo test --package pallet-custom set_counter_value_works\n```"} -{"page_id": "parachains-customize-runtime-pallet-development-pallet-testing", "page_title": "Pallet Unit Testing", "index": 17, "depth": 2, "title": "Test Genesis Configuration", "anchor": "test-genesis-configuration", "start_char": 13700, "end_char": 13783, "estimated_token_count": 12, "token_estimator": "heuristic-v1", "text": "## Test Genesis Configuration\n\nVerify that genesis configuration works correctly."} -{"page_id": "parachains-customize-runtime-pallet-development-pallet-testing", "page_title": 
"Pallet Unit Testing", "index": 18, "depth": 3, "title": "Test Genesis Setup", "anchor": "test-genesis-setup", "start_char": 13783, "end_char": 14402, "estimated_token_count": 160, "token_estimator": "heuristic-v1", "text": "### Test Genesis Setup\n\nTest that genesis configuration correctly initializes counter and user interactions.\n\n```rust\n#[test]\nfn genesis_config_works() {\n new_test_ext_with_interactions(42, vec![(1, 5), (2, 10)]).execute_with(|| {\n // Check initial counter value\n assert_eq!(crate::CounterValue::<Test>::get(), 42);\n\n // Check initial user interactions\n assert_eq!(crate::UserInteractions::<Test>::get(1), 5);\n assert_eq!(crate::UserInteractions::<Test>::get(2), 10);\n });\n}\n```\n\nTest genesis configuration:\n\n```bash\ncargo test --package pallet-custom genesis_config_works\n```"} -{"page_id": "parachains-customize-runtime-pallet-development-pallet-testing", "page_title": "Pallet Unit Testing", "index": 19, "depth": 2, "title": "Run All Tests", "anchor": "run-all-tests", "start_char": 14402, "end_char": 24742, "estimated_token_count": 2250, "token_estimator": "heuristic-v1", "text": "## Run All Tests\n\nNow run all your tests together:\n\n```bash\ncargo test --package pallet-custom\n```\n\nYou should see all tests passing:\n\n
\n $ cargo test --package pallet-custom\n running 15 tests\n test mock::__construct_runtime_integrity_test::runtime_integrity_tests ... ok\n test mock::test_genesis_config_builds ... ok\n test tests::decrement_fails_on_underflow ... ok\n test tests::decrement_tracks_multiple_interactions ... ok\n test tests::decrement_works ... ok\n test tests::different_users_tracked_separately ... ok\n test tests::genesis_config_works ... ok\n test tests::increment_fails_on_overflow ... ok\n test tests::increment_respects_max_value ... ok\n test tests::increment_tracks_multiple_interactions ... ok\n test tests::increment_works ... ok\n test tests::mixed_increment_and_decrement_works ... ok\n test tests::set_counter_value_requires_root ... ok\n test tests::set_counter_value_respects_max_value ... ok\n test tests::set_counter_value_works ... ok\n \n test result: ok. 15 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out\n
\n\n!!!note \"Mock Runtime Tests\"\n You'll notice 2 additional tests from the `mock` module:\n\n - `mock::__construct_runtime_integrity_test::runtime_integrity_tests` - Auto-generated test that validates runtime construction\n - `mock::test_genesis_config_builds` - Validates that genesis configuration builds correctly\n\n These tests are automatically generated from your mock runtime setup and help ensure the test environment itself is valid.\n\nCongratulations! You have a well-tested pallet covering the essential testing patterns!\n\nThese tests demonstrate comprehensive coverage including basic operations, error conditions, access control, event emission, state management, and genesis configuration. As you build more complex pallets, you'll apply these same patterns to test additional functionality.\n\n??? code \"Full Test Suite Code\"\n Here's the complete `tests.rs` file for quick reference:\n\n ```rust\n use crate::{mock::*, Error, Event};\n use frame::deps::frame_support::{assert_noop, assert_ok};\n use frame::deps::sp_runtime::DispatchError;\n\n #[test]\n fn set_counter_value_works() {\n new_test_ext().execute_with(|| {\n // Set block number to 1 so events are registered\n System::set_block_number(1);\n\n // Set counter to 100\n assert_ok!(CustomPallet::set_counter_value(RuntimeOrigin::root(), 100));\n assert_eq!(crate::CounterValue::<Test>::get(), 100);\n\n // Check event was emitted\n System::assert_last_event(Event::CounterValueSet { new_value: 100 }.into());\n });\n }\n\n #[test]\n fn set_counter_value_requires_root() {\n new_test_ext().execute_with(|| {\n // Attempt to set counter with non-root origin should fail\n assert_noop!(\n CustomPallet::set_counter_value(RuntimeOrigin::signed(1), 100),\n DispatchError::BadOrigin\n );\n });\n }\n\n #[test]\n fn set_counter_value_respects_max_value() {\n new_test_ext().execute_with(|| {\n // Attempt to set counter above max value (1000) should fail\n assert_noop!(\n 
CustomPallet::set_counter_value(RuntimeOrigin::root(), 1001),\n Error::<Test>::CounterMaxValueExceeded\n );\n\n // Setting to exactly max value should work\n assert_ok!(CustomPallet::set_counter_value(RuntimeOrigin::root(), 1000));\n assert_eq!(crate::CounterValue::<Test>::get(), 1000);\n });\n }\n\n #[test]\n fn increment_works() {\n new_test_ext().execute_with(|| {\n // Set block number to 1 so events are registered\n System::set_block_number(1);\n\n let account = 1u64;\n\n // Increment by 50\n assert_ok!(CustomPallet::increment(RuntimeOrigin::signed(account), 50));\n assert_eq!(crate::CounterValue::<Test>::get(), 50);\n\n // Check event was emitted\n System::assert_last_event(\n Event::CounterIncremented {\n new_value: 50,\n who: account,\n amount: 50,\n }\n .into(),\n );\n\n // Check user interactions were tracked\n assert_eq!(crate::UserInteractions::<Test>::get(account), 1);\n });\n }\n\n #[test]\n fn increment_tracks_multiple_interactions() {\n new_test_ext().execute_with(|| {\n let account = 1u64;\n\n // Increment multiple times\n assert_ok!(CustomPallet::increment(RuntimeOrigin::signed(account), 10));\n assert_ok!(CustomPallet::increment(RuntimeOrigin::signed(account), 20));\n assert_ok!(CustomPallet::increment(RuntimeOrigin::signed(account), 30));\n\n // Check counter value\n assert_eq!(crate::CounterValue::<Test>::get(), 60);\n\n // Check user interactions were tracked (should be 3)\n assert_eq!(crate::UserInteractions::<Test>::get(account), 3);\n });\n }\n\n #[test]\n fn increment_fails_on_overflow() {\n new_test_ext_with_counter(u32::MAX).execute_with(|| {\n // Attempt to increment when at max u32 should fail\n assert_noop!(\n CustomPallet::increment(RuntimeOrigin::signed(1), 1),\n Error::<Test>::Overflow\n );\n });\n }\n\n #[test]\n fn increment_respects_max_value() {\n new_test_ext_with_counter(950).execute_with(|| {\n // Incrementing past max value (1000) should fail\n assert_noop!(\n CustomPallet::increment(RuntimeOrigin::signed(1), 51),\n Error::<Test>::CounterMaxValueExceeded\n );\n\n // 
Incrementing to exactly max value should work\n assert_ok!(CustomPallet::increment(RuntimeOrigin::signed(1), 50));\n assert_eq!(crate::CounterValue::<Test>::get(), 1000);\n });\n }\n\n #[test]\n fn decrement_works() {\n new_test_ext_with_counter(100).execute_with(|| {\n // Set block number to 1 so events are registered\n System::set_block_number(1);\n\n let account = 2u64;\n\n // Decrement by 30\n assert_ok!(CustomPallet::decrement(RuntimeOrigin::signed(account), 30));\n assert_eq!(crate::CounterValue::<Test>::get(), 70);\n\n // Check event was emitted\n System::assert_last_event(\n Event::CounterDecremented {\n new_value: 70,\n who: account,\n amount: 30,\n }\n .into(),\n );\n\n // Check user interactions were tracked\n assert_eq!(crate::UserInteractions::<Test>::get(account), 1);\n });\n }\n\n #[test]\n fn decrement_fails_on_underflow() {\n new_test_ext_with_counter(10).execute_with(|| {\n // Attempt to decrement below zero should fail\n assert_noop!(\n CustomPallet::decrement(RuntimeOrigin::signed(1), 11),\n Error::<Test>::Underflow\n );\n });\n }\n\n #[test]\n fn decrement_tracks_multiple_interactions() {\n new_test_ext_with_counter(100).execute_with(|| {\n let account = 3u64;\n\n // Decrement multiple times\n assert_ok!(CustomPallet::decrement(RuntimeOrigin::signed(account), 10));\n assert_ok!(CustomPallet::decrement(RuntimeOrigin::signed(account), 20));\n\n // Check counter value\n assert_eq!(crate::CounterValue::<Test>::get(), 70);\n\n // Check user interactions were tracked (should be 2)\n assert_eq!(crate::UserInteractions::<Test>::get(account), 2);\n });\n }\n\n #[test]\n fn mixed_increment_and_decrement_works() {\n new_test_ext_with_counter(50).execute_with(|| {\n let account = 4u64;\n\n // Mix of increment and decrement\n assert_ok!(CustomPallet::increment(RuntimeOrigin::signed(account), 25));\n assert_eq!(crate::CounterValue::<Test>::get(), 75);\n\n assert_ok!(CustomPallet::decrement(RuntimeOrigin::signed(account), 15));\n assert_eq!(crate::CounterValue::<Test>::get(), 60);\n\n 
assert_ok!(CustomPallet::increment(RuntimeOrigin::signed(account), 10));\n assert_eq!(crate::CounterValue::<Test>::get(), 70);\n\n // Check user interactions were tracked (should be 3)\n assert_eq!(crate::UserInteractions::<Test>::get(account), 3);\n });\n }\n\n #[test]\n fn different_users_tracked_separately() {\n new_test_ext().execute_with(|| {\n let account1 = 1u64;\n let account2 = 2u64;\n\n // User 1 increments\n assert_ok!(CustomPallet::increment(RuntimeOrigin::signed(account1), 10));\n assert_ok!(CustomPallet::increment(RuntimeOrigin::signed(account1), 10));\n\n // User 2 decrements\n assert_ok!(CustomPallet::decrement(RuntimeOrigin::signed(account2), 5));\n\n // Check counter value (10 + 10 - 5 = 15)\n assert_eq!(crate::CounterValue::<Test>::get(), 15);\n\n // Check user interactions are tracked separately\n assert_eq!(crate::UserInteractions::<Test>::get(account1), 2);\n assert_eq!(crate::UserInteractions::<Test>::get(account2), 1);\n });\n }\n\n #[test]\n fn genesis_config_works() {\n new_test_ext_with_interactions(42, vec![(1, 5), (2, 10)]).execute_with(|| {\n // Check initial counter value\n assert_eq!(crate::CounterValue::<Test>::get(), 42);\n\n // Check initial user interactions\n assert_eq!(crate::UserInteractions::<Test>::get(1), 5);\n assert_eq!(crate::UserInteractions::<Test>::get(2), 10);\n });\n }\n ```"} -{"page_id": "parachains-customize-runtime-pallet-development-pallet-testing", "page_title": "Pallet Unit Testing", "index": 20, "depth": 2, "title": "Where to Go Next", "anchor": "where-to-go-next", "start_char": 24742, "end_char": 25092, "estimated_token_count": 92, "token_estimator": "heuristic-v1", "text": "## Where to Go Next\n\n
<div class=\"grid cards\" markdown>\n\n- Guide __Add Your Custom Pallet to the Runtime__\n\n ---\n\n Your pallet is tested and ready! Learn how to integrate it into your runtime.\n\n [:octicons-arrow-right-24: Integrate](/parachains/customize-runtime/pallet-development/add-to-runtime/)\n\n</div>
"} +{"page_id": "parachains-customize-runtime-pallet-development-pallet-testing", "page_title": "Unit Test Pallets", "index": 0, "depth": 2, "title": "Introduction", "anchor": "introduction", "start_char": 21, "end_char": 675, "estimated_token_count": 123, "token_estimator": "heuristic-v1", "text": "## Introduction\n\nUnit testing in the Polkadot SDK helps ensure that the functions provided by a pallet behave as expected. It also confirms that data and events associated with a pallet are processed correctly during interactions. The Polkadot SDK offers a set of APIs to create a test environment to simulate runtime and mock transaction execution for extrinsics and queries.\n\nTo begin unit testing, you must first set up a mock runtime that simulates blockchain behavior, incorporating the necessary pallets. For a deeper understanding, consult the [Mock Runtime](/parachains/customize-runtime/pallet-development/mock-runtime/){target=\\_blank} guide."} +{"page_id": "parachains-customize-runtime-pallet-development-pallet-testing", "page_title": "Unit Test Pallets", "index": 1, "depth": 2, "title": "Writing Unit Tests", "anchor": "writing-unit-tests", "start_char": 675, "end_char": 2198, "estimated_token_count": 285, "token_estimator": "heuristic-v1", "text": "## Writing Unit Tests\n\nOnce the mock runtime is in place, the next step is to write unit tests that evaluate the functionality of your pallet. Unit tests allow you to test specific pallet features in isolation, ensuring that each function behaves correctly under various conditions. These tests typically reside in your pallet module's `test.rs` file.\n\nUnit tests in the Polkadot SDK use the Rust testing framework, and the mock runtime you've defined earlier will serve as the test environment. 
Below are the typical steps involved in writing unit tests for a pallet.\n\nThe tests confirm that:\n\n- **Pallets initialize correctly**: At the start of each test, the system should initialize with block number 0, and the pallets should be in their default states.\n- **Pallets modify each other's state**: The second test shows how one pallet can trigger changes in another pallet's internal state, confirming proper cross-pallet interactions.\n- **State transitions between blocks are seamless**: By simulating block transitions, the tests validate that the runtime responds correctly to changes in the block number.\n\nTesting pallet interactions within the runtime is critical for ensuring the blockchain behaves as expected under real-world conditions. Writing integration tests allows validation of how pallets function together, preventing issues that might arise when the system is fully assembled.\n\nThis approach provides a comprehensive view of the runtime's functionality, ensuring the blockchain is stable and reliable."} +{"page_id": "parachains-customize-runtime-pallet-development-pallet-testing", "page_title": "Unit Test Pallets", "index": 2, "depth": 3, "title": "Test Initialization", "anchor": "test-initialization", "start_char": 2198, "end_char": 2510, "estimated_token_count": 68, "token_estimator": "heuristic-v1", "text": "### Test Initialization\n\nEach test starts by initializing the runtime environment, typically using the `new_test_ext()` function, which sets up the mock storage and environment.\n\n```rust\n#[test]\nfn test_pallet_functionality() {\n new_test_ext().execute_with(|| {\n // Test logic goes here\n });\n}\n```"} +{"page_id": "parachains-customize-runtime-pallet-development-pallet-testing", "page_title": "Unit Test Pallets", "index": 3, "depth": 3, "title": "Function Call Testing", "anchor": "function-call-testing", "start_char": 2510, "end_char": 3283, "estimated_token_count": 167, "token_estimator": "heuristic-v1", "text": "### Function Call 
Testing\n\nCall the pallet's extrinsics or functions to simulate user interaction or internal logic. Use the `assert_ok!` macro to check for successful execution and `assert_err!` to verify that errors are correctly handled.\n\n```rust\n#[test]\nfn it_works_for_valid_input() {\n new_test_ext().execute_with(|| {\n // Call an extrinsic or function\n assert_ok!(TemplateModule::some_function(Origin::signed(1), valid_param));\n });\n}\n\n#[test]\nfn it_fails_for_invalid_input() {\n new_test_ext().execute_with(|| {\n // Call an extrinsic with invalid input and expect an error\n assert_err!(\n TemplateModule::some_function(Origin::signed(1), invalid_param),\n Error::<Test>::InvalidInput\n );\n });\n}\n```"} +{"page_id": "parachains-customize-runtime-pallet-development-pallet-testing", "page_title": "Unit Test Pallets", "index": 4, "depth": 3, "title": "Storage Testing", "anchor": "storage-testing", "start_char": 3283, "end_char": 4132, "estimated_token_count": 190, "token_estimator": "heuristic-v1", "text": "### Storage Testing\n\nAfter calling a function or extrinsic in your pallet, it's essential to verify that the state changes in the pallet's storage match the expected behavior to ensure data is updated correctly based on the actions taken.\n\nThe following example shows how to test the storage behavior before and after the function call:\n\n```rust\n#[test]\nfn test_storage_update_on_extrinsic_call() {\n new_test_ext().execute_with(|| {\n // Check the initial storage state (before the call)\n assert_eq!(Something::<Test>::get(), None);\n\n // Dispatch a signed extrinsic, which modifies storage\n assert_ok!(TemplateModule::do_something(RuntimeOrigin::signed(1), 42));\n\n // Validate that the storage has been updated as expected (after the call)\n assert_eq!(Something::<Test>::get(), Some(42));\n });\n}\n\n```"} +{"page_id": "parachains-customize-runtime-pallet-development-pallet-testing", "page_title": "Unit Test Pallets", "index": 5, "depth": 3, "title": "Event Testing", "anchor": 
"event-testing", "start_char": 4132, "end_char": 6153, "estimated_token_count": 519, "token_estimator": "heuristic-v1", "text": "### Event Testing\n\nIt's also crucial to test the events that your pallet emits during execution. By default, events generated in a pallet using the [`#generate_deposit`](https://paritytech.github.io/polkadot-sdk/master/frame_support/pallet_macros/attr.generate_deposit.html){target=\\_blank} macro are stored under the system's event storage key (system/events) as [`EventRecord`](https://paritytech.github.io/polkadot-sdk/master/frame_system/struct.EventRecord.html){target=\\_blank} entries. These can be accessed using [`System::events()`](https://paritytech.github.io/polkadot-sdk/master/frame_system/pallet/struct.Pallet.html#method.events){target=\\_blank} or verified with specific helper methods provided by the system pallet, such as [`assert_has_event`](https://paritytech.github.io/polkadot-sdk/master/frame_system/pallet/struct.Pallet.html#method.assert_has_event){target=\\_blank} and [`assert_last_event`](https://paritytech.github.io/polkadot-sdk/master/frame_system/pallet/struct.Pallet.html#method.assert_last_event){target=\\_blank}.\n\nHere's an example of testing events in a mock runtime:\n\n```rust\n#[test]\nfn it_emits_events_on_success() {\n new_test_ext().execute_with(|| {\n // Call an extrinsic or function\n assert_ok!(TemplateModule::some_function(Origin::signed(1), valid_param));\n\n // Verify that the expected event was emitted\n assert!(System::events().iter().any(|record| {\n record.event == Event::TemplateModule(TemplateEvent::SomeEvent)\n }));\n });\n}\n```\n\nSome key considerations are:\n\n- **Block number**: Events are not emitted on the genesis block, so you need to set the block number using [`System::set_block_number()`](https://paritytech.github.io/polkadot-sdk/master/frame_system/pallet/struct.Pallet.html#method.set_block_number){target=\\_blank} to ensure events are triggered.\n- **Converting events**: Use 
`.into()` when instantiating your pallet's event to convert it into a generic event type, as required by the system's event storage."} +{"page_id": "parachains-customize-runtime-pallet-development-pallet-testing", "page_title": "Unit Test Pallets", "index": 6, "depth": 2, "title": "Where to Go Next", "anchor": "where-to-go-next", "start_char": 6153, "end_char": 6895, "estimated_token_count": 211, "token_estimator": "heuristic-v1", "text": "## Where to Go Next\n\n- Dive into the full implementation of the [`mock.rs`](https://github.com/paritytech/polkadot-sdk/blob/master/templates/solochain/pallets/template/src/mock.rs){target=\\_blank} and [`test.rs`](https://github.com/paritytech/polkadot-sdk/blob/master/templates/solochain/pallets/template/src/tests.rs){target=\\_blank} files in the [Solochain Template](https://github.com/paritytech/polkadot-sdk/tree/master/templates/solochain){target=_blank}.\n\n
\n\n- Guide __Benchmarking__\n\n ---\n\n Explore methods to measure the performance and execution cost of your pallet.\n\n [:octicons-arrow-right-24: Reference](/develop/parachains/testing/benchmarking)\n\n
"} {"page_id": "parachains-customize-runtime", "page_title": "Overview of FRAME", "index": 0, "depth": 2, "title": "Introduction", "anchor": "introduction", "start_char": 26, "end_char": 754, "estimated_token_count": 146, "token_estimator": "heuristic-v1", "text": "## Introduction\n\nA blockchain runtime is more than just a fixed set of rules—it's a dynamic foundation that you can shape to match your specific needs. With Polkadot SDK's [FRAME (Framework for Runtime Aggregation of Modularized Entities)](/reference/glossary/#frame-framework-for-runtime-aggregation-of-modularized-entities){target=\\_blank}, customizing your runtime is straightforward and modular. Instead of building everything from scratch, you combine pre-built pallets with your own custom logic to create a runtime suited to your blockchain's purpose.\n\nThis overview explains how runtime customization works, introduces the building blocks you'll use, and guides you through the key patterns for extending your runtime."} {"page_id": "parachains-customize-runtime", "page_title": "Overview of FRAME", "index": 1, "depth": 2, "title": "Understanding Your Runtime", "anchor": "understanding-your-runtime", "start_char": 754, "end_char": 1533, "estimated_token_count": 158, "token_estimator": "heuristic-v1", "text": "## Understanding Your Runtime\n\nThe runtime is the core logic of your blockchain—it processes transactions, manages state, and enforces the rules that govern your network. When a transaction arrives at your blockchain, the [`frame_executive`](https://paritytech.github.io/polkadot-sdk/master/frame_executive/index.html){target=\\_blank} pallet receives it and routes it to the appropriate pallet for execution.\n\nThink of your runtime as a collection of specialized modules, each handling a different aspect of your blockchain. Need token balances? Use the Balances pallet. Want governance? Add the Governance pallets. Need something custom? Create your own pallet. 
By mixing and matching these modules, you build a runtime that's efficient, secure, and tailored to your use case."} {"page_id": "parachains-customize-runtime", "page_title": "Overview of FRAME", "index": 2, "depth": 2, "title": "Runtime Architecture", "anchor": "runtime-architecture", "start_char": 1533, "end_char": 2085, "estimated_token_count": 121, "token_estimator": "heuristic-v1", "text": "## Runtime Architecture\n\nThe following diagram shows how FRAME components work together to form your runtime:\n\n![](/images/parachains/customize-runtime/index/frame-overview-01.webp)\n\nThe main components are:\n\n- **`frame_executive`**: Routes all incoming transactions to the correct pallet for execution.\n- **Pallets**: Domain-specific modules that implement your blockchain's features and business logic.\n- **`frame_system`**: Provides core runtime primitives and storage.\n- **`frame_support`**: Utilities and macros that simplify pallet development."} diff --git a/llms.txt b/llms.txt index 844f73706..833ff4bad 100644 --- a/llms.txt +++ b/llms.txt @@ -68,10 +68,11 @@ Docs: Parachains - [Add an Existing Pallet to the Runtime](https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/master/.ai/pages/parachains-customize-runtime-add-existing-pallets.md): Learn how to include and configure pallets in a Polkadot SDK-based runtime, from adding dependencies to implementing necessary traits. - [Add Multiple Pallet Instances](https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/master/.ai/pages/parachains-customize-runtime-add-pallet-instances.md): Learn how to implement multiple instances of the same pallet in your Polkadot SDK-based runtime, from adding dependencies to configuring unique instances. 
- [Add Smart Contract Functionality](https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/master/.ai/pages/parachains-customize-runtime-add-smart-contract-functionality.md): Add smart contract capabilities to your Polkadot SDK-based blockchain. Explore PVM, EVM, and Wasm integration for enhanced chain functionality. -- [Benchmarking FRAME Pallets](https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/master/.ai/pages/parachains-customize-runtime-pallet-development-benchmark-pallet.md): Learn how to use FRAME's benchmarking framework to measure extrinsic execution costs and provide accurate weights for on-chain computations. +- [Add Pallets to the Runtime](https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/master/.ai/pages/parachains-customize-runtime-pallet-development-add-pallet-to-runtime.md): Add pallets to your runtime for custom functionality. Learn to configure and integrate pallets in Polkadot SDK-based blockchains. +- [Benchmark Your Pallet](https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/master/.ai/pages/parachains-customize-runtime-pallet-development-benchmark-pallet.md): Learn how to benchmark extrinsics in your custom pallet to generate precise weight calculations suitable for production use. - [Create a Custom Pallet](https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/master/.ai/pages/parachains-customize-runtime-pallet-development-create-a-pallet.md): Learn how to create custom pallets using FRAME, allowing for flexible, modular, and scalable blockchain development. Follow the step-by-step guide. - [Mock Your Runtime](https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/master/.ai/pages/parachains-customize-runtime-pallet-development-mock-runtime.md): Learn how to create a mock runtime environment for testing your custom pallets in isolation, enabling comprehensive unit testing before runtime integration. 
-- [Pallet Unit Testing](https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/master/.ai/pages/parachains-customize-runtime-pallet-development-pallet-testing.md): Learn how to write comprehensive unit tests for your custom pallets using mock runtimes, ensuring reliability and correctness before deployment. +- [Unit Test Pallets](https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/master/.ai/pages/parachains-customize-runtime-pallet-development-pallet-testing.md): Learn how to efficiently test pallets in the Polkadot SDK, ensuring the reliability and security of your pallets operations. - [Overview of FRAME](https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/master/.ai/pages/parachains-customize-runtime.md): Learn how Polkadot SDK’s FRAME framework simplifies blockchain development with modular pallets and support libraries for efficient runtime design. - [Get Started with Parachain Development](https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/master/.ai/pages/parachains-get-started.md): Practical examples and tutorials for building and deploying Polkadot parachains, covering everything from launch to customization and cross-chain messaging. - [Opening HRMP Channels Between Parachains](https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/master/.ai/pages/parachains-interoperability-channels-between-parachains.md): Learn how to open HRMP channels between parachains on Polkadot. Discover the step-by-step process for establishing uni- and bidirectional communication. 
diff --git a/parachains/customize-runtime/pallet-development/.nav.yml b/parachains/customize-runtime/pallet-development/.nav.yml index dd7abe926..43f919978 100644 --- a/parachains/customize-runtime/pallet-development/.nav.yml +++ b/parachains/customize-runtime/pallet-development/.nav.yml @@ -1,5 +1,6 @@ nav: - 'Create a Custom Pallet': create-a-pallet.md - 'Mock Your Runtime': mock-runtime.md - - 'Pallet Unit Testing': pallet-testing.md + - 'Unit Test Pallets': pallet-testing.md + - 'Add a Custom Pallet to Your Runtime': add-pallet-to-runtime.md - 'Benchmark a Custom Pallet': benchmark-pallet.md \ No newline at end of file diff --git a/parachains/customize-runtime/pallet-development/benchmark-pallet.md b/parachains/customize-runtime/pallet-development/benchmark-pallet.md index dd02ee0b4..e9dbc4933 100644 --- a/parachains/customize-runtime/pallet-development/benchmark-pallet.md +++ b/parachains/customize-runtime/pallet-development/benchmark-pallet.md @@ -1,215 +1,453 @@ --- -title: Benchmarking FRAME Pallets -description: Learn how to use FRAME's benchmarking framework to measure extrinsic execution costs and provide accurate weights for on-chain computations. +title: Benchmark Your Pallet +description: Learn how to benchmark extrinsics in your custom pallet to generate precise weight calculations suitable for production use. categories: Parachains --- -# Benchmarking - ## Introduction -Benchmarking is a critical component of developing efficient and secure blockchain runtimes. In the Polkadot ecosystem, accurately benchmarking your custom pallets ensures that each extrinsic has a precise [weight](/reference/glossary/#weight){target=\_blank}, representing its computational and storage demands. This process is vital for maintaining the blockchain's performance and preventing potential vulnerabilities, such as Denial of Service (DoS) attacks. 
+Benchmarking is the process of measuring the computational resources (execution time and storage) required by your pallet's extrinsics. Accurate [weight](https://paritytech.github.io/polkadot-sdk/master/frame_support/weights/index.html){target=\_blank} calculations are essential for ensuring your blockchain can process transactions efficiently while protecting against denial-of-service attacks. -The Polkadot SDK leverages the [FRAME](/reference/glossary/#frame-framework-for-runtime-aggregation-of-modularized-entities){target=\_blank} benchmarking framework, offering tools to measure and assign weights to extrinsics. These weights help determine the maximum number of transactions or system-level calls processed within a block. This guide covers how to use FRAME's [benchmarking framework](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/v2/index.html){target=\_blank}, from setting up your environment to writing and running benchmarks for your custom pallets. You'll understand how to generate accurate weights by the end, ensuring your runtime remains performant and secure. +This guide demonstrates how to benchmark a pallet and incorporate the resulting weight values. This example uses the custom counter pallet from previous guides in this series, but you can replace it with the code from another pallet if desired. -## The Case for Benchmarking +## Prerequisites -Benchmarking helps validate that the required execution time for different functions is within reasonable boundaries to ensure your blockchain runtime can handle transactions efficiently and securely. By accurately measuring the weight of each extrinsic, you can prevent service interruptions caused by computationally intensive calls that exceed block time limits. Without benchmarking, runtime performance could be vulnerable to DoS attacks, where malicious users exploit functions with unoptimized weights. 
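To make the two resource dimensions concrete, here is a toy model of the idea behind FRAME's `Weight` type (illustrative only; the real type lives in `frame_support` with a much larger API): a weight pairs reference execution time with proof size, weights accumulate with saturating arithmetic, and a block is full once either dimension would exceed its limit.

```rust
// Toy model of FRAME's two-dimensional weight (illustrative only; the real
// type is frame_support::weights::Weight, which has more API surface).
#[derive(Clone, Copy, Debug, PartialEq)]
struct Weight {
    ref_time: u64,   // execution time on reference hardware (picoseconds)
    proof_size: u64, // bytes of state proof shipped to the relay chain
}

impl Weight {
    const fn from_parts(ref_time: u64, proof_size: u64) -> Self {
        Weight { ref_time, proof_size }
    }

    // Accumulating extrinsic weights into a block total must never overflow.
    fn saturating_add(self, other: Self) -> Self {
        Weight {
            ref_time: self.ref_time.saturating_add(other.ref_time),
            proof_size: self.proof_size.saturating_add(other.proof_size),
        }
    }

    // A block is full as soon as EITHER dimension exceeds its limit.
    fn any_gt(self, limit: Self) -> bool {
        self.ref_time > limit.ref_time || self.proof_size > limit.proof_size
    }
}

fn main() {
    let block_limit = Weight::from_parts(2_000_000_000_000, 5_242_880);
    let used = Weight::from_parts(1_500_000_000_000, 100_000)
        .saturating_add(Weight::from_parts(600_000_000_000, 50_000));
    // 2.1e12 picoseconds of ref_time exceeds the 2e12 limit.
    println!("block full: {}", used.any_gt(block_limit));
}
```

The method names mirror real `Weight` API (`from_parts`, `saturating_add`, `any_gt`), but the values are made up for illustration.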
+Before you begin, ensure you have: -Benchmarking also ensures predictable transaction fees. Weights derived from benchmark tests accurately reflect the resource usage of function calls, allowing fair fee calculation. This approach discourages abuse while maintaining network reliability. +- A pallet to benchmark. If you followed the pallet development tutorials, you can use the counter pallet from the [Create a Pallet](/parachains/customize-runtime/pallet-development/create-a-pallet/){target=\_blank} guide. You can also follow these steps to benchmark a custom pallet by updating the `benchmarking.rs` functions, and instances of usage in future steps, to calculate weights using your specific pallet functionality. +- Basic understanding of [computational complexity](https://en.wikipedia.org/wiki/Computational_complexity){target=\_blank}. +- Familiarity with [Rust's testing framework](https://doc.rust-lang.org/book/ch11-00-testing.html){target=\_blank}. +- Familiarity setting up the Polkadot Omni Node and [Polkadot Chain Spec Builder](https://crates.io/crates/staging-chain-spec-builder){target=\_blank}. Refer to the [Set Up a Parachain Template](/parachains/launch-a-parachain/set-up-the-parachain-template/){target=\_blank} guide for instructions if needed. -### Benchmarking and Weight +## Create the Benchmarking Module -In Polkadot SDK-based chains, weight quantifies the computational effort needed to process transactions. This weight includes factors such as: +Create a new file `benchmarking.rs` in your pallet's `src` directory and add the following code: -- Computational complexity. -- Storage complexity (proof size). -- Database reads and writes. -- Hardware specifications. +```rust title="pallets/pallet-custom/src/benchmarking.rs" +#![cfg(feature = "runtime-benchmarks")] -Benchmarking uses real-world testing to simulate worst-case scenarios for extrinsics. 
The framework generates a linear model for weight calculation by running multiple iterations with varied parameters. These worst-case weights ensure blocks remain within execution limits, enabling the runtime to maintain throughput under varying loads. Excess fees can be refunded if a call uses fewer resources than expected, offering users a fair cost model. - -Because weight is a generic unit of measurement based on computation time for a specific physical machine, the weight of any function can change based on the specifications of hardware used for benchmarking. By modeling the expected weight of each runtime function, the blockchain can calculate the number of transactions or system-level calls it can execute within a certain period. +use super::*; +use frame::deps::frame_benchmarking::v2::*; +use frame::benchmarking::prelude::RawOrigin; -Within FRAME, each function call that is dispatched must have a `#[pallet::weight]` annotation that can return the expected weight for the worst-case scenario execution of that function given its inputs: +#[benchmarks] +mod benchmarks { + use super::*; -```rust hl_lines="2" ---8<-- 'code/parachains/customize-runtime/pallet-development/benchmark-pallet/dispatchable-pallet-weight.rs' -``` + #[benchmark] + fn set_counter_value() { + let new_value: u32 = 100; -The `WeightInfo` file is automatically generated during benchmarking. Based on these tests, this file provides accurate weights for each extrinsic. + #[extrinsic_call] + _(RawOrigin::Root, new_value); -## Benchmarking Process + assert_eq!(CounterValue::<T>::get(), new_value); + } -Benchmarking a pallet involves the following steps: + #[benchmark] + fn increment() { + let caller: T::AccountId = whitelisted_caller(); + let amount: u32 = 50; -1. Creating a `benchmarking.rs` file within your pallet's structure. -2. Writing a benchmarking test for each extrinsic. -3. Executing the benchmarking tool to calculate weights based on performance metrics.
+ #[extrinsic_call] + _(RawOrigin::Signed(caller.clone()), amount); -The benchmarking tool runs multiple iterations to model worst-case execution times and determine the appropriate weight. By default, the benchmarking pipeline is deactivated. To activate it, compile your runtime with the `runtime-benchmarks` feature flag. + assert_eq!(CounterValue::<T>::get(), amount); + assert_eq!(UserInteractions::<T>::get(caller), 1); + } -### Prepare Your Environment + #[benchmark] + fn decrement() { + // First, set the counter to a non-zero value + CounterValue::<T>::put(100); -Install the [`frame-omni-bencher`](https://crates.io/crates/frame-omni-bencher){target=\_blank} command-line tool: + let caller: T::AccountId = whitelisted_caller(); + let amount: u32 = 30; -```bash -cargo install frame-omni-bencher -``` + #[extrinsic_call] + _(RawOrigin::Signed(caller.clone()), amount); -Before writing benchmark tests, you need to ensure the `frame-benchmarking` crate is included in your pallet's `Cargo.toml` similar to the following: + assert_eq!(CounterValue::<T>::get(), 70); + assert_eq!(UserInteractions::<T>::get(caller), 1); + } -```toml title="Cargo.toml" ---8<-- 'code/parachains/customize-runtime/pallet-development/benchmark-pallet/cargo.toml::1' + impl_benchmark_test_suite!(Pallet, crate::mock::new_test_ext(), crate::mock::Test); +} ``` -You must also ensure that you add the `runtime-benchmarks` feature flag as follows under the `[features]` section of your pallet's `Cargo.toml`: - -```toml title="Cargo.toml" ---8<-- 'code/parachains/customize-runtime/pallet-development/benchmark-pallet/cargo.toml:2:7' +This module contains all the [benchmarking definitions](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/v2/index.html){target=\_blank} for your pallet. If you are benchmarking a different pallet, update the testing logic as needed to test your pallet's functionality. 
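Behind the scenes, the bencher runs each benchmark function repeatedly at varying inputs and fits a linear model (a base cost plus a per-unit slope) to the measurements. The following toy least-squares fit is purely illustrative (it is not frame-omni-bencher's actual pipeline, and the sample numbers are made up), but it shows the shape of the result:

```rust
// Toy least-squares fit (illustrative only; frame-omni-bencher's real
// pipeline is more sophisticated). Samples are (input_size, nanoseconds).
fn fit_linear(samples: &[(u64, u64)]) -> (f64, f64) {
    let n = samples.len() as f64;
    let (mut sx, mut sy, mut sxx, mut sxy) = (0.0, 0.0, 0.0, 0.0);
    for &(x, y) in samples {
        let (x, y) = (x as f64, y as f64);
        sx += x;
        sy += y;
        sxx += x * x;
        sxy += x * y;
    }
    // Ordinary least squares: slope and intercept of y ≈ base + slope * x.
    let slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    let base = (sy - slope * sx) / n;
    (base, slope)
}

fn main() {
    // Fake measurements at four input sizes, already averaged over repeats:
    // a 10_000 ns base cost plus 200 ns per item.
    let samples = [(0, 10_000), (25, 15_000), (50, 20_000), (100, 30_000)];
    let (base, slope) = fit_linear(&samples);
    println!("weight(n) ≈ {base:.0} + {slope:.0} * n nanoseconds");
}
```

The fitted base and slope are what ultimately become the constant and per-component terms in a generated weight function.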
+ +## Define the Weight Trait + +Add a `weights` module to your pallet that defines the `WeightInfo` trait using the following code: + +```rust title="pallets/pallet-custom/src/weights.rs" +#[frame::pallet] +pub mod pallet { + use frame::prelude::*; + pub use weights::WeightInfo; + + pub mod weights { + use frame::prelude::*; + + pub trait WeightInfo { + fn set_counter_value() -> Weight; + fn increment() -> Weight; + fn decrement() -> Weight; + } + + impl WeightInfo for () { + fn set_counter_value() -> Weight { + Weight::from_parts(10_000, 0) + } + fn increment() -> Weight { + Weight::from_parts(15_000, 0) + } + fn decrement() -> Weight { + Weight::from_parts(15_000, 0) + } + } + } + + // ... rest of pallet +} ``` -Lastly, ensure that `frame-benchmarking` is included in `std = []`: +The `WeightInfo for ()` implementation provides placeholder weights for development. If you are using a different pallet, update the `weights` module to use your pallet's function names. -```toml title="Cargo.toml" ---8<-- 'code/parachains/customize-runtime/pallet-development/benchmark-pallet/cargo.toml:8:12' -``` +## Add WeightInfo to Config -Once complete, you have the required dependencies for writing benchmark tests for your pallet. +Update your pallet's `Config` trait to include `WeightInfo` by adding the following code: -### Write Benchmark Tests +```rust title="pallets/pallet-custom/src/lib.rs" +#[pallet::config] +pub trait Config: frame_system::Config { + type RuntimeEvent: From<Event<Self>> + IsType<<Self as frame_system::Config>::RuntimeEvent>; -Create a `benchmarking.rs` file in your pallet's `src/`. 
Your directory structure should look similar to the following: + #[pallet::constant] + type CounterMaxValue: Get<u32>; + type WeightInfo: weights::WeightInfo; +} ``` -my-pallet/ -├── src/ -│ ├── lib.rs # Main pallet implementation -│ └── benchmarking.rs # Benchmarking -└── Cargo.toml + +The [`WeightInfo`](https://paritytech.github.io/polkadot-sdk/master/frame_support/weights/trait.WeightInfo.html){target=\_blank} trait provides an abstraction layer that allows weights to be swapped at runtime configuration. By making `WeightInfo` an associated type in the `Config` trait, you will enable each runtime that uses your pallet to specify which weight implementation to use. + +## Update Extrinsic Weight Annotations + +Replace the placeholder weights in your extrinsics with calls to the `WeightInfo` trait by adding the following code: + +```rust title="pallets/pallet-custom/src/lib.rs" +#[pallet::call] +impl<T: Config> Pallet<T> { + #[pallet::call_index(0)] + #[pallet::weight(T::WeightInfo::set_counter_value())] + pub fn set_counter_value(origin: OriginFor<T>, new_value: u32) -> DispatchResult { + // ... implementation + } + + #[pallet::call_index(1)] + #[pallet::weight(T::WeightInfo::increment())] + pub fn increment(origin: OriginFor<T>, amount: u32) -> DispatchResult { + // ... implementation + } + + #[pallet::call_index(2)] + #[pallet::weight(T::WeightInfo::decrement())] + pub fn decrement(origin: OriginFor<T>, amount: u32) -> DispatchResult { + // ... implementation + } +} ``` -With the directory structure set, you can use the [`polkadot-sdk-parachain-template`](https://github.com/paritytech/polkadot-sdk-parachain-template/tree/master/pallets){target=\_blank} to get started as follows: +By calling `T::WeightInfo::function_name()` instead of using hardcoded `Weight::from_parts()` values, your extrinsics automatically use whichever weight implementation is configured in the runtime. 
You can switch between placeholder weights for testing and benchmarked weights for production easily, without changing any pallet code. -```rust title="benchmarking.rs (starter template)" ---8<-- 'https://raw.githubusercontent.com/paritytech/polkadot-sdk-parachain-template/refs/tags/v0.0.2/pallets/template/src/benchmarking.rs' -``` +If you are using a different pallet, be sure to update the functions for `WeightInfo` accordingly. + +## Include the Benchmarking Module -In your benchmarking tests, employ these best practices: +At the top of your `lib.rs`, add the module declaration by adding the following code: -- **Write custom testing functions**: The function `do_something` in the preceding example is a placeholder. Similar to writing unit tests, you must write custom functions to benchmark test your extrinsics. Access the mock runtime and use functions such as `whitelisted_caller()` to sign transactions and facilitate testing. -- **Use the `#[extrinsic_call]` macro**: This macro is used when calling the extrinsic itself and is a required part of a benchmarking function. See the [`extrinsic_call`](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/v2/index.html#extrinsic_call-and-block){target=\_blank} docs for more details. -- **Validate extrinsic behavior**: The `assert_eq` expression ensures that the extrinsic is working properly within the benchmark context. +```rust title="pallets/pallet-custom/src/lib.rs" +#![cfg_attr(not(feature = "std"), no_std)] -Add the `benchmarking` module to your pallet. In the pallet `lib.rs` file add the following: +extern crate alloc; +use alloc::vec::Vec; + +pub use pallet::*; -```rust #[cfg(feature = "runtime-benchmarks")] mod benchmarking; + +// Additional pallet code +``` + +The `#[cfg(feature = "runtime-benchmarks")]` attribute ensures that benchmarking code is only compiled when explicitly needed to keep your production runtime efficient. 
+ +## Configure Pallet Dependencies + +Update your pallet's `Cargo.toml` to enable the benchmarking feature by adding the following code: + +```toml title="pallets/pallet-custom/Cargo.toml" +[dependencies] +codec = { features = ["derive"], workspace = true } +scale-info = { features = ["derive"], workspace = true } +frame = { features = ["experimental", "runtime"], workspace = true } + +[features] +default = ["std"] +runtime-benchmarks = [ + "frame/runtime-benchmarks", +] +std = [ + "codec/std", + "scale-info/std", + "frame/std", +] ``` -### Add Benchmarks to Runtime +The Cargo feature flag system lets you conditionally compile code based on which features are enabled. By defining a `runtime-benchmarks` feature that cascades to FRAME's benchmarking features, you create a clean way to build your pallet with or without benchmarking support, ensuring all necessary dependencies are available when needed but excluded from production builds. -Before running the benchmarking tool, you must integrate benchmarks with your runtime as follows: +## Update Mock Runtime -1. Navigate to your `runtime/src` directory and check if a `benchmarks.rs` file exists. If not, create one. This file will contain the macro that registers all pallets for benchmarking along with their respective configurations: +Add the `WeightInfo` type to your test configuration in `mock.rs` by adding the following code: - ```rust title="benchmarks.rs" - --8<-- 'code/parachains/customize-runtime/pallet-development/benchmark-pallet/frame-benchmark-macro.rs' +```rust title="pallets/pallet-custom/src/mock.rs" +impl pallet_custom::Config for Test { + type RuntimeEvent = RuntimeEvent; + type CounterMaxValue = ConstU32<1000>; + type WeightInfo = (); +} +``` + +In your mock runtime for testing, use the placeholder `()` implementation of `WeightInfo`, since unit tests focus on verifying functional correctness rather than performance. 
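The mock's `()` placeholder and a benchmarked implementation both satisfy the same trait, which is what makes the swap possible. The pattern can be sketched in plain Rust (a simplified illustration with made-up weight values and type names, not FRAME's actual machinery):

```rust
// Simplified sketch of the WeightInfo swapping pattern (illustrative only).
trait WeightInfo {
    fn increment() -> u64;
}

// Placeholder implementation, like `type WeightInfo = ();` in mock.rs.
impl WeightInfo for () {
    fn increment() -> u64 {
        15_000
    }
}

// Stand-in for a generated, benchmarked implementation.
struct Benchmarked;
impl WeightInfo for Benchmarked {
    fn increment() -> u64 {
        8_732 // hypothetical measured value
    }
}

// The pallet only ever talks to the associated type on its Config.
trait Config {
    type WeightInfo: WeightInfo;
}

fn weight_of_increment<T: Config>() -> u64 {
    T::WeightInfo::increment()
}

struct MockRuntime;
impl Config for MockRuntime {
    type WeightInfo = (); // tests use placeholders
}

struct ProductionRuntime;
impl Config for ProductionRuntime {
    type WeightInfo = Benchmarked; // production uses measured weights
}

fn main() {
    println!("mock: {}", weight_of_increment::<MockRuntime>());
    println!("prod: {}", weight_of_increment::<ProductionRuntime>());
}
```

Each runtime picks an implementation once, in its `Config` impl; the pallet code itself never changes.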
+ +## Configure Runtime Benchmarking + +To execute benchmarks, your pallet must be integrated into the runtime's benchmarking infrastructure. Follow these steps to update the runtime configuration: + +1. **Update `runtime/Cargo.toml`**: Add your pallet to the runtime's `runtime-benchmarks` feature as follows: + + ```toml title="runtime/Cargo.toml" + runtime-benchmarks = [ + "cumulus-pallet-parachain-system/runtime-benchmarks", + "hex-literal", + "pallet-parachain-template/runtime-benchmarks", + "polkadot-sdk/runtime-benchmarks", + "pallet-custom/runtime-benchmarks", + ] ``` - For example, to add a new pallet named `pallet_parachain_template` for benchmarking, include it in the macro as shown: - ```rust title="benchmarks.rs" hl_lines="3" - --8<-- 'code/parachains/customize-runtime/pallet-development/benchmark-pallet/frame-benchmark-macro.rs::3' + When you build the runtime with `--features runtime-benchmarks`, this configuration ensures all necessary benchmarking code across all pallets (including yours) is included. + +2. **Update runtime configuration**: Using the placeholder implementation, run development benchmarks as follows: + + ```rust title="runtime/src/configs/mod.rs" + impl pallet_custom::Config for Runtime { + type RuntimeEvent = RuntimeEvent; + type CounterMaxValue = ConstU32<1000>; + type WeightInfo = (); + } + ``` + +3. **Register benchmarks**: Add your pallet to the benchmark list in `runtime/src/benchmarks.rs` as follows: + + ```rust title="runtime/src/benchmarks.rs" + polkadot_sdk::frame_benchmarking::define_benchmarks!( + [frame_system, SystemBench::<Runtime>] + [pallet_balances, Balances] + // ... other pallets + [pallet_custom, CustomPallet] ); ``` - !!!warning "Updating `define_benchmarks!` macro is required" - Any pallet that needs to be benchmarked must be included in the [`define_benchmarks!`](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/macro.define_benchmarks.html){target=\_blank} macro. 
The CLI will only be able to access and benchmark pallets that are registered here. + The [`define_benchmarks!`](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/macro.define_benchmarks.html){target=\_blank} macro creates the infrastructure that allows the benchmarking CLI tool to discover and execute your pallet's benchmarks. -2. Check your runtime's `lib.rs` file to ensure the `benchmarks` module is imported. The import should look like this: +## Test Benchmark Compilation - ```rust title="lib.rs" - #[cfg(feature = "runtime-benchmarks")] - mod benchmarks; - ``` +Run the following command to verify your benchmarks compile and run as tests: - The `runtime-benchmarks` feature gate ensures benchmark tests are isolated from production runtime code. +```bash +cargo test -p pallet-custom --features runtime-benchmarks +``` -3. Enable runtime benchmarking for your pallet in `runtime/Cargo.toml`: +You will see terminal output similar to the following as your benchmark tests pass: - ```toml - --8<-- 'code/parachains/customize-runtime/pallet-development/benchmark-pallet/runtime-cargo.toml' - ``` +
+ cargo test -p pallet-custom --features runtime-benchmarks + test benchmarking::benchmarks::bench_set_counter_value ... ok + test benchmarking::benchmarks::bench_increment ... ok + test benchmarking::benchmarks::bench_decrement ... ok + +
-### Run Benchmarks +The `impl_benchmark_test_suite!` macro generates unit tests for each benchmark. Running these tests verifies that your benchmarks compile correctly, execute without panicking, and pass their assertions, catching issues early before building the entire runtime. -You can now compile your runtime with the `runtime-benchmarks` feature flag. This feature flag is crucial as the benchmarking tool will look for this feature being enabled to know when it should run benchmark tests. Follow these steps to compile the runtime with benchmarking enabled: +## Build the Runtime with Benchmarks -1. Run `build` with the feature flag included: +Compile the runtime with benchmarking enabled to generate the Wasm binary using the following command: - ```bash - cargo build --features runtime-benchmarks --release - ``` +```bash +cargo build --release --features runtime-benchmarks +``` -2. Create a `weights.rs` file in your pallet's `src/` directory. This file will store the auto-generated weight calculations: +This command produces the runtime WASM file needed for benchmarking, typically located at: `target/release/wbuild/parachain-template-runtime/parachain_template_runtime.wasm` - ```bash - touch weights.rs - ``` +The build includes all the benchmarking infrastructure and special host functions needed for measurement. The resulting WASM runtime contains your benchmark code and can communicate with the benchmarking tool's execution environment. You'll create a different build later for operating your chain in production. -3. Before running the benchmarking tool, you'll need a template file that defines how weight information should be formatted. 
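The `--features runtime-benchmarks` flag works because all benchmarking code sits behind `#[cfg(feature = "...")]`, so it is compiled only when the feature is on. In plain Rust terms (a toy sketch with made-up function names, not the actual pallet code):

```rust
// Toy sketch of Cargo feature gating (illustrative only; names made up).
// Code behind #[cfg(feature = "...")] is compiled only when that feature is
// enabled, e.g. with `cargo build --features runtime-benchmarks`.
#[cfg(feature = "runtime-benchmarks")]
fn build_kind() -> &'static str {
    "benchmarking build"
}

#[cfg(not(feature = "runtime-benchmarks"))]
fn build_kind() -> &'static str {
    "production build"
}

fn main() {
    // Compiled without the feature flag, the production path is taken.
    println!("{}", build_kind());
}
```

This is why the benchmarking module adds nothing to a production build: without the flag, the compiler never even sees that code.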
Download the official template from the Polkadot SDK repository and save it in your project folders for future use: +## Install the Benchmarking Tool - ```bash - curl https://raw.githubusercontent.com/paritytech/polkadot-sdk/refs/tags/polkadot-stable2412/substrate/.maintain/frame-weight-template.hbs \ - --output ./pallets/benchmarking/frame-weight-template.hbs - ``` +Install the `frame-omni-bencher` CLI tool using the following command: + +```bash +cargo install frame-omni-bencher --locked +``` + +[`frame-omni-bencher`](https://paritytech.github.io/polkadot-sdk/master/frame_omni_bencher/index.html){target=\_blank} is the official Polkadot SDK tool designed explicitly for FRAME pallet benchmarking. It provides a standardized way to execute benchmarks, measure execution times and storage operations, and generate properly formatted weight files with full integration into the FRAME weight system. + +## Download the Weight Template + +Download the official weight template file using the following commands: + +```bash +curl -L https://raw.githubusercontent.com/paritytech/polkadot-sdk/refs/tags/polkadot-stable2412/substrate/.maintain/frame-weight-template.hbs \ +--output ./pallets/pallet-custom/frame-weight-template.hbs +``` + +The weight template is a Handlebars file that transforms raw benchmark data into a correctly formatted Rust source file. It defines the structure of the generated `weights.rs` file, including imports, trait definitions, documentation comments, and formatting. Using the official template ensures your weight files follow the Polkadot SDK conventions and include all necessary metadata, such as benchmark execution parameters, storage operation counts, and hardware information. -4. 
Run the benchmarking tool to measure extrinsic weights: +## Execute Benchmarks + +Run benchmarks for your pallet to generate weight files using the following commands: + +```bash +frame-omni-bencher v1 benchmark pallet \ + --runtime ./target/release/wbuild/parachain-template-runtime/parachain_template_runtime.wasm \ + --pallet pallet_custom \ + --extrinsic "" \ + --template ./pallets/pallet-custom/frame-weight-template.hbs \ + --output ./pallets/pallet-custom/src/weights.rs +``` + +Benchmarks execute against the compiled WASM runtime rather than native code because WASM is what actually runs in production on the blockchain. WASM execution can have different performance characteristics than native code due to compilation and sandboxing overhead, so benchmarking against the WASM ensures your weight measurements reflect real-world conditions. + +??? note "Additional customization" + + You can customize benchmark execution with additional parameters for more detailed measurements, as shown in the sample code below: ```bash frame-omni-bencher v1 benchmark pallet \ - --runtime INSERT_PATH_TO_WASM_RUNTIME \ - --pallet INSERT_NAME_OF_PALLET \ - --extrinsic "" \ - --template ./frame-weight-template.hbs \ - --output weights.rs + --runtime ./target/release/wbuild/parachain-template-runtime/parachain_template_runtime.wasm \ + --pallet pallet_custom \ + --extrinsic "" \ + --steps 50 \ + --repeat 20 \ + --template ./pallets/pallet-custom/frame-weight-template.hbs \ + --output ./pallets/pallet-custom/src/weights.rs ``` + + - **`--steps 50`**: Number of different input values to test when using linear components (default: 50). More steps provide finer granularity for detecting complexity trends but increase benchmarking time. + - **`--repeat 20`**: Number of repetitions for each measurement (default: 20). More repetitions improve statistical accuracy by averaging out variance, reducing the impact of system noise, and providing more reliable weight estimates. 
+    - **`--heap-pages 4096`**: WASM heap pages allocation. Affects available memory during execution.
+    - **`--wasm-execution compiled`**: WASM execution method. Use `compiled` for performance closest to production conditions.

-    !!! tip "Flag definitions"
-        - **`--runtime`**: The path to your runtime's Wasm.
-        - **`--pallet`**: The name of the pallet you wish to benchmark. This pallet must be configured in your runtime and defined in `define_benchmarks`.
-        - **`--extrinsic`**: Which extrinsic to test. Using `""` implies all extrinsics will be benchmarked.
-        - **`--template`**: Defines how weight information should be formatted.
-        - **`--output`**: Where the output of the auto-generated weights will reside.
+## Use Generated Weights

-The generated `weights.rs` file contains weight annotations for your extrinsics, ready to be added to your pallet. The output should be similar to the following. Some output is omitted for brevity:
+After running benchmarks, a `weights.rs` file is generated containing weights based on actual measurements of your code running on real hardware, accounting for the specific complexity of your logic, storage access patterns, and computational requirements.

---8<-- 'code/parachains/customize-runtime/pallet-development/benchmark-pallet/benchmark-output.html'
+Follow these steps to use the generated weights with your pallet:

-#### Add Benchmark Weights to Pallet
+1. Integrate the generated weights by adding the weights module to your pallet's `lib.rs` as follows:

-Once the `weights.rs` is generated, you must integrate it with your pallet.
+    ```rust title="pallets/pallet-custom/src/lib.rs"
+    #![cfg_attr(not(feature = "std"), no_std)]

-1. To begin the integration, import the `weights` module and the `WeightInfo` trait, then add both to your pallet's `Config` trait. Complete the following steps to set up the configuration:
+
+    extern crate alloc;
+    use alloc::vec::Vec;

-    ```rust title="lib.rs"
-    --8<-- 'code/parachains/customize-runtime/pallet-development/benchmark-pallet/weight-config.rs'
-    ```
+    pub use pallet::*;
+
+    #[cfg(feature = "runtime-benchmarks")]
+    mod benchmarking;

-2. Next, you must add this to the `#[pallet::weight]` annotation in all the extrinsics via the `Config` as follows:
+    pub mod weights;

-    ```rust hl_lines="2" title="lib.rs"
-    --8<-- 'code/parachains/customize-runtime/pallet-development/benchmark-pallet/dispatchable-pallet-weight.rs'
-    ```
+    #[frame::pallet]
+    pub mod pallet {
+        use super::*;
+        use frame::prelude::*;
+        use crate::weights::WeightInfo;
+        // ... rest of pallet
+    }
+    ```

-3. Finally, configure the actual weight values in your runtime. In `runtime/src/config/mod.rs`, add the following code:
+    Unlike the benchmarking module (which is only needed when running benchmarks), the weights module must be available in all builds because the runtime needs to call the weight functions during regular operation to calculate transaction fees and enforce block limits.
+
+2. Update your runtime configuration to use the generated weights instead of the placeholder `()` implementation by adding the following code:

-    ```rust title="mod.rs"
-    --8<-- 'code/parachains/customize-runtime/pallet-development/benchmark-pallet/runtime-pallet-config.rs'
+    ```rust title="runtime/src/configs/mod.rs"
+    impl pallet_custom::Config for Runtime {
+        type RuntimeEvent = RuntimeEvent;
+        type CounterMaxValue = ConstU32<1000>;
+        type WeightInfo = pallet_custom::weights::SubstrateWeight<Runtime>;
+    }
     ```

-## Where to Go Next
+    This change activates your benchmarked weights in the production runtime. Now, when users submit transactions that call your pallet's extrinsics, the runtime will use the actual measured weights to calculate fees and enforce block limits.
+
+??? code "Example generated weight file"
+
+    The generated `weights.rs` file will look similar to this:
+
+    ```rust title="pallets/pallet-custom/src/weights.rs"
+    //! Autogenerated weights for `pallet_custom`
+    //!
+    //! THIS FILE WAS AUTO-GENERATED USING THE SUBSTRATE BENCHMARK CLI VERSION 32.0.0
+    //! DATE: 2025-01-15, STEPS: `50`, REPEAT: `20`
+
+    #![cfg_attr(rustfmt, rustfmt_skip)]
+    #![allow(unused_parens)]
+    #![allow(unused_imports)]
+    #![allow(missing_docs)]
+
+    use frame_support::{traits::Get, weights::{Weight, constants::RocksDbWeight}};
+    use core::marker::PhantomData;
+
+    pub trait WeightInfo {
+        fn set_counter_value() -> Weight;
+        fn increment() -> Weight;
+        fn decrement() -> Weight;
+    }
+
+    pub struct SubstrateWeight<T>(PhantomData<T>);
+    impl<T: frame_system::Config> WeightInfo for SubstrateWeight<T> {
+        fn set_counter_value() -> Weight {
+            Weight::from_parts(8_234_000, 0)
+                .saturating_add(T::DbWeight::get().reads(1))
+                .saturating_add(T::DbWeight::get().writes(1))
+        }
+
+        fn increment() -> Weight {
+            Weight::from_parts(12_456_000, 0)
+                .saturating_add(T::DbWeight::get().reads(2))
+                .saturating_add(T::DbWeight::get().writes(2))
+        }
+
+        fn decrement() -> Weight {
+            Weight::from_parts(11_987_000, 0)
+                .saturating_add(T::DbWeight::get().reads(2))
+                .saturating_add(T::DbWeight::get().writes(2))
+        }
+    }
+    ```
+
+    The actual numbers in your `weights.rs` file will vary based on your hardware and implementation complexity. The [`DbWeight`](https://paritytech.github.io/polkadot-sdk/master/frame_support/weights/struct.RuntimeDbWeight.html){target=\_blank} accounts for database read and write operations.
+
+Congratulations, you've successfully benchmarked a pallet and updated your runtime to use the generated weight values.
+
+## Related Resources

-- View the Rust Docs for a more comprehensive, low-level view of the [FRAME V2 Benchmarking Suite](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/v2/index.html){target=_blank}.
-- Read the [FRAME Benchmarking and Weights](https://paritytech.github.io/polkadot-sdk/master/polkadot_sdk_docs/reference_docs/frame_benchmarking_weight/index.html){target=_blank} reference document, a concise guide which details how weights and benchmarking work.
+- [FRAME Benchmarking Documentation](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/index.html){target=\_blank}
+- [Weight Struct Documentation](https://paritytech.github.io/polkadot-sdk/master/frame_support/weights/struct.Weight.html){target=\_blank}
+- [Benchmarking v2 API](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/v2/index.html){target=\_blank}
+- [frame-omni-bencher Tool](https://paritytech.github.io/polkadot-sdk/master/frame_omni_bencher/index.html){target=\_blank}

diff --git a/parachains/customize-runtime/pallet-development/create-a-pallet.md b/parachains/customize-runtime/pallet-development/create-a-pallet.md
index 26d75c2b8..f86b7b83b 100644
--- a/parachains/customize-runtime/pallet-development/create-a-pallet.md
+++ b/parachains/customize-runtime/pallet-development/create-a-pallet.md
@@ -340,7 +340,7 @@ This command validates all pallet configurations and prepares the build for depl
 
 ## Run Your Chain Locally
 
-Launch your parachain locally to test the new pallet functionality using the [Polkadot Omni Node](https://crates.io/crates/polkadot-omni-node){target=\_blank}.
+Launch your parachain locally to test the new pallet functionality using the [Polkadot Omni Node](https://crates.io/crates/polkadot-omni-node){target=\_blank}. For instructions on setting up the Polkadot Omni Node and [Polkadot Chain Spec Builder](https://crates.io/crates/staging-chain-spec-builder){target=\_blank}, refer to the [Set Up a Parachain Template](/parachains/launch-a-parachain/set-up-the-parachain-template/){target=\_blank} guide.
 ### Generate a Chain Specification

diff --git a/parachains/customize-runtime/pallet-development/pallet-testing.md b/parachains/customize-runtime/pallet-development/pallet-testing.md
index 1bc102974..cacde9565 100644
--- a/parachains/customize-runtime/pallet-development/pallet-testing.md
+++ b/parachains/customize-runtime/pallet-development/pallet-testing.md
@@ -1,10 +1,10 @@
 ---
-title: Pallet Unit Testing
-description: Learn how to write comprehensive unit tests for your custom pallets using mock runtimes, ensuring reliability and correctness before deployment.
+title: Unit Test Pallets
+description: Learn how to efficiently test pallets in the Polkadot SDK, ensuring the reliability and security of your pallet's operations.
 categories: Parachains
 ---
 
-# Pallet Unit Testing
+# Unit Test Pallets
 
 ## Introduction
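The unit-testing guide retitled in the last hunk covers testing against FRAME mock runtimes. As a loose, framework-free illustration of the invariants such tests assert for the counter pallet (a configurable maximum value, no underflow), here is a plain-Rust sketch; the `Counter` type and `CounterError` variants are hypothetical stand-ins, not the pallet's actual API:

```rust
// Plain-Rust analogue of the counter pallet's core rules, for illustration only.
// The real pallet stores its value on-chain and is tested against a mock runtime.
#[derive(Debug, PartialEq, Eq)]
pub enum CounterError {
    AboveMax,  // increment would exceed the configured maximum
    BelowZero, // decrement would underflow
}

pub struct Counter {
    value: u32,
    max_value: u32, // plays the role of the `CounterMaxValue` config constant
}

impl Counter {
    pub fn new(max_value: u32) -> Self {
        Self { value: 0, max_value }
    }

    pub fn increment(&mut self, by: u32) -> Result<u32, CounterError> {
        let next = self.value.checked_add(by).ok_or(CounterError::AboveMax)?;
        if next > self.max_value {
            return Err(CounterError::AboveMax);
        }
        self.value = next;
        Ok(self.value)
    }

    pub fn decrement(&mut self, by: u32) -> Result<u32, CounterError> {
        self.value = self.value.checked_sub(by).ok_or(CounterError::BelowZero)?;
        Ok(self.value)
    }
}

fn main() {
    let mut c = Counter::new(1000);
    assert_eq!(c.increment(5), Ok(5));
    assert_eq!(c.decrement(2), Ok(3));
    // 3 + 998 = 1001 exceeds the maximum of 1000, so the value stays at 3.
    assert_eq!(c.increment(998), Err(CounterError::AboveMax));
    // 3 - 4 would underflow.
    assert_eq!(c.decrement(4), Err(CounterError::BelowZero));
}
```

A mock-runtime test in the real pallet checks the same boundaries, but dispatches extrinsics and inspects storage and emitted events instead of a plain struct.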