From ca70d0195c3b91b3768fbeb020ea1e6547695c81 Mon Sep 17 00:00:00 2001 From: Jonathan Oppenheimer Date: Mon, 10 Nov 2025 14:03:37 -0500 Subject: [PATCH 01/20] chore: Add lint link check job (#1392) Signed-off-by: Jonathan Oppenheimer <147infiniti@gmail.com> Co-authored-by: Austin Larson <78000745+alarso16@users.noreply.github.com> --- .github/CONTRIBUTING.md | 55 ++++++++++++++-------------- .github/ISSUE_TEMPLATE/bug_report.md | 2 +- .github/pull_request_template.md | 1 + .github/workflows/ci.yml | 8 ++++ README.md | 4 +- SECURITY.md | 6 +-- cmd/simulator/README.md | 4 +- consensus/dummy/README.md | 4 +- core/README.md | 4 +- docs/releasing/README.md | 24 +++++++++--- plugin/evm/README.md | 6 +-- plugin/evm/config/config.md | 12 +++++- precompile/contracts/warp/README.md | 10 ++--- sync/README.md | 29 ++++++++++----- 14 files changed, 106 insertions(+), 63 deletions(-) diff --git a/.github/CONTRIBUTING.md b/.github/CONTRIBUTING.md index 7cb818ae8e..b242445d51 100644 --- a/.github/CONTRIBUTING.md +++ b/.github/CONTRIBUTING.md @@ -1,16 +1,8 @@ # Contributing -Thank you for considering to help out with the source code! We welcome -contributions from anyone on the internet, and are grateful for even the -smallest of fixes! - -If you'd like to contribute to subnet-evm, please fork, fix, commit and send a -pull request for the maintainers to review and merge into the main code base. If -you wish to submit more complex changes though, please check up with the core -devs first on [Discord](https://chat.avalabs.org) to -ensure those changes are in line with the general philosophy of the project -and/or get some early feedback which can make both your efforts much lighter as -well as our review and merge procedures quick and simple. +Thank you for considering to help out with the source code! We welcome contributions from anyone on the internet, and are grateful for even the smallest of fixes! 
+
+If you'd like to contribute to subnet-evm, please fork, fix, commit and send a pull request for the maintainers to review and merge into the main code base. If you wish to submit more complex changes though, please check up with the core devs first on [Discord](https://chat.avalabs.org) to ensure those changes are in line with the general philosophy of the project and/or get some early feedback which can make both your efforts much lighter as well as our review and merge procedures quick and simple.

 ## Coding guidelines

@@ -20,28 +12,34 @@ guidelines:

- Code must adhere to the official Go
  [formatting](https://go.dev/doc/effective_go#formatting) guidelines (i.e. uses
  [gofmt](https://pkg.go.dev/cmd/gofmt)).
+- Code must be documented adhering to the official Go
+  [commentary](https://go.dev/doc/effective_go#commentary) guidelines.
- Pull requests need to be based on and opened against the `master` branch.
- Pull requests should include a detailed description
-- Commits are required to be signed. See [here](https://docs.github.com/en/authentication/managing-commit-signature-verification/signing-commits)
+- Commits are required to be signed. See the [commit signature verification documentation](https://docs.github.com/en/authentication/managing-commit-signature-verification/signing-commits)
  for information on signing commits.
- Commit messages should be prefixed with the package(s) they modify.
  - E.g. "eth, rpc: make trace configs optional"

-### Mocks
+
+## Can I have feature X
+
+Before you submit a feature request, please check and make sure that it isn't possible through some other means.
+
+## Mocks

 Mocks are auto-generated using [mockgen](https://pkg.go.dev/go.uber.org/mock/mockgen) and `//go:generate` commands in the code.

-- To **re-generate all mocks**, use the command below from the root of the project:
+- To **re-generate all mocks**, use the task below from the root of the project:

-  ```sh
-  go generate -run mockgen ./...
- ``` + ```sh + task generate-mocks + ``` -* To **add** an interface that needs a corresponding mock generated: - * if the file `mocks_generate_test.go` exists in the package where the interface is located, either: - * modify its `//go:generate go tool -modfile=tools/go.mod mockgen` to generate a mock for your interface (preferred); or - * add another `//go:generate go tool -modfile=tools/go.mod mockgen` to generate a mock for your interface according to specific mock generation settings - * if the file `mocks_generate_test.go` does not exist in the package where the interface is located, create it with content (adapt as needed): +- To **add** an interface that needs a corresponding mock generated: + - if the file `mocks_generate_test.go` exists in the package where the interface is located, either: + - modify its `//go:generate go tool -modfile=tools/go.mod mockgen` to generate a mock for your interface (preferred); or + - add another `//go:generate go tool -modfile=tools/go.mod mockgen` to generate a mock for your interface according to specific mock generation settings + - if the file `mocks_generate_test.go` does not exist in the package where the interface is located, create it with content (adapt as needed): ```go // Copyright (C) 2019-2025, Ava Labs, Inc. All rights reserved. @@ -58,8 +56,8 @@ Mocks are auto-generated using [mockgen](https://pkg.go.dev/go.uber.org/mock/moc - To **remove** an interface from having a corresponding mock generated: 1. Edit the `mocks_generate_test.go` file in the directory where the interface is defined 1. If the `//go:generate` mockgen command line: - * generates a mock file for multiple interfaces, remove your interface from the line - * generates a mock file only for the interface, remove the entire line. If the file is empty, remove `mocks_generate_test.go` as well. + - generates a mock file for multiple interfaces, remove your interface from the line + - generates a mock file only for the interface, remove the entire line. 
If the file is empty, remove `mocks_generate_test.go` as well. ## Tool Dependencies @@ -67,17 +65,20 @@ This project uses `go tool` to manage development tool dependencies in `tools/go ### Managing Tools -* To **add a new tool**: +- To **add a new tool**: + ```sh go get -tool -modfile=tools/go.mod example.com/tool/cmd/toolname@version ``` -* To **upgrade a tool**: +- To **upgrade a tool**: + ```sh go get -tool -modfile=tools/go.mod example.com/tool/cmd/toolname@newversion ``` -* To **run a tool manually**: +- To **run a tool manually**: + ```sh go tool -modfile=tools/go.mod toolname [args...] ``` diff --git a/.github/ISSUE_TEMPLATE/bug_report.md b/.github/ISSUE_TEMPLATE/bug_report.md index a06a6658c8..4c524e036c 100644 --- a/.github/ISSUE_TEMPLATE/bug_report.md +++ b/.github/ISSUE_TEMPLATE/bug_report.md @@ -31,4 +31,4 @@ Which OS you used to reveal the bug. **Additional context** Add any other context about the problem here. -Avalanche Bug Bounty program can be found [here](https://immunefi.com/bug-bounty/avalanche/information/). +You can submit a bug on the [Avalanche Bug Bounty program page](https://hackenproof.com/avalanche/avalanche-protocol). 
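The `mocks_generate_test.go` layout described in the CONTRIBUTING.md hunks above might look like the following sketch. This is illustrative only: the package name, source file, and destination path are hypothetical placeholders to adapt per package.

```go
// Copyright (C) 2019-2025, Ava Labs, Inc. All rights reserved.
// See the file LICENSE for licensing terms.

// Package mypackage is a hypothetical package whose interfaces need mocks.
package mypackage

// Regenerate the mock for the (hypothetical) interface defined in foo.go
// whenever `task generate-mocks` (or `go generate`) runs.
//go:generate go tool -modfile=tools/go.mod mockgen -package=mypackagemock -source=foo.go -destination=mypackagemock/foo.go
```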
diff --git a/.github/pull_request_template.md b/.github/pull_request_template.md
index 7751a8146f..d1b4cd3a7b 100644
--- a/.github/pull_request_template.md
+++ b/.github/pull_request_template.md
@@ -1,3 +1,4 @@
+
 ## Why this should be merged

 ## How this works
diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml
index 7b0c126272..4b3ea83be6 100644
--- a/.github/workflows/ci.yml
+++ b/.github/workflows/ci.yml
@@ -30,6 +30,14 @@ jobs:
       env:
         GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
       run: ./scripts/run_task.sh check-avalanchego-version
+  links-lint:
+    name: Markdown Links Lint
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v4
+      - uses: umbrelladocs/action-linkspector@de84085e0f51452a470558693d7d308fbb2fa261 #v1.2.5
+        with:
+          fail_level: any
   unit_test:
     name: Golang Unit Tests (${{ matrix.os }})
diff --git a/README.md b/README.md
index 312210791a..b1d3e5fa1b 100644
--- a/README.md
+++ b/README.md
@@ -44,7 +44,7 @@ The Subnet EVM supports the following API namespaces:
 Only the `eth` namespace is enabled by default.
 Subnet EVM is a simplified version of [Coreth VM (C-Chain)](https://github.com/ava-labs/coreth).
-Full documentation for the C-Chain's API can be found [here](https://build.avax.network/docs/api-reference/c-chain/api).
+Full documentation for the C-Chain's API can be found [in the builder docs](https://build.avax.network/docs/rpcs/c-chain).

 ## Compatibility

@@ -70,7 +70,7 @@ To support these changes, there have been a number of changes to the SubnetEVM b

 ### Clone Subnet-evm

-First install Go 1.24.9 or later. Follow the instructions [here](https://go.dev/doc/install). You can verify by running `go version`.
+First install Go 1.24.9 or later. Follow the [Go installation instructions](https://go.dev/doc/install). You can verify by running `go version`.

 Set `$GOPATH` environment variable properly for Go to look for Go Workspaces. Please read [this](https://go.dev/doc/code) for details. You can verify by running `echo $GOPATH`.
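The `links-lint` job added above delegates link checking to linkspector in CI. As a purely illustrative aside (this is not how the action works internally), a minimal Go scan can catch the kind of regression the README hunk above guards against: an inline link written as bare `[text]` with no `(url)` following it.

```go
package main

import (
	"fmt"
	"regexp"
)

// findBareLinks returns markdown link texts written as [text] with no
// (url) immediately following. Reference-style links ([text][ref]) and
// reference definitions ([ref]: url) are skipped.
func findBareLinks(md string) []string {
	re := regexp.MustCompile(`\[[^\]\n]+\]`)
	var bare []string
	for _, loc := range re.FindAllStringIndex(md, -1) {
		end := loc[1]
		// Skip [text](url), [text][ref], and "[ref]: url" definitions.
		if end < len(md) && (md[end] == '(' || md[end] == '[' || md[end] == ':') {
			continue
		}
		// Skip the [ref] half of a reference-style link.
		if loc[0] > 0 && md[loc[0]-1] == ']' {
			continue
		}
		bare = append(bare, md[loc[0]:loc[1]])
	}
	return bare
}

func main() {
	md := "Follow the instructions [here]. See [the docs](https://go.dev/doc/install)."
	fmt.Println(findBareLinks(md)) // [[here]]
}
```

Real markdown link checking has many more cases (images, footnotes, autolinks, dead URLs), which is why the CI job uses a dedicated tool instead of a regex.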
diff --git a/SECURITY.md b/SECURITY.md
index 61603a456e..3c2ebb4c44 100644
--- a/SECURITY.md
+++ b/SECURITY.md
@@ -5,15 +5,15 @@ responsible disclosures. Valid reports will be eligible for a reward (terms and

 ## Reporting a Vulnerability

-**Please do not file a public ticket** mentioning the vulnerability. To disclose a vulnerability submit it through our [Bug Bounty Program](https://immunefi.com/bug-bounty/avalanche/information/).
+**Please do not file a public ticket** mentioning the vulnerability. To disclose a vulnerability submit it through our [Bug Bounty Program](https://immunefi.com/bug-bounty/avalabs/information/).

 Vulnerabilities must be disclosed to us privately with reasonable time to respond, and avoid compromise of other users and accounts, or loss of funds that are not your own. We do not reward spam or social engineering vulnerabilities. Do not test for or validate any security issues in the live Avalanche networks (Mainnet and Fuji testnet), confirm all exploits in a local private testnet.

-Please refer to the [Bug Bounty Page](https://immunefi.com/bug-bounty/avalanche/information/) for the most up-to-date program rules and scope.
+Please refer to the [Bug Bounty Page](https://immunefi.com/bug-bounty/avalabs/information/) for the most up-to-date program rules and scope.

 ## Supported Versions

-Please use the [most recently released version](https://github.com/ava-labs/subnet-evm/releases/latest) to perform testing and to validate security issues.
+Please use the [most recently released version](https://github.com/ava-labs/subnet-evm/releases/latest) to perform testing and to validate security issues.
diff --git a/cmd/simulator/README.md b/cmd/simulator/README.md
index b199106934..f602591df5 100644
--- a/cmd/simulator/README.md
+++ b/cmd/simulator/README.md
@@ -24,7 +24,7 @@ To confirm that you built successfully, run the simulator and print the version:

 This should give the following output:

-```
+```bash
 v0.1.0
 ```

@@ -45,7 +45,7 @@ The `--sybil-protection-enabled=false` flag is only suitable for local testing.

 1. Ignore stake weight on the P-Chain and count each connected peer as having a stake weight of 1
 2. Automatically opts in to validate every Subnet

-Once you have AvalancheGo running locally, it will be running an HTTP Server on the default port `9650`. This means that the RPC Endpoint for the C-Chain will be http://127.0.0.1:9650/ext/bc/C/rpc and ws://127.0.0.1:9650/ext/bc/C/ws for WebSocket connections.
+Once you have AvalancheGo running locally, it will be running an HTTP Server on the default port `9650`. This means that the RPC Endpoint for the C-Chain will be `http://127.0.0.1:9650/ext/bc/C/rpc` and `ws://127.0.0.1:9650/ext/bc/C/ws` for WebSocket connections.

 Now, we can run the simulator command to simulate some load on the local C-Chain for 30s:

diff --git a/consensus/dummy/README.md b/consensus/dummy/README.md
index f2269e3019..cbe9a5f1be 100644
--- a/consensus/dummy/README.md
+++ b/consensus/dummy/README.md
@@ -12,7 +12,7 @@ The dummy consensus engine is responsible for performing verification on the hea

 ## Dynamic Fees

-Subnet-EVM includes a dynamic fee algorithm based off of (EIP-1559)[https://eips.ethereum.org/EIPS/eip-1559]. This introduces a field to the block type called `BaseFee`. The Base Fee sets a minimum gas price for any transaction to be included in the block. For example, a transaction with a gas price of 49 gwei, will be invalid to include in a block with a base fee of 50 gwei.
+Subnet-EVM includes a dynamic fee algorithm based off of [EIP-1559](https://eips.ethereum.org/EIPS/eip-1559).
This introduces a field to the block type called `BaseFee`. The Base Fee sets a minimum gas price for any transaction to be included in the block. For example, a transaction with a gas price of 49 gwei, will be invalid to include in a block with a base fee of 50 gwei.

 The dynamic fee algorithm aims to adjust the base fee to handle network congestion. Subnet-EVM sets a target utilization on the network, and the dynamic fee algorithm adjusts the base fee accordingly. If the network operates above the target utilization, the dynamic fee algorithm will increase the base fee to make utilizing the network more expensive and bring overall utilization down. If the network operates below the target utilization, the dynamic fee algorithm will decrease the base fee to make it cheaper to use the network.

@@ -30,4 +30,4 @@ The FinalizeAndAssemble callback is used as the final step in building a block w

 ### Finalize

-Finalize is called as the final step in processing a block [here](../../core/state_processor.go). Since either Finalize or FinalizeAndAssemble are called, but not both, when building or verifying/processing a block they need to perform the exact same processing/verification step to ensure that a block produced by the miner where FinalizeAndAssemble is called will be processed and verified in the same way when Finalize gets called.
+Finalize is called as the final step in processing a block in [state_processor.go](../../core/state_processor.go). Since either Finalize or FinalizeAndAssemble are called, but not both, when building or verifying/processing a block they need to perform the exact same processing/verification step to ensure that a block produced by the miner where FinalizeAndAssemble is called will be processed and verified in the same way when Finalize gets called.
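The adjustment rule described in the Dynamic Fees hunks above can be sketched as a simplified EIP-1559-style update. The constants and integer math below are illustrative only, not Subnet-EVM's actual fee parameters.

```go
package main

import "fmt"

// nextBaseFee nudges the base fee toward equilibrium: above-target usage
// raises it, below-target usage lowers it. The denominator bounds the
// per-block change (8 is the EIP-1559 value on Ethereum mainnet,
// used here purely for illustration).
func nextBaseFee(parentBaseFee, gasUsed, gasTarget uint64) uint64 {
	const denominator = 8
	switch {
	case gasUsed == gasTarget:
		return parentBaseFee
	case gasUsed > gasTarget:
		delta := parentBaseFee * (gasUsed - gasTarget) / gasTarget / denominator
		if delta < 1 {
			delta = 1 // always move by at least 1 when congested
		}
		return parentBaseFee + delta
	default:
		delta := parentBaseFee * (gasTarget - gasUsed) / gasTarget / denominator
		return parentBaseFee - delta
	}
}

func main() {
	// Full blocks push the base fee up; empty blocks pull it down.
	fmt.Println(nextBaseFee(100, 200, 100)) // 112 (+12.5%)
	fmt.Println(nextBaseFee(100, 0, 100))   // 88  (-12.5%)
}
```

A transaction's gas price must meet or exceed the resulting base fee to be includable, which is exactly the 49 gwei vs 50 gwei example in the text.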
diff --git a/core/README.md b/core/README.md index f8b71693f5..a094dab6d8 100644 --- a/core/README.md +++ b/core/README.md @@ -6,7 +6,7 @@ The core package maintains the backend for the blockchain, transaction pool, and The [BlockChain](./blockchain.go) struct handles the insertion of blocks into the maintained chain. It maintains a "canonical chain", which is essentially the preferred chain (the chain that ends with the block preferred by the AvalancheGo consensus engine). -When the consensus engine verifies blocks as they are ready to be issued into consensus, it calls `Verify()` on the ChainVM Block interface implemented [here](../plugin/evm/block.go). This calls `InsertBlockManual` on the BlockChain struct implemented in this package, which is the first entrypoint of a block into the blockchain. +When the consensus engine verifies blocks as they are ready to be issued into consensus, it calls `Verify()` on the ChainVM Block interface implemented in [wrapped_block.go](../plugin/evm/wrapped_block.go). This calls `InsertBlockManual` on the BlockChain struct implemented in this package, which is the first entrypoint of a block into the blockchain. InsertBlockManual verifies the block, inserts it into the state manager to track the merkle trie for the block, and adds it to the canonical chain if it extends the currently preferred chain. @@ -20,7 +20,7 @@ The transaction pool maintains the set of transactions that need to be issued in ## State Manager -The State Manager manages the [TrieDB](../trie/database.go). The TrieDB tracks a merkle forest of all of the merkle tries for the last accepted block and processing blocks. When a block is processed, the state transition results in a new merkle trie added to the merkle forest. The State Manager can operate in either archival or pruning mode. +The State Manager manages references to state roots in the TrieDB implementations (see [`triedb`](../triedb/) for hashdb, pathdb, and firewood implementations). 
The TrieDB stores trie nodes (the individual components of state tries) in memory and on disk. When a block is processed, the state transition results in a new state root, and the TrieDB updates or inserts the trie nodes that compose this state. The State Manager tracks which state roots are referenced by processing blocks and manages when to commit trie nodes to disk or dereference them. The State Manager can operate in either archival or pruning mode. ### Archival Mode diff --git a/docs/releasing/README.md b/docs/releasing/README.md index be480a2a93..a0994b33a4 100644 --- a/docs/releasing/README.md +++ b/docs/releasing/README.md @@ -18,7 +18,7 @@ export VERSION_RC=v0.7.3-rc.0 export VERSION=v0.7.3 ``` -Remember to use the appropriate versioning for your release. +Remember to use the appropriate versioning for your release. 1. Create your branch, usually from the tip of the `master` branch: @@ -31,10 +31,12 @@ Remember to use the appropriate versioning for your release. 2. Update the [RELEASES.md](../../RELEASES.md) file with the new release version `$VERSION`. 3. Modify the [plugin/evm/version.go](../../plugin/evm/version.go) `Version` global string variable and set it to the desired `$VERSION`. 4. Ensure the AvalancheGo version used in [go.mod](../../go.mod) is [its last release](https://github.com/ava-labs/avalanchego/releases). If not, upgrade it with, for example: + ```bash go get github.com/ava-labs/avalanchego@v1.13.0 go mod tidy ``` + And fix any errors that may arise from the upgrade. If it requires significant changes, you may want to create a separate PR for the upgrade and wait for it to be merged before continuing with this procedure. 5. Add an entry in the object in [compatibility.json](../../compatibility.json), adding the target release `$VERSION` as key and the AvalancheGo RPC chain VM protocol version as value, to the `"rpcChainVMProtocolVersion"` JSON object. 
For example, we would add:

@@ -382,20 +384,26 @@ Following the previous example in the [Release candidate section](#release-candi
 5. Finally, [create a release for precompile-evm](https://github.com/ava-labs/precompile-evm/blob/main/docs/releasing/README.md)

 ### Post-release
+
 After you have successfully released a new subnet-evm version, you need to bump all of the versions again in preparation for the next release. Note that the release here is not final, and will be reassessed, and possibly changed prior to release. Some releases require a major version update, but this will usually be `$VERSION` + `0.0.1`. For example:
+
 ```bash
 export P_VERSION=v0.7.4
 ```

-1. Create a branch, from the tip of the `master` branch after the release PR has been merged:
+
+1. Create a branch, from the tip of the `master` branch after the release PR has been merged
+
 ```bash
 git fetch origin master
 git checkout master
 git checkout -b "prep-$P_VERSION-release"
 ```
+
 1. Bump the version number to the next pending release version, `$P_VERSION`
- - Update the [RELEASES.md](../../RELEASES.md) file with `$P_VERSION`, creating a space for maintainers to place their changes as they make them.
- - Modify the [plugin/evm/version.go](../../plugin/evm/version.go) `Version` global string variable and set it to `$P_VERSION`.
-1. Add an entry in the object in [compatibility.json](../../compatibility.json), adding the next pending release versionas key and the AvalancheGo RPC chain VM protocol version as value, to the `"rpcChainVMProtocolVersion"` JSON object. For example, we would add:
+
+- Update the [RELEASES.md](../../RELEASES.md) file with `$P_VERSION`, creating a space for maintainers to place their changes as they make them.
+- Modify the [plugin/evm/version.go](../../plugin/evm/version.go) `Version` global string variable and set it to `$P_VERSION`.
+
+- Add an entry in the object in [compatibility.json](../../compatibility.json), adding the next pending release version as key and the AvalancheGo RPC chain VM protocol version as value, to the `"rpcChainVMProtocolVersion"` JSON object. For example, we would add:

 ```json
 "v0.7.4": 39,

@@ -422,15 +430,21 @@ export P_VERSION=v0.7.4
    git push -u origin "prep-$P_VERSION-release"
    ```
 1. Create a pull request (PR) from your branch targeting master, for example using [`gh`](https://cli.github.com/):
+
    ```bash
    gh pr create --repo github.com/ava-labs/subnet-evm --base master --title "chore: prep next release $P_VERSION"
    ```
+
 1. Wait for the PR checks to pass with
+
    ```bash
    gh pr checks --watch
    ```
+
 1. Squash and merge your branch into `master`, for example:
+
    ```bash
    gh pr merge "prep-$P_VERSION-release" --squash --subject "chore: prep next release $P_VERSION"
    ```
+
 1. Pat yourself on the back for a job well done.
diff --git a/plugin/evm/README.md b/plugin/evm/README.md
index cb180bafc7..3399213e0b 100644
--- a/plugin/evm/README.md
+++ b/plugin/evm/README.md
@@ -12,12 +12,12 @@ The VM creates APIs for the node through the function `CreateHandlers()`. Create

 ## Block Handling

-The VM implements `buildBlock`, `parseBlock`, and `getBlock` and uses the `chain` package from AvalancheGo to construct a metered state, which uses these functions to implement an efficient caching layer and maintain the required invariants for blocks that get returned to the consensus engine.
+The VM implements `buildBlock`, `parseBlock`, and `getBlock`, which are used by the `chain` package from AvalancheGo to construct a metered state. The metered state wraps blocks returned by these functions with an efficient caching layer and maintains the required invariants for blocks that get returned to the consensus engine.
-To do this, the VM uses a modified version of the Ethereum RLP block type [here](../../core/types/block.go) and uses the core package's BlockChain type [here](../../core/blockchain.go) to handle the insertion and storage of blocks into the chain.
+The VM uses the block type from [`libevm/core/types`](https://github.com/ava-labs/libevm/tree/master/core/types) and extends it with Avalanche-specific fields (such as `BlockGasCost`) using libevm's extensibility mechanism (defined in [`customtypes`](customtypes/)), then wraps it with [`wrappedBlock`](wrapped_block.go) to implement the AvalancheGo Block interface. The core package's BlockChain type in [blockchain.go](../../core/blockchain.go) handles the insertion and storage of blocks into the chain.

 ## Block

 The Block type implements the AvalancheGo ChainVM Block interface. The key functions for this interface are `Verify()`, `Accept()`, `Reject()`, and `Status()`.

-The Block type wraps the stateless block type [here](../../core/types/block.go) and implements these functions to allow the consensus engine to verify blocks as valid, perform consensus, and mark them as accepted or rejected. See the documentation in AvalancheGo for the more detailed VM invariants that are maintained here.
+The Block type (implemented as [`wrappedBlock`](wrapped_block.go)) wraps the block type from [`libevm/core/types`](https://github.com/ava-labs/libevm/tree/master/core/types) and implements these functions to allow the consensus engine to verify blocks as valid, perform consensus, and mark them as accepted or rejected. Blocks may also include optional block extensions for extensible VM functionality. See the documentation in AvalancheGo for the more detailed VM invariants that are maintained here.
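The wrapping described in the plugin/evm README hunks above is an adapter pattern: a consensus-facing block type embeds the inner EVM block and adds lifecycle state. A stripped-down sketch with hypothetical types follows (the real `wrappedBlock` implements the full AvalancheGo block interface and performs real verification against the chain):

```go
package main

import "fmt"

// ethBlock stands in for the libevm block type.
type ethBlock struct {
	number uint64
	hash   string
}

// status mirrors the processing/accepted/rejected lifecycle.
type status int

const (
	processing status = iota
	accepted
	rejected
)

// wrappedBlock adapts an ethBlock to a consensus-facing interface.
type wrappedBlock struct {
	inner  *ethBlock
	status status
}

// Verify would insert the block into the chain; here it always succeeds.
func (b *wrappedBlock) Verify() error  { return nil }
func (b *wrappedBlock) Accept()        { b.status = accepted }
func (b *wrappedBlock) Reject()        { b.status = rejected }
func (b *wrappedBlock) Height() uint64 { return b.inner.number }

func main() {
	blk := &wrappedBlock{inner: &ethBlock{number: 42, hash: "0xabc"}}
	if err := blk.Verify(); err == nil {
		blk.Accept() // consensus marks the verified block accepted
	}
	fmt.Println(blk.Height(), blk.status == accepted) // 42 true
}
```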
diff --git a/plugin/evm/config/config.md b/plugin/evm/config/config.md
index 09672ceecf..1fe67c1d86 100644
--- a/plugin/evm/config/config.md
+++ b/plugin/evm/config/config.md
@@ -1,6 +1,8 @@
 # Subnet-EVM Configuration

-> **Note**: These are the configuration options available in the Subnet-EVM codebase. To set these values, you need to create a configuration file at `~/.avalanchego/configs/chains//config.json`.
+> **Note**: These are the configuration options available in the subnet-evm codebase. To set these values, you need to create a configuration file at `{chain-config-dir}/{blockchainID}/config.json`. This file does not exist by default.
+>
+> For example if `chain-config-dir` has the default value which is `$HOME/.avalanchego/configs/chains`, then `config.json` should be placed at `$HOME/.avalanchego/configs/chains/{blockchainID}/config.json`.
 >
 > For the AvalancheGo node configuration options, see the AvalancheGo Configuration page.

@@ -105,6 +107,8 @@ Configuration is provided as a JSON object. All fields are optional unless other

 ## Pruning and State Management

+ > **Note**: If a node is ever run with `pruning-enabled` as `false` (archival mode), setting `pruning-enabled` to `true` will result in a warning and the node will shut down. This is to protect against unintentional misconfigurations of an archival node. To override this and switch to pruning mode, in addition to `pruning-enabled: true`, `allow-missing-tries` should be set to `true` as well.
+
 ### Basic Pruning

 | Option | Type | Description | Default |
 |--------|------|-------------|---------|

@@ -123,6 +127,8 @@ Configuration is provided as a JSON object. All fields are optional unless other

 ### Offline Pruning

+> **Note**: If offline pruning is enabled it will run on startup and block until it completes (approximately one hour on Mainnet). This will reduce the size of the database by deleting old trie nodes.
**While performing offline pruning, your node will not be able to process blocks and will be considered offline.** While ongoing, the pruning process consumes a small amount of additional disk space (for deletion markers and the bloom filter). For more information see the [disk space considerations documentation](https://build.avax.network/docs/nodes/maintain/reduce-disk-usage#disk-space-considerations). Since offline pruning deletes old state data, this should not be run on nodes that need to support archival API requests. This is meant to be run manually, so after running with this flag once, it must be toggled back to false before running the node again. Therefore, you should run with this flag set to true and then set it to false on the subsequent run. + | Option | Type | Description | Default | |--------|------|-------------|---------| | `offline-pruning-enabled` | bool | Enable offline pruning | `false` | @@ -223,6 +229,8 @@ Configuration is provided as a JSON object. All fields are optional unless other ### State Sync +> **Note:** If state-sync is enabled, the peer will download chain state from peers up to a recent block near tip, then proceed with normal bootstrapping. Please note that if you need historical data, state sync isn't the right option. However, it is sufficient if you are just running a validator. + | Option | Type | Description | Default | |--------|------|-------------|---------| | `state-sync-enabled` | bool | Enable state sync | `false` | @@ -251,7 +259,7 @@ Failing to set these options will result in errors on VM initialization. 
Additio | `database-config-file` | string | Path to database configuration file | - | | `use-standalone-database` | bool | Use standalone database instead of shared one | - | | `inspect-database` | bool | Inspect database on startup | `false` | -| `state-scheme` | string | EXPERIMENTAL: specifies the database scheme to store state data; can be one of `hash` or `firewood` | `hash` | +| `state-scheme` | string | EXPERIMENTAL: specifies the database scheme to store state data; can be one of `hash`, `firewood`, or `path` | `hash` | ## Transaction Indexing diff --git a/precompile/contracts/warp/README.md b/precompile/contracts/warp/README.md index ea270e7bec..c0d40ab1aa 100644 --- a/precompile/contracts/warp/README.md +++ b/precompile/contracts/warp/README.md @@ -25,7 +25,7 @@ The Avalanche Warp Precompile enables this flow to send a message from blockchai ### Warp Precompile -The Warp Precompile is broken down into three functions defined in the Solidity interface file [here](../../../contracts/contracts/interfaces/IWarpMessenger.sol). +The Warp Precompile is broken down into three functions defined in the Solidity interface file [IWarpMessenger.sol](../../../contracts/contracts/interfaces/IWarpMessenger.sol). #### sendWarpMessage @@ -59,7 +59,7 @@ This leads to the following advantages: 1. The EVM execution does not need to verify the Warp Message at runtime (no signature verification or external calls to the P-Chain) 2. The EVM can deterministically re-execute and re-verify blocks assuming the predicate was verified by the network (e.g., in bootstrapping) -This pre-verification is performed using the ProposerVM Block header during [block verification](../../../plugin/evm/block.go#L355) & [block building](../../../miner/worker.go#L200). +This pre-verification is performed using the ProposerVM Block header during [block verification](../../../plugin/evm/wrapped_block.go) & [block building](../../../miner/worker.go). 
#### getBlockchainID @@ -67,7 +67,7 @@ This pre-verification is performed using the ProposerVM Block header during [blo This is different from the conventional Ethereum ChainID registered to [ChainList](https://chainlist.org/). -The `blockchainID` in Avalanche refers to the txID that created the blockchain on the Avalanche P-Chain ([docs](https://docs.avax.network/specs/platform-transaction-serialization#unsigned-create-chain-tx)). +The `sourceChainID` in Avalanche refers to the txID that created the blockchain on the Avalanche P-Chain ([docs](https://build.avax.network/docs/cross-chain/avalanche-warp-messaging/deep-dive#icm-serialization)). ### Predicate Encoding @@ -75,7 +75,7 @@ Avalanche Warp Messages are encoded as a signed Avalanche [Warp Message](https:/ Since the predicate is encoded into the [Transaction Access List](https://eips.ethereum.org/EIPS/eip-2930), it is packed into 32 byte hashes intended to declare storage slots that should be pre-warmed into the cache prior to transaction execution. -Therefore, we use the [Predicate Utils](https://github.com/ava-labs/subnet-evm/blob/master/predicate/Predicate.md) package to encode the actual byte slice of size N into the access list. +Therefore, we use the [`predicate`](https://github.com/ava-labs/avalanchego/tree/master/vms/evm/predicate) package to encode the actual byte slice of size N into the access list. ### Performance Optimization: Primary Network to Avalanche L1 @@ -85,7 +85,7 @@ The Primary Network has a large validator set compared to most Subnets and L1s, Recall that Avalanche Subnet validators must also validate the Primary Network, so it tracks all of the blockchains in the Primary Network (X, C, and P-Chains). -When an Avalanche Subnet receives a message from a blockchain on the Primary Network, we use the validator set of the receiving Subnet instead of the entire network when validating the message. 
+When an Avalanche Subnet receives a message from a blockchain on the Primary Network, we use the validator set of the receiving Subnet instead of the entire network when validating the message. Sending messages from the X, C, or P-Chain remains unchanged. However, when the Subnet receives the message, it changes the semantics to the following: diff --git a/sync/README.md b/sync/README.md index b96c4bcb8b..56604980f4 100644 --- a/sync/README.md +++ b/sync/README.md @@ -1,6 +1,7 @@ # State sync ## Overview + Normally, a node joins the network through bootstrapping: First it fetches all blocks from genesis to the chain's last accepted block from peers, then it applies the state transition specified in each block to reach the state necessary to join consensus. State sync is an alternative in which a node downloads the state of the chain from its peers at a specific _syncable_ block height. Then, the node processes the rest of the chain's blocks (from syncable block to tip) via normal bootstrapping. @@ -8,18 +9,22 @@ Blocks at heights divisible by `defaultSyncableInterval` (= 16,384 or 2**14) are _Note: `defaultSyncableInterval` must be divisible by `CommitInterval` (= 4096). This is so the state corresponding to syncable blocks is available on nodes with pruning enabled._ State sync is faster than bootstrapping and uses less bandwidth and computation: + - Nodes joining the network do not process all the state transitions. - The amount of data sent over the network is proportionate to the amount of state not the chain's length _Note: nodes joining the network through state sync will not have historical state prior to the syncable block._ ## What is the chain state? 
+ The node needs the following data from its peers to continue processing blocks from a syncable block: + - Accounts trie & storage tries for all accounts (at the state root corresponding to the syncable block), - Contract code referenced in the account trie, - 256 parents of the syncable block (required for the BLOCKHASH opcode) ## Code structure + State sync code is structured as follows: - `sync/handlers`: Nodes that have joined the network are expected to respond to valid requests for the chain state: @@ -35,8 +40,8 @@ State sync code is structured as follows: - `peer`: Contains abstractions used by `sync/statesync` to send requests to peers (`AppRequest`) and receive responses from peers (`AppResponse`). - `message`: Contains structs that are serialized and sent over the network during state sync. - ## Sync summaries & engine involvement + When a new node wants to join the network via state sync, it will need a few pieces of information as a starting point so it can make valid requests to its peers: - Number (height) and hash of the latest available syncable block, @@ -44,22 +49,24 @@ When a new node wants to join the network via state sync, it will need a few pie The above information is called a _state summary_, and each syncable block corresponds to one such summary (see `message.SyncSummary`). The engine and VM interact as follows to find a syncable state summary: - -1. The engine calls `StateSyncEnabled`. The VM returns `true` to initiate state sync, or `false` to start bootstrapping. In `subnet-evm`, this is controlled by the `state-sync-enabled` flag. -1. The engine calls `GetOngoingSyncStateSummary`. If the VM has a previously interrupted sync to resume it returns that summary. Otherwise, it returns `ErrNotFound`. By default, `subnet-evm` will resume an interrupted sync. -1. The engine samples peers for their latest available summaries, then verifies the correctness and availability of each sampled summary with validators. 
The messaging flow is documented [here](https://github.com/ava-labs/avalanchego/blob/master/snow/engine/snowman/block/README.md). -1. The engine calls `Accept` on the chosen summary. The VM may return `false` to skip syncing to this summary (`subnet-evm` skips state sync for less than `defaultStateSyncMinBlocks = 300_000` blocks). If the VM decides to perform the sync, it must return `true` without blocking and fetch the state from its peers asynchronously. +1. The engine calls `StateSyncEnabled`. The VM returns `true` to initiate state sync, or `false` to start bootstrapping. In `coreth`, this is controlled by the `state-sync-enabled` flag. +1. The engine calls `GetOngoingSyncStateSummary`. If the VM has a previously interrupted sync to resume it returns that summary. Otherwise, it returns `ErrNotFound`. By default, `coreth` will resume an interrupted sync. +1. The engine samples peers for their latest available summaries, then verifies the correctness and availability of each sampled summary with validators. The messaging flow is documented in the [block engine README](https://github.com/ava-labs/avalanchego/blob/master/snow/engine/snowman/block/README.md). +1. The engine calls `Accept` on the chosen summary. The VM may return `false` to skip syncing to this summary (`coreth` skips state sync for less than `defaultStateSyncMinBlocks = 300_000` blocks). If the VM decides to perform the sync, it must return `true` without blocking and fetch the state from its peers asynchronously. 1. The VM sends `common.StateSyncDone` on the `toEngine` channel on completion. 1. The engine calls `VM.SetState(Bootstrapping)`. Then, blocks after the syncable block are processed one by one. ## Syncing state + The following steps are executed by the VM to sync its state from peers (see `stateSyncClient.StateSync`): + 1. Wipe snapshot data 1. Sync 256 parents of the syncable block (see `BlockRequest`), 1. Sync the EVM state: account trie, code, and storage tries, 1. 
Update in-memory and on-disk pointers.

Steps 3 and 4 involve syncing tries. To sync trie data, the VM will send a series of `LeafRequests` to its peers. Each request specifies:
+
- Type of trie (`NodeType`):
  - `statesync.StateTrieNode` (account trie and storage tries share the same database)
- `Root` of the trie to sync,
@@ -68,17 +75,20 @@ Steps 3 and 4 involve syncing tries. To sync trie data, the VM will send a serie
Peers responding to these requests send back trie leafs (key/value pairs) beginning at `Start` and up to `End` (or a maximum number of leafs). The response must also include a merkle proof for the range of leafs it contains. Nodes serving state sync data are responsible for constructing these proofs (see `sync/handlers/leafs_request.go`).

`client.GetLeafs` handles sending a single request and validating the response. This method will retry the request from a different peer up to `maxRetryAttempts` (= 32) times if the peer's response is:
+
- malformed,
- does not contain a valid merkle proof,
- or is not received in time.
-
If there are more leafs in a trie than can be returned in a single response, the client will make successive requests to continue fetching data (with `Start` set to the last key received) until the trie is complete. `CallbackLeafSyncer` manages this process and does a callback on each batch of received leafs.

### EVM state: Account trie, code, and storage tries
+
`sync/statesync.stateSyncer` uses `CallbackLeafSyncer` to sync the account trie. When the leaf callback is invoked, each leaf represents an account:
+
- If the account has contract code, it is requested from peers using `client.GetCode`
- If the account has a storage root, it is added to the list of trie roots returned from the callback.

`CallbackLeafSyncer` has `defaultNumThreads` (= 4) goroutines to fetch these tries concurrently.
+ If the account trie encounters a new storage trie task and there are already 4 in-progress trie tasks (1 for the account trie and 3 for in-progress storage trie tasks), then the account trie worker will block until one of the storage trie tasks finishes and it can create a new task. When an account leaf is received, it is converted to `SlimRLP` format and written to the snapshot. @@ -88,6 +98,7 @@ When the trie is complete, an `OnFinish` callback is called and we hash any rema When a storage trie leaf is received, it is stored in the account's storage snapshot. A `StackTrie` is used here to reconstruct intermediary trie nodes & root as well. ### Updating in-memory and on-disk pointers + `plugin/evm.stateSyncClient.StateSyncSetLastSummaryBlock` is the last step in state sync. Once the tries have been synced, this method: @@ -96,8 +107,8 @@ Once the tries have been synced, this method: - Resets in-memory and on disk pointers on the `core.BlockChain` struct. - Updates VM's last accepted block. - ## Resuming a partial sync operation + While state sync is faster than normal bootstrapping, the process may take several hours to complete. In case the node is shut down in the middle of a state sync, progress on syncing the account trie and storage tries is preserved: - When starting a sync, `stateSyncClient` persists the state summary to disk. This is so if the node is shut down while the sync is ongoing, this summary can be found and returned to the engine from `GetOngoingSyncStateSummary` upon node restart. @@ -114,4 +125,4 @@ While state sync is faster than normal bootstrapping, the process may take sever | `state-sync-skip-resume` | `bool` | set to true to avoid resuming an ongoing sync | `false` | | `state-sync-min-blocks` | `uint64` | Minimum number of blocks the chain must be ahead of local state to prefer state sync over bootstrapping | `300,000` | | `state-sync-server-trie-cache` | `int` | Size of trie cache to serve state sync data in MB. 
Should be set to multiples of `64`. | `64` | -| `state-sync-ids` | `string` | a comma separated list of `NodeID-` prefixed node IDs to sync data from. If not provided, peers are randomly selected. | | \ No newline at end of file +| `state-sync-ids` | `string` | a comma separated list of `NodeID-` prefixed node IDs to sync data from. If not provided, peers are randomly selected. | | From 6514a5418d77a959b90ddd262c1b9eb1dc37a130 Mon Sep 17 00:00:00 2001 From: Jonathan Oppenheimer Date: Sat, 22 Nov 2025 00:00:54 -0500 Subject: [PATCH 02/20] chore: lint markdown --- .github/CONTRIBUTING.md | 5 +- .github/ISSUE_TEMPLATE/feature_spec.md | 2 +- README.md | 6 +- RELEASES.md | 2 + cmd/evm/README.md | 209 ++++++++++++++----------- cmd/evm/testdata/13/readme.md | 4 +- cmd/evm/testdata/14/readme.md | 10 +- cmd/evm/testdata/18/README.md | 6 +- cmd/evm/testdata/19/readme.md | 7 +- cmd/evm/testdata/23/readme.md | 2 +- cmd/evm/testdata/29/readme.md | 6 +- cmd/evm/testdata/3/readme.md | 2 +- cmd/evm/testdata/4/readme.md | 2 +- cmd/evm/testdata/5/readme.md | 2 +- cmd/simulator/README.md | 2 +- consensus/dummy/README.md | 12 +- contracts/README.md | 12 +- core/README.md | 2 +- docs/releasing/README.md | 63 ++++---- plugin/evm/README.md | 4 +- plugin/evm/config/config.md | 2 +- sync/README.md | 38 ++--- tests/README.md | 8 +- 23 files changed, 224 insertions(+), 184 deletions(-) diff --git a/.github/CONTRIBUTING.md b/.github/CONTRIBUTING.md index b242445d51..4836b62ebe 100644 --- a/.github/CONTRIBUTING.md +++ b/.github/CONTRIBUTING.md @@ -6,8 +6,7 @@ If you'd like to contribute to subnet-evm, please fork, fix, commit and send a p ## Coding guidelines -Please make sure your contributions adhere to our coding and documentation -guidelines: +Please make sure your contributions adhere to our coding guidelines: - Code must adhere to the official Go [formatting](https://go.dev/doc/effective_go#formatting) guidelines @@ -42,7 +41,7 @@ Mocks are auto-generated using 
[mockgen](https://pkg.go.dev/go.uber.org/mock/moc
- if the file `mocks_generate_test.go` does not exist in the package where the interface is located, create it with content (adapt as needed):

  ```go
-  // Copyright (C) 2019-2025, Ava Labs, Inc. All rights reserved.
+  // Copyright (C) 2025-2025, Ava Labs, Inc. All rights reserved.
   // See the file LICENSE for licensing terms.

   package mypackage
diff --git a/.github/ISSUE_TEMPLATE/feature_spec.md b/.github/ISSUE_TEMPLATE/feature_spec.md
index a219c86753..660752cf6f 100644
--- a/.github/ISSUE_TEMPLATE/feature_spec.md
+++ b/.github/ISSUE_TEMPLATE/feature_spec.md
@@ -16,4 +16,4 @@ Include a description of the changes to be made to the code along with alternati
that were considered, including pro/con analysis where relevant.

**Open questions**
-Questions that are still being discussed. \ No newline at end of file
+Questions that are still being discussed.
diff --git a/README.md b/README.md
index b1d3e5fa1b..ed5716915c 100644
--- a/README.md
+++ b/README.md
@@ -44,11 +44,11 @@ The Subnet EVM supports the following API namespaces:
Only the `eth` namespace is enabled by default.
Subnet EVM is a simplified version of [Coreth VM (C-Chain)](https://github.com/ava-labs/coreth).
-Full documentation for the C-Chain's API can be found [in the builder docs](https://build.avax.network/docs/rpcs/c-chain).
+Full documentation for the C-Chain's API can be found in the [builder docs](https://build.avax.network/docs/rpcs/c-chain).

## Compatibility

-The Subnet EVM is compatible with almost all Ethereum tooling, including [Remix](https://docs.avax.network/build/dapp/smart-contracts/remix-deploy), [Metamask](https://docs.avax.network/build/dapp/chain-settings), and [Foundry](https://docs.avax.network/build/dapp/smart-contracts/toolchains/foundry).
+The Subnet EVM is compatible with almost all Ethereum tooling, including [Remix](https://build.avax.network/docs/dapps/smart-contract-dev/deploy-with-remix-ide), [Metamask](https://build.avax.network/docs/dapps), and [Foundry](https://build.avax.network/docs/dapps/toolchains/foundry). ## Differences Between Subnet EVM and Coreth @@ -70,7 +70,7 @@ To support these changes, there have been a number of changes to the SubnetEVM b ### Clone Subnet-evm -First install Go 1.24.9 or later. Follow the instructions [here]. You can verify by running `go version`. +First install Go 1.24.9 or later. Follow the instructions on the [go docs](https://go.dev/doc/install). You can verify by running `go version`. Set `$GOPATH` environment variable properly for Go to look for Go Workspaces. Please read [this](https://go.dev/doc/code) for details. You can verify by running `echo $GOPATH`. diff --git a/RELEASES.md b/RELEASES.md index 8147a1181b..a8f8f5e64a 100644 --- a/RELEASES.md +++ b/RELEASES.md @@ -1,3 +1,5 @@ + + # Release Notes ## [v0.8.1](https://github.com/ava-labs/subnet-evm/releases/tag/v0.8.1) diff --git a/cmd/evm/README.md b/cmd/evm/README.md index 6306dbf892..3224e259f6 100644 --- a/cmd/evm/README.md +++ b/cmd/evm/README.md @@ -9,20 +9,19 @@ layer. ## State transition tool (`t8n`) - The `evm t8n` tool is a stateless state transition utility. It is a utility which can 1. Take a prestate, including - - Accounts, - - Block context information, - - Previous blockhashes (*optional) -2. Apply a set of transactions, -3. Apply a mining-reward (*optional), -4. And generate a post-state, including - - State root, transaction root, receipt root, - - Information about rejected transactions, - - Optionally: a full or partial post-state dump + * Accounts, + * Block context information, + * Previous blockhashes (*optional) +1. Apply a set of transactions, +1. Apply a mining-reward (*optional), +1. 
And generate a post-state, including + * State root, transaction root, receipt root, + * Information about rejected transactions, + * Optionally: a full or partial post-state dump ### Specification @@ -35,7 +34,7 @@ implementation. Command line params that need to be supported are -``` +```bash --input.alloc value (default: "alloc.json") --input.env value (default: "env.json") --input.txs value (default: "txs.json") @@ -52,6 +51,7 @@ Command line params that need to be supported are --trace.nostack (default: false) --trace.returndata (default: false) ``` + #### Objects The transition tool uses JSON objects to read and write data related to the transition operation. The @@ -118,50 +118,50 @@ The `txs` object is an array of any of the transaction types: `LegacyTx`, ```go type LegacyTx struct { - Nonce uint64 `json:"nonce"` - GasPrice *big.Int `json:"gasPrice"` - Gas uint64 `json:"gas"` - To *common.Address `json:"to"` - Value *big.Int `json:"value"` - Data []byte `json:"data"` - V *big.Int `json:"v"` - R *big.Int `json:"r"` - S *big.Int `json:"s"` - SecretKey *common.Hash `json:"secretKey"` + Nonce uint64 `json:"nonce"` + GasPrice *big.Int `json:"gasPrice"` + Gas uint64 `json:"gas"` + To *common.Address `json:"to"` + Value *big.Int `json:"value"` + Data []byte `json:"data"` + V *big.Int `json:"v"` + R *big.Int `json:"r"` + S *big.Int `json:"s"` + SecretKey *common.Hash `json:"secretKey"` } type AccessList []AccessTuple type AccessTuple struct { - Address common.Address `json:"address" gencodec:"required"` - StorageKeys []common.Hash `json:"storageKeys" gencodec:"required"` + Address common.Address `json:"address" gencodec:"required"` + StorageKeys []common.Hash `json:"storageKeys" gencodec:"required"` } type AccessListTx struct { - ChainID *big.Int `json:"chainId"` - Nonce uint64 `json:"nonce"` - GasPrice *big.Int `json:"gasPrice"` - Gas uint64 `json:"gas"` - To *common.Address `json:"to"` - Value *big.Int `json:"value"` - Data []byte `json:"data"` - AccessList 
AccessList `json:"accessList"` - V *big.Int `json:"v"` - R *big.Int `json:"r"` - S *big.Int `json:"s"` - SecretKey *common.Hash `json:"secretKey"` + ChainID *big.Int `json:"chainId"` + Nonce uint64 `json:"nonce"` + GasPrice *big.Int `json:"gasPrice"` + Gas uint64 `json:"gas"` + To *common.Address `json:"to"` + Value *big.Int `json:"value"` + Data []byte `json:"data"` + AccessList AccessList `json:"accessList"` + V *big.Int `json:"v"` + R *big.Int `json:"r"` + S *big.Int `json:"s"` + SecretKey *common.Hash `json:"secretKey"` } type DynamicFeeTx struct { - ChainID *big.Int `json:"chainId"` - Nonce uint64 `json:"nonce"` - GasTipCap *big.Int `json:"maxPriorityFeePerGas"` - GasFeeCap *big.Int `json:"maxFeePerGas"` - Gas uint64 `json:"gas"` - To *common.Address `json:"to"` - Value *big.Int `json:"value"` - Data []byte `json:"data"` - AccessList AccessList `json:"accessList"` - V *big.Int `json:"v"` - R *big.Int `json:"r"` - S *big.Int `json:"s"` - SecretKey *common.Hash `json:"secretKey"` + ChainID *big.Int `json:"chainId"` + Nonce uint64 `json:"nonce"` + GasTipCap *big.Int `json:"maxPriorityFeePerGas"` + GasFeeCap *big.Int `json:"maxFeePerGas"` + Gas uint64 `json:"gas"` + To *common.Address `json:"to"` + Value *big.Int `json:"value"` + Data []byte `json:"data"` + AccessList AccessList `json:"accessList"` + V *big.Int `json:"v"` + R *big.Int `json:"r"` + S *big.Int `json:"s"` + SecretKey *common.Hash `json:"secretKey"` } ``` @@ -192,40 +192,44 @@ There are a few (not many) errors that can occur, those are defined below. ##### EVM-based errors (`2` to `9`) -- Other EVM error. Exit code `2` -- Failed configuration: when a non-supported or invalid fork was specified. Exit code `3`. -- Block history is not supplied, but needed for a `BLOCKHASH` operation. If `BLOCKHASH` +* Other EVM error. Exit code `2` +* Failed configuration: when a non-supported or invalid fork was specified. Exit code `3`. +* Block history is not supplied, but needed for a `BLOCKHASH` operation. 
If `BLOCKHASH` is invoked targeting a block which history has not been provided for, the program will exit with code `4`. ##### IO errors (`10`-`20`) -- Invalid input json: the supplied data could not be marshalled. +* Invalid input json: the supplied data could not be marshalled. The program will exit with code `10` -- IO problems: failure to load or save files, the program will exit with code `11` +* IO problems: failure to load or save files, the program will exit with code `11` -``` +```bash # This should exit with 3 ./evm t8n --input.alloc=./testdata/1/alloc.json --input.txs=./testdata/1/txs.json --input.env=./testdata/1/env.json --state.fork=Frontier+1346 2>/dev/null exitcode:3 OK ``` + #### Forks -### Basic usage The chain configuration to be used for a transition is specified via the `--state.fork` CLI flag. A list of possible values and configurations can be found in [`tests/init.go`](../../tests/init.go). #### Examples + ##### Basic usage Invoking it with the provided example files -``` + +```bash ./evm t8n --input.alloc=./testdata/1/alloc.json --input.txs=./testdata/1/txs.json --input.env=./testdata/1/env.json --state.fork=Berlin ``` + Two resulting files: `alloc.json`: + ```json { "0x8a8eafb1cf62bfbeb1741769dae1a9dd47996192": { @@ -241,7 +245,9 @@ Two resulting files: } } ``` + `result.json`: + ```json { "stateRoot": "0x84208a19bc2b46ada7445180c1db162be5b39b9abc8c0a54b05d32943eae4e13", @@ -275,10 +281,13 @@ Two resulting files: ``` We can make them spit out the data to e.g. `stdout` like this: -``` + +```bash ./evm t8n --input.alloc=./testdata/1/alloc.json --input.txs=./testdata/1/txs.json --input.env=./testdata/1/env.json --output.result=stdout --output.alloc=stdout --state.fork=Berlin ``` + Output: + ```json { "alloc": { @@ -330,19 +339,19 @@ Output: Mining rewards and ommer rewards might need to be added. This is how those are applied: -- `block_reward` is the block mining reward for the miner (`0xaa`), of a block at height `N`. 
-- For each ommer (mined by `0xbb`), with blocknumber `N-delta`
-  - (where `delta` is the difference between the current block and the ommer)
-  - The account `0xbb` (ommer miner) is awarded `(8-delta)/ 8 * block_reward`
-  - The account `0xaa` (block miner) is awarded `block_reward / 32`
+* `block_reward` is the block mining reward for the miner (`0xaa`), of a block at height `N`.
+* For each ommer (mined by `0xbb`), with blocknumber `N-delta`
+  * (where `delta` is the difference between the current block and the ommer)
+  * The account `0xbb` (ommer miner) is awarded `(8-delta)/ 8 * block_reward`
+  * The account `0xaa` (block miner) is awarded `block_reward / 32`

To make `t8n` apply these, the following inputs are required:

-- `--state.reward`
-  - For ethash, it is `5000000000000000000` `wei`,
-  - If this is not defined, mining rewards are not applied,
-  - A value of `0` is valid, and causes accounts to be 'touched'.
-- For each ommer, the tool needs to be given an `address\` and a `delta`. This
+* `--state.reward`
+  * For ethash, it is `5000000000000000000` `wei`,
+  * If this is not defined, mining rewards are not applied,
+  * A value of `0` is valid, and causes accounts to be 'touched'.
+* For each ommer, the tool needs to be given an `address` and a `delta`. This
 is done via the `ommers` field in `env`.

Note: the tool does not verify that e.g. the normal uncle rules apply,
@@ -350,7 +359,9 @@ and allows e.g two uncles at the same height, or the uncle-distance. This
means the tool allows for negative uncle reward (distance > 8)

Example:
+
`./testdata/5/env.json`:
+
```json
{
  "currentCoinbase": "0xaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
@@ -364,8 +375,11 @@ Example:
  ]
}
```
+
When applying this, using a reward of `0x08`
+
Output:
+
```json
{
  "alloc": {
@@ -381,11 +395,14 @@ Output:
  }
}
```
+
#### Future EIPS

It is also possible to experiment with future eips that are not yet defined in a hard fork.
+ Example, putting EIP-1344 into Frontier: -``` + +```bash ./evm t8n --state.fork=Frontier+1344 --input.pre=./testdata/1/pre.json --input.txs=./testdata/1/txs.json --input.env=/testdata/1/env.json ``` @@ -393,16 +410,18 @@ Example, putting EIP-1344 into Frontier: The `BLOCKHASH` opcode requires blockhashes to be provided by the caller, inside the `env`. If a required blockhash is not provided, the exit code should be `4`: + Example where blockhashes are provided: -``` -./evm t8n --input.alloc=./testdata/3/alloc.json --input.txs=./testdata/3/txs.json --input.env=./testdata/3/env.json --trace --state.fork=Berlin +```bash +./evm t8n --input.alloc=./testdata/3/alloc.json --input.txs=./testdata/3/txs.json --input.env=./testdata/3/env.json --trace --state.fork=Berlin ``` -``` +```bash cat trace-0-0x72fadbef39cd251a437eea619cfeda752271a5faaaa2147df012e112159ffb81.jsonl | grep BLOCKHASH -C2 ``` -``` + +```json {"pc":0,"op":96,"gas":"0x5f58ef8","gasCost":"0x3","memSize":0,"stack":[],"depth":1,"refund":0,"opName":"PUSH1"} {"pc":2,"op":64,"gas":"0x5f58ef5","gasCost":"0x14","memSize":0,"stack":["0x1"],"depth":1,"refund":0,"opName":"BLOCKHASH"} {"pc":3,"op":0,"gas":"0x5f58ee1","gasCost":"0x0","memSize":0,"stack":["0xdac58aa524e50956d0c0bae7f3f8bb9d35381365d07804dd5b48a5a297c06af4"],"depth":1,"refund":0,"opName":"STOP"} @@ -410,19 +429,22 @@ cat trace-0-0x72fadbef39cd251a437eea619cfeda752271a5faaaa2147df012e112159ffb81.j ``` In this example, the caller has not provided the required blockhash: -``` + +```bash ./evm t8n --input.alloc=./testdata/4/alloc.json --input.txs=./testdata/4/txs.json --input.env=./testdata/4/env.json --trace --state.fork=Berlin ERROR(4): getHash(3) invoked, blockhash for that block not provided ``` + Error code: 4 #### Chaining Another thing that can be done, is to chain invocations: -``` -./evm t8n --input.alloc=./testdata/1/alloc.json --input.txs=./testdata/1/txs.json --input.env=./testdata/1/env.json --state.fork=Berlin --output.alloc=stdout | ./evm t8n 
--input.alloc=stdin --input.env=./testdata/1/env.json --input.txs=./testdata/1/txs.json --state.fork=Berlin +```bash +./evm t8n --input.alloc=./testdata/1/alloc.json --input.txs=./testdata/1/txs.json --input.env=./testdata/1/env.json --state.fork=Berlin --output.alloc=stdout | ./evm t8n --input.alloc=stdin --input.env=./testdata/1/env.json --input.txs=./testdata/1/txs.json --state.fork=Berlin ``` + What happened here, is that we first applied two identical transactions, so the second one was rejected. Then, taking the poststate alloc as the input for the next state, we tried again to include the same two transactions: this time, both failed due to too low nonce. @@ -437,7 +459,8 @@ The input format for RLP-form transactions is _identical_ to the _output_ format to use the evm to go from `json` input to `rlp` input. The following command takes **json** the transactions in `./testdata/13/txs.json` and signs them. After execution, they are output to `signed_txs.rlp`.: -``` + +```bash ./evm t8n --state.fork=London --input.alloc=./testdata/13/alloc.json --input.txs=./testdata/13/txs.json --input.env=./testdata/13/env.json --output.result=alloc_jsontx.json --output.body=signed_txs.rlp INFO [12-27|09:25:11.102] Trie dumping started root=e4b924..6aef61 INFO [12-27|09:25:11.102] Trie dumping complete accounts=3 elapsed="275.66µs" @@ -447,30 +470,36 @@ INFO [12-27|09:25:11.103] Wrote file file=signed_t ``` The `output.body` is the rlp-list of transactions, encoded in hex and placed in a string a'la `json` encoding rules: -``` + +```bash cat signed_txs.rlp 
"0xf8d2b86702f864010180820fa08284d09411111111111111111111111111111111111111118080c001a0b7dfab36232379bb3d1497a4f91c1966b1f932eae3ade107bf5d723b9cb474e0a06261c359a10f2132f126d250485b90cf20f30340801244a08ef6142ab33d1904b86702f864010280820fa08284d09411111111111111111111111111111111111111118080c080a0d4ec563b6568cd42d998fc4134b36933c6568d01533b5adf08769270243c6c7fa072bf7c21eac6bbeae5143371eef26d5e279637f3bd73482b55979d76d935b1e9" ``` We can use `rlpdump` to check what the contents are: -``` + +```bash rlpdump -hex $(cat signed_txs.rlp | jq -r ) [ 02f864010180820fa08284d09411111111111111111111111111111111111111118080c001a0b7dfab36232379bb3d1497a4f91c1966b1f932eae3ade107bf5d723b9cb474e0a06261c359a10f2132f126d250485b90cf20f30340801244a08ef6142ab33d1904, 02f864010280820fa08284d09411111111111111111111111111111111111111118080c080a0d4ec563b6568cd42d998fc4134b36933c6568d01533b5adf08769270243c6c7fa072bf7c21eac6bbeae5143371eef26d5e279637f3bd73482b55979d76d935b1e9, ] ``` + Now, we can now use those (or any other already signed transactions), as input, like so: -``` + +```bash ./evm t8n --state.fork=London --input.alloc=./testdata/13/alloc.json --input.txs=./signed_txs.rlp --input.env=./testdata/13/env.json --output.result=alloc_rlptx.json INFO [12-27|09:25:11.187] Trie dumping started root=e4b924..6aef61 INFO [12-27|09:25:11.187] Trie dumping complete accounts=3 elapsed="123.676µs" INFO [12-27|09:25:11.187] Wrote file file=alloc.json INFO [12-27|09:25:11.187] Wrote file file=alloc_rlptx.json ``` + You might have noticed that the results from these two invocations were stored in two separate files. And we can now finally check that they match. 
-``` + +```bash cat alloc_jsontx.json | jq .stateRoot && cat alloc_rlptx.json | jq .stateRoot "0xe4b924a6adb5959fccf769d5b7bb2f6359e26d1e76a2443c5a91a36d826aef61" "0xe4b924a6adb5959fccf769d5b7bb2f6359e26d1e76a2443c5a91a36d826aef61" @@ -479,6 +508,7 @@ cat alloc_jsontx.json | jq .stateRoot && cat alloc_rlptx.json | jq .stateRoot ## Transaction tool The transaction tool is used to perform static validity checks on transactions such as: + * intrinsic gas calculation * max values on integers * fee semantics, such as `maxFeePerGas < maxPriorityFeePerGas` @@ -486,7 +516,7 @@ The transaction tool is used to perform static validity checks on transactions s ### Examples -``` +```bash ./evm t9n --state.fork Homestead --input.txs testdata/15/signed_txs.rlp [ { @@ -499,7 +529,8 @@ The transaction tool is used to perform static validity checks on transactions s } ] ``` -``` + +```bash ./evm t9n --state.fork London --input.txs testdata/15/signed_txs.rlp [ { @@ -514,6 +545,7 @@ The transaction tool is used to perform static validity checks on transactions s } ] ``` + ## Block builder tool (b11r) The `evm b11r` tool is used to assemble and seal full block rlps. @@ -524,7 +556,7 @@ The `evm b11r` tool is used to assemble and seal full block rlps. Command line params that need to be supported are: -``` +```bash --input.header value `stdin` or file name of where to find the block header to use. (default: "header.json") --input.ommers value `stdin` or file name of where to find the list of ommer header RLPs to use. --input.txs value `stdin` or file name of where to find the transactions list in RLP form. (default: "txs.rlp") @@ -546,7 +578,7 @@ Command line params that need to be supported are: The `header` object is a consensus header. 
-```go= +```go type Header struct { ParentHash common.Hash `json:"parentHash"` OmmerHash *common.Hash `json:"sha3Uncles"` @@ -566,12 +598,13 @@ type Header struct { BaseFee *big.Int `json:"baseFeePerGas"` } ``` + #### `ommers` The `ommers` object is a list of RLP-encoded ommer blocks in hex representation. -```go= +```go type Ommers []string ``` @@ -579,7 +612,7 @@ type Ommers []string The `txs` object is a list of RLP-encoded transactions in hex representation. -```go= +```go type Txs []string ``` @@ -588,7 +621,7 @@ type Txs []string The `clique` object provides the necessary information to complete a clique seal of the block. -```go= +```go var CliqueInfo struct { Key *common.Hash `json:"secretKey"` Voted *common.Address `json:"voted"` @@ -601,7 +634,7 @@ var CliqueInfo struct { The `output` object contains two values, the block RLP and the block hash. -```go= +```go type BlockInfo struct { Rlp []byte `json:"rlp"` Hash common.Hash `json:"hash"` diff --git a/cmd/evm/testdata/13/readme.md b/cmd/evm/testdata/13/readme.md index 889975d47e..36dfbd6579 100644 --- a/cmd/evm/testdata/13/readme.md +++ b/cmd/evm/testdata/13/readme.md @@ -1,4 +1,4 @@ ## Input transactions in RLP form -This testdata folder is used to exemplify how transaction input can be provided in rlp form. -Please see the README in `evm` folder for how this is performed. \ No newline at end of file +This testdata folder is used to exemplify how transaction input can be provided in rlp form. +Please see the README in `evm` folder for how this is performed. diff --git a/cmd/evm/testdata/14/readme.md b/cmd/evm/testdata/14/readme.md index 40dd75486e..44f63578d2 100644 --- a/cmd/evm/testdata/14/readme.md +++ b/cmd/evm/testdata/14/readme.md @@ -1,9 +1,10 @@ ## Difficulty calculation -This test shows how the `evm t8n` can be used to calculate the (ethash) difficulty, if none is provided by the caller. 
+This test shows how the `evm t8n` can be used to calculate the (ethash) difficulty, if none is provided by the caller. Calculating it (with an empty set of txs) using `London` rules (and no provided unclehash for the parent block): -``` + +```bash [user@work evm]$ ./evm t8n --input.alloc=./testdata/14/alloc.json --input.txs=./testdata/14/txs.json --input.env=./testdata/14/env.json --output.result=stdout --state.fork=London INFO [03-09|10:43:57.070] Trie dumping started root=6f0588..7f4bdc INFO [03-09|10:43:57.070] Trie dumping complete accounts=2 elapsed="214.663µs" @@ -22,8 +23,10 @@ INFO [03-09|10:43:57.071] Wrote file file=alloc.js } } ``` + Same thing, but this time providing a non-empty (and non-`emptyKeccak`) unclehash, which leads to a slightly different result: -``` + +```bash [user@work evm]$ ./evm t8n --input.alloc=./testdata/14/alloc.json --input.txs=./testdata/14/txs.json --input.env=./testdata/14/env.uncles.json --output.result=stdout --state.fork=London INFO [03-09|10:44:20.511] Trie dumping started root=6f0588..7f4bdc INFO [03-09|10:44:20.511] Trie dumping complete accounts=2 elapsed="184.319µs" @@ -42,4 +45,3 @@ INFO [03-09|10:44:20.512] Wrote file file=alloc.js } } ``` - diff --git a/cmd/evm/testdata/18/README.md b/cmd/evm/testdata/18/README.md index 360a9bba01..4448f51725 100644 --- a/cmd/evm/testdata/18/README.md +++ b/cmd/evm/testdata/18/README.md @@ -1,9 +1,9 @@ # Invalid rlp This folder contains a sample of invalid RLP, and it's expected -that the t9n handles this properly: +that the t9n handles this properly: -``` +```bash $ go run . 
t9n --input.txs=./testdata/18/invalid.rlp --state.fork=London ERROR(11): rlp: value size exceeds available input length -``` \ No newline at end of file +``` diff --git a/cmd/evm/testdata/19/readme.md b/cmd/evm/testdata/19/readme.md index 9c7c4b3656..a9934751bf 100644 --- a/cmd/evm/testdata/19/readme.md +++ b/cmd/evm/testdata/19/readme.md @@ -1,10 +1,11 @@ ## Difficulty calculation -This test shows how the `evm t8n` can be used to calculate the (ethash) difficulty, if none is provided by the caller, +This test shows how the `evm t8n` can be used to calculate the (ethash) difficulty, if none is provided by the caller, this time on `GrayGlacier` (Eip 5133). Calculating it (with an empty set of txs) using `GrayGlacier` rules (and no provided unclehash for the parent block): -``` + +```bash [user@work evm]$ ./evm t8n --input.alloc=./testdata/19/alloc.json --input.txs=./testdata/19/txs.json --input.env=./testdata/19/env.json --output.result=stdout --state.fork=GrayGlacier INFO [03-09|10:45:26.777] Trie dumping started root=6f0588..7f4bdc INFO [03-09|10:45:26.777] Trie dumping complete accounts=2 elapsed="176.471µs" @@ -22,4 +23,4 @@ INFO [03-09|10:45:26.777] Wrote file file=alloc.js "currentBaseFee": "0x500" } } -``` \ No newline at end of file +``` diff --git a/cmd/evm/testdata/23/readme.md b/cmd/evm/testdata/23/readme.md index f31b64de2f..0413e80d1e 100644 --- a/cmd/evm/testdata/23/readme.md +++ b/cmd/evm/testdata/23/readme.md @@ -1 +1 @@ -These files exemplify how to sign a transaction using the pre-EIP155 scheme. +These files exemplify how to sign a transaction using the pre-EIP155 scheme. diff --git a/cmd/evm/testdata/29/readme.md b/cmd/evm/testdata/29/readme.md index ab02ce9cf8..f88c4e0fc8 100644 --- a/cmd/evm/testdata/29/readme.md +++ b/cmd/evm/testdata/29/readme.md @@ -1,11 +1,11 @@ ## EIP 4788 -This test contains testcases for EIP-4788. The 4788-contract is +This test contains testcases for EIP-4788. 
The 4788-contract is located at address `0x000F3df6D732807Ef1319fB7B8bB8522d0Beac02`, and this test executes a simple transaction. It also -implicitly invokes the system tx, which sets calls the contract and sets the +implicitly invokes the system tx, which calls the contract and sets the storage values -``` +```bash $ dir=./testdata/29/ && go run . t8n --state.fork=Cancun --input.alloc=$dir/alloc.json --input.txs=$dir/txs.json --input.env=$dir/env.json --output.alloc=stdout INFO [09-27|15:34:53.049] Trie dumping started root=19a4f8..01573c INFO [09-27|15:34:53.049] Trie dumping complete accounts=2 elapsed="192.759µs" diff --git a/cmd/evm/testdata/3/readme.md b/cmd/evm/testdata/3/readme.md index 246c58ef3b..dfb2ea031e 100644 --- a/cmd/evm/testdata/3/readme.md +++ b/cmd/evm/testdata/3/readme.md @@ -1,2 +1,2 @@ These files exemplify a transition where a transaction (executed on block 5) requests -the blockhash for block `1`. +the blockhash for block `1`. diff --git a/cmd/evm/testdata/4/readme.md b/cmd/evm/testdata/4/readme.md index eede41a9fd..56846dfdd2 100644 --- a/cmd/evm/testdata/4/readme.md +++ b/cmd/evm/testdata/4/readme.md @@ -1,3 +1,3 @@ These files exemplify a transition where a transaction (executed on block 5) requests -the blockhash for block `4`, but where the hash for that block is missing. +the blockhash for block `4`, but where the hash for that block is missing. It's expected that executing these should cause `exit` with errorcode `4`. diff --git a/cmd/evm/testdata/5/readme.md b/cmd/evm/testdata/5/readme.md index 1a84afaab6..f31c0760ae 100644 --- a/cmd/evm/testdata/5/readme.md +++ b/cmd/evm/testdata/5/readme.md @@ -1 +1 @@ -These files exemplify a transition where there are no transactions, two ommers, at block `N-1` (delta 1) and `N-2` (delta 2). \ No newline at end of file +These files exemplify a transition where there are no transactions, two ommers, at block `N-1` (delta 1) and `N-2` (delta 2). 
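The uncle-hash sensitivity shown in the difficulty testdata READMEs above comes from the post-EIP-100 ethash adjustment factor. A minimal sketch of that rule, using plain `int64` arithmetic rather than the `big.Int` math the real `evm t8n` code uses; the function name, parameters, and the EIP-5133 bomb-delay constant are illustrative assumptions, not the tool's actual API:

```go
package main

import "fmt"

// calcDifficulty sketches the post-EIP-100 ethash difficulty rule: the
// adjustment factor is (2 - (ts-parentTs)/9) when the parent has uncles and
// (1 - (ts-parentTs)/9) otherwise, floored at -99, plus the difficulty bomb
// delayed per fork (EIP-5133/GrayGlacier delays it by 11,400,000 blocks).
// Illustrative only; the real implementation uses big.Int and also enforces
// a minimum difficulty.
func calcDifficulty(parentDiff, parentTs, ts, number int64, parentHasUncles bool, bombDelay int64) int64 {
	x := int64(1)
	if parentHasUncles {
		x = 2
	}
	adj := x - (ts-parentTs)/9
	if adj < -99 {
		adj = -99
	}
	diff := parentDiff + parentDiff/2048*adj
	// Difficulty bomb: 2^(fakeNumber/100000 - 2), with the block number
	// shifted back by the fork's bomb delay.
	fake := number - bombDelay
	if fake < 0 {
		fake = 0
	}
	if period := fake / 100000; period >= 2 {
		diff += int64(1) << uint(period-2)
	}
	return diff
}

func main() {
	// Same parent difficulty and timestamps; only the uncle flag differs,
	// which is why the two t8n runs above report slightly different values.
	fmt.Println(calcDifficulty(0x2000000, 0, 12, 1, false, 11_400_000))
	fmt.Println(calcDifficulty(0x2000000, 0, 12, 1, true, 11_400_000))
}
```

With identical timestamps and parent difficulty, only the uncle flag changes between the two runs, matching the "slightly different result" noted in the README.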
diff --git a/cmd/simulator/README.md b/cmd/simulator/README.md index f602591df5..1223cf1680 100644 --- a/cmd/simulator/README.md +++ b/cmd/simulator/README.md @@ -7,7 +7,7 @@ When building developing your own blockchain using `subnet-evm`, you may want to To build the load simulator, navigate to the base of the simulator directory: ```bash -cd $GOPATH/src/github.com/ava-labs/subnet-evm/cmd/simulator +cd $GOPATH/src/github.com/ava-labs/subnet-evm/cmd/simulator ``` Build the simulator: diff --git a/consensus/dummy/README.md b/consensus/dummy/README.md index cbe9a5f1be..bd13cc4cec 100644 --- a/consensus/dummy/README.md +++ b/consensus/dummy/README.md @@ -4,7 +4,7 @@ Disclaimer: the consensus package in subnet-evm is a complete misnomer. The consensus package in go-ethereum handles block validation and specifically handles validating the PoW portion of consensus - thus the name. -Since AvalancheGo handles consensus for Subnet-EVM, Subnet-EVM is just the VM, but we keep the consensus package in place to handle part of the block verification process. +Since AvalancheGo handles consensus for subnet-evm, subnet-evm is just the VM, but we keep the consensus package in place to handle part of the block verification process. ## Block Verification @@ -12,17 +12,17 @@ The dummy consensus engine is responsible for performing verification on the hea ## Dynamic Fees -As of Apricot Phase 3, the C-Chain includes a dynamic fee algorithm based off of [EIP-1559](https://eips.ethereum.org/EIPS/eip-1559). This introduces a field to the block type called `BaseFee`. The Base Fee sets a minimum gas price for any transaction to be included in the block. For example, a transaction with a gas price of 49 gwei, will be invalid to include in a block with a base fee of 50 gwei. +subnet-evm includes a dynamic fee algorithm based off of [EIP-1559](https://eips.ethereum.org/EIPS/eip-1559). This introduces a field to the block type called `BaseFee`. 
The Base Fee sets a minimum gas price for any transaction to be included in the block. For example, a transaction with a gas price of 49 gwei, will be invalid to include in a block with a base fee of 50 gwei. -The dynamic fee algorithm aims to adjust the base fee to handle network congestion. Subnet-EVM sets a target utilization on the network, and the dynamic fee algorithm adjusts the base fee accordingly. If the network operates above the target utilization, the dynamic fee algorithm will increase the base fee to make utilizing the network more expensive and bring overall utilization down. If the network operates below the target utilization, the dynamic fee algorithm will decrease the base fee to make it cheaper to use the network. +The dynamic fee algorithm aims to adjust the base fee to handle network congestion. subnet-evm sets a target utilization on the network, and the dynamic fee algorithm adjusts the base fee accordingly. If the network operates above the target utilization, the dynamic fee algorithm will increase the base fee to make utilizing the network more expensive and bring overall utilization down. If the network operates below the target utilization, the dynamic fee algorithm will decrease the base fee to make it cheaper to use the network. 
- EIP-1559 is intended for Ethereum where a block is produced roughly every 10s - The dynamic fee algorithm needs to handle the case that the network quiesces and there are no blocks for a long period of time -- Since Subnet-EVM produces blocks at a different cadence, it adapts EIP-1559 to sum the amount of gas consumed within a 10-second interval instead of using only the amount of gas consumed in the parent block +- Since subnet-evm produces blocks at a different cadence, it adapts EIP-1559 to sum the amount of gas consumed within a 10-second interval instead of using only the amount of gas consumed in the parent block ## Consensus Engine Callbacks -The consensus engine is called while blocks are being both built and processed and Subnet-EVM adds callback functions into the dummy consensus engine to insert its own logic into these stages. +The consensus engine is called while blocks are being both built and processed and subnet-evm adds callback functions into the dummy consensus engine to insert its own logic into these stages. ### FinalizeAndAssemble @@ -30,4 +30,4 @@ The FinalizeAndAssemble callback is used as the final step in building a block w ### Finalize -Finalize is called as the final step in processing a block in [state_processor.go](../../core/state_processor.go). Finalize adds a callback function in order to process atomic transactions as well. Since either Finalize or FinalizeAndAssemble are called, but not both, when building or verifying/processing a block they need to perform the exact same processing/verification step to ensure that a block produced by the miner where FinalizeAndAssemble is called will be processed and verified in the same way when Finalize gets called. +Finalize is called as the final step in processing a block in [state_processor.go](../../core/state_processor.go). 
Since either Finalize or FinalizeAndAssemble are called, but not both, when building or verifying/processing a block they need to perform the exact same processing/verification step to ensure that a block produced by the miner where FinalizeAndAssemble is called will be processed and verified in the same way when Finalize gets called. diff --git a/contracts/README.md b/contracts/README.md index 3e37fea846..6fa8972942 100644 --- a/contracts/README.md +++ b/contracts/README.md @@ -21,11 +21,13 @@ This project requires Go 1.21 or later. Install from [golang.org](https://golang The Solidity compiler version 0.8.30 is required to compile contracts. In CI, this is installed automatically via the [setup-solc](https://github.com/ARR4N/setup-solc) GitHub Action. For local development, install solc 0.8.30: -- **macOS**: `brew install solidity` + +- **macOS**: `brew install solidity` - **Linux**: Follow instructions at [solidity docs](https://docs.soliditylang.org/en/latest/installing-solidity.html) - **CI**: Automatically installed via GitHub Actions After installation, create a version-specific alias or symlink: + ```bash # Option 1: Symlink (works in all contexts including go generate) sudo ln -sf $(which solc) /usr/local/bin/solc-v0.8.30 # Linux @@ -62,8 +64,9 @@ From the repository root, run: ``` This will: + 1. Compile all Solidity contracts in `contracts/contracts/` to ABIs and bytecode -2. Generate Go bindings in `contracts/bindings/` +1. Generate Go bindings in `contracts/bindings/` The compilation artifacts (`.abi` and `.bin` files) are stored in `contracts/artifacts/` (gitignored). The generated Go bindings in `contracts/bindings/` are committed to the repository. @@ -79,8 +82,9 @@ go generate ./... # Compile contracts and generate bindings ``` All compilation and code generation is configured in `contracts/contracts/compile.go` using `go:generate` directives. The directives execute in order: + 1. 
First, `solc` compiles `.sol` files to `.abi` and `.bin` files in `artifacts/` -2. Then, `abigen` generates Go bindings from the artifacts to `bindings/*.go` +1. Then, `abigen` generates Go bindings from the artifacts to `bindings/*.go` ## Write Contracts @@ -98,7 +102,7 @@ For more information about precompiles see [subnet-evm precompiles](https://gith ## Hardhat Config -Hardhat uses `hardhat.config.js` as the configuration file. You can define tasks, networks, compilers and more in that file. For more information see [here](https://hardhat.org/config/). +Hardhat uses `hardhat.config.js` as the configuration file. You can define tasks, networks, compilers and more in that file. For more information see [the hardhat configuration docs](https://hardhat.org/config/). In Subnet-EVM, we provide a pre-configured file [hardhat.config.ts](https://github.com/ava-labs/subnet-evm/blob/master/contracts/hardhat.config.ts). diff --git a/core/README.md b/core/README.md index a094dab6d8..3ab0710573 100644 --- a/core/README.md +++ b/core/README.md @@ -10,7 +10,7 @@ When the consensus engine verifies blocks as they are ready to be issued into co InsertBlockManual verifies the block, inserts it into the state manager to track the merkle trie for the block, and adds it to the canonical chain if it extends the currently preferred chain. -Subnet-EVM adds functions for Accept and Reject, which take care of marking a block as finalized and performing garbage collection where possible. +subnet-evm adds functions for Accept and Reject, which take care of marking a block as finalized and performing garbage collection where possible. The consensus engine can also call `SetPreference` on a VM to tell the VM that a specific block is preferred by the consensus engine to be accepted. This triggers a call to `reorg` the blockchain and set the newly preferred block as the preferred chain. 
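The `SetPreference`-driven reorg the core README describes can be sketched as a tiny chain-pointer exercise. The `block`/`chain` types and the `setPreference`/`commonAncestor` helpers below are invented for illustration; the real logic lives in `core.BlockChain` and rewinds canonical state, not just a pointer:

```go
package main

import "fmt"

// block is a minimal stand-in for the VM's block type; a hash and a parent
// link are all this sketch needs.
type block struct {
	hash   string
	parent *block
}

// chain tracks the preferred tip. Preferring a block that does not extend
// the current tip triggers a reorg back through the common ancestor.
type chain struct{ preferred *block }

// commonAncestor walks parent pointers until the two branches meet.
// (Assumes both blocks sit at the same height, for brevity.)
func commonAncestor(a, b *block) *block {
	for a != b {
		a, b = a.parent, b.parent
	}
	return a
}

func (c *chain) setPreference(b *block) (reorged bool) {
	if b == c.preferred || b.parent == c.preferred {
		c.preferred = b // simple extension of the preferred chain, no reorg
		return false
	}
	_ = commonAncestor(c.preferred, b) // real code rewinds canonical state here
	c.preferred = b
	return true
}

func main() {
	g := &block{hash: "genesis"}
	a := &block{hash: "a", parent: g}
	b := &block{hash: "b", parent: g}
	c := chain{preferred: g}
	fmt.Println(c.setPreference(a)) // extends the preferred chain
	fmt.Println(c.setPreference(b)) // sibling branch: triggers a reorg
}
```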
diff --git a/docs/releasing/README.md b/docs/releasing/README.md index a0994b33a4..4b51194520 100644 --- a/docs/releasing/README.md +++ b/docs/releasing/README.md @@ -28,9 +28,9 @@ Remember to use the appropriate versioning for your release. git checkout -b "releases/$VERSION_RC" ``` -2. Update the [RELEASES.md](../../RELEASES.md) file with the new release version `$VERSION`. -3. Modify the [plugin/evm/version.go](../../plugin/evm/version.go) `Version` global string variable and set it to the desired `$VERSION`. -4. Ensure the AvalancheGo version used in [go.mod](../../go.mod) is [its last release](https://github.com/ava-labs/avalanchego/releases). If not, upgrade it with, for example: +1. Update the [RELEASES.md](../../RELEASES.md) file with the new release version `$VERSION`. +1. Modify the [plugin/evm/version.go](../../plugin/evm/version.go) `Version` global string variable and set it to the desired `$VERSION`. +1. Ensure the AvalancheGo version used in [go.mod](../../go.mod) is [its last release](https://github.com/ava-labs/avalanchego/releases). If not, upgrade it with, for example: ```bash go get github.com/ava-labs/avalanchego@v1.13.0 @@ -39,7 +39,7 @@ Remember to use the appropriate versioning for your release. And fix any errors that may arise from the upgrade. If it requires significant changes, you may want to create a separate PR for the upgrade and wait for it to be merged before continuing with this procedure. -5. Add an entry in the object in [compatibility.json](../../compatibility.json), adding the target release `$VERSION` as key and the AvalancheGo RPC chain VM protocol version as value, to the `"rpcChainVMProtocolVersion"` JSON object. For example, we would add: +1. Add an entry in the object in [compatibility.json](../../compatibility.json), adding the target release `$VERSION` as key and the AvalancheGo RPC chain VM protocol version as value, to the `"rpcChainVMProtocolVersion"` JSON object. 
For example, we would add: ```json "v0.7.3": 39, @@ -58,14 +58,14 @@ Remember to use the appropriate versioning for your release. ``` This message can help you figure out what the correct RPC chain VM protocol version (here `39`) has to be in compatibility.json for your current release. Alternatively, you can refer to the [Avalanchego repository `version/compatibility.json` file](https://github.com/ava-labs/avalanchego/blob/main/version/compatibility.json) to find the RPC chain VM protocol version matching the AvalancheGo version we use here. -6. Specify the AvalancheGo compatibility in the [README.md relevant section](../../README.md#avalanchego-compatibility). For example we would add: +1. Specify the AvalancheGo compatibility in the [README.md relevant section](../../README.md#avalanchego-compatibility). For example we would add: ```text ... [v0.7.3] AvalancheGo@v1.12.2/1.13.0-fuji/1.13.0 (Protocol Version: 39) ``` -7. Commit your changes and push the branch +1. Commit your changes and push the branch ```bash git add . @@ -73,26 +73,27 @@ Remember to use the appropriate versioning for your release. git push -u origin "releases/$VERSION_RC" ``` -8. Create a pull request (PR) from your branch targeting master, for example using [`gh`](https://cli.github.com/): +1. Create a pull request (PR) from your branch targeting master, for example using [`gh`](https://cli.github.com/): ```bash gh pr create --repo github.com/ava-labs/subnet-evm --base master --title "chore: release $VERSION_RC" ``` -9. Wait for the PR checks to pass with +1. Wait for the PR checks to pass with ```bash gh pr checks --watch ``` -10. Squash and merge your release branch into `master`, for example: +1. Squash and merge your release branch into `master`, for example: ```bash gh pr merge "releases/$VERSION_RC" --squash --subject "chore: release $VERSION_RC" --body "\n- Update AvalancheGo from v1.1X.X to v1.1X.X" ``` + Ensure you properly label the AvalancheGo version. -11. 
Create and push a tag from the `master` branch: +1. Create and push a tag from the `master` branch: ```bash git fetch origin master @@ -113,7 +114,7 @@ Once the tag is created, you need to test it on the Fuji testnet both locally an 1. Find the Dispatch and Echo L1s blockchain ID and subnet ID: - [Dispatch L1 details](https://subnets-test.avax.network/dispatch/details). Its subnet id is `7WtoAMPhrmh5KosDUsFL9yTcvw7YSxiKHPpdfs4JsgW47oZT5`. - [Echo L1 details](https://subnets-test.avax.network/echo/details). Its subnet id is `i9gFpZQHPLcGfZaQLiwFAStddQD7iTKBpFfurPFJsXm1CkTZK`. -2. Get the blockchain ID and VM ID of the Echo and Dispatch L1s with: +1. Get the blockchain ID and VM ID of the Echo and Dispatch L1s with: - Dispatch: ```bash @@ -154,13 +155,13 @@ Once the tag is created, you need to test it on the Fuji testnet both locally an VM id: meq3bv7qCMZZ69L8xZRLwyKnWp6chRwyscq8VPtHWignRQVVF ``` -3. In the subnet-evm directory, build the VM using +1. In the subnet-evm directory, build the VM using ```bash ./scripts/build.sh vm.bin ``` -4. Copy the VM binary to the plugins directory, naming it with the VM ID: +1. Copy the VM binary to the plugins directory, naming it with the VM ID: ```bash mkdir -p ~/.avalanchego/plugins @@ -169,20 +170,20 @@ Once the tag is created, you need to test it on the Fuji testnet both locally an rm vm.bin ``` -5. Clone [AvalancheGo](https://github.com/ava-labs/avalanchego): +1. Clone [AvalancheGo](https://github.com/ava-labs/avalanchego): ```bash git clone git@github.com:ava-labs/avalanchego.git ``` -6. Checkout correct AvalancheGo version, the version should match the one used in Subnet-EVM `go.mod` file +1. Checkout correct AvalancheGo version, the version should match the one used in Subnet-EVM `go.mod` file ```bash cd avalanchego git checkout v1.13.0 ``` -7. Get upgrades for each L1 and write them out to `~/.avalanchego/configs/chains//upgrade.json`: +1. 
Get upgrades for each L1 and write them out to `~/.avalanchego/configs/chains//upgrade.json`: ```bash mkdir -p ~/.avalanchego/configs/chains/2D8RG4UpSXbPbvPCAWppNJyqTG2i2CAXSkTgmTBBvs7GKNZjsY @@ -208,32 +209,32 @@ Once the tag is created, you need to test it on the Fuji testnet both locally an jq -r '.result.upgrades' > ~/.avalanchego/configs/chains/98qnjenm7MBd8G2cPZoRvZrgJC33JGSAAKghsQ6eojbLCeRNp/upgrade.json ``` -8. (Optional) You can tweak the `config.json` for each L1 if you want to test a particular feature for example. +1. (Optional) You can tweak the `config.json` for each L1 if you want to test a particular feature, for example. - Dispatch: `~/.avalanchego/configs/chains/2D8RG4UpSXbPbvPCAWppNJyqTG2i2CAXSkTgmTBBvs7GKNZjsY/config.json` - Echo: `~/.avalanchego/configs/chains/98qnjenm7MBd8G2cPZoRvZrgJC33JGSAAKghsQ6eojbLCeRNp/config.json` -9. (Optional) If you want to reboostrap completely the chain, you can remove `~/.avalanchego/chainData//db/pebbledb`, for example: +1. (Optional) If you want to completely rebootstrap the chain, you can remove `~/.avalanchego/chainData//db/pebbledb`, for example: - Dispatch: `rm -r ~/.avalanchego/chainData/2D8RG4UpSXbPbvPCAWppNJyqTG2i2CAXSkTgmTBBvs7GKNZjsY/db/pebbledb` - Echo: `rm -r ~/.avalanchego/chainData/98qnjenm7MBd8G2cPZoRvZrgJC33JGSAAKghsQ6eojbLCeRNp/db/pebbledb` AvalancheGo keeps its database in `~/.avalanchego/db/fuji/v1.4.5/*.ldb` which you should not delete. -10. Build AvalancheGo: +1. Build AvalancheGo: ```bash ./scripts/build.sh ``` -11. Run AvalancheGo tracking the Dispatch and Echo Subnet IDs: +1. Run AvalancheGo tracking the Dispatch and Echo Subnet IDs: ```bash ./build/avalanchego --network-id=fuji --partial-sync-primary-network --public-ip=127.0.0.1 \ --track-subnets=7WtoAMPhrmh5KosDUsFL9yTcvw7YSxiKHPpdfs4JsgW47oZT5,i9gFpZQHPLcGfZaQLiwFAStddQD7iTKBpFfurPFJsXm1CkTZK ``` -12. 
Follow the logs and wait until you see the following lines: - line stating the health `check started passing` - line containing `consensus started` - line containing `bootstrapped healthy nodes` -13. In another terminal, check you can obtain the current block number for both chains: +1. In another terminal, check you can obtain the current block number for both chains: - Dispatch: @@ -331,7 +332,7 @@ Following the previous example in the [Release candidate section](#release-candi git push origin "$VERSION" ``` -2. Create a new release on Github, either using: +1. Create a new release on Github, either using: - the [Github web interface](https://github.com/ava-labs/subnet-evm/releases/new) 1. In the "Choose a tag" box, select the tag previously created `$VERSION` (`v0.7.3`) 2. Pick the previous tag, for example as `v0.7.2`. @@ -375,13 +376,13 @@ Following the previous example in the [Release candidate section](#release-candi gh release create "$VERSION" --notes-start-tag "$PREVIOUS_VERSION" --notes-from-tag "$VERSION" --title "$VERSION" --notes "$NOTES" --verify-tag ``` -3. Monitor the [release Github workflow](https://github.com/ava-labs/subnet-evm/actions/workflows/release.yml) to ensure the GoReleaser step succeeds and check the binaries are then published to [the releases page](https://github.com/ava-labs/subnet-evm/releases). In case this fails, you can trigger the workflow manually: +1. Monitor the [release Github workflow](https://github.com/ava-labs/subnet-evm/actions/workflows/release.yml) to ensure the GoReleaser step succeeds and check the binaries are then published to [the releases page](https://github.com/ava-labs/subnet-evm/releases). In case this fails, you can trigger the workflow manually: 1. Go to [github.com/ava-labs/subnet-evm/actions/workflows/release.yml](https://github.com/ava-labs/subnet-evm/actions/workflows/release.yml) 1. Click on the "Run workflow" button 1. Enter the branch name, usually with goreleaser related fixes 1. 
Enter the tag name `$VERSION` (i.e. `v0.7.3`) -4. Monitor the [Publish Docker image workflow](https://github.com/ava-labs/subnet-evm/actions/workflows/publish_docker.yml) succeeds. Note this workflow is triggered when pushing the tag, unlike Goreleaser which triggers when publishing the release. -5. Finally, [create a release for precompile-evm](https://github.com/ava-labs/precompile-evm/blob/main/docs/releasing/README.md) +1. Monitor the [Publish Docker image workflow](https://github.com/ava-labs/subnet-evm/actions/workflows/publish_docker.yml) succeeds. Note this workflow is triggered when pushing the tag, unlike Goreleaser which triggers when publishing the release. +1. Finally, [create a release for precompile-evm](https://github.com/ava-labs/precompile-evm/blob/main/docs/releasing/README.md) ### Post-release @@ -391,7 +392,7 @@ After you have successfully released a new subnet-evm version, you need to bump export P_VERSION=v0.7.4 ``` -1. Create a branch, from the tip of the `master` branch after the release PR has been merged +1. Create a branch, from the tip of the `master` branch after the release PR has been merged: ```bash git fetch origin master @@ -400,10 +401,9 @@ export P_VERSION=v0.7.4 ``` 1. Bump the version number to the next pending release version, `$P_VERSION` - -- Update the [RELEASES.md](../../RELEASES.md) file with `$P_VERSION`, creating a space for maintainers to place their changes as they make them. -- Modify the [plugin/evm/version.go](../../plugin/evm/version.go) `Version` global string variable and set it to `$P_VERSION`. -- Add an entry in the object in [compatibility.json](../../compatibility.json), adding the next pending release versionas key and the AvalancheGo RPC chain VM protocol version as value, to the `"rpcChainVMProtocolVersion"` JSON object. For example, we would add: + - Update the [RELEASES.md](../../RELEASES.md) file with `$P_VERSION`, creating a space for maintainers to place their changes as they make them. 
+ - Modify the [plugin/evm/version.go](../../plugin/evm/version.go) `Version` global string variable and set it to `$P_VERSION`. + - Add an entry in the object in [compatibility.json](../../compatibility.json), adding the next pending release version as key and the AvalancheGo RPC chain VM protocol version as value, to the `"rpcChainVMProtocolVersion"` JSON object. For example, we would add: ```json "v0.7.4": 39, @@ -429,6 +429,7 @@ export P_VERSION=v0.7.4 git commit -S -m "chore: prep release $P_VERSION" git push -u origin "prep-$P_VERSION-release" ``` + 1. Create a pull request (PR) from your branch targeting master, for example using [`gh`](https://cli.github.com/): ```bash diff --git a/plugin/evm/README.md b/plugin/evm/README.md index 3399213e0b..3bc1c00b08 100644 --- a/plugin/evm/README.md +++ b/plugin/evm/README.md @@ -8,7 +8,7 @@ The VM creates the Ethereum backend and provides basic block building, parsing, ## APIs -The VM creates APIs for the node through the function `CreateHandlers()`. CreateHandlers returns the `Service` struct to serve Subnet-EVM specific APIs. Additionally, the Ethereum backend APIs are also returned at the `/rpc` extension. +The VM creates APIs for the node through the function `CreateHandlers()`. CreateHandlers returns the `Service` struct to serve subnet-evm specific APIs. Additionally, the Ethereum backend APIs are also returned at the `/rpc` extension. ## Block Handling @@ -20,4 +20,4 @@ The VM uses the block type from [`libevm/core/types`](https://github.com/ava-lab The Block type implements the AvalancheGo ChainVM Block interface. The key functions for this interface are `Verify()`, `Accept()`, `Reject()`, and `Status()`. 
-The Block type (implemented as [`wrappedBlock`](wrapped_block.go)) wraps the block type from [`libevm/core/types`](https://github.com/ava-labs/libevm/tree/master/core/types) and implements these functions to allow the consensus engine to verify blocks as valid, perform consensus, and mark them as accepted or rejected. Blocks contain standard Ethereum transactions as well as atomic transactions (stored in the block's `ExtData` field) that enable cross-chain asset transfers. Blocks may also include optional block extensions for extensible VM functionality. See the documentation in AvalancheGo for the more detailed VM invariants that are maintained here. +The Block type (implemented as [`wrappedBlock`](wrapped_block.go)) wraps the block type from [`libevm/core/types`](https://github.com/ava-labs/libevm/tree/master/core/types) and implements these functions to allow the consensus engine to verify blocks as valid, perform consensus, and mark them as accepted or rejected. Blocks contain standard Ethereum transactions. Blocks may also include optional block extensions for extensible VM functionality. See the documentation in AvalancheGo for the more detailed VM invariants that are maintained here. diff --git a/plugin/evm/config/config.md b/plugin/evm/config/config.md index 1fe67c1d86..65786640cb 100644 --- a/plugin/evm/config/config.md +++ b/plugin/evm/config/config.md @@ -57,7 +57,7 @@ Configuration is provided as a JSON object. All fields are optional unless other | `api-max-duration` | duration | Maximum duration for API calls (0 = no limit) | `0` | | `api-max-blocks-per-request` | int64 | Maximum number of blocks per getLogs request (0 = no limit) | `0` | | `http-body-limit` | uint64 | Maximum size of HTTP request bodies | - | -| `batch-request-limit` | uint64 | Maximum number of requests that can be batched in an RPC call. 
For no limit, set either this or `batch-response-max-size` to 0 | `1000` | +| `batch-request-limit` | uint64 | Maximum number of requests that can be batched in an RPC call. For no limit, set either this or `batch-response-max-size` to 0 | `1000` | | `batch-response-max-size` | uint64 | Maximum size (in bytes) of response that can be returned from a batched RPC call. For no limit, set either this or `batch-request-limit` to 0. Defaults to `25 MB`| `1000` | ### WebSocket Settings diff --git a/sync/README.md b/sync/README.md index 56604980f4..effb645256 100644 --- a/sync/README.md +++ b/sync/README.md @@ -19,8 +19,8 @@ _Note: nodes joining the network through state sync will not have historical sta The node needs the following data from its peers to continue processing blocks from a syncable block: -- Accounts trie & storage tries for all accounts (at the state root corresponding to the syncable block), -- Contract code referenced in the account trie, +- Accounts trie & storage tries for all accounts (at the state root corresponding to the syncable block) +- Contract code referenced in the account trie - 256 parents of the syncable block (required for the BLOCKHASH opcode) ## Code structure @@ -49,10 +49,10 @@ When a new node wants to join the network via state sync, it will need a few pie The above information is called a _state summary_, and each syncable block corresponds to one such summary (see `message.SyncSummary`). The engine and VM interact as follows to find a syncable state summary: -1. The engine calls `StateSyncEnabled`. The VM returns `true` to initiate state sync, or `false` to start bootstrapping. In `coreth`, this is controlled by the `state-sync-enabled` flag. -1. The engine calls `GetOngoingSyncStateSummary`. If the VM has a previously interrupted sync to resume it returns that summary. Otherwise, it returns `ErrNotFound`. By default, `coreth` will resume an interrupted sync. -1. 
The engine samples peers for their latest available summaries, then verifies the correctness and availability of each sampled summary with validators. The messaging flow is documented in the [block engine README](https://github.com/ava-labs/avalanchego/blob/master/snow/engine/snowman/block/README.md). -1. The engine calls `Accept` on the chosen summary. The VM may return `false` to skip syncing to this summary (`coreth` skips state sync for less than `defaultStateSyncMinBlocks = 300_000` blocks). If the VM decides to perform the sync, it must return `true` without blocking and fetch the state from its peers asynchronously. +1. The engine calls `StateSyncEnabled`. The VM returns `true` to initiate state sync, or `false` to start bootstrapping. In `subnet-evm`, this is controlled by the `state-sync-enabled` flag. +1. The engine calls `GetOngoingSyncStateSummary`. If the VM has a previously interrupted sync to resume it returns that summary. Otherwise, it returns `ErrNotFound`. By default, `subnet-evm` will resume an interrupted sync. +1. The engine samples peers for their latest available summaries, then verifies the correctness and availability of each sampled summary with validators. The messaging flow is documented in the [state sync readme](https://github.com/ava-labs/avalanchego/blob/master/snow/engine/snowman/block/README.md). +1. The engine calls `Accept` on the chosen summary. The VM may return `false` to skip syncing to this summary (`subnet-evm` skips state sync for less than `defaultStateSyncMinBlocks = 300_000` blocks). If the VM decides to perform the sync, it must return `true` without blocking and fetch the state from its peers asynchronously. 1. The VM sends `common.StateSyncDone` on the `toEngine` channel on completion. 1. The engine calls `VM.SetState(Bootstrapping)`. Then, blocks after the syncable block are processed one by one. 
@@ -61,24 +61,24 @@ The above information is called a _state summary_, and each syncable block corre The following steps are executed by the VM to sync its state from peers (see `stateSyncClient.StateSync`): 1. Wipe snapshot data -1. Sync 256 parents of the syncable block (see `BlockRequest`), -1. Sync the EVM state: account trie, code, and storage tries, -1. Update in-memory and on-disk pointers. +1. Sync 256 parents of the syncable block (see `BlockRequest`) +1. Sync the EVM state: account trie, code, and storage tries +1. Update in-memory and on-disk pointers Steps 3 and 4 involve syncing tries. To sync trie data, the VM will send a series of `LeafRequests` to its peers. Each request specifies: -- Type of trie (`NodeType`): +- Type of trie (`NodeType`) - `statesync.StateTrieNode` (account trie and storage tries share the same database) -- `Root` of the trie to sync, -- `Start` and `End` specify a range of keys. +- `Root` of the trie to sync +- `Start` and `End` specify a range of keys Peers responding to these requests send back trie leafs (key/value pairs) beginning at `Start` and up to `End` (or a maximum number of leafs). The response must also include a merkle proof for the range of leafs it contains. Nodes serving state sync data are responsible for constructing these proofs (see `sync/handlers/leafs_request.go`). `client.GetLeafs` handles sending a single request and validating the response. This method will retry the request from a different peer up to `maxRetryAttempts` (= 32) times if the peer's response is: -- malformed, -- does not contain a valid merkle proof, -- or is not received in time. +- malformed +- does not contain a valid merkle proof +- not received in time If there are more leafs in a trie than can be returned in a single response, the client will make successive requests to continue fetching data (with `Start` set to the last key received) until the trie is complete. 
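The retry-and-continue loop described in the sync README can be sketched as follows. The `leafResponse` type and `fetch` callback are stand-ins for the real `message.LeafsRequest` flow; peer selection and range-proof verification are elided:

```go
package main

import (
	"errors"
	"fmt"
)

// leafResponse is a stand-in for a peer's reply to a leaf request: a batch
// of keys plus a flag for whether more leafs remain in the trie.
type leafResponse struct {
	keys []string
	more bool
}

type fetchFn func(start string) (leafResponse, error)

const maxRetryAttempts = 32 // matches the constant named in the README

// syncTrieLeafs requests a leaf range, retries a failed response (in the
// real client: from a different peer, after proof validation) up to
// maxRetryAttempts times, and continues from the last received key until
// the trie is complete. Assumes each successful batch is non-empty.
func syncTrieLeafs(fetch fetchFn) ([]string, error) {
	var leafs []string
	start := ""
	for {
		var resp leafResponse
		var err error
		for attempt := 0; attempt < maxRetryAttempts; attempt++ {
			if resp, err = fetch(start); err == nil {
				break
			}
		}
		if err != nil {
			return nil, errors.New("exceeded retry attempts")
		}
		leafs = append(leafs, resp.keys...)
		if !resp.more {
			return leafs, nil
		}
		start = resp.keys[len(resp.keys)-1] // resume from the last key received
	}
}

func main() {
	// A fake peer that serves the trie in two batches.
	batches := [][]string{{"a", "b"}, {"c"}}
	i := 0
	fetch := func(start string) (leafResponse, error) {
		resp := leafResponse{keys: batches[i], more: i < len(batches)-1}
		i++
		return resp, nil
	}
	got, _ := syncTrieLeafs(fetch)
	fmt.Println(got) // [a b c]
}
```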
`CallbackLeafSyncer` manages this process and does a callback on each batch of received leafs. @@ -102,10 +102,10 @@ When a storage trie leaf is received, it is stored in the account's storage snap `plugin/evm.stateSyncClient.StateSyncSetLastSummaryBlock` is the last step in state sync. Once the tries have been synced, this method: -- Verifies the block the engine has received matches the expected block hash and block number in the summary, -- Adds a checkpoint to the `core.ChainIndexer` (to avoid indexing missing blocks) -- Resets in-memory and on disk pointers on the `core.BlockChain` struct. -- Updates VM's last accepted block. +1. Verifies the block the engine has received matches the expected block hash and block number in the summary, +1. Adds a checkpoint to the `core.ChainIndexer` (to avoid indexing missing blocks) +1. Resets in-memory and on disk pointers on the `core.BlockChain` struct +1. Updates VM's last accepted block ## Resuming a partial sync operation diff --git a/tests/README.md b/tests/README.md index 54cc0497d0..04350921d4 100644 --- a/tests/README.md +++ b/tests/README.md @@ -43,11 +43,9 @@ test run, require binary dependencies. One way of making these dependencies avai to use a nix shell which will give access to the dependencies expected by the test tooling: - - Install [nix](https://nixos.org/). The [determinate systems - installer](https://github.com/DeterminateSystems/nix-installer?tab=readme-ov-file#install-nix) - is recommended. - - Use ./scripts/dev_shell.sh to start a nix shell - - Execute the dependency-requiring command (e.g. `ginkgo -v ./tests/warp -- --start-collectors`) +- Install [nix](https://nixos.org/). The [determinate systems installer](https://github.com/DeterminateSystems/nix-installer?tab=readme-ov-file#install-nix) is recommended. +- Use ./scripts/dev_shell.sh to start a nix shell +- Execute the dependency-requiring command (e.g. 
`ginkgo -v ./tests/warp -- --start-collectors`) This repo also defines a `.envrc` file to configure [devenv](https://direnv.net/). With `devenv` and `nix` installed, a shell at the root of the repo will automatically start a nix dev From 885adc20ff60783b7c26020a9d0574dbd4171910 Mon Sep 17 00:00:00 2001 From: Jonathan Oppenheimer Date: Sat, 22 Nov 2025 00:16:47 -0500 Subject: [PATCH 03/20] fix: update broken links --- README.md | 8 ++++---- RELEASES.md | 2 +- cmd/precompilegen/template-readme.md | 2 +- contracts/README.md | 2 +- 4 files changed, 7 insertions(+), 7 deletions(-) diff --git a/README.md b/README.md index ed5716915c..83ded2b7b9 100644 --- a/README.md +++ b/README.md @@ -5,11 +5,11 @@ [![CodeQL](https://github.com/ava-labs/subnet-evm/actions/workflows/codeql-analysis.yml/badge.svg)](https://github.com/ava-labs/subnet-evm/actions/workflows/codeql-analysis.yml) [![License](https://img.shields.io/github/license/ava-labs/subnet-evm)](https://github.com/ava-labs/subnet-evm/blob/master/LICENSE) -[Avalanche](https://docs.avax.network/avalanche-l1s) is a network composed of multiple blockchains. +[Avalanche](https://build.avax.network/docs/avalanche-l1s) is a network composed of multiple blockchains. Each blockchain is an instance of a Virtual Machine (VM), much like an object in an object-oriented language is an instance of a class. That is, the VM defines the behavior of the blockchain. -Subnet EVM is the [Virtual Machine (VM)](https://docs.avax.network/learn/virtual-machines) that defines the Subnet Contract Chains. Subnet EVM is a simplified version of [Coreth VM (C-Chain)](https://github.com/ava-labs/coreth). +Subnet EVM is the [Virtual Machine (VM)](https://build.avax.network/docs/quick-start/virtual-machines) that defines the Subnet Contract Chains. Subnet EVM is a simplified version of [Coreth VM (C-Chain)](https://github.com/ava-labs/coreth). 
This chain implements the Ethereum Virtual Machine and supports Solidity smart contracts as well as most other Ethereum client functionality. @@ -94,8 +94,8 @@ To run a local network, it is recommended to use the [avalanche-cli](https://git There are two options when using the Avalanche-CLI: -1. Use an official Subnet-EVM release: -2. Build and deploy a locally built (and optionally modified) version of Subnet-EVM: +1. Use an official Subnet-EVM release: +1. Build and deploy a locally built (and optionally modified) version of Subnet-EVM: ## Releasing diff --git a/RELEASES.md b/RELEASES.md index a8f8f5e64a..bc5caefed0 100644 --- a/RELEASES.md +++ b/RELEASES.md @@ -202,7 +202,7 @@ The plugin version is unchanged at 37 and is compatible with AvalancheGo version - Added following new database options: - `"use-standalone-database"` (`bool`): If true it enables creation of standalone database. If false it uses the GRPC Database provided by AvalancheGo. Default is nil and creates the standalone database only if there is no accepted block in the AvalancheGo database (node has not accepted any blocks for this chain) - `"database-type"` (`string`): Specifies the type of database to use. Must be one of `pebbledb`, `leveldb` or `memdb`. memdb is an in-memory, non-persisted database. Default is `pebbledb` - - `"database-config-file"` (`string`): Path to the database config file. Config file is changed for every database type. See [docs](https://docs.avax.network/api-reference/avalanche-go-configs-flags#database-config) for available configs per database type. Ignored if --config-file-content is specified + - `"database-config-file"` (`string`): Path to the database config file. Config file is changed for every database type. See [docs](https://build.avax.network/docs/nodes/configure/configs-flags#database-config) for available configs per database type. 
Ignored if --config-file-content is specified - `"database-config-file-content"` (`string`): As an alternative to `database-config-file`, it allows specifying base64 encoded database config content - `"database-path"` (`string`): Specifies the directory to which the standalone database is persisted. Defaults to "`$HOME/.avalanchego/chainData/{chainID}`" - `"database-read-only"` (`bool`) : Specifies if the standalone database should be a read-only type. Defaults to false diff --git a/cmd/precompilegen/template-readme.md b/cmd/precompilegen/template-readme.md index 09aa152658..b5591b83c1 100644 --- a/cmd/precompilegen/template-readme.md +++ b/cmd/precompilegen/template-readme.md @@ -2,7 +2,7 @@ There are some must-be-done changes waiting in the generated file. Each area req Additionally there are other files you need to edit to activate your precompile. These areas are highlighted with comments "ADD YOUR PRECOMPILE HERE". For testing take a look at other precompile tests in contract_test.go and config_test.go in other precompile folders. -See the tutorial in for more information about precompile development. +See the tutorial in for more information about precompile development. General guidelines for precompile development: diff --git a/contracts/README.md b/contracts/README.md index 6fa8972942..db0284bf83 100644 --- a/contracts/README.md +++ b/contracts/README.md @@ -39,7 +39,7 @@ echo "alias solc-v0.8.30='solc'" >> ~/.bashrc # or ~/.zshrc ### Solidity and Avalanche -It is also helpful to have a basic understanding of [Solidity](https://docs.soliditylang.org) and [Avalanche](https://docs.avax.network). +It is also helpful to have a basic understanding of [Solidity](https://docs.soliditylang.org) and [Avalanche](https://build.avax.network/docs/quick-start). 
## Dependencies From 80bc72b2a7ce38f549a8c1b754d1062899989486 Mon Sep 17 00:00:00 2001 From: Jonathan Oppenheimer Date: Mon, 24 Nov 2025 10:30:39 -0500 Subject: [PATCH 04/20] Update plugin/evm/config/config.md Co-authored-by: Austin Larson <78000745+alarso16@users.noreply.github.com> Signed-off-by: Jonathan Oppenheimer <147infiniti@gmail.com> --- plugin/evm/config/config.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/plugin/evm/config/config.md b/plugin/evm/config/config.md index 65786640cb..70fbfa9de4 100644 --- a/plugin/evm/config/config.md +++ b/plugin/evm/config/config.md @@ -127,7 +127,7 @@ Configuration is provided as a JSON object. All fields are optional unless other ### Offline Pruning -> **Note**: If offline pruning is enabled it will run on startup and block until it completes (approximately one hour on Mainnet). This will reduce the size of the database by deleting old trie nodes. **While performing offline pruning, your node will not be able to process blocks and will be considered offline.** While ongoing, the pruning process consumes a small amount of additional disk space (for deletion markers and the bloom filter). For more information see the [disk space considerations documentation](https://build.avax.network/docs/nodes/maintain/reduce-disk-usage#disk-space-considerations). Since offline pruning deletes old state data, this should not be run on nodes that need to support archival API requests. This is meant to be run manually, so after running with this flag once, it must be toggled back to false before running the node again. Therefore, you should run with this flag set to true and then set it to false on the subsequent run. +> **Note**: If offline pruning is enabled it will run on startup and block until it completes (approximately one hour on Mainnet). This will reduce the size of the database by deleting old trie nodes. 
**While performing offline pruning, your node will not be able to process blocks and will be considered offline.** While ongoing, the pruning process consumes a small amount of additional disk space (for deletion markers and the bloom filter). For more information see the [disk space considerations documentation](https://build.avax.network/docs/nodes/maintain/reduce-disk-usage#disk-space-considerations). Since offline pruning deletes old state data, this should not be run on nodes that need to support archival API requests. This is meant to be run manually, so after running with this flag once, it must be toggled back to false before running the node again. Therefore, you should run with this flag set to true and then set it to false on the subsequent run. | Option | Type | Description | Default | |--------|------|-------------|---------| From de0818ce4a0c77f0f5d050d700483678a6769bb3 Mon Sep 17 00:00:00 2001 From: Jonathan Oppenheimer Date: Mon, 24 Nov 2025 10:31:20 -0500 Subject: [PATCH 05/20] Update sync/README.md Co-authored-by: Austin Larson <78000745+alarso16@users.noreply.github.com> Signed-off-by: Jonathan Oppenheimer <147infiniti@gmail.com> --- sync/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sync/README.md b/sync/README.md index effb645256..7e09feb5c1 100644 --- a/sync/README.md +++ b/sync/README.md @@ -51,7 +51,7 @@ The above information is called a _state summary_, and each syncable block corre 1. The engine calls `StateSyncEnabled`. The VM returns `true` to initiate state sync, or `false` to start bootstrapping. In `subnet-evm`, this is controlled by the `state-sync-enabled` flag. 1. The engine calls `GetOngoingSyncStateSummary`. If the VM has a previously interrupted sync to resume it returns that summary. Otherwise, it returns `ErrNotFound`. By default, `subnet-evm` will resume an interrupted sync. -1. 
The engine samples peers for their latest available summaries, then verifies the correctness and availability of each sampled summary with validators. The messaging flow is documented in the [state sync readme](https://github.com/ava-labs/avalanchego/blob/master/snow/engine/snowman/block/README.md). +1. The engine samples peers for their latest available summaries, then verifies the correctness and availability of each sampled summary with validators. The messaging flow is documented in the [state sync README](https://github.com/ava-labs/avalanchego/blob/master/snow/engine/snowman/block/README.md). 1. The engine calls `Accept` on the chosen summary. The VM may return `false` to skip syncing to this summary (`subnet-evm` skips state sync for less than `defaultStateSyncMinBlocks = 300_000` blocks). If the VM decides to perform the sync, it must return `true` without blocking and fetch the state from its peers asynchronously. 1. The VM sends `common.StateSyncDone` on the `toEngine` channel on completion. 1. The engine calls `VM.SetState(Bootstrapping)`. Then, blocks after the syncable block are processed one by one. From 8b7f384103849b30ad0399149811f8235d5f28e7 Mon Sep 17 00:00:00 2001 From: Jonathan Oppenheimer Date: Mon, 24 Nov 2025 10:33:23 -0500 Subject: [PATCH 06/20] chore: fix coreth cherry-pick mistakes --- SECURITY.md | 2 +- core/README.md | 2 +- plugin/evm/README.md | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/SECURITY.md b/SECURITY.md index 3c2ebb4c44..60412d905d 100644 --- a/SECURITY.md +++ b/SECURITY.md @@ -16,4 +16,4 @@ Please refer to the [Bug Bounty Page](https://immunefi.com/bug-bounty/avalabs/in ## Supported Versions -Please use the [most recently released version](https://github.com/ava-labs/coreth/releases/latest) to perform testing and to validate security issues. +Please use the [most recently released version](https://github.com/ava-labs/subnet-evm/releases/latest) to perform testing and to validate security issues. 
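The summary-acceptance rule from the sync/README.md change earlier in this series (skip state sync when the summary is fewer than `defaultStateSyncMinBlocks = 300_000` blocks ahead) could be sketched as below; the function name and signature are illustrative, not the VM's actual API.

```go
package main

import "fmt"

// defaultStateSyncMinBlocks mirrors the constant named in sync/README.md.
const defaultStateSyncMinBlocks = 300_000

// shouldStateSync sketches the Accept decision: skip state sync when the
// summary is fewer than defaultStateSyncMinBlocks blocks ahead of the last
// accepted block, since bootstrapping is cheaper for a small gap.
func shouldStateSync(lastAcceptedHeight, summaryHeight uint64) bool {
	return summaryHeight >= lastAcceptedHeight+defaultStateSyncMinBlocks
}

func main() {
	fmt.Println(shouldStateSync(0, 500_000)) // far behind: sync (true)
	fmt.Println(shouldStateSync(100, 5_000)) // small gap: bootstrap instead (false)
}
```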
diff --git a/core/README.md b/core/README.md index 3ab0710573..cf13016df4 100644 --- a/core/README.md +++ b/core/README.md @@ -10,7 +10,7 @@ When the consensus engine verifies blocks as they are ready to be issued into co InsertBlockManual verifies the block, inserts it into the state manager to track the merkle trie for the block, and adds it to the canonical chain if it extends the currently preferred chain. -Coreth adds functions for Accept and Reject, which take care of marking a block as finalized and performing garbage collection where possible. +subnet-evm adds functions for Accept and Reject, which take care of marking a block as finalized and performing garbage collection where possible. The consensus engine can also call `SetPreference` on a VM to tell the VM that a specific block is preferred by the consensus engine to be accepted. This triggers a call to `reorg` the blockchain and set the newly preferred block as the preferred chain. diff --git a/plugin/evm/README.md b/plugin/evm/README.md index 3bc1c00b08..688cfbcd5f 100644 --- a/plugin/evm/README.md +++ b/plugin/evm/README.md @@ -8,7 +8,7 @@ The VM creates the Ethereum backend and provides basic block building, parsing, ## APIs -The VM creates APIs for the node through the function `CreateHandlers()`. CreateHandlers returns the `Service` struct to serve Coreth specific APIs. Additionally, the Ethereum backend APIs are also returned at the `/rpc` extension. +The VM creates APIs for the node through the function `CreateHandlers()`. CreateHandlers returns the `Service` struct to serve subnet-evm specific APIs. Additionally, the Ethereum backend APIs are also returned at the `/rpc` extension. 
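The handler layout described above can be sketched as a map from URL extension to HTTP handler. The stub handlers and function below are illustrative only; the real `CreateHandlers()` wires RPC servers around the `Service` struct and the Ethereum backend.

```go
package main

import (
	"fmt"
	"net/http"
)

// createHandlers sketches the shape of the VM's CreateHandlers() result:
// one handler per URL extension.
func createHandlers() map[string]http.Handler {
	return map[string]http.Handler{
		// subnet-evm specific APIs served by the Service struct.
		"/": http.NotFoundHandler(),
		// Ethereum backend APIs (eth_*, web3_*, ...) served at /rpc.
		"/rpc": http.NotFoundHandler(),
	}
}

func main() {
	handlers := createHandlers()
	_, ok := handlers["/rpc"]
	fmt.Println(len(handlers), ok) // prints: 2 true
}
```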
## Block Handling From 6d45128c67a42ebdb239dbbea90860ee755c1b29 Mon Sep 17 00:00:00 2001 From: Jonathan Oppenheimer Date: Mon, 24 Nov 2025 10:35:03 -0500 Subject: [PATCH 07/20] chore: remove path support --- plugin/evm/config/config.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/plugin/evm/config/config.md b/plugin/evm/config/config.md index 70fbfa9de4..4db3f1f8cd 100644 --- a/plugin/evm/config/config.md +++ b/plugin/evm/config/config.md @@ -259,7 +259,7 @@ Failing to set these options will result in errors on VM initialization. Additio | `database-config-file` | string | Path to database configuration file | - | | `use-standalone-database` | bool | Use standalone database instead of shared one | - | | `inspect-database` | bool | Inspect database on startup | `false` | -| `state-scheme` | string | EXPERIMENTAL: specifies the database scheme to store state data; can be one of `hash`, `firewood`, or `path` | `hash` | +| `state-scheme` | string | EXPERIMENTAL: specifies the database scheme to store state data; can be one of `hash` or `firewood` | `hash` | ## Transaction Indexing From 0ee527b5400b260914bbec3f34c73b43579aa6be Mon Sep 17 00:00:00 2001 From: Jonathan Oppenheimer Date: Mon, 24 Nov 2025 10:36:16 -0500 Subject: [PATCH 08/20] chore: revert upstream file changes --- cmd/evm/testdata/13/readme.md | 4 ++-- cmd/evm/testdata/14/readme.md | 10 ++++------ cmd/evm/testdata/18/README.md | 6 +++--- cmd/evm/testdata/19/readme.md | 7 +++---- cmd/evm/testdata/23/readme.md | 2 +- cmd/evm/testdata/29/readme.md | 6 +++--- cmd/evm/testdata/3/readme.md | 2 +- cmd/evm/testdata/4/readme.md | 2 +- cmd/evm/testdata/5/readme.md | 2 +- 9 files changed, 19 insertions(+), 22 deletions(-) diff --git a/cmd/evm/testdata/13/readme.md b/cmd/evm/testdata/13/readme.md index 36dfbd6579..889975d47e 100644 --- a/cmd/evm/testdata/13/readme.md +++ b/cmd/evm/testdata/13/readme.md @@ -1,4 +1,4 @@ ## Input transactions in RLP form -This testdata folder is used to exemplify how 
transaction input can be provided in rlp form. -Please see the README in `evm` folder for how this is performed. +This testdata folder is used to exemplify how transaction input can be provided in rlp form. +Please see the README in `evm` folder for how this is performed. \ No newline at end of file diff --git a/cmd/evm/testdata/14/readme.md b/cmd/evm/testdata/14/readme.md index 44f63578d2..40dd75486e 100644 --- a/cmd/evm/testdata/14/readme.md +++ b/cmd/evm/testdata/14/readme.md @@ -1,10 +1,9 @@ ## Difficulty calculation -This test shows how the `evm t8n` can be used to calculate the (ethash) difficulty, if none is provided by the caller. +This test shows how the `evm t8n` can be used to calculate the (ethash) difficulty, if none is provided by the caller. Calculating it (with an empty set of txs) using `London` rules (and no provided unclehash for the parent block): - -```bash +``` [user@work evm]$ ./evm t8n --input.alloc=./testdata/14/alloc.json --input.txs=./testdata/14/txs.json --input.env=./testdata/14/env.json --output.result=stdout --state.fork=London INFO [03-09|10:43:57.070] Trie dumping started root=6f0588..7f4bdc INFO [03-09|10:43:57.070] Trie dumping complete accounts=2 elapsed="214.663µs" @@ -23,10 +22,8 @@ INFO [03-09|10:43:57.071] Wrote file file=alloc.js } } ``` - Same thing, but this time providing a non-empty (and non-`emptyKeccak`) unclehash, which leads to a slightly different result: - -```bash +``` [user@work evm]$ ./evm t8n --input.alloc=./testdata/14/alloc.json --input.txs=./testdata/14/txs.json --input.env=./testdata/14/env.uncles.json --output.result=stdout --state.fork=London INFO [03-09|10:44:20.511] Trie dumping started root=6f0588..7f4bdc INFO [03-09|10:44:20.511] Trie dumping complete accounts=2 elapsed="184.319µs" @@ -45,3 +42,4 @@ INFO [03-09|10:44:20.512] Wrote file file=alloc.js } } ``` + diff --git a/cmd/evm/testdata/18/README.md b/cmd/evm/testdata/18/README.md index 4448f51725..360a9bba01 100644 --- 
a/cmd/evm/testdata/18/README.md +++ b/cmd/evm/testdata/18/README.md @@ -1,9 +1,9 @@ # Invalid rlp This folder contains a sample of invalid RLP, and it's expected -that the t9n handles this properly: +that the t9n handles this properly: -```bash +``` $ go run . t9n --input.txs=./testdata/18/invalid.rlp --state.fork=London ERROR(11): rlp: value size exceeds available input length -``` +``` \ No newline at end of file diff --git a/cmd/evm/testdata/19/readme.md b/cmd/evm/testdata/19/readme.md index a9934751bf..9c7c4b3656 100644 --- a/cmd/evm/testdata/19/readme.md +++ b/cmd/evm/testdata/19/readme.md @@ -1,11 +1,10 @@ ## Difficulty calculation -This test shows how the `evm t8n` can be used to calculate the (ethash) difficulty, if none is provided by the caller, +This test shows how the `evm t8n` can be used to calculate the (ethash) difficulty, if none is provided by the caller, this time on `GrayGlacier` (Eip 5133). Calculating it (with an empty set of txs) using `GrayGlacier` rules (and no provided unclehash for the parent block): - -```bash +``` [user@work evm]$ ./evm t8n --input.alloc=./testdata/19/alloc.json --input.txs=./testdata/19/txs.json --input.env=./testdata/19/env.json --output.result=stdout --state.fork=GrayGlacier INFO [03-09|10:45:26.777] Trie dumping started root=6f0588..7f4bdc INFO [03-09|10:45:26.777] Trie dumping complete accounts=2 elapsed="176.471µs" @@ -23,4 +22,4 @@ INFO [03-09|10:45:26.777] Wrote file file=alloc.js "currentBaseFee": "0x500" } } -``` +``` \ No newline at end of file diff --git a/cmd/evm/testdata/23/readme.md b/cmd/evm/testdata/23/readme.md index 0413e80d1e..f31b64de2f 100644 --- a/cmd/evm/testdata/23/readme.md +++ b/cmd/evm/testdata/23/readme.md @@ -1 +1 @@ -These files exemplify how to sign a transaction using the pre-EIP155 scheme. +These files exemplify how to sign a transaction using the pre-EIP155 scheme. 
diff --git a/cmd/evm/testdata/29/readme.md b/cmd/evm/testdata/29/readme.md index f88c4e0fc8..ab02ce9cf8 100644 --- a/cmd/evm/testdata/29/readme.md +++ b/cmd/evm/testdata/29/readme.md @@ -1,11 +1,11 @@ ## EIP 4788 -This test contains testcases for EIP-4788. The 4788-contract is +This test contains testcases for EIP-4788. The 4788-contract is located at address `0x000F3df6D732807Ef1319fB7B8bB8522d0Beac02`, and this test executes a simple transaction. It also -implicitly invokes the system tx, which sets calls the contract and sets the +implicitly invokes the system tx, which sets calls the contract and sets the storage values -```bash +``` $ dir=./testdata/29/ && go run . t8n --state.fork=Cancun --input.alloc=$dir/alloc.json --input.txs=$dir/txs.json --input.env=$dir/env.json --output.alloc=stdout INFO [09-27|15:34:53.049] Trie dumping started root=19a4f8..01573c INFO [09-27|15:34:53.049] Trie dumping complete accounts=2 elapsed="192.759µs" diff --git a/cmd/evm/testdata/3/readme.md b/cmd/evm/testdata/3/readme.md index dfb2ea031e..246c58ef3b 100644 --- a/cmd/evm/testdata/3/readme.md +++ b/cmd/evm/testdata/3/readme.md @@ -1,2 +1,2 @@ These files exemplify a transition where a transaction (executed on block 5) requests -the blockhash for block `1`. +the blockhash for block `1`. diff --git a/cmd/evm/testdata/4/readme.md b/cmd/evm/testdata/4/readme.md index 56846dfdd2..eede41a9fd 100644 --- a/cmd/evm/testdata/4/readme.md +++ b/cmd/evm/testdata/4/readme.md @@ -1,3 +1,3 @@ These files exemplify a transition where a transaction (executed on block 5) requests -the blockhash for block `4`, but where the hash for that block is missing. +the blockhash for block `4`, but where the hash for that block is missing. It's expected that executing these should cause `exit` with errorcode `4`. 
diff --git a/cmd/evm/testdata/5/readme.md b/cmd/evm/testdata/5/readme.md index f31c0760ae..1a84afaab6 100644 --- a/cmd/evm/testdata/5/readme.md +++ b/cmd/evm/testdata/5/readme.md @@ -1 +1 @@ -These files exemplify a transition where there are no transactions, two ommers, at block `N-1` (delta 1) and `N-2` (delta 2). +These files exemplify a transition where there are no transactions, two ommers, at block `N-1` (delta 1) and `N-2` (delta 2). \ No newline at end of file From d0c8053f1886297c888ef4e5c2e28dd1d219659d Mon Sep 17 00:00:00 2001 From: Jonathan Oppenheimer Date: Mon, 24 Nov 2025 10:39:11 -0500 Subject: [PATCH 09/20] fix: use subnet-evm rather than coreth items --- cmd/simulator/README.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/cmd/simulator/README.md b/cmd/simulator/README.md index 1223cf1680..b64e286770 100644 --- a/cmd/simulator/README.md +++ b/cmd/simulator/README.md @@ -7,7 +7,7 @@ When building developing your own blockchain using `subnet-evm`, you may want to To build the load simulator, navigate to the base of the simulator directory: ```bash -cd $GOPATH/src/github.com/ava-labs/coreth/cmd/simulator +cd $GOPATH/src/github.com/ava-labs/subnet-evm/cmd/simulator ``` Build the simulator: @@ -28,7 +28,7 @@ This should give the following output: v0.1.0 ``` -To run the load simulator, you must first start an EVM based network. The load simulator works on both the C-Chain and Subnet-EVM, so we will start a single node network and run the load simulator on the C-Chain. +To run the load simulator, you must first start an EVM based network. The load simulator works on both the C-Chain and Subnet-EVM, so we will start a single node network and run the load simulator on a Subnet-EVM blockchain. To start a single node network, follow the instructions from the AvalancheGo [README](https://github.com/ava-labs/avalanchego#building-avalanchego) to build from source. 
@@ -45,9 +45,9 @@ The `--sybil-protection-enabled=false` flag is only suitable for local testing. 1. Ignore stake weight on the P-Chain and count each connected peer as having a stake weight of 1 2. Automatically opts in to validate every Subnet -Once you have AvalancheGo running locally, it will be running an HTTP Server on the default port `9650`. This means that the RPC Endpoint for the C-Chain will be `http://127.0.0.1:9650/ext/bc/C/rpc` and `ws://127.0.0.1:9650/ext/bc/C/ws` for WebSocket connections. +Once you have AvalancheGo running locally, it will be running an HTTP Server on the default port `9650`. This means that the RPC Endpoint for your Subnet-EVM blockchain will be `http://127.0.0.1:9650/ext/bc/BLOCKCHAIN_ID/rpc` and `ws://127.0.0.1:9650/ext/bc/BLOCKCHAIN_ID/ws` for WebSocket connections, where `BLOCKCHAIN_ID` is the blockchain ID of your deployed Subnet-EVM instance. -Now, we can run the simulator command to simulate some load on the local C-Chain for 30s: +Now, we can run the simulator command to simulate some load on the local Subnet-EVM blockchain: ```bash ./simulator --timeout=1m --workers=1 --max-fee-cap=300 --max-tip-cap=10 --txs-per-worker=50 From 6cc0d555f47d625cc66df1df32d492362e1d3452 Mon Sep 17 00:00:00 2001 From: Jonathan Oppenheimer Date: Mon, 1 Dec 2025 16:31:48 -0500 Subject: [PATCH 10/20] chore: Austin feedback --- plugin/evm/config/config.md | 780 +++++++++++++++++++++++++----------- 1 file changed, 548 insertions(+), 232 deletions(-) diff --git a/plugin/evm/config/config.md b/plugin/evm/config/config.md index 9707aaa190..2d4c5e435d 100644 --- a/plugin/evm/config/config.md +++ b/plugin/evm/config/config.md @@ -1,309 +1,625 @@ -# Subnet-EVM Configuration +# EVM tool -> **Note**: These are the configuration options available in the subnet-evm codebase. To set these values, you need to create a configuration file at `{chain-config-dir}/C/config.json`. This file does not exist by default. 
-> -> For example if `chain-config-dir` has the default value which is `$HOME/.avalanchego/configs/chains`, then `config.json` should be placed at `$HOME/.avalanchego/configs/chains/C/config.json`. -> -> For the AvalancheGo node configuration options, see the AvalancheGo Configuration page. +The EVM tool provides a few useful subcommands to facilitate testing at the EVM +layer. -This document describes all configuration options available for Subnet-EVM. +* transition tool (`t8n`) : a stateless state transition utility +* transaction tool (`t9n`) : a transaction validation utility +* block builder tool (`b11r`): a block assembler utility -## Example Configuration +## State transition tool (`t8n`) -```json -{ - "eth-apis": ["eth", "eth-filter", "net", "web3"], - "pruning-enabled": true, - "commit-interval": 4096, - "trie-clean-cache": 512, - "trie-dirty-cache": 512, - "snapshot-cache": 256, - "rpc-gas-cap": 50000000, - "log-level": "info", - "metrics-expensive-enabled": true, - "continuous-profiler-dir": "./profiles", - "state-sync-enabled": false, - "accepted-cache-size": 32 -} -``` -## Configuration Format +The `evm t8n` tool is a stateless state transition utility. It is a utility +which can -Configuration is provided as a JSON object. All fields are optional unless otherwise specified. +1. Take a prestate, including + - Accounts, + - Block context information, + - Previous blockhashes (*optional) +2. Apply a set of transactions, +3. Apply a mining-reward (*optional), +4. 
And generate a post-state, including + - State root, transaction root, receipt root, + - Information about rejected transactions, + - Optionally: a full or partial post-state dump -## API Configuration +### Specification -### Ethereum APIs +The idea is to specify the behaviour of this binary very _strict_, so that other +node implementors can build replicas based on their own state-machines, and the +state generators can swap between a \`geth\`-based implementation and a \`parityvm\`-based +implementation. -| Option | Type | Description | Default | -|--------|------|-------------|---------| -| `eth-apis` | array of strings | List of Ethereum services that should be enabled | `["eth", "eth-filter", "net", "web3", "internal-eth", "internal-blockchain", "internal-transaction"]` | +#### Command line params -### Subnet-EVM Specific APIs +Command line params that need to be supported are -| Option | Type | Description | Default | -|--------|------|-------------|---------| -| `validators-api-enabled` | bool | Enable the validators API | `true` | -| `admin-api-enabled` | bool | Enable the admin API for administrative operations | `false` | -| `admin-api-dir` | string | Directory for admin API operations | - | -| `warp-api-enabled` | bool | Enable the Warp API for cross-chain messaging | `false` | +``` + --input.alloc value (default: "alloc.json") + --input.env value (default: "env.json") + --input.txs value (default: "txs.json") + --output.alloc value (default: "alloc.json") + --output.basedir value + --output.body value + --output.result value (default: "result.json") + --state.chainid value (default: 1) + --state.fork value (default: "GrayGlacier") + --state.reward value (default: 0) + --trace.memory (default: false) + --trace.nomemory (default: true) + --trace.noreturndata (default: true) + --trace.nostack (default: false) + --trace.returndata (default: false) +``` +#### Objects -### API Limits and Security +The transition tool uses JSON objects to read and write data 
related to the transition operation. The +following object definitions are required. -| Option | Type | Description | Default | -|--------|------|-------------|---------| -| `rpc-gas-cap` | uint64 | Maximum gas limit for RPC calls | `50,000,000` | -| `rpc-tx-fee-cap` | float64 | Maximum transaction fee cap in AVAX | `100` | -| `api-max-duration` | duration | Maximum duration for API calls (0 = no limit) | `0` | -| `api-max-blocks-per-request` | int64 | Maximum number of blocks per getLogs request (0 = no limit) | `0` | -| `http-body-limit` | uint64 | Maximum size of HTTP request bodies | - | -| `batch-request-limit` | uint64 | Maximum number of requests that can be batched in an RPC call. For no limit, set either this or `batch-response-max-size` to 0 | `1000` | -| `batch-response-max-size` | uint64 | Maximum size (in bytes) of response that can be returned from a batched RPC call. For no limit, set either this or `batch-request-limit` to 0. Defaults to `25 MB`| `1000` | +##### `alloc` -### WebSocket Settings +The `alloc` object defines the prestate that transition will begin with. -| Option | Type | Description | Default | -|--------|------|-------------|---------| -| `ws-cpu-refill-rate` | duration | Rate at which WebSocket CPU usage quota is refilled (0 = no limit) | `0` | -| `ws-cpu-max-stored` | duration | Maximum stored WebSocket CPU usage quota (0 = no limit) | `0` | +```go +// Map of address to account definition. +type Alloc map[common.Address]Account +// Genesis account. Each field is optional. +type Account struct { + Code []byte `json:"code"` + Storage map[common.Hash]common.Hash `json:"storage"` + Balance *big.Int `json:"balance"` + Nonce uint64 `json:"nonce"` + SecretKey []byte `json:"secretKey"` +} +``` -## Cache Configuration +##### `env` + +The `env` object defines the environmental context in which the transition will +take place. 
+ +```go +type Env struct { + // required + CurrentCoinbase common.Address `json:"currentCoinbase"` + CurrentGasLimit uint64 `json:"currentGasLimit"` + CurrentNumber uint64 `json:"currentNumber"` + CurrentTimestamp uint64 `json:"currentTimestamp"` + Withdrawals []*Withdrawal `json:"withdrawals"` + // optional + CurrentDifficulty *big.Int `json:"currentDifficulty"` + CurrentRandom *big.Int `json:"currentRandom"` + CurrentBaseFee *big.Int `json:"currentBaseFee"` + ParentDifficulty *big.Int `json:"parentDifficulty"` + ParentGasUsed uint64 `json:"parentGasUsed"` + ParentGasLimit uint64 `json:"parentGasLimit"` + ParentTimestamp uint64 `json:"parentTimestamp"` + BlockHashes map[uint64]common.Hash `json:"blockHashes"` + ParentUncleHash common.Hash `json:"parentUncleHash"` + Ommers []Ommer `json:"ommers"` +} +type Ommer struct { + Delta uint64 `json:"delta"` + Address common.Address `json:"address"` +} +type Withdrawal struct { + Index uint64 `json:"index"` + ValidatorIndex uint64 `json:"validatorIndex"` + Recipient common.Address `json:"recipient"` + Amount *big.Int `json:"amount"` +} +``` -### Trie Caches +##### `txs` + +The `txs` object is an array of any of the transaction types: `LegacyTx`, +`AccessListTx`, or `DynamicFeeTx`. 
+ +```go +type LegacyTx struct { + Nonce uint64 `json:"nonce"` + GasPrice *big.Int `json:"gasPrice"` + Gas uint64 `json:"gas"` + To *common.Address `json:"to"` + Value *big.Int `json:"value"` + Data []byte `json:"data"` + V *big.Int `json:"v"` + R *big.Int `json:"r"` + S *big.Int `json:"s"` + SecretKey *common.Hash `json:"secretKey"` +} +type AccessList []AccessTuple +type AccessTuple struct { + Address common.Address `json:"address" gencodec:"required"` + StorageKeys []common.Hash `json:"storageKeys" gencodec:"required"` +} +type AccessListTx struct { + ChainID *big.Int `json:"chainId"` + Nonce uint64 `json:"nonce"` + GasPrice *big.Int `json:"gasPrice"` + Gas uint64 `json:"gas"` + To *common.Address `json:"to"` + Value *big.Int `json:"value"` + Data []byte `json:"data"` + AccessList AccessList `json:"accessList"` + V *big.Int `json:"v"` + R *big.Int `json:"r"` + S *big.Int `json:"s"` + SecretKey *common.Hash `json:"secretKey"` +} +type DynamicFeeTx struct { + ChainID *big.Int `json:"chainId"` + Nonce uint64 `json:"nonce"` + GasTipCap *big.Int `json:"maxPriorityFeePerGas"` + GasFeeCap *big.Int `json:"maxFeePerGas"` + Gas uint64 `json:"gas"` + To *common.Address `json:"to"` + Value *big.Int `json:"value"` + Data []byte `json:"data"` + AccessList AccessList `json:"accessList"` + V *big.Int `json:"v"` + R *big.Int `json:"r"` + S *big.Int `json:"s"` + SecretKey *common.Hash `json:"secretKey"` +} +``` -| Option | Type | Description | Default | -|--------|------|-------------|---------| -| `trie-clean-cache` | int | Size of the trie clean cache in MB | `512` | -| `trie-dirty-cache` | int | Size of the trie dirty cache in MB | `512` | -| `trie-dirty-commit-target` | int | Memory limit to target in the dirty cache before performing a commit in MB | `20` | -| `trie-prefetcher-parallelism` | int | Maximum concurrent disk reads trie prefetcher should perform | `16` | +##### `result` + +The `result` object is output after a transition is executed. 
It includes +information about the post-transition environment. + +```go +type ExecutionResult struct { + StateRoot common.Hash `json:"stateRoot"` + TxRoot common.Hash `json:"txRoot"` + ReceiptRoot common.Hash `json:"receiptsRoot"` + LogsHash common.Hash `json:"logsHash"` + Bloom types.Bloom `json:"logsBloom"` + Receipts types.Receipts `json:"receipts"` + Rejected []*rejectedTx `json:"rejected,omitempty"` + Difficulty *big.Int `json:"currentDifficulty"` + GasUsed uint64 `json:"gasUsed"` + BaseFee *big.Int `json:"currentBaseFee,omitempty"` +} +``` -### Other Caches +#### Error codes and output -| Option | Type | Description | Default | -|--------|------|-------------|---------| -| `snapshot-cache` | int | Size of the snapshot disk layer clean cache in MB | `256` | -| `accepted-cache-size` | int | Depth to keep in the accepted headers and logs cache (blocks) | `32` | -| `state-sync-server-trie-cache` | int | Trie cache size for state sync server in MB | `64` | +All logging should happen against the `stderr`. +There are a few (not many) errors that can occur, those are defined below. -## Ethereum Settings +##### EVM-based errors (`2` to `9`) -### Transaction Processing +- Other EVM error. Exit code `2` +- Failed configuration: when a non-supported or invalid fork was specified. Exit code `3`. +- Block history is not supplied, but needed for a `BLOCKHASH` operation. If `BLOCKHASH` + is invoked targeting a block which history has not been provided for, the program will + exit with code `4`. 
-| Option | Type | Description | Default | -|--------|------|-------------|---------| -| `preimages-enabled` | bool | Enable preimage recording | `false` | -| `allow-unfinalized-queries` | bool | Allow queries for unfinalized blocks | `false` | -| `allow-unprotected-txs` | bool | Allow unprotected transactions (without EIP-155) | `false` | -| `allow-unprotected-tx-hashes` | array | List of specific transaction hashes allowed to be unprotected | EIP-1820 registry tx | -| `local-txs-enabled` | bool | Enable treatment of transactions from local accounts as local | `false` | +##### IO errors (`10`-`20`) -### Snapshots +- Invalid input json: the supplied data could not be marshalled. + The program will exit with code `10` +- IO problems: failure to load or save files, the program will exit with code `11` -| Option | Type | Description | Default | -|--------|------|-------------|---------| -| `snapshot-wait` | bool | Wait for snapshot generation on startup | `false` | -| `snapshot-verification-enabled` | bool | Enable snapshot verification | `false` | +``` +# This should exit with 3 +./evm t8n --input.alloc=./testdata/1/alloc.json --input.txs=./testdata/1/txs.json --input.env=./testdata/1/env.json --state.fork=Frontier+1346 2>/dev/null +exitcode:3 OK +``` +#### Forks +### Basic usage -## Pruning and State Management +The chain configuration to be used for a transition is specified via the +`--state.fork` CLI flag. A list of possible values and configurations can be +found in [`tests/init.go`](../../tests/init.go). - > **Note**: If a node is ever run with `pruning-enabled` as `false` (archival mode), setting `pruning-enabled` to `true` will result in a warning and the node will shut down. This is to protect against unintentional misconfigurations of an archival node. To override this and switch to pruning mode, in addition to `pruning-enabled: true`, `allow-missing-tries` should be set to `true` as well. 
+#### Examples +##### Basic usage -### Basic Pruning +Invoking it with the provided example files +``` +./evm t8n --input.alloc=./testdata/1/alloc.json --input.txs=./testdata/1/txs.json --input.env=./testdata/1/env.json --state.fork=Berlin +``` +Two resulting files: -| Option | Type | Description | Default | -|--------|------|-------------|---------| -| `pruning-enabled` | bool | Enable state pruning to save disk space | `true` | -| `commit-interval` | uint64 | Interval at which to persist EVM and atomic tries (blocks) | `4096` | -| `accepted-queue-limit` | int | Maximum blocks to queue before blocking during acceptance | `64` | +`alloc.json`: +```json +{ + "0x8a8eafb1cf62bfbeb1741769dae1a9dd47996192": { + "balance": "0xfeed1a9d", + "nonce": "0x1" + }, + "0xa94f5374fce5edbc8e2a8697c15331677e6ebf0b": { + "balance": "0x5ffd4878be161d74", + "nonce": "0xac" + }, + "0xc94f5374fce5edbc8e2a8697c15331677e6ebf0b": { + "balance": "0xa410" + } +} +``` +`result.json`: +```json +{ + "stateRoot": "0x84208a19bc2b46ada7445180c1db162be5b39b9abc8c0a54b05d32943eae4e13", + "txRoot": "0xc4761fd7b87ff2364c7c60b6c5c8d02e522e815328aaea3f20e3b7b7ef52c42d", + "receiptsRoot": "0x056b23fbba480696b65fe5a59b8f2148a1299103c4f57df839233af2cf4ca2d2", + "logsHash": "0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347", + "logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000", + "receipts": [ + { + "root": "0x", + "status": "0x1", + "cumulativeGasUsed": "0x5208", + "logsBloom": 
"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000", + "logs": null, + "transactionHash": "0x0557bacce3375c98d806609b8d5043072f0b6a8bae45ae5a67a00d3a1a18d673", + "contractAddress": "0x0000000000000000000000000000000000000000", + "gasUsed": "0x5208", + "blockHash": "0x0000000000000000000000000000000000000000000000000000000000000000", + "transactionIndex": "0x0" + } + ], + "rejected": [ + { + "index": 1, + "error": "nonce too low: address 0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192, tx: 0 state: 1" + } + ], + "currentDifficulty": "0x20000", + "gasUsed": "0x5208" +} +``` -### State Reconstruction +We can make them spit out the data to e.g. 
`stdout` like this: +``` +./evm t8n --input.alloc=./testdata/1/alloc.json --input.txs=./testdata/1/txs.json --input.env=./testdata/1/env.json --output.result=stdout --output.alloc=stdout --state.fork=Berlin +``` +Output: +```json +{ + "alloc": { + "0x8a8eafb1cf62bfbeb1741769dae1a9dd47996192": { + "balance": "0xfeed1a9d", + "nonce": "0x1" + }, + "0xa94f5374fce5edbc8e2a8697c15331677e6ebf0b": { + "balance": "0x5ffd4878be161d74", + "nonce": "0xac" + }, + "0xc94f5374fce5edbc8e2a8697c15331677e6ebf0b": { + "balance": "0xa410" + } + }, + "result": { + "stateRoot": "0x84208a19bc2b46ada7445180c1db162be5b39b9abc8c0a54b05d32943eae4e13", + "txRoot": "0xc4761fd7b87ff2364c7c60b6c5c8d02e522e815328aaea3f20e3b7b7ef52c42d", + "receiptsRoot": "0x056b23fbba480696b65fe5a59b8f2148a1299103c4f57df839233af2cf4ca2d2", + "logsHash": "0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347", + "logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000", + "receipts": [ + { + "root": "0x", + "status": "0x1", + "cumulativeGasUsed": "0x5208", + "logsBloom": 
"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000", + "logs": null, + "transactionHash": "0x0557bacce3375c98d806609b8d5043072f0b6a8bae45ae5a67a00d3a1a18d673", + "contractAddress": "0x0000000000000000000000000000000000000000", + "gasUsed": "0x5208", + "blockHash": "0x0000000000000000000000000000000000000000000000000000000000000000", + "transactionIndex": "0x0" + } + ], + "rejected": [ + { + "index": 1, + "error": "nonce too low: address 0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192, tx: 0 state: 1" + } + ], + "currentDifficulty": "0x20000", + "gasUsed": "0x5208" + } +} +``` -| Option | Type | Description | Default | -|--------|------|-------------|---------| -| `allow-missing-tries` | bool | Suppress warnings about incomplete trie index | `false` | -| `populate-missing-tries` | uint64 | Starting block for re-populating missing tries (null = disabled) | `null` | -| `populate-missing-tries-parallelism` | int | Concurrent readers for re-populating missing tries | `1024` | +#### About Ommers -### Offline Pruning +Mining rewards and ommer rewards might need to be added. This is how those are applied: -> **Note**: If offline pruning is enabled it will run on startup and block until it completes (approximately one hour on Mainnet). This will reduce the size of the database by deleting old trie nodes. 
**While performing offline pruning, your node will not be able to process blocks and will be considered offline.** While ongoing, the pruning process consumes a small amount of additional disk space (for deletion markers and the bloom filter). For more information see the [disk space considerations documentation](https://build.avax.network/docs/nodes/maintain/reduce-disk-usage#disk-space-considerations). Since offline pruning deletes old state data, this should not be run on nodes that need to support archival API requests. This is meant to be run manually, so after running with this flag once, it must be toggled back to false before running the node again. Therefore, you should run with this flag set to true and then set it to false on the subsequent run.

+- `block_reward` is the block mining reward for the miner (`0xaa`), of a block at height `N`.
+- For each ommer (mined by `0xbb`), with blocknumber `N-delta`
+  - (where `delta` is the difference between the current block and the ommer)
+  - The account `0xbb` (ommer miner) is awarded `(8-delta)/8 * block_reward`
+  - The account `0xaa` (block miner) is awarded `block_reward / 32`

-| Option | Type | Description | Default |
-|--------|------|-------------|---------|
-| `offline-pruning-enabled` | bool | Enable offline pruning | `false` |
-| `offline-pruning-bloom-filter-size` | uint64 | Bloom filter size for offline pruning in MB | `512` |
-| `offline-pruning-data-directory` | string | Directory for offline pruning data | - |

+To make `t8n` apply these, the following inputs are required:

-### Historical Data

+- `--state.reward`
+  - For ethash, it is `5000000000000000000` `wei`,
+  - If this is not defined, mining rewards are not applied,
+  - A value of `0` is valid, and causes accounts to be 'touched'.
+- For each ommer, the tool needs to be given an `address` and a `delta`. This
+  is done via the `ommers` field in `env`.
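Assuming integer arithmetic in wei, the reward rules above can be sketched as follows (the function names are illustrative, not taken from the tool's source). With a block reward of 128 wei (`0x80`) and ommers at `delta` 1 and 2, they reproduce the balances `0x88`, `0x70`, and `0x60` in the `testdata/5` output shown below, which implies that the reward used for that example was `0x80`.

```go
package main

import "fmt"

// minerReward: the block reward plus block_reward/32 for each included ommer.
func minerReward(blockReward, numOmmers uint64) uint64 {
	return blockReward + numOmmers*blockReward/32
}

// ommerReward: (8-delta)/8 * block_reward for an ommer delta blocks back.
func ommerReward(blockReward, delta uint64) uint64 {
	return (8 - delta) * blockReward / 8
}

func main() {
	const reward = 128 // 0x80
	fmt.Printf("0x%x\n", minerReward(reward, 2)) // 0x88
	fmt.Printf("0x%x\n", ommerReward(reward, 1)) // 0x70
	fmt.Printf("0x%x\n", ommerReward(reward, 2)) // 0x60
}
```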
-| Option | Type | Description | Default | -|--------|------|-------------|---------| -| `historical-proof-query-window` | uint64 | Number of blocks before last accepted for proof queries (archive mode only, ~24 hours) | `43200` | -| `state-history` | uint64 | Number of most recent states that are accesible on disk (pruning mode only) | `32` | +Note: the tool does not verify that e.g. the normal uncle rules apply, +and allows e.g two uncles at the same height, or the uncle-distance. This means that +the tool allows for negative uncle reward (distance > 8) -## Transaction Pool Configuration +Example: +`./testdata/5/env.json`: +```json +{ + "currentCoinbase": "0xaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa", + "currentDifficulty": "0x20000", + "currentGasLimit": "0x750a163df65e8a", + "currentNumber": "1", + "currentTimestamp": "1000", + "ommers": [ + {"delta": 1, "address": "0xbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb" }, + {"delta": 2, "address": "0xcccccccccccccccccccccccccccccccccccccccc" } + ] +} +``` +When applying this, using a reward of `0x08` +Output: +```json +{ + "alloc": { + "0xaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": { + "balance": "0x88" + }, + "0xbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb": { + "balance": "0x70" + }, + "0xcccccccccccccccccccccccccccccccccccccccc": { + "balance": "0x60" + } + } +} +``` +#### Future EIPS -| Option | Type | Description | Default | -|--------|------|-------------|---------| -| `tx-pool-price-limit` | uint64 | Minimum gas price for transaction acceptance | - | -| `tx-pool-price-bump` | uint64 | Minimum price bump percentage for transaction replacement | - | -| `tx-pool-account-slots` | uint64 | Maximum number of executable transaction slots per account | - | -| `tx-pool-global-slots` | uint64 | Maximum number of executable transaction slots for all accounts | - | -| `tx-pool-account-queue` | uint64 | Maximum number of non-executable transaction slots per account | - | -| `tx-pool-global-queue` | uint64 | Maximum number of 
non-executable transaction slots for all accounts | - |
-| `tx-pool-lifetime` | duration | Maximum time transactions can stay in the pool | - |

+It is also possible to experiment with future EIPs that are not yet defined in a hard fork.
+For example, putting EIP-1344 into Frontier:
+```
+./evm t8n --state.fork=Frontier+1344 --input.pre=./testdata/1/pre.json --input.txs=./testdata/1/txs.json --input.env=./testdata/1/env.json
+```

-## Gossip Configuration

+#### Block history

-### Push Gossip Settings

+The `BLOCKHASH` opcode requires blockhashes to be provided by the caller, inside the `env`.
+If a required blockhash is not provided, the exit code should be `4`:
+Example where blockhashes are provided:
+```
+./evm t8n --input.alloc=./testdata/3/alloc.json --input.txs=./testdata/3/txs.json --input.env=./testdata/3/env.json --trace --state.fork=Berlin

-| Option | Type | Description | Default |
-|--------|------|-------------|---------|
-| `push-gossip-percent-stake` | float64 | Percentage of total stake to push gossip to (range: [0, 1]) | `0.9` |
-| `push-gossip-num-validators` | int | Number of validators to push gossip to | `100` |
-| `push-gossip-num-peers` | int | Number of non-validator peers to push gossip to | `0` |

+```

-### Regossip Settings

+```
+cat trace-0-0x72fadbef39cd251a437eea619cfeda752271a5faaaa2147df012e112159ffb81.jsonl | grep BLOCKHASH -C2
+```
+```
+{"pc":0,"op":96,"gas":"0x5f58ef8","gasCost":"0x3","memSize":0,"stack":[],"depth":1,"refund":0,"opName":"PUSH1"}
+{"pc":2,"op":64,"gas":"0x5f58ef5","gasCost":"0x14","memSize":0,"stack":["0x1"],"depth":1,"refund":0,"opName":"BLOCKHASH"}
+{"pc":3,"op":0,"gas":"0x5f58ee1","gasCost":"0x0","memSize":0,"stack":["0xdac58aa524e50956d0c0bae7f3f8bb9d35381365d07804dd5b48a5a297c06af4"],"depth":1,"refund":0,"opName":"STOP"}
+{"output":"","gasUsed":"0x17"}
+```

-| Option | Type | Description | Default |
-|--------|------|-------------|---------|
-| `push-regossip-num-validators` | int | Number of validators to regossip to
| `10` | -| `push-regossip-num-peers` | int | Number of non-validator peers to regossip to | `0` | -| `priority-regossip-addresses` | array | Addresses to prioritize for regossip | - | +In this example, the caller has not provided the required blockhash: +``` +./evm t8n --input.alloc=./testdata/4/alloc.json --input.txs=./testdata/4/txs.json --input.env=./testdata/4/env.json --trace --state.fork=Berlin +ERROR(4): getHash(3) invoked, blockhash for that block not provided +``` +Error code: 4 -### Timing Configuration +#### Chaining -| Option | Type | Description | Default | -|--------|------|-------------|---------| -| `push-gossip-frequency` | duration | Frequency of push gossip | `100ms` | -| `pull-gossip-frequency` | duration | Frequency of pull gossip | `1s` | -| `regossip-frequency` | duration | Frequency of regossip | `30s` | +Another thing that can be done, is to chain invocations: +``` +./evm t8n --input.alloc=./testdata/1/alloc.json --input.txs=./testdata/1/txs.json --input.env=./testdata/1/env.json --state.fork=Berlin --output.alloc=stdout | ./evm t8n --input.alloc=stdin --input.env=./testdata/1/env.json --input.txs=./testdata/1/txs.json --state.fork=Berlin -## Logging and Monitoring +``` +What happened here, is that we first applied two identical transactions, so the second one was rejected. +Then, taking the poststate alloc as the input for the next state, we tried again to include +the same two transactions: this time, both failed due to too low nonce. -### Logging +In order to meaningfully chain invocations, one would need to provide meaningful new `env`, otherwise the +actual blocknumber (exposed to the EVM) would not increase. 
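To give a chained invocation a fresh block context, the `env` can be advanced between runs. A minimal sketch, assuming the decimal-string encoding used in the example `env.json` files (the 12-second timestamp step is an arbitrary choice, not something the tool mandates):

```go
package main

import (
	"encoding/json"
	"fmt"
	"strconv"
)

// advanceEnv bumps currentNumber and currentTimestamp in a decoded env
// object so that the next t8n run sees a new block context.
func advanceEnv(env map[string]any) map[string]any {
	num, _ := strconv.ParseUint(env["currentNumber"].(string), 10, 64)
	ts, _ := strconv.ParseUint(env["currentTimestamp"].(string), 10, 64)
	env["currentNumber"] = strconv.FormatUint(num+1, 10)
	env["currentTimestamp"] = strconv.FormatUint(ts+12, 10)
	return env
}

func main() {
	env := map[string]any{"currentNumber": "1", "currentTimestamp": "1000"}
	out, _ := json.Marshal(advanceEnv(env))
	fmt.Println(string(out)) // {"currentNumber":"2","currentTimestamp":"1012"}
}
```

Feeding the previous run's `--output.alloc` into `--input.alloc` while pointing `--input.env` at the updated file then makes the block number exposed to the EVM advance as expected.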

-| Option | Type | Description | Default |
-|--------|------|-------------|---------|
-| `log-level` | string | Logging level (trace, debug, info, warn, error, crit) | `"info"` |
-| `log-json-format` | bool | Use JSON format for logs | `false` |

+#### Transactions in RLP form

-### Profiling

+It is possible to provide already-signed transactions as input, using an `input.txs` file which ends with the `rlp` suffix.
+The input format for RLP-form transactions is _identical_ to the _output_ format for block bodies. Therefore, it's fully possible
+to use the evm to go from `json` input to `rlp` input.

-| Option | Type | Description | Default |
-|--------|------|-------------|---------|
-| `continuous-profiler-dir` | string | Directory for continuous profiler output (empty = disabled) | - |
-| `continuous-profiler-frequency` | duration | Frequency to run continuous profiler | `15m` |
-| `continuous-profiler-max-files` | int | Maximum number of profiler files to maintain | `5` |

+The following command takes the **json** transactions in `./testdata/13/txs.json` and signs them.
After execution, they are output to `signed_txs.rlp`.: +``` +./evm t8n --state.fork=London --input.alloc=./testdata/13/alloc.json --input.txs=./testdata/13/txs.json --input.env=./testdata/13/env.json --output.result=alloc_jsontx.json --output.body=signed_txs.rlp +INFO [12-27|09:25:11.102] Trie dumping started root=e4b924..6aef61 +INFO [12-27|09:25:11.102] Trie dumping complete accounts=3 elapsed="275.66µs" +INFO [12-27|09:25:11.102] Wrote file file=alloc.json +INFO [12-27|09:25:11.103] Wrote file file=alloc_jsontx.json +INFO [12-27|09:25:11.103] Wrote file file=signed_txs.rlp +``` -### Metrics +The `output.body` is the rlp-list of transactions, encoded in hex and placed in a string a'la `json` encoding rules: +``` +cat signed_txs.rlp +"0xf8d2b86702f864010180820fa08284d09411111111111111111111111111111111111111118080c001a0b7dfab36232379bb3d1497a4f91c1966b1f932eae3ade107bf5d723b9cb474e0a06261c359a10f2132f126d250485b90cf20f30340801244a08ef6142ab33d1904b86702f864010280820fa08284d09411111111111111111111111111111111111111118080c080a0d4ec563b6568cd42d998fc4134b36933c6568d01533b5adf08769270243c6c7fa072bf7c21eac6bbeae5143371eef26d5e279637f3bd73482b55979d76d935b1e9" +``` -| Option | Type | Description | Default | -|--------|------|-------------|---------| -| `metrics-expensive-enabled` | bool | Enable expensive debug-level metrics; this includes Firewood metrics | `true` | +We can use `rlpdump` to check what the contents are: +``` +rlpdump -hex $(cat signed_txs.rlp | jq -r ) +[ + 02f864010180820fa08284d09411111111111111111111111111111111111111118080c001a0b7dfab36232379bb3d1497a4f91c1966b1f932eae3ade107bf5d723b9cb474e0a06261c359a10f2132f126d250485b90cf20f30340801244a08ef6142ab33d1904, + 02f864010280820fa08284d09411111111111111111111111111111111111111118080c080a0d4ec563b6568cd42d998fc4134b36933c6568d01533b5adf08769270243c6c7fa072bf7c21eac6bbeae5143371eef26d5e279637f3bd73482b55979d76d935b1e9, +] +``` +Now, we can now use those (or any other already signed transactions), as 
input, like so: +``` +./evm t8n --state.fork=London --input.alloc=./testdata/13/alloc.json --input.txs=./signed_txs.rlp --input.env=./testdata/13/env.json --output.result=alloc_rlptx.json +INFO [12-27|09:25:11.187] Trie dumping started root=e4b924..6aef61 +INFO [12-27|09:25:11.187] Trie dumping complete accounts=3 elapsed="123.676µs" +INFO [12-27|09:25:11.187] Wrote file file=alloc.json +INFO [12-27|09:25:11.187] Wrote file file=alloc_rlptx.json +``` +You might have noticed that the results from these two invocations were stored in two separate files. +And we can now finally check that they match. +``` +cat alloc_jsontx.json | jq .stateRoot && cat alloc_rlptx.json | jq .stateRoot +"0xe4b924a6adb5959fccf769d5b7bb2f6359e26d1e76a2443c5a91a36d826aef61" +"0xe4b924a6adb5959fccf769d5b7bb2f6359e26d1e76a2443c5a91a36d826aef61" +``` -## Security and Access +## Transaction tool -### Keystore +The transaction tool is used to perform static validity checks on transactions such as: +* intrinsic gas calculation +* max values on integers +* fee semantics, such as `maxFeePerGas < maxPriorityFeePerGas` +* newer tx types on old forks -| Option | Type | Description | Default | -|--------|------|-------------|---------| -| `keystore-directory` | string | Directory for keystore files (absolute or relative path) | - | -| `keystore-external-signer` | string | External signer configuration | - | -| `keystore-insecure-unlock-allowed` | bool | Allow insecure account unlocking | `false` | +### Examples -### Fee Configuration +``` +./evm t9n --state.fork Homestead --input.txs testdata/15/signed_txs.rlp +[ + { + "error": "transaction type not supported", + "hash": "0xa98a24882ea90916c6a86da650fbc6b14238e46f0af04a131ce92be897507476" + }, + { + "error": "transaction type not supported", + "hash": "0x36bad80acce7040c45fd32764b5c2b2d2e6f778669fb41791f73f546d56e739a" + } +] +``` +``` +./evm t9n --state.fork London --input.txs testdata/15/signed_txs.rlp +[ + { + "address": 
"0xd02d72e067e77158444ef2020ff2d325f929b363", + "hash": "0xa98a24882ea90916c6a86da650fbc6b14238e46f0af04a131ce92be897507476", + "intrinsicGas": "0x5208" + }, + { + "address": "0xd02d72e067e77158444ef2020ff2d325f929b363", + "hash": "0x36bad80acce7040c45fd32764b5c2b2d2e6f778669fb41791f73f546d56e739a", + "intrinsicGas": "0x5208" + } +] +``` +## Block builder tool (b11r) -| Option | Type | Description | Default | -|--------|------|-------------|---------| -| `feeRecipient` | string | Address to send transaction fees to (leave empty if not supported) | - | +The `evm b11r` tool is used to assemble and seal full block rlps. -## Network and Sync +### Specification -### Network +#### Command line params -| Option | Type | Description | Default | -|--------|------|-------------|---------| -| `max-outbound-active-requests` | int64 | Maximum number of outbound active requests for VM2VM network | `16` | +Command line params that need to be supported are: -### State Sync +``` + --input.header value `stdin` or file name of where to find the block header to use. (default: "header.json") + --input.ommers value `stdin` or file name of where to find the list of ommer header RLPs to use. + --input.txs value `stdin` or file name of where to find the transactions list in RLP form. (default: "txs.rlp") + --output.basedir value Specifies where output files are placed. Will be created if it does not exist. + --output.block value Determines where to put the alloc of the post-state. (default: "block.json") + - into the file + `stdout` - into the stdout output + `stderr` - into the stderr output + --seal.clique value Seal block with Clique. `stdin` or file name of where to find the Clique sealing data. + --seal.ethash Seal block with ethash. (default: false) + --seal.ethash.dir value Path to ethash DAG. If none exists, a new DAG will be generated. + --seal.ethash.mode value Defines the type and amount of PoW verification an ethash engine makes. 
(default: "normal") + --verbosity value Sets the verbosity level. (default: 3) +``` -> **Note:** If state-sync is enabled, the peer will download chain state from peers up to a recent block near tip, then proceed with normal bootstrapping. Please note that if you need historical data, state sync isn't the right option. However, it is sufficient if you are just running a validator. +#### Objects + +##### `header` + +The `header` object is a consensus header. + +```go= +type Header struct { + ParentHash common.Hash `json:"parentHash"` + OmmerHash *common.Hash `json:"sha3Uncles"` + Coinbase *common.Address `json:"miner"` + Root common.Hash `json:"stateRoot" gencodec:"required"` + TxHash *common.Hash `json:"transactionsRoot"` + ReceiptHash *common.Hash `json:"receiptsRoot"` + Bloom types.Bloom `json:"logsBloom"` + Difficulty *big.Int `json:"difficulty"` + Number *big.Int `json:"number" gencodec:"required"` + GasLimit uint64 `json:"gasLimit" gencodec:"required"` + GasUsed uint64 `json:"gasUsed"` + Time uint64 `json:"timestamp" gencodec:"required"` + Extra []byte `json:"extraData"` + MixDigest common.Hash `json:"mixHash"` + Nonce *types.BlockNonce `json:"nonce"` + BaseFee *big.Int `json:"baseFeePerGas"` +} +``` +#### `ommers` -| Option | Type | Description | Default | -|--------|------|-------------|---------| -| `state-sync-enabled` | bool | Enable state sync | `false` | -| `state-sync-skip-resume` | bool | Force state sync to use highest available summary block | `false` | -| `state-sync-ids` | string | Comma-separated list of state sync IDs | - | -| `state-sync-commit-interval` | uint64 | Commit interval for state sync (blocks) | `16384` | -| `state-sync-min-blocks` | uint64 | Minimum blocks ahead required for state sync | `300000` | -| `state-sync-request-size` | uint16 | Number of key/values to request per state sync request | `1024` | +The `ommers` object is a list of RLP-encoded ommer blocks in hex +representation. 
-## Database Configuration +```go= +type Ommers []string +``` -> **WARNING**: `firewood` and `path` schemes are untested in production. Using `path` is strongly discouraged. To use `firewood`, you must also set the following config options: -> -> - `populate-missing-tries: nil` -> - `state-sync-enabled: false` -> - `snapshot-cache: 0` +#### `txs` -Failing to set these options will result in errors on VM initialization. Additionally, not all APIs are available - see these portions of the config documentation for more details. +The `txs` object is a list of RLP-encoded transactions in hex representation. -| Option | Type | Description | Default | -|--------|------|-------------|---------| -| `database-type` | string | Type of database to use | `"pebbledb"` | -| `database-path` | string | Path to database directory | - | -| `database-read-only` | bool | Open database in read-only mode | `false` | -| `database-config` | string | Inline database configuration | - | -| `database-config-file` | string | Path to database configuration file | - | -| `use-standalone-database` | bool | Use standalone database instead of shared one | - | -| `inspect-database` | bool | Inspect database on startup | `false` | -| `state-scheme` | string | EXPERIMENTAL: specifies the database scheme to store state data; can be one of `hash` or `firewood` | `hash` | +```go= +type Txs []string +``` -## Transaction Indexing +#### `clique` -| Option | Type | Description | Default | -|--------|------|-------------|---------| -| `transaction-history` | uint64 | Maximum number of blocks from head whose transaction indices are reserved (0 = no limit) | - | -| `tx-lookup-limit` | uint64 | **Deprecated** - use `transaction-history` instead | - | -| `skip-tx-indexing` | bool | Skip indexing transactions entirely | `false` | +The `clique` object provides the necessary information to complete a clique +seal of the block. 
-## Warp Configuration +```go= +var CliqueInfo struct { + Key *common.Hash `json:"secretKey"` + Voted *common.Address `json:"voted"` + Authorize *bool `json:"authorize"` + Vanity common.Hash `json:"vanity"` +} +``` -| Option | Type | Description | Default | -|--------|------|-------------|---------| -| `warp-off-chain-messages` | array | Off-chain messages the node should be willing to sign | - | -| `prune-warp-db-enabled` | bool | Clear warp database on startup | `false` | +#### `output` -## Miscellaneous +The `output` object contains two values, the block RLP and the block hash. -| Option | Type | Description | Default | -|--------|------|-------------|---------| -| `airdrop` | string | Path to airdrop file | - | -| `skip-upgrade-check` | bool | Skip checking that upgrades occur before last accepted block ⚠️ **Warning**: Only use when you understand the implications | `false` | -| `min-delay-target` | integer | The minimum delay between blocks (in milliseconds) that this node will attempt to use when creating blocks | Parent block's target | +```go= +type BlockInfo struct { + Rlp []byte `json:"rlp"` + Hash common.Hash `json:"hash"` +} +``` -## Gossip Constants +## A Note on Encoding -The following constants are defined for transaction gossip behavior and cannot be configured without a custom build of Subnet-EVM: +The encoding of values for `evm` utility attempts to be relatively flexible. It +generally supports hex-encoded or decimal-encoded numeric values, and +hex-encoded byte values (like `common.Address`, `common.Hash`, etc). When in +doubt, the [`execution-apis`](https://github.com/ethereum/execution-apis) way +of encoding should always be accepted. 
-| Constant | Type | Description | Value | -|----------|------|-------------|-------| -| Bloom Filter Min Target Elements | int | Minimum target elements for bloom filter | `8,192` | -| Bloom Filter Target False Positive Rate | float | Target false positive rate | `1%` | -| Bloom Filter Reset False Positive Rate | float | Reset false positive rate | `5%` | -| Bloom Filter Churn Multiplier | int | Churn multiplier | `3` | -| Push Gossip Discarded Elements | int | Number of discarded elements | `16,384` | -| Tx Gossip Target Message Size | size | Target message size for transaction gossip | `20 KiB` | -| Tx Gossip Throttling Period | duration | Throttling period | `10s` | -| Tx Gossip Throttling Limit | int | Throttling limit | `2` | -| Tx Gossip Poll Size | int | Poll size | `1` | - -## Validation Notes +## Testing -- Cannot enable `populate-missing-tries` while pruning or offline pruning is enabled -- Cannot run offline pruning while pruning is disabled -- Commit interval must be non-zero when pruning is enabled -- `push-gossip-percent-stake` must be in range `[0, 1]` -- Some settings may require node restart to take effect +There are many test cases in the [`cmd/evm/testdata`](./testdata) directory. +These fixtures are used to power the `t8n` tests in +[`t8n_test.go`](./t8n_test.go). The best way to verify correctness of new `evm` +implementations is to execute these and verify the output and error codes match +the expected values. \ No newline at end of file From 0698fa11564c941e228335c7c0c7779664c3ad7b Mon Sep 17 00:00:00 2001 From: Jonathan Oppenheimer Date: Mon, 1 Dec 2025 16:33:27 -0500 Subject: [PATCH 11/20] fix: wrong readme --- cmd/evm/README.md | 211 +++++----- plugin/evm/config/config.md | 778 +++++++++++------------------------- 2 files changed, 319 insertions(+), 670 deletions(-) diff --git a/cmd/evm/README.md b/cmd/evm/README.md index 3224e259f6..2d4c5e435d 100644 --- a/cmd/evm/README.md +++ b/cmd/evm/README.md @@ -9,19 +9,20 @@ layer. 
## State transition tool (`t8n`) + The `evm t8n` tool is a stateless state transition utility. It is a utility which can 1. Take a prestate, including - * Accounts, - * Block context information, - * Previous blockhashes (*optional) -1. Apply a set of transactions, -1. Apply a mining-reward (*optional), -1. And generate a post-state, including - * State root, transaction root, receipt root, - * Information about rejected transactions, - * Optionally: a full or partial post-state dump + - Accounts, + - Block context information, + - Previous blockhashes (*optional) +2. Apply a set of transactions, +3. Apply a mining-reward (*optional), +4. And generate a post-state, including + - State root, transaction root, receipt root, + - Information about rejected transactions, + - Optionally: a full or partial post-state dump ### Specification @@ -34,7 +35,7 @@ implementation. Command line params that need to be supported are -```bash +``` --input.alloc value (default: "alloc.json") --input.env value (default: "env.json") --input.txs value (default: "txs.json") @@ -51,7 +52,6 @@ Command line params that need to be supported are --trace.nostack (default: false) --trace.returndata (default: false) ``` - #### Objects The transition tool uses JSON objects to read and write data related to the transition operation. 
The @@ -118,50 +118,50 @@ The `txs` object is an array of any of the transaction types: `LegacyTx`, ```go type LegacyTx struct { - Nonce uint64 `json:"nonce"` - GasPrice *big.Int `json:"gasPrice"` - Gas uint64 `json:"gas"` - To *common.Address `json:"to"` - Value *big.Int `json:"value"` - Data []byte `json:"data"` - V *big.Int `json:"v"` - R *big.Int `json:"r"` - S *big.Int `json:"s"` - SecretKey *common.Hash `json:"secretKey"` + Nonce uint64 `json:"nonce"` + GasPrice *big.Int `json:"gasPrice"` + Gas uint64 `json:"gas"` + To *common.Address `json:"to"` + Value *big.Int `json:"value"` + Data []byte `json:"data"` + V *big.Int `json:"v"` + R *big.Int `json:"r"` + S *big.Int `json:"s"` + SecretKey *common.Hash `json:"secretKey"` } type AccessList []AccessTuple type AccessTuple struct { - Address common.Address `json:"address" gencodec:"required"` - StorageKeys []common.Hash `json:"storageKeys" gencodec:"required"` + Address common.Address `json:"address" gencodec:"required"` + StorageKeys []common.Hash `json:"storageKeys" gencodec:"required"` } type AccessListTx struct { - ChainID *big.Int `json:"chainId"` - Nonce uint64 `json:"nonce"` - GasPrice *big.Int `json:"gasPrice"` - Gas uint64 `json:"gas"` - To *common.Address `json:"to"` - Value *big.Int `json:"value"` - Data []byte `json:"data"` - AccessList AccessList `json:"accessList"` - V *big.Int `json:"v"` - R *big.Int `json:"r"` - S *big.Int `json:"s"` - SecretKey *common.Hash `json:"secretKey"` + ChainID *big.Int `json:"chainId"` + Nonce uint64 `json:"nonce"` + GasPrice *big.Int `json:"gasPrice"` + Gas uint64 `json:"gas"` + To *common.Address `json:"to"` + Value *big.Int `json:"value"` + Data []byte `json:"data"` + AccessList AccessList `json:"accessList"` + V *big.Int `json:"v"` + R *big.Int `json:"r"` + S *big.Int `json:"s"` + SecretKey *common.Hash `json:"secretKey"` } type DynamicFeeTx struct { - ChainID *big.Int `json:"chainId"` - Nonce uint64 `json:"nonce"` - GasTipCap *big.Int `json:"maxPriorityFeePerGas"` - 
GasFeeCap *big.Int `json:"maxFeePerGas"` - Gas uint64 `json:"gas"` - To *common.Address `json:"to"` - Value *big.Int `json:"value"` - Data []byte `json:"data"` - AccessList AccessList `json:"accessList"` - V *big.Int `json:"v"` - R *big.Int `json:"r"` - S *big.Int `json:"s"` - SecretKey *common.Hash `json:"secretKey"` + ChainID *big.Int `json:"chainId"` + Nonce uint64 `json:"nonce"` + GasTipCap *big.Int `json:"maxPriorityFeePerGas"` + GasFeeCap *big.Int `json:"maxFeePerGas"` + Gas uint64 `json:"gas"` + To *common.Address `json:"to"` + Value *big.Int `json:"value"` + Data []byte `json:"data"` + AccessList AccessList `json:"accessList"` + V *big.Int `json:"v"` + R *big.Int `json:"r"` + S *big.Int `json:"s"` + SecretKey *common.Hash `json:"secretKey"` } ``` @@ -192,44 +192,40 @@ There are a few (not many) errors that can occur, those are defined below. ##### EVM-based errors (`2` to `9`) -* Other EVM error. Exit code `2` -* Failed configuration: when a non-supported or invalid fork was specified. Exit code `3`. -* Block history is not supplied, but needed for a `BLOCKHASH` operation. If `BLOCKHASH` +- Other EVM error. Exit code `2` +- Failed configuration: when a non-supported or invalid fork was specified. Exit code `3`. +- Block history is not supplied, but needed for a `BLOCKHASH` operation. If `BLOCKHASH` is invoked targeting a block which history has not been provided for, the program will exit with code `4`. ##### IO errors (`10`-`20`) -* Invalid input json: the supplied data could not be marshalled. +- Invalid input json: the supplied data could not be marshalled. 
The program will exit with code `10` -* IO problems: failure to load or save files, the program will exit with code `11` +- IO problems: failure to load or save files, the program will exit with code `11` -```bash +``` # This should exit with 3 ./evm t8n --input.alloc=./testdata/1/alloc.json --input.txs=./testdata/1/txs.json --input.env=./testdata/1/env.json --state.fork=Frontier+1346 2>/dev/null exitcode:3 OK ``` - #### Forks +### Basic usage The chain configuration to be used for a transition is specified via the `--state.fork` CLI flag. A list of possible values and configurations can be found in [`tests/init.go`](../../tests/init.go). #### Examples - ##### Basic usage Invoking it with the provided example files - -```bash +``` ./evm t8n --input.alloc=./testdata/1/alloc.json --input.txs=./testdata/1/txs.json --input.env=./testdata/1/env.json --state.fork=Berlin ``` - Two resulting files: `alloc.json`: - ```json { "0x8a8eafb1cf62bfbeb1741769dae1a9dd47996192": { @@ -245,9 +241,7 @@ Two resulting files: } } ``` - `result.json`: - ```json { "stateRoot": "0x84208a19bc2b46ada7445180c1db162be5b39b9abc8c0a54b05d32943eae4e13", @@ -281,13 +275,10 @@ Two resulting files: ``` We can make them spit out the data to e.g. `stdout` like this: - -```bash +``` ./evm t8n --input.alloc=./testdata/1/alloc.json --input.txs=./testdata/1/txs.json --input.env=./testdata/1/env.json --output.result=stdout --output.alloc=stdout --state.fork=Berlin ``` - Output: - ```json { "alloc": { @@ -339,19 +330,19 @@ Output: Mining rewards and ommer rewards might need to be added. This is how those are applied: -* `block_reward` is the block mining reward for the miner (`0xaa`), of a block at height `N`. 
-* For each ommer (mined by `0xbb`), with blocknumber `N-delta` - * (where `delta` is the difference between the current block and the ommer) - * The account `0xbb` (ommer miner) is awarded `(8-delta)/ 8 * block_reward` - * The account `0xaa` (block miner) is awarded `block_reward / 32` +- `block_reward` is the block mining reward for the miner (`0xaa`), of a block at height `N`. +- For each ommer (mined by `0xbb`), with blocknumber `N-delta` + - (where `delta` is the difference between the current block and the ommer) + - The account `0xbb` (ommer miner) is awarded `(8-delta)/ 8 * block_reward` + - The account `0xaa` (block miner) is awarded `block_reward / 32` To make `t8n` apply these, the following inputs are required: -* `--state.reward` - * For ethash, it is `5000000000000000000` `wei`, - * If this is not defined, mining rewards are not applied, - * A value of `0` is valid, and causes accounts to be 'touched'. -* For each ommer, the tool needs to be given an `address\` and a `delta`. This +- `--state.reward` + - For ethash, it is `5000000000000000000` `wei`, + - If this is not defined, mining rewards are not applied, + - A value of `0` is valid, and causes accounts to be 'touched'. +- For each ommer, the tool needs to be given an `address\` and a `delta`. This is done via the `ommers` field in `env`. Note: the tool does not verify that e.g. the normal uncle rules apply, @@ -359,9 +350,7 @@ and allows e.g two uncles at the same height, or the uncle-distance. This means the tool allows for negative uncle reward (distance > 8) Example: - `./testdata/5/env.json`: - ```json { "currentCoinbase": "0xaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa", @@ -375,11 +364,8 @@ Example: ] } ``` - When applying this, using a reward of `0x08` - Output: - ```json { "alloc": { @@ -395,14 +381,11 @@ Output: } } ``` - #### Future EIPS It is also possible to experiment with future eips that are not yet defined in a hard fork. 
- Example, putting EIP-1344 into Frontier: - -```bash +``` ./evm t8n --state.fork=Frontier+1344 --input.pre=./testdata/1/pre.json --input.txs=./testdata/1/txs.json --input.env=/testdata/1/env.json ``` @@ -410,18 +393,16 @@ Example, putting EIP-1344 into Frontier: The `BLOCKHASH` opcode requires blockhashes to be provided by the caller, inside the `env`. If a required blockhash is not provided, the exit code should be `4`: - Example where blockhashes are provided: - -```bash +``` ./evm t8n --input.alloc=./testdata/3/alloc.json --input.txs=./testdata/3/txs.json --input.env=./testdata/3/env.json --trace --state.fork=Berlin + ``` -```bash +``` cat trace-0-0x72fadbef39cd251a437eea619cfeda752271a5faaaa2147df012e112159ffb81.jsonl | grep BLOCKHASH -C2 ``` - -```json +``` {"pc":0,"op":96,"gas":"0x5f58ef8","gasCost":"0x3","memSize":0,"stack":[],"depth":1,"refund":0,"opName":"PUSH1"} {"pc":2,"op":64,"gas":"0x5f58ef5","gasCost":"0x14","memSize":0,"stack":["0x1"],"depth":1,"refund":0,"opName":"BLOCKHASH"} {"pc":3,"op":0,"gas":"0x5f58ee1","gasCost":"0x0","memSize":0,"stack":["0xdac58aa524e50956d0c0bae7f3f8bb9d35381365d07804dd5b48a5a297c06af4"],"depth":1,"refund":0,"opName":"STOP"} @@ -429,22 +410,19 @@ cat trace-0-0x72fadbef39cd251a437eea619cfeda752271a5faaaa2147df012e112159ffb81.j ``` In this example, the caller has not provided the required blockhash: - -```bash +``` ./evm t8n --input.alloc=./testdata/4/alloc.json --input.txs=./testdata/4/txs.json --input.env=./testdata/4/env.json --trace --state.fork=Berlin ERROR(4): getHash(3) invoked, blockhash for that block not provided ``` - Error code: 4 #### Chaining Another thing that can be done, is to chain invocations: - -```bash -./evm t8n --input.alloc=./testdata/1/alloc.json --input.txs=./testdata/1/txs.json --input.env=./testdata/1/env.json --state.fork=Berlin --output.alloc=stdout | ./evm t8n --input.alloc=stdin --input.env=./testdata/1/env.json --input.txs=./testdata/1/txs.json --state.fork=Berlin ``` +./evm t8n 
--input.alloc=./testdata/1/alloc.json --input.txs=./testdata/1/txs.json --input.env=./testdata/1/env.json --state.fork=Berlin --output.alloc=stdout | ./evm t8n --input.alloc=stdin --input.env=./testdata/1/env.json --input.txs=./testdata/1/txs.json --state.fork=Berlin +``` What happened here, is that we first applied two identical transactions, so the second one was rejected. Then, taking the poststate alloc as the input for the next state, we tried again to include the same two transactions: this time, both failed due to too low nonce. @@ -459,8 +437,7 @@ The input format for RLP-form transactions is _identical_ to the _output_ format to use the evm to go from `json` input to `rlp` input. The following command takes **json** the transactions in `./testdata/13/txs.json` and signs them. After execution, they are output to `signed_txs.rlp`.: - -```bash +``` ./evm t8n --state.fork=London --input.alloc=./testdata/13/alloc.json --input.txs=./testdata/13/txs.json --input.env=./testdata/13/env.json --output.result=alloc_jsontx.json --output.body=signed_txs.rlp INFO [12-27|09:25:11.102] Trie dumping started root=e4b924..6aef61 INFO [12-27|09:25:11.102] Trie dumping complete accounts=3 elapsed="275.66µs" @@ -470,36 +447,30 @@ INFO [12-27|09:25:11.103] Wrote file file=signed_t ``` The `output.body` is the rlp-list of transactions, encoded in hex and placed in a string a'la `json` encoding rules: - -```bash +``` cat signed_txs.rlp "0xf8d2b86702f864010180820fa08284d09411111111111111111111111111111111111111118080c001a0b7dfab36232379bb3d1497a4f91c1966b1f932eae3ade107bf5d723b9cb474e0a06261c359a10f2132f126d250485b90cf20f30340801244a08ef6142ab33d1904b86702f864010280820fa08284d09411111111111111111111111111111111111111118080c080a0d4ec563b6568cd42d998fc4134b36933c6568d01533b5adf08769270243c6c7fa072bf7c21eac6bbeae5143371eef26d5e279637f3bd73482b55979d76d935b1e9" ``` We can use `rlpdump` to check what the contents are: - -```bash +``` rlpdump -hex $(cat signed_txs.rlp | jq -r ) [ 
02f864010180820fa08284d09411111111111111111111111111111111111111118080c001a0b7dfab36232379bb3d1497a4f91c1966b1f932eae3ade107bf5d723b9cb474e0a06261c359a10f2132f126d250485b90cf20f30340801244a08ef6142ab33d1904, 02f864010280820fa08284d09411111111111111111111111111111111111111118080c080a0d4ec563b6568cd42d998fc4134b36933c6568d01533b5adf08769270243c6c7fa072bf7c21eac6bbeae5143371eef26d5e279637f3bd73482b55979d76d935b1e9, ] ``` - Now, we can now use those (or any other already signed transactions), as input, like so: - -```bash +``` ./evm t8n --state.fork=London --input.alloc=./testdata/13/alloc.json --input.txs=./signed_txs.rlp --input.env=./testdata/13/env.json --output.result=alloc_rlptx.json INFO [12-27|09:25:11.187] Trie dumping started root=e4b924..6aef61 INFO [12-27|09:25:11.187] Trie dumping complete accounts=3 elapsed="123.676µs" INFO [12-27|09:25:11.187] Wrote file file=alloc.json INFO [12-27|09:25:11.187] Wrote file file=alloc_rlptx.json ``` - You might have noticed that the results from these two invocations were stored in two separate files. And we can now finally check that they match. 
- -```bash +``` cat alloc_jsontx.json | jq .stateRoot && cat alloc_rlptx.json | jq .stateRoot "0xe4b924a6adb5959fccf769d5b7bb2f6359e26d1e76a2443c5a91a36d826aef61" "0xe4b924a6adb5959fccf769d5b7bb2f6359e26d1e76a2443c5a91a36d826aef61" @@ -508,7 +479,6 @@ cat alloc_jsontx.json | jq .stateRoot && cat alloc_rlptx.json | jq .stateRoot ## Transaction tool The transaction tool is used to perform static validity checks on transactions such as: - * intrinsic gas calculation * max values on integers * fee semantics, such as `maxFeePerGas < maxPriorityFeePerGas` @@ -516,7 +486,7 @@ The transaction tool is used to perform static validity checks on transactions s ### Examples -```bash +``` ./evm t9n --state.fork Homestead --input.txs testdata/15/signed_txs.rlp [ { @@ -529,8 +499,7 @@ The transaction tool is used to perform static validity checks on transactions s } ] ``` - -```bash +``` ./evm t9n --state.fork London --input.txs testdata/15/signed_txs.rlp [ { @@ -545,7 +514,6 @@ The transaction tool is used to perform static validity checks on transactions s } ] ``` - ## Block builder tool (b11r) The `evm b11r` tool is used to assemble and seal full block rlps. @@ -556,7 +524,7 @@ The `evm b11r` tool is used to assemble and seal full block rlps. Command line params that need to be supported are: -```bash +``` --input.header value `stdin` or file name of where to find the block header to use. (default: "header.json") --input.ommers value `stdin` or file name of where to find the list of ommer header RLPs to use. --input.txs value `stdin` or file name of where to find the transactions list in RLP form. (default: "txs.rlp") @@ -578,7 +546,7 @@ Command line params that need to be supported are: The `header` object is a consensus header. 
-```go +```go= type Header struct { ParentHash common.Hash `json:"parentHash"` OmmerHash *common.Hash `json:"sha3Uncles"` @@ -598,13 +566,12 @@ type Header struct { BaseFee *big.Int `json:"baseFeePerGas"` } ``` - #### `ommers` The `ommers` object is a list of RLP-encoded ommer blocks in hex representation. -```go +```go= type Ommers []string ``` @@ -612,7 +579,7 @@ type Ommers []string The `txs` object is a list of RLP-encoded transactions in hex representation. -```go +```go= type Txs []string ``` @@ -621,7 +588,7 @@ type Txs []string The `clique` object provides the necessary information to complete a clique seal of the block. -```go +```go= var CliqueInfo struct { Key *common.Hash `json:"secretKey"` Voted *common.Address `json:"voted"` @@ -634,7 +601,7 @@ var CliqueInfo struct { The `output` object contains two values, the block RLP and the block hash. -```go +```go= type BlockInfo struct { Rlp []byte `json:"rlp"` Hash common.Hash `json:"hash"` @@ -655,4 +622,4 @@ There are many test cases in the [`cmd/evm/testdata`](./testdata) directory. These fixtures are used to power the `t8n` tests in [`t8n_test.go`](./t8n_test.go). The best way to verify correctness of new `evm` implementations is to execute these and verify the output and error codes match -the expected values. +the expected values. \ No newline at end of file diff --git a/plugin/evm/config/config.md b/plugin/evm/config/config.md index 2d4c5e435d..1bc7377403 100644 --- a/plugin/evm/config/config.md +++ b/plugin/evm/config/config.md @@ -1,625 +1,307 @@ -# EVM tool +# Subnet-EVM Configuration -The EVM tool provides a few useful subcommands to facilitate testing at the EVM -layer. +> **Note**: These are the configuration options available in the Subnet-EVM codebase. To set these values, you need to create a configuration file at `~/.avalanchego/configs/chains//config.json`. +> +> For the AvalancheGo node configuration options, see the AvalancheGo Configuration page. 
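As a quick sketch of where that file lives: the snippet below creates a minimal chain config in the location AvalancheGo reads from. The `<blockchainID>` value is a placeholder, not a real chain ID, and the two options shown are only examples.

```shell
# Sketch: create a Subnet-EVM chain config where AvalancheGo looks for it.
# BLOCKCHAIN_ID is a placeholder; substitute your chain's blockchain ID.
BLOCKCHAIN_ID="<blockchainID>"
CONFIG_DIR="$HOME/.avalanchego/configs/chains/$BLOCKCHAIN_ID"
mkdir -p "$CONFIG_DIR"
cat > "$CONFIG_DIR/config.json" <<'EOF'
{
  "pruning-enabled": true,
  "log-level": "info"
}
EOF
# Show what was written.
cat "$CONFIG_DIR/config.json"
```

The node picks up this file on startup; changing most options requires a restart.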
-* transition tool (`t8n`) : a stateless state transition utility -* transaction tool (`t9n`) : a transaction validation utility -* block builder tool (`b11r`): a block assembler utility +This document describes all configuration options available for Subnet-EVM. -## State transition tool (`t8n`) +## Example Configuration +```json +{ + "eth-apis": ["eth", "eth-filter", "net", "web3"], + "pruning-enabled": true, + "commit-interval": 4096, + "trie-clean-cache": 512, + "trie-dirty-cache": 512, + "snapshot-cache": 256, + "rpc-gas-cap": 50000000, + "log-level": "info", + "metrics-expensive-enabled": true, + "continuous-profiler-dir": "./profiles", + "state-sync-enabled": false, + "accepted-cache-size": 32 +} +``` -The `evm t8n` tool is a stateless state transition utility. It is a utility -which can +## Configuration Format -1. Take a prestate, including - - Accounts, - - Block context information, - - Previous blockhashes (*optional) -2. Apply a set of transactions, -3. Apply a mining-reward (*optional), -4. And generate a post-state, including - - State root, transaction root, receipt root, - - Information about rejected transactions, - - Optionally: a full or partial post-state dump +Configuration is provided as a JSON object. All fields are optional unless otherwise specified. -### Specification +## API Configuration -The idea is to specify the behaviour of this binary very _strict_, so that other -node implementors can build replicas based on their own state-machines, and the -state generators can swap between a \`geth\`-based implementation and a \`parityvm\`-based -implementation. 
+### Ethereum APIs -#### Command line params +| Option | Type | Description | Default | +|--------|------|-------------|---------| +| `eth-apis` | array of strings | List of Ethereum services that should be enabled | `["eth", "eth-filter", "net", "web3", "internal-eth", "internal-blockchain", "internal-transaction"]` | -Command line params that need to be supported are +### Subnet-EVM Specific APIs -``` - --input.alloc value (default: "alloc.json") - --input.env value (default: "env.json") - --input.txs value (default: "txs.json") - --output.alloc value (default: "alloc.json") - --output.basedir value - --output.body value - --output.result value (default: "result.json") - --state.chainid value (default: 1) - --state.fork value (default: "GrayGlacier") - --state.reward value (default: 0) - --trace.memory (default: false) - --trace.nomemory (default: true) - --trace.noreturndata (default: true) - --trace.nostack (default: false) - --trace.returndata (default: false) -``` -#### Objects +| Option | Type | Description | Default | +|--------|------|-------------|---------| +| `validators-api-enabled` | bool | Enable the validators API | `true` | +| `admin-api-enabled` | bool | Enable the admin API for administrative operations | `false` | +| `admin-api-dir` | string | Directory for admin API operations | - | +| `warp-api-enabled` | bool | Enable the Warp API for cross-chain messaging | `false` | -The transition tool uses JSON objects to read and write data related to the transition operation. The -following object definitions are required. 
+### API Limits and Security -##### `alloc` +| Option | Type | Description | Default | +|--------|------|-------------|---------| +| `rpc-gas-cap` | uint64 | Maximum gas limit for RPC calls | `50,000,000` | +| `rpc-tx-fee-cap` | float64 | Maximum transaction fee cap in AVAX | `100` | +| `api-max-duration` | duration | Maximum duration for API calls (0 = no limit) | `0` | +| `api-max-blocks-per-request` | int64 | Maximum number of blocks per getLogs request (0 = no limit) | `0` | +| `http-body-limit` | uint64 | Maximum size of HTTP request bodies | - | +| `batch-request-limit` | uint64 | Maximum number of requests that can be batched in an RPC call. For no limit, set either this or `batch-response-max-size` to 0 | `1000` | +| `batch-response-max-size` | uint64 | Maximum size (in bytes) of response that can be returned from a batched RPC call. For no limit, set either this or `batch-request-limit` to 0. Defaults to `25 MB`| `1000` | -The `alloc` object defines the prestate that transition will begin with. +### WebSocket Settings -```go -// Map of address to account definition. -type Alloc map[common.Address]Account -// Genesis account. Each field is optional. -type Account struct { - Code []byte `json:"code"` - Storage map[common.Hash]common.Hash `json:"storage"` - Balance *big.Int `json:"balance"` - Nonce uint64 `json:"nonce"` - SecretKey []byte `json:"secretKey"` -} -``` +| Option | Type | Description | Default | +|--------|------|-------------|---------| +| `ws-cpu-refill-rate` | duration | Rate at which WebSocket CPU usage quota is refilled (0 = no limit) | `0` | +| `ws-cpu-max-stored` | duration | Maximum stored WebSocket CPU usage quota (0 = no limit) | `0` | -##### `env` - -The `env` object defines the environmental context in which the transition will -take place. 
- -```go -type Env struct { - // required - CurrentCoinbase common.Address `json:"currentCoinbase"` - CurrentGasLimit uint64 `json:"currentGasLimit"` - CurrentNumber uint64 `json:"currentNumber"` - CurrentTimestamp uint64 `json:"currentTimestamp"` - Withdrawals []*Withdrawal `json:"withdrawals"` - // optional - CurrentDifficulty *big.Int `json:"currentDifficulty"` - CurrentRandom *big.Int `json:"currentRandom"` - CurrentBaseFee *big.Int `json:"currentBaseFee"` - ParentDifficulty *big.Int `json:"parentDifficulty"` - ParentGasUsed uint64 `json:"parentGasUsed"` - ParentGasLimit uint64 `json:"parentGasLimit"` - ParentTimestamp uint64 `json:"parentTimestamp"` - BlockHashes map[uint64]common.Hash `json:"blockHashes"` - ParentUncleHash common.Hash `json:"parentUncleHash"` - Ommers []Ommer `json:"ommers"` -} -type Ommer struct { - Delta uint64 `json:"delta"` - Address common.Address `json:"address"` -} -type Withdrawal struct { - Index uint64 `json:"index"` - ValidatorIndex uint64 `json:"validatorIndex"` - Recipient common.Address `json:"recipient"` - Amount *big.Int `json:"amount"` -} -``` +## Cache Configuration -##### `txs` - -The `txs` object is an array of any of the transaction types: `LegacyTx`, -`AccessListTx`, or `DynamicFeeTx`. 
- -```go -type LegacyTx struct { - Nonce uint64 `json:"nonce"` - GasPrice *big.Int `json:"gasPrice"` - Gas uint64 `json:"gas"` - To *common.Address `json:"to"` - Value *big.Int `json:"value"` - Data []byte `json:"data"` - V *big.Int `json:"v"` - R *big.Int `json:"r"` - S *big.Int `json:"s"` - SecretKey *common.Hash `json:"secretKey"` -} -type AccessList []AccessTuple -type AccessTuple struct { - Address common.Address `json:"address" gencodec:"required"` - StorageKeys []common.Hash `json:"storageKeys" gencodec:"required"` -} -type AccessListTx struct { - ChainID *big.Int `json:"chainId"` - Nonce uint64 `json:"nonce"` - GasPrice *big.Int `json:"gasPrice"` - Gas uint64 `json:"gas"` - To *common.Address `json:"to"` - Value *big.Int `json:"value"` - Data []byte `json:"data"` - AccessList AccessList `json:"accessList"` - V *big.Int `json:"v"` - R *big.Int `json:"r"` - S *big.Int `json:"s"` - SecretKey *common.Hash `json:"secretKey"` -} -type DynamicFeeTx struct { - ChainID *big.Int `json:"chainId"` - Nonce uint64 `json:"nonce"` - GasTipCap *big.Int `json:"maxPriorityFeePerGas"` - GasFeeCap *big.Int `json:"maxFeePerGas"` - Gas uint64 `json:"gas"` - To *common.Address `json:"to"` - Value *big.Int `json:"value"` - Data []byte `json:"data"` - AccessList AccessList `json:"accessList"` - V *big.Int `json:"v"` - R *big.Int `json:"r"` - S *big.Int `json:"s"` - SecretKey *common.Hash `json:"secretKey"` -} -``` +### Trie Caches -##### `result` - -The `result` object is output after a transition is executed. It includes -information about the post-transition environment. 
- -```go -type ExecutionResult struct { - StateRoot common.Hash `json:"stateRoot"` - TxRoot common.Hash `json:"txRoot"` - ReceiptRoot common.Hash `json:"receiptsRoot"` - LogsHash common.Hash `json:"logsHash"` - Bloom types.Bloom `json:"logsBloom"` - Receipts types.Receipts `json:"receipts"` - Rejected []*rejectedTx `json:"rejected,omitempty"` - Difficulty *big.Int `json:"currentDifficulty"` - GasUsed uint64 `json:"gasUsed"` - BaseFee *big.Int `json:"currentBaseFee,omitempty"` -} -``` +| Option | Type | Description | Default | +|--------|------|-------------|---------| +| `trie-clean-cache` | int | Size of the trie clean cache in MB | `512` | +| `trie-dirty-cache` | int | Size of the trie dirty cache in MB | `512` | +| `trie-dirty-commit-target` | int | Memory limit to target in the dirty cache before performing a commit in MB | `20` | +| `trie-prefetcher-parallelism` | int | Maximum concurrent disk reads trie prefetcher should perform | `16` | -#### Error codes and output +### Other Caches -All logging should happen against the `stderr`. -There are a few (not many) errors that can occur, those are defined below. +| Option | Type | Description | Default | +|--------|------|-------------|---------| +| `snapshot-cache` | int | Size of the snapshot disk layer clean cache in MB | `256` | +| `accepted-cache-size` | int | Depth to keep in the accepted headers and logs cache (blocks) | `32` | +| `state-sync-server-trie-cache` | int | Trie cache size for state sync server in MB | `64` | -##### EVM-based errors (`2` to `9`) +## Ethereum Settings -- Other EVM error. Exit code `2` -- Failed configuration: when a non-supported or invalid fork was specified. Exit code `3`. -- Block history is not supplied, but needed for a `BLOCKHASH` operation. If `BLOCKHASH` - is invoked targeting a block which history has not been provided for, the program will - exit with code `4`. 
+### Transaction Processing -##### IO errors (`10`-`20`) +| Option | Type | Description | Default | +|--------|------|-------------|---------| +| `preimages-enabled` | bool | Enable preimage recording | `false` | +| `allow-unfinalized-queries` | bool | Allow queries for unfinalized blocks | `false` | +| `allow-unprotected-txs` | bool | Allow unprotected transactions (without EIP-155) | `false` | +| `allow-unprotected-tx-hashes` | array | List of specific transaction hashes allowed to be unprotected | EIP-1820 registry tx | +| `local-txs-enabled` | bool | Enable treatment of transactions from local accounts as local | `false` | -- Invalid input json: the supplied data could not be marshalled. - The program will exit with code `10` -- IO problems: failure to load or save files, the program will exit with code `11` +### Snapshots -``` -# This should exit with 3 -./evm t8n --input.alloc=./testdata/1/alloc.json --input.txs=./testdata/1/txs.json --input.env=./testdata/1/env.json --state.fork=Frontier+1346 2>/dev/null -exitcode:3 OK -``` -#### Forks -### Basic usage +| Option | Type | Description | Default | +|--------|------|-------------|---------| +| `snapshot-wait` | bool | Wait for snapshot generation on startup | `false` | +| `snapshot-verification-enabled` | bool | Enable snapshot verification | `false` | -The chain configuration to be used for a transition is specified via the -`--state.fork` CLI flag. A list of possible values and configurations can be -found in [`tests/init.go`](../../tests/init.go). +## Pruning and State Management -#### Examples -##### Basic usage + > **Note**: If a node is ever run with `pruning-enabled` as `false` (archival mode), setting `pruning-enabled` to `true` will result in a warning and the node will shut down. This is to protect against unintentional misconfigurations of an archival node. To override this and switch to pruning mode, in addition to `pruning-enabled: true`, `allow-missing-tries` should be set to `true` as well. 
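To illustrate the override described in the note above, a minimal config fragment (a sketch, not a complete configuration) that deliberately switches a formerly archival node into pruning mode would combine both flags:

```json
{
  "pruning-enabled": true,
  "allow-missing-tries": true
}
```

Without `allow-missing-tries`, the node refuses to start in this situation to protect against accidental misconfiguration.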
-Invoking it with the provided example files -``` -./evm t8n --input.alloc=./testdata/1/alloc.json --input.txs=./testdata/1/txs.json --input.env=./testdata/1/env.json --state.fork=Berlin -``` -Two resulting files: +### Basic Pruning -`alloc.json`: -```json -{ - "0x8a8eafb1cf62bfbeb1741769dae1a9dd47996192": { - "balance": "0xfeed1a9d", - "nonce": "0x1" - }, - "0xa94f5374fce5edbc8e2a8697c15331677e6ebf0b": { - "balance": "0x5ffd4878be161d74", - "nonce": "0xac" - }, - "0xc94f5374fce5edbc8e2a8697c15331677e6ebf0b": { - "balance": "0xa410" - } -} -``` -`result.json`: -```json -{ - "stateRoot": "0x84208a19bc2b46ada7445180c1db162be5b39b9abc8c0a54b05d32943eae4e13", - "txRoot": "0xc4761fd7b87ff2364c7c60b6c5c8d02e522e815328aaea3f20e3b7b7ef52c42d", - "receiptsRoot": "0x056b23fbba480696b65fe5a59b8f2148a1299103c4f57df839233af2cf4ca2d2", - "logsHash": "0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347", - "logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000", - "receipts": [ - { - "root": "0x", - "status": "0x1", - "cumulativeGasUsed": "0x5208", - "logsBloom": 
"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000", - "logs": null, - "transactionHash": "0x0557bacce3375c98d806609b8d5043072f0b6a8bae45ae5a67a00d3a1a18d673", - "contractAddress": "0x0000000000000000000000000000000000000000", - "gasUsed": "0x5208", - "blockHash": "0x0000000000000000000000000000000000000000000000000000000000000000", - "transactionIndex": "0x0" - } - ], - "rejected": [ - { - "index": 1, - "error": "nonce too low: address 0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192, tx: 0 state: 1" - } - ], - "currentDifficulty": "0x20000", - "gasUsed": "0x5208" -} -``` +| Option | Type | Description | Default | +|--------|------|-------------|---------| +| `pruning-enabled` | bool | Enable state pruning to save disk space | `true` | +| `commit-interval` | uint64 | Interval at which to persist EVM and atomic tries (blocks) | `4096` | +| `accepted-queue-limit` | int | Maximum blocks to queue before blocking during acceptance | `64` | -We can make them spit out the data to e.g. 
`stdout` like this: -``` -./evm t8n --input.alloc=./testdata/1/alloc.json --input.txs=./testdata/1/txs.json --input.env=./testdata/1/env.json --output.result=stdout --output.alloc=stdout --state.fork=Berlin -``` -Output: -```json -{ - "alloc": { - "0x8a8eafb1cf62bfbeb1741769dae1a9dd47996192": { - "balance": "0xfeed1a9d", - "nonce": "0x1" - }, - "0xa94f5374fce5edbc8e2a8697c15331677e6ebf0b": { - "balance": "0x5ffd4878be161d74", - "nonce": "0xac" - }, - "0xc94f5374fce5edbc8e2a8697c15331677e6ebf0b": { - "balance": "0xa410" - } - }, - "result": { - "stateRoot": "0x84208a19bc2b46ada7445180c1db162be5b39b9abc8c0a54b05d32943eae4e13", - "txRoot": "0xc4761fd7b87ff2364c7c60b6c5c8d02e522e815328aaea3f20e3b7b7ef52c42d", - "receiptsRoot": "0x056b23fbba480696b65fe5a59b8f2148a1299103c4f57df839233af2cf4ca2d2", - "logsHash": "0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347", - "logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000", - "receipts": [ - { - "root": "0x", - "status": "0x1", - "cumulativeGasUsed": "0x5208", - "logsBloom": 
"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000", - "logs": null, - "transactionHash": "0x0557bacce3375c98d806609b8d5043072f0b6a8bae45ae5a67a00d3a1a18d673", - "contractAddress": "0x0000000000000000000000000000000000000000", - "gasUsed": "0x5208", - "blockHash": "0x0000000000000000000000000000000000000000000000000000000000000000", - "transactionIndex": "0x0" - } - ], - "rejected": [ - { - "index": 1, - "error": "nonce too low: address 0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192, tx: 0 state: 1" - } - ], - "currentDifficulty": "0x20000", - "gasUsed": "0x5208" - } -} -``` +### State Reconstruction -#### About Ommers +| Option | Type | Description | Default | +|--------|------|-------------|---------| +| `allow-missing-tries` | bool | Suppress warnings about incomplete trie index | `false` | +| `populate-missing-tries` | uint64 | Starting block for re-populating missing tries (null = disabled) | `null` | +| `populate-missing-tries-parallelism` | int | Concurrent readers for re-populating missing tries | `1024` | -Mining rewards and ommer rewards might need to be added. This is how those are applied: +### Offline Pruning -- `block_reward` is the block mining reward for the miner (`0xaa`), of a block at height `N`. 
-- For each ommer (mined by `0xbb`), with blocknumber `N-delta`
-  - (where `delta` is the difference between the current block and the ommer)
-  - The account `0xbb` (ommer miner) is awarded `(8-delta)/ 8 * block_reward`
-  - The account `0xaa` (block miner) is awarded `block_reward / 32`

+> **Note**: If offline pruning is enabled, it will run on startup and block until it completes (approximately one hour on Mainnet). This will reduce the size of the database by deleting old trie nodes. **While performing offline pruning, your node will not be able to process blocks and will be considered offline.** While ongoing, the pruning process consumes a small amount of additional disk space (for deletion markers and the bloom filter). For more information see the [disk space considerations documentation](https://build.avax.network/docs/nodes/maintain/reduce-disk-usage#disk-space-considerations). Since offline pruning deletes old state data, it should not be run on nodes that need to support archival API requests. Offline pruning is meant to be run manually: start the node once with this flag set to `true`, and once the run completes, set it back to `false` before starting the node again.

-To make `t8n` apply these, the following inputs are required:
+| Option | Type | Description | Default |
+|--------|------|-------------|---------|
+| `offline-pruning-enabled` | bool | Enable offline pruning | `false` |
+| `offline-pruning-bloom-filter-size` | uint64 | Bloom filter size for offline pruning in MB | `512` |
+| `offline-pruning-data-directory` | string | Directory for offline pruning data | - |

-- `--state.reward`
-  - For ethash, it is `5000000000000000000` `wei`,
-  - If this is not defined, mining rewards are not applied,
-  - A value of `0` is valid, and causes accounts to be 'touched'.
-- For each ommer, the tool needs to be given an `address\` and a `delta`. This
-  is done via the `ommers` field in `env`.
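As an illustrative sketch, an offline-pruning run could be triggered with a chain config fragment along these lines (the option names come from the table above; the directory path is an example, not a required location):

```json
{
  "offline-pruning-enabled": true,
  "offline-pruning-bloom-filter-size": 512,
  "offline-pruning-data-directory": "/data/offline-pruning"
}
```

Remember that `offline-pruning-enabled` must be set back to `false` after the run, as described above.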
+### Historical Data

-Note: the tool does not verify that e.g. the normal uncle rules apply,
-and allows e.g two uncles at the same height, or the uncle-distance. This means that
-the tool allows for negative uncle reward (distance > 8)

+| Option | Type | Description | Default |
+|--------|------|-------------|---------|
+| `historical-proof-query-window` | uint64 | Number of blocks before last accepted for proof queries (archive mode only, ~24 hours) | `43200` |
+| `state-history` | uint64 | Number of most recent states that are accessible on disk (pruning mode only) | `32` |

-Example:
-`./testdata/5/env.json`:
-```json
-{
-  "currentCoinbase": "0xaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
-  "currentDifficulty": "0x20000",
-  "currentGasLimit": "0x750a163df65e8a",
-  "currentNumber": "1",
-  "currentTimestamp": "1000",
-  "ommers": [
-    {"delta": 1, "address": "0xbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb" },
-    {"delta": 2, "address": "0xcccccccccccccccccccccccccccccccccccccccc" }
-  ]
-}
-```
-When applying this, using a reward of `0x08`
-Output:
-```json
-{
-  "alloc": {
-    "0xaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": {
-      "balance": "0x88"
-    },
-    "0xbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb": {
-      "balance": "0x70"
-    },
-    "0xcccccccccccccccccccccccccccccccccccccccc": {
-      "balance": "0x60"
-    }
-  }
-}
-```
-#### Future EIPS
+## Transaction Pool Configuration

-It is also possible to experiment with future eips that are not yet defined in a hard fork.
-Example, putting EIP-1344 into Frontier: -``` -./evm t8n --state.fork=Frontier+1344 --input.pre=./testdata/1/pre.json --input.txs=./testdata/1/txs.json --input.env=/testdata/1/env.json -``` +| Option | Type | Description | Default | +|--------|------|-------------|---------| +| `tx-pool-price-limit` | uint64 | Minimum gas price for transaction acceptance | - | +| `tx-pool-price-bump` | uint64 | Minimum price bump percentage for transaction replacement | - | +| `tx-pool-account-slots` | uint64 | Maximum number of executable transaction slots per account | - | +| `tx-pool-global-slots` | uint64 | Maximum number of executable transaction slots for all accounts | - | +| `tx-pool-account-queue` | uint64 | Maximum number of non-executable transaction slots per account | - | +| `tx-pool-global-queue` | uint64 | Maximum number of non-executable transaction slots for all accounts | - | +| `tx-pool-lifetime` | duration | Maximum time transactions can stay in the pool | - | -#### Block history +## Gossip Configuration -The `BLOCKHASH` opcode requires blockhashes to be provided by the caller, inside the `env`. 
-If a required blockhash is not provided, the exit code should be `4`: -Example where blockhashes are provided: -``` -./evm t8n --input.alloc=./testdata/3/alloc.json --input.txs=./testdata/3/txs.json --input.env=./testdata/3/env.json --trace --state.fork=Berlin +### Push Gossip Settings -``` +| Option | Type | Description | Default | +|--------|------|-------------|---------| +| `push-gossip-percent-stake` | float64 | Percentage of total stake to push gossip to (range: [0, 1]) | `0.9` | +| `push-gossip-num-validators` | int | Number of validators to push gossip to | `100` | +| `push-gossip-num-peers` | int | Number of non-validator peers to push gossip to | `0` | -``` -cat trace-0-0x72fadbef39cd251a437eea619cfeda752271a5faaaa2147df012e112159ffb81.jsonl | grep BLOCKHASH -C2 -``` -``` -{"pc":0,"op":96,"gas":"0x5f58ef8","gasCost":"0x3","memSize":0,"stack":[],"depth":1,"refund":0,"opName":"PUSH1"} -{"pc":2,"op":64,"gas":"0x5f58ef5","gasCost":"0x14","memSize":0,"stack":["0x1"],"depth":1,"refund":0,"opName":"BLOCKHASH"} -{"pc":3,"op":0,"gas":"0x5f58ee1","gasCost":"0x0","memSize":0,"stack":["0xdac58aa524e50956d0c0bae7f3f8bb9d35381365d07804dd5b48a5a297c06af4"],"depth":1,"refund":0,"opName":"STOP"} -{"output":"","gasUsed":"0x17"} -``` +### Regossip Settings -In this example, the caller has not provided the required blockhash: -``` -./evm t8n --input.alloc=./testdata/4/alloc.json --input.txs=./testdata/4/txs.json --input.env=./testdata/4/env.json --trace --state.fork=Berlin -ERROR(4): getHash(3) invoked, blockhash for that block not provided -``` -Error code: 4 +| Option | Type | Description | Default | +|--------|------|-------------|---------| +| `push-regossip-num-validators` | int | Number of validators to regossip to | `10` | +| `push-regossip-num-peers` | int | Number of non-validator peers to regossip to | `0` | +| `priority-regossip-addresses` | array | Addresses to prioritize for regossip | - | -#### Chaining +### Timing Configuration -Another thing that can be 
done, is to chain invocations: -``` -./evm t8n --input.alloc=./testdata/1/alloc.json --input.txs=./testdata/1/txs.json --input.env=./testdata/1/env.json --state.fork=Berlin --output.alloc=stdout | ./evm t8n --input.alloc=stdin --input.env=./testdata/1/env.json --input.txs=./testdata/1/txs.json --state.fork=Berlin +| Option | Type | Description | Default | +|--------|------|-------------|---------| +| `push-gossip-frequency` | duration | Frequency of push gossip | `100ms` | +| `pull-gossip-frequency` | duration | Frequency of pull gossip | `1s` | +| `regossip-frequency` | duration | Frequency of regossip | `30s` | -``` -What happened here, is that we first applied two identical transactions, so the second one was rejected. -Then, taking the poststate alloc as the input for the next state, we tried again to include -the same two transactions: this time, both failed due to too low nonce. +## Logging and Monitoring -In order to meaningfully chain invocations, one would need to provide meaningful new `env`, otherwise the -actual blocknumber (exposed to the EVM) would not increase. +### Logging -#### Transactions in RLP form +| Option | Type | Description | Default | +|--------|------|-------------|---------| +| `log-level` | string | Logging level (trace, debug, info, warn, error, crit) | `"info"` | +| `log-json-format` | bool | Use JSON format for logs | `false` | -It is possible to provide already-signed transactions as input to, using an `input.txs` which ends with the `rlp` suffix. -The input format for RLP-form transactions is _identical_ to the _output_ format for block bodies. Therefore, it's fully possible -to use the evm to go from `json` input to `rlp` input. +### Profiling -The following command takes **json** the transactions in `./testdata/13/txs.json` and signs them. 
After execution, they are output to `signed_txs.rlp`.: -``` -./evm t8n --state.fork=London --input.alloc=./testdata/13/alloc.json --input.txs=./testdata/13/txs.json --input.env=./testdata/13/env.json --output.result=alloc_jsontx.json --output.body=signed_txs.rlp -INFO [12-27|09:25:11.102] Trie dumping started root=e4b924..6aef61 -INFO [12-27|09:25:11.102] Trie dumping complete accounts=3 elapsed="275.66µs" -INFO [12-27|09:25:11.102] Wrote file file=alloc.json -INFO [12-27|09:25:11.103] Wrote file file=alloc_jsontx.json -INFO [12-27|09:25:11.103] Wrote file file=signed_txs.rlp -``` +| Option | Type | Description | Default | +|--------|------|-------------|---------| +| `continuous-profiler-dir` | string | Directory for continuous profiler output (empty = disabled) | - | +| `continuous-profiler-frequency` | duration | Frequency to run continuous profiler | `15m` | +| `continuous-profiler-max-files` | int | Maximum number of profiler files to maintain | `5` | -The `output.body` is the rlp-list of transactions, encoded in hex and placed in a string a'la `json` encoding rules: -``` -cat signed_txs.rlp -"0xf8d2b86702f864010180820fa08284d09411111111111111111111111111111111111111118080c001a0b7dfab36232379bb3d1497a4f91c1966b1f932eae3ade107bf5d723b9cb474e0a06261c359a10f2132f126d250485b90cf20f30340801244a08ef6142ab33d1904b86702f864010280820fa08284d09411111111111111111111111111111111111111118080c080a0d4ec563b6568cd42d998fc4134b36933c6568d01533b5adf08769270243c6c7fa072bf7c21eac6bbeae5143371eef26d5e279637f3bd73482b55979d76d935b1e9" -``` +### Metrics -We can use `rlpdump` to check what the contents are: -``` -rlpdump -hex $(cat signed_txs.rlp | jq -r ) -[ - 02f864010180820fa08284d09411111111111111111111111111111111111111118080c001a0b7dfab36232379bb3d1497a4f91c1966b1f932eae3ade107bf5d723b9cb474e0a06261c359a10f2132f126d250485b90cf20f30340801244a08ef6142ab33d1904, - 
02f864010280820fa08284d09411111111111111111111111111111111111111118080c080a0d4ec563b6568cd42d998fc4134b36933c6568d01533b5adf08769270243c6c7fa072bf7c21eac6bbeae5143371eef26d5e279637f3bd73482b55979d76d935b1e9, -] -``` -Now, we can now use those (or any other already signed transactions), as input, like so: -``` -./evm t8n --state.fork=London --input.alloc=./testdata/13/alloc.json --input.txs=./signed_txs.rlp --input.env=./testdata/13/env.json --output.result=alloc_rlptx.json -INFO [12-27|09:25:11.187] Trie dumping started root=e4b924..6aef61 -INFO [12-27|09:25:11.187] Trie dumping complete accounts=3 elapsed="123.676µs" -INFO [12-27|09:25:11.187] Wrote file file=alloc.json -INFO [12-27|09:25:11.187] Wrote file file=alloc_rlptx.json -``` -You might have noticed that the results from these two invocations were stored in two separate files. -And we can now finally check that they match. -``` -cat alloc_jsontx.json | jq .stateRoot && cat alloc_rlptx.json | jq .stateRoot -"0xe4b924a6adb5959fccf769d5b7bb2f6359e26d1e76a2443c5a91a36d826aef61" -"0xe4b924a6adb5959fccf769d5b7bb2f6359e26d1e76a2443c5a91a36d826aef61" -``` +| Option | Type | Description | Default | +|--------|------|-------------|---------| +| `metrics-expensive-enabled` | bool | Enable expensive debug-level metrics; this includes Firewood metrics | `true` | -## Transaction tool +## Security and Access -The transaction tool is used to perform static validity checks on transactions such as: -* intrinsic gas calculation -* max values on integers -* fee semantics, such as `maxFeePerGas < maxPriorityFeePerGas` -* newer tx types on old forks +### Keystore -### Examples +| Option | Type | Description | Default | +|--------|------|-------------|---------| +| `keystore-directory` | string | Directory for keystore files (absolute or relative path) | - | +| `keystore-external-signer` | string | External signer configuration | - | +| `keystore-insecure-unlock-allowed` | bool | Allow insecure account unlocking | `false` | -``` 
-./evm t9n --state.fork Homestead --input.txs testdata/15/signed_txs.rlp -[ - { - "error": "transaction type not supported", - "hash": "0xa98a24882ea90916c6a86da650fbc6b14238e46f0af04a131ce92be897507476" - }, - { - "error": "transaction type not supported", - "hash": "0x36bad80acce7040c45fd32764b5c2b2d2e6f778669fb41791f73f546d56e739a" - } -] -``` -``` -./evm t9n --state.fork London --input.txs testdata/15/signed_txs.rlp -[ - { - "address": "0xd02d72e067e77158444ef2020ff2d325f929b363", - "hash": "0xa98a24882ea90916c6a86da650fbc6b14238e46f0af04a131ce92be897507476", - "intrinsicGas": "0x5208" - }, - { - "address": "0xd02d72e067e77158444ef2020ff2d325f929b363", - "hash": "0x36bad80acce7040c45fd32764b5c2b2d2e6f778669fb41791f73f546d56e739a", - "intrinsicGas": "0x5208" - } -] -``` -## Block builder tool (b11r) +### Fee Configuration -The `evm b11r` tool is used to assemble and seal full block rlps. +| Option | Type | Description | Default | +|--------|------|-------------|---------| +| `feeRecipient` | string | Address to send transaction fees to (leave empty if not supported) | - | -### Specification +## Network and Sync -#### Command line params +### Network -Command line params that need to be supported are: +| Option | Type | Description | Default | +|--------|------|-------------|---------| +| `max-outbound-active-requests` | int64 | Maximum number of outbound active requests for VM2VM network | `16` | -``` - --input.header value `stdin` or file name of where to find the block header to use. (default: "header.json") - --input.ommers value `stdin` or file name of where to find the list of ommer header RLPs to use. - --input.txs value `stdin` or file name of where to find the transactions list in RLP form. (default: "txs.rlp") - --output.basedir value Specifies where output files are placed. Will be created if it does not exist. - --output.block value Determines where to put the alloc of the post-state. 
(default: "block.json")
   - into the file
   `stdout` - into the stdout output
   `stderr` - into the stderr output
   --seal.clique value Seal block with Clique. `stdin` or file name of where to find the Clique sealing data.
   --seal.ethash Seal block with ethash. (default: false)
   --seal.ethash.dir value Path to ethash DAG. If none exists, a new DAG will be generated.
   --seal.ethash.mode value Defines the type and amount of PoW verification an ethash engine makes. (default: "normal")
   --verbosity value Sets the verbosity level. (default: 3)
-```

+### State Sync

-#### Objects
-
-##### `header`
-
-The `header` object is a consensus header.
-
-```go=
-type Header struct {
-  ParentHash common.Hash `json:"parentHash"`
-  OmmerHash *common.Hash `json:"sha3Uncles"`
-  Coinbase *common.Address `json:"miner"`
-  Root common.Hash `json:"stateRoot" gencodec:"required"`
-  TxHash *common.Hash `json:"transactionsRoot"`
-  ReceiptHash *common.Hash `json:"receiptsRoot"`
-  Bloom types.Bloom `json:"logsBloom"`
-  Difficulty *big.Int `json:"difficulty"`
-  Number *big.Int `json:"number" gencodec:"required"`
-  GasLimit uint64 `json:"gasLimit" gencodec:"required"`
-  GasUsed uint64 `json:"gasUsed"`
-  Time uint64 `json:"timestamp" gencodec:"required"`
-  Extra []byte `json:"extraData"`
-  MixDigest common.Hash `json:"mixHash"`
-  Nonce *types.BlockNonce `json:"nonce"`
-  BaseFee *big.Int `json:"baseFeePerGas"`
-}
-```
-#### `ommers`

+> **Note:** If state-sync is enabled, the node will download chain state from peers up to a recent block near tip, then proceed with normal bootstrapping. Note that state sync is not the right option if you need historical data; it is sufficient, however, if you are just running a validator.

-The `ommers` object is a list of RLP-encoded ommer blocks in hex
-representation.
+| Option | Type | Description | Default |
+|--------|------|-------------|---------|
+| `state-sync-enabled` | bool | Enable state sync | `false` |
+| `state-sync-skip-resume` | bool | Force state sync to use highest available summary block | `false` |
+| `state-sync-ids` | string | Comma-separated list of state sync IDs | - |
+| `state-sync-commit-interval` | uint64 | Commit interval for state sync (blocks) | `16384` |
+| `state-sync-min-blocks` | uint64 | Minimum blocks ahead required for state sync | `300000` |
+| `state-sync-request-size` | uint16 | Number of key/values to request per state sync request | `1024` |

-```go=
-type Ommers []string
-```
+## Database Configuration

-#### `txs`
+> **WARNING**: `firewood` and `path` schemes are untested in production. Using `path` is strongly discouraged. To use `firewood`, you must also set the following config options:
+>
+> - `populate-missing-tries: null`
+> - `state-sync-enabled: false`
+> - `snapshot-cache: 0`

-The `txs` object is a list of RLP-encoded transactions in hex representation.
+Failing to set these options will result in errors on VM initialization. Additionally, not all APIs are available under these schemes; see the relevant sections of this document for more details.
-```go= -type Txs []string -``` +| Option | Type | Description | Default | +|--------|------|-------------|---------| +| `database-type` | string | Type of database to use | `"pebbledb"` | +| `database-path` | string | Path to database directory | - | +| `database-read-only` | bool | Open database in read-only mode | `false` | +| `database-config` | string | Inline database configuration | - | +| `database-config-file` | string | Path to database configuration file | - | +| `use-standalone-database` | bool | Use standalone database instead of shared one | - | +| `inspect-database` | bool | Inspect database on startup | `false` | +| `state-scheme` | string | EXPERIMENTAL: specifies the database scheme to store state data; can be one of `hash` or `firewood` | `hash` | -#### `clique` +## Transaction Indexing -The `clique` object provides the necessary information to complete a clique -seal of the block. +| Option | Type | Description | Default | +|--------|------|-------------|---------| +| `transaction-history` | uint64 | Maximum number of blocks from head whose transaction indices are reserved (0 = no limit) | - | +| `tx-lookup-limit` | uint64 | **Deprecated** - use `transaction-history` instead | - | +| `skip-tx-indexing` | bool | Skip indexing transactions entirely | `false` | -```go= -var CliqueInfo struct { - Key *common.Hash `json:"secretKey"` - Voted *common.Address `json:"voted"` - Authorize *bool `json:"authorize"` - Vanity common.Hash `json:"vanity"` -} -``` +## Warp Configuration -#### `output` +| Option | Type | Description | Default | +|--------|------|-------------|---------| +| `warp-off-chain-messages` | array | Off-chain messages the node should be willing to sign | - | +| `prune-warp-db-enabled` | bool | Clear warp database on startup | `false` | -The `output` object contains two values, the block RLP and the block hash. 
+## Miscellaneous -```go= -type BlockInfo struct { - Rlp []byte `json:"rlp"` - Hash common.Hash `json:"hash"` -} -``` +| Option | Type | Description | Default | +|--------|------|-------------|---------| +| `airdrop` | string | Path to airdrop file | - | +| `skip-upgrade-check` | bool | Skip checking that upgrades occur before last accepted block ⚠️ **Warning**: Only use when you understand the implications | `false` | +| `min-delay-target` | integer | The minimum delay between blocks (in milliseconds) that this node will attempt to use when creating blocks | Parent block's target | -## A Note on Encoding +## Gossip Constants -The encoding of values for `evm` utility attempts to be relatively flexible. It -generally supports hex-encoded or decimal-encoded numeric values, and -hex-encoded byte values (like `common.Address`, `common.Hash`, etc). When in -doubt, the [`execution-apis`](https://github.com/ethereum/execution-apis) way -of encoding should always be accepted. +The following constants are defined for transaction gossip behavior and cannot be configured without a custom build of Subnet-EVM: -## Testing +| Constant | Type | Description | Value | +|----------|------|-------------|-------| +| Bloom Filter Min Target Elements | int | Minimum target elements for bloom filter | `8,192` | +| Bloom Filter Target False Positive Rate | float | Target false positive rate | `1%` | +| Bloom Filter Reset False Positive Rate | float | Reset false positive rate | `5%` | +| Bloom Filter Churn Multiplier | int | Churn multiplier | `3` | +| Push Gossip Discarded Elements | int | Number of discarded elements | `16,384` | +| Tx Gossip Target Message Size | size | Target message size for transaction gossip | `20 KiB` | +| Tx Gossip Throttling Period | duration | Throttling period | `10s` | +| Tx Gossip Throttling Limit | int | Throttling limit | `2` | +| Tx Gossip Poll Size | int | Poll size | `1` | + +## Validation Notes -There are many test cases in the 
[`cmd/evm/testdata`](./testdata) directory. -These fixtures are used to power the `t8n` tests in -[`t8n_test.go`](./t8n_test.go). The best way to verify correctness of new `evm` -implementations is to execute these and verify the output and error codes match -the expected values. \ No newline at end of file +- Cannot enable `populate-missing-tries` while pruning or offline pruning is enabled +- Cannot run offline pruning while pruning is disabled +- Commit interval must be non-zero when pruning is enabled +- `push-gossip-percent-stake` must be in range `[0, 1]` +- Some settings may require node restart to take effect From 42a1e95747a5abbd7f202b49592103efda0905ff Mon Sep 17 00:00:00 2001 From: Jonathan Oppenheimer Date: Mon, 1 Dec 2025 16:53:24 -0500 Subject: [PATCH 12/20] docs: align with coreth --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index 2ddd6fcbc8..43a2ff1154 100644 --- a/README.md +++ b/README.md @@ -48,7 +48,7 @@ Full documentation for the C-Chain's API can be found in the [builder docs](http ## Compatibility -The Subnet EVM is compatible with almost all Ethereum tooling, including [Remix](https://build.avax.network/docs/dapps/smart-contract-dev/deploy-with-remix-ide), [Metamask](https://build.avax.network/docs/dapps), and [Foundry](https://build.avax.network/docs/dapps/toolchains/foundry). +The C-Chain is compatible with almost all Ethereum tooling, including [Core,](https://docs.avax.network/build/dapp/launch-dapp#through-core) [Metamask,](https://docs.avax.network/build/dapp/launch-dapp#through-metamask) and [Remix](https://build.avax.network/docs/avalanche-l1s/add-utility/deploy-smart-contract#using-remix). **Note:** Subnet-EVM and Avalanche C-Chain currently implement the Ethereum Cancun fork and do not yet support newer hardforks (such as Pectra). 
Since Solidity v0.8.30 switched its default target EVM version to Pectra, contracts compiled with default settings may emit bytecode using instructions/features that Avalanche does not support. To avoid this mismatch, explicitly set the Solidity compiler’s `evmVersion` to `cancun` when deploying to Subnet-EVM or the C-Chain. From 4a57943b74b8c3072d6c92385539978820d17fcf Mon Sep 17 00:00:00 2001 From: Jonathan Oppenheimer Date: Mon, 1 Dec 2025 18:56:53 -0500 Subject: [PATCH 13/20] fix: compatability links --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index 43a2ff1154..ec0584ae9b 100644 --- a/README.md +++ b/README.md @@ -48,7 +48,7 @@ Full documentation for the C-Chain's API can be found in the [builder docs](http ## Compatibility -The C-Chain is compatible with almost all Ethereum tooling, including [Core,](https://docs.avax.network/build/dapp/launch-dapp#through-core) [Metamask,](https://docs.avax.network/build/dapp/launch-dapp#through-metamask) and [Remix](https://build.avax.network/docs/avalanche-l1s/add-utility/deploy-smart-contract#using-remix). +Subnet-EVM is compatible with almost all Ethereum tooling, including [Foundry](https://build.avax.network/academy/blockchain/solidity-foundry/03-smart-contracts/03-foundry-quickstart) and [Remix](https://build.avax.network/docs/avalanche-l1s/add-utility/deploy-smart-contract#using-remix). **Note:** Subnet-EVM and Avalanche C-Chain currently implement the Ethereum Cancun fork and do not yet support newer hardforks (such as Pectra). Since Solidity v0.8.30 switched its default target EVM version to Pectra, contracts compiled with default settings may emit bytecode using instructions/features that Avalanche does not support. To avoid this mismatch, explicitly set the Solidity compiler’s `evmVersion` to `cancun` when deploying to Subnet-EVM or the C-Chain. 
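One way to apply this, shown here as a sketch for a Foundry project, is to pin the target fork in `foundry.toml`:

```toml
[profile.default]
# Target the Cancun fork supported by Subnet-EVM and the C-Chain
evm_version = "cancun"
```

With standalone `solc`, the equivalent is the `--evm-version cancun` command-line flag; other toolchains expose a similar setting.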
From 3b35ce2148fc87a56cbc2ebb386d06651a5a9405 Mon Sep 17 00:00:00 2001 From: Jonathan Oppenheimer Date: Wed, 3 Dec 2025 15:35:20 -0500 Subject: [PATCH 14/20] fix: bug bounty link --- .github/ISSUE_TEMPLATE/bug_report.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/ISSUE_TEMPLATE/bug_report.md b/.github/ISSUE_TEMPLATE/bug_report.md index 4c524e036c..66282cc3f0 100644 --- a/.github/ISSUE_TEMPLATE/bug_report.md +++ b/.github/ISSUE_TEMPLATE/bug_report.md @@ -31,4 +31,4 @@ Which OS you used to reveal the bug. **Additional context** Add any other context about the problem here. -You can submit a bug on the [Avalanche Bug Bounty program page](https://hackenproof.com/avalanche/avalanche-protocol). +You can submit a bug on the [Avalanche Bug Bounty program page](https://immunefi.com/bug-bounty/avalanche/information/). From a0b1798bd6464e6a0f9ec2bbfb59c9d052f26e39 Mon Sep 17 00:00:00 2001 From: Jonathan Oppenheimer Date: Wed, 3 Dec 2025 15:49:00 -0500 Subject: [PATCH 15/20] Update sync/README.md Co-authored-by: Michael Kaplan <55204436+michaelkaplan13@users.noreply.github.com> Signed-off-by: Jonathan Oppenheimer <147infiniti@gmail.com> --- sync/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sync/README.md b/sync/README.md index d3c61e06ae..4a815ca539 100644 --- a/sync/README.md +++ b/sync/README.md @@ -51,7 +51,7 @@ The above information is called a _state summary_, and each syncable block corre 1. The engine calls `StateSyncEnabled`. The VM returns `true` to initiate state sync, or `false` to start bootstrapping. In `subnet-evm`, this is controlled by the `state-sync-enabled` flag. 1. The engine calls `GetOngoingSyncStateSummary`. If the VM has a previously interrupted sync to resume it returns that summary. Otherwise, it returns `ErrNotFound`. By default, `subnet-evm` will resume an interrupted sync. -1. 
The engine samples peers for their latest available summaries, then verifies the correctness and availability of each sampled summary with validators. The messaging flow is documented in the [state sync README](https://github.com/ava-labs/avalanchego/blob/master/snow/engine/snowman/block/README.md). +1. The engine samples peers for their latest available summaries, then verifies the correctness and availability of each sampled summary with validators. The messaging flow is documented in the [block engine README](https://github.com/ava-labs/avalanchego/blob/master/snow/engine/snowman/block/README.md). 1. The engine calls `Accept` on the chosen summary. The VM may return `false` to skip syncing to this summary (`subnet-evm` skips state sync for less than `defaultStateSyncMinBlocks = 300_000` blocks). If the VM decides to perform the sync, it must return `true` without blocking and fetch the state from its peers asynchronously. 1. The VM sends `common.StateSyncDone` on the `toEngine` channel on completion. 1. The engine calls `VM.SetState(Bootstrapping)`. Then, blocks after the syncable block are processed one by one. From 906241a3e1b9b36b5f10d33439c379866d3edabe Mon Sep 17 00:00:00 2001 From: Jonathan Oppenheimer Date: Wed, 3 Dec 2025 15:52:26 -0500 Subject: [PATCH 16/20] style: revert comma removal --- .github/CONTRIBUTING.md | 2 +- sync/README.md | 18 +++++++++--------- 2 files changed, 10 insertions(+), 10 deletions(-) diff --git a/.github/CONTRIBUTING.md b/.github/CONTRIBUTING.md index 4836b62ebe..742a189671 100644 --- a/.github/CONTRIBUTING.md +++ b/.github/CONTRIBUTING.md @@ -41,7 +41,7 @@ Mocks are auto-generated using [mockgen](https://pkg.go.dev/go.uber.org/mock/moc - if the file `mocks_generate_test.go` does not exist in the package where the interface is located, create it with content (adapt as needed): ```go - // Copyright (C) 2025-2025, Ava Labs, Inc. All rights reserved. + // Copyright (C) 2019-2025, Ava Labs, Inc. All rights reserved. 
// See the file LICENSE for licensing terms.

  package mypackage

diff --git a/sync/README.md b/sync/README.md
index 4a815ca539..1ee1473208 100644
--- a/sync/README.md
+++ b/sync/README.md
@@ -61,24 +61,24 @@ The above information is called a _state summary_, and each syncable block corre
 The following steps are executed by the VM to sync its state from peers (see `stateSyncClient.StateSync`):

 1. Wipe snapshot data
-1. Sync 256 parents of the syncable block (see `BlockRequest`)
-1. Sync the EVM state: account trie, code, and storage tries
-1. Update in-memory and on-disk pointers
+1. Sync 256 parents of the syncable block (see `BlockRequest`),
+1. Sync the EVM state: account trie, code, and storage tries,
+1. Update in-memory and on-disk pointers.

Steps 3 and 4 involve syncing tries. To sync trie data, the VM will send a series of `LeafRequests` to its peers. Each request specifies:

-- Type of trie (`NodeType`)
+- Type of trie (`NodeType`):
 - `statesync.StateTrieNode` (account trie and storage tries share the same database)
-- `Root` of the trie to sync
-- `Start` and `End` specify a range of keys
+- `Root` of the trie to sync,
+- `Start` and `End` specify a range of keys.

Peers responding to these requests send back trie leafs (key/value pairs) beginning at `Start` and up to `End` (or a maximum number of leafs). The response must also include a merkle proof for the range of leafs it contains. Nodes serving state sync data are responsible for constructing these proofs (see `sync/handlers/leafs_request.go`)

`client.GetLeafs` handles sending a single request and validating the response. This method will retry the request from a different peer up to `maxRetryAttempts` (= 32) times if the peer's response is:

-- malformed
-- does not contain a valid merkle proof
-- not received in time
+- malformed,
+- does not contain a valid merkle proof,
+- not received in time.
If there are more leafs in a trie than can be returned in a single response, the client will make successive requests to continue fetching data (with `Start` set to the last key received) until the trie is complete. `CallbackLeafSyncer` manages this process and invokes a callback for each batch of received leafs.

From ac9ef224d3da864068337a38892369527136e919 Mon Sep 17 00:00:00 2001
From: Jonathan Oppenheimer
Date: Wed, 3 Dec 2025 15:55:00 -0500
Subject: [PATCH 17/20] fix: bug bounty correct link

---
 SECURITY.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/SECURITY.md b/SECURITY.md
index 60412d905d..61603a456e 100644
--- a/SECURITY.md
+++ b/SECURITY.md
@@ -5,14 +5,14 @@ responsible disclosures. Valid reports will be eligible for a reward (terms and
 
 ## Reporting a Vulnerability
 
-**Please do not file a public ticket** mentioning the vulnerability. To disclose a vulnerability submit it through our [Bug Bounty Program](https://immunefi.com/bug-bounty/avalabs/information/).
+**Please do not file a public ticket** mentioning the vulnerability. To disclose a vulnerability, submit it through our [Bug Bounty Program](https://immunefi.com/bug-bounty/avalanche/information/).
 
 Vulnerabilities must be disclosed to us privately with reasonable time to respond, and avoid compromise of other users and accounts, or loss of funds that are not your own. We do not reward spam or social engineering vulnerabilities.
 
 Do not test for or validate any security issues in the live Avalanche networks (Mainnet and Fuji testnet); confirm all exploits in a local private testnet.
 
-Please refer to the [Bug Bounty Page](https://immunefi.com/bug-bounty/avalabs/information/) for the most up-to-date program rules and scope.
+Please refer to the [Bug Bounty Page](https://immunefi.com/bug-bounty/avalanche/information/) for the most up-to-date program rules and scope. 
## Supported Versions

From 80e349cfe18345aa56c2c87bef4b6ca190dc1858 Mon Sep 17 00:00:00 2001
From: Jonathan Oppenheimer
Date: Wed, 3 Dec 2025 16:33:58 -0500
Subject: [PATCH 18/20] style: revert capitalization change

---
 consensus/dummy/README.md | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/consensus/dummy/README.md b/consensus/dummy/README.md
index bd13cc4cec..d49ff0c350 100644
--- a/consensus/dummy/README.md
+++ b/consensus/dummy/README.md
@@ -1,10 +1,10 @@
 # Consensus
 
-Disclaimer: the consensus package in subnet-evm is a complete misnomer.
+Disclaimer: the consensus package in Subnet-EVM is a complete misnomer.
 
 The consensus package in go-ethereum handles block validation and specifically handles validating the PoW portion of consensus - thus the name.
 
-Since AvalancheGo handles consensus for subnet-evm, subnet-evm is just the VM, but we keep the consensus package in place to handle part of the block verification process.
+Since AvalancheGo handles consensus for Subnet-EVM, Subnet-EVM is just the VM, but we keep the consensus package in place to handle part of the block verification process.
 
 ## Block Verification
 
@@ -12,17 +12,17 @@ The dummy consensus engine is responsible for performing verification on the hea
 
 ## Dynamic Fees
 
-subnet-evm includes a dynamic fee algorithm based off of [EIP-1559](https://eips.ethereum.org/EIPS/eip-1559). This introduces a field to the block type called `BaseFee`. The Base Fee sets a minimum gas price for any transaction to be included in the block. For example, a transaction with a gas price of 49 gwei, will be invalid to include in a block with a base fee of 50 gwei.
+Subnet-EVM includes a dynamic fee algorithm based off of [EIP-1559](https://eips.ethereum.org/EIPS/eip-1559). This introduces a field to the block type called `BaseFee`. The Base Fee sets a minimum gas price for any transaction to be included in the block. 
For example, a transaction with a gas price of 49 gwei will be invalid to include in a block with a base fee of 50 gwei.
 
-The dynamic fee algorithm aims to adjust the base fee to handle network congestion. subnet-evm sets a target utilization on the network, and the dynamic fee algorithm adjusts the base fee accordingly. If the network operates above the target utilization, the dynamic fee algorithm will increase the base fee to make utilizing the network more expensive and bring overall utilization down. If the network operates below the target utilization, the dynamic fee algorithm will decrease the base fee to make it cheaper to use the network.
+The dynamic fee algorithm aims to adjust the base fee to handle network congestion. Subnet-EVM sets a target utilization on the network, and the dynamic fee algorithm adjusts the base fee accordingly. If the network operates above the target utilization, the dynamic fee algorithm will increase the base fee to make utilizing the network more expensive and bring overall utilization down. If the network operates below the target utilization, the dynamic fee algorithm will decrease the base fee to make it cheaper to use the network. 
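The adjustment described above follows the shape of EIP-1559's update rule. A minimal integer-only sketch, assuming EIP-1559's 1/8 maximum change rate; the real implementation works on `big.Int`, sums gas over the 10-second window, and applies chain-config bounds:

```go
package main

import "fmt"

// baseFeeChangeDenominator matches EIP-1559's maximum change rate of 1/8
// per block. The window-summed gas described above is simplified to a
// single gasUsed value here.
const baseFeeChangeDenominator = 8

// nextBaseFee moves the base fee toward equilibrium: up when usage exceeds
// the target, down when it falls short, unchanged when exactly on target.
func nextBaseFee(parentBaseFee, gasUsed, gasTarget uint64) uint64 {
	switch {
	case gasUsed == gasTarget:
		return parentBaseFee
	case gasUsed > gasTarget:
		delta := parentBaseFee * (gasUsed - gasTarget) / gasTarget / baseFeeChangeDenominator
		return parentBaseFee + delta
	default:
		delta := parentBaseFee * (gasTarget - gasUsed) / gasTarget / baseFeeChangeDenominator
		return parentBaseFee - delta
	}
}

func main() {
	// A block using twice the target raises the base fee by 12.5%.
	fmt.Println(nextBaseFee(1_000, 200, 100)) // 1125
	// An empty block lowers it by 12.5%.
	fmt.Println(nextBaseFee(1_000, 0, 100)) // 875
}
```

Because the delta is at most `parentBaseFee / 8`, the downward branch cannot underflow.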
 - EIP-1559 is intended for Ethereum where a block is produced roughly every 10s
 - The dynamic fee algorithm needs to handle the case that the network quiesces and there are no blocks for a long period of time
-- Since subnet-evm produces blocks at a different cadence, it adapts EIP-1559 to sum the amount of gas consumed within a 10-second interval instead of using only the amount of gas consumed in the parent block
+- Since Subnet-EVM produces blocks at a different cadence, it adapts EIP-1559 to sum the amount of gas consumed within a 10-second interval instead of using only the amount of gas consumed in the parent block
 
 ## Consensus Engine Callbacks
 
-The consensus engine is called while blocks are being both built and processed and subnet-evm adds callback functions into the dummy consensus engine to insert its own logic into these stages.
+The consensus engine is called while blocks are being both built and processed and Subnet-EVM adds callback functions into the dummy consensus engine to insert its own logic into these stages.
 
 ### FinalizeAndAssemble
 
From f2dcbbf464cbe13789f78478bdd46b637e5d9631 Mon Sep 17 00:00:00 2001
From: Jonathan Oppenheimer
Date: Wed, 3 Dec 2025 16:37:21 -0500
Subject: [PATCH 19/20] style: revert capitalization change

---
 core/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/core/README.md b/core/README.md
index cf13016df4..a094dab6d8 100644
--- a/core/README.md
+++ b/core/README.md
@@ -10,7 +10,7 @@ When the consensus engine verifies blocks as they are ready to be issued into co
 
 InsertBlockManual verifies the block, inserts it into the state manager to track the merkle trie for the block, and adds it to the canonical chain if it extends the currently preferred chain.
 
-subnet-evm adds functions for Accept and Reject, which take care of marking a block as finalized and performing garbage collection where possible. 
+Subnet-EVM adds functions for Accept and Reject, which take care of marking a block as finalized and performing garbage collection where possible. The consensus engine can also call `SetPreference` on a VM to tell the VM that a specific block is preferred by the consensus engine to be accepted. This triggers a call to `reorg` the blockchain and set the newly preferred block as the preferred chain. From cecac17f09ba65355a539ced0a23abde64717c91 Mon Sep 17 00:00:00 2001 From: Jonathan Oppenheimer Date: Wed, 3 Dec 2025 17:17:14 -0500 Subject: [PATCH 20/20] fix: more links --- docs/releasing/README.md | 4 ++-- tests/antithesis/README.md | 2 +- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/docs/releasing/README.md b/docs/releasing/README.md index 4b51194520..3887dd1fb1 100644 --- a/docs/releasing/README.md +++ b/docs/releasing/README.md @@ -57,7 +57,7 @@ Remember to use the appropriate versioning for your release. compatibility.json has subnet-evm version v0.7.3 stated as compatible with RPC chain VM protocol version 0 but AvalancheGo protocol version is 39 ``` - This message can help you figure out what the correct RPC chain VM protocol version (here `39`) has to be in compatibility.json for your current release. Alternatively, you can refer to the [Avalanchego repository `version/compatibility.json` file](https://github.com/ava-labs/avalanchego/blob/main/version/compatibility.json) to find the RPC chain VM protocol version matching the AvalancheGo version we use here. + This message can help you figure out what the correct RPC chain VM protocol version (here `39`) has to be in compatibility.json for your current release. Alternatively, you can refer to the [Avalanchego repository `version/compatibility.json` file](https://github.com/ava-labs/avalanchego/blob/master/version/compatibility.json) to find the RPC chain VM protocol version matching the AvalancheGo version we use here. 1. 
Specify the AvalancheGo compatibility in the [README.md relevant section](../../README.md#avalanchego-compatibility). For example we would add: ```text @@ -421,7 +421,7 @@ export P_VERSION=v0.7.4 compatibility.json has subnet-evm version v0.7.4 stated as compatible with RPC chain VM protocol version 0 but AvalancheGo protocol version is 39 ``` - This message can help you figure out what the correct RPC chain VM protocol version (here `39`) has to be in compatibility.json for your current release. Alternatively, you can refer to the [Avalanchego repository `version/compatibility.json` file](https://github.com/ava-labs/avalanchego/blob/main/version/compatibility.json) to find the RPC chain VM protocol version matching the AvalancheGo version we use here. + This message can help you figure out what the correct RPC chain VM protocol version (here `39`) has to be in compatibility.json for your current release. Alternatively, you can refer to the [Avalanchego repository `version/compatibility.json` file](https://github.com/ava-labs/avalanchego/blob/master/version/compatibility.json) to find the RPC chain VM protocol version matching the AvalancheGo version we use here. 1. Commit your changes and push the branch ```bash diff --git a/tests/antithesis/README.md b/tests/antithesis/README.md index 0924b9f63d..9f0c39b689 100644 --- a/tests/antithesis/README.md +++ b/tests/antithesis/README.md @@ -1,7 +1,7 @@ # Antithesis Testing This package supports testing with -[Antithesis](https://antithesis.com/docs/introduction/introduction.html), +[Antithesis](https://antithesis.com/docs/), a SaaS offering that enables deployment of distributed systems (such as Avalanche) to a deterministic and simulated environment that enables discovery and reproduction of anomalous behavior.
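The `compatibility.json` lookup described in the release steps above can be sketched as below. The schema is an assumption here — a map from each RPC chain VM protocol version to the AvalancheGo versions that speak it — so check the real file before relying on this; the inline JSON is a hypothetical excerpt, not actual file contents.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// findProtocolVersion scans a compatibility mapping (RPC chain VM protocol
// version -> AvalancheGo versions) for the protocol version whose list
// contains the given AvalancheGo version.
func findProtocolVersion(compatJSON []byte, avagoVersion string) (string, error) {
	var compat map[string][]string
	if err := json.Unmarshal(compatJSON, &compat); err != nil {
		return "", err
	}
	for protocol, versions := range compat {
		for _, v := range versions {
			if v == avagoVersion {
				return protocol, nil
			}
		}
	}
	return "", fmt.Errorf("no protocol version lists %s", avagoVersion)
}

func main() {
	// Hypothetical excerpt of the compatibility file.
	data := []byte(`{"38": ["v1.12.2"], "39": ["v1.13.0", "v1.13.1"]}`)
	protocol, err := findProtocolVersion(data, "v1.13.0")
	fmt.Println(protocol, err) // 39 <nil>
}
```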