23 changes: 14 additions & 9 deletions docs/protocol/node-ops/node-operation/node-provisioning.md
@@ -9,18 +9,23 @@ sidebar_position: 10

The hardware your Node will need varies depending on the role your Node will play in the Flow network. For an overview of the differences see the [Node Roles Overview](./node-roles.md).

| Node Type | CPU | Memory | Disk | Example GCP Instance | Example AWS Instance |
|:----------------:| ---------:| ------:| ------:|:--------------:|:--------------:|
| **Collection** | 4 cores | 32 GB | 200 GB | n2-highmem-4 | r6i.xlarge |
| **Consensus** | 2 cores | 16 GB | 200 GB | n2-standard-4 | m6a.xlarge |
| **Execution** | 128 cores | 864 GB | 9 TB (with maintenance see: [pruning chunk data pack](https://forum.flow.com/t/execution-node-upgrade-to-v0-31-15-and-managing-disk-space-usage/5167) or 30 TB without maintenance) | n2-highmem-128 | |
| **Verification** | 2 cores | 16 GB | 200 GB | n2-highmem-2 | r6a.large |
| **Access** | 16 cores | 64 GB | 750 GB | n2-standard-16 | m6i.4xlarge |
| **Observer** | 2 cores | 4 GB | 300 GB | n2-standard-4 | m6i.xlarge |
| **EVM Gateway** | 2 cores | 32 GB | 30 GB | n2-highmem-4 | r6i.xlarge |
| Node Type | CPU | Memory | Disk | Example GCP Instance | Example AWS Instance |
|:----------------:|:------------------------------:|:-------------------------:|:----------------:|:--------------------:|:--------------------:|
| **Collection** | 2 cores 🆕<br />(was 4 cores) | 8 GB 🆕<br />(was 32 GB) | 200 GB | n2-standard-2 | m5.large |

**Member** commented:
I'm not sure I would put the NEW tag and previous value here. This page will probably remain unchanged until the next time we recommend hardware (months or years). At some point soon after the change in recommendation, storing the historical recommendation is more confusing than useful.

I assume people who may want to change their node hardware will arrive on this page because we have announced elsewhere that the recommendations have changed, in which case the NEW tag is unlikely to help them.

Suggested change:
| **Collection** | 2 cores 🆕<br />(was 4 cores) | 8 GB 🆕<br />(was 32 GB) | 200 GB | n2-standard-2 | m5.large |
| **Collection** | 2 cores | 8 GB | 200 GB | n2-standard-2 | m5.large |

**Contributor Author** replied:

  1. I couldn't find an emoji to express "updated".
  2. My plan was to revise the page and remove the 🆕 emoji a few weeks after the spork.

**@peterargue** (Contributor) commented on Oct 16, 2025:

What about just putting the current values in the docs, and using the 🆕 annotations in the forum post and announcements?

Or, set a reminder to update the docs in 3-6 months to remove them.

**Contributor Author** replied:

It's simpler to have it in the docs directly, tbh; I can then also use the link to this page in marketing content.
Unless you both feel strongly about it.

**Member** replied:

I don't feel strongly about it

| **Consensus** | 2 cores | 8 GB 🆕<br />(was 16 GB) | 200 GB | n2-standard-2 | m5.large |
| **Execution** | 128 cores | 864 GB | 9 TB<sup>1</sup> | n2-highmem-128 | |
| **Verification** | 2 cores | 8 GB 🆕<br />(was 16 GB) | 200 GB | n2-standard-2 | m5.large |
| **Access** | 8 cores 🆕<br />(was 16 cores) | 32 GB 🆕<br />(was 64 GB) | 750 GB | n2-standard-8 | m5.2xlarge |
| **Observer** | 2 cores | 4 GB | 300 GB | n2-standard-4 | m6i.xlarge |
| **EVM Gateway** | 2 cores | 32 GB | 30 GB | n2-highmem-4 | r6i.xlarge |

<sub>1: Recommended with maintenance (see [pruning chunk data pack](https://forum.flow.com/t/execution-node-upgrade-to-v0-31-15-and-managing-disk-space-usage/5167)); 30 TB without maintenance.</sub>


_Note: The above numbers represent our current best estimate for the state of the network. These will be actively updated as we continue benchmarking the network's performance._

_Note: If you are running your node on bare metal, we recommend provisioning a machine with higher CPU and memory than the minimum requirements. Unlike cloud instances, bare metal servers cannot be easily scaled up, and over-provisioning upfront helps avoid the need for disruptive hardware upgrades later._
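
As a quick sanity check before registering a node, you can compare a machine's resources against the table above. The snippet below is an illustrative Linux-only sketch (not an official tool); the thresholds shown are the Access node row, so adjust them for the role you are running:

```shell
#!/bin/sh
# Compare this machine's resources against the recommended specs
# for an Access node (8 cores, 32 GB RAM, 750 GB disk).
# Linux-only: relies on nproc, /proc/meminfo, and GNU df.
REQ_CORES=8
REQ_MEM_GB=32
REQ_DISK_GB=750

cores=$(nproc)
# MemTotal is reported in kB; convert to whole GB.
mem_gb=$(( $(grep MemTotal /proc/meminfo | awk '{print $2}') / 1024 / 1024 ))
# Available space on the root filesystem, in 1 GB blocks.
disk_gb=$(df --output=avail -B1G / | tail -1 | tr -d ' ')

[ "$cores" -ge "$REQ_CORES" ]   || echo "WARN: only $cores cores (recommended: $REQ_CORES)"
[ "$mem_gb" -ge "$REQ_MEM_GB" ] || echo "WARN: only ${mem_gb} GB RAM (recommended: $REQ_MEM_GB)"
[ "$disk_gb" -ge "$REQ_DISK_GB" ] || echo "WARN: only ${disk_gb} GB free disk (recommended: $REQ_DISK_GB)"
```

If you are on bare metal, remember that these are minimums; leaving headroom above them avoids a disruptive upgrade later.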

## Networking Requirements

Most of the load on your nodes will be messages sent back and forth between other nodes on the network. Make sure you have a sufficiently fast connection; we recommend at _least_ 1Gbps, and 5Gbps is better.
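
For a back-of-envelope sense of what these numbers mean, the helper below (an illustrative sketch, not part of any Flow tooling) converts link speed into transfer time; for example, moving a hypothetical 200 GB of state takes roughly 26 minutes at 1 Gbps but about 5 minutes at 5 Gbps, ignoring protocol overhead:

```shell
#!/bin/sh
# Rough time (in whole minutes) to transfer a payload of a given size
# over a link of a given speed. Uses decimal units (1 GB = 8 Gbit)
# and ignores protocol overhead, so real transfers will be slower.
transfer_minutes() {
    size_gb=$1
    rate_gbps=$2
    # seconds = (GB * 8 bits per byte) / Gbps; integer minutes = seconds / 60
    echo $(( size_gb * 8 / rate_gbps / 60 ))
}

transfer_minutes 200 1   # 200 GB at 1 Gbps
transfer_minutes 200 5   # 200 GB at 5 Gbps
```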