# Hardware spec changes for the Forte upgrade #1510
The hardware your Node will need varies depending on the role your Node will play in the Flow network. For an overview of the differences see the [Node Roles Overview](./node-roles.md).
**Before:**

| Node Type | CPU | Memory | Disk | Example GCP Instance | Example AWS Instance |
|:----------------:|----------:|-------:|-----:|:--------------------:|:--------------------:|
| **Collection** | 4 cores | 32 GB | 200 GB | n2-highmem-4 | r6i.xlarge |
| **Consensus** | 2 cores | 16 GB | 200 GB | n2-standard-4 | m6a.xlarge |
| **Execution** | 128 cores | 864 GB | 9 TB (with maintenance, see [pruning chunk data pack](https://forum.flow.com/t/execution-node-upgrade-to-v0-31-15-and-managing-disk-space-usage/5167)) or 30 TB without maintenance | n2-highmem-128 | |
| **Verification** | 2 cores | 16 GB | 200 GB | n2-highmem-2 | r6a.large |
| **Access** | 16 cores | 64 GB | 750 GB | n2-standard-16 | m6i.4xlarge |
| **Observer** | 2 cores | 4 GB | 300 GB | n2-standard-4 | m6i.xlarge |
| **EVM Gateway** | 2 cores | 32 GB | 30 GB | n2-highmem-4 | r6i.xlarge |
**After:**

| Node Type | CPU | Memory | Disk | Example GCP Instance | Example AWS Instance |
|:----------------:|:------------------------------:|:-------------------------:|:----------------:|:--------------------:|:--------------------:|
| **Collection** | 2 cores 🆕<br />(was 4 cores) | 8 GB 🆕<br />(was 32 GB) | 200 GB | n2-standard-2 | m5.large |
| **Consensus** | 2 cores | 8 GB 🆕<br />(was 16 GB) | 200 GB | n2-standard-2 | m5.large |
| **Execution** | 128 cores | 864 GB | 9 TB<sup>1</sup> | n2-highmem-128 | |
| **Verification** | 2 cores | 8 GB 🆕<br />(was 16 GB) | 200 GB | n2-standard-2 | m5.large |
| **Access** | 8 cores 🆕<br />(was 16 cores) | 32 GB 🆕<br />(was 64 GB) | 750 GB | n2-standard-8 | m5.2xlarge |
| **Observer** | 2 cores | 4 GB | 300 GB | n2-standard-4 | m6i.xlarge |
| **EVM Gateway** | 2 cores | 32 GB | 30 GB | n2-highmem-4 | r6i.xlarge |

<sub>1: 9 TB with regular maintenance (see [pruning chunk data pack](https://forum.flow.com/t/execution-node-upgrade-to-v0-31-15-and-managing-disk-space-usage/5167)); 30 TB without maintenance.</sub>
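Since the 9 TB figure assumes regular pruning, Execution node operators need to notice when the data volume is filling up. Below is a minimal Go sketch of such a check; the data directory path and the 80% threshold are illustrative assumptions, not Flow defaults, and this helper is not part of the Flow tooling.

```go
// disk_check.go — minimal sketch: warn when the Execution node data volume
// is filling up. Linux-only (uses syscall.Statfs). The path and the 80%
// threshold below are illustrative assumptions, not Flow defaults.
package main

import (
	"fmt"
	"log"
	"syscall"
)

func main() {
	const dataDir = "/var/flow/data" // assumed data directory; adjust for your setup
	var fs syscall.Statfs_t
	if err := syscall.Statfs(dataDir, &fs); err != nil {
		log.Fatalf("statfs %s: %v", dataDir, err)
	}
	total := fs.Blocks * uint64(fs.Bsize)
	free := fs.Bavail * uint64(fs.Bsize)
	usedPct := 100 * float64(total-free) / float64(total)
	fmt.Printf("%s: %.1f%% used (%.2f TB free)\n", dataDir, usedPct, float64(free)/1e12)
	if usedPct > 80 { // arbitrary threshold; prune well before the disk fills
		fmt.Println("consider pruning chunk data packs (see the forum post linked above)")
	}
}
```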
**Review discussion** (on the 🆕 annotations in the table above):

- I'm not sure I would put the NEW tag and previous value here. This page will probably remain unchanged until the next time we recommend hardware (months or years). At some point soon after the change in recommendation, storing the historical recommendation is more confusing than useful. I assume people who may want to change their node hardware will arrive on this page because we have announced elsewhere that the recommendations have changed, in which case the NEW tag is unlikely to help them.
- What about just putting the current values in the docs, and using the new annotations in the forum post and announcements? Or set a reminder to update the docs in 3–6 months to remove them.
- It's a bit simpler to have it in the docs directly, tbh; I can then also use the link to this page in marketing content.
- I don't feel strongly about it.
_Note: The above numbers represent our current best estimate for the state of the network. These will be actively updated as we continue benchmarking the network's performance._
_Note: If you are running your node on bare metal, we recommend provisioning a machine with higher CPU and memory than the minimum requirements. Unlike cloud instances, bare metal servers cannot be easily scaled up, and over-provisioning upfront helps avoid the need for disruptive hardware upgrades later._
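For bare-metal setups, a quick way to sanity-check a host against a row in the table is to read the core count and total memory directly. The Go sketch below hard-codes the Consensus minimums as an assumed example; it is not part of the Flow tooling, and it reads `/proc/meminfo`, so it is Linux-only.

```go
// spec_check.go — rough sketch: compare this Linux host against a minimum
// CPU/memory recommendation. The values below are the Consensus row from
// the table above; adjust them for the role you intend to run.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"runtime"
	"strconv"
	"strings"
)

// totalMemGB reads MemTotal from /proc/meminfo and returns it in GiB.
func totalMemGB() (float64, error) {
	f, err := os.Open("/proc/meminfo")
	if err != nil {
		return 0, err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "MemTotal:" {
			kb, err := strconv.ParseFloat(fields[1], 64)
			if err != nil {
				return 0, err
			}
			return kb / (1024 * 1024), nil // kB -> GiB
		}
	}
	return 0, fmt.Errorf("MemTotal not found")
}

func main() {
	const minCores, minMemGB = 2, 8 // Consensus minimums from the table above
	cores := runtime.NumCPU()
	mem, err := totalMemGB()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("cores: %d (need %d), memory: %.1f GiB (need %d)\n", cores, minCores, mem, minMemGB)
	if cores < minCores || mem < float64(minMemGB) {
		fmt.Println("host is below the recommended minimum for this role")
	}
}
```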
## Networking Requirements
Most of the load on your nodes will be messages sent back and forth between other nodes on the network. Make sure you have a sufficiently fast connection; we recommend at _least_ 1 Gbps, and 5 Gbps is better.
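To verify a link before joining the network, a dedicated tool such as iperf3 is the usual choice. Purely as an illustration of what is being measured, the rough Go sketch below streams data over a single TCP connection between two hosts and reports the achieved rate; the port and transfer size are arbitrary assumptions.

```go
// bwprobe.go — crude point-to-point TCP throughput check between two hosts,
// sketched only to illustrate the >=1 Gbps recommendation. Use a real tool
// such as iperf3 for serious measurement.
package main

import (
	"flag"
	"fmt"
	"io"
	"log"
	"net"
	"time"
)

func main() {
	mode := flag.String("mode", "server", "server | client")
	addr := flag.String("addr", ":9000", "listen or target address (arbitrary port)")
	gb := flag.Int("gb", 1, "GiB to send in client mode")
	flag.Parse()

	if *mode == "server" {
		ln, err := net.Listen("tcp", *addr)
		if err != nil {
			log.Fatal(err)
		}
		for {
			c, err := ln.Accept()
			if err != nil {
				log.Fatal(err)
			}
			go func(c net.Conn) { // discard everything the client sends
				defer c.Close()
				n, _ := io.Copy(io.Discard, c)
				log.Printf("received %d bytes from %s", n, c.RemoteAddr())
			}(c)
		}
	}

	// client mode: stream the requested amount and report the achieved rate
	c, err := net.Dial("tcp", *addr)
	if err != nil {
		log.Fatal(err)
	}
	defer c.Close()
	buf := make([]byte, 1<<20) // 1 MiB chunks
	start := time.Now()
	for sent := 0; sent < *gb<<30; sent += len(buf) {
		if _, err := c.Write(buf); err != nil {
			log.Fatal(err)
		}
	}
	secs := time.Since(start).Seconds()
	fmt.Printf("~%.2f Gbps\n", float64(*gb)*8/secs) // GiB sent * 8 bits / seconds (approximate)
}
```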
Preview here: https://docs-git-vishal-fortehwspecchange-onflow.vercel.app/protocol/node-ops/node-operation/node-provisioning