### Password and private key
On the first start, the node will generate a random `password` file and an encrypted `private_key` file. You can find them under the `~/ssv-stack/ssv-node-data` directory.
**If you already have encrypted key and password files**:
* Copy/move them to `~/ssv-stack/ssv-node-data`
* Edit the environment variables to the correct file names, e.g.:
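For illustration only, here is a minimal sketch of what that could look like in the stack's compose file. The service name `ssv-node` and the variables `PRIVATE_KEY_FILE` / `PASSWORD_FILE` are assumptions based on common SSV node setups, and the container paths are placeholders; adjust them to your actual stack.

```yaml
services:
  ssv-node:
    environment:
      # Hypothetical variable names; match them to the ones your compose file actually reads.
      - PRIVATE_KEY_FILE=/ssv-node-data/encrypted_private_key.json
      - PASSWORD_FILE=/ssv-node-data/password
```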
`docs/operators/operator-node/node-setup/best-practices.md`
This guide outlines key recommendations to ensure the best Performance and Correctness of your node. The guide will be updated as we get new information.
The page is divided into four parts:
- [**Introduction**](#introduction) - Understanding the Performance, Correctness, and their key factors.
- [**SSV-specific**](#ssv-specific) - Features of SSV that will improve Performance or Correctness.
- [**Major impact**](#major-impact) - Vital tips, more obvious.
- [**Minor impact**](#minor-impact) - Fine-tuning tips, nice-to-have, less obvious.
## **Introduction**
Several key factors influence Performance and Correctness:
- **Network Throughput & Latency:** Minimal network delay is critical, especially for production setups.
- **Hardware Resources:** Adequate CPU, sufficient RAM, and fast, reliable storage are necessary.
## **SSV-specific**

#### Multiple Consensus Client Endpoints

SSV can fail over between several Consensus client endpoints listed in `BeaconNodeAddr` (a configuration sketch follows the list below):

- When the first node goes offline or out of sync, SSV will switch to the next endpoint.
- Endpoints should be set in the order you want them to be used. The first node will be the primary one.
- Failover works in a round-robin way, so once SSV has cycled through all of the endpoints it will start from the first one again.
- When the SSV node restarts, it will always start with the first endpoint from the list.
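A sketch of such a failover configuration, assuming endpoints are listed in `BeaconNodeAddr` separated by `;` (the addresses are placeholders; check your node's configuration reference for the exact format):

```yaml
eth2:
  # Listed in priority order: the first endpoint is the primary one.
  BeaconNodeAddr: http://beacon-node-1:5052;http://beacon-node-2:5052
```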
#### Weighted Attestation Data
```yaml
eth2:
  WithWeightedAttestationData: true # Enables WAD
```
- The feature only works with 2+ CL endpoints (`BeaconNodeAddr`) configured.
- Improves attestation accuracy by scoring responses from multiple Beacon nodes based on epoch and slot proximity. Adds slight latency to duties but includes safeguards (timeouts, retries).
#### Parallel Data Submission
```yaml
eth2:
  WithParallelSubmissions: true # Sends duties to all nodes concurrently
```
- The feature only works with 2+ CL endpoints (`BeaconNodeAddr`) configured.
- SSV will submit duty data to all Beacon nodes in parallel. This built-in configuration ensures high availability and improves performance.
#### Doppelganger Protection

- Doppelganger Protection (DG) checks whether the managed validators are attesting elsewhere at the moment of SSV node start. This prevents double signing, which is a slashable event.
- SSV with DG enabled will be offline for the first 3 epochs after the node start.
- If enough nodes in the cluster have DG enabled and are online, a restarted node will not wait for 3 epochs. The node can identify that a DG quorum is already running and will join it. This process is almost as quick as restarting a node without DG.
- If you manage all nodes in the cluster, it is recommended to restart them one by one. Otherwise, it will take 3 epochs to complete the DG checks.
- For more technical details, please refer to [our documentation page on GitHub](https://github.com/ssvlabs/ssv/blob/v2.3.0/doppelganger/README.md).
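A hedged configuration sketch: the key name below is an assumption based on common SSV node setups, so verify the exact setting against the README linked above before relying on it.

```yaml
# Assumed key name; confirm against the doppelganger README linked above.
EnableDoppelgangerProtection: true
```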
## **Major impact**
High‑priority practices that ensure reliable, on‑time duty submissions. These might sound obvious, but they are often overlooked.
Most of the hardware-related suggestions below are for Execution (EL) and Consensus (CL) clients.
### **For Home Setups**
- **Hardware:** A usual setup has a 4- to 8-core CPU (focus on single-thread performance) and 32 GB of RAM.
- **Disk:** NVMe SSDs are strongly recommended. If you're unsure which filesystem to use, stick to `ext4`, as it is performant and the easiest to maintain.
- **Reliability:** Use a UPS and ensure the machine is well-ventilated. Sudden power loss or thermal throttling can cause downtime or result in missed duties.
- **Internet Connectivity:** Ensure your ISP doesn't impose strict data caps. Choose a plan with at least 10 Mbps upload speed, but latency and reliability make the difference.
- **[Follow EthStaker hardware section](https://ethstaker.org/staking-hardware)** as their guides are focused on running Execution and Consensus nodes.
Ensure your hardware meets or exceeds [SSV recommended specifications](./hardwar
Ensure all required ports are open and correctly configured on your setup.
**Critical Ports:**
- **SSV P2P:** Typically 12001 UDP and 13001 TCP (or as specified in your configuration).
- **Execution P2P:** Typically 30303 TCP and UDP (for Geth and Nethermind).
- **Execution RPC:** HTTP 8545 and WS 8546 (for Geth and Nethermind), **open only to the SSV node!**
- **Consensus P2P:** Depends on your client; make sure to follow your client's documentation to open the correct ports.
- **Consensus RPC:** Depends on your client (e.g., 3500 for Prysm, 5052 for Lighthouse), **open only to the SSV node!**
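For reference, the SSV node's own P2P ports can be pinned in its `config.yaml`. A minimal sketch, assuming the commonly used `p2p` keys (verify the names against your node's configuration reference):

```yaml
p2p:
  TcpPort: 13001 # SSV P2P over TCP
  UdpPort: 12001 # SSV P2P discovery over UDP
```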
Location of your clients plays a direct role in performance. Co-hosting Execution, Consensus, and SSV clients on the same machine or local network minimizes the latency between them.
### Network Throughput and Latency
**Recommendation:**
Optimize your overall network performance by implementing the following best practices:
**Minimizing Intermediate Layers**
- Reduce the number of intermediate hops (such as redundant routers, proxies, or firewalls) between your nodes and connected services.
- Simplify your firewall configuration by using minimal, direct rule sets (e.g., UFW with straightforward rules) to avoid delays.
Keep your clients updated, while prioritizing stability over following every upgrade.
**TCP Congestion Control**
- Set your TCP congestion control to BBR by [following this guide](https://www.cyberciti.biz/cloud-computing/increase-your-linux-server-internet-speed-with-tcp-bbr-congestion-control/), if supported by your kernel version.
`docs/stakers/clusters/reactivation.md`
# Reactivation
The following is the theoretical context behind Cluster Reactivation. To see the actionable steps, please follow [this guide instead](../cluster-management/re-activating-a-cluster.md).
In order to reactivate a liquidated cluster, the user must supply the liquidation collateral required for their cluster. It is advised to deposit more than the reactivation amount so the cluster will have an operational runway. Users that only deposit the liquidation collateral may be liquidated soon after because they did not compensate for the operational cost of their cluster’s managed validator(s).
Once reactivated, the cluster's validator(s) will continue operating. To calculate the minimal funding (liquidation collateral) needed to reactivate a cluster:
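A sketch of the calculation, assuming the cluster's burn rate is the network fee plus the sum of its operators' fees (per validator, per block), the liquidation threshold period is expressed in blocks, and the network enforces a minimum liquidation collateral; the exact parameters are defined by the SSV network contracts:

```latex
\text{collateral} \ge \max\Big(
  \text{burn rate} \times \text{validator count} \times \text{liquidation threshold period},\;
  \text{minimum liquidation collateral}
\Big)
```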