docs/operators/liquidator-bot/README.md (1 addition, 0 deletions)
@@ -26,3 +26,4 @@ OWNER OPERATORIDS BALANCE BURNR
2. **Liquidating accounts**\
Once the potential liquidation block is reached, the liquidator bot will call the [liquidate()](../../developers/smart-contracts/ssvnetwork#liquidateowner-operatorids-cluster) function in the network contract. If the bot is the first to successfully submit the transaction, the cluster will be liquidated and its SSV collateral will be sent to the wallet address that performed the liquidation.
+You can find the [installation instructions here](./installation).
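
For illustration only, here is a hedged sketch of submitting such a liquidation manually with Foundry's `cast`. The `liquidate` signature and the cluster tuple layout follow the SSVNetwork contract docs linked above, but the contract address, owner, operator IDs, cluster values, and key are placeholders you must replace with real data (typically fetched from on-chain events):

```bash
# Hypothetical liquidation call via Foundry's cast; all values are placeholders.
# Cluster tuple assumed as (validatorCount, networkFeeIndex, index, active, balance).
cast send "$SSV_NETWORK_ADDRESS" \
  "liquidate(address,uint64[],(uint32,uint64,uint64,bool,uint256))" \
  "$CLUSTER_OWNER" \
  "[1,2,3,4]" \
  "(1,0,0,true,1000000000000000000)" \
  --rpc-url "$ETH_RPC_URL" \
  --private-key "$LIQUIDATOR_PRIVATE_KEY"
```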
docs/operators/operator-node/README.md (2 additions, 2 deletions)
@@ -7,9 +7,9 @@ sidebar_position: 7
Operators provide hardware infrastructure, run the SSV protocol, and are responsible for maintaining the overall health of the SSV network. Operators determine their own fees and are compensated for their integral services to the network by operating and maintaining validators on behalf of stakers.
-To join the network as an operator a user must [install](installation.md) the SSV node software, and [register](../operator-management/registration.md) the operator to the network.
+To join the network as an operator, a user must [install](./node-setup) the SSV node software and [register](../operator-management/registration.md) the operator to the network.
docs/operators/operator-node/maintenance/dkg-operator-migration.md (3 additions, 3 deletions)
@@ -13,16 +13,16 @@ The recommended migration process could be summarised in the following steps:
* Backup DKG files (if applicable)
* Shut down DKG operator (if applicable) on the current machine
-* [Start DKG operator on the new machine](../enabling-dkg.md#start-ssv-dkg)
-* [Update operator metadata on the SSV WebApp](enabling-dkg.md#update-operator-metadata)
+* [Start DKG operator on the new machine](../node-setup/enabling-dkg/start-dkg-node/)
+* [Update operator metadata on the SSV WebApp](../node-setup/enabling-dkg/final-steps#update-operator-metadata)
:::info
Please note: since the DKG node does not have to be on the same machine as the SSV node, one can be migrated without having to migrate the other.
:::

### DKG backup (if necessary)

-If you have followed [the dedicated guide to enable DKG for your operator](../enabling-dkg), you most likely have (at least) these files in the folder with your node configuration:
+If you have followed [the dedicated guide to enable DKG for your operator](../node-setup/enabling-dkg/start-dkg-node/), you most likely have (at least) these files in the folder with your node configuration:
docs/operators/operator-node/maintenance/node-migration.md (15 additions, 9 deletions)
@@ -7,7 +7,7 @@ As a node operator, it may happen that the software stack needs to be migrated t
In such a scenario, it is very important to know which operations must be performed, in which order, and which sensitive pieces of data need to be preserved and copied over to the new hardware. Here is a summary:

-###Procedure
+## Procedure

In order to migrate the SSV Node to a different machine, it is necessary to shut down the current setup **before** launching the new one.
@@ -17,19 +17,25 @@ Two nodes with the same public key should never be running at the same time. The
So, for this reason, the migration process can be summarised in the following steps:

-* Backup node files
-* Shut down SSV Node on the current machine
-* Setup SSV Node on the new machine using backups
-* Wait at least one epoch
-* Start SSV Node service on the new machine
+1. Backup node files
+2. Shut down SSV Node on the current machine
+3. Set up SSV Node on the new machine using backups
+4. Wait at least one epoch
+5. Start SSV Node service on the new machine (a minimal shell sketch of these steps follows this list)
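
As a rough illustration, a minimal shell sketch of these steps, assuming a Docker-based manual setup with the file layout described below (the container name, paths, and host are placeholders; one epoch is roughly 6.4 minutes):

```bash
# 1. Back up the node files (assumed manual-setup layout; adjust paths to yours).
tar czf ssv-node-backup.tar.gz config.yaml encrypted_private_key.json password

# 2. Shut down the SSV Node on the current machine (assumed container name).
docker stop ssv_node

# 3. Copy the backup to the new machine and set up the node there using it.
scp ssv-node-backup.tar.gz user@new-machine:~/ssv/

# 4. Wait at least one epoch (32 slots x 12 s = ~6.4 minutes) before starting.
sleep 400

# 5. Start the SSV Node service on the new machine with your usual run command.
```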
:::warning
Please note: if you are also running a DKG operator node, you may have to [follow the DKG operator migration guide](./dkg-operator-migration), either because it is running on the same machine as the SSV node, or because it is running on a different machine that you also need to decommission.
:::
-###Node backup
+## Node backup

-If you have followed [the dedicated Node setup guide](../installation.md), you most likely have (at least) these files in the folder with your node configuration:
+### SSV Stack setup
+
+If you have followed the [automatic node setup with SSV Stack](../node-setup), your files should be in the `/ssv-stack/ssv-node-data` directory.
+
+### Manual Node setup
+
+If you have followed [the Manual Node setup guide](../node-setup/manual-setup), you most likely have (at least) these files in the folder with your node configuration:
```
⇒ tree
…
```
@@ -64,7 +70,7 @@ The configuration file (`config.yaml` in the code snippet above), is necessary f
Operator keys are, essentially, the authentication method to identify an SSV node and link it to an operator ID. As a consequence, whenever a node is moved to a different machine, they **absolutely must** be preserved and copied from the existing setup to the new one.

-The files in question are `encrypted_private_key.json` and `password` in the snippet above and if you have followed [the Node setup guide](../installation.md), the filenames should be the same for you.
+The files in question are `encrypted_private_key.json` and `password` in the snippet above, and if you have followed [the Manual Node setup guide](../node-setup/manual-setup), the filenames should be the same for you.
docs/operators/operator-node/maintenance/troubleshooting.md (28 additions, 23 deletions)
@@ -12,7 +12,10 @@ import TabItem from '@theme/TabItem';
In order to troubleshoot any issues with the SSV Node, a good first step is to query the `/health` endpoint.

-First and foremost, the `SSV_API` port environment variable, or configuration parameter must be set. For that, refer to the [Node Configuration Reference page](../node-configuration-reference.md).
+To use this endpoint you'll first need to configure and open a port:
+- If you are using a `.yaml` file to configure your SSV node, you can just add `SSVAPIPort: 16000` (or any other port) at the end of the file and restart SSV to apply it.
+- If you are using `.env` to configure SSV, use the `SSV_API` environment variable.
+- Then make sure your SSV node/container has port `16000` (or whichever port you chose) open, as shown in the sketch after this list.
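
For example, a minimal sketch of the configuration change, assuming the `.yaml` option above (the key name comes from the bullet above; the port value is your choice):

```yaml
# config.yaml: enable the SSV API on port 16000 (or any other free port).
SSVAPIPort: 16000
```

If the node runs in Docker, the same port also has to be published on the container, for instance with `-p 16000:16000` on `docker run` or an equivalent `ports` entry in your compose file.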
Assuming that the SSV node is running on the local machine, and that the `SSV_API` port is set to `16000`, the health check endpoint can be reached with the `curl` command, as shown below:
@@ -29,9 +32,9 @@ This request will provide a JSON response, here is an example of a response from
"execution_node": "good",
"event_syncer": "good",
"advanced": {
-"peers": 89,
-"inbound_conns": 67,
-"outbound_conns": 22,
+"peers": 19,
+"inbound_conns": 7,
+"outbound_conns": 17,
"p2p_listen_addresses": [
"tcp://<X.Y.W.Z>:13001",
"udp://<X.Y.W.Z>:12001"
@@ -43,7 +46,7 @@ This request will provide a JSON response, here is an example of a response from
This "self-diagnosis" report of the node can be useful to make sure that some essential indicators have the correct values:

* `p2p_listen_addresses` should show the correct public IP & port, and the TCP port should be open when checking this IP with a port checker (the addresses have been anonymised for the purpose of this page)
-* `peers` should be at least 60 for operators with more than 100 validators
+* `peers` should be at least 15 for operators with more than 100 validators
* `inbound_conns` should be at least 20% of `peers` (though not an exact threshold, this is a good indication of healthy inbound connections to the node)
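
A quick way to check these indicators from the command line, sketched with `curl` and `jq` (the host, port, and `/v1/node/health` path are assumptions based on the example above; adjust them to your setup):

```bash
# Pull just the peer-related indicators out of the health report (assumed URL).
curl -s http://localhost:16000/v1/node/health \
  | jq '.advanced | {peers, inbound_conns, outbound_conns}'
```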
Below is an example of the same report from a node in a bad state:
@@ -56,7 +59,7 @@ Below, an example of the same report, from a node in bad state:
"event_syncer": "good",
"advanced": {
"peers": 5,
-"inbound_conns": 1,
+"inbound_conns": 0,
"outbound_conns": 4,
"p2p_listen_addresses": [
"tcp://<X.Y.W.Z>:13004",
@@ -68,6 +71,8 @@ Below, an example of the same report, from a node in bad state:
## SSV-Pulse benchmarking tool
+Before using this tool, make sure to enable the [SSV Node Health Endpoint](#ssv-node-health-endpoint).
+
Our team developed a tool to ease your troubleshooting process, as it analyzes the SSV Node, Consensus Node, and Execution Node at the same time. You can find more details on the [ssv-pulse GitHub page](https://github.com/ssvlabs/ssv-pulse).

To use this tool, you can use the Docker Compose snippet or the docker command below:
@@ -83,14 +88,14 @@ ssv-pulse:
image: ghcr.io/ssvlabs/ssv-pulse:latest
command:
- 'benchmark'
-- '--consensus-addr=<YOUR_ADDRESS_HERE>' # Change to your Consensus Node's address, e.g. http://lighthouse:5052
-- '--execution-addr=<YOUR_ADDRESS_HERE>' # Change to your Execution Node's address, e.g. http://geth:8545
-- '--ssv-addr=http://ssv_node:16000' # Or change to your SSV Node's address with SSVAPIPort
+- '--consensus-addr=<YOUR_ADDRESS_HERE>' # Change to Consensus Node's address, e.g. http://lighthouse:5052
+- '--execution-addr=<YOUR_ADDRESS_HERE>' # Change to Execution Node's address, e.g. http://geth:8545
+- '--ssv-addr=<YOUR_ADDRESS_HERE>' # Change to SSV Node's address, e.g. http://ssv_node:16000
- '--duration=60m'
# - '--network=holesky' # Add this if you run a Holesky Node
# - '--platform linux/arm64' # Add this if you run on an arm64 machine
networks:
-- ssv
+- local-docker # Make sure the network is the same as yours. Check with the command: docker network ls
pull_policy: always
```
@@ -284,7 +289,7 @@ Then pull the latest image from SSV:
```
docker pull ssvlabs/ssv-node:latest
```
-And finally... [run the creation command again](../installation.md#start-the-node) to create a new Docker container with the latest SSV image.
+Finally, [run the creation command again](../node-setup/manual-setup#start-the-node) to create a new Docker container with the latest SSV image.
</details>
@@ -358,7 +363,7 @@ This section is a collection of common warnings, error messages, statuses and ot
```
FATAL failed to create beacon go-client {"error": "failed to create http client: failed to confirm node connection: failed to fetch genesis: failed to request genesis: failed to call GET endpoint: Get \"http://5.104.175.133:5057/eth/v1/beacon/genesis\": context deadline exceeded", "errorVerbose":…………….\nfailed to create http client", "address": "http://5.104.175.133:5057"}
```
-This is likely due to issues with the Beacon layer Node. Verify that `BeaconNodeAddr` has the correct address and port in [`config.yaml` configuration file](../installation).
+This is likely due to issues with the Beacon layer Node. Verify that `BeaconNodeAddr` has the correct address and port in the [`config.yaml` configuration file](../node-setup/manual-setup#create-configuration-file).
***
@@ -368,7 +373,7 @@ This is likely due to issues with the Beacon layer Node. Verify that `BeaconNode
```
FATAL could not connect to execution client {"error": "failed to connect to execution client: dial tcp 5.104.175.133:8541: i/o timeout"}
```
-This is likely due to issues with the Execution layer Node. Verify that `ETH1Addr` has the correct address and port in [`config.yaml` configuration file](../installation).
+This is likely due to issues with the Execution layer Node. Verify that `ETH1Addr` has the correct address and port in the [`config.yaml` configuration file](../node-setup/manual-setup#create-configuration-file).
Finally, make sure that your ETH1 endpoint is running with WebSocket support. This is required in order to stream events from the network contracts.
@@ -380,7 +385,7 @@ Finally, make sure that your ETH1 endpoint is running using Websocket. This is r
```
FATAL could not setup operator private key {"error": "Operator private key is not matching the one encrypted the storage", "errorVerbose": ...{
```
-Verify that the Operator Private Key is correctly set in [`config.yaml` configuration file](../installation). In particular, if using unencrypted (raw) keys, that the **private (secret) key** was copied in the configuration file and that it contains all characters (sometimes it contains a `=` character that can easily be left out).
+Verify that the Operator Private Key is correctly set in the [`config.yaml` configuration file](../node-setup/manual-setup#create-configuration-file). In particular, if using unencrypted (raw) keys, check that the **private (secret) key** was copied into the configuration file and that it contains all characters (sometimes it contains a `=` character that can easily be left out).
If the node has been stopped and restarted, verify that the same configuration has been applied, that the private key has not been changed, and that the `db.Path` configuration points to the same directory as before.
@@ -392,7 +397,7 @@ If the node has been stopped and restart, verify that the same configuration has
```
FATAL could not setup network {"error": "network not supported: jatov2"}
```
-In the example above, the `Network` in [`config.yaml` configuration file](../installation) was wrongly set to `jatov2` instead of `jato-v2`, so be sure to look for thinks like spelling mistakes.
+In the example above, the `Network` in the [`config.yaml` configuration file](../node-setup/manual-setup#create-configuration-file) was wrongly set to `jatov2` instead of `jato-v2`, so be sure to look for things like spelling mistakes.
***
@@ -403,7 +408,7 @@ could not create loggerlogging.SetGlobalLogger: unrecognized level: "infor"
```
make: *** [Makefile:97: start-node] Error 1
```
-In the example above, the `LogLevel` variable in [`config.yaml` configuration file](../installation) was wrongly set to `infor` instead of `info`, so be sure to look for thinks like spelling mistakes.
+In the example above, the `LogLevel` variable in the [`config.yaml` configuration file](../node-setup/manual-setup#create-configuration-file) was wrongly set to `infor` instead of `info`, so be sure to look for things like spelling mistakes.
***
@@ -423,35 +428,35 @@ This error could be caused by using multiple SSV nodes within one Nimbus setup.
```
ERROR P2PNetwork unable to create external multiaddress {"error": "invalid ip address provided: ...
```
-This error signalizes the node could not figure the public IP address of your node on a startup. You need to provide your SSV Node's address in `p2p: HostAddress:` variable in [your `config.yaml` file.](https://docs.ssv.network/operator-user-guides/operator-node/installation#peer-to-peer-ports-configuration-and-firewall)
+This error signals that the node could not figure out its public IP address on startup. You need to provide your SSV Node's address in the `p2p: HostAddress:` variable in [your `config.yaml` file](../node-setup/manual-setup#peer-to-peer-ports-configuration-and-firewall).
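
For illustration, a minimal sketch of the relevant `config.yaml` section, following the key names mentioned above (the IP address is a placeholder):

```yaml
p2p:
  # Public IP address of the machine running the SSV node (placeholder value).
  HostAddress: 203.0.113.10
```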
***
### Node Metrics not showing up in Prometheus/Grafana
-Please verify that the `MetricsAPIPort` variable is correctly set in [`config.yaml` configuration file](../installation).
+Please verify that the `MetricsAPIPort` variable is correctly set in the [`config.yaml` configuration file](../node-setup/manual-setup#create-configuration-file).
-For a more in-depth guide on how to set up Node monitoring, refer to [the dedicated page in this section](../monitoring/monitoring.md).
+For a more in-depth guide on how to set up Node monitoring, refer to [the dedicated page in this section](../monitoring).
***
### Node does not generate a log file
-Please verify that the `LogFilePath` variable is correctly set in [`config.yaml` configuration file](../installation). Be sure to look for thinks like spelling mistakes.
+Please verify that the `LogFilePath` variable is correctly set in the [`config.yaml` configuration file](../node-setup/manual-setup#create-configuration-file). Be sure to look for things like spelling mistakes.
***
### Node takes a long time to become active
-Please verify that the `Path` under the `db` section is correctly set in [`config.yaml` configuration file](../installation). Be sure to look for thinks like spelling mistakes.
+Please verify that the `Path` under the `db` section is correctly set in the [`config.yaml` configuration file](../node-setup/manual-setup#create-configuration-file). Be sure to look for things like spelling mistakes.
If the Node was working correctly and becomes inactive after a configuration change, make sure that `Path` wasn't accidentally changed, as this would cause the database to be reconstructed and lead to a slower startup.
***
### `"port 13000 already running"` message
-This could happen if you run both consensus node and SSV node on the same machine - please make sure to change your SSV node port to any other port. Refer to [the p2p section of the installation guide for details](../installation).
+This can happen if you run both a consensus node and an SSV node on the same machine. Please make sure to change your SSV node port to another free port; refer to [the p2p section of the manual setup guide](../node-setup/manual-setup#create-configuration-file) for details.
After updating your port, please restart the SSV node and confirm the error does not appear.
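
As a sketch, the port change would look roughly like this in `config.yaml` (the `TcpPort`/`UdpPort` key names and values are assumptions; check the linked guide for the exact parameters):

```yaml
p2p:
  # Move the SSV node off the conflicting port if the consensus client already uses it.
  TcpPort: 13001
  UdpPort: 12001
```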
@@ -480,7 +485,7 @@ Steps to confirm you use the same key:
1. Find the operator key that you have registered to the network in the [ssv explorer](https://explorer.ssv.network/).
2. Find the operator public key you have generated in your node during setup.
-3. Compare between the keys - if they do not match you must update your private key in the node config.yaml file, according to the key generated during your node installation.
+3. Compare the keys: if they do not match, you must update the private key in the node's `config.yaml` file to the key generated during your node setup.