Commit 6101868

Aleksandr Zamiatin authored and committed

Multiple changes and initial cleanup

1 parent 0f494a2 · commit 6101868

24 files changed: +338 −345 lines changed

docs/operators/liquidator-bot/README.md

Lines changed: 1 addition & 0 deletions

@@ -26,3 +26,4 @@ OWNER OPERATORIDS BALANCE BURNR
 2. **Liquidating accounts** \
 Once the potential liquidation block is reached, the liquidator bot will call the [liquidate()](../../developers/smart-contracts/ssvnetwork#liquidateowner-operatorids-cluster) function in the network contract. If the bot was the first to successfully pass the transaction, the cluster will be liquidated and its SSV collateral will be sent to the wallet address that performed the liquidation.
 
+You can find the [installation instructions here](./installation).

docs/operators/operator-node/README.md

Lines changed: 2 additions & 2 deletions

@@ -7,9 +7,9 @@ sidebar_position: 7
 
 Operators provide hardware infrastructure, run the SSV protocol, and are responsible for maintaining the overall health of the SSV network. Operators determine their own fees and are compensated for their integral services to the network by operating and maintaining validators on behalf of stakers.
 
-To join the network as an operator a user must [install](installation.md) the SSV node software, and [register](../operator-management/registration.md) the operator to the network.
+To join the network as an operator, a user must [install](./node-setup) the SSV node software and [register](../operator-management/registration.md) the operator to the network.
 
-* [Installation Guide](installation.md)
+* [Installation Guide](./node-setup)
 * [Configuring MEV](configuring-mev.md)
 * [Enabling DKG](enabling-dkg.md)
 * [Operator Registration](../operator-management/registration.md)
Lines changed: 6 additions & 0 deletions

@@ -1,2 +1,8 @@
 # Maintenance
 
+If you are having trouble with your SSV node, visit the
+* [Troubleshooting section](./troubleshooting.md)
+***
+To migrate your SSV node, follow these guides:
+* [Node Migration](./node-migration.md)
+* [DKG Migration](./dkg-operator-migration)

docs/operators/operator-node/maintenance/dkg-operator-migration.md

Lines changed: 3 additions & 3 deletions

@@ -13,16 +13,16 @@ The recommended migration process could be summarised in the following steps:
 
 * Backup DKG files (if applicable)
 * Shut down DKG operator (if applicable) on the current machine
-* [Start DKG operator on the new machine](../enabling-dkg.md#start-ssv-dkg)
-* [Update operator metadata on the SSV WebApp](enabling-dkg.md#update-operator-metadata)
+* [Start DKG operator on the new machine](../node-setup/enabling-dkg/start-dkg-node/)
+* [Update operator metadata on the SSV WebApp](../node-setup/enabling-dkg/final-steps#update-operator-metadata)
 
 :::info
 Please note: since the DKG node does not have to be on the same machine as the SSV node, one can be migrated without having to migrate the other.
 :::
 
 ### DKG backup (if necessary)
 
-If you have followed [the dedicated guide to enable DKG for your operator](../enabling-dkg), you most likely have (at least) these files in the folder with your node configuration:
+If you have followed [the dedicated guide to enable DKG for your operator](../node-setup/enabling-dkg/start-dkg-node/), you most likely have (at least) these files in the folder with your node configuration:
 
 ```
 ⇒ tree
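The backup step above can be scripted. A minimal sketch, assuming the key-material file names match the SSV node defaults (`encrypted_private_key.json` and `password`); your DKG folder layout may differ, so adjust the names before using this:

```shell
#!/bin/sh
# Sketch: copy DKG operator key material to a backup folder before
# decommissioning the old machine. File names are assumptions based on
# the SSV node defaults; adjust them to your own setup.
backup_dkg_files() {
    src="$1"    # folder holding the DKG node configuration
    dest="$2"   # backup destination folder
    mkdir -p "$dest"
    for f in encrypted_private_key.json password; do
        if [ -f "$src/$f" ]; then
            cp "$src/$f" "$dest/"
        fi
    done
}

# Example: backup_dkg_files ~/ssv-dkg "dkg-backup-$(date +%Y%m%d)"
```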

docs/operators/operator-node/maintenance/node-migration.md

Lines changed: 15 additions & 9 deletions

@@ -7,7 +7,7 @@ As a node operator, it may happen that the software stack needs to be migrated t
 
 In such a scenario, it is very important to know which operations must be performed, in which order, and which sensitive pieces of data need to be preserved and copied over to the new hardware. Here is a summary:
 
-### Procedure
+## Procedure
 
 In order to migrate the SSV Node to a different machine, it is necessary to shut down the current setup **before** launching the new one.

@@ -17,19 +17,25 @@ Two nodes with the same public key should never be running at the same time. The
 
 So, for this reason, the migration process can be summarised in the following steps:
 
-* Backup node files
-* Shut down SSV Node on the current machine
-* Setup SSV Node on the new machine using backups
-* Wait at least one epoch
-* Start SSV Node service on the new machine
+1. Backup node files
+2. Shut down the SSV Node on the current machine
+3. Set up the SSV Node on the new machine using the backups
+4. Wait at least one epoch
+5. Start the SSV Node service on the new machine
 
 :::warning
 Please note: if you are also running a DKG operator node, you may have to [follow the DKG operator migration guide](./dkg-operator-migration) if it is running on the same machine as the SSV node, or if it is running on a different machine but you need to decommission that machine as well.
 :::
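The "wait at least one epoch" step can be made concrete: on Ethereum, an epoch is 32 slots of 12 seconds, i.e. 384 seconds. A sketch of that wait, as it might appear in a migration script:

```shell
#!/bin/sh
# Sketch: compute one epoch's duration (Ethereum: 32 slots x 12 s = 384 s)
# and wait it out between stopping the old node and starting the new one.
SLOTS_PER_EPOCH=32
SECONDS_PER_SLOT=12
EPOCH_SECONDS=$((SLOTS_PER_EPOCH * SECONDS_PER_SLOT))
echo "Waiting ${EPOCH_SECONDS}s (one epoch) before starting the new node"
# sleep "$EPOCH_SECONDS"    # uncomment in a real migration script
```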
 
-### Node backup
+## Node backup
 
-If you have followed [the dedicated Node setup guide](../installation.md), you most likely have (at least) these files in the folder with your node configuration:
+### SSV Stack setup
+
+If you have followed the [automatic node setup with SSV Stack](../node-setup), your files should be in the `/ssv-stack/ssv-node-data` directory.
+
+### Manual Node setup
+
+If you have followed [the Manual Node setup guide](../node-setup/manual-setup), you most likely have (at least) these files in the folder with your node configuration:
 
 ```
 ⇒ tree

@@ -64,7 +70,7 @@ The configuration file (`config.yaml` in the code snippet above), is necessary f
 
 Operator keys are, essentially, the authentication method used to identify an SSV node and link it to an operator ID. As a consequence, whenever a node is moved to a different machine, they **absolutely must** be preserved and copied from the existing setup to the new one.
 
-The files in question are `encrypted_private_key.json` and `password` in the snippet above and if you have followed [the Node setup guide](../installation.md), the filenames should be the same for you.
+The files in question are `encrypted_private_key.json` and `password` in the snippet above; if you have followed [the Manual Node setup guide](../node-setup/manual-setup), the filenames should be the same for you.
 
 #### Node database

docs/operators/operator-node/maintenance/troubleshooting.md

Lines changed: 28 additions & 23 deletions

@@ -12,7 +12,10 @@ import TabItem from '@theme/TabItem';
 
 In order to troubleshoot any issues with the SSV Node, the `/health` endpoint is a good place to start.
 
-First and foremost, the `SSV_API` port environment variable, or configuration parameter must be set. For that, refer to the [Node Configuration Reference page](../node-configuration-reference.md).
+To use this endpoint, you first need to configure and open a port:
+- If you are using a `.yaml` file to configure your SSV node, add `SSVAPIPort: 16000` (or any other port) at the end of the file and restart the node to apply the change.
+- If you are using a `.env` file to configure the node, set the `SSV_API` environment variable instead.
+- Then make sure port `16000` (or whichever port you chose) is open on your SSV node/container.
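For illustration, the `.yaml` option would look like the fragment below; `16000` is just an example port, and the rest of your `config.yaml` stays unchanged (a sketch, not a full configuration file):

```yaml
# Appended at the end of config.yaml; restart the SSV node to apply.
SSVAPIPort: 16000
```

With the `.env` approach, the equivalent is a line such as `SSV_API=16000`.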
 
 Assuming that the SSV node is running on the local machine, and that the `SSV_API` port is set to `16000`, the health check endpoint can be reached using the `curl` command, as shown below:

@@ -29,9 +32,9 @@ This request will provide a JSON response, here is an example of a response from
 "execution_node": "good",
 "event_syncer": "good",
 "advanced": {
-"peers": 89,
-"inbound_conns": 67,
-"outbound_conns": 22,
+"peers": 19,
+"inbound_conns": 7,
+"outbound_conns": 17,
 "p2p_listen_addresses": [
 "tcp://<X.Y.W.Z>:13001",
 "udp://<X.Y.W.Z>:12001"

@@ -43,7 +46,7 @@ This request will provide a JSON response, here is an example of a response from
 
 This "self-diagnose" report of the node can be useful to make sure that some essential indicators have the correct values:
 
 * `p2p_listen_addresses` should show the correct public IP & port, and the TCP port should be open when checking this IP with a port checker (the addresses have been anonymised for the purpose of this page)
-* `peers` should be at least 60 for operators with more than 100 validators
+* `peers` should be at least 15 for operators with more than 100 validators
 * `inbound_conns` should be at least 20% of the peers (though not an exact number, this is a good indication of healthy connections from the node)
 
 Below, an example of the same report, from a node in a bad state:

@@ -56,7 +59,7 @@ Below, an example of the same report, from a node in bad state:
 "event_syncer": "good",
 "advanced": {
 "peers": 5,
-"inbound_conns": 1,
+"inbound_conns": 0,
 "outbound_conns": 4,
 "p2p_listen_addresses": [
 "tcp://<X.Y.W.Z>:13004",

@@ -68,6 +71,8 @@ Below, an example of the same report, from a node in bad state:
 
 ## SSV-Pulse benchmarking tool
 
+Before using this tool, make sure the [SSV Node Health Endpoint](#ssv-node-health-endpoint) is configured and open.
+
 Our team developed a tool to ease your troubleshooting process, as it analyzes the SSV Node, Consensus Node, and Execution Node at the same time. You can find more details on the [ssv-pulse GitHub page](https://github.com/ssvlabs/ssv-pulse).
 
 To use this tool you can use Docker Compose or the docker command below:

@@ -83,14 +88,14 @@ ssv-pulse:
 image: ghcr.io/ssvlabs/ssv-pulse:latest
 command:
 - 'benchmark'
-- '--consensus-addr=<YOUR_ADDRESS_HERE>' # Change to your Consensus Node's address, e.g. http://lighthouse:5052
-- '--execution-addr=<YOUR_ADDRESS_HERE>' # Change to your Execution Node's address, e.g. http://geth:8545
-- '--ssv-addr=http://ssv_node:16000' #Or Change to your SSV Node's address with SSVAPIPort
+- '--consensus-addr=<YOUR_ADDRESS_HERE>' # Change to your Consensus Node's address, e.g. http://lighthouse:5052
+- '--execution-addr=<YOUR_ADDRESS_HERE>' # Change to your Execution Node's address, e.g. http://geth:8545
+- '--ssv-addr=<YOUR_ADDRESS_HERE>' # Change to your SSV Node's address, e.g. http://ssv_node:16000
 - '--duration=60m'
 # - '--network=holesky' # Add this if you run a Holesky Node
 # - '--platform linux/arm64' # Add this if you run on an arm64 machine
 networks:
-- ssv
+- local-docker # Make sure the network name matches yours; check with: docker network ls
 pull_policy: always
 ```
@@ -284,7 +289,7 @@ Then pull the latest image from SSV:
 docker pull ssvlabs/ssv-node:latest
 ```
 
-And finally... [run the creation command again](../installation.md#start-the-node) to create a new Docker container with the latest SSV image.
+And finally, [run the creation command again](../node-setup/manual-setup#start-the-node) to create a new Docker container with the latest SSV image.
 
 </details>
 

@@ -358,7 +363,7 @@ This section is a collection of common warnings, error messages, statuses and ot
 FATAL failed to create beacon go-client {"error": "failed to create http client: failed to confirm node connection: failed to fetch genesis: failed to request genesis: failed to call GET endpoint: Get \"http://5.104.175.133:5057/eth/v1/beacon/genesis\": context deadline exceeded", "errorVerbose":…………….\nfailed to create http client", "address": "http://5.104.175.133:5057"}
 ```
 
-This is likely due to issues with the Beacon layer Node. Verify that `BeaconNodeAddr` has the correct address and port in [`config.yaml` configuration file](../installation).
+This is likely due to issues with the Beacon layer Node. Verify that `BeaconNodeAddr` has the correct address and port in the [`config.yaml` configuration file](../node-setup/manual-setup#create-configuration-file).
 
 ***
 

@@ -368,7 +373,7 @@ This is likely due to issues with the Beacon layer Node. Verify that `BeaconNode
 FATAL could not connect to execution client {"error": "failed to connect to execution client: dial tcp 5.104.175.133:8541: i/o timeout"}
 ```
 
-This is likely due to issues with the Execution layer Node. Verify that `ETH1Addr` has the correct address and port in [`config.yaml` configuration file](../installation).
+This is likely due to issues with the Execution layer Node. Verify that `ETH1Addr` has the correct address and port in the [`config.yaml` configuration file](../node-setup/manual-setup#create-configuration-file).
 
 Finally, make sure that your ETH1 endpoint is running using WebSocket. This is required in order to stream events from the network contracts.
 

@@ -380,7 +385,7 @@ Finally, make sure that your ETH1 endpoint is running using Websocket. This is r
 FATAL could not setup operator private key {"error": "Operator private key is not matching the one encrypted the storage", "errorVerbose": ...{
 ```
 
-Verify that the Operator Private Key is correctly set in [`config.yaml` configuration file](../installation). In particular, if using unencrypted (raw) keys, that the **private (secret) key** was copied in the configuration file and that it contains all characters (sometimes it contains a `=` character that can easily be left out).
+Verify that the Operator Private Key is correctly set in the [`config.yaml` configuration file](../node-setup/manual-setup#create-configuration-file). In particular, if using unencrypted (raw) keys, verify that the **private (secret) key** was copied into the configuration file and that it contains all characters (it sometimes contains a `=` character that can easily be left out).
 
 If the node has been stopped and restarted, verify that the same configuration has been applied, that the private key has not been changed, and that the `db.Path` configuration points to the same directory as before.
 

@@ -392,7 +397,7 @@ If the node has been stopped and restart, verify that the same configuration has
 FATAL could not setup network {"error": "network not supported: jatov2"}
 ```
 
-In the example above, the `Network` in [`config.yaml` configuration file](../installation) was wrongly set to `jatov2` instead of `jato-v2`, so be sure to look for thinks like spelling mistakes.
+In the example above, the `Network` in the [`config.yaml` configuration file](../node-setup/manual-setup#create-configuration-file) was wrongly set to `jatov2` instead of `jato-v2`, so be sure to look for things like spelling mistakes.
 
 ***
 

@@ -403,7 +408,7 @@ could not create loggerlogging.SetGlobalLogger: unrecognized level: "infor"
 make: *** [Makefile:97: start-node] Error 1
 ```
 
-In the example above, the `LogLevel` variable in [`config.yaml` configuration file](../installation) was wrongly set to `infor` instead of `info`, so be sure to look for thinks like spelling mistakes.
+In the example above, the `LogLevel` variable in the [`config.yaml` configuration file](../node-setup/manual-setup#create-configuration-file) was wrongly set to `infor` instead of `info`, so be sure to look for things like spelling mistakes.
 
 ***
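The errors above all trace back to entries in `config.yaml`. As a hedged sketch of where those keys live (the exact layout can vary between node versions, and the addresses are placeholders, so treat this as illustrative, not authoritative):

```yaml
# Illustrative config.yaml fragment - adjust every value to your own setup.
global:
  LogLevel: info              # "infor" fails with: unrecognized level
db:
  Path: ./data/db             # keep stable across restarts to avoid a slow resync
ssv:
  Network: mainnet            # a typo such as "jatov2" fails with: network not supported
eth2:
  BeaconNodeAddr: http://localhost:5052
eth1:
  ETH1Addr: ws://localhost:8546   # must be a WebSocket endpoint to stream contract events
```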
 

@@ -423,35 +428,35 @@ This error could be caused by using multiple SSV nodes within one Nimbus setup.
 ERROR P2PNetwork unable to create external multiaddress {"error": "invalid ip address provided: ...
 ```
 
-This error signalizes the node could not figure the public IP address of your node on a startup. You need to provide your SSV Node's address in `p2p: HostAddress:` variable in [your `config.yaml` file.](https://docs.ssv.network/operator-user-guides/operator-node/installation#peer-to-peer-ports-configuration-and-firewall)
+This error signals that the node could not determine its public IP address on startup. You need to provide your SSV Node's address in the `p2p: HostAddress:` variable in [your `config.yaml` file](../node-setup/manual-setup#peer-to-peer-ports-configuration-and-firewall).
 
 ***
 
 ### Node Metrics not showing up in Prometheus/Grafana
 
-Please verify that the `MetricsAPIPort` variable is correctly set in [`config.yaml` configuration file](../installation).
+Please verify that the `MetricsAPIPort` variable is correctly set in the [`config.yaml` configuration file](../node-setup/manual-setup#create-configuration-file).
 
-For a more in-depth guide on how to set up Node monitoring, refer to [the dedicated page in this section](../monitoring/monitoring.md).
+For a more in-depth guide on how to set up Node monitoring, refer to [the dedicated page in this section](../monitoring).
 
 ***
 
 ### Node does not generate a log file
 
-Please verify that the `LogFilePath` variable is correctly set in [`config.yaml` configuration file](../installation). Be sure to look for thinks like spelling mistakes.
+Please verify that the `LogFilePath` variable is correctly set in the [`config.yaml` configuration file](../node-setup/manual-setup#create-configuration-file). Be sure to look for things like spelling mistakes.
 
 ***
 
 ### Node takes a long time to become active
 
-Please verify that the `Path` under the `db` section is correctly set in [`config.yaml` configuration file](../installation). Be sure to look for thinks like spelling mistakes.
+Please verify that the `Path` under the `db` section is correctly set in the [`config.yaml` configuration file](../node-setup/manual-setup#create-configuration-file). Be sure to look for things like spelling mistakes.
 
 If the Node was working correctly and became inactive after a configuration change, make sure that `Path` wasn't accidentally changed. A changed path causes the database to be reconstructed and leads to a slower startup.
 
 ***
 
 ### `"port 13000 already running"` message
 
-This could happen if you run both consensus node and SSV node on the same machine - please make sure to change your SSV node port to any other port. Refer to [the p2p section of the installation guide for details](../installation).
+This can happen if you run both a consensus node and an SSV node on the same machine. Change your SSV node's p2p port to another free port; refer to [the p2p section of the node setup guide for details](../node-setup/manual-setup#create-configuration-file).
 
 After updating your port, please restart the SSV node and confirm the error does not appear.
 

@@ -480,7 +485,7 @@ Steps to confirm you use the same key:
 
 1. Find the operator key that you have registered to the network in the [ssv explorer](https://explorer.ssv.network/).
 2. Find the operator public key you have generated in your node during setup.
-3. Compare between the keys - if they do not match you must update your private key in the node config.yaml file, according to the key generated during your node installation.
+3. Compare the keys: if they do not match, you must update your private key in the node `config.yaml` file, according to the key generated during your node setup.
 
 :::info
 Example log output showing the public key:
