diff --git a/docs/admin.md b/docs/admin.md index b888de3d8..506acc071 100644 --- a/docs/admin.md +++ b/docs/admin.md @@ -59,11 +59,57 @@ $ warnet auth alice-wargames-red-team-kubeconfig ### Set up a network for your users Before letting the users into your cluster, make sure to create a network of tanks for them to view. - ```shell $ warnet deploy networks/mynet --to-all-users ``` Observe that the *wargames-red-team* namespace now has tanks in it. -**TODO**: What's the logging approach here? +### User roles in `namespace-defaults.yaml` + +The `namespace-defaults.yaml` file controls what permissions are granted to users within each wargame namespace. The `roles` list under each user entry maps to Kubernetes RBAC roles created by the namespaces Helm chart: + +```yaml +users: + - name: warnet-user + roles: + - pod-viewer # read pod logs and status + - pod-manager # exec into pods and manage lifecycle + - ingress-viewer # view ingress resources + - ingress-controller-viewer # view ingress controller resources +``` + +| Role | Grants | +|------|--------| +| `pod-viewer` | Read pod logs, status, and descriptions | +| `pod-manager` | Exec into pods, port-forward, and manage pod lifecycle | +| `ingress-viewer` | View ingress resources in the namespace | +| `ingress-controller-viewer` | View ingress controller resources | + +## Managing namespaces + +List all active wargame namespaces (those with the `wargames-` prefix): + +```shell +$ warnet admin namespaces list +``` + +Destroy a specific namespace or all wargame namespaces: + +```shell +# Destroy a single namespace +$ warnet admin namespaces destroy wargames-red-team + +# Destroy all wargames- prefixed namespaces +$ warnet admin namespaces destroy --all +``` + +## Reverting authentication + +To revert to the previous kubeconfig context after switching to a user context with `warnet auth`: + +```shell +$ warnet auth --revert +``` + +This restores the kubeconfig that was in place before the most recent `warnet auth` call. diff --git a/docs/circuit-breaker.md b/docs/circuit-breaker.md index d6124c07d..a7e093e8b 100644 --- a/docs/circuit-breaker.md +++ b/docs/circuit-breaker.md @@ -39,8 +39,11 @@ nodes: ### Configuration Options -- `enabled`: Set to `true` to enable Circuit Breaker for the node -- `httpPort`: Override the default HTTP port (9235) for the web UI (optional) +| Option | Description | +|--------|-------------| +| `enabled` | Set to `true` to enable Circuit Breaker for the node | +| `httpPort` | Override the default HTTP port (`9235`) for the web UI (optional) | +| `image` | Override the Circuit Breaker Docker image (e.g. `"bitcoindevproject/circuitbreaker:v0.5.0"`) | ### Complete Example diff --git a/docs/config.md b/docs/config.md index d87363766..65ac3bb75 100644 --- a/docs/config.md +++ b/docs/config.md @@ -59,3 +59,171 @@ graph TD ``` Users should only concern themselves therefore with setting configuration in the `/[network|node-defaults].yaml` files. + +## Network file reference + +The top-level keys recognised in `network.yaml` are: + +| Key | Description | +|-----|-------------| +| `nodes:` | List of node definitions (see below) | +| `caddy:` | `enabled: true` to deploy the Caddy reverse-proxy dashboard | +| `fork_observer:` | `enabled: true` to deploy Fork Observer | +| `services:` | Extra services to register on the Caddy dashboard (see below) | +| `plugins:` | Plugin hooks (`preDeploy`, `postDeploy`, `preNode`, `postNode`, `preNetwork`, `postNetwork`) | +| `warnet:` | Deployment label/identifier string (e.g. 
`"my_network"`) | + +### `services:` — extra dashboard entries + +Any additional web services running inside the cluster (e.g. a Lightning-network visualiser) can be surfaced on the Caddy dashboard alongside the built-in Grafana and Fork Observer entries: + +```yaml +services: + - title: LN Visualizer Web UI + path: /lnvisualizer/ + host: lnvisualizer.default + port: 80 +``` + +Each entry supports the following fields: + +| Field | Description | +|-------|-------------| +| `title` | Display name shown on the dashboard landing page | +| `path` | URL path prefix that Caddy will proxy to this service | +| `host` | Kubernetes service hostname (use the `.default` suffix for cluster-internal hostnames) | +| `port` | Port the service listens on | + +## Node configuration reference + +Each entry in the `nodes:` list is a Bitcoin Core tank. To add a Lightning node to a tank, two sibling keys work together: `ln:` enables the implementation, and `lnd:` or `cln:` holds its configuration. + +### Adding a Lightning node + +Enable LND or CLN with the `ln:` key, then configure it with a matching sibling key at the same level: + +```yaml +nodes: + - name: tank-0000 + ln: + lnd: true # enable LND — use cln: true for Core Lightning instead + lnd: # LND configuration (sibling of ln:, not nested inside it) + config: | + color=#3399FF + channels: + - id: + block: 500 + index: 1 + target: tank-0001-ln + capacity: 1000000 +``` + +The `ln:` key is the on/off switch. The `lnd:` (or `cln:`) key is the configuration object. They are always at the same indentation level inside the node entry — `lnd:` is **not** nested inside `ln:`. + +Only one implementation may be active per node: + +| To enable | Set | Then configure with | +|-----------|-----|---------------------| +| LND | `ln.lnd: true` | `lnd:` sibling key | +| Core Lightning | `ln.cln: true` | `cln:` sibling key | + +See [LN Options](ln-options.md) for the full reference of everything that goes under `lnd:` and `cln:`. + +--- + +The remaining keys in this section apply to the Bitcoin Core container itself. + +### `global:` — chain and RPC password shorthand + +Sets `chain` and `rpcpassword` at the node level. These values are propagated into the Helm chart's `global` sub-object, which is also shared with LND sub-charts: + +```yaml +nodes: + - name: tank-0000 + global: + chain: signet + rpcpassword: mysecretpassword +``` + +Without `global.chain`, the default is `regtest`. + +### `resources:` — Kubernetes resource limits + +Standard Kubernetes [resource requests and limits](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/) for the Bitcoin Core container: + +```yaml +nodes: + - name: tank-0000 + resources: + limits: + cpu: 4000m + memory: 1000Mi + requests: + cpu: 100m + memory: 200Mi +``` + +### `startupProbe:` — startup probe override + +Override the default Kubernetes startup probe for a node. Useful when a node requires custom initialisation before it is considered ready (e.g. importing a wallet or descriptor on first boot): + +```yaml +nodes: + - name: miner + startupProbe: + exec: + command: + - /bin/sh + - -c + - bitcoin-cli createwallet miner + failureThreshold: 10 + periodSeconds: 30 + successThreshold: 1 + timeoutSeconds: 60 +``` + +### `restartPolicy:` — pod restart policy + +Sets the Kubernetes [restart policy](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) for the node pod. Defaults to `Never` for Bitcoin Core nodes and `Always` for LND nodes. 
+
+```yaml
+nodes:
+  - name: tank-0000
+    restartPolicy: Never
+```
+
+### `collectLogs:` and `metricsExport:`
+
+See [Logging and Monitoring](logging_monitoring.md) for details on enabling log collection and Prometheus metrics export per node.
+
+### `extraContainers:` — sidecar containers
+
+Add arbitrary sidecar containers to the Bitcoin Core pod. This is the same mechanism used to attach the `bitcoin-exporter` Prometheus sidecar. Each entry is a full Kubernetes container spec:
+
+```yaml
+nodes:
+  - name: tank-0000
+    extraContainers:
+      - name: my-sidecar
+        image: myrepo/my-sidecar:latest
+        ports:
+          - containerPort: 8080
+            name: web
+            protocol: TCP
+```
+
+## Lightning node configuration reference
+
+For the full reference of all `lnd:` and `cln:` configuration keys — including `channels`, `macaroonRootKey`, `adminMacaroon`, `resources`, `restartPolicy`, `persistence`, `metricsExport`, `extraContainers`, `circuitbreaker`, and more — see [LN Options](ln-options.md).
+
+## `node-defaults.yaml` reference
+
+The `node-defaults.yaml` file accepts the same node-level keys as `network.yaml` and applies them as defaults to every node. It additionally supports:
+
+### `warnet:` — deployment label
+
+A string identifier for the deployment, used as a label on Kubernetes resources:
+
+```yaml
+warnet: my_signet_network
+```
diff --git a/docs/connecting-local-nodes.md b/docs/connecting-local-nodes.md
index 89bade682..0d9607253 100644
--- a/docs/connecting-local-nodes.md
+++ b/docs/connecting-local-nodes.md
@@ -70,9 +70,9 @@ telepresence intercept local-bitcoind --port 18444 -- bitcoind --regtest --datad
 ### Connect to local bitcoind from cluster
 
 ```shell
-warnet bitcoin rpc 0 addnode "local-bitcoind:18444" "onetry"
+warnet bitcoin rpc tank-0000 addnode "local-bitcoind:18444" "onetry"
 # Check that the local node was added
-warnet bitcoin rpc 0 getpeerinfo
+warnet bitcoin rpc tank-0000 getpeerinfo
 ```
 
 ### Disconnect and remove Telepresence
@@ -81,7 +81,7 @@ warnet bitcoin rpc 0 getpeerinfo
 # Disconnect from the cluster
 telepresence quit -s
 # Remove Telepresence from the cluster
-telepresent helm uninstall
+telepresence helm uninstall
 # Remove Telepresence from your computer
 sudo rm /usr/local/bin/telepresence
 ```
diff --git a/docs/creating-a-network.md b/docs/creating-a-network.md
new file mode 100644
index 000000000..8896829b3
--- /dev/null
+++ b/docs/creating-a-network.md
@@ -0,0 +1,261 @@
+# Creating a Network
+
+A Warnet network is defined by two YAML files that live together in a directory under `networks/`:
+
+- **`network.yaml`** — the node list, topology, and top-level services
+- **`node-defaults.yaml`** — default values applied to every node
+
+Once these files exist you deploy the network with:
+
+```sh
+warnet deploy networks/<network_name>
+```
+
+There are three ways to produce them. All three result in the same YAML files — the choice is about how much control and scale you need.
+
+For the full list of options available in those files, see:
+- [Tank Options](tank-options.md) — all Bitcoin Core node keys
+- [LN Options](ln-options.md) — LND and CLN configuration
+- [Plugin Options](plugins.md) — hooks, built-in plugins, writing your own
+
+---
+
+## Method 1: `warnet create` (interactive wizard)
+
+The easiest starting point. Run it from inside an initialised Warnet project:
+
+```sh
+warnet init    # creates the project directory structure
+warnet create  # launches the interactive wizard
+```
+
+The wizard walks you through:
+
+1. **Network name** — becomes the directory `networks/<network_name>/`
+2. **Node groups** — add one or more groups, each specifying:
+   - Bitcoin Core version (choose from the supported list or provide a custom `repo/image:tag`)
+   - Number of nodes in the group
+   - Number of connections per node
+3. **Fork Observer** — whether to enable it and how often it polls (seconds)
+4. **Grafana logging** — whether to enable log and metrics collection
+
+The wizard generates a round-robin + random connection topology and writes `network.yaml` and `node-defaults.yaml` into `networks/<network_name>/`. It then prints the `warnet deploy` command to run.
+
+**Best for:** small to medium Bitcoin-only networks with standard node versions.
+
+**Limitations:**
+- Bitcoin Core only — no Lightning, no custom images beyond a single tag per group
+- No per-node overrides (resources, custom probes, sidecar containers, etc.)
+- Topology is limited to the built-in round-robin + random model
+
+---
+
+## Method 2: Hand-written YAML
+
+For full control over every node, write `network.yaml` and `node-defaults.yaml` directly. This is practical for networks up to ~20 nodes. Beyond that, maintaining `addnode` lists and generating unique secrets by hand becomes error-prone.
+
+### Minimal example
+
+```yaml
+# networks/my_net/network.yaml
+nodes:
+  - name: tank-0000
+    image:
+      tag: "27.0"
+    addnode:
+      - tank-0001
+
+  - name: tank-0001
+    image:
+      tag: "25.1"
+    addnode:
+      - tank-0000
+
+fork_observer:
+  enabled: true
+  configQueryInterval: 20
+
+caddy:
+  enabled: true
+```
+
+```yaml
+# networks/my_net/node-defaults.yaml
+chain: regtest
+```
+
+### Topology considerations
+
+Every node listed in `addnode` will establish an outbound connection to that peer on startup. A common pattern is a **ring** (each node connects to the next) plus a few random cross-links:
+
+```yaml
+nodes:
+  - name: tank-0000
+    addnode: [tank-0001, tank-0003]  # ring + random
+  - name: tank-0001
+    addnode: [tank-0002, tank-0000]
+  - name: tank-0002
+    addnode: [tank-0003, tank-0001]
+  - name: tank-0003
+    addnode: [tank-0000, tank-0002]
+```
+
+For all available per-node keys (`global:`, `resources:`, `startupProbe:`, `lnd:`, etc.) see [Tank Options](tank-options.md) and [LN Options](ln-options.md).
+
+**Best for:** small bespoke networks, learning the schema, one-off experiments.
+
+**Not recommended** for networks larger than ~20 nodes — use a script instead.
+
+---
+
+## Method 3: Script-generated YAML
+
+For large or complex networks (many node types, signet key generation, per-node macaroons, varied topologies) the most maintainable approach is to write a Python script that constructs the network data as Python objects and serialises them to YAML.
+
+Both reference contest repos use this pattern in `scripts/fleet.py`. The approach is:
+
+1. Define node types as Python classes with a `to_obj()` method that returns the YAML dict
+2. Orchestrate node creation, connections, and signet key generation in a `Game` (or similar) class
+3. Call `yaml.dump()` to write the output files
+
+### Pattern from `battle-of-galen-erso/scripts/fleet.py`
+
+This script generates multiple network sizes (signet_large with 100+ nodes across 13 teams, plus smaller regtest variants) from the same class hierarchy:
+
+```python
+class Node:
+    def __init__(self, game, name):
+        self.game = game  # back-reference used by to_obj() below
+        self.name = name
+        self.rpcpassword = secrets.token_hex(16)  # unique per node
+        self.addnode = []
+
+    def to_obj(self):
+        return {
+            "name": self.name,
+            "image": self.bitcoin_image,  # assigned per node type in the full script
+            "global": {
+                "rpcpassword": self.rpcpassword,
+                "chain": self.game.chain,
+            },
+            "addnode": self.addnode,
+            "config": f"maxconnections=1000\nuacomment={self.name}\n",
+        }
+
+class VulnNode(Node):
+    """A target node: adds metrics export and resource limits."""
+    def to_obj(self):
+        obj = super().to_obj()
+        obj.update({
+            "collectLogs": True,
+            "metricsExport": True,
+            "metrics": 'blocks=getblockcount() mempool_size=getmempoolinfo()["size"]',
+            "resources": {
+                "limits": {"cpu": "4000m", "memory": "1000Mi"},
+                "requests": {"cpu": "100m", "memory": "200Mi"},
+            },
+        })
+        return obj
+
+class Miner(Node):
+    """Miner node: startup probe initialises the wallet."""
+    def to_obj(self):
+        obj = super().to_obj()
+        obj["startupProbe"] = {
+            "exec": {"command": ["/bin/sh", "-c",
+                f"bitcoin-cli createwallet miner && "
+                f"bitcoin-cli importdescriptors {self.game.desc_string}"]},
+            "failureThreshold": 10,
+            "periodSeconds": 30,
+            "timeoutSeconds": 60,
+        }
+        return obj
+```
+
+Connections are added programmatically to ensure a ring plus random cross-links:
+
+```python
+def add_connections(self):
+    for i, node in enumerate(self.nodes):
+        node.addnode.append(self.nodes[(i + 1) % len(self.nodes)].name)  # ring
+        for _ in range(4):
+            node.addnode.append(random.choice(self.nodes).name)  # random
+```
+
+Signet requires a signing key.
The script generates one with the Bitcoin Core test framework and embeds the `signetchallenge` directly into every node's config: + +```python +def generate_signet(self): + secret = secrets.token_bytes(32) + privkey = ECKey() + privkey.set(secret, True) + pubkey = privkey.get_pubkey().get_bytes() + self.signetchallenge = key_to_p2wpkh_script(pubkey).hex() + # also builds self.desc_string for the miner wallet +``` + +Finally, `write()` serialises everything to the correct directory structure: + +```python +def write(self): + network = { + "nodes": [n.to_obj() for n in self.nodes], + "caddy": {"enabled": True}, + "fork_observer": {"enabled": True, "configQueryInterval": 20}, + "services": [{"title": "Leaderboard", "path": "/leaderboard/", ...}], + "plugins": {...}, + } + # writes battlefields//network.yaml + node-defaults.yaml + self.write_network_yaml_dir("battlefields", network) + # writes armadas//network.yaml (attacker nodes) + self.write_armada(3) + # writes armies//namespaces.yaml + namespace-defaults.yaml + self.write_armies(len(TEAMS)) +``` + +Generating all network sizes is then just a few lines: + +```python +g = Game("signet_large", "signet") +g.add_nodes(len(TEAMS), len(VERSIONS)) +g.add_miner() +g.add_connections() +g.write() +``` + +### Pattern from `wrath-of-nalo/scripts/fleet.py` + +The Lightning Network variant extends this pattern with additional complexity: + +- **Per-node macaroon generation** via `lncli bakemacaroon` — each LND node gets a deterministic `adminMacaroon` and `macaroonRootKey`, enabling pre-wired metric exporters and scenario scripts to authenticate without a wallet unlock step +- **Channel topology** — a channel registry tracks which pairs have open channels and assigns pre-mined transaction slots (`block`/`index` IDs) to avoid collisions +- **Specialised node subclasses** — `SpenderNode`, `RoutingNode`, `RecipientNode`, and `GossipVulnNode` each override `to_obj()` to add the metrics, `extraContainers`, or `restartPolicy` relevant to their role +- **Circuit Breaker variants** — a parallel set of payment-route nodes (`cb-spender`, `cb-router`, `cb-recipient`) are created with `circuitbreaker: enabled: true` to test HTLC mitigation + +```python +class SpenderNode(MetricsNode): + def to_obj(self): + obj = super().to_obj() + obj["lnd"]["extraContainers"][0]["env"][0]["value"] += "failed_payments=FAILED_PAYMENTS " + return obj + +def add_payment_routes(self, n): + for i in range(n): + spender = SpenderNode(self, f"{TEAMS[i]}-spender") + router = RoutingNode(self, f"{TEAMS[i]}-router") + recipient = RecipientNode(self, f"{TEAMS[i]}-recipient") + self.nodes += [spender, router, recipient] + self.add_channel(spender, router, int(2e8)) + self.add_channel(router, recipient, int(2e8)) +``` + +### When to write a fleet script + +Use a script when any of the following are true: + +- The network has more than ~20 nodes +- Nodes need unique per-node secrets (rpcpassword, LND macaroons, signet keys) +- Multiple network sizes or variants share the same node definitions +- The topology (connections, channels) follows rules that are easier to express in code than YAML +- The network will need to be regenerated with different parameters in the future + +**Best for:** large test networks, wargame infrastructure, reproducible multi-variant deployments. 
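+
+### Minimal end-to-end sketch
+
+To make the shape of a fleet script concrete, here is a compressed sketch of the pattern described above. It is illustrative only: the `build()` helper and the `networks/generated` output path are hypothetical names, and it assumes PyYAML is available.
+
+```python
+import random
+import secrets
+from pathlib import Path
+
+import yaml  # PyYAML
+
+
+class Node:
+    """Minimal tank definition; to_obj() returns one network.yaml entry."""
+
+    def __init__(self, name, version):
+        self.name = name
+        self.version = version
+        self.rpcpassword = secrets.token_hex(16)  # unique per node
+        self.addnode = []
+
+    def to_obj(self):
+        return {
+            "name": self.name,
+            "image": {"tag": self.version},
+            "global": {"chain": "regtest", "rpcpassword": self.rpcpassword},
+            "addnode": self.addnode,
+        }
+
+
+def build(n, out_dir="networks/generated"):
+    nodes = [Node(f"tank-{i:04}", "27.0") for i in range(n)]
+    for i, node in enumerate(nodes):
+        node.addnode.append(nodes[(i + 1) % n].name)    # ring
+        node.addnode.append(random.choice(nodes).name)  # one random cross-link
+    network = {
+        "nodes": [node.to_obj() for node in nodes],
+        "caddy": {"enabled": True},
+        "fork_observer": {"enabled": True, "configQueryInterval": 20},
+    }
+    out = Path(out_dir)
+    out.mkdir(parents=True, exist_ok=True)
+    (out / "network.yaml").write_text(yaml.dump(network, sort_keys=False))
+    (out / "node-defaults.yaml").write_text(yaml.dump({"chain": "regtest"}))
+
+
+if __name__ == "__main__":
+    build(8)
+```
+
+The result deploys like any hand-written network: `warnet deploy networks/generated`.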
diff --git a/docs/ln-options.md b/docs/ln-options.md new file mode 100644 index 000000000..57df05118 --- /dev/null +++ b/docs/ln-options.md @@ -0,0 +1,297 @@ +# LN Options + +Lightning Network nodes are attached to Bitcoin Core tanks via the `ln:` key. Configuration specific to each implementation lives under a matching top-level key (`lnd:` or `cln:`). + +Two implementations are available: + +| Key | Implementation | Default image | +|-----|---------------|---------------| +| `ln.lnd: true` | [LND](https://github.com/lightningnetwork/lnd) by Lightning Labs | `lightninglabs/lnd:v0.20.1-beta` | +| `ln.cln: true` | [Core Lightning](https://github.com/ElementsProject/lightning) by Blockstream | `elementsproject/lightningd:v25.02` | + +Only one implementation may be enabled per node. + +```yaml +nodes: + - name: tank-0000 + ln: + lnd: true # enable LND + lnd: + config: "color=#3399FF" +``` + +The LN container runs inside the same pod as Bitcoin Core and connects to it via localhost. The chain and RPC password are shared from `global.chain` and `global.rpcpassword` on the parent node — see [Tank Options](tank-options.md). + +--- + +## LND options (`lnd:`) + +### `image` + +Docker image for the LND container. Override to pin a specific version: + +```yaml +lnd: + image: + repository: lightninglabs/lnd # default + tag: "v0.18.2-beta" + pullPolicy: IfNotPresent +``` + +--- + +### `config` + +Additional lines appended to `lnd.conf`. Use this for per-node LND settings: + +```yaml +lnd: + config: | + color=#e6194b + bitcoin.timelockdelta=33 + ignore-historical-gossip-filters=true +``` + +Several options are managed by the chart (`rpclisten`, `bitcoind.rpcuser`, ZMQ endpoints, etc.) and should not be set here. + +--- + +### `channels` + +List of channels to open after the network is initialised. **You do not need to run anything manually** — `warnet deploy` detects channels in the network definition and automatically runs the `ln_init` scenario to set everything up. + +```yaml +lnd: + channels: + - id: + block: 500 # block height of the funding tx (must be unique across nodes) + index: 1 # output index within that block (1-based, max 200 per block) + target: tank-0001-ln # pod name of the remote LND node (with -ln suffix) + capacity: 1000000 # channel capacity in satoshis + push_amt: 500000 # satoshis pushed to remote side on open (optional) + source_policy: # routing policy for outbound direction (optional) + cltv_expiry_delta: 40 + htlc_minimum_msat: 1000 + fee_base_msat: 1000 + fee_proportional_millionths: 1 + htlc_maximum_msat: 990000000 + target_policy: # routing policy for inbound direction (optional) + cltv_expiry_delta: 40 + htlc_minimum_msat: 1000 + fee_base_msat: 1000 + fee_proportional_millionths: 1 + htlc_maximum_msat: 990000000 +``` + +#### What `ln_init` does automatically + +When `warnet deploy` finds any `lnd.channels` or `cln.channels` entry — in `network.yaml` or `node-defaults.yaml` — it runs `resources/scenarios/ln_init.py` as the final deploy step and streams its logs to the terminal. The full sequence is: + +1. **Wait for L1 p2p** — holds until all Bitcoin Core nodes have established their connections from `addnode`. +2. **Mine to near channel-open height** — mines to block 496 (four blocks before the default `id.block` start of 500), building a usable chain. A node named `miner` is used as the block source; if none exists, the first node in the network is used instead. +3. 
**Fund LN wallets** — constructs a single transaction that sends 10 BTC UTXOs to the on-chain wallet of every node that opens at least one channel. These UTXOs are sized so that the change output always lands at tx output index 1, leaving the channel funding output at index 0 — which is what makes channel IDs deterministic. +4. **Establish LN p2p connections** — connects every channel pair directly, plus builds a ring through all LN nodes so the gossip graph is connected. +5. **Open channels block-by-block** — processes all channels sorted by `id.block`, then `id.index`. For each target block: + - Mines to that height + - Opens all channels assigned to that block in parallel, using decreasing fee rates so transactions land in the block in index order + - Mines the block + - **Asserts determinism**: verifies that `block_txs[id.index] == channel_txid` and the funding output is at index 0; aborts if not +6. **Mine 5 confirmation blocks** — waits for channels to reach the required confirmation depth. +7. **Wait for gossip sync** — polls every LN node until each one reports the full set of channels in its graph. +8. **Apply channel policies** — for any channel that defined `source_policy` or `target_policy`, sends an `UpdateChannelPolicy` to the respective node and waits for the policy to propagate across the network. + +The terminal will stream `ln_init` log output during this process. On large networks it can take several minutes. + +#### `id` constraints + +The `id.block` and `id.index` values directly determine the [short channel ID](https://github.com/lightning/bolts/blob/master/07-routing-gossip.md#definition-of-short_channel_id) (SCID) that the channel will receive. Because SCIDs encode the funding transaction's block height and position, `ln_init` must place each funding transaction at *exactly* the right position in the mined chain. + +Rules that must be followed across the entire network: + +| Constraint | Value | +|------------|-------| +| Minimum `id.block` | `500` | +| Maximum channels per block (`id.index` range) | `200` (indices 1–200) | +| Indices within a block | Must be consecutive starting at `1` with no gaps | +| Global uniqueness | No two channels may share the same `block`/`index` pair | +| Channel capacity | Must be below ~4 BTC so the change output lands at tx index 1, not 0 | + +Violating any of these causes `ln_init` to abort with an assertion error. + +> **Tip:** When generating large networks programmatically, maintain a single global channel counter that increments `index` through 1–200 then increments `block`. See [Creating a Network](creating-a-network.md) for the fleet-script pattern. + +#### Optional fields + +| Field | Default | Notes | +|-------|---------|-------| +| `push_amt` | `0` | Satoshis pushed to the remote side on open; if omitted all funds start on the opener's side | +| `source_policy` | LND default | Routing policy applied to the outbound direction after the channel is confirmed | +| `target_policy` | LND default | Routing policy applied to the inbound direction; sent to the *target* node | + +--- + +### `macaroonRootKey` + +A base64-encoded root key used to derive all LND macaroons. Setting this ensures reproducible macaroons across deployments, which is required when the `adminMacaroon` must be known before the node starts (e.g. for pre-configured metric exporters or scenario scripts). 
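+
+For instance, a suitable root key can be derived from 32 random bytes (a minimal sketch of the `base64.b64encode(os.urandom(32))` approach mentioned below):
+
+```python
+import base64
+import os
+
+# 32 random bytes, base64-encoded; usable as an LND macaroon root key
+print(base64.b64encode(os.urandom(32)).decode())
+```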
+ +```yaml +lnd: + macaroonRootKey: kjeST2GJccEZa0u9/5T3egyJjtZyDZ6UkHp3p1LzslU= +``` + +Generate with `lncli bakemacaroon --root_key=` or by deriving from random bytes with `base64.b64encode(os.urandom(32))`. + +--- + +### `adminMacaroon` + +A hex-encoded pre-baked admin macaroon. Use together with `macaroonRootKey` to make the node's admin macaroon known before deployment — useful for automating authentication in scenarios and sidecars. + +```yaml +lnd: + adminMacaroon: 0201036c6e6402f801... +``` + +--- + +### `resources` + +Kubernetes resource requests and limits for the LND container: + +```yaml +lnd: + resources: + limits: + cpu: 2000m + memory: 500Mi + requests: + cpu: 100m + memory: 200Mi +``` + +--- + +### `restartPolicy` + +Restart policy for the LND pod. Default is `Always`. Set to `Never` for vulnerable target nodes where you want the node to stay down after a crash: + +```yaml +lnd: + restartPolicy: Never +``` + +--- + +### `persistence` + +Creates a PVC for the LND data directory (`/root/.lnd/`). PVC names follow the pattern `-ln.-lnd-data`. + +```yaml +lnd: + persistence: + enabled: true + size: 10Gi + storageClass: "" + accessMode: ReadWriteOncePod + existingClaim: "" +``` + +--- + +### `metricsExport` + +When `true`, registers this node with the Prometheus/Grafana monitoring stack. Requires `extraContainers` to include an `lnd-exporter` sidecar that actually scrapes and exposes the metrics: + +```yaml +lnd: + metricsExport: true + prometheusMetricsPort: 9332 + metricsScrapeInterval: 60s +``` + +--- + +### `metricsScrapeInterval` + +How often Prometheus scrapes the LND metrics exporter. Default is `15s`. + +```yaml +lnd: + metricsScrapeInterval: 60s +``` + +--- + +### `prometheusMetricsPort` + +Port the LND metrics exporter sidecar listens on. Default is `9332`. + +```yaml +lnd: + prometheusMetricsPort: 9332 +``` + +--- + +### `extraContainers` + +List of additional sidecar containers to add to the LND pod. The standard use is to attach the `lnd-exporter` Prometheus sidecar: + +```yaml +lnd: + extraContainers: + - name: lnd-exporter + image: bitcoindevproject/lnd-exporter:0.3.0 + imagePullPolicy: IfNotPresent + ports: + - containerPort: 9332 + name: prom-metrics + protocol: TCP + env: + - name: METRICS + value: > + lnd_block_height=parse("/v1/getinfo","block_height") + pending_htlcs=PENDING_HTLCS + failed_payments=FAILED_PAYMENTS + volumeMounts: + - mountPath: /macaroon.hex + name: config + subPath: MACAROON_HEX +``` + +The `lnd-exporter` image reads the `METRICS` environment variable as a space-separated list of `label=expression` pairs. Built-in aggregated metrics (`PENDING_HTLCS`, `FAILED_PAYMENTS`) are provided by the exporter. REST API values are extracted with `parse("/v1/endpoint","json_key")`. + +See the [lnd-exporter documentation](https://github.com/bitcoin-dev-project/lnd-exporter/tree/main?tab=readme-ov-file#configuration) for the full metric expression syntax. + +--- + +### `circuitbreaker` + +Deploys [Circuit Breaker](https://github.com/lightningequipment/circuitbreaker) as a sidecar alongside LND. Circuit Breaker is a Lightning Network firewall that limits in-flight HTLCs on a per-peer basis. + +```yaml +lnd: + circuitbreaker: + enabled: true + image: bitcoindevproject/circuitbreaker:v0.5.0 # optional, overrides default + httpPort: 9235 # optional, overrides default +``` + +See [Circuit Breaker](circuit-breaker.md) for detailed usage. + +--- + +## CLN options (`cln:`) + +Core Lightning uses the same top-level structure. 
The `cln:` key mirrors `lnd:` for the options it shares: + +| Key | Notes | +|-----|-------| +| `cln.image` | Same structure as `lnd.image` | +| `cln.config` | Lines appended to `config` (CLN config file format) | +| `cln.channels` | Same channel schema as LND | +| `cln.resources` | Same Kubernetes resource spec | +| `cln.persistence` | PVC at `/root/.lightning/`; default size 10Gi | +| `cln.extraContainers` | Same sidecar pattern | + +CLN does not currently support `macaroonRootKey`, `adminMacaroon`, `metricsExport`, `metricsScrapeInterval`, `prometheusMetricsPort`, or `circuitbreaker`. diff --git a/docs/logging_monitoring.md b/docs/logging_monitoring.md index 07c0be2c8..f73257bc6 100644 --- a/docs/logging_monitoring.md +++ b/docs/logging_monitoring.md @@ -118,13 +118,47 @@ mempool_size 0.0 ### Defining lnd metrics to capture -Lightning nodes can also be configured to export metrics to prometheus using `lnd-exporter`. -Example configuration is provided in `test/data/ln/`. Review `node-defauts.yaml` for a typical logging configuration. All default metrics reported to prometheus are prefixed with `lnd_` +Lightning nodes can also be configured to export metrics to Prometheus using `lnd-exporter`. +Example configuration is provided in `test/data/ln/`. Review `node-defaults.yaml` for a typical logging configuration. All default metrics reported to Prometheus are prefixed with `lnd_`. [lnd-exporter configuration reference](https://github.com/bitcoin-dev-project/lnd-exporter/tree/main?tab=readme-ov-file#configuration) -lnd-exporter assumes same macaroon referenced in ln_framework (can be overridden by env variable) -**Note: `test/data/ln` and `test/data/logging` take advantage of **extraContainers** configuration option to add containers to default `lnd/templates/pod`* +The `lnd-exporter` sidecar is added via `lnd.extraContainers` and assumes the same macaroon referenced in `ln_framework` (can be overridden by environment variable). Enable metrics export and configure the scrape interval and port with these `lnd:` keys: + +```yaml +nodes: + - name: tank-0000 + ln: + lnd: true + lnd: + metricsExport: true + metricsScrapeInterval: 60s # how often Prometheus scrapes (default: 15s) + prometheusMetricsPort: 9332 # port the exporter listens on (default: 9332) + extraContainers: + - name: lnd-exporter + image: bitcoindevproject/lnd-exporter:0.3.0 + imagePullPolicy: IfNotPresent + ports: + - containerPort: 9332 + name: prom-metrics + protocol: TCP + env: + - name: METRICS + value: 'lnd_block_height=parse("/v1/getinfo","block_height") pending_htlcs=PENDING_HTLCS' + volumeMounts: + - mountPath: /macaroon.hex + name: config + subPath: MACAROON_HEX +``` + +| Key | Description | +|-----|-------------| +| `lnd.metricsExport` | Set to `true` to enable Prometheus metrics scraping for this LND node | +| `lnd.metricsScrapeInterval` | How often Prometheus scrapes the exporter (e.g. `"60s"`, default `"15s"`) | +| `lnd.prometheusMetricsPort` | Port the `lnd-exporter` sidecar listens on (default `9332`) | +| `lnd.extraContainers` | List of additional sidecar containers to add to the LND pod (full Kubernetes container specs) | + +**Note:** `test/data/ln` and `test/data/logging` use `lnd.extraContainers` to attach the `lnd-exporter` sidecar to the default `lnd/templates/pod`. ### Grafana diff --git a/docs/plugins.md b/docs/plugins.md index bce833864..aed668337 100644 --- a/docs/plugins.md +++ b/docs/plugins.md @@ -1,72 +1,259 @@ -# Plugins +# Plugin Options -Plugins extend Warnet. 
Plugin authors can import commands from Warnet and interact with the kubernetes cluster, and plugin users can run plugins from the command line or from the `network.yaml` file.
+Plugins extend Warnet by running custom code at specific points during `warnet deploy`. They are declared in the `plugins:` section of `network.yaml` and invoked automatically.
-## Activating plugins from `network.yaml`
+---
-You can activate a plugin command by placing it in the `plugins` section at the bottom of each `network.yaml` file like so:
+## Declaring plugins in `network.yaml`
-````yaml
-nodes:
-  <>
+```yaml
+plugins:
+  <hook>:
+    <plugin_name>:
+      entrypoint: "../plugins/<plugin_name>"  # required: path to the plugin directory
+      <key>: <value>                          # any additional plugin-specific config
+```
+
+Warnet runs `<entrypoint>/plugin.py entrypoint '<plugin_config>' '<warnet_context>'` for each declared plugin.
+
+---
+
+## Hooks
+
+Hooks control *when* a plugin runs relative to the deploy sequence. All six hooks run during `warnet deploy`:
+
+| Hook | When it runs |
+|------|-------------|
+| `preDeploy` | Before anything else is deployed |
+| `postDeploy` | After all nodes and the network are deployed |
+| `preNode` | Before each individual node is deployed *(once per node)* |
+| `postNode` | After each individual node is deployed *(once per node)* |
+| `preNetwork` | After logging infrastructure, before nodes are launched |
+| `postNetwork` | After all node deploy threads have completed |
+
+### Per-node hooks and `node_name`
+
+For `preNode` and `postNode`, Warnet passes the current node's name in the context under the key `node_name`. Plugins can read this to act on a specific node. The pod name Warnet produces for a per-node plugin follows the pattern:
+
+```
+<node_name>-<pre|post>-<podName>
+```
+
+For example, with `podName: hello-pod` on node `tank-0000`:
+- `tank-0000-pre-hello-pod`
+- `tank-0000-post-hello-pod`
+
+---
+
+## Writing a plugin
+
+A plugin is a directory containing at minimum a `plugin.py` file. The file must accept the subcommand `entrypoint` with two positional JSON arguments:
+
+```python
+import json, sys
+
+assert sys.argv[1] == "entrypoint"
+plugin_config = json.loads(sys.argv[2])   # keys declared in network.yaml
+warnet_context = json.loads(sys.argv[3])  # hook_value, namespace, annex
+```
+
+`warnet_context` always contains:
+
+| Key | Value |
+|-----|-------|
+| `hook_value` | The hook that fired (`"preDeploy"`, `"postNode"`, etc.) |
+| `namespace` | The Kubernetes namespace being deployed into |
+| `annex.node_name` | *(preNode/postNode only)* Name of the current node |
+
+Plugins that deploy Kubernetes resources typically use Helm:
+
+```python
+import json, sys  # as in the previous snippet
+from warnet.process import run_command
+from pathlib import Path
+
+assert sys.argv[1] == "entrypoint"
+plugin_config = json.loads(sys.argv[2])
+
+command = f"helm upgrade --install my-plugin {Path(__file__).parent / 'charts' / 'my-plugin'}"
+for key, value in plugin_config.items():
+    command += f" --set {key}={value}"
+run_command(command)
+```
+
+Start from the `hello` plugin included in every initialised project:
+
+```sh
+warnet init
+cat plugins/hello/plugin.py
+```
+
+---
+
+## Built-in plugins
+
+The following plugins ship with Warnet in `resources/plugins/`.
+
+### SimLN
+
+[SimLN](https://simln.dev/) generates realistic Lightning Network payment activity between nodes. It runs as a `postDeploy` pod and supports both LND and CLN.
+
+#### Configuration in `network.yaml`
+
+```yaml
+plugins:
+  postDeploy:
+    simln:
+      entrypoint: "../plugins/simln"
+      activity: '[{"source": "tank-0003-ln", "destination": "tank-0005-ln", "interval_secs": 1, "amount_msat": 2000}]'
+```
+
+The `activity` value is a JSON array of payment flows. Each flow specifies:
+
+| Field | Description |
+|-------|-------------|
+| `source` | Pod name of the sending LND/CLN node |
+| `destination` | Pod name of the receiving node |
+| `interval_secs` | Seconds between payment attempts |
+| `amount_msat` | Payment amount in millisatoshis |
+
+SimLN automatically discovers node credentials (macaroons, TLS certs) for every LND and CLN node in the network.
+
+#### CLI subcommands
+
+The SimLN plugin exposes additional commands for interacting with running instances (argument placeholders below are illustrative):
+
+```sh
+# List pod names of all running SimLN instances
+python3 resources/plugins/simln/plugin.py list-pod-names
+
+# Download results from a SimLN pod to the current directory
+python3 resources/plugins/simln/plugin.py download-results <pod_name>
+
+# Get an example activity JSON for the first two LN nodes
+python3 resources/plugins/simln/plugin.py get-example-activity
+
+# Launch a new activity from the command line
+python3 resources/plugins/simln/plugin.py launch-activity '<activity_json>'
+
+# Run a shell command inside a SimLN pod
+python3 resources/plugins/simln/plugin.py sh <pod_name> <command> [args...]
+```
-plugins: # This marks the beginning of the plugin section
-  preDeploy: # This is a hook. This particular hook will call plugins before deploying anything else.
-    hello: # This is the name of the plugin.
-      entrypoint: "../plugins/hello" # Every plugin must specify a path to its entrypoint.
-      podName: "hello-pre-deploy" # Plugins can have their own particular configurations, such as how to name a pod.
-      helloTo: "preDeploy!" # This configuration tells the hello plugin who to say "hello" to.
-````
+Results written by SimLN inside the pod at `/working/results/` can also be retrieved with `kubectl cp`.
-## Many kinds of hooks
-There are many hooks to the Warnet `deploy` command. The example below specifies them:
+#### Custom SimLN image
-````yaml
+To use a custom SimLN build, update `resources/plugins/simln/charts/simln/values.yaml`:
+
+```yaml
+image:
+  repository: "myusername/sim-ln"
+  tag: "myversion"
+```
+
+---
+
+### Tor
+
+The Tor plugin deploys a Tor daemon (`torda`) as a Kubernetes service, enabling Bitcoin nodes to connect over Tor.
+
+#### Configuration in `network.yaml`
+
+```yaml
+plugins:
+  preDeploy:
+    tor:
+      entrypoint: "../plugins/tor"
+```
+
+The Tor chart does not accept additional configuration keys beyond `entrypoint`. See `resources/plugins/tor/charts/torda/values.yaml` for defaults.
+
+---
+
+## Example plugins (from reference repos)
+
+The following plugin patterns appear in Warnet-based contest repos.
+
+### Leaderboard (`battle-of-galen-erso`)
+
+Deploys a scoreboard web service as a `postDeploy` plugin, then exposes it on the Caddy dashboard via the `services:` key:
+
+```yaml
+plugins:
+  postDeploy:
+    leaderboard:
+      entrypoint: "../../plugins/leaderboard"
+      admin_key: "secretkey123"
+      next_public_asset_prefix: "/leaderboard"
+
+services:
+  - title: Leaderboard
+    path: /leaderboard/
+    host: leaderboard.default
+    port: 3000
+```
+
+### LnVisualizer (`wrath-of-nalo`)
+
+Deploys a Kubernetes Service that routes traffic to LnVisualizer sidecar containers running inside the miner node's LND pod.
Activated as `preDeploy` so the Service exists before the dashboard is configured: + +```yaml +plugins: + preDeploy: + lnvisualizer: + entrypoint: "../../plugins/lnvisualizer" + instance: miner # node whose LND pod hosts the sidecars + name: lnd-ln # Kubernetes Service name suffix + +services: + - title: LN Visualizer Web UI + path: /lnvisualizer/ + host: lnvisualizer.default + port: 80 +``` + +The sidecar containers themselves are added under `lnd.extraContainers` on the `miner` node — see [LN Options](ln-options.md#extracontainers). + +--- + +## Full hook example + +```yaml nodes: - <> + # ... node list ... plugins: - preDeploy: # Plugins will run before any other `deploy` code. - hello: - entrypoint: "../plugins/hello" - podName: "hello-pre-deploy" - helloTo: "preDeploy!" - postDeploy: # Plugins will run after all the `deploy` code has run. + preDeploy: + setup: + entrypoint: "../plugins/setup" + config_value: "foo" + + postDeploy: simln: entrypoint: "../plugins/simln" activity: '[{"source": "tank-0003-ln", "destination": "tank-0005-ln", "interval_secs": 1, "amount_msat": 2000}]' + + preNode: hello: entrypoint: "../plugins/hello" - podName: "hello-post-deploy" - helloTo: "postDeploy!" - preNode: # Plugins will run before `deploy` launches a node (once per node). - hello: - entrypoint: "../plugins/hello" + podName: "hello-pre-node" helloTo: "preNode!" - postNode: # Plugins will run after `deploy` launches a node (once per node). + + postNode: hello: entrypoint: "../plugins/hello" + podName: "hello-post-node" helloTo: "postNode!" - preNetwork: # Plugins will run before `deploy` launches the network (essentially between logging and when nodes are deployed) + + preNetwork: hello: entrypoint: "../plugins/hello" - helloTo: "preNetwork!" podName: "hello-pre-network" - postNetwork: # Plugins will run after the network deploy threads have been joined. + helloTo: "preNetwork!" + + postNetwork: hello: entrypoint: "../plugins/hello" - helloTo: "postNetwork!" podName: "hello-post-network" -```` - -Warnet will execute these plugin commands during each invocation of `warnet deploy`. - - - -## A "hello" example - -To get started with an example plugin, review the `README` of the `hello` plugin found in any initialized Warnet directory: - -1. `warnet init` -2. `cd plugins/hello/` - + helloTo: "postNetwork!" +``` diff --git a/docs/scenarios.md b/docs/scenarios.md index c2892c421..a09f64be4 100644 --- a/docs/scenarios.md +++ b/docs/scenarios.md @@ -13,10 +13,16 @@ When creating a new network default scenarios will be copied into your project d A scenario can be run with `warnet run [optional_params]`. +Pass `-- --help` after the scenario file path to see that scenario's own argument help without launching a pod: + +```bash +warnet run resources/scenarios/miner_std.py -- --help +``` + The [`miner_std`](../resources/scenarios/miner_std.py) scenario is a good one to start with as it automates block generation: ```bash -₿ warnet run build55/scenarios/miner_std.py --allnodes --interval=10 +₿ warnet run resources/scenarios/miner_std.py --allnodes --interval=10 configmap/warnetjson configured configmap/scenariopy configured pod/commander-minerstd-1724708498 created @@ -72,3 +78,32 @@ Total Tanks: 6 | Active Scenarios: 0 ## Running a custom scenario You can write your own scenario file and run it in the same way. + +### Sharing helper modules with `--source_dir` + +If your scenario imports other local modules (e.g. 
utilities in the same directory), pass the directory that should be bundled into the commander pod:
+
+```bash
+warnet run my_scenarios/my_scenario.py --source_dir=my_scenarios/
+```
+
+### Running with admin privileges
+
+By default a commander pod can only interact with nodes in the namespace it was launched in. Pass `--admin` to give the scenario access to nodes across all namespaces (requires admin kubeconfig context):
+
+```bash
+warnet run resources/scenarios/reconnaissance.py --admin
+```
+
+### Scenario status
+
+Scenarios appear in `warnet status` with one of the following statuses:
+
+| Status    | Meaning                                   |
+|-----------|-------------------------------------------|
+| pending   | Pod is starting up                        |
+| running   | Scenario is executing                     |
+| succeeded | Scenario completed without errors         |
+| failed    | Scenario exited with a non-zero exit code |
+
+`warnet status` reports **Active Scenarios** as the count of scenarios that are currently `running` or `pending`.
diff --git a/docs/snapshots.md b/docs/snapshots.md
index ab0001fbd..9b98b8778 100644
--- a/docs/snapshots.md
+++ b/docs/snapshots.md
@@ -78,7 +78,7 @@ Here's a step-by-step guide on how to create a snapshot, upload it, and configur
 6. Deploy Warnet with the updated configuration:
 
    ```bash
-   warnet deploy networks/your_cool_network/network.yaml
+   warnet deploy networks/your_cool_network
    ```
 
 7. Warnet will now use the uploaded snapshot to initialize the Bitcoin data directory when creating the "miner" node. In this particular example, the blocks will then be distibuted to the other nodes via IBD and the mining node can resume signet mining off the chaintip by loading the wallet from the snapshot:
diff --git a/docs/tank-options.md b/docs/tank-options.md
new file mode 100644
index 000000000..110e56db7
--- /dev/null
+++ b/docs/tank-options.md
@@ -0,0 +1,242 @@
+# Tank Options
+
+A *tank* is a Bitcoin Core node deployed inside the Kubernetes cluster. Each entry in the `nodes:` list of `network.yaml` defines one tank. All keys are optional unless marked required.
+
+For how these values propagate from defaults through to Helm templates see [Configuration](config.md).
+
+---
+
+## `name` *(required)*
+
+Pod name for this node. Must be unique within the network and follow Kubernetes naming rules (lowercase alphanumeric and hyphens).
+
+```yaml
+- name: tank-0000
+```
+
+Tanks are addressed by this name in `warnet bitcoin rpc <name>`, `warnet logs`, and scenario scripts (`self.tanks["tank-0000"]`).
+
+---
+
+## `addnode`
+
+List of peer names this node will connect to on startup. Nodes are addressed by their `name` (for same-namespace peers) or by `<name>.default` for cross-namespace connections on the battlefield.
+
+```yaml
+addnode:
+  - tank-0001
+  - tank-0003
+  - miner.default  # cross-namespace
+```
+
+---
+
+## `image`
+
+Docker image for the Bitcoin Core container.
+
+```yaml
+image:
+  repository: bitcoindevproject/bitcoin  # default
+  tag: "27.0"                            # required if not set in node-defaults.yaml
+  pullPolicy: IfNotPresent
+```
+
+`tag` selects the Bitcoin Core version. Custom builds can be pulled from any repository:
+
+```yaml
+image:
+  repository: myrepo/bitcoin-custom
+  tag: "27.0-patch1"
+```
+
+---
+
+## `config`
+
+Additional lines appended to `bitcoin.conf` for this node only. Use this for per-node settings that differ from the defaults:
+
+```yaml
+config: |
+  maxconnections=1000
+  uacomment=tank-0000-red
+  rpcauth=forkobserver:1418...
+ rpcwhitelistdefault=0 +``` + +> **Note:** Several options (`rpcuser`, `rpcpassword`, `rpcport`, ZMQ endpoints) are managed by the Helm chart and should not be set here. Use `global.rpcpassword` instead of `rpcpassword` in `config`. + +--- + +## `global` + +Sets the chain and RPC password for this node. These values are also shared with any Lightning sub-charts (LND/CLN) attached to the same pod. + +```yaml +global: + chain: signet # regtest (default) | signet | mainnet + rpcpassword: abc123 # unique per node recommended for large networks +``` + +Without this key the chart defaults to `regtest` with a shared password. + +--- + +## `resources` + +Standard Kubernetes [resource requests and limits](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/) for the Bitcoin Core container. Leave unset on resource-constrained local clusters. + +```yaml +resources: + limits: + cpu: 4000m + memory: 1000Mi + requests: + cpu: 100m + memory: 200Mi +``` + +--- + +## `restartPolicy` + +Kubernetes [pod restart policy](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy). Default is `Never` — crashed tanks stay down, which is usually the desired behaviour for attack/test scenarios. + +Set to `Always` when combined with `persistence` so the node recovers after cluster restarts. + +```yaml +restartPolicy: Always +``` + +--- + +## `persistence` + +Creates a Kubernetes Persistent Volume Claim for the node's data directory (`/root/.bitcoin/`). Without this, all chain data is lost when the pod is deleted. + +```yaml +persistence: + enabled: true + size: 20Gi # default + storageClass: "" # uses cluster default + accessMode: ReadWriteOncePod # use ReadWriteOnce for older k8s + existingClaim: "" # name of a pre-existing PVC to reuse +``` + +PVC names follow the pattern `.-bitcoincore-data`. + +--- + +## `startupProbe` + +Overrides the default Kubernetes startup probe. Useful when a node needs custom initialisation before it is considered ready — for example, creating a wallet and importing a descriptor on a miner node: + +```yaml +startupProbe: + exec: + command: + - /bin/sh + - -c + - bitcoin-cli createwallet miner && bitcoin-cli importdescriptors [...] + failureThreshold: 10 + periodSeconds: 30 + successThreshold: 1 + timeoutSeconds: 60 +``` + +--- + +## `collectLogs` + +When `true`, this node's Bitcoin Core logs are shipped to the Loki stack for aggregation in Grafana. The logging stack is installed automatically on first `warnet deploy` if any node has this set. + +```yaml +collectLogs: true +``` + +--- + +## `metricsExport` + +When `true`, attaches a `bitcoin-exporter` Prometheus sidecar to the pod that scrapes Bitcoin RPC results and exposes them on port `9332`. The Prometheus/Grafana stack is installed automatically. + +```yaml +metricsExport: true +``` + +--- + +## `metrics` + +Configures which RPC values the `bitcoin-exporter` sidecar collects. A space-separated list of `label=method(args)[json_key]` expressions: + +```yaml +metrics: > + blocks=getblockcount() + mempool_size=getmempoolinfo()["size"] + memused=getmemoryinfo()["locked"]["used"] + memfree=getmemoryinfo()["locked"]["free"] +``` + +Default metrics (when `metricsExport: true` but no `metrics:` key) are block count, inbound peers, outbound peers, and mempool size. + +--- + +## `prometheusMetricsPort` + +Port the `bitcoin-exporter` sidecar listens on. Default is `9332`. 
+ +```yaml +prometheusMetricsPort: 9332 +``` + +--- + +## `extraContainers` + +List of additional sidecar containers to add to the tank pod. Each entry is a full Kubernetes container spec. This is the mechanism used internally by `metricsExport` to attach the `bitcoin-exporter` sidecar. + +```yaml +extraContainers: + - name: my-monitor + image: myrepo/monitor:latest + ports: + - containerPort: 8080 + name: web + protocol: TCP + env: + - name: TARGET + value: "localhost:18443" +``` + +--- + +## `loadSnapshot` + +Load a chain state snapshot from a URL at startup instead of syncing from genesis. + +```yaml +loadSnapshot: + enabled: true + url: "https://example.com/snapshots/signet-height-50000.tar.gz" +``` + +--- + +## `ln` + +Enables a Lightning Network node attached to this Bitcoin Core tank. The two implementations are mutually exclusive: + +```yaml +ln: + lnd: true # enable LND + cln: false # enable CLN (default false) +``` + +When enabled, a second container is added to the pod and configured to connect to the local Bitcoin Core instance. See [LN Options](ln-options.md) for all configuration under the `lnd:` and `cln:` keys. + +--- + +## `defaultConfig` and `baseConfig` + +These keys are managed by Warnet and should not normally be set by users. `baseConfig` contains the chart's built-in defaults; `defaultConfig` is used by `warnet create` to inject project-level defaults. Both are overridden by `config`. diff --git a/resources/scripts/apidocs.py b/resources/scripts/apidocs.py index cca6fdce7..b93eb304c 100755 --- a/resources/scripts/apidocs.py +++ b/resources/scripts/apidocs.py @@ -45,18 +45,28 @@ def format_default_value(default, param_type): return default +def print_group(cmd, super=""): + """Recursively document a command group and its subcommands.""" + global doc + if "commands" in cmd: + for subcmd in cmd["commands"].values(): + print_group(subcmd, super + " " + cmd["name"]) + else: + print_cmd(cmd, super) + + with Context(cli) as ctx: info = ctx.to_info_dict() # root-level commands first for cmd in info["command"]["commands"].values(): if "commands" not in cmd: print_cmd(cmd) - # then groups of subcommands + # then groups of subcommands (recurse into nested groups) for cmd in info["command"]["commands"].values(): if "commands" in cmd: doc += f"## {cmd['name'].capitalize()}\n\n" for subcmd in cmd["commands"].values(): - print_cmd(subcmd, " " + cmd["name"]) + print_group(subcmd, " " + cmd["name"]) with open(file_path) as file: text = file.read() diff --git a/src/warnet/control.py b/src/warnet/control.py index 2c3bff45c..adf080e18 100644 --- a/src/warnet/control.py +++ b/src/warnet/control.py @@ -52,7 +52,11 @@ @click.command() @click.argument("scenario_name", required=False) def stop(scenario_name): - """Stop a running scenario or all scenarios""" + """Stop a running scenario or all scenarios. + + If scenario_name is omitted, an interactive menu lists all running + scenarios. Enter the number to stop one, 'a' to stop all, or 'q' to quit. + """ active_scenarios = [sc.metadata.name for sc in get_mission("commander")] if not active_scenarios: @@ -151,7 +155,13 @@ def stop_all_scenarios(scenarios) -> None: @click.command() def down(): - """Bring down a running warnet carefully""" + """Bring down a running warnet carefully. + + Interactive: shows a table of all Helm releases that will be destroyed and + asks for confirmation. If Persistent Volume Claims (PVCs) exist, also asks + whether to delete them. 
Answering 'n' to the PVC prompt preserves + persistent node data across redeployments. + """ if not can_delete_pods(): click.secho("You do not have permission to bring down the network.", fg="red") @@ -326,7 +336,12 @@ def run( ): """ Run a scenario from a file. - Pass `-- --help` to get individual scenario help + + Pass `-- --help` to print that scenario's argument help without deploying a pod. + + Use --source_dir to bundle a directory of helper modules into the commander pod. + Use --admin to grant cross-namespace node access (requires admin kubeconfig context). + Use --debug to stream logs and delete the pod when the scenario exits. """ return _run(scenario_file, debug, source_dir, additional_args, admin, namespace) @@ -454,7 +469,11 @@ def filter(path): @click.option("--follow", "-f", is_flag=True, default=False, help="Follow logs") @click.option("--namespace", type=str, default="default", show_default=True) def logs(pod_name: str, follow: bool, namespace: str): - """Show the logs of a pod""" + """Show the logs of a pod. + + If pod_name is omitted, an interactive menu lists all available commander + and tank pods sorted by creation time, most recent first. + """ return _logs(pod_name, follow, namespace) @@ -542,7 +561,11 @@ def format_pods(pods: list[V1Pod]) -> list[str]: help="Comma-separated list of directories and/or files to include in the snapshot", ) def snapshot(tank_name, snapshot_all, output, filter): - """Create a snapshot of a tank's Bitcoin data or snapshot all tanks""" + """Create a snapshot of a tank's Bitcoin data or snapshot all tanks. + + If neither tank_name nor --all is given, an interactive menu lets you + select which tank to snapshot. + """ tanks = get_mission("tank") if not tanks: diff --git a/src/warnet/namespaces.py b/src/warnet/namespaces.py index 12357525b..e59a066f2 100644 --- a/src/warnet/namespaces.py +++ b/src/warnet/namespaces.py @@ -35,7 +35,7 @@ def namespaces(): @namespaces.command() def list(): - """List all namespaces with 'wargames-' prefix""" + """List all active namespaces with the 'wargames-' prefix""" cmd = "kubectl get namespaces -o jsonpath='{.items[*].metadata.name}'" res = run_command(cmd) all_namespaces = res.split() @@ -55,7 +55,10 @@ def list(): @click.option("--all", "destroy_all", is_flag=True, help="Destroy all warnet- prefixed namespaces") @click.argument("namespace", required=False) def destroy(destroy_all: bool, namespace: str): - """Destroy a specific namespace or all 'wargames-' prefixed namespaces""" + """Destroy a specific namespace or all 'wargames-' prefixed namespaces. + + Only namespaces with the 'wargames-' prefix can be destroyed this way. + """ if destroy_all: cmd = "kubectl get namespaces -o jsonpath='{.items[*].metadata.name}'" res = run_command(cmd) diff --git a/src/warnet/users.py b/src/warnet/users.py index 24ddd9ff2..3f46d33bb 100644 --- a/src/warnet/users.py +++ b/src/warnet/users.py @@ -13,7 +13,15 @@ @click.option("--revert", is_flag=True, default=False, show_default=True) @click.argument("auth_config", type=str, required=False) def auth(revert, auth_config): - """Authenticate with a Warnet cluster using a kubernetes config file""" + """Authenticate with a Warnet cluster using a kubernetes config file. + + Merges the given kubeconfig into the local kubeconfig and switches the + active context. If an entry already exists and differs, a diff is shown + and confirmation is requested before overwriting. 
+ + Pass --revert (with no AUTH_CONFIG) to restore the kubeconfig that was + active before the most recent `warnet auth` call. + """ if revert: auth_config = KUBECONFIG_UNDO elif not auth_config: