
Conversation

@pierugo-dfinity (Contributor) commented Dec 1, 2025

The upgrade loop in the orchestrator is responsible both for executing upgrades and for determining the subnet ID of the node, which is used to provision SSH keys and rotate IDKG keys. However, there are multiple code flows in which the orchestrator determines the subnet ID but then hits an error later in the loop; the function returns the error and the caller never applies the subnet ID. This prevents SSH keys from being provisioned even though the subnet ID had been correctly identified.

An example of such a code flow is when the local CUP is not deserializable but the NiDkgId is, which still allows the subnet ID to be correctly determined (i.e. we hit here). But since the CUP is not deserializable and currently has the highest height compared to the recovery or peers' CUPs (imagine we are at the very start of a recovery, before applying SSH keys, so there is no recovery CUP yet), we return an error here, the subnet ID is not updated, and SSH keys are not provisioned. If the local CUP does not have the highest height (i.e. there is a recovery CUP), then we can use the latter instead, which explains why we can still recover.

Note: the existing system test sr_app_no_upgrade_with_chain_keys_test checks that we can recover a subnet in exactly that case (the CUP is not deserializable but the NiDkgId is). As explained, nodes can see the recovery CUP, but we do not apply readonly keys even though we could. In a parallel PR, I distinguish the cases where the NiDkgId is corrupted from those where it is not. If it is corrupted, then there is indeed no way of provisioning SSH keys, but there is also no way of seeing the recovery CUP, so failover nodes must be used. If it is not corrupted, then we should be able to provision SSH keys. When the second case runs on the current implementation, it fails because we cannot provision SSH keys. When this branch is merged into it, the test succeeds, which is a positive sign for the added value of this change.

Another example is when we detect that we need to leave the subnet but removing the state fails (i.e. we hit here). In that case we would again return an error and fail to remove the subnet's SSH keys.
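
In both cases the shape of the problem is the same. Schematically, it looks something like this (a minimal, self-contained sketch with hypothetical names, not the actual orchestrator code):

```rust
// Hypothetical simplification of the current contract of the upgrade loop.
#[derive(Clone, Copy, Debug)]
struct SubnetId(u64);

#[derive(Debug)]
struct UpgradeError(String);

fn determine_subnet_id() -> Result<Option<SubnetId>, UpgradeError> {
    Ok(Some(SubnetId(42)))
}

fn execute_upgrade_steps() -> Result<(), UpgradeError> {
    Err(UpgradeError("local CUP not deserializable".into()))
}

// The subnet ID is computed, but any later error makes the whole function
// return Err, so the caller never sees the assignment.
fn check_for_upgrade() -> Result<Option<SubnetId>, UpgradeError> {
    let subnet_id = determine_subnet_id()?;
    execute_upgrade_steps()?; // an error here discards `subnet_id`
    Ok(subnet_id)
}

fn main() {
    let mut current_assignment: Option<SubnetId> = None;
    match check_for_upgrade() {
        Ok(assignment) => current_assignment = assignment,
        // On Err the assignment stays stale, so SSH keys are never provisioned.
        Err(e) => eprintln!("upgrade loop failed: {e:?}"),
    }
    println!("assignment after loop: {current_assignment:?}");
}
```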

This PR is not meant to bring any functional change to the upgrade logic; instead, it modifies the return type of the loop so that the subnet assignment is returned even on errors, whenever it could be determined.
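
One way to express that change, reusing the hypothetical stubs from the sketch above (an illustration of the intent, not necessarily the exact shape chosen in the PR):

```rust
// The loop now always reports whatever subnet assignment it managed to
// determine, alongside the Result of the iteration.
fn check_for_upgrade() -> (Option<SubnetId>, Result<(), UpgradeError>) {
    let subnet_id = match determine_subnet_id() {
        Ok(id) => id,
        Err(e) => return (None, Err(e)),
    };
    let outcome = execute_upgrade_steps();
    (subnet_id, outcome)
}

// The caller applies the assignment regardless of whether the loop erred,
// so SSH keys can be provisioned even when a later step failed.
fn upgrade_loop_iteration(current_assignment: &mut Option<SubnetId>) {
    let (assignment, outcome) = check_for_upgrade();
    *current_assignment = assignment;
    if let Err(e) = outcome {
        eprintln!("upgrade loop failed: {e:?}");
    }
}
```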

PS: The PR also uses the same registry version for the entire loop, instead of determining the latest registry version multiple times (in prepare_upgrade_if_scheduled, check_for_upgrade_as_unassigned, and should_node_become_unassigned), in order to get more consistent and predictable behaviour.
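
Sketched with hypothetical stubs (the real helpers take more arguments), the idea is simply to read the version once per iteration and thread it through:

```rust
// Pin one registry version per loop iteration so that every decision in the
// iteration is made against the same snapshot of the registry.
type RegistryVersion = u64;

struct Registry;
impl Registry {
    fn latest_version(&self) -> RegistryVersion {
        7
    }
}

fn prepare_upgrade_if_scheduled(_version: RegistryVersion) {}
fn check_for_upgrade_as_unassigned(_version: RegistryVersion) {}
fn should_node_become_unassigned(_version: RegistryVersion) -> bool {
    false
}

fn upgrade_loop_iteration(registry: &Registry) {
    let version = registry.latest_version(); // fetched exactly once
    prepare_upgrade_if_scheduled(version);
    check_for_upgrade_as_unassigned(version);
    let _should_leave = should_node_become_unassigned(version);
}

fn main() {
    upgrade_loop_iteration(&Registry);
}
```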

@github-actions bot added the feat label Dec 1, 2025
@pierugo-dfinity added the CI_ALL_BAZEL_TARGETS label (Runs all bazel targets and uploads them to S3) Dec 1, 2025
@pierugo-dfinity changed the title from "feat(orchestrator): do not swallow subnet assignment on upgrade loop errors" to "feat(orchestrator): do not ignore subnet assignment on upgrade loop errors" Dec 2, 2025
Comment on lines 732 to 734
@pierugo-dfinity (Contributor, Author) commented:


Note: I would also argue for changing both of these false values to true. That is, if the registry version is unavailable locally, or the field is somehow empty or not deserializable, I would prefer to not accidentally remove the state, to keep the subnet's SSH keys for a bit too long, and to try to rotate IDKG keys (in which case the registry should deny the request anyway, because we would have left the subnet), rather than the opposite.

As of today, I cannot see a case where the registry version would not be available locally, since it is always lower than the latest version that we have. But if this function gets reused somewhere else, I feel that returning true is more fail-safe than false. What do you guys think?

Note that changing this to true could also mean launching the replica even though we are unassigned. But again, I do not think it hurts much if only a single node does so, since the other nodes would ignore it.
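
To make the trade-off concrete, here is a hypothetical sketch (names and exact semantics are assumptions, not the code under review): on a failed registry lookup, the fail-safe default keeps the node treated as assigned instead of triggering state and SSH key removal.

```rust
#[derive(Debug)]
struct RegistryError;

// Hypothetical membership check: `true` means "still treat the node as
// assigned to its subnet".
fn node_still_assigned(lookup: Result<bool, RegistryError>) -> bool {
    match lookup {
        Ok(is_member) => is_member,
        // Fail-safe default on lookup failure: keep state and SSH keys for
        // now. Worst case, one node launches its replica or attempts an IDKG
        // key rotation while actually unassigned; peers would ignore the
        // replica and the registry would deny the rotation.
        Err(_) => true,
    }
}

fn main() {
    // Registry entry unreadable: prefer keeping the node assigned.
    assert!(node_still_assigned(Err(RegistryError)));
}
```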
