[UPDATE MAY 2021]: HNC has graduated to its own repo! Please visit that repo for the latest version of this user guide.
Part of the HNC User Guide
Please feel free to suggest additional questions.
You can contact us on:
Of these, GitHub and the mailing list will often get you the fastest response, since we're not constantly on Slack.
HNC technically requires Kubernetes 1.16 or higher, although we don't test on every version of Kubernetes. See the release notes for the version you're downloading for a full list of the K8s distributions on which that release has been tested.
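If you're not sure which version your cluster is running, a quick check with kubectl (a minimal sketch; the exact output format varies by kubectl release) is:

```shell
# Print the client and API server versions. The "Server Version" line must
# report v1.16 or later for this to be a supported configuration for HNC.
kubectl version
```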
By default, HNC's service account is given the ability to perform any verb on
any resource. HNC does not need these permissions itself, but it does require
them to be able to propagate RoleBindings with arbitrary permissions. This
includes namespace RoleBindings to the cluster-admin cluster role.
You may modify HNC's own role bindings in the hnc-system namespace to restrict
its permissions if you wish, but then it will be unable to propagate
RoleBindings that include the missing permissions. At a minimum, HNC must be
able to access (create, read, list, watch, update and delete) all of its own CRs
as well as namespaces, roles, and role bindings.
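As an illustrative sketch only (not the shipped manifest), a restricted ClusterRole covering roughly that minimum might look like the following. The hnc.x-k8s.io API group is assumed to be the group of HNC's own CRs, and with only these rules HNC would be unable to propagate RoleBindings that reference any other permissions:

```yaml
# Hypothetical restricted ClusterRole for HNC, covering only the minimum
# permissions described above. The hnc.x-k8s.io group name is an assumption;
# match it to the CRDs actually installed in your cluster.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: hnc-manager-role-restricted
rules:
- apiGroups: ["hnc.x-k8s.io"]          # HNC's own custom resources
  resources: ["*"]
  verbs: ["create", "get", "list", "watch", "update", "delete"]
- apiGroups: [""]                       # core API group
  resources: ["namespaces"]
  verbs: ["create", "get", "list", "watch", "update", "delete"]
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["roles", "rolebindings"]
  verbs: ["create", "get", "list", "watch", "update", "delete"]
```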
No, HNC does not impose any limit on hierarchy depth. We've tested hierarchies about a hundred levels deep, though we certainly don't recommend that - 3-5 levels should be more than enough for any sane use case.
HNC is deployed as a single pod with in-memory state, so it cannot scale horizontally. In practice, we have found that API throttling by the K8s apiserver is by far the greatest bottleneck on HNC performance, and that would not be improved by horizontal scaling. Almost all validating webhook calls are also served entirely from in-memory state and as a result should be extremely fast.
We have tested HNC on clusters with about 500 namespaces, both in an almost-flat
configuration (all namespaces share one common root) and in a well-balanced
hierarchy of a few levels. In the steady state, changes to the hierarchy and any
objects within them were propagated ~instantly. The only real limitation was
when the HNC pod needed to restart, which could take up to 200s on these large
clusters; once again, this was driven almost entirely by apiserver throttling.
You can adjust the --apiserver-qps-throttle parameter in the manifest to
increase it from the default of 50qps if your cluster supports higher values.
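For example, you could raise the throttle by editing the manager's container args in the manifest before applying it. The structure below is only an excerpt and the container name is a placeholder; the --apiserver-qps-throttle flag itself comes from this guide:

```yaml
# Illustrative excerpt of the HNC manager Deployment spec (names are
# placeholders). Raising the QPS throttle speeds up startup on large clusters,
# provided the apiserver can handle the extra load.
spec:
  template:
    spec:
      containers:
      - name: manager
        args:
        - --apiserver-qps-throttle=100  # default is 50
```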
As of Dec. 2020, the HNC performance test shows that 700 namespaces with 10 propagatable objects in each namespace would use about 200MB of memory during HNC startup and about 150MB afterwards. Thus, we set defaults of a 300MB memory limit and a 150MB memory request for HNC.
To change HNC memory limits and requests, you can update the values in
config/manager/manager.yaml, run make manifests, and reapply the manifest. If
you are using a GKE cluster, you can view the real-time memory usage in the
Workloads tab to determine the best limits and requests for your deployment.
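For instance, here is a sketch of the resources stanza you might set on the manager container in config/manager/manager.yaml; the values shown simply mirror the defaults described above:

```yaml
# Example memory settings for the HNC manager container. These mirror the
# default 300M limit / 150M request; raise them for clusters with many more
# namespaces or propagated objects.
resources:
  limits:
    memory: 300Mi
  requests:
    memory: 150Mi
```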
HNC is currently deployed as a single pod with in-memory state and therefore does not support high availability, as mentioned in the previous section. While it is restarting, it typically does not block changes to objects, but it will block changes to namespaces, hierarchy configurations and subnamespace anchors.
Please contact us if this does not meet your use case.