(docs) CP-53645 on xenguest: Add walk-through on claiming and populating VM memory #6373
Conversation
Signed-off-by: Bernhard Kaindl <[email protected]>
This is done as part of CP-53645 for NUMA. My next task is to improve the documentation further to document the design of NUMA memory allocations (developed in parallel). A preview of this PR is available. If you have any comments, I will be able to incorporate them during review and also in the follow-up work.
@@ -1,5 +1,4 @@
---
title: Libraries
Suggested change:
- title: Libraries
+ title: C Libraries in Xen
> Errors returned by `xc_domain_claim_pages()` must be handled as they are a
> normal result of the `xenopsd` thread-pool claiming and starting many VMs
> in parallel during a boot storm scenario.
Why would it be a normal result for xenopsd? The toolstack does memory accounting to avoid these kinds of errors.
> [!warning]
> This is especially important when staking claims on NUMA nodes using an updated
> version of this function. In this case, the only options of the calling worker
> thread would be to adapt to the NUMA boot storm:
> Attempt to find a different NUMA node for claiming the memory and try again.
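The adapt-and-retry behaviour the warning describes could be sketched as follows. The per-node free-page bookkeeping and `claim_on_some_node` are invented for illustration (the real claim would be the NUMA-aware variant of `xc_domain_claim_pages()` returning ENOMEM on a full node):

```c
#define NR_NODES 4

/* Hypothetical per-node free page counts; with a real toolstack these
 * would come from the hypervisor's NUMA memory information. */
static unsigned long node_free[NR_NODES] = { 100, 400, 50, 300 };

/* Try to stake a claim on the preferred node; when the claim is denied
 * (the simulated equivalent of an ENOMEM from the claim hypercall),
 * adapt by trying the remaining nodes in turn.
 * Returns the node the claim landed on, or -1 if no node can hold it. */
int claim_on_some_node(int preferred, unsigned long nr_pages)
{
    for (int i = 0; i < NR_NODES; i++) {
        int node = (preferred + i) % NR_NODES;
        if (node_free[node] >= nr_pages) {
            node_free[node] -= nr_pages;   /* claim succeeded on this node */
            return node;
        }
        /* claim denied on this node: fall through to the next one */
    }
    return -1;  /* boot storm exhausted all nodes: VM start must fail or wait */
}
```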
The xenopsd code already reads the amount of available memory under a lock to select how much memory per node it should assign to a VM. Unless other toolstacks are starting VMs, this won't happen, so expecting a failure and retrying is not how xenopsd works.
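The accounting this comment describes could be sketched as below. All names are illustrative, not the actual xenopsd code (which is OCaml): the point is that node selection and reservation happen under one lock, before any claim is staked, so concurrent VM starts cannot oversubscribe a node:

```c
#include <pthread.h>

#define NR_NODES 4

/* Hypothetical per-node free-page accounting held by the toolstack. */
static unsigned long node_free[NR_NODES] = { 800, 600, 200, 400 };
static pthread_mutex_t plan_lock = PTHREAD_MUTEX_INITIALIZER;

/* Pick the node with the most free memory and reserve the VM's pages
 * there, all under one lock, so a later claim cannot fail due to a
 * race with another worker thread.
 * Returns the chosen node, or -1 if no node has enough memory. */
int plan_vm_memory(unsigned long nr_pages)
{
    int best = -1;

    pthread_mutex_lock(&plan_lock);
    for (int i = 0; i < NR_NODES; i++)
        if (best < 0 || node_free[i] > node_free[best])
            best = i;
    if (node_free[best] >= nr_pages)
        node_free[best] -= nr_pages;   /* reservation recorded in the plan */
    else
        best = -1;                     /* no single node can hold the VM */
    pthread_mutex_unlock(&plan_lock);

    return best;  /* node to hand to the domain builder, or -1 */
}
```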
the domain.
is updated to call an updated version of this function for the domain.

It reserves NUMA node memory before `xenguest` is called, and a new `pnode`
The parameter name is `-mem_pnode`.
Documentation to understand the current implementation for claiming and populating VM memory when building a domain from `xenopsd`. The focus is on claiming the memory and on allocating and populating the boot memory using `xenguest`.