
(docs) CP-53645 on xenguest: Add walk-through on claiming and populating VM memory #6373


Open
wants to merge 1 commit into
base: master

Conversation

bernhardkaindl (Collaborator)

Documentation to understand the current implementation
for claiming and populating VM memory when building a
domain from xenopsd.

The focus is on claiming the memory and allocating
and populating the boot memory using xenguest.
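
For readers who want a concrete picture of the flow the walk-through covers, here is a minimal sketch using plain libxc calls (`xc_domain_claim_pages()` and `xc_domain_populate_physmap_exact()`). It is not the actual xenguest code; the helper name and the simplifications are assumptions made for illustration:

```c
/*
 * Minimal sketch (not the actual xenguest code): stake a claim for the
 * domain's boot memory, populate it as 4 KiB pages, then cancel any
 * unused remainder of the claim. Error handling is abbreviated.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <xenctrl.h>

static int build_boot_memory(uint32_t domid, unsigned long nr_pages)
{
    xen_pfn_t *pfns = NULL;
    int rc = -1;
    xc_interface *xch = xc_interface_open(NULL, NULL, 0);

    if (!xch)
        return -1;

    /* Claim the pages up front so the allocation below cannot run the
     * host out of memory half-way through building the domain. */
    if (xc_domain_claim_pages(xch, domid, nr_pages)) {
        perror("xc_domain_claim_pages");
        goto out;
    }

    /* Populate the physmap with one 4 KiB page per guest PFN. The real
     * domain builder batches this and uses superpages where possible. */
    pfns = calloc(nr_pages, sizeof(*pfns));
    if (!pfns)
        goto out;
    for (unsigned long i = 0; i < nr_pages; i++)
        pfns[i] = i;

    rc = xc_domain_populate_physmap_exact(xch, domid, nr_pages,
                                          0 /* order */, 0 /* memflags */,
                                          pfns);

out:
    /* A claim of 0 pages cancels whatever part of the claim is unused. */
    xc_domain_claim_pages(xch, domid, 0);
    free(pfns);
    xc_interface_close(xch);
    return rc;
}
```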

@bernhardkaindl (Collaborator, Author) commented Mar 19, 2025

This is done as part of CP-53645 for NUMA. My next task is to improve this documentation further and document the design of the NUMA memory allocations (developed in parallel).

A preview of this PR is at
https://bernhard-xapi-onrender-com-pr-24.onrender.com/lib/index.html

If you have any comments, I can incorporate them during review and also in the follow-up work.

@bernhardkaindl bernhardkaindl requested a review from psafont March 19, 2025 14:58
@@ -1,5 +1,4 @@
---
title: Libraries
@psafont (Member) commented Mar 19, 2025

Suggested change
title: Libraries
title: C Libraries in Xen

Comment on lines +27 to +29
> Errors returned by `xc_domain_claim_pages()` must be handled as they are a
> normal result of the `xenopsd` thread-pool claiming and starting many VMs
> in parallel during a boot storm scenario.
@psafont (Member) commented Mar 19, 2025

Why would it be a normal result for xenopsd? The toolstack does memory accounting to avoid these kinds of errors.
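
For context, a hedged sketch of what "handling the error" could mean on the caller's side, assuming libxc's usual -1/errno convention (the helper name is hypothetical); whether xenopsd's accounting makes this path unreachable is exactly the question raised above:

```c
/*
 * Hedged sketch: treat a failed claim as "the host cannot guarantee this
 * much memory right now" and fail the domain build cleanly instead of
 * starting to allocate. Assumes libxc's usual -1/errno error convention.
 */
#include <errno.h>
#include <stdint.h>
#include <xenctrl.h>

static int claim_boot_memory(xc_interface *xch, uint32_t domid,
                             unsigned long nr_pages)
{
    if (xc_domain_claim_pages(xch, domid, nr_pages) == 0)
        return 0;           /* claim staked; populating cannot run dry */

    if (errno == ENOMEM)
        return -ENOMEM;     /* not enough unclaimed host memory left */

    return -errno;          /* any other hypercall failure */
}
```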

Comment on lines +31 to +35
> [!warning]
> This is especially important when staking claims on NUMA nodes using an updated
> version of this function. In this case, the only option for the calling worker
> thread would be to adapt to the NUMA boot storm:
> Attempt to find a different NUMA node for claiming the memory and try again.
Member

The xenopsd code already reads the amount of available memory under a lock to select how much memory per node it should assign to a VM. Unless other toolstacks are starting VMs, this won't happen, so expecting a failure and retrying is not how xenopsd works.
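
To make this accounting-under-a-lock model concrete, here is a hedged sketch, written in C for consistency with the other snippets even though xenopsd itself is OCaml; every name in it is hypothetical:

```c
/*
 * Hedged sketch of the accounting model described above: node selection
 * and reservation happen under one lock, so two concurrent VM builds
 * cannot both count the same free memory and later fail their claims.
 * All names here are hypothetical (xenopsd itself is OCaml).
 */
#include <pthread.h>
#include <stdint.h>

#define NR_NODES 4

static pthread_mutex_t numa_lock = PTHREAD_MUTEX_INITIALIZER;
static uint64_t free_pages[NR_NODES];  /* refreshed from Xen elsewhere */

/* Pick a node that can hold nr_pages and reserve them atomically. */
static int reserve_on_some_node(uint64_t nr_pages)
{
    int chosen = -1;

    pthread_mutex_lock(&numa_lock);
    for (int node = 0; node < NR_NODES; node++) {
        if (free_pages[node] >= nr_pages) {
            free_pages[node] -= nr_pages;  /* reserve before claiming */
            chosen = node;
            break;
        }
    }
    pthread_mutex_unlock(&numa_lock);

    return chosen;  /* -1: no single node can host the VM */
}
```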

the domain.
is updated to call an updated version of this function for the domain.

It reserves NUMA node memory before `xenguest` is called, and a new `pnode`
Member

The parameter name is `-mem_pnode`.
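
For illustration, a hedged sketch of how a physical-node preference such as `-mem_pnode` could be applied when populating memory: encode the node into the `mem_flags` argument of `xc_domain_populate_physmap_exact()` via Xen's public `XENMEMF_exact_node()` macro. The wiring from the command line and the helper name are assumptions:

```c
/*
 * Hedged sketch: apply a "-mem_pnode"-style physical node preference by
 * encoding the node into the memory flags passed to Xen. The wiring from
 * the command line is hypothetical; XENMEMF_exact_node() is Xen's public
 * flag macro for node-exact allocations.
 */
#include <stdint.h>
#include <xenctrl.h>

static int populate_on_pnode(xc_interface *xch, uint32_t domid,
                             xen_pfn_t *pfns, unsigned long nr_pages,
                             int mem_pnode)
{
    unsigned int memflags = 0;

    if (mem_pnode >= 0)
        /* Ask Xen to allocate from this node only, failing rather than
         * falling back to another node. */
        memflags = XENMEMF_exact_node(mem_pnode);

    return xc_domain_populate_physmap_exact(xch, domid, nr_pages,
                                            0 /* order */, memflags, pfns);
}
```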
