vCPUs-params:mask=1,2,3
```
There are also host-level `guest_VCPUs_params` which are used by
diff --git a/doc/content/lib/xenguest/_index.md b/doc/content/lib/xenguest/_index.md
new file mode 100644
index 00000000000..556a6e12948
--- /dev/null
+++ b/doc/content/lib/xenguest/_index.md
@@ -0,0 +1,32 @@
+---
+title: libxenguest
+description: Xen Guest library for building Xen Guest domains
+---
+## Introduction
+
+`libxenguest` is a C library provided with the Xen Hypervisor for use in Dom0.
+
+For example, it is used as the low-level interface for building Xen guest domains.
+
+Its source is located in the folder
+[tools/libs/guest](https://github.com/xen-project/xen/tree/master/tools/libs/guest)
+of the Xen repository.
+
+## Responsibilities
+
+### Allocating the boot memory for new & migrated VMs
+
+One important responsibility of `libxenguest` is creating the memory layout
+of new and migrated VMs.
+
+The [boot memory setup](../../../xenopsd/walkthroughs/VM.build/xenguest/setup_mem)
+of `xenguest` and `libxl` (used by the `xl` CLI command) call
+[xc_dom_boot_mem_init()](xc_dom_boot_mem_init), which dispatches the
+call to
+[meminit_hvm()](https://github.com/xen-project/xen/blob/de0254b9/tools/libs/guest/xg_dom_x86.c#L1348-L1649)
+or
+[meminit_pv()](https://github.com/xen-project/xen/blob/de0254b9/tools/libs/guest/xg_dom_x86.c#L1183-L1333),
+which lay out, allocate, and populate the boot memory of domains.
+
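+A hedged sketch of this dispatch (error handling and tracing omitted, not
+the literal upstream code): `xc_dom_boot_mem_init()` essentially invokes the
+architecture-specific `meminit` hook of `struct xc_dom_image`, which points
+to `meminit_hvm()` or `meminit_pv()`:
+
+```c
+/* Simplified sketch: dispatch to the arch hook that lays out,
+ * allocates and populates the domain's boot memory. */
+int xc_dom_boot_mem_init(struct xc_dom_image *dom)
+{
+    return dom->arch_hooks->meminit(dom); /* meminit_hvm() or meminit_pv() */
+}
+```
+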
+## Functions
+
+{{% children description=true %}}
\ No newline at end of file
diff --git a/doc/content/lib/xenguest/boot_mem_init-chart.md b/doc/content/lib/xenguest/boot_mem_init-chart.md
new file mode 100644
index 00000000000..b49d7a55d3a
--- /dev/null
+++ b/doc/content/lib/xenguest/boot_mem_init-chart.md
@@ -0,0 +1,44 @@
+---
+title: Simple Flowchart of xc_dom_boot_mem_init()
+hidden: true
+---
+```mermaid
+flowchart LR
+
+subgraph libxl / xl CLI
+ libxl__build_dom("libxl__build_dom()")
+end
+
+subgraph xenguest
+ hvm_build_setup_mem("hvm_build_setup_mem()")
+end
+
+subgraph libxenctrl
+ xc_domain_populate_physmap("One call for each memory range (extent):
+ xc_domain_populate_physmap()
+ xc_domain_populate_physmap()
+ xc_domain_populate_physmap()")
+end
+
+subgraph libxenguest
+
+ hvm_build_setup_mem & libxl__build_dom
+ --> xc_dom_boot_mem_init("xc_dom_boot_mem_init()")
+
+ xc_dom_boot_mem_init
+ --> meminit_hvm("meminit_hvm()") & meminit_pv("meminit_pv()")
+ --> xc_domain_populate_physmap
+end
+
+click xc_dom_boot_mem_init
+"https://github.com/xen-project/xen/blob/39c45c/tools/libs/guest/xg_dom_boot.c#L110-L126
+" _blank
+
+click meminit_hvm
+"https://github.com/xen-project/xen/blob/39c45c/tools/libs/guest/xg_dom_x86.c#L1348-L1648
+" _blank
+
+click meminit_pv
+"https://github.com/xen-project/xen/blob/de0254b9/tools/libs/guest/xg_dom_x86.c#L1183-L1333
+" _blank
+```
diff --git a/doc/content/lib/xenguest/xc_dom_boot_mem_init.md b/doc/content/lib/xenguest/xc_dom_boot_mem_init.md
new file mode 100644
index 00000000000..ddba647cb13
--- /dev/null
+++ b/doc/content/lib/xenguest/xc_dom_boot_mem_init.md
@@ -0,0 +1,40 @@
+---
+title: xc_dom_boot_mem_init()
+description: VM boot memory setup by calling meminit_hvm() or meminit_pv()
+mermaid:
+ force: true
+---
+## VM boot memory setup
+
+[xenguest's](../../xenopsd/walkthroughs/VM.build/xenguest/_index.md)
+`hvm_build_setup_mem()`, as well as `libxl` (used by the `xl` CLI), call
+[xc_dom_boot_mem_init()](https://github.com/xen-project/xen/blob/39c45c/tools/libs/guest/xg_dom_boot.c#L110-L126)
+to allocate and populate the domain's system memory for booting it:
+
+{{% include "boot_mem_init-chart.md" %}}
+
+The allocation strategies of the called functions are:
+
+### Strategy of the libxenguest meminit functions
+
+- Attempt to allocate 1GB superpages when possible
+- Fall back to 2MB pages when 1GB allocation fails
+- Fall back to 4k pages when both fail
+
+They use
+[xc_domain_populate_physmap()](../xenctrl/xc_domain_populate_physmap.md)
+to perform memory allocation and to map the allocated memory
+to the system RAM ranges of the domain.
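+
+A hedged sketch of this fallback strategy, assuming the
+`xc_domain_populate_physmap()` signature from `xenctrl.h` (the helper
+`populate_range()` and its parameters are illustrative, not the upstream code):
+
+```c
+#include <xenctrl.h>
+
+#define SUPERPAGE_1GB_SHIFT 18 /* 2^18 4k pages = 1GB */
+#define SUPERPAGE_2MB_SHIFT 9  /* 2^9  4k pages = 2MB */
+
+/* Populate [gpfn, gpfn + nr_pages) with the largest possible extents. */
+static int populate_range(xc_interface *xch, uint32_t domid,
+                          xen_pfn_t gpfn, unsigned long nr_pages)
+{
+    unsigned int max_order = SUPERPAGE_1GB_SHIFT; /* lowered on failure */
+
+    while ( nr_pages )
+    {
+        unsigned int order = 0;
+
+        /* Pick the largest allowed page size that fits and is aligned. */
+        if ( max_order >= SUPERPAGE_1GB_SHIFT &&
+             nr_pages >= (1UL << SUPERPAGE_1GB_SHIFT) &&
+             !(gpfn & ((1UL << SUPERPAGE_1GB_SHIFT) - 1)) )
+            order = SUPERPAGE_1GB_SHIFT;
+        else if ( max_order >= SUPERPAGE_2MB_SHIFT &&
+                  nr_pages >= (1UL << SUPERPAGE_2MB_SHIFT) &&
+                  !(gpfn & ((1UL << SUPERPAGE_2MB_SHIFT) - 1)) )
+            order = SUPERPAGE_2MB_SHIFT;
+
+        xen_pfn_t extent = gpfn; /* base gpfn of the extent to populate */
+
+        if ( xc_domain_populate_physmap(xch, domid, 1, order, 0, &extent) == 1 )
+        {
+            gpfn     += 1UL << order;
+            nr_pages -= 1UL << order;
+        }
+        else if ( order > 0 ) /* fall back to the next smaller page size */
+            max_order = (order == SUPERPAGE_1GB_SHIFT) ? SUPERPAGE_2MB_SHIFT : 0;
+        else
+            return -1; /* even 4k allocation failed */
+    }
+    return 0;
+}
+```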
+
+### Strategy of xc_domain_populate_physmap()
+
+[xc_domain_populate_physmap()](../xenctrl/xc_domain_populate_physmap.md)
+calls the `XENMEM_populate_physmap` command of the Xen memory hypercall.
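+
+The argument of this command, slightly simplified from
+`xen/include/public/memory.h` (the field comments are added here for
+illustration):
+
+```c
+/* Argument of XENMEM_populate_physmap, passed to the memory hypercall. */
+struct xen_memory_reservation {
+    XEN_GUEST_HANDLE(xen_pfn_t) extent_start; /* base gpfn of each extent */
+    xen_ulong_t  nr_extents;   /* number of extents to populate           */
+    unsigned int extent_order; /* log2(4k pages) per extent: 0, 9 or 18   */
+    unsigned int mem_flags;    /* e.g. a NUMA node via XENMEMF_node()     */
+    domid_t      domid;        /* the domain to populate                  */
+};
+```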
+
+For a more detailed walk-through of the inner workings of this hypercall,
+see the reference on
+[xc_domain_populate_physmap()](../xenctrl/xc_domain_populate_physmap).
+
+For more details on the VM build step involving `xenguest` and the Xen side, see:
+https://wiki.xenproject.org/wiki/Walkthrough:_VM_build_using_xenguest
diff --git a/doc/content/xenopsd/walkthroughs/VM.build/VM_build-chart.md b/doc/content/xenopsd/walkthroughs/VM.build/VM_build-chart.md
index eec1f05fc0e..9c7c0ee9184 100644
--- a/doc/content/xenopsd/walkthroughs/VM.build/VM_build-chart.md
+++ b/doc/content/xenopsd/walkthroughs/VM.build/VM_build-chart.md
@@ -6,13 +6,18 @@ weight: 10
---
```mermaid
-flowchart
-subgraph xenopsd VM_build[xenopsd: VM_build micro#8209;op]
-direction LR
-VM_build --> VM.build
-VM.build --> VM.build_domain
-VM.build_domain --> VM.build_domain_exn
-VM.build_domain_exn --> Domain.build
+flowchart LR
+
+subgraph xenopsd: VM_build micro-op
+ direction LR
+
+ VM_build(VM_build)
+ --> VM.build(VM.build)
+ --> VM.build_domain(VM.build_domain)
+ --> VM.build_domain_exn(VM.build_domain_exn)
+ --> Domain.build(Domain.build)
+end
+
click VM_build "
https://github.com/xapi-project/xen-api/blob/83555067/ocaml/xenopsd/lib/xenops_server.ml#L2255-L2271" _blank
click VM.build "
@@ -23,5 +28,4 @@ click VM.build_domain_exn "
https://github.com/xapi-project/xen-api/blob/83555067/ocaml/xenopsd/xc/xenops_server_xen.ml#L2024-L2248" _blank
click Domain.build "
https://github.com/xapi-project/xen-api/blob/83555067/ocaml/xenopsd/xc/domain.ml#L1111-L1210" _blank
-end
```
diff --git a/doc/content/xenopsd/walkthroughs/VM.build/xenguest.md b/doc/content/xenopsd/walkthroughs/VM.build/xenguest.md
deleted file mode 100644
index 70908d556fb..00000000000
--- a/doc/content/xenopsd/walkthroughs/VM.build/xenguest.md
+++ /dev/null
@@ -1,185 +0,0 @@
----
-title: xenguest
-description:
- "Perform building VMs: Allocate and populate the domain's system memory."
----
-As part of starting a new domain in VM_build, `xenopsd` calls `xenguest`.
-When multiple domain build threads run in parallel,
-also multiple instances of `xenguest` also run in parallel:
-
-```mermaid
-flowchart
-subgraph xenopsd VM_build[xenopsd VM_build micro#8209;ops]
-direction LR
-xenopsd1[Domain.build - Thread #1] --> xenguest1[xenguest #1]
-xenopsd2[Domain.build - Thread #2] --> xenguest2[xenguest #2]
-xenguest1 --> libxenguest
-xenguest2 --> libxenguest2[libxenguest]
-click xenopsd1 "../Domain.build/index.html"
-click xenopsd2 "../Domain.build/index.html"
-click xenguest1 "https://github.com/xenserver/xen.pg/blob/XS-8/patches/xenguest.patch" _blank
-click xenguest2 "https://github.com/xenserver/xen.pg/blob/XS-8/patches/xenguest.patch" _blank
-click libxenguest "https://github.com/xen-project/xen/tree/master/tools/libs/guest" _blank
-click libxenguest2 "https://github.com/xen-project/xen/tree/master/tools/libs/guest" _blank
-libxenguest --> Xen[Xen
-Hypervisor]
-libxenguest2 --> Xen
-end
-```
-
-## About xenguest
-
-`xenguest` is called by the xenopsd [Domain.build](Domain.build) function
-to perform the build phase for new VMs, which is part of the `xenopsd`
-[VM.start operation](VM.start).
-
-[xenguest](https://github.com/xenserver/xen.pg/blob/XS-8/patches/xenguest.patch)
-was created as a separate program due to issues with
-`libxenguest`:
-
-- It wasn't threadsafe: fixed, but it still uses a per-call global struct
-- It had an incompatible licence, but now licensed under the LGPL.
-
-Those were fixed, but we still shell out to `xenguest`, which is currently
-carried in the patch queue for the Xen hypervisor packages, but could become
-an individual package once planned changes to the Xen hypercalls are stabilised.
-
-Over time, `xenguest` has evolved to build more of the initial domain state.
-
-## Interface to xenguest
-
-```mermaid
-flowchart
-subgraph xenopsd VM_build[xenopsd VM_build micro#8209;op]
-direction TB
-mode
-domid
-memmax
-Xenstore
-end
-mode[--mode build_hvm] --> xenguest
-domid --> xenguest
-memmax --> xenguest
-Xenstore[Xenstore platform data] --> xenguest
-```
-
-`xenopsd` must pass this information to `xenguest` to build a VM:
-
-- The domain type to build for (HVM, PHV or PV).
- - It is passed using the command line option `--mode hvm_build`.
-- The `domid` of the created empty domain,
-- The amount of system memory of the domain,
-- A number of other parameters that are domain-specific.
-
-`xenopsd` uses the Xenstore to provide platform data:
-
-- the vCPU affinity
-- the vCPU credit2 weight/cap parameters
-- whether the NX bit is exposed
-- whether the viridian CPUID leaf is exposed
-- whether the system has PAE or not
-- whether the system has ACPI or not
-- whether the system has nested HVM or not
-- whether the system has an HPET or not
-
-When called to build a domain, `xenguest` reads those and builds the VM accordingly.
-
-## Walkthrough of the xenguest build mode
-
-```mermaid
-flowchart
-subgraph xenguest[xenguest #8209;#8209;mode hvm_build domid]
-direction LR
-stub_xc_hvm_build[stub_xc_hvm_build#40;#41;] --> get_flags[
- get_flags#40;#41; <#8209; Xenstore platform data
-]
-stub_xc_hvm_build --> configure_vcpus[
- configure_vcpus#40;#41; #8209;> Xen hypercall
-]
-stub_xc_hvm_build --> setup_mem[
- setup_mem#40;#41; #8209;> Xen hypercalls to setup domain memory
-]
-end
-```
-
-Based on the given domain type, the `xenguest` program calls dedicated
-functions for the build process of the given domain type.
-
-These are:
-
-- `stub_xc_hvm_build()` for HVM,
-- `stub_xc_pvh_build()` for PVH, and
-- `stub_xc_pv_build()` for PV domains.
-
-These domain build functions call these functions:
-
-1. `get_flags()` to get the platform data from the Xenstore
-2. `configure_vcpus()` which uses the platform data from the Xenstore to configure vCPU affinity and the credit scheduler parameters vCPU weight and vCPU cap (max % pCPU time for throttling)
-3. The `setup_mem` function for the given VM type.
-
-## The function hvm_build_setup_mem()
-
-For HVM domains, `hvm_build_setup_mem()` is responsible for deriving the memory
-layout of the new domain, allocating the required memory and populating for the
-new domain. It must:
-
-1. Derive the `e820` memory layout of the system memory of the domain
- including memory holes depending on PCI passthrough and vGPU flags.
-2. Load the BIOS/UEFI firmware images
-3. Store the final MMIO hole parameters in the Xenstore
-4. Call the `libxenguest` function `xc_dom_boot_mem_init()` (see below)
-5. Call `construct_cpuid_policy()` to apply the CPUID `featureset` policy
-
-## The function xc_dom_boot_mem_init()
-
-```mermaid
-flowchart LR
-subgraph xenguest
-hvm_build_setup_mem[hvm_build_setup_mem#40;#41;]
-end
-subgraph libxenguest
-hvm_build_setup_mem --> xc_dom_boot_mem_init[xc_dom_boot_mem_init#40;#41;]
-xc_dom_boot_mem_init -->|vmemranges| meminit_hvm[meninit_hvm#40;#41;]
-click xc_dom_boot_mem_init "https://github.com/xen-project/xen/blob/39c45c/tools/libs/guest/xg_dom_boot.c#L110-L126" _blank
-click meminit_hvm "https://github.com/xen-project/xen/blob/39c45c/tools/libs/guest/xg_dom_x86.c#L1348-L1648" _blank
-end
-```
-
-`hvm_build_setup_mem()` calls
-[xc_dom_boot_mem_init()](https://github.com/xen-project/xen/blob/39c45c/tools/libs/guest/xg_dom_boot.c#L110-L126)
-to allocate and populate the domain's system memory.
-
-It calls
-[meminit_hvm()](https://github.com/xen-project/xen/blob/39c45c/tools/libs/guest/xg_dom_x86.c#L1348-L1648)
-to loop over the `vmemranges` of the domain for mapping the system RAM
-of the guest from the Xen hypervisor heap. Its goals are:
-
-- Attempt to allocate 1GB superpages when possible
-- Fall back to 2MB pages when 1GB allocation failed
-- Fall back to 4k pages when both failed
-
-It uses the hypercall
-[XENMEM_populate_physmap](https://github.com/xen-project/xen/blob/39c45c/xen/common/memory.c#L1408-L1477)
-to perform memory allocation and to map the allocated memory
-to the system RAM ranges of the domain.
-
-https://github.com/xen-project/xen/blob/39c45c/xen/common/memory.c#L1022-L1071
-
-`XENMEM_populate_physmap`:
-
-1. Uses
- [construct_memop_from_reservation](https://github.com/xen-project/xen/blob/39c45c/xen/common/memory.c#L1022-L1071)
- to convert the arguments for allocating a page from
- [struct xen_memory_reservation](https://github.com/xen-project/xen/blob/master/xen/include/public/memory.h#L46-L80)
- to `struct memop_args`.
-2. Sets flags and calls functions according to the arguments
-3. Allocates the requested page at the most suitable place
- - depending on passed flags, allocate on a specific NUMA node
- - else, if the domain has node affinity, on the affine nodes
- - also in the most suitable memory zone within the NUMA node
-4. Falls back to less desirable places if this fails
- - or fail for "exact" allocation requests
-5. When no pages of the requested size are free,
- it splits larger superpages into pages of the requested size.
-
-For more details on the VM build step involving `xenguest` and Xen side see:
-https://wiki.xenproject.org/wiki/Walkthrough:_VM_build_using_xenguest
diff --git a/doc/content/xenopsd/walkthroughs/VM.build/xenguest/_index.md b/doc/content/xenopsd/walkthroughs/VM.build/xenguest/_index.md
new file mode 100644
index 00000000000..f94079628c3
--- /dev/null
+++ b/doc/content/xenopsd/walkthroughs/VM.build/xenguest/_index.md
@@ -0,0 +1,59 @@
+---
+title: xenguest
+description:
+ "Perform building VMs: Allocate and populate the domain's system memory."
+mermaid:
+ force: true
+---
+## Introduction
+
+`xenguest` is called by the xenopsd [Domain.build](../Domain.build) function
+to perform the build phase for new VMs, which is part of the `xenopsd`
+[VM.build](../../VM.build) micro-op:
+
+{{% include "VM_build-chart.md" %}}
+
+[Domain.build](../Domain.build) calls `xenguest` (during boot storms, many
+instances run in parallel to speed up boot storm completion), and during
+[migration](../../VM.migrate.md), `emu-manager` also calls `xenguest`:
+
+```mermaid
+flowchart
+subgraph "xenopsd & emu-manager call xenguest:"
+direction LR
+xenopsd1(Domain.build for VM #1) --> xenguest1(xenguest for #1)
+xenopsd2(emu-manager for VM #2) --> xenguest2(xenguest for #2)
+xenguest1 --> libxenguest(libxenguest)
+xenguest2 --> libxenguest2(libxenguest)
+click xenopsd1 "../Domain.build/index.html"
+click xenopsd2 "../Domain.build/index.html"
+click xenguest1 "https://github.com/xenserver/xen.pg/blob/XS-8/patches/xenguest.patch" _blank
+click xenguest2 "https://github.com/xenserver/xen.pg/blob/XS-8/patches/xenguest.patch" _blank
+click libxenguest "https://github.com/xen-project/xen/tree/master/tools/libs/guest" _blank
+click libxenguest2 "https://github.com/xen-project/xen/tree/master/tools/libs/guest" _blank
+libxenguest --> Xen(Xen
+Hypercalls, e.g.:
+XENMEM
+populate
+physmap)
+libxenguest2 --> Xen
+end
+```
+
+## Historical heritage
+
+[xenguest](https://github.com/xenserver/xen.pg/blob/XS-8/patches/xenguest.patch)
+was created as a separate program due to issues with
+`libxenguest`:
+
+- It wasn't thread-safe: this was fixed, but it still uses a per-call global struct.
+- It had an incompatible licence, but is now licensed under the LGPL.
+
+Those were fixed, but we still shell out to `xenguest`, which is currently
+carried in the patch queue for the Xen hypervisor packages, but could become
+an individual package once planned changes to the Xen hypercalls are stabilised.
+
+Over time, `xenguest` evolved to build more of the initial domain state.
+
+## Details
+
+The details of the invocation of `xenguest`, the build modes,
+and the VM memory setup are described in these child pages:
+
+{{% children description=true %}}
\ No newline at end of file
diff --git a/doc/content/xenopsd/walkthroughs/VM.build/xenguest/build_modes.md b/doc/content/xenopsd/walkthroughs/VM.build/xenguest/build_modes.md
new file mode 100644
index 00000000000..1dd7ea9efd7
--- /dev/null
+++ b/doc/content/xenopsd/walkthroughs/VM.build/xenguest/build_modes.md
@@ -0,0 +1,101 @@
+---
+title: Build Modes
+description: Description of the xenguest build modes (HVM, PVH, PV) with focus on HVM
+weight: 20
+mermaid:
+ force: true
+---
+## Invocation of the HVM build mode
+
+{{% include "mode_vm_build.md" %}}
+
+## Walk-through of the HVM build mode
+
+The domain build functions
+[stub_xc_hvm_build()](https://github.com/xenserver/xen.pg/blob/65c0438b/patches/xenguest.patch#L2329-L2436)
+and `stub_xc_pv_build()` call these functions:
+
+- [get_flags()](https://github.com/xenserver/xen.pg/blob/65c0438b/patches/xenguest.patch#L1164-L1288)
+ to get the platform data from the Xenstore
+ for filling out the fields of `struct flags` and `struct xc_dom_image`.
+- [configure_vcpus()](https://github.com/xenserver/xen.pg/blob/65c0438b/patches/xenguest.patch#L1297)
+  which uses the platform data from the Xenstore
+  (a hedged sketch of this step follows after this list):
+  - When `platform/vcpu/%d/affinity` is set: set the vCPU affinity.
+    By default, this sets the domain's `node_affinity` mask (NUMA nodes) as well.
+    This configures
+    [`get_free_buddy()`](https://github.com/xen-project/xen/blob/e16acd80/xen/common/page_alloc.c#L855-L958)
+    to prefer memory allocations from this NUMA `node_affinity` mask.
+  - When `platform/vcpu/weight` is set: set the domain's scheduling weight.
+  - When `platform/vcpu/cap` is set: set the domain's scheduling cap (% of pCPU time).
+- The `*_build_setup_mem()` function for the given domain type
+  (e.g. `hvm_build_setup_mem()`), which calls
+  [xc_dom_boot_mem_init()](https://github.com/xen-project/xen/blob/39c45c/tools/libs/guest/xg_dom_boot.c#L110-L126).
+
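+A hedged sketch of the vCPU configuration step, assuming the `libxenctrl`
+calls named in the call graph below (`configure_vcpus_sketch()` and its
+parameters are illustrative, not the literal `xenguest` code):
+
+```c
+#include <xenctrl.h>
+
+static int configure_vcpus_sketch(xc_interface *xch, uint32_t domid,
+                                  xc_cpumap_t affinity, int nr_vcpus,
+                                  int weight, int cap)
+{
+    struct xen_domctl_sched_credit sdom;
+
+    /* Hard-affinity from platform/vcpu/%d/affinity; by default this also
+     * updates the domain's node_affinity mask (see get_free_buddy()). */
+    for ( int v = 0; v < nr_vcpus; v++ )
+        if ( affinity &&
+             xc_vcpu_setaffinity(xch, domid, v, affinity, NULL,
+                                 XEN_VCPUAFFINITY_HARD) )
+            return -1;
+
+    /* Scheduling weight/cap from platform/vcpu/weight and platform/vcpu/cap. */
+    if ( xc_sched_credit_domain_get(xch, domid, &sdom) )
+        return -1;
+    if ( weight > 0 )
+        sdom.weight = weight;
+    if ( cap > 0 )
+        sdom.cap = cap;
+    return xc_sched_credit_domain_set(xch, domid, &sdom);
+}
+```
+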
+Call graph of
+[do_hvm_build()](https://github.com/xenserver/xen.pg/blob/65c0438b/patches/xenguest.patch#L596-L615)
+with emphasis on information flow:
+
+{{% include "do_hvm_build" %}}
+
+## The function hvm_build_setup_mem()
+
+For HVM domains, `hvm_build_setup_mem()` is responsible for deriving the memory
+layout of the new domain, allocating the required memory, and populating it for
+the new domain. It must:
+
+1. Derive the `e820` memory layout of the system memory of the domain
+ including memory holes depending on PCI passthrough and vGPU flags.
+2. Load the BIOS/UEFI firmware images
+3. Store the final MMIO hole parameters in the Xenstore
+4. Call the `libxenguest` function `xc_dom_boot_mem_init()` (see below)
+5. Call `construct_cpuid_policy()` to apply the CPUID `featureset` policy
+
+It starts this by:
+- Getting `struct xc_dom_image`, `max_mem_mib`, and `max_start_mib`.
+- Calculating start and size of lower ranges of the domain's memory maps
+ - taking memory holes for I/O into account, e.g. `mmio_size` and `mmio_start`.
+- Calculating `lowmem_end` and `highmem_end`.
+
+It then calls `xc_dom_boot_mem_init()`:
+
+## The function xc_dom_boot_mem_init()
+
+`hvm_build_setup_mem()` calls
+[xc_dom_boot_mem_init()](https://github.com/xen-project/xen/blob/39c45c/tools/libs/guest/xg_dom_boot.c#L110-L126)
+to allocate and populate the domain's system memory:
+
+```mermaid
+flowchart LR
+subgraph xenguest
+hvm_build_setup_mem[hvm_build_setup_mem#40;#41;]
+end
+subgraph libxenguest
+hvm_build_setup_mem --vmemranges--> xc_dom_boot_mem_init[xc_dom_boot_mem_init#40;#41;]
+xc_dom_boot_mem_init -->|vmemranges| meminit_hvm[meminit_hvm#40;#41;]
+click xc_dom_boot_mem_init "https://github.com/xen-project/xen/blob/39c45c/tools/libs/guest/xg_dom_boot.c#L110-L126" _blank
+click meminit_hvm "https://github.com/xen-project/xen/blob/39c45c/tools/libs/guest/xg_dom_x86.c#L1348-L1648" _blank
+end
+```
+
+Apart from error handling and tracing, it is only a wrapper that calls the
+architecture-specific `meminit()` hook for the domain type:
+
+```c
+rc = dom->arch_hooks->meminit(dom);
+```
+
+For HVM domains, it calls
+[meminit_hvm()](https://github.com/xen-project/xen/blob/39c45c/tools/libs/guest/xg_dom_x86.c#L1348-L1648)
+to loop over the `vmemranges` of the domain for mapping the system RAM
+of the guest from the Xen hypervisor heap. Its goals are:
+
+- Attempt to allocate 1GB superpages when possible
+- Fall back to 2MB pages when 1GB allocation fails
+- Fall back to 4k pages when both fail
+
+It uses
+[xc_domain_populate_physmap()](../../../../../lib/xenctrl/xc_domain_populate_physmap.md)
+to perform memory allocation and to map the allocated memory
+to the system RAM ranges of the domain.
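+
+A hedged usage example (the values are illustrative): populating one 2MB
+extent with `xc_domain_populate_physmap()`, which returns the number of
+extents it could populate:
+
+```c
+xen_pfn_t extent = 0x100000; /* base gpfn of the 2MB extent to populate */
+
+/* nr_extents = 1, extent_order = 9 (2^9 * 4k = 2MB), no mem_flags */
+if ( xc_domain_populate_physmap(xch, domid, 1, 9, 0, &extent) != 1 )
+    /* fall back to smaller pages, as meminit_hvm() does */;
+```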
+
+For more details on the VM build step involving `xenguest` and the Xen side, see:
+https://wiki.xenproject.org/wiki/Walkthrough:_VM_build_using_xenguest
diff --git a/doc/content/xenopsd/walkthroughs/VM.build/xenguest/do_hvm_build.md b/doc/content/xenopsd/walkthroughs/VM.build/xenguest/do_hvm_build.md
new file mode 100644
index 00000000000..07bf9c0d067
--- /dev/null
+++ b/doc/content/xenopsd/walkthroughs/VM.build/xenguest/do_hvm_build.md
@@ -0,0 +1,78 @@
+---
+title: Call graph of xenguest/do_hvm_build()
+description: Call graph of xenguest/do_hvm_build() with emphasis on information flow
+hidden: true
+---
+```mermaid
+flowchart TD
+
+do_hvm_build("do_hvm_build() for HVM")
+ --> stub_xc_hvm_build("stub_xc_hvm_build()")
+
+get_flags("get_flags()") --"VM platform_data from XenStore"
+ --> stub_xc_hvm_build
+
+stub_xc_hvm_build
+ --> configure_vcpus("configure_vcpus()")
+
+configure_vcpus --"When
platform/
+ vcpu/%d/affinity
is set"
+ --> xc_vcpu_setaffinity
+
+configure_vcpus --"When
platform/
+ vcpu/cap
or
+ vcpu/weight
is set"
+ --> xc_sched_credit_domain_set
+
+stub_xc_hvm_build
+ --"struct xc_dom_image, mem_start_mib, mem_max_mib"
+ --> hvm_build_setup_mem("hvm_build_setup_mem()")
+ -- "struct xc_dom_image
+ with
+ optional vmemranges"
+ --> xc_dom_boot_mem_init
+
+subgraph libxenguest
+ xc_dom_boot_mem_init("xc_dom_boot_mem_init()")
+ -- "struct xc_dom_image
+ with
+ optional vmemranges" -->
+ meminit_hvm("meminit_hvm()")
+ -- page_size(1GB,2M,4k, memflags: e.g. exact) -->
+ xc_domain_populate_physmap("xc_domain_populate_physmap()")
+end
+
+subgraph direct xenguest hypercalls
+ xc_vcpu_setaffinity("xc_vcpu_setaffinity()")
+ --> vcpu_set_affinity("vcpu_set_affinity()")
+ --> domain_update_node_aff("domain_update_node_aff()")
+ -- "if auto_node_affinity
+ is on (default)"--> auto_node_affinity(Update dom->node_affinity)
+
+ xc_sched_credit_domain_set("xc_sched_credit_domain_set()")
+end
+
+click do_hvm_build
+"https://github.com/xenserver/xen.pg/blob/65c0438b/patches/xenguest.patch#L596-L615" _blank
+click xc_vcpu_setaffinity "../../../../../lib/xenctrl/xc_vcpu_setaffinity/index.html" _blank
+click vcpu_set_affinity
+"https://github.com/xen-project/xen/blob/e16acd806/xen/common/sched/core.c#L1353-L1393" _blank
+click domain_update_node_aff
+"https://github.com/xen-project/xen/blob/e16acd806/xen/common/sched/core.c#L1809-L1876" _blank
+click stub_xc_hvm_build
+"https://github.com/xenserver/xen.pg/blob/65c0438b/patches/xenguest.patch#L2329-L2436" _blank
+click hvm_build_setup_mem
+"https://github.com/xenserver/xen.pg/blob/65c0438b/patches/xenguest.patch#L2002-L2219" _blank
+click get_flags
+"https://github.com/xenserver/xen.pg/blob/65c0438b/patches/xenguest.patch#L1164-L1288" _blank
+click configure_vcpus
+"https://github.com/xenserver/xen.pg/blob/65c0438b/patches/xenguest.patch#L1297" _blank
+click xc_dom_boot_mem_init
+"https://github.com/xen-project/xen/blob/e16acd806/tools/libs/guest/xg_dom_boot.c#L110-L125"
+click meminit_hvm
+"https://github.com/xen-project/xen/blob/e16acd806/tools/libs/guest/xg_dom_x86.c#L1348-L1648"
+click xc_domain_populate_physmap
+"../../../../../lib/xenctrl/xc_domain_populate_physmap/index.html" _blank
+click auto_node_affinity
+"../../../../../lib/xenctrl/xc_domain_node_setaffinity/index.html#flowchart-in-relation-to-xc_set_vcpu_affinity" _blank
+```
diff --git a/doc/content/xenopsd/walkthroughs/VM.build/xenguest/invoke.md b/doc/content/xenopsd/walkthroughs/VM.build/xenguest/invoke.md
new file mode 100644
index 00000000000..88511eb022f
--- /dev/null
+++ b/doc/content/xenopsd/walkthroughs/VM.build/xenguest/invoke.md
@@ -0,0 +1,34 @@
+---
+title: Invocation
+description: Invocation of xenguest and the interfaces used for it
+weight: 10
+mermaid:
+ force: true
+---
+## Interface to xenguest
+
+[xenopsd](../../../) passes this information to [xenguest](index.html)
+(for [migration](../../VM.migrate.md), using `emu-manager`):
+
+- The domain type to build, using the command line option `--mode <type>_build`
+  (`hvm_build`, `pvh_build`, or `pv_build`).
+- The `domid` of the created empty domain.
+- The amount of system memory of the domain.
+- A number of other parameters that are domain-specific.
+
+`xenopsd` uses the Xenstore to provide platform data:
+
+- if the domain has a [VCPUs-mask](../../../../lib/xenctrl/xc_vcpu_setaffinity.md) set,
+  the statically configured vCPU hard-affinity
+- the vCPU credit2 weight/cap parameters
+- whether the NX bit is exposed
+- whether the viridian CPUID leaf is exposed
+- whether the system has PAE or not
+- whether the system has ACPI or not
+- whether the system has nested HVM or not
+- whether the system has an HPET or not
+
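+A hedged sketch of how such a platform key can be read with `libxenstore`
+(the path layout and the helper are illustrative; the exact keys and code
+in `xenguest`'s `get_flags()` may differ):
+
+```c
+#include <stdio.h>
+#include <xenstore.h>
+
+/* Read e.g. "vcpu/weight" from the domain's platform data in Xenstore.
+ * Returns a malloc'd string, or NULL if the key is absent. */
+static char *read_platform_key(struct xs_handle *xsh, int domid, const char *key)
+{
+    char path[256];
+    unsigned int len;
+
+    snprintf(path, sizeof(path), "/local/domain/%d/platform/%s", domid, key);
+    return xs_read(xsh, XBT_NULL, path, &len);
+}
+```
+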
+When called to build a domain, `xenguest` reads those and builds the VM accordingly.
+
+## Parameters of the VM build modes
+
+{{% include "mode_vm_build.md" %}}
diff --git a/doc/content/xenopsd/walkthroughs/VM.build/xenguest/mode_vm_build.md b/doc/content/xenopsd/walkthroughs/VM.build/xenguest/mode_vm_build.md
new file mode 100644
index 00000000000..e8a659d56f3
--- /dev/null
+++ b/doc/content/xenopsd/walkthroughs/VM.build/xenguest/mode_vm_build.md
@@ -0,0 +1,40 @@
+---
+hidden: true
+title: Call graph to the xenguest hvm/pvh/pv build functions
+description: Call graph of xenguest for calling the hvm/pvh/pv build functions
+---
+```mermaid
+flowchart LR
+
+xenguest_main("
+ xenguest
+ --mode hvm_build
+ /
+ --mode pvh_build
+ /
+ --mode pv_build
++
+domid
+mem_max_mib
+mem_start_mib
+image
+store_port
+store_domid
+console_port
+console_domid")
+ --> do_hvm_build("do_hvm_build() for HVM
+ ") & do_pvh_build("do_pvh_build() for PVH")
+ --> stub_xc_hvm_build("stub_xc_hvm_build()")
+
+xenguest_main --> do_pv_build("do_pv_build() for PV") -->
+ stub_xc_pv_build("stub_xc_pv_build()")
+
+click do_pv_build
+"https://github.com/xenserver/xen.pg/blob/65c0438b/patches/xenguest.patch#L575-L594" _blank
+click do_hvm_build
+"https://github.com/xenserver/xen.pg/blob/65c0438b/patches/xenguest.patch#L596-L615" _blank
+click do_pvh_build
+"https://github.com/xenserver/xen.pg/blob/65c0438b/patches/xenguest.patch#L617-L640" _blank
+click stub_xc_hvm_build
+"https://github.com/xenserver/xen.pg/blob/65c0438b/patches/xenguest.patch#L2329-L2436" _blank
+```
diff --git a/doc/content/xenopsd/walkthroughs/VM.build/xenguest/setup_mem.md b/doc/content/xenopsd/walkthroughs/VM.build/xenguest/setup_mem.md
new file mode 100644
index 00000000000..81cbe41d968
--- /dev/null
+++ b/doc/content/xenopsd/walkthroughs/VM.build/xenguest/setup_mem.md
@@ -0,0 +1,40 @@
+---
+title: Memory Setup
+description: Creation and allocation of the boot memory layout of VMs
+weight: 30
+mermaid:
+ force: true
+---
+## HVM boot memory setup
+
+For HVM domains, `hvm_build_setup_mem()` is responsible for deriving the memory
+layout of the new domain, allocating the required memory, and populating it for
+the new domain. It must:
+
+1. Derive the `e820` memory layout of the system memory of the domain
+ including memory holes depending on PCI passthrough and vGPU flags.
+2. Load the BIOS/UEFI firmware images
+3. Store the final MMIO hole parameters in the Xenstore
+4. Call the `libxenguest` function `xc_dom_boot_mem_init()` (see below)
+5. Call `construct_cpuid_policy()` to apply the CPUID `featureset` policy
+
+It starts this by:
+- Getting `struct xc_dom_image`, `max_mem_mib`, and `max_start_mib`.
+- Calculating start and size of lower ranges of the domain's memory maps
+ - taking memory holes for I/O into account, e.g. `mmio_size` and `mmio_start`.
+- Calculating `lowmem_end` and `highmem_end` (a sketch follows after this list).
+
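+A hedged sketch of this split, assuming the conventional x86 HVM layout
+(RAM below the MMIO hole, with the remainder relocated above 4GB; the
+variable names follow the description above, not the literal `xenguest` code):
+
+```c
+uint64_t mmio_start  = (1ULL << 32) - mmio_size; /* MMIO hole ends at 4GB */
+uint64_t lowmem_end  = mem_size;                 /* all RAM, if no hole needed */
+uint64_t highmem_end = 0;
+
+if ( lowmem_end > mmio_start )
+{
+    /* RAM does not fit below the hole: move the excess above 4GB. */
+    highmem_end = (1ULL << 32) + (lowmem_end - mmio_start);
+    lowmem_end  = mmio_start;
+}
+```
+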
+## Calling into libxenguest for the bootmem setup
+
+`hvm_build_setup_mem()` then calls the [libxenguest](../../../../lib/xenguest/)
+function
+[xc_dom_boot_mem_init()](../../../../lib/xenguest/xc_dom_boot_mem_init.md)
+to set up the boot memory of domains.
+
+The `xl` CLI also uses this function.
+It constructs the memory layout of the domain, and allocates and populates
+the main system memory of the domain using calls to
+[xc_domain_populate_physmap()](../../../../lib/xenctrl/xc_domain_populate_physmap.md).
+
+For more details on the VM build step involving `xenguest` and the Xen side, see:
+https://wiki.xenproject.org/wiki/Walkthrough:_VM_build_using_xenguest