KVM: Add multi_vms_multi_boot memory sweep test #561

Merged

xhao22 merged 1 commit into intel:main from fanchen2:memory on Mar 18, 2026

Conversation

@fanchen2 (Contributor) commented Mar 13, 2026

Add multi_vms_multi_boot test that boots VMs/TDs with varying memory configurations using a random_32g_window generator to sample memory sizes across sliding 32G windows up to host available memory.

Each iteration boots all VMs with a specific memory size, verifies guest login, then destroys all VMs before proceeding to the next memory size. All cases start from 1024M:

  • multi_vms.1vm.from1G_toall: single VM
  • multi_vms.1td.from1G_toall: single TD
  • multi_vms.1td_1vm.from1G_toall: 1 TD + 1 VM
  • multi_vms.2td.from1G_toall: 2 TDs
  • multi_vms.4td.from1G_toall: 4 TDs
  • multi_vms.2vm.from1G_toall: 2 VMs
  • multi_vms.4vm.from1G_toall: 4 VMs
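The `random_32g_window` generator itself is not shown in this thread; the following is a hypothetical Python sketch of the sampling strategy it describes, assuming it yields the 1024M starting point, one random size from each successive 32G window, and finally the full available memory. The function name and parameters here are illustrative, not the PR's actual code.

```python
import random


def random_32g_window(start_mb=1024, total_mb=262144,
                      window_mb=32 * 1024, seed=None):
    """Yield one randomly sampled memory size (in MB) from each successive
    32G window between start_mb and total_mb, plus both endpoints.

    Illustrative sketch only; the real generator in the PR may differ.
    """
    rng = random.Random(seed)
    yield start_mb                      # always start from 1024M
    low = start_mb
    while low < total_mb:
        high = min(low + window_mb, total_mb)
        if high - low > 1:
            # pick one size strictly inside this 32G window
            yield rng.randrange(low + 1, high)
        low = high
    yield total_mb                      # finish with all available memory
```

For example, `list(random_32g_window(total_mb=96 * 1024, seed=0))` produces five sizes: 1024M, one sample from each of the three 32G windows, and the full 96G.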

@fanchen2 fanchen2 force-pushed the memory branch 3 times, most recently from bdfcfd7 to 87886a3 Compare March 13, 2026 09:36
- 1td_1vm:
    machine_type_extra_params_vm2 = "kernel-irqchip=split"
    vm_secure_guest_type_vm2 = tdx
variants:
Contributor

Does this patch remove the original case, multi_vms.1td_1vm?

The same comment applies to the 4td and 4vm cases below.

Contributor Author

Yes, thanks. Updated; defaults have been added for them.

LOG = logging.getLogger("avocado.test")


def _host_available_mem_mb():
Contributor

There is existing API: utils_misc.get_usable_memory_size()

Contributor Author

Updated.
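The reviewer points to the existing `utils_misc.get_usable_memory_size()` API as a replacement for a hand-rolled helper. As a rough illustration of what such a helper typically does, here is a sketch that parses the available memory out of `/proc/meminfo` text; the real avocado-vt API reads the live file and its exact field and units may differ, so treat this as an assumption, not the library's implementation.

```python
def usable_memory_mb(meminfo_text):
    """Return usable host memory in MB parsed from /proc/meminfo contents.

    Hypothetical sketch of what a helper like
    utils_misc.get_usable_memory_size() does; not the actual library code.
    """
    for line in meminfo_text.splitlines():
        if line.startswith("MemAvailable:"):
            kb = int(line.split()[1])   # /proc/meminfo reports values in kB
            return kb // 1024
    raise ValueError("MemAvailable not found in meminfo")
```

On a live host this would be fed `open("/proc/meminfo").read()`; using the established library API instead, as the reviewer suggests, avoids duplicating this parsing.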

Returns a list of dicts:
[ {vm_name: {param_name: value, ...}, ...}, ... ]
"""
loop_params = _parse_series(params.get("loop_params", "mem"))
Contributor

It seems loop_params is not defined in any .cfg file.

Contributor Author

Yes, updated; this variable has been removed.

test.cancel("No VMs configured for multi_vms_multi_boot")

try:
iteration_plan = _resolve_iteration_plan(params, vm_names)
Contributor

Not sure what the purpose of the iteration_plan variable is?

Contributor Author

We need to boot the VMs from 1024M up to all free memory on the host, so the VMs are booted over many iterations, one memory size per iteration.
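Based on the docstring shown earlier (a list of dicts of the shape `[ {vm_name: {param_name: value, ...}, ...}, ... ]`), a minimal sketch of what `_resolve_iteration_plan` could return is below. The function name mirrors the PR's, but the body is an assumption: each iteration assigns the same memory size to every configured VM.

```python
def build_iteration_plan(vm_names, mem_sizes_mb):
    """Build one boot-parameter dict per iteration.

    Hypothetical sketch of the structure _resolve_iteration_plan returns:
        [ {vm_name: {param_name: value, ...}, ...}, ... ]
    Each iteration boots every VM with the same memory size.
    """
    return [
        {name: {"mem": str(mem)} for name in vm_names}
        for mem in mem_sizes_mb
    ]
```

The test loop can then iterate over this plan, booting all VMs with each iteration's parameters, verifying login, and destroying them before the next entry.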

@fanchen2 fanchen2 force-pushed the memory branch 3 times, most recently from c78d5a0 to 2faab00 Compare March 18, 2026 06:47
Add multi_vms_multi_boot test that boots VMs/TDs with varying memory
configurations using a random_32g_window generator to sample memory
sizes across sliding 32G windows up to host free memory.

Each iteration boots all VMs with a specific memory size, verifies
guest login, then destroys all VMs before proceeding to the next
memory size. All cases start from 1024M:
- multi_vms.1vm.from1G_toall: single VM
- multi_vms.1td.from1G_toall: single TD
- multi_vms.1td_1vm.from1G_toall: 1 TD + 1 VM
- multi_vms.2td.from1G_toall: 2 TDs
- multi_vms.4td.from1G_toall: 4 TDs
- multi_vms.2vm.from1G_toall: 2 VMs
- multi_vms.4vm.from1G_toall: 4 VMs

Signed-off-by: Farrah Chen <farrah.chen@intel.com>
@xhao22 xhao22 merged commit 3777bbc into intel:main Mar 18, 2026
4 of 5 checks passed