Check the option before aggregating live bytes data. Panic if the option is enabled on malloc space. (#1250)
We see failures in OpenJDK tests, e.g.
[here](https://github.com/mmtk/mmtk-core/actions/runs/12193674159/job/34020377988?pr=1248):
```
[2024-12-06T07:09:55Z INFO mmtk::memory_manager] Initialized MMTk with MarkSweep (FixedHeapSize(54525952))
[2024-12-06T07:09:55Z WARN mmtk::memory_manager] The feature 'extreme_assertions' is enabled. MMTk will run expensive run-time checks. Slow performance should be expected.
===== DaCapo fop starting =====
[2024-12-06T07:10:07Z INFO mmtk::util::heap::gc_trigger] [POLL] MallocSpace: Triggering collection (13313/13312 pages)
thread '<unnamed>' panicked at /home/runner/work/mmtk-core/mmtk-core/mmtk-openjdk/repos/mmtk-core/src/policy/marksweepspace/malloc_ms/global.rs:158:9:
internal error: entered unreachable code
stack backtrace:
0: rust_begin_unwind
at /rustc/aedd173a2c086e558c2b66d3743b344f977621a7/library/std/src/panicking.rs:647:5
1: core::panicking::panic_fmt
at /rustc/aedd173a2c086e558c2b66d3743b344f977621a7/library/core/src/panicking.rs:72:14
2: core::panicking::panic
at /rustc/aedd173a2c086e558c2b66d3743b344f977621a7/library/core/src/panicking.rs:144:5
3: <mmtk::policy::marksweepspace::malloc_ms::global::MallocSpace<VM> as mmtk::policy::space::Space<VM>>::common
at ./repos/mmtk-core/src/policy/marksweepspace/malloc_ms/global.rs:158:9
4: mmtk::policy::space::Space::get_descriptor
at ./repos/mmtk-core/src/policy/space.rs:332:9
5: mmtk::mmtk::MMTK<VM>::aggregate_live_bytes_in_last_gc::{{closure}}
at ./repos/mmtk-core/src/mmtk.rs:542:29
6: <mmtk::plan::marksweep::global::MarkSweep<VM> as mmtk::plan::global::HasSpaces>::for_each_space
at ./repos/mmtk-core/src/plan/marksweep/global.rs:31:10
7: mmtk::mmtk::MMTK<VM>::aggregate_live_bytes_in_last_gc
at ./repos/mmtk-core/src/mmtk.rs:540:9
8: <mmtk::scheduler::gc_work::Release<C> as mmtk::scheduler::work::GCWork<<C as mmtk::scheduler::work::GCWorkContext>::VM>>::do_work
at ./repos/mmtk-core/src/scheduler/gc_work.rs:162:13
9: mmtk::scheduler::work::GCWork::do_work_with_stat
at ./repos/mmtk-core/src/scheduler/work.rs:45:9
10: mmtk::scheduler::worker::GCWorker<VM>::run
at ./repos/mmtk-core/src/scheduler/worker.rs:255:13
11: mmtk::memory_manager::start_worker
at ./repos/mmtk-core/src/memory_manager.rs:491:5
12: start_worker
at ./mmtk/src/api.rs:214:9
13: _ZN6Thread8call_runEv
at ./repos/openjdk/src/hotspot/share/runtime/thread.cpp:402:12
14: thread_native_entry
at ./repos/openjdk/src/hotspot/os/linux/os_linux.cpp:826:19
15: <unknown>
16: <unknown>
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
fatal runtime error: failed to initiate panic, error 5
```
The issue is that `aggregate_live_bytes_in_last_gc` was not guarded by
a check that the option `count_live_bytes_in_gc` is enabled, so it was
executed unconditionally in our tests.
The function accesses the space descriptor through `CommonSpace`, but
`MallocSpace` does not use `CommonSpace`, hence the panic. This PR
adds a check before calling `aggregate_live_bytes_in_last_gc`: when the
option is not enabled, we do not call the function (see the sketch below).
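A minimal sketch of the guard, assuming a simplified `Options`/`Mmtk` shape. The field and method names mirror the ones discussed above, but the surrounding structure is illustrative, not the actual mmtk-core code:

```rust
// Illustrative sketch only: the real mmtk-core types are more involved.
struct Options {
    count_live_bytes_in_gc: bool, // the option discussed above
}

struct Mmtk {
    options: Options,
}

impl Mmtk {
    // Walks every space and reads its descriptor via `CommonSpace`;
    // calling this with a `MallocSpace` present is what panicked.
    fn aggregate_live_bytes_in_last_gc(&self) {
        /* aggregation elided */
    }

    // The fix: only aggregate when the option is enabled.
    fn on_release(&self) {
        if self.options.count_live_bytes_in_gc {
            self.aggregate_live_bytes_in_last_gc();
        }
    }
}

fn main() {
    let mmtk = Mmtk { options: Options { count_live_bytes_in_gc: false } };
    mmtk.on_release(); // option disabled: no aggregation, no panic
}
```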
This PR also adds a panic for `MallocSpace`: if `count_live_bytes_in_gc`
is turned on, we simply panic, as we cannot provide live-bytes versus
total-pages statistics for `MallocSpace`.
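A sketch of that early panic, again with an assumed standalone shape (the helper name below is hypothetical) rather than the actual mmtk-core code:

```rust
// Hypothetical helper: reject the unsupported option combination up front,
// instead of hitting `unreachable!()` deep inside `Space::common`.
fn check_count_live_bytes_supported(count_live_bytes_in_gc: bool, is_malloc_space: bool) {
    if count_live_bytes_in_gc && is_malloc_space {
        panic!("count_live_bytes_in_gc is not supported for MallocSpace");
    }
}

fn main() {
    check_count_live_bytes_supported(false, true); // fine: option disabled
    // check_count_live_bytes_supported(true, true); // would panic with a clear message
}
```

Failing fast with an explicit message turns the confusing `internal error: entered unreachable code` into a diagnosable configuration error.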