implement --file-parallelism flag for running test files in parallel #28715
elliots wants to merge 6 commits into oven-sh:main
Conversation
(oven-sh#22817) Add a `--file-parallelism <N>` CLI flag to `bun test` that runs multiple test files concurrently via the event loop within the same process. Tests within each file still run serially, but multiple files progress concurrently.
Changes:
- Add `--file-parallelism <N>` to CLI args and TestOptions (default: 1)
- Add `BunTestRoot.detachFile()` to support the parallel file lifecycle
- Update `onBeforePrint()` to handle file-switching for interleaved output
- Add `current_file_id` tracking to TestRunner for correct header printing
- Implement the parallel execution path in `runAllTests` using the event loop
- Add tests for the new flag
Walkthrough: Adds file-level parallelism to the test runner.
Actionable comments posted: 5
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/cli/test_command.zig`:
- Around lines 1895-1918: The loop runs multiple files concurrently but uses the global `Jest.runner.default_timeout_override` via `setDefaultTimeout()`, causing cross-file interference. Store and reset the timeout on the per-file `TestRunner` instance instead of the global `Jest.runner`: have `loadAndStartFile` accept/return the per-file `TestRunner` (or a timeout field), make `setDefaultTimeout()` write to `TestRunner.default_timeout_override` rather than `Jest.runner.default_timeout_override`, and reset that timeout on the same `TestRunner` instance when the file completes. Also stop using the global `Jest.runner.default_timeout_override` in `TestRunner` code paths, to prevent shared state between parallel files.
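The per-runner state the comment asks for can be modeled as follows (a sketch with assumed names — the real code is Zig): each file's runner owns its timeout override, so `jest.setTimeout()` in one file cannot leak into another file running concurrently.

```typescript
// Per-runner timeout override instead of a shared global.
class TestRunner {
  defaultTimeoutOverride: number | null = null;

  // Analogous to jest.setTimeout(ms) inside one test file.
  setDefaultTimeout(ms: number): void {
    this.defaultTimeoutOverride = ms; // scoped to this runner only
  }

  // Resolve the timeout for a test, falling back to the CLI default.
  effectiveTimeout(fallbackMs: number): number {
    return this.defaultTimeoutOverride ?? fallbackMs;
  }
}

const fileA = new TestRunner();
const fileB = new TestRunner();
fileA.setDefaultTimeout(50); // file A overrides its timeout
console.log(fileA.effectiveTimeout(5000)); // 50
console.log(fileB.effectiveTimeout(5000)); // 5000: file B is unaffected
```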
- Around lines 1903-1916: The `catch` after `loadAndStartFile(...)` swallows every error and only continues, which hides real startup failures. Change the handler so it continues only when `err == error.ModuleNotFound` (already reported via `unhandledRejection`); any other error should propagate (via `return err` or `try`) instead of being silently dropped. Keep the optional debug `Output.debugWarn` path, but rethrow non-`ModuleNotFound` errors after logging so failures in functions like `resolveEntryPoint`/`clearEntryPoint` fail fast rather than dropping files.
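The fail-fast pattern being requested, sketched in TypeScript (the real code is Zig error handling; the names and file extensions here are illustrative): swallow only the one expected error, rethrow everything else.

```typescript
// Stand-in for Zig's error.ModuleNotFound.
class ModuleNotFoundError extends Error {}

// Stand-in for loadAndStartFile: may fail for benign or real reasons.
function loadFile(path: string): void {
  if (path.endsWith(".missing")) throw new ModuleNotFoundError(path);
  if (path.endsWith(".broken")) throw new Error(`startup failure in ${path}`);
}

function tryLoadFile(path: string): boolean {
  try {
    loadFile(path);
    return true; // file scheduled
  } catch (err) {
    if (err instanceof ModuleNotFoundError) {
      return false; // already reported elsewhere; skip just this file
    }
    throw err; // real startup failures propagate and fail the run fast
  }
}
```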
- Around lines 1975-2049: The parallel path in `loadAndStartFile` bypasses the `reporter.repeat_count` honored by `run()`. When `should_run_concurrent` is true, schedule the file `repeat_count` times instead of once: wrap the scheduling/start sequence (the block that computes `should_run_concurrent` and calls `bun_test_root.enterFile`, `vm.loadEntryPointForTestRunner`, `buntest.addResult`, and `bun_test.BunTest.run`) in a loop over `reporter.repeat_count`, calling `enterFile`/`exitFile` per iteration or otherwise preserving the per-run header logic, so that `onBeforePrint` still emits the per-run metadata and `current_file` behavior matches the sequential path.
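The shape of that fix, modeled with plain async functions (names are stand-ins for the Zig code paths): the per-file enter/load/run/exit sequence repeats `repeat_count` times, serially within the file, so per-run headers stay correct even while other files progress concurrently.

```typescript
type FileJob = () => Promise<void>;

// Run one file's full sequence repeatCount times, one run after another.
async function scheduleFile(runOnce: FileJob, repeatCount: number): Promise<void> {
  for (let run = 0; run < repeatCount; run++) {
    await runOnce(); // enterFile -> loadEntryPoint -> run -> exitFile
  }
}
```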
In `@test/cli/test/file-parallelism.test.ts`:
- Around lines 8-15: The `createTestDir` helper rolls its own temp-path logic. Replace it with the repo harness's `tempDir` primitive: import `tempDir` from the test harness, call it to create a disposable directory, write the provided files into that directory (using the same write loop), and return the harness-provided path. Remove the `tmpdir()`, `Date.now()`, `Math.random()`, and `mkdirSync` name generation so cleanup is managed by the harness and matches other CLI tests.
- Around lines 31-128: The tests only check outputs and would also pass under sequential execution. Add an overlap-sensitive test (e.g., a new test named "proves files run in parallel" inside the `--file-parallelism` describe) that creates two files such as `sync_a.test.ts` and `sync_b.test.ts` via `createTestDir`, where each test writes its own marker file and then waits (with a timeout) for the other file's marker before finishing. Run it with `runBunTest(cwd, ["--file-parallelism", "2"])` and assert the run completes successfully (exit code 0); sequential execution would deadlock/fail, so success proves true parallelism. Remove the marker files and the temp dir in the `finally` block (using `rmSync`) to avoid flakiness.
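A minimal in-memory model of that handshake (marker names and timings are illustrative; the real test would use marker files on disk): each "test file" publishes its marker, then polls with a deadline for the other file's marker. Run concurrently, both succeed; run one after the other, the first hits its deadline — which is exactly why the test proves parallelism.

```typescript
const markers = new Set<string>();

async function fileBody(mine: string, theirs: string, deadlineMs: number): Promise<boolean> {
  markers.add(mine); // announce this file has started
  const start = Date.now();
  while (!markers.has(theirs)) {
    if (Date.now() - start > deadlineMs) return false; // deadlock under sequential execution
    await new Promise(resolve => setTimeout(resolve, 5)); // yield to the event loop
  }
  return true;
}

// Like --file-parallelism 2: both files progress on the same event loop.
const results = await Promise.all([
  fileBody("sync_a", "sync_b", 1000),
  fileBody("sync_b", "sync_a", 1000),
]);
console.log(results); // [ true, true ]
```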
📒 Files selected for processing (7)
- src/bun.js/ConsoleObject.zig
- src/bun.js/test/bun_test.zig
- src/bun.js/test/jest.zig
- src/cli.zig
- src/cli/Arguments.zig
- src/cli/test_command.zig
- test/cli/test/file-parallelism.test.ts
The catch block in the parallel path was swallowing all errors from loadAndStartFile, silently dropping files on real startup failures. Now only ModuleNotFound (already reported via unhandledRejection) is skipped; other errors propagate to fail fast. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Save the initial default_timeout_override before the parallel loop and restore it before loading each new file and after completing each file. This prevents one file's jest.setTimeout() call from affecting unrelated files running concurrently. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…le-parallelism The parallel path schedules each file once and doesn't support the repeat loop (module cache clearing, re-evaluation) that --rerun-each requires. Fall back to sequential execution when repeat_count > 1. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replace the ad-hoc createTestDir helper with tempDir from the test harness. This uses disposable cleanup (via `using`) and matches the conventions used in other CLI tests. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add a test where two files each write a marker then poll for the other's marker. This would deadlock/timeout under sequential execution but succeeds under --file-parallelism 2, proving files truly run concurrently. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…(#22817)
NOTE: vibe coded mess. I just wanted the tests to run faster.
What does this PR do?
Add --file-parallelism CLI flag to bun test that runs multiple test files concurrently via the event loop within the same process. Tests within each file still run serially, but multiple files progress concurrently.
Changes:
- Add `--file-parallelism <N>` to CLI args and TestOptions (default: 1)
- Add `BunTestRoot.detachFile()` to support the parallel file lifecycle
- Update `onBeforePrint()` to handle file-switching for interleaved output
- Add `current_file_id` tracking to TestRunner for correct header printing
- Implement the parallel execution path in `runAllTests` using the event loop
- Add tests for the new flag
How did you verify your code works?
I compiled on macOS and ran it against my project's tests:
- `--file-parallelism 1`: Ran 868 tests across 63 files. [402.54s]
- `--file-parallelism 2`: Ran 868 tests across 63 files. [274.40s]