Speedup your CI
Sergey Bronnikov edited this page Feb 1, 2023
There is an article in Russian about speeding up building and testing in CI. This page lists tools and techniques that help speed things up.
- Use `--depth N` in Git.
- Enable parallel mode in Git:
  - `git config fetch.parallel 0`
  - `git config submodule.fetchJobs 0`
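A minimal sketch of both settings together; the clone URL is a placeholder, and `demo-repo` is a throwaway repository created only so the commands run standalone:

```shell
# Shallow clone: fetch only the most recent commit instead of the full history
# (URL is a placeholder):
#   git clone --depth 1 https://example.com/project.git

# Parallel fetching; a value of 0 lets Git pick a reasonable number of jobs.
git init -q demo-repo && cd demo-repo   # throwaway repo so this runs standalone
git config fetch.parallel 0
git config submodule.fetchJobs 0
git config --get fetch.parallel         # prints: 0
```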
- Ignore paths with non-relevant files:
  - GitHub CI: `paths-ignore` (Documentation)
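As a sketch, a GitHub Actions trigger that skips builds when only documentation changes; the file name and ignored paths below are illustrative:

```yaml
# .github/workflows/build.yml (illustrative fragment)
on:
  push:
    paths-ignore:
      - 'docs/**'
      - '**.md'
```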
- Cache dependencies:
  - GitHub Actions (Documentation)
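On GitHub Actions, caching is typically wired up with the `actions/cache` action; the path and key below are illustrative (a pip cache keyed on `requirements.txt`):

```yaml
# Illustrative step (pip cache); adjust path and key to your ecosystem
- uses: actions/cache@v4
  with:
    path: ~/.cache/pip
    key: ${{ runner.os }}-pip-${{ hashFiles('requirements.txt') }}
```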
- Disable unnecessary package triggers (e.g. `Processing triggers for man-db (2.9.4-2) ...`):
  - Debian: https://wiki.debian.org/DpkgTriggers
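On Debian/Ubuntu CI images, one common way to skip the slow man-db trigger is to remove its auto-update flag file; this is a sketch for a root-capable runner, and `some-package` is a placeholder:

```shell
# man-db's trigger only rebuilds the manual database while this flag file exists
sudo rm -f /var/lib/man-db/auto-update

# Installing without recommended packages also cuts down trigger work
# (package name is a placeholder)
sudo apt-get install -y --no-install-recommends some-package
```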
- Profile the build system:
  - ninjatracing - converts `.ninja_log` files to Chrome's `about:tracing` format.
  - buildbloat - converts Ninja build logs to webtreemap JSON files.
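A sketch of the ninjatracing workflow; the log path is illustrative:

```shell
# Convert a Ninja build log to a Chrome trace (path is illustrative)
ninjatracing /path/to/build/.ninja_log > trace.json
# Then load trace.json in chrome://tracing to see where build time goes
```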
- Use parallel mode:
  - Make: option `-j`
  - Ninja: option `-j`
  - CMake: https://www.kitware.com//cmake-building-with-all-your-cores/
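A minimal demonstration of parallel builds: a throwaway two-target Makefile built with one job per CPU core (`nproc` reports the core count on Linux):

```shell
# Throwaway Makefile with two independent targets
mkdir -p demo && cd demo
printf 'all: a b\na:\n\ttouch a\nb:\n\ttouch b\n' > Makefile

# -j"$(nproc)" runs up to one job per core; Ninja takes the same flag,
# and `cmake --build build -j N` forwards it to the underlying tool
make -j"$(nproc)" all
ls a b   # both targets exist after the parallel build
```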
- Replace Make with Ninja.
- Replace the default linker with Mold: up to 17× faster than GNU gold and 3-5× faster than LLVM lld.
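Two hedged ways to opt into mold, assuming it is installed on the runner:

```shell
# Wrap an existing build without touching the build system
mold -run make -j"$(nproc)"

# Or select it per compiler invocation (clang, and gcc >= 12)
clang -fuse-ld=mold main.c -o main
```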
- Include What You Use - a tool for use with Clang to analyze `#include`s in C and C++ source files.
- Use a compiler cache:
  - ccache - a compiler cache. ccache can speed up your build by up to 30 (!) times. See performance results.
  - sccache - ccache with cloud storage. Supports C, C++, and Rust.
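A sketch of wiring a build to ccache, assuming ccache is installed; the CMake launcher variables are the usual way to do this without editing the build files:

```shell
# Route compiler invocations through ccache
export CC="ccache gcc" CXX="ccache g++"

# Equivalent for CMake-based builds:
#   cmake -DCMAKE_C_COMPILER_LAUNCHER=ccache -DCMAKE_CXX_COMPILER_LAUNCHER=ccache ..

# Inspect cache hit/miss statistics after a build
ccache --show-stats
```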
- Use distributed compilation (2-4× speedup):
  - Goma (C/C++) - a distributed compiler service for open-source projects such as Chromium and Android. A kind of replacement for distcc+ccache. Used by Google.
  - nocc (C/C++) - a distributed C++ compiler. Used by VK.
  - distcc - a fast, free distributed C/C++ compiler.
  - icecream - a distributed compiler with a central scheduler to share build load. Created by SUSE.
- Use distributed execution:
  - gg - the Stanford Builder. Uses lambdas to run self-contained binaries; see the paper From Laptop to Lambda: Outsourcing Everyday Jobs to Thousands of Transient Functional Containers.
  - llama - a tool for running UNIX commands inside AWS Lambda. Its goal is to make it easy to outsource compute-heavy tasks to Lambda, with its enormous available parallelism, from your shell.
- Fail fast
- Parallel execution
- Profile your tests:
  - Python: pytest-profiling
  - Ruby: test-prof
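For example, with the pytest-profiling plugin installed, profiling a test run is one extra flag; the output location is the plugin's default:

```shell
pip install pytest-profiling
pytest --profile        # writes cProfile data under the prof/ directory
pytest --profile-svg    # additionally renders a call graph (needs graphviz)
```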
- Test prioritization
- Test minimization
- Test impact analysis
Copyright © 2014-2025 Sergey Bronnikov.