Thank you for your interest in contributing to Alumnium!
Alumnium is an experimental AI-powered test automation solution that aims to simplify test interactions and assertions. Your contributions can help improve this project.
Before contributing, please review:
- Our README and documentation, which explain Alumnium's vision of creating higher-level abstractions for test automation that simplify web page interactions and strengthen assertion mechanisms.
- Our experimental status—we're in early development and value innovative approaches.
- The core functionality that uses natural language processing to interpret testing commands.
Alumnium is organized as a monorepo with two main packages:
- `packages/typescript/` - Core TypeScript implementation, MCP, and AI servers
- `packages/python/` - Python client implementation
Both packages share the same API and can be developed independently or together.
- Explore the open issues to find tasks matching your interests
- We would be glad to have your help with:
  - Improving test coverage for edge cases.
  - Enhancing documentation and examples.
  - Exploring more natural language prompts for test generation.
  - Reporting usability issues or unexpected test behavior.
  - Creating sample projects using Alumnium.
First, fork and clone the repository:

```shell
git clone https://github.com/your-username/alumnium.git
cd alumnium
```

Then install mise (see the mise documentation for more installation instructions):

```shell
# Universal
curl https://mise.run | sh

# Homebrew
brew install mise
```

Finally, install dependencies for the project:

```shell
mise install
```

Configure access to AI providers as mentioned in docs.
When working on Alumnium:
- Follow the existing code style and patterns in each package.
- Ensure compatibility with Appium, Playwright, and Selenium.
- Document new functionality with clear examples.
- Test your changes thoroughly in the relevant package.
For the Python package:

```shell
cd packages/python

# Quick testing with REPL
uv run python -i demo.py

# Run BDD system tests
TEST_ONLY=behave mise :test/system

# Run pytest system tests
TEST_ONLY=pytest mise :test/system

# Run all system tests
mise :test/system

# Run system tests with specific driver
mise :test/system:selenium
mise :test/system:playwright
mise :test/system:appium-ios
mise :test/system:appium-android

# Run unit tests
mise :test/unit

# Format code
mise :format

# Check types
mise :types

# Run linter
mise :lint
```

For the TypeScript package:

```shell
cd packages/typescript

# Run system tests
mise :test/system

# Run system tests with specific driver
mise :test/system:selenium
mise :test/system:playwright
mise :test/system:appium-ios
mise :test/system:appium-android

# Run unit tests
mise :test/unit

# Format code
mise :format

# Check types
mise :types

# Run linter
mise :lint
```

From the root directory, you can use mise commands:
```shell
# Run system tests for all packages
mise :test/system

# Run unit tests for all packages
mise :test/unit

# Format code for all packages
mise :format

# Check types for all packages
mise :types
```

For local development, you may need to configure the following environment variables:
| Variable Name | Description | Default Value |
|---|---|---|
| `ALUMNIUM_DRIVER` | Driver to use for tests (`selenium`, `playwright`, `appium`) | `selenium` |
| `ALUMNIUM_MODEL` | AI model provider (`anthropic`, `openai`, `google`, etc.) | `openai` |
| `ALUMNIUM_LOG_PATH` | Path to the Alumnium log directory | `stdout` (logs to console) |
| `ALUMNIUM_LOG_LEVEL` | Log level | `WARNING` |
| `ALUMNIUM_CACHE` | Cache provider, or disable it | `filesystem` |
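For example, these variables can be exported in your shell before running tests. The values below are purely illustrative, not recommendations:

```shell
# Illustrative local configuration (example values, not defaults)
export ALUMNIUM_DRIVER=playwright        # run system tests against Playwright
export ALUMNIUM_MODEL=anthropic          # use Anthropic as the AI provider
export ALUMNIUM_LOG_PATH=/tmp/alumnium   # write logs to a directory instead of stdout
export ALUMNIUM_LOG_LEVEL=DEBUG          # more verbose than the WARNING default
export ALUMNIUM_CACHE=filesystem         # keep the default cache provider
```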
- Create a focused branch for your contribution.
- Write meaningful commit messages explaining your changes. We use the Conventional Commits format.
- Include tests that verify your contribution works as expected.
- Update documentation if you're adding or changing features.
- Maintain API parity - If adding features to one package, consider implementing them in both Python and TypeScript.
- Submit your PR with a clear description of what it accomplishes.
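As a hypothetical illustration of the Conventional Commits format (the type, scope, and description below are made up for the example):

```
feat(python): support custom timeouts in system tests

Allows overriding the default wait used by driver actions.
```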
As contributors to an AI-powered testing tool, we value:
- Natural language over rigid syntax: Tests should be readable by non-technical stakeholders.
- Adaptability over brittleness: Tests should withstand UI changes.
- Intent over implementation: Focus on what should happen, not how it happens.
- Context awareness: Testing tools should understand the application under test.
- Be respectful and constructive in all interactions. See the Code of Conduct for more details.
- Share knowledge generously—we're all learning in this emerging field.
- Value diverse perspectives—they lead to more robust solutions.
- Ask questions when unclear—clarity benefits everyone.
If you're new to open-source or AI-powered testing:
- Try running the demo and experimenting with the Alumnium API. Use the REPL (`uv run python -i demo.py`) to explore functionality.
- Start with documentation improvements or simple bug fixes. Check out the "good first issue" label.
- Ask questions on GitHub, Discord or Slack.
All contributors will be acknowledged in our releases and documentation. As an experimental project on the cutting edge of testing technology, your contributions here represent pioneering work in the field.
Thank you for joining us in paving the road towards AI-powered test automation. Together, we can create more intuitive, maintainable, and powerful testing experiences.