OnIt — the AI is working on the given task and will deliver the results shortly.
OnIt is an intelligent agent for task automation and assistance. It connects to any OpenAI-compatible LLM (private vLLM servers or OpenRouter.ai) and uses MCP tools for web search, file operations, and more. It also supports the A2A protocol for multi-agent communication.
```bash
pip install onit
```

Or from source:
```bash
git clone https://github.com/sibyl-oracles/onit.git
cd onit
pip install -e ".[all]"
```

```bash
onit setup
```

The setup wizard walks you through configuring your LLM endpoint, API keys, and preferences. Secrets are stored securely in your OS keychain. Settings are saved to `~/.onit/config.yaml`.
To review your configuration at any time:
```bash
onit setup --show
```

```bash
onit
```

That's it. MCP tools start automatically, and you get an interactive chat with tool access.
```bash
onit --web                         # Web UI on port 9000
onit --gateway                     # Telegram/Viber bot gateway
onit --a2a                         # A2A server on port 9001
onit --client --task "your task"   # Send a task to an A2A server
```

`onit setup` is the recommended way to configure OnIt. It stores:
- Settings in `~/.onit/config.yaml` (LLM endpoint, theme, ports, timeout)
- Secrets in your OS keychain (API keys, bot tokens)
You can also use environment variables or a project-level YAML config:
```bash
# Environment variables
export ONIT_HOST=https://openrouter.ai/api/v1
export OPENROUTER_API_KEY=sk-or-v1-...

# Or a custom config file
onit --config configs/default.yaml
```

Priority order: CLI flags > environment variables > `~/.onit/config.yaml` > project config file.
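The priority order above amounts to a layered merge where higher-priority sources overwrite lower-priority ones. A minimal sketch of the idea (illustrative only; this is not OnIt's actual implementation, and the option names are made up for the example):

```python
def resolve_config(cli_flags: dict, env_vars: dict,
                   user_config: dict, project_config: dict) -> dict:
    """Merge config layers so that CLI flags win over env vars,
    env vars over ~/.onit/config.yaml, and that over the project config."""
    merged = {}
    # Apply lowest priority first; each later layer overwrites earlier ones.
    for layer in (project_config, user_config, env_vars, cli_flags):
        merged.update({k: v for k, v in layer.items() if v is not None})
    return merged
```

For example, a `host` given on the command line shadows one from the environment, while settings defined only in a config file still apply.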
```yaml
serving:
  host: https://openrouter.ai/api/v1
  host_key: sk-or-v1-your-key-here  # or set OPENROUTER_API_KEY env var
  # model: auto-detected from endpoint. Set explicitly for OpenRouter:
  # model: google/gemini-2.5-pro
  think: true
  max_tokens: 262144
  verbose: false
  timeout: 600
web: false
web_port: 9000
mcp:
  servers:
    - name: PromptsMCPServer
      url: http://127.0.0.1:18200/sse
      enabled: true
    - name: ToolsMCPServer
      url: http://127.0.0.1:18201/sse
      enabled: true
```

General:
| Flag | Description | Default |
|---|---|---|
| `--config` | Path to YAML configuration file | `configs/default.yaml` |
| `--host` | LLM serving host URL | — |
| `--verbose` | Enable verbose logging | `false` |
| `--timeout` | Request timeout in seconds (`-1` = none) | `600` |
| `--template-path` | Path to custom prompt template YAML file | — |
| `--documents-path` | Path to local documents directory | — |
| `--topic` | Default topic context (e.g. "machine learning") | — |
| `--prompt-intro` | Custom system prompt intro | — |
| `--no-stream` | Disable token streaming | `false` |
| `--think` | Enable thinking/reasoning mode (CoT) | `false` |
Text UI:
| Flag | Description | Default |
|---|---|---|
| `--text-theme` | Text UI theme (`white` or `dark`) | `dark` |
| `--show-logs` | Show execution logs | `false` |
Web UI:
| Flag | Description | Default |
|---|---|---|
| `--web` | Launch Gradio web UI | `false` |
| `--web-port` | Gradio web UI port | `9000` |
Gateway (Telegram / Viber):
| Flag | Description | Default |
|---|---|---|
| `--gateway` | Auto-detect gateway (Telegram or Viber based on env vars) | — |
| `--gateway telegram` | Run as a Telegram bot | — |
| `--gateway viber` | Run as a Viber bot | — |
| `--viber-webhook-url` | Public HTTPS URL for Viber webhook | — |
| `--viber-port` | Local port for Viber webhook server | `8443` |
A2A (Agent-to-Agent):
| Flag | Description | Default |
|---|---|---|
| `--a2a` | Run as an A2A protocol server | `false` |
| `--a2a-port` | A2A server port | `9001` |
| `--client` | Send a task to a remote A2A server | `false` |
| `--a2a-host` | A2A server URL for client mode | `http://localhost:9001` |
| `--task` | Task string for A2A client or loop mode | — |
| `--file` | File to upload with the task | — |
| `--image` | Image file for vision processing | — |
| `--loop` | Enable A2A loop mode | `false` |
| `--period` | Seconds between loop iterations | `10` |
MCP (Model Context Protocol):
| Flag | Description | Default |
|---|---|---|
| `--mcp-host` | Override host/IP in all MCP server URLs | — |
| `--mcp-sse` | URL of an external MCP server (can be repeated) | — |
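A sketch of how these two flags might be handled (illustrative, not OnIt's actual code): `--mcp-sse` accumulates repeated values, and `--mcp-host` swaps the hostname in each configured server URL while keeping the scheme, port, and path intact.

```python
import argparse
from urllib.parse import urlsplit, urlunsplit

parser = argparse.ArgumentParser()
parser.add_argument("--mcp-host", help="Override host/IP in all MCP server URLs")
parser.add_argument("--mcp-sse", action="append", default=[],
                    help="External MCP server URL (repeatable)")

def override_host(url: str, new_host: str) -> str:
    """Replace the hostname in `url`, preserving scheme, port, and path."""
    parts = urlsplit(url)
    port = f":{parts.port}" if parts.port else ""
    return urlunsplit((parts.scheme, f"{new_host}{port}", parts.path,
                       parts.query, parts.fragment))
```

With this, `--mcp-host 10.0.0.5` would turn `http://127.0.0.1:18200/sse` into `http://10.0.0.5:18200/sse` without touching the port or path.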
Rich terminal UI with input history, theming, and execution logs. Press Enter or Ctrl+C to interrupt any running task.
Gradio-based browser interface with file upload and real-time streaming:
```bash
onit --web
```

Supports optional Google OAuth2 authentication — see docs/WEB_AUTHENTICATION.md.
MCP servers start automatically. Tools are auto-discovered and available to the agent.
| Server | Description |
|---|---|
| PromptsMCPServer | Prompt templates for instruction generation |
| ToolsMCPServer | Web search, bash commands, file operations, and document tools |
Connect to additional external MCP servers:
```bash
onit --mcp-sse http://localhost:8080/sse --mcp-sse http://192.168.1.50:9090/sse
```

Chat with OnIt from Telegram or Viber. Configure bot tokens via `onit setup` or environment variables, then:
```bash
onit --gateway telegram
onit --gateway viber --viber-webhook-url https://your-domain.com/viber
```

Install the gateway dependency separately if not using `[all]`:

```bash
pip install "onit[gateway]"
```

Run OnIt as an A2A server so other agents can send tasks:
```bash
onit --a2a
```

The agent card is available at http://localhost:9001/.well-known/agent.json.
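The `/.well-known/agent.json` path is the A2A convention for discovering an agent's capabilities. A small helper for building that URL and fetching the card (illustrative, not part of OnIt; `fetch_agent_card` assumes a server such as `onit --a2a` is already running):

```python
import json
from urllib.parse import urljoin
from urllib.request import urlopen

def agent_card_url(host: str) -> str:
    """A2A agent cards live at a well-known path on the server root."""
    return urljoin(host, "/.well-known/agent.json")

def fetch_agent_card(host: str) -> dict:
    # Requires a reachable A2A server; raises URLError otherwise.
    with urlopen(agent_card_url(host)) as resp:
        return json.load(resp)
```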
Send a task via CLI:
```bash
onit --client --task "what is the weather in Manila"
onit --client --task "describe this" --image photo.jpg
```

Send a task via Python (A2A SDK):
```python
import asyncio

from a2a.client import ClientFactory, create_text_message_object
from a2a.types import Role

async def main():
    client = await ClientFactory.connect("http://localhost:9001")
    message = create_text_message_object(role=Role.user, content="What is the weather?")
    async for event in client.send_message(message):
        print(event)

asyncio.run(main())
```

Repeat a task on a configurable timer (useful for monitoring):
```bash
onit --loop --task "Check the weather in Manila" --period 60
```

To use a custom prompt template:

```bash
onit --template-path my_template.yaml
```

See example templates in src/mcp/prompts/prompt_templates/.
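Loop mode, shown above, is essentially a fixed-interval scheduler: run the task, wait `--period` seconds, repeat. A minimal sketch of the idea (not OnIt's actual code; the `iterations` cap exists only to make the example finite):

```python
import time
from typing import Callable, Optional

def run_loop(task: Callable[[], None], period_s: float,
             iterations: Optional[int] = None) -> None:
    """Call `task()` every `period_s` seconds; `iterations=None` loops forever."""
    count = 0
    while iterations is None or count < iterations:
        task()
        count += 1
        # Sleep only between iterations, not after the last one.
        if iterations is None or count < iterations:
            time.sleep(period_s)
```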
Serve models locally with vLLM:
```bash
CUDA_VISIBLE_DEVICES=0,1,2,3 vllm serve Qwen/Qwen3-30B-A3B-Instruct-2507 \
  --max-model-len 262144 --port 8000 \
  --enable-auto-tool-choice --tool-call-parser hermes \
  --reasoning-parser qwen3 --tensor-parallel-size 4 \
  --chat-template-content-format string
```

```bash
onit --host http://localhost:8000/v1
```

OpenRouter gives access to models from OpenAI, Google, Meta, Anthropic, and others through a single API.
```bash
onit --host https://openrouter.ai/api/v1
```

Browse available models at openrouter.ai/models.
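Both backends expose the same OpenAI-compatible chat completions API, which is why OnIt only needs a `--host` to switch between them. A sketch of the request such a client sends (payload construction only, no network call; the model name is an example):

```python
def chat_request(host: str, model: str, prompt: str,
                 max_tokens: int = 262144) -> dict:
    """Build an OpenAI-compatible /chat/completions request for `host`."""
    return {
        "url": f"{host.rstrip('/')}/chat/completions",
        "body": {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
            "max_tokens": max_tokens,
            "stream": True,  # streaming is the default unless --no-stream is set
        },
    }

req = chat_request("https://openrouter.ai/api/v1", "google/gemini-2.5-pro", "Hello")
```

The same payload works against a local vLLM server by changing `host` to `http://localhost:8000/v1`.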
```
┌─────────────────────────────────────────────────────────────┐
│                          onit CLI                           │
│                  (argparse + YAML config)                   │
└──────────────────────────────┬──────────────────────────────┘
                               │
                               ▼
┌─────────────────────────────────────────────────────────────┐
│                     OnIt (src/onit.py)                      │
│                                                             │
│ ┌─────────┐ ┌──────────┐ ┌──────────┐ ┌────────┐ ┌──────┐   │
│ │ ChatUI  │ │ WebChatUI│ │ Telegram │ │ Viber  │ │ A2A  │   │
│ │(terminal│ │ (Gradio) │ │ Gateway  │ │Gateway │ │Server│   │
│ └────┬────┘ └────┬─────┘ └────┬─────┘ └───┬────┘ └──┬───┘   │
│      └───────────┴─────┬──────┴───────────┴─────────┘       │
│                        ▼                                    │
│           client_to_agent() / process_task()                │
│                        │                                    │
│                        ▼                                    │
│            MCP Prompt Engineering (FastMCP)                 │
│                        │                                    │
│                        ▼                                    │
│      chat() ◄───────────────────── Tool Registry            │
│   (vLLM / OpenRouter)             (auto-discovered)         │
└──────────────────────────────┬──────────────────────────────┘
                               │
                  ┌────────────┼────────────┐
                  ▼            ▼            ▼
           ┌───────────┐ ┌──────────┐ ┌──────────┐
           │  Prompts  │ │  Tools   │ │ External │ ...
           │ MCP Server│ │MCP Server│ │MCP (SSE) │
           └───────────┘ └──────────┘ └──────────┘
```
```
onit/
├── configs/
│   └── default.yaml        # Agent configuration
├── pyproject.toml          # Package configuration
├── src/
│   ├── cli.py              # CLI entry point
│   ├── setup.py            # Setup wizard (onit setup)
│   ├── onit.py             # Core agent class
│   ├── lib/
│   │   ├── text.py         # Text utilities
│   │   └── tools.py        # MCP tool discovery
│   ├── mcp/
│   │   ├── prompts/        # Prompt engineering (FastMCP)
│   │   └── servers/        # MCP servers (tools, web, bash, filesystem)
│   ├── model/
│   │   └── serving/
│   │       └── chat.py     # LLM interface (vLLM + OpenRouter)
│   ├── ui/
│   │   ├── text.py         # Rich terminal UI
│   │   ├── web.py          # Gradio web UI
│   │   ├── telegram.py     # Telegram bot gateway
│   │   └── viber.py        # Viber bot gateway
│   └── test/               # Test suite (pytest)
```
- Gateway Quick Start — Telegram and Viber bot setup
- Testing — Running the test suite
- Docker — Docker and Docker Compose setup
- Web Authentication — Web UI authentication reference
- Web Deployment — Production deployment with HTTP/HTTPS
Apache License 2.0. See LICENSE for details.