These steps align with the README’s Quick Start section.
First, ensure the environment is set up:
cd .. && make init | tee .../data/init.log | head | tr -cd '[:print:]\n'
Verify the environment configuration:
cd .. && make check-env | tr -cd '[:print:]\n'
Set up the default model:
llm models default llama3.2
Let’s start with a simple prompt example:
llm "Write a haiku about debugging"
View the available models:
llm models list | head -n 5
First, list the available templates:
llm templates list | head
Register a basic Python template (note: this block is marked :llm nil since it is a one-time setup step):
llm --system "Write Python code" --save python-template
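Once saved, the template can be invoked by name with -t. A minimal sketch follows; the command-v guard is an assumption added here so the example degrades gracefully on machines where the llm CLI is not installed:

```shell
# Use the saved python-template by name; skip cleanly if the llm CLI is absent.
if command -v llm >/dev/null 2>&1; then
  llm -t python-template "Write a function that reverses a string"
else
  echo "llm not installed; skipping"
fi
```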
Here’s an example using the session agent template.
llm -t session-agent "Reviewing LLM commands" 2>&1 | head -n 15
Inspect the ten most recent log entries:
llm logs -n 10
Count all logged responses by model (-n 0 returns every entry):
llm logs --json -n 0 | jq -r '.[]|.model' | sort | uniq -c | sort -rn
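The count-and-rank idiom at the end of that pipeline can be tried on canned data without an llm install. A minimal sketch, using hypothetical model names in place of the jq output:

```shell
# Hypothetical model names standing in for the `jq -r '.[]|.model'` output;
# sort groups duplicates, uniq -c counts them, sort -rn ranks by count.
printf 'llama3.2\ngpt-4o\nllama3.2\n' | sort | uniq -c | sort -rn
```

The most frequently used model sorts to the top of the output.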
The following commands require user interaction and are not suitable for automated execution:
llm chat -m llama3.2
git diff --staged | llm -t commit