A hands-on learning environment for exploring LLM CLI tools with practical examples and exercises.
- LLM CLI Documentation
- ttok - Token counting tool
- strip-tags - HTML cleanup
- files-to-prompt - File content extraction
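These helper tools compose naturally with the llm command on the command line. A minimal sketch of a typical pipeline (the URL, model, and prompts are placeholders, not part of this workspace):

#+begin_src shell
# Fetch a page, strip the HTML, and check how many tokens remain.
curl -s https://example.com/article.html | strip-tags | ttok

# Same pipeline, but send the cleaned text to a model for summarizing.
curl -s https://example.com/article.html \
  | strip-tags \
  | llm -s "Summarize this article in three bullet points"

# Bundle a directory of source files into one prompt-friendly document.
files-to-prompt src/ | llm -s "Explain what this code does"
#+end_src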
- Clone the repository
- Run make init to set up your environment
- Run make check-env to verify your setup
- Run make essential-examples to process the introductory examples
- See the examples/ directory to begin learning
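In a terminal, the quick start looks like this (the clone URL is a placeholder):

#+begin_src shell
git clone <repository-url>
cd <repository-directory>
make init                 # set up the environment
make check-env            # verify the setup
make essential-examples   # process the introductory examples
#+end_src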
- Claude 3 models via llm-claude-3
- Google’s Gemini models
- Local models via Ollama
- AWS Bedrock integration
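Additional model back-ends are installed as llm plugins. A hedged sketch of how that usually looks (apart from llm-claude-3, the plugin names below are common examples; run llm plugins and llm models to see what is actually installed on your machine):

#+begin_src shell
llm install llm-claude-3     # Claude 3 models, as noted above
llm keys set claude          # store the API key the plugin expects
llm models                   # list every model now available

# Other back-ends follow the same install-then-use pattern:
llm install llm-gemini
llm install llm-ollama
#+end_src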
Each example is self-contained, but later examples build on ideas introduced in earlier ones.
These are the core examples to get started (illustrative command sketches for both follow the list):
- Getting Started - Basic LLM usage and setup
  - First commands and responses
  - Understanding the environment
  - Working with models
  - Practice: Basic prompts and responses
- Templates - Working with system prompts
  - Creating custom templates
  - Using built-in templates
  - Template best practices
  - Practice: Create and use custom templates
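For Getting Started, the first session typically amounts to a handful of commands (the key name and model ID are examples; use whatever llm models reports on your machine):

#+begin_src shell
llm keys set openai                          # one-time API key setup
llm "Three interesting facts about SQLite"   # first prompt, default model
llm models                                   # list the available models
llm models default                           # show (or change) the default model
llm -m gpt-4o-mini "Answer the same question with a specific model"
llm logs -n 1                                # review the most recent logged response
#+end_src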
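For Templates, creating and reusing a custom system prompt might look like this (the template name summarize is only an example):

#+begin_src shell
# Save a system prompt as a reusable template.
llm --system "Summarize the input in three concise bullet points" --save summarize

llm templates list                                     # confirm the template is registered
cat examples/51-sqlite-queries.org | llm -t summarize  # apply the template to piped input
llm templates edit summarize                           # refine the template later
#+end_src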
After completing the essential examples, explore these advanced topics (illustrative command sketches for several of them follow the list):
- Agents - Specialized roles and interactions
  - Agent types and purposes
  - Multi-agent conversations
  - Agent collaboration patterns
  - Template best practices
- Context Management - Managing conversations
  - Maintaining context
  - Structured interactions
  - Memory handling
- Embeddings Introduction - Vector representations
  - Understanding embeddings
  - Basic vector operations
  - Similarity searches
- Photo Embeddings - Working with images
  - Image analysis
  - Semantic search
  - Visual relationships
- Advanced Usage - Complex workflows
  - Integration patterns
  - Custom solutions
  - Best practices
- Ollama Models - Local model usage
  - Setting up Ollama
  - Model management
  - Performance considerations
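For the Agents example, one simple pattern is to chain llm calls that carry different system prompts, so that one "agent" reviews another's output. A minimal sketch (the prompts and file name are illustrative):

#+begin_src shell
#!/usr/bin/env bash
# A "writer" agent drafts some text...
draft=$(llm -s "You are a concise technical writer" \
            "Draft a short README section about counting tokens with ttok")

# ...and a "reviewer" agent critiques the draft.
echo "$draft" \
  | llm -s "You are a critical editor; list concrete improvements" \
  > review.txt

cat review.txt
#+end_src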
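For Context Management, llm logs every exchange, so a conversation can be carried on either interactively or by continuing the most recent one:

#+begin_src shell
llm chat                           # interactive session that keeps context between turns
llm "Name three uses for text embeddings"
llm -c "Expand on the second one"  # -c / --continue resumes the most recent conversation
llm logs -n 2                      # inspect what was just logged
#+end_src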
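For the two embeddings examples, the llm embed family handles text and the llm-clip plugin extends it to images. A hedged sketch (the collection names, directories, globs, and embedding model IDs are assumptions; run llm embed-models to see what is available):

#+begin_src shell
# Text: embed every Markdown file under docs/ into a collection, then query it.
llm embed-multi notes -m 3-small --files docs '*.md'
llm similar notes -c "How do I register a template?"

# Images: llm-clip embeds photos so they can be searched with text queries.
llm install llm-clip
llm embed-multi photos --files photos '*.jpg' --binary -m clip
llm similar photos -c "a dog on a beach"
#+end_src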
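For the Ollama example, models are pulled with Ollama itself and then exposed to llm through the llm-ollama plugin (the model tag is an example; use whatever ollama list shows):

#+begin_src shell
# Ollama must already be installed and its server running.
ollama pull llama3.2            # download a local model (example tag)
ollama list                     # confirm what is available locally

llm install llm-ollama          # expose Ollama models to the llm CLI
llm models | grep -i llama      # the local models should now appear
llm -m llama3.2 "Explain embeddings in one paragraph"
#+end_src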
The workspace is organized for easy navigation:
- examples/ - Step-by-step learning materials
  - 51-sqlite-queries.org - SQLite analytics for LLM logs
- src/ - Tangled code from examples
- templates/ - Analysis templates and frameworks
  - sin-framework.md - System analysis framework
  - sin-execution-plan.md - Implementation planning
  - sin-execute-and-document.md - Results documentation
- src/sql/ - Organized SQL queries for analysis
  - advanced/ - Complex analytics queries
  - basic/ - Basic usage statistics
  - cost/ - Token cost analysis
  - usage/ - Usage pattern analysis
- scripts/ - Utility scripts
  - register-sin.sh - SIN template registration
- prompts/ - Example system prompts
- docs/ - Additional guides and references
- data/ - Your working directory for outputs
Comprehensive SQLite queries for analyzing LLM usage logs:
- Conversation counts and trends
- Model usage statistics
- Temporal analysis
- Response time analysis
- Token usage patterns
- Full-text search capabilities
- Token usage tracking
- Cost estimation by model
- Usage optimization insights
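A typical session starts from the log database that llm maintains. A hedged sketch (the table and column names, and the .sql file name, are assumptions to verify against the schema on your machine):

#+begin_src shell
db="$(llm logs path)"            # location of the llm CLI's SQLite log database

sqlite3 "$db" '.tables'          # see which tables exist
sqlite3 "$db" '.schema responses'

# Ad-hoc example: responses per model (verify column names against the schema above).
sqlite3 "$db" 'SELECT model, COUNT(*) AS responses
               FROM responses
               GROUP BY model
               ORDER BY responses DESC;'

# Run one of the prepared queries from this repository (file name hypothetical).
sqlite3 "$db" < src/sql/basic/usage-statistics.sql
#+end_src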
The Structured Intelligence Network (SIN) provides a systematic approach to LLM analysis:
- The analysis framework (templates/sin-framework.md) defines:
  - Analysis categories and metrics
  - Data collection methods
  - Evaluation criteria
  - Implementation steps
  - Reporting structure
- The execution plan (templates/sin-execution-plan.md) covers:
  - Implementation schedule
  - Data collection plan
  - Analysis procedures
  - Resource allocation
  - Risk management
- The results documentation (templates/sin-execute-and-document.md) captures:
  - Executive summary
  - Analysis results
  - Technical details
  - Recommendations
  - Next steps
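In practice, registration and use might look like the sketch below, assuming register-sin.sh simply saves each framework document as an llm template (the template name sin-framework and the analysis prompt are assumptions):

#+begin_src shell
./scripts/register-sin.sh        # register the SIN templates (see scripts/)

# Roughly what such a registration does for one document, done by hand:
llm --system "$(cat templates/sin-framework.md)" --save sin-framework

# Drive an analysis through the registered template:
llm -t sin-framework "Review last week's LLM usage logs and flag anomalies"
#+end_src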
- Check the example documentation
- Review the LLM CLI docs
- See CONTRIBUTING.org for development details