This project demonstrates how to build a Next.js application that integrates Together AI and LlamaIndex for AI-powered features. It consists of a frontend built with Next.js and a Node.js backend service that handles the AI operations.
- `frontend/`: Next.js frontend application
- `backend/`: Node.js backend service with Together AI and LlamaIndex integration
- Node.js (LTS version)
- Docker
- Together AI API key
- Restack Engine setup
First, start the Restack Web UI using Docker:

```bash
docker run -d --pull always --name restack -p 5233:5233 -p 6233:6233 -p 7233:7233 ghcr.io/restackio/restack:main
```
- Navigate to the backend directory:

  ```bash
  cd backend
  ```

- Install dependencies:

  ```bash
  npm install
  ```

- Create a `.env` file with your credentials:

  ```env
  TOGETHER_API_KEY=your_together_api_key

  # Optional:
  RESTACK_ENGINE_ID=your_engine_id
  RESTACK_ENGINE_ADDRESS=your_engine_address
  RESTACK_ENGINE_API_KEY=your_engine_api_key
  ```

- Start the backend service:

  ```bash
  npm run dev
  ```
- Navigate to the frontend directory:

  ```bash
  cd frontend
  ```

- Install dependencies:

  ```bash
  npm install
  ```

- (Optional) Create a `.env` file:

  ```env
  # Optional:
  RESTACK_ENGINE_ID=your_engine_id
  RESTACK_ENGINE_ADDRESS=your_engine_address
  RESTACK_ENGINE_API_KEY=your_engine_api_key
  ```

- Start the development server:

  ```bash
  npm run dev
  ```
Visit http://localhost:3000 to see the application.
- Chat Completion Example: Demonstrates basic chat completion using Together AI's Llama models
- LlamaIndex Integration: Shows how to query models using LlamaIndex with Together AI integration
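As a rough sketch of what the chat completion example does under the hood, the snippet below calls Together AI's OpenAI-compatible REST endpoint directly with `fetch`. The model name and prompt are illustrative assumptions — substitute whichever Llama model the backend is configured with:

```typescript
// Sketch of a basic chat completion against Together AI's
// OpenAI-compatible REST endpoint. Model and prompt are assumptions.
const TOGETHER_URL = "https://api.together.xyz/v1/chat/completions";

// Pure helper: build the request payload for a greeting prompt.
export function buildGreetingPayload(name: string) {
  return {
    model: "meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo", // assumed model
    messages: [
      { role: "user", content: `Write a one-sentence greeting for ${name}.` },
    ],
    max_tokens: 64,
  };
}

// Send the request and return the model's reply text.
export async function greet(name: string): Promise<string> {
  const res = await fetch(TOGETHER_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.TOGETHER_API_KEY}`,
    },
    body: JSON.stringify(buildGreetingPayload(name)),
  });
  const data = await res.json();
  return data.choices?.[0]?.message?.content ?? "";
}
```

The actual backend workflows wrap calls like this with retry and rate-limit handling.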
You can deploy both frontend and backend using Docker Compose:
```bash
docker-compose up -d
```
This will:
- Build and start the frontend on port 3000
- Build and start the backend on port 8000
- Set up proper networking between services
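A compose file matching this layout might look like the following sketch; the service names, build contexts, and environment wiring are assumptions based on the description above, so adapt them to the repository's actual `docker-compose.yml`:

```yaml
# Hypothetical docker-compose.yml matching the two-service layout.
services:
  frontend:
    build: ./frontend
    ports:
      - "3000:3000"
    depends_on:
      - backend
  backend:
    build: ./backend
    ports:
      - "8000:8000"
    environment:
      - TOGETHER_API_KEY=${TOGETHER_API_KEY}
```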
To deploy this application on Restack:
- Ensure you have Restack Cloud credentials
- Set up required environment variables
- Run the deployment script:

  ```bash
  node restack_up.mjs
  ```
For detailed deployment information, see the Restack Cloud documentation.
- Next.js 14 application
- Server Actions for workflow triggering
- Tailwind CSS for styling
- Type-safe API integration
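A Server Action that triggers a backend workflow could look like this sketch. The `/api/workflows/chat` path and request shape are hypothetical — the real action depends on how the backend exposes its workflows:

```typescript
"use server";

// Hypothetical Next.js Server Action that triggers a backend workflow
// over HTTP. The endpoint path below is an assumption.
const BACKEND_URL = process.env.BACKEND_URL ?? "http://localhost:8000";

// Pure helper: build the workflow endpoint URL from a base address.
export function workflowUrl(base: string): string {
  return `${base}/api/workflows/chat`;
}

export async function triggerChatWorkflow(prompt: string): Promise<unknown> {
  const res = await fetch(workflowUrl(BACKEND_URL), {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  if (!res.ok) throw new Error(`Workflow trigger failed: ${res.status}`);
  return res.json();
}
```

Because Server Actions run on the server, the backend address never reaches the browser.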
- Node.js backend service
- Together AI integration for LLM operations
- LlamaIndex for enhanced AI capabilities
- Rate-limited API calls (60 RPM)
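A 60 RPM cap like the one above can be enforced with a small sliding-window limiter; the sketch below is illustrative and the backend may use a different mechanism:

```typescript
// Minimal sliding-window rate limiter sketch for a 60 requests-per-minute
// cap. Illustrative only; not necessarily the backend's implementation.
export class RateLimiter {
  private timestamps: number[] = [];

  constructor(
    private limit = 60,        // max calls per window
    private windowMs = 60_000, // window length: one minute
  ) {}

  // Returns true if a call is allowed right now, recording it if so.
  tryAcquire(now: number = Date.now()): boolean {
    // Drop timestamps that have aged out of the window.
    this.timestamps = this.timestamps.filter((t) => now - t < this.windowMs);
    if (this.timestamps.length >= this.limit) return false;
    this.timestamps.push(now);
    return true;
  }
}
```

Callers that get `false` back can queue the request or retry after a delay.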
The application includes two main workflows:
- Chat Completion Basic: Generates greeting and farewell messages using Together AI models
- LlamaIndex Together Simple: Demonstrates LlamaIndex integration with Together AI for complex queries
Restack Web UI showing workflow execution
Feel free to submit issues and enhancement requests.