LMStudio lets you run powerful language models locally on your own machine. Here's a step-by-step guide to setting it up with ComfyUI-Copilot.

First, install LMStudio:
- Download LMStudio from https://lmstudio.ai/
- Install the application on your computer
- Launch LMStudio

Next, download a model:
- Go to the "Discover" tab in LMStudio
- Search for a model. Recommended options:
  - Llama 3.1 8B Instruct (good balance of speed and quality)
  - CodeLlama 7B (excellent for code-related tasks)
  - Mistral 7B Instruct (fast and efficient)
  - Phi-3.5 Mini (very fast, smaller model)
- Click "Download" for your chosen model
- Go to the "Developer" tab in LMStudio
- Select your downloaded model from the dropdown
- Click "Start Server"
- Note the server URL (usually `http://localhost:1234`)
  - If you changed the port in LMStudio, use that port number instead
  - You can confirm the server is reachable with the sketch below
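
Before moving on, you can verify the server is actually up. Here is a minimal sketch using only the Python standard library, assuming the default port; it queries LMStudio's OpenAI-compatible `/v1/models` endpoint, which lists the models the server has loaded:

```python
import json
import urllib.request

# LMStudio exposes an OpenAI-compatible API; /v1/models lists loaded models.
# Change the port below if you picked a different one in LMStudio.
URL = "http://localhost:1234/v1/models"

with urllib.request.urlopen(URL, timeout=5) as resp:
    data = json.load(resp)

for model in data.get("data", []):
    print(model["id"])
```

If this prints at least one model id, the server side is ready.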

With the server running, configure ComfyUI-Copilot:
- Open ComfyUI-Copilot in your browser
- Click the settings/configuration button (usually in the top-right area)
- Look for "LLM Configuration" or "API Configuration" section
- Expand the configuration panel if it's collapsed
- Configure the settings:
  - API Key: Leave this completely empty (LMStudio doesn't need one)
  - Server URL: Enter `http://localhost:1234/v1` (or your custom port)
    - Always add `/v1` at the end
    - For example, if you changed LMStudio's port to 1235, use `http://localhost:1235/v1`
- Click "Verify" to test the connection
- Save the configuration

Then confirm everything works:
- Test the connection - you should see "LMStudio connection successful!"
- Try a simple chat - ask ComfyUI-Copilot a question (the sketch after this list sends an equivalent request directly)
- Check model availability - the model dropdown should show your LMStudio model
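
Under the hood, ComfyUI-Copilot talks to this endpoint the way any OpenAI-compatible client does. If you want to test the exact same configuration outside the UI, here is a hedged sketch using the `openai` Python package (`pip install openai`). The model id is an example and must match one reported by `/v1/models`; the API key is a throwaway placeholder because the client library rejects an empty string even though LMStudio ignores it:

```python
from openai import OpenAI

# Point the standard OpenAI client at the local LMStudio server.
# LMStudio ignores the API key, but the client requires a non-empty one.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

response = client.chat.completions.create(
    model="llama-3.1-8b-instruct",  # example id; substitute your loaded model
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```

If this returns a reply, ComfyUI-Copilot should work with the same Server URL.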

If the connection fails, work through these checks (a diagnostic sketch follows the list):
- Check LMStudio is running: Make sure the server is started in LMStudio
- Verify the URL: Ensure you're using the correct port and added `/v1`
- Check firewall: Make sure port 1234 (or your custom port) isn't blocked
- Restart LMStudio: Sometimes restarting the server helps

If your model doesn't show up in ComfyUI-Copilot:
- Load a model: Make sure you've selected and loaded a model in LMStudio
- Wait for loading: Some models take time to load completely
- Check LMStudio logs: Look at the LMStudio console for error messages

If you see API key errors:
- Leave API key empty: LMStudio doesn't require API keys
- Clear any existing key: Remove any text from the API key field
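
The connection checks above can be automated. This is a rough diagnostic sketch, assuming the default host and port; it first tests whether anything is listening on the port (server not started, or firewalled) and then whether the `/v1/models` endpoint answers (wrong URL, or no model loaded):

```python
import json
import socket
import urllib.error
import urllib.request

HOST, PORT = "localhost", 1234  # change these if you use a custom setup

# 1. Is anything listening? Failure here means the server isn't started
#    or a firewall is blocking the port.
try:
    socket.create_connection((HOST, PORT), timeout=3).close()
    print(f"Port {PORT} is open.")
except OSError:
    raise SystemExit(f"Nothing is listening on {HOST}:{PORT} - start the server in LMStudio.")

# 2. Does the OpenAI-compatible endpoint answer? Failure here usually
#    means a wrong path (missing /v1) or a server-side error.
try:
    with urllib.request.urlopen(f"http://{HOST}:{PORT}/v1/models", timeout=5) as resp:
        models = [m["id"] for m in json.load(resp).get("data", [])]
    print("Models available:", models or "none - load a model in LMStudio")
except urllib.error.HTTPError as err:
    print(f"Server answered with HTTP {err.code} - check the LMStudio logs.")
```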

For reference, the common server URL patterns:
- Default: `http://localhost:1234/v1`
- Custom port: `http://localhost:YOUR_PORT/v1`
- Network access: `http://YOUR_IP:1234/v1` (if accessing from another machine)

A few tips for better results:
- Model Performance: Larger models give better results but need more RAM or GPU memory
- GPU Acceleration: Enable GPU acceleration in LMStudio for faster responses
- Context Length: Adjust context length in LMStudio based on your needs
- Multiple Models: You can switch models in LMStudio without reconfiguring ComfyUI-Copilot
- Offline Usage: Once set up, everything works offline!

Here's how LMStudio compares to using the OpenAI API:

| Feature | LMStudio | OpenAI API |
|---|---|---|
| Cost | Free (after initial setup) | Pay per token |
| Privacy | 100% local | Data sent to OpenAI |
| Speed | Depends on your hardware | Usually very fast |
| Models | Open source models | GPT-4, GPT-3.5, etc. |
| Setup | Requires installation | Just API key |
| Offline | ✅ Works offline | ❌ Needs internet |
Once configured, you can use all ComfyUI-Copilot features with your local LMStudio models:
- Workflow generation and debugging
- Parameter suggestions
- Model recommendations
- Chat assistance
- And more!
Everything runs locally on your machine for complete privacy and control.