ComfyUI-Copilot now supports LMStudio as a local LLM provider. This allows you to run powerful language models locally on your machine without sending data to external APIs.
- Download and install LMStudio from https://lmstudio.ai/
- Download a compatible model (e.g., Llama 3.1, CodeLlama, or any other model LMStudio supports)
- Start the LMStudio server:
- Open LMStudio
- Go to the "Developer" tab
- Click "Start Server"
- The server runs on http://localhost:1234 by default
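Once the server is running, you can confirm it is reachable before configuring ComfyUI-Copilot. A minimal sketch using only the Python standard library, assuming the default port (1234) and LMStudio's OpenAI-compatible `/v1/models` endpoint:

```python
import json
import urllib.error
import urllib.request

def lmstudio_is_up(base_url="http://localhost:1234/v1", timeout=3):
    """Return True if the LMStudio server answers on its /models endpoint."""
    try:
        with urllib.request.urlopen(f"{base_url}/models", timeout=timeout) as resp:
            data = json.load(resp)
            # OpenAI-compatible servers return {"data": [{"id": ...}, ...]}
            print("Loaded models:", [m.get("id") for m in data.get("data", [])])
            return True
    except (urllib.error.URLError, OSError):
        return False

if __name__ == "__main__":
    print("Server reachable:", lmstudio_is_up())
```

If this prints `False`, revisit the steps above before touching the ComfyUI-Copilot settings.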
- In the ComfyUI-Copilot interface, go to the LLM configuration settings
- Set the following parameters:
- Base URL: http://localhost:1234/v1 (or your LMStudio server URL)
- API Key: Leave empty (LMStudio typically doesn't require an API key)
- Model: Select the model you loaded in LMStudio
- Click the "Verify Connection" button
- You should see "LMStudio connection successful" if everything is configured correctly
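With those settings in place, every request is a standard OpenAI-style chat-completion call against the Base URL. A sketch of what that looks like with the standard library only; the model name in the commented usage is illustrative, not a value the integration requires:

```python
import json
import urllib.request

def build_chat_request(model, messages, stream=False):
    """Build an OpenAI-style chat-completion payload for an LMStudio /v1 API."""
    return {"model": model, "messages": messages, "stream": stream}

def chat(base_url, payload, timeout=60):
    """POST the payload to /chat/completions and return the parsed JSON reply."""
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},  # no API key header needed
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.load(resp)

# Example (requires a running LMStudio server; model name is hypothetical):
# reply = chat("http://localhost:1234/v1",
#              build_chat_request("llama-3.1-8b-instruct",
#                                 [{"role": "user", "content": "Hello"}]))
# print(reply["choices"][0]["message"]["content"])
```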
- ✅ Model listing from LMStudio
- ✅ Chat completions
- ✅ Streaming responses
- ✅ All ComfyUI-Copilot features (workflow generation, debugging, etc.)
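Streaming responses arrive as server-sent events, one `data: {...}` line per token chunk, following the OpenAI-compatible convention. A sketch of extracting the incremental text from each line:

```python
import json

def extract_delta(sse_line):
    """Extract the incremental text from one SSE line of a streaming chat
    completion. Returns None for non-data lines and the end-of-stream marker."""
    if not sse_line.startswith("data: "):
        return None
    body = sse_line[len("data: "):].strip()
    if body == "[DONE]":  # end-of-stream sentinel used by OpenAI-style APIs
        return None
    chunk = json.loads(body)
    return chunk["choices"][0]["delta"].get("content", "")

# Feeding it sample lines:
print(extract_delta('data: {"choices": [{"delta": {"content": "Hel"}}]}'))  # Hel
print(extract_delta("data: [DONE]"))  # None
```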
- Default: http://localhost:1234/v1
- Custom port: http://localhost:YOUR_PORT/v1
- Network access: http://YOUR_IP:1234/v1
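All three variants follow the same `http://host:port/v1` pattern. A small hypothetical helper for composing and normalizing such URLs, since a missing `/v1` suffix is an easy mistake:

```python
def lmstudio_base_url(host="localhost", port=1234):
    """Compose an LMStudio-style base URL ending in /v1."""
    return f"http://{host}:{port}/v1"

def ensure_v1(url):
    """Append the required /v1 suffix if it is missing."""
    url = url.rstrip("/")
    return url if url.endswith("/v1") else url + "/v1"

print(lmstudio_base_url())                    # http://localhost:1234/v1
print(ensure_v1("http://192.168.1.50:1234"))  # http://192.168.1.50:1234/v1
```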
- Ensure LMStudio server is running
- Check that the port (default 1234) is not blocked by firewall
- Verify the URL format includes /v1 at the end
- Make sure you have downloaded and loaded a model in LMStudio
- Check LMStudio console for any error messages
- Try restarting the LMStudio server
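A quick socket check can separate "server not running or port blocked" from "wrong URL format". A minimal sketch, assuming the default port:

```python
import socket

def port_open(host="localhost", port=1234, timeout=2):
    """Return True if something is listening on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    if port_open():
        print("Port 1234 is open -- check the URL format (/v1 suffix) next.")
    else:
        print("Nothing is listening on port 1234 -- start the LMStudio server.")
```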
- LMStudio performance depends on your hardware
- Consider using quantized models for better performance
- Adjust LMStudio's GPU/CPU allocation settings
- Privacy: All data stays on your local machine
- No API costs: No per-token charges
- Offline capability: Works without internet connection
- Custom models: Use any compatible model you prefer
- Full control: Adjust model parameters and behavior
For ComfyUI-Copilot, we recommend:
- Llama 3.1 8B Instruct (good balance of performance and quality)
- CodeLlama 7B/13B (excellent for code-related tasks)
- Mistral 7B Instruct (fast and efficient)
Choose models based on your hardware capabilities and requirements.