bentoml/BentoVLLM


Self-host LLMs with vLLM and BentoML

This repository contains a collection of BentoML example projects that show you how to serve and deploy open-source large language models using vLLM, a high-throughput and memory-efficient inference engine. Each model directory contains the code to add OpenAI-compatible endpoints to the BentoML Service.

💡 You can use these examples as a basis for advanced code customization, such as a custom model, inference logic, or vLLM options. For simple LLM hosting with OpenAI-compatible endpoints that requires no code, see OpenLLM.

See here for a full list of BentoML example projects.

The following is an example of serving one of the LLMs in this repository: Llama 3.1 8B Instruct.

Prerequisites

  • If you want to test the Service locally, we recommend you use an NVIDIA GPU with at least 16 GB of VRAM.
  • Gain access to the model on Hugging Face.

Install dependencies

git clone https://github.com/bentoml/BentoVLLM.git
cd BentoVLLM/llama3.1-8b-instruct

# We recommend uv and Python 3.11
uv venv && uv pip install .

export HF_TOKEN=<your-api-key>

Run the BentoML Service

We have defined a BentoML Service in service.py. Run bentoml serve in your project directory to start the Service.

$ bentoml serve .

2024-01-18T07:51:30+0800 [INFO] [cli] Starting production HTTP BentoServer from "service:VLLM" listening on http://localhost:3000 (Press CTRL+C to quit)
INFO 01-18 07:51:40 model_runner.py:501] Capturing the model for CUDA graphs. This may lead to unexpected consequences if the model is not static. To run the model in eager mode, set 'enforce_eager=True' or use '--enforce-eager' in the CLI.
INFO 01-18 07:51:40 model_runner.py:505] CUDA graphs can take additional 1~3 GiB memory per GPU. If you are running out of memory, consider decreasing `gpu_memory_utilization` or enforcing eager mode.
INFO 01-18 07:51:46 model_runner.py:547] Graph capturing finished in 6 secs.

The server is now active at http://localhost:3000. You can interact with it using the Swagger UI or in other ways, for example:

CURL
curl -X 'POST' \
  'http://localhost:3000/generate' \
  -H 'accept: text/event-stream' \
  -H 'Content-Type: application/json' \
  -d '{
  "prompt": "Explain superconductors like I'\''m five years old",
  "tokens": null
}'
Python client
import bentoml

with bentoml.SyncHTTPClient("http://localhost:3000") as client:
    response_generator = client.generate(
        prompt="Explain superconductors like I'm five years old",
        tokens=None
    )
    for response in response_generator:
        print(response)
OpenAI-compatible endpoints
from openai import OpenAI

client = OpenAI(base_url='http://localhost:3000/v1', api_key='na')

# List the available models
client.models.list()

chat_completion = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",
    messages=[
        {
            "role": "user",
            "content": "Explain superconductors like I'm five years old"
        }
    ],
    stream=True,
)
for chunk in chat_completion:
    # Extract and print the content of the model's reply
    print(chunk.choices[0].delta.content or "", end="")

Note: If your Service is deployed with protected endpoints on BentoCloud, you need to set the environment variable OPENAI_API_KEY to your BentoCloud API key first.

export OPENAI_API_KEY={YOUR_BENTOCLOUD_API_TOKEN}

You can then replace the client in the code snippet above with the following line. See Obtain the endpoint URL for how to retrieve it.

client = OpenAI(base_url='your_bentocloud_deployment_endpoint_url/v1')

For detailed explanations of the Service code, see vLLM inference.

Deploy to BentoCloud

After the Service is ready, you can deploy the application to BentoCloud for better management and scalability. Sign up if you don't have a BentoCloud account.

Make sure you have logged in to BentoCloud.

bentoml cloud login

Create a BentoCloud secret to store the required environment variable and reference it for deployment.

bentoml secret create huggingface HF_TOKEN=$HF_TOKEN

bentoml deploy . --secret huggingface

Note: For custom deployment in your own infrastructure, use BentoML to generate an OCI-compliant image.
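For self-managed infrastructure, a possible flow (assuming Docker is installed and you run these commands from the project directory) is to build the Bento and then containerize it:

```shell
# Package the Service into a Bento; the command prints the resulting tag.
bentoml build

# Build an OCI-compliant image from that Bento tag
# (replace the placeholder with the tag printed above).
bentoml containerize <name:version>
```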

Featured models

In addition to Llama 3.1 8B Instruct, this repository provides examples for other models in its subdirectories:

| Model | Links |
| ----- | ----- |
| deepseek-v3-671b | GitHub · Hugging Face |
| deepseek-r1-671b | GitHub · Hugging Face |
| deepseek-r1-distill-llama3.3-70b | GitHub · Hugging Face |
| deepseek-r1-distill-qwen2.5-32b | GitHub · Hugging Face |
| deepseek-r1-distill-qwen2.5-14b | GitHub · Hugging Face |
| deepseek-r1-distill-qwen2.5-7b-math | GitHub · Hugging Face |
| deepseek-r1-distill-llama3.1-8b | GitHub · Hugging Face |
| deepseek-r1-distill-llama3.1-8b-tool-calling | GitHub · Hugging Face |
| gemma2-2b-instruct | GitHub · Hugging Face |
| gemma2-9b-instruct | GitHub · Hugging Face |
| gemma2-27b-instruct | GitHub · Hugging Face |
| jamba1.5-mini | GitHub · Hugging Face |
| llama3.1-8b-instruct | GitHub · Hugging Face |
| llama3.2-1b-instruct | GitHub · Hugging Face |
| llama3.2-3b-instruct | GitHub · Hugging Face |
| llama3.2-11b-vision-instruct | GitHub · Hugging Face |
| llama3.2-90b-vision-instruct | GitHub · Hugging Face |
| llama3.3-70b-instruct | GitHub · Hugging Face |
| pixtral-12b-2409 | GitHub · Hugging Face |
| mixtral-8x7b-v0.1 | GitHub · Hugging Face |
| ministral-8b-instruct-2410 | GitHub · Hugging Face |
| mistral-small-24b-instruct-2501 | GitHub · Hugging Face |
| mistral-large-123b-instruct-2407 | GitHub · Hugging Face |
| phi4-14b | GitHub · Hugging Face |
| qwen2.5-7b-instruct | GitHub · Hugging Face |
| qwen2.5-14b-instruct | GitHub · Hugging Face |
| qwen2.5-32b-instruct | GitHub · Hugging Face |
| qwen2.5-72b-instruct | GitHub · Hugging Face |
| qwen2.5-coder-7b-instruct | GitHub · Hugging Face |
| qwen2.5-coder-32b-instruct | GitHub · Hugging Face |
| qwen2.5vl-3b-instruct | GitHub · Hugging Face |
| qwen2.5vl-7b-instruct | GitHub · Hugging Face |
