
Commit a9b872f: Remove OpenAI dependency (#183)
Parent: 9720d61

File tree: 9 files changed, +284 additions, -830 deletions

community/event-driven-rag-cve-analysis/Dockerfile (3 additions, 0 deletions)

```diff
@@ -66,5 +66,8 @@ RUN source activate morpheus &&\
     jupyter contrib nbextension install --user &&\
     pip install jupyterlab_nvdashboard==0.9

+RUN source activate morpheus &&\
+    pip install --upgrade langchain-nvidia-ai-endpoints
+
 # Launch jupyter
 CMD ["jupyter-lab", "--ip=0.0.0.0", "--no-browser", "--allow-root"]
```

community/event-driven-rag-cve-analysis/README.md (3 additions, 7 deletions)

````diff
@@ -28,11 +28,7 @@ You will also need to have a `Morpheus 24.03` docker container built and present

 ### NVIDIA GPU Cloud

-To access the NVIDIA hosted Inference Service, you will need to have the following environment variables set: `OPENAI_API_KEY`. To obtain the API key, please visit the [NVIDIA website](https://build.nvidia.com/) for instructions on generating your API key.
-
-It's important to note here that although we store the NGC API Key under the `OPENAI_API_KEY` variable, we will be interacting with NVIDIA hosted LLMs and not OpenAI LLMs.
-
-NVIDIA NIM microservices are OpenAI API compliant to maximize usability, so we will be using the `openai` package as a wrapper to make API calls.
+To access the NVIDIA hosted Inference Service, you will need to have the following environment variables set: `NVIDIA_API_KEY`. To obtain the API key, please visit the [NVIDIA website](https://build.nvidia.com/) for instructions on generating your API key.

 ### Building a Morpheus Container

@@ -53,13 +49,13 @@ If you are using a Morpheus version that is not `v24.03.02-runtime`, please upda
 ```
 ### Creating an Environment File

-To automatically use these API keys, you can set the `OPENAI_API_KEY` value in the `docker-compose.yml` file in this directory as follows:
+To automatically use these API keys, you can set the `NVIDIA_API_KEY` value in the `docker-compose.yml` file in this directory as follows:

 ```bash
 environment:
   - TERM=${TERM:-}
   # Workaround until this is working: https://github.com/docker/compose/issues/9181#issuecomment-1996016211
-  - OPENAI_API_KEY=<BUILD_NV_API_KEY>
+  - NVIDIA_API_KEY=<BUILD_NV_API_KEY>
   # Overwrite any environment variables in the .env file with URLs needed in the network
   - OPENAI_API_BASE=https://integrate.api.nvidia.com/v1
   - OPENAI_BASE_URL=https://integrate.api.nvidia.com/v1
````
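The compose variables above are all an OpenAI-compatible client needs, because NIM endpoints follow the OpenAI `/v1` path layout. A minimal stdlib-only sketch of that wiring (the key value is a placeholder; a real key starts with `nvapi-`, and the variable names are taken from the compose snippet):

```python
import os

# Mimic the environment that docker-compose exports into the container.
# Placeholder values only; do not commit real keys.
os.environ.setdefault("NVIDIA_API_KEY", "nvapi-PLACEHOLDER")
os.environ.setdefault("OPENAI_BASE_URL", "https://integrate.api.nvidia.com/v1")

# NIM is OpenAI-API compliant, so the chat endpoint lives at the usual path
# relative to the base URL:
chat_url = os.environ["OPENAI_BASE_URL"].rstrip("/") + "/chat/completions"
headers = {"Authorization": f"Bearer {os.environ['NVIDIA_API_KEY']}"}

print(chat_url)  # https://integrate.api.nvidia.com/v1/chat/completions
```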

community/event-driven-rag-cve-analysis/cyber_dev_day/config.py (0 additions, 12 deletions)

```diff
@@ -55,16 +55,6 @@ class NVFoundationLLMModelConfig(BaseModel):
     temperature: float = 0.0


-class OpenAIServiceConfig(BaseModel):
-    type: typing.Literal["openai"] = "openai"
-
-
-class OpenAIMModelConfig(BaseModel):
-    service: OpenAIServiceConfig
-
-    model_name: str
-
-
 class NIMServiceConfig(BaseModel):
     type: typing.Literal["NIM"] = "NIM"

@@ -73,13 +63,11 @@ class NIMModelConfig(BaseModel):
     service: NIMServiceConfig

     model_name: str
-    base_url: str
     temperature: float = 0.0
     top_p: float = 1


 LLMModelConfig = typing.Annotated[typing.Annotated[NeMoLLMModelConfig, Tag("nemo")]
-                                  | typing.Annotated[OpenAIMModelConfig, Tag("openai")]
                                   | typing.Annotated[NVFoundationLLMModelConfig, Tag("nvfoundation")]
                                   | typing.Annotated[NIMModelConfig, Tag("NIM")],
                                   Discriminator(_llm_discriminator)]
```

community/event-driven-rag-cve-analysis/cyber_dev_day/llm_service.py (1 addition, 1 deletion)

```diff
@@ -153,7 +153,7 @@ def create(service_type: str, *service_args, **service_kwargs) -> "LLMService":
         pass

     @staticmethod
-    def create(service_type: str | typing.Literal["nemo"] | typing.Literal["openai"], *service_args, **service_kwargs):
+    def create(service_type: str | typing.Literal["nemo"] | typing.Literal["nim"], *service_args, **service_kwargs):
         """
         Returns a service for interacting with LLM models.

```
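The signature change reflects that the factory now accepts "nim" instead of "openai" as a service type. A hypothetical sketch of such a registry-based factory; the class names loosely follow this repo, but the registration mechanism is an assumption, not the repo's actual implementation:

```python
class LLMService:
    """Hypothetical base: subclasses self-register under a service-type name."""

    registry: dict[str, type["LLMService"]] = {}

    def __init_subclass__(cls, *, name: str, **kwargs):
        super().__init_subclass__(**kwargs)
        LLMService.registry[name] = cls

    @staticmethod
    def create(service_type: str, *service_args, **service_kwargs) -> "LLMService":
        # Case-insensitive lookup, so "NIM" and "nim" both resolve.
        try:
            cls = LLMService.registry[service_type.lower()]
        except KeyError as e:
            raise ValueError(f"Unknown LLM service type: {service_type!r}") from e
        return cls(*service_args, **service_kwargs)


class NeMoLLMService(LLMService, name="nemo"):
    pass


class NIMLLMService(LLMService, name="nim"):
    pass


svc = LLMService.create("NIM")
print(type(svc).__name__)  # NIMLLMService
```

With this shape, dropping the OpenAI backend is just deleting one subclass; no caller of `create()` needs to change.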