
AnythingLLM client image crashes on Mac #91

Open
guimou opened this issue Oct 2, 2024 · 6 comments

@guimou
Collaborator

guimou commented Oct 2, 2024

"I have tried to run it locally with an M-Series mac but the image is crashing as soon as I perform a request.
Tested against a ollama model served locally as well as a granite model served on MaaS"

"the image is running and I could configure it ok with the endpoint and the API key"

[backend] info: [GenericOpenAiLLM] Inference API: https://mistral-7b-instruct-v0-3-maas-apicast-production.apps.prod.rhoai.rh-aiservices-bu.com:443/v1 Model: granite-8b-code-instruct-128k
2024/10/02 10:33:34 [error] 29#29: *49 upstream prematurely closed connection while reading upstream, client: 192.168.127.1, server: ${base_url}, request: "POST /api/workspace/munzws/stream-chat HTTP/1.1", upstream: "http://127.0.0.1:3001/api/workspace/munzws/stream-chat", host: "localhost:8888", referrer: "http://localhost:8888/workspace/munzws"
/opt/app-root/bin/utils/process.sh: line 10: 124 Illegal instruction (core dumped) "$@"
@r2munz

r2munz commented Oct 2, 2024

Tested on a MacBook Pro M3, macOS 14.6.1, Podman 5.2.1.

@guimou
Collaborator Author

guimou commented Oct 2, 2024

If the image starts, you have access to AnythingLLM, and you are able to configure the endpoint, I would say the issue is not with the image or the launcher itself. The log points to the launcher because the main process is started from there, but whatever actually triggers the crash is above it.
Quick thing: your endpoint is for Mistral on MaaS, but the model name you use is granite-8b-code-instruct-128k. That won't work for sure, as models and endpoints are linked together.
Can you correct that and try again, just to rule this out?
I'm sorry, I don't have a Mac, so I cannot test directly.
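If it helps, one quick way to check which model names the endpoint actually serves (a sketch, assuming the MaaS gateway exposes the standard OpenAI-compatible /v1/models route; the MAAS_API_KEY variable is just a placeholder for your key):

    # List the models served by the OpenAI-compatible endpoint (placeholder key variable)
    curl -s https://mistral-7b-instruct-v0-3-maas-apicast-production.apps.prod.rhoai.rh-aiservices-bu.com:443/v1/models \
      -H "Authorization: Bearer $MAAS_API_KEY"

The model name configured in AnythingLLM has to match one of the ids returned there.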

@r2munz

r2munz commented Oct 2, 2024

I have retried using the appropriate model, and disabled the local firewall (Little Snitch) just in case.

I could reproduce the issue:

Running command: node /app/server/index.js
[collector] info: Collector hot directory and tmp storage wiped!
[collector] info: Document processor app listening on port 8889
[backend] info: [EncryptionManager] Self-assigning key & salt for encrypting arbitrary data.
[backend] info: [TELEMETRY DISABLED] Telemetry is marked as disabled - no events will send. Telemetry helps Mintplex Labs Inc improve AnythingLLM.
[backend] info: [CommunicationKey] RSA key pair generated for signed payloads within AnythingLLM services.
[backend] info: [EncryptionManager] Loaded existing key & salt for encrypting arbitrary data.
[backend] info: Primary server in HTTP mode listening on port 3001
[backend] info: prisma:info Starting a sqlite pool with 13 connections.
[backend] info: [BackgroundWorkerService] Feature is not enabled and will not be started.
[backend] info: [MetaGenerator] fetching custom meta tag settings...
[backend] error: Error: The OPENAI_API_KEY environment variable is missing or empty; either provide it, or instantiate the OpenAI client with an apiKey option, like new OpenAI({ apiKey: 'My API Key' }).
    at new OpenAI (/app/server/node_modules/openai/index.js:53:19)
    at openAiModels (/app/server/utils/helpers/customModels.js:59:18)
    at getCustomModels (/app/server/utils/helpers/customModels.js:27:20)
    at /app/server/endpoints/system.js:912:41
    at Layer.handle [as handle_request] (/app/server/node_modules/express/lib/router/layer.js:95:5)
    at next (/app/server/node_modules/express/lib/router/route.js:149:13)
    at validatedRequest (/app/server/utils/middleware/validatedRequest.js:20:5)
[backend] info: [Event Logged] - update_llm_provider
[backend] info: [Event Logged] - update_embedding_engine
[backend] info: [Event Logged] - update_vector_db
[backend] info: [Event Logged] - workspace_created
[backend] info: [NativeEmbedder] Initialized
[backend] info: [GenericOpenAiLLM] Inference API: https://mistral-7b-instruct-v0-3-maas-apicast-production.apps.prod.rhoai.rh-aiservices-bu.com:443/v1 Model: mistral-7b-instruct
2024/10/02 12:20:20 [error] 29#29: *119 upstream prematurely closed connection while reading upstream, client: 192.168.127.1, server: ${base_url}, request: "POST /api/workspace/ws/stream-chat HTTP/1.1", upstream: "http://127.0.0.1:3001/api/workspace/ws/stream-chat", host: "localhost:8888", referrer: "http://localhost:8888/workspace/ws"
/opt/app-root/bin/utils/process.sh: line 10:   124 Illegal instruction     (core dumped) "$@"

Digging a bit deeper into the logs, I see these worrying lines:

[backend] error: Error: The OPENAI_API_KEY environment variable is missing or empty; either provide it, or instantiate the OpenAI client with an apiKey option, like new OpenAI({ apiKey: 'My API Key' }).
    at new OpenAI (/app/server/node_modules/openai/index.js:53:19)
    at openAiModels (/app/server/utils/helpers/customModels.js:59:18)
    at getCustomModels (/app/server/utils/helpers/customModels.js:27:20)
    at /app/server/endpoints/system.js:912:41
    at Layer.handle [as handle_request] (/app/server/node_modules/express/lib/router/layer.js:95:5)
    at next (/app/server/node_modules/express/lib/router/route.js:149:13)
    at validatedRequest (/app/server/utils/middleware/validatedRequest.js:20:5)

I set the API key as provided by the MaaS platform.

@guimou
Collaborator Author

guimou commented Oct 2, 2024

[backend] error: Error: The OPENAI_API_KEY environment variable is missing or empty; either provide it, or instantiate the OpenAI client with an apiKey option, like new OpenAI({ apiKey: 'My API Key' }).
    at new OpenAI (/app/server/node_modules/openai/index.js:53:19)
    at openAiModels (/app/server/utils/helpers/customModels.js:59:18)
    at getCustomModels (/app/server/utils/helpers/customModels.js:27:20)
    at /app/server/endpoints/system.js:912:41
    at Layer.handle [as handle_request] (/app/server/node_modules/express/lib/router/layer.js:95:5)
    at next (/app/server/node_modules/express/lib/router/route.js:149:13)
    at validatedRequest (/app/server/utils/middleware/validatedRequest.js:20:5)

This part is normal the first time you launch AnythingLLM, when the endpoint is not yet configured. By default it assumes you will use OpenAI and tries to find the key in the environment variables. Once everything is configured (and persisted), the error no longer appears.
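Side note: if you wanted to suppress that startup error even before configuring a provider, one option (just a sketch with placeholder values; the image reference, port mapping, and key are not the real ones) would be to pass the variable into the container at launch:

    # Hypothetical run command: setting OPENAI_API_KEY avoids the "missing or empty" error shown above
    podman run -p 8888:8888 -e OPENAI_API_KEY=sk-placeholder quay.io/example/anythingllm-workbench:latest

But as said, it is harmless and stops appearing once the endpoint configuration is persisted.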

The real error is this one:

[backend] info: [GenericOpenAiLLM] Inference API: https://mistral-7b-instruct-v0-3-maas-apicast-production.apps.prod.rhoai.rh-aiservices-bu.com:443/v1 Model: mistral-7b-instruct
2024/10/02 12:20:20 [error] 29#29: *119 upstream prematurely closed connection while reading upstream, client: 192.168.127.1, server: ${base_url}, request: "POST /api/workspace/ws/stream-chat HTTP/1.1", upstream: "http://127.0.0.1:3001/api/workspace/ws/stream-chat", host: "localhost:8888", referrer: "http://localhost:8888/workspace/ws"
/opt/app-root/bin/utils/process.sh: line 10:   124 Illegal instruction     (core dumped) "$@"

However, apart from the core dump, it does not say much... And from a quick search, it seems this type of error happens on different flavours of Macs running containers. It always ends with the illegal instruction / core dump from process.sh because that is what launches the process, but the error is in the app itself.
At this point I would try installing AnythingLLM directly on the Mac and see how it behaves...
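One quick diagnostic worth running first (a sketch, assuming the image has already been pulled locally; the image reference is a placeholder) is to compare the image architecture with the host:

    # Architecture of the Podman host (arm64 on Apple Silicon)
    podman info --format '{{.Host.Arch}}'

    # OS/architecture of the pulled image (placeholder image reference)
    podman image inspect --format '{{.Os}}/{{.Architecture}}' quay.io/example/anythingllm-workbench:latest

An amd64-only image running on an arm64 host has to go through emulation, which is a common source of "Illegal instruction" crashes.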

@r2munz

r2munz commented Oct 3, 2024

Installing AnythingLLM directly worked perfectly.
Please note that I installed the specific Apple Silicon build for that test, while the podman command is pulling a linux/amd64 image.
(Screenshot attached, 2024-10-03 10:41:57)

@guimou
Collaborator Author

guimou commented Oct 3, 2024

Oh, yes, I'm so dumb! It's a standard image, made to work primarily on OpenShift. So definitely not made for Apple Silicon.
I will see if I can create one.
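For reference, one way to produce a single tag that works on both architectures (a sketch, assuming a recent Podman with qemu-user-static available for cross-building; the image name is a placeholder) would be a multi-arch manifest build:

    # Build linux/amd64 and linux/arm64 variants into one manifest list (placeholder image name)
    podman build --platform linux/amd64,linux/arm64 \
      --manifest quay.io/example/anythingllm-workbench:latest .

    # Push the whole manifest list so clients pull the variant matching their host
    podman manifest push --all quay.io/example/anythingllm-workbench:latest \
      docker://quay.io/example/anythingllm-workbench:latest

With that in place, the same podman run command would pull the arm64 variant on Apple Silicon.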
