[BUG] Multi-modal embedding issue on 2.17 #1198
@b4sjoo thanks for reporting the issue. In your ingest pipeline configuration you need fields on your doc that hold the text and image info, and the embedding field should point to the vector field that will store the embeddings we retrieve from the inference call. Note that we assume a single embedding for both image and text. Let's extend your index mapping and add the vector field.
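For reference, an extended index mapping of this shape might look like the sketch below. The index name, the `image_description`/`image_binary`/`vector_embedding` field names, and the dimension of 1024 (the default output size of the Titan Multimodal Embeddings model) are assumptions for illustration, not the exact config from this issue:

```json
PUT /my-multimodal-index
{
  "settings": {
    "index.knn": true
  },
  "mappings": {
    "properties": {
      "image_description": { "type": "text" },
      "image_binary": { "type": "binary" },
      "vector_embedding": {
        "type": "knn_vector",
        "dimension": 1024,
        "method": {
          "name": "hnsw",
          "engine": "lucene",
          "space_type": "l2"
        }
      }
    }
  }
}
```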
If you change the ingest pipeline config to something like the one below, it should work.
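A minimal pipeline of this kind uses the neural-search `text_image_embedding` processor; the pipeline name, the model id placeholder, and the field names are assumptions for illustration. The `field_map` points at the source text and image fields, and `embedding` names the vector field that will receive the single combined embedding:

```json
PUT /_ingest/pipeline/multimodal-pipeline
{
  "description": "Multimodal embedding pipeline (illustrative sketch)",
  "processors": [
    {
      "text_image_embedding": {
        "model_id": "<your model id>",
        "embedding": "vector_embedding",
        "field_map": {
          "text": "image_description",
          "image": "image_binary"
        }
      }
    }
  ]
}
```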
I tested it with a local model, and this returns the doc with embeddings.
What is the bug?
Multimodal ingestion and queries are not working as expected with Bedrock image v1 connector.
How can one reproduce the bug?
Create the connector and register the model
Create the ingest pipeline
Create the index
Ingest a doc
Get the doc; the fields are not embedded
Run a multimodal query; it returns an exception with no log trace
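The ingest and query steps above could be sketched as follows, assuming an index named `my-multimodal-index` with `image_description`/`image_binary` source fields and a `vector_embedding` knn_vector field; the doc values, pipeline name, and `k` are placeholders, not the reporter's exact requests:

```json
PUT /my-multimodal-index/_doc/1?pipeline=multimodal-pipeline
{
  "image_description": "orange juice",
  "image_binary": "<base64-encoded image>"
}

GET /my-multimodal-index/_search
{
  "query": {
    "neural": {
      "vector_embedding": {
        "query_text": "orange juice",
        "query_image": "<base64-encoded image>",
        "model_id": "<your model id>",
        "k": 5
      }
    }
  }
}
```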
What is the expected behavior?
Multimodal fields should be correctly embedded and indexed, and multimodal queries should work.
What is your host/environment?
OpenSearch 2.17
Do you have any additional context?
Prediction with the model directly is working, indicating the connector itself and ML-commons are working as expected.
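A direct prediction call of this kind goes through the ML Commons predict API; the model id is a placeholder, and the `inputText`/`inputImage` parameter names are an assumption based on the Bedrock Titan Multimodal Embeddings request body (the actual names depend on how the connector maps its parameters):

```json
POST /_plugins/_ml/models/<your model id>/_predict
{
  "parameters": {
    "inputText": "orange juice",
    "inputImage": "<base64-encoded image>"
  }
}
```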