I'm exploring few-shot prompting for the Qwen QVQ model and was wondering:
Does the model support few-shot learning, meaning can it accept multiple images as input for context?
If not, are there any workarounds, such as using a custom inference setup that makes multiple LLM calls (one for each shot) and consolidates the results?
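A minimal sketch of that second workaround, in case it helps the discussion: run one single-image call per shot plus the query, then consolidate the per-call answers by majority vote. `call_model` here is a hypothetical stand-in for whatever single-image inference function your serving stack exposes; the prompt wording is just an illustration.

```python
# Workaround sketch: one model call per few-shot example, then consolidate.
# `call_model(image, prompt) -> str` is a hypothetical single-image inference
# function; swap in your actual QVQ serving call.
from collections import Counter

def consolidate(answers):
    """Pick the most common answer across the independent calls."""
    best, _count = Counter(answers).most_common(1)[0]
    return best

def few_shot_via_multiple_calls(call_model, shots, query_image):
    """Run one call per (example_image, label) shot, each paired with the query.

    Each call only ever sees one image, so the few-shot context is carried
    in the text prompt rather than as interleaved images.
    """
    answers = []
    for _shot_image, shot_label in shots:
        prompt = (
            f"A similar example image was labeled '{shot_label}'. "
            "Label this image in the same style."
        )
        answers.append(call_model(query_image, prompt))
    return consolidate(answers)
```

This loses the actual pixels of the example images (only their labels reach the model), so it is a weaker form of few-shot conditioning than true interleaved-image input.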
Would love to hear any insights or experiences.
Thanks in advance! 🙌
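For reference, here is how I would try building an interleaved multi-image few-shot prompt in the Qwen2-VL-style `messages` format (the format QVQ checkpoints use with Hugging Face `transformers` / `qwen-vl-utils`). The file names and task text are placeholders, and whether QVQ actually attends usefully to more than one image is exactly the open question here:

```python
# Sketch: interleave (image, label) example pairs before the query image,
# in the Qwen2-VL-style chat message format. Paths and labels below are
# placeholders; pass the result to processor.apply_chat_template(...).
def build_few_shot_messages(shots, query_image, task):
    """shots: list of (image_path, label) pairs. Returns a messages list."""
    content = [{"type": "text", "text": task}]
    for image_path, label in shots:
        content.append({"type": "image", "image": image_path})
        content.append({"type": "text", "text": f"Answer: {label}"})
    # The unlabeled query image goes last, mirroring the example pattern.
    content.append({"type": "image", "image": query_image})
    content.append({"type": "text", "text": "Answer:"})
    return [{"role": "user", "content": content}]

messages = build_few_shot_messages(
    [("cat1.jpg", "cat"), ("dog1.jpg", "dog")],
    "query.jpg",
    "Classify each image as cat or dog.",
)
```

If the model accepts this without erroring, the remaining question is purely about quality: does it pick up the pattern from the example images?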
Facing the same issue here. Few-shot prompting with multiple images would be a great addition, so the model could learn what to look for from image examples!