Add downsampled images as context instead of screenshot.png? #56


Closed
klxu03 opened this issue Dec 2, 2023 · 5 comments
Assignees
Labels
enhancement New feature or request

Comments

@klxu03
Contributor

klxu03 commented Dec 2, 2023

Would it be advantageous to keep a collage of downsampled previous screenshots (say, 160 px × 90 px each), stacked left to right in a single row, and pass this image as additional context with each action, as a timeline of the previous states the model has traversed?

FYI: I would be happy to draft and code this feature out!
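A minimal sketch of the collage idea using Pillow (the `build_timeline` name, thumbnail size, and resampling filter are illustrative choices, not anything from the codebase):

```python
from PIL import Image

def build_timeline(frames, thumb_size=(160, 90)):
    """Downsample each frame and stack the thumbnails left to right."""
    thumbs = [f.resize(thumb_size, Image.LANCZOS) for f in frames]
    w, h = thumb_size
    collage = Image.new("RGB", (w * len(thumbs), h))
    for i, thumb in enumerate(thumbs):
        collage.paste(thumb, (i * w, 0))
    return collage

# Example: three synthetic 1920x1080 "screenshots".
frames = [Image.new("RGB", (1920, 1080), color) for color in ("red", "green", "blue")]
timeline = build_timeline(frames)
print(timeline.size)  # (480, 90)
```

The collage would then be attached as one extra image in the message history rather than as N separate images, which keeps the per-request image count constant as the session grows.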

@shubhexists
Contributor

Out of curiosity, doesn't the vision model have some sort of token limit, like GPT-4 does?

@michaelhhogue
Collaborator

@klxu03 This is a good idea. I've been trying to improve the vision prompt by telling GPT to reference its previous action, but giving it a stack of previous images would let it reference even farther back. You would need to experiment with the downsampling and the size of the previous-image stack to avoid hitting token limits, as @shubhexists mentioned. Feel free to open a PR/draft demonstrating this change!

@michaelhhogue michaelhhogue added the enhancement New feature or request label Dec 2, 2023
@klxu03
Contributor Author

klxu03 commented Dec 2, 2023

Got it. I'll draft up a PR for the previous-image stacks! I'm planning to add it as the immediately preceding message in pseudo_messages.

Also, regarding the token limit, correct me if I'm wrong, but isn't the GPT-4V context window up to 128k tokens?
If so, is there a real concern about hitting the token limit? Working out the numbers, with x as one side of the largest square image, you get x = 14044, meaning a single image of up to 14044 px × 14044 px. To hit the token limit, you would need to pass a lot of images, roughly 2 × 10^8 total pixels.
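That arithmetic can be sanity-checked against OpenAI's documented tile-based accounting for high-detail vision input (85 base tokens plus 170 per 512 px tile; treat these constants as assumptions if the pricing has since changed):

```python
import math

def vision_tokens(width: int, height: int, tile: int = 512,
                  per_tile: int = 170, base: int = 85) -> int:
    """Approximate token cost of one high-detail image under a tile-based scheme."""
    tiles = math.ceil(width / tile) * math.ceil(height / tile)
    return base + per_tile * tiles

# A 160x90 thumbnail fits in one tile: 85 + 170 = 255 tokens.
print(vision_tokens(160, 90))  # 255

# Largest square image that fits in a 128k-token context window.
budget = 128_000
max_tiles = (budget - 85) // 170          # 752 tiles
side = int(512 * math.sqrt(max_tiles))    # just over 14,000 px per side
print(side)
```

Note this ignores the API's own pre-tiling resize of large inputs, so the per-image cost in practice caps out well below the theoretical maximum; the practical takeaway is that a short row of 160 × 90 thumbnails costs only a couple hundred tokens.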

@michaelhhogue @shubhexists

@michaelhhogue
Collaborator

@klxu03 Can this be closed since #57 was merged?

@joshbickett
Contributor

Closing, since this is old and less relevant now.
