
[Feature] Minimal GUI version or TUI #3478

Closed
barracuda156 opened this issue Feb 8, 2025 · 8 comments
Labels
enhancement New feature or request

Comments

@barracuda156

Feature Request

Qt5+ is broken on some platforms, and it is also a huge dependency to build and keep on disk.
Is it possible to have a minimal version with a basic text GUI or TUI? What would be the bare minimum needed to run, in principle?
P.S. Ideally I would like to build this with Qt4, though I understand that hardly anyone is going to bother helping with that specifically. But if there is a generic minimal version, I can probably adapt the code to Qt4 syntax.
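For what it's worth, the kind of split being asked for is usually expressed as a build option. A hypothetical CMake sketch of the idea follows; the option and target names here are invented for illustration and are not taken from GPT4All's actual build files:

```cmake
# Hypothetical: gate the Qt GUI behind a build option so a CLI-only
# build never needs Qt at all. Names are illustrative, not GPT4All's.
option(BUILD_GUI "Build the Qt-based chat GUI" ON)

add_library(llm_core core/model.cpp core/tokenizer.cpp)

if(BUILD_GUI)
  find_package(Qt6 REQUIRED COMPONENTS Quick)
  add_executable(chat_gui gui/main.cpp)
  target_link_libraries(chat_gui PRIVATE llm_core Qt6::Quick)
else()
  # Plain terminal front end; no GUI toolkit in the dependency tree.
  add_executable(chat_cli cli/main.cpp)
  target_link_libraries(chat_cli PRIVATE llm_core)
endif()
```

With a layout like this, `cmake -DBUILD_GUI=OFF` would skip the `find_package(Qt6 ...)` call entirely, which is the behavior the request is after.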

@barracuda156 barracuda156 added the enhancement New feature or request label Feb 8, 2025
@cebtenzzre
Member

Qt is actually much smaller than the compute kernels we ship with GPT4All in order to support GPUs, so this is a non-issue. The Windows build of Ollama is 1.8 GiB, and it has no dependency on a GUI framework of any kind. The LLMs themselves (such as an 8B Llama) are typically about 5 GiB each.

The Qt libraries that are part of GPT4All take up only 60 MiB of disk space on my Linux system. This is nothing in comparison.

@cebtenzzre cebtenzzre closed this as not planned Feb 12, 2025
@barracuda156
Author

@cebtenzzre This does not help when Qt6 does not build at all though.

@cebtenzzre
Member

> @cebtenzzre This does not help when Qt6 does not build at all though.

If there is a problem with Qt6 on your platform, please report the issue on the Qt bug tracker. It has very reasonable system requirements. We are not going to rewrite the whole program in Qt4 to support your niche use case.

@barracuda156
Author

@cebtenzzre I did not ask to rewrite anything at all for Qt4. My request was to make the bells and whistles optional, that’s it.

The problem with Qt is that its upstream has deliberately kept breaking platform support over the years since the beginning of Qt5. There is a higher chance of fixing all their broken code than of convincing them to do anything: the first simply needs a lot of time, while the second requires a miracle.

@cebtenzzre
Member

> The problem with Qt is that its upstream deliberately kept breaking support for platforms over years since Qt5 beginning.

Well, you still have not named a specific platform in this thread. So we can't even know how to resolve your original issue. Is there really a computer you can buy off the shelf today that would be well-supported by GPT4All (using CUDA, Vulkan, or Metal for acceleration) if not for its dependency on Qt6?

There are areas we can improve compatibility with different hardware, for sure. For example, discrete Intel GPUs are still not supported at all. But if you cannot run Qt, that is not a problem we are interested in spending precious time solving. Sorry.

@barracuda156
Author

barracuda156 commented Feb 12, 2025

> The problem with Qt is that its upstream deliberately kept breaking support for platforms over years since Qt5 beginning.

> Well, you still have not named a specific platform in this thread.

Judging from the Qt6 base port status, it looks like it is broken even on Catalina, which is crazy. The current release of the Qt5 base builds only on 10.13+.
This, of course, does not automatically mean that all required components will build on every system where the base builds.
No version of Qt5+ builds on powerpc or, AFAIK, i386.
(Unrelated to the issue discussed, but regarding Qt upstream: the X11 backend is broken in Qt6 on all versions of macOS, on all archs.)

Some components of Qt5/Qt6 (which may not be relevant for gpt4all) are broken on the current release of OpenBSD for some archs.
I have no idea about the Linux status, but I suspect that once we move beyond x86 or arm64, a lot of platforms will turn out to be unsupported.

This is just to reply to the question, not to convince anyone.

@cebtenzzre
Member

macOS 12 stopped receiving security updates from Apple last year. 10.15 got its final update three years ago. You can dedicate as much of your free time as you would like to porting modern software to old hardware, but when Apple has already left you in the dust, you should not expect applications to stay in the past and remain compatible with these devices. If you can update, you should, and if you can't update, don't complain to us, complain to Apple.

GPT4All is designed for and tested on modern, off-the-shelf PCs—or at least, commodity hardware running popular operating systems (and we even count Linux as popular). We are not going to buy an iMac G3 and then port GPT4All to that. We are also not going to install OpenBSD on a perfectly functioning desktop and then wonder why the NVIDIA GPU does not even work. There are better things to spend development time on at Nomic.

@barracuda156
Author

@cebtenzzre If you re-read my original post, I did not ask anyone to port the code to macOS PowerPC or OpenBSD. I do not think it is crazy to want a CLI-only version of something that can work in a CLI. My idea was merely to make the additional functionality optional (though enabled by default).
Of course, it is up to you what you prefer to spend your time on, no drama.
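To make concrete what a "CLI-only version" of a chat application could look like, here is a minimal C++ sketch: the front end is nothing more than a read/reply loop over streams, with no GUI toolkit involved. Everything below is illustrative; `generate_reply()` is a stub standing in for the real inference call, not GPT4All's API.

```cpp
#include <iostream>
#include <sstream>
#include <string>

// Stub standing in for the real inference call; not GPT4All's API.
static std::string generate_reply(const std::string& prompt) {
    return "echo: " + prompt;
}

// Plain-terminal front end: no Qt, no event loop, just a read/reply loop.
// Streams are passed in so the loop can be exercised without a terminal;
// a real build would call run_cli(std::cin, std::cout) from main().
static void run_cli(std::istream& in, std::ostream& out) {
    std::string line;
    while (std::getline(in, line)) {
        if (line == "/quit") break;   // simple exit command
        out << generate_reply(line) << "\n";
    }
}
```

The point of the sketch is only that the terminal path needs nothing beyond the standard library, which is why it can build on platforms where Qt does not.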
