
Releases: ask0ldd/OsspitaUI

OSspita | Alpha v1.0.51 / Hotfix / 20250118


A hotfix has been implemented to resolve instability in the web search features caused by recent changes to the Google Search page layout. The search agents have been adjusted to restore stability and improve overall performance.

Next: regarding image generation, the foundational features have been implemented and are currently undergoing stability testing.

OSspita | Alpha v1.0.5 / 20241220


OSspita | Alpha v1.0.5: Chat with historical figures & multiple improvements.

A lot of work has been done to eliminate some unstable third-party libraries, improve the user experience, and add one of the most requested features.

Since a lot has been reworked, OSspita may display some bugs here and there. If you would like me to hotfix anything, please open an issue and I will do my best to address it.

[!] Note: The available historical characters use Llama 3.2:3B by default to ensure compatibility with low-VRAM GPUs. If your hardware allows it, you should switch to a larger model.

Features & fixes:

  • You can now converse with a few historical characters (experimental feature).
  • PDF parsing is more reliable now that a defective middleware library has been removed from the project (experimental).
  • Web search is also more reliable now that OSspita no longer relies on the unstable Duck-Duck-Scrape library to select which pages to scrape (still experimental, since the query agent needs some improvements).
  • UX improvements: a new infobar tracks the mode you're in.
  • Conversations and uploaded images are now persistent between sessions.

Next:

  • Since the conversations are now persistent, the dedicated panel will be fully revamped, and more quality-of-life options will be added (such as ordering, sorting, and folders).
  • Fifteen anime characters will be added for users to chat with.
  • Fifteen fictional characters will be added for users to chat with.
  • The current naive RAG algorithm will be improved: statistical chunking & HNSW (see the sketch after this list).
  • Image generation will become a major focus, with the integration of the ComfyUI API planned for the near future.
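
On the RAG bullet above, here is a rough sketch of what statistical (semantic) chunking can look like; HNSW would then index the resulting chunk embeddings for fast approximate nearest-neighbour search (e.g. via a library such as hnswlib-node). Everything below is an assumption-laden illustration: a local Ollama instance at the default port, the nomic-embed-text model, and an arbitrary similarity threshold that would need tuning.

```ts
// Rough sketch of statistical (semantic) chunking: embed every sentence,
// then start a new chunk wherever the similarity between neighbouring
// sentences drops, which suggests a topic shift.
async function embed(text: string): Promise<number[]> {
  const res = await fetch("http://localhost:11434/api/embeddings", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "nomic-embed-text", prompt: text }),
  });
  return (await res.json()).embedding;
}

function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const mag = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (mag(a) * mag(b));
}

async function chunkBySimilarity(text: string, threshold = 0.75): Promise<string[]> {
  // Naive sentence splitting; a production version would be more careful.
  const sentences = text.split(/(?<=[.!?])\s+/).filter(Boolean);
  if (sentences.length === 0) return [];

  const vectors = await Promise.all(sentences.map((s) => embed(s)));
  const chunks: string[] = [];
  let current: string[] = [sentences[0]];

  for (let i = 1; i < sentences.length; i++) {
    // A similarity drop between neighbours marks a probable chunk boundary.
    if (cosine(vectors[i - 1], vectors[i]) < threshold) {
      chunks.push(current.join(" "));
      current = [];
    }
    current.push(sentences[i]);
  }
  chunks.push(current.join(" "));
  return chunks;
}
```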

[Screenshot: OSspita main view]

OSspita | Alpha v1.0.4 / 20241207


OSspita | Alpha v1.0.4: Vision and Linux Performance Update

  1. Fixed a bug preventing agent creation and updates.
  2. You can now target multiple images in a single request using any model other than LLaMA Vision (MiniCPM-V recommended); see the sketch after this list.
  3. Massive performance improvements on Linux.
  4. OSspita is now compatible with Firefox on Linux.
  5. Various quality-of-life improvements.
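
For reference, here is a minimal sketch of a multi-image request against Ollama's /api/chat endpoint. This is an illustration rather than OSspita's internal code; the "minicpm-v" model tag assumes you have pulled MiniCPM-V.

```ts
// Minimal sketch: send several images in a single chat request to a local
// Ollama instance (default http://localhost:11434).
import { readFile } from "node:fs/promises";

async function compareImages(paths: string[]): Promise<string> {
  // Ollama expects each image as a base64-encoded string.
  const images = await Promise.all(
    paths.map(async (p) => (await readFile(p)).toString("base64")),
  );

  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "minicpm-v", // assumed model tag
      stream: false,
      messages: [
        {
          role: "user",
          content: "Compare these images and describe their differences.",
          images, // several images can be attached to one message
        },
      ],
    }),
  });
  return (await res.json()).message.content;
}
```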

Next:

  • Conversation persistence.
  • Ability to chat with historical personas.
  • Ability to configure the URL and port used to reach Ollama.

OSspita | Alpha v1.0.3 / 20241127


OSspita | Alpha v1.0.3: Llama 3.2 Vision integration.

You can now use Vision Language Models to analyze and describe any picture at your disposal. Here are some of the best use cases leveraging this technology:

  • Visual Question Answering: VLMs can answer questions about images, providing detailed information about content, objects, and scenes.

  • Image Captioning: These models can generate descriptive captions for images, making visual content more accessible.

  • Object Detection and Recognition: VLMs can identify and locate objects within images, often with high accuracy.

  • Content Creation: VLMs can generate text based on visual inputs, assisting in creating articles, social media posts, or product descriptions.

  • Image-Text Pairing: These models can suggest relevant text for images or vice versa, useful in marketing and advertising.

  • Data Extraction: You can easily extract data from graphs within pictures.

  • Image Translation: You can translate any text appearing in the image into one of the languages supported by your VLM.

NB: Unfortunately, for now, each request can only target one image. This is a conservative choice I had to make, since images sent in batches to Ollama can lead to unexpected crashes.
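
For reference, here is a minimal sketch of what such a single-image request can look like against Ollama's /api/generate endpoint, in line with the one-image-per-request limit above. This is an illustration, not OSspita's internal code, and the "llama3.2-vision" model tag is an assumption.

```ts
// Minimal sketch: describe one picture via a local Ollama instance.
import { readFile } from "node:fs/promises";

async function describeImage(path: string): Promise<string> {
  const image = (await readFile(path)).toString("base64"); // Ollama expects base64

  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3.2-vision", // assumed model tag
      prompt: "Describe this picture in detail.",
      images: [image], // a single image, to avoid the batching crashes noted above
      stream: false,
    }),
  });
  return (await res.json()).response;
}
```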

[Screenshot: OSspita main view]

OSspita | Alpha v1.0.2 / 20241121


OSspita Alpha v1.0.2: Web search algorithm enhanced.

Web search functionality has been significantly improved, particularly in the reranking of scraped pages.

This update results in more reliable and up-to-date information being returned to users.

While the reranking process still requires a lot of refinement, the current version offers substantially improved usability.
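
As an illustration of the general idea (not OSspita's exact pipeline), reranking can be as simple as embedding the query and each scraped page with a local model, then sorting by cosine similarity; the "nomic-embed-text" model tag below is an assumption.

```ts
// Illustrative reranking sketch: score each scraped page against the user
// query using embeddings from a local Ollama instance.
async function embed(text: string): Promise<number[]> {
  const res = await fetch("http://localhost:11434/api/embeddings", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "nomic-embed-text", prompt: text }),
  });
  return (await res.json()).embedding;
}

function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const mag = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (mag(a) * mag(b));
}

async function rerankPages(query: string, pages: string[]): Promise<string[]> {
  const queryVec = await embed(query);
  const scored = await Promise.all(
    pages.map(async (page) => ({
      page,
      score: cosine(queryVec, await embed(page)),
    })),
  );
  // Most relevant pages first.
  return scored.sort((a, b) => b.score - a.score).map((s) => s.page);
}
```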

Bug fix:

Fixed an issue where certain models failed to register during the final selection phase of the installation process.

OSspita Alpha v1.0.1 / 20241119


OSspita Alpha v1.0.1 introduces Agents Chaining, a method to customize your chat experience and solve complex tasks using natural language only.

For those who have been exploring Large Language Models (LLMs) recently, it has become increasingly clear that these AI systems struggle with overly complex or unfocused prompts. When users attempt to pack multiple instructions or queries into a single prompt, the resulting output can lack consistency and precision. To address this challenge, we are introducing Agents Chaining.

A Simple Use Case for Agents Chaining

Imagine you own an e-commerce business with substandard product descriptions in English only. You plan to target German, Spanish, and French customers in the coming months. Additionally, you want improved and translated descriptions formatted as JSON for future database integration.

Now, with Agents Chaining, it's possible to solve such a complex task with a single request and high reliability.

Agents Chaining works by splitting complex and unreliable prompts into smaller, more predictable ones.

Steps to Meet User Needs

Returning to our use case, here are the agents the user could create to meet their goal:

  1. Agent 1 could rewrite a half-baked description into five great ones.
  2. Agent 2 could combine the five previous descriptions into a perfect one.
  3. Agent 3 could write a title for the perfect description.
  4. Agent 4 could translate the description and its title into the three target languages.
  5. Agent 5 could format those translated descriptions into JSON.

As you can see, some of these split tasks are very simple, opening new possibilities for leveraging small models and reducing inference times.

The final chain would look like this:

[Diagram: example agent chain]
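
For the curious, here is a minimal sketch of the chaining mechanic itself, assuming a local Ollama instance; the Agent shape, model tags, and prompts are illustrative, not OSspita's actual implementation.

```ts
// Minimal agent-chaining sketch: each agent is a system prompt plus a model,
// and the output of one step becomes the input of the next.
interface Agent {
  name: string;
  model: string;
  systemPrompt: string;
}

async function runAgent(agent: Agent, input: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: agent.model,
      stream: false,
      messages: [
        { role: "system", content: agent.systemPrompt },
        { role: "user", content: input },
      ],
    }),
  });
  return (await res.json()).message.content;
}

// Runs the chain sequentially: each step's output feeds the next agent.
async function runChain(agents: Agent[], input: string): Promise<string> {
  let current = input;
  for (const agent of agents) {
    current = await runAgent(agent, current);
  }
  return current;
}

// Usage: runChain([rewriter, combiner, titler, translator, formatter], rawDescription)
```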

Creating Agents

The last question is: how do you create these agents?

It's quite simple. Create new agents via the dedicated tab, giving each one a system prompt that explains what the model will receive as input (the result of the preceding step) and what you expect as output. Since these system prompts can be very short, the model will rarely go astray. Then add the agents, in the right order, to your custom chain.
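
For illustration, the system prompt for an agent like Agent 5 above (the JSON formatter) could be as short as this; the wording is purely an example:

```
You will receive a product title and description translated into German,
Spanish, and French. Return a single valid JSON object with one key per
language, each containing a "title" and a "description" field. Output
nothing but the JSON.
```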

Now send your substandard product description to the chat while the chain tab is active, and you should get the expected JSON!