This repository has been archived by the owner on Aug 8, 2024. It is now read-only.

Commit

Deploying to main from @ unifyai/unify-docs@21b9a88 🚀
ivy-seed committed May 15, 2024
1 parent 918e1f9 commit 7974a34
Showing 69 changed files with 7,064 additions and 94 deletions.
Binary file modified hub/.doctrees/demos/demos/LlamaIndex/README.doctree
Binary file modified hub/.doctrees/demos/demos/Unify/LLM-Wars/README.doctree
Binary file modified hub/.doctrees/demos/demos/template/README_TEMPLATE.doctree
Binary file modified hub/.doctrees/demos/langchain.doctree
Binary file modified hub/.doctrees/demos/llamaindex.doctree
Binary file modified hub/.doctrees/demos/unify.doctree
Binary file modified hub/.doctrees/environment.pickle
Binary file modified hub/.doctrees/index.doctree


56 changes: 56 additions & 0 deletions hub/_sources/demos/demos/LangChain/RAG_playground/README.md.txt
# RAG Playground 🛝

[Demo](https://github.com/Anteemony/RAG-Playground/assets/103512255/0d944420-e3e8-43cb-aad3-0a459d8d0318)

<video width="640" height="480" autoplay>
<source src="../../../../_static/RAG_Playground.mp4" type="video/mp4">
Your browser does not support the video tag.
</video>


A live version of the application is hosted on Streamlit; try it out yourself using the link below:
[RAG Playground on Streamlit](https://unify-rag-playground.streamlit.app/)

## Introduction
A Streamlit application that enables users to upload a PDF file and chat with an LLM to analyse the document in a playground environment.
Compare the performance of LLMs across endpoint providers to find the best possible configuration for your speed, latency, and cost requirements using the dynamic routing feature.
Experiment intuitively by tuning model hyperparameters such as temperature, chunk size, and chunk overlap, or try the model with and without conversational capabilities.

You can find more model and provider information in the [Unify benchmark interface](https://unify.ai/hub).
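The chunk size and chunk overlap settings mentioned above control how the uploaded document is split before retrieval. As a minimal sketch of the idea (the function and values here are illustrative, not the app's actual code):

```python
# Illustrative sketch of chunk_size / chunk_overlap: split a document into
# fixed-size windows that share `chunk_overlap` characters with their
# neighbour. Real RAG pipelines usually split on tokens or sentences.

def chunk_text(text: str, chunk_size: int, chunk_overlap: int) -> list[str]:
    """Split `text` into windows of `chunk_size` characters, each
    overlapping the previous one by `chunk_overlap` characters."""
    if chunk_overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk size")
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

chunks = chunk_text("abcdefghij", chunk_size=4, chunk_overlap=2)
print(chunks)  # ['abcd', 'cdef', 'efgh', 'ghij', 'ij']
```

A larger overlap preserves more context across chunk boundaries, at the cost of indexing more (duplicated) text.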

## Usage

1. Visit the application: [RAG Playground](https://unify-rag-playground.streamlit.app/)
2. Input your Unify API Key. If you don’t have one yet, log in to the [Unify Console](https://console.unify.ai/) to get yours.
3. Select the Model and endpoint provider of your choice from the drop-down menu. You can find both model and provider information in the benchmark interface.
4. Upload your document(s) and click the Submit button.
5. Enjoy the application!

## Repository and Local Deployment

The repository is located at [RAG Playground Repository](https://github.com/Anteemony/RAG-Playground).

To run the application locally, follow these steps:

1. Clone the repository to your local machine.
2. Set up your virtual environment and install the dependencies from `requirements.txt`:

```bash
python -m venv .venv
source .venv/bin/activate # On Windows use `.venv\Scripts\activate`
pip install -r requirements.txt
```

3. Run `rag_script.py` with the Streamlit module:

```bash
python -m streamlit run rag_script.py
```

## Contributors

| Name | GitHub Profile |
|------|----------------|
| Anthony Okonneh | [AO](https://github.com/Anteemony) |
| Oscar Arroyo Vega | [OscarAV](https://github.com/OscarArroyoVega) |
| Martin Oywa | [Martin Oywa](https://github.com/martinoywa) |
82 changes: 82 additions & 0 deletions hub/_sources/demos/demos/LlamaIndex/RAGPlayground/README.md.txt
# RAG Playground
[Demo](https://github.com/abhi2596/rag_demo/assets/80634226/08f6c7c4-65e3-49b4-bfb1-9a5db2cce248)

<video width="640" height="480" autoplay>
<source src="../../../../_static/RAG_LLamaIndex.mp4" type="video/mp4">
Your browser does not support the video tag.
</video>


A live version of the application is hosted on Streamlit; try it out yourself using the link below:
[RAG Playground on Streamlit](https://unifyai-rag-playground.streamlit.app/)

## Introduction

The RAG Playground is an application designed to facilitate question-answering tasks based on uploaded PDF documents. It leverages LlamaIndex for RAG functionalities and utilizes Streamlit for the user interface.

## Key Features

- **PDF Upload:** Easily upload PDF files to the application.
- **Questioning:** Ask questions about the uploaded PDF documents.
- **RAG Integration:** Utilize LlamaIndex for RAG capabilities.
- **Embeddings:** Convert text to embeddings using the BAAI/bge-small-en-v1.5 model.
- **Reranker:** Reorder search results based on relevance to queries.
- **Streamlit Optimization:** Enhance performance using `@st.experimental_fragment` and `@st.cache_resource`.

## Project Workflow

1. **PDF Processing:**
- Load PDF files and extract text using PDFReader.
   - Load data into Documents in LlamaIndex.
2. **Chunking and Conversion:**
- Chunk text and convert it into nodes using `VectorStoreIndex.from_documents`.
- Convert text to embeddings using the BAAI/bge-small-en-v1.5 model.
3. **Search Optimization:**
- Implement a reranker to reorder search results based on query relevance.
- Display top-ranked results after reranking.
4. **Interface Optimization:**
- Build the user interface using Streamlit.
- Optimize Streamlit performance with `@st.experimental_fragment` and `@st.cache_resource`.
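The reranking in step 3 can be sketched as plain cosine-similarity scoring. The hand-made two-dimensional vectors below are stand-ins for the BAAI/bge-small-en-v1.5 embeddings the app actually computes; the function names are hypothetical:

```python
import math

# Toy rerank: score each retrieved node's embedding against the query
# embedding by cosine similarity and return the best matches first.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def rerank(query_vec, nodes, top_k=2):
    """Return the top_k (text, score) pairs, highest similarity first."""
    scored = [(text, cosine(query_vec, vec)) for text, vec in nodes]
    return sorted(scored, key=lambda p: p[1], reverse=True)[:top_k]

nodes = [("intro", (0.1, 0.9)), ("methods", (0.9, 0.2)), ("refs", (0.5, 0.5))]
top = rerank((1.0, 0.0), nodes)
print([text for text, _ in top])  # ['methods', 'refs']
```

In the real pipeline the reranker model rescored query–passage pairs directly, which is typically more accurate than raw embedding similarity.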

## Tech Stack Used

- LlamaIndex
- Streamlit
- BAAI/bge-small-en-v1.5 model

## Repository and Deployment
GitHub: [abhi2596/UnifyAI_RAG_playground](https://github.com/abhi2596/UnifyAI_RAG_playground/tree/main)
Streamlit App: [unifyai-rag-playground.streamlit.app](https://unifyai-rag-playground.streamlit.app/)

Instructions to run locally:

1. First, create a Python virtual environment:

```bash
python -m venv <virtual env name>
```
2. Activate it and install Poetry:

```bash
source <virtual env name>/Scripts/activate  # Windows
source <virtual env name>/bin/activate      # Linux/Unix
pip install poetry
```
3. Clone the repository:

```bash
git clone https://github.com/abhi2596/UnifyAI_RAG_playground/tree/main
```
4. Run the following commands:

```bash
poetry install
cd rag
streamlit run app.py
```

## Contributors

| Name | GitHub Profile |
|------|----------------|
| Abhijeet Chintakunta | [abhi2596](https://github.com/abhi2596) |
2 changes: 1 addition & 1 deletion hub/_sources/demos/demos/LlamaIndex/README.md.txt
# LlamaIndex Projects
This folder contains various projects built using the LlamaIndex Unify Integration. Please head over to the corresponding folder of the project for more details.

## Introduction
Provide a brief introduction to your project here. Describe what your project demonstrates, the tech stack used, the motivation behind the project, and briefly explain the necessary concepts used. Feel free to break down this section into multiple subsections depending on your project.
28 changes: 28 additions & 0 deletions hub/_sources/demos/demos/Unify/Chatbot_Arena/CONTRIBUTING.md.txt
# How to become a contributor and submit your own code
## Contributor License Agreements
We'd love to accept your sample apps and patches! Before we can take them, we
have to jump a couple of legal hurdles.
Please fill out either the individual or corporate Contributor License Agreement
(CLA).
* If you are an individual writing original source code and you're sure you
own the intellectual property, then you'll need to sign an
[individual CLA](https://developers.google.com/open-source/cla/individual).
* If you work for a company that wants to allow you to contribute your work,
then you'll need to sign a
[corporate CLA](https://developers.google.com/open-source/cla/corporate).
Follow either of the two links above to access the appropriate CLA and
instructions for how to sign and return it. Once we receive it, we'll be able to
accept your pull requests.
## Contributing A Patch
1. Submit an issue describing your proposed change to the repo in question.
1. The repo owner will respond to your issue promptly.
1. If your proposed change is accepted, and you haven't already done so, sign a
Contributor License Agreement (see details above).
1. Fork the desired repo, develop and test your code changes.
1. Ensure that your code adheres to the existing style in the sample to which
you are contributing. Refer to the
[Google Cloud Platform Samples Style Guide](https://github.com/GoogleCloudPlatform/Template/wiki/style.html)
for the recommended coding standards for this organization.
1. Ensure that your code has an appropriate set of unit tests which all pass.
1. Submit a pull request.
120 changes: 120 additions & 0 deletions hub/_sources/demos/demos/Unify/Chatbot_Arena/README.md.txt
# Chatbot Arena

[Demo](https://github.com/Kacper-W-Kozdon/demos-Unify/assets/102428159/e5908b4e-0cd7-445d-a1ac-3086be2db5ba)

<video width="640" height="480" autoplay>
<source src="../../../../_static/Chatbot_arena.mp4" type="video/mp4">
Your browser does not support the video tag.
</video>


A live version of the application is hosted on Streamlit; try it out yourself using the link below:
[ChatBot Arena on Streamlit](https://llm-playground-unify.streamlit.app/)

<p align="center">
<em>This Streamlit application provides a user interface for interacting with Unify models through chat. It allows users to select models and providers, input text, and view the conversation history with AI assistants.
</em>
</p>
<p align="center">
	<!-- Shields.io badges not used with skill icons. --></p>
<p align="center">
<em>Developed with the software and tools below.</em>
</p>
<p align="center">
<a href="https://skillicons.dev">
<img src="https://skillicons.dev/icons?i=python,docker,github,gcp">
</a></p>


### Overview
This Streamlit application provides a user interface for interacting with Unify models through chat. It allows users to select models and providers, input text, and view the conversation history with two AI assistants at a time. The app collects data on users' assessments of the models' comparative performance and provides easy access to the global leaderboards, which can serve as a complementary way to assess model performance.


### Motivation
The challenge project "Chatbot arena" is based on [this article](https://arxiv.org/abs/2403.04132).


### Features

- **Chat UI**: Interactive chat interface to communicate with AI assistants.
- **Endpoint from Unify**: Choose from a variety of models and providers.
- **Conversation History**: View and track the conversation history with each model.
- **Clear History**: Option to clear the conversation history for a fresh start.
- **Global Leaderboards**: The votes are saved locally and [globally](https://docs.google.com/spreadsheets/d/10QrEik70RYY_LM8RW8GGq-vZWK2e1dka6agRGtKZPHU/edit#gid=0).
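The arena paper the project is based on ranks models from exactly this kind of pairwise vote using Elo-style ratings. As an illustrative sketch of how one vote could update a leaderboard (this is an assumption for illustration, not the app's actual scoring code):

```python
# Hypothetical Elo-style update for a single pairwise vote between two models.
# K controls how far one vote moves the ratings.

def elo_update(r_winner: float, r_loser: float, k: float = 32.0):
    """Return updated (winner, loser) ratings after one vote."""
    # Expected score of the winner before the vote, from the Elo formula.
    expected_win = 1.0 / (1.0 + 10 ** ((r_loser - r_winner) / 400.0))
    delta = k * (1.0 - expected_win)
    return r_winner + delta, r_loser - delta

ratings = {"model_a": 1000.0, "model_b": 1000.0}
# A user votes that model_a gave the better answer:
ratings["model_a"], ratings["model_b"] = elo_update(ratings["model_a"],
                                                    ratings["model_b"])
print(ratings)  # equal-rated opponents: winner gains 16, loser drops 16
```

With many votes aggregated this way, a stable ordering of models emerges even though each user only ever compares two at a time.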




### How to use the app


1. Input Unify API Key: Enter your Unify API key in the provided text input box on the sidebar.

2. Select endpoints: Choose the models and providers from the sidebar dropdown menus.

3. Start Chatting: Type your message in the chat input box and press "Enter" or click the "Send" button.

4. View Conversation History: The conversation history with the AI assistant for each model is displayed in separate containers.

5. Clear History: You can clear the conversation history by clicking the "Clear History" button.


### Getting Started

**System Requirements:**

* **Python**
* **streamlit**
* For everything else, see the `requirements.txt` and `requirements-test.txt` files


#### Easy installation

<h4>Install from <code>source</code> in order to use the attached Docker file.</h4>

---

## Repository and Deployment

### Setup (without Docker)

1. Clone this repository:

```bash
git clone https://github.com/samthakur587/LLM_playground
```
2. Change directory:
```bash
cd LLM_playground
```


3. Install the required dependencies:

```bash
pip install -r requirements.txt
```

### Run the app
```bash
streamlit run Chatbot_arena.py
```

---
## Contributors
<p align="center">



| Name | GitHub Profile |
|------|----------------|
| Samunder Singh | [samthakur587](https://github.com/samthakur587) |
| Kacper Kożdoń | [Kacper-W-Kozdon](https://github.com/Kacper-W-Kozdon) |

<a href="https://github.com/samthakur587/LLM_playground/graphs/contributors">
<img src="https://contrib.rocks/image?repo=samthakur587/LLM_playground">
</a>
</p>
---


6 changes: 4 additions & 2 deletions hub/_sources/demos/demos/Unify/LLM-Wars/README.md.txt
Your browser does not support the video tag.
</video>


A live version of the application is hosted on Streamlit; try it out yourself using the link below:
[LLM Wars on Streamlit](https://unify-llm-wars-tftznesvztdt2bwsqgub3r.streamlit.app/)

### Overview
**LLM Wars** is a web application built with Streamlit that sets up a dynamic competition between two Large Language Models (LLMs). The LLMs engage in a structured debate where they challenge each other by generating complex prompts, responding to those prompts, and evaluating the responses. This application demonstrates the natural language capabilities of modern AI models in an interactive competitive environment with visualizations.

The source code for **LLM Wars** is part of a larger collection of demos. You can access the original source code for this specific project [here](https://github.com/leebissessar5/Unify-LLM-Wars).


### Running Locally
To run **LLM Wars** locally, clone the repository, then open up a terminal window from this directory (where this README is located) and follow these steps:

46 changes: 46 additions & 0 deletions hub/_sources/demos/demos/Unify/SemanticRouter/README.md.txt
# Semantic Router
[Demo](https://github.com/ithanigaikumar/demos/assets/107815119/33ceff47-3495-44a9-aad7-c0a3ba3433a8)

<video width="640" height="480" autoplay>
<source src="../../../../_static/semanticrouterapplication.mp4" type="video/mp4">
Your browser does not support the video tag.
</video>


A live version of the application is hosted on Streamlit; try it out yourself using the link below:
[Semantic Router on Streamlit](https://semanticrouterchatbot.streamlit.app/)

## Introduction
This semantic router Streamlit application optimizes user query handling by dynamically routing each query to the most appropriate model based on semantic similarity. A routing layer is included to help with this process. The system supports predefined routes for domains like maths and coding, and allows users to create custom routes for unique needs. By ensuring that queries are processed by the best-suited model, the semantic router enhances output quality and improves cost efficiency, delivering more accurate and contextually relevant responses.
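As a rough illustration of the routing idea, the sketch below scores a query against example utterances for each route. Token overlap (Jaccard similarity) stands in for the learned embeddings the semantic-router package actually uses, and all names and utterances here are hypothetical:

```python
# Toy semantic router: each route is defined by example utterances; a query
# goes to the route whose best-matching utterance overlaps it most.

ROUTES = {
    "maths": ["solve this equation", "what is the integral of x"],
    "coding": ["fix this python bug", "write a function in java"],
}

def jaccard(a: set, b: set) -> float:
    """Overlap between two token sets, 0.0 when both are empty."""
    return len(a & b) / len(a | b) if a | b else 0.0

def route(query: str) -> str:
    """Return the name of the route closest to `query`."""
    words = set(query.lower().split())
    scores = {
        name: max(jaccard(words, set(u.split())) for u in utterances)
        for name, utterances in ROUTES.items()
    }
    return max(scores, key=scores.get)

print(route("can you fix this bug in my python code"))  # coding
```

The real application replaces the token overlap with embedding similarity, so paraphrases with no shared words still land on the right route.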


## Repository and deployment
Access the app using the following URL: [https://semanticrouterchatbot.streamlit.app/](https://semanticrouterchatbot.streamlit.app/), or follow the sections below to get started.
Fork from this repository: [https://github.com/ithanigaikumar/SemanticRouter](https://github.com/ithanigaikumar/SemanticRouter)
To set up the project, you will need to install several Python packages. You can do this using pip, Python's package installer. Execute the following commands in your terminal or command prompt to install the required packages.

**Install Required Packages:**
```bash
pip install streamlit
pip install -U semantic-router==0.0.34
pip install unifyai
pip install transformers
pip install torch
```
Make sure that each command completes successfully before proceeding to the next step. If you encounter any issues during the installation process, check your Python and pip versions, and ensure your environment is configured correctly.

**Launch the App:**

```bash
streamlit run app.py
```



## Contributors

| Name | GitHub Username |
|-------------------------------|-----------------|
| Indiradharshini Thanigaikumar | [ithanigaikumar](https://github.com/ithanigaikumar) |
| Jeyabalan Nadar | [jeyabalang](https://github.com/jeyabalang) |
