Lakehouse Analytics & Advanced ML
Important: This package requires OpenAI and HuggingFace API keys. Remember to run from a folder that contains the .streamlit/secrets.toml file.
python -m pip install llm-explorer
llm_explorer
The initial load can take some time, as the app downloads the model and the tokenizer. Remember to include the secrets.toml file under the .streamlit/ folder.
Clone the repository
git clone https://github.com/Occlusion-Solutions/llm_explorer.git
Install the package
cd llm_explorer && make install
Run the package
llm_explorer
After cloning, create a virtual environment
conda create -n llm_explorer python=3.10
conda activate llm_explorer
Install the requirements
pip install -r requirements.txt
Run the Python installation
python setup.py install
llm_explorer
Log in with the [email protected] user and the DEMO@occlusion password.
The deployment requires a secrets.toml file created under .streamlit/:
touch .streamlit/secrets.toml
It should have a schema like this:
[connections.openai]
api_key="sk-..." # OpenAI API Key
[connections.huggingface]
api_key="shf_..." # HuggingFace API Key
[connections.databricks]
server_hostname="your databricks host"
http_path="http path under cluster JDBC/ODBC connectivity"
access_token="your databricks access token"
An assistant query engine that can be asked questions in natural language, with table references, and that helps generate queries. Execution of the generated queries is left to the user.
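A minimal sketch of this generate-then-review pattern, using the legacy openai client; the prompt and table schema here are illustrative, not the app's actual prompt:

```python
import openai

openai.api_key = "sk-..."  # from secrets.toml

# Hypothetical prompt: the app would inject the schemas of the tables it can see.
prompt = (
    "Given the table padalloc(WELL_CODE, PROD_GAS_VOLUME_MCF), "
    "write a SQL query that returns the top 10 producing wells."
)
resp = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
print(resp["choices"][0]["message"]["content"])  # review, then execute manually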
It uses the pandas agent to generate and execute the queries. This is a more natural way of querying the data, and the agent operates autonomously until it believes it has found an answer.
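A minimal sketch of the pandas-agent pattern, assuming a LangChain version of that era; the dataframe and question are placeholders:

```python
import pandas as pd
from langchain.agents import create_pandas_dataframe_agent
from langchain.llms import OpenAI

df = pd.read_csv("padalloc.csv")  # hypothetical export of the padalloc table

agent = create_pandas_dataframe_agent(OpenAI(temperature=0), df, verbose=True)

# The agent loops (thought -> action -> observation) until it believes
# it has found an answer, then returns it.
agent.run("Which 10 wells have the highest total PROD_GAS_VOLUME_MCF?")
```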
It uses the HuggingFace Transformers Agent chat to operate in a conversational way.
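A minimal sketch of that conversational mode, assuming the Transformers Agent API introduced in transformers 4.29; the StarCoder endpoint and prompts are taken from the Transformers documentation examples:

```python
from transformers import HfAgent

# One of the documented default inference endpoints for HfAgent.
agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder")

# chat() keeps state across turns, unlike the one-shot run().
agent.chat("Draw me a picture of rivers and lakes")
agent.chat("Transform the picture so that there is a rock in there")
```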
The agent is queried for the top 10 producing wells. It identifies the tables it has access to and understands that the request can be satisfied by the padalloc table. It then creates a query that returns the top 10 producing assets and returns the results.
> Entering new AgentExecutor chain...
Observation: logs, wells
Thought: I should look at the schema of the microchip_logs and padalloc tables to see what columns I can use.
Action: schema_sql_db
Action Input: "wells"
Observation: DDL
Thought: I should query the padalloc table to get the top 10 producing wells.
Action: query_sql_db
Action Input: "SELECT WELL_CODE, SUM(PROD_GAS_VOLUME_MCF) AS total_gas_volume_mcf FROM padalloc GROUP BY WELL_CODE ORDER BY total_gas_volume_mcf DESC LIMIT 10"
Observation: results_dataframe
Thought: I now know the top 10 producing wells.
Final Answer: The top 10 producing wells are 1222344, 1212560, 1222345, 1212503, 1222335, 1222340, 1222338, 1222367, 1220189, and 1222352.
> Finished chain.
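The trace above matches the tool names of LangChain's SQL agent (schema_sql_db, query_sql_db). A hedged sketch of how such an agent could be wired to Databricks with a LangChain version of that generation; the connection URI placeholders come from the secrets.toml fields above, and the databricks SQLAlchemy dialect (databricks-sql-connector) is assumed to be installed:

```python
from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.llms import OpenAI
from langchain.sql_database import SQLDatabase

# Placeholders map to server_hostname, http_path, and access_token above.
db = SQLDatabase.from_uri(
    "databricks://token:<access_token>@<server_hostname>?http_path=<http_path>"
)
llm = OpenAI(temperature=0)
agent = create_sql_agent(
    llm=llm,
    toolkit=SQLDatabaseToolkit(db=db, llm=llm),
    verbose=True,  # prints the thought/action/observation chain as above
)
agent.run("What are the top 10 producing wells?")
```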
This is an adapted implementation from the GitHub repository. See the contributions list for more details.