(perf) Filter construction improvements - Make It Smarter (#131)
* Feat make smarter (#91)

* fix: pass the parameters to the BigQuery generate model

* make it smarter

* merge

* prevent duplicate triggering of generation

* move logic into helper utils

* fix state

* fix the callbacks

* use params instead of a string

* add back sorting

* fix link generation

* remove some consoles

* check the filters twice; make sure we return a simpler data structure

* fix race condition for page load

* separate call for filter and visualization generation

* fix typescript errors

* add jest tests

* handle edge cases when custom URLs and custom dates appear in examples

* Fix package.json

* more fixes post merge

* fix the store

* fix sidepanel

* fix the link to explore

* improve speed by parallelizing calls to gemini

* fix summarization

* scroll into view

* add filter helper and tests

* add tests

* run on every commit

* only use node 16 and 18

* use a filter validator

* remove un-used console logs

* post-merge fixes

* fix tests

* improve scrolling

---------

Co-authored-by: Jawad Laraqui <[email protected]>
Co-authored-by: Flavio Di Berardino <[email protected]>

* Update useSendVertexMessage.ts

* Fixed issue with parsing cleaned url

Added a case statement to handle both cleaned and uncleaned urls

* Origin/feature generate examples (#1)

* Added generate examples script and trusted dashboard table

* Error handling

* terraform bug fix

* Handle url queryPrompt parameter

* generate_examples bug fix

* Added number filter documentation

* work in progress - append examples

* working and tested new concat function.

* tested

* Update looker_filter_doc.md

adding more context to filter doc on dates

* Add files via upload

Adding context file on timeframes and intervals

* Update useSendVertexMessage.ts

adding import and reference for context

* Update ExploreFilterHelper.ts

updating file

* mostly working with new configs

* working settings!

* refactoring to use lookml queries

* working with cloud function and new lookml model

* Update useSendVertexMessage.ts

reverted/removed inline date and number filter documentation in useSendVertexMessage

* made settings admin-only and hid them for regular users

* committing timeframe filter logic

* secure fetchProxy added

* working with BigQuery!

* Fixed problem with variability

* remove indeterminacy, fix filter bug

* more context with restored filters call

* bug fix

* bug fix

* add back in filter mods

* handle NOT NULL better

* merge fixes

* Update looker_filter_doc.md to help with last x timeframe filters

Adding more context to distinguish between last x days and x days ago type of filters.

* rm trusted dashboards

* work in progress on be installer

* testing cloud shell script

* testing cloud console run

* testing

* readme updated

* test

* testing

* test

* test

* test

* test

* test

* test

* test

* test

* readme edits

* updated readme

* readme update

* readme

* adding changes

* Fixed READMEs

* restore 'query object format' text.

* testing

* testing

* security setup

* updates for security testing

* error handling fixes and example script updates

* readme updates

* Revert "Merge branch 'marketplace_deploy' into make-it-smarter-bytecode"

This reverts commit 954b73b, reversing
changes made to cd7ee7e.

* Remove 'check the response'.

---------

Co-authored-by: Luka Fontanilla <[email protected]>
Co-authored-by: Jawad Laraqui <[email protected]>
Co-authored-by: Flavio Di Berardino <[email protected]>
Co-authored-by: dbarrbc <[email protected]>
Co-authored-by: Colin Roy-Ehri <[email protected]>
Co-authored-by: colin-roy-ehri <[email protected]>
Co-authored-by: carolbethperkins <[email protected]>
8 people authored Feb 6, 2025
1 parent 27539e2 commit 4f4a1bc
Showing 39 changed files with 11,799 additions and 3,837 deletions.
Binary file added .DS_Store
29 changes: 29 additions & 0 deletions .github/workflows/extension-tests.yml
@@ -0,0 +1,29 @@
name: Explore Assistant Extension CI

on:
  push:
    branches:
      - '**' # This will run on pushes to any branch

jobs:
  test:
    runs-on: ubuntu-latest

    strategy:
      matrix:
        node-version: [16.x, 18.x]

    steps:
      - uses: actions/checkout@v3
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v3
        with:
          node-version: ${{ matrix.node-version }}
      - name: Change to project directory
        run: cd explore-assistant-extension
      - name: Install dependencies
        working-directory: ./explore-assistant-extension
        run: npm ci
      - name: Run Unit tests
        working-directory: ./explore-assistant-extension
        run: npm test tests/unit_tests
6 changes: 3 additions & 3 deletions explore-assistant-examples/.env
@@ -1,5 +1,5 @@
##Update the variables in this environment file to automate the bash scripts for loading & updating the examples

PROJECT_ID="PROJECT_ID" ##Required. The Google Cloud project ID where your BigQuery dataset resides.
DATASET_ID="DATASET_ID" ##The ID of the BigQuery dataset. Defaults to explore_assistant.
EXPLORE_ID="MODEL:EXPLORE_ID" ##Required. A unique identifier for the dataset rows related to a specific use case or query (used in deletion and insertion).
PROJECT_ID=seraphic-ripsaw-360618
DATASET_ID=explore_assistant
EXPLORE_ID=nabc:spins_nlp
@@ -0,0 +1,56 @@
{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "vscode": {
     "languageId": "plaintext"
    }
   },
   "outputs": [],
   "source": [
    "#Convert CSV to JSON\n",
    "import csv\n",
    "import json\n",
    "\n",
    "\n",
    "def csv_to_json(csv_file, json_file):\n",
    "    \"\"\"Converts a CSV file to a JSON file.\n",
    "\n",
    "    Args:\n",
    "        csv_file: The path to the CSV file.\n",
    "        json_file: The path to the output JSON file.\n",
    "    \"\"\"\n",
    "    data = []\n",
    "    with open(csv_file, 'r') as csvfile:\n",
    "        csvreader = csv.DictReader(csvfile)\n",
    "        for row in csvreader:\n",
    "            data.append(dict(row))\n",
    "\n",
    "    with open(json_file, 'w') as jsonfile:\n",
    "        json.dump(data, jsonfile, indent=4)\n",
    "\n",
    "\n",
    "# Example usage\n",
    "csv_file = 'DMi EA Prompts - Explore Assistant Order Details - Cleansed.csv'\n",
    "json_file = 'dmi_examples.json'\n",
    "csv_to_json(csv_file, json_file)\n",
    "print(f\"CSV converted to JSON: {json_file}\")"
   ]
  }
 ],
 "metadata": {
  "language_info": {
   "name": "python"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
56 changes: 56 additions & 0 deletions explore-assistant-examples/convert_examples.ipynb
@@ -0,0 +1,56 @@
{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "vscode": {
     "languageId": "plaintext"
    }
   },
   "outputs": [],
   "source": [
    "#Convert CSV to JSON\n",
    "import csv\n",
    "import json\n",
    "\n",
    "\n",
    "def csv_to_json(csv_file, json_file):\n",
    "    \"\"\"Converts a CSV file to a JSON file.\n",
    "\n",
    "    Args:\n",
    "        csv_file: The path to the CSV file.\n",
    "        json_file: The path to the output JSON file.\n",
    "    \"\"\"\n",
    "    data = []\n",
    "    with open(csv_file, 'r') as csvfile:\n",
    "        csvreader = csv.DictReader(csvfile)\n",
    "        for row in csvreader:\n",
    "            data.append(dict(row))\n",
    "\n",
    "    with open(json_file, 'w') as jsonfile:\n",
    "        json.dump(data, jsonfile, indent=4)\n",
    "\n",
    "\n",
    "# Example usage\n",
    "csv_file = '/Users/kalib/Downloads/NABC Examples - examples_cleansed.csv'\n",
    "json_file = 'nabc_examples.json'\n",
    "csv_to_json(csv_file, json_file)\n",
    "print(f\"CSV converted to JSON: {json_file}\")"
   ]
  }
 ],
 "metadata": {
  "language_info": {
   "name": "python"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
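The two notebooks above ship the same `csv_to_json` helper with different hard-coded input paths. As a sanity check, here is a self-contained sketch that exercises that helper on a tiny synthetic CSV; the column names `input`/`output` are illustrative only, not the repo's actual example schema.

```python
# Self-contained sanity check for the csv_to_json helper from the notebooks above.
import csv
import json
import os
import tempfile

def csv_to_json(csv_file, json_file):
    """Converts a CSV file to a JSON file (same logic as the notebook cell)."""
    data = []
    with open(csv_file, 'r') as csvfile:
        for row in csv.DictReader(csvfile):
            data.append(dict(row))
    with open(json_file, 'w') as jsonfile:
        json.dump(data, jsonfile, indent=4)

# Build a tiny CSV, convert it, and read the result back.
with tempfile.TemporaryDirectory() as tmp:
    csv_path = os.path.join(tmp, 'examples.csv')
    json_path = os.path.join(tmp, 'examples.json')
    with open(csv_path, 'w', newline='') as f:
        writer = csv.DictWriter(f, fieldnames=['input', 'output'])
        writer.writeheader()
        writer.writerow({'input': 'top 5 orders', 'output': 'fields=order_id&limit=5'})
    csv_to_json(csv_path, json_path)
    with open(json_path) as f:
        rows = json.load(f)

print(rows)  # → [{'input': 'top 5 orders', 'output': 'fields=order_id&limit=5'}]
```

Every CSV row round-trips as a string-valued dict, which matches how the load scripts below expect their JSON input.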
2 changes: 1 addition & 1 deletion explore-assistant-examples/load_examples.sh
100644 → 100755
@@ -2,7 +2,7 @@

source .env
TABLE_ID="explore_assistant_examples" ##The ID of the BigQuery table where the data will be inserted. Set to explore_assistant_examples.
JSON_FILE="examples.json" ##The path to the JSON file containing the data to be loaded. Set to examples.json.
JSON_FILE="nabc_examples.json" ##The path to the JSON file containing the data to be loaded. Set to examples.json.

python load_examples.py \
--project_id $PROJECT_ID \
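The shell wrapper above feeds the `.env` values into `load_examples.py`. That script is not shown in this diff, so the following is only a hypothetical sketch of what such a loader might look like; the flag names mirror the wrapper, the row shape is an assumption, and the BigQuery call uses the real `google-cloud-bigquery` client method `insert_rows_json`.

```python
# Hypothetical sketch of a loader in the spirit of load_examples.py (not the
# repo's actual script): read a JSON file of examples and insert them into a
# BigQuery table, tagged by explore id.
import argparse
import json

def build_rows(examples, explore_id):
    # Tag each example with the explore id so rows for one use case can be
    # targeted later (the .env comments above mention deletion and insertion).
    return [{"explore_id": explore_id, "examples": json.dumps(ex)} for ex in examples]

def main(argv=None):
    parser = argparse.ArgumentParser(description="Load examples into BigQuery (sketch).")
    parser.add_argument("--project_id", required=True)
    parser.add_argument("--dataset_id", default="explore_assistant")
    parser.add_argument("--table_id", default="explore_assistant_examples")
    parser.add_argument("--explore_id", required=True)
    parser.add_argument("--json_file", default="examples.json")
    args = parser.parse_args(argv)

    with open(args.json_file) as f:
        examples = json.load(f)
    rows = build_rows(examples, args.explore_id)

    # Imported lazily so build_rows stays testable without GCP credentials.
    from google.cloud import bigquery
    client = bigquery.Client(project=args.project_id)
    table = f"{args.project_id}.{args.dataset_id}.{args.table_id}"
    errors = client.insert_rows_json(table, rows)  # returns [] on success
    if errors:
        raise RuntimeError(f"BigQuery insert failed: {errors}")

# Invoked the way the wrapper does, e.g.:
#   python load_examples.py --project_id $PROJECT_ID --explore_id $EXPLORE_ID
```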