README.md (+16 −10)

@@ -1,15 +1,21 @@
# Talk to the City
- [Talk to the City (TTTC)](https://ai.objectives.institute/talk-to-the-city) Talk to the City is an open-source LLM interface for improving collective deliberation and decision-making by analyzing detailed, qualitative data. It aggregates responses and arranges similar arguments into clusters.
+ **Note**: this repo is under very active construction with a new separate Python server for LLM calls—details below are likely to change!
+ Please create a GitHub Issue for anything you encounter.
- This repo will allow you to setup your own instance of TTTC. The basic workflow is
+ Latest instructions for local development are [here](./contributing.md).
- 1. Submit a csv or google sheet with your survey data, either through the NextJS client or the Express API.
+ [Talk to the City (T3C)](https://ai.objectives.institute/talk-to-the-city) is an open-source LLM-enabled interface for improving collective deliberation and decision-making by analyzing detailed, qualitative data. It aggregates responses and organizes similar claims into a nested tree of main topics and subtopics.
+ This repo will allow you to set up your own instance of T3C.
+ The basic workflow is:
+ 1. Submit a CSV file or Google Sheet with your survey data, either through the NextJS client or the Express API.
2. The backend app will use an LLM to parse your data.
3. The backend app will upload a JSON file to a Google Cloud Storage Bucket that you provide.
4. Your report can be viewed by going to `http://[next client url]/report/[encoded url for your JSON file]`.
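
For orientation, here is a minimal TypeScript sketch of this workflow driven programmatically rather than through the UI. The endpoint path and the request/response shapes are assumptions (different revisions of these docs point `PIPELINE_EXPRESS_URL` at `http://localhost:8080/generate` and `http://localhost:8080/`), so check the Express routes for the real contract:

```ts
// Hypothetical client for the T3C pipeline; the endpoint path and the
// payload/response shapes are assumptions, not the confirmed Express API.
const EXPRESS_URL = "http://localhost:8080";

async function generateReport(csvText: string): Promise<string> {
  // Step 1: submit the survey data to the Express backend.
  const res = await fetch(`${EXPRESS_URL}/generate`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ data: csvText }), // assumed payload shape
  });
  if (!res.ok) throw new Error(`Pipeline request failed: ${res.status}`);

  // Steps 2-3 run server-side: the LLM parses the data and the backend
  // uploads a JSON report to your GCS bucket. Assume the response says
  // where that JSON landed.
  const { jsonUrl } = (await res.json()) as { jsonUrl: string };

  // Step 4: the report viewer expects the JSON URL percent-encoded.
  return `http://localhost:3000/report/${encodeURIComponent(jsonUrl)}`;
}
```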
- If you want to use Talk to the City without any setup, you can go to our website at TBD and follow the instructions on [how to use TTTC](#usage)
+ If you want to use Talk to the City without any setup, you can go to our website at TBD and follow the instructions on [how to use T3C](#usage)
## Setup
@@ -23,7 +29,7 @@ or if you have git ssh
### Google Cloud Storage and Services
- TTTC currently only supports using Google Cloud for storing report data out of the box.
+ T3C currently only supports using Google Cloud for storing report data out of the box.
First create a new storage bucket:
@@ -54,7 +60,7 @@ Set up gcloud SDK on your machine
You will need to add two .env files
- #### express-pipeline/.env
+ #### express-server/.env
Encode your google credentials using the service account key you downloaded earlier by running the command `base64 -i ./google-credentials.json`
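
Decoding is the mirror image. A minimal sketch of how server code could recover the service account JSON from that value, assuming the `.env` entry is named `GOOGLE_CREDENTIALS_ENCODED` (match it to whatever your `express-server/.env` actually defines):

```ts
// Assumed variable name; check your express-server/.env entry.
const encoded = process.env.GOOGLE_CREDENTIALS_ENCODED ?? "";

// Reverses `base64 -i ./google-credentials.json`: decode the base64
// string back into the original service account JSON.
const credentials = JSON.parse(
  Buffer.from(encoded, "base64").toString("utf-8"),
);
console.log(credentials.client_email); // sanity check
```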
@@ -85,11 +91,11 @@ You can add different types of .env files based on your needs for testing, dev,
(see [this](./contributing.md) to run in dev mode instead of prod)
- If you want to run a local version of the app that's not publically accessable:
+ If you want to run a local version of the app that's not publicly accessible:
- 1. Open up your terminal and navigate to the repo folder (i.e. /Desktop/tttc-light-js)
- 2.Run `npm run build`.
- 3. Run `npm run dev` to star the dev server. This should open up a server for the next-client and express-pipeline on localhost:3000 and localhost:8080 respectively.
+ 1. Open up your terminal and navigate to the repo folder (e.g. `/Desktop/tttc-light-js`).
+ 2. If this is your first time, build the repo: `cd common && npm run build`.
+ 3. Run `npm run dev` to start the dev server. This will run three servers in new terminal windows: the `next-client` frontend on `localhost:3000`, the `express-server` backend on `localhost:8080`, and the `pyserver` Python FastAPI server for the LLM calls on `localhost:8000`. A fourth terminal window will show a listener for `/common` that rebuilds the JS files when changes are made.

contributing.md (+32 −0)

@@ -1,5 +1,37 @@
# Working with Talk to the City
+ ## Quickstart
+ Latest instructions as we move to a separate Python server for LLM calls.
+ First, pull the latest from `main` in this repo and start in the root directory (`tttc-light-js`).
+ ### Set up dependencies
+ 1. In a new terminal, run `brew install redis` and `redis-server` to install Redis and start a local Redis server.
+ 2. From the repo root, `cd pyserver` and install the Python package dependencies in `requirements.txt` with your preferred method (e.g. `pip install -r requirements.txt`). Note: this will get more standardized soon.
+ 3. Add this line to `next-client/.env` and `next-client/.env.local`:

…

+ 1. From the root level, run `npm i` then `npm run dev`.
+ 2. This should open four windows: the `next-client` app front end at `localhost:3000`, the `express-server` app backend at `localhost:8080`, the `pyserver` Python FastAPI server for LLM calls at `localhost:8000`, and an overall Node watch process from the `common` directory. Ideally none of these windows show errors — if they do, we need to fix them first.
+ 3. In your browser, go to `http://localhost:3000/create` to view the T3C app. To create a report, fill out the fields and click "Generate report".
+ ### Viewing reports
+ 1. Once you click "Generate report", if the process is successful, you will see the text "UrlGCloud". This is a hyperlink — open it in a new tab/window.
+ 2. You will see the raw data dump of the generated report in JSON format. The URL of this report has the form `https://storage.googleapis.com/[GCLOUD_STORAGE_BUCKET]/[generated report id]`, where `GCLOUD_STORAGE_BUCKET` is an environment variable set in `express-server/.env` and the generated report id is an output of this pipeline run.
+ 3. The pretty version of the corresponding report lives at `http://localhost:3000/report/https%3A%2F%2Fstorage.googleapis.com%2F[GCLOUD_STORAGE_BUCKET]%2F[generated report id]`. You can copy and paste this pattern, substituting the generated report id (different for each report you create) and the `GCLOUD_STORAGE_BUCKET` (likely the same for all testing sessions); see the sketch below. Keep in mind that the separator is `%2F`, not the traditional URL slash :)
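
A small sketch of building that pretty URL in TypeScript, with illustrative placeholder values; `encodeURIComponent` is what produces the `%3A` and `%2F` separators shown above:

```ts
// Placeholder values: substitute your real bucket name and report id.
const bucket = process.env.GCLOUD_STORAGE_BUCKET ?? "my-test-bucket";
const reportId = "example-report-id"; // different for each report

const jsonUrl = `https://storage.googleapis.com/${bucket}/${reportId}`;

// encodeURIComponent percent-encodes ":" as "%3A" and "/" as "%2F",
// yielding exactly the report path format shown above.
const prettyUrl = `http://localhost:3000/report/${encodeURIComponent(jsonUrl)}`;
console.log(prettyUrl);
```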
+ ### Troubleshooting
+ Adding notes here from issues we surface in testing.
+ - Power cycling: a good first thing to try is restarting the process in any one of the windows by ending it and rerunning the previous command in that window (e.g. use the up arrow to find it).
+ ## Older instructions below
## Setup
[See the setup instructions in README](./README.md#setup)

examples/README.md (+7 −4)

@@ -1,12 +1,15 @@
# Machine Learning Workflows in T3C
+ Note: under very active construction as we separate a new Python server for LLM calls from the front-end app/existing backend in TypeScript.
## Quickstart
- To run the T3C pipeline with a local front-end & server:
+ To run the T3C pipeline with a local front-end & backend server, as well as a Python FastAPI server for LLM calls:
- 1. Client-side: update `next-client/.env` to point to the report generation endpoint (e.g. `export PIPELINE_EXPRESS_URL=http://localhost:8080/generate`) and run `npm run dev`
- 2. Server-side: update `express-pipeline/.env` with OpenAI/Anthropic/GCS keys and run `npm i && npm run dev`
- 3. Navigate to `localhost:3000` and upload a CSV file from the UI. Make sure the CSV is formatted as follows
+ 1. Client-side: update `next-client/.env` to point to the report generation endpoint (e.g. `export PIPELINE_EXPRESS_URL=http://localhost:8080/`) and run `npm run dev`
+ 2. Server-side: update `express-server/.env` with OpenAI/Anthropic/GCS keys and run `npm i && npm run dev`.
+ 3. Python FastAPI LLM interface: `cd pyserver`, then install the Python requirements with `pip install -r requirements.txt`. Install Redis and start a server: `brew install redis`, then `redis-server`. From `pyserver`, run `source .venv/bin/activate && fastapi dev main.py`.
+ 4. Navigate to `localhost:3000/create` and upload a CSV file from the UI. Make sure the CSV is formatted as follows