Commit c704588

Merge pull request #60 from AIObjectives/pyserver_readme
readme first draft for latest local dev mode
2 parents 8010c25 + bb4e355

File tree

3 files changed: +55 −14 lines changed

README.md

Lines changed: 16 additions & 10 deletions
@@ -1,15 +1,21 @@
  # Talk to the City

- [Talk to the City (TTTC)](https://ai.objectives.institute/talk-to-the-city) Talk to the City is an open-source LLM interface for improving collective deliberation and decision-making by analyzing detailed, qualitative data. It aggregates responses and arranges similar arguments into clusters.
+ **Note**: this repo is under very active construction with a new separate Python server for LLM calls—details below are likely to change!
+ Please create a GitHub Issue for anything you encounter.

- This repo will allow you to setup your own instance of TTTC. The basic workflow is
+ Latest instructions for local development are [here](./contributing.md).

- 1. Submit a csv or google sheet with your survey data, either through the NextJS client or the Express API.
+ [Talk to the City (T3C)](https://ai.objectives.institute/talk-to-the-city) is an open-source LLM-enabled interface for improving collective deliberation and decision-making by analyzing detailed, qualitative data. It aggregates responses and organizes similar claims into a nested tree of main topics and subtopics.
+
+ This repo will allow you to set up your own instance of T3C.
+ The basic workflow is
+
+ 1. Submit a CSV file or Google Sheet with your survey data, either through the NextJS client or the Express API.
  2. The backend app will use an LLM to parse your data.
  3. The backend app will upload a JSON file to a Google Cloud Storage Bucket that you provide.
  4. Your report can be viewed by going to `http://[next client url]/report/[encoded url for your JSON file]`.

- If you want to use Talk to the City without any setup, you can go to our website at TBD and follow the instructions on [how to use TTTC](#usage)
+ If you want to use Talk to the City without any setup, you can go to our website at TBD and follow the instructions on [how to use T3C](#usage)

  ## Setup

@@ -23,7 +29,7 @@ or if you have git ssh

  ### Google Cloud Storage and Services

- TTTC currently only supports using Google Cloud for storing report data out of the box.
+ T3C currently only supports using Google Cloud for storing report data out of the box.

  First create a new storage bucket:


@@ -54,7 +60,7 @@ Set up gcloud SDK on your machine

  You will need to add two .env files

- #### express-pipeline/.env
+ #### express-server/.env

  Encode your google credentials using the service account key you downloaded earlier by running the command `base64 -i ./google-credentials.json`

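On the server side, that encoded value has to be turned back into JSON before it can be handed to a Google Cloud client. A minimal sketch of the decoding step in Node, assuming the base64 string is stored in an env var; the variable name here is a hypothetical placeholder, so use whatever key your `express-server/.env` actually defines:

```ts
// Sketch: decode the base64-encoded service account key from .env.
// GOOGLE_CREDENTIALS_ENCODED is a hypothetical name, not necessarily
// the one this repo uses.
const encoded = process.env.GOOGLE_CREDENTIALS_ENCODED ?? "";
const credentials = JSON.parse(
  Buffer.from(encoded, "base64").toString("utf-8"),
);
console.log(credentials.client_email); // service account email from the key file
```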
@@ -85,11 +91,11 @@ You can add different types of .env files based on your needs for testing, dev,

  (see [this](./contributing.md) to run in dev mode instead of prod)

- If you want to run a local version of the app that's not publically accessable:
+ If you want to run a local version of the app that's not publicly accessible:

- 1. Open up your terminal and navigate to the repo folder (i.e. /Desktop/tttc-light-js)
- 2. Run `npm run build`.
- 3. Run `npm run dev` to star the dev server. This should open up a server for the next-client and express-pipeline on localhost:3000 and localhost:8080 respectively.
+ 1. Open up your terminal and navigate to the repo folder (e.g. `/Desktop/tttc-light-js`)
+ 2. If this is your first time, build the repo: `cd common && npm run build`.
+ 3. Run `npm run dev` to start the dev server. This will run three servers in new terminal windows: the `next-client` frontend on `localhost:3000`, the `express-server` backend on `localhost:8080`, and the `pyserver` Python FastAPI server for the LLM calls on `localhost:8000`. A fourth terminal window will show a listener for `/common` that rebuilds the JS files when changes are made.
  4. This build will be optimized for production.

  #### Using docker locally (not recommended)
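Step 4 of the workflow above views a report by URL-encoding the JSON file's location into the viewer path. A minimal sketch of how that URL can be assembled with the standard `encodeURIComponent`; the bucket and report id below are hypothetical placeholders:

```ts
// Sketch: build the T3C report viewer URL from a generated report's
// Google Cloud Storage location. Placeholder values throughout.
const gcsBucket = "my-t3c-bucket"; // GCLOUD_STORAGE_BUCKET from express-server/.env
const reportId = "example-report-id"; // id produced by a pipeline run
const jsonUrl = `https://storage.googleapis.com/${gcsBucket}/${reportId}`;

// encodeURIComponent escapes each "/" as %2F, which is why the report
// path uses %2F separators rather than plain slashes.
const reportUrl = `http://localhost:3000/report/${encodeURIComponent(jsonUrl)}`;
console.log(reportUrl);
// http://localhost:3000/report/https%3A%2F%2Fstorage.googleapis.com%2Fmy-t3c-bucket%2Fexample-report-id
```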

contributing.md

Lines changed: 32 additions & 0 deletions
@@ -1,5 +1,37 @@
  # Working with Talk to the City

+ ## Quickstart
+
+ Latest instructions as we move to a separate Python server for LLM calls.
+ First, pull the latest from `main` in this repo and start in the root directory (`tttc-light-js`).
+
+ ### Set up dependencies
+
+ 1. In a new terminal, run `brew install redis` and `redis-server` to install Redis and start a local Redis server.
+ 2. From the repo root, `cd pyserver` and install the Python package dependencies in `requirements.txt` with your preferred method (e.g. `pip install -r requirements.txt`). Note: this will get more standardized soon.
+ 3. Add this line to `next-client/.env` and `next-client/.env.local`:
+ `export PIPELINE_EXPRESS_URL=http://localhost:8080`
+
+ ### Launch the app
+
+ 1. From the root level, run `npm i`, then `npm run dev`.
+ 2. This should open four windows: the `next-client` app front end at `localhost:3000`, the `express-server` app backend at `localhost:8080`, the `pyserver` Python FastAPI server for LLM calls at `localhost:8000`, and an overall Node watch process from the `common` directory. Ideally none of these windows show errors — if they do, we need to fix them first.
+ 3. In your browser, go to `http://localhost:3000/create` to view the T3C app. To create a report, fill out the fields and click "Generate report".
+
+ ### Viewing reports
+
+ 1. Once you click "Generate report", if the process is successful, you will see the text "UrlGCloud". This is a hyperlink — open it in a new tab/window.
+ 2. You will see the raw data dump of the generated report in JSON format. The URL of this report will have the form `https://storage.googleapis.com/[GCLOUD_STORAGE_BUCKET]/[generated report id]`, where `GCLOUD_STORAGE_BUCKET` is an environment variable set in `express-server/.env` and the generated report id is an output of this pipeline run.
+ 3. The pretty version of the corresponding report lives at `http://localhost:3000/report/https%3A%2F%2Fstorage.googleapis.com%2F[GCLOUD_STORAGE_BUCKET]%2F[generated report id]`. You can copy and paste this, substituting the values for the generated report id (different for each report you create) and the GCLOUD_STORAGE_BUCKET (likely the same for all testing sessions). Keep in mind that the separator is %2F and not the traditional URL slash :)
+
+ ### Troubleshooting
+
+ We'll add notes here from issues we surface in testing.
+
+ - Power cycling: one good thing to try first is to restart the process in any one of the windows by ending the process and rerunning the specific previous command in that window (e.g. using the up arrow to find it).
+
+ ## Older instructions below
+
  ## Setup

  [See the setup instructions in README](./README.md#setup)
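For the power-cycling tip in the Troubleshooting section above, it helps to know which of the three local dev servers actually went down (the fourth window is just the watch process). A minimal sketch that probes each port with Node's built-in `fetch` (Node 18+); the paths are assumptions, and even an error status proves the process is listening:

```ts
// Sketch: probe the three local dev servers from the Quickstart.
// Paths are assumptions; a 404 still means the server is up.
const targets = [
  { name: "next-client (frontend)", url: "http://localhost:3000/" },
  { name: "express-server (backend)", url: "http://localhost:8080/" },
  { name: "pyserver (FastAPI)", url: "http://localhost:8000/docs" }, // FastAPI's default docs path
];

for (const { name, url } of targets) {
  try {
    const res = await fetch(url);
    console.log(`${name}: listening (HTTP ${res.status})`);
  } catch {
    console.log(`${name}: no response (is it running?)`);
  }
}
```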

examples/README.md

Lines changed: 7 additions & 4 deletions
@@ -1,12 +1,15 @@
  # Machine Learning Workflows in T3C

+ Note: under very active construction as we separate a new Python server for LLM calls from the front-end app/existing backend in TypeScript.
+
  ## Quickstart

- To run the T3C pipeline with a local front-end & server:
+ To run the T3C pipeline with a local front-end & backend server, as well as a Python FastAPI server for LLM calls:

- 1. Client-side: update `next-client/.env` to point to the report generation endpoint (e.g. `export PIPELINE_EXPRESS_URL=http://localhost:8080/generate`) and run `npm run dev`
- 2. Server-side: update `express-pipeline/.env` with OpenAI/Anthropic/GCS keys and run `npm i && npm run dev`
- 3. Navigate to `localhost:3000` and upload a CSV file from the UI. Make sure the CSV is formatted as follows
+ 1. Client-side: update `next-client/.env` to point to the report generation endpoint (e.g. `export PIPELINE_EXPRESS_URL=http://localhost:8080/`) and run `npm run dev`
+ 2. Server-side: update `express-server/.env` with OpenAI/Anthropic/GCS keys and run `npm i && npm run dev`.
+ 3. Python FastAPI LLM interface: install the Python requirements via `cd pyserver`, then `pip install -r requirements.txt`. Install Redis and start a server: `brew install redis`, then `redis-server`. From `pyserver`, run `source .venv/bin/activate && fastapi dev main.py`.
+ 4. Navigate to `localhost:3000/create` and upload a CSV file from the UI. Make sure the CSV is formatted as follows

  ## Expected CSV format

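Since step 3 of this quickstart depends on a running Redis server, a quick connectivity check can save debugging time before launching the pipeline. A minimal sketch, assuming the `redis` npm package (node-redis v4+) is available; it targets the default `redis://localhost:6379`:

```ts
// Sketch: confirm the local Redis server started by `redis-server` is
// reachable. Assumes the `redis` npm package is installed.
import { createClient } from "redis";

const client = createClient(); // defaults to redis://localhost:6379
client.on("error", (err) => console.error("Redis connection error:", err));

await client.connect();
console.log(await client.ping()); // prints "PONG" if redis-server is up
await client.quit();
```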