Merge pull request #186 from nishio/patch-1
fix(translate): Correctly handle string returns from LLM
brittneygallagher authored Jun 7, 2024
2 parents 661a209 + d842bef commit 58084f1
Showing 1 changed file with 7 additions and 2 deletions.

scatter/pipeline/steps/extraction.py (7 additions & 2 deletions)
@@ -47,8 +47,13 @@ def extract_arguments(input, prompt, model, retries=3):
     llm = ChatOpenAI(model_name=model, temperature=0.0)
     response = llm(messages=messages(prompt, input)).content.strip()
     try:
-        parsed = [a.strip() for a in json.loads(response)]
-        return parsed
+        obj = json.loads(response)
+        # LLM sometimes returns valid JSON string
+        if isinstance(obj, str):
+            obj = [obj]
+        items = [a.strip() for a in obj]
+        items = filter(None, items)  # omit empty strings
+        return items
     except json.decoder.JSONDecodeError as e:
         print("JSON error:", e)
         print("Input was:", input)
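
For context, a minimal self-contained sketch of why the isinstance guard matters (the helper name parse_llm_response and the sample inputs are illustrative, not from the repository): json.loads returns a plain str when the model replies with a bare JSON string rather than a JSON array, and iterating that string yields individual characters.

    import json

    def parse_llm_response(response: str) -> list[str]:
        obj = json.loads(response)
        # json.loads returns a str when the model replies with a bare JSON
        # string (e.g. '"Taxes are too high"') instead of a JSON array.
        if isinstance(obj, str):
            obj = [obj]  # wrap it so iteration yields items, not characters
        items = [a.strip() for a in obj]
        return [a for a in items if a]  # drop empty strings

    # The old code iterated the decoded value directly, so a bare JSON
    # string was split into single characters:
    print([a.strip() for a in json.loads('"ab"')])  # ['a', 'b']  (the bug)
    print(parse_llm_response('"ab"'))               # ['ab']
    print(parse_llm_response('["x ", "", "y"]'))    # ['x', 'y']

One detail worth noting: filter(None, items) in the committed code returns a lazy iterator in Python 3, not a list, so a caller that needs list semantics (len, indexing, repeated iteration) would wrap it in list(...); the sketch above uses a list comprehension instead.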
