
Commit 9ae2668

Updated to work with latest langchain API (#57)
This PR updates the langchain requirement to the latest version (>=0.2.6). The only change required is adding the langchain-community package as a dependency.
2 parents 200e0a6 + a6319a7 commit 9ae2668

File tree

5 files changed: +18, -17 lines


CODE_OF_CONDUCT.md (+1, -1)

@@ -5,7 +5,7 @@
 We as members, contributors, and leaders pledge to make participation in our
 community a harassment-free experience for everyone, regardless of age, body
 size, visible or invisible disability, ethnicity, sex characteristics, gender
-identity and expression, level of experience, education, socio-economic status,
+identity and expression, level of experience, education, socioeconomic status,
 nationality, personal appearance, race, religion, or sexual identity
 and orientation.

chatify/chains.py (+5, -3)

@@ -6,7 +6,6 @@
 from langchain.chains.base import Chain
 from langchain.prompts import PromptTemplate

-from .cache import LLMCacher
 from .llm_models import ModelsFactory
 from .utils import compress_code

@@ -77,7 +76,8 @@ def __init__(self, config):
         self.llm_models_factory = ModelsFactory()

         self.cache = config["cache_config"]["cache"]
-        self.cacher = LLMCacher(config)
+        # NOTE: The caching function is deprecated
+        # self.cacher = LLMCacher(config)

         # Setup model and chain factory
         self._setup_llm_model(config["model_config"])
@@ -95,6 +95,7 @@ def _setup_llm_model(self, model_config):
         if self.llm_model is None:
             self.llm_model = self.llm_models_factory.get_model(model_config)

+        # NOTE: The caching function is deprecated
         if self.cache:
             self.llm_model = self.cacher.cache_llm(self.llm_model)

@@ -168,11 +169,12 @@ def execute(self, chain, inputs, *args, **kwargs):
         -------
         output: Output text generated by the LLM chain.
         """
+
         if self.cache:
             inputs = chain.prompt.format(text=compress_code(inputs))
             output = chain.llm(inputs, cache_obj=self.cacher.llm_cache)
             self.cacher.llm_cache.flush()
         else:
-            output = chain(inputs)["text"]
+            output = chain.invoke(inputs)["text"]

         return output
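The core API change in this diff is replacing the old `chain(inputs)` calling style with `chain.invoke(inputs)`, which langchain >= 0.2 expects; the return value is still a dict with a "text" key. A minimal stand-in (no langchain required; `FakeChain` is a hypothetical mock, not the real LLMChain) sketches the two calling conventions:

```python
# FakeChain is a hypothetical stand-in that mimics only the slice of the
# langchain chain interface that chatify/chains.py relies on.
import warnings


class FakeChain:
    def invoke(self, inputs):
        # langchain >= 0.2 style: invoke() returns a dict whose "text"
        # key holds the generated output.
        return {"text": f"response to: {inputs}"}

    def __call__(self, inputs):
        # Old langchain <= 0.0.x style: calling the chain directly.
        # Deprecated in newer releases, hence the switch to .invoke().
        warnings.warn("__call__ is deprecated; use invoke()", DeprecationWarning)
        return self.invoke(inputs)


chain = FakeChain()
output = chain.invoke("explain this code block")["text"]
print(output)  # → response to: explain this code block
```

Both paths return the same payload, which is why the diff only swaps the call site and leaves the `["text"]` indexing untouched.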

chatify/main.py (+1, -1)

@@ -153,7 +153,7 @@ def gpt(self, inputs, prompt):
         output : str
             The GPT model output in markdown format.
         """
-        # TODO: Should we create the chain every time? Only prompt is chainging not the model
+        # TODO: Should we create the chain every time? Only prompt is changing not the model
         chain = self.llm_chain.create_chain(
             self.cfg["model_config"], prompt_template=prompt
         )

chatify/prompts/tester.yaml (+9, -9)

@@ -2,36 +2,36 @@ test my understanding with some open-ended questions:
   input_variables: ['text']
   content: >
     SYSTEM: You are an AI assistant for Jupyter notebooks, named Chatify. Use robot-related emojis and humor to convey a friendly and relaxed tone. Your job is to help the user understand a tutorial they are working through as part of a course, one code block at a time. The user can send you one request about each code block, and you will not retain your chat history before or after their request, nor will you have access to other parts of the tutorial notebook. Because you will only see one code block at a time, you should assume that any relevant libraries are imported outside of the current code block, and that any relevant functions have already been defined in a previous notebook cell. Make reasonable guesses about what predefined functions do based on what they are named. Focus on conceptual issues whenever possible rather than minor details. You can provide code snippets if you think it is best, but it is better to provide Python-like pseudocode if possible. To comply with formatting requirements, do not ask for additional questions or clarification from the user. The only thing you are allowed to ask the user is for them to select another option from the dropdown menu or to resubmit their request again to generate a new response. Provide your response in markdown format.
-
+
     ASSISTANT: How can I help?
-
+
     USER: I'd like to test my understanding with some tough open-ended essay style questions about the conceptual content of this code block:
-
+
     ---
     {text}
     ---

     Can you make up some essay-style questions for me to make sure I really understand the important concepts? Remember that I can't respond to you, so just ask me to "think about" how I'd respond (i.e., without explicitly responding to you).

-    ASSISTANT:
+    ASSISTANT:
   template_format: f-string
   prompt_id: 90gwxu1n68pbc2193jy0fy5rp9yu6h9h

 test my understanding with a multiple-choice question:
   input_variables: ['text']
   content: >
     SYSTEM: You are an AI assistant for Jupyter notebooks, named Chatify. Use robot-related emojis and humor to convey a friendly and relaxed tone. Your job is to help the user understand a tutorial they are working through as part of a course, one code block at a time. The user can send you one request about each code block, and you will not retain your chat history before or after their request, nor will you have access to other parts of the tutorial notebook. Because you will only see one code block at a time, you should assume that any relevant libraries are imported outside of the current code block, and that any relevant functions have already been defined in a previous notebook cell. Make reasonable guesses about what predefined functions do based on what they are named. Focus on conceptual issues whenever possible rather than minor details. You can provide code snippets if you think it is best, but it is better to provide Python-like pseudocode if possible. To comply with formatting requirements, do not ask for additional questions or clarification from the user. The only thing you are allowed to ask the user is for them to select another option from the dropdown menu or to resubmit their request again to generate a new response. Provide your response in markdown format.
-
+
     ASSISTANT: How can I help?
-
+
     USER: I'd like to test my understanding with a multiple choice question about the conceptual content of this code block:
-
+
     ---
     {text}
     ---

-    I'd like the correct answer to be either "[A]", "[B]", "[C]", or "[D]". Can you make up a multiple choice question for me so that I can make sure I really understant the most important concepts? Remember that I can't respond to you, so just ask me to "think about" which choice is correct or something else like that (i.e., without explicitly responding to you). Put two line breaks ("<br>") between each choice so that it appears correctly on my screen. In other words, there should be two line breaks between each of [B], [C], and [D].
+    I'd like the correct answer to be either "[A]", "[B]", "[C]", or "[D]". Can you make up a multiple choice question for me so that I can make sure I really understand the most important concepts? Remember that I can't respond to you, so just ask me to "think about" which choice is correct or something else like that (i.e., without explicitly responding to you). Put two line breaks ("<br>") between each choice so that it appears correctly on my screen. In other words, there should be two line breaks between each of [B], [C], and [D].

-    ASSISTANT:
+    ASSISTANT:
   template_format: f-string
   prompt_id: cqeas35w0wzhvemd6vtduj0qcf8njo4b
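Each tester.yaml entry declares `template_format: f-string` with a single `{text}` input variable, so filling the prompt is plain Python string formatting. A minimal sketch (the `content` string here is an abbreviated, hypothetical stand-in for the full YAML value, and this mimics template substitution without requiring langchain's PromptTemplate):

```python
# Abbreviated stand-in for a tester.yaml `content` entry; the real value
# contains the full SYSTEM/USER/ASSISTANT prompt shown in the diff above.
content = (
    "SYSTEM: You are Chatify, an AI assistant for Jupyter notebooks.\n\n"
    "USER: Test my understanding of this code block:\n"
    "---\n{text}\n---\n\n"
    "ASSISTANT:"
)

# f-string template_format: the {text} placeholder is replaced with the
# code block the user selected.
filled = content.format(text="print('hello')")
print(filled)
```

This is why `input_variables: ['text']` is the only variable the YAML declares: every other brace-free character in `content` passes through `format` unchanged.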

setup.py (+2, -3)

@@ -11,15 +11,14 @@
     history = history_file.read()

 requirements = [
-    "gptcache<=0.1.35",
-    "langchain<=0.0.226",
+    "langchain>=0.2.6",
+    "langchain-community",
     "openai",
     "markdown",
     "ipywidgets",
     "requests",
     "markdown-it-py[linkify,plugins]",
     "pygments",
-    "pydantic==1.10.11",
 ]
 extras = [
     "transformers",
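The setup.py change swaps the old pins for `langchain>=0.2.6` plus unpinned `langchain-community`. A rough runtime check of whether an existing environment already satisfies the new requirement can be sketched with the standard library (this is an illustrative sketch only; real installs should rely on pip's resolver or `packaging.version` rather than the naive comparison below):

```python
# Sketch: check installed package versions against the new setup.py
# requirements. Naive numeric comparison; pre-release tags or non-numeric
# version parts would break it, which pip/packaging handle properly.
from importlib.metadata import PackageNotFoundError, version


def satisfies(installed: str, minimum: str) -> bool:
    # Compare dotted versions as integer tuples, e.g. "0.2.6" -> (0, 2, 6).
    to_tuple = lambda v: tuple(int(p) for p in v.split("."))
    return to_tuple(installed) >= to_tuple(minimum)


for pkg, minimum in [("langchain", "0.2.6"), ("langchain-community", None)]:
    try:
        installed = version(pkg)
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
        continue
    ok = minimum is None or satisfies(installed, minimum)
    print(f"{pkg} {installed}: {'ok' if ok else f'needs >= {minimum}'}")
```

Note that the previously pinned `langchain<=0.0.226` fails this check against `0.2.6`, which is exactly why the old pin had to be lifted.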

0 commit comments
