Commit c6a3556: "code checkin"
1 parent: f7a1d17

17 files changed: +2430, -3 lines

.gitignore

Lines changed: 142 additions & 0 deletions
@@ -0,0 +1,142 @@
+# Byte-compiled / optimized / DLL files
+__pycache__/
+*.py[cod]
+*$py.class
+
+# C extensions
+*.so
+
+# Distribution / packaging
+.Python
+build/
+develop-eggs/
+dist/
+downloads/
+eggs/
+.eggs/
+lib/
+lib64/
+parts/
+sdist/
+var/
+wheels/
+pip-wheel-metadata/
+share/python-wheels/
+*.egg-info/
+.installed.cfg
+*.egg
+MANIFEST
+
+# PyInstaller
+# Usually these files are written by a python script from a template
+# before PyInstaller builds the exe, so as to inject date/other infos into it.
+*.manifest
+*.spec
+
+# Installer logs
+pip-log.txt
+pip-delete-this-directory.txt
+
+# Unit test / coverage reports
+htmlcov/
+.tox/
+.nox/
+.coverage
+.coverage.*
+.cache
+nosetests.xml
+coverage.xml
+*.cover
+*.py,cover
+.hypothesis/
+.pytest_cache/
+
+# Translations
+*.mo
+*.pot
+
+# Django stuff:
+*.log
+local_settings.py
+db.sqlite3
+db.sqlite3-journal
+
+# Flask stuff:
+instance/
+.webassets-cache
+
+# Scrapy stuff:
+.scrapy
+
+# Sphinx documentation
+docs/_build/
+
+# PyBuilder
+target/
+
+# Jupyter Notebook
+.ipynb_checkpoints
+
+# IPython
+profile_default/
+ipython_config.py
+
+# pyenv
+.python-version
+.idea
+
+# pipenv
+# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
+# However, in case of collaboration, if having platform-specific dependencies or dependencies
+# having no cross-platform support, pipenv may install dependencies that don't work, or not
+# install all needed dependencies.
+#Pipfile.lock
+
+# PEP 582; used by e.g. github.com/David-OConnor/pyflow
+__pypackages__/
+
+# Celery stuff
+celerybeat-schedule
+celerybeat.pid
+
+# SageMath parsed files
+*.sage.py
+
+# Environments
+.env
+.venv
+env/
+venv/
+ENV/
+env.bak/
+venv.bak/
+
+# Spyder project settings
+.spyderproject
+.spyproject
+
+# Rope project settings
+.ropeproject
+
+# mkdocs documentation
+/site
+
+# mypy
+.mypy_cache/
+.dmypy.json
+dmypy.json
+
+# Pyre type checker
+.pyre/
+
+# MacOS DS_Store
+.DS_Store
+
+# Pickle folder
+.pkl_memoize_py3
+
+# Folder where optimized models are stored
+optimized_model
+
+# Config file for tests coverage
+.coveragerc

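Most of the entries above are glob patterns. As a rough illustration of how a character-class pattern such as `*.py[cod]` matches file names, here is a sketch using Python's `fnmatch`; note that git's own matching adds directory-aware rules (anchoring, negation, trailing `/`) that `fnmatch` does not model:

```python
import fnmatch

# A small subset of the patterns from the .gitignore above
patterns = ["*.py[cod]", "__pycache__/*", "*.egg-info/*"]

def is_ignored(path: str) -> bool:
    # Simplified check: git also honors anchoring, negations, and directory patterns
    return any(fnmatch.fnmatch(path, p) for p in patterns)

print(is_ignored("module.pyc"))         # True: matched by *.py[cod]
print(is_ignored("__pycache__/m.pyc"))  # True: matched by __pycache__/*
print(is_ignored("module.py"))          # False: plain sources are kept
```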
README.md

Lines changed: 10 additions & 3 deletions
@@ -1,4 +1,6 @@
-# **Open source implementation for LLaMA-based ChatGPT. 15x faster training process than ChatGPT (wip)**
+# ChatLLaMA
+
+> 📢 Open source implementation for LLaMA-based ChatGPT runnable on a single GPU. 15x faster training process than `ChatGPT`
 
 Meta has recently released LLaMA, a collection of foundational large language models ranging from 7 to 65 billion parameters.
 LLaMA is creating a lot of excitement because it is smaller than GPT-3 but has better performance. For example, LLaMA's 13B architecture outperforms GPT-3 despite being 10 times smaller. This new collection of fundamental models opens the door to faster inference performance and chatGPT-like real-time assistants, while being cost-effective and running on a single GPU.
@@ -12,14 +14,19 @@ The good news is that we introduce `ChatLLaMA`, the first open source implementa
 - ChatLLaMA has built-in support for DeepSpeed ZeRO to speed up the fine-tuning process.
 - The library also supports all LLaMA model architectures (7B, 13B, 33B, 65B), so that you can fine-tune the model according to your preferences for training time and inference performance.
 
-If you like the project, please show your support by [leaving a star ⭐](https://github.com/nebuly-ai/nebullvm/stargazers).
-
 <img width="1032" alt="Screen Shot 2023-02-26 at 10 56 13 PM" src="https://user-images.githubusercontent.com/83510798/221439813-5972d029-dae5-4561-ab3d-5a55fa5cde09.png">
 
 Image from [OpenAI's blog](https://openai.com/blog/chatgpt).
 
 
+# Installation
+
+```
+pip install chatllama
+```
+
 # Get started with ChatLLaMA
 
 > :warning: Please note this code represents the algorithmic implementation for the RLHF training process of LLaMA and does not contain the model weights. To access the model weights, you need to apply to Meta's [form](https://forms.gle/jk851eBVbX1m5TAv5).

chatllama/__init__.py

Lines changed: 1 addition & 0 deletions
@@ -0,0 +1 @@
+__version__ = '0.0.3'

chatllama/langchain_modules/__init__.py

Whitespace-only changes.
Lines changed: 62 additions & 0 deletions
@@ -0,0 +1,62 @@
+REWARD_TEMPLATE = dict(
+    template=(
+        "Let's pretend that you are a lawyer and you have to "
+        "evaluate the following completion task from a given "
+        "assignment with a score between 0 and 5, where 0 represents "
+        "a bad assignment completion and 5 a perfect completion.\n"
+        "You MUST evaluate: text quality, content quality and "
+        "coherence.\n"
+        "You MUST return only the number that represents your "
+        "judgment.\n"
+        "The assignment is:\n{user_input}\n"
+        "The completion is:\n{completion}\n"
+    ),
+    input_variables=["user_input", "completion"],
+)
+
+
+AI_CHATBOT_TEMPLATE = dict(
+    template=(
+        "Assistant is a large language model trained by Meta and Nebuly.ai\n"
+        "Assistant is designed to be able to assist with a wide range of "
+        "tasks, from answering simple questions to providing in-depth "
+        "explanations and discussions on a wide range of topics. As a "
+        "language model, Assistant is able to generate human-like text "
+        "based on the input it receives, allowing it to engage in "
+        "natural-sounding conversations and provide responses that are "
+        "coherent and relevant to the topic at hand.\n\n"
+        "Assistant is constantly learning and improving, and its capabilities "
+        "are constantly evolving. It is able to process and understand large "
+        "amounts of text, and can use this knowledge to provide accurate and "
+        "informative responses to a wide range of questions. Additionally, "
+        "Assistant is able to generate its own text based on the input it "
+        "receives, allowing it to engage in discussions and provide "
+        "explanations and descriptions on a wide range of topics.\n\n"
+        "Overall, Assistant is a powerful tool that can help with a wide "
+        "range of tasks and provide valuable insights and information on a "
+        "wide range of topics. Whether you need help with a specific "
+        "question or just want to have a conversation about a particular "
+        "topic, Assistant is here to assist.\n\n{history}\n\n"
+        "Human: {human_input}\n"
+        "Assistant:"
+    ),
+    input_variables=["history", "human_input"],
+)
+
+
+PERSON_CHATBOT_TEMPLATE = dict(
+    template=(
+        "You are a human chatting with a chatbot. The chatbot is a large "
+        "language model trained by Meta and Nebuly-ai\n"
+        "The chatbot is designed to be able to assist you with a wide range "
+        "of tasks, from answering simple questions to providing in-depth "
+        "explanations and discussions on a wide range of topics. You are a "
+        "human and you are testing the chatbot. Ask the chatbot questions and "
+        "see how it responds. You can also ask the chatbot to tell you a "
+        "story."
+        "\n\n{history}\n\n"
+        "Chatbot: {chatbot_input}\n"
+        "Human:"
+    ),
+    input_variables=["history", "chatbot_input"],
+)
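These template dicts mirror the constructor arguments of a prompt-template class such as LangChain's `PromptTemplate(template=..., input_variables=...)`. A minimal, dependency-free sketch of how such a dict gets filled in, using a shortened stand-in for `REWARD_TEMPLATE` (the `render` helper is illustrative, not part of the library):

```python
# Shortened stand-in for the REWARD_TEMPLATE dict above
REWARD_TEMPLATE = dict(
    template=(
        "Evaluate the completion with a score between 0 and 5.\n"
        "The assignment is:\n{user_input}\n"
        "The completion is:\n{completion}\n"
    ),
    input_variables=["user_input", "completion"],
)

def render(template_dict: dict, **kwargs) -> str:
    # Fail loudly if a declared input variable was not supplied
    missing = set(template_dict["input_variables"]) - set(kwargs)
    if missing:
        raise ValueError(f"missing input variables: {missing}")
    return template_dict["template"].format(**kwargs)

prompt = render(
    REWARD_TEMPLATE,
    user_input="Summarize the article.",
    completion="The article says...",
)
print(prompt.splitlines()[0])  # Evaluate the completion with a score between 0 and 5.
```

Declaring `input_variables` alongside the template string is what lets the caller validate inputs before formatting, instead of hitting a `KeyError` inside `str.format`.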
