
Commit 4445b99

[v0.6] Add Ollama support and update docs.

2 parents: 9e0b241 + 53c4c63

11 files changed: +498 −26 lines

README-EN.md  (+38 −2)

@@ -1,5 +1,13 @@
 # GitHub Sentinel
 
+![GitHub stars](https://img.shields.io/github/stars/DjangoPeng/GitHubSentinel?style=social)
+![GitHub forks](https://img.shields.io/github/forks/DjangoPeng/GitHubSentinel?style=social)
+![GitHub watchers](https://img.shields.io/github/watchers/DjangoPeng/GitHubSentinel?style=social)
+![GitHub repo size](https://img.shields.io/github/repo-size/DjangoPeng/GitHubSentinel)
+![GitHub language count](https://img.shields.io/github/languages/count/DjangoPeng/GitHubSentinel)
+![GitHub top language](https://img.shields.io/github/languages/top/DjangoPeng/GitHubSentinel)
+![GitHub last commit](https://img.shields.io/github/last-commit/DjangoPeng/GitHubSentinel?color=red)
+
 <p align="center">
     <br> English | <a href="README.md">中文</a>
 </p>
@@ -24,7 +32,7 @@ pip install -r requirements.txt
 
 ### 2. Configure the Application
 
-Edit the `config.json` file to set up your GitHub token, Email settings(e.g.Tencent Exmail), subscription file, and update settings:
+Edit the `config.json` file to set up your GitHub token, Email settings (e.g. Tencent Exmail), subscription file, update settings, and LLM settings (both the OpenAI GPT API and the Ollama REST API are supported so far):
 
 
 ```json
@@ -40,7 +48,13 @@ Edit the `config.json` file to set up your GitHub token, Email settings(e.g.Tenc
     "slack_webhook_url": "your_slack_webhook_url",
     "subscriptions_file": "subscriptions.json",
     "github_progress_frequency_days": 1,
-    "github_progress_execution_time":"08:00"
+    "github_progress_execution_time": "08:00",
+    "llm": {
+        "model_type": "openai",
+        "openai_model_name": "gpt-4o-mini",
+        "ollama_model_name": "llama3",
+        "ollama_api_url": "http://localhost:11434/api/chat"
+    }
 }
 
 ```
@@ -53,6 +67,10 @@ export GITHUB_TOKEN="github_pat_xxx"
 export EMAIL_PASSWORD="password"
 ```
 
+#### Ollama: Installation and Deployment
+
+[Ollama Installation and Deployment](docs/ollama.md)
+
 ### 3. How to Run
 
 GitHub Sentinel supports the following three modes of operation:
@@ -117,5 +135,23 @@ To run the application with a Gradio interface, allowing users to interact with
 python src/gradio_server.py
 ```
 
+![gradio_demo](images/gradio_demo.png)
+
 - This will start a web server on your machine, allowing you to manage subscriptions and generate reports through a user-friendly interface.
 - By default, the Gradio server will be accessible at `http://localhost:7860`, but you can share it publicly if needed.
+
+## Contributing
+
+Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated. If you have any suggestions or feature requests, please open an issue first to discuss what you would like to change.
+
+<a href='https://github.com/repo-reviews/repo-reviews.github.io/blob/main/create.md' target="_blank"><img alt='Github' src='https://img.shields.io/badge/review_me-100000?style=flat&logo=Github&logoColor=white&labelColor=888888&color=555555'/></a>
+
+## License
+
+This project is licensed under the terms of the Apache-2.0 License. See the [LICENSE](LICENSE) file for details.
+
+## Contact
+
+Django Peng - [email protected]
+
+Project Link: https://github.com/DjangoPeng/GitHubSentinel

README.md  (+43 −3)

@@ -1,5 +1,13 @@
 # GitHub Sentinel
 
+![GitHub stars](https://img.shields.io/github/stars/DjangoPeng/GitHubSentinel?style=social)
+![GitHub forks](https://img.shields.io/github/forks/DjangoPeng/GitHubSentinel?style=social)
+![GitHub watchers](https://img.shields.io/github/watchers/DjangoPeng/GitHubSentinel?style=social)
+![GitHub repo size](https://img.shields.io/github/repo-size/DjangoPeng/GitHubSentinel)
+![GitHub language count](https://img.shields.io/github/languages/count/DjangoPeng/GitHubSentinel)
+![GitHub top language](https://img.shields.io/github/languages/top/DjangoPeng/GitHubSentinel)
+![GitHub last commit](https://img.shields.io/github/last-commit/DjangoPeng/GitHubSentinel?color=red)
+
 <p align="center">
     <br> <a href="README-EN.md">English</a> | 中文
 </p>
@@ -24,7 +32,7 @@ pip install -r requirements.txt
 
 ### 2. Configure the Application
 
-Edit the `config.json` file to set up your GitHub Token, email settings (Tencent Exmail is used as the example), subscription file, and update settings
+Edit the `config.json` file to set up your GitHub Token, email settings (Tencent Exmail is used as the example), subscription file, update settings, and LLM service settings (both the OpenAI GPT API and self-hosted Ollama model services are supported)
 
 ```json
 {
@@ -39,9 +47,16 @@ pip install -r requirements.txt
     "slack_webhook_url": "your_slack_webhook_url",
     "subscriptions_file": "subscriptions.json",
     "github_progress_frequency_days": 1,
-    "github_progress_execution_time":"08:00"
+    "github_progress_execution_time": "08:00",
+    "llm": {
+        "model_type": "openai",
+        "openai_model_name": "gpt-4o-mini",
+        "ollama_model_name": "llama3",
+        "ollama_api_url": "http://localhost:11434/api/chat"
+    }
 }
 ```
+
 **For security reasons:** both the GitHub Token and the Email password can be supplied via environment variables, to avoid storing sensitive values in plain text, as shown below:
 
 ```shell
@@ -51,6 +66,12 @@ export GITHUB_TOKEN="github_pat_xxx"
 export EMAIL_PASSWORD="password"
 ```
 
+#### Ollama: Installation and Configuration
+
+[Ollama Installation, Deployment, and Service Publishing](docs/ollama.md)
+
 ### 3. How to Run
 
 GitHub Sentinel supports the following three modes of operation:
@@ -115,5 +136,24 @@ python src/command_tool.py
 python src/gradio_server.py
 ```
 
+![gradio_demo](images/gradio_demo.png)
+
 - This will start a web server on your machine, allowing you to manage subscriptions and generate reports through a user-friendly interface.
-- By default, the Gradio server will be accessible at `http://localhost:7860`, but you can share it publicly if needed.
+- By default, the Gradio server will be accessible at `http://localhost:7860`, but you can share it publicly if needed.
+
+## Contributing
+
+Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated. If you have any suggestions or feature requests, please open an issue first to discuss what you would like to change.
+
+<a href='https://github.com/repo-reviews/repo-reviews.github.io/blob/main/create.md' target="_blank"><img alt='Github' src='https://img.shields.io/badge/review_me-100000?style=flat&logo=Github&logoColor=white&labelColor=888888&color=555555'/></a>
+
+## License
+
+This project is licensed under the terms of the Apache-2.0 License. See the [LICENSE](LICENSE) file for details.
+
+## Contact
+
+Django Peng - [email protected]
+
+Project Link: https://github.com/DjangoPeng/GitHubSentinel

config.json  (+7 −1)

@@ -10,5 +10,11 @@
     "slack_webhook_url": "your_slack_webhook_url",
     "subscriptions_file": "subscriptions.json",
     "github_progress_frequency_days": 1,
-    "github_progress_execution_time":"08:00"
+    "github_progress_execution_time": "08:00",
+    "llm": {
+        "model_type": "ollama",
+        "openai_model_name": "gpt-4o-mini",
+        "ollama_model_name": "llama3",
+        "ollama_api_url": "http://localhost:11434/api/chat"
+    }
 }
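The new `llm` block selects a backend via `model_type` ("openai" or "ollama"). The dispatch itself lives in `src/llm.py`, which is not shown in this excerpt, so the sketch below is only an illustrative assumption of how such routing might look (the class and function names are hypothetical, not the project's real code):

```python
# Hypothetical sketch of dispatching on the `model_type` field added to
# config.json in this commit. Names are illustrative assumptions only.
class LLMConfig:
    def __init__(self, llm: dict):
        # Mirror the keys and defaults shown in the config.json diff above.
        self.model_type = llm.get("model_type", "openai")
        self.openai_model_name = llm.get("openai_model_name", "gpt-4o-mini")
        self.ollama_model_name = llm.get("ollama_model_name", "llama3")
        self.ollama_api_url = llm.get("ollama_api_url", "http://localhost:11434/api/chat")

def pick_backend(cfg: LLMConfig) -> str:
    # Route report generation to the configured backend.
    if cfg.model_type == "ollama":
        return f"ollama:{cfg.ollama_model_name}@{cfg.ollama_api_url}"
    return f"openai:{cfg.openai_model_name}"

cfg = LLMConfig({"model_type": "ollama"})
print(pick_backend(cfg))  # ollama:llama3@http://localhost:11434/api/chat
```

With `model_type` set to `"ollama"` (as in this `config.json`), reports are generated by the local Ollama service instead of the OpenAI API.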

docs/ollama.md  (+195, new file)

## Ollama Installation, Deployment, and Service Publishing

### Linux

```bash
curl -fsSL https://ollama.com/install.sh | sh
```

[Manual installation instructions](https://github.com/ollama/ollama/blob/main/docs/linux.md)

### macOS

[Download](https://ollama.com/download/Ollama-darwin.zip)

### Windows (preview)

[Download](https://ollama.com/download/OllamaSetup.exe)

---

## Quick Start

To run and chat with [Llama 3.1](https://ollama.com/library/llama3.1):

```bash
ollama run llama3.1
```

---

## Model Library

Ollama supports the list of models available at [ollama.com/library](https://ollama.com/library).

Here are some example models that can be downloaded:

| Model              | Parameters | Size  | Download command               |
| ------------------ | ---------- | ----- | ------------------------------ |
| Llama 3.1          | 8B         | 4.7GB | `ollama run llama3.1`          |
| Llama 3.1          | 70B        | 40GB  | `ollama run llama3.1:70b`      |
| Llama 3.1          | 405B       | 231GB | `ollama run llama3.1:405b`     |
| Phi 3 Mini         | 3.8B       | 2.3GB | `ollama run phi3`              |
| Phi 3 Medium       | 14B        | 7.9GB | `ollama run phi3:medium`       |
| Gemma 2            | 2B         | 1.6GB | `ollama run gemma2:2b`         |
| Gemma 2            | 9B         | 5.5GB | `ollama run gemma2`            |
| Gemma 2            | 27B        | 16GB  | `ollama run gemma2:27b`        |
| Mistral            | 7B         | 4.1GB | `ollama run mistral`           |
| Moondream 2        | 1.4B       | 829MB | `ollama run moondream`         |
| Neural Chat        | 7B         | 4.1GB | `ollama run neural-chat`       |
| Starling           | 7B         | 4.1GB | `ollama run starling-lm`       |
| Code Llama         | 7B         | 3.8GB | `ollama run codellama`         |
| Llama 2 Uncensored | 7B         | 3.8GB | `ollama run llama2-uncensored` |
| LLaVA              | 7B         | 4.5GB | `ollama run llava`             |
| Solar              | 10.7B      | 6.1GB | `ollama run solar`             |

---
### Command-Line Tools

#### Create a model

`ollama create` is used to create a model from a Modelfile.

```bash
ollama create mymodel -f ./Modelfile
```

#### Pull a model

```bash
ollama pull llama3.1
```

> This command can also be used to update a local model. Only the diff will be pulled.

#### Remove a model

```bash
ollama rm llama3.1
```

#### Copy a model

```bash
ollama cp llama3.1 my-model
```

#### Multiline input

For multiline input, wrap the text in `"""`:

```bash
>>> """Hello,
... world!
... """
```

This outputs a simple program that prints a "Hello, world!" message.

#### Multimodal models

```bash
ollama run llava "What's in this image? /Users/jmorgan/Desktop/smile.png"
```

The image shows a yellow smiley face, which is probably the central focus of the picture.

#### Pass the prompt as an argument

```bash
$ ollama run llama3.1 "Summarize this file: $(cat README.md)"
```

Ollama is a lightweight, extensible framework for building and running language models on your local machine.

---
### REST API

Ollama provides a REST API for running and managing models.

#### Generate a response

```bash
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1",
  "prompt": "Why is the sky blue?"
}'
```

#### Chat with a model

```bash
curl http://localhost:11434/api/chat -d '{
  "model": "llama3.1",
  "messages": [
    { "role": "user", "content": "Why is the sky blue?" }
  ]
}'
```

See the [API documentation](./docs/api.md) for details on all endpoints.

---
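The same chat endpoint can be called from Python with only the standard library, which is essentially what GitHub Sentinel's `ollama_api_url` setting points at. This is a minimal sketch assuming a local Ollama server on the default port; the actual network call is left commented out so the snippet also runs without a server:

```python
import json
import urllib.request

# Default Ollama chat endpoint, matching the curl example above.
OLLAMA_CHAT_URL = "http://localhost:11434/api/chat"

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request for Ollama's /api/chat endpoint."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # ask for one consolidated JSON reply, not chunked lines
    }
    return urllib.request.Request(
        OLLAMA_CHAT_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_chat_request("llama3.1", "Why is the sky blue?")
# With a running server, send it and read the assistant's reply:
# resp = urllib.request.urlopen(req)
# reply = json.loads(resp.read())["message"]["content"]
print(req.get_full_url())  # http://localhost:11434/api/chat
```

Without `"stream": False`, the endpoint streams one JSON object per token, which a caller would need to read line by line.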
### Docker Support

Ollama provides an official Docker image, `ollama/ollama`, available on Docker Hub.

#### Run with CPU only

```bash
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```

#### Run with an Nvidia GPU

To use an Nvidia GPU, first install the NVIDIA Container Toolkit:

```bash
# Configure the repository
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt-get update

# Install the NVIDIA Container Toolkit package
sudo apt-get install -y nvidia-container-toolkit

# Configure Docker to use the Nvidia driver
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```

Start the container:

```bash
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```

#### Run with an AMD GPU

To run Ollama on an AMD GPU, use the `rocm` image tag and run the following command:

```bash
docker run -d --device /dev/kfd --device /dev/dri -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama:rocm
```

### Run a model locally

You can now run a model:

```bash
docker exec -it ollama ollama run llama3
```

---

Follow the steps above to install and configure Ollama, and use the CLI tools and Docker images to manage and run the various models. For more information, visit the [Ollama GitHub repository](https://github.com/ollama/ollama).

images/gradio_demo.png  (binary, 191 KB, new file)

src/command_tool.py  (+1 −1)

@@ -11,7 +11,7 @@
 def main():
     config = Config()  # Create the configuration instance
     github_client = GitHubClient(config.github_token)  # Create the GitHub client instance
-    llm = LLM()  # Create the language model instance
+    llm = LLM(config)  # Create the language model instance
     report_generator = ReportGenerator(llm)  # Create the report generator instance
     subscription_manager = SubscriptionManager(config.subscriptions_file)  # Create the subscription manager instance
     command_handler = CommandHandler(github_client, subscription_manager, report_generator)  # Create the command handler instance

src/config.py  (+7 −0)

@@ -23,3 +23,10 @@ def load_config(self):
         self.freq_days = config.get('github_progress_frequency_days', 1)
         # Update at 08:00 by default (the OS default timezone is UTC+0; the author notes this maps to Beijing time)
         self.exec_time = config.get('github_progress_execution_time', "08:00")
+
+        # Load the LLM-related configuration
+        llm_config = config.get('llm', {})
+        self.llm_model_type = llm_config.get('model_type', 'openai')
+        self.openai_model_name = llm_config.get('openai_model_name', 'gpt-4o-mini')
+        self.ollama_model_name = llm_config.get('ollama_model_name', 'llama3')
+        self.ollama_api_url = llm_config.get('ollama_api_url', 'http://localhost:11434/api/chat')
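A design note on the lines added above: every new LLM field is read through `dict.get` with a default, so an older `config.json` without an `llm` block still loads cleanly. The standalone sketch below (with a hypothetical helper name, not part of the repo) demonstrates that fallback behaviour:

```python
# Hypothetical standalone version of the fallback logic added to
# src/config.py: missing keys fall back to the documented defaults.
def load_llm_settings(config: dict) -> dict:
    llm = config.get("llm", {})
    return {
        "model_type": llm.get("model_type", "openai"),
        "openai_model_name": llm.get("openai_model_name", "gpt-4o-mini"),
        "ollama_model_name": llm.get("ollama_model_name", "llama3"),
        "ollama_api_url": llm.get("ollama_api_url", "http://localhost:11434/api/chat"),
    }

# A pre-v0.6 config with no "llm" block still yields a full set of defaults.
print(load_llm_settings({})["model_type"])  # openai
```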

src/daemon_process.py  (+1 −1)

@@ -37,7 +37,7 @@ def main():
     config = Config()  # Create the configuration instance
     github_client = GitHubClient(config.github_token)  # Create the GitHub client instance
     notifier = Notifier(config.email)  # Create the notifier instance
-    llm = LLM()  # Create the language model instance
+    llm = LLM(config)  # Create the language model instance
     report_generator = ReportGenerator(llm)  # Create the report generator instance
     subscription_manager = SubscriptionManager(config.subscriptions_file)  # Create the subscription manager instance
