
<p align="center">
<br> <a href="README.md">中文</a> | English
</p>

GitHub Sentinel is an open-source AI agent designed for developers and project managers. It automatically retrieves and aggregates updates from subscribed GitHub repositories on a regular schedule (daily/weekly). Key features include subscription management, update retrieval, a notification system, and report generation.

## Features

- **Subscription Management**: Manage your subscription list of GitHub repositories.
- **Update Retrieval**: Automatically retrieve and aggregate the latest updates from subscribed repositories, including commits, issues, and pull requests.
- **Notification System**: Notify subscribers about the latest project progress via email.
- **Report Generation**: Generate detailed project progress reports based on the retrieved updates, supporting multiple formats and templates.
- **Multi-Model Support**: Generate natural-language reports through either OpenAI or Ollama models.

## Getting Started

Edit the `config.json` file to set up your GitHub Token, Email settings (Tencent WeCom Email is used as an example), subscription file, update settings, and LLM settings (both the OpenAI GPT API and the Ollama private model service are supported so far).
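
If you need a starting point, the sketch below shows roughly what such a configuration might contain. Every key name and value in it is an illustrative assumption rather than the project's authoritative schema, so compare it against the `config.json` shipped with the repository and adjust accordingly:

```bash
# Illustrative sketch only: the key names below are assumptions, not the real schema.
# The file is written to config.example.json so an existing config.json is not overwritten.
cat > config.example.json <<'EOF'
{
  "github_token": "github_pat_xxx",
  "email": {
    "smtp_server": "smtp.exmail.qq.com",
    "smtp_port": 465,
    "from": "bot@example.com",
    "to": "you@example.com"
  },
  "subscriptions_file": "subscriptions.json",
  "update_frequency": "daily",
  "llm": {
    "model_type": "openai",
    "openai_model_name": "gpt-4o-mini"
  }
}
EOF
```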

**For security reasons:** The GitHub Token and Email password can be supplied via environment variables, so that sensitive information is not stored in plain text, as shown below:

```shell
# GitHub
export GITHUB_TOKEN="github_pat_xxx"

# Email
export EMAIL_PASSWORD="password"
```
### 3. How to Run

GitHub Sentinel supports the following three running modes:

#### A. Run as a Command-Line Tool

You can run the application interactively from the command line:

```sh
python src/command_tool.py
```

In this mode, you can manually enter commands to manage subscriptions, retrieve updates, and generate reports.

#### B. Run as a Background Service

When run as a background service (daemon), the application will automatically perform updates periodically according to the configured schedule.

You can use the daemon management script [daemon_control.sh](daemon_control.sh) to start, check the status of, stop, and restart the service:

1. Start the service:

   ```bash
   $ ./daemon_control.sh start
   DaemonProcess started.
   ```

   - This will start [./src/daemon_process.py], which periodically generates reports and sends emails according to the update frequency and schedule set in `config.json`.
   - The service log is saved to `logs/DaemonProcess.log`, and the accumulated history is also appended to `logs/app.log`.
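
The remaining operations follow the same pattern. The subcommand names below are inferred from the start/status/stop/restart description above and may differ from the actual script, so treat them as a sketch:

```bash
./daemon_control.sh status    # check whether the daemon is running (assumed subcommand)
./daemon_control.sh stop      # stop the daemon (assumed subcommand)
./daemon_control.sh restart   # stop and start the daemon again (assumed subcommand)
```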

#### C. Run as a Gradio Server

- Running in this mode starts a web server on your machine, allowing you to manage subscriptions and generate reports through a user-friendly interface.
- By default, the Gradio server will be accessible at `http://localhost:7860`, but you can share it publicly if needed.
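
For reference, this mode is typically launched with a command along the following lines. The entry-point script name here is an assumption, so check the repository's `src/` directory for the actual file:

```bash
# Hypothetical entry point: the actual script name may differ
python src/gradio_server.py
```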
## Ollama Installation and Service Deployment

Ollama is a private large model management tool that supports local and containerized deployment, command-line interaction, and REST API calls.

For detailed instructions on Ollama installation and private large model service deployment, please refer to [Ollama Installation and Service Deployment](docs/ollama.md).

### Quick Official Ollama Installation

To use Ollama for calling private large model services in GitHub Sentinel, follow these steps for installation and configuration:

1. **Install Ollama**:

   Download and install the Ollama service according to the official Ollama documentation. Ollama supports multiple operating systems, including Linux, Windows, and macOS.

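   On Linux, for example, the official one-line installer can be used (see the Ollama documentation for the Windows and macOS installers):

   ```bash
   # Official Ollama install script for Linux
   curl -fsSL https://ollama.com/install.sh | sh
   ```
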
2. **Start the Ollama Service**:

   After installation, start the Ollama service with the following command:

   ```bash
   ollama serve
   ```

   By default, the Ollama API will run on `http://localhost:11434`.

3. **Configure Ollama for Use in GitHub Sentinel**:

   In the `config.json` file, configure the relevant information for the Ollama API.

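The exact field names depend on the project's `config.json` schema. The snippet below is purely an illustrative sketch (every key name and value is an assumption) and is written to a separate file so your real `config.json` is not overwritten:

```bash
# Illustrative sketch only: the key names are assumptions, not the project's real schema
cat > config.ollama.example.json <<'EOF'
{
  "llm": {
    "model_type": "ollama",
    "ollama_model_name": "llama3",
    "ollama_api_url": "http://localhost:11434/api"
  }
}
EOF
```
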
Start GitHub Sentinel and generate a report with the following command to verify that the Ollama configuration is correct:

```bash
python src/command_tool.py
```

If the configuration is correct, you will be able to generate reports using the Ollama model.
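
You can also quickly confirm that the Ollama service itself is reachable by querying its local REST API (an optional sanity check):

```bash
# Lists the models available to the local Ollama service
curl http://localhost:11434/api/tags
```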

## Unit Testing

To ensure the quality and reliability of the code, GitHub Sentinel uses the `unittest` module for unit testing. For detailed explanations of `unittest` and related tools (such as `@patch` and `MagicMock`), please refer to [Detailed Unit Test Explanation](docs/unit_test.md).
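
To run the test suite locally, the standard `unittest` discovery command can be used (the `tests/` directory name is an assumption; adjust it to the repository layout):

```bash
python -m unittest discover -s tests -v
```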

### Unit Testing and Validation Script `validate_tests.sh`

#### Purpose

`validate_tests.sh` is a shell script used to run the unit tests and validate the results. It is executed during the Docker image build process to ensure the correctness and stability of the code.

#### Functionality

- The script runs all unit tests and outputs the results to the `test_results.txt` file.
- If the tests fail, the script outputs the test results and causes the Docker build to fail.
- If all tests pass, the script continues the build process.

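For reference, here is a minimal sketch of what such a script might look like. It is illustrative only, the repository's actual `validate_tests.sh` may differ, and the `tests/` directory name is an assumption:

```bash
#!/bin/bash
# Illustrative sketch of a validate_tests.sh-style script, not the repository's actual file.

# Run the full unittest suite and capture the output in test_results.txt.
python -m unittest discover -s tests -v > test_results.txt 2>&1

# If any test failed, print the captured results and exit non-zero so `docker build` aborts.
if [ $? -ne 0 ]; then
    echo "Unit tests failed:"
    cat test_results.txt
    exit 1
fi

echo "All unit tests passed."
```
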
## Building and Validating with Docker

To facilitate building and deploying the GitHub Sentinel project in various environments, we provide Docker support. This support includes the following files and functionalities:

### 1. `Dockerfile`

#### Purpose

The `Dockerfile` is a configuration file used to define how to build a Docker image. It describes the steps to build the image, including installing dependencies, copying project files, running unit tests, etc.

#### Key Steps

- Use `python:3.10-slim` as the base image and set the working directory to `/app`.
- Copy the project's `requirements.txt` file and install Python dependencies.
- Copy all project files to the container and grant execution permission to the `validate_tests.sh` script.
- During the build process, execute the `validate_tests.sh` script to ensure that all unit tests pass. If the tests fail, the build process will be aborted.
- After a successful build, the container will default to running `src/main.py` as the entry point.
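
As a hedged example, the image can also be built and run manually with standard Docker commands. The image name and tag below are arbitrary placeholders, and the environment variables mirror the configuration section above:

```bash
# Build the image; the build runs validate_tests.sh and aborts if any unit test fails
docker build -t github-sentinel:local .

# Run the container; per the Dockerfile, src/main.py is the default entry point
docker run -it --rm \
  -e GITHUB_TOKEN="github_pat_xxx" \
  -e EMAIL_PASSWORD="password" \
  github-sentinel:local
```
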
### 2. `build_image.sh`

#### Purpose

`build_image.sh` is a shell script used to automatically build a Docker image. It retrieves the current Git branch name and uses it as the tag for the Docker image, making it easy to generate different Docker images for different branches.

#### Functionality

- Retrieve the current Git branch name and use it as the tag for the Docker image.
- Use the `docker build` command to build the Docker image and tag it with the current Git branch name.
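
A minimal sketch of what such a script might look like is shown below. It is illustrative only; the repository's actual `build_image.sh` may differ, and the image name is an assumption:

```bash
#!/bin/bash
# Illustrative sketch of a build_image.sh-style script, not the repository's actual file.

# Use the current Git branch name as the Docker image tag.
BRANCH_NAME=$(git rev-parse --abbrev-ref HEAD)

# Build the image and tag it with the branch name; the "github-sentinel" name is assumed.
docker build -t "github-sentinel:${BRANCH_NAME}" .
```
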
#### Usage Example

```bash
chmod +x build_image.sh
./build_image.sh
```

With these scripts and configuration files, you can ensure that Docker images built from different development branches are based on code that has passed unit tests, thereby improving code quality and deployment reliability.

## Contributing

Contributions are what make the open-source community such an amazing place to learn, inspire, and create. Any contributions you make are **greatly appreciated**. If you have any suggestions or feature requests, please open an issue first to discuss what you would like to change.