
Commit 6a92295 (2 parents: 1632bbe + 9c14490)

Merge pull request #36125 from github/repo-sync: Repo sync

File tree: 8 files changed, +98 −31 lines

Diff for: content/admin/enforcing-policies/enforcing-policy-with-pre-receive-hooks/creating-a-pre-receive-hook-environment.md (+23 −19)

@@ -26,42 +26,46 @@ If you are using another Git implementation, it must support relative paths in t
 
 ## Creating a pre-receive hook environment using Docker
 
-You can use a Linux container management tool to build a pre-receive hook environment. This example uses [Alpine Linux](https://www.alpinelinux.org/) and [Docker](https://www.docker.com/).
+You can use a Linux container management tool to build a pre-receive hook environment. This example uses [Debian Linux](https://www.debian.org/) and [Docker](https://www.docker.com/).
 
 {% data reusables.linux.ensure-docker %}
-1. Create the file `Dockerfile.alpine` that contains this information:
+1. Create the file `Dockerfile.debian` that contains this information:
 
 ```dockerfile
-FROM alpine:latest
-RUN apk add --no-cache git bash
+FROM --platform=linux/amd64 debian:stable
+RUN apt-get update && apt-get install -y git bash curl
+RUN rm -fr /etc/localtime /usr/share/zoneinfo/localtime
 ```
 
+> [!NOTE] The Debian image includes some symlinks by default, which, if not removed, may cause errors when executing scripts in the custom environment. Symlinks are removed in the last line of the example above.
+
-1. From the working directory that contains `Dockerfile.alpine`, build an image:
+1. From the working directory that contains `Dockerfile.debian`, build an image:
 
 ```shell
-$ docker build -f Dockerfile.alpine -t pre-receive.alpine .
-> Sending build context to Docker daemon 12.29 kB
-> Step 1 : FROM alpine:latest
-> ---> 8944964f99f4
-> Step 2 : RUN apk add --no-cache git bash
-> ---> Using cache
-> ---> 0250ab3be9c5
-> Successfully built 0250ab3be9c5
+$ docker build -f Dockerfile.debian -t pre-receive.debian .
+> [+] Building 0.6s (6/6) FINISHED docker:desktop-linux
+> => [internal] load build definition from Dockerfile.debian
+> => [1/2] FROM docker.io/library/debian:latest@sha256:80dd3c3b9c6cecb9f1667e9290b3bc61b78c2678c02cbdae5f0fea92cc6
+> => [2/2] RUN apt-get update && apt-get install -y git bash curl
+> => exporting to image
+> => => exporting layers
+> => => writing image sha256:b57af4e24082f3a30a34c0fe652a336444a3608f76833f5c5fdaf4d81d20c3cc
+> => => naming to docker.io/library/pre-receive.debian
 ```
 
 1. Create a container:
 
 ```shell
-docker create --name pre-receive.alpine pre-receive.alpine /bin/true
+docker create --name pre-receive.debian pre-receive.debian /bin/true
 ```
 
 1. Export the Docker container to a `gzip` compressed `tar` file:
 
 ```shell
-docker export pre-receive.alpine | gzip > alpine.tar.gz
+docker export pre-receive.debian | gzip > debian.tar.gz
 ```
 
-This file `alpine.tar.gz` is ready to be uploaded to the {% data variables.product.prodname_ghe_server %} appliance.
+This file `debian.tar.gz` is ready to be uploaded to the {% data variables.product.prodname_ghe_server %} appliance.
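Before uploading, an exported environment tarball can be sanity-checked against the requirements this page quotes (relative paths in the tar, an executable `/bin/sh` entry point). A minimal sketch, using a synthesized stand-in archive named `demo.tar.gz` (an assumption, since `debian.tar.gz` may not exist where the check runs); point the checks at your real export in practice:

```shell
# Sketch: sanity-check an environment tarball before uploading it.
# demo.tar.gz is a stand-in built here so the check can run anywhere.
set -e
mkdir -p demo-env/bin
printf '#!/bin/sh\n' > demo-env/bin/sh
chmod +x demo-env/bin/sh
tar -C demo-env -czf demo.tar.gz .

# The archive must use relative paths (no entries starting with /)...
if tar -tzf demo.tar.gz | grep -q '^/'; then
  echo "error: archive contains absolute paths" >&2
  exit 1
fi
# ...and must provide bin/sh as the hook entry point.
tar -tzf demo.tar.gz | grep -q 'bin/sh$'
echo "tarball looks uploadable"
```

The same two `tar -tzf` checks apply unchanged to a `docker export` archive.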
 
 ## Creating a pre-receive hook environment using chroot
 
@@ -78,7 +82,7 @@ You can use a Linux container management tool to build a pre-receive hook enviro
 > * `/bin/sh` must exist and be executable, as the entry point into the chroot environment.
 > * Unlike traditional chroots, the `dev` directory is not required by the chroot environment for pre-receive hooks.
 
-For more information about creating a chroot environment see [Chroot](https://wiki.debian.org/chroot) from the _Debian Wiki_, [BasicChroot](https://help.ubuntu.com/community/BasicChroot) from the _Ubuntu Community Help Wiki_, or [Installing Alpine Linux in a chroot](https://wiki.alpinelinux.org/wiki/Installing_Alpine_Linux_in_a_chroot) from the _Alpine Linux Wiki_.
+For more information about creating a chroot environment see [Chroot](https://wiki.debian.org/chroot) from the _Debian Wiki_ or [BasicChroot](https://help.ubuntu.com/community/BasicChroot) from the _Ubuntu Community Help Wiki_.
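The shape of a hand-rolled chroot-style environment can be sketched as follows. The directory layout and the placeholder shell are illustrative assumptions, not an official recipe; a real environment needs a working statically linked shell (for example busybox) plus whatever interpreters your hooks use:

```shell
# Sketch: minimal chroot-style hook environment layout.
set -e
mkdir -p chroot-env/bin chroot-env/tmp
# Placeholder entry point: /bin/sh must exist and be executable.
# In practice, copy in a real statically linked shell instead.
printf '#!/bin/sh\necho hook-env-ok\n' > chroot-env/bin/sh
chmod +x chroot-env/bin/sh
# Package from inside the directory so all tar paths stay relative.
tar -C chroot-env -czf chroot-env.tar.gz .
```

Note that `dev` is deliberately absent: as quoted above, pre-receive hook chroots do not require it.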
 
 ## Uploading a pre-receive hook environment on {% data variables.product.prodname_ghe_server %}
 
@@ -98,6 +102,6 @@ For more information about creating a chroot environment see [Chroot](https://wi
 1. Use the `ghe-hook-env-create` command and type the name you want for the environment as the first argument and the full local path or URL of a `*.tar.gz` file that contains your environment as the second argument.
 
 ```shell
-admin@ghe-host:~$ ghe-hook-env-create AlpineTestEnv /home/admin/alpine.tar.gz
-> Pre-receive hook environment 'AlpineTestEnv' (2) has been created.
+admin@ghe-host:~$ ghe-hook-env-create DebianTestEnv /home/admin/debian.tar.gz
+> Pre-receive hook environment 'DebianTestEnv' (2) has been created.
 ```

Diff for: content/copilot/managing-copilot/managing-copilot-for-your-enterprise/managing-policies-and-features-for-copilot-in-your-enterprise.md (+8 −5)

@@ -34,7 +34,7 @@ You can configure any of the following policies for your enterprise:
 * [Suggestions matching public code](#suggestions-matching-public-code)
 * [Give {% data variables.product.prodname_copilot_short %} access to Bing](#give-copilot-access-to-bing)
 * [{% data variables.product.prodname_copilot_short %} access to {% data variables.copilot.copilot_claude_sonnet %}](#copilot-access-to-claude-35-sonnet)
-* [{% data variables.product.prodname_copilot_short %} access to the o1 family of models](#copilot-access-to-the-o1-family-of-models)
+* [{% data variables.product.prodname_copilot_short %} access to the o1 and o3 families of models](#copilot-access-to-the-o1-and-o3-families-of-models)
 
 ### {% data variables.product.prodname_copilot_short %} in {% data variables.product.prodname_dotcom_the_website %}
 
@@ -81,16 +81,19 @@ You can chat with {% data variables.product.prodname_copilot %} in your IDE to g
 
 By default, {% data variables.product.prodname_copilot_chat_short %} uses the `GPT 4o` model. If you grant access to **Anthropic {% data variables.copilot.copilot_claude_sonnet %} in {% data variables.product.prodname_copilot_short %}**, members of your enterprise can choose to use this model rather than the default `GPT 4o` model. See [AUTOTITLE](/copilot/using-github-copilot/using-claude-sonnet-in-github-copilot).
 
-### {% data variables.product.prodname_copilot_short %} access to the o1 family of models
+### {% data variables.product.prodname_copilot_short %} access to the o1 and o3 families of models
 
 {% data reusables.models.o1-models-preview-note %}
 
-By default, {% data variables.product.prodname_copilot_chat_short %} uses the `GPT 4o` model. If you grant access to the o1 family of models, members of your enterprise can select to use these models rather than the default `GPT 4o` model.
+By default, {% data variables.product.prodname_copilot_chat_short %} uses the `GPT 4o` model. If you grant access to the o1 or o3 models, members of your enterprise can select to use these models rather than the default `GPT 4o` model.
 
-The o1 family of models includes three models:
+The o1 family of models includes the following models:
 
 * `o1`/`o1-preview`: These models are focused on advanced reasoning and solving complex problems, in particular in math and science. They respond more slowly than the `gpt-4o` model. Each member of your enterprise can make 10 requests to each of these models per day.
-* `o1-mini`: This is the faster version of the `o1` model, balancing the use of complex reasoning with the need for faster responses. It is best suited for code generation and small context operations. Each member of your enterprise can make 50 requests to this model per day.
+
+The o3 family of models includes one model:
+
+* `o3-mini`: This is the next generation of reasoning models, following from `o1` and `o1-mini`. The `o3-mini` model outperforms `o1` on coding benchmarks with response times that are comparable to `o1-mini`, providing improved quality at nearly the same latency. It is best suited for code generation and small context operations. Each member of your enterprise can make 50 requests to this model every 12 hours.
 
 ### {% data variables.product.prodname_copilot_short %} Metrics API access

Diff for: content/copilot/managing-copilot/managing-github-copilot-in-your-organization/managing-policies-for-copilot-in-your-organization.md (+1 −1)

@@ -33,7 +33,7 @@ Organization owners can set policies to govern how {% data variables.product.pro
 * Suggestions matching public code
 * Access to alternative models for {% data variables.product.prodname_copilot_short %}
 * Anthropic {% data variables.copilot.copilot_claude_sonnet %} in Copilot
-* OpenAI o1 models in Copilot
+* OpenAI o1 and o3 models in Copilot
 
 The policy settings selected by an organization owner determine the behavior of {% data variables.product.prodname_copilot %} for all organization members that have been granted access to {% data variables.product.prodname_copilot_short %} through the organization.

Diff for: content/copilot/using-github-copilot/asking-github-copilot-questions-in-github.md (+1 −1)

@@ -67,7 +67,7 @@ The skills you can use in {% data variables.product.prodname_copilot_chat_dotcom
 
 {% data reusables.copilot.copilot-chat-models-beta-note %}
 
-{% data reusables.copilot.copilot-chat-models-list-o1 %}
+{% data reusables.copilot.copilot-chat-models-list-o3 %}
 
 ### Limitations of AI models for {% data variables.product.prodname_copilot_chat_short %}

Diff for: content/copilot/using-github-copilot/asking-github-copilot-questions-in-your-ide.md (+1 −1)

@@ -153,7 +153,7 @@ You can tell {% data variables.product.prodname_copilot_short %} to answer a que
 
 {% data reusables.copilot.copilot-chat-models-beta-note %}
 
-{% data reusables.copilot.copilot-chat-models-list-o1 %}
+{% data reusables.copilot.copilot-chat-models-list-o3 %}
 
 ### Changing your AI model

Diff for: content/github-models/prototyping-with-ai-models.md (+53 −3)

@@ -133,70 +133,81 @@ Low, high, and embedding models have different rate limits. To see which type of
 <tr>
 <th scope="col" style="width:15%"><b>Rate limit tier</b></th>
 <th scope="col" style="width:25%"><b>Rate limits</b></th>
-<th scope="col" style="width:20%"><b>Free and Copilot Individual</b></th>
-<th scope="col" style="width:20%"><b>Copilot Business</b></th>
-<th scope="col" style="width:20%"><b>Copilot Enterprise</b></th>
+<th scope="col" style="width:15%"><b>Copilot Free</b></th>
+<th scope="col" style="width:15%"><b>Copilot Pro</b></th>
+<th scope="col" style="width:15%"><b>Copilot Business</b></th>
+<th scope="col" style="width:15%"><b>Copilot Enterprise</b></th>
 </tr>
 <tr>
 <th rowspan="4" scope="rowgroup"><b>Low</b></th>
 <th style="padding-left: 0"><b>Requests per minute</b></th>
 <td>15</td>
 <td>15</td>
+<td>15</td>
 <td>20</td>
 </tr>
 <tr>
 <th><b>Requests per day</b></th>
 <td>150</td>
+<td>150</td>
 <td>300</td>
 <td>450</td>
 </tr>
 <tr>
 <th><b>Tokens per request</b></th>
 <td>8000 in, 4000 out</td>
 <td>8000 in, 4000 out</td>
+<td>8000 in, 4000 out</td>
 <td>8000 in, 8000 out</td>
 </tr>
 <tr>
 <th><b>Concurrent requests</b></th>
 <td>5</td>
 <td>5</td>
+<td>5</td>
 <td>8</td>
 </tr>
 <tr>
 <th rowspan="4" scope="rowgroup"><b>High</b></th>
 <th style="padding-left: 0"><b>Requests per minute</b></th>
 <td>10</td>
 <td>10</td>
+<td>10</td>
 <td>15</td>
 </tr>
 <tr>
 <th><b>Requests per day</b></th>
 <td>50</td>
+<td>50</td>
 <td>100</td>
 <td>150</td>
 </tr>
 <tr>
 <th><b>Tokens per request</b></th>
 <td>8000 in, 4000 out</td>
 <td>8000 in, 4000 out</td>
+<td>8000 in, 4000 out</td>
 <td>16000 in, 8000 out</td>
 </tr>
 <tr>
 <th><b>Concurrent requests</b></th>
 <td>2</td>
 <td>2</td>
+<td>2</td>
 <td>4</td>
 </tr>
 <tr>
 <th rowspan="4" scope="rowgroup"><b>Embedding</b></th>
 <th style="padding-left: 0"><b>Requests per minute</b></th>
 <td>15</td>
 <td>15</td>
+<td>15</td>
 <td>20</td>
 </tr>
 <tr>
 <th><b>Requests per day</b></th>
 <td>150</td>
+<td>150</td>
 <td>300</td>
 <td>450</td>
 </tr>

@@ -205,59 +216,98 @@ Low, high, and embedding models have different rate limits. To see which type of
 <td>64000</td>
 <td>64000</td>
 <td>64000</td>
+<td>64000</td>
 </tr>
 <tr>
 <th><b>Concurrent requests</b></th>
 <td>5</td>
 <td>5</td>
+<td>5</td>
 <td>8</td>
 </tr>
 <tr>
 <th rowspan="4" scope="rowgroup"><b>Azure OpenAI o1-preview</b></th>
 <th style="padding-left: 0"><b>Requests per minute</b></th>
+<td>Not applicable</td>
 <td>1</td>
 <td>2</td>
 <td>2</td>
 </tr>
 <tr>
 <th><b>Requests per day</b></th>
+<td>Not applicable</td>
 <td>8</td>
 <td>10</td>
 <td>12</td>
 </tr>
 <tr>
 <th><b>Tokens per request</b></th>
+<td>Not applicable</td>
 <td>4000 in, 4000 out</td>
 <td>4000 in, 4000 out</td>
 <td>4000 in, 8000 out</td>
 </tr>
 <tr>
 <th><b>Concurrent requests</b></th>
+<td>Not applicable</td>
 <td>1</td>
 <td>1</td>
 <td>1</td>
 </tr>
 <tr>
 <th rowspan="4" scope="rowgroup" style="box-shadow: none"><b>Azure OpenAI o1-mini</b></th>
 <th style="padding-left: 0"><b>Requests per minute</b></th>
+<td>Not applicable</td>
+<td>2</td>
+<td>3</td>
+<td>3</td>
+</tr>
+<tr>
+<th><b>Requests per day</b></th>
+<td>Not applicable</td>
+<td>12</td>
+<td>15</td>
+<td>20</td>
+</tr>
+<tr>
+<th><b>Tokens per request</b></th>
+<td>Not applicable</td>
+<td>4000 in, 4000 out</td>
+<td>4000 in, 4000 out</td>
+<td>4000 in, 4000 out</td>
+</tr>
+<tr>
+<th><b>Concurrent requests</b></th>
+<td>Not applicable</td>
+<td>1</td>
+<td>1</td>
+<td>1</td>
+</tr>
+<tr>
+<th rowspan="4" scope="rowgroup" style="box-shadow: none"><b>Azure OpenAI o3-mini</b></th>
+<th style="padding-left: 0"><b>Requests per minute</b></th>
+<td>Not applicable</td>
 <td>2</td>
 <td>3</td>
 <td>3</td>
 </tr>
 <tr>
 <th><b>Requests per day</b></th>
+<td>Not applicable</td>
 <td>12</td>
 <td>15</td>
 <td>20</td>
 </tr>
 <tr>
 <th><b>Tokens per request</b></th>
+<td>Not applicable</td>
 <td>4000 in, 4000 out</td>
 <td>4000 in, 4000 out</td>
 <td>4000 in, 4000 out</td>
 </tr>
 <tr>
 <th><b>Concurrent requests</b></th>
+<td>Not applicable</td>
 <td>1</td>
 <td>1</td>
 <td>1</td>
+10 (new file)

@@ -0,0 +1,10 @@
+The following models are currently available through multi-model {% data variables.product.prodname_copilot_chat_short %}:
+
+* **GPT 4o:** This is the default {% data variables.product.prodname_copilot_chat_short %} model. It is a versatile, multimodal model that excels in both text and image processing and is designed to provide fast, reliable responses. It also has superior performance in non-English languages. Learn more about the [model's capabilities](https://platform.openai.com/docs/models/gpt-4o) and review the [model card](https://openai.com/index/gpt-4o-system-card/). GPT-4o is hosted on Azure.
+* **{% data variables.copilot.copilot_claude_sonnet %}:** This model excels at coding tasks across the entire software development lifecycle, from initial design to bug fixes, and from maintenance to optimizations. Learn more about the [model's capabilities](https://www.anthropic.com/claude/sonnet) or read the [model card](https://assets.anthropic.com/m/61e7d27f8c8f5919/original/Claude-3-Model-Card.pdf). {% data variables.product.prodname_copilot %} uses {% data variables.copilot.copilot_claude_sonnet %} hosted on Amazon Web Services.
+* **o1:** This model is focused on advanced reasoning and solving complex problems, in particular in math and science. It responds more slowly than the `gpt-4o` model. You can make 10 requests to this model per day. Learn more about the [model's capabilities](https://platform.openai.com/docs/models/o1) and review the [model card](https://openai.com/index/openai-o1-system-card/). o1 is hosted on Azure.
+* **o3-mini:** This model is the next generation of reasoning models, following from o1 and o1-mini. The o3-mini model outperforms o1 on coding benchmarks with response times that are comparable to o1-mini, providing improved quality at nearly the same latency. It is best suited for code generation and small context operations. You can make 50 requests to this model every 12 hours. Learn more about the [model's capabilities](https://platform.openai.com/docs/models#o3-mini) and review the [model card](https://openai.com/index/o3-mini-system-card/). o3-mini is hosted on Azure.
+
+For more information about the o1 and o3 models, see [Models](https://platform.openai.com/docs/models/models) in the OpenAI Platform documentation.
+
+For more information about the {% data variables.copilot.copilot_claude_sonnet %} model from Anthropic, see [AUTOTITLE](/copilot/using-github-copilot/using-claude-sonnet-in-github-copilot).

Diff for: data/reusables/models/o1-models-preview-note.md (+1 −1)

@@ -1 +1 @@
-> [!NOTE] Access to OpenAI's `o1` models is in {% data variables.release-phases.public_preview %} and subject to change.
+> [!NOTE] Access to OpenAI's `o1` and `o3` models is in {% data variables.release-phases.public_preview %} and subject to change.
