content/admin/enforcing-policies/enforcing-policy-with-pre-receive-hooks/creating-a-pre-receive-hook-environment.md (+23 −19)
@@ -26,42 +26,46 @@ If you are using another Git implementation, it must support relative paths in t
## Creating a pre-receive hook environment using Docker
- You can use a Linux container management tool to build a pre-receive hook environment. This example uses [Alpine Linux](https://www.alpinelinux.org/) and [Docker](https://www.docker.com/).
+ You can use a Linux container management tool to build a pre-receive hook environment. This example uses [Debian Linux](https://www.debian.org/) and [Docker](https://www.docker.com/).
{% data reusables.linux.ensure-docker %}
- 1. Create the file `Dockerfile.alpine` that contains this information:
+ 1. Create the file `Dockerfile.debian` that contains this information:
```dockerfile
- FROM alpine:latest
- RUN apk add --no-cache git bash
+ FROM --platform=linux/amd64 debian:stable
+ RUN apt-get update && apt-get install -y git bash curl
+ RUN rm -fr /etc/localtime /usr/share/zoneinfo/localtime
```
- 1. From the working directory that contains `Dockerfile.alpine`, build an image:
+ > [!NOTE] By default, the Debian image includes some symlinks that, if not removed, may cause errors when executing scripts in the custom environment. The last line of the example above removes these symlinks.
+
+ 1. From the working directory that contains `Dockerfile.debian`, build an image:
- This file `alpine.tar.gz` is ready to be uploaded to the {% data variables.product.prodname_ghe_server %} appliance.
+ This file `debian.tar.gz` is ready to be uploaded to the {% data variables.product.prodname_ghe_server %} appliance.
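For context, the steps elided from the diff build the image and export its filesystem as that tarball. A minimal sketch of those commands, assuming Docker is installed; the image and container name `pre-receive.debian` is illustrative, not taken from the diff:

```shell
# Build the image from the Dockerfile in the current working directory
docker build -f Dockerfile.debian -t pre-receive.debian .

# Create a container from the image; it never needs to run
docker create --name pre-receive.debian pre-receive.debian /bin/true

# Export the container's filesystem and compress it into the tarball
docker export pre-receive.debian | gzip > debian.tar.gz
```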
## Creating a pre-receive hook environment using chroot
@@ -78,7 +82,7 @@ You can use a Linux container management tool to build a pre-receive hook enviro
> * `/bin/sh` must exist and be executable, as the entry point into the chroot environment.
> * Unlike traditional chroots, the `dev` directory is not required by the chroot environment for pre-receive hooks.
- For more information about creating a chroot environment, see [Chroot](https://wiki.debian.org/chroot) from the _Debian Wiki_, [BasicChroot](https://help.ubuntu.com/community/BasicChroot) from the _Ubuntu Community Help Wiki_, or [Installing Alpine Linux in a chroot](https://wiki.alpinelinux.org/wiki/Installing_Alpine_Linux_in_a_chroot) from the _Alpine Linux Wiki_.
+ For more information about creating a chroot environment, see [Chroot](https://wiki.debian.org/chroot) from the _Debian Wiki_ or [BasicChroot](https://help.ubuntu.com/community/BasicChroot) from the _Ubuntu Community Help Wiki_.
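As a rough sketch only, a minimal Debian chroot environment that satisfies the `/bin/sh` requirement above could be assembled with `debootstrap` and packaged for upload; the target path, suite, and tarball name here are illustrative assumptions, not from the linked guides:

```shell
# Bootstrap a minimal Debian root filesystem (includes /bin/sh)
sudo debootstrap --variant=minbase stable /tmp/chroot-env

# Package the environment so the filesystem sits at the root of the tarball
cd /tmp/chroot-env
sudo tar -czf /tmp/chroot-env.tar.gz .
```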
## Uploading a pre-receive hook environment on {% data variables.product.prodname_ghe_server %}
@@ -98,6 +102,6 @@ For more information about creating a chroot environment see [Chroot](https://wi
1. Use the `ghe-hook-env-create` command and type the name you want for the environment as the first argument and the full local path or URL of a `*.tar.gz` file that contains your environment as the second argument.
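For example, to create an environment named `DebianTestEnv` from a tarball already copied to the appliance (the environment name and path here are hypothetical):

```shell
ghe-hook-env-create DebianTestEnv /home/admin/debian.tar.gz
```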
content/copilot/managing-copilot/managing-copilot-for-your-enterprise/managing-policies-and-features-for-copilot-in-your-enterprise.md (+8 −5)
@@ -34,7 +34,7 @@ You can configure any of the following policies for your enterprise:
* [Suggestions matching public code](#suggestions-matching-public-code)
* [Give {% data variables.product.prodname_copilot_short %} access to Bing](#give-copilot-access-to-bing)
* [{% data variables.product.prodname_copilot_short %} access to {% data variables.copilot.copilot_claude_sonnet %}](#copilot-access-to-claude-35-sonnet)
- * [{% data variables.product.prodname_copilot_short %} access to the o1 family of models](#copilot-access-to-the-o1-family-of-models)
+ * [{% data variables.product.prodname_copilot_short %} access to the o1 and o3 families of models](#copilot-access-to-the-o1-and-o3-families-of-models)
### {% data variables.product.prodname_copilot_short %} in {% data variables.product.prodname_dotcom_the_website %}
@@ -81,16 +81,19 @@ You can chat with {% data variables.product.prodname_copilot %} in your IDE to g
By default, {% data variables.product.prodname_copilot_chat_short %} uses the `GPT 4o` model. If you grant access to **Anthropic {% data variables.copilot.copilot_claude_sonnet %} in {% data variables.product.prodname_copilot_short %}**, members of your enterprise can choose to use this model rather than the default `GPT 4o` model. See [AUTOTITLE](/copilot/using-github-copilot/using-claude-sonnet-in-github-copilot).
- ### {% data variables.product.prodname_copilot_short %} access to the o1 family of models
+ ### {% data variables.product.prodname_copilot_short %} access to the o1 and o3 families of models
{% data reusables.models.o1-models-preview-note %}
- By default, {% data variables.product.prodname_copilot_chat_short %} uses the `GPT 4o` model. If you grant access to the o1 family of models, members of your enterprise can select to use these models rather than the default `GPT 4o` model.
+ By default, {% data variables.product.prodname_copilot_chat_short %} uses the `GPT 4o` model. If you grant access to the o1 or o3 models, members of your enterprise can choose to use these models rather than the default `GPT 4o` model.
- The o1 family of models includes three models:
+ The o1 family of models includes the following models:
* `o1`/`o1-preview`: These models are focused on advanced reasoning and solving complex problems, in particular in math and science. They respond more slowly than the `gpt-4o` model. Each member of your enterprise can make 10 requests to each of these models per day.
- * `o1-mini`: This is the faster version of the `o1` model, balancing the use of complex reasoning with the need for faster responses. It is best suited for code generation and small context operations. Each member of your enterprise can make 50 requests to this model per day.
+
+ The o3 family of models includes one model:
+
+ * `o3-mini`: This is the next generation of reasoning models, following from `o1` and `o1-mini`. The `o3-mini` model outperforms `o1` on coding benchmarks with response times comparable to `o1-mini`, providing improved quality at nearly the same latency. It is best suited for code generation and small context operations. Each member of your enterprise can make 50 requests to this model every 12 hours.
### {% data variables.product.prodname_copilot_short %} Metrics API access
content/copilot/managing-copilot/managing-github-copilot-in-your-organization/managing-policies-for-copilot-in-your-organization.md (+1 −1)
@@ -33,7 +33,7 @@ Organization owners can set policies to govern how {% data variables.product.pro
* Suggestions matching public code
* Access to alternative models for {% data variables.product.prodname_copilot_short %}
* Anthropic {% data variables.copilot.copilot_claude_sonnet %} in Copilot
- * OpenAI o1 models in Copilot
+ * OpenAI o1 and o3 models in Copilot
The policy settings selected by an organization owner determine the behavior of {% data variables.product.prodname_copilot %} for all organization members that have been granted access to {% data variables.product.prodname_copilot_short %} through the organization.
The following models are currently available through multi-model {% data variables.product.prodname_copilot_chat_short %}:
+
+ * **GPT 4o:** This is the default {% data variables.product.prodname_copilot_chat_short %} model. It is a versatile, multimodal model that excels in both text and image processing and is designed to provide fast, reliable responses. It also has superior performance in non-English languages. Learn more about the [model's capabilities](https://platform.openai.com/docs/models/gpt-4o) and review the [model card](https://openai.com/index/gpt-4o-system-card/). GPT 4o is hosted on Azure.
+ * **{% data variables.copilot.copilot_claude_sonnet %}:** This model excels at coding tasks across the entire software development lifecycle, from initial design to bug fixes, and from maintenance to optimizations. Learn more about the [model's capabilities](https://www.anthropic.com/claude/sonnet) or read the [model card](https://assets.anthropic.com/m/61e7d27f8c8f5919/original/Claude-3-Model-Card.pdf). {% data variables.product.prodname_copilot %} uses {% data variables.copilot.copilot_claude_sonnet %} hosted on Amazon Web Services.
+ * **o1:** This model is focused on advanced reasoning and solving complex problems, in particular in math and science. It responds more slowly than the `gpt-4o` model. You can make 10 requests to this model per day. Learn more about the [model's capabilities](https://platform.openai.com/docs/models/o1) and review the [model card](https://openai.com/index/openai-o1-system-card/). o1 is hosted on Azure.
+ * **o3-mini:** This model is the next generation of reasoning models, following from o1 and o1-mini. The o3-mini model outperforms o1 on coding benchmarks with response times comparable to o1-mini, providing improved quality at nearly the same latency. It is best suited for code generation and small context operations. You can make 50 requests to this model every 12 hours. Learn more about the [model's capabilities](https://platform.openai.com/docs/models#o3-mini) and review the [model card](https://openai.com/index/o3-mini-system-card/). o3-mini is hosted on Azure.
+
+ For more information about the o1 and o3 models, see [Models](https://platform.openai.com/docs/models/models) in the OpenAI Platform documentation.
+
+ For more information about the {% data variables.copilot.copilot_claude_sonnet %} model from Anthropic, see [AUTOTITLE](/copilot/using-github-copilot/using-claude-sonnet-in-github-copilot).