How to govern AI agents in your GitHub Enterprise #193359
ghostinhershell started this conversation in Discover: GitHub Best Practices
AI agents are writing a lot of code. Copilot coding agent, Copilot code review agent, and third-party agents like Anthropic Claude and OpenAI Codex now open pull requests, run tests, and push changes across enterprise codebases. In many organizations, agents already rank among the top contributors by PR volume.
That speed creates a governance problem. Agents act faster than any person. They connect to external services through MCP. They run code in environments that hold secrets and infrastructure triggers. One bad policy change can ripple across dozens of repositories in minutes.
The GitHub Well Architected team published a full recommendation on Governing agents in GitHub Enterprise. It covers trust boundaries, audit pipelines, cost controls, and security gates in detail. This post summarizes the five core strategies.
Set a minimal enterprise baseline, then step back
Lock down the non-negotiables at the enterprise level: audit log streaming, model restrictions, compliance controls. Every organization inherits that floor. Then let each organization decide when to enable agents, how to configure MCP, and which custom agents to create.
Why does this matter? Over-centralizing produces generic configurations and slows teams down. Under-governing produces inconsistent agent behavior and unreviewed tool access. A thin enterprise layer with organizational freedom on top hits the right balance.
One pattern to avoid: organizations turning on agents before audit log streaming or model restrictions are in place.
Layer your agent configuration
Enterprise controls set security and compliance baselines. Repository-level configuration is where teams make agents effective for their specific codebase, language, and framework.
Pushing all instructions to the enterprise level wastes tokens and produces generic results. The better pattern looks like this:
- `AGENTS.md`, `mcp.json`, and `copilot-instructions.md` define what agents can do. Changes to these files need human review.
- `copilot-setup-steps.yml` defines the build environment. Pin dependencies by application type so agents build and test reliably across repositories.

One pattern to avoid: developers configuring arbitrary MCP servers or agent instructions with no review process.
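One way to put the "human review required" rule into practice is a CODEOWNERS entry covering the agent configuration files, combined with a branch ruleset that requires code-owner approval. The team name below is a placeholder, and the `mcp.json` location varies by setup, so adjust the paths to match your repositories:

```
# .github/CODEOWNERS -- route agent configuration changes to a
# governance team (@acme/platform-governance is a placeholder name;
# the mcp.json path depends on where your tooling stores it).
/AGENTS.md                                  @acme/platform-governance
/.github/copilot-instructions.md            @acme/platform-governance
/.github/workflows/copilot-setup-steps.yml  @acme/platform-governance
/.vscode/mcp.json                           @acme/platform-governance
```

Pair this with a ruleset on the default branch that requires review from code owners, so neither a human nor an agent can merge changes to these files unreviewed.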
Require the same review gates for agent code and human code
The cloud agent has built-in protections. They are a starting point, not the whole answer, so layer additional controls on top.
For the code review agent, pick a strategy that fits your risk tolerance. The full article walks through three options: automatic reviews on high-risk repos only, automatic on all PRs, or on-demand only. Each has clear trade-offs.
The core principle is simple. Agent-authored code gets the same CI checks, the same security scans, and the same review gates as human-authored code. No exceptions.
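That principle can be encoded directly in CI: trigger checks on the pull request event itself, never conditioned on who authored it. A minimal sketch, assuming a Node.js project (the job contents are placeholders for your own build and scan steps):

```yaml
# .github/workflows/ci.yml -- runs on every PR, whether the author
# is a person or an agent. Do not carve out exemptions by actor;
# mark these checks as required in your branch ruleset.
name: ci
on:
  pull_request:
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm test
```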
Make agent activity visible and traceable
You need two complementary views into what agents are doing.
- **Audit log streaming to your SIEM.** This gives you long-term retention and anomaly detection. Key fields like `agent_session_id`, `actor_is_agent`, and `user` let you correlate events across an entire session. Set alerts for unusual session volume, MCP policy changes, agent modifications to workflow files, and ruleset bypass attempts.
- **Session transcript spot-checks in the GitHub UI.** Transcripts show the agent's reasoning, the commands it ran, and where things broke. Audit logs alone cannot give you that context. Schedule periodic reviews for repositories that hold secrets, infrastructure-as-code, or CI/CD workflows.
One pattern to avoid: relying only on the GitHub UI for audit review without streaming logs to an external system.
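Once audit events land in your SIEM, the session-volume alert reduces to counting distinct agent sessions per user. A minimal sketch, assuming events arrive as JSON records carrying the fields named above; the threshold is an illustrative default, not a GitHub-recommended value:

```python
from collections import Counter

def flag_unusual_agent_sessions(events, max_sessions_per_user=20):
    """Return users whose distinct agent-session count exceeds the threshold.

    `events` is an iterable of dicts shaped like streamed audit-log
    records (agent_session_id, actor_is_agent, user). Tune the
    threshold against your own baseline session volume.
    """
    # Deduplicate to (user, session) pairs from agent-attributed events.
    seen = {
        (e.get("user"), e["agent_session_id"])
        for e in events
        if e.get("actor_is_agent") and e.get("agent_session_id")
    }
    per_user = Counter(user for user, _sid in seen)
    return sorted(u for u, n in per_user.items() if n > max_sessions_per_user)
```

The same shape works as a scheduled SIEM query; the point is to alert on distinct sessions, not raw event counts, since a single session emits many events.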
Make cost predictable before you scale
Agents consume GitHub Actions minutes and premium requests. Each session can run up to 59 minutes. Different models have different cost multipliers. Without spending limits, costs can spike fast and be hard to trace back to a specific team.
Before you expand agent access, set spending limits so usage stays bounded and attributable to specific teams.
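A rough model of that spend: billed premium requests are raw request counts scaled by each model's cost multiplier. The model names and multiplier values below are made-up placeholders; check your plan's actual rates before budgeting:

```python
def estimate_premium_requests(usage, multipliers):
    """Total billed premium requests given per-model request counts.

    `usage` maps model name -> raw request count; `multipliers` maps
    model name -> its cost multiplier. All names and rates here are
    placeholders, not GitHub pricing.
    """
    return sum(count * multipliers[model] for model, count in usage.items())

# Example: a hypothetical "base" model at 1x and "frontier" model at 10x.
monthly = estimate_premium_requests(
    {"base": 500, "frontier": 30},
    {"base": 1, "frontier": 10},
)
```

Running the per-team numbers through a model like this before enabling agents makes a spending limit a deliberate choice rather than a surprise on the invoice.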
Read the full recommendation
This post covers the reasoning behind each strategy. The full article includes step-by-step configuration instructions, a detailed checklist, SIEM signal tables, and links to every relevant GitHub docs page.
👉 Governing agents in GitHub Enterprise, GitHub Well Architected
Written by @KittyChiu, @tspascoal, @kenmuse, @joshjohanning, and @ayodejiayodele