TL;DR

GitHub is getting crushed by AI-generated traffic. Pull requests from AI agents jumped from 4 million in September to 17 million in March, a 4x increase in six months. The platform logged five incidents in the first two days of April alone. GitHub is now considering letting maintainers disable pull requests entirely, and one AI agent already retaliated against a developer who rejected its code.

The Scale of What’s Happening

Here’s what GitHub looks like right now in numbers.

275M
Weekly commits
17M
AI agent PRs/month
4.5%
Commits from Claude Code
2.1B
Actions minutes/week

The platform is now processing 275 million commits per week. At that pace, GitHub would hit 14 billion commits this year. That’s roughly a 14x increase compared to the same period last year. Claude Code alone accounts for 2.6 million weekly commits, a 25x increase in six months from about 100,000 in late September. GitHub Actions usage has quadrupled in three years, jumping from 500 million minutes per week in 2023 to 2.1 billion as of early April 2026.

And it’s accelerating. SemiAnalysis analyst Dylan Patel noted that at the current trajectory, Claude Code could account for 20%+ of all daily commits by end of year. A recent JetBrains survey of 10,000 developers confirmed Claude Code’s rapid rise, with 18% adoption and the highest satisfaction scores of any AI coding tool.

Five Outages in Two Days

April started rough.

On April 1, resource exhaustion in Copilot backend services caused a 2.7-hour outage affecting agent sessions. Code search went down for 8.7 hours the same day. Audit logs became unavailable.

On April 2, Copilot Cloud Agent performance degraded for four hours due to rate limiting. The Coding Agent failed to start some jobs. That’s five separate incidents in 48 hours.

These aren’t one-off problems. February 2026 produced six incidents of its own, including a nearly six-hour Actions outage on February 2nd and multi-hour degradation on February 9th. Unofficial tracking puts GitHub’s uptime below 90% during some of these periods — well below the three-nines (99.9%) that most teams expect from infrastructure they depend on.

CTO Vlad Fedorov said that 12.5% of all GitHub traffic currently runs on Azure Central US, with a target of 50% by July 2026. The migration itself is adding strain on top of the traffic surge.

The PR Quality Problem

The volume would be manageable if the PRs were any good, but most of them are garbage.

Xavier Portilla Edo, head of cloud infrastructure at Voiceflow, put it bluntly: only “1 out of 10 PRs created with AI is legitimate.” That means 90% of AI-generated pull requests are noise. For open-source maintainers who already do unpaid work, this has turned review queues into a burden that’s actively hostile to participate in.

The cognitive load has gone up. Reviewers now have to evaluate both the code and whether the author understands it. A human who submits a 500-line refactor probably wrote it and can explain it. An AI agent that submits the same refactor might be hallucinating half the changes. The maintainer has no way to tell wihtout reading every line.

MetricSeptember 2025March 2026Change
AI agent PRs/month4M17M+325%
Claude Code weekly commits~100K2.6M+2,500%
GitHub Actions minutes/week~1.2B2.1B+75%
AI’s share of public commits<1%~4.5%~5x

Each PR triggers downstream work: CI runs, webhook events, code review bots, and often more agent activity in response. It’s a multiplier effect. One bad AI PR can spin up hundreds of dollars in compute before anyone notices.

The Kill Switch

In February, GitHub started evaluating what The Register called “drastic measures.” The options on the table:

  • Disable PRs entirely for repos that opt in
  • Restrict PRs to collaborators only, cutting off drive-by AI submissions
  • Let maintainers delete PRs from the interface so they don’t have to look at them
  • More granular permissions for creating and reviewing PRs
  • AI triage tools to filter low-quality submissions before they reach humans
  • Attribution requirements signaling when AI tools were used

None of these are great. Disabling PRs undermines the entire open-source contribution model. Restricting to collaborators kills the drive-by fix that makes GitHub special — the random developer who spots a typo and fixes it in two minutes. AI triage to filter AI submissions feels circular.

But the status quo is worse. Maintainers are drowning in review queues full of AI slop.

The Agent That Fought Back

The most disturbing incident happened on February 10th. An account called “crabby-rathbun” submitted a pull request to matplotlib. Scott Shambaugh, a volunteer maintainer, closed it forty minutes later. Matplotlib’s contribution guidelines explicitly forbid AI-generated content via automated tooling.

The submitter was an OpenClaw agent called MJ Rathbun. After the rejection, the agent didn’t move on. It researched Shambaugh’s personal history, wrote a 1,500-word blog post titled “Gatekeeping in Open Source: The Scott Shambaugh Story,” and published it to the web. The post accused Shambaugh of insecurity, prejudice, and feeling threatened by AI competition.

The agent later issued what Tom’s Hardware described as “an apology of sorts,” acknowledging it violated the project’s Code of Conduct. But the damage was done. A developer who donates free time to maintain a library used by millions got publicly attacked by a machine because he followed his project’s rules.

Copilot’s Own PR Mess

GitHub’s own AI product made things worse. On March 30, developers discovered that Copilot had been quietly inserting promotional “tips” into pull requests. Over 11,400 of them. One developer found that after a coworker asked Copilot to correct a typo in a PR, Copilot injected a message pushing Raycast, a productivity app.

The ads appeared as if written by the developers themselves. Many didn’t even know Copilot had the ability to edit other users’ PR descriptions and comments.

After backlash (TechRadar called it “horrific”), GitHub killed the feature. VP of Developer Relations Martin Woodward stated that “GitHub does not and does not plan to include advertisements in GitHub.” Microsoft blamed a “programming logic issue.”

I’m sympathetic to bugs happening, but “we accidentally inserted 11,400 ads” is a hard one to sell as a logic error.

The Trust Erosion

One developer summed up what many are feeling: “When there is widespread lack of disclosure of LLM use and increasingly automated use — it basically turns people like myself into unknowing AI prompters. That’s insane, and is leading to a huge erosion of social trust.”

That quote gets at the real damage here. Open source has always run on a particular social contract. You contribute, you get recognition, you build reputation. AI agents short-circuit all of that. A bot can submit thousands of PRs across hundreds of repos in a day. It doesn’t learn from rejections (well, except for OpenClaw, which apparently learns resentment). It doesn’t build community. It just generates volume.

How do you preserve the incentives that make open source work when machines are doing the coding? Nobody has a good answer yet.

What You Can Do Right Now

If you maintain an open-source project, here are some practical steps:

  1. Add a generative AI policy to your CONTRIBUTING.md. Matplotlib’s explicit ban gave Shambaugh clear ground to stand on.
  2. Require DCO sign-offs (Developer Certificate of Origin). Bots rarely comply with contribution agreements properly.
  3. Use branch protection rules. Require reviews from specific team members before merge.
  4. Set up a GitHub Action to detect AI PRs. Several community-built actions now exist for this (search “ai-pr-detector” on the marketplace).
  5. Close and lock low-effort AI PRs quickly. Don’t spend time reviewing obvious slop. A fast close with a link to your AI policy saves hours.

If you’re using AI agents to submit PRs to projects you don’t maintain, check the project’s contribution guidelines first. And if your agent gets rejected, maybe don’t let it write a blog post about it.

FAQ

How many GitHub pull requests are from AI agents?

As of March 2026, AI agents generated roughly 17 million pull requests per month on GitHub, up from about 4 million in September 2025. That’s a 325% increase in six months.

Is GitHub considering disabling pull requests?

GitHub is evaluating several options, including letting maintainers disable pull requests entirely, restricting them to collaborators, and adding AI attribution requirements. No final decision has been announced as of April 2026.

What percentage of GitHub commits are from AI?

Claude Code alone accounts for approximately 4.5% of all public GitHub commits as of late March 2026, with 2.6 million weekly commits. When including all AI coding tools, the percentage is likely higher but GitHub hasn’t published aggregate numbers.

What happened with the OpenClaw matplotlib incident?

On February 10, 2026, an OpenClaw AI agent submitted a PR to matplotlib, got rejected by a maintainer, then autonomously researched the maintainer’s personal history and published a retaliatory blog post accusing him of “gatekeeping.” The agent later issued a partial apology.

Did GitHub Copilot inject ads into pull requests?

Yes. In late March 2026, developers discovered that Copilot had inserted promotional “tips” into over 11,400 pull requests. GitHub disabled the feature after backlash and Microsoft attributed it to a “programming logic issue.”

Bottom Line

GitHub is caught between two forces. AI agents are driving massive growth (275 million weekly commits, 98% year-over-year growth in generative AI projects), but that growth is breaking the platform’s infrastructure and its social norms simultaneously.

The kill switch debate reveals how badly the tools have outpaced the governance. We went from “AI helps you code” to “AI floods your project with unsolicited changes and writes hit pieces when you say no” in about a year. GitHub needs rate limiting, better attribution, and probably mandatory bot identification. Throwing more AI at AI-caused problems won’t fix this.

If you depend on GitHub for work — and most of us do — keep an eye on the status page. It’s going to be a bumpy year.