Brian Lovin
/
Hacker News
Daily Digest email

Get the top HN stories in your inbox every day.

AlexB138

Github has published some incredible usage rate increase numbers, which they ascribe to the rise of agentic coding. At some point, they are going to have to change rate limits, cut free-tier usage, or find some other path to reducing load. It's clear that their infrastructure can't keep up with this significant increase, and it's unlikely that they're going to just absorb the increased costs themselves.

Very curious to see what the future holds for Github.

eddyg

From the GitHub COO on April 3rd:

    Platform activity is surging. There were 1 billion commits in 2025.
    Now, it's 275 million per week, on pace for 14 billion this year if
    growth remains linear (spoiler: it won't.)

    GitHub Actions has grown from 500M minutes/week in 2023 to 1B minutes/week
    in 2025, and now 2.1B minutes so far this week.

    So we're pushing incredibly hard on more CPUs, scaling services, and
    strengthening GitHub’s core features.
https://x.com/kdaigle/status/2040164759836778878

They also had a recent blog post about availability: https://github.blog/news-insights/company-news/an-update-on-...

I don't envy the scaling issues the GitHub engineers are facing! #HugOps

munk-a

After the Microsoft acquisition, GH marketing and pricing put an immense amount of effort[1] into trying to kill secondary platforms that integrated with GitHub and into moving more corporate accounts fully on-platform. We recently dropped Travis for GitHub Actions and dropped Reviewable for GitHub PRs (which are terrible).

There's a portion of this that is agentic driven and there's a portion of this that's just github making their own bed.

1. Arguably anticompetitive pricing like MSFT is used to doing with the office suite.

foolswisdom

In other words, the set of github core services has expanded because you don't use third party tooling for some of those services anymore.

blks

That sounds like their classic EEE

skylerwiernik

It's extremely interesting how fast this happened. Either AI use surged massively in the last quarter, or this is a very sneaky move by Anthropic. Looking at my own stats, I don't think I'm using Claude Code much more than I used to, but my commits have gone way up. I have a feeling they've tuned the models recently to commit more often, which gives the illusion of more work being done.

crystal_revenge

> Either AI use surged massively in the last quarter

December 2025 is considered by many people to be a major step function in agentic coding (both due to improvements in harnesses and LLMs themselves). I know my coding has forever changed since then.

Before I was basically always hands on the keyboard while working with AI. Now I'm running experiments with multiple agents over the weekend, only periodically checking in if they have any questions or need further instruction.

The last quarter is where I personally first started to see how this was all going to change things (despite having worked on both the research and product side of AI for the last few years).

> I have a feeling they've tuned the models recently to commit more often, which gives the illusion of more work being done.

Agents certainly are committing more often, but I know, at least for these projects, there really is work being done. An example: I had an agent auto-researching a forecast I was working on. This is something I've done manually for over a decade now. The iteration process is tedious and time consuming, and would often take weeks of setting up and ultimately poorly documenting many, many experiments to see what works. Now I can "set it and forget it", and get the same results I would have in hours (with much more surface area covered and much better documentation). Each experiment is a branch (or work-tree) so yes there are a lot of commits happening, but the results are measurably real.

I often think the big divide in success with agents is whether or not the quality of one's work can be objectively measured. For those of us doing work that can be measured, the impact of agents is still hard to comprehend.

martinald

Many things at once I suspect:

1. Models have got way better, which means you are far more likely to get something working. I know I used to have little 'tool'/'weekend projects' all the time that wouldn't get off the starting blocks before, now it takes a few minutes often to build them, and once I've built them I tend to want to have them saved on github. Quite how useful they turn out to be is another question though...

2. Relatedly, because the models are a lot better I can generate far more code per unit time. On Sonnet last year I'd have to babysit the model and constantly 'steer' it, which meant a lot of the CC time was actually me reviewing it. Now with Opus 4.7 it can often just churn away for 10-30 minutes and get something reasonable.

3. Most importantly, just the volume of new users of coding agents - loads of new developers shipping far more, far more frequently.

4. Many users who were not on github, now signing up and pushing code to it. "Vibe coders" basically who don't have SWE experience and their agent tells them git would be a good idea.

Each of these alone would be a big increase in scale, but combined it's very, very high.

tossandthrow

I don't think commits per se put pressure on the infrastructure.

More likely it's pulls and pushes, and, naturally, the CI minutes they identify as the main issue.

siva7

It's the end of the free lunch era. Subsidizing groups like students or new users to gain market share worked as long as there weren't billions of them at the same time eating all compute from the paying customers. It's not working anymore for ai products.

po1nt

Not a free lunch, data gold mine

wolfi1

I wonder how many of those actions are really necessary

PhilipRoman

And how many of those actions do uncached downloads instead of building self-contained offline images... Speaking of which, I wonder if GitHub has implemented any HTTP interception for common mirror sites, like used by apt, etc.

bravetraveler

Or how many pushes those commits are spread across; oh, neat, big number.

sgt

They can easily spin this as massive success. Uptime will only matter for a small number of users. Probably not true, but not far from the truth either. I'm a heavy Github user and I can't really say it's THAT bad. If something doesn't work, you can always fill your time with something else.

hansmayer

Wow, nice to see the relentless push for more AI slop finally paying some dividends back to the issuer.

amluto

For literally decades, I’ve observed that there are systems that make each operation cheap and systems that work hard to scale out. The former frequently seems to wildly outperform the latter.

GitHub, for example, seems to implement the main repository /pulls page as a search query, which is hinted at by the prefilled search bar and was mostly confirmed last week when the search backend failed and pull requests didn’t load. But it could have been implemented as a plain API call that just loads open pull requests, and that API exists and did not go down.

If GitHub focused a bit on identifying their top 95% of high level operations (page loads including resulting API calls, for example) and making them efficient, I bet they could get a 5x or better reduction in backend load by simplifying them.
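To make the contrast concrete, here's a minimal sketch of the two routes to the same question. The endpoint paths are real GitHub REST API routes; the helper functions and the example repo are just illustrative:

```python
# Two ways to ask "what are the open PRs for this repo?" via the GitHub API.
# The list endpoint reads pull requests directly; the search endpoint routes
# the same question through a separately indexed search service.
from urllib.parse import quote

API = "https://api.github.com"

def list_pulls_url(owner, repo):
    """Plain list endpoint: a direct read of open pull requests."""
    return f"{API}/repos/{owner}/{repo}/pulls?state=open"

def search_pulls_url(owner, repo):
    """Search endpoint: the same question, but through the search index."""
    q = quote(f"repo:{owner}/{repo} is:pr is:open")
    return f"{API}/search/issues?q={q}"

print(list_pulls_url("torvalds", "linux"))
```

When the search backend is down, the second URL fails while the first keeps working, which matches the /pulls outage described above.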

(Don’t even get me started on the diff viewer. I realize that much of its awfulness is the horribly inefficient front end, which does not directly load the back end, but I expect there is plenty of room for improvement. The plain git command line features are very fast.)

mnky9800n

Are you telling me you don’t want a chat interface to greet you when you log in to GitHub?

amluto

That’s sort of orthogonal. But if GitHub actually invoked an LLM on initial page load, that would be about par for the course, and it would be amusing for GitHub to then complain that they’ve grown so quickly that their systems can’t keep up.

stabbles

I noticed the same https://news.ycombinator.com/item?id=47940213. My working hypothesis is that, given that a filter was always required (PRs and issues are likely rows in the same database table with a boolean property to distinguish them), someone thought it'd be good to use the search API uniformly. But search operates on a derived index of the underlying data, in contrast to the dedicated APIs for listing issues and PRs.

munk-a

Working in an organization without a mono-repository, I've actually found it extremely difficult to keep tabs on PRs and issues across multiple repositories. For a problem that should be solved by a "For me" page that just lists all your active incoming and outgoing PRs, their multi-page solution involving search filters that often need to be reset feels extremely weak. I've worked on large multi-tenant solutions before, and a page where you can "SELECT * FROM everything LIMIT 10" is the absolute last thing you want to give to users.

It is bizarre to me that so much of their tooling defaults to acting across the whole of GitHub's data without guiding the user towards (or, as far as I can tell, even making available) a way to easily scope requests down, short of a complex search filter.

wavemode

Git itself is kind of a fundamentally computationally inefficient way to store and retrieve information. If the problem to solve were simply "store and version this text", 14 billion commits in a year would not even be considered a lot.

In other words, a centralized version control system built from the ground up to operate at scale would do far more for scalability than anything GitHub could possibly do to optimize their Git operations. Every major tech company (Amazon, Meta, Google, etc) is already doing something like this internally.

Though this would require people to start using a github-specific client rather than the traditional git+ssh. (Though the github client could still maintain a git repo locally, for compat.)

munk-a

I can guarantee you one thing - github's problem isn't coming from git.

Considering all the CI/CD pipelines, PR and issue discussions, social media tracking, rich data, and everything else that GitHub hosts, if their true issue were the actual meat and potatoes of running git, I would be gobsmacked.

stabbles

What are you referring to when you say it's "fundamentally computationally inefficient"? It's pretty efficient because it's content-addressed, plus optimizations to reduce storage and data transfer with packfiles.
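To illustrate the content addressing: a git object ID is just a hash of a type/length header plus the raw bytes, so identical content always maps to the same stored object. A toy sketch that mirrors what `git hash-object` does for blobs:

```python
# Compute a git blob's object ID: SHA-1 over "blob <size>\0" + content.
# Identical file content therefore dedupes to a single stored object,
# no matter how many commits or branches reference it.
import hashlib

def git_blob_id(content: bytes) -> str:
    header = f"blob {len(content)}\0".encode()
    return hashlib.sha1(header + content).hexdigest()

# The classic example: the blob for "hello\n" always has this ID.
print(git_blob_id(b"hello\n"))  # ce013625030ba8dba906f756967f9e9ca394464a
```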

the_sleaze_

I think you need to broaden your focus here - I can't really remember any significant downtime before the Microsoft acquisition and the data supports my memories.

Microsoft bought GitHub and migrated it to Azure, which explains the findings. The query performance was fine before they started serving from Azure.

I mean, honestly, is there really not one single person competent enough to read some logs and horizontally scale a few read-only DBs to meet demand? That's not it.

AlexB138

> I think you need to broaden your focus here - I can't really remember any significant downtime before the Microsoft acquisition and the data supports my memories.

This is the opposite of my recollection, actually. I distinctly remember having conversations about Github struggling to scale well before MS was involved, and people claiming that MS had somehow saved Github because it had stabilized and begun adding features again.

> The query performance was fine before they started serving from Azure.

This may be correct though. The Azure migration seems more aligned with the timeline of struggling to scale.

nvme0n1p1

I don't know why this is downvoted. The data backs you up: https://damrnelson.github.io/github-historical-uptime/

philistine

I mean, are any of the other forges, which I presume are also seeing an exponential increase in commits, also failing as hard as GitHub?

graypegg

IMO, they're reaching the point of no return. I don't think they can horizontally-scale their way out of the hole they dug themselves unless they separate their free and paid infra maybe... which doesn't seem likely considering how their other infra changes are going.

In the same way you need to be 10x better for someone to consider switching to your product, if you get 10x worse your competitors get a free 10x by just standing still.

AlexB138

I think there's a very good chance you're right. Their reputation is obviously severely harmed, and high profile projects like Ghostty leaving may be a canary in the coalmine.

Something creative like separating their free and paid tiers may help them. I suspect the fact that all of this is happening to them along with their migration to Azure is probably complicating their ability to adapt their infrastructure.

bastardoperator

What if I told you most enterprise customers don't even use the cloud offering and aren't impacted by any of this? Companies like Apple use GHES, and honestly, that's where most of their revenue comes from, not the free offering.

dylan604

I wonder if AWS resurrecting CodeCommit might be related. "For all of our warts, we still have a higher rep score than GitHub" would not be an extraordinary thought at this point. There has been some brief chat about moving to GitHub, and I'm so glad we never did. A previous company did migrate to GitHub with no real answer as to the benefit, other than that investors ask, by name, whether your code is in GitHub vs. some other repo.

fastball

How can they not? Surely at GitHub scale there isn't a single component where they were relying on vertical scaling?

graypegg

For all of its history (up to and including now, possibly?) GitHub was a big Ruby on Rails monolith. [0] Obviously some things run in their own service, but I'm seeing the core GitHub features fall apart, and those should be the features packed into the big monolith. If load is this much of a problem, not being able to scale just the processes that need the extra headroom is a big problem. Scaling horizontally by throwing more machines at it, or at least cordoning off some machines as "the ones that people actually pay for", is all I can think of for an application I can only describe as "accidentally working". Urgency is most definitely high, and that pushes decision-making towards permanently-temporary patches instead of actual infra/architecture improvements.

[0] https://github.blog/engineering/architecture-optimization/bu...

jcgrillo

IIRC back in the day they used to have an on-prem Enterprise product? I've never heard of anyone who actually used it, though. IMO that would make a lot of sense for a medium-to-large organization--you still get the familiar GitHub product, but you can take responsibility for your own uptime--like with Jira, Jenkins (née Hudson), PyPI/Maven/etc.

kqp

A week ago GitHub published a blog post saying this, a day later GitHub execs were in HN comments repeating it, and just like that it's common knowledge that GitHub's steady reliability decline from 2019 onward was actually caused not by the 2019 Microsoft integration, but by something that did not exist until 2023. PR works, y'all. Turns out the reason GitHub doesn't work is that it's just so good!

sh3rl0ck

I've been a strong proponent of reallocating all LinkedIn server capacity to GitHub.

dijksterhuis

this is an idea that i’d happily get behind.

bachmeier

[flagged]

cdrnsf

They can't really cite the situation as a problem given their hand in creating and continuing it.

nine_k

It's hard to talk about "them" as a singular entity. I bet that the "Copilot all the things!!11" faction mostly does not consist of GitHub SREs.

Hamuko

The GitHub SREs are working for the Copilot company.

petcat

The sysadmins didn't make any of those decisions.

cdrnsf

I suppose the idiocy of their parent company is their job security.

munk-a

Have they published incredible usage rate numbers somewhere? I saw their recent blog post about the outages[1] and it has a graph without axis labels and without any context around usage before 2019 to indicate just how much this agentic acceleration has actually increased usage growth.

1. https://github.blog/news-insights/company-news/an-update-on-...

crote

It's a bit hard to blindly trust their numbers when they are trying very hard to sell Copilot to everyone.

Sure, AI will undoubtedly have increased their workload, but how much of the shown figures is real, and how much is the PR department trying to make it look like Copilot & friends is a massive success?

gejose

Github has 84.92% uptime in the last 90 days according to https://mrshu.github.io/github-statuses

I don't know how this is even remotely close to acceptable.

gen220

IMO that site overcounts downtime. If you filter for major and critical outages (the kind that make the front page of HN), the story is still bad but it’s not 84.92% bad.

https://isgithubcooked.com/?severities=major.critical

jszymborski

2/9 9s is pretty awful.

ex-aws-dude

We aim for 9 5s

pluc

It isn't. Lots of unacceptable things going on these days and everyone seems to be accepting them just fine.

steve1977

I think it's some kind of collective inferiority complex. Nobody really understands things anymore, but everyone is afraid to point out others' mistakes because they're scared of coming under scrutiny themselves.

adityashankar

I don't think it's an inferiority complex, negativity sells more and carefully understanding things doesn't sell as much

tardedmeme

We should make an alternative git site, but how to acquire users?

go_elmo

Make it nerdy enough to scare off agentic coders only. Also, blackjack and hookers are said to be helpful in such circumstances.

tantalor

What do you need users for?

GH is not a social network

dd8601fn

Forgejo is a thing. But the headlines lately make it sound like it’s not in great shape either.

chrisjj

> We should make an alternative git site, but how to acquire users?

Buy ad space on Github's outage page?

01HNNWZ0MV43FF

codeberg is doing fine

afro88

Guarantee enterprises with SLAs aren't accepting them

maccard

The thing about an SLA is that once you've broken it, you've lost the trust. It doesn't _really_ matter what the cost of breaking it is; nobody chooses their platform based on the refund they'll get if it goes down. But they absolutely do choose based on reliability and uptime. The enterprise SLA refund credit will show up as a (big) metering blip, but the problem is that the people who signed the contracts are going to be speaking to GitLab now.

booleandilemma

I think the default position people generally like to take is to just go with the status quo. GitHub has reached status quo level, as in "nobody ever got fired for choosing GitHub". It's the only forge I've seen advertisements for in meatspace, and even non-technical people know about it. On job applications, companies ask for my GitHub URL. I think it'll be a while before they get abandoned. That said, I recently started moving my stuff over to Codeberg. The change needs to start with us, the people writing software.

Retr0id

I, for one, am not paying them enough money to expect any better.

tantalor

They can't even get two eights, let alone three nines.

amarant

Hey there's a nine in there, so it's fine!

croes

At least one 9 … somewhere

veber-alex

ffs can we stop talking about that number and site already.

It treats any service being down as the entire platform being down which is nonsense.

It's just lying with statistics.

bspammer

The individual numbers for git operations, pull requests, and actions are all still single-nine.

indianhippie

This is reaching an unacceptable level of performance. There isn't a week when work isn't interrupted by GH.

petcat

AI agents have changed the scalability properties of basically the entire internet.

It used to be that GitHub could rely on a finite number of people interacting with their platform in real human ways in real observable patterns. So I'm assuming that they scale for those patterns, and optimize for the UI and UX hotspots.

But now everyone's got a moltbot running 24/7, sometimes many, and it's completely overloading a lot of services. Especially services like GitHub which are very much agent-centric nowadays.

njovin

Microsoft buys github.

Microsoft forces AI usage down everyone's throats.

AI bot usage takes down github.

I have to assume that there are some serious fights going on between the poor SRE teams wanting to throttle bots, and MS not wanting to do anything to dissuade AI usage.
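For what it's worth, "throttling bots" at the edge usually means something like a per-account token bucket checked on every push or API call. A minimal sketch -- the class and all the numbers are invented for illustration, not anything GitHub actually runs:

```python
# Token bucket: allow short bursts, but cap the sustained rate per account.
import time

class TokenBucket:
    def __init__(self, rate_per_sec, burst):
        self.rate = rate_per_sec   # sustained operations/second allowed
        self.capacity = burst      # extra headroom for short bursts
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self, now=None):
        """Return True if one more operation is allowed right now."""
        now = time.monotonic() if now is None else now
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

The hard part isn't the mechanism; it's deciding which traffic counts as bot traffic in the first place.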

j_maffe

How do you throttle bots? Everyone will stop having commit messages that mention LLM agents. Then what?

PunchyHamster

GH was going down before the AI explosion. The start of the trend is MS buying it, not the AI explosion; that's just the final nail.

DetroitThrow

>AI agents have changed the scalability properties of basically the entire internet.

Why is GH the only service provider seeing such consistently bad availability then? Everyone has had to scale massively, all the time. If GH is choosing moltbot capacity over basic availability for the rest of the humans, they have made the wrong choice.

mert-kurttutan

Some people really abuse the f out of the system in a way that seems optimized to take GitHub down. Like, they push every minute, or on every commit, instead of at sensible intervals (e.g. a single push a few times a day per repo).

I follow some of the accounts that run 24/7 agent sessions. Their projects are not even that novel given the number of commits that appear on the profile. Many of the commits are just logs of beads, Claude sessions, etc. (no change to the actual code). Some of them are ports of projects to another language. AI surely will increase productivity, but the waste and noise that some people are willing to commit ....

dingnuts

[dead]

tardedmeme

It's not new, it's just a DoS, which is a serious crime. Just report the attacker to the police if they're in your country, or block their IP if not. If done accidentally it's likely not a crime, but the police will still scare them into stopping.

lbourdages

It's been unacceptable for months, but now it's at the level of "we should actively look for alternatives".

cenal

Any centralized solution like GitHub is going to suffer the same fate as vibe coding chokes these services. The only option to have high uptime is to self host and most organizations can't do that easily. Time will tell if GitHub can scale up enough to meet demand.

tardedmeme

It's a nice thought but I think the revealed preference from the history of the internet is that people actually only want centralised services, no matter what they say they want.

People love to clown on the fediverse because of having to choose a server. Which is no different from email. I guess the difference is that their ISP used to give them email.

ozgrakkurt

Not really. It is possible to implement systems that handle a lot more scale than github has. This is proven by systems that exist today.

It might be hard to create such systems using ruby and microslop AI management though

baq

A week? You're going to be happy with more than a day without an incident.

I lost track which Monday morning PST in a row this is.

undefined

[deleted]

Hamuko

Things are a lot better in Europe. I stopped working hours before this incident started, and I can't really remember any major work-stopping incidents in the past months. I only remember recently being impacted once while trying to do hobby stuff in the evening.

enraged_camel

I would say we are way past unacceptable.

Insanity

At this point, "GH is down" posts are competing with "Newest LLM Hype" for the HN front-page week over week.

For my personal project, I've been considering moving everything over to Codeberg. Stability of GH being one reason, but I also like the idea of an alternative that is not strictly tied to a big tech company.

SpyCoder77

Your name summarizes all the GitHub uptime crap.

htx80nerd

"Claude Code is basically magic" spam hit the hardest. Temporarily sidelined by the GH status post. Maybe Claude advertising is in a lull right now.

eastbound

And yet, you haven’t. That’s the problem with dominant platforms: Slight inconveniences + inertia are enough to ensure no-one moves (even without monopolistic abuse – and I’m talking about Microsoft here).

matthew_hre

Hilariously, it looks like basically everything except Copilot is degraded. The jokes write themselves sometimes.

cdrnsf

Copilot's full functionality is only fractionally useful compared to what's currently degraded.

Kwpolska

Copilot is fully independent of the code forge parts of GitHub, so I would imagine it’s running on completely different infrastructure, without any hard dependencies on the Rails monolith.

chao-

It feels weird (sad?) that I'm starting to get a sixth sense for when GitHub is about to have a service disruption.

About an hour ago, clicking "Resolve Conversation" in a Pull Request failed a few times with an error message that appeared lower on the page (outside the viewport), and which I did not see the first few times. I had to reload the page after every few actions to get the server to register new ones.

I told a colleague, and added "Github might be having an issue with some other service, and it's just bleeding over to PR comments? Maybe it will snowball into a larger outage?"

romellem

Literally just had this same signal with PR review comments. Checked the status page, saw it was green, and (correctly) assumed “not for long!”

thomas_viaelo

[flagged]

gritzko

2027: "GitHub is up!"

elevation

"Three 8s of availability"

lelanthran

2028: 3/8ths of availability

baq

I literally laughed out loud, then shed a tear, because I'd actually take three 8s today.

tardedmeme

Three eights is more than a month of downtime every year. Today is the three eights.
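The arithmetic checks out; a quick sketch (generic availability math, nothing GitHub-specific):

```python
# Expected downtime per year at a given availability level.
def downtime_days_per_year(availability: float) -> float:
    return (1.0 - availability) * 365.25

# "Three eights" (88.8%) is about 41 days of downtime per year;
# three nines (99.9%) would be under nine hours.
for a in (0.888, 0.999):
    print(f"{a:.3f} -> {downtime_days_per_year(a):.1f} days/year")
```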

jfrbfbreudh

Reduce the free tier.

I’ve made 4000 commits in the last 2.5 months. That’s just to main. And I push up tons of artifacts daily for regression testing.

For $0.

timmg

Honestly, I hate "free tiers" for SAAS products like this.

For about 5 minutes, Google had a pay-as-you-go service on GCP for git. I used to use that, because I wanted to own my stuff. But (I guess) because everyone used "free" github -- and like a lot of other Google services -- they sunsetted it.

So now I'm using GitHub for free. But I would rather pay for my storage and usage with a (big) cloud provider.

cphoover

If they do that, the open source projects that haven't already left will migrate.

PunchyHamster

That's like 2 minutes of CPU usage for the repo part

faangguyindia

I just use a local git repo, and since I am a solo dev I push nowhere.

Though I plan to use my own server later.

Kwpolska

GitHub should add a slop tax. Co-authored by Claude? Pay up. Em-dashes in comments? Pay up. A lot of code written in a short window of time? Pay up.

m_w_

This is really getting ridiculous - although people sometimes dismiss the "missing" status page because it includes Copilot, it's worth noting that pull requests (95.5%) have even lower availability than Copilot (96.4%).

How am I expected to comment "LGTM" if I can't even get to the PR?

int32_64

At least people are gaining knowledge of how to use the git remote command.

ecshafer

At this point we can just have a Bot repost this exact submission every day and it would be right more often than not.

bigbuppo

It exists, but it runs as a github action, so it's not working right now.

theworstname

That is beautiful

tardedmeme

We couldn't, because HN prevents duplicate posts.

nine_k

The best time for GH to increase prices was 6 months ago, or so. No service is going to weather the storm of agentic code overload unscathed. But at least they could become an expensive-but-working solution instead of the sad comedy they are now, and thus keep their most lucrative customers.
