bartread
dminik
I've recently built a script that periodically (every 25 minutes) fetches the latest merged PRs to check for some potential rule violations. I'm not an admin and couldn't get the events API working, so I just resorted to polling.
On an average ~8 hour working day, there's at least one failed request. In fact, looking over the logs, I can't spot a single day that did not have a failed request.
Now, I can't guarantee that these are all caused by GitHub (as opposed to my connection), but it is pretty funny.
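For what it's worth, the retry handling for such a poller can be kept simple. A minimal sketch, assuming the public REST search endpoint; the repo name and query are illustrative, not taken from the script described above:

```python
import time
import urllib.request

def with_retries(fetch, attempts=3, base_delay=1.0):
    """Call fetch(), retrying with exponential backoff on failure.

    Returns the first successful result, or re-raises the last error.
    """
    for i in range(attempts):
        try:
            return fetch()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** i))

def fetch_merged_prs(repo="OWNER/REPO"):
    # Hypothetical example: list recently merged PRs via the REST search API.
    url = (
        "https://api.github.com/search/issues"
        f"?q=repo:{repo}+is:pr+is:merged&sort=updated&order=desc"
    )
    with urllib.request.urlopen(url, timeout=30) as resp:
        return resp.read()

# Poll every 25 minutes, tolerating the occasional failed request:
# while True:
#     data = with_retries(fetch_merged_prs)
#     ...check for rule violations...
#     time.sleep(25 * 60)
```

With a wrapper like this, a single failed request per day just costs one retry instead of a gap in the logs.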
dude250711
Microsoft board and shareholders: "LOL, nah! More vibe coding inside and outside plz".
embedding-shape
Hah, love that now they say "Our priorities are clear: availability first, then capacity, then new features" when 6 months ago, it was seemingly exactly the same except Azure supposedly was gonna save them:
> GitHub Will Prioritize Migrating to Azure Over Feature Development - GitHub is working on migrating all of its infrastructure to Azure, even though this means it'll have to delay some feature development.
> In a message to GitHub’s staff, CTO Vladimir Fedorov notes that GitHub is constrained on capacity in its Virginia data center. “It’s existential for us to keep up with the demands of AI and Copilot, which are changing how people use GitHub,” he writes.
https://thenewstack.io/github-will-prioritize-migrating-to-a...
So the currently delayed feature development is now gonna be further delayed, yet almost every week we see new features and changes, just the other day the single issues view was changed, as just one example. And it was "existential" 6 months ago yet they keep stumbling on the exact same issue today?
Even if they're focused exclusively on reliability and uptime, we get the experience that we have today. It's kind of incredible how a company with the resources of Microsoft seemingly can't stop continuously shooting itself in the foot; kind of impressive, actually. As icing on the cake, they've decided to buy up all the popular developer services and then migrate them all to the same platform. Great idea too.
madeofpalk
This seems uncharitable. Priorities aren't exclusive, especially at scale across large engineering orgs like GitHub. It could be that these are the top level priorities, but teams or individuals who aren't able to contribute to these priorities will work on other things like new features.
voncheese
Agree that priorities aren't exclusive and there may be teams/individuals that aren't able to contribute if they stay in their current teams/roles
Where it becomes questionable though is when enough progress isn't being made on the top priority (reliability). If Github is being true to their word, they need to be pulling people off of teams that are working on features to work on reliability so that top priority gets the resourcing it needs.
Given the pace of improvement, and the cited example of moving to Azure from months ago, it's not super clear they are doing that. Also not clear that they aren't, maybe the move to Azure is just a more than 6mo project no matter how many people are on it.
estimator7292
Sure, but frontend devs fundamentally cannot contribute to the structural reliability issues.
The person who rewrote the issue page view probably doesn't know anything about multi-cloud scaling for millions of users with Azure-crippling throughput. That's an incredibly specialized set of knowledge and experience that is utterly disjunct to frontend work.
But at the same time, given the state that GitHub is in, I personally wouldn't want to allow any devs to push anything to prod that doesn't immediately affect stability. I'd completely freeze frontend work until the infrastructure is more stable. But then again I write C for microcontrollers so what do I know?
embedding-shape
Ditto. I agree though, just because the priority is reliability, doesn't mean others can't work on features, especially features that might help with reliability, which I read was the motivation behind the new single-issue view, so that's my bad, might have been a bit much.
I still think the rest of my point stands, especially the last one which is the move that has the biggest impact to the most of us developers.
dangus
Why do we need to be charitable to Microsoft?
Did we lose our ability to consider them the evil empire?
allthetime
There’s a lot of “won’t someone think of the GitHub employees” on here
saghm
No, but they are ordered generally, and in this case they are explicitly saying that availability should come first
rwmj
It's entirely possible the move to Azure has made the availability problems worse. Dedicated hardware is much more predictable than cloud. "Let's not move to Azure and instead buy a few more racks" was likely a decision beyond the pay grade of github's management.
0xy
Azure is easily the least reliable and least secure of the 3 hyperscalers, which is crazy because GCP was an also-ran underdog not that long ago.
alper
This entire exercise if anything is a huge indictment of Azure.
But that doesn't matter because the kind of person that buys Azure, just like the kind of person that buys MS Teams, is entirely driven by price and does not care about anything else.
AntiUSAbah
I mean, it's Microsoft and it's Azure. How much can go wrong clicking yourself a few hundred non-autoscaling normal VMs?
There is so much workload running on Azure; I've never heard of VMs going away.
If Microsoft can source hardware for Azure, Microsoft can source hardware for Github.
dijit
there's a lot that can go wrong with a hypervisor, even including hiding hardware issues from the guest OS.
We don't think about it because we've been quite spoiled with excellent virtual machine platforms (KVM, Xen and even VMWare).
Those who have worked a lot with VirtualBox will be aware of this; it can be deeply unnerving that VM technology is the default way to deploy things after you've spent sufficient time with VirtualBox (which is very good for its original purpose, but not for reliability).
The question is: Does Azure use something more like VirtualBox, or more like KVM?
Hyper-V exhibits properties closer to VirtualBox.
ZoneZealot
I've had Windows Server VMs soft-crash and hard-crash on Azure. Some soft-lock, and a restart via Azure gets them back. Sometimes the only fix has been to power off/deprovision, then power on again (i.e. a restart didn't fix it). It's not common, but I've encountered it multiple times. These are with operating systems that were created in Azure from their own images.
ncruces
> So the currently delayed feature development is now gonna be further delayed, yet almost every week we see new features and changes, just the other day the single issues view was changed, as just one example.
They did that as a panic mode hack to mitigate performance: https://news.ycombinator.com/item?id=47912521
giancarlostoro
If they had not added or changed any features in GitHub for the past 5 years, nobody would be upset, and yet they keep changing it. It's a website that doesn't need to be reworked every five minutes. I assume the main teams maintaining GitHub's codebase are run by managers who cannot justify their jobs unless they deliver new features for the sake of delivering new features, and/or in the hope of getting new people to join GH, when in reality the more they wind up breaking, the more the opposite becomes true.
They severely nerfed their search, I'm not sure why every other major tech company (Google - Search and YouTube) keeps breaking search for everything when it was working fine previously.
What's a bigger joke is that Microsoft has Azure DevOps, which looks like it might be abandoned? But then you also have GitHub... My least favorite thing about both is the ticketing system. I cannot believe I'd ever utter the phrase "I miss Jira", when every Jira project I've ever been in has been so inconsistently set up. Every. Single. One.
JCTheDenthog
>What's a bigger joke is Microsoft has Azure DevOps which looks like it might be abandoned?
My favorite was trying to figure out how to publish debug symbols with NuGet packages to Azure DevOps artifact feeds. Horrible documentation and I was never able to get it figured out.
jamesfinlayson
> They severely nerfed their search
This always kills me. It used to work so well, and now it doesn't seem to work at all if not logged in, and not particularly well if you are logged in.
greatgib
What they nerfed the most is the basic feature of the PR diff view.
Its only job is to display diffs and review comments, yet it easily hides the diff for files that are a little bit longer, and hides comments when you have more than a dozen. You need to click to see them. It's impossible to search within a diff without going through it and expanding everything.
And a ton of things are regressions compared to working with PRs a few years ago, including being a lot worse in terms of latency!
russellthehippo
Reading the capacity crunch idea made me a little more empathetic to their issues - 30x in one year is a lot when you're starting from a high baseline. Now that being said...I'd really appreciate more availability.
gamerslexus
> The main driver is a rapid change in how software is being built. Since the second half of December 2025, agentic development workflows have accelerated sharply.
So, it's because of LLMs guys.
maccard
It's kind of hard to read this with a straight face.
The unlabelled graph with big numbers on top, the priorities that don't match with what we're experiencing, and a list of things that they're doing without a real acknowledgement of the _dire_ uptime over the last 12 months....
georgyo
These are not the worst graphs in the world... Sure, the bottom-left axis is not labeled, but it still conveys the point correctly: growth from 2023 to 2024 to 2025 to 2026 is accelerating, and at the end of 2025/beginning of 2026 they claim more growth than the three years before combined!
You don't need to know the bottom-left axis number. We do have to assume the graph is linear, and not some kind of negative-exponent log graph, but given the rest of the content, I think that's safe to assume.
Any company that experiences significantly more growth than they were planning for will have capacity issues.
The priorities are mostly in line with that. They are way beyond the point where they can just add more hardware; they need to make the backend more efficient, and all the stated goals are about helping there.
johndough
> You don't need to know the bottom left axis number.
We very much do. The graph suggests an insane growth in PRs from almost zero to 90M. Now compare this misleading graph with this much clearer one, which shows that the growth over the last three years has been less than 80%: https://github.blog/wp-content/uploads/2025/10/octoverse-202...
heisenbit
PRs were the culmination of human work. Now PRs are generated by machines to trigger human work. So the growth graph is not really absurd.
SkiFire13
That link shows the number of PRs created to be less than 10M though.
maccard
> These are not the worst graphs in the world... Sure the bottom left axis is not labeled, but it still conveys the point correctly.
No, they're completely useless. Using the "New repos per month" as an example, if the bottom left is 1m, then that's a 20x increase in 2 years which is a lot. If the bottom left is 19m, it's a 5% increase in 2 years which is nothing.
The massive surge on their labelled X axis starts in 2026, and these issues have been going on for a lot longer than that. GHA has been borderline unusable for a year at this point, if not longer.
> But given the rest of the content, I think that is safe to assume.
The rest of the content is "we're working on it", and "here's two outages in the last 14 days, one of which caused actual data loss"
ncruces
More numbers: https://x.com/kdaigle/status/2040164759836778878
What's the question here: that you don't believe growth is currently exponential, or that you think it shouldn't be hard to scale when even 10x YoY is not enough?
OtherShrezzing
As a business user, our costs have gone up while service has gone down dramatically. Meanwhile our marginal cost to GitHub has hardly changed. Where our costs to them have increased, they mostly charge us per cpu minute, so obviously aren’t making any kind of loss on our account.
I’m sure they’re experiencing scaling issues across the platform, but it’s unacceptable for that to have a negative impact on us when we're sending them $250/dev/yr for (what is in all honesty) hosting a bunch of static text files.
ncruces
I understand that, and maybe GitHub became a bad deal because of that.
But if anything, their post and your reply are precisely an endorsement of usage based billing.
The bit that's growing 13x YoY (and which they expect will easily blow past that) is unmetered - commits. The bit that is metered (for some, not all folks) - action minutes, grew only 2x YoY.
GitHub was not built to limit the number of commits, checkouts, forks, issues, PRs, etc. - nor do we want them to - but that's what's growing ridiculously as people unleash hordes of busy beaver agents on GitHub, because they're either free or unlimited.
Where there are limits - or usage based billing - people add guardrails and find optimizations.
Because for all the talk, agents don't bring a 10x value increase; otherwise, they'd justify a 10x cost increase.
Besides, other forges are having issues too. Even running your own. We have Anubis everywhere protecting them for a reason.
rdevilla
> we're sending them $250/dev/yr for (what is in all honesty) hosting a bunch of static text files.
You know, you can just host your own code forge. Or you can just drop gitolite on a server. Or pull directly from each others' dev machines on a LAN.
GitHub is not git.
tracker1
I'm curious how Azure DevOps reliability has been, for comparison. My current job is managing stories in DevOps with SCC in GitHub Enterprise. While I like GitHub slightly more, I've been curious about the decision.
graemep
In that case, why are you using them at all?
dist-epoch
> we're sending them $250/dev/yr for (what is in all honesty) hosting a bunch of static text files.
so start a GitHub competitor which bills $50/dev/yr for solving this easy problem and make a lot of money?
maccard
These numbers should have been in the blog post, not the graphs that are present.
> What's the question here, you don't believe growth is currently exponential, or do you think it shouldn't be hard to scale
I think you're putting words in my mouth here; I didn't say either of those things. I'm saying that this blog post is a meaningless platitude when the github stability issues predate this, and that all this post says is "we hear you're having issues".
ncruces
Sorry if I misread your intent.
I just think their charts, taken at face value, show substantially the same thing (for PRs, commits, new repos).
Either those charts are a bald-faced lie (the tweet could be as well), or there's no way for the underlying data to show anything else.
The only way to fake exponential growth like that would be to use an inverse log scale (which would be a bald-faced lie).
It doesn't even really matter what the y-axis baseline is, unless we really think growth was huge in 2020, then cratered to zero by 2023, and is now back to the previous normal.
As for the rest of the post, I do think it's panic mode platitudes. But I honestly don't know what I'd write instead that's better.
You can already see people complaining loudly where they instead of "we'll do better" decided to limit usage.
PunchyHamster
You mean since GH acquisition 6 years ago https://damrnelson.github.io/github-historical-uptime/
ramon156
"We hear you" in ~300 words, basically.
ferguess_k
You can do the same with so many clients.
mijoharas
> we started working on path to multi cloud.
Is this microsoft stating that they aren't able to get acceptable reliability from Azure? (I mean, I think a lot of us have heard that, but it's interesting to hear it from microsoft themselves).
derwiki
It’s pretty damning. But as someone who has used Azure, I buy it.
everfrustrated
Pretty damning that two Microsoft subsidiaries - GitHub and LinkedIn - either shelved their forced migration to Azure or are looking at non-Azure options.
cbg0
I think this is more tailored towards enterprise clients that lose money when Github is down, that would probably help with retention.
bombcar
You’d think they could have kept the existing GitHub running as-is on its current infrastructure (maybe for paying customers) while all the new AI inrush goes to the Azure setup.
jofzar
Yeah, that's a top-tier enterprise plan feature if I've ever seen one.
jasoncartwright
Seems pretty sensible to not rely on a single provider for their large complex system?
embedding-shape
Man, you should have been there 6 months ago when they decided to start tearing down GitHub's own data centers and move everything exclusively to Azure. Seems they themselves realized this after they started moving, but imagine if you could have helped them realize this before they even started :)
nextaccountic
Made me think. Why not convert Github datacenters into Azure datacenters that have Github as their sole customer?
Then it's up to Azure how they will manage this
benterix
> Seems they themselves realized this after they started moving
I guess most people at GitHub knew exactly that it made no sense, but they didn't really have a choice. Maybe some voiced their concerns, got "we hear you" in response, and were told to proceed anyway.
cyanydeez
This isn't a mom and pop shop. They have locations all over the world: https://datacenters.microsoft.com/
There's no intrinsic reason they should be vulnerable to themselves.
farfatched
+1. Multi-cloud is typically done for vendor independence.
But GitHub doesn't have that rationale.
jasoncartwright
That website (for me) uses Cloudflare via WPEngine, which also isn't Azure
mijoharas
I mean, amazon (shopping, along with prime video e.t.c.) runs on AWS.
PunchyHamster
It was more "we built AWS to run our stuff and figured out we can sell it too".
While Azure feels like Temu clone of Cloud
ksimukka
When I was at AWS, retail was not yet running on AWS. Has that changed?
Prime video does use some AWS services, but live and on-demand are two entirely different beasts.
jasoncartwright
Prime video uses a non-AWS CDN when I watch football on it here in the UK
zamalek
There was a somewhat recent post here about how priorities, pressure, and management subverted Dave Cutler's vision for Azure (which was to have near-zero human involvement); my Google-fu isn't strong enough to find it. Supposedly, someone walking over and opening a serial console to a rack/VM is now typical operational procedure.
tedd4u
> multi-cloud
An XXXXL-size project. It may never deliver, and if it fails, it will only do so after years of grinding through people, resources, etc.
jansan
The entire concept of multi-cloud is amusing if you think about what the cloud was originally supposed to be. They could call them meta-clouds (might infringe trademarks), and with the current growth trajectory of AI-generated code, eventually multi-meta-clouds, renamed to beyond-clouds, and then multi-beyond-clouds. I see no limits.
s_ting765
> Vladimir Fedorov is GitHub's Chief Technology Officer .... He currently serves on the board of Codepath.org, an organization dedicated to reprogramming higher education to create the first AI-native generation of engineers, CTOs, and founders.
I think I found the issue.
dude250711
Sounds like a bet that AGI is not achievable.
BlackFingolfin
GitHub stability has been bad for me. And recently, even the data they show me on the web has been unreliable.
Since yesterday, my colleagues and I have noticed that the pull request lists on the website are incomplete, across many repositories. For example, on https://github.com/gap-system/gap/pulls it says "Pull requests 78" in the tab list, but the PR list view reports "35 open" (the number 78 is correct, and confirmed by e.g. `gh pr list`).
And that despite <https://www.githubstatus.com> reporting "all systems operational".
matharmin
Many of my projects don't show any closed pull requests for the last 6 days. The CLI can list them, but anything going through search shows nothing.
Their support acknowledged the issue, but has been silent since then, and the status page still shows nothing other than the potentially-related issue on the 27th. It looks like it has been resolved on some repositories in the meantime, but I still have the issue across multiple orgs and repositories.
tracker1
I'm not able to see the current release-please PR, and the last one broke during release creation, so the deploy was aborted. Hoping today goes better, but with limited expectations after yesterday; I may be deploying manually.
vinc
I noticed the same thing and indeed the status page is not reporting the issue. I could find the missing PRs by browsing the branches page.
embedding-shape
> For example, on https://github.com/gap-system/gap/pulls it says "Pull requests 78" in the "tab list", but the PR list view reports "35 open" (the number 78 is correct, and confirmed by e.g. `gh pr list`)
Surely a scaling hack where they use "estimation" queries that return "kind of right" results instead of 100% correct data, as it's less load on the infrastructure. Not necessarily a bug so much as a shit choice from a product perspective.
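Purely as an illustration of that kind of trade-off (this is speculation about GitHub's internals, not a description of how they actually work): a synchronously maintained counter can be correct while the index a listing reads from lags behind, which would produce exactly a "78 in the badge, 35 in the list" mismatch:

```python
# Speculative sketch: a denormalized counter vs. a lagging listing index.
# The tab badge reads the counter; the list view reads the index.

class PRStore:
    def __init__(self):
        self.rows = []        # source of truth
        self.index = []       # what the listing/search reads
        self.counter = 0      # what the tab badge reads

    def create_pr(self, pr_id):
        self.rows.append(pr_id)
        self.counter += 1     # counter updated synchronously...

    def reindex_up_to(self, n):
        self.index = self.rows[:n]  # ...but indexing lags behind

store = PRStore()
for pr in range(78):
    store.create_pr(pr)
store.reindex_up_to(35)       # simulated indexing backlog

print(store.counter)          # badge says 78
print(len(store.index))       # listing shows 35
```

Under this (hypothetical) design, the count is exact and it's the listing that drops rows; the reverse trade-off is equally possible.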
BlackFingolfin
If the numbers were all that was wrong, that'd be OK. But it fails to list all the data, so the only way to navigate to the missing PRs is to know their number and manually insert the right URL (or to go to another PR and then edit the URL in the navigation bar).
Sorry, but I don't think there is any way this can be classified as "not actually a bug"
darkwater
Glad that they released some data about new repo/issues/commits over the last years. It confirms what everyone else already believed from the outside: agents are putting a lot of extra, sudden pressure on GitHub. It's like a startup that is growing exponentially, with the difference that they already have a large user base to serve - and that keeps them in the bullseye - and probably a not-so-fast-moving organization when it comes down to changes. On the other side of the coin, they also have a lot of talent, infra and money a startup might not have yet.
maccard
What data is that? There's an unlabelled graph and a number at the current peak.
ncruces
Some previous numbers: https://x.com/kdaigle/status/2040164759836778878
maccard
This is the data that should be in the blog post. Thanks for sharing.
darkwater
IMO it transmits the magnitude of the impact pretty well.
LiamPowell
I cannot figure out what on Earth they've done with these graphs; it almost seems like they're an artist's impression of a graph.
Looking at the commit graph: why do commits have big steps followed by slow rolloffs? Why do the steps not happen at uniform points? Why do larger steps sometimes have less of a slope than smaller steps, but not all the time?
Then looking at the other graphs there's completely different effects going on.
jospeh554
It's because they are your standard PowerPoint graph that just shows "thing goes up" rather than actual data, or the meaning of the data.
arnitdo
They seem to me to be the output of an image-gen model.
If this is the unvetted and baseless information they're putting out in public-facing blogs, only the stars know what data is being "presented" in their boardrooms.
icy
I'm biased (founder of tangled.org), but the future really should be federated forges. Host repositories on sovereign infra with global identity + federated "metadata" (issues, pulls, etc.).
Global indices for this should be trivial to spin up so availability is never a concern (we're working towards this!).
PunchyHamster
It's a cute idea, but most people don't want to host their own stuff.
And if they're using third parties to host their stuff, inevitably 1-3 big players will show up offering that as a service.
And even if you do host your own stuff to avoid availability problems, the big actors can still fail just like GH, and you can't do shit because your dependencies need them.
So the solution is the same as it is now: proxy or mirror everything you use.
ArcHound
But, there are? I can host a repo on GitHub, on Codeberg, and self-host it too. Then I need to watch over main to keep it consistent between those. After that's established, I can push updates from wherever. Link 'em in the README.
embedding-shape
There are distributed forges? Yes, git is distributed, but often everything around it isn't. The case parent is trying to make, is that the rest ("federated forges") should also be distributed, not just git.
ArcHound
Ok, gotcha. So there's a demand for the additional features that are not bundled within git to be federated somehow.
I'd say we have emails, mailing lists and bug trackers. Or maybe: what is the missing killer feature that needs federation?
nibbleyou
There's also a tool to automatically push it to multiple repos: https://github.com/prashantsengar/GitEcho
Disclaimer: the author is a colleague of mine
Though to be fair, what the parent meant by federated forges is different than this approach.
pabs3
git itself can push to multiple URLs btw:
https://stackoverflow.com/questions/849308/how-can-i-pull-pu...
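For reference, the resulting `.git/config` for a remote that pushes to two forges looks something like this (remote name and URLs are examples, not any specific project's setup); each extra push URL can be added with `git remote set-url --add --push origin <url>`:

```ini
[remote "origin"]
    url = git@github.com:example/repo.git
    # Once any pushurl is set, it replaces url for pushes,
    # so the primary must be re-listed explicitly:
    pushurl = git@github.com:example/repo.git
    pushurl = git@codeberg.org:example/repo.git
    fetch = +refs/heads/*:refs/remotes/origin/*
```

With that in place, a single `git push origin main` updates both mirrors; fetches still come only from the `url` remote.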
ljm
I would love it if coding agents didn't default to GitHub for their deep VCS integration.
If I could get the same bells and whistles by wiring up another forge, so long as it offered a decent API and/or sent events over a webhook, I'd have everything self-hosted.
The agents would need to expose an interface on their own end, but as long as you implemented it with a plugin, it'd remove the dependency on GitHub, and you could use MCP or skills for the rest of it.
icy
The neat thing about Tangled is that it's built on an open protocol (https://atproto.com). This allows us to build an effectively API-free system, since all data on Tangled can be ingested via the AT Protocol firehose.
Which is to say, this is perfect for agents given they don't need any bespoke SDK from us: simply write Tangled records for issues, pulls, whatever to your PDS and it'll show up on Tangled. We plan to start working on some exemplar agents first-party that would 1. enhance Tangled itself, 2. showcase cool things you can do with an open data firehose.
ljm
You do realise that writing Tangled records for issues, pulls, and whatever else constitutes both a spec and an API.
The fact that you use a protocol to define it is beside the point. You still have to define what a Tangled record is, the interface that accepts it, and the mechanism to resolve it on the client.
How else do you define what a "Tangled record" is, even if the underlying structure is git?
ramon156
Love the idea. I would replace the LLM-generated content on your site, though.
I recently migrated to codeberg because I'm okay with self-hosting big runners, while using codeberg's available runners for smaller cron-based things (they even have lazy runners for this).
icy
It’s… all hand written? We just sound “professional”.
sikozu
I've never heard of this before, going to sign up and check it out!
icy
Thanks! If you need anything, email me anirudh@!
beernet
What is "sovereign infra" exactly?
mathgeek
I know it's just marketing speak, but the term made me think of the scenes in the Matrix where what's left of humanity (ignoring all the cyclical lore that was added on top of it) has to make sure the machines can't remote in to any of their tech.
tfrancisl
No less than self-hosted, imo. If you're on some cloud, it doesn't really matter that you pay them absurd amounts of money; you aren't sovereign.
beernet
So if a company self-hosts its physical infrastructure, which will burn down once a fire sets in, it is more "sovereign" than a company running on a redundant cloud? I definitely would not want to be "sovereign" then.
Point is: this discussion is much more multi-dimensional than some suggest.
embedding-shape
So literally a computer at home/in the office, as with anything else you don't really "own" the infrastructure? Or is this just about "cloud"?
iso1631
> the future really should be federated
The internet should not be centralised, but you can't make a billion dollar company without capturing the world and selling your company to a trillion dollar company
frangonf
What are we doing?
Stop subsidizing tokens now that we've extracted enough training data from you and have enough agentic-junkie business to keep the flywheel going, and cut back on the loss leaders. [0]
> I wanted to give an update on GitHub’s availability in light of two recent incidents.
[Emphasis mine]
Vlad, you are living in a very different world to me.
GitHub has suffered dozens and dozens of outages since the beginning of the year. It is notably less available and reliable than it was even as recently as last year. People have created dashboards and heatmaps showing how bad GitHub has become; at least one of those has made it to the front page of Hacker News. In fact, its unreliability and persistent availability issues have become a frequent topic of conversation across sites and communities frequented by its users, of which HN and Reddit are two obvious examples. At this point GitHub's unreliability risks becoming a meme, if it hasn't already become one.
The only thing your post makes clear is that your priorities ARE NOT clear.
> Our priorities are clear: availability first, then capacity, then new features.
WRONG!
Your priorities are:
1. Availability
2. Availability
3. Availability
You have NO OTHER PRIORITIES.
If you want other priorities, focus on AVAILABILITY for 6 months and then come back and we can all have a serious conversation about something else.
In the meantime, you need to understand that GitHub's reliability over months and months - not just in April - has been completely unacceptable.
Focus on fixing that and on nothing else.