Brian Lovin
/
Hacker News
Daily Digest email

Get the top HN stories in your inbox every day.

cromka

For the record, it's failing silently, too, showing e.g. "There aren't any open pull requests." even though there are dozens. That's pretty bad; it will definitely mislead people.

dclowd9901

Or last week's "If you use merge queue, oopsie, we accidentally destroyed your trunk", which also failed silently.

scottbez1

I was surprised that incident didn’t seem to get as much attention since that was a pretty major data corruption bug, but I guess it was a much smaller scope of impacted repos/customers than a lot of these availability issues?

dclowd9901

Without reaching for my tin foil hat, I have a feeling MS is able to suppress these incidents somehow, because yeah, that one was pretty bad.

MarkMarine

Definitely got attention during the prod outage at my work. I’m going to find another alternative, I’m sick of this terrible uptime.

yearolinuxdsktp

Merge queues are not as frequently used… ~2000 PRs affected over 4 hours. I reckon that’s on the order of 10 commits per tenant. It’s a feature with low traction, probably because it creates more problems than it solves.

elischleifer

We happen to build the perfect solution for avoiding GitHub's Merge Queue - ours :)

It's also massively more performant.

https://trunk.io/merge-queue

rileymichael

while external merge queues offer a ton more features, i wouldn't describe any of them as 'perfect', based on the simple fact that the UX is bolted on. github continues to display its native UI components for merging, and users are forced to interact via arcane commands in comments or external CLIs/webpages. not ideal!

techterrier

speak for yourself, we are celebrating having completed all our PRs for a change :D

enraged_camel

Even when it does show the PR list, it doesn't necessarily show all the PRs in the category being viewed. Truly nasty issue.

yallpendantools

Man, this SUCKS big time for me. Just a few months ago, $PARENT_CONGLOMERATE mandated that everyone under its benevolent wing migrate to GitHub for reasons of synergy and efficiency. So now it's my turn at $DAYJOB to migrate us from our self-hosted GitLab instance. I already have a few grievances...

- IT policies around GH accounts make no sense. It's a long story but, in short, you can't use any of your pre-existing GH accounts whether personal or professional (as in, an account I made exclusively for $DAYJOB before The Synergy Mandate) and must create a new one aligned with IT conventions.

- We don't monorepo, hence we made extensive use of groups. There is no direct mapping for this concept in GitHub, so we have to manually namespace projects.

- And now of course GH's no-nines availability :(

For my team, profit happens to be sensitive to our release dates: a day or two of delay can make the difference between hitting the month's projections and missing them. In another world, I would proactively mirror our profit-essential code, but it's not worth the risk of a skunkworks guerrilla effort. I'd like to think we can blame The Synergy Mandate in a few postmortems in the near future, but of course I did not graduate yesterday; I know that's not gonna happen.

Thoughts and prayers we keep hitting our profit projections and they don't axe our product for underperformance.

(Writing this down, I can really feel how this job has changed since I joined.)

Arcuru

Reminder to all OSS projects: it is extraordinarily easy to set up a simple CI job to keep your code in sync between multiple forges. And getting email notifications from a second forge takes zero extra effort.

At least give people the option to start moving away from GitHub to contribute to your project. It will, ultimately, be better for the ecosystem.
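For what it's worth, such a sync job can be a tiny GitHub Actions workflow; the sketch below mirrors every push to a second forge. The Codeberg remote, `mirror-bot` username, `example/example` repo, and `MIRROR_TOKEN` secret are all illustrative placeholders, not from the thread.

```yaml
# .github/workflows/mirror.yml
# Push a full mirror (all branches and tags) to a second forge on every change.
name: mirror
on: [push, delete]

jobs:
  mirror:
    runs-on: ubuntu-latest
    steps:
      - name: Mirror to second forge
        run: |
          # Bare mirror clone of this repo, then push everything to the target.
          git clone --mirror "https://github.com/${{ github.repository }}.git" repo.git
          cd repo.git
          git push --mirror "https://mirror-bot:${{ secrets.MIRROR_TOKEN }}@codeberg.org/example/example.git"
```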

kakwa_

Syncing the code is the easy, trivial part, and your CI job only solves that. In my opinion, it's not even that necessary for most projects.

The difficult part is everything around the code:

* the tickets/PRs (including the closed ones)

* the links referencing the project

* the CI setup

* for large projects, the committer permission setup

* if applicable, the push/commit/branch rules

All of that will be deeply annoying to migrate on a per-project basis, or might get lost.

But that's not even the worst of it, in my opinion. The worst is losing the go-to platform for finding software (fediverse for software, when?).

rurban

Syncing is trivial; the CI is the deal. GH Actions are still the best option. Neither the FSF nor any other OSS lab has come up with a proper CI for us open source maintainers. The CI load has also increased massively since.

djyde

Setting up your own GitLab instance might be a good solution too.

agartner

Yeah I think I've finally had enough. I need to start seriously advocating for alternatives since this is starting to impact our business. It's clearly not getting any better.

rhdunn

If you want a GitHub-like UI (with org/repo structure limitations) use either Forgejo or Gitea.

If you want a similar but different experience use GitLab.

If you want something more akin to the kernel experience (i.e. hosting, flexible repository structure, user auth via ssh keys, and a simple web UI) use gitolite with cgit, or alternatively gitweb.
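To illustrate that flexible repository structure, a gitolite config might look like the sketch below; the group and repo names are made up for the example.

```
# conf/gitolite.conf -- users are identified by ssh public keys in keydir/
@devs    = alice bob
@readers = carol

# Repos can nest arbitrarily deep, unlike GitHub's flat org/repo layout.
repo tooling/build/scripts
    RW+ = @devs
    R   = @readers
```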

dijit

There's always gerrit.

I mean, technically it's a code review platform, not a complete toolbox like GitLab and co, but damn if it isn't the most professional-feeling experience.

dymk

I love Gitea, and I use it for my homelab, but the permissions system needs a lot of work. There's still an open bug that prevents anyone but the repo owner from reading CI logs, regardless of settings.

rhdunn

I used Gitea for a while, but eventually switched to gitolite+cgit. That was down to the org/repo structure not fitting my git hierarchy (I'm using a topic/repo, topic/subtopic/repo style structure) and the lack of organization- or topic-wide issue tracking/management.

mghackerlady

or sr.ht, you can host it yourself if you want

homebrewer

Go ahead. We've been self-hosting Gitea with Drone/Woodpecker for years; either it or Forgejo will do fine if you're okay with their feature set. I sometimes wander into these GitHub threads to have a laugh; our Gitea instance has had several minutes of downtime combined over the last few years, all of them planned (to upgrade Gitea) and in the middle of the night.

lioeters

Ooh, Woodpecker CI works with Gitea and Forgejo. https://woodpecker-ci.org/ That might be the last piece I needed to migrate my Git repos from GitHub to a self-hosted forge.

Edit: Actually, there are Gitea Actions and Forgejo Actions; those might be enough for my use case.

https://docs.gitea.com/usage/actions/

https://forgejo.org/docs/next/user/actions/reference/

dymk

I've found Gitea Actions (based on act, so it's nearly identical to a GitHub Actions runner) to work great. Migrating a GitHub workflow is mostly just a file name change.
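Concretely, the same workflow file usually just moves from `.github/workflows/` to `.gitea/workflows/`; this minimal example is illustrative, assuming a repo with a `make test` target.

```yaml
# .gitea/workflows/ci.yml -- identical syntax to a GitHub Actions workflow
name: CI
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make test
```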

MiracleRabbit

Upgrading Gitea: replace the binary, restart. I love it.

Same for Forgejo.

scottyah

I struggled with Woodpecker for a bit, but now Gitea has Actions that work wonderfully for my use case (and it's one less tool to support). I believe they also highlight compatibility with a GitHub Actions protocol of sorts. Might be worth looking into.

1970-01-01

I'm surprised GitLab isn't getting more attention. Yes, it's not a carbon copy, but it is close. Apples and pears instead of oranges.

BoingBoomTschak

It's close in the sense that it's also Jabbascript SPA crap that needs a supercomputer just to (try and fail to) display a diff of a few thousand lines, you mean? We're using it at work and it sucks massively.

ahartmetz

I'd rather have the open core one running on my own servers (i.e. GitLab), but performance is a few orders of magnitude away from acceptable for both Git**b.

estimator7292

I run GitLab and its CI on a Xeon server from 2010. It's fine. It runs exactly as fast as anything else on that machine. I've also run it on a tiny AWS instance. Also fine.

I don't like that the idle CPU load is high (for really inane reasons) but it performs perfectly well.

richstokes

Was thinking the same honestly. GH is very sticky though, especially when you have actions and all kinds of other integrations set up. But it’s just kind of absurd at this point how many outages they have.

cyclopeanutopia

I'm now self-hosting Git and CI with Forgejo, works like a charm. ;)

mcoliver

This is bigger than github: https://downdetector.com

MerrimanInd

Looks like Azure might be the common denominator.

TacticalCoder

It's almost as if anything touched by Microsoft tends to become as buggy as Windows while getting as needlessly complex (and misused) as Excel.

Can't wait for Microsoft to go the IBM way.

cdrnsf

It's a day ending in y so, yes, there's a GitHub outage.

recitedropper

Hate GitHub being down, plus hate AI stealing your code? Join sourcehut--it has worked great for me, and I'd love to see it flourish as a platform.

yrds96

I like the experience of exploring new repositories, so I switched everything to Codeberg, which is where most of the projects I'm interested in are.

pokstad

How is sourcehut different? It’s just yet another centralized service.

recitedropper

If you need to self-host, self-host. Sourcehut is obviously not a replacement for that.

But, if not: It is different because Drew DeVault is scathingly anti-AI, and has a history of sticking to strong opinions (for better or worse). Seems like the best bet for off-premise source control if you are concerned about AI scraping and downtime.

Conscat

At least it doesn't go down as often, I guess. I think most users do want a centralized forge that gives them discoverability and star graphs.

arielcostas

> It’s just yet another centralized service

Yeah, collaboration usually requires some sort of centralisation, whether that is LKML+git.kernel.org, gitlab.gnome.org, salsa.debian.org, Sourcehut, or GitHub. At least Sourcehut isn't completely proprietary and shoving AI down your throat at every possible chance. The same can be said for Codeberg and almost any GitLab CE, Gitea, or Forgejo instance.

senko

what are my options if I hate github being down but love AI stealing my code?

iLemming

Wow, this is taking unusually long to fix. I suppose the team trying to fix it hit the Claude session limits and now can't do anything until the end of the cooldown, and the only person who knows how to fix it without AI is out for surgery. When the entire generation of people who know how to fix shit without AI retires, what happens then?

lrvick

Every time GitHub goes down, a few more people move to ethical alternatives, reducing the FOSS community's SPOF in Microsoft.

https://sfconservancy.org/GiveUpGitHub/

tracker1

While I appreciate the sentiment... there was something nice about the social aspects of many/most projects being on GitHub in terms of collaboration. I think there is starting to be a lot of friction, for many reasons. I've been seeing more issues used as spam, not to mention even more nefarious activities making the rounds.

lrvick

Corpos tend to be on GitHub, while community-focused projects are very rapidly shifting to Codeberg, which is fully open source and also has commit signing integration that actually works.

lioeters

SPOF = Single Point of Failure

swiftcoder

> Users are experiencing intermittent failures to view issues, pull requests, projects and Actions workflow runs

"intermittent" is kind of underselling a failure on ~9/10 page loads

pier25

Github has been having issues since the Microsoft acquisition.

https://damrnelson.github.io/github-historical-uptime/

fishtoaster

This is not to say that things haven't gotten worse over time, but...

I don't think that chart shows what it seems like it shows. There were plenty of pre-2018 outages that don't show up there: https://hn.algolia.com/?dateEnd=1545696000&dateRange=custom&...

An alternate interpretation of that chart is "After the microsoft acquisition, they got serious about actually tracking outages."

That said, anecdotally, it's felt much worse over the last 6 months. I'd guess it's a combination of MS-induced quality drops and AI-induced scale increases.

r14c

They're moving to Azure and had to fix up Azure first to be stable enough for GH to even consider moving.

I'm guessing it's a combo of Azure still not being stable enough and a byproduct of trying to move an entire company's operations from a physical DC into a cloud while it's running.

bonesss

Speculation from afar: clouds are not commensurate, and high-volume cloud services are going to anchor key architectural decisions around technical benefits/realities of the cloud environment they target. Moving GitHub isn’t a tech decision, and it’s broadly a Dumb Idea.

I think GitHub is well past the complexity threshold where the reflective architecting that happens during cloud development can’t be separated from product. If the Engineers were begging for Azure it’d be one thing, but otherwise this is destabilizing churn.

I agree Azure needed a lift to even handle the job, and I see that gap as indicative of a more fundamental challenge. The change is kinda like a skeleton transplant: management's feelings and post-surgery desires don't necessarily account for the impact and essential difficulty.

thayne

I think it is probably a combination of:

- Switching to Azure

- Adding more AI features

- Using AI more for development

- Higher load caused by AI agents

Three of those are top-down direction from MS.

shevy-java

Well, perhaps not going that far back (e.g. to the acquisition), but if you look at just the last four weeks or so, that part alone makes it clear that something is not working here. Microsoft is constantly mentioned on Hacker News, and not typically in a great, praising light.

madeofpalk

If it’s just the last 4 weeks, then I would say it seems the Microsoft acquisition had little impact on their reliability.

It seems pretty reasonable that the massive surge in AI over the past 6 months has put tremendous strain on GitHub’s infrastructure, and most of these outages are as a result of that one way or another.

2ndorderthought

That's damning. I wonder what it will look like over the next 13 months as more and more code is written by AI.

corvad

Too bad Copilot's having issues too, so recovery will take longer.

dclowd9901

I kind of had assumed that had already begun impacting downtime, though I guess it would be good to get some confirmation.

danny_codes

Microslop!

It’s astonishing how bad their software is now. I guess 20 years of outsourcing and bean-counting will do that

badeeya

Seems like it could've been COVID instead? Look closer at the months. Also, we'd need a vertical marker on the chart for "MSFT makes GitHub do xyz".

hx8

We cannot blame December 2019 uptime on COVID-19.

wldcordeiro

Seems like every week or so there's status issues. Often at what feels like the start of the week too.

adverbly

You know what else changed around this time?

They dropped Ruby on Rails.

Ruby on Rails got a bad rap, IMO.

It was maybe the epitome of the get-shit-done internet era, and despite AI's purported productivity gains, I don't think we've gotten anywhere close to the velocity, stability, and simplicity of the peak Rails era coming out of those PHP days. And teams were actually way smaller than they are now, even after all these AI cuts!

BoingBoomTschak

And even when it's "working", it isn't. See this gem: https://github.com/orgs/community/discussions/142308

surgical_fire

Tbh, for a while GitHub didn't seem to be any more or less reliable than before MS acquired it.

But in the past year or so, it does feel like outages are becoming commonplace.
