Enterprise customers: remember to email your sales rep and ask them to report on the uptime they contractually owe you; you're entitled to this under your contract.
They won't do this unless you ask, hoping you don't notice the outages.
It creates lots of internal pain - they have no automation internally for reporting on this.
This is the only way anything will ever change. GitHub is _easily_ the most unreliable SaaS product. There's not a week that goes by without us being affected by an outage. Their reputation is mud.
I used GitHub for years in the tech industry, then went into the games industry and used BitBucket and hated it, thought it was such a downgrade. Now I'm back in the tech industry and using GitHub, and I miss BitBucket.
I'm starting to pretty much just want tools that do nothing else but are fast at their core competency.
Looking at PRs in GitHub and then toggling to the "Files" tab, watching it choke up or say "I don't want to display this file because it's more than 100 lines", is like, wtf, your whole point is to show me modified files.
I wish the world worked this way, but I don't think it does, especially in tech. If you "do one thing well", the cloud hyperscalers will use their billions to copy whatever that is, and add your "one good thing" into their bundled subscriptions or cloud plans. At which point, any rational CTO will go "why should we pay for this, when we're already getting it via AWS/O365/whatever, and with better integration with our existing tooling to boot?"
I don't think "do one thing well" can succeed in this world, which is why Atlassian, Dropbox, etc. keep on launching things like office suites even though that makes no sense considering their core competencies. It's the only way not to be steamrolled by FAANG.
I've seen this mentioned in the context of smaller countries using industrial policy to compete with the big guys; one example in that article (I don't have a link, it was long ago) was specifically Sweden and Volvo.
We use so many cloud services and SaaS products that it would be quite a shock to see how fast systems can actually be. You don't even need a local service; just hook up a minimal server somewhere that pulls from the slow services here and there. Technically still cloud, but not as shitty.
I think modern wait times are crushing for productivity; they're really demotivating and wear you down. Either you skip to another task and get overwhelmed by context switches, or you wait and degenerate into a non-thinking troglodyte.
GitHub's problem is that it isn't an SPA. It is a massive Ruby on Rails project that is all server-rendered. Everything you do needs to be synchronous and almost everything requires a reload.
A React or Angular app built with great restraint would be dramatically faster at all of this, as viewing a file is just an API call, not a page reload. They are stuck with their hands tied, since loading large data would delay the whole page load; thus the silly limits.
Many things should not be webapps... but an app on the web like this...probably should.
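(As a concrete sketch of the "just an API call" point: GitHub's public REST API can already return a file's contents in a single request; OWNER/REPO/PATH below are placeholders.)

    # Fetch one file's metadata and base64-encoded content in a single
    # API call, no page load involved. OWNER/REPO/PATH are placeholders.
    curl -s -H "Accept: application/vnd.github+json" \
      https://api.github.com/repos/OWNER/REPO/contents/PATH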
> Everything you do needs to be synchronous and almost everything requires a reload.
This is pretty incorrect; you may want to look into the concept of "partials" in SSR.
Maybe you meant everything requires a roundtrip? But an SPA would not remove most of the roundtrips GitHub needs, given that many interactions in the GitHub app require authn/authz checks.
Would you care to go into more detail?
Also, 'old' GitHub was known to be very fast and reliable, and was indeed a Ruby on Rails SSR app.
A few years ago GitHub started to introduce React and more client-side logic, and that correlates with more issues and more slowness in the frontend. It only correlates, but still.
You can have parts of the web app rendered on the client, and still keep the rest of the app the same. Rewrite the diffs and previews, keep the rest as-is.
There is no excuse for possibly the most used feature of Github to suck so badly.
I've worked at two game companies and both used both of them. And I went to grad school for game design and used Perforce. So yeah I've ... had the experience.
I liked the UI of BitBucket more. Stuff I accessed frequently, like commits and branches, were tabs across the top, easy to reach from any page. The branch dropdown was sorted by most recently updated, unlike GitHub where I have to search for it. Easy to diff files. This was like 4 or 5 years ago though; maybe it has gone through modernization/enshittification since. GitHub feels a bit fragmented, and it tries to be more performant by virtualizing some things. BitBucket was in some sense more rudimentary and rendered everything (bogging my machine down some), but that let me CTRL+F easily and with confidence, whereas with virtualization I've had issues with find missing things and I couldn't 100% trust it.
Of course, but there are some oddities in tool use compared to other industries. At my job we use Perforce for version control for example, which I think is more common in the game industry than other solutions for whatever reason. Naturally everyone here hates it.
> Perforce for version control for example, which I think is more common in the game industry than other solutions for whatever reason.
The last game I worked on was like 80gb built. The perforce depot was many terabytes large, not something you want to have on every person's workstation. Games companies use Perforce for a very good reason.
But not everybody here has to try and manage many GB or even TB of assets in their VCS. I wager game company build/dev engineers know what they are doing in picking Perforce.
Depends how integrated you are. If you're using it just as a code repository, fine; if you're tying your workflows into it with pull requests, actions, maybe a third-party CI, and using it as part of operations, then it's a major problem.
I just approved a PR which added a user to one of our AWS accounts, for example. If GitHub is down, that PR can't be approved, the update can't go out, and the user can't access that account.
I use self-hosted Jira; it's a great product, but I have full control over my team's tasks and workflows, and as a tiny team we make them work for us (subject, description, comments, occasional linking to other tickets, assignee, and a status of "open", "blocked" or "done").
Most of the problems I hear about are micromanaging product managers. That's not the fault of the tool itself, per se.
I used to maintain a self hosted instance of BitBucket and the user experience of it was actually very nice. We shut it down when Atlassian deprecated the self-hosted licenses. Moving to GitHub and GitHub Actions felt like a downgrade in more than a few ways
Ah, you clearly haven’t worked with the self hosting teams that I have.
I hope I never again have to explain to someone that you can’t just “restore the code from the weekly database backup” because the code is in the file system they just osmosed.
Two customers of mine have been using Bitbucket and other Atlassian products. I remember a problem a few months ago, nothing else. Maybe I've been lucky, no accesses during the outages.
Companies should automate this. Write their own outage monitoring, feed the results, plus the cumbersome format you have to send to the provider, into an LLM, and have it spit out an email requesting SLA credits or whatever the contract specifies.
Probably not worth it for low cost services, but if you’re paying GitHub $x millions per year, maybe it is.
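A minimal sketch of the monitoring half, assuming a hypothetical poll loop against whatever endpoint you depend on; the LLM/email step would consume a log like this:

    #!/bin/sh
    # Poll a service endpoint every 5 minutes and append downtime
    # evidence to a log. HEALTH_URL is a placeholder.
    HEALTH_URL="https://api.github.com"
    LOG="outages.log"
    while true; do
      if ! curl -sf --max-time 10 "$HEALTH_URL" >/dev/null; then
        echo "$(date -u +%Y-%m-%dT%H:%M:%SZ) DOWN $HEALTH_URL" >> "$LOG"
      fi
      sleep 300
    done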
They intentionally underreport outages. Everybody does. When your performance metrics for your customers, managers, and individual contributors all include uptime, what you get isn't better uptime but lies about uptime.
Some customers of my product, StatusGator, do this with our API. They can extract the outage data, including the time when we detect the outage before it's acknowledged, and then use that to get SLA credits.
It's great that your specific product does this, but in general I have to monitor the service separately to keep you honest (well, not you specifically; I'm sure you're honest and do as much as you can to be, but not every company is), and of course to monitor the problems I have which you don't detect.
There's thousands of enterprise SaaS products. I seriously doubt the OP is referencing any grand audit of them or could even list 20. Yet one they dislike is "easily" the worst. They write like a child.
Most teams could get by with something like a tiny instance with snapshots every commit.
AWS has too much skin in the game to be as unreliable as they used to be.
There's zero reason for a startup to use all these services anymore. The only reason they ever existed was big government manipulation of the labor market through ZIRP
It's far more set-and-forget to self-host git than GitHub will ever be.
With continued reliability issues and the CEO stepping down, now feels like such an opportune time for a competitor to start taking market share. I sure am rooting for it!
For the longest time, I thought that there was absolutely no way for some of these cornerstone companies (slash tech) to be toppled. And I'm very impressed with their ability to destroy consumer trust!
Between Tangled, GitLab, Codeberg (Forgejo), and Gitea, there are quite a lot of decent alternatives now compared to when GitHub first sold to MS. Having the entire world of FOSS integrated in one development platform was convenient, but I'm more excited by the possibilities for more innovation in the space.
Had to buy an IPv4 address for a VPS the other day in order to clone some git repositories. Couldn't believe it. Costing their customers money when they should be able to support v6 by now.
They charge €0.50 per month to add an IPv4 address. A shared IPv4 NAT gateway introduces a whole lot of problems for them just to support customers who need IPv4 but don't want to pay a tiny amount for it.
How would a server-side NAT know which Hetzner customer it should route a request to? It has an encrypted packet arriving at this shared address on port 443. You can route a shared address to the proper service based on the HTTP Host header but that can only be done by the customer using their encryption key, so no sharing an address between customers. Home LAN NAT only works because the router can change the source port used by the request so that responses are unambiguously routed to the right client.
I don't think they're saying they should support incoming connections on such a NAT, I think they're saying that servers behind the NAT would be able to make outgoing connections (e.g. to access shared resources).
In regard to EC2: AFAIK, not necessarily. You pay extra for an Elastic IP (IPv4), which is the equivalent of a static IP, but the EC2 instance is assigned an IPv4 address, and an IPv6 address when IPv6 is enabled.
Given that they are probably at least partly on Azure, this makes it less surprising because Azure has the worst IPv6 implementation of the 3 large cloud providers.
I’ve gone on long rants about it before right here on HN but I can’t be bothered digging up the old post…
… the quick and dirty bullet points are:
- Enabling IPv6 in one virtual network could break managed PaaS services in other peered networks.
- Up until very recently none of the PaaS services could be configured with IPv6 firewall rules.
- Most core managed network components were IPv4 only. Firewalls, gateways, VPNs, etc… support is still spotty.
- They NAT IPv6, which is just gibbering eldritch madness.
- IPv6 addresses are handed out in tiny pools of 16 addresses at a time. No, not a /16 or anything like that.
Etc…
The IPv6 networking in Azure feels like it was implemented by offshore contractors that did as they were told and never stopped to think if any of it made sense.
- You STILL can't use PostgreSQL with IPv6: "Even if the subnet for the Postgres Flexible Server doesn't have any IPv6 addresses assigned, it cannot be deployed if there are IPv6 addresses in the VNet." -- that's just bonkers.
- Just... oh my god:
"Azure Virtual WAN currently supports IPv4 traffic only."
"Azure Route Server currently supports IPv4 traffic only."
"Azure Firewall doesn't currently support IPv6"
"You can't add IPv6 ranges to a virtual network that has existing resource in use."
Any important feature from Gitlab you feel is missing? I personally think Gitlab has way more features than I need but maybe there are some important ones I would miss.
I deal with GitLab a lot. Both the official instance and third party instances.
It drives me crazy how slow it is. A lot of operations take minutes. E.g.: I push commits to a branch and open an MR. For 2-3 minutes, the MR will indicate that no changes were found. When I push new changes, it can also take minutes to update, so I can't quickly check that it all looks correct.
The latest release changed their issues UI, so when you try to open an issue, it's opened on a floating panel on the right 30% of the screen. I've no idea what exotic use case this addresses, but when I click a link, just open it. The browser does it fine. No need to reinvent navigation like this. Now to open an issue, I need to wait for this slow floating UI to load before I can _actually_ navigate to the page. Which will also be extremely slow.
Don’t even get me started on the UI. Buttons are hidden all over the place. Obvious links are behind obscure abstract menus. At this point, I remember where all the basic stuff is, but I can understand why newcomers struggle so much.
Hosting GitLab is also really resource intensive. For a small team of 2-3 people, I don’t think you can get away with “just” 8GB of ram.
---
I do have to admit, GitLab CI is pretty good, assuming that you’re fine with just Docker support and don’t need to support BSD or some exotic platforms.
I use git pretty basically, just as a revision system where the hosted version has some nice-to-haves on top like rendering markdown, permalinking bits of code to people, and being able to open tickets and contribute code. Gitea/Forgejo/Codeberg does all of that and I haven't run into any missing features. It's also a lot easier to navigate than GitLab, but I'll admit that's probably just a matter of me not being used to it
Having self-hosted Gitea after considering GitLab, I can also say that the resource consumption of Gitea is a tiny fraction of GitLab's. I don't get the impression that GitLab's employees care about self-hosters except as a gateway for enterprise sysadmins to get things running quickly before doing some big installation.
A tough question, as everyone's needs are different; I might recommend you create an account on https://codeberg.org as it's the largest, most popular Forgejo instance, with many FOSS projects hosted there.
Codeberg's admins have to disable some features (e.g. pull mirrors; only push is allowed, to prevent abuse) and they run some custom code (abuse mitigation: spam, etc.), but in general you're getting a "test drive" of the latest Forgejo experience, which only gets better when self-hosting, where you can use all the features.
Gods, for some reason GitLab consumes 5-10% of a CPU at all times. I spent weeks trying to get it to calm down to reduce our AWS spend. Absolutely no changes no matter what I tried. On my 2013 Xeon server at home it's even worse.
GitLab is great, I really do enjoy working with it. I hate running it.
Yeah, I love gitlab as a user - but as an admin, the performance feels like something out the 90s. I had to use the gitlab-rails REPL console for something a couple of weeks ago. Even on a server with tons of headroom, it took *10 minutes* to start up?
Depends on your needs. Last time I checked, Gitlab wanted money for e.g. assigning multiple PR reviewers, which is available in gitea/forgejo.
The real issue with gitea/forgejo compared to Gitlab is their terrible CI, which is (to some approximation) a clone of GitHub Actions, also a dumpster fire for those of us proficient with/preferring the UNIX command line. You'll probably need a separate CI runner, like Woodpecker or Drone.
CI is one area where it's "lacking." I quote that because, honestly, all the bells and whistles in these CI YAMLs are starting to hurt. Woodpecker (what Forgejo uses) is strikingly simple.
Maybe they should focus less on "agentic" and more on just keeping their core product solid... I suppose that doesn't rhyme with growth at all costs... sad, but it is what it is.
git was designed as a distributed vcs for high-latency connected developers with plenty of ability to work offline.
I don't think I've really been impacted by any of the outages. Maybe I wait an extra hour to merge a feature or something, in which case I actually get to eat lunch and browse HN, doesn't feel quite as catastrophic for me, as some of you.
The problem is that people design their entire development and release lifecycle to be dependent on GitHub. A lot of the time they can't even push code hotfixes to production without it. It's a terrible SPOF for a lot of engineering orgs.
For a few years now we've also had customers that declare GitHub fully trusted, as in: it is simply not worth considering what the impact would be if that vendor gets compromised. I can't name names, but this includes a vendor that aims to prevent supply chain attacks (technically language-agnostic; in practice aiming to be the solution chosen by one of the biggest programming languages' package manager).
> can't even push code hotfixes to production without it. It's a terrible SPOF
GitHub's availability impact is the least of my concerns these days. It'll be a really tough year for society worldwide if we need to rebuild loads of infrastructure after some threat actor gets into GitHub and manages to change key pieces of code without being detected for a couple of years. Having seen how hospitals handle updates, they might get lucky and be old enough not to be affected yet, or have a really tough time recovering due to understaffed IT.
No clue how to even begin solving this since our OSes are likely all pulling dependencies from GitHub without verification of the developer's PGP key, if the project even has that and applies it correctly. I guess I can only recommend being aware of the problem and doing what you can in your own organization to reduce the impact
It also has pretty neat support for emailing patches.
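For reference, the built-in flow is roughly this (the list address is a placeholder, and git send-email needs SMTP configured):

    # Turn the last 3 commits into mbox-formatted patch files...
    git format-patch -3 --cover-letter -o patches/
    # ...and mail them to a list or maintainer.
    git send-email --to=dev@example.org patches/*.patch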
And it's practically impossible to lose data as long as any single dev still has an intact .git directory.
Nobody is preventing the devs from just setting up a second "upstream" and pushing to both github and gitlab (for example) or any other service at the same time.
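A sketch of that setup with placeholder URLs: one fetch URL plus two push URLs on the same remote, so a single `git push` updates both hosts.

    # The first --push sets the push URL; --add appends a second one.
    git remote set-url --push origin git@github.com:me/repo.git
    git remote set-url --add --push origin git@gitlab.com:me/repo.git
    git push origin main   # now pushes to both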
It's kind of weird that we've collectively decided on a distributed version control system, while centralizing where we keep the repositories and metadata.
I think it's just a simple reality that most projects don't actually need or want a decentralized development process. In my experience, most projects are looking for a single, high-reliability canonical source that is in control of project leadership. Most projects are only developed by a small group, maybe even only one person.
I think the distributed support is pretty nice for easy-ish mirroring. Even to a relatively bare git+ssh target on a self-hosted server. No specialized services required. I mean, just for VCS.
^ Comment on the Nth 100+ GitHub Down thread (every thread is like that).
Maybe everyone here is just using it as an excuse to chatter about forges or GitHub being down too much etc., and it has no impact. But if it does and people are honestly fretting, they can mirror their repos. Then no one needs to worry that much (except for their cursed CI setups) the next time it happens.
And that’s a benefit of peer-to-peer repos right there.
You claimed that most projects don’t want/need decentralized development and I gave you an example of how mirroring can make a project more robust. You’re welcome.
We're starting to see the pain of such monopolies. Note that I included a hosted option (Savannah) in my list. It doesn't take everyone leaving github to break the monopoly, just enough to make it not a monopoly.
Just completed one of our (quarterly) GitHub exports before this hit... If people are looking into extracting everything from their organisations, I've published the scripts I use: https://github.com/sigio/github-export
I've been working on a self-hosted alternative to GitHub and I am curious what HN finds to be the most important features. I think Code, Issues and PRs are the critical aspects, but I don't know what typical workflows look like for others these days.
It seems like there are some teams that have figured out a way to turn GH into a labyrinth of CI/CD actions that allegedly produces a ton of business value, but I don't know how typical this kind of use case is. If the average GH user just needs the essentials, I could easily focus those verticals down and spend more time on things like SAML/OIDC, reporting APIs, performance, etc. I suspect there aren't a whole lot of developers who are finding things like first party AI integration to be indispensable.
Any time I see this topic brought up, two things are always mentioned: the "hub" part, meaning the discoverability and social aspect, and the "network effect" of having everyone use a single service (so everyone already has an account and doesn't have to create an additional account for every self-hosted project).
Agreed, it's definitely the network cohesion that keeps GH together. Especially for FLOSS. For advanced features, there are some niceties that say Azure DevOps offers that GH Enterprise still lacks, though it feels like there's some convergence on the backend.
I like GH Actions myself, though sometimes it can get a little cumbersome with self-hosted workers on private/enterprise projects. I'm a big fan of automation and like to keep related bits closer together. As opposed to alternatives that have completely detached workflows/deployments.
I like what I've used of Forgejo (Git, Issues+Board, Wiki), and have hosted it on servers and localhost easily. I haven't tried its CI features yet.
Codeberg is a cloud site for open source projects that runs Forgejo.
Forgejo is a fork of Gitea, which is another option, especially if you want commercial support, but I haven't tried it yet.
I also kinda like GitLab, both the cloud one and the enterprise on-prem version. And their issue label features work more easily with the board than Forgejo's (automatically moving issues between columns based on scoped labels). Though their pricing tiers have been unfortunate at times (I don't know latest).
If you don't need any features beyond a backup location for git, all you need is an SSH server with FS support. All you need is git on the remote server to initialize a server directory, and you can target that with git+ssh directly. Works well enough as a backup/mirror repository.
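A minimal sketch of that, assuming an SSH host alias `backup` in your ~/.ssh/config:

    # On the server: create a bare repository once.
    ssh backup 'mkdir -p ~/repos && git init --bare ~/repos/myproject.git'
    # Locally: add it as a remote and mirror everything to it.
    git remote add backup backup:repos/myproject.git
    git push --mirror backup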
If you want to self-host for more features (CI/CD, PRs, etc.) there's GitLab, Gitea, and forgejo that I'm aware of. I think GitLab is a bit heavy duty for most self-hosting usage myself though. I actually appreciate the online/cloud and commercial options.
When you self-host, it becomes your job to fix it when it breaks.
Yup, as you mentioned, there are other decent alternatives to GitLab that you can run locally.
I'm finding myself liking and using gitlab more and more when I come back to it every 6-8 months.
I don't know how I'd be able to trust a single cloud with my source code and DevOps/CI/CD. At the least, I'd want a mirrored setup as a failover, in a private or hybrid cloud that isn't with the same provider.
Our gitea uptimes are measured in months. The only downtime is during non-working hours for upgrading gitea & the underlying OS, which take about 5 seconds of work and another 15 seconds of waiting for it to upgrade the database and restart.
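For a binary install, the upgrade really is about that small; this sketch assumes a typical systemd setup and Gitea's standard download URL scheme (Gitea runs its database migrations automatically on startup):

    VER=1.22.3   # substitute the current release
    systemctl stop gitea
    wget -O /usr/local/bin/gitea \
      "https://dl.gitea.com/gitea/$VER/gitea-$VER-linux-amd64"
    chmod +x /usr/local/bin/gitea
    systemctl start gitea   # migrates the database, then serves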
I have Gitea running in my basement. You can see how often it gets updated on the Gitea GitHub (they don't self-host their development since they haven't written an importer yet). You could automate upgrades, but I haven't spent enough time updating to make that make sense.
Before GitLab, GitHub, etc., it was common to host your own code repos on-prem. The thing now, though, is that there's a lot of add-on functionality in how teams flow using GitHub versus just hosting a git server... so it's not really an apples-to-apples comparison anymore.
> The thing now, though, is that there's a lot of add-on functionality
I don't get why people use those.
I understand free software projects that don't want to run any infrastructure, but why do companies push their builds and deployments off-premises when all you need is one trusted computer somewhere? Why do people insist on trusting cloud computers more than the ones they can kick?
Gitlab can handle a lot of CI/CD hosted locally for free that github actually charges for.
Git was originally local only too. People would run their own source code repos, it was trivial to run and maintain for the most part for most basic to intermediate use cases.
I had clients who insisted source code (mine or theirs) couldn't be on a public cloud provider. It's not that unreasonable or uncommon.
I used to work at a place that had a second copy of all the git repositories on a server, available over ssh. We could push there and then, at deploy time, instruct the servers to pull from that repo instead of bitbucket.
If I were to set up the same thing again today, I'd add automation to keep it in sync with GitHub, as well as automate the servers so that they'd attempt to pull from both on every deploy.
I share this as an idea for those who need an emergency or backup deploy mechanism.
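A sketch of the deploy-time fallback, assuming two configured remotes named origin (GitHub) and backup (the internal mirror):

    #!/bin/sh
    # Fetch the deploy branch from the primary remote, falling back
    # to the internal mirror if GitHub is unreachable.
    git fetch origin main || git fetch backup main
    git reset --hard FETCH_HEAD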
If you're reading this there's a great opportunity to pull a Linear move and disrupt the entrenched players with a 10x better UX. Although the hardest nut to crack here are the network effects.
We have used, and are still using, GitLab with success in all of my companies. It is maybe a bit boring and a little bit slow, but it is enterprise and has everything you need, including time on the market.
I've always used Bitbucket as it allows private repositories, so Github was never something I wanted to use. But it is one of the most important websites in the world for tech people and should be run better than this, especially being now owned by the largest tech company in the world. On the other hand, it just shows that centralisation, or over-reliance on one thing or service, is always the worst idea. But people are very lazy and so we keep running in these circles ad infinitum.
I have been using Bitbucket since before it was acquired by Atlassian in 2010, so I am sure GitHub has changed quite a bit since then. Even though I do have a GitHub account, I don't use it for anything other than creating or commenting on issues in other repos. So I have no clue about present-day GitHub capabilities, as I am satisfied with Bitbucket and have had no need to explore GitHub in depth.
I'm really wondering what causes this internally. No one likes having outages, but it keeps happening. Is this a culture thing, like pressure to ship features fast? Understaffed teams, or a lack of ownership of some crucial components?
Definitely all good questions. I've noticed that there seems to be a bit of convergence between Azure DevOps and Github (Enterprise) and am curious how co-mingled the teams or management are at any given level or not.
I'm mixed on the cultural changes at MS and have historically preferred GH's approach. I'm hoping MS moves closer to GH than the reverse. I'm not working with/at either company and don't really have a lot of insights to offer other than observations from the outside.
I am surprised at how much GitHub is willing to let itself go, for a product that could nowadays be replaced by someone over a weekend. This is the second major outage in like two weeks.
Depends how much customization you've done to your CI/CD, and how heavily you use custom GHAs and other plugins. No way my org could move off in a weekend.
Funny, because just yesterday I was downvoted for pointing out (among other things) GitHub's less than stellar reliability these past few years.
When are the AI vibe coders going to create a GitHub replacement? With 1000x AI productivity a lean startup should easily outcompete the incumbents, no?
I've used their public site for a few private projects, mostly out of habit from when private projects on GH were limited to paid accounts. The collaboration was a bit better at that time, imo.
I'm not sure that I would choose it for self-hosting over Gitea, Forgejo, or straight-up ssh+git on a remote system, which works well enough for a personal backup target.
Just saying. Our company has been on self-hosted Gitlab for years. We have one devops guy who spends like an hour a month managing the server. Never one outage.
Well, considering git is only version control, and github does much more... pull requests, social interaction, workers/workflows, ci/cd etc. It's kind of a big deal.
Unless you have some sort of decentralized method of doing CI/CD and pull requests I'm unaware of?
I'm at Google; we have a million+ file codebase. Every piece of code is snapshotted (no need to commit; a snapshot is taken automatically whenever a change is made). Every line of code has its own unique URL (for your branch too, not just overall). Background tests run nonstop, hourly. I've never seen any downtime. The infra at Google is insane.
That's what I disagree with; I think Google's codebase scale is massive. It has Android + Chrome + Google + Waymo + TensorFlow etc.; this is a lot of code. And also consider that every third-party library is vendored into the google3 codebase too: there are no truly external dependencies, all the code is there in one repo, under their one-version principle. I think Google is at roughly similar scale to GitHub and is managing things far better.
Yes, you can still commit as a logical delineation of changes. And every commit is automatically a PR; folding multiple commits is possible but not the common workflow. They prefer just reviewing and approving each commit.