We just started using Gerrit, but with GitHub authentication. Can't sign in! Not doing our own account management appears to have been a poor decision.
Every time a service like GitHub goes down, I swear to myself that I'll replace the dependency, because surely a project that takes days, weeks, or months to implement will ensure I never have hours or minutes of downtime ever again.
I run a gogs[0] instance, a GitHub clone with great performance characteristics. I use it for both company and personal projects, and we haven't had any downtime. No more productivity drop when GitHub is down :D.
I recently discovered another huge dependency I have when coding, though: Stack Overflow. That's going to be a lot harder to replace...
That could be the motto of RhodeCode (https://rhodecode.com). Self-hosted repository management not only keeps your uptime under your own control, it can also be more secure.
This isn't an argument against third-party account management. It's an argument against single points of failure.
You can (and many websites do) support multiple auth providers such as Facebook, Twitter, GitHub, etc. If you bind your account to several of these providers, you can mitigate this risk to a large extent.
Even with that setup, people usually don't bother to associate more than one set of credentials with their account, so you would have to institute a more-than-one-credential policy.
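To make that concrete, here's a minimal sketch in Python of what binding several providers to one account might look like; the model and the provider names are illustrative, not any particular site's schema:

    from dataclasses import dataclass, field

    @dataclass
    class Account:
        email: str
        # provider name -> provider-side user ID; any one of these can log you in
        identities: dict[str, str] = field(default_factory=dict)

        def link(self, provider: str, external_id: str) -> None:
            self.identities[provider] = external_id

        def can_sign_in(self, available_providers: set[str]) -> bool:
            # Signing in stays possible while at least one linked provider is up.
            return bool(self.identities.keys() & available_providers)

    acct = Account(email="dev@example.com")
    acct.link("github", "octocat-123")
    acct.link("google", "g-456")
    # GitHub is down, but Google still works:
    print(acct.can_sign_in({"google", "twitter"}))  # True

The Gerrit anecdote above is exactly the single-provider failure mode this avoids.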
We're only more reliable when self-hosting. GitLab.com is growing very fast and we have availability and speed problems. But I love the 'for now'. We're working to improve .com and we'll keep launching awesome features every month.
At least when it's self-hosted, it's your responsibility and you can fix it yourself. For anything larger than a tiny company, that makes it much less risky.
Remember that 'the cloud' just means 'someone else's computers', so you are subject to their infrastructure management practices and uptime guarantees (or lack thereof).
Even for a company much larger than tiny, you're better off with GitHub from a reliability perspective. Most medium-sized companies (up to hundreds of employees) don't have operations staff anywhere near as responsive as GitHub's.
Look at their status history. The vast majority of companies could not boast such a record for their internal operations.
I had just posted a link to a GitHub issue, and when I clicked it to see if it worked, I wondered whether I had done something wrong. GitHub has so much activity that many people are probably thinking the same thing right now.
I also imagine hundreds of people are wondering if the commit they just pushed killed GitHub's servers.
Has anyone got a good GitHub backup script that's actually robust? Something that will reliably go through all your company's private repos and clone the lot. I've found a few, but they're all clunky and in serious need of adaptation (e.g. https://gist.github.com/rodw/3073987, which is the best I've found so far).
The hard part appears to be finding one that handles an organisation account well, not just a user.
You could try github-backup [0]. Quoting from the description:
It backs up everything GitHub publishes about the repository,
including branches, tags, other forks, issues, comments, wikis,
milestones, pull requests, watchers, and stars.
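For the organisation case specifically, a minimal sketch of the git side of a backup might look like this in Python; it assumes a GITHUB_TOKEN environment variable holding a personal access token that can see the org's private repos, and the org name is a placeholder:

    import os
    import subprocess

    import requests

    TOKEN = os.environ["GITHUB_TOKEN"]  # token with access to the org's private repos
    ORG = "your-org"                    # placeholder organisation name

    session = requests.Session()
    session.headers["Authorization"] = f"token {TOKEN}"

    def org_repos(org):
        """Yield every repo in the organisation, following API pagination."""
        url = f"https://api.github.com/orgs/{org}/repos?type=all&per_page=100"
        while url:
            resp = session.get(url)
            resp.raise_for_status()
            yield from resp.json()
            url = resp.links.get("next", {}).get("url")

    for repo in org_repos(ORG):
        name = repo["name"]
        # --mirror keeps a bare copy of every ref (branches, tags);
        # on later runs, fetch instead of re-cloning.
        if os.path.isdir(f"{name}.git"):
            subprocess.run(["git", "-C", f"{name}.git", "fetch", "--prune"], check=True)
        else:
            subprocess.run(["git", "clone", "--mirror", repo["ssh_url"]], check=True)

Note this only captures the git data; issues, wikis, and PR metadata live behind the API, which is what tools like github-backup cover.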
The git hosting is largely independent of the web site; it happens fairly frequently that one is down and the other isn't. The git hosting has a resilient, distributed back end designed to maintain redundant replicas and stay highly available. The web site is, I believe, a huge Rails app.
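You can see that independence for yourself by probing the git backend directly, bypassing the web UI. A quick sketch (any public repo URL would do; this one is just an example):

    import subprocess

    # `git ls-remote` talks to the git hosting layer, not the web front end,
    # so it can succeed even while github.com's web site is erroring.
    result = subprocess.run(
        ["git", "ls-remote", "https://github.com/git/git.git", "HEAD"],
        capture_output=True, text=True,
    )
    if result.returncode == 0:
        print("git hosting reachable:", result.stdout.strip())
    else:
        print("git hosting unreachable:", result.stderr.strip())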
My activity is missing from my activity page, but the issue I filed about an hour ago is there on my contributions page. I suspect some of the feeds and lists have to repopulate, and that nothing is really missing.
I really wish we could get the community effects of GitHub without GitHub, with everyone self-hosting repos, or even a fully distributed solution on top of something like IPFS.
I am sorely tempted to just start tracking issues as YAML or CSV + Markdown files in an issues directory (sketched below). That gets me all kinds of features that GH doesn't provide:
- offline issue management (bugs on a plane)
- issues associated with branches
- grep, awk, ack, sed
- all of the above, available to git hook scripts
The only thing holding me back is that I really want to get this right. This needs to become a standard, not "Dan's weird repo that no one understands".
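For illustration only, here is one possible shape for it, with a tiny Python reader; the issues/<id>.md layout with YAML front matter is a made-up convention, not a standard:

    import sys
    from pathlib import Path

    import yaml  # PyYAML

    # Assumed (hypothetical) layout: each issue is issues/<id>.md, starting
    # with a YAML front-matter block:
    #
    #   ---
    #   id: 42
    #   title: Login fails behind proxy
    #   status: open
    #   branch: fix/proxy-login
    #   ---
    #   Markdown body describing the bug.

    def load_issues(directory="issues"):
        for path in sorted(Path(directory).glob("*.md")):
            # Front matter sits between the first two '---' markers.
            _, header, body = path.read_text().split("---", 2)
            issue = yaml.safe_load(header)
            issue["body"] = body.strip()
            issue["file"] = str(path)
            yield issue

    if __name__ == "__main__":
        wanted = sys.argv[1] if len(sys.argv) > 1 else "open"
        for issue in load_issues():
            if issue.get("status") == wanted:
                print(f'#{issue["id"]} {issue["title"]} ({issue["file"]})')

Because it's all plain files, the grep/awk/sed and git-hook points above fall out for free.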
19 minutes and twelve pop-under ads later... I can't commit my changes until I punch the monkey and learn one weird trick to make my teeth whiter. Ah, the good old days.
Are there companies out there that include GitHub as a critical part of their infrastructure, such that if the web front end or git hosting goes down, their production servers are affected?
I'm sure there are, but how many people would actually have been affected by this specific outage, which only seemed to affect the web front end?
I've been planning to do a static company page, deployed as a GH organisation page. Maybe I should have a failover plan that can run off S3 or something, plus a short TTL on the DNS records...
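A hedged sketch of that flip with Route 53 via boto3; the zone ID, hostname, and endpoints are placeholders, and a production setup would want health-check-driven failover rather than a manual UPSERT:

    import boto3

    route53 = boto3.client("route53")
    route53.change_resource_record_sets(
        HostedZoneId="Z_PLACEHOLDER",
        ChangeBatch={
            "Comment": "flip between GitHub Pages and the S3 fallback",
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com.",
                    "Type": "CNAME",
                    "TTL": 60,  # short TTL so clients re-resolve within a minute
                    "ResourceRecords": [
                        # To fail over, swap this for the S3 website endpoint,
                        # e.g. "www.example.com.s3-website-us-east-1.amazonaws.com."
                        {"Value": "example.github.io."},
                    ],
                },
            }],
        },
    )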
Well, that was scary. I pushed some changes and noticed the CI server didn't do anything; it reported no changes. So I checked the PR, and sure enough it doesn't show any of my latest commits, although git seems to think all is well.
Saw that Syncthing had an update for the desktop version and wanted to check that the Android client was on roughly the same version number, so that the protocols wouldn't be incompatible.
And because GitHub was down, I couldn't check the changelog, meaning I had to go to such crazy measures as opening the Syncthing app to check what protocol version it displayed, which turned out to be a hundred times quicker than rustling through some changelog, but uh, something something, ramble ramble.
There have been some DDoS attacks in the past that were most likely by a foreign actor, but they were directed against individual repos that contained material the attacker wanted to suppress.