
IMHO, the main advantage of github is that it is an ecosystem. This is a well-thought-out Swiss Army knife: a pioneering (but no longer new) PR system, convenient issues, as well as a well-formed CI system with many developed actions and free runners. In addition, you can do code navigation right in the web browser. You write code, and almost everything works effortlessly. Having a sponsorship system is also great: you don't have to search for external donation platforms and post weird links in your profile/repository.

All in one place, and that's why developers like it so much. The obsession with AI makes me nervous, but for me, the average developer, the advantages still outweigh the downsides. For now.





I don't agree with this at all. I think the reason Github is so prominent is the social network aspects it has built around Git, which created strong network effects that most developers are unwilling to part with. Maintainers don't want to lose their stars and the users don't want to lose the collective "audit" by the github users.

Things like number of stars on a repository, number of forks, number of issues answered, number of followers for an account. All these things are powerful indicators of quality, and like it or not are now part of modern software engineering. Developers are more likely to use a repo that has more stars than its alternatives.

I know that the code should speak for itself and one should audit their dependencies and not depend on Github stars, but in practice that is not what happens; we rely on the community.


These are the only reasons I use GitHub. The familiarity to students and non-developers is also a plus.

I have no idea what the parent comment is talking about with a "well-formed CI system." GitHub Actions is easily the worst CI tool I've ever used. There are no core features of GitHub that haven't been replicated by GitLab at this point, and in my estimation GitLab did all of it better. But, if I put something on GitLab, nobody sees it.


I am surprised by the comments about GH CI. I first started using CI on GL, then moved to GH and found GH's to let me get things done more easily.

It's been years though, and the ease of doing simple things is not always indicative of difficult things. Often quite the contrary...


From what I gather it's that GH Actions is good for easy scenarios: single line building, unit tests, etc. When your CI pipeline starts getting complicated or has a bunch of moving parts, not only do you need to rearchitect parts of it, but you lose a lot of stability.

Bingo. GH Actions is great if you're deploying vanilla web stuff to a vanilla web server. I write firmware. GH Actions is hell.

Easy and good are radically different things.

And this is the core problem with the modern platform internet. One victor (or a handful) takes the lead in a given niche, and it becomes impossible to get away from them without great personal cost (literal, moral, or in labor, and usually a combo of all three). And then that company has absolutely no motivation at all to prioritize the quality of the product, merely to extract as much value from the user base as possible.

Facebook has been on that path for well over a decade, and it shows. The service itself is absolute garbage. Users stay because everyone they know is already there and the groups they love are there, and they just tolerate being force-fed AI slop and being monitored. But Facebook is not GROWING as a result, it's slowly dying, much like its aging userbase. But Facebook doesn't care because no one in charge of any company these days can see further than next quarter's earnings call.


This is a socio-economic problem; it can happen with non-internet platforms too. It's why people end up living in cities, for example. Any system that has addresses, accounts or any form of identity has the potential for strong network effects.

I would say that your comment is an addition to mine, and I agree with it. This is another reason for the popularity of github.

As for me, this does not negate the convenient things that I originally wrote about.


Github became successful long before those 'social media features' were added, simply because it provided free hosting for open source projects (and free hosting services were still a rare thing back in the noughties).

The previous popular free code host was SourceForge, which eventually entered what's now called its "enshittification" phase. Github was simply in the right place at the right time to replace SourceForge, and the rest is history.


There are definitely a few phases of Github, feature- and popularity-wise.

   1. Free hosting with decent UX
   2. Social features
   3. Lifecycle automation features
In this vein, it doing new stuff with AI isn't out of keeping with its development path, but I do think they need to pick a lane and decide if they want to boost professional developer productivity or be a platform for vibe coding.

And probably, if the latter, fork that off into a different platform with a new name. (Microsoft loves naming things! Call it 'Codespaces 365 Live!')


Technically so was BitBucket, but it chose Mercurial over Git initially. If you are old enough you will remember articles comparing the two, with Mercurial getting slightly more favorable reviews.

And for those who don’t remember SourceForge, it had two major problems in DevEx: first, you couldn’t just get your open source project published. It had to be approved. And once it was, you had an ugly URL. GitHub had pretty URLs.

I remember putting up my very first open source project back before GitHub and going through this huge checklist of what a good open source project must have. Then seeing that people just tossed code onto GitHub as is: no man pages, no or little documentation, build instructions that resulted in errors, no curated changelog, and realizing that things are changing.


Github was faster than BitBucket and it worked well whether or not JavaScript was enabled. I have tried a variety of alternatives; they have all been slower, but Github does seem to be regressing as of late.

> Technically so was BitBucket

The big reason I recall was that GitHub provided free public repos and limited private, while BitBucket was the opposite.

So if you primarily worked with open-source, GitHub was the better choice in that regard.


Mercurial was/is nice and imho smooths off a lot of the unnecessarily rough git edges.

But VCS has always been a standard-preferring space, because its primary point is collaboration, so using something different creates a lot of pain.

And the good ship SS Linux Kernel was a lot of mass for any non-git solution to compete with.


And GitHub got free hosting and support from Engine Yard when they were starting out. I remember it being a big deal when we had to move them from shared hosting to something like 3 dedicated supermicro servers.

> Things like number of stars on a repository, number of forks, number of issues answered, number of followers for an account. All these things are powerful indicators of quality, and like it or not are now part of modern software engineering.

I hate that this is perceived as generally true. Stars can be farmed and gamed; and the value of a star does not decay over time. Issues can be automatically closed, or answered with a non-response and closed. Number of followers is a networking/platform thing (flag your significance by following people with significant follower numbers).

> Developers are more likely to use a repo that has more stars than its alternatives.

If anything, star numbers reflect first mover advantage rather than code quality. People choosing which one of a number of competing packages to use in their product should consider a lot more than just the star number. Sadly, time pressures on decision makers (and their assumptions) mean that detailed consideration rarely happens and star count remains the major factor in choosing whether to include a repo in a project.


Stars, issues closed, PRs, commits, all are pointless metrics.

The metrics you want are mostly ones they don't and can't have. Number of dependent projects for instance.

The metrics they keep are, as people have said, just a way to gamify and keep people interested.


So number of daily/weekly downloads on PyPI/npm/etc?

All these things are a proxy for popularity and that is a valuable metric. I have seen projects with amazing code quality but if they are not maintained eventually they stop working due to updates to dependencies, external APIs, runtime environment, etc. And I have seen projects with meh code quality but so popular that every quirk and weird issue had a known workaround. Take ffmpeg for example: its code is... arcane. But would you choose a random video transcoder written in JavaScript, last updated in 2012, just due to its beautiful code?


It is fine if a dependency hasn't been updated in years, if the number of dependent projects hasn't gone down. Especially if no issues are getting created. Particularly with cargo or npm type package managers where a dependency may do one small thing that never needs to change. Time since last update can be a good thing, it doesn't always mean abandoned.

I agree with you. I believe it speaks to the power of social proof as well as the time pressures most developers find themselves under.

In non-coding social circles, social proof is even more accepted. So, I think that for a large portion of codebases, social proof is enough.


> Things like number of stars on a repository, number of forks, number of issues answered, number of followers for an account. All these things are powerful indicators of quality

They're NOT! Lots of trashy AI projects have +50k stars.


You don't need to develop on Github to get this, just mirror your repo.

that's not enough, i still have to engage with contributors on github. on issues and pull requests at a minimum.

Unfortunately the social network aspect is still hugely valuable though. It will take a big change for anything to happen on that front.

> Things like number of stars on a repository, number of forks, number of issues answered, number of followers for an account. All these things are powerful indicators of quality

Hahahahahahahahahahahaha...


OK, indicators of interest. Would you bet on a project nobody cares about?

I guess if I viewed software engineering merely as a placing of bets, I would not, but that's the center of the disagreement here. I'm not trying to be a dick (okay maybe a little sue me), the grandparent comment mentioned "software engineering."

I can refer you to some github repositories with a low number of stars that are of extraordinarily high quality, and similarly, some shitty software with lots of stars. But I'm sure you get the point.


You are placing a bet that the project will continue to be maintained; you do not know what the future holds. If the project is of any complexity, and you presumably have other responsibilities, you can't do everything yourself; you need the community.

There are projects, or repositories, with a very narrow target audience, sometimes you can count them on one hand. Important repositories for those few who need them, and there aren't any alternatives. Things like decoders for obscure and undocumented backup formats and the like.

Most people would be fine with Forgejo on Codeberg (or self hosted).

> Maintainers don't want to lose their stars

??? Seriously?

> All these things are powerful indicators of quality

Not in my experience....


Why are you so surprised?

People don't just share their stargazing plots "for fun", but because it has meaning for them.


In my 17 years of having a GitHub account I don’t think I’ve ever seen a “stargazing plot”. Have you got an example of one?

> People don't just share their stargazing plots "for fun", but because it has meaning for them.

What's the difference?


> a pioneering (but no longer new) PR system

having used gerrit 10 years ago there's nothing about github's PRs that I like more, today.

> code navigation right in the web browser

this is nice indeed, true.

> You write code, and almost everything works effortlessly.

if only. GHA are a hot mess because somehow we've landed in a local minimum of pretend-YAML-but-actually-shell-js-jinja-python and they have a smaller or bigger outage every other week, for years now.

> why developers like it so much

most everything else is much worse in at least one area, and most importantly, it's what everyone uses. no one got fired for using github.


The main thing I like about Github's PRs is that it's a system I'm already familiar with and have a login/account for. It's tedious going to contribute to a project to find I have to sign up for and learn another system.

I've used Gerrit years ago, so wasn't totally unfamiliar, but it was still awkward to use when Go were using it for PRs. Notably that project ended up giving up on it because of the friction for users - and they were probably one of the most likely cases to stick to their guns and use something unusual.


> Notably [go] ended up giving up on [gerrit]

That's not accurate. They more or less only use Gerrit still. They started accepting Github PRs, but not really, see https://go.dev/doc/contribute#sending_a_change_github

> You will need a Gerrit account to respond to your reviewers, including to mark feedback as 'Done' if implemented as suggested

The comments are still gerrit, you really shouldn't use Github.

The Go reviewers are also more likely than usual to assume you're incompetent if your PR comes from Github, and the review will accordingly be slower and more likely to be rejected, and none of the go core contributors use the weird github PR flow.


> The Go reviewers are also more likely than usual to assume you're incompetent if your PR comes from Github

I've always done it that way, and never got that feeling.


there's certainly a higher rejection rate for github PRs

That seems unsurprising given that it’s the easiest way for most people to do it. Almost any kind of obstacle will filter out the bottom X% of low effort sludge.

correlation, not causation.

The lowest-common-denominator way will always get the worst quality


sure it's correlation, but the signal-to-noise ratio is low enough that if you send it in via github PR, there's a solid chance of it being ignored for months / years before someone decides to take a look.

Oh right. Thanks for the correction - I thought they had moved more to GitHub. Guess not as much as I thought!

Many people confuse competence and dedication.

A competent developer would be more likely to send a PR using the tool with zero friction than to dedicate a few additional hours of his life to create an account and figure out how to use some obscure tool.


You are making the same mistake of conflating competence and (lack of) dedication.

Most likely, dedication says little about competence, and vice versa. If you would rather not do the task at all than use the tools available to get it done, what does that say about your competence?

I'm not in a position to know or judge this, but I could see how dedication could be a useful proxy for the expected quality of a PR and the interaction that will go with it, which could be useful for popular open source projects. Not saying that's necessarily true, just that it's worth considering some maintainers might have anecdotal experiences along that line.


A competent developer wouldn't call gerrit an obscure tool.

This attitude sucks and is pretty close to just being flame bait. There are all kinds of developers who would have no reason to ever have come across it.

A competent developer should be aware of the tools of the trade.

I'm not saying a competent developer should be proficient in using gerrit, but they should know that it isn't an obscure tool - it's a google-sponsored project handling millions of lines of code internally at google and externally. It's like calling golang an obscure language when all you've ever done is java or typescript.


It’s silly to assume that someone isn’t competent just because you know about a tool that they don’t know about. The inverse is almost certainly also true.

Is there some kind of Google-centrism at work here? Most devs don’t work at Google or contribute to Google projects, so there is no reason for them to know anything about Gerrit.


> Most devs don’t work at Google or contribute to Google projects, so there is no reason for them to know anything about Gerrit.

Most devs have never worked on Solaris, but if I ask you about solaris and you don't even know what it is, that's a bad sign for how competent a developer you are.

Most devs have never used prolog or haskell or smalltalk seriously, but if they don't know what they are, that means they don't have curiosity about programming language paradigms, and that's a bad sign.

Most competent professional developers do code review and will run into issues with their code review tooling, and so they'll have some curiosity and look into what's out there.

There's no reason for most developers to know random trivia outside of their area of expertise, like "what compression format does png use by default", but text editors and code review software are fundamental developer tools, so fundamental that every competent developer I know has enough curiosity to know what's out there. Same for programming languages, shells, and operating systems.


These are all ridiculous shibboleths. I know what Solaris is because I’m an old fart. I’ve never used it nor needed to know anything about it. I’d be just as (in)competent if I’d never heard of it.

> The main thing I like about Github's PRs is that it's a system I'm already familiar with and have a login/account for. It's tedious going to contribute to a project to find I have to sign up for and learn another system.

codeberg supports logging in with GitHub accounts, and the PR interface is exactly the same

you have nothing new to learn!


Yeah and this slavish devotion to keeping the existing (broken imho) PR structure from GH is the one thing I most dislike about Forgejo, but oh well. I still moved my project over to Codeberg.

GH's PR system is semi-tolerable for open source projects. It's downright broken for commercial software teams of any scale.

Like the other commenter: I miss Gerrit and proper comment<->change tracking.


agreed, the github "innovation", i.e. the pull request interface is terrible for anything other than small changes

hopefully codeberg can build on it, and have an "advanced" option


> having used gerrit 10 years ago there's nothing about github's PRs that I like more, today.

I love patch stack review systems. I understand why they're not more popular, they can be a bit harder to understand and more work to craft, but it's just a wonderful experience once you get them. Making my reviews work in phabricator made my patchsets in general so much better, and making my patchsets better has improved my communication skills.


I used gerrit a bit at work, but any time I want to contribute to an OSS project that requires it, I just send a message with the bugfix patch attached and leave; it's so much extra effort for drive-by contributions that I don't care.

It's fine for code review in a team, not really good in the GH-like "a user found a bug, fixed it, and wants to send it" contribution scheme
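For a drive-by fix, the usual Gerrit dance looks roughly like this once you've created yet another account (host, project, and branch names below are just placeholders, and projects differ in the details):

    # install Gerrit's commit-msg hook so each commit gets a Change-Id footer
    git clone https://gerrit.example.org/some-project && cd some-project
    curl -Lo .git/hooks/commit-msg https://gerrit.example.org/tools/hooks/commit-msg
    chmod +x .git/hooks/commit-msg
    # commit the fix, then push to the magic ref to open a change for review
    git commit -am "Fix the bug"
    git push origin HEAD:refs/for/main

Compared to forking and clicking a button on a forge you already have an account on, that's a lot of ceremony for a one-line fix.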


> a well-formed CI system

Man :| no. I genuinely understand the convenience of using Actions, but it's a horrible product.


Maybe I have low standards given I've never touched what gitlab or CircleCi have to offer, but compared to my past experiences with Buildbot, Jenkins and Travis, it's miles ahead of these in my opinion.

Am I missing a truly better alternative, or are CI systems simply all kind of a pita?


I don't have enough experience w/ Buildbot or Travis to comment on those, but Jenkins?

I get that it got the job done and was standard at one point, but every single Jenkins instance I've seen in the wild is a steaming pile of ... unpatched, unloved, liability. I've come to understand that it isn't necessarily Jenkins at fault, it's teams 'running' their own infrastructure as an afterthought, coupled with the risk of borking the setup at the 'wrong time', which is always. From my experience this pattern seems nearly universal.

Github actions definitely has its warts and missing features, but I'll take managed build services over Jenkins every time.


Jenkins was just built in a pre-container way, so a lot of stuff (unless you specifically make your jobs use containers) is dependent on the setup of the machine running Jenkins. But that does make some things easier, just harder to make repeatable, as you pretty much need a configuration management solution to keep the Jenkins machine config repeatable.

And yes "we can't be arsed to patch it till it's problem" is pretty much standard for any on-site infrastructure that doesn't have ops people yelling at devs to keep it up to date, but that's more SaaS vs onsite benefit than Jenkins failing.


My issue with Github CI is that it doesn't run your code in a container. You just have a github-runner-1 user and you need to manually check out the repository, do your build and clean up after you're done with it. Very dirty and unpredictable. That's for a self-hosted runner.

> My issue with Github CI is that it doesn't run your code in a container.

Is this not what you want?

https://docs.github.com/en/actions/how-tos/write-workflows/c...

> You just have github-runner-1 user and you need to manually check out repository, do your build and clean up after you're done with it. Very dirty and unpredictable. That's for self-hosted runner.

Yeah, checking out every time is a slight papercut, but I guess it gives you control, as sometimes you don't need to check out anything or want a shallow/full clone. If it checked out for you then there would be other papercuts.

I use their runners so I never need to do any cleanup and get a fresh slate every time.
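For the container side of things, the per-job option looks something like this (a rough sketch; the image is just an example, and a self-hosted runner needs Docker installed for it to work):

    jobs:
      test:
        runs-on: [self-hosted, linux]
        # run every step of this job inside the image rather than directly on the runner
        container:
          image: node:20-bookworm
        steps:
          - uses: actions/checkout@v4
          - run: npm ci && npm test

So the "dirty runner" problem is mostly a configuration choice rather than a hard limitation.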


Gitlab is much better

Curious what are some better options. I feel it is competing with Jenkins and CircleCI and it's not that bad.

In what way? I've never had an issue other than outages.

> it’s horrible, i use it every day

> the alternatives are great, i never use them

Every time.


What do you consider a good product in this space?

I'd rather solve advent of code in brainfuck than have to debug their CI workflows ever again.

Surely you just need the workflow to not have embedded logic but call out to a task manager so you can do the same locally?

Well then why does 99% of GH Actions functionality even exist?

It is fairly common practice, almost an engineering best practice, to not put logic in CI. Just have it call out to a task runner, so you can run the same command locally for debugging etc. Think of CI more as shell-as-a-service: you're just paying someone to enter some shell commands for you, and you should be able to do exactly the same locally.

You can take this setup a step further and use an environment manager to remove the installing of tools from CI as well, for local/remote consistency and more benefits.
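A rough sketch of what that leaves in the workflow file (the task name is just a placeholder for whatever runner your project uses):

    name: ci
    on: [push, pull_request]
    jobs:
      test:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          # all the real logic lives in the Makefile, so `make test` runs identically on a laptop
          - run: make test

When something breaks you debug `make test` locally instead of pushing commits to poke at the YAML.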


To lock you in.

Ergo, I'd rather use brainfuck to program CI.

The big issue with Github is that they never denied feeding AI with private repositories (Gitlab, for example, denied it when asked). This fact alone makes many users bitter, even at organizations not using private repos per se.

>a well-formed CI system with many developed actions and free runners.

It feels to me like people have become way too reliant on this (in particular, forcing things into CI that could easily be done locally) and too trusting of those runners (ISTR some reports of malware).

>In addition, you can do code navigation right in the web browser.

I've always found their navigation quite clunky and glitchy.


Underrated feature is the code search. Everyone starts out thinking they’ll just slap elastic search or similar in front of the code but it’s more nuanced than that. GitHub built a bespoke code search engine and published a detailed blog post about it afterwards.

Github's PR and CI are some of the worst.

> In addition, you can do code navigation right in the web browser.

IMHO the vanilla Github UI sucks for code browsing since it's incredibly slow, and the search is also useless (the integrated web-vscode works much better - e.g. press '.' inside a Github project).

> as well as a well-formed CI system with many developed actions and free runners

The only good thing about the Github CI system are the free runners (including free Mac runners), for everything else it's objectively worse than the alternatives (like Gitlab CI).


Well, I guess. It's not a surprise LinkedIn and GitHub are owned by the same entity. Both are degrading down to the same Zuckernet-style engagement hacking, and pseudo-resume self-boosting portfolio-ware. If the value of open source has become "it gets me hired", then ... fine. But that's not why many of us do free software development.

GitHub's evolution as a good open source hosting platform stalled many years ago. Its advantages are its social network effects, not as technical infrastructure.

But from a technology and UX POV it's got growing issues because of this emphasis, and that's why the Zig people have moved, from what I can see.

I moved my projects (https://codeberg.org/timbran/) recently and have been so far impressed enough. Beyond ideological alignment (free software, distaste for Microsoft, wanting to get my stuff off US infrastructure [elbows up], etc.) the two chief advantages are that I could create my own "organization" without shelling out cash, and run my own actions with my own machines.

And I haven't noticed any drop in engagement or new people noticing the project since moving. GitHub "stars" are a shite way of measuring project success.

Forgejo, which is behind Codeberg, is similar enough to GitHub that most people will barely notice anyways.

I'm personally not a fan of the code review tools in any of them (GitLab, Forgejo, or GitHub) because they don't support proper tracking of review commits like e.g. Gerrit does, but oh well. At least Forgejo / Codeberg are open to community contribution.


> In addition, you can do code navigation right in the web browser

How do you define "code navigation"? It might've got a bit easier with automatic highlighting of selected symbols, but in return the source code viewer got way too laggy and, for a couple of years now, it has had this weird bug with misplaced cursors if code is scrolled horizontally. I actually find myself using the "raw" button more and more often, or cloning the repo even for some quick ad-hoc lookups.

Edit: not to mention the blame view that actively fights with the browser's built-in search functionality.


Hint: Type the '.' key on any code page or PR.

And now it opens... some VSCode-esque editor in the browser that asks me to sign-in? Why would I want something even more resource-hungry and convoluted just to look up a random thing once in a while?

If you're familiar with VSCode it's quite handy. If you hate VSCode for some reason then just don't use it.

> a pioneering (but no longer new) PR system

Having used Forgejo with AGit now, IMO the PR experience on GitHub is not great when trying to contribute to a new project. It's just unnecessarily convoluted.


What do you like most about agit?

It's just how straightforward it is. With GitHub's fork-then-PR approach I would have to clone, fork, add a remote to my local fork, push to said remote, and open the PR.

With agit flow I just have to clone the repository I want to contribute to, make my changes, and push (to a special ref, but still just push to the target repo).

I made some small contributions to Guix when they were still using email for patches, and that (i.e. sending patches directly to upstream) already felt more natural than what GitHub promotes. And agit feels like the git-native interpretation of this email workflow.
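Concretely, on a Forgejo/Gitea host the whole drive-by contribution is roughly this (the repo URL is just an example; the refs/for convention and push options are per their AGit docs, if I remember right):

    git clone https://codeberg.org/someuser/someproject && cd someproject
    # ...edit and commit as usual...
    # pushing to the special ref opens a pull request against main directly on the upstream repo
    git push origin HEAD:refs/for/main -o topic=my-fix

No fork, no extra remote, and review still happens in the web UI.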


I don't get what people are complaining about. I haven't run into these AI issues except for Copilot appearing AS AN OPTION in views. Otherwise it seems to be working the same as it always has.

Is there more?


> Having a sponsorship system is also great

They have zero fees for individuals too which is amazing. Thanks to it I gained my first sponsor when one of my projects was posted here. Made me wish sponsorships could pay the bills.


Would you say Github has any significant advantages over Gitlab in this regard? I always found them to be on par, with incremental advantages on either side.

One of my favourite GitHub features is the ability to do a code search over the whole of GitHub; I'm not sure GitLab had the same when I used to use it?

Code search over all of Gitlab (even if available) wouldn't help much when many of the interesting repos might be on Github. To be truly useful, it would need to index repos across many different forges. But there's a tension in presenting that to users if you're afraid that they might exit your ecosystem to go to another forge.

Embrace, extend, extinguish.

That's not a Victorinox you're looking at, it's a cheap, poorly made, enshittified clone using a decades-old playbook (e-e-e).

The focus on "Sponsorship buttons" and feature instead of fixing is just a waste of my time.



