They're specifically referring to the dead comments from new users in this thread, so it's not insinuation. They're pointing out that a higher-than-normal number of shill bots have flocked to this thread.
The fact that the comments are dead means the system is working as intended, but it's not unreasonable to point out the nature of the comments.
> I publish OSS projects on GitHub because that's where the community is
And so we go, forever in circles, until enough of us move to other platforms regardless of where the existing community is. Just like how GitHub found its community in the early days, when most people (afaik) were using SourceForge, if anything.
"The community" will always remain on GitHub, if everyone just upload code to where "the community" already is. If enough of us stop using GitHub by default, and instead use something else, eventually "the community" will be there too, but it is somewhat of a chicken-and-egg problem, I admit.
I work around this myself by dropping the whole idea that I'm writing software for others: I only write it for myself, so if people want it, they can go to my personal Gitea instance and grab it. I couldn't care less about stars and "publicity" or whatever people care about nowadays. But I'm also lucky enough to already have a network; others might need to build their network on GitHub first before they can do something similar, and it'll all work out in the end.
> "The community" will always remain on GitHub, if everyone just upload code to where "the community" already is. If enough of us stop using GitHub by default, and instead use something else, eventually "the community" will be there too, but it is somewhat of a chicken-and-egg problem, I admit.
SourceForge was abandoned due to UX issues and the adware debacle; at the same time, GitHub started making changes which made it more viable to use the platform to distribute binary releases.
The deficiencies of GitHub are not critical enough for me to care, and if it ever gets that bad, pushing somewhere else and putting a few "WE HAVE MOVED" links isn't a big deal.
And "the community" isn't moving to Codeberg because Codeberg can't support "the community" without a massive scale up.
"SourceForge was abandoned due to UX issues and the adware debacle"
I'd say SourceForge was abandoned due to VA Linux going under. I remember the pain/panic as large numbers of OSS projects suddenly scrambled to find alternatives to SF. I actually started a subscription to GitHub just to help ensure they had the money to stay in business, so we didn't have to go through that again.
>> And "the community" isn't moving to Codeberg because Codeberg can't support "the community" without a massive scale up.
People have a superficial knowledge of the space (I think this extends beyond Codeberg) but feel strongly that they need to advocate for something. Codeberg themselves seem to have opinions about what they want to do but people are suggesting they can do more simply because it gives them an outlet.
The constraints that Codeberg has set seem, on the surface at least, designed to ensure they can scale at their own pace and protect themselves from external threats. Hosting random sites comes with a range of liabilities they probably understand and want to avoid right now. There are EU regulations which can be challenging to handle.
GitHub also generously gives me a bunch of free CI, in exchange for whatever they benefit from me being there.
It's worth $50 just this month, according to them, but I don't see anyone else offering the mac runners that account for most of it.
For all the complaints, I test my packages that actually need it across dozens of architecture and OS combinations with a mix of runners, nested virtualization and qemu binfmt, all on their free platform.
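For the curious, the qemu binfmt part of that is less exotic than it sounds. Here's a minimal sketch, independent of any CI vendor, of smoke-testing a package across foreign architectures from a single x86 host. The platform list, image, and test command are illustrative placeholders, and it assumes Docker with binfmt handlers already registered (e.g. via tonistiigi/binfmt):

```python
#!/usr/bin/env python3
"""Rough sketch: run a test suite under several CPU architectures using
qemu binfmt emulation via Docker. Assumes Docker is installed and binfmt
handlers are registered, e.g. with:
    docker run --privileged --rm tonistiigi/binfmt --install all
"""
import os
import subprocess
import sys

PLATFORMS = ["linux/amd64", "linux/arm64", "linux/s390x", "linux/ppc64le"]
# Hypothetical test entry point; substitute your package's own.
TEST_CMD = ["sh", "-c", "pip install -q /src pytest && python -m pytest -q /src/tests"]

def run_tests(platform: str) -> bool:
    """Run the suite inside an emulated container for one target platform."""
    proc = subprocess.run(
        ["docker", "run", "--rm", "--platform", platform,
         "-v", f"{os.getcwd()}:/src", "python:3.12-slim", *TEST_CMD]
    )
    return proc.returncode == 0

if __name__ == "__main__":
    failed = [p for p in PLATFORMS if not run_tests(p)]
    print("failed platforms:", ", ".join(failed) or "none")
    sys.exit(1 if failed else 0)
```

Emulated runs are slow, but for smoke tests across architectures they're usually good enough; the hosted runners only become irreplaceable for things like macOS.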
In particular, a number of other projects assume that you have a GitHub account. https://github.com/rust-lang/crates.io/issues/326 has been open for literally a decade without any meaningful work. If you want to publish a Lean package on Reservoir, the official Lean package registry, the inclusion criteria (https://reservoir.lean-lang.org/inclusion-criteria) not only require that the project be hosted on GitHub specifically, but also that it have at least two GitHub stars as a "basic quality filter". Microsoft is a big funder of Lean, and I can't help but think this is a deliberate restriction to increase lock-in on a Microsoft-owned platform.
Which community? Organic traffic to your GitHub repos comes almost exclusively from external references and links. There is no reason the same wouldn't work with Codeberg. If you link to Codeberg instead of GitHub, it works just the same.
I have been struggling with this, myself. I used to push everything to GitHub, but a couple months ago I switched over to using my small low-power home server as a Git host. I used to really enjoy the feeling of pushing commits up to GitHub, and that little dopamine rush hasn't really transferred to my home machine yet.
It's a shame. The people who control the money successfully committed enshittification against open source.
Considering that "the community" is now filled with vibe coding slop pull requesters, and non-coders bitching in issues, the filter that not-github provides becomes better and better.
Of course, that mostly goes for projects big enough to already have an independent community.
Not to contradict you, but there's another important aspect to 'community' besides the bad contributors and the entitled complainers: discoverability. How do you discover a project that may be hosted on any of the dozens of independent forges out there? Searching each one individually is not a viable proposition. The search often ends on the biggest platform - GitHub.
I'm not trying to defend GitHub here. The largest platform could have been anyone who took advantage of the early opportunities in the space; it just happens to be GitHub. But discoverability is still a nagging problem. I don't think that even a federated system (using ActivityPub, atproto, or whatever else is out there) is going to solve it. We need a solution that can scour the entire code-hosting space like search engines do (but collaboratively, not aggressively like LLM scrapers).
Ideally this should be something search engines handle - but they do a poor job in specialised areas like code repos.
It's helpful to have a GitHub mirror of your "real" repo (or even just a stub pointing to the real repo, if you object to GitHub strongly enough that mirroring there is out of the question).
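In case it's useful, here's a rough sketch of that mirroring setup scripted around plain git. The URLs are made up; you'd run this from cron or a post-receive hook on the canonical host:

```python
#!/usr/bin/env python3
"""Minimal sketch: keep a GitHub mirror of a repo whose canonical home
is elsewhere. 'git push --mirror' copies every ref (branches and tags)."""
import os
import subprocess

CANONICAL = "ssh://git@git.example.org/me/project.git"  # hypothetical canonical home
MIRROR = "git@github.com:me/project.git"                # hypothetical GitHub mirror

def sync_mirror(workdir: str = "project.git") -> None:
    """Clone the canonical repo as a bare mirror on first run, then
    fetch updates and force-push all refs to the GitHub mirror."""
    if not os.path.isdir(workdir):
        subprocess.run(["git", "clone", "--mirror", CANONICAL, workdir], check=True)
    subprocess.run(["git", "-C", workdir, "fetch", "--prune", "origin"], check=True)
    subprocess.run(["git", "-C", workdir, "push", "--mirror", MIRROR], check=True)

if __name__ == "__main__":
    sync_mirror()
```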
One day maybe there will be an aggregator that indexes repos hosted anywhere. But in many ways that would be back to square one - a single point of failure.
The Fediverse seems to dislike global search. Or is that just a Mastodon thing?
I don't think I've ever found new software through GitHub's own search. I find it through the software's website or some other means, like a search engine.
That was solved by forums, tech mailing lists, etc. If you were interested in something, you hung around the communities and almost everything interesting enough would pass by.
Do you hang around every forum or mailing list that discusses the solutions to problems that you may potentially encounter in the future? The type of problem I'm talking about isn't one that can be foreseen years in advance.
Word of mouth: What if it's just some random script a guy created in a weekend? I have code that others use in exactly that manner.
Package managers: Same problem as above. You missed the point of free software.
Search engines: They do a disastrous job of indexing anything on a forge. You might as well yell at the clouds instead.
LLM of choice: I'm not taking this seriously.
> Does anyone seriously use GitHub search to discover new projects?
I don't even understand the point of such questions. None of the solutions you proposed are any better at solving what I described than the insufficient method I wrote about.
The reason copyright doesn't get fixed or removed is largely because the general public is worried more about other things and the big rightsholders continue their monthly payments—err, lobbying.
Though AI might change that. In the end, large corporations get what they want.
The general public also gets sold on the rosy idea that copyright (and, to a certain extent, patents) protects the little guy: that thanks to this mechanism, their work will not be stolen by opportunistic freeloaders. It also resonates with the "one day I will strike it rich" mentality.
What they usually "forget" to tell you is that your IP is absolutely worthless if you don't have the resources to defend it in court, which in turn actually advantages freeloaders who either have relatively low costs to sue (patent trolls are basically an example of this) or enough money that they don't feel the pain if they lose.
The current system basically incentivizes suing over IP, NOT creating it.
We know that this isn't really going to reduce harm for children, we know Meta is not seriously going to suffer or change, and we know this is going to be used as a cudgel to beat down privacy and increase surveillance.
Why is kids' access to the internet so important that we're willing to sacrifice both our privacy and our freedom of speech for it, when we already know it's damaging their mental health?
We wouldn't need all this privacy invasion if we just didn't give kids smartphones with data plans.
On school computers maybe, but not on their smartphones. If schools actually require kids to have their own computer with internet then that's quite simple to fix by enacting new rules. That's important for access to education as well for kids from low-income families.
The reality is that people who cheer for this stuff are going to be shocked when it comes back to bite them later. Once the government is done going after the big guys, the little guys are next, and unlike the big guys, they can't absorb a few fines and judgments.
I feel like the issue with FIPS is not even the lagging behind, but the fact that FIPS-approved algorithms are often harder to implement than non-FIPS alternatives.
WireGuard itself is the perfect example: ChaCha20-Poly1305 is relatively simple to implement without screwing up. Curve25519 fits as well. Blake2s is fast even with only 32-bit integers.
A good AES implementation without any subtle vulnerabilities is hard; table-based implementations invite cache-timing leaks, for example, so there are plenty of footguns left on the table for you. DJB has plenty of criticisms of secp256r1 and similar curves, which is why Ed25519 and Curve25519 exist in the first place.
The algorithms might be fine, but the difficulty and complexity increase the odds that something will go wrong. Even your trusted implementation might have a bug, or grow one later, and there are more places for those to hide.
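To make the "simple to use safely" point concrete, here's a minimal sketch of the WireGuard-style primitives via the pyca/cryptography bindings. This is a usage sketch, not an implementation, and not WireGuard itself; the point is that the whole key-agreement-plus-AEAD surface is a handful of calls, with nonce reuse as about the only footgun left:

```python
#!/usr/bin/env python3
"""Sketch of Curve25519 + ChaCha20-Poly1305 via pyca/cryptography
(pip install cryptography). Illustration only, not a real protocol."""
import os
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

# Curve25519 key agreement: both sides derive the same 32-byte secret.
alice, bob = X25519PrivateKey.generate(), X25519PrivateKey.generate()
shared = alice.exchange(bob.public_key())
assert shared == bob.exchange(alice.public_key())

# A real protocol would run the shared secret through a KDF first
# (WireGuard uses an HKDF construction built on BLAKE2s); using it
# raw here just keeps the sketch short.
aead = ChaCha20Poly1305(shared)   # expects a 32-byte key
nonce = os.urandom(12)            # must never repeat for a given key
ct = aead.encrypt(nonce, b"hello", b"associated data")
assert aead.decrypt(nonce, ct, b"associated data") == b"hello"
```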
Ordinary implementers aren't doing de novo implementations of AES, and the gap between the P-curves and Curve25519 has closed, so this feels like a critique that might have been more germane 10-15 years ago?
I'm bullish on video generation technology, but honestly not on OpenAI or any Western company's deployment of it. I think they'll all mostly suffer from the same problems that Sora did.
I heard Seedance is also full of restrictions now, although the model seems to be better at that sort of “cinematic” look, which might allow it to compete with Veo 3 and the like.
The issue is that Sora ended up getting the short end of the stick: by generating the footage, it became the primary target of complaints. Meanwhile, they were forced to remove the videos, but people simply took those videos and uploaded them to random social media platforms like Twitter, TikTok, or YouTube, which ended up hosting the content while being much less of a target, since the content wasn’t generated there.
Honestly, I think the only way forward will be to wait for local models to become good enough so that you can run something like Sora locally and generate whatever you want.
Seedance has a lot more restrictions now, but still arguably not as many; it's probably cheaper for ByteDance to run, and, as you said, it at least looks good enough to be worth paying for.
Sora had all of the downsides, and attracted all of the scrutiny. Local-first is definitely the way.
I think it's clear cloud-hosted is the actual future, as people have predicted for decades. It will never make financial sense to duplicate datacenter hardware you can rent for cheap: it's oversubscribed, benefits from economies of scale, and runs under constant "if we let this sit idle it's losing us money" pressure.
This has been the case for a long while now, and will increasingly be so as data centers buy up all the everything.
Local-first usually means extreme compromise so the model can practically be run locally, because the cost of owning high-end hardware is prohibitive. There are also companies providing locally deployed closed-source models that meet certain security requirements.