Sure, everyone wants to "stop silly people replying to my comments by posting LLM-generated garbage", but rules are rules: by introducing a rule like the one you propose, you also automatically forbid discussions like "here's a weird trick to make an LLM make stupid mistakes" or "biases of different LLMs", where people reply to each other with which prompts they tried and what the result was. Obviously that's not what you meant (right?), and everyone understands that, so it becomes a judgement call when the rule applies and when it doesn't, and, congratulations, you've made another stupid rule that no one follows "and that's ok".
"A guideline to refrain" seems better. Basically, this should be only slightly more tolerated than "let me google for you" replies: maybe not actively harmful, but rude. But, anyway, let's not be overly pretentious: who even reads all these guidelines (or rules for that matter)? Also, it is quite apparent, that the audience of HN is on average much less technical and "nerdy" than it was, say, 10 years ago, so, I guess, expect these answers to continue for quite some time and just deal with it.
I feel the same way, but I wouldn't be bold enough to call it dumb. I mean, I assume they know what they're doing. This is very inconvenient for me as a buyer, but I suppose most companies just aren't Apple, so they throw a lot of various stuff at us hoping that something sticks. And, for that matter, Apple's product line gets more diversified each year too: now it's Air, and Pro, and Max, so I wouldn't bet against a G1 Ultra F12b in 10 more years either.
Great, now I can compare it with other Z-somethings. But as the post shows, that's just one branch of the many naming schemes HP employs. Also: this already is a workstation, how could it _not_ be "ultra"? Why the doubling? Or does "workstation" just mean "something I can work with", including office stuff? In that case, I am very interested in what other letters they use and what they're for.
For the same reason there are the Z2, Z4, and Z8. There are several tiers of workstations, and the ZBook Ultra is the best PC laptop you can get in a non-boat-anchor format, bar none.
I wonder why ThinkPads are not mentioned. It's not like I recommend them (I mean, I use one, but it's not like I've tried most laptops out there, so who am I to judge), but I was under the impression they're still the de-facto Linux laptop standard.
They used to be genuinely great; now they're coasting on the brand name and on simply not being as bad as most laptops. DIY hardware upgrades are no longer possible, the keyboard is no longer a differentiator, and Linux battery life is about a quarter of what Windows gets on the same machine.
From what I remember from the last time I bought a laptop, they also have a really annoying pricing model where everything is 30% overpriced but constantly discounted.
The author of the article recently had a “laptop olympics” stream where he compared his laptops. He owns an X1 carbon but doesn’t like it at all, mainly because of the CPU iirc.
ThinkPads are probably fine if that's the price point you're shopping at, but the modern ones are not good value. HP has the best mobile workstation available currently.
The old ones had hot-swappable batteries (a second internal battery - of decent size - kept the device going while you replaced the external one). I used to keep a couple of extra 50 Wh batteries in my backpack and therefore had excellent battery life. The power efficiency wasn't great though.
I now have a Z13 Gen 1 (AMD 6850U) running Fedora and the battery life is passable. It draws 7-8 W at idle from a 51 Wh battery, so roughly 6-7 hours of idle runtime.
I have an X1 extreme. I’ve never gotten it to last over 2h on Windows. On Linux it can last an hour or so more if I turn off the NVIDIA GPU, but otherwise it’s still abysmal.
Then there’s the stupid BIOS warning that requires you to press ESC for the computer to boot if it’s not plugged in to the official charger, which means that if it ever reboots at night it’ll just keep you awake (because the power management hasn’t been initialized yet so it’s stuck at 100% CPU) until you go press ESC.
Oh and it thermal throttles all the time so the CPU performance is good for a few minutes and then it’s just awful.
I can't properly evaluate it now. I never really use it without a cable for too long, and by now the battery must have degraded a bit too. Anyway, it never was MacBook-level, I guess, and it's an oldish model, so you should check actual reviews for the current models you're interested in; there have always been plenty of those for ThinkPads (at least the last time I looked for a new laptop).
I never even used Google Photos (because, you know), so could somebody explain more concretely how you use it? Is it actually a backup app (and if so, is it really much different from a generic backup app or even just Syncthing)? Or does it somehow magically let you keep the preview gallery and search on your device while your actual 200 GB of photos sit somewhere in the cloud, with local storage acting as an auto-managed cache where everything you didn't access in the last 6 months gets deleted? Does it preserve all the additional data Android cameras add, like HDR and the video fragments before photos, and does it handle photospheres well? I'm asking because I don't even fully understand how the camera app itself handles all this, and whether the data is fully portable.
FWIW, I also don't use any fancy collection management and barely understand what all these Lightrooms and XMP files are for. Maybe I should, but to this day photos for me are just a bunch of files in a folder that I sometimes manually group into subfolders like 2025-09, mostly to make it easier on the thumbnail-maker.
It auto-uploads all your photos to the cloud, and you can delete them locally and still have them. The biggest feature is the AI search: you can type anything and it will find your pictures without you doing any work categorizing them. It can do objects, backgrounds, or colors, and it can even do faces, so you can search by people's names. There are also share links to albums and multiplayer albums.
When it uploads, it keeps the originals locally forever unless you delete them. There's a one-click "free up space on this device" button to delete the local files. It's actually somewhat annoying to export in bulk; you pretty much have to use Takeout.
Key features that matter to me:
1) backup from Android or iOS. This has helped when I've switched phones over the years.
2) shared albums with family or friends where invited people can both see and contribute photos. Think kids albums, weddings, holidays.
3) ability to re-download at full resolution
1) You don't have backups of other data on your phone (chat history, 2FA secrets and private keys, text notes, Anki cards, game progress, configuration of all apps, etc.)? I had assumed everyone who cares about their data has backups anyway, so that's not really a selling point for installing another app.
2) that's nice!
3) "it doesn't throw my data away" is the last selling point?! Isn't that just assumed?!
1) I do have separate backups, as well as this, which runs more frequently (after each picture is taken) vs. daily for the device backup.
3) not compared to iCloud Photos, which I migrated from. You can export a whole album with Google at original quality with one click. With Apple you can only do 1000 at a time. For Apple you can ask for a whole-account export, but that takes a few days and gives you all photos (similar to Google Takeout).
The backup thing is really more than just backing it up. I take a picture, and it’s nearly immediately available in the same way across all my devices and the web. I can search for “pizza” on the web app and see any picture of pizza I ever took. On a different or new device I’ll immediately have access to the whole library with no set up.
For nearly a decade I've been using Google Photos with a love-hate relationship. I've tried a few alternative photo apps, even tried building one myself as a side side side side project, but nothing really felt like it could replace how I use Google Photos (mind you, I haven't tried in the past couple of years).
I have a daughter, and my family lives in another country, so I want to be able to share photos with them. These are the features I need:
- Sharing albums with people (read only). It sounds pretty simple, but even Google fucked it up somehow. I added family members by their Google account to the album, and later I saw someone I didn't know was part of it. Apparently adding people gives (or gave?) them permission to share the album with other people, which is weird. I want to be able to control exactly who sees the photos, and not allow them to share the album with others or download the photos. On the topic of features, I should note that zero of the other social features (comments / reactions) have ever been used.
- Shared album with my spouse (write). I take photos of the kid, she takes photos of the kid. We want to be able to both add our photos to the shared album.
- Automatic albums or grouping by faces. Being able to quickly see all the photos of our kid is really great, especially if it works with the other sharing features. On Google you could set up Live Albums that did this (automatic add and share between multiple people), but I can't see the option anymore on Android. I feel it could be a bit simpler though: just tagging a specific face, so that all photos of that face get shared within my Google One family.
- The way we use it is we have a shared album between us for all the photos, and then a curated album of the best photos shared with family members.
Other than that I just use it as a place to dump photos (automatically backed up from my phone) and search if needed. Ironically, the search is not very good, but usually I can remember roughly when the photo I need was taken, so I can scroll through the timeline. In total my spouse and I have ~200 GB of media on Google Photos; some of it is backed up elsewhere.
What about automatic background sync without ever having to open the app on mobile? Does that work or do you have to open the app regularly for it to sync properly?
This doesn't work properly on Nextcloud (it sometimes gets out of sync and then I'm screwed because I have to reset the app on my family member's phone and have them resync for hours).
You can back up to Immich using various methods, including dumb file copy into a dropbox folder. For a while, I was using PhotoSync, which uploaded photos to my NAS running Immich over WebDAV (a small sketch of that route below).
Immich also has an app that can upload photos to your server automatically. You can store them there indefinitely. There are galleries, timelines, maps for geotagged photos, etc.
The app also allows you to browse your galleries from your phone without downloading full-resolution pictures. It's wickedly fast, especially on your home network.
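If you go the WebDAV route, the upload side can be as dumb as one HTTP PUT per file. A minimal sketch in Python, with the server URL, credentials, and folder names all hypothetical:

    # Minimal sketch: push new photos to a WebDAV share on the NAS that
    # Immich then picks up. URL, credentials, and paths are made up.
    import os
    import requests

    WEBDAV_URL = "https://nas.example.com/webdav/photos"  # hypothetical
    AUTH = ("user", "app-password")                       # hypothetical

    def upload(local_path):
        name = os.path.basename(local_path)
        with open(local_path, "rb") as f:
            # A WebDAV file upload is just an HTTP PUT
            requests.put(f"{WEBDAV_URL}/{name}", data=f, auth=AUTH).raise_for_status()

    # Naive loop; a real tool would remember what's already transferred
    for fname in os.listdir("camera_roll"):  # hypothetical local folder
        upload(os.path.join("camera_roll", fname))

On the server side, Immich can then index that folder as an external library; tools like PhotoSync essentially automate this same transfer, with the bookkeeping handled for you.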
> Does it preserve all this additional data Android cameras add, like HDR, video fragments before photos, does it handle photospheres well, etc?
It preserves the information from sidecar files and the original RAW files. RAW processing is a bit limited right now, and it doesn't support HDR properly. However, the information is not lost, and once they polish the HDR support you'll just need to regenerate the thumbnails.
Wouldn't recommend it. When I wanted to move from Google Photos to iCloud, there was no way to simply get all my photos. I had to use a JS script that kept scrolling the page and downloading photos one by one.
I was pondering this very thing not so long ago. I didn't discover anything new, of course, but I ended up convinced that the whole thing exists only because most people don't take a moment to think how absurd it is, and because not much time has passed since its somewhat forceful founding (meaning, it wasn't something that "the people of Europe willingly decided to establish"). Hence, it's only a matter of time before it falls apart, and that may happen at any time. Which is a pity, because I like open borders, I like the EU as an idea, and I don't like wars, revolutions, and other rapid changes, which I'd otherwise prefer to happen outside of my lifetime.
What I mean to say is that the whole EU political system is an epitome of citizen alienation, and it is like that by design. It is the purest faceless Kafkaesque bureaucratic machine. And, by the way, I think it works pretty well for what it is. I don't know how to measure it, but I suppose the overall quality of legislation is higher than what, say, Russia or the USA produces. But the fact that it is completely opaque by design, that no one is ever truly accountable for anything, just isn't something anyone would willingly accept, and it's only a matter of time before a critical mass of people truly "notices" it.
You can often hear some guy on the internet call POTUS "the most powerful man in the world", which is always somewhat funny, because, of course, anyone sane understands how far from the truth that is. It's laughable how little he can really do as president, how powerless he is to change something he truly wants to change. He is more of a glorified clown than a ruler or a politician. But I've come to believe it's really important to have a role like that in government: somebody ignorant people believe to be responsible for everything, somebody they can hate and blame for all that is wrong around them. It is important for the silliest psychological reasons, just by human nature.
Anyway, this comment is too long as it is, so I know I won't be able to properly explain myself, but the thing is, I can't imagine things like the meaningless cookie notifications or that idiotic bottlecap thing being possible almost anywhere but Brussels, certainly not that often. It is both ironic and very characteristic of the system that both are only minor footnotes in an appendix to some enormous legal package that is "mostly obviously good", and yet they are about the only thing from the whole package that most people notice (and they are obviously very costly in the end).
It would be a good thing if it caused anything to change. It obviously won't. As if a single person reading this post wasn't aware that the Internet is centralized, or couldn't name a few specific sources of centralization (Cloudflare, AWS, Gmail, GitHub). As if it's the first time this has happened. As if after the last time AWS failed (or the one before that, or the one before…) anybody stopped using AWS. As if anybody could viably stop using them.
If anything, centralisation shields companies using a hyperscaler from criticism. You’ll see downtime no matter where you host. If you self host and go down for a few hours, customers blame you. If you host on AWS and “the internet goes down”, then customers treat it akin to an act of God, like a natural disaster that affects everyone.
It’s not great being down for hours, but that will happen regardless. Most companies prefer the option that helps them avoid the ire of their customers.
Where it’s a bigger problem is when a critical industry like retail banking in a country all choose AWS. When AWS goes down all citizens lose access to their money. They can’t pay for groceries or transport. They’re stranded and starving, life grinds to a halt. But even then, this is not the bank’s problem because they’re not doing worse than their competitors. It’s something for the banking regulator and government to worry about. I’m not saying the bank shouldn’t worry about it, I’m saying in practice they don’t worry about it unless the regulator makes them worry.
I completely empathise with people frustrated with this status quo. It’s not great that we’ve normalised a few large outages a year. But for most companies, this is the rational thing to do. And barring a few critical industries like banking, it’s also rational for governments to not intervene.
> If anything, centralisation shields companies using a hyperscaler from criticism. You’ll see downtime no matter where you host. If you self host and go down for a few hours, customers blame you.
Not just customers. Your management take the same view. Using hyperscalers is great CYA. The same for any replacement of internally provided services with external ones from big names.
Exactly. No one got fired for using AWS. Advocating for self-hosting or a smaller provider means you get blamed when the inevitable downtime comes around.
If you cannot give a patient life-saving dialysis because you don't have a backup generator, then you are likely facing some liability. If you cannot give a patient life-saving dialysis because your scheduling software is down due to a major outage at a third party and you have no local redundancy, then you are in a similar situation. Obviously this depends on your jurisdiction, and probably we are in different ones, but I feel confident you'd want to live in a district where a hospital is held reasonably responsible for such foreseeable disasters.
Yeah, I mentioned banking because that's what I'm familiar with, but the medical industry is going to be similar.
But they do differ - it’s never ok for a hospital to be unable to dispense care. But it is somewhat ok for one bank to be down. We just assume that people have at least two bank accounts. The problem the banking regulator faces is that when AWS goes down, all banks go down simultaneously. Not terrible for any individual bank, but catastrophic for the country.
And now you see what a juicy target an AWS DC is for an adversary. They go down on their own now, but surely Russia or others are looking at this and thinking "damn, one missile at the right data center and life in this country grinds to a halt".
> If anything, centralisation shields companies using a hyperscaler from criticism. You’ll see downtime no matter where you host. If you self host and go down for a few hours, customers blame you.
What if you host on AWS and only you go down? How does hosting on AWS shield you from criticism?
This discussion is assuming that the outage is entirely out of your control because the underlying datacenter you relied on went down.
Outages because of bad code do happen and the criticism is fully on the company. They can be mitigated by better testing and quick rollbacks, which is good. But outages at the datacenter level - nothing you can do about that. You just wait until the datacenter is fixed.
This discussion started because companies are actually fine with this state of affairs. They are risking major outages but so are all their competitors so it’s fine actually. The juice isn’t worth the squeeze to them, unless an external entity like the banking regulator makes them care.
I’m pretty Cloudflare-centric. I didn’t start that way; I had services spread out for redundancy. It was a huge pain. Then bots got even more aggressive than usual. I asked why I kept doing this to myself and finally decided my time was worth recapturing.
Did everything become inaccessible during the last outage? Yep. Weighed against the time it saves me throughout the year, I call it a wash. No plans to move.
I'm of a similar mindset... yeah, it's inconvenient when "everything" goes down... but realistically so many things go down now and then, it just happens.
Could just as easily be my home's internet connection, or a service I need from/at work, etc. It's always going to be something, it's just more noticeable when it affects so many other things.
To be honest, it's MUCH easier to have one source to blame when things go down. If a small-to-medium vendor's website goes down on a normal day, some poor IT guy is going to be fielding calls all day.
If that same vendor goes down because Cloudflare went down, oh well. Most people already know and won't bother to ask when your site will be back up.
> It would be a good thing, if it would cause anything to change. It obviously won't.
I agree wholeheartedly. The only change is internal to these organizations (e.g., Cloudflare, AWS): improvements will be made to the relevant systems, and some teams will also audit for similar behavior, add tests, and fix some bugs.
However, nothing external will change. The cycle of pretending you're going to implement multi-region fades after a week. And each company goes on leveraging all these services to the Nth degree, waiting for the next outage.
Not advocating that organizations should/could do much, it's all pros/cons. But the collective blast radius is still impressive.
The root cause is customers refusing to punish this downtime.
Check out how hard customers punish blackouts from the grid, both via the wallet and via voting/gov't. It's why grids are now more reliable.
So unless the backbone infrastructure gets the same flak, nothing is going to change. After all, any change is expensive, and the cost of that change needs to be worth it.
I think you’re viewing the issue from an office worker’s perspective. For us, downtime might just mean heading to the coffee machine and taking a break.
But if a restaurant loses access to its POS system (which has happened), or you’re unable to purchase a train ticket, the consequences are very real. Outages like these have tangible impacts on everyday life. That’s why there’s definitely room for competitors who can offer reliable backup strategies to keep services running.
I'm talking more about some unrelated function taking down the whole system, not advocating for "offline" credit card transactions (is that even a thing these days?). For example: if the transaction needs to be logged somewhere, it can be built to sync whenever possible rather than blocking all transactions while the central service is down (see the sketch below).
Payment processor being down is payment processor being down.
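To make the logging idea concrete, here's a minimal sketch of a local durable queue, with every name made up: checkout writes locally and never blocks on the network, and a periodic job drains the queue whenever the central service answers.

    # Sketch: record transactions locally, sync opportunistically instead
    # of blocking checkout when the central service is down. All names
    # are hypothetical.
    import json
    import sqlite3

    conn = sqlite3.connect("pos_queue.db")
    with conn:  # commit the schema
        conn.execute("CREATE TABLE IF NOT EXISTS pending"
                     " (id INTEGER PRIMARY KEY, body TEXT)")

    def record_sale(sale):
        # Called at checkout; a local write only, never touches the network.
        with conn:
            conn.execute("INSERT INTO pending (body) VALUES (?)",
                         (json.dumps(sale),))

    def sync_pending(send):
        # Run periodically; `send` is a hypothetical uploader that raises
        # when the central service is unreachable.
        for row_id, body in conn.execute(
                "SELECT id, body FROM pending").fetchall():
            try:
                send(json.loads(body))
            except Exception:
                return  # service down; keep the rows and retry later
            with conn:
                conn.execute("DELETE FROM pending WHERE id = ?", (row_id,))

SQLite (or even a flat file) is enough here; the point is just that the write path at the register has no network dependency.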
Do any of those competitors actually have meaningfully better uptime?
At a societal level, having everything shut down at once is an issue. But if you only have one POS system targeting one backend URL (and that backend has to be online for the POS to work), then Cloudflare seems like one of the best choices.
If the uptime provided by Cloudflare isn't enough, then the solution isn't a Cloudflare competitor; it's the ability to operate offline (which many POS systems have, including for card purchases), or at least multiple backends with different DNS, CDN, server locations, etc.
If it’s that easy to get the exact same service/product from another vendor, then maybe your competitive advantage isn’t so high. If Amazon were down, I’d just wait a few hours, since I don’t want to sign up on another site.
I agree. These days it seems like everything is a micro-optimization to squeeze out a little extra revenue. Eventually most companies lose sight of the need to offer a compelling product that people would be willing to wait for.
I remember a Google Cloud outage years ago that happened to coincide with one of our customers' massively expensive TV ads. All the people who normally would've gone straight to their website instead got a 502. Probably a 1M+ loss for them, all things considered.
You need to punish the services you "paid" to use that had downtime. So did you terminate any of those services over the downtime, or impose any sort of penalty on them as a result?
> Check out how hard customers punish blackouts from the grid, both via the wallet and via voting/gov't.
What? Since when has anyone ever been free to just up and stop paying for power from the grid? Are you going to pay $10,000 - $100,000 to have another power company install lines? Do you even have another power company in the area? State? Country? Do you even have permission for that to happen near your building? Any building?
The same is true for internet service, although personally I'd gladly pay $10,000 - $100,000 to have literally anything else at my location, but there are no proper other wired providers and I'll die before I ever install any sort of cellular router. Also this is a rented apartment so I'm fucked even if there were competition, although I plan to buy a house in a year or two.
Downtimes happen one way or another. The upside of using Cloudflare is that bringing things back online is their problem and not mine like when I self-host. :]
Their infrastructure went down for a pretty good reason (let the one who has never caused that kind of error cast the first stone) and was brought back within a reasonable time.
Same idea with the CrowdStrike bug: it seems like it didn't have much of an effect on their customers, certainly not with my company at least, and the stock quickly recovered, in fact doing very well. To me, it looks like nothing changed, no lessons learned.
That's true of a lot of "Enterprise" software. Microsoft enjoys success while abusing their enterprise customers on what seems like a daily basis at this point.
For bigger firms, the reality is that it would probably cost more to switch EDR vendors than the outage itself cost them, and up to that point, CrowdStrike was the industry standard and enjoyed a really good track record and reputation.
Depending on the business, there are long-term contracts and early-termination fees; there's the need to run your new solution alongside the old one during migration; and there are probably years of telemetry and incident data that you need to keep on the old platform, so even if you switch, you're still paying for CrowdStrike for the retention period. It was one (major) issue over 10+ years.
Just like with CloudFlare, the switching costs are higher than outage cost, unless there was a major outage of that scale multiple times per year.
That IS the lesson! There are a million questions I can ask myself about those incidents. What dictates that they can't ever screw up? Sure, it was a big screw-up, but understanding the tolerances for screw-ups is important to understanding how fast and loose you can play it. AWS has at least one big outage a year; what's the breaking point? Risk and reward, etc.
I've worked places where every little thing is yak-shaved, and places where no one is even sure the servers are up during working hours. Both jobs paid well, and both had enough happy customers.
Not that I doubt examples exist (I've yet to be at a large place with 0 failures on responding to such issues over the years), but it'd be nice if you'd share the specific examples you have in mind if you're going to bother commenting about it. It helps people understand how much is a systemic problem to be interested in vs having a comment which more easily falls into many other buckets instead. I'd try to build trust off the user profile as well, but it proclaims you're shadowbanned for two different reasons - despite me seeing your comment.
To be fair, AWS (and GCP, and Azure) at least is easy to replace with something else, and pretty much all the alternatives are cheaper, less messy, etc. There are very few situations where you cannot viably do so.
We live in a world where you can get things like dedicated servers, etc. within similar time spans as creating a "compute engine" node on a big cloud provider.
The fact that cloud services imposed serious limitations on what applications are able to do (things like state management, passing configuration in more unified ways, etc.) means that running your own infrastructure is easier than ever, since your devs won't end up whining at you for something super custom just to make some project a bit easier. (Though if you really want to, you still can.)
GitHub also has become easy to get away from and indeed many individuals and companies did so.
CDNs are the bigger thing, but A) there are a lot of other CDNs, and B) having an image, or let's say an Ansible config, lets you quickly deploy something that might be close enough for your use case. Just pick any hosting company, or even a dozen around the world.
Of course, if you've allowed yourself to end up in complete vendor lock-in, things might be different, but if you think it's a good idea to be completely dependent on the whims of some other company, maybe you deserve that state. As in: don't run a business without some kind of fallback for the decisions you make. Yes, profit from whatever big benefit something gives you, but don't lock the door behind you.
Sure you might be lucky and sure maybe you are fine going for luck while it lasts. Just don't be surprised when it all shatters.
It is as easy to not use them as it ever was. There has been no actual centralisation. Everything is done using open protocols. I don't know what more you could want.
Compare it to Windows, where there is deep volume discounting and salespeople schmoozing CTOs and getting in with schools, healthcare providers, etc. That's actual lock-in.
These outages are too few and far between. It's gonna force some changes only if it becomes a monthly event: if businesses start losing connectivity for 8 hours every month, maybe the bigger ones will move to self-hosting, or at least some capacity for self-hosting.
Here's where we separate the men from the boys, the women from the girls, the enbys from the enbetts, and the SREs from the DevOps. If you went down when Cloudflare went down, do you go multi-cloud so that can't happen again, or do you shrug your shoulders and say "well, everyone else is down"? Have some pride in your work, do better, be better, and strive for greatness. Have backup plans for your backup plans, and get out of the pit of mediocrity.
Or not, shit's expensive and kubernetes is too complicated and "no one" needs that.
Same with the big Crowdstrike fail of 2024. Especially when everyone kept repeating the laughable statement that these guys have their shit in order, so it couldn't possibly be a simple fuckup on their end. Guess what, they don't, and it was. And nobody has realized the importance of diversity for resilience, so all the major stuff is still running on Windows and using Crowdstrike.
I wrote https://johannes.truschnigg.info/writing/2024-07-impending_g... in response to the CrowdStrike fallout, and was tempted to repost it for the recent CloudFlare whoopsie. It's just too bad that publishing rants won't change the darned status quo! :')
People will not do anything until something really disastrous happens, and even afterwards memories fade. CrowdStrike has not lost many customers.
Covid is a good parallel. A pandemic was always possible; there is always a reasonable chance of one over the course of decades. However, people did not take it seriously until it actually happened.
A lot of Asian countries are much better prepared for a tsunami than they were before 2004.
The UK was supposed to have emergency plans for a pandemic, but they were for a flu variant, and I suspect even those plans were under-resourced and not fit for purpose. We are supposed to have plans for a solar storm, but when another Carrington event occurs I very much doubt we will deal with it smoothly.
It appears to me that the rationale was clearly stated in GP:
> resulting in missing upstream features and decreased security
I.e., it's a matter of technical superiority, which, to me, is how these decisions should be made, not by having friends in the community and all of us being Europeans and so on. (But, of course, I would be glad to hear more particular details/examples of Forgejo lagging behind.)
You should simply compare release notes over the same time period for both projects: what's been done, and how much. There's lots of nonsense repeated on this site and others; just do the research yourself, it won't take long (a quick script for pulling the data is sketched below). They both have very predictable release schedules.
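If you'd rather pull the data than eyeball the changelogs, both projects expose their releases over public APIs (GitHub's for Gitea, Codeberg's Gitea-compatible API for Forgejo). A rough sketch, using release-note volume as a crude proxy only:

    # Rough sketch: fetch recent releases for both projects and compare
    # how much has shipped. Endpoints are the public GitHub API and
    # Codeberg's Gitea-compatible API.
    import requests

    SOURCES = {
        "gitea": "https://api.github.com/repos/go-gitea/gitea/releases",
        "forgejo": "https://codeberg.org/api/v1/repos/forgejo/forgejo/releases",
    }

    for name, url in SOURCES.items():
        releases = requests.get(url, timeout=30).json()
        chars = sum(len(r.get("body") or "") for r in releases)
        print(f"{name}: {len(releases)} releases fetched, "
              f"~{chars} chars of release notes")

To actually align the time periods, filter on the published_at timestamps (both APIs include them, as far as I can tell); and of course note volume says nothing about quality, so read the notes themselves too.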
We've stuck with Gitea, after not being impressed by the extremely FUDish behavior of the main driver of the fork, and this has proven to be the right choice so far. In spite of what some people claim, all of the major contributors to Gitea have continued developing it, none of the "heavy hitters" have left. It shows.
The database can be downgraded anyway. I've been doing backwards migrations for each new version all the way back to 1.22 (which is the last Gitea version that is "side-gradable" to Forgejo).
I don't get this blind spot where lots of developers parrot this uber-technocratic nonsense.
There's no such thing as an apolitical, objectively best approach to a technical problem. Instead of arguing about the specific merits of specific issues, people throw out this big hand-wave about how "idea X is simply technically the wrong choice", as if that were a legitimate position to hold.
Take a philosophy course for god's sake before you engineer us all to death.
Thanks. I was wondering what the status of it is, given that Forgejo has been pushed more in the media lately. TBH, I haven't understood the controversy even after reading a couple of recaps. I remember it being about the "sudden revelation" a couple of years ago that the guy on top owns the trademark. That doesn't sound like a big deal to me, given that he actually was the main contributor and the de-facto leader of the project the whole time.
But a couple of years have passed, and I've started to hear about Forgejo more often only very recently, so I was wondering if the original project had some downfall or made questionable technical decisions since then. I still haven't switched, and was wondering if I should. As far as I've heard, it's still basically a matter of running a different Docker container on the same volume, and it should work seamlessly. So what about this "hard fork" you are mentioning? Did it actually break compatibility?
Forgejo used to be a set of patches applied on top of Gitea, but they moved to a hard fork that cherry-picks Gitea commits, which is more work. In my view they don't have the development capacity to keep up with Gitea.
Wow, you are an optimist. I do feel "it's close", but I wouldn't bet it's this close. I wouldn't argue either; I don't know. Also, when it really pops, the consequences will be more disastrous than the bubble itself feels right now. It's literally hundreds of billions in circular investing. It's absurd.
"A guideline to refrain" seems better. Basically, this should be only slightly more tolerated than "let me google for you" replies: maybe not actively harmful, but rude. But, anyway, let's not be overly pretentious: who even reads all these guidelines (or rules for that matter)? Also, it is quite apparent, that the audience of HN is on average much less technical and "nerdy" than it was, say, 10 years ago, so, I guess, expect these answers to continue for quite some time and just deal with it.
reply