And not a single one of these is tenable, even when combined. How do the people who post reviews, or spread something by word of mouth, discover the thing in the first place? Try your hand at starting a business and selling goods or services using these methods, and see how well it works.
Banning advertising would have the opposite effect; entrenched players would have a massive moat. The biggest gains from advertising by far accrue to newer entrants, not the big companies.
Every single one of those local businesses is also advertising, and that's probably how you found them in the first place. They're buying local newspaper adverts, using flyers, or participating in valpaks/coupon mailers.
Actually all of those sound fine to me... I guess it's really just Internet advertising that feels wrong to me, especially when the ads become the source of revenue themselves rather than a means of driving revenue for the main product.
It's understandable, but it's a position that doesn't consider the large swathe of lower-income households that have access to goods and services subsidized through ads (much of my family). I know it's not a position most of HN seems to be sympathetic to, but many ad-supported services, including Netflix and Spotify, would be inaccessible to them without ads. My family can't afford to go out to movies regularly, or spend money out at restaurants, or go on vacation (ever), but they still deserve some leisure time and entertainment, and a non-trivial percentage of the market is funded through ads.
The idea that we should eliminate that because a higher-income bracket of consumers is inconvenienced by ads just comes across as oddly haughty and privileged to me.
Heck, I wouldn't have my successful career today if it wasn't for the ad-supported ISP NetZero CD I stumbled upon in 1999.
>How do the people that post reviews, or spread something over word-of-mouth, discover the thing in the first place?
They follow industry conventions, visit registries of industry websites, subscribe to professional lists where companies submit their announcements (rather than advertising to the general public), and so on.
>Try your hand at starting a business and trying to sell goods or services using these methods and see how well it works.
If advertising is banned, it will work just as well as for any competitor.
The "legal limit" is terribly misunderstood, but 0.08% is just legal threshold where the state doesn't need to prove impairment and the offense is upgraded to an automatic criminal DUI. A driver in an accident with a BAC of 0.03% could still be charged with a DUI if impairment can be proven but most prosecutors' offices have more important things to work on.
It's also terribly misunderstood by pedants, since you can be charged with a DUI with a 0.00 BAC by doing drugs. The point isn't that it's a definitive line in the sand between impaired and not; rather, given that people are trusted to drive cars (generally or broadly speaking, not pedantically speaking), being above or below that limit is a reasonable litmus test for "visibly/obviously impaired" or not.
If you use streaming replication (ie. WAL shipping over the replication connection), a single replica getting really far behind can eventually cause the primary to block writes. Some time back I commented on the behaviour: https://news.ycombinator.com/item?id=45758543
You could use asynchronous WAL shipping, where the WAL files are uploaded to an object store (S3 / Azure Blob) and the streaming connections are only used to signal the position of the WAL head to the replicas. The replicas then fetch the WAL files from the object store and replay them independently. This is what wal-g does, for a real-life example.
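For a rough idea of what the Postgres side of that looks like (a minimal sketch; it assumes wal-g is already configured with your bucket and credentials, and archive_mode needs a restart to take effect):

    -- On the primary: push each completed WAL segment to the object store.
    ALTER SYSTEM SET archive_mode = 'on';
    ALTER SYSTEM SET archive_command = 'wal-g wal-push %p';

    -- On each replica: fetch and replay segments independently of the
    -- streaming connection. restore_command is a regular GUC since PG 12.
    ALTER SYSTEM SET restore_command = 'wal-g wal-fetch %f %p';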
The tradeoffs when using that mechanism are pretty funky, though. For one, the strategy imposes a hard lower bound on replication delay, because even the happy path is now "primary writes WAL file; primary updates WAL head position; primary uploads WAL file to object store; replica downloads WAL file from object store; replica replays WAL file". Under unhappy write bursts the delay can grow significantly. You are also subject to any object store and/or API rate limits. The setup makes replication delays slightly more complex to monitor for, but for a competent engineering team that shouldn't be an issue.
But it is rather hilarious (in retrospect only) when an object store performance degradation takes all your replicas effectively offline and the readers fail over to getting their up-to-date data from the single primary.
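If you want to watch for that kind of delay, the basic checks are stock Postgres (a sketch; alert thresholds are up to you):

    -- On a replica: wall-clock replay lag.
    SELECT now() - pg_last_xact_replay_timestamp() AS replay_lag;

    -- On the primary: per-replica lag over the streaming connections.
    SELECT application_name, write_lag, flush_lag, replay_lag
    FROM pg_stat_replication;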
There is no backpressure from replication, and streaming replication is asynchronous by default. Replicas can ask the primary to hold back garbage collection (off by default), which will eventually cause a slowdown, but not blocking. Lagging replicas can also ask the primary to hold onto the WAL needed to catch up (again, off by default), which will eventually cause the disk to fill up, which I guess is blocking if you squint hard enough. Both take a considerable amount of time and are easily averted by monitoring and kicking out unhealthy replicas.
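To make those two knobs concrete (a sketch; these are the stock GUC names, and wal_status/safe_wal_size need PG 13+):

    -- The "hold back garbage collection" behaviour:
    ALTER SYSTEM SET hot_standby_feedback = 'on';      -- default: off

    -- Replication slots are the "hold onto WAL" behaviour; cap the
    -- retention so a dead replica can't fill the disk:
    ALTER SYSTEM SET max_slot_wal_keep_size = '50GB';  -- default: -1 (unlimited)

    -- The monitoring side: spot unhealthy replicas before the disk fills.
    SELECT slot_name, active, wal_status, safe_wal_size
    FROM pg_replication_slots;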
> If you use streaming replication (ie. WAL shipping over the replication connection), a single replica getting really far behind can eventually cause the primary to block writes. Some time back I commented on the behaviour: https://news.ycombinator.com/item?id=45758543
I'd like to know more, since I don't understand how this could happen. When you say "block", what do you mean exactly?
I have to go partly on guesswork here, because it's based on what I could observe at the time. I never had the courage to dive into the actual Postgres source code, but my educated guess is that it's a side effect of the MVCC model.
A combination of: streaming replication; long-running reads on a replica; lots[þ] of writes to the primary. While the read on the replica is running, it will generate a temporary table under the hood (because the read "holds the table open at a point in time"). Something in this scenario leaked state from the replica to the primary, because after several hours the primary would error out, and the logs showed that it failed to write because the old table was held in place by the replica and the two tables had deviated too far apart in time / versions.
It's seared into my memory because the thing just did not make any sense, and even figuring out WHY the writes had stopped at the primary took quite a bit of digging. I do remember that when the read at the replica was forcefully terminated, the primary was eventually released.
þ: The ballpark would have been tens of millions of rows.
What you are describing here does not match how Postgres works. A read on the replica does not generate temporary tables, nor can anything on the replica create locks on the primary. The only two things a replica can do are hold back transaction log removal and hold back the vacuum cleanup horizon. I think you may have misdiagnosed your problem.
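Both of those are observable from the primary, for what it's worth (a sketch, assuming hot_standby_feedback and/or replication slots are in play):

    -- Is a replica holding back the vacuum cleanup horizon?
    SELECT application_name, backend_xmin FROM pg_stat_replication;

    -- Is a slot forcing the primary to retain old WAL?
    SELECT slot_name,
           pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn) AS retained_bytes
    FROM pg_replication_slots;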
Wars are frequently fought over these three things, and there's no shortage of examples of the humans controlling these resources lording it over those who did not.
I believe they were just pointing out that Postgres doesn't do in-place updates, so every update (with or without partitions) writes a new tuple and marks the previous tuple dead so it can get vacuumed.
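You can see it happen in psql with a throwaway table (minimal sketch):

    CREATE TABLE t (id int, v text);
    INSERT INTO t VALUES (1, 'a');
    SELECT ctid, xmin, v FROM t;       -- e.g. ctid (0,1)

    UPDATE t SET v = 'b' WHERE id = 1;
    SELECT ctid, xmin, v FROM t;       -- new tuple, e.g. ctid (0,2);
                                       -- the old one stays dead until vacuum

    -- Dead tuples waiting for (auto)vacuum, per the stats collector:
    SELECT n_dead_tup FROM pg_stat_user_tables WHERE relname = 't';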
I think the point was that there are way more people involved in Proton than just the people/work coming from Wine; that's not to trivialise the amount of work that integrating a bunch of projects takes.