> The thing is that real security isn't something that a checklist can guarantee.
I've taken this even further. You cannot do security with a checklist. Trying to do so will inevitably lead to bad outcomes.
Couple of years back I finally figured out how to dress this in a suitably snarky soundbite: doing security with a spreadsheet is like trying to estimate the health of a leper colony by their number of remaining limbs.
Every product vendor, especially those that are even within a shouting distance from security, has a wet dream: to have their product explicitly named in corporate policies.
Everything more complex than a hello-world has bugs. Compiler bugs are rare, but not that rare. (I must have debugged a few ICEs in my career, but luckily have had more skilled people to rely on when code generation itself was wrong.)
I had a fun bug while building a smartwatch app that was caused by the sample rate of the accelerometer increasing when the device heated up. I had code that was performing machine learning on the accelerometer data, which would mysteriously get less accurate during prolonged operation. It turned out that we gathered most of our training data during shorter runs when the device was cool, and when the device heated up during extended use, it changed the frequencies of the recorded signals enough to throw off our model.
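One way to defuse that class of bug is to resample the sensor stream onto a fixed time grid before feature extraction, so the model never sees the sensor's actual (temperature-dependent) rate. A minimal sketch, with made-up rates and function names:

```python
import numpy as np

def resample_to_fixed_rate(timestamps, samples, target_hz=50.0):
    """Resample irregularly timed sensor samples onto a fixed grid so
    downstream features see a constant sample rate, regardless of the
    accelerometer's actual (drifting) rate."""
    t0, t1 = timestamps[0], timestamps[-1]
    n_out = int((t1 - t0) * target_hz) + 1
    grid = t0 + np.arange(n_out) / target_hz
    return grid, np.interp(grid, timestamps, samples)

# A "hot" device sampling at ~60 Hz instead of the nominal 50 Hz:
t = np.cumsum(np.full(120, 1 / 60.0))
x = np.sin(2 * np.pi * 1.0 * t)  # a 1 Hz wrist motion
grid, fixed = resample_to_fixed_rate(t, x, target_hz=50.0)
```

Linear interpolation is crude (it low-pass filters the signal a bit), but it pins the frequency axis in place, which is exactly what the model cares about here.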
I've also used a logic analyzer to debug communications protocols quite a few times in my career, and I've grown to rather like that sort of work, tedious as it may be.
Just this week I built a VFS using FUSE and managed to kernel panic my Mac a half-dozen times. Very fun debugging times.
There are two different "ads" we're discussing here. One is the ads that Reddit, the platform, lets you pay for, which show up in the client(s) marked as ads. The other is the kind where a company reaches out to community members and asks them to post about its project/product in exchange for a flat sum; these look like "normal posts" but are actually sponsored content.
The first kind sucks for a multitude of reasons; the second kind you basically can't tell apart from ordinary posts, yet they're all over Reddit.
Can't say I know how it looks everywhere on Reddit, as I'm not everywhere on Reddit, but the AI subreddits I referenced earlier are filled with it. I've even received offers myself to get paid to post about stuff, and I'm a nobody, so surely I'm only seeing the surface.
If you use streaming replication (ie. WAL shipping over the replication connection), a single replica getting really far behind can eventually cause the primary to block writes. Some time back I commented on the behaviour: https://news.ycombinator.com/item?id=45758543
You could use asynchronous WAL shipping instead, where the WAL files are uploaded to an object store (S3 / Azure Blob) and the streaming connections are only used to signal the WAL head position to the replicas. The replicas then fetch the WAL files from the object store and replay them independently. This is what wal-g does, for a real-life example.
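For a rough sketch of how that wiring typically looks with wal-g (bucket configuration and credentials omitted):

```
# postgresql.conf on the primary: archive completed WAL segments to the
# object store instead of relying only on streaming connections
archive_mode = on
archive_command = 'wal-g wal-push %p'

# on a replica: fetch and replay WAL segments from the object store
restore_command = 'wal-g wal-fetch %f %p'
```

The `%p`/`%f` placeholders are standard PostgreSQL archive/restore substitutions (path and file name of the WAL segment).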
The tradeoffs of that mechanism are pretty funky, though. For one, the strategy imposes a hard lower bound on replication delay, because even the happy path is now "primary writes WAL file; primary updates WAL head position; primary uploads WAL file to object store; replica downloads WAL file from object store; replica replays WAL file". In the unhappy path of write bursts, the delay can grow significantly. You are also subject to any object store and/or API rate limits. The setup makes replication delay slightly more complex to monitor, but for a competent engineering team that shouldn't be an issue.
But it is rather hilarious (in retrospect only) when an object store performance degradation takes all your replicas effectively offline and the readers fail over to getting their up-to-date data from the single primary.
There is no backpressure from replication, and streaming replication is asynchronous by default. Replicas can ask the primary to hold back garbage collection (off by default), which will eventually cause a slowdown, but not blocking. Lagging replicas can also ask the primary to hold onto the WAL needed to catch up (again, off by default), which will eventually cause the disk to fill up, which I guess is blocking if you squint hard enough. Both take a considerable amount of time and are easily averted by monitoring and kicking out unhealthy replicas.
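Concretely, the two off-by-default knobs referenced above look like this (assuming PostgreSQL 13+; the size cap is an illustrative value):

```
# On a replica: ask the primary to delay vacuum cleanup while long
# reads run here (off by default; can bloat the primary if left on)
hot_standby_feedback = on

# On the primary: cap how much WAL a lagging replication slot may pin,
# so a stuck replica cannot fill the disk (default is no limit)
max_slot_wal_keep_size = '16GB'
```

With the cap in place, a replica that falls too far behind gets its slot invalidated instead of filling the primary's disk.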
> If you use streaming replication (ie. WAL shipping over the replication connection), a single replica getting really far behind can eventually cause the primary to block writes. Some time back I commented on the behaviour: https://news.ycombinator.com/item?id=45758543
I'd like to know more, since I don't understand how this could happen. When you say "block", what do you mean exactly?
Part of this is guesswork, because it's based on what I could observe at the time. I never had the courage to dive into the actual Postgres source code, but my educated guess is that it's a side effect of the MVCC model.
It was a combination of: streaming replication; long-running reads on a replica; lots[þ] of writes to the primary. While the read on the replica is running, it generates a temporary table under the hood (because the read "holds the table open by point in time"). Something in this scenario leaked state from the replica to the primary, because after several hours the primary would error out, and the logs showed that it failed to write because the old table was held in place on the replica and the two tables had deviated too far apart in time / versions.
It has seared itself into my memory because the thing just did not make any sense, and even figuring out WHY the writes had stopped at the primary took quite a bit of digging. I do remember that when the read on the replica was forcefully terminated, the primary was eventually released.
þ: The ballpark would have been tens of millions of rows.
What you are describing here does not match how Postgres works. A read on the replica does not generate temporary tables, nor can anything on the replica create locks on the primary. The only two things a replica can do are hold back transaction log removal and the vacuum cleanup horizon. I think you may have misdiagnosed your problem.
I concur on "really good" but have to disagree on the "series" part. Children of Time is a remarkable book, one of the best science fiction stories in a very long time.
Children of Ruin is ... okay. Children of Memory is not a good book, IMO. Both of these suffer from the same mysticism-used-to-spin-up-a-red-reset-button plot device plague that fundamentally guts Xenocide. Nowhere as bad as that, of course, but the unpleasant echoes are there.
As it happens I'm in the middle of the Architects series, and while it has a distant whiff of Stainless Steel Rat[ß], on the whole the series and its universe have so far remained consistent.
ß: Stainless Steel Rat was notorious for repeatedly putting the protagonist into impossible situations and then whipping up near-magical pieces of technomancy that just happened to solve the problem of the moment.
For me, Children of Ruin had more of a horror focus to it and left me with much more icky feelings than the brilliant positivity I felt at the end of the first book. It was still well done, though.
I agree that Children of Memory is not very good, mostly because it repeats itself so much. That could've been handled differently while still advancing the plot. I LOVE the overall concept, and the author's skills describing Gothi and Gethli's unique kind of intelligence was great, so I was okay with it overall... but too much of it was just a slog. First book is by far the best in my opinion as well.
I always took the Deus Ex Machina in Harrison's books to be just more of the satire. He never really takes his settings or characters lightly, but the presentation is almost always aimed at comedic effect.
> Also, most of us are unlike the author, and 0.07s vs 0.38s startup time means no difference.
That's quite likely a workflow thing. If you are popping up new (transient) terminals frequently, then a ~400 ms wait for each adds up and makes the entire machine feel really slow. I'm willing to wait an extra half second for a new terminal -- once -- after I've changed my autocompletion configs (rebuild + rehash takes a while), but if I had to wait that long every time I hit Win+Enter for the terminal to become active, I'd be irritated pretty damn quickly too.
You get conditioned to immediate responses pretty fast.
This has already been the case for political and/or social impact events for years in the UK's betting exchanges. The settlement rules for any potentially hairy real-world event have to be explicitly clear and account for all possible outcomes that might affect the resolution.
When there's money on the line, I have years of hard evidence that armchair lawyers (ie. betting exchange clients) will do absolutely anything to find potential loopholes in settlement rules and argue that their bets should have paid off.
That sounds like it would make one hell of a tech talk. I have a gut feeling many readers (especially lurkers) of this very thread would gladly watch the recording.
Common ways the two groups misunderstand each other, and how you help them anchor to the underlying base concepts? Yes please. For example, we know that interest accrues over time, but we still use the annual interest rate as a step-function shorthand because it makes intuitively more sense.
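A quick worked example of that shorthand, with made-up numbers:

```python
import math

# A nominal 5% annual rate on 1000 units over 10 years, three ways:
principal, rate, years = 1000.0, 0.05, 10

# The everyday shorthand: interest applied once a year, as a step function
annual = principal * (1 + rate) ** years            # -> 1628.89

# Monthly compounding: closer to how interest actually accrues
monthly = principal * (1 + rate / 12) ** (12 * years)

# Continuous compounding: the limiting case of ever-finer steps
continuous = principal * math.exp(rate * years)

# The shorthand consistently understates the accrued amount a little,
# but the intuition ("5% a year") survives intact:
assert annual < monthly < continuous
```

The gap between the step function and continuous accrual is small at everyday rates, which is exactly why the shorthand works so well as an anchor.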