Neat, they have Newt, the robot my dad built that was the first mobile robot with its own onboard computer. Newt is still there in his basement, and as a kid I did science fair projects programming behaviors for it. At that point the computer had been upgraded to a Motorola 68K. https://www.theoldrobots.com/Newt.html
An incorrect assumption (though it was nice to see a familiar face when I came across it), but good to know what you think of it. I assumed HN deduped posts but I guess not.
I've found that once you make the jump to a real partitioned datastore (like DynamoDB) you can actually go back and undo a lot of the queues and caches that were used as band-aids to reduce pressure on the DB. If you have something that has consistent performance at any scale and true elasticity, you just don't need all that other stuff, and the whole system gets far easier to understand and operate.
The pattern I've always used for this, which I suspect is what they landed on, is to have an optimistic notification in a separate message queue that says "something changed that's relevant to you". Then you can dedupe that, etc. Then structure the data so it's easy to sync what's new, and let the client respond to the notification by calling the sync API. That even lets you use multiple transports for delivering notifications. None of that requires the database to coordinate notifications in the middle of a transaction.
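A minimal sketch of that pattern in Go (the type and function names here are invented for illustration; the real system would pull notices off a queue and the client would call its actual sync endpoint):

```go
package main

import "fmt"

// ChangeNotice is a payload-free hint: "something changed that's
// relevant to you". It carries no data from the transaction itself.
type ChangeNotice struct {
	UserID string
}

// dedupe collapses repeated notices for the same user into one,
// so a burst of changes triggers a single sync.
func dedupe(notices []ChangeNotice) []ChangeNotice {
	seen := map[string]bool{}
	var out []ChangeNotice
	for _, n := range notices {
		if !seen[n.UserID] {
			seen[n.UserID] = true
			out = append(out, n)
		}
	}
	return out
}

func main() {
	notices := []ChangeNotice{{"alice"}, {"alice"}, {"bob"}}
	for _, n := range dedupe(notices) {
		// The client reacts by calling the sync API to fetch
		// whatever is new since its last sync point.
		fmt.Printf("notify %s: call sync API\n", n.UserID)
	}
}
```

Because the notice is just a hint, it's safe to drop, duplicate, or deliver it over several channels; the sync call is what actually reconciles state.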
The flip side of technical debt is, what tools do you have to actually solve technical debt? Some technical debt comes from conscious decisions to choose an expedient solution over a long term one, but a lot of it just comes from not knowing what the future is going to bring, and not wanting to over-engineer for every possible outcome.
The question I'd ask is, shouldn't we have tools that make it super easy to change things, so we can adapt to different outcomes without it becoming a huge slog?
We're using Go, so cross-compilation has never been a big problem (for producing artifacts). But this'll be great for testing on ARM. I'm interested to see the performance of these instances too - our experience has been that Amazon's Graviton processors have fantastic bang-for-buck vs. Intel/AMD.
Many people don't know this, but on a correctly configured amd64 Linux box this just works:
$ GOARCH=s390x go test
The test is cross-compiled, then run with QEMU user-mode emulation.
Configuring this for GitHub Actions is a single dependency: docker/setup-qemu-action@v3
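As a rough sketch, a workflow job using that action might look like this (the job and step layout here is illustrative, not taken from any particular repo):

```yaml
jobs:
  test-s390x:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
      # Registers binfmt handlers so foreign-arch binaries run under QEMU.
      - uses: docker/setup-qemu-action@v3
      - run: GOARCH=s390x go test ./...
```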
Also, if you want to test on different OSes, there are a couple of actions that handle that too.
I'll probably be integrating these Linux ARM instances, but this workflow should give you an idea of what was already possible with the existing runners:
Nice article! Having used DynamoDB for some massive applications, that first point really resonates - you have to put way too much effort into designing your data model around your access patterns, and then it all goes out the window when your requirements change.
We've actually been building a new database that uses DynamoDB as an underlying storage layer but aims to address the lack of flexibility and difficulty in evolving your data model to meet new requirements: https://stately.cloud/blog/developers-should-be-able-to-chan... - we'd love feedback from folks who've been unsatisfied with DynamoDB.
If you're interested in a MongoDB style document database, but don't like the fact that Mongo doesn't give you any tools around modeling your data, we're building a new database that takes a schema-first approach to NoSQL: https://stately.cloud/blog/developers-should-be-able-to-chan...
The main motivation for building StatelyDB was exactly what you describe in your first paragraph - the difficulty of migrating data and changing your mind once you've already got data in the database. We're building an elastic schema that you can easily migrate, with automatic backwards compatibility so you don't have to update your applications all at once. Take a look, we'd love feedback.