DynamoDB is a pain in the ass if you want to do too many relational or arbitrary queries. It's not for data exploration.
It is my favourite database though (next to S3)! For cases where my queries are pretty much known upfront, and I want predictably great performance. As Marc Brooker wrote in [1], "DynamoDB’s Best Feature: Predictability".
I consistently get single digit millisecond GETs, 10-15ms PUTs, and a few more milliseconds for TransactWriteItems.
Are you able to do complex joins? No. Are you able to do queries based on different hash/sort keys easily? Not without adding GSIs or a new table. The issue in the past few years was the whole craze around "single-table design". Folks took it literally as having to shove all their data into a single table, instead of understanding the reasoning and the cases where it worked well. And with DynamoDB's ongoing improvements, those cases have been getting fewer and fewer over time.
But, that's what tradeoffs are about. With on-demand tables, one-shot transactions, actually serverless storage/scaling, and predictable performance you get very, very far.
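The access patterns above can be sketched as DynamoDB request payloads. This is a minimal illustration in boto3's wire format; the table name (`orders`), key attributes (`pk`/`sk`), and GSI name are hypothetical, not from the original text:

```python
def get_item_request(table, pk, sk):
    """Point read on the primary key -- the single-digit-ms GET case."""
    return {
        "TableName": table,
        "Key": {"pk": {"S": pk}, "sk": {"S": sk}},
    }


def query_gsi_request(table, index, email):
    """Alternate access pattern: querying by a different attribute
    requires a GSI (or a new table) keyed on that attribute."""
    return {
        "TableName": table,
        "IndexName": index,
        "KeyConditionExpression": "email = :e",
        "ExpressionAttributeValues": {":e": {"S": email}},
    }


def transact_write_request(table, items):
    """One-shot transaction: all Puts succeed or fail together."""
    return {
        "TransactItems": [
            {"Put": {"TableName": table, "Item": item}} for item in items
        ]
    }


# Hypothetical example values, only to show the payload shapes.
req = get_item_request("orders", "CUSTOMER#42", "ORDER#2024-01-01")
txn = transact_write_request("orders", [{"pk": {"S": "CUSTOMER#42"}}])
```

In practice these dicts would be passed to the real client, e.g. `boto3.client("dynamodb").get_item(**req)` or `.transact_write_items(**txn)`.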
I have also written about my "worklog" [1], which is what I call my own changelog at work.
It's a simple Markdown file, versioned on my own Gitea instance.
The worklog contains notes on everything I spend my time on at work: project work, meetings, discussions, todos, long-term todos and investigations, and anything I need to be able to check back on later.
I cannot count the number of times I have gone back and searched this file for information that others forgot, that the meeting notes did not capture, and in general for things I needed.
Keeping this kind of log makes performance reviews trivial too. Just scroll through the worklog for the period you want, copy paste bullet points, and then spend some time cleaning them up and rewriting them as necessary.
I have built a managed platform automating HTTP API testing, at https://www.skybear.net. At its core, it runs your Hurl files. It persists reports automatically, supports scheduled runs, and handles multiple files with hundreds of requests per "run/execution".
Soon, I will be adding analytics, insights, and automatic test generation features.
I have been working on it for a year, and will keep working on it for many years to come, since I use it a lot myself anyway.
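For anyone unfamiliar with the format, a run executes Hurl files like the one below. This is a minimal illustrative sketch (the URL and JSON path are made up, not from the platform's docs):

```hurl
# Request, expected status, and a response assertion.
GET https://example.org/api/health

HTTP 200
[Asserts]
jsonpath "$.status" == "ok"
```

A single file can chain many such request/assert entries, which is how a run reaches hundreds of requests.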
Just published a new article about "The CI/CD Flywheel". Why it's such a superpower for teams that embrace it, and how it maps to CI/CD pipelines.
I dive into the different steps of the CI/CD flywheel, including local code authoring, running CI jobs on pull-requests, deploying and verifying on staging (preproduction) environments, and finally deploying and verifying in production.
Then, I try to showcase the business and technical benefits of a proper CI/CD pipeline.
Business benefits:
- Ship value to customers more often
- Earn and retain customer trust
- Fast experimentation feedback loop
- Attract the right talent
- Cost reduction
Technical benefits:
- Code quality
- Comprehensive tests
- Automate repetitive tasks for reproducibility
- Build once, use everywhere
- Controlled rollouts and faster Mean Time To Resolution (MTTR)
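The steps described above could map to a pipeline along these lines. This is a hypothetical GitHub Actions sketch; job names, branch, and `make` targets are illustrative assumptions, not from the article:

```yaml
# Hypothetical flywheel pipeline: PR checks -> staging -> production.
name: cicd-flywheel
on: [pull_request, push]
jobs:
  ci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make test                     # CI jobs on pull requests
  deploy-staging:
    needs: ci
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    steps:
      - run: make deploy ENV=staging
      - run: make verify ENV=staging       # verify on preproduction
  deploy-production:
    needs: deploy-staging
    runs-on: ubuntu-latest
    steps:
      - run: make deploy ENV=production
      - run: make verify ENV=production    # verify in production
```

The `needs` chain is what makes it a flywheel: each stage only spins up once the previous one has verified successfully.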
I would say that luck in FAANG interviews plays a much smaller role than at the average tech company. At least in my experience. Don't get me wrong, there is luck involved, and it's a big part of the interview process.
But, at least, you have standardised pools of coding questions, standardised pools of system design questions, and a standardised feedback form.
The interviewer varies a lot though... And makes all the difference when you have an inexperienced one, or someone that doesn't care.
> The interviewer varies a lot though... And makes all the difference when you have an inexperienced one, or someone that doesn't care.
I believe this is likely the main point being made about luck - be unlucky enough to get an inexperienced interviewer and all the time spent preparing, as in the linked OP, goes up in smoke.
Not sure if it counts as an average tech company, but in terms of non-FAANG, I think startups and smaller companies can end up relying less on luck. There is more time and motivation to thoroughly evaluate candidates, maybe giving non-interview coding tasks and such to balance out interviews. At a FAANG, if there was an issue because of an inexperienced interviewer, again you need luck for a recruiter or committee to attempt a save among all the other applicants.
>I can only assume the author has not interviewed many candidates.
I did more than 100 interviews so far.
>I've lost count of how many tenured engineers I've interviewed who could not write basic code or explain basic programming concepts. Things that a practicing programmer would encounter every week.
Where did I say that I agree with this statement? This is what other engineers claim, and as you pointed out, it has been said many times.
I am totally against this claim as well. I do think that those tenured engineers should be interviewed for basic coding skills too. Just last month I had a staff+ engineer who was not able to write a DFS...
That part in the article is ironic, hence why I said that I didn't want to make it a discussion about the fact that we have these coding interviews, and to "Get over it!" :)
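For context on the kind of exercise mentioned above, here is a minimal iterative depth-first search over an adjacency-list graph (the graph and node names are just illustrative):

```python
def dfs(graph, start):
    """Iterative depth-first search; returns nodes in visit order.

    graph: dict mapping node -> list of neighbours (adjacency list).
    """
    visited, stack, order = set(), [start], []
    while stack:
        node = stack.pop()
        if node in visited:
            continue
        visited.add(node)
        order.append(node)
        # Push neighbours in reverse so they pop in listed order.
        for neighbour in reversed(graph.get(node, [])):
            if neighbour not in visited:
                stack.append(neighbour)
    return order


g = {"a": ["b", "c"], "b": ["d"], "c": [], "d": []}
print(dfs(g, "a"))  # -> ['a', 'b', 'd', 'c']
```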
>2) practice communicating with people that have very thick Indian or Chinese accents. It will also help in general day to day life
Absolutely :) I have that item explicitly in my article, since it has happened to me as well, on both sides of the interview process.
It's very unfortunate when accents make communication difficult. And it makes judging harder too, since you are no longer focusing on the technical merits alone.
>1) don’t expect only questions relevant to the position.
For this, you might have been unlucky with an inexperienced interviewer...
Can you elaborate on what you mean by this? Do they focus more on your IC experience and reject with the excuse that you don't have a lot of years being a people manager? Or did you mean something else?
I don’t know what they focus on, I’ve had many assumptions to A/B test over many interviews
I have a lot of experience people managing though in successful projects and teams! I guess I never found the Cracking the Engineering Manager Interview book
1. https://brooker.co.za/blog/2022/01/19/predictability.html