This is the defining pain point for data science, in my experience. There’s no simple ground truth to test competence against.
If someone tells you that the data says their work is good, the only real way to know if they’re right or wrong is to look at what the data says yourself. If 99% of the work is building and 1% is checking something like latency, then you’re likely to have more than one set of eyeballs on that 1%. But if 99% of the work is putting the data together and doing the analysis, then you’re unlikely to have more than one person ever look at that part.
So incompetence goes unchecked (or worse, it is rewarded).
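To make "look at what the data says yourself" concrete, here is a minimal sketch, assuming a hypothetical events.csv export with a latency_ms column, of independently recomputing a number an analyst reported rather than trusting the summary slide:

    import pandas as pd

    # Hypothetical raw export; the point is to recompute the headline
    # number from the raw data instead of taking the slide's word for it.
    df = pd.read_csv("events.csv")
    p99 = df["latency_ms"].quantile(0.99)
    print(f"independent p99: {p99:.1f} ms")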
The same is true of many tech jobs. Competence is often only a local thing, subject to politics, reputation, and appearances. There's also no ground truth because the ground changes so fast: no one knows whether the technologies mentioned in the OP will still be popular 5-10 years from now.
> For establishing competence, you still have to dig in to see what caused the slowness.
Not as management. You just have to see that other people's similar sites are not slow with the same resources, so it is possible for your site not to be slow. You don't have to know why you're failing to know that, taken together, the people you hired were not as good as the people those others hired.
This is of course barring management failure; but if you're failing at management, that's about the same as saying that your engineers were under-resourced.
Engineering competence largely consists of the skills to figure out what is causing problems, e.g., slowness. If you can't figure out what is causing the slowness, your engineers aren't good enough to figure out what is causing the slowness. QED.
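And the diagnostic skill itself is mundane. A minimal sketch, assuming a Python service with a hypothetical handle_request() entry point, of the first profiling pass that "figure out what is causing the slowness" usually means:

    import cProfile
    import pstats
    import time

    def handle_request():    # hypothetical stand-in for the real entry point
        time.sleep(0.1)      # pretend this is the slow database call

    # Profile one request and list the ten most expensive call paths.
    cProfile.run("handle_request()", "profile.out")
    pstats.Stats("profile.out").sort_stats("cumulative").print_stats(10)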
Software engineering is one of the few knowledge-work areas where even a layman can test the result in various ways. To a large extent, you can flush the toilet before paying the plumber, and you can hire a counter-team called QA. QA themselves are tested by future production bugs.
In other disciplines it is far fuzzier. If you are in the conclusion business and there isn't a clear path to testing your conclusion in the short term, you can bullshit away!
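As a concrete version of that toilet-flush test, here is a minimal sketch, using the placeholder https://example.com, of the kind of check a layman can run without reading a line of the code:

    import time
    import urllib.request

    # If the page takes four seconds to come back, you don't need to be
    # an engineer to know something is wrong.
    start = time.monotonic()
    urllib.request.urlopen("https://example.com", timeout=10).read()
    print(f"page loaded in {time.monotonic() - start:.2f} s")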
Unfortunately, I haven't worked at a company with dedicated QA in the past five years, maybe longer. QA is often treated as a side job for engineers and product teams.
Oh! Dedicated QA makes a big difference, especially when they take leadership and are willing to get involved in what we might call QA-ops: improving automated testing and the like.
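For anyone who hasn't seen it done, that QA-ops work can be as mundane as this: a minimal sketch, with a hypothetical checkout() function, of the regression tests a dedicated QA team builds out so production bugs stop being the test suite.

    # Run with pytest.
    def checkout(cart):    # hypothetical function under test
        return sum(item["price"] for item in cart)

    def test_checkout_totals_prices():
        assert checkout([{"price": 5}, {"price": 7}]) == 12

    def test_checkout_handles_empty_cart():
        assert checkout([]) == 0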