I wouldn't characterize Leetcode interviewing as biased towards those who recently took an algorithms class - it biases in favor of those (of any level of experience) who are willing to spend a few months practicing Leetcode.
I thought that Google/etc had at least dialed this back a bit, or maybe just dialed back the "how many gas stations are in the US" type questions, after realizing this wasn't the best predictor of good performance.
> or maybe just dialed back the "how many gas stations are in the US" type questions, after realizing this wasn't the best predictor of good performance.
These problems (known as Fermi problems) have been out of vogue for over a decade now. Google is one of the companies that pioneered algorithm-centric Leetcode problems as a replacement for them.
Leetcode problems aren't hugely useful beyond the signal you'd get from a FizzBuzz. Rather, they're just another excuse for interviewers to convince themselves a person is smart, call it signal, and justify a hire.
The last time Google gave me a job offer, one of my interviews was literally a souped-up FizzBuzz - straightforward imperative code with no trick, no complicated algorithms, and no fancy data structures. I suppose that may be why I got an offer: I didn't need fancy algorithms I hadn't prepared for.
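For anyone unfamiliar with the baseline being referenced, a plain FizzBuzz really is just straightforward imperative code - no tricks, no data structures. A minimal Python sketch (illustrative only, not the actual interview question):

```python
def fizzbuzz(n):
    """Return the FizzBuzz sequence for 1..n as a list of strings."""
    out = []
    for i in range(1, n + 1):
        if i % 15 == 0:          # divisible by both 3 and 5
            out.append("FizzBuzz")
        elif i % 3 == 0:
            out.append("Fizz")
        elif i % 5 == 0:
            out.append("Buzz")
        else:
            out.append(str(i))
    return out

print(fizzbuzz(15))
# → ['1', '2', 'Fizz', '4', 'Buzz', 'Fizz', '7', '8', 'Fizz', 'Buzz',
#    '11', 'Fizz', '13', '14', 'FizzBuzz']
```

The point of such a question isn't cleverness; it's verifying a candidate can write a loop and a conditional without help.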
Ultimately it’s impossible to know if someone will be a good hire from an interview. Being a good engineer requires a bunch of traits that simply can’t be tested. The leetcode interview, as I see it, acknowledges this weakness and instead chooses to filter out low-effort candidates, as anyone persistent can practice leetcoding and interviewing (in theory).
This is pretty consistent with my experience. I had one hardcore algorithms question that I bombed, but the others hinged on things like "when should you use a map vs a list" - things that should be second nature to anyone who has been writing code long enough.
I think there's a lot more that could be tested than what current implementation-centric interviews measure. At my company, for example, I feel like we've gotten a lot of use out of our debugging interviews.
Last year I went through a loop, and when I mentioned a couple of the questions to friends who are SWEs at Google, they thought the questions were too hard for an interview, especially after checking the proposed solutions in the internal problem bank.
So it's definitely still happening.
No, I didn't get the job; they were too hard for me to even reach a brute-force solution.
It’s just sad, since of course those questions have nothing to do with the skills you actually need on the job: “the login authentication is failing once every 10,000 times, go fix it!” Oooh, shall I use a B-tree!?