Perfectly encapsulates the state of the job market. Interviewing is genuinely a hellscape at this point and I've experienced many interviews where there was a complete breakdown of etiquette/guidelines and good faith.
Geez. Good one. Was in something similar lately. Ten weeks wasted and the shittiest feedback ever. These companies should be legally required to pay candidates for the gauntlets they put them through.
The lack of feedback is the worst part and is increasingly common. Zero respect for the candidate's time investment, and it propagates a terrible culture.
Most big-co legal teams do not allow feedback to be communicated to candidates. They are afraid the candidates will sue based on it. That is not new.
They could at least allow hiring teams to send out a feedback email that highlights what the candidate did WELL, at a high level. This way the candidate gets some meaningful signal, while the company avoids the legal gray area of admitting why they rejected them. Just add a disclaimer like “unfortunately company policy prohibits us from explicitly mentioning why we chose another candidate.”
But you’d need to actually care to take something like that into consideration so… ¯\_(ツ)_/¯
Our entire system is getting so bogged down by things like this that it is ceasing to function. Lots of things that make sense individually but are breaking the previous social contract, or removing the grease that made things work.
Some jobs I interviewed for replied with an automated email saying that, if I wanted, I could ask for feedback. I always did, and none of them replied... This somehow feels even more insulting.
Once I got really detailed feedback from an interview for a job I didn't get. It really took me by surprise! I didn't even have to ask.
It was quite interesting too because the things they'd inferred about me - stuff that I had understood or not understood - were just plain wrong. I didn't get everything right, but some bits I understood fine that they thought I didn't.
I'm not sure what to take from that, other than that it's not about knowing stuff, it's about convincing someone else that you know stuff.
Also I'm about to do a hardcore leetcode interview. Wish me luck. (I'm probably going to fail; I'm pretty great at programming but only average at leetcode.)
One thing to keep in mind is that leetcode is testing (surprise) social anxiety. You can be a great engineer, a terrific peer to have when a crisis hits, but still fail a leetcode problem because someone is watching.
I'm sorry you had such a bad interviewing experience. You asked for feedback in your blog post, and since your blog doesn't allow comments, I hope you won't mind my responding here.
You wrote something that I think is untrue of most tech companies, so I'd like to discuss it:
> [As I and a friend spoke], I realised something: Three technical interviews went well, I was feeling confident going into the behavioural interview... This means that I'm heading into behavioural and HR contract stages with confidence in my performance thus far and my ability to excel at the role. And it means that I have the upper hand in salary and benefit negotiation. This is horrible for them. THEY NEED to shut me down and bring me down a few rungs before this step. And to edge me for 2 weeks (and counting...) after the supposed final round before I hear anything back.
I suspect that approximately 0% of top tech firms are trying to tank your interview as a comp-negotiating tactic. For most of these firms, the biggest problem is finding people they want to hire. To find qualified people, they need to measure what applicants, like you, can actually do. And they can't get a good measurement when they sabotage your performance. Further, if they decide to hire you, they need you to feel good about the company, not hate it because of how you were maltreated. They want you to say yes to their offer, not rage quit the hiring pipeline.
I'm not saying that there aren't bad companies or bad interviewers out there. Nor am I saying that you can't get into an interview where the other person is actually out to get you. It happens. Maybe it happened to you.
What I'm trying to say is that if your mental model of the hiring process is that the company is probably going to sabotage your end-game interviews, you're going to be wrong most of the time and make some bad decisions.
> What do you think? Was that a normal interview that I should have expected? I am in the wrong by posting this? Should I nuke my blog?
Here's what I think. If you have a public blog, it's fair game at an interview. If you write mostly about data science stuff but you apply for a software engineering job, you ought to be prepared to explain the contrast. Understand that, for most top firms, hiring good people and getting them to stick is hard. Most employers will want some assurance that you are serious about the position you're applying for. If you send signals that you might want some other position, be prepared to get asked about those signals.
And you got asked about those signals:
> "How do we know we won't hire you and you'll try to transition to a data scientist?"
You ought to be prepared for questions like these. For example, most interviewers would probably be satisfied with answers like these:
That's a great question. Data science is something I do for fun in my spare time. I don't want it to become my day job. I love software engineering and that's what I want to focus my career on.
Or:
That's an important question. Thanks for asking about it. I try to stay abreast of important trends in industry, and when AI and data became important in some of my past work, I put in some personal time to learn more about them. When I learn things, I often write about them on my blog to help me remember. My blog's just a learning tool, a memory aid, right? It's not a barometer of my career interests. If you want to know what my career interests are, let me be clear: I want to write software. Five years from now, I still want to be a software engineer.
> Should I nuke my blog?
I'd say no. But you should read your blog from the perspective of a firm that's considering you for a job and be prepared to explain away anything they might have concerns about.
That's just my two cents. If you find anything in my comment helpful, great. If not, feel free to dismiss everything I've written.
> mental model of the hiring process is that the company is probably going to sabotage your end-game interviews
I definitely agree, and it is not a mental model that I carry into any interview; I have good intentions and I'm super friendly! This was only a tiny (disillusioned) post-interview reflection. I would say most interviews, especially with engineers, have gone well, but there has absolutely been a vibe shift in the past year.
You can tell teams are a lot more risk averse when it comes to hiring. The promise of a fabled 10x engineer on the horizon, paired with SWE automation devaluing existing talent, has meant they will make you jump through 10 more hoops, and even then the decision is scrutinised. Understandably, hiring is an expensive process (both successful and unsuccessful).
> Most employers will want some assurance that you are serious about the position you're applying for.
This is also a reflection of the job market. If it were balanced, this notion would not exist. It's become a numbers game: automated screening + AI has meant candidates need to send out hundreds of applications, often with automation on their end too. On the other side, every job likely receives thousands of applications, especially with stupid things like "L*nkedIn Easy Apply". Me personally, I would not apply for a role I am not committed to taking, and I especially would not have gone through FOUR stages for fun; the first interview should be plenty screening for both parties!!! Alas.
I appreciate you taking the time to respond and thank you for your well wishes!
> the first interview should be plenty screening for both parties
Most good companies will interview you multiple times simply because they understand that individual interviewers can be biased. If five different people all say hire this guy, that's a much more trustworthy signal than if one person says the same thing.
> Here's what I think. If you have a public blog, it's fair game at an interview. If you write mostly about data science stuff but you apply for a software engineering job, you ought to be prepared to explain the contrast. Understand that, for most top firms, hiring good people and getting them to stick is hard. Most employers will want some assurance that you are serious about the position you're applying for. If you send signals that you might want some other position, be prepared to get asked about those signals.
Great! Let me trawl through all candidates' HN and social media comments, and ask why they spend more time talking about politics, movies, and science fiction than about CRUD SW development. They need to justify it!
That's certainly one way of interpreting what I wrote.
My point was that potential employers are not blind to what you put out in the public space. If what you put out would cause a reasonable employer to have questions about your viability as a candidate, you ought to be prepared for those questions. If you're lucky, they'll ask you those questions and you can dispel their concerns.
>> For most of these firms, the biggest problem is finding people they want to hire.
While the firm wants to hire someone, the hiring pipeline/process is made up of individuals that have their own individual preferences on who should get hired. One person can certainly sabotage a candidate, and the further into the process the greater their incentive.
> Here's what I think. If you have a public blog, it's fair game at an interview. If you write mostly about data science stuff but you apply for a software engineering job, you ought to be prepared to explain the contrast. Understand that, for most top firms, hiring good people and getting them to stick is hard. Most employers will want some assurance that you are serious about the position you're applying for. If you send signals that you might want some other position, be prepared to get asked about those signals.
This is kind of absurd. Could you imagine a registered nurse being asked to explain why they have a blog about astronomy and not nursing?
"What do you mean you don't write about dressing wounds in your spare time? How much could you really know about it then?"
"Managing Type 2 Diabetes isn't interesting enough for you to blog about? I'll have you know most of the patients that you would be dealing with at this long term care facility have T2D. I'm skeptical that you'd be able to care for them."
Why do we allow this kind of BS in the tech industry? When's the last time a nurse did a whiteboard interview?
> Could you imagine a registered nurse being asked to explain why they have a blog about astronomy and not nursing?
That hits pretty close to home... I'm a doctor who has a small blog about the implementation details of the lisp I made.
> Managing Type 2 Diabetes isn't interesting enough for you to blog about?
If someone asked me this point blank I think I'd laugh out loud. It's interesting enough for me to keep up with the latest evidence, thanks.
> When's the last time a nurse did a whiteboard interview?
To be fair, healthcare professionals have some pretty gruelling training and difficult licensing examinations. Some amount of preselection is taking place. Nobody needs a license to write software.
1) What 60-year-old who's been in tech his entire life only makes an HN account in the last 17 hours?
2) Assuming he wasn't aware of it: what brought the site to his attention, and why now?
3) Did not engage with the thread at all after his initial post. Has not engaged with anything else since. You'd think someone introduced to a tech community would be eager to look around and contribute??
I completely understand your sentiment though and it's exactly what makes the OG post so tone deaf.
I don't doubt that there are some bot comments here and there, but there are tens of people in this comment section echoing the same sentiment. Many of them have post histories going back many years. They can't all be bots.
On every forum, there are a lot of lurkers who never make an account and just read their site of interest to keep up with the news and check on things they're interested in. It's not often that they make the effort to create an account to say something. Usually, that happens when something they feel strongly about is brought up. So, while the account age of this poster makes me very suspicious, it's also not enough for me to rule him out completely.
I'm not sure the assumption is that he's coming across HN for the first time, rather than making an alt or deciding to stop lurking to post this. Or even that someone in tech their entire life must have already had an HN account before today. HN is big, but it's not so big that the assumption is even remotely reasonable.
What I doubt most about this shift of "forget writing code or reviewing it, you shouldn't even look at it" (their tagline was "review demos, not diffs") is how it ignores scope drift. I use agentic tools all day and I can tell you I would absolutely not trust an agent to run for hours without supervision, because it is very likely that over the course of HOURS (even with a fully detailed, structured plan with .md files and loaded preferences) the agent will have drifted substantially from your initial request.
The biggest attestation to this is: when Claude is done working on something for you and you haven't defined the next steps, ask it what you should do next. See if it at all aligns with what you actually wanted to do.
Good report; this is a very important thing to measure, and I was thinking of doing it after Claude kept overriding my .md files to recommend tools I've never used before.
The Vercel dominance is one I don't understand. It isn't reflected in Vercel's share of the deployment market, nor is it likely overwhelmingly prevalent in discourse or online recommendations (possible training data). I'm going to guess it's the bias of most generated projects being JS/TS (particularly Next.js), and the model can't help but recommend the makers of Next.js in that case.
Yeah exactly, it's best to keep track and be aware of common tropes used in AI writing so that you don't end up 5 responses deep and emotionally invested in a conversation before you realise you've been fooled into speaking to a bot.
I built this tool primarily to identify AI writing in articles and posts but it's proven useful for comments/responses too: https://tropes.fyi/vetter
This is interesting because it is largely a set of good writing advice for people in general, and AI likely writes like this because these patterns are common.
Not least because a lot of these things are things that novice writers will have had drummed into them. E.g. clearly signposting a conclusion is not uncommon advice.
Not because it isn't hamfisted, but because they're not yet good enough that the link's advice ("Competent writing doesn't need to tell you it's concluding. The reader can feel it") applies, and it's better than the conclusion not being clear to the reader at all. And for more formal writing, people will also be told to signpost it even more explicitly with headings.
The post says "AI signals its structural moves because it's following a template, not writing organically." But guess what? So do most human writers. Sometimes far more directly and explicitly than an AI.
To be clear, I don't think the advice is bad given to a sufficiently strong model - e.g. Opus is definitely capable of taking on writing rules with some coaxing (and a review pass), but I could imagine my teachers at school presenting this - stripped of the AI references - to get us to write better.
If anything, I suspect AI writes like this because it gets rewarded in RLHF: it reads like good writing to a lot of people on the surface.
EDIT: Funnily enough, https://tropes.fyi/vetter thinks the above is AI-assisted. It absolutely is not. No AI has gone near this comment. That says it all about the trouble with these detectors.
These patterns overlap with formal writing advice because AI was trained overwhelmingly on academic papers, journals and professional writing so it inherited this style.
I completely understand - and do not intend to disparage - the use of these tropes. With the vetter and aidr tools I try to focus more on frequency analysis. I've tried to minimise false positives by tuning detection thresholds to density rather than individual occurrences, e.g. one "it's not X, it's Y" is fine, but 3x in one paragraph and suspicions flare.
But other tropes like lack of specificity and ESPECIALLY AIs tendency to converge to the mean (less risk, less emotion, FALSE vulnerability) are blatantly anti-human imo.
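The density idea described above can be sketched roughly like this. The patterns, threshold, and function names here are purely illustrative assumptions of mine, not the actual vetter/aidr rule set:

```python
import re

# Two illustrative trope patterns (hypothetical, not the real rules):
# the "it's not X, it's Y" contrast, and the much-mocked "delve".
TROPE_PATTERNS = [
    re.compile(r"\bit'?s not \w+[^.;,]*?,\s*it'?s\b", re.IGNORECASE),
    re.compile(r"\bdelve\b", re.IGNORECASE),
]

def trope_density(paragraph: str) -> float:
    """Return trope hits per 100 words in a paragraph."""
    words = len(paragraph.split())
    if words == 0:
        return 0.0
    hits = sum(len(p.findall(paragraph)) for p in TROPE_PATTERNS)
    return 100.0 * hits / words

def looks_ai_assisted(paragraph: str, threshold: float = 3.0) -> bool:
    # One occurrence in a long paragraph stays under the threshold;
    # three in a short one pushes the density over it.
    return trope_density(paragraph) > threshold
```

The point of dividing by word count is exactly the false-positive tuning mentioned above: a single "it's not X, it's Y" in a long human paragraph scores low, while a short paragraph stuffed with them flares suspicion.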
I'd argue most of them overlap less with academic writing advice than with high-school-level writing advice. Most people don't transcend that because they have no need to, and it's where most people learn to write essays.
These tropes emerge from the distribution of the LLM itself and from my experimentation it's actually very difficult to get an LLM to change its language. Especially when you consider they've been RLHFed to the max to speak the way they do.
Changing the style is easy: Just feed it a writing sample, and tell it to review its own writing against the style of the writing sample.
That won't entirely weed out these tropes, but it will massively change the style.
Then add a few specific rules and make it review its writing, instead of expecting it to get it right while writing.
To weed out the tropes is largely a question of enforcing good writing through rules.
A whole lot of the tropes are present because a lot of people write that way. It may have been amplified by RLHF etc., but in that case it's been amplified because people have judged those responses to be better - after all that is what RLHF is.
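The review-pass workflow described above (feed in a writing sample, add explicit rules, have the model review rather than write in one shot) could be assembled as a prompt like this. Everything here - the function name, the wording, the rules - is my own sketch, not any particular tool's prompt:

```python
def build_style_review_prompt(draft: str, sample: str, rules: list[str]) -> str:
    """Assemble a review-pass prompt: the model critiques its own draft
    against a human writing sample plus explicit style rules, instead of
    being expected to get the style right while writing."""
    rule_lines = "\n".join(f"- {r}" for r in rules)
    return (
        "Here is a sample of the target writing style:\n\n"
        f"{sample}\n\n"
        "Review the draft below against that sample and these rules, "
        "then rewrite it to match:\n"
        f"{rule_lines}\n\n"
        f"Draft:\n{draft}"
    )
```

Running this as a separate pass after drafting, rather than folding the rules into the original request, is the key design choice: the model is much better at spotting rule violations in finished text than at avoiding them mid-generation.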
tldr: We took our hypertuned coding agent, trained it on millions of internal data-engineering workflows and data with specialized custom-built tools, and it only managed to complete 3 more tasks than Claude Code (out of 43) on a super niche, domain-specific benchmark.
Glad we're moving in this direction, I've also got a tool that I use to determine if writing is AI using common tropes and reconstruct the OG prompt from it: https://tropes.fyi/aidr
Engagement is great if you target a specific group. Don't need human content. It's ridiculously easy to start a Facebook page in a niche targeting a specific demographic, connect a site to it, unleash AI generated content, post it on FB and run ads. With enough traction, Facebook will pay you for making more content, while you extract money from your page followers. You're separating easy-to-influence boomers and conspiracy theorists from their money. It's disgusting, but it is ridiculously easy to make heaps of money with whatever content on Facebook.
One was so bad I had to write about it: https://ossama.is/writing/betrayed