
The built-in TOTP in Bitwarden password manager is only available to premium Bitwarden subscribers, requires you to have a Bitwarden account, and stores your TOTP codes in Bitwarden's servers.

This standalone app is available for free, can be used without an account, and the TOTP codes are only stored locally (or through your phone's native backup system).

Some people dislike the idea of storing TOTP codes in the same location as passwords, so it seems this helps provide those people with that separation, while still using Bitwarden products (which tbh is cool with me - a lot of the other TOTP apps on the app stores suck).
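
(For anyone wondering what "stored locally" means in practice: a TOTP app only ever stores the shared secret you scanned from the QR code; the rotating six-digit codes are derived from that secret plus the current time. A minimal sketch of RFC 6238 in Python, stdlib only - the base32 seed below is a common demo value, not a real secret:)

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32, digits=6, period=30):
        # RFC 6238: HOTP (RFC 4226) with the counter = current 30-second time step
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // period
        mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F                                   # dynamic truncation
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # demo seed widely used in TOTP examples

So "backing up your codes" really means backing up those seeds, which is why where they live (Bitwarden's servers vs. your phone's backup) matters.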


> Some people dislike the idea of storing TOTP codes in the same location as passwords,

And many organizations/companies have policies against that, although I don't know how anyone can enforce it.


> The built-in TOTP in Bitwarden password manager is only available to premium Bitwarden subscribers, requires you to have a Bitwarden account, and stores your TOTP codes in Bitwarden's servers.

if you self-host (e.g. with Vaultwarden) you get all the paid features for free


Vaultwarden doesn't have all the paid features.


That makes sense, thank you!


My company, a very very large company, is transitioning back to only in-person interviews due to the rampant amount of cheating happening during interviews.

As an interviewer, it's wild to me how many candidates think they can get away with it, when you can very obviously hear them typing, then watch their eyes move as they read an answer from another screen. And the majority of the time the answer is incorrect anyway. I'm happy that we won't have to waste our time on those candidates anymore.


So far 3 of the 11 people we interviewed have been clearly using ChatGPT for the >>behavioral<< part of the interview (like, just chatting about background, answering questions about their experience). I find that absolutely insane, if you cannot hold a basic conversation about your life without using AI then something is terribly wrong.

We actually allow using AI in our in-person technical interviews, but our questions are worded to fail safety checks. We'll talk about smuggling nuclear weapons, violent uprising, staging a coup, manufacturing fentanyl, etc. (within the context of system design) and that gives us really good mileage on weeding out those who are just transcribing what we say into AI and reading the response.


> I find that absolutely insane, if you cannot hold a basic conversation about your life without using AI then something is terribly wrong.

I'm genuinely curious what questions you ask during the behavioral interview. Most companies ask questions like "recall a time when..." and I know people who struggle with these kinds of questions despite being good teammates, either because they find it difficult to explain the situation, or due to stress. And the recruitment process is not a "basic conversation" — as the interviewer you're in a far more comfortable position. I find it hard to believe anyone would use an LLM if you ask them a question like "what were your responsibilities in your last role", and I do see how they might've primed the chat to help them communicate an answer to a question like "tell me about a situation when you had a conflict with your manager"


We usually just ask them to share their background, like the typical background exchange handshake at the beginning of any external call.

That normally prompts some follow-ups about specific work, specific projects, whether they know so-and-so at their old company. I call it behavioral because I don't have another word for it, but it's not brainteasers etc. like consulting/finance interviews.


Ha ha, that's a great idea!

I love the idea of embedding sensitive topics that ChatGPT and other LLMs will steer clear of within the context of a coding question.

Have you ever had any candidate laugh?

Any candidates find it offensive?


We usually get laughs, some quick jokes, etc.; some really involved candidates will ask if it’s worded that way to prevent using ChatGPT.

No one’s found it offensive; the prompt is mostly neutral, just very “dangerous activity” coded.


I think you (your company) and many other commenters here are just trying too hard.

I just recently led several interview rounds for a software engineering role and we have not had any issues with LLM use. What we do for the technical interview part is very simple - a live whiteboarding design task where we try to identify what the candidate's focus is, and we might pivot at any time or dig deeper into particular topics. Sometimes we will even go as deep as talking about the particular algorithms the candidate would use.

In general, I found that this type of interview is the most fun for both sides. The candidates don't feel pressure that they must do the only right thing as there is a lot of room for improvisation; the interviewers don't get bored with repetitive interviews over and over as new candidates come by with different perspectives. Also, there is no room for LLM use because the candidate has to be involved in drawing on the whiteboard and showing their technical presentation skills, which are very important for developers.


Unfortunately, we've noticed candidates who are on another call, with their screen fed to someone else who runs ChatGPT and pastes back the responses, since that person can hear both the interviewer and the candidate


I saw a pretty impressive cheat tool that could apparently grab the screen from the live share in response to an obscure keybind, run the text on screen through OCR, and then solve the problem (or just look up a LC solution).

At that point it seems like trying too hard, but be aware there are theoretical approaches which are extremely hard to detect (the inevitable evolution of sticky notes on the desk, or on the wall behind the monitor).


> if you cannot hold a basic conversation about your life without using AI then something is terribly wrong.

I wouldn’t be surprised if the effect of Google Docs and Gmail forcing AI on everyone is a generation of people who can’t even talk about themselves, and can’t compose a single email.

Is it necessary? Perhaps. Will it make the world boring? Yes.


What actually happens to the interviewee? Do they suddenly go blank when they realise the LLM has replied "I'm sorry, I cannot assist you with this", or do they try to make something up?


Yeah, pretty much. They either go silent for 2-3 minutes, or leave the call, claim their internet has cut out, and ask to reschedule.

Just one time someone got mad and yelled at the interviewer about nothing specific, just stuff like “I’m not who you are looking for, you will never find anybody to hire.”


llama2-uncensored to the rescue


So depressing to hear that “because of rampant cheating”

As a person looking for a job, I’m really not sure what to do. If people are lying on their resumes and cheating in interviews, it feels like there’s nothing I can do except do the same. Otherwise I’ll remain jobless.

But to this day I haven’t done either.


Here's the thing: 95% of cheaters still suck, even when cheating. It's hard to imagine how people can perform so badly while cheating, yet they consistently do. All you need to do to stand out is not be utterly awful. Worrying about what other people are doing is more detrimental to your performance than anything else is. Just focus on yourself: being broadly competent, knowing your niche well, and being good at communicating how you learn when you hit the edges of your knowledge. Those are the skills that always stand out.


Yea, but I also suck in 95% of FAANG-like interviews since I'm very bad at leetcode medium/hard type questions. It's just something that I never practiced. It's very tempting at this point to throw in the towel and just use some aid. No one cares about my intense career and the millions I helped my clients earn; all that matters (and sometimes directly affects comp rate) is how I do on the "coding task".


> I suck in FAANG interviews... it's just something I never practiced.

Well, sounds like you know the solution. Or set your sights on a job that interviews a different way.

I think it's mostly leetcode "easy", anyway. Maybe some medium. Never seen a hard, except maybe from one smartass at Google (they were not expecting a perfect answer). Out of a dozen technical interviews, I don't think I've ever needed to know a data structure more exotic than a hash map or binary search tree.

The amount of deliberate practice required to stand out is probably not more than 10-20 hours, assuming you do actually have the programming and CS skills expected for a FAANG job. It's unlikely you need to do months of grinding.

If 20 hours of work was all that stood between me and half a million dollars a year, I'd consider myself pretty lucky.


On the other hand, if 20 hours of leetcode practice is all that stands between you and half a million dollars a year, isn't that a pretty good indicator that the interview process isn't hiring based on your skills, talent and education, and instead on something you basically won't encounter in the workplace?


10-20 hours is assuming you’re qualified for the job and just bad at leetcode. I think many qualified people could pass without studying, especially if they’re experienced in presenting or teaching.

If you’re totally unqualified, 20 hours of leetcoding won’t get you a job at Meta.


Agency?


Right. Almost any time somebody fails an interview it is not because of "very hard questions" but because they did not prepare properly in a sensible manner. People want no whiteboarding, no programming questions, no mathematical questions, no Fermi problems, etc., which is plain silly and not realistic. One just needs to know the basics and simple applications of the above, which is more than enough to get through most interviews. The key is not to feel overawed/overwhelmed by unknown notation/jargon, which is the actual problem when people run away from big-O, DS/algo, recursion, applications of set theory/logic to programming, etc.


Well, cheaters only cheat because they suck and they know it. Otherwise cheating would not be a rational approach.


I don't approve of cheating but I think you're underestimating how hard some interview questions can be. Even competent people don't know everything and could draw a blank, in which case they would benefit from cheating despite being competent.


Not just difficult, but there are just so many of them (for the same company ofc). You could ace 3 interviews and not even be halfway through the process. You have to be continually on top form for days/weeks on end.


Right, last offer I got required 7 (non-HR) steps over a 4-month period, where around a dozen technical people got involved.

I don't/won't cheat, as I am a rather anxious person who can't really handle "covert ops". But at this point I totally understand those who do.


A lot of these people also have a policy that even one person can fail you. So if you do 8 interviews with 2 people each, then there are up to 16 people in the process who can ruin it for you.

I think LLM performance on previously seen questions like interview questions is too good for it to be allowed. I wouldn't mind someone using an IDE or API docs, but you have to draw the line somewhere. It's like how you can't use a calculator that can do algebra on a calculus test. It just doesn't accomplish the goal of testing anything if you use too much tech, and using the current LLMs that all suck in general but can nail these questions that have a million examples online is bad. I would much rather see someone consult online references for information than to see them use an LLM.


I've only heard of candidates getting dropped like that after they've done something horribly inappropriate (like cheating).


If that were true we would never hear of top level athletes using performance enhancing drugs.


That's different. These candidates are not trying to get an edge on qualifying for the top dev position in the entire world.

Top athletes do it because they're essentially at the limits of human performance and those drugs are the only edge they can reasonably get.


Kids on my tiny high school football team did steroids to get an edge - no chance at a scholarship, either.

Different people have a different threshold for cheating no matter the stakes. I imagine some people cheat even if they know the answer - just to be sure.


It's much more widespread. Minor league player uses PEDs to make the major leagues. Middling major leaguer uses them to be an all-star. All-star uses them to make the hall of fame. In the context of programming, if some kind of cheating is what's necessary to nab a $150k job, a whole lot of people are going to cheat.


Yeah, we found this when we started doing take-home exams: it turns out that a junior dev who spends twice as much time on the problem as we asked for still doesn’t put out senior-level code - we could read the skill level in the code almost instantly. Same thing with cheating like that - it turns out knowing the answer isn’t the same thing as having experience, and it’s pretty obvious pretty quickly which one you’re dealing with.


I don't know, I kind of feel like leetcode interviews are a situation where the employer is cheating. I mean, you're admittedly filtering out a great number of acceptable candidates knowing that if you just find 1 in a 1000, that'll be good enough. It is patently unfair to the individuals that are smart enough to do your work, but poor at some farcical representation of the work. That is cheating.

In my opinion, if a prospective employee is able to successfully use AI to trick me into hiring them, then that is a hell of a lot closer to the actual work they'll be hired to do (compared to leetcode).

I say, if you can cheat at an interview with AI, do it.


I dunno why there is always the assumption in these threads that leetcode is being used. My company has never used leetcode-style questions, and likely never will.

I work in security, and our questions are pretty basic stuff. "What is cross-site scripting, and how would you protect against it?", "You're tasked with parsing a log file to return the IP addresses that appear at least 10 times, how would you approach this?" Stuff like that. And then a follow-up or two customized to the candidate's response.
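
(For the first question, one baseline answer is contextual output encoding; a toy Python illustration of the core idea - just the concept, not a complete defense:)

    from html import escape

    user_input = '<script>alert("xss")</script>'
    # Escaping on output renders attacker-supplied markup inert:
    print(escape(user_input))  # &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;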

I really don't know how we could possibly make it easier for candidates to pass these interviews. We aren't trying to trick people, or weed people out. We're trying to find people that have the foundational experience required to do the job they're being hired for. Even when people do answer them incorrectly, we try to help them out and give them guidance, because it's really about trying to evaluate how a person thinks rather than making sure they get the right answer.

I mean hell, it's not like I'm spending hours interviewing people because I get my rocks off by asking people lame questions or rejecting people; I want to hire people! I will go out of my way to advocate for hiring someone that's honest and upfront about being incorrect or not knowing an answer, but wants to think through it with me.

But cheating? That's a show stopper. If you've been asked to not use ChatGPT, but you use it anyway, you're not getting the benefit of the doubt. You're getting rejected and blacklisted.


>I dunno why there is always the assumption in these threads that leetcode is being used

because it matches my experience. I work in games and interviews are more varied (math, engine/language questions, game design questions, software design patterns). I'd still say maybe 30% of them do leetcode interviews, and another 40% bring in leetcode questions at some point. I hate it because I need to study too many other types of questions to begin with, and leetcode is the least applicable.


> "You're tasked with parsing a log file to return the IP addresses that appear at least 10 times, how would you approach this?"

Out of curiosity, did anyone just reply with `awk ... | sort | count ... | awk`? It's certainly what I would do rather than writing out an actual script.


Nobody has yet, but if they did I'd probably be ecstatic! We specifically tell candidates they can use any language they want. A combination of awk/sort/sed/count/etc is just as effective as a Python script!
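
(For the curious, a minimal sketch of what an acceptable answer might look like, assuming a plain-text log with one entry per line; a rough shell equivalent is in the trailing comment:)

    import re
    from collections import Counter

    IP = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")  # naive IPv4 match

    def frequent_ips(path, threshold=10):
        counts = Counter()
        with open(path) as fh:
            for line in fh:
                counts.update(IP.findall(line))
        return {ip: n for ip, n in counts.items() if n >= threshold}

    # Roughly: grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}' access.log | sort | uniq -c | awk '$1 >= 10 {print $2}'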


I once got a surprise leetcode coding interview for a security testing role that mentioned proficiency in a coding language or two as desirable but not essential.

I come from a math background rather than CS and code for fun / personal projects, so don't know the 'proper' names for some algorithms from memory. I could have done some leetcode prep / revision if I had any indication that it was coming up, though the interview was pretty much a waste of time. I told them that and made a stab at it, though they didn't seem interested in engaging at all and barely made eye contact during the whole interview.


The employer sets the terms of the interview. If you don’t like them, don’t apply.

What you’re suggesting here isn’t any different than submitting a fraudulent resume because you disagree with the required qualifications.


> The employer sets the terms of the interview. If you don’t like them, don’t apply.

What you're missing here is that this is an individual's answer to a systemic problem. You don't apply when it's _one_ obnoxious employer.

When it's standard practice across the entire industry, we have a problem.

> submitting a fraudulent resume because you disagree with the required qualifications.

This is already worryingly common practice because employers lie about the required qualifications.

Honesty gets your resume shredded before a human even looks at it. And employers refusing to address that situation is just making everything worse and worse.


You make a valid point that while the rules of the game are known ahead of time, it’s strange that the entire industry is stuck in this local maximum of LeetCode interviews. Big companies are comfortable with the status quo, and small companies just don’t have the budget to experiment with anything else (maybe with some one-offs).

Sadly, it’s not just the interview loops—the way candidates are screened for roles also sucks.

I’ve seen startups trying to innovate in this space for many years now, and it’s surprising that absolutely nothing has changed.


>I’ve seen startups trying to innovate in this space for many years now, and it’s surprising that absolutely nothing has changed.

I don't want to be too crass, but I'm not surprised: people who can start up a business are precisely the ones who hyper-fixate on efficiency when hiring and try to find the best coders instead of the best engineers. When you need to put your money where your mouth is, many will squirm back to "what works".


> Honesty gets your resume shredded before a human even looked at it

Does it? Mine is honest, fairly normal, and gets me through to interviews fine. What are common lies and why are they necessary?


Or he can simply choose to ignore the arbitrary and often pointless requirements, do the interview on his own terms, and still perform excellently. Many job requirements are nothing more than a pointless power trip from employers who think they have more leverage than they actually do.


I would like to be paid though. What do I care about the terms of the interview as long as they hire me?

What is being suggested here is not participating in the mind numbing process that is called ‘applying for a job’.


You're absolutely right. Ditching the pointless corporate hoops, proving you can do the job, and getting paid like anyone else is what truly matters. Most hiring processes are just bureaucratic roadblocks that needlessly filter out great candidates. Unless you're working on something truly critical, there's no reason to play along with the nonsense.


Wanting to be paid under false pretenses is the definition of fraud.


That doesn’t make any sense. The best engineers I know can’t pass these interviews because they started working long before they became standard.


That doesn’t matter. If the current qualification bar is “must do X” and you fake it, you’re committing fraud.


So being paid for even excellent performance is fraud?


> Wanting to be paid under false pretenses is the definition of fraud.

What? No, it isn't.

Regardless, if the job requirements state "X years of XYZ experience" and you do have >X years of experience, then using AI to look up how to do a leetcode problem for some algorithm you haven't used since your university days is absolutely not "false pretenses" nor fraud.


If the interview process says, “you must do X without using AI” and you use AI and hide it, you’re committing fraud.


Nope.


> What do I care about the terms of the interview as long as they hire me?

well that's the neat part... they aren't going to. All this AI stuff just happened to coincide with a recession no one wants to admit, amplifying the issue.

So yea, even if I'm desperate I need to be mindful of my time. I can only do so many 4-5 stage interviews only to be ghosted, have the job close, or someone else who applied earlier get the position.


> What do I care about the terms of the interview as long as they hire me?

Because committing fraud to get hired is a pretty shitty way to live your life


If you lie about your qualifications to a degree that can be considered fraud, employers can and will sue you for their money back and damages. Wait till you discover how mind-numbing the American legal system is!


I’m sorry, is the job “professional Leetcoder”?


Nonsense. I don't endorse lying about qualifications, but employers don't sue over this. Employment law in most US states wouldn't even allow for that with regular W-2 employees.


Yea, exactly.

If a candidate were up front with me and asked if they could use AI, or said they learned an answer from AI and then wanted to discuss it with me, I'd be happy with that. But attempting to hide it and pretend they aren't using it when our interview rules specifically ask you not to do it is just being dishonest, which isn't a characteristic of someone I want to hire.


On principle, what you’re saying has merit. In practice, the market is currently rife with employers submitting job postings with inflated qualifications, for positions that may or may not exist. So there are bad actors all around and it’s difficult to tell who is actually behaving with integrity.


> If you don’t like them, don’t apply.

Due to the prevalence of the practice this is tantamount to suggesting constructive unemployability.

People were up in arms about widespread doping during the Lance Armstrong era. But the only viable alternative to doping at the time was literally to not compete at all.


I wouldn't call it cheating, but most of the time it's just stupid. For the majority of software developer jobs it would be more suitable to discuss the solution to a more complex problem than to randomly stress people out just because you think you should.


> It is patently unfair to the individuals that are smart enough to do your work, but poor at some farcical representation of the work. That is cheating.

On the other hand, if you have 1,000 candidates, and you only need 1, why not do it if the top candidate selected by this method can do well on the test and your work?


It’s unfair, but it meets their objective of finding a high-quality candidate. Google admits they do this.

The companies that do this only do it because they can. They have to have hundreds of people applying. The companies that don’t do this basically don’t have many people applying.


> it feels like there’s nothing I can do except do the same.

Why does it feel like that when you’re replying to someone who already points out that it doesn’t work? Cheating can prevent you from getting a job, and it can get you fired from the job too. It can also impede your ability to learn and level up your own skills. I’m glad you haven’t done it yet, just know that you can be a better candidate and increase your chances by not cheating.

Using an LLM isn’t cheating if the interviewer allows it. Whether they allow it or not, there’s still no substitute for putting in the work. Interviews are a skill that can (and should) be practiced. Candidates are rarely hired for technical skill alone. Attitude, communication, curiosity, and lots of other soft skills are severely underestimated by so many job seekers, especially those coming right out of school. A small amount of strengthening your non-code abilities can improve your odds much faster than leetcode ever will. And if you have time, why not do both?


Note also "And the majority of the time the answer is incorrect anyway."

I haven't looked for development-related jobs this millennium, but it's unclear to me how effective a crutch AI is for interviews--at least for well-designed and run interviews. Maybe in some narrow domains for junior people.

As a few of us have written elsewhere, I consider not having in-person interviews past an initial screen sheer laziness and companies generally deserve whoever they end up with.


> it feels like there’s nothing I can do except do the same. Otherwise I’ll remain jobless.

Never buy into this mentality. Because once you do, it never goes away. After the interview, your coworkers might cheat, so you cheat too. Then your business competitors might cheat, so you cheat too. And on and on.


sounds cheesy, but keep being honest. Eventually companies will realize (as we did years ago) that automating recruiting gets you automated candidates.

But YMMV. I have 9 years of experience and can still get interviews the old-fashioned way.


When I was interviewing entry level programmers at my last job, we gave them an assignment that should only take a few hours, but we basically didn't care about the code at all.

Instead, we were looking to see if they followed instructions, and if they left anything out.

I never had a chance to test it out, since we hadn't hired anyone new in so long, but ChatGPT/etc would almost always fail this exam because of how bad it is at making sure everything was included.

And bad programmers also failed it. It always left us with a few candidates that paid attention, and from there we figure if they can do that, they can learn the rest. It seemed to work quite well.

I was recently laid off from that company, and now I'm realizing that I really want to see what current-day candidates would turn in. Oh well.


For those tests I never follow the rules; I just make something quick and dirty because I refuse to spend unpaid hours. In the interview the first question is why I didn't follow the instructions, and they think my reason is fair.

Companies seem to think that we program just for fun and ask us to make a full-blown app... also underestimating the time candidates actually spend making it.


If you’re spending the time applying and submitting something then you might as well spend the extra 30 minutes or so to do it right, no?


Any time someone says ‘should only take a few hours’ they’re far underestimating the time it actually takes.


It's never been 30 minutes for me. Even leetcode timed exams tended to be 60-90 minutes.

recently I spent a good 10 hours making a crossword solver. Hiring freeze a few days after I turned it in. I completely get GP's mentality.


Not if you’re applying to hundreds, or thousands of jobs. Unless you know someone, it’s a quantity game.


I’ve screened a lot of resumes and given a lot of interviews over the years, and it’s usually obvious when people are trying the scattershot approach, they just don’t match. I feel like treating it like a quantity game is unlikely to improve your odds, and tbh spamming out hundreds or thousands of applications sounds like a miserable way to spend time. You could spend that time meeting and talking to people. I’ve never applied to more than 2 jobs at once, jobs that I actually want, and never had trouble getting at least one of them (and it still takes time and effort and some coding and interviews).


It wouldn’t be obvious they’re using a scattershot approach when they’re a good match, though. I don’t see the downside.


Maybe not at the resume screening phase, but it’s usually still obvious once the interviews start when people aren’t interested in your specific company. Some people get lucky, sure, but the downside is that you have to get lucky, it’s wasting valuable time on low probability events. If you’re familiar with the statistical process of importance sampling, in my experience on both sides of the interview table, it’s effective and worthwhile to spend more time curating higher quality samples than to scatter and hope.


>but it’s usually still obvious once the interviews start when people aren’t interested in your specific company.

Can you really blame them? If you're not a household name, why would you expect someone to spend hours researching your specific company?

On the other hand, it can come off as creepy if you're a small company and suddenly someone nerds out about how your CEO said this one thing at a talk years ago and knows your lead has cancer based on his personal blog. I'd rather just treat it as a transaction of my skills and services for money. We are not a family (multiple layoffs have taught me so).

> it’s effective and worthwhile to spend more time curating higher quality samples than to scatter and hope.

Not in this market. Too many ghost jobs, too many people ghosting after multiple rounds. Too many hiring freezes when you spend a month talking with a company. If you want respect from candidates, don't disrespect them.


Naw I don’t blame them. I’m not suggesting anyone spend hours researching each company. And I don’t expect candidates to do anything, I’m saying the candidates who do are the ones that tend to land the job, but it’s entirely the candidate’s choice. All it takes is minutes, really.

You sound like you’ve been burned. That sucks and I’m sorry, I sympathize. I’m hearing that the job market is very tough right now. A big part of that is because it’s extremely competitive. Taking it personally and assuming it’s disrespect isn’t going to help get the job though (even if there was disrespect… but that’s not the only explanation, so it’s a dangerous assumption).


>I’m saying the candidates who do are the ones that tend to land the job, but it’s entirely the candidate’s choice. All it takes is minutes, really.

Well, everyone has different experiences. I never felt like knowing about a company put me ahead in my early days. I guess I have a dump stat in Charisma (not surprised).

Like you said, the market is competitive. No one's going to take the nice guy over the one who blitzes an interview unless that nice guy has connections. Those few minutes across thousands of applications add up to days of research. I just lack that time and energy these days.

>You sound like you’ve been burned. That sucks and I’m sorry, I sympathize.

several times, yes. It's honestly worse than my first job search out of college 10 years ago.

>Taking it personally and assuming it’s disrespect isn’t going to help get the job though

I only ask for basic decency. Keep a candidate in the loop, don't drag the process on for the sake of it, any take home should warrant a response (even if it's a template rejection letter). i.e. respect people's time.

I haven't been burned in a lot of my interviews, I'm not talking about bummers like the several times I was interviewing before a hiring freeze. I don't even treat non-responses as an interview process. But several of them just end with absolutely no communication nor closure after speaking for weeks with recruiters and hiring managers.

I don't know what to call that in a day and age where AI is supposedly increasing efficiency, other than disrespect. This never happened before 2023, which makes the times all the weirder.


My experience is that I've applied to companies where I was a perfect fit but did not get an interview, and then I've applied to companies where I had not used any of their tech stack and still got an interview... There are a lot of weird reasons. One common one is that they want to hire a specific person, maybe even someone already in the company, but they still need to post a job ad due to company policy. Another: I once got an interview even though I had no experience in their tech stack; they explained they needed to conduct at least 5 interviews before they hire, and they had already found their guy, so they interviewed other non-qualified candidates so that their candidate/friend would stand out as the most qualified...

So never take hiring personally. It's just random. Do enough work to get an interview; many employers are very good at judging if you will fit in or not, so just leave that for them to figure out, and be yourself. And don't take it personally when you get rejected. There's still a shortage of experienced software engineers, and lots of jobs to apply to.

Also, if you get a bad feeling, just back out. It's when you've started turning down offers that you have become good enough at searching/interviewing, and that's when you will find something great. Try to have at least 3 offers before you accept one.


You don’t understand reality. If all companies have 1000 candidates your only approach is scattershot.

The only time the bespoke approach works is if you have like 30 candidates only. But then there are still issues here because the candidate is still one in thirty so if he does a bespoke approach 30 times it takes an inordinate amount of time.


That’s not reality. Not all companies have 1000 candidates. Not all companies have even 30 candidates.


This is the reality nowadays. There's an abundance of candidates thanks to bootcamps.


Got any evidence to share? It’s simply not true that “all” companies get a thousand applicants, whether you mean per job or total. Startups aren’t inundated with applicants. Neither are schools or hospitals or most web design shops or hundreds of other non-tech places that employ programmers. Some of the biggest tech names do get a lot of applicants, sometimes, for certain jobs, but I suspect you’re probably ignoring the majority of non-FAANG type businesses. Kids are definitely disproportionately aiming for the jobs that they’ve heard stories about paying really well, like AI and Apple, Facebook, Nvidia, etc. Those jobs can be super competitive, and they generally just don’t hire from bootcamps. Spamming entry-level bootcamp resumes at big tech companies isn’t going to improve anyone’s odds much or at all, but whatever, you don’t have to take my word for it.


The industry (all industries, really) might want to reconsider online applications, or at least privilege in-person resume drop-offs, because the escalating AI application/evaluation war that's happening doesn't seem to be helping anyone.


No it's because AI shifted power over to the applicant.


this is a very strange statement. in what world did AI possibly shift power to the applicant?? applicants have almost never been in a shittier position than they are now and things are getting much, much worse by the day


Yeah, I don't get this either. I've been looking for a job for like 3-4 years, even an entry-level one, since I graduated college in May of '22, and I still haven't found one. I'm probably doing something wrong (and that's a different discussion), but it's getting harder and harder to know if it's me or the AI applicants or the AI ATS system. And then we have the AI job seekers, which are AI-created accounts trying to find employment -- I've already started to see a few of these pop up on LinkedIn. They were banned, but still, the fact it's happening at all is a bit worrying, if not predictable.


The applicant can use AI to build CVs, tailor them and submit more, faster. Negating a lot of the automated algorithms that were being used to filter (torture) applicants.


and what exactly do you think is processing those AI-built CVs…? it is like sex, more and faster does not in any way imply better :)


How so? Tons of companies are moving to AI-automated intake systems because they're getting flooded with low-quality AI-generated resumes. Of course, the original online application systems were terrible already, which is what encouraged people towards low effort in their applications, so it's become a stalemate.


Did it? What I see instead is total mistrust of the open resume pool, because the percentage of outright lies, from resume to behavioral to everything else, is just that high. So I see companies throwing up their hands and going back to maximum prioritization of in-network candidates, where we have someone vouching that the candidate is not a total waste of everyone's time.

The one who loses all power is the new junior straight out of school, which used to already be difficult to distinguish from many other candidates with similar resumes: Now they compete with thousands upon thousands of total fakes which claim more experience anyway.


Undergrads have many more opportunities to differentiate themselves than they realize. It could be internships, research, TA, clubs, sports, volunteering, Greek life, etc. Those put them closer to being "in-network" with certain organizations and people.

Even something like citizenship is a differentiating factor: an undergrad who applies to, say, a national lab won't compete with foreign students by definition.


> it's wild to me how many candidates think they can get away with it

Remember that you are only catching the candidates who are bad at cheating.


That's fine. The ones who are "good cheaters" are probably smarter than many honest people. Think about your school days, when your smartest peers were cheating anyway, despite knowing the material well enough to have taught it to you organically earlier on. Those kinds of cheaters do it to turn an A into an A+, not because they don't understand the material.


Is it cheating if I can solve the problem using the tools of AI, or is it just solving the problem?


Interviews aren’t about solving problems. The interviewer isn’t interested in a problem’s solution, they’re interested in seeing how you get to the answer. They’re about trying to find out if you’ll be a good hire, which notably includes whether you’re willing and interested in spending effort learning. They already know how to use AI, they don’t need you for that. They want to know that you’ll contribute to the team. Wanting to use AI probably sends the wrong message, and is more likely to get you left out of the next round of interviews than it is to get you called back.

Imagine you need to hire some people, and think about what you’d want. That’ll answer your question. Do you want people who don’t know but think AI will solve the problems, or do you want people who are capable of thinking through it and coming up with new solutions, or of knowing when and why the AI answer won’t work?


> They’re about trying to find out if you’ll be a good hire, which notably includes whether you’re willing and interested in spending effort learning

I admire this worldview, and wish for it to be true, but I can't help but see it in conflict with much of what floats around these parts.

There's a recent thread on Aider where the authors proudly proclaim that ~80% of its code is written by Aider itself.

I've no idea what to make of the general state of the programming profession at all at the moment, but I can't help but feel learning various programming trivia has a lower return on investment than ever.

I get learning the business and domain and etc, but it seems like we're in a fast race to the bottom where the focus is on making programmers' skills as redundant as possible as soon as possible.


>I admire this worldview, and wish for it to be true, but I can't help but see it in conflict with much of what floats around these parts.

Honest interviewers may not realize how dishonest other interviewers have become in just the last 2-3 years. Interviewing today compared to COVID times is night and day. Let alone the 2010s gold rush.

The respect is long gone.


> Interviews aren’t about solving problems.

Eh, I wish more people felt that way; I have failed so many interviews because I haven't solved the coding problem in time.

The feedback has always been something along the lines of "great at communicating your thoughts, discussing trade-offs, having a good back and forth" but "yeah, ultimately really wanted to see if you could pass all the unit tests."

Even in interview panels I've personally been a part of, one of the things we evaluate (heavily) is whether the candidate solved the problem.


Isn't one way of solving the problem using all the tools at your disposal? At the end of the day, isn't having working code the fundamental goal? I guess you could argue that the code needs to be efficient, stable, and secure. But if you could use "AI" to get partway there, then use smarts to finish it off, isn't that reasonable? (Devil's advocate.) The other big question is the legality of using code from an AI in a final commercial product.


Yes that’s a fair question. Some companies do allow LLMs in interviews and on the job. But again the solution isn’t what the interviewer wants, so relying on an LLM gives them no signal about your intrinsic capabilities.

Keep in mind that the amount of time you spend in a real job solving clear and easy interview style problems that an LLM can answer is tiny to none. Jobs are most often about juggling priorities and working with other people and under changing conditions, stuff Claude and ChatGPT can’t really help you with. Your personality is way more important to your job success than your GPT skills, and that’s what interviewers want to see… your personality & behavior when you don’t know the right answer, not ChatGPT’s personality.


Yeah, everyone says they are interested in how you got there, but in my experience this isn't true in reality. Your bias inevitably judges them on the solution, because you have many other candidates who got the correct solution.


You’re right, interviewers will still care about whether you come up with a solution, and they care about the quality of the solution. The part you might be missing is that what I said and what you said aren’t mutually exclusive; they are both true. Interviewers do have to compare you to other candidates, and they are looking for the candidates that stand out. They want more than a binary yes/no signal, if at all possible. What I was trying to say is that the interviewer doesn’t need the solution to the problem they ask you to solve, what they need is to see how well you can solve it. I hope that’s stating the obvious, but it’s worth really letting it sink in. It’s super common for early-career programmers to be afraid of interviews and complain about them. Things change once you start doing the interviewing and see how the process works.


If you've been given the problem of "without using AI, answer this question", and you use an AI, you haven't solved the problem.

The ultimate question that an interview is trying to answer is not "can this person solve this equation I gave them?", it's usually something along the lines of "does this person exhibit characteristics of a trustworthy and effective employee?". Using AI when you've been asked not to is an automatic failure of trust.

This isn't new or unique to AI, either. Before AI people would sometimes try to look up answers on Google. People will write research papers by looking up information on Wikipedia. And none of those things are wrong, as long as they're done honestly and up front.


If you are pretending to have knowledge and skills you don't have, you are cheating. And if you have the required knowledge and skill, AI is a hindrance, not a help; you can solve the problem easily without it. So "is using AI cheating"? IDK, but logically you wouldn't use AI unless you were cheating.


Knowledge and skill are two different things. Sometimes interviewers test that you know how to do something, when in practice it's irrelevant if you A) know how to retrieve that knowledge and B) know when to retrieve it.


There is foundational knowledge you must have memorized through a combination of education and experience to be a software developer. The standard must be higher than "can use google and cut and paste." The answer can't always be - "I don't need to be able to recall that on command, I can google/chatgpt that when I end up needing it." Would you go to a surgeon who says "I don't need to know exactly where the spleen is, I can simply google it during surgery."


For the goal of the interview - showing your knowledge and skills - you are failing miserably. People know what LLMs can do, the interview is about you.


I guess it's more a question of whether you can solve the problem without AI.

In most interview tasks you are not solving the task "with" AI.

It's the AI that solves the task while you watch it do it.


Some can be quite good at the cheating: at least good enough to get through multiple layers. I've been in hiring meetings where I was the only one of 4 rounds that caught the cheating, and they were even cheating in behaviorals. I've also been in situations with a second interviewer, where the other interviewer was completely oblivious even when it was clear I was basically toying with the guy reading from the AI, leading the conversation in unnatural ways.

Detection of AI in remote interviews, behavioral and technical, just has to be taught today if you are ever interviewing people that don't come from in-network recommendations. Completely fake candidates are way too common.


I'm at the same company, I think. I don't get why we can't just use some software that monitors clicking or tabbing away from the window, tell candidates explicitly that we are monitoring them, and that looking away or tabbing away will appear suspect.


I haven’t been doing that much interviewing, but in the dozen or so candidates I’ve had I don’t think a single one has tried to use AI. I almost wish they would, as then at least I’d get past the first half of the question…


I'm using AI for interview screeners for nontechnical roles that require knowledge work. The AI interviewing app is very, very basic; it's just a wrapper put together by an eng, with enough features to prevent cheating.

Start with recording the session and blocking right-click, and you are halfway there. It's not hard.

The AI app has helped me surface top candidates. I don't even look at resumes anymore. There's no point. I interview the top 10 out of 200, and then do my references and select.


If there's no independent verification, how do you know it's really top 10? (not middle 10, or random 10?)


Because there are scores, and the interview is highly customized and has an answer key that has been tested beforehand by the test creators.


I second this. My previous company started in-person interviews for this very reason.


I mean, they could be googling things; I've definitely googled stuff during an interview. I do think in-person interviews are important though; I did some remote final interviews with Amazon and they were all terrible.


It wasn't a police chopper, it was a military VH-60, also known as a "White Hawk" [1]. It's a VIP transport helicopter, the same type that is used to transport the president.

~The flight track of the helicopter [2] starts at a property in McLean, VA (edited to remove likely inaccurate info)~

The chopper was based out of Fort Belvoir, and based on similar past flight tracks, looks like it probably took off from there too. CNN is reporting that there were 3 soldiers onboard, and no VIPs.

1: https://simple.wikipedia.org/wiki/Sikorsky_VH-60N_White_Hawk

2: https://globe.adsbexchange.com/?icao=ae313d&lat=38.952&lon=-...


> The flight track of the helicopter starts at a property in McLean, VA

that's almost certainly not where the flight started, due to intricacies of how this sort of flight tracking works.

if you look at [0] it has tracks of both flights. toggle the right-hand sidebar, if it's not open already, and you'll see a table containing both planes. the helicopter (PAT25) is yellow, the plane (JIA5342) is blue. the legend right below that explains the color-coding - the plane's data came from ADS-B, while the helicopter's data came from multilateration (MLAT).

MLAT [1, 2] works by having multiple ADS-B feeder stations cooperate in real-time and deduce an aircraft's position based on timestamps of when the signal is received. it allows tracking aircraft that only broadcast the more limited Mode S data, instead of the newer and more detailed ADS-B.

because it requires multiple cooperating receivers, the start of the track in suburban McLean does not mean it took off from there. it just means that was the point in its flight where it became visible to enough receivers that MLAT was able to pin down a position.

you can also see this difference just by looking at the tracks - the plane is broadcasting its own position continuously, so its track is nice and smooth. meanwhile the helicopter's flight looks "jagged" in a way that does not match what its actual flight path would have been. this is an artifact of the small errors introduced by MLAT.

0: https://globe.adsbexchange.com/?icao=ae313d,a97753

1: https://www.flightaware.com/adsb/mlat/

2: https://adsbx.discourse.group/t/multilateration-mlat-how-it-...
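
(to make the MLAT idea concrete, here is a toy 2D version - made-up receiver coordinates, flat-earth geometry, perfect clocks, none of which hold in real deployments; scipy does the least-squares fit:)

    import numpy as np
    from scipy.optimize import least_squares

    C = 299_792_458.0  # propagation speed, m/s

    # four hypothetical receivers on a flat plane (metres)
    receivers = np.array([[0.0, 0.0], [30_000.0, 0.0],
                          [0.0, 40_000.0], [25_000.0, 35_000.0]])

    # simulate arrival times of one transmission from an unknown position
    true_pos = np.array([12_000.0, 18_000.0])
    arrival = np.linalg.norm(receivers - true_pos, axis=1) / C  # transmit time t = 0

    def residuals(p):
        x, y, t = p  # unknowns: aircraft position and (unknown) transmit time
        return t + np.linalg.norm(receivers - np.array([x, y]), axis=1) / C - arrival

    fit = least_squares(residuals, x0=[*receivers.mean(axis=0), 0.0])
    print(fit.x[:2])  # recovers ~[12000, 18000]

with noisy timestamps the recovered position jitters, which is exactly the "jagged" track artifact described above.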


And just to be clear, the POTUS/super-special VIP transport is run by the USMC out of Quantico [1], a bit further to the south along the Potomac. Belvoir is US Army, and does have VIP helicopters (obviously), but it's not the same group that lands at the White House.

1 - https://en.wikipedia.org/wiki/HMX-1 They have some Ospreys and other things as well, but HMX-1 is the most famous and recognizable.


[flagged]


once you know to look at those two specific flights it probably gets easier, yeah. if you were looking at everything in the air at the time I think less so?


I mean, you can get the data from an API, probably. Then you just write a Python script to keep an eye on colliding trajectories.


I'm sorry, do you HONESTLY think ATC doesn't do that? ATC was literally mid-conversation with the helicopter to deconflict its path when the collision happened.

What is it with people insisting that the smart people literally tasked with their job somehow have no idea how to do it?


Well, it appears that the situation came as a surprise to at least one of the parties involved.


[2] shows the helicopter taking off 2 miles away from the old Saudi embassy in McLean, marked “permanently closed” on Google Maps. (The current embassy is in DC proper, directly across the river from DCA airport.)

I don’t think that's strong evidence that it took off from the old Saudi embassy - that's pretty far away even given your caveat about accuracy.

Edit: it looks to me like the Black Hawk was coming from somewhere else with its ADS-B turned off entirely, and then turned on ADS-B once it reached the Potomac to approach DCA. The first two datapoints of that flight already show it going 110 mph, which it's unlikely to have been able to reach in just 0.2 miles after takeoff.
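
(Back-of-envelope on that, assuming constant acceleration from a standstill; whether ~0.38 g sustained for ~13 seconds is plausible for a departing UH-60 is the open question:)

    v = 110 * 0.44704   # 110 mph in m/s, ~49.2
    d = 0.2 * 1609.34   # 0.2 miles in metres, ~322
    a = v**2 / (2 * d)  # constant-acceleration kinematics: v^2 = 2ad
    print(a, a / 9.81, v / a)  # ~3.8 m/s^2, ~0.38 g, held for ~13.1 s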

Edit 2: The route also looks very similar to this flight from 11 days earlier (but reversed in direction): https://www.flightaware.com/live/flight/PAT25 This shows the Black Hawk at 300 feet passing by DCA on what seems like a routine or training flight? I don't know how to look up historical flights to see if this is a commonly flown route. On that flight, the Black Hawk flew past DCA at 300 feet of altitude, and the last FlightAware data for the American Eagle passenger flight showed 400 feet of altitude.


I didn't say it took off from the old embassy. The flight track starts at the backyard of a house that is currently owned by the embassy. You can see the owner of that property by searching that address here (the site doesn't support a direct link): https://icare.fairfaxcounty.gov/ffxcare/search/commonsearch....


That house doesn't seem to have enough clearance vs the trees to land a helo. Note that Langley (CIA) is nearby.


Thank you!


Regarding your edit: that's a good point, but the vertical ascent rate of the chopper at those first few data points shows 400-800 feet per minute, which is consistent with a chopper taking off...

edit: your second edit makes me think you're right though, tonight's flight track was pretty consistent with the past flight track where it looks like it both took off from and landed at Fort Belvoir.


811 Lawton St, Mc Lean, VA 22101 Name: SAUDIA ARABIA ROYAL EMBASSY OF, Mailing Address: 8500 HILLTOP RD STE 301 C/O FINANCIAL DIRECTOR FAIRFAX VA 22031 4310


Note that that track is not ADS-B at all. It’s triangulated from mode S pings (“MLAT”, https://www.flightaware.com/adsb/mlat/). I don’t know how accurate these are.


CNN is reporting the helicopter came from Fort Belvoir.

https://www.cnn.com/us/live-news/plane-crash-dca-potomac-was...


The helicopter seems like it is typically stationed at Fort Belvoir. Does "out of Fort Belvoir, Virginia" strictly mean that the helicopter's flight started at Fort Belvoir, or that the helicopter itself is considered to be "out of Fort Belvoir" in a similar manner that LeBron James could be said to be "out of Akron, OH"?


The current embassy is indeed in DC proper, but it isn't directly across the river from DCA.

It's about a mile upriver, near the Watergate.


Military flights do not fly ADS-B hot outside of the DC FRZ. It was also clearly following flight route 4 exactly.

There are a few locations in that area it could have been coming from. Anything else would have made no sense flying through the FRZ from/to Belvoir.


What is "flight route 4"?


The FAA publishes supplemental charts specifically for helicopters in areas with high concentrations of helicopter activity: https://www.faa.gov/air_traffic/flight_info/aeronav/productc...

Specifically the Baltimore-Washington route chart was relevant for these flights: https://aeronav.faa.gov/visual/10-31-2024/PDFs/Balt-Wash_Hel...

If you find the DCA airport on that chart, you’ll find routes 1 and 4 which roughly correspond to the helicopter’s flight path.


Right.

Helicopter was on Helicopter Route 4, per the map, apparently on course.

Aircraft was on approach to Runway 33, apparently on course.

That helicopter route crosses the approach to runway 33.

That's controlled airspace. How did they both have clearance to be there?

We'll know more tomorrow as all the audio and radar recordings are examined.


This comment was written by a US Coast Guard helicopter pilot. It gives a lot of information on how the two aircraft should have been able to share the space and some speculation on what went wrong:

https://www.reddit.com/r/aviation/comments/1idba8i/comment/m...


That makes sense.

A helicopter instructor suggests that possibly the helicopter pilots, who were told to go behind an aircraft that was landing, were looking at the previous aircraft that was landing.[1] That's just speculation at this point.

[1] https://philip.greenspun.com/blog/2025/01/29/reagan-national...


They could use the car autopilot solution: simply negotiate your coordinates with nearby traffic instead of trying to parse malformed visual data.


They were almost certainly both cleared to maintain visual separation.


> They were almost certainly both cleared to maintain visual separation.

DCA TWR: PAT25, traffic just south of the Woodrow Bridge, a CRJ, it's 1200 feet setting up for runway 33.

PAT25: PAT25 has the traffic in sight, request visual separation.

DCA TWR: Visual separation approved.

Audio of MID-AIR CRASH into Potomac River: https://www.youtube.com/watch?v=CiOybe-NJHk


That was clearly a training flight


But does that provide any context? Military pilots perform training flights regularly - it's not necessarily indicative of an inexperienced pilot.


Not really. I wasn't a pilot, just a lowly ABF in the Navy. But when we deployed, on an LHD, most of our flights were what you could probably "classify" as training. We ran flight ops every day because pilots have to maintain hours. We flew every day for 7 months and only 2-3 of those months were we in the gulf. And even in the gulf, not all flights were missions, I would say less than half were. That also isn't counting 4-6 months prior to deployment of work ups where we would go out to sea for a week and come back, everyday pilots flew training missions off our deck. All in all, all in the military probably spend most of their air time training then actually flying missions.

I also recall that even experienced pilots would rotate out to training units as their "shore" duty (a break from a deployable unit).

This wasn't a Navy aircraft, but I imagine a lot of that is the same regardless of branch.



BBC is reporting that the Army has confirmed that it was a UH-60, not a VH-60, i.e. not a VIP transport: https://www.bbc.com/news/live/cy7kxx74yxlt?post=asset%3A62b9...

Although it's early on and these communications are often chaotic/inaccurate.


I thought it was a VH-60 given that it was callsign PAT25 (PAT is Priority Air Transport and they use the VH-60 for those flights), but if this was a training flight, they may have still used the PAT callsign while flying a UH-60.


Both are "Black Hawk" airframes, right? The VH-60 variant is just a UH-60 that was built/configured for VIP transport vs the normal UH-60 utility variant.

I.e., other than the paint, they look the same to a casual observer.


> I don't know whether it's going to take a second or half a minute.

You know how you could find out how long it's going to take? By clicking the button. It's free. It's easy. Hell, it would've taken you less time to click that single button and read the first sentence than it would have taken you to post this comment.

You didn't even bother to spend half a second actually opening the documentation page, and then you come here complaining about how you don't know what's in the documentation. This is entirely a problem of your own creation. What gives?


The feedback provided wasn't helpful. You're absolutely correct that shallow dismissals aren't kind, but you're wrong about who was being shallow and who was not. OP didn't dismiss anyone, nor was their comment shallow. And they don't owe you or anyone else anything as far as "explanation" goes, and they can filter out whoever the hell they want. Click the link and look further, or don't.

Meanwhile, dr_ketyn's comment was both rude and not constructive in the slightest (here's a tip: starting feedback with "I hate this" isn't constructive or kind).


The SN850x isn't a "gaming part", it's a top-of-the-line consumer SSD that uses the exact same type of NAND chips (3D TLC) that Apple uses in its products.


There’s a lot of diversity under that “3D TLC” umbrella.

But anyway, in what world isn’t this a gaming product: https://shop.sandisk.com/products/ssd/internal-ssd/wd-black-...

“Built for elite gaming.

Crush load times and slash throttling, lagging, and texture pop-ins with the WD_BLACK SN850X NVMe™ SSD. …

Do more with WD_BLACK Dashboard The downloadable WD_BLACK Dashboard (Windows® only) monitors your drive’s health, lets you customize your RGB lighting, and, exclusively on the SN850X SSD, enables Game Mode 2.0 to transform your gaming experience.”


>Built for elite gaming //

That's just marketing language for "this is expensive af but you'll buy it because otherwise you're not an elite gamer!".


Also ‘has RGB LEDs all over it’


At one point that was true, but product lines have started to meaningfully diverge.


> There’s a lot of diversity under that “3D TLC” umbrella.

There really isn't. Apple is reported to use SanDisk 3D TLC NAND chips. SanDisk is owned by Western Digital, and the WD SSDs use SanDisk chips. They're literally the same chips.


They could in theory come off the same assembly line; that doesn't mean that everything is identical.

Hell, WD's chips could be of higher quality; I'm not suggesting I know their internal processes. I am saying things are optimized differently.


At this point of the conversation, you seem to be really grasping at theoretical stuff to defend Apple's margins, with very little proof. Why?


I’ve said several times they could be using worse components.

The reason I'm still talking is that people seem to think buying a gaming SSD is a good idea when they also want longevity / low risk of future failure. The parts can last 10+ years, but they're designed with something else in mind.


There really isn't much diversity in NAND flash product lines. Each generation of 3D NAND from WD+Kioxia basically consists of two sizes of TLC die and one or two sizes of QLC die. For the purposes of this conversation, binning doesn't matter because "SSD grade" is already the top bin. So the only variable on the NAND side for a high-end 2TB drive is the question of whether it's built with the high-capacity die (cheaper per GB), or twice as many of the low-capacity dies (potentially faster if it allows more controller channels to be fully populated, but that's usually not a problem at 2TB).
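To put rough numbers on that (the die capacities are realistic for current TLC generations, but the 8-channel controller and the pairing are illustrative assumptions, not a specific product):

    # Die math for a hypothetical 2TB drive on an assumed 8-channel controller.
    CAPACITY_GB = 2048
    CHANNELS = 8  # assumed; typical for a high-end consumer controller

    for die_gb in (64, 128):  # 512Gb vs 1Tb TLC die
        dies = CAPACITY_GB // die_gb
        print(f"{die_gb}GB die: {dies} dies, {dies // CHANNELS} per channel")

    # 64GB die: 32 dies, 4 per channel
    # 128GB die: 16 dies, 2 per channel
    # Either way all 8 channels are fully populated, which is why the die
    # choice usually isn't a bottleneck at 2TB.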


I'm not sure what you mean by SSD grade; Grade A to D chips aren't strictly about binning but also traceability/fraud.

One hardware guy mentioned internal defects can cause differences in the amount of reserve sectors that a final product ends up with. That's exactly the kind of arbitrary cutoff that lets companies charge different prices for the same part.


SSD-grade is the term used for flash with a low initial defect rate. See eg. https://www.szyunze.com/wp-content/uploads/2023/08/SpecTek-N... (from https://www.szyunze.com/spectek-unveiling-truths-about-degra... )

Lower-grade flash with higher initial defect rates is what gets used in USB flash drives and SD cards, and some bargain-bin SSDs with lower usable capacities (ie. 960GB rather than 1TB).

The stuff used in a WD Black or WD Blue branded consumer SSD is not a different quality grade from the stuff used in any other mainstream consumer SSD, Apple's included.


> They could in theory come off the same assembly line; that doesn't mean that everything is identical.

It could just come down to different binning of the same part, and it would still make a difference.


> They're literally the same chips.

At what grade? Plus, how much extra endurance is baked into Apple's drives, i.e. how over-provisioned are they?

My MacBook Air M1 reports 99% health after being daily driven (with some 26TB written to it) at work since 2020 (we got these as soon as they were introduced), and I don't baby its drive in any way.
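Back-of-the-envelope on that 99% (the TBW rating below is a guess based on typical 1TB consumer TLC drives; Apple doesn't publish one):

    # Hypothetical endurance estimate -- Apple publishes no TBW rating.
    tb_written = 26
    tbw_rating = 600  # assumed; in line with typical 1TB consumer TLC drives
    print(f"~{100 * tb_written / tbw_rating:.1f}% of rated endurance used")
    # ~4.3% used, i.e. ~96% left -- so a reported 99% health suggests even
    # more headroom than a typical consumer rating, consistent with extra
    # over-provisioning.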


Any decent consumer SSD will be exactly the same. Brands such as SK Hynix, Samsung, Crucial, WD, etc. use the same chips and deliver the same performance, much cheaper than the Apple tax.


I'll respectfully disagree on the performance front.

Any flash storage has two main components: a controller and a set of flash chips, and a third component enters the picture when connecting these two: the number & nature of the channels.

Starting with the controller (and channels), besides the obvious PCIe generation, there are some other factors: DRAM support, NAND support (not all NAND is the same!), number of channels, and the speed of those channels. A DRAMless SSD will suffer after its "pseudo-SLC" cache runs out, and the performance of the drive will generally suffer if the channels can't absorb the traffic coming from the PCIe side. To have a top-notch SSD, you need a good/fast controller with DRAM support, plus enough channels with enough speed to absorb all traffic requests, so you can actually make use of the premium NAND chips you bought.

Next, we have the flash layout. Flash chips vary in speed, density, and drive. A high-density flash chip might be slower, or a flash chip might require a higher drive, resulting in higher temperatures in general. In some cases, instead of populating all channels, a manufacturer might decide to populate a few channels and leave the rest unpopulated, creating a big but slower SSD.

Beyond that, there are other considerations like over-provisioning at the flash level, "soft SLC cache" size, wear-leveling capabilities under sustained load, etc.

For example, an enterprise SSD comes with the "same" TLC chips, but over-provisioned 5:1 or 10:1 (10TB of flash for 1TB of capacity).
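As a back-of-the-envelope sketch of how those pieces interact (every number below is invented for illustration, not a real product's spec):

    # Illustrative: burst vs. sustained write speed for a mid-range drive.
    link_mbps = 7000        # practical PCIe 4.0 x4 ceiling, MB/s
    channels = 4            # assumed channel count
    per_channel_mbps = 800  # assumed NAND bus speed per channel, MB/s
    pslc_cache_gb = 50      # assumed pseudo-SLC cache size
    tlc_direct_mbps = 900   # assumed direct-to-TLC write speed, MB/s

    burst = min(link_mbps, channels * per_channel_mbps)
    print(f"burst: ~{burst} MB/s for the first ~{pslc_cache_gb} GB")
    print(f"sustained: ~{tlc_direct_mbps} MB/s once the cache is full")
    # burst: ~3200 MB/s for the first ~50 GB
    # sustained: ~900 MB/s once the cache is full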

Now, let's see some real-world examples:

- Kingston NV2, NV3: Budget SSDs with great capacity and price. DRAMless, no channel-count guarantee, and might come with either TLC or QLC chips. Burst speeds are OK and will make 90% of people happy, but they slow down in long transfers and under heavy load. Run cooking hot on both the controller and flash side.

- Kingston KC3000: A higher-end drive with part/channel guarantees; handles sustained load better and, ironically, runs way cooler.

- Samsung 980/990 Pro: Samsung's higher-end drives. Run cool and sustain speed throughout thanks to DRAM, tons of channels, and vertical integration of controller + NAND.

- Samsung T7 Shield: Looks like a bulky 1.8" drive, but its selling point is that it can sustain 1050MB/s writes without even slowing down until it's full. Never gets warm.

So, flash drives come in all shapes and sizes, with specifications and capabilities all over the place. A WD Blue and a WD Black won't perform the same. Same for SanDisk's Plus, Extreme, and Extreme Pro series.

This is why OWC was/is the go-to 3rd-party SSD provider for Macs, and has been for quite some time. They tune their drives similarly to Apple's and very close to what the OS expects as behavior. It's not just slapping some controllers and flash chips on a PCB, changing three fields in a firmware, and selling it.

Flash storage is black magic at this point, and thinking every box is the same is a big mistake.

This comment could easily be 3x longer, but I want to keep it readable.


Western Digital themselves are literally calling the WD_BLACK line their gaming line[1], and their page for the SN850X in particular is dripping with "gaming"[2].

Maybe that doesn't make it a bad comparison, but the SN850X is def intended to be a gaming part.

[1]: https://www.westerndigital.com/brand/wd-black

[2]: https://www.westerndigital.com/en-in/products/internal-drive...


What is a competing part that you think would be more comparable?

Gamers are the ones buying expensive parts, so it makes sense to market to them. The next tier after this is basically server-class $10-20k machines, which Apple is definitely not competing with (and SSDs aren't really that much better in that class anyway). Dismissing SSDs as “gaming” parts, as if that diminishes their quality, misunderstands what's happening here. It would be one thing if WD was ignoring fsyncs to achieve this performance, but gamers don't care much about writes anyway, and there's no indication WD did that.

Source: I have the WD and Samsung parts as well as cheapo random SSDs.


The other product lines would be the WD Blues (marketed to "creative professionals working with large files") and WD Reds (marketed specifically for use in NASes), but neither of these really supports the argument that the SN850x isn't a good comparison, because both the Blue and Red lines are cheaper and less performant (and the Blues are even rated for less longevity), which just makes it seem like Apple is price gouging even more.

The point I was trying to make by noting that the SN850x isn't a "gaming part" is that the SN850x is literally the top-of-the-line, most expensive consumer SSD sold by WD, with practically the same specs as other top-of-the-line competing parts like the Samsung 990 Pro. Being one of the most expensive SSDs on the market means the claim that the SN850x is a bad comparison because it's supposedly "lower price" is just false on its face.


Ah, you misunderstood what the lower price is in reference to. Gaming parts often carry a real premium; it's specifically the price at a specific performance level where they perform well.

To be more clear: getting equal performance without sacrificing anything would raise costs even further.


I personally don’t think anything is a great comparison.

It's easy to say "a moderate premium over normal business-grade SSDs", but that doesn't mean any specific number is correct. I'd say the equivalent of a $130 to $220 SSD, assuming a standalone equivalent existed, but the actual number depends on info Apple isn't sharing. And yes, the range is both above and below the specific part suggested.


It would be comical if it weren't so ridiculous. The standards group is surely aware of how ridiculous it is, yet they keep coming up with these idiotic names, and then try to defend it by saying "it's only the technical name, not the marketing name", as if that matters.

They honestly just need to kill off USB at this point and let Thunderbolt supersede it. Thunderbolt 4 and 5 are literally just implementations of USB4, except the Thunderbolt standards group is doing a hell of a lot better job at naming things and certifying cables than the USB group is.


Regarding Thunderbolt, I don't even bother buying USB-C cables anymore for anything important.

If it's in my backpack or used for a dock/monitor, it's going to be a Thunderbolt cable.

Expensive? Absolutely. Unnecessary? Almost certainly. But I haven't had any issues with them whatsoever.


I've been using USB cables for as long as they've existed, and I've never had any "issues" with them.

The only "issue" I had was that you often ended up without the proper one when you needed it. Almost all the cables came with the device that needed them; I only bought two that were longer.

What issues do you experience so frequently that it would justify investing more money into it?


You answered your own question:

Q: What issues do you experience so frequently that it would justify investing more money into it?

A: ... often ended up without the proper one when you needed it

That's precisely the issue GP solves by only carrying one type of cable, Thunderbolt. If you want to transfer data at top speed, render high resolution high refresh rate video, or charge at maximum wattage, TB4 handles it. One cable to rule them all, no surprises.


It rules them all only if you are on a Mac yourself and have a Thunderbolt cable.

If you are on anything different and want to hook up, say, a Studio Display, you can't use USB-C (below 4), and that is not USB-IF's fault.

Besides that, what I meant was the mix of connector standards before USB-C became mandatory here, and I think you knew that. Other than that, everything you wrote is also determined by the devices being connected, so this may be why most people out there don't even realize there might be a reason to buy Thunderbolt.


I don't think you thought your statement through. What you're proposing is a massive mandatory price hike on all hardware just because you're slightly annoyed by naming.


There's no mandatory price hike required. Thunderbolt is royalty-free as of several years ago, and at this point USB4 pretty much _is_, at minimum, Thunderbolt 3. For example USB4 hubs are, per spec, required to be TB3 compatible, so I don't know why we would bother marketing them as "USB4 v1.0 / USB4 SuperSpeed++ / USB4 20 Gbps / USB 3.1 Gen2x2" when instead they can just be marketed as Thunderbolt 3 or 4.


True, but due to the lack of a slow-speed fallback it requires active signal-conditioning electronics in every cable, and the required interfaces on the client side are also way more expensive in peripherals. A simple mouse would cost $100 instead of $10 for no reason; it'll never need Thunderbolt speeds.


Nobody is going to pay for a 40Gbps Thunderbolt cable to plug in their keyboard


Pricing aside, Thunderbolt cables are usually thicker and more rigid. Sometimes you need a thin and flexible cable, and a cheap USB-C cable is a better choice.


And truly lightweight cables, for slow overnight charging (or for charging small batteries, e.g. smartwatch scale), have all but disappeared with the shift from A/micro-B to C/C. It's awesome that we have a near-universal connector for that wide a range of use cases, but it requires some learning about cable classes beyond the old "does the connector fit?", and that learning process is not over yet. And by learning I don't just mean us memorizing classes, but also an effective narrowing of classes, e.g. no more almost-but-not-quite-TB4-compliant ones.


Though poor cables do drop the voltage a bit, I feel the proper approach would be to just use a weak charger. They are "all" USB-A, and there is no lack of USB-A to C cables.


I'm holding a thin USB-C Samsung cable right now.


USB4 hubs are required to support Thunderbolt 3, and USB4 cables are similar to Thunderbolt cables in their price and thickness, so this problem already exists, just with shittier naming conventions.

Basically, for any cheap use case you just buy a random "USB-C" cable with unknown capabilities, while for specific data use cases you have to buy a "USB-C" cable that also supports a specific data rate: USB 3.1, USB 3.2, USB4 v1.0, USB4 v2.0, or Thunderbolt 3/4/5 (and most cables will support several of these; for example, a 20 Gbps cable covers both USB 3.2 Gen 2x2 and the 20 Gbps flavor of USB4 v1.0).
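For reference, here's that name soup mapped to nominal signaling rates as I understand the specs (not exhaustive; USB4 v1.0 hosts may also top out at 20 Gbps):

    # Nominal max signaling rates (Gbps) behind the USB-C name soup.
    speeds = {
        "USB 3.2 Gen 1x1 (was USB 3.0 / 3.1 Gen 1)": 5,
        "USB 3.2 Gen 2x1 (was USB 3.1 Gen 2)": 10,
        "USB 3.2 Gen 2x2": 20,
        "USB4 v1.0 / Thunderbolt 3 / Thunderbolt 4": 40,
        "USB4 v2.0 / Thunderbolt 5": 80,
    }
    for name, gbps in speeds.items():
        print(f"{gbps:>3} Gbps  {name}")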


Or ten for $1 off <your preferred Chinese marketplace>


Data rate between 10MB/s and 2 Gb/s.


Irrelevant for charging.


This kind of cheap cable won't fast charge in any case. Add a few dollars if you want that.


I hope I don't come off sounding like a twit, but does fast charging really matter all that much to people? I've had a few fast-charge cables before, and although it's nice to have my cellphone fully charged in, say, 20 minutes, it doesn't really mean anything to me, given that it will be left plugged in overnight regardless.

Perhaps it's more useful to people who are constantly traveling, but for someone who isn't, I guess I just don't see the point in it. Would I turn it down? No. Would I pay more for it? If it's more than $2 extra, no. Slow charging is "good enough" in my eyes.


I really value fast charging. Normally I turn it off because I charge my phone at night, but sometimes I really need the boost, and it's great that it can provide it when needed. Especially because I normally limit my phone to 80%.

I don't use it a lot, probably once a month on average. But the times I do, it's invaluable.


It's very important to me. I keep forgetting to plug it in at night and then I can just charge it for 30 minutes before I leave the house and can get a couple of hours usage into it, which is normally enough.


I was frustrated all day because I couldn't find my fast-charging cable and couldn't leave the phone plugged in for more than 15-20 minutes at a time due to various activities, which also required a lot of battery charge (photo/video shooting), so I was dancing around the charger all day...


I never charge my phone overnight. My charger is on my desk. I'll typically do a slow charge, but I'll do a fast charge in some circumstances (e.g. when I'm going somewhere soon but my battery is low).


Laptops need the additional power that fast charge can deliver to run/charge.


Isn’t slower charging better for the battery too?


The problem with fast charging and batteries is overheating, and more frequent overcharging. It's possible to fast charge in entirely safe, non-damaging ranges.


> yet they keep coming up with these idiotic names, and then try to defend it by saying "it's only the technical name, not the marketing name", as if that matters.

I think it matters a great deal, but not in the way they or you intend. Having separate technical and marketing terms sucks! I use Ubuntu on my home server, and whenever I need to troubleshoot package issues or upgrade to a later release, I am confronted with those awful names. I DON'T KNOW WHAT JAMMY IS! JUST CALL IT 22.04, 22.10, 23.04, 24.04.......! I don't want to memorize random names and remember when they were released. And name your apt repos in the same version scheme, goddamnit!
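At least the mapping is machine-readable; a quick sketch, assuming an Ubuntu box:

    # Translate the codename back to a version number via /etc/os-release.
    import pathlib

    fields = dict(
        line.split("=", 1)
        for line in pathlib.Path("/etc/os-release").read_text().splitlines()
        if "=" in line
    )
    print(fields["VERSION_CODENAME"], "->", fields["VERSION_ID"].strip('"'))
    # e.g. jammy -> 22.04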

Same with USB! Just give it a sensible name and stop changing it for marketing reasons. Such an effing stupid thing to do!


Agreed. It's fine as a technical name, but the consumer name doesn't seem to catch on or even get referenced most of the time. I'm still having difficulty with component manufacturers saying "USB 3.2", which as far as I can tell could be 1x5, 2x5, 1x10, or 2x10. Plot twist: it's always the slowest one. Still, the standards body could've done that better.

Disagree on the replacement with Thunderbolt, though. USB historically is very different, and it's USB4 that's a clone of TB3. Agreed the naming is better, but a lot of microcontrollers have USB, and Thunderbolt would be ridiculous for them.


How are they better with the same meaningless 4 and 5?


Not all docks need DisplayLink. Thunderbolt-powered ones like this [1] or this [2] can support multiple displays for MacBook Pros without it, so if you want to avoid having to use DisplayLink, they're solid picks. The one thing to watch out for is that the second one has no HDMI ports, so you need a USB-C to HDMI converter, which in my experience can be flaky at higher refresh rates. If, on the other hand, your monitors support DisplayPort, then USB-C to DisplayPort is native, doesn't need a converter (just an adapter), and works better.

1: https://plugable.com/products/tbt4-ud5

2: https://www.caldigit.com/thunderbolt-4-element-hub/


From the first link

> On Mac systems, dual display is only supported on M1 Pro/Max, M2 Pro/Max, M3 Pro/Max, and M4/Pro/Max systems

So if you're still on an OG M1 it won't work for you.


Base M2 or M3 won't work either; a Thunderbolt dock cannot work around a lack of display pipes on the SoC. The M4 is the first base M-series chip that supports two external displays in addition to the internal display, hence the slightly different phrasing in your quote for the M4 generation.


Aside from my justified negative opinions about CalDigit, I was referring to docks that offer 3+ (simultaneous) screen outputs (e.g. not one with 2 HDMI + 2 DP that can use either set but not all 4 together).

That requires DisplayLink.


> docks that offer 3+ (simultaneous) screen outputs ... requires DisplayLink

Still incorrect.

What you actually have to use is two Thunderbolt ports on the MacBook. Then you can power not just two but four 6K screens, as each TB port can push two externals (in addition to the built-in screen).

So this iVANKY dock is native TB quad screen, for example:

https://www.amazon.com/FusionDock-Thunderbolt-Monitor-Dockin...


I'm sorry, but despite their marketing, this is a 2-output dock. Offering TB passthrough doesn't count, just like we wouldn't call regular USB hubs "multiple display docks" if some manufacturer came out with USB screens (which I'm sure already exist for low-res output).


>What's particularly fascinating to me, though, is how some people are so pro-cloud that they'd argue with a writeup like this with silly cloud talking points. They don't seem to care much about data or facts, just that they love cloud and want everyone else to be in cloud, too.

The irony is absolutely dripping off this comment, wow.

The commenter makes an emotionally charged comment with no data or facts, and decries anyone who disagrees with them as offering "silly talking points" and not caring about data and facts.

Your comment is entirely talking about itself.

