Hacker News | goostavos's comments

You do all of that when leaving a comment on HN? Why...?

I'm confused by this need(?) desire(?) to polish things that are irrelevant.


No, I do not; I mentioned as much in my post. But I don't hold it against those who do. If you want to get a point across, making it in the most effective way, without detracting from the point, is a good thing.

Relevance is in the eye of the beholder.


No amount of FBI stats about how often "assault" rifles are used will change people's minds. They don't like them and so want to take them away.

I don't know how to square the same people saying we're living under a tyrannical government also pushing legislation that makes sure said tyrannical government is the only one with guns.


I can't square people who think owning a gun will stop or prevent a tyrannical government. Especially when the tyrannical government just leverages its supporters as a vigilante force.


An armed populace creates a huge risk for a federal paramilitary force descending on a municipality with the intent to terrorize the citizens. They're not rolling in with tomahawks and tanks, they're coming in with assault rifles and window breakers.


It won't "stop" them, but having to treat everyone like they might shoot back, and having to show up with a 10:1 manpower advantage and armed to the teeth every time you want to subject someone to state violence, really puts a damper on your ability to do tyrannical-government things.


The current time period is not proving that out. These are just ammosexual fantasies.


Not at all true. I haven't yet witnessed armed resistance to ICE, but it's in the cards, if the government wants to push. Given the number of veterans and folks that actually have skill with guns in the civilian populace, and the hiring standards of ICE, I think the civilian population, properly mobilized, would be incredibly effective at putting a damper on their illegal behavior.


Have yet to see that so I'm not putting stock in a hypothetical armed uprising.


It's an extremely dangerous line to cross, and it should be avoided if at all possible. At the same time, when no other options are available, it's better to be armed than not. I hope you never have to learn this first-hand.


It kind of is, in that they're picking the easy targets. They're not being sloppy in places where a wrong address has an unacceptably high (but still small) chance of getting them confused for the DEA and shot back at by someone who isn't going to prison one way or another.


The problem with that thinking is that you have to have the will to act to stop tyranny, and no amount of armament will give you the will or the foresight to see it.


Sigh... same.

The real AI fatigue is the constant background irritation I have when interacting with LLMs.

"You're not imagining it" "You're not crazy" "You're absolutely right!" "Your right to push back on this" "Here's the no fluff, correct, non-reddit answer"


“You’re not [X]—you’re [Y]” is the one that drives me nuts. [X] is typically some negative characterization that, without RLHF, the model would likely just state directly. I get enough politics/subtext from humans. I’d rather the LLM just call it straight.


>software engineers today are 100x more productive

Somebody needs to explain to my lying eyes where these 100xers are hiding. They seem to live in comments on the internet, but I'm not seeing the teams around me increase their output by two orders of magnitude.


I would say I'm like 1.2x more productive, and I think I'm more of the typical case (of course I read all of the code the LLM produces, so maybe that's where I've gone wrong).


If they did a year of work in ~3 days, presumably they're on a beach somewhere.


They are the people who have the design sense of someone like Rob Pike but lack his coding skill. These people are now 100x more capable than they were previously.


This is how you get managers saying

"we have taken latest AI subscription. We expect you to be able to increase productivity and complete 5/10/100 stories per sprint from now on instead of one per sprint that we planned previously".


Citation needed. For both the existence of said people (how do you develop said design sense without a ton of coding experience?) and that they are 100x more productive.


If you produced 1 line of code per hour before "AI" because you suck, and now produce 100 lines of code per hour with AI, you are now a 100x programmer.

I'm joking of course, but that's probably how some people see it.


No I think you're 100% correct. But these people also miss out on the irony that using "lines of code" as a metric is a literal meme amongst software developers.


No they’re not.


A lot of pride is wrapped up in the craft of writing software. If that goes away (I don't think it will) it would leave a lot of people wondering how they spent all their time.

(or something like that. Obviously I'm too well adjusted to have these existential worries)


I had my first interview last week where I finally saw this in the wild. It was a student applying for an internship. It was the strangest interview. They had excellent textbook knowledge. They could tell you the space and time complexities of any data structure, but they couldn't explain anything about code they'd written or how it worked. After many painful and confusing minutes of trying to get them to explain, like, literally anything about how this thing on their resume worked, they finally shrugged and said that "GenAI did most of it."

It was a bizarre disconnect having someone be both highly educated and yet crippled by not doing.


Sounds a little bit like the stories from Feynman, e.g.: https://enlightenedidiot.net/random/feynman-on-brazilian-edu...

The students had memorized everything, but understood nothing. Add in access to generative AI, and you have the situation that you had with your interview.

It's a good reminder that what we really do, as programmers or software engineers or what you wanna call it, is understanding how computers and computations work.


There's a quote I love from Feynman

  > The first principle is that you must not fool yourself and you are the easiest person to fool.
I have no doubt he'd be repeating it loudly now, given we live in a time when we've developed machines that are optimized to fool us.

It's probably also worth reading Feynman's Cargo Cult Science: https://sites.cs.ucsb.edu/~ravenben/cargocult.html


This is the kind of interaction that makes me think there are only two possible futures:

Star Trek or Idiocracy.


Hmmm, I think we're more likely to face an Idiocracy outcome. We need more Geordi La Forges out there, but we've got a lot of Fritos out here vibe coding the next Carl's Jr. locating app instead.


We would be lucky to get Idiocracy. President Camacho had a huge problem, and he found the smartest person in the country and got him working on it. If only we could do that.


Star Trek illustrated the issue nicely in the scene where Scotty, who we should remember is an engineer, tries to talk to a computer mouse in the 20th century: https://www.youtube.com/watch?v=hShY6xZWVGE


Except that falls apart 2 seconds later when Scotty shocks the 20th-century engineers by being blazing fast with a keyboard.


Lots of theory but no practice.


More like using a calculator but not being able to explain how to do the calculation by hand. A probabilistic calculator which is sometimes wrong at that. The "lots of theory but no practice" has always been true for a majority of graduates in my experience.


Sure, new grads are light on experience (particularly relevant experience), but they should have student projects and whatnot that they can explain, particularly for coding. Hardware projects are rarer simply because they cost money for parts and schools have limited budgets, but software has far fewer demands.


This is exactly the end state of hiring via Leetcode.


Makes me wonder if the hardware engineers look at software engineers and shrug, “they don’t really know how their software really works.”

Makes me wonder if C programmers look at JS programmers and shrug, “they don’t understand what their programs are actually doing.”

I’m not trying to be disingenuous, but I also don’t see a fundamental difference here. AI lets programmers express intent at a higher level of abstraction than ever before. So high, apparently, that it becomes debatable whether it is programming at all, or whether it takes any skill, or requires education or engineering knowledge any longer.


Wait, so they could, say, write out a linked list or a bubble sort, but not understand what it was doing? Like no mental model of memory or registers, no intuition for execution order, nothing even conceptual like a graph walk? Just "zero" on the conceptual front, but able to reproduce data structures, some algorithm for accessing or traversing them, and give rote O notation answers about how long execution takes?

Just checking I have that right... is that what you meant?

If so... that... is... wow.


If I'm understanding correctly, I don't think what you're saying is quite right. They had a mental model of the algorithms, but the code they "produced" was completely generated by AI, and they had no knowledge of how the code actually modeled the algorithm.

Knowing the complexity of bubble sort is one skill; being able to write code that performs bubble sort is a second; and being able to look at a function with the signature `void do_thing(int[] items)` and determine that it's bubble sort, and what its time complexity is in terms of the input array, is a third. It sounds like they had the first skill, used an AI to fake the second, and had no way of doing the third.
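To make that third skill concrete, here's a minimal sketch (the name `do_thing` comes from the hypothetical signature above, rendered in Python for brevity; this is an illustrative body, not code from the interview). The exercise is: given only this function, can you name the algorithm and its complexity?

```python
def do_thing(items):
    """Sorts items in place. Can you tell how, without being told?"""
    n = len(items)
    for i in range(n):
        # After pass i, the largest i+1 elements have "bubbled" to the end.
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]

xs = [5, 1, 4, 2]
do_thing(xs)
print(xs)  # [1, 2, 4, 5]
```

Recognizing the adjacent-swap pattern as bubble sort, with O(n^2) worst-case comparisons and O(1) extra space, is exactly the skill that memorized complexity tables alone don't give you.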


I found the first season OK enough, but the second season to be unwatchable.


Agreed, the characters now just abandon their established traits from one scene to the next in service of a contrived story



If the only reason you write is as a means to an end, sure. Inevitable. If you pursue it as a craft, then the struggle and imperfections are part of the process. LLM usage would sand away those wonderful flaws.


I find the same. Even those who are interested in it in theory hit a pretty unforgiving wall when they try to put it into practice. Learning TLA+ is way harder than learning another programming language. I failed repeatedly while trying to "program" via PlusCal. To use TLA+ you have to (re)learn some high-school math, and you have to learn to use that math to think abstractly. It takes time and a lot (a lot!) of effort.

Now is a great time to dive in, though. LLMs take a lot of the syntactical pain out of the learning experience. Hallucinations are annoying, but you can formally prove they're wrong with the model checker ^_^

I think it's going to be a "learn these tools or fall behind" thing in the age of AI.


I think the "high school math" slogan is untrue and ultimately scares people away from TLA+, by making it sound like it's their fault for not understanding a tough tool. I don't think you could show an AP calculus student the equation `<>[](ENABLED <<A>>_v) => []<><<A>>_v` and have them immediately go "ah yes, I understand how that's only weak fairness"


Oh, hey -- you're that guy. I learned a lot of what I know about TLA from your writings ^_^

Consider my behavior changed. I thought "high school math" was an encouraging way to sell it (i.e., "if you can get past the syntax and the new way of thinking, the 'math' is ultimately straightforward"), but I can see your point, and how the perception would be poor when they hit that initial wall.

