TheRoque's comments | Hacker News

Especially if you factor-in the fact that the AI companies are losing money for now, and that it's not sustainable.

Yeah, when does the other shoe drop, and after getting addicted to AI coding, do we suddenly have the rug pulled on price?

But in the past, you knew the codebase very well, and it was trivial to implement a fix and upgrade the software. Can the same be done with LLMs? Well, from what I see, it depends on your luck. But if the LLMs can't help you, then you gotta read a whole codebase you've never read before, and you quickly lose the initial benefits. I don't doubt we'll get there someday, though.

I've hit this in little bursts, but one thing I've found is that LLMs are really good at reasoning about their own code and helping me understand how to diagnose and make fixes.

I recently found some assembly source for some old C64 games and used an LLM to walk me through it (purely recreational). It was so good at it. If I were teaching a software engineering class, I'd have students use LLMs to do analysis of large code bases. One of the things we did in grad school was to go through gcc and contribute something to it. Man, that code was so complex, and compilers were one of my specialties at the time. I think having an LLM with me would have made the task 100x easier.


Does that mean you don't think you learned anything valuable through the experience of working through this complexity yourself?

I'm not advocating for everyone to do all of their math on paper or something, but when I look back on the times I learned the most, it involved a level of focus and dedication that LLMs simply do not require. In fact, I think their default settings may unfortunately lead you toward shallow patterns of thought.


I wouldn't say there is no value to it, but I do feel like I learned more using LLMs as a companion than trying to figure everything out myself. And note, using an LLM doesn't mean that I don't think. It helps provide context and information that would often be time consuming to figure out, and I'm not sure the time spent is proportional to the learning I'd get from it. Seeing how these memory locations map to sprites, which then map to other memory locations that drive the video display, is the kind of thing that might take a while to explore and learn on my own, but the LLM can tell me instantly.

So a combination of both is useful.


Hard to argue with such a pragmatic conclusion!

I think the difficulty I have is that it's not all that straightforward to assess exactly how I came not just to _learn_, but to _understand_ things. As a result, I have low confidence about which parts of my understanding were the result of different kinds of learning.


Learning things the hardest way possible isn't always the best way to learn.

In a language context: immersion learning, where you "live" the language, all the media you consume is in that language, and at some point you just "get" it; you develop a feel for how the language flows and can interact using it.

vs. sitting in a class, going through all the weird ways French words conjugate and their completely bonkers number system. Then you get tested on whether you know the specific rule for how future tenses work.

Both will end up in the same place, but which one is better depends a lot on the end goal. Do you want to be able to manage day-to-day things in French or know the rules of the language and maybe speak it a bit?


I'd say this is similar to working with assembly vs. C++ vs. Python. Programming in Python, you learn less about low-level architecture trivia than in assembly, but you learn way more in terms of high-level understanding of issues.

When I had to deal with or patch complex C/C++ code, I rarely ever got a deep understanding of what the code did exactly - just barely enough to patch what was needed and move on. With the help of LLMs, it's easier to understand what the whole codebase is about.


If I haven't looked at my own code in 6 months it might as well have been written by someone else.

The most brilliant programmer I know is me three years ago. I look at code I wrote and I'm literally wondering "how did I figure out how to do that -- that makes no sense, but exactly what is needed!"

I had a funny example of that.

I had a question about how to do something in Django, and after googling found a good SO answer.

I read through it thinking about how much I appreciated the author's detailed explanation and answer.

When I looked at the author it was me from two years ago.


How can I learn this skill? Past Me is usually just this idiot who made work for Present Me.

Turns out, that is also past me. In fact, often the incredible code that brilliant me wrote, which I don't understand now, is also the code that reckless me wrote that I now need to fix/add to -- and I have no idea where to start.

Wow. Lucky you. When I come across code I wrote months ago, usually I'm like "what kind of crack was I on when I wrote this?"

I always check git blame before ripping on some code I find...9/10 times I wrote it :o

They're better than one might expect at diagnosing issues from the error output or even just screenshots.

uBlock is still just as effective if you're using Mozilla's browser; blame the browser, not the extension.

Very correct. I’m on Zen and uBO works great for me. Chrome-based browsers are screwed for ads.

I'm curious as to why you are so excited? What makes Ghostty so special? (Especially compared to Kitty or Wezterm, which I use)

WezTerm was my daily driver for a long time—it’s a great app.

Ghostty is blazing fast and the attention to detail is fabulous.

The theme picker is next level, for example; so are the typographical controls.

It feels like an app made by a craftsman at the top of his game.


Check the prizes for the bug bounties on big smart contracts. The prizes are truly crazy; Uniswap, for example, pays $15,000,000 for a critical vuln and $1,000,000 for a high-severity one. With that kind of money, I HIGHLY doubt there aren't people grinding against smart contracts as you say.

True, I'd be curious to see if (and when) those contracts were compromised in the real world. Though they said they found zero-days, which implies some vulnerabilities were never discovered in the real world.

Not sure what you mean by "input that X has happened". You don't directly input the changes; instead, you call a function that creates that state change (or not, if it's invalid) by running its code. This code can include checks on who the caller is: it can check if you're the contract owner, if you're someone who has already interacted with the contract (by checking previous state), or any hardcoded address, etc.
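
To make that concrete, here's a rough sketch of the pattern in Python (purely an illustration; real contracts are written in something like Solidity, and every name here is made up):

    # Toy model: state can only change through functions that run their own checks.
    class ToyContract:
        def __init__(self, owner):
            self.owner = owner
            self.balances = {}   # previous state, e.g. who already interacted
            self.paused = False

        def deposit(self, caller, amount):
            # anyone can call this; it records that the caller interacted
            self.balances[caller] = self.balances.get(caller, 0) + amount

        def withdraw(self, caller, amount):
            # checks on the caller and on prior state happen before any change
            if self.paused:
                raise PermissionError("contract is paused")
            if self.balances.get(caller, 0) < amount:
                raise ValueError("invalid state change, rejected")
            self.balances[caller] -= amount   # the actual state change
            return amount

        def pause(self, caller):
            # owner-only check, analogous to comparing against a hardcoded address
            if caller != self.owner:
                raise PermissionError("only the owner can pause")
            self.paused = True

The point is that callers never write state directly; they can only invoke functions, and those functions decide whether the state change is allowed.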

They don't train on private repos; there has been no proof of that anyway.

> but not using AI is simply less productive

Some studies show the opposite for experienced devs. And they also show that developers are delusional about said productivity gains: https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...

If you have a counter-study (for experienced devs, not juniors), I'd be curious to see it. My experience has also been that using AI as part of your main way of producing code is not faster when you factor in everything.


Curious why there hasn't been a rebuttal study to that one yet (or if there is, I haven't seen it come up). There must be near-infinite funding available to debunk that study, right?

That study is garbo and I suspect you didn't even read the abstract. Am I right?

I've heard this mentioned a few times. Here is a summarized version of the abstract:

    > ... We conduct a randomized controlled trial (RCT)
    > ... AI tools ... affect the productivity of experienced
    > open-source developers. 16 developers with moderate AI
    > experience complete 246 tasks in mature projects on which they
    > have an average of 5 years of prior experience. Each task is
    > randomly assigned to allow or disallow usage of early-2025 AI
    > tools. ... developers primarily use Cursor Pro ... and
    > Claude 3.5/3.7 Sonnet. Before starting tasks, developers forecast that allowing
    > AI will reduce completion time by 24%. After completing the
    > study, developers estimate that allowing AI reduced completion time by 20%.
    > Surprisingly, we find that allowing AI actually increases
    > completion time by 19%—AI tooling slowed developers down. This
    > slowdown also contradicts predictions from experts in economics
    > (39% shorter) and ML (38% shorter). To understand this result,
    > we collect and evaluate evidence for 21 properties of our setting
    > that a priori could contribute to the observed slowdown effect—for
    > example, the size and quality standards of projects, or prior
    > developer experience with AI tooling. Although the influence of
    > experimental artifacts cannot be entirely ruled out, the robustness
    > of the slowdown effect across our analyses suggests it is unlikely
    > to primarily be a function of our experimental design.
So what we can gather:

1. 16 people were randomly given tasks to do

2. They knew the codebase they worked on pretty well

3. They said AI would help them work 24% faster (before starting tasks)

4. They said AI made them ~20% faster (after completion of tasks)

5. ML experts predicted programmers would be ~38% faster

6. Economists predicted ~39% faster

7. The study measured that people were actually 19% slower

This seems to have been done with Cursor, with big models, on codebases people know. There are definitely problems with industry-wide statements like this, but I feel like the biggest area where AI tools help me is when I'm working on something I know nothing about. For example: I am really bad at web development, so CSS/HTML is easier to edit through prompts. I don't have trouble believing that I would be slower using AI to make an edit to code that I already know how to make.

Maybe they would see speedups by allowing the engineers to choose when to use AI assistance and when not to.


It doesn't control for skill or experience using the models. This looks VERY different at hour 1,000 and hour 5,000 than at hour 100.

Lazy of me not to check whether I remember correctly, but the dev who got productivity gains was a regular user of Cursor.

The other nightmare for these companies is that any competitor can use their state-of-the-art model to train another model, as some Chinese models are suspected of doing. I personally think it's only fair, since those companies trained on a ton of data in the first place and nobody agreed to it. But it shows that training frontier models has really low returns on investment.
