Hacker News | new | past | comments | ask | show | jobs | submit | paulhodge's comments | login

No, you’re just deflecting his points with an ad hominem argument. Stop pretending to know what he ‘truly feels’.


I don't even know who Rob Pike is to be honest. I'm not attacking him.

I'm not pretending to know how he feels. I'm just reading between the lines and speculating.


Maybe you should do some basic research instead of speculating. Rob Pike is not just some random software developer who might worry about his job.


I was just accused of ad hominem. Now you want me to get accused of appeal to authority?


No, the point is that your speculations simply do not make sense for someone like Rob. He is not a random software engineer at some company, and he is also retired.


I’m basing this purely on what he said, not who he is. I think that’s the best way to judge this thread. Regardless, I was accused of ad hominem and you want me to appeal to authority.

Sometimes HN is weird.


You've made baseless assumptions about his "true" feelings. If you did some basic research, you would have quickly realized that your speculations were way off. This is about context, not about authority.


I already said many times that I was reading between the lines and it was speculation.

You keep asking me to appeal to authority. No thanks.

It is what it is. To me, it’s clear that he wants things to go back to pre-ChatGPT times, because that’s the world he’s familiar with and the world where he has the most power.

Otherwise, he wouldn’t make such idiotic claims.


> You keep asking me to appeal to authority.

I don't. I just asked you to do some research instead of indulging in wild speculation.

> because that’s the world he’s familiar with and that’s the world he has most power.

Again, just baseless speculation. Rob had a very prolific career working on foundational technologies like programming language design. He is now retired. What kind of power would he be afraid to lose?

Would you at least consider the possibility that his ethical concerns might be sincere?


  I don't. I just asked to do some research instead of indulging in wild speculation.
You are. https://en.wikipedia.org/wiki/Argument_from_authority

  An argument from authority[a] is a form of argument in which the opinion of an authority figure (or figures) is used as evidence to support an argument.[1] The argument from authority is often considered a logical fallacy[2] and obtaining knowledge in this way is fallible.[3][4]

  Again, just baseless speculation. Rob had a very prolific where he worked on foundational technologies like programming language design. He is now retired. What kind of power would he be afraid to lose?
Clout? Historical importance? Feeling like people are forgetting him? If he didn't care about any of this, he wouldn't have a social media account.


I'm not saying that Rob is right because of his achievements. I'm only saying that your speculations in your original post are ridiculous considering Rob's career and personal situation.

> Clout? Historical importance? Feeling like people are forgetting him?

Even more speculation.

Just in case you are not aware: there are many people who really think that what the big AI companies are doing is unethical. Rob may be one of them.


Stop appealing to authority. Just argue about facts and what was said.

You also keep accusing me of speculation, but I already mentioned multiple times that it’s speculation. I never said it’s not speculation. It’s you who can’t make a coherent comeback argument, except to tell me to do research and then respect him.


It's you who didn't look up some facts before posting. Please read your original post and then tell me how it possibly relates to Rob's situation.

> You also keep accusing me of speculation but I already mentioned multiple times that it’s speculation.

Yes, you mentioned it yourself, but you don't seem to understand the problem with it.


AI is too useful to fail. Worst case with a bust is that startup investment dries up and we have a 'winter' of delayed improvement. But people aren't going to stop using the models we have today.


Agree that LLMs go too far on error catching...

BUT, to play devil's advocate a little: most human coders should be writing a lot more try/catch blocks than they actually do. It's very common that you don't actually want an error in one section (however unlikely) to interrupt the overall operation. (And sometimes you do; it just depends.)
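A minimal sketch of that pattern in JavaScript (the batch runner and `handler` are hypothetical, not from any particular codebase): each item gets its own try/catch so one failure is recorded instead of aborting the rest.

```javascript
// Hypothetical batch runner: one bad item shouldn't kill the whole run.
function processAll(items, handler) {
  const results = [];
  const errors = [];
  for (const item of items) {
    try {
      results.push(handler(item));
    } catch (err) {
      // Record the failure and continue, instead of letting the
      // exception bubble up and interrupt the remaining items.
      errors.push({ item, message: err.message });
    }
  }
  return { results, errors };
}

const { results, errors } = processAll([1, 2, 0, 4], (n) => {
  if (n === 0) throw new Error("zero not allowed");
  return 10 / n;
});
// results: [10, 5, 2.5]; errors: one entry for the 0 item
```

Whether you want the error swallowed or surfaced depends on the call site, which is exactly why the blanket "wrap everything" habit of LLMs grates.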


Neat investigation, but I didn’t totally follow how the project would be useful for reverse engineering; it seems like a project that would mostly be useful for evading bot checks, like web scraping or AI automation.


I think this prediction of "vibe code cleanup" is massively overblown. It's amazing how much code quality doesn't actually matter to the business. Yes we recognize symptoms and downsides of bad code, and yes it matters specifically to the engineers that have to work on it. But only in extreme cases does bad code actually cause an existential threat to the business. The world already runs on bad code.


pnpm 10.x also disallows post-install scripts by default. When using pnpm, you have to explicitly approve a dependency before it can run its post-install scripts. It's a great feature that should be the standard.
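For reference, the allowlist lives in `package.json` under the `pnpm` key (a sketch; `esbuild` and `sharp` are just example package names that commonly need build scripts):

```json
{
  "pnpm": {
    "onlyBuiltDependencies": ["esbuild", "sharp"]
  }
}
```

pnpm 10 also offers an interactive `pnpm approve-builds` command to populate this list.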

Yes, if someone compromises a package, they can also inject malicious code that will trigger at runtime.

But the thing about the recent npm supply chain attack is that it happened really quickly. There was a chain reaction: compromised packages led to more authors getting compromised. And I think a big reason it moved so quickly was post-install scripts. If the attack had unfolded more slowly, the community would have had more time to react and block the compromised packages. So just slowing down an attack is valuable on its own.


Solution: add your entire 'node_modules' folder to source control.


What benefit does doing that give me that the package-lock.json does not already provide?


It's kind of tongue-in-cheek, but it would provide the maximum amount of isolation from any upstream package changes. Even if the package versions are removed from NPM (which happens in rare cases), you'd still have a copy.


James Shore prefers committing packages to source control, so yours isn't an entirely outlandish suggestion. The NPM package removal rug pull you describe is a use case I hadn't thought of.

Rather than loading up my git repo with binaries, I find it more appealing to maintain an enterprise repository that proxies to NPM and keeps a local cache for the enterprise. Part of what bothers me about letting `npm` point to the https://registry.npmjs.org public repository is the valuable trove of information they can gather about what my team is currently working on by watching what we download.

By pointing npm to a hosted repository proxy, not only can we protect against package deletion rug pulls, but we can also keep hidden details about what we are working on right now. There are also uptime benefits from self-hosting a repo, although registry.npmjs.org has been remarkably dependable.

The self-hosted proxying npm repository I have used in mega-corp was Artifactory, and it was pretty great.
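Pointing npm at such a proxy is a one-line `.npmrc` change (a sketch; the hostname and repository path are hypothetical and depend on how your Artifactory instance is laid out):

```ini
; .npmrc — route all installs through the internal proxying repository
registry=https://artifactory.example.com/artifactory/api/npm/npm-virtual/
```

The proxy then fetches from registry.npmjs.org on cache misses, so the public registry only ever sees the proxy's traffic, not individual developers'.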


I think different things are happening...

For experienced engineers, I'm seeing (internally in our company at least) a huge amount of caution and hesitancy to go all-in with AI. No one wants to end up maintaining huge codebases of slop code. I think that will shift over time. There are use cases where having quick low-quality code is fine. We need a new intuition about when to insist on handcrafted code, and when to just vibecode.

For non-experienced engineers, they currently hit a lot of complexity limits in getting a finished product to actually work, unless they're building something extremely simple. That will also shift: the range of what you can vibecode is increasing every year. Last year there was basically nothing you could vibecode successfully; this year you can vibecode TODO apps and the like. I definitely think the App Store will be flooded in the near future. It's just early.

Personally I have a side project where I'm using Claude & Codex and I definitely feel a measurable difference, it's about a 3x to 5x productivity boost IMO.

The summary: just because we don't see it yet doesn't mean it's not coming.


If I've learned anything about humans in my 40+ years of being alive, it's that in the long term, convenience trumps all other considerations.


I think it depends.

There are very simple apps I try to vibe code that AI cannot handle. It seems very good at certain domains, and others it seems complete shit at.

For example, I hand wrote a simulation in C in just 900 LOC. I wrote a spec for it and tried to vibe code it in other languages because I wanted to compare different languages/concurrency strategies. Every LLM I've tried fails, and manages to write 2x+ more code in comparatively succinct languages such as Clojure.

I can totally see why people writing small utilities or simple apps in certain domains think it's a miracle. But when it comes to things like games, it seems like a complete flop.


Yeah Dario has said similar things in interviews. The way he explained it, if you look at each specific model (such as Sonnet 3.5) as its own separate company, then each one of them is profitable in the end. They all eventually recoup the expense of training, thanks to good profit margins on usage once they are deployed.


There’s a magic component to rule breaking that a lot of online advice doesn’t usually talk about: You have to actually be right. Your ideas have to be good. Companies don’t want everyone breaking the rules because a lot of devs don’t have the engineering skill to back it up.

So if you start operating as a rogue agent, make sure you are good. Tom Cruise (stuntman and actor) has a quote I love: “Don’t be careful. Be competent.”

