Hacker News: kitd's comments

You wouldn't know they had no history of leaving their local area unless you interviewed them.

Why doesn't the investigator have to supply some sort of evidence that the suspect has a history of leaving their local area, rather than putting the onus on the accused? This line of argument is halfway to "guilty until proven innocent".

You and the GP that replied to me are way overstating what it means to be a "suspect". It just means the police are investigating you and consider it a possibility that you've committed the crime. On its own, it is not sufficient status to search your home, subpoena your ISP, or arrest you; all of those things require a much higher burden of evidence, and often a third party's (a judge's) approval. People routinely become "suspects" on much flimsier evidence than an unreliable software match: if I call in an anonymous tip that I saw you acting suspiciously near the crime scene, you will probably become a suspect.

If you'd like, you can replace the term "suspect" in my post with "person of interest", which colloquially implies a lot less suspicion but isn't practically any different in terms of how the police interact with you.


A populist far-right racist would fix all the potholes and bring HS2 in under budget? Got it.

You don't understand the issue at the heart of this in Britain.

The real distraction is the economic argument. The truth of the matter is that natives feel like strangers in their own country. I say this as someone who is mixed-race and second-generation, before you try to label me a racist. Yawn.


You need to change your social media algorithm.

On the other hand, the process of having Commons legislation rejected by the Lords, then amended and sent back can take almost a year. A government looking to push its legislative programme in a single parliament may choose to remove the most controversial elements in return for an easier passage through the Lords. In this way, just the threat of Lords scrutiny can be enough to moderate the output of the Commons.

If the Lords can’t veto bills, why does their rejection matter?

See also "boiling frogs".

But then I'm replying to @mr_toad so you probably knew that already.


You may remember James Bruton who built a "bike" using 2 balls. Now he has revealed his one-ball version. Check it out!

Thanks for the link. I had often heard of the importance of the battle of Marathon, but couldn't remember its provenance.

Here's an interesting excerpt about the Siege of Orléans, though:

> the struggle by which the unconscious heroine of France, in the beginning of the fifteenth century, rescued her country from becoming a second Ireland under the yoke of the triumphant English.

That's quite a statement from a 19th-century Englishman.


Tools like Claude Code are the ultimate cheat code for me and have breathed new life into my desire to create

I'm in my 60s and retiring this summer. I feel the opposite. Agents have removed most of the satisfaction and fulfilment from designing, building, testing and completing a feature or component. And if frameworks are a problem, learning to create simply and efficiently without them has its own sense of satisfaction.

Maybe it's a question of expectations. I suspect weavers felt the same with the arrival of mechanised looms in the industrial revolution. And it may be that future coders learn to get their fulfilment otherwise using agents.

I can absolutely see the attraction to business of agents and they may well make projects viable that weren't previously. But for this Luddite, they have removed the joy.


OldAF. I have more ideas than I have time to code up prototypes. Claude Code has changed all that. And given that it cannot improve the performance of the optimized code I've written so far, it's like having a never-tiring, eager junior engineer to work out how to use frameworks and APIs to deploy my code.

A year ago, Cursor was flummoxed by simple things Claude Code navigates with ease. But there are still corner cases where it hallucinates on the strangest, seemingly obvious things. Currently I'm working on getting it to write code that makes what's going on in front of its face more visible to it.

I guess it's a question of where you find joy in life. I find no joy in frameworks and APIs. I find it entirely in doing the impossible, out-of-sample things for which these agents are not competitive yet.

I will even say, IMO, AI coding agents are the coolest thing I've seen since the first cut of CUDA 20 years ago. And I expect the same level of belligerence and resistance to them that I saw deployed against CUDA. People hate change, by and large.


Can you elaborate on "resistance against CUDA"? What were people clinging to instead?

IMO it was mostly that people didn't want to rewrite (and maintain) their code for a new proprietary programming model they were unfamiliar with. People also didn't want to invest in hardware that could only run code written in CUDA.

Lots of people wanted (and Intel tried to sell, somewhat successfully) something they could just plug and play, to run the parallel implementations they'd already written for supercomputers using x86. It seemed easier. Why invest all this effort into CUDA when Intel is going to come along and make your current code run just as fast as this strange CUDA stuff in a year or two?

Deep learning is quite different from the earlier uses of CUDA. Those use cases were often massive, often old, FORTRAN programs where, to get things running well, you had to write many separate kernels targeting each bit. And it all had to be on the GPU to avoid expensive copies between GPU and CPU, and early CUDA was a lot less programmable than it is now, with huge performance penalties for relatively small "mistakes". Also, many of your key contributors are scientists rather than professional programmers, who see programming as getting in the way of doing what they actually want to do. They don't want to spend time completely rewriting their applications and optimizing CUDA kernels; they want to keep on with their incremental modifications to existing codebases.

Then deep learning came along, and researchers were already using frameworks (Lua Torch, Caffe, Theano). The framework authors only had to support the few operations required to get Convnets working very fast on GPUs, and it was minimal effort for researchers to run. It grew a lot from there, but going from "nothing" to "most people can run their Convnet research" on GPUs was much easier for these frameworks than it was for any large traditional HPC scientific application.


Thanks!

It seems funny though: The advantages of GPGPU are so obvious and unambiguous compared to AI. But then again, with every new technology you probably also had management pushing to use technology_a for <enter something inappropriate for technology_a>.

Like in a few decades when the way we work with AI has matured and become completely normal it might be hard to imagine why people nowadays questioned its use. But they won't know about the million stupid uses of AI we're confronted with every day :)


> The advantages of GPGPU are so obvious and unambiguous

I remember being a bit surprised when I started reading about GPUs being tasked with processes that weren't what we'd previously understood to be their role (way before I heard of CUDA). For some reason that I don't recall, I was thinking about that moment in tech just the other day.

It wasn't always obvious that the earth rotated around the sun. Or that using a mouse would be a standard for computing. Knowledge is built. We're pretty lucky to stand atop the giants who came before us.

I didn't know about CUDA until however many years ago. Definitely didn't know how early it began. Definitely didn't know there was pushback when it was introduced. Interesting stuff.


I'm dealing with someone in 2026 insisting that everything has to be written in Python and rely entirely on torch.compile for acceleration rather than any bespoke GPU kernels. Times change; people don't.

The completely low-information, amateur-hour aspect of what our HPC welfare queens were pushing above was that a couple of hours invested in coding Intel's Xeon Phi alternative to GPUs demonstrated the folly of their BS "recompile and run" strategy, and any attempt to code the thing exposed how much better a design CUDA was than the series of API-of-the-month efforts that followed*. And I was all but blacklisted by the HPC community for standing up to this and insisting on CUDA or I walk; my favorite quote was "You lack vision and you probably wouldn't have backed the Apollo program or Lewis and Clark." Good times, good times...

*But TBF, Xeon Phi was not a complete disaster: if you coded it in assembler you could squeeze out Fermi-class GPU performance. Good luck getting the "recompile and run" crowd to do that, though, as they segued from that to relying on compiler directives going forward, and that's how NVDA got a decade-plus head start that should never have happened, but did. Today a lot of these sorts are insisting that, because of autograd, everything should be written in Python and compiled with an autograd DSL like torch. I am so glad I am close to retirement on that front. I already trust coding agents more than I trust this mindset.


Phi was cool; I think it could have been leveraged into something great. Imagine all consumer CPUs coming with 512 little Pentiums in them, or something like that.

And it was ahead of GPUs in some ways at the time. But that was entirely squandered by their idiotic recompile-and-run marketing. There was some serious denial that thread blocks that could synchronize without thunking back to the CPU, along with the intuitive nature of warp programming, were pretty much a hardware moat against anything that couldn't do the equivalent.

But good luck explaining that to technical leaders who hadn't written a line of code in over a decade and yet somehow were in charge of things. People really need to consider the backstory here if they want to do better going forward, but I don't think they will. I think history is going to rhyme again.


In the beginning, valid claims of 100x to 1,000x for genuine workloads, due to HW-level advances enabled by CUDA, were denied on the grounds that they ignored CPU and memory-copy overhead, or that the speedup was only being measured relative to single-core code, etc. No amount of evidence to the contrary was sufficient for a lot of people who should have known better. And even if they believed the speedups, they were the same ones saying Intel would destroy them with its roadmap. I was there. I rolled my eyes every single time, but then AI happened and most of them (but not all of them) denied ever spouting such gibberish.
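The memory-copy objection is really just Amdahl's law. A hypothetical back-of-envelope sketch (all numbers invented for illustration, not from any real benchmark) shows both why skeptics leaned on it and why it stops mattering once copies are amortized:

```python
# Toy Amdahl-style model: a kernel speedup is capped by host<->device copy
# time that the GPU cannot accelerate. Numbers below are made up.
def effective_speedup(compute_s: float, kernel_speedup: float, copy_s: float) -> float:
    """Wall-clock speedup once PCIe copy time is counted."""
    gpu_time = compute_s / kernel_speedup + copy_s
    return compute_s / gpu_time

# 10 s of CPU compute, a 1,000x-faster kernel, 0.5 s of copies:
print(round(effective_speedup(10.0, 1000.0, 0.5), 1))  # ~19.6x, not 1000x

# Keep the data resident on the GPU (copies ~0) and the kernel speedup survives:
print(round(effective_speedup(10.0, 1000.0, 0.0)))
```

Which is exactly why the early "many separate kernels, keep everything on the GPU" discipline mentioned upthread mattered so much.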

I won't name names anymore; it really doesn't matter. But I feel the same way about people still characterizing LLMs as stochastic parrots and glorified autocomplete as I feel about certain CPU luminaries continuing to state that GPUs are bad because they were designed for gaming. Neither sort is keeping up with how fast things change.


The divide seems to come down to: do you enjoy the "micro" of getting bits of code to work and fit together neatly, or the "macro" of building systems that work?

If it's the former, you hate AI agents. If it's the latter, you love AI agents.


I'd say that the divide seems to come down to whether you want to be a manager or a hacker. Skimming the posts in this submission, many of the most enamored with LLMs seem to be project managers, people managers, principal+ engineers who don't code much anymore, and other not hands-on people who are less concerned with quality or technical elegance than getting some kind of result.

Bear in mind also that the inputs to train LLMs on future languages and frameworks necessarily have to come from the hacker types. Somebody has to get their hands dirty, the "micro" of the parent post, to write a high quality corpus of code in the new tech so that LLMs have a basis to work from to emit their results.


I think it's pretty obvious what category you see yourself in.

I don't think you're a hacker. I think you enjoy writing code (good for you). Some of us just enjoy making the computer execute our ideas, like a digital magician. I've also gotten very good at the code-writing and debugging part. I've even enjoyed it for long periods of time, but there are times when I can't execute my ideas because they're bigger than what I can reasonably do by myself. Then my job becomes pitching, hiring, and managing humans. Now I write code to write code, and no project seems too big.

But I'm looking forward to collapsing the many layers of abstraction we've created to move bits and control devices. It was always about what we could do with the computers for me.


I want to "hack" at a different level.

What I want to do is create bespoke components that I can use to create a larger solution to solve a problem I have.

What I don't want to do is spend 45 minutes wrangling JSON to a struct so that I can get the damn component working =)

A quick example: I wanted a component that could see if I have new replies on HN using the Algolia API. ~10 minutes of wall-clock time with Claude, maybe a minute of my time. Just reading through the API spec is 15 minutes. Not my idea of fun.
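For the curious, a minimal sketch of that kind of check against the public HN Algolia search API. The endpoint and `tags` parameter are real; the helper names and the username are placeholders, and this is a guess at one approach, not the code Claude actually produced:

```python
import json
import urllib.parse
import urllib.request

API = "https://hn.algolia.com/api/v1/search_by_date"

def comments_url(author: str, per_page: int = 10) -> str:
    """URL for an author's most recent comments (comma in tags means AND)."""
    qs = urllib.parse.urlencode(
        {"tags": f"comment,author_{author}", "hitsPerPage": per_page}
    )
    return f"{API}?{qs}"

def recent_comment_ids(author: str) -> list[int]:
    # Network call: fetch the author's latest comments. New replies would then
    # be found by fetching each item's "children" from /api/v1/items/{id}.
    with urllib.request.urlopen(comments_url(author)) as resp:
        hits = json.load(resp)["hits"]
    return [int(h["objectID"]) for h in hits]

print(comments_url("kitd"))
```

Exactly the kind of JSON-to-struct wrangling described above: tolerable to review, tedious to write by hand.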


“Technical excellence” has never been about whether you use a for loop or a while loop. It’s about architecture, whether you are solving the right problem, scalability, etc.

Performance-critical applications (game engines, etc.) don't agree with that.

Most people aren’t writing game engines. Hell, most people at BigTech aren’t worried about scalability. They are building on top of scalable internal frameworks; not code frameworks, but things like Google Borg.

The reason your login is slow is not because someone didn’t use the right algorithm.

Most game developers are just using other company’s engines.

While, yes, you need to learn how the architecture works, the code isn’t the gating factor.

One example is the Amazon Prime Video team using AWS Step functions when they shouldn’t have and it led to inefficiencies. This was a public article that I can’t find right now.

(And before someone from Amazon Retail chimes in and says much of Amazon Retail doesn’t run on AWS and uses the legacy CDO infrastructure - yes I know. I am a former AWS employee).


> do you enjoy the "micro" of getting bits of code to work and fit together neatly, or the "macro" of building systems that work?

These are not toys. I want to make money. The customers want feature after feature, in a steady stream. It's bad business if the third or fourth feature takes ages. The longer the stream, the better financially.

That the code "works" on any level is elementary, Watson; what must "work" is that stream of new features/pivots/redesigns/fixes flowing.


That is an amazing summary. It might not seem that amazing, but I feel like I've read pages about this, and nothing has expressed it as elegantly and succinctly.

I do love the former, but it's been nice to take a break from that and work at a higher level of abstraction.

Same. After 40+ years of typing code on a keyboard, my hands aren't as nimble as they were; a little pain sometimes builds up (whether it's arthritis or carpal tunnel or something, I'm not sure). Being able to have large amounts of code written with much less input is a godsend, and it's been great to learn and see what models like Claude can really do, if you can remain organized and focused on the APIs/interfaces.

Do you have WisprFlow or similar STT setup? It's a real Star Trek moment vocally telling my computer what to build, and then to have it build it.

I tried WisprFlow after you mentioned it, but after spending ages clicking through all the dialogs I found it didn't work out of the box with my terminal (I use the Claude CLI almost exclusively). Could have been something wrong with my terminal, I guess, since I wrote it myself.

Fascinating. I'm all in on Ghostty these days.

I enjoy both. There’s still plenty of micro to do even in web dev if you have high standards. Read Claude’s output and you’ll find issues. Code organization, style, edge cases, etc.

But the important thing is getting solutions to users. Claude makes that easier.


Maybe have a play with them a bit more. LLMs are quite good at coding, but terrible at software engineering. You hear people talk about “guiding them” which is what I think they are getting at. You still need to know what you are doing or you’ll just drive off a cliff eventually.

At the moment I am trying to fix a vibe coded application and while each individual function is ok, the overall application is a dog’s breakfast of spaghetti which is causing many problems.

If you derive all your pleasure from actually typing the code then you’re probably toast, but if you like building whole systems (that run on production infrastructure) there is still heaps of work to do.


I very much agree! It feels like it's going to be exceptionally challenging in the coming years to convince non-technical people of the value of true SWE; by that I mean, SWE is not just coding, it's everything around that too.

I am in my 50s. I agree with what others have said about your happy place. For me, it is not APIs and fine details of operator overloading. I love solving problems. So much so that I hope I never retire. Tools like Claude Code give me wings.

The need for assembly programmers diminished over the decades. A similar thing will happen here.


Or retire, realize the beach forever is not your version of retirement, and get back to it. I spent a week in the Philippines on the beach before getting bored of that, pulling out a laptop, and digging into some Linux thing with Claude Code; now I'm torn over which app to work on and launch.

> Agents have removed most of the satisfaction and fulfilment from designing, building, testing and completing a feature or component

I highly recommend not using these tools in their "agentic" modes. Stay in control. Tell them exactly what to write, direct the architecture explicitly.

You still get the tremendous benefit of being unlocked from learning tedious syntax and overcoming arcane infra bottlenecks, the things that suck the joy out of the process for me, and you get freed from the tedious and soul-crushing parts.


But then you don't get the same gains in output that agentic modes get you. It just goes off and does stuff, sometimes for hours if you get the loop tuned right.

Obviously you should do whatever you want, however you want to do it, and not just do whatever some Internet rando tells you to do, but glorified autocomplete is so 1 year ago. Everyone knows the $20/month plans aren't going to last, time will tell if the $100/month ones do. The satisfaction is now in completing a component and getting to polish it in a way you never had time for before. And then totally crushing the next one in record time. To each their own, of course, but personally, what's been lost with agentic mode has been replaced by quantity and quality.


Yes, I'm not recommending "glorified autocomplete", just shortening the cycle. Give it tasks that would involve maybe a couple of hundred lines of code at a time. I find this captures both the rewarding aspects and a lot of the productivity gain. And I'll argue that a lot of the remainder of that "productivity gain" sits in somewhat debatable territory: how well all this code, developed without oversight, holds up is something we will only really find out in a few years.

You will maybe like this platform: https://solve.it.com/

Their tag line: "Don't outsource your thinking to AI. Instead, use AI to become a better problem solver, clearer thinker, and more elegant coder."

I have followed the course myself and it reignited my passion. During the course I built a cool side project from scratch, in small steps, with no vibe coding, using the course's principles. It was really satisfying; I felt in control again while learning new things.


I have mixed feelings but echo your sentiments a bit. On the one hand, I can get a lot more done and feel "unchained", so to speak. I have long hated doing frontend development, and now that doesn't matter; I love that. However, I don't feel satisfaction from solving problems the same way I used to. I had one long session with Claude a few weeks ago, and when I told Claude I was done for the night, it fired back with "Sounds good, look at how much you accomplished", to which I responded, "You mean look at how much you accomplished; I just told you what to do."

It is still weird to me that I talk to a remote Python app, but that's how we write code nowadays. Still, I felt almost mocked when Claude lauded my "accomplishments".

So I'd say that I am definitely more productive than I used to be, but I enjoy the work less on one level. On another level, though, I feel like I can build a lot more and tackle problems I wouldn't have tackled in the past. It's a mixed blessing. It's also a WIP; I expect the way we write code will change even more over the next few years.

I love it, I hate it, it's the Brave New World of software development.


> I'm in my 60s and retiring this summer.

Congrats! I'm at the age where I envy the ones like you more than the 20-somethings :)


I’m kinda in both camps. I can make many times more proofs of concept than ever before, which is awesome, especially for internal work tools. But then I rely on it too much, and I don’t really know how the thing works, which makes it hard to get excited about adding to it.

50s here. I love building and designing software to solve problems. For years now I haven’t liked the actual coding part. AI has given me a super power.

Scale the Lego pieces more and it’s the same. Bigger projects have more moving parts and require the same thinking.

I'd agree it splits both ways. I think in the short run it can be super fun, but once you expand your thoughts to the long run, it takes the steam out of the rediscovered joy of discovery and creation.

It's almost like it reignites novelty at things that were too administratively heavy to figure out. I'm not sure if it's fleeting or lasting.


Someone else who has read Godel, Escher, Bach by Hofstadter?

Excellent book, cheers.

Ah, yes ... "volcanic" organisation.

shuffles papers

"Hang on ... it's here somewhere ..."


> feeling resistance

In a software context, I wonder what the impact of the language used is on the sense of "resistance"?

