Hacker News | evnu's comments

For me, if it's worth thinking about, it's worth writing down. It doesn't matter if it's a todo list I just came up with, a system diagram, whatever I'm currently working on, or thoughts on a human interaction I just witnessed. The act of writing it down guides my thinking.


"perhaps even why it was necessary at all" not being asked anymore is what I fear as well. Stumbling over problems repeatedly draws attention to architectural defects. Papering over the faults with a non-complaining AI buries the defects even deeper, because the pain of the defects is no longer felt.


Uiua is the first one that made array languages "click" for me due to the formatter.


Sometimes it works to find a solution that makes the team's life easier and comes with an additional gain in security. That can then be sold to the team more easily, as you are solving a problem the team actually experiences.


I wondered about that as well while looking into the HTTP handler. I think a missing space between method and path overruns the buffer (I haven't tried running it, though).


Maybe because aging is non-linear.


Yeah that could be. I need to adjust my mindset around the changes.


Some years ago, I started using FIXME to indicate that something blocks the PR and needs to be done before merging, and TODO if something can be done at a later point in time. CI then only needs to grep for FIXME to block merging the PR, which works for practically any language. It works pretty well for me; maybe that tip can help others as well.
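The CI step described above can be sketched as a small POSIX-shell function; the directory argument and script name are assumptions, not from the original comment:

```shell
# check_fixme: grep the given directory tree for FIXME markers and
# fail (non-zero exit) if any are found, blocking the merge.
# TODO markers are deliberately ignored; they denote deferred work.
check_fixme() {
    if grep -rn "FIXME" "$1" >/dev/null 2>&1; then
        echo "FIXME found in $1: blocking merge" >&2
        return 1
    fi
    echo "no FIXME markers in $1"
    return 0
}
```

Wiring this into any CI system is then just a matter of calling the function (or an equivalent one-liner) as a build step and letting its exit status fail the job.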


> Instead you should use high quality sources, then ask the LLM to summarize them for you to start with (NotebookLM does this very well for instance, but so can others).

How do you determine whether the LLM accurately reflects what the high-quality source contains if you haven't read the source? When learning from humans, we place trust in them to teach us based on a web of trust. How do you determine the level of trust with an LLM?


> When learning from humans, we put trust on them to teach us based on a web-of-trust.

But this is only part of the story. When learning from another human, you'll also actively try to gauge whether they're trustworthy based on general linguistic markers, and will try to find and poke holes in what they're saying so that you can question them intelligently.

This is not much different from what you'd do with an LLM, which is why it's such a problem that they're quite often more convincing than correct. But it's not an insurmountable issue. The other issue is that their trustworthiness varies in a different way than a human's, so you need experience to know when they're probably just making things up. But just based on feel, I think this experience is definitely possible to gain.


Because summarizing is one of the few things LLMs are generally pretty good at. Plus, you should use the summary to decide whether you want to read the full source, much like reading the abstract of a research paper before deciding whether to read the whole thing.

Bonus: the high-quality source is going to be mostly AI-written anyway.


Actually, LLMs aren’t that great for summarizing. It would be a boon for RAG workflows if they were.

I’m still on the lookout for a great model for this.


We found Erlang to be the right choice at small scale. We used it in IoT applications, where the self-healing property of proper supervision trees resulted in a very stable implementation. We saw the same benefits in the cloud part of our application: no complicated k8s setup, just simple supervision trees.


> I think it will spawn a lot more code that was as bad as it was before.

And that makes it even harder for seniors to teach: it was always hard to figure out where someone's misconceptions lie, but now you need to work through more code to find them. You don't even know whether a misconception is just the AI misbehaving, the junior doing junior things, or a sign that the junior should read up on design principles they haven't encountered yet. So you end up with another black-box component that you need to debug :)

