Hacker News | past | comments | ask | show | jobs | submit | javchz's comments

I think this is the way: automated testing for all patches and small changes, and manual testing for big releases.


One should be melting sand to get silicon; anything else is too abstract for my taste.


Glad you’ve got all that time on your hands. I am still working on the fusion reactor portion of my supernova simulator, so that I can generate the silicon you so blithely refer to.


Spite is an underrated productivity tool.


I've partaken in some SDD (Spite-Driven Development) myself, related to the Garmin ecosystem. The problem with it is that once you stop caring, the development stops too.


My new T-shirt


As many flaws as the npm/yarn/pnpm ecosystem has, its interoperability is waaaay better than the whole juggling act between pip, venv, poetry, Anaconda, Miniforge, and uv across projects.

uv is a step in the right direction, but legacy projects without a Dockerfile can be tricky to start.


Liferea looks too old and has a lot of bugs... but man, that thing makes me happy: just headlines, and I click what I want to read.


What bugs did you have? I am still using it and am very happy with it.


Especially at the high end: you want OLED or high refresh rates? You have to buy a "smart TV" that requires internet to set up, even if you plan to use it only with an HDMI device.

I miss old dumb TVs.


Yes, but even lifestyle changes (like a diet low in glycemic load and building muscle) can help reduce many of the harmful effects of type 2 diabetes, even sending it into remission for some people in early stages.

Type 1 is a different story. It's the lack of natural insulin production (due to a damaged pancreas, autoimmune or other causes), basically the opposite problem to type 2, and no amount of lifestyle changes will replace the need for insulin doses.


What's funny is that, because of tokenization, there is a non-zero chance an LLM audit may not see anything wrong here, similar to the strawberry problem.


Nah, cr and rc are different tokens and LLMs would have no issues telling them apart. An older model might have trouble explaining that cr and rc are similar and can thus get easily mixed up, but the characters are probably more different to the LLM than they are to us.


What about all that GitHub training data using the wrong domain? Even as a different token, it's still being trained on as a correct value.
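The token-vs-character point above can be made concrete. A minimal sketch (the domain names here are hypothetical stand-ins, not the actual lookalike from the story): whether an LLM "sees" a cr/rc swap depends on how each string happens to tokenize, but a plain character-level diff catches it deterministically.

```python
# Hypothetical legit domain and a typosquat with the first two
# characters swapped (cr -> rc), for illustration only.
legit = "crates.io"
squat = "rcates.io"

# Character-level comparison: list every position where they differ.
diff = [(i, a, b) for i, (a, b) in enumerate(zip(legit, squat)) if a != b]

print(diff)  # -> [(0, 'c', 'r'), (1, 'r', 'c')]
```

A tokenizer might encode `crates` as one token and `rcates` as two entirely unrelated ones, so the model compares opaque IDs rather than letters; the exhaustive per-character check has no such blind spot.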


The same with Google Photos: it groups similar cats as just one. Fun fact: it does the same for human twins.


Google Photos does the same with our family cat: she has her own auto-updated category when we add photos that include her. I love this feature.


I can already see LLM sommeliers: "Yes, the mouthfeel and punch of GPT-5 is comparable to that of Grok 4, but its tenderness lacks the crunch of Gemini 2.5 Pro."


Isn't that exactly what the typical LLM discourse is? People just throw anecdotes around and stick with their opinion. A is better than B because C, and that's basically it. And whoever tries to actually bench them gets called out because all benches are gamed. Go figure.

