Hacker News | xmcqdpt2's comments

I thought they picked it specifically because it is gender neutral, but I double-checked and apparently it's only gender neutral in French:

https://en.wikipedia.org/wiki/Claude_(given_name)


> Since the code in question was doing IO that you knew could fail handling the situation can be as simple as setting a flag from within the signal handler.

If you are using mmap like malloc (as the article does) you don't necessarily know that you are "reading" from disk. You may have passed the disk-backed pointers to other code. The fact that malloc and mmap return the same type of values is what makes mmap in C so powerful AND so prone to issues.


Yes, and for writing (the example is read-write) it's of course yet another kettle of fish. The error might never get reported at all. Or you might get a SIGBUS (at least with sparse files).

Maybe all the users are OpenClaw instances?

There is no "why." It will give reasons but they are bullshit too. Even with the prompt you may not get it to produce the bug more than once.

If you sell a coding agent, it makes sense to capture all that stuff, because you (hopefully) have test harnesses where you can statistically tease out which prompt changes caused bugs. Most projects won't have those, and anyway you don't control the whole context if you are using one of the popular CLIs.


If I have a session history or histories, I can (and have!) mine them to pinpoint where an agent did not implement what it was supposed to, or to understand who asked for a certain feature and why, etc. It complements commits: sessions are more like a court transcript of what was said / claimed, and then you can compare that to what was actually done (the commits).

It's not reproducible though.

Even with the exact same prompt and model, you can get dramatically different results, especially after a few iterations of the agent loop. And generally you can't even rely on having those: most tools don't let you pick the model snapshot or change the system prompt, and you would have to make sure you have the exact same user config too. Once the model runs code, you aren't going to get the same outputs in most cases (there will be datetimes, logging timestamps, different host names and user names, etc.)

I generally avoid even reading the LLM's own text (and I wish it produced less of it really) because it will often explain away bugs convincingly and I don't want my review to be biased. (This isn't LLM specific though -- humans also do this and I try to review code without talking to the author whenever possible.)


That would be great because "I got it from Wikipedia and Arxiv" isn't exactly useful.

From reading your second link (and please tell me if I got it wrong), it sounds like it isn't actually tracing back to training data but to prototypes, which are then linked a posteriori to likely sections of the training data. The attribution isn't exact, right? It's more like "these are the likely texts that contributed to one of the prototypes that produced the final answer." Specifically, the bit in PRISM titled "Nearest neighbour Search" sounds like you could have a prototype that draws from 1000 sources, but 3 of them more than the others, so the model identifies those 3, while the other ones might matter just as much in aggregate?

It says that the decomposition is linear. Can you remove a given prototype and infer again without it? That would be really cool.


That part of the claim is involved, so we have future posts planned to clarify it. And yes, you can remove a prototype and generate again; we show examples in that PRISM post.

In PRISM, for any token the model generates, you can say it generated this token based on these sources. During training, the model is 'forced' to match all the prototypes to specific tokens (or groups of tokens) in the data. The prototype itself can actually be exactly matched to a training data point. Think of it like clustering: the prototype is a stand-in for training data that looks like that prototype, and we force (and know) how much the model will rely on that prototype for any token it generates.

The demo in the post is not as granular because we don't want to overwhelm folks. We'll show granular attribution in the future.


I think in practice it's less of a programming language and more of a scripting environment. It's like Excel for math. There are many more people using it to produce mathematical results (like how Excel is used to produce reports and graphs) than people who use it to write programs.

This is why it's not particularly problematic that it is closed source. Most people I've worked with use it to produce mathematical results that are fully checkable by hand.


The informal "tu" really irritates me coming from a computer.

Java also has covariant mutable arrays. I can't believe they created the whole language and didn't realize that covariant arrays are unsound. Or didn't care?

They didn't care about preventing all unsoundness at type check time. As long as the JVM can detect it and throw an exception, it's good enough for Java.
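A minimal sketch of that trade-off (class and variable names are illustrative): the assignment below type checks at compile time, and the JVM catches the bad write at runtime instead.

```java
// Java arrays are covariant: a String[] may be assigned to an Object[].
// The compiler accepts the write of an Integer, and the JVM rejects it
// at runtime with an ArrayStoreException.
public class CovariantArrays {
    public static void main(String[] args) {
        Object[] objects = new String[1]; // legal: arrays are covariant
        try {
            objects[0] = Integer.valueOf(42); // compiles fine, unsound
        } catch (ArrayStoreException e) {
            System.out.println("caught ArrayStoreException");
        }
    }
}
```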

> Or go the opposite way: If you want a language that feels dynamic and leads to prototyping, well a type system that is total and complete might be too heavy. Instead of only allowing programs that are proven to be typed correctly you might want to allow all programs that you cannot prove to be wrong. Lean into gradual typing. Everything goes at first and the typing becomes as strict as the programmer decides based on how much type information they add.

If you have generics and want type annotations to type check at compile time, you are going to need unification:

  let l: List<Animal> = List(dog, cat)

At that point, you have written all the machinery to do inference anyway, so might as well use it.

I guess you could have a language where the above must be gradually typed like

  let l: List<Animal> = List<Animal>(dog: Animal, cat: Animal)

but that doesn't seem particularly ergonomic.
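For comparison, a hedged sketch of the first form in Java (Animal/Dog/Cat are made-up types): the compiler already unifies the arguments' types against the annotated target type, so neither the call nor the arguments need redundant annotations.

```java
import java.util.List;

class Animal {}
class Dog extends Animal {}
class Cat extends Animal {}

public class Inference {
    public static void main(String[] args) {
        // The compiler infers the type argument of List.of as Animal
        // from the target type; no List.<Animal>of(...) needed, and the
        // arguments need no ascription either.
        List<Animal> l = List.of(new Dog(), new Cat());
        System.out.println(l.size());
    }
}
```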

