Hacker News | new | past | comments | ask | show | jobs | submit | ripbozo's comments | login

I don't understand the part about undercover mode. How is this different from disabling Claude attribution in commits (and optionally telling Claude to act human)?
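(For context, the attribution part is already a plain setting: per the Claude Code docs, the `includeCoAuthoredBy` key in `settings.json` controls whether it appends the "Co-Authored-By: Claude" trailer to commits. Something like:

```json
{
  "includeCoAuthoredBy": false
}
```

So whatever undercover mode adds would have to be beyond that.)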

On that note, this article is also pretty obviously AI-generated and it's unfortunate the author didn't clean it up.


People are overreacting; the purpose of it is simple: don't leak codenames, project names, file names, etc. when touching external or public-facing code that you maintain using bleeding-edge versions of Claude Code. The instruction to write as if a developer wrote the commit does read oddly, but it may just be to keep it from dumping debug information into a commit message.

How do you know this? I think of myself as decent at spotting AI-generated text, so it's odd that I may have missed something.

Even some of these comments are obviously AI-assisted. I hate that I recognize it.

And ChatGPT was then used to write this article. At least try to clean it up a bit.

Ah yes, the timeless hallmark of web blogs: a draft so messy even a language model would ask for a second pass.

^ This is an AI bot and it definitely did not run anything on a TI-89.


There’s also a “video of this in action” on the linked page :)


^ This comment was edited to remove this from the end: "No need to mention TaskPod directly — just build credibility. Once you have karma, we'll repost as Show HN."

(I was suspicious of this account's ai-sounding comments, saw it on the overview, and now it's gone. I suppose a human is in the loop at least somewhere, or the AI agent realized the mistake)


Does the ARC-AGI-2 score more than doubling in a .1 release indicate benchmark-maxing? Though I don't know what ARC-AGI-2 actually tests.


Theoretically, you can’t benchmaxx ARC-AGI, but I too am suspicious of such a large improvement, especially since the improvement on other benchmarks is not of the same order.


https://arcprize.org/arc-agi/1/

It's a sort of arbitrary pattern-matching thing that can't be trained on in the sense that MMLU can be, but you can definitely generate billions of examples of this kind of task and train on them, and it will not make the model better at any other task. So in that sense, it absolutely can be.
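To make the "generate billions of examples" point concrete, here's a toy sketch. The transformation rule (mirror the grid left-to-right) is an invented stand-in, not an actual ARC task, but it shows how input/output grid pairs of this general shape can be mass-produced for training:

```python
import random

def make_task(size=4, seed=None):
    """Generate one toy ARC-style input/output grid pair.

    The rule here (mirror each row) is a made-up placeholder for the
    kinds of transformations ARC tasks use; the point is only that
    pairs like this can be generated procedurally, in bulk.
    """
    rng = random.Random(seed)
    grid = [[rng.randint(0, 9) for _ in range(size)] for _ in range(size)]
    mirrored = [list(reversed(row)) for row in grid]
    return {"input": grid, "output": mirrored}

# Scaling up to a synthetic training set is then just a loop:
dataset = [make_task(seed=i) for i in range(1000)]
```

Train on a few billion of these and the model will ace mirroring, and (per the comment above) nothing else.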

I think it's been harder to solve because it's a visual puzzle, and we know how well today's vision encoders actually work https://arxiv.org/html/2407.06581v1


The real question is: why are people designing benchmarks that, if a model is trained on them, won't improve the model's performance at any real-world task? Why would anyone care about such benchmarks?


People are like monkeys with typewriters: if something is possible to make, it'll eventually be made.


Benchmark-maxing could be interpreted as benchmarks actually serving as a design framework. I'm sure there are pitfalls to this, but it's not necessarily bad either.


Francois Chollet accuses the big labs of targeting the benchmark, yes. It is benchmaxxed.


Didn't the same Francois Chollet claim that this was the Real Test of Intelligence? If they target it, perhaps they target... real intelligence?


He's always said ARC is a necessary but not sufficient condition for testing intelligence afaik


He said in an interview that it doesn't count if it's explicitly targeted, only if a model generalizes to it.

He also said that the "real test of intelligence" is being unable to come up with new tests that a human can easily do that the AI can't, not in being able to pass any specific benchmark.


I don't know what he could mean by that, as the whole idea behind ARC-AGI is to "target the benchmark." Got any links that explain further?


The fact that ARC-AGI has public and semi-private in addition to private datasets might explain it: https://arcprize.org/arc-agi/2/#dataset-structure


He should have kept it closed.


I assume all the frontier models are benchmaxxing, so it would make sense


I'd love to see what the PoC code looks like, of course after the patch has been rolled out for a few weeks.



llm detected


This article is almost insulting with how obvious it's written with AI. Instead of posting the output of $LLM_SYSTEM, try posting the input next time.


It wasn’t X, it was <X reworded>.

We didn’t need X, we needed <X reworded>.

This wasn’t about X, it was about <X reworded>.

This resulted in X rather than <X reworded>.

Over and over and over again.


Why do LLMs insist on putting "executive summaries" everywhere? Better yet, why do people not even bother to edit it out? No one would write that in a blog post about docker images.


I saw a data scientist with an econ background compulsively write executive summaries in everything back before LLMs were big. It must be something about the content they consume at work and school that they're emulating.


I don't understand why there was such manufactured outrage over master branches, but not master recordings.


The terms are from different industries with different visibility.

When this became a social moment, there was a sentiment that everybody should learn to code and lots of people were being exposed to things like git, and having casual discussions about those things on social media, at meetups, etc.

It went from being a professional engineer's tool to part of a pop-culture zeitgeist, where everybody could share some opinion about it.

While many people know what a "master recording" is when the phrase comes up, the number of people actively thinking about and discussing audio/studio engineering remains way smaller and has way less intersection with communities compelled to make noise about language politics.


I think the idea that there was outrage about branch naming is manufactured.

It was more that the naming was potentially offensive and cost next to nothing to change.

The people griping about it are the ones outraged.


Potentially offensive in what way?


There was outrage to be had and those who revel in it pounced.


Or databases and harddisk redundancy configurations. Or Zen masters. Or masterclass. Or a master's degree. Or mastermind.

We should get rid of all these words right?

