> What good is your open problem set if really its a trivial "google search" away from being solved. Why are they not catching any blame here?
They are a community-run database, not the sole arbiter and source of this information. We learned basic research skills back in high school; I'd hope researchers from top institutions, now working for one of the biggest frontier labs, could do the same before making a claim. But microblogging has been, and continues to be, a blight on accurate information, so nothing new there.
> GPT-5 was still doing some cognitive lifting to piece it together.
Cognitive lifting? It's a model, not a person, but beside that fact, this was already published literature. It's handy that an LLM can be a slightly better search, but calling out claims of "solving maths problems" as irresponsible and inaccurate is the only right choice in this case.
> If a human would have done this by hand it would have made news [...]
"Researcher does basic literature review" isn't news in this or any other scenario. If we did a press release every journal club, there wouldn't be enough time to print a single page advert.
> [...] how many other solutions are out there that just need pieced together from pre-existing research [...]
I am not certain you actually looked into the model output or why this was such an embarrassment.
> But, you know, AI Bad.
AI hype very bad. AI anthropomorphism even worse.