It seems more like an inexperienced guy asked the LLM to implement something, the LLM just output what an experienced guy did before, and it even gave him the credit.
Copyright notices and signatures in generative AI output are generally a result of the training data creating the expectation that such things exist; they are largely unrelated to how closely the output corresponds to any particular piece of training data, and especially to who actually produced that work.
(It is, of course, exceptionally lazy to leave such things in if you are using the LLM to assist you with a task, and it can cause problems of false attribution, especially in this case, where it seems to have just picked the name of one of the maintainers of the project.)
Did you take a look at the code? Given your response, I figure you did not, because if you had, you would see that the code was _not_ cloned but genuinely written by the LLM.
> then you're doing the opposite of what the author proposes
No, it’s exactly what the author is writing about. Just check his example; it’s pretty clear what he means by “thinking in math”.
> Scientific consensus in math is Occam's Razor, or the principle of parsimony. In algebra, topology, logic and many other domains, this means that rather than having many computational steps (or a "simple mental model") to arrive at an answer, you introduce a concept that captures a class of problems and use that.
If you think the ads are working and you have 10k potential customers, you start thinking about how to increase your conversion rate to capture a chunk of those 10k; you might assume distribution is solved.
But if it turns out only 2.5k of them are real humans, then your conversion rate might not even be the issue, and it's just the marketing strategy that needs tweaking.
The whole point is that they are giving you fraudulent traffic, which you then use as real data to figure out your next steps. If you don't know it's fraudulent, or how many of the clicks are fraudulent, then you are making decisions under the wrong assumptions.
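To make the arithmetic concrete, here is a minimal sketch; only the 10k/2.5k split comes from the example above, the 50 sign-ups are an assumed number purely for illustration:

    # Hypothetical figures: 10k measured clicks, only 2.5k from real humans.
    # The 50 sign-ups are an assumption for illustration, not from the thread.
    measured_clicks = 10_000
    real_clicks = 2_500
    signups = 50

    measured_rate = signups / measured_clicks  # 0.5% -- looks like a conversion problem
    real_rate = signups / real_clicks          # 2.0% -- the funnel itself may be fine

    print(f"measured: {measured_rate:.1%}  real: {real_rate:.1%}")

With those numbers the fraud makes the funnel look 4x worse than it is, so you would go fix the wrong thing.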
> You can’t stop fraudulent clicks just like you can’t stop your SuperBowl ad from playing while your viewers are in the bathroom
That’s not even a good analogy; we are talking about clicks, not impressions.
Earlier today I was scrolling through the “work at a startup” posts.
Seems like everyone is doing LLM stuff. We are back to “Uber for X”, but now it is “ChatGPT for X”. I get it, but I’ve never felt more uninspired looking at what YC startups are working on today. For the first time, they all feel incredibly generic.