Excellent link.
So the best solution is to take the author’s observation and add the average seasonal lag to arrive at the “real” observed spring, summer, fall, and winter.
I don’t think that is a good example. No one is debating whether LLMs can generate completely new sequences of tokens that have never appeared in any training dataset. We are interested not only in novel output but also in that output being correct, useful, insightful, etc. Copying a sequence from the user’s prompt is not really a good demonstration of that, especially given how autoregression/attention basically gives you that for free.
> That means the group of characters it outputs must have been quite common in the past. It won't add a new group of characters it has never seen before on its own.
My only claim is that precisely this is incorrect.
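To make that concrete, here's a throwaway sketch (made-up three-line corpus, nothing remotely like a real transformer): even a character-level bigram model, which only ever reproduces adjacent character pairs it has seen, composes those pairs into strings that never appear verbatim in its training data.

```python
import random
from collections import defaultdict

# Toy character-level bigram "language model". Deliberately minimal and
# the corpus is made up -- the point is only that a model generating one
# character at a time from learned statistics routinely emits strings
# that never appear verbatim in its training data.

corpus = ["the cat sat", "the dog sat", "a cat ran"]

# Learn which characters follow which character in the training data.
transitions = defaultdict(list)
for text in corpus:
    for a, b in zip(text, text[1:]):
        transitions[a].append(b)

def sample(start, length, rng):
    """Generate text by repeatedly sampling the next character."""
    out = start
    for _ in range(length):
        followers = transitions.get(out[-1])
        if not followers:
            break
        out += rng.choice(followers)
    return out

# Count how many sampled strings appear nowhere in the training corpus.
novel = 0
for seed in range(1000):
    generated = sample("t", 10, random.Random(seed))
    if not any(generated in text for text in corpus):
        novel += 1
print(f"{novel}/1000 generated strings never appear verbatim in the corpus")
```

A real LLM does the same thing over a vastly larger vocabulary and context window, so the combinatorics favor novel sequences even more heavily.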
You couldn’t design a better system for incentivizing leaks if you were trying. Hell, the CEO literally said as much. Not sure how you can conclude the markets aren’t the problem.
Yeah I had to reread that part... I was like, no way the CEO of Polymarket publicly said on record that it incentivizes leaks. Had to check to make sure I wasn't on The Onion.
Wow, it took some time for me to dig the interview out (0). I think it's stupid that The Atlantic did not link to it, and that they misrepresented the context.
I agree that company info being leaked is whatever. No one is hurt by knowing that Apple is working on a foldable phone; maybe an exec loses his million dollar bonus and can't upgrade his yacht this year, and the market can operate off of that knowledge.
But the flip side is that there's no way to distinguish between leaked company info and leaked government info, and up until this era of history, there was rarely financial incentive for anyone to leak govt info, and even if there were, it was almost impossible to do so completely anonymously.
I'm not necessarily agreeing with the article. Who knows if that actually happened? But the incentives make it more plausible than ever.
Rhetorical question: why do non-insiders still bet in these markets? Surely, after all of the focus on insiders, people will begin to realize that betting without insider knowledge is a fool’s gambit.
Because the way these companies make money is incentivizing the behavior of gambling addicts. It's just like asking why people will continue taking drugs if it's known to harm them.
You act like they all act rationally with maximum information.
Kids are growing up with the culture of sports betting, meme stocks, Robinhood for easy investing (even if you can’t afford a single share of a stock), virtual items and loot boxes, “blind box” products, etc. The entire economy runs on taking advantage of people with gambling compulsions/addictions.
And to answer your rhetorical-but-not-really question, not all people know they are “the fish” (referring to the quote from the movie Rounders).
Are we automatically discarding everything that might have been written or assisted by an LLM? I get it when the articles are meaningless self-improvement pieces or similar word soup. However, if hypothetically an author uses LLM assistance to improve the style to their liking, I see nothing wrong with that as long as the core message stands out.
I've seen so many LLM-generated articles by this point that obviously had no human editing done beforehand (just prompt and slap it onto the Web) that it makes me wonder every time. If I read this article, will I actually learn the truth? Or are key parts of it actually false because the LLM hallucinated them, and the human involved didn't bother to double-check the article before publishing it?
If someone was just using the LLM for style, that's fine. But if they were using it for content, I just can't trust that it's accurate. And the time cost for me to read the article just isn't worth it if there's a chance it's wrong in important ways, so when I see obvious signs of LLM use, I just skip and move on.
Now, if someone acknowledged their LLM use up front and said "only used for style, facts have been verified by a human" or whatever, then I'd have enough confidence in the article to spend the time to read it. But unacknowledged LLM use? Too great a risk of uncorrected hallucinations, in my experience, so I'll skip it.
This was my exact thought process reading this. The business side of my company does not care or want to wait for complex solutions that sound cool to engineers. If anything, we have the opposite problem: convincing business stakeholders when complexity is in fact warranted.
At least Snow Crash was a fun read. I find a lot of this stuff just tedious - like yeah, wow, aren't you cool, you let your robot burn money and wreck shit and waste time, cool, couldn't you have done something real with your conspicuous amounts of free time?
Like, I'm getting to the point where I'm hoping that a football player shows up to shove these people into a locker where they can think about things without a screen for a couple hours.
He also reuses (or rather, rehomes) by finding buyers for cleaned and sharpened knives of good quality. That's a plus from an environmental viewpoint.
I dislike scalping where no real value is added to the service provided beyond getting there first, but this guy uses his skills to pick out good knives, does quality assurance and presumably sharpening, and sells them with the ability to inform buyers about the type of knife and its intended use.