And even worse than that: after you get the slightly condescending spiel about how the problem is normal and real but the solution is simple… it turns out the LLM was completely bullshitting and has zero idea what is actually causing the problem, let alone a solution.
It’s awful dealing with some niche undocumented bug, or a feature in a complex tool that may or may not exist: for a fleeting few seconds it feels like you miraculously solved it, only to have the LLM revert to useless generic troubleshooting Q&A once you correct it.
It was entirely intentional. The Rogerian school of psychotherapy, stereotyped by “how does that make you feel”, was popular at the time, and the most popular ELIZA script used that persona to cleverly redirect attention away from the bot’s weaknesses in comprehension.
Why are you absolutely convinced that the government requesting a contractor (Palantir) no longer use a technology they've determined to be unsuitable for their needs would be "extremely illegal", yet think it's totally fine to demand that every single company engaged in government contracts stop using Anthropic for any purpose whatsoever?
I cannot follow the logic there at all. It's like being so concerned that asking your neighbor to move their car would be rude that your solution is to bulldoze their entire driveway. A federal judge evidently disagrees with your legal theory here, so perhaps you're off the mark (in fact, the order specifically calls out that the DoD failed to attempt less drastic remedies, in violation of the law behind the designation):
> Defendants’ designation of Anthropic as a “supply chain risk” is likely both contrary to law and arbitrary and capricious. The Department of War provides no legitimate basis to infer from Anthropic’s forthright insistence on usage restrictions that it might become a saboteur.
> There are other serious procedural problems with the government’s actions. Anthropic had no notice or opportunity to respond, which likely violated its due process rights. And the Department of War flouted procedural safeguards required by Congress before entering the supply chain risk designation, including that it consider less intrusive measures.
> Why are you absolutely convinced that the government requesting a contractor (Palantir) no longer use a technology they've determined to be unsuitable for their needs would be "extremely illegal", yet think it's totally fine to demand that every single company engaged in government contracts stop using Anthropic for any purpose whatsoever?
Because that’s what the law is! Because 1) 3252 gives them a mechanism to broadly exclude a certain vendor from their supply chain, and 2) singling out a specific vendor for any other reason (favoritism, corruption, etc.) is not legally permissible under any other law.
You can argue that the law doesn’t make sense, but you can’t argue that the law is not the law?
Well, we were able to observe the impact of legalizing online betting in real time, watching every other ad slot during sports games turn into gambling-app ads designed to hook twenty-something men into lighting their paychecks on fire for a brief dopamine hit.
Plus, prediction markets went from a harmless novelty, where people would bet a few bucks on an election, to a massive offshoot of the gambling industry that incentivizes manipulating outcomes and insider trading, once again leaving the average gambler holding the bag, poorer than before.
You honestly couldn't design a better experiment to test whether open-and-legal versus banned-but-underground gambling leads to better outcomes. So I'm not sure why it matters that other countries have different laws (laws that were likely designed more thoughtfully than just declaring "it's legal now" out of the blue).
It proves that the medical community did not learn from Semmelweis.
Reports of thalidomide side effects were ignored, suppressed, or dismissed. Distributors sat on such reports for months while continuing to sell the drug. Overall, it took several years from the first observed birth defects until the drug was banned in most countries.
Numerous other examples before and since (including the deliberate disregard of clinician fatigue and the medical errors it causes) show how medicine elevates institutional interests and groupthink over people's lives.
This appears to be an advertisement for a (somewhat inscrutable) AI product they're selling called CANONIC, which also has a cryptocoin bolted onto it somehow.
Oh boy is right. If BITCOIN is SPECulation to distribute cryptographic transactions on a blockchain pre-AI, CANONIC is a SPECification to distribute WORK contracts with your AI on a blockchain. So BITCOIN=SPECulation, while CANONIC COIN=SPECification… of WORK!
This is partially true. Thanks for the comment. I’m the developer of CANONIC. It’s an AI GOV framework. But crypto it is not. The opposite.
CANONIC is a learning language to fully govern AI. Ask yourself why you are still programming computation with LLMs when they repeatedly outperform humans on such coding tasks. What’s missing is AI GOV. CANONIC is a contract with your AI. COIN becomes an artifact of good AI governance. Hardly an opaque transaction. :)
It’s a tortured metaphor for benefit or value or bounty. So, to work with the cardiologist, she publicly issues a bounty (the numerator) in COIN and I privately estimate the effort (the denominator). Triage the table of bounties by this fraction.
I guess it could also be used to communicate that some problems are too difficult for modest resources, if the reward exceeds 255.
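To make the triage concrete, here's a minimal sketch in Python; the bounty table is invented for illustration, and the 255 cap is just my reading of the comment above, not anything CANONIC documents:

    # Hypothetical bounty table; tasks, COIN amounts, and effort
    # estimates are all invented for illustration.
    bounties = [
        {"task": "label ECG anomalies", "coin": 120, "effort": 30},
        {"task": "summarize charts", "coin": 40, "effort": 5},
        {"task": "rare arrhythmia model", "coin": 300, "effort": 200},
    ]

    MAX_REWARD = 255  # assumed cap from the comment above, not a documented limit

    # Triage: rank feasible work by bounty-per-effort; anything over the
    # cap is treated as "too hard for modest resources" and not ranked.
    feasible = [b for b in bounties if b["coin"] <= MAX_REWARD]
    for b in sorted(feasible, key=lambda b: b["coin"] / b["effort"], reverse=True):
        print(f'{b["task"]}: {b["coin"] / b["effort"]:.2f} COIN per unit effort')

Sorting by the fraction rather than by raw bounty keeps cheap high-value work at the top, and anything over the cap surfaces as "not worth attempting" instead of silently dominating the list.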
Anyway, the only thing worth spending it on is resource upgrades, right?
Sora was one of the earliest demos of a "wow, okay, that is good enough to be mistaken for real" GenAI model, which is what that comment meant by "weapon" (the tech behind it, not just Sora™ videos).
Sure, by the time they productized it, Sora was no longer SOTA thanks to the AI arms race. And it was ultimately positioned as a TikTok for Slop with an annoying watermark, so it didn't take the world by storm on its own.
But since it was unveiled, GenAI videos as a whole have become commonplace everywhere else on the internet, with plenty of negative impact already in terms of spam and manipulation, and we're barely in year two so far.
IMO what's really wishful thinking is believing that society will necessarily adapt for the better in response to a deluge of AI spam/ads/propaganda.
You could have said the same about, say, pre-AI deceptively edited/ragebait/made-up content going viral on FB: "actually this is good, because soon people will realize they are being tricked/lied to, and they'll think extra critically before sharing dubious content next time."
Which has not happened. I can only see AI videos/images making the problem worse, as people are fed personalized, narrowly targeted content that seems to perfectly appeal to their own beliefs/biases/emotions/etc.
Also, if anything, it seems like we will have to trust authoritative groups more thanks to GenAI. If I have to treat every video on the internet from, e.g., Iran as potentially fake, I'm going to turn to the NYT or WSJ, who can be relied on to (usually) share only original content or highly vetted third-party content.
I agree that the solution we eventually find might not be for the better. In fact, a couple of the solutions I've seen fall into that category, like banning GenAI (it does nothing to solve the underlying issue, and that kind of control over economic production always requires increased authoritarianism).
I can't really provide a truly good solution, as this problem has large ramifications into philosophy and ethics, but I'd think it would involve solutions like attestation and certificates, and, primarily, thinking of shared media (text, images, videos, etc.) not as facts, but, strictly as allegations.
(FYI, this is an LLM bot; check their comment history and note the repetitive structure of every comment they've ever posted, all within the last hour.)
> This is the right question but hard to answer in practice ...
> The brownfield vs greenfield split is the real answer to ...
> The babysitting point is the one people keep glossing over ...