what is the solution then to age gating apps that the public feels should be age gated? (TikTok, Instagram, etc). it seems like every app implementing its own guessing system would have even more holes, right?
this is one where I am sympathetic. the moment when someone, with their parent, is setting up a device seems like the best point to check age. right?
The companies you mentioned are the ones profiting handsomely off their intentionally addictive platforms. They're the ones with massive legal departments. Obviously they should be the ones liable to make sure the kids aren't getting abused on their platforms, not a bunch of volunteer Linux developers who couldn't care less about social media or monetization.
They could've written these laws to go after Apple and Microsoft specifically, and assume that most kids wouldn't have the wherewithal to install Linux themselves. That may or may not be effective. But no, the way the law is written, any hobbyist OS dev is now legally liable for the abuse kids might suffer on massive social networks that are completely unrelated to the OS.
The funny thing is that Estonia actually already figured this all out. Their national ID system allows any platform to reliably verify anybody's age without gaining access to any other information about them. It's the perfect system for reliably checking age while maintaining perfect privacy about all other personal data. But I don't think we'll see that in the US in my lifetime, so we'll just have to keep fighting over all these ineffective privacy nightmares instead.
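To make the idea concrete, here's a toy sketch of such an attestation flow. All names are my own invention, and I use a shared-secret HMAC only to keep the sketch stdlib-only; a real system like Estonia's uses public-key signatures from a smart card, so the platform never holds the authority's key.

```python
import hmac, hashlib, json, secrets

# Toy model of a privacy-preserving age check: the ID authority attests
# only "over 18: yes/no" plus a fresh nonce -- no name, no birthdate.
# (Hypothetical sketch; real systems sign with asymmetric keys, so the
# verifying platform only needs the authority's public key.)

AUTHORITY_KEY = secrets.token_bytes(32)   # held by the ID authority

def issue_attestation(over_18: bool) -> dict:
    claim = {"over_18": over_18, "nonce": secrets.token_hex(8)}
    payload = json.dumps(claim, sort_keys=True).encode()
    mac = hmac.new(AUTHORITY_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "mac": mac}

def platform_verify(att: dict) -> bool:
    payload = json.dumps(att["claim"], sort_keys=True).encode()
    expected = hmac.new(AUTHORITY_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, att["mac"]) and att["claim"]["over_18"]

att = issue_attestation(over_18=True)
print(platform_verify(att))   # True -- and the platform learned nothing else
```

The point is what the platform *doesn't* see: the claim carries a single boolean, so verification reveals age status and nothing more.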
the solution is to remove the bits of those apps that are harmful to children (and adults): the algorithmic data feed, the infinite scroll, the engagement tactics, the advertising
So, for instance, pornography and gambling should be 100% illegal? Or at the least, all social media sites should censor any discussions that aren't child-appropriate?
No, I don't think pornography, or arguably gambling, has the same "manipulative addictiveness" hooks. The equivalent for those would be something like a porn app that, every time you opened it, had your phone emit a silent puff of nicotine (just... imagine that existed, for the sake of analogy). It's the difference between opening Facebook and seeing a feed of your friends' posts versus a feed of posts selected to ragebait you into responding.
They're saying we should remove the features in general because they're anti-features harmful to everyone, and focusing on children distracts from that fact.
This conclusion is up for debate, but that's what they mean.
"Scientists may have...that ability could...early experiments suggest...if verified, it could..."
I have become jaded with publications that hedge like this. In my experience most of these discoveries never pan out, they just disappear. And not being in the field myself, I don't know how to judge.
Does anyone in quantum computing have a read on how big a deal this is (or isn't)?
The gap between the laboratory and the factory is big. A technology usually requires a ton of refinement before it's ready for mass adoption. EVs are a good example.
You should read them as publicity to convince stupid politicians to continue to fund basic research when they are more inclined to go for tax cuts for billionaires. Annoying, but a necessary evil.
The key thing here is not whether it's AI. The key thing is quality and signal. No one wants to read a low-quality human comment either.
If the AI output were actually better than talking to a real human (more useful, more concise, serving the job to be done), then no one would have a problem with it. In fact, they would appreciate it. That future is not here yet in many areas.
The problem is that people are wielding AI right now and either [a] the models they are using are not good enough, [b] the models aren't being given enough context, or [c] they are deployed in a way that makes the output sloppy.
(Insert joke about whether this comment is AI. It's not, but joke away)
No. It doesn't matter how good an LLM is. If a person has something to say and they can give the LLM enough context to say it well, they should just write it themselves. There's zero reason to bring an LLM into it. Doing so simply makes your writing less trustworthy, because as a reader I don't know whether what I'm reading is genuinely from the writer or simply average-of-all-texts filler.
No it isn't. I really do not care what the LLM has to say. If a person has taken the (substantial) time necessary to fill the context with enough information that something interesting comes out, I would much rather they simply give me the inputs. The middleman is just digested Internet text. I've already got one of those on my end.
That does somewhat depend on the size of the context.
LLMs won't add information to context, so if the output is larger than the input then it's slop. They're much better at picking information out of context. If I have a corpus of information and prompt an extraction, the result may well contain more information than the prompt. It's not necessarily feasible to transfer the entire context, and also I've curated that specific result as suitably conveying the message I intend to convey.
This does all take effort.
My take is also that I am interested in what people say: I have priors for how worthwhile I expect it to be to read stuff written by various people, and I will update my priors when they give me things to read. If they give me slop, that's going to affect what I think of them, and I expect the same in return. I'm willing to work quite hard to avoid asking my colleagues to read or review slop.
> LLMs won't add information to context, so if the output is larger than the input then it's slop
That doesn't align with my observations. A lot of times they are able to add information to context. Sure it's information I could have added myself, but they save me the time. They also do a great job of taking relatively terse context and expanding upon it so that it is more accessible to someone who lacks context. Brevity is often preferable, but that doesn't mean larger input is necessarily slop.
I disagree. If my colleague can't be bothered to write a PR comment themselves then I can't be bothered to read it. If I can gain the same insights from interfacing an LLM directly then there's no point in this intermediary dance.
Yup. The comment about the LLM-generated PRs is telling. The complaint is that LLM-generated PRs don't describe design intent. You know how to avoid that? Tell the LLM to provide intent and, if need be, give it the intent. A PR that doesn't capture intent should be categorically rejected, and the parties responsible should expect to never get a PR through without it.
> The key thing here is not whether it's AI. The key thing is quality and signal. No one wants to read a low-quality human comment either.
This is so obviously true to intelligent people (and is even a point made in the article) ... it's sad that you're getting downvoted.
The OP wrote
> When I talk to a person, I expect that they are telling me things out of their head — that they have developed a belief and are trying to communicate it to me.
But when I'm having a conversation about a subject (rather than with a friend, partner, or other person with whom I have a relationship, where the conversation is part of having that relationship), I don't care what is in that person's head; I care about the truth of the matter, so I'm far more interested in their sources, their logic, and the validity of same. Unless I'm a psychologist doing a survey, why should I care about some random person's beliefs?

Since I'm a truth seeker, I care about their arguments, and of course the quality of their arguments is of paramount importance. I appreciate people who can back up their arguments, and LLM summaries that are chock full of facts gleaned from the massive training data that includes a vast amount of human knowledge are fully appreciated--while being aware that hallucination is possible, so I often double check things regardless of the source.

OTOH, the pushback to this is from people I consider worse than irrelevant--they not only are willfully ignorant but they reject knowledge seeking for irrational ideological reasons. (I myself see the LLM industry as extremely problematic, but as long as LLMs exist and are capable of producing quality signal--which is the given here--then I will use them.)
This whole page is illustrative: so many people are telling us things out of their head ... that have nothing to do with the article because they didn't read it. So they blather about their beliefs and opinions about support--because that's how they interpreted the title. These comments are useless.
P.S.
> If all you care about is the facts, and not the other’s relationship to them, why engage with a person at all?
I already said: I'm a truth seeker. Also I sometimes seek to persuade people in public forums--and not necessarily the person I'm corresponding with. And missing is any reason why I should care about internet randos' relationships with their beliefs, other than as a psychological survey.
> You could query a LLM for whatever subject, argument or counterpoint you wish.
I can do better, and can do more, as noted.
> Besides, your hypothetical summaries chock full of facts don’t exist, at least not yet. Most LLM summaries are chock full of filler, thus the name slop, thus why us “ignorant” people hate reading it.
This is an example of a belief that is not supported by the facts--if it's even a belief, which I doubt--it's emo ideology. Putting "ignorant" in quotes doesn't falsify it, and I have never encountered a remotely intelligent person who "hates" reading LLM summaries--this is in the same category as people who reject Wikipedia citations because "anyone can edit it". This person unintelligently reduces all LLM output to "slop"--maybe he should try actually reading the head article, which has a quite different take.
If all you care about is the facts, and not the other’s relationship to them, why engage with a person at all? You could query a LLM for whatever subject, argument or counterpoint you wish.
Besides, your hypothetical summaries chock full of facts don’t exist, at least not yet. Most LLM summaries are chock full of filler, thus the name slop, thus why us “ignorant” people hate reading it.
As an American who has lived in Japan and traveled around Asia, Europe, and South America, Japan's attention to detail is almost superhuman. From how bathroom lines are managed to how packages are wrapped, garden moss is curated, and dishes are plated, everything is almost perfect. It's like the level of service in Michelin restaurants, applied down to the lowliest of jobs.
There are nitpicks people will find with a statement like this, but I've never found anything like it.
It seems unbelievable that this is the first time the child ever picked up a paintbrush and applied paint to a surface.
It's probably more like: this is the first "published" final painting he ever did, after doing hundreds of other practice paintings/sketches that don't "count"
My anxiety about falling behind with AI plummeted after I realized many of these tweets are overblown in this way. I use AI every day, how is everyone getting more spectacular results than me? Turns out: they exaggerate.
Here are several real stories I dug into:
"My brick-and-mortar business wouldn't even exist without AI" --> meant they used Claude to help them search for lawyers in their local area and summarize permits they needed
"I'm now doing the work of 10 product managers" --> actually meant they create draft PRD's. Did not mention firing 10 PMs
"I launched an entire product line this weekend" --> meant they created a website with a sign up, and it shows them a single javascript page, no customers
"I wrote a novel while I made coffee this morning" --> used a ChatGPT agent to make a messy mediocre PDF
Going viral on X is the current replacement for selling courses on {daytrading, Amazon FBA, crypto}.
The content of the tweets isn't the thing.. bull-posting or invoking Cunningham's Law is. X is the destination for formula posting and some of those blue checkmarks are getting "reach" rev share kickbacks.
Yeah, if you get enough impressions, you get some revenue, so you don't need to sell any courses, just viral content. Which is why some (not ALL) exaggerate as suggested.
It's a bit insane how much reach you need before you'd earn anything impactful, though.
I average 1-2M impressions/month, and have some video clips on X/Twitter that have gotten 100K+ views, and average earnings of around $42/month (over the past year).
I imagine you'd need hundreds of millions of impressions/views on Twitter to earn a living with their current rates.
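The arithmetic, using my numbers above (1.5M impressions/month as the midpoint, $42/month in earnings, and a hypothetical $4,000/month target as "a living"):

```python
impressions_per_month = 1_500_000   # midpoint of "1-2M impressions/month"
earnings_per_month = 42.0           # reported average over the past year

rate = earnings_per_month / impressions_per_month
print(f"${rate * 1_000_000:.0f} per million impressions")   # $28 per million impressions

target_income = 4_000               # hypothetical monthly living wage
needed = target_income / rate
print(f"{needed / 1e6:.0f}M impressions/month needed")      # 143M impressions/month needed
```

So at these rates you'd need on the order of 150M impressions a month, i.e. well over a billion a year, to earn a typical salary.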
Thanks a lot for your transparency, Jeff! Much needed in this area. And your content is quality, unlike much of what's being discussed here.
It is really hard to make anything substantial off social media exposure. Unfortunately that doesn't stop many from exaggerating claims in order to (maybe) become internet famous, or to see high click numbers, etc. So it is both bad business for creators and poisonous to the discourse for readers; the only real winners are the social media companies and the product companies that get hyped up.
> Unfortunately that doesn't stop many from exaggerating claims in order to (maybe) become internet famous
I've been thinking about this a lot lately in another context -- viral priests being anti-vax -- and realized it's the other way around: their motivation doesn't matter; the viewers don't want to see moderate content, they want highly polarized and controversial topics.
The same goes for the claims about AI. Nobody wants to hear that AI boosts productivity in a nuanced way; people want to hear about either 10X or -10X, so the market dictates the content/meme.
I'm not as familiar with your content but how often do you post? I have a friend who posts 'meme' type of content (all original) and he makes a decent amount, but he has it all queued up.
I pretty much never even went there for technical topics at all, just funny memes and such, but one day recently I started seeing crazy AI hype stories getting posted. Sadly I made the mistake of clicking on one, and now it's all I get.
Endless posts from subs like r/agi, r/singularity, as well as the various product specific subs (for Claude, OpenAI, etc). These aren’t even links to external articles, these are supposedly personal accounts of someone being blown away by what the latest release of this or that model or tool can do. Every single one of these posts boils down to some irritating “game over for software engineers” hype fest, sometimes with skeptical comments calling out the clearly AI-generated text and overblown claims, sometimes not. Usually comments pointing out flaws in whatever’s being hyped are just dismissed with a hand wave about how the flaw may have been true at one time, but the latest and greatest version has no such flaws and is truly miraculous, even if it’s just a minor update for that week. It’s always the same pattern.
I actually read through the logs and the code in the rare instances someone actually posts their prompts and the generated output. If I'm being overly cynical about the tech, I want to know.
The last one I did it on was breathlessly touted as "I used [LLM] to do some advanced digital forensics!"
Dawg. The LLM grepped for a single keyword you gave it and then faffed about putting it into json several times before throwing it away and generating some markdown instead. When you told it the result was bad, it grepped for a second word and did the process again.
It looks impressive with all these json files and bash scripts flying by, but what it actually did was turn a single word grep into blog post markdown and you still had to help it.
Some of you have never been on enterprise software sales calls and it shows.
> Some of you have never been on enterprise software sales calls and it shows.
Hah—I'm struggling to decide whether everyone experiencing it would be a good thing in terms of inoculating people's minds, or a terrible thing in terms of what it says about a society where it happens.
“I used AI to make a super profitable stock trading bot”
—-> using fake money with historical data
“I used AI to make an entire NES emulator in an afternoon!” —-> a project that has been done hundreds of times and posted all over github with plenty of references
> “I used AI to make a super profitable stock trading bot” —-> using fake money with historical data
Stocks are another matter. There were wonder "algorithms" even before "AI". I helped some friends tweak some. They had the enthusiasm and I had the programming expertise and I was curious.
That was a couple years ago. None of them is rich and retired now - which was what the test runs were showing - and I think most aren't even trading any more.
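For what it's worth, the gap between what "the test runs were showing" and live results often comes down to backtest bugs like look-ahead bias. A toy illustration on synthetic data (the strategy and numbers are entirely made up for the example): the buggy version "decides" using tomorrow's price and looks wildly profitable; an honest version of the same idea does roughly nothing.

```python
import random

random.seed(0)
# Synthetic daily prices: a plain random walk with no real edge to find.
prices = [100.0]
for _ in range(1000):
    prices.append(prices[-1] * (1 + random.gauss(0, 0.01)))

def backtest(peek: bool) -> float:
    """Return final portfolio value starting from 1.0."""
    cash = 1.0
    for t in range(len(prices) - 1):
        ret = prices[t + 1] / prices[t]          # next day's return
        if peek:
            # Look-ahead bug: only "buy" on days that turn out to be up.
            if ret > 1:
                cash *= ret
        else:
            # Honest momentum rule: buy if yesterday was up (no future data).
            if t > 0 and prices[t] > prices[t - 1]:
                cash *= ret
    return cash

print("with look-ahead bug:", backtest(peek=True))
print("honest backtest:    ", backtest(peek=False))
```

The buggy run compounds only the winning days, so it dwarfs the honest one; that's the kind of result that makes a test run look like early retirement.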
I feel the same.
I understand some of the excitement. When I use it I feel more productive, as it seems I get more code done.
But I never finish anything earlier, because it never fails to introduce a bizarre bug or behaviour that no sane person doing the task by hand would.
> "I wrote a novel while I made coffee this morning" --> used a ChatGPT agent to make a messy mediocre PDF
There was a story years ago about someone who put hundreds of novels on Amazon; in aggregate they pulled in a pretty penny. I wonder if someone's doing the same but with ChatGPT instead.
Pretty sure there was a whole era where people were doing this with public domain works, as well as works generated by Markov chains spitting out barely-plausible-at-first-glance spaghetti. I think that well started to dry up before LLMs even hit the scene.
It has happened in Japan. There was one author who was updating 30+ series simultaneously on Kakuyomu, the largest Japanese web novel site. A few of them got top ranked.
AFAIK, the way people are making money in this space is selling courses that teach you how to sell mass-produced AI slop on Amazon, rather than actually doing it themselves.
At the end of the day, it doesn't really get you that much if you get 70% of the way there on your initial prompt (which you probably spent some time discussing, thinking through, clarifying requirements on). Paid, deliverable work is expected to involve validation, accountability, security, reliability, etc.
Taking that 70% solution and adding these things is harder than if a human got you 70% there, because the mistakes LLMs make are designed to look right, while being wrong in ways a sane human would never be. This makes their mistakes easy to overlook, requiring more careful line-by-line review in any domain where people are paying you. They also duplicate code and are super verbose, so they produce a ton of tech debt -> more tokens for future agents to clog their contexts with.
I like using them, they have real value when used correctly, but I'm skeptical that this value is going to translate to massive real business value in the next few years, especially when you weigh that with the risk and tech debt that comes along with it.
Since I don't code for money any more, my main daily LLM use is for some web searches, especially those where multiple semantic meanings would be difficult to specify with a traditional search or even compound logical operators. It's good for this, but the answers tend to be verbose in ways no reasonably competent human would be. There's a weird mismatch between the raw capability and the need to explicitly prompt "in one sentence" when it would be contextually obvious to a human.
Yep - no doubt that LLMs are useful. I use them every day, for lots of stuff. It's a lot better than Google search was in its prime. Will it translate to massively increased output for the typical engineer (esp. senior/staff+)? I don't think it will without a radical change to the architecture. But that is an opinion.
I completely agree. I find it very funny that I have been transitioning from an "LLM sceptic" to an "LLM advocate" without changing my viewpoint. I have long said that LLMs won't be replacing swathes of the workforce any time soon, and that LLMs are of course useful for specific tasks, especially prototyping and drafting.
I have gone from being challenged on the first point, to the second. The hype is not what it has been.
"I used AI to write a GPU-only MoE forward and backward pass to supplement the manual implementation in PyTorch that only supported a few specific GPUs" -> https://github.com/lostmsu/grouped_mm_bf16 100% vibe coded.
One of my favorite stories from the dotcom bust is when people, after the bust, said something along the lines of: "Take Pets.com. Who the hell would buy 40lb dogfood bags over the internet? And what business would offer that?? It doesn't make sense at all economically! No wonder they went out of business."
Yet here we are, 20 years later, routinely ordering FURNITURE on the internet, often delivered "free".
My point being, sure, there is a lot of hype around AI but that doesn't mean that there aren't nuggets of very useful projects happening.
I would encourage people to test this out for themselves, I think you will find a different result. People today are starved for in-person connection, but are afraid to initiate the conversation.
This doesn't come naturally to me, but after working on it over a few years, 95% of the time strangers are excited to chat and say hi and make a friend.
You mentioned working on it — do you have a particular strategy, venue, or opening line/guiding ethos that you find works well?
I love making friends with strangers, but usually rely on the "handshake protocol" of a casual observation or small talk that is then accepted (with a similar slight-deepening or extension of the thought) or rejected (casual assent or no response at all), until the bandwidth opens and I can foster a more meaningful moment of connection with a pivot like "Oh awesome that you do $THING for work. Do you enjoy what you do?" or "Oh I don't know much about $LOCATION_YOURE_FROM. Good spot for a vacation, or good spot to drive straight through?"
As somewhere between "thinks like an engineer" and "on the spectrum," I really enjoy hearing others' strategies or optimizations (optimizing for quality, connection, warmth) for social situations.
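Half tongue-in-cheek, the "handshake protocol" above could be sketched as a tiny state machine. The states and transition labels are entirely my own invention:

```python
# Playful model of the small-talk handshake: an opener is either extended
# (accepted) or met with bare assent/silence (rejected); once accepted,
# a deeper pivot question either lands or doesn't.

def handshake(responses):
    """Walk the conversation state machine over a stranger's responses."""
    state = "OPENER"                 # casual observation / small talk
    for r in responses:
        if state == "OPENER":
            # slight deepening of the thought -> accepted; else rejected
            state = "ACCEPTED" if r == "extends" else "CLOSED"
        elif state == "ACCEPTED":
            # bandwidth is open: pivot to a more meaningful question
            state = "CONNECTED" if r == "engages" else "CLOSED"
        if state == "CLOSED":
            break
    return state

print(handshake(["extends", "engages"]))  # CONNECTED
print(handshake(["mm-hm"]))               # CLOSED
```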
I found out that everybody has at least one subject that they are super passionate and knowledgeable about, and that I can learn at least this one thing from any human being. So instead of pushing the conversation into my areas of expertise, I find it more fun for everybody to let people steer it toward what they really care about. This way we both get a sense of connection, it takes the weight off my shoulders of having to perform or amuse people, I get to learn random interesting things, and on top of that people think I am an amazing conversational partner, even though it's them who do most of the talking (lol). Sometimes people go full autistic on you and give you a massive ear beating, but then you always have the option of saying "hey, it's been great talking to you, but I gotta run for a $thing. see you around!"
FWIW I think you're already doing the thing. That's it. But I'd suggest trying not to care too much about optimisation. It's unnecessary in my view because it implicitly puts goals & outcomes as the end, when it's more about meandering and seeing where things go, endless possibilities.
> "Oh I don't know much about $LOCATION_YOURE_FROM."
I always love the most to chat with strangers in line or wherever when I'm in a foreign country, as there's so much good dirt for digging with someone from a far away place. It's funny, though, the number of times I strike up a conversation with someone halfway around the world only to find out they live within a few miles of me. Last time I was in London, for example, the lady in line in front of me had an Australian accent, and I always enjoy talking to Aussies. Yep, she was an Aussie... Who lives a few towns over from me in the US, in the same apartment complex my wife lived in when I met her.
There does seem to be a wide resignation (more so with younger people <35, if I can generalise a bit) that we're too far gone, everyone being closed off. But I've generally found there is no real resolve behind that resignation. Many just do not want to make the start, or don't feel comfortable doing so. Once the start is made, though, the pleasantness of the experience is generally visible.
Speaking as someone who worked for the SF bay area's largest homeless shelter nonprofit:
People who end up homeless long-term usually have negative social behaviors that push others away. When you help them, they don't tell an interesting story; they act angry or yell at you. When you give them money, they don't make you happy; they make you feel afraid or annoyed.
This is unfortunately often due to mental health issues or drug problems. It's very sad, and ends up completely isolating them from all friends, family, and strangers who could help them.
Edit: This article actually puts this into clear terms: long-term homeless people are poor "kindees".