79% of ALL child sex trafficking. 4 out of 5 child sex slaves exist thanks to Facebook's policies.
But sure, go on and talk about "leeway" and "limited capabilities" for a company worth nearly a trillion dollars. Do you honestly believe this is acceptable? What are your vested interests here?
Since you're emphasizing the ALL, I am obligated to nitpick that it is not all. The source article says that, but it's wrong; the underlying link clarifies that it's 79% of sex trafficking which occurs on social media. As has been discussed downthread, a social media platform with large market share is always going to account for a large percentage of every bad thing that can happen on social media.
Do you have a citation for that? You may be right for all I know. I don't know much about it. But that seems unlikely to me, and if it's true, I'd like a reference I can show others when I'm trying to get them to finally close their account.
> [the report] found that 65% of child sex trafficking victims recruited on social media were recruited from Facebook
Even in 2020, I'm very skeptical that so many children were on Facebook that it could account for 2/3 of recruitment. My own kids say that they and their friends are all but allergic to Facebook. It's the uncool hangout for old people, not where teens want to be.
I may be wrong, and I'm certainly not going to tell someone that they're wrong for citing a government study. Still, I doubt it.
The number is wrong, or at least the citation is misleading. It’s closer to 20-30% according to that study; the 79% refers specifically to cases involving social media, of which Meta platforms are obviously going to make up a large percentage.
There’s also a reporting bias here I’m sure - if Meta is better at reporting these cases then they will become a larger percentage, etc.
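The arithmetic behind this distinction is easy to check. A minimal sketch, where the social-media share of total cases is a made-up placeholder (the study's actual base rate isn't quoted in this thread):

```python
# Sanity check: "79% of social-media cases" is not "79% of all cases".
# ASSUMPTION: suppose ~30% of all trafficking cases involve social media
# at all (a placeholder figure, not taken from the study).
share_involving_social_media = 0.30
facebook_share_of_social_cases = 0.79

# Facebook's share of ALL cases is the product of the two rates.
facebook_share_of_all_cases = (
    facebook_share_of_social_cases * share_involving_social_media
)
print(f"{facebook_share_of_all_cases:.0%}")  # prints "24%", not 79%
```

Under that assumed base rate the headline number shrinks to roughly a quarter, which is why the conditional matters so much.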
You don't really need a majority of potential victims to go to location X for victims from location X to make up a majority of victims; that just means that location X is a low-risk, high-reward place for criminals to lurk looking for victims.
Thanks for looking into it and pulling out that quote. I notice there are some moving goalposts — the parent article claims 79% of _all_ minor sexual trafficking (emphasis mine), but the govt report found
> 65% of child sex trafficking victims recruited _on social media_ were recruited from Facebook, with 14% being recruited on Instagram
(Emphasis mine). I think the parent article is repeatedly lying about the facts, which is super annoying. I’m not at all surprised that Facebook and Instagram have the lion’s share of social-media victims, because they also have the lion’s share of social media users.
> 4 out of 5 child sex slaves exist thanks to Facebook's policies.
Even if your 79% number is correct, this does not follow. It’s as if someone, 30 years ago, had observed that 95% of advertisements were in the classified section and concluded that 9 out of 10 retail sales happened thanks to the classifieds.
(I’m not trying to excuse Facebook’s behavior. But maybe criticisms of Facebook would be more effective if they stayed on track.)
I’m not nitpicking a weird edge case. I’m nitpicking a completely unsound inference. Even if Facebook indeed accounts for 79% of total instances of children being trafficked, it does not follow at all that removing Facebook from the picture would have reduced the number by anywhere near 79%.
Nobody in Salem wanted to be seen to stand up for witches.
I have never had a Facebook account because I never liked what they do, but this 'evidence' against them seems to rely more on the seriousness of the allegations than on their accuracy.
You are saying that from our perspective. I don't think the argument that witches are not real would have gained you much ground back then.
We don't have the years of analysis of what actually happened for things happening right now.
While a lot of people feel a lot of certainty about all manner of social media harms, the scientific consensus is much less clear. Sure, you can pull up studies showing something that looks pretty bad, but you can also find ones claiming that climate change is not occurring. The best we have to go on is scientific consensus, and the consensus is not there yet. How do you tell whether Jonathan Haidt is another Andrew Wakefield?
I'm not making any claims of certainty. I have not published any books making claims of harm. I have not gone on a tour of interviews the world over trying to build public opinion instead of building consensus that the information is true.
That's how I know.
I also don't go around talking about race based differences in IQ, but that's just Haidt.
I think Yegge needs to keep up with the tech a bit more. Cursor has gotten quite powerful - its plan mode now seems about on par with Claude Code, producing Mermaid charts and detailed multi-phase plans that pretty much just work. I also noticed their debug mode will now come up with several hypotheses, create some sort of debugging harness and logging system, test each hypothesis, tear down the debugging logic, and present a solution. I have no idea when that happened, but it helped solve a tricky frontend race condition for me a day or two ago.
I still like Claude, but man does it suck down tokens.
As a father of two small children during COVID, I can't begin to thank fnnch enough for his Honey Bear Hunt project: https://upmag.com/honey-bear-fnnch/
Hundreds (if not thousands) of honey bears were posted in windows around SF. It was one of those things that happens in SF every now and then, a mix of whimsy and hustle and unexpected joy. We couldn't take our kids to school, we couldn't take them to the park. Instead, we would drive them around town and have them point out all the honey bears they saw. "Honey bear! Another one!"
Variants of this were in NL as well, but it was just stuffed animals (I believe in support of health care workers); people went out for walks to go and spot them.
I wish stuff like that would happen again, it was an interesting time where people actually stayed home and explored their environments, their home and themselves a lot. Before that (or at the same time?) it was AR games like Pokemon Go. I'm out of touch with what's happening now, it just feels like people have reverted or gone into a new normal. Or maybe that's just me.
Wow, read through the comments and you weren't joking. I attribute this to the crossroads of "this release is v0.1 of what we are building" and the HN crowd, who have been scrolling past 120 AI frameworks and hot takes daily and have no patience for anything that isn't immediately 100% useful to them in the moment.
I find the framing of the problem to be very accurate, which is very encouraging. People saying "I can roll my own in a weekend" might be right, but they don't have $60M in the bank, which makes all the difference.
My take is this product is getting released right now because they need the data to build on. The raw data is the thing, then they can crunch numbers and build some analysis to produce dynamic context, possibly using shared patterns across repos.
Despite what HN thinks, $60M doesn't just fall in your lap without a clear plan. The moat is the trust people will have to upload their data, not the code that runs it. I expect to see some interesting things from this in the coming months.
I'm on the "higher level of abstraction" side, but that seems to be very much at odds with however Anthropic is defining it. Abstraction is supposed to give you better high-level clarity at the expense of low-level detail. These $20,000-burning, Gas Town-style orchestration matrices do anything but simplify high-level concerns. In fact, they seem committed to building extremely complex, low-level harnesses of testing and validation and looping cycles around agents upon agents to avoid actually dealing with whatever specific problem they are trying to solve.
How do you solve a problem you refuse to define explicitly? We end up with these Goodhart's Law solutions: they hit all of the required goals and declare victory, but completely fail on every reasonable metric that matters. Which I guess is an approach you take when you are selling agents by the token, but I don't see why anyone else is enamored with it.
"You don't know what MITM attacks are? Well learn quick."
I miss the days of having confidence in people to fill the gaps to do their job. Now we demand that junior engineers system-design Twitter and memorize algorithm tricks for leetcode tests. These were useless measures before; hopefully LLMs finally kill them off for good.
Yes, exactly this. My biggest issue is how uncurious the approach seems. Setting a "no-look" policy seems cutting edge for two seconds, but prevents any actual learning about how and why things fail when you have all the details. They are just hamstringing their learning.
We still need to specify precisely what we want to have built. All we know from this post is what they aren't doing and that they are pissing money on LLMs. I want to know how they maintain control and specificity, share control and state between employees, handle conflicts and errors, manage design and architectural choices, etc.
All of this seems fun when hacking out a demo but how in the world does this make sense when there are any outside influences or requirements or context that needs to be considered or workflows that need to be integrated or scaling that needs to occur in a certain way or any of the number of actual concerns that software has when it isn't built in a bubble?
Isn’t that the whole point of this approach? Everything is specified just in terms of how the end user will actually use the software, at a high level. Then the LLMs basically iterate relentlessly until the software matches what the end user wants to do.
I explored the different mental frameworks for how we use LLMs here: https://yagmin.com/blog/llms-arent-tools/ I think the "software factory" is currently the end state of using LLMs in most people's minds, but I think there is (at least) one more level: LLMs as applications.
Which is more or less creating a customized harness. There is a lot more that becomes possible once we move past the idea that harnesses are just workflow variations for engineers.
Bit by bit, we need to figure out how to rebuild human contextual understanding in a way that LLMs can understand. One thing that gets overlooked is the problem of incorrect data. You can provide all of the context in the world, but LLMs tend to choke on contradictions or, at a minimum, work a whole lot harder to determine how to ignore or work around incorrect facts.
"Forgetting" and "ignoring" are hugely valuable skills when building context.
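One way to picture "forgetting" as a deliberate step: a hypothetical pre-filter that drops contradictory facts before they ever reach the context window, instead of asking the model to reconcile them. All names here are made up for illustration:

```python
# Hypothetical sketch: scrub contradictions out of a fact list while
# building LLM context. Facts are (key, value) pairs; any key asserted
# with conflicting values is dropped entirely -- "forgetting" beats
# handing the model a contradiction it must work around.
def build_context(facts: list[tuple[str, str]]) -> list[str]:
    seen: dict[str, str] = {}
    conflicted: set[str] = set()
    for key, value in facts:
        if key in seen and seen[key] != value:
            conflicted.add(key)          # contradiction: remember to forget
        seen.setdefault(key, value)
    return [f"{k}: {v}" for k, v in seen.items() if k not in conflicted]

facts = [
    ("db_engine", "postgres"),
    ("db_engine", "mysql"),   # contradicts the line above -> both dropped
    ("api_style", "rest"),
]
print(build_context(facts))   # -> ['api_style: rest']
```

A real system would need fuzzier notions of "same key" and "conflicting value", of course; the point is only that discarding is a first-class operation, not a failure mode.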
I can’t help but feel the logical conclusion to such context conundrums is: "what if we spoke Haskell to the LLM, and the LLM could also compile Haskell?"
And, yeah. Imagine if our concept-words were comprehensible, transmittable, exhaustively checked, and fully defined. Imagine if that type inference extended to computational execution and contradictions had to be formally expunged. Imagine if research showed it was a more efficient way to have a dialog with the LLM (it does, btw; so just as JRPG adherents learn Japanese, one should learn Haskell to talk to LLMs optimally). Imagine if multiple potential outcomes from operations (test fails, test succeeds) could be combined for proper handling in some kind of… I dunno, monad?
Imagine if we had magic wiki-copy chat-bots that could teach us better ways of formalizing and transmitting our taxonomies and ontologies… I bet, if everything worked out, we’d be able to write software one time, one place, that could be executed over and over forever without a subscription. Maybe.