> I'm not sure how 21% lower is considered "not statistically significant", in trying to suppress the spread, ANYTHING > 0% is helpful. Full stop.
Statistical significance has a specific meaning in the context of hypothesis testing. It is a measure of likelihood that the observed result occurred due to a real difference between groups (rather than random chance).
It seems that they are adding up the margins of error for 82/1461 and 87/1461 (schools that responded divided by schools surveyed), giving a total margin of error of ~20% for these optional vs. mandatory masked-student statistics. This is a problem with using surveys with a low response rate.
In their own words in that section, by the incidence rate ratio it is statistically significant, even after having been adjusted for county-level 7-day incidence.
You can try to figure it out on page 4 of the CDC report; it does not appear to be a null hypothesis test.
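The back-of-the-envelope calculation described above can be sketched in Python. This is an illustration only, not the CDC's actual method: it applies the conservative (p = 0.5) normal-approximation margin of error to each response count quoted in the comment, then sums the two.

```python
import math

def survey_moe(n_respondents, z=1.96):
    """Conservative 95% margin of error for a survey proportion,
    assuming the worst case p = 0.5 (normal approximation)."""
    return z * math.sqrt(0.25 / n_respondents)

# Response counts quoted above: 82 and 87 schools out of 1461 surveyed.
moe_optional = survey_moe(82)    # roughly 0.108
moe_mandatory = survey_moe(87)   # roughly 0.105
combined = moe_optional + moe_mandatory
print(f"combined margin of error: {combined:.1%}")  # roughly 21%
```

With ~80 respondents per group, each margin of error is already over 10 percentage points, so summing them lands near the ~20% figure the comment mentions.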
All of this is also true for our actual government[0]. I have zero confidence that if our government were in charge of running Facebook/Twitter/any other social media app the results would be better.
I agree. I don't think our government is qualified to make these sorts of decisions. I don't know who is. I am not advocating for nationalization of these services.
I am only saying that these people never signed up to have to make decisions about such impactful matters. They are not political philosophers, and they are stuck making decisions of that gravity. I can be mad when they do it wrong, but I can also recognize how tragically outmatched they are. At least governments have judiciaries and cabinets and checks and balances and constitutions and stuff. These guys were just trying to make money on the internet, and suddenly human rights in China became their problem. Nobody seriously expects Twitter to have a full blown judiciary and legislature for processing bans. Nobody expects them to write a constitution which becomes a treasure of a historical document, on how to properly govern the flow of the world's conversation. But at this point, those things would actually be appropriate. It's not surprising they're struggling trying to solve the problem with algorithms. I don't think anyone could succeed at that, and they don't even realize they have the responsibility and opportunity -- they're just trying to do their best to be socially responsible and then get back to making money on the internet.
The best suggestion I have for them is to hold out their hands to humanity and say -- "Look, we have a tremendous opportunity here, and it's bigger than just us. How should we use it?"
Facebook has tried this. They have made an independent governing board that theoretically can tell Facebook what to do w.r.t. content censorship decisions.
No surprise, it appears to be staffed by people who were selected for their middle-of-the-road "you must censor a bit but not too much" type of views. It gave Facebook a limp-wristed rap on the knuckles when they banned Trump, saying the ban was arbitrary and didn't follow the same rules enforced on everyone else, but that they still agreed with doing it.
I think what we're seeing here is what happens when you lack some sort of free speech libertarian fundamentalism. Facebook doesn't have to engage in "statecraft", whatever that is, any more than the designers of SMTP did. They could choose not to. They could say "we will shut down accounts when under court order to do so, end of story". Then governments who think a citizen is breaking a law about speech would have to go to court, win, and then the judge would say, here is an order requiring Facebook to shut down the account of this law breaker (which could automatically hide all content they created). All the evolved mechanisms, the checks and balances of the actual state, would be in effect.
But Facebook is based in Silicon Valley and like most firms there, has systematically made deals with far-left devils in order to hire them and put them to work, often without really understanding if it's worth the cost. Does Google actually need 144,000 employees for example? It hardly seems more productive than when I started there and it had 10,000. Their "hire first ask questions later" approach inevitably leads to hiring lots of extremists and wingnuts, people who are there primarily to get closer to a nexus of power they can wield for their own political agendas. The constant dramas we see emanating from Mountain View, Palo Alto and San Francisco are the inevitable consequence.
Tech firms could fix this problem very quickly if they wanted to: just announce a renewed commitment to freedom of speech, platform principles and passive moderation. Any employee who doesn't like it can leave. Many would, but those companies are so over-staffed they'd barely notice, and the environment for those who remain would be drastically more pleasant.
The problem with that is if Facebook committed to free speech then users would post a lot of offensive content which drives away mainstream advertisers. We've already seen that happen. Facebook tightened their censorship several years ago specifically because large advertisers were leaving the platform over concerns about their ads appearing next to user generated content that negatively impacted their brands. Obviously Facebook isn't going to do anything that puts advertising revenue at risk.
That's a rather fundamental flaw in their whole business model, isn't it? Advertisers who don't want their ads appearing next to user generated content, on a social network, have missed something rather important.
It's obviously not a flaw. Facebook is highly profitable. The vast majority of user generated content is inoffensive. We're just discussing a small minority of edge cases.
The government, in theory, is bound by the first amendment. FB being run by a government bound by traditional first amendment restrictions would be worlds better than what we have now.
If the majority of people here are software engineers, would it surprise people that someone has automated the job of crafting believable bullshit? Not to mention disseminating it faster and better than we have ever been able to?
I see it as a problem that we can iterate “content” faster, identify “audience groups”, run marketing analytics, A/b test “narratives”, all to craft believable, plausible “content” and then mass broadcast it.
We’ve built systems that create content faster and better than a normal human BS filter can block.
How does this have anything to do with the First Amendment? How do free speech rules bring the balance of power back to individual human levels of filtering?
Mind you the First Amendment is an American construct. It does nothing for things like genocides in Myanmar, or journalist suppression or hate crimes and the like.
> We’ve built systems that create content faster and better than a normal human BS filter can block.
Maybe so, but I just don't trust that the "BS filter" big tech has constructed will block only "BS" and not true things inconvenient to a certain strident brand of west coast morality. The NY Post story from last year certainly wasn't "BS".
I don't think algorithmic BS is anywhere near as big a risk as you think it is, and I think the risk of outright censorship is far larger than you imagine. Big tech should be a common carrier and viewpoint discrimination in moderation should be illegal.
And my answer to people who believe this is always the same - please volunteer your time to an active subreddit of your choice, preferably one with an active political aspect.
Look, I am not unsympathetic. I have personally gone through the whole cycle - I started from "the antidote to bad speech is more speech, not censorship", to advocating for better tools to handle misinformation.
I would honestly LOVE for the world to work how I thought it did. I worry about the tools we create to clean our "gardens". Yet, without those tools I know most large communities would fail to be governed.
I am largely tired of these debates on HN where old arguments are rehashed, untempered by empirics. Heck, people should be upset that the data that could answer these questions is under NDAs.
> please volunteer your time to an active subreddit of your choice, preferably one with an active political aspect.
Why reddit? That site is a soup of immaturity, did you expect otherwise? On reddit you have people with 5 accounts creating conversations to control narratives and make it look like you have a larger group than you do supporting your case. This is more difficult to do on FB because they do a bit of account validation and other measures.
And your example of a political forum on reddit is a great example of why we don't want censorship. On reddit, admins ban people simply for not speaking to the narrative, or for saying something that is not misinformation but that the admins don't like. This is literally the same problem we have with FB now. The moral of the story is: stop censoring. It's ok if someone doesn't say what you think is the truth; I promise the world will keep spinning.
How does that work? reddit speech is not worthy of being heard? It's too immature to even be worth experimenting with moderation?
If people are creating 5 accounts to create a narrative, well then that's the job, isn't it? Stopping those 4 fake accounts.
But that would be censorship. So what is your solution for the problem you yourself have described?
Plus, if your first option is to choose a forum where you self-select out of dealing with messy problems, then your experience is invalid as guidance for dealing with messy problems - no?
I can’t seem to see anything but a contradiction of your own purposes here.
Perhaps you see how these things are not contradictory?
> How does that work? reddit speech is not worthy of being heard? It's too immature to even be worth experimenting with moderation?
I didn't say it wasn't worthy of being heard, I said it's immature and there's zero cost to account creation and therefore you have literal children going to that site and spitting nonsense. Moderating that site is like herding cats.
> If people are creating 5 accounts to create a narrative, well then that's the job, isn't it? Stopping those 4 fake accounts.
But that would be censorship.
Yup, banning them would be. Did I say to ban them? I brought it up as a reference to immaturity, because immature / crazy people create accounts to fake a cohort that does not exist.
> So what is your solution for the problem you yourself have described?
In short, change the way these people are raised. Change what they're being taught in schools that enables this behavior. It's unacceptable to throw online temper tantrums when you don't get your way. It's also unacceptable to attempt to force others into your beliefs. So start by teaching children that, instead of raising the entitled society that we have today.
For those, seemingly like yourself, who think the world will end if a group of people starts believing the world is flat, I'd say: learn to ignore people.
> Plus, if your first option is to choose a forum where you self-select out of dealing with messy problems, then your experience is invalid as guidance for dealing with messy problems - no?
> I can’t seem to see anything but a contradiction of your own purposes here.
> Perhaps you see how these things are not contradictory?
How about you let me speak for myself and quit carrying the conversation forward based on your own assumptions?
Without access to specialized knowledge, like being an expert in the subject, users will check the authority/standing of the speaker, the emotional appeal of an argument, or the underlying logic of it.
In my simple way of putting it - creating a website, or creating a post, and then having it disseminated is dirt cheap today. You can get an article on a news website, have it referenced by a youtube channel, and have that sent to a twitter feed.
That alone is sufficient to account for an increase in the volume of content being created - however, that volume also gets disseminated as fast as it is created, which is what accounts for the speed.
This is also without looking into the fact that people tend to use superficial traits to assess whether information is credible online.
"Yet, research shows that people rarely engage in effortful information evaluation tasks, opting instead to base decisions on factors like web site design and navigability. Fogg et al. (2003), ... They argue that because web users do not often spend a long time at any given site, they likely develop quick strategies for assessing credibility."
From: Credibility and trust of information in online environments: The use of cognitive heuristics (Miriam J. Metzger and Andrew J. Flanagin)
So a good looking website, with content that purports to be endorsed by known authorities, and hits the right cultural blind spots for its audience will get past their filters.
If you’re gathering information faster than your BS filter can process it, then you’ve also exceeded your capacity to process the information you’re receiving. Assuming "BS filter" means some extension of comprehension.
Yes, I think so too. However, people will still consume content as long as it gives those dopamine hits. The brain makes you think you are doing something, even if it’s not really comprehending what it’s consuming.
Which is getting closer to the problem as I see it - the tech infrastructure has outscaled the default biological tools we are born with.
If you’re consuming information but not comprehending it, then that’s hearing vs. listening. This is my original problem with this “faster than the BS filter” framing. If you claim that information is coming too fast for them to comprehend, then they are not comprehending it and therefore not getting the information - which would make this all false.
They aren't saying that the feature-gating code itself adds to app size, but rather that there's a lot of code behind feature gates (for tests/staged rollouts/locale-specific features/etc.) that most users won't see, which still adds to app size.
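A minimal sketch of the pattern being described, with hypothetical gate and function names (none of this is Facebook's actual gating system): the gated code path ships in the binary for every user, and so contributes to app size, even though only a small rollout cohort ever executes it.

```python
# Hypothetical staged-rollout gate: names and percentages are made up.
FEATURE_GATES = {"new_checkout_flow": {"rollout_pct": 5}}

def gate_enabled(name: str, user_id: int) -> bool:
    """Deterministically bucket users; only rollout_pct% see the feature."""
    gate = FEATURE_GATES.get(name)
    return gate is not None and (user_id % 100) < gate["rollout_pct"]

def new_checkout() -> str:      # shipped to all users in the binary...
    return "new"

def legacy_checkout() -> str:   # ...but ~95% of users only ever run this path
    return "legacy"

def render_checkout(user_id: int) -> str:
    if gate_enabled("new_checkout_flow", user_id):
        return new_checkout()
    return legacy_checkout()
```

Multiply this by many concurrent experiments, locales, and staged rollouts, and the dead-for-most-users code can become a meaningful fraction of the shipped binary.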
Fail fast and KISS are pretty celebrated virtues of sustainable project development. I understand that on the scale of Facebook you have issues with project management but if there is a significant amount of code that is locked to specific cohorts of users aren't you opening up the door to unprofitable levels of complexity and long running poor investments?
I'm sure a lot of developers on here try and minimize their use of integration branches in the day-to-day (they are necessary for some things but keep them short and sweet) and try and get in-progress features into master ASAP - that's largely due to the fact that maintaining multiple copies of the same basic logic can quickly become extremely difficult to manage.
Localization is a really big exception to this but that's why, whenever possible, you'll see game companies limit localization to strings only - including logical statements in the realm of information to be localized can make security issues extremely fun to track down along with causing frequent usability breaks in less used localizations.
I don't know - whatever the reasons for it and no matter the resources FB has - this stuff increases in cost exponentially and if they do have a really fragmented codebase it's likely that the majority of their labour goes into process definition and QA to make sure that they don't break the Swahili language version of the landing page for China when they change their contact us link.
Fail fast works on the web, where you can redeploy the app with the next page refresh.
It works poorly on mobile, where users are not keen to reinstall an app every few days, and some do not update for months and years, because of lack of space, scarce bandwidth, old hardware, or just neglect.
It doesn't just apply to the web though - it can even apply to OS design. On the web it's usually (ab)used to leverage users as the metric of whether something is failing, but in theory fail fast is just about learning that something isn't working through any means - whether that be user reports, automated tests or proofs of concept.
Additionally, at least where Facebook is concerned and IIRC, they actually do heavily utilize out-of-app data in their mobile app. There is a good deal of code, but a lot of the UI ends up being tweaked by data that's being served to the client.
That's not a feature of advertising as a business model, it's just a feature of growth-focused businesses. The same forces apply to products that you pay for with money. For a very obvious example, just look at Candy Crush.
No, there’s no reason to want to go OTC for just 20k, and most desks won’t deal with you for that amount. That isn’t even half a BTC at current prices; every major exchange has the market depth to support that.
These studies aren't nearly as dire as the grandparent comment that claimed the United States wouldn't be part of the developed world in 20 years.
Much of that report focuses on things like traffic congestion and delayed flights from busy airports:
> According to Petroski, the delays caused by traffic congestion alone cost the economy over $120 billion per year.
Ironically, the abrupt shift to remote work and work-from-home due to COVID has significantly reduced traffic burdens in many cities.
I'm all for improving infrastructure where necessary, such as the ~10% of bridges that are structurally deficient or improving access to cheap broadband. I'm not enthusiastic about rallying cries to encourage more car traffic, though.
It owns neither... but sure, politics only happen in China... Amazon never did anything political ever... except for that Wikileaks thing... and the Parler thing... but the US gov definitely doesn’t own them.
The certificate used on this website says it’s owned by Cloudflare. That doesn’t help in this instance.
Especially since this means anything you submit will be decrypted by Cloudflare. It may then be transmitted in the clear to whoever runs the backend server that Cloudflare is proxying for.