
The author seems to be filled with a sense of “knight-in-shining-armor-ism.” If Facebook’s actions really are a problem to most people, then most people would stop using it.

He calls Zuckerberg a "pope-emperor" of Facebook's users, but that's because the users choose to give him power. If the users stop choosing to give him power, he loses it.


>If Facebook’s actions really are a problem to most people, then most people would stop using it

What do you call this argument, best-of-all-worlds-ism? So, a priori, any institution that persists is beneficial to its constituents merely because it continues to exist? Do concepts like power and dependence exist in the kind of worldview that produces these arguments?


There's an actual name for it -- Panglossianism -- named for Dr. Pangloss, the tutor in Voltaire's Candide who keeps insisting, despite catastrophically mounting evidence, that "all is for the best in this best of all possible worlds." (This was a deliberate parody of moral philosophy written by some of Voltaire's 18th-century contemporaries, most notably Leibniz.)



My argument is based on free-market economics. The users/customers seem to not care enough to put their time and money elsewhere.


No it’s not, and a cursory glance at studies of human behaviour (what economics is really about, in a way) would dispel this notion.


Could you please elaborate on what you mean? Facebook is a company and therefore its success and failure is dependent on its customers/users, right?


It seems you don't understand the economics of this situation.

Facebook makes money from ad partners not users. Users are not paying Facebook for the software. People who want to run ads are. There is no market for newsfeed software here. Only a market for user attention.

Because FB made a creepy tool 15 years ago, they won some users and have an established base they can market to. People using it these days do so because of the network effect, not because it's the best tool to keep in touch with people. Facebook has to maximize the time users' eyeballs spend glued to the newsfeed, because that's where ads appear. The friend-network portions of the software exist only vestigially at this point, to keep people locked in to a newsfeed platform.

Facebook is dependent on their customers, but their customers aren't the people glued to the newsfeed; those people are literally the product, sold to customers paying for an advertising platform. If it were legal, Facebook would literally chain you to your computer and force you to interact with the world through their platform, because that maximizes their profits.


The gp does say users-slash-customers, usually meaning "or".

There were social media platforms before them that lost despite the network effects, and there are platforms that appeared after them that gained a ton of users despite the lock-in (some of which they have bought, which is in my view the only thing about them that should be amenable to regulation). EDIT: based on TFA and only tangentially relevant to this thread/the moral crusade, making sure online tools can be modified/scraped/etc. in any normal way is something else I would regulate. EDIT2: based on the rabbit hole of anti-FB articles, what is needed is more of a DE-regulation, removing or significantly narrowing the scope of the laws FB uses to threaten developers. It's not that modifying a webpage is not protected, it's that it can explicitly be construed as illegal, in part because of the previous moral crusades.

The users derive some value from Facebook. I'd rather we didn't have holier-than-thou people regulate every pastime they don't like... reminds me of moral panics over everything from video games to weed.


It is illegal in most of the rest of the world to advertise/market prescription drugs, specifically because it causes undesirable outcomes where patients tell their doctors what to prescribe, which is backwards. And drugs still exist in those countries. I don’t think anybody is saying “make Facebook illegal”. The ask is to deeply explore our understanding of ad-based social media and consider whether such machines have any place in a healthy society. Facebook or some equivalent would still exist if you curtailed ad profits; I don’t really understand the scare/worry that all our good internet things would just vanish if we clamped down on the harmful business model.

Every law is a moral value judgement on the type of society we want to participate in (unless you’re an absolutist). I think your moral-panic examples are generally unfounded concerns, and I would have agreed with you 10 years ago that Facebook panic is also FUD. However, I think we now have a clear history of examples of harm to go on with FB. We clamped down on the Tobacco industry and it wasn't just holier than thou zealots having a field day as fun ruiners. Real harm led to real legislation. We just need to see things as they are and stop pretending that FB newsfeed-style social machines are benign, especially in the face of hard evidence to the contrary.

I totally agree with your “I would rather” in the general sense. I simply think there’s a specific case here that warrants scrutiny.


I think I understand what you are trying to say, however...

> I don’t think anybody is saying “make Facebook illegal” ... consider whether such machines have any place in a healthy society.

That sounds exactly like a moral crusade. You don't want to make it illegal, but would like to consider if it has "any place", and also someone will have to define "healthy society". If you asked people in many places and times (if not most), they'd tell you homosexuality is an abomination that has no place in a "healthy society". Even at present, some people literally treat it as a disease in need of a cure.

And yes, I think prescription drugs should not be a thing (well, prescriptions are fine, but if I want to buy metformin because I think I'm smarter than everyone else, I should be able to do it and prove myself right (or wrong)).

Tobacco is a perfect example for me personally, because I dislike it, so I feel good about forcing everyone to not smoke. On the other hand, it hurts almost entirely the user, so the only justifiable restrictions would be the ones where the user is imposing costs on others, e.g. charging them more for healthcare... I like the advertising bans, but I would have also liked e.g. never having children around, via e.g. banning them from all restaurants and gyms. Doesn't make it justified.


> We clamped down on the Tobacco industry and it wasn't just holier than thou zealots having a field day as fun ruiners.

Well-said! There's a big difference between enacting laws that protect vulnerable groups (eg.: smokers / the public) from powerful influences (eg.: Tobacco companies / propaganda), and enacting laws meant to impose standards of "moral purity". A very big difference.


Where do you draw a boundary? What you're saying is "you are doing a thing that is bad for you, /I/ know better what /you/ should be doing". You can talk about externalities, but unless it's something that is very narrow and direct, e.g. "you ruin your health via smoking so you get charged extra for certain healthcare services", it becomes a slippery slope. "Healthy society" type stuff is especially suspect, stuff like "undermining society/government/cohesion" has been used by authoritarians for the vaguest of reasons.


The users aren't the customers, they are the product.


Like cigarettes? I have no sense of whether FB is physically addictive like smoking.

But, if a company directly contributes to decline in civil discourse and harms democracy, and thereby harms the users, then isn't it a government role to step in? And isn't this the government's role even if users can't see the harm at the micro level of their daily interactions with the company? I guess that is the argument, at least. Really it's a balance between values: individual freedom and the collective good.


> if a company directly contributes to decline in civil discourse and harms democracy, and thereby harms the users, then isn't it a government role to step in?

The chief problem here is “who decides?” If a government in power is being undermined, they have incredibly strong incentives to determine that those undermining actions are “harmful to democracy” (rather than merely harmful to their party). (I think we could point to many examples in US politics in the last 5 years where “this is bad for my party” is cast as “this is bad for democracy!”)

Which is why I think that, of all the speech that must be protected, political speech is of the highest criticality to protect. (And that claims by the government or strongly politically aligned citizens that ‘X is bad for democracy’ should also be viewed with a healthy amount of skepticism.)


Facebook is addictive tbh. I heard they hired some prominent psychologist to make it more addictive.

My mother is surely addicted to Facebook. She knows all the data implications, but she enjoys getting likes, shares, etc. And fake news, which is intentionally specious, makes the platform even more enjoyable.

I can't tell her to stop using Facebook, and I am sure it is harming her, right?

Yes, this is a government role, but as with every government in a democracy, most are invested in short-term gains, and a company like Facebook lobbies a lot. We know Facebook pushes millions of dollars into lobbying, so why would the government solve it? Facebook also provides sweet tax revenue to the government.


>I heard they hired some prominent psychologist to make it more addictive.

This is basically what the argument to ban it amounts to whenever this discussion comes up.

Facebook is a simple site, there's nothing to it really. It's just an endless mediocre content website that happens to have your friends on it.

Good user experience = "designed to be addictive by army of ill-wishing psychologists."

The only reason I'd like to see it banned is so something better can finally take its place.


Humans enjoy being liked. What specific harm does your mother experience from using Facebook?

Better to be addicted to typing words than drugs, it's free and you don't die.


Since when is “most people don’t speak up so it’s obviously okay” a healthy way to determine whether anything should be legal or not?

Where is the knight-in-shining-armor-ism? What even is that? Doctorow is asking us to build a society where we don't take a company at its word that it will do right by users, because he cites multiple examples where a company leader, charged as the steward of software integral to people's lives, has said they will and then about-faced to chase profits. There's clearly something worth discussing here. Remember how nice and interoperable chat was before Google killed XMPP?


> If Facebook’s actions really are a problem to most people, then most people would stop using it.

Just like sugary soda, loot crates, cigarettes, and opium then? Companies design products that are bad for people all the time.

> Zuck: they “trust me”

> Zuck: dumb fucks

This was said after his AOL hacking days [1], so he already had a certain predisposition towards "ordinary people".

Given that his history and his present behavior aren't much different, it seems he hasn't changed much. He designs products to extract from people.

[1] https://qf0.github.io/blog/2020/01/28/Mark-Zuckerberg-was-a-...


Have you thought how old he was when he said this? I am sure you never did anything stupid as a teenager.


I don’t see your counter argument. People eat sugar because they want to. If they didn’t like it, they wouldn’t eat sugar and no one is forcing them to.


By that measure, most of those who live under oppressive dictators think their leaders are okay. If not, "they would just rise up and replace them".

Doctorow's piece here reads to me as a call for viable alternatives. The prose attempts to advance a number of his favored alternatives, which makes it a little confusing.

To me, I believe we are headed toward a world where social media operates via an open API, like email. The road between here and there will be rocky.


"If Facebook’s actions really are a problem to most people, then most people would stop using it."

No. Because all your friends are on Facebook. And by buying out all the serious competition (WhatsApp, Instagram), they have guaranteed a monopolistic position. Even if you move to another platform, if it grows too big, odds are it will end up being acquired by FB.


Arguably, Facebook provides useful, valuable services and also causes some harm. Let's suppose the good it does outweighs the harm. That's still not a good argument to simply accept the harm it does. The benefits and harm are not an inextricably linked package; it should be possible to mitigate the harm without eliminating the benefits Facebook provides to people.


The nature of addiction (etymologically, “without-say-ism”) is that the problem is most felt by those who can’t stop using it.

We can turn this to a giant regulate-tech topic but the source complaint, simple enough, was a desire to have the feed consume less of life’s attention and be more worthy of what attention he was giving it.


A regulation like that has to be examined for the incentives it creates, not just the goal. If companies are forced to disclose research that involves customer or public harm, they will probably just stop doing the research, or fabricate false results. Thus, information is lost rather than disclosed. No companies want to say “our product is harmful to our customers.”


It seems like all new social networks have a vicious circle to overcome: nobody's on it, so nobody uses it, so nobody's on it.

How do social networks reach that "critical mass" of users?



One way would be to be interoperable with an existing ecosystem from the start.

Michelle Lim makes a great case for this here:

https://www.michellelim.org/writing/into-the-fediverse/


Thank you for sharing the link. I like the idea of standardization among smaller social media platforms. Basically, the success of a platform built on ActivityPub becomes more tied to the success of ALL platforms built on ActivityPub.


Reddit's founders in the early days used fake accounts to make the site seem more active.


The argument was that Facebook is neutral as a platform. Similar to the internet, it serves all kinds of content. Some of the content is good, and some is bad. That doesn't necessarily mean the platform is good or bad.


Facebook is not a neutral platform. It has a lot of moderation and algorithmic ranking of posts.


Having worked in growth before (not at Facebook), I can tell that you vastly underestimate the impact FB teams have on how/when/what/for how long/how many times/etc content is displayed to end users. This is absolutely not a neutral impact.


An alternate explanation is that the algorithm tries to promote engagement and user retention. Presumably, people susceptible to radicalization engage with the content discussed in the article. It would be unreasonable to expect Facebook to not act in its own self-interest.


> An alternate explanation is that the algorithm tries to promote engagement and user retention. Presumably, people susceptible to radicalization engage with the content discussed in the article. It would be unreasonable to expect Facebook to not act in its own self-interest.

That's the whole point. Oh they're just trying to make a buck like everyone else is exactly the problem.

They are running a paperclip maximizer that turns passive consumers of misinformation into "engaged" radicals, and the system that is Facebook has no incentive to correct this.

https://en.wikipedia.org/wiki/Instrumental_convergence


Any algorithm that can maximize engagement can be tuned to minimize radicalization and dissemination of hatred and fascism.

I'd argue that it's absolutely in Facebook's self-interest to reduce their active role in promoting fascism, racism, homophobia, etc.


Digital advertising is much more accessible for the small fish too. If a business has a Facebook presence, $10/day can get a campaign going, and scaling it up is trivial.


Facebook for example has a “tracking pixel” script that can be included in your website to get detailed info about conversions from Facebook ads.
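To sketch the mechanics (a hypothetical stand-in, not Facebook's actual loader script): the site embeds a small function that queues named events, which the vendor's real script later ships back along with browser/cookie context. The event names below mirror Facebook's documented `fbq()` calls; the `fbq` stub and the pixel ID placeholder are ours for illustration.

```javascript
// Stand-in for the real pixel loader: the genuine script replays this
// queue to the vendor's servers together with identifying context.
const queue = [];
function fbq(...args) { queue.push(args); }

fbq('init', 'PIXEL_ID_PLACEHOLDER');  // identify the advertiser account
fbq('track', 'PageView');             // fired on every page load
fbq('track', 'Purchase', { value: 9.99, currency: 'USD' }); // conversion

console.log(queue.length); // three queued events
```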


If it were really this easy people would be doing it, the world really is a complex place.


This article is just trying to sell the reader on the "Rare Breed" idea. For every successful "Rare Breed," there are nine who failed. I say let the companies decide who they want to employ.


How does YouTube separate content presented as fact from content presented as opinion? Outright false claims are one thing, but what if someone is just hesitant about vaccine safety?


While the title says "all anti-vaccine content," the actual article notes it's blocking content that says "vaccines cause chronic health effects or contains misinformation on the substances in vaccines." A person who is hesitant about vaccine safety wouldn't be blocked, then. Only if you say a vaccine causes X side effect (when that is not backed by science), or if you provide false info on what's in the vaccine, since the ingredients are facts. Someone saying there is a microchip in it would not be stating a fact.


They seem to be using human moderators and only banning people/channels with a consistent track record and a very clear violation of their admittedly arbitrary red line.


OK, that seems fine. I know YouTube's automatic DMCA policies are notoriously bad, but it sounds like this is a more accurate manual approach.


What is the advantage of Fortran over only C in numpy’s case specifically?


Fortran has the more optimized libraries (BLAS) and is also capable of optimizations that C isn't (it guarantees that pointers don't alias).


Modern C has the restrict keyword for that. There isn't any competitive advantage left for Fortran over C or C++; the only reason Fortran is still part of the modern numeric stack is that BLAS, LAPACK, QUADPACK and friends run the freaking world, and nobody is ever going to rewrite them in C without a compelling reason to do so.


Fortran's primary competitive advantage over C and C++ is that it's a conceptually simpler language that's easier for humans to use safely and effectively for numerical computing tasks.

It's not just being tied to BLAS and friends. Fortran lives on in academia, for example, because academics don't necessarily want to put a lot of effort into chasing wild pointers or grokking the standard template library.

For my part, I'm watching LFortran with interest because, when I run up against limitations on what I can do with numpy, I'd much rather turn to Fortran than C, C++, or Rust if at all possible, because Fortran would let me get the job done in less time, and the code would likely be quite a bit more readable.


I'm aware of the restrict keyword, though its usage is not typical. It enables an optimization that Fortran has by default; that was my point.

And yes, I already said that packages like blas have had massive amounts of work put into them and have lasting power.


I didn't downvote you, if that's what you were hinting at, however your phrasing was more likely to be interpreted as "C can't do that" as opposed to "C doesn't default to that".

It seems we agree, after all.


I believe we do.


Note that while BLAS and friends aren't getting rewritten in C, there is an effort underway to write replacements in Julia. The basic reason is that metaprogramming and better optimization frameworks are making it possible to write these at a higher level, where you basically specify a cost model and generate an optimal method based on it. The big advantage is that this works for more than just standard matrix multiplies: the same framework can give you complex matrices, integer and boolean matrices, and matrices over different algebras (e.g. max-plus).


That's a cool idea. I don't know how realistic it is to achieve performance parity, but the "generic" functionality is definitely intriguing.


The initial results are that libraries like LoopVectorization can already generate optimal micro-kernels and are competitive with MKL (for square matrix-matrix multiplication) up to around size 512. With help on the macro-kernel side from Octavian, Julia is able to outperform MKL for sizes up to 1000 or so (and is about 20% slower for bigger sizes). https://github.com/JuliaLinearAlgebra/Octavian.jl.


Except "modern" C does not prevent one from misusing restrict, and doing so is UB.


Also, when Travis Oliphant was writing numpy, he liked the Fortran implementations of a lot of mathematical functions, so it was an "easy" borrow from there instead of reimplementing them.

