This is great. But how do we build social media networks that reinforce centrality/influence of accurate truth-tellers, and penalize sensationalism, extremism etc.?
This seems like one of the big hard problems of our era. If we can solve it, maybe this is a phase transition humanity is going through where we begin to operate on a higher level of complexity, similar to when life transitioned to multicellular.
Would this not require quantifying real, objective truth? How does one compute truth without relying on human input? (Which instead trends towards truthiness/the "feeling" of whether something is true.)
I am not being dismissive, I genuinely would like to know.
No - consider it mostly a design and infrastructure problem.
When looking at social media, it's part public forum that needs some type of discovery/filter mechanism, and part a tool for individual users and communities to communicate and collaborate.
The main barrier in current social media networks is their manipulative design, which optimizes for datamining and addictive, gamified systems and interfaces.
Sure, you could try to build a social network for open science and peer review of research projects, but the bar is set so low right now that any interface enabling a more comprehensive search/discover/filter system over datasets would be a massive improvement over what we have today.
Information needs to be discoverable, but people need to be free from propaganda.
> When looking at social media, it's part public forum that needs some type of discovery/filter mechanism, and part a tool for individual users and communities to communicate and collaborate.
> The main barrier in current social media networks is their manipulative design, which optimizes for datamining and addictive, gamified systems and interfaces.
I think you're spot on. Even at the level of the individual, we have to do heavy noise filtering to reach the signals that matter to us. We have heuristics to gravitate towards people we find useful and stay away from those who are a mere nuisance. Social media is one giant noise machine that actively throws bullshit at us: ads are 99.99% noise considering their usability/mental-processing cost; ranking algorithms are optimized to make signals that keep you engaged overly salient compared to their natural incidence rates; our self-association is heavily distorted; content from friends who rile us up is shown as often as content from the friends we like; etc.
> Information needs to be discoverable, but people need to be free from propaganda.
I think the only solution to this is breaking the recommendation engine away from the rest of the product and opening it up to competition. "Use Facebook with our RecommendSmart, scientifically proven to make you less depressed than the default one." "My friends and I use Twitter with SocratesSort; it has been great at starting deep conversations on topics we care about, totally troll free."
You can't quantify or even know "objective truth". We can get really close for some things, but knowing objective truth is akin to being a god. At the end of the day, everything is a model relying on some axioms.
Intersubjective truth, however, can be reached, and is what we rely on most (a dollar bill is a fancy piece of cloth, but we all agree it's worth a dollar). It is reached through consensus making.
Gathering consensus has traditionally been done through government or hierarchy, ultimately treating a single human's or single group's input as "truth". This method has been steadily disintegrating as communication tech gets better (printing press -> mobile internet).
So the solution, to me, is to create consensus systems that rely on the input of many - use the law of large numbers, economic incentives, and the kaleidoscope of subjective truths to reach the most accurate objective truth we can.
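To make the "law of large numbers" part concrete, here is a toy Python sketch (purely illustrative, and it assumes independent, unbiased guesses - a strong assumption that real crowds often violate):

```python
# Toy illustration of the law of large numbers for crowd estimates:
# many independent, unbiased guesses average out to the true value.
import random

TRUE_VALUE = 42.0

def guess():
    # one person's subjective estimate: truth plus personal noise
    return TRUE_VALUE + random.gauss(0, 10)

for n in (10, 100, 10_000):
    crowd = [guess() for _ in range(n)]
    print(f"n={n:>6}: crowd average = {sum(crowd) / n:.2f}")
    # the error shrinks roughly like 1/sqrt(n)
```

The whole game is the independence assumption: correlated or manipulated guesses don't average out.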
It's true that society uses consensus as a proxy for truth. Even when scientists make a new discovery, it isn't considered "truth" until they convince the community – sometimes even taking decades!
Sadly, this consensus can be manufactured by those in power. Censorship helps to a surprising degree, for example. Social media sock puppets, astroturfing, bribery, the list goes on.
How do we fight against manufactured consent? Is it even possible at this point?
Yeah, this is the crux of it, isn't it? And it's not just the problem of manufactured consent either; there is also the problem of mistaken consent that grows organically out of human frailties like our cognitive biases and appetites for drama.
Yes thank you. Between this and your other comment to me in this thread I think you've really gotten to the heart of it. I appreciate you putting into concise words what was rumbling around in my head when I first asked the question.
But what then of the tyranny of the majority, with lemming mentality being self-reinforcing? Such a consensus cannot be known as anything but an agreed truth, or one applicable only to a greater objective, but never The Objective Truth. We need to embrace conflict and mutual exclusion to recognize the more nuanced aspects which are relevant and "truthful" for a minority too.
I think this is one of "the big questions" right now.
Philosophy tells us that you can't compute truth without relying on axioms. But computer science tells us that even if we accept basic axioms, computing truth from them quickly becomes intractable - orders of magnitude too complex.
I suspect that this all leads us to needing to rely on coarse human input as "axioms"... which of course leads to the issue of which humans we rely on as stalwarts of the truth. It's a bit of a chicken-and-egg problem.
My hope is that studies like these will tease out the nuances of networks so that we can engineer networks that nudge better truth-telling nodes toward more centrality, and that gradually we'll master the art of building intelligent networks. After all, biology did it with the human brain.
> Philosophy tells us that you can't compute truth without relying on axioms.
Philosophy tells us no such thing. It is not the province of philosophy to give us the final word on what is what, and asserting such a claim without grounding it in any empirical exploration is mere dogmatism.
The computationalist model of "truth" (by which I think you mean reality) is dying. Embodied-embedded cognition offers an alternative in which an intelligent system has to be deeply embedded within all the other networks it interacts with, and its adaptivity and constraints define it more than anything. There is no making an intelligent network in a test tube (talking about general intelligence).
> After all, biology did it with the human brain.
Biology may have put the required machinery in place, but machinery is by no means a guarantee of either intelligence or adaptivity. You could "engineer" your own network that is your body-brain to get better at conforming to reality - this is called self-transcendence and cultivating wisdom - and arguably the same principles would work for our social networks, artificial networks, and us alike.
But going back to the notion of embeddedness: can a social network that ultimately aims to conform to the norm of making more money be wiser? Can a wiser social network really out-survive a dumber one? Aren't both ultimately going to be embedded in the collective intelligence that is our economy? Both will therefore be constrained by the limits of the intelligence/wisdom of the economy, and unless a bunch of benevolent rich people implement the engineered, wiser social network, gift it to humanity, and get humanity to actually use it, there is no such place, i.e. it is a utopia.
Regarding the quote: it's Gödel's incompleteness theorem that proves the ever-present need for more "axioms" - and it lives in the cross-discipline of philosophy and math.
The first incompleteness theorem says that even with axioms you can't "compute" all truths in a formal system. That is a far cry from "we need axioms to compute truths".
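For reference, a standard statement of the first theorem (paraphrasing the usual textbook form):

```latex
% First incompleteness theorem, standard textbook form:
% for any consistent, effectively axiomatizable theory T that
% interprets basic arithmetic, some sentence G_T is undecidable in T.
\big(\mathrm{Con}(T) \;\wedge\; T \text{ effectively axiomatizable} \;\wedge\; T \text{ interprets arithmetic}\big)
\;\Rightarrow\; \exists\, G_T \;\big(T \nvdash G_T \;\wedge\; T \nvdash \lnot G_T\big)
```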
It’s called a library in my idea of it. The sum of human knowledge, curated by experts in every field. I don’t think we can compute that last bit. We may not have to.
Maybe it's not necessary to define truth for this. Considering metrics that you want to influence might be a better way - hate crime arrests in locales, negative/divisive message content, donations/volunteering for positive causes, etc. But I'm a pessimist and I think moving these metrics in the right direction would adversely affect the $$$ metric that shareholders care about, so it's not going to happen.
It's a very big question - we almost need some level of detail about the commenter to understand their expertise, background, experience, abilities, etc. But once again, how would you quantify it? Not everyone's voice should be considered equal on all topics.
> where we begin to operate on a higher level of complexity, similar to when life transitioned to multicellular.
I've thought about this exact thing before and something to keep in mind is that there will be a split where part of the group consents to being absorbed into the mega-organism, and part will stay individuals.
Humanity won't move in unison. If it happens, part of the group will stay behind, just the same as we still have single celled organisms.
The study itself is a bit "PNAS-y". In fact, a lot of work studies information aggregation in different kinds of settings.
As you, I think, suspect, things can also easily go the other way. It is well known that extensive information/interaction networks can decrease diversity (a general property of averaging mechanisms in network settings that holds even in games on networks). A decrease in diversity in information aggregation is often detrimental, for example through correlation of errors (canceling out one mechanism of the wisdom of crowds) or through conformism.
Further, the wisdom of crowds only exists (relative to individual guesses) if, in growing networks, opinions are not controlled by influential groups or echo chambers (Jackson did work on this). This, of course, is what ends up happening a lot, in part due to the aforementioned factors.
Since the proposed mechanism relies on weighting accurate individuals higher, it is even more susceptible to such biases than just wisdom of crowds in general.
Funny. The rating mechanism makes this equivalent to a reputation based prediction market.
These can also fail and are gameable.
The key assumption is that people form social networks according to a DeGroot process, which is not actually true. People tend to maximize gain on some objective, not information/influence gain - the objective is usually unstated and varies, from knowledge, entertainment, and desirability to clout/influence, power, and even financial gain. Unfortunately, this does result in a divergent scenario.
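For readers unfamiliar with it, here is a minimal DeGroot sketch in Python (illustrative only; the trust matrix and sizes are made up). Each agent repeatedly replaces their belief with a trust-weighted average of beliefs, and the group converges to a consensus weighted by each agent's influence:

```python
# Minimal DeGroot model: x_{t+1} = W x_t with a row-stochastic
# trust matrix W. Illustrative only; the numbers are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
n = 5
W = rng.random((n, n))
W /= W.sum(axis=1, keepdims=True)        # each row sums to 1 (trust weights)

beliefs = rng.normal(10.0, 2.0, size=n)  # initial noisy estimates
for _ in range(100):
    beliefs = W @ beliefs                # repeated trust-weighted averaging

# All agents end up at the same value: a consensus weighted by
# influence (the left eigenvector of W), not by accuracy.
print(np.round(beliefs, 4))
```

Note that nothing in the update rewards accuracy; central agents dominate whether or not they are right, which is exactly the susceptibility mentioned above.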
As for feedback, current social networks have a simplistic "like" system that only weights engagement in a hidden manner, with generic desirability points in a partial-feedback scenario. Getting full feedback would require someone to be able to read every user's likes, which is not feasible. What could be feasible is guessing how "likeable" a post is - i.e., gaming the system.
That said, the paper is rather high quality considering its limitations. It is interesting that essentially listening to a moderate number of curated but not isolated experts had the best results.
(The top 5, interestingly enough - this being ~33% of the group.)
The trick is in identifying these top performers in a real multiobjective scenario, and whether the selection strategy still works.
Gameable perhaps, but in proper meritocratic[0] systems, voting rings etc. would be detected and the perpetrators hopefully banned; they would also lose the ability to participate again at a later stage, giving them an incentive to play fairly next time.
This type of language is real slippery though when we start applying it to subjects and ideas that are not falsifiable.
I think when you apply this in that context you will inevitably end up with digital authoritarianism.
I would say it is pretty trivial to increase the centrality of nodes in a network of non-falsifiable "truth-tellers" with standard propaganda techniques. All the techniques from print/radio/TV work far better on digital networks.
> In this paper, we test the hypothesis that adaptive influence networks may be central to collective human intelligence with two preconditions: feedback and network plasticity
In the paper, the participants of the experiment manually picked whom they wanted to follow. But what if the system connected them automatically to high signal-to-noise individuals based on feedback alone?
I've been working on something like this with my hobby project https://linklonk.com - an information network where the connections between you and other users are determined by your ratings of content.
When you upvote an article, you connect to other users who upvoted that article. When you downvote, your connection to those who upvoted it becomes weaker. That way, the strength of your connection to other users captures the signal-to-noise ratio of those users for you.
The stronger your connection to someone, the higher their other upvoted items are ranked on the "For you" page.
For example, I upvoted this paper on LinkLonk: https://linklonk.com/item/6534389451373608960 - if you also upvote it, you will get connected to me and will see more of my recommendations on the main page. The next user who upvotes it will connect to me and to you, and so on.
Since you know that your content ratings have a direct effect on what your future self will see, you are incentivized to think about whether each piece of content you just consumed was truly worth your time. This kind of retrospective thinking is missing when we hit upvote/retweet/like in existing social systems.
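To make the mechanism concrete, here is a rough Python sketch of the update rule as described (the exact formulas aren't spelled out above, so the function names and constants here are hypothetical):

```python
# Rough sketch of the LinkLonk-style connection logic described above.
# Weights and names are hypothetical, for illustration only.
from collections import defaultdict

upvote_conn = defaultdict(float)    # my trust in each user's upvotes

def on_my_vote(prior_upvoters, my_vote, boost=1.0, penalty=0.5):
    """Update my connections after I rate an item.

    prior_upvoters: users who upvoted the item before me.
    my_vote: +1 for upvote, -1 for downvote.
    """
    for user in prior_upvoters:
        if my_vote > 0:
            upvote_conn[user] += boost   # they surfaced good content for me
        else:
            upvote_conn[user] = max(0.0, upvote_conn[user] - penalty)

def score(item_upvoters):
    """Rank an unseen item for my "For you" page by the total
    strength of my connections to its upvoters."""
    return sum(upvote_conn[u] for u in item_upvoters)
```

The key property is that scoring is personal: the same item ranks differently for each user, depending on whose upvotes they have come to trust.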
My project is in a very early stage and suggestions/ideas are welcome.
Sounds polarizing.
I wouldn't decrease your connection upon downvoting, but I would increase connections with others who downvoted. I would only decrease a connection over time in the absence of similar ratings.
The purpose of the downvote is for you to say what content wasted your time. Those who brought that content to you deserve to lose your attention so they do not waste your time in the future, do they not? That's why LinkLonk decreases your connection to those who upvoted it. It also displays a popup saying "You will see less content from N users and M feeds that upvoted this" to explain how the downvote button works.
LinkLonk also increases your "downvote connection" to others who downvoted that item, as you are suggesting. A "downvote connection" is how much weight the other person's downvote has for you. That is, it captures how good their past downvotes were for you and how much their future downvotes can be trusted. So there are two kinds of connections:
- Upvote connection - gives others the ability to promote/curate good content for you.
- Downvote connection - determines how much others can bury/moderate bad content for you.
And as you are also suggesting, connections decay over time without similar ratings. Each time someone you are connected to upvotes something, your connection to them becomes slightly weaker. So if you ignore content from a user/RSS feed, it will be ranked lower for you over time. In practice, then, the downvote button should not need to be used much at all.
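In code, that decay could look roughly like this (a sketch; the constant is arbitrary):

```python
# Sketch of the decay dynamic: every upvote by a connected user
# slightly decays your connection to them, and only a shared upvote
# rebuilds it. The constant is arbitrary, for illustration.
DECAY = 0.99

def on_their_upvote(connections, user, i_also_upvoted, boost=1.0):
    """Called whenever a connected `user` upvotes an item."""
    connections[user] = connections.get(user, 0.0) * DECAY
    if i_also_upvoted:
        connections[user] += boost   # shared taste strengthens the link
```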
You can think of every user as a neuron where upvotes on the same items strengthen their connections.
Though there is one important bit of asymmetry: you connect to those who upvoted that item *before* you. That way people who recognize useful information earlier earn more trust. In a sense this is a "proof of work" - to recognize valuable content before it becomes popular.
My hope is that this asymmetrical nature of connections will get less informed people to connect to more informed people. Which is the opposite of the echo-chamber effect - when less informed people are connected to similarly less informed peers.
And yes, I'm slowly preparing to do a "Show HN" for LinkLonk. But I probably need to grow the number of active users a bit before I do that, otherwise the "Show HN" will bring a lot of clicks that will just bounce.
So let me get this straight - you become connected to those who upvoted the item before you, but not to those who upvoted after, right? So this incentivizes the avant garde, rather than the promoters of stuff that's already going viral. Very interesting.
Correct. This also provides some protection against gaming of the system. You can't simply upvote a bunch of popular items to get the people who already upvoted them connected to you.
You have to be good at predicting what people will like in order to get them connected to you. And this is what a good curator does.
I wonder how this dynamic would affect virality in relation to truthfulness?
As in, will this dynamic tend to reward the spread of "edgy" fake news more than current network paradigms already do, or will it tend to slow its spread, or will it be neutral on that axis? Seems like a hypothesis that would need to be tested, but maybe you have a hunch or insight on the matter.
The overall system behaviour is hard to predict. So yes, we need to try it out and see.
But as a user I can say that I do behave differently when it comes to upvoting content on LinkLonk. It is much less impulsive because my ratings have consequences for myself. If I find something that I completely agree with but I haven't learned anything new - I would not upvote it on LinkLonk, while I would be tempted to signal boost it elsewhere (like HN). Let's see whether this generalizes and what kind of system behaviour emerges out of it.
You are right, LinkLonk is a filter bubble. The difference from other systems that exhibit filter-bubble dynamics (e.g., algorithmic feeds powered by machine learning that optimize for "engagement") is that LinkLonk puts all of the control into the hands of the user. The user is responsible for the content they upvote, which directly determines whom they get content from in the future.
In a sense this is similar to how users of RSS feed readers control which feeds they subscribe to. They are responsible for the content they consume. What LinkLonk adds is a transparent layer of automation that helps you subscribe/unsubscribe based on your content ratings.
My hope is that LinkLonk will help people get more informed, but I cannot be sure. The project is a live experiment to find out whether this is the right system of incentives.