
Myths about Social Media



January 18, 2022


I think the echo chamber one misses the point (or maybe the analogy of an echo chamber is bad).

It's true that people are exposed to other views, but there is still the echo effect of what their bubble thinks of those other views. Two people discussing their differences is not the same as two (or N) bubbles hurling outrage at one another, which I think is closer to most of Twitter: the echo chamber still applies within your bubble and its criticism of others.


The vast majority of people's exposure to other views is just the dumbest iterations of said views, shared for the express purpose of "dunking" on them to further calcify previously held positions. On Twitter, at least, the goal is to find morons on the other side of the ideological divide and mock them.


I am not talking about obvious (and easily labeled) morons. Many educated, genuine scientists raise questions, and instead of dialogue they get only a totalitarian response from defunct fact-checkers. Science is always open to listening, discussing, and being corrected; at no point in history has an "an attack on me is an attack on science" attitude turned out to be right or productive. Without this approach and open dialogue, it is a circus everywhere.


What kind of reasoned, introspective dialogue do you think you're going to have 140 characters at a time?


Kind of reminds me of when Dr. Robert Malone, one of the foremost mRNA researchers, was temporarily banned from LinkedIn over his views on the vaccine. In a JRE interview he said that after the ban, LinkedIn (probably after public backlash) sent him an apology letter saying they could not assemble a team qualified enough to fact-check him (or were not qualified themselves). Let that sink in for a moment...

When did we start turning to social media to get our health facts? Fact-checking from big tech is the silliest thing normal people have embraced on the Internet. Kind of loony when you think of the historical context of the internet.


I’ve heard that called the “weak man” as opposed to the straw man. Almost all debate I hear is against the weakest or nuttiest form of an argument.


I think that stretches the definition of 'echo chamber'. That's simply what a community is, a group of like-minded people with shared views.

There's nothing automatically wrong with this. People can have shared criticisms of others if there is some meaningful difference between the groups that justifies that disagreement. I think people mistake 'hurling outrage' with simply two (or more) groups being fundamentally in disagreement.

The answer to this is actually the opposite of what people commonly suggest. Just let people with irreconcilable differences go their own way, which is why I like the researcher's suggestion to improve tools to shield oneself from hate.

Instead of demonizing so called 'echo chambers' I think we should just call it what it is, freedom of association.


"echo chamber" completely misses the point: exposure to outrageous posts is the goal.

If you want constructive dialog that builds towards an actionable plan a dedicated forum and a wiki would be the right tools.

It's just dumb to use a platform that exposes you to people who think the exact opposite - and does so on purpose!

Band-aid solutions where these fools get to have their precious walled-garden internet empire and eat it too are just never going to work.

You post a picture of your dinner on Facebook and some people are going to think it looks disgusting. One vegetarian is going to take time out of their busy day (30 seconds) to write a couple of sentences describing your plate full of boiled corpses. Your bestie says it is mean to say that. Reply: it is mean to eat animals. You have a hard time deleting your bestie's comment while they summon the cohorts to deal with this vegetarian picking on you. Six months later you are still talking about it.


I agree that echo chambers aren't bad by default; the issue is that they commonly fall into groupthink patterns. If the subject doesn't have a strong tie to reality, it's easy to trend towards extremism.


It is basically incorrect. Most people in these in-groups are hesitant about sharing things in public. They know it is not OK. Most communication happens within closed, safe environments. I have several sect members on FB, and the total noise they make in my feed is very low.


Can't say I'm impressed.

It seems plain as day to me that many people are far more hateful online than in real life, mostly under the cover of anonymity and/or the physical divide. You can say some truly nasty things online that in the physical world would make you wake up in a hospital. There's no real correction mechanism online.

I can't believe the research doesn't address the point that some others here are making as well: the most harmful content is promoted, to the point that you exclusively see harmful content, which is then normalized. Social media promotes the crazy and silences the reasonable.

Not a word on the massively increased speed of information. There is no time to refute any point because the damage is done in minutes, the garbage spreads, and then the next piece comes.

Not a word on the complete lack of trust in information itself. In media, science, even simple verifiable facts. It doesn't seem to matter any more, people just make up their own facts.

And then the solutions:

"Invest in remedying the offline frustrations that drives online hate"

Ah, ok then. The solution is to just improve the world.


Most research shows that anonymous accounts actually facilitate more conciliatory and nuanced discussions.

The big issue in social media is that the moderates are attacked by both sides. They are often attacked more viciously by those of their own political persuasion who are even further to the left/right than they are. This especially occurs if they agree at all with "the other" on any point. Obviously this has a substantial chilling and muzzling effect.

Anonymous accounts have less face to lose in the real world if they take a more moderate stance. However, over time even anonymous accounts build social credit with the in-group of whatever ideology they support.

This has many negative effects. One is that politicians get the most visible engagement on their posts from their most extreme followers, and so the politicians themselves move to more extreme positions because they think they don't have any moderate support. Twitter is not real life.


Indeed. And not just politicians, also traditional media, which are now hard to tell apart from "normal" Twitter accounts. They're all in on fast juicy action for maximum engagement.

So you can't trust your peers, politicians or traditional media. They're all Twitter-like now.


> It seems plain as day to me that many people are far more hateful online than in real life,

They’re still thinking it IRL.


Maybe it would be worth repeating the 'myths' he lists in the tweets:

Myth 1: A lot of misinfo on social media

No, research suggests there is little, shared by few & having small effects. Those sharing misinfo are not dumb, but they have intense political animus, which motivates them to share what fits their worldview, true or false.

Myth 2: Social media makes people hateful

No, research suggests that online hate reflects offline frustrations that make them hateful both online & offline. The hateful are few in numbers but they are attracted to politics and, hence, are much more visible.

Myth 3: Social media are echo chambers

No, research shows that, for most, social media breaks the bubble. We are more connected to "the others" on social media than in our offline lives. That is why it feels unpleasant - because it is the most hateful "others" we meet.


I would love to read the research he's referring to, but I couldn't find any links or citations. I intuitively believe there's truth to all of these so-called myths, but I'd love to be proven wrong. Anyone know where I can read more about this?


He tweeted a Dropbox link to the presentation at the end, and it has a list of papers referenced.


I would, too, because I've seen research that says otherwise. I've talked with this researcher[1] about it in the past, not sure if their page has the study they were conducting online yet, but the page has papers that are relevant/adjacent to this topic.



Did some digging, here is the paper he references on echo chambers/bubbles:


My initial concern, based on reading the abstract, is that since this study only looks at Twitter and not Facebook, it isn’t enough evidence to make any conclusions about echo chambers in a general way as so many people get their news from Facebook.


The last Tweet links to a Dropbox with slides & citations:


> I intuitively believe that there's truth to all of these so called myths, but I'd love to be proven wrong.

His opinions are based on claimed research while yours are based on intuition. I'd sooner trust his opinions over yours because he at least could be exposed as a liar if it turns out that the research doesn't back up his opinions. You, on the other hand, would at worst be called naive.


As they say, a paper is worth the peer review performed on it.




I think the myth is the misinterpretation of scale. People tend to believe social media is primarily hate and misinformation, that most users are simply addicted to it like dope fiends, and that it's nothing but echo chambers, etc. All of these things are true, but likely to a lesser degree than assumed.


Agree, the negative part is disproportionately impactful.

As an analogy, my g/f and I recently went grocery shopping. As we got out she was fuming at how old people completely ignore COVID rules, whilst society is largely making sacrifices for them.

I was part of the whole experience, and two people ignored the rules. One did not have a mask, another did have one but ignored social distancing.

There were probably some 200 people in the store. So actual reality is that almost everybody complied. But it takes just one or two that don't, to make a sweeping conclusion like that.


Myth 2: Social media makes people hateful

No, research suggests that online hate reflects offline frustrations that make them hateful both online & offline

That conclusion seems completely wrong to me. Sure, there are previous frustrations, but when you feel attacked online, you become hateful as a defense mechanism… and of course, there is always some YouTube channel ready to reinforce that anger.

And this has happened with almost every social media platform since Freenode.


When I feel attacked online I stop going to the places where I don’t feel welcome. As you say, some people react by engaging and escalating, but I don’t think most people stick around long in those situations. What this guy is saying seems to be that the people who escalate are likely people who already behave that way offline.


My experience is that people are more socially refined when offline and that anti-social behavior is there but gets corrected and contained. Offline experiences can escalate much more, but at the same time there seems to be more conflict resolution. Everything is more intense.

But it’s refreshing to read about a different view on the topic.


Yeah, these are general features of mass media that predate the internet.

Much of the hysteria about social media is signal-boosted by the traditional media establishment that it disrupted.


Yes, 100%.

A very recent extreme example from Ireland: Just last week, there was a horrific murder of a woman out running. Ashling Murphy, a teacher.

The next morning, print and radio media ran the leaked news that a Romanian national with many priors had been picked up by local Irish Gardai (cops).

For a day or two, racism and xenophobia ran rampant - then sure enough, he was cleared of suspicion and released. The man's life is ruined now.

IMMEDIATELY afterward, print and radio media blamed social media for jumping to conclusions, doxxing the man and ruining his life (social media were actually very good at removing his name). Old media are now calling for regulation of social media and blaming them for the whole thing, projecting and deflecting with impunity.


There are countless other examples - Joan Burton in Jobstown, Irish Water protests, Maurice McCabe - every time the government or Gardai or media are caught in a major lie thanks to social media, there are a couple weeks of stories about online bullying and the need to regulate social media.

It's so charmingly obvious, and the lie is blatant, but it works on enough people that they get away with it. The cost to society is extreme, from enabling corruption to slowing progress to dulling and poisoning the minds of the gullible.


I doubt these dare-I-say-conspiracy theories.

Among other reasons, just consider that media suffered as much, financially, from eBay, which devastated classifieds. For many local papers, that was the single largest change in the last two decades.

And I don’t remember any campaigns against those companies.

The individual journalist and editor actually writing and deciding on a story also don't hold the grudges you assign to all of "media". They don't have the power to effect any change that could ever make the sort of difference needed to come back to them in any meaningful way. And they are unlikely to experience the strong emotions "media" might have, because they haven't experienced much of the loss the industry has: they still have a job, for example. And by now, they are likely too young to have known better times pre-internet.


>The individual journalist and editor actually writing and deciding on a story also doesn’t hold the grudges you assign to all of “media”.

That assertion doesn't hold up when you read the personal Twitter feeds of many journalists and editors of the publications pushing the anti-tech narrative.


Why do you assume it is a conspiracy? It's just incentives & other market forces at work.


2. FB promotes content with the fastest growth of attention: upvotes, downvotes, comments. It's the opposite of HN, basically. The most engaging content doesn't have to be divisive, but since the majority are emotional beings with little interest in abstract thought, they react better to emotion. Maybe in a few centuries the majority will be more concerned with thoughts and less with emotions, and the equivalent of a flame war will be snarky scientists exchanging obscure arguments about the correctness of some irrelevant theorem, e.g. NP-deniers would reject the NP=P equality, NP-protagonists would support it, and the moderate minority would advocate for a middle ground - that neither statement is provable. Those moderates would be dismissed as Goedel-ists.
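The contrast being drawn here (ranking by raw engagement velocity vs. HN-style upvotes-with-age-decay) can be sketched roughly. The scoring formulas below are illustrative assumptions only, not the actual FB or HN algorithms; field names and the gravity constant are made up for the example.

```python
import time

def velocity_score(post, now=None):
    """Engagement-velocity ranking (illustrative): every reaction counts,
    whether upvote, downvote, or comment, so outrage scores as well as praise."""
    now = now or time.time()
    age_hours = max((now - post["created"]) / 3600, 0.1)
    total_engagement = post["ups"] + post["downs"] + post["comments"]
    return total_engagement / age_hours

def decay_score(post, now=None, gravity=1.8):
    """HN-style ranking (illustrative): only upvotes count, and age
    steadily pushes even popular posts down the front page."""
    now = now or time.time()
    age_hours = max((now - post["created"]) / 3600, 0.1)
    return post["ups"] / (age_hours + 2) ** gravity

now = time.time()
posts = [
    {"id": "calm",  "created": now - 3600, "ups": 50, "downs": 2,  "comments": 5},
    {"id": "flame", "created": now - 3600, "ups": 10, "downs": 40, "comments": 80},
]
# Velocity ranking surfaces the divisive post; decay ranking surfaces the liked one.
print(max(posts, key=lambda p: velocity_score(p, now))["id"])  # flame
print(max(posts, key=lambda p: decay_score(p, now))["id"])     # calm
```

The point of the sketch: under velocity scoring, a post that mostly attracts downvotes and angry comments still outranks a well-liked one, which is the dynamic the comment above describes.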

3. FB is like a big open club that admits anyone and everyone, and all events take place in the one hall. Predictably, it turns into a shouting match and FB has to ban some members, and by banning some ideas and not others, the club necessarily turns into an echo chamber.


The echo chamber isn't due to bans. It has to do with Facebook's algorithms showing you news stories it thinks you will relate to.

Besides, I have yet to see anyone banned from any social media site whose absence makes the site worse. In fact, I blame the various social sites for not banning them sooner.


> Myth 2: Social media makes people hateful

> No, research suggests that online hate reflects offline frustrations that make them hateful both online & offline.

That's just a sleight of hand. Sure, people may have been hateful before, but social media amplifies that hate exponentially.

Social media thrives on outrage, being addictive, and uses algorithmic ranking to feed it as much as possible.

> The hateful are few in numbers but they are attracted to politics and, hence, are much more visible.

Few in numbers? Have you seen what goes on during election years?

Who hasn't been talking about the January 6th events?


There's no way he's right on this. Sure, hate may germinate offline to some degree, but it is stoked almost entirely online.


Are people talking about Jan 6 hateful? I don't get what you're saying.


How often do you see an intellectual debate regarding Jan 6, rather than an emotional one in which the other side is vehemently attacked?




I agree with a lot of that. In my opinion, the problem is the news feed. My very "hot take" is that social media not only does NOT create echo chambers, it actually has the opposite problem.

People naturally self-organize according to common interests and values. Prior to "modern social media", forums were popular, and each forum was special-interest and broken into sub-forums. There was usually the obligatory "Off Topic" forum where anything goes (and you'd still have organized threads), and often the "Only Religious" thread and "Only Politics" thread. People would venture into those threads knowing exactly what to expect, and they could bail out at any time. Some of those threads got heated, but I don't remember people saying forums were causing a mental health crisis, or that people were being censored because mods deleted off-topic or obscene content.

Social media like Twatter and Facebook introduced the "news feed", where everything and anything gets shoved down your throat and you have no ability to filter by content or topic. The best you can do is follow/un-follow or mute certain pages or friends. I don't know anyone who doesn't want a chronological sort option, but "the algorithm" wants to push the most engaged-with content to the top. And we all know that the most engaged-with content tends to be the most incendiary and provoking.

Don't get me wrong, exposing yourself to "uncomfortable" information and challenging your beliefs is healthy, and everyone should do it from time to time for self-growth. But the vast majority of people who go on Twatter or Facebook are looking for entertainment, not rational discourse. They log on after a hard day at work looking to unwind with cute cat gifs, and instead get their crazy uncle's political rantings shoved down their throats or, far worse, some clickbait news headline that makes everyone of all political stripes want to click because it's so stupid and provocative.

It's no wonder people seem more angry on social media.


Unfortunately, his solutions are non-starters. Openness and oversight of data and algorithms won't do anything without clear ethical requirements; all it will really do is slow down iteration. Prioritizing tools to shield against hateful content is not in the best interest of social media platforms. Sharing is the main mechanism of retention.

As for the offline stuff...we've been trying to do that since before the internet. See how far that's gotten us.

If I had any solution to offer, I'd say it's to use large platform social media to funnel people to smaller communities where most of the real engagement happens (like Discord). We're already seeing it with brands that don't feel like they can reach their whole audience anymore. Politicians can do that too.


> Prioritizing tools to shield against hateful content is not in the best interest of social media platforms

It's well understood that hateful content is harmful to retention. Off the top of my head I can think of at least 3 long term projects at IG that were premised on this.

All major social networks prioritize tools to reduce hate because they know that consuming and sharing of hateful content is bad for their bottom line.

The problem is that people are hateful and they express hate online. It's no longer the case that social networks are amplifying this pattern in any way.


> It's well understood that hateful content is harmful to retention

Not sure how you are defining harmful here, but emotionally charged content designed to upset and anger leads to higher engagement numbers. I would argue that's quite harmful.


I'm willing to concede this, but I find it incredible that this is true and yet users have very little control over how their timeline is filtered.


If you're talking about Facebook or Instagram, users actually have a lot of control! You can snooze or completely remove other users, groups, and pages. You can ask for "less content like this", and FB uses ML to show you less of similar things. You can add keyword filters to the comments sections under your posts. You can block users from commenting or from posting in your group. You can even do all of these things with ads as well, including removing ads from certain categories.

There aren't power user controls like custom regex filters, but that's because the vast majority of users would not use them, and it's not worth the UI complexity and risk that things like that add.


I posted this because of this interview Michael Bang Petersen did last October (there's a transcript but you have to click to see it, then scroll down to where Petersen appears):

I thought it was surprisingly excellent, with lots of insights cutting directly against the most hyped notions.


The noosphere is so young, so recently emerged. That a couple of early aggressive/exploitative/disruptive signals would dominate & run amok is unsurprising. That we mistake the whole enterprise as malign while it is only so few causing chaos is unsurprising but tragic.

I feel like we are all demanding too much, insisting on certainty & comfort. We need to let these companies figure their own paths out - allow the diversity of their approaches - while also starting to let users band together & do their own moderation, create their own socialized defenses & overlays. I don't want to tell these companies what to do or how to handle these problems, but right now most of their terms of service prevent users & others from mounting any kind of defense of their own. Ultimately the only people I trust to tell us who the bad users are are other users, and we're not all going to agree. I think that's OK, and that we should embrace sovereignty: we should let more democratic forms of social-media-ing emerge.

This researcher has a much better redefinition of the problem than what our simple fears project. I'd love to have more hope that the world could engage the real problems and avoid the convenient, frustrated blame games & bully-pulpit regulation that the drums of conflict & tension beat for.


For me this is everyone else figuring out why the web 1.0 was so great. If you have your own place you have to make an effort. People might still visit but they will not return if there is nothing interesting going on.

Platforms instead use a system of guaranteed readership: effortless publications that look like they belong because they are consistent with the rest of the garbage heap. (Advertising is the biggest turd. If there are ads, there is even an incentive to keep the content crappy.)

It's like being invited for dinner by a friend vs. a soup kitchen. Why is it not the same? Why are the people at the soup kitchen less friendly? -- are you kidding me?

Moderation has to be the most obvious advantage. If you bring the hate, you won't be invited to the next dinner.

It feeds back into itself: people who don't run websites just for fun fail to understand or appreciate what it takes, which makes their judgement unreasonable. They demand that others live up to standards they themselves do not meet.

A professor publishing on twitter?

> Exposure to hate can help legitimize hate, in part because our views of the other political groups becomes biased.

This was the whole point! Expose one camp to the most stupid argument made by the other camp for ENGAGEMENT!

On your own website you wouldn't just quote the most stupid things you've found on the internet.


At this link there is the article where M.B. Petersen (and collaborator) gives a more precise notion of the question behind Myth 2 ("Does social media make people hateful?")

I haven't formed an opinion yet, but I'd be curious to hear from anybody who has one. (see among other things how they test for a "hostility gap", page 16).

[edit: "more hateful" -> "hateful"]


I haven’t been on social media for years but maintain a minimal LinkedIn for professional purposes. To me, it seems a great choice I made years ago. There’s too much life to live and not enough time to let randos on the Internet into your time. Increasingly, I see people unable to disconnect from social media. Whatever happens there seems immediately wired into their brain (and by their choice to some extent).

Edit: If something makes you feel consistently bad or worse about yourself, then tuning out or disconnecting from it is a sane choice. Even murderers can lead meaningful lives behind bars. (Dostoevsky was good for this insight.)


I'd say the fundamental problem with social media is that current incentives strongly encourage companies to make bad social networks.

Cutting advertising and mass spying out of the picture could be enough to all but completely solve the problem. But there may be other ways to fix it. Maybe significantly increasing the liability and risk platforms are exposed to for widely-posted/shared content.


Rather than try to corner companies into regulating & policing speech through checks & balances on their profitability, I'd really like to see online speech become something that users are broadly capable of moderating independently. Most moderation & curation services & systems should be opt-in & separate, and ideally should function across sites, allowing us to comment on & reveal problematic users across sites (or to ignore those who are un-invested, low-quality sock puppets).

Trying to change what these companies are and what they do seems like a truly Sisyphean struggle. I don't see any hope of coercing them into becoming better. To me, the onus to be a responsible, healthy, positive society lies on the members of society; we lack the technical starting place to begin experiencing these platforms in our own manner, and we lack the ability to start to self-govern. But the conventional attitude right now seems to be that heightened centralization, stronger and more active regulation, clamping down - as you propose, bigger sticks - is the only win. To me, the bigger-stick option is mad, is rampant destruction; we cannot place all the responsibilities of society upon one entity. Society has to have its own stake here; we can't just pass the buck & demand someone else fix the social quagmires.

But society must be given a chance to defend itself. Something the currently freedomless, constrained, walled gardens quite explicitly forbid.


I suspect advertising might actually sometimes be a moderating effect in that sites/networks occasionally clean up their act to retain major advertising partners. Not great on the whole, but might be one meagre positive.




I will admit that it is impressive to compress this message into several tweets. That said, I have a fundamental problem with the message presented. The biggest one of all is the buy-in to "hate speech", primarily due to how loosely it is being defined and how eagerly various governments jump on it to curb the remnants of free speech. The fact that this is presented to the public and lawmakers suggests to me that:

1. The author knows his audience and wants to present issues in the language they understand

2. The author believes it

Both could easily be true.