
I don't understand why commenters in these threads keep noting that free speech only protects people from government censorship. This is a common, distracting, and empty statement. Proponents of free speech are pro free speech as a general concept and principle, beyond what protections are afforded under American law today. The idea of free speech predates the existence of the United States. Free speech is hugely valuable to defend, because what society finds acceptable or unacceptable is very much subjective and changes with time/location/culture/setting/leadership/etc. Having an open exchange of ideas is good and necessary for the long-term health and stability of society, especially if we care about being a collectively truth-seeking society.

There are also frequent comments on such articles saying that content creators can just seek another platform, which frankly seems like an obviously unhelpful suggestion. Twitter, Reddit, Patreon, and others are massive in scale and have a ubiquity and reach that isn't found elsewhere. Platforms that benefit from network effects don't face effective competition, and investors typically won't invest in new competitors in those arenas, because it is such a long shot to break through those barriers. We could argue that Patreon is not one of the platforms whose value is driven by network effects, but the underlying payment processors (e.g. Visa) definitely benefit from network effects. And of course, Visa has deplatformed many parties, including, famously, WikiLeaks back in 2010.

There are also examples of folks who followed that advice and left Patreon for other platforms (e.g. SubscribeStar) and then got deplatformed (e.g. by Stripe or PayPal). There are examples of lower-level entities like Visa/Mastercard _forcing_ platforms built on top of them to censor someone or risk being banned by them. Clearly, these privately-owned platforms are monopolies or oligopolies in a sense, holding access to large segments of the population with no competitive forces acting on them. Alternatively, we can look at them as being the digital public square, and therefore they should be subject to regulation that prevents them from taking action beyond what the law in their jurisdiction requires.

The big risk is this: when only a few entities funnel so much societal discourse or control our communication infrastructure or process payments, those entities making arbitrary decisions about who they serve has similar impacts/risks to the government imposing similar restrictions through the law. These companies should not act as a moral police and should not impose their own personal governance above what is minimally required by the law. Nor should they rely on the judgment of an angry mob to make decisions.



It's an interesting problem. Speech has never truly been free: sometimes and in some places it's been regulated by government, but in modern liberal democracies it's been regulated by culture. Polite society regulates speech by rejecting people who engage in whatever that society views as harmful speech.

Social media platforms and the internet generally, substantially weaken the power of culture to regulate speech in the way that it used to. I don't know what we do about this. We don't want government regulating speech, and it doesn't seem like allowing social media platforms to regulate it arbitrarily is particularly good either. But the gates that culture and localism placed on speech in the past did, seemingly, serve some useful purpose.

Do we want to live in a world where speech is truly unregulated, even by shame or culture? Maybe. It's possible that the answer here is yes we do, that sunlight is always the best disinfectant, and that the truth always emerges victorious in the end. But it's also possible that those things aren't true. I don't have an answer, but I don't think one way or the other is the obviously correct path forward either.


> Social media platforms and the internet generally, substantially weaken the power of culture to regulate speech in the way that it used to.

I fail to see how this is the case. It does absolutely grant the power to regulate speech. What social media platforms have changed is that the people carrying out the informal regulation of speech have shifted to being a very narrow subset of the population, one overwhelmingly made up of a single political culture. Tech, especially in the Bay Area, is effectively a political monoculture. Support for Republicans is often in the single digits[0].

The evidence really does indicate that sunlight is the best disinfectant. Despite the constant concern over Trump's comments on immigrants, for example, support for immigration in the US is at record levels.[1][2] Despite the concern over explicitly fascist rallies at Charlottesville and DC, these rallies actually caused a significant drop in support for the far right.[3] And lastly, while the Republican Party has become a party of Trump and has adopted much of his rhetoric, the result has been largely bad for the party: they lost over a dozen seats in the midterm elections - surprising given that the midterms are when Republicans tend to do well.

Yes, sunlight is the best disinfectant, and the data demonstrates it. Ironic, then, that some would want to shield these views that they despise from said disinfectant.

0. https://www.recode.net/2018/10/31/18039528/tech-employees-po...

1. https://www.nytimes.com/2018/06/23/us/immigration-polls-dona...

2. https://news.gallup.com/poll/235793/record-high-americans-sa...

3. https://www.aljazeera.com/indepth/features/2017/09/alt-weake...


> I fail to see how this is the case. It does absolutely grant the power to regulate speech.

I didn't say it prevented speech regulation in general. I said that it attenuated the previous regime of speech regulation. It does so by replacing it with a new one, where speech norms are determined by a tiny group of people in unelected, unreviewable positions at private companies.

> The evidence really does indicate that sunlight is the best disinfectant. Despite the constant concern over Trump's comments on immigrants, for example, support for immigration in the US is at record levels.[1][2] Despite the concern over explicitly fascist rallies at Charlottesville and DC, these rallies actually caused a significant drop in support for the far right.[3] And lastly, while the Republican Party has become a party of Trump and has adopted much of his rhetoric, the result has been largely bad for the party: they lost over a dozen seats in the midterm elections - surprising given that the midterms are when Republicans tend to do well.

You may be right. But the story is far more complex than you're letting on. If sunlight is always the best disinfectant, then why did these movements gain steam in the first place? Sure, after an adverse event like Charlottesville, support may drop. But these movements just elected a president. Where was the disinfectant then?


Largely because the opposition did exactly that: they started deplatforming and attempting to more coercively prevent right-wing views from being heard. Deplatforming grew in popularity around 2013 or 2014. Students at Brown shut down Ray Kelly's speech in 2013 [1]. We did prevent these groups from going out into the sunlight. That gave them the chance to grow in the shadows.

Also, with regards to Trump, I largely see his election as happening despite his association with the alt right rather than because of it. It's a big liability not just for him, but for the Republican Party. Trump's biggest advantage wasn't anything to do with him, but the fact that Democrats had alienated many centrists in the leadup to the election, and fielded a candidate who failed to rally their base's enthusiasm.

1. this is not to say that Ray Kelly is alt right. The fact that he got shut down despite not being nearly that extreme, though, is still demonstrative of my point.


> Largely because the opposition did exactly that: they started deplatforming and attempting to more coercively prevent right-wing views from being heard. Deplatforming grew in popularity around 2013 or 2014. Students at Brown shut down Ray Kelly's speech in 2013 [1]. We did prevent these groups from going out into the sunlight. That gave them the chance to grow in the shadows.

I'm not a fan of de-platforming, but there's pretty decent evidence that it can be effective [1]. I'm aware of no empirical data supporting the notion that it is counter-productive. That being said, it's certainly a theoretical possibility. It could be the case that exposing ideas to the light of day robs them of their power. But then, why are conspiracy theories so persistent? Why was Alex Jones so successful, in spite of the unbelievably simplistic and falsifiable lies he was telling?

I see almost zero evidence that sunlight acts as a disinfectant for ideas that appeal to people's preconceptions, and a lot of evidence that the modern left's tactics of shaming and de-platforming are actually the most effective ways to change culture and minds. To be clear, I don't like those tactics. I think that, in the very long run, they are probably harmful. But it is hard to seriously deny their efficacy.

> Also, with regards to Trump, I largely see his election as happening despite his association with the alt right rather than because of it. It's a big liability not just for him, but for the Republican Party. Trump's biggest advantage wasn't anything to do with him, but the fact that Democrats had alienated many centrists in the leadup to the election, and fielded a candidate who failed to rally their base's enthusiasm.

I think that's part of it. Another part of it is that Trump saw through the stalemate stable equilibrium of left/right politics in the US. He correctly surmised that there was a middle path, that activated ethnic and nationalistic identities of the right, while simultaneously stoking economic anxiety traditionally associated with the left, to form a coalition that had unanticipated electoral power.

1. https://motherboard.vice.com/en_us/article/bjbp9d/do-social-...


That deplatforming deprives the person being deplatformed of a platform is obvious, to the point that it's effectively a tautology. However, concluding that this means deplatforming is effective is extremely naive. When tech companies engage in acts of censorship like deplatforming, it causes many to lose trust in the perceived impartiality of these platforms. So while individual people who get censored may see their audiences diminish, support for the views they espouse, and distrust of the authority carrying out the censorship, often increase. In case it wasn't clear, the lack of efficacy in deplatforming I referred to in my previous comment was in reference to attempts to curb ideas and political movements - not the individuals within them.

Again, deplatforming gained traction in the early to mid 2010s. It coincides more or less directly with the rise of the Alt Right. Increases in deplatforming are correlated with rising support for the far right, not declining support.


> That deplatforming deprives the person being deplatformed of a platform is obvious, to the point that it's effectively a tautology. However, concluding that this means deplatforming is effective is extremely naive. When tech companies engage in acts of censorship like deplatforming, it causes many to lose trust in the perceived impartiality of these platforms. So while individual people who get censored may see their audiences diminish, support for the views they espouse, and distrust of the authority carrying out the censorship, often increase. In case it wasn't clear, the lack of efficacy in deplatforming I referred to in my previous comment was in reference to attempts to curb ideas and political movements - not the individuals within them.

It didn't just impact the individuals, it reduced the behavior site-wide.

> Again, deplatforming gained traction in the early to mid 2010s. It coincides more or less directly with the rise of the Alt Right. Increases in deplatforming are correlated with rising support for the far right, not declining support.

This is a pretty clear correlation/causation confusion. If factor A is on the rise and triggers reaction B, you cannot use the rise of B to prove that B caused A. Now, your narrative may be correct, but the story you've provided does not demonstrate it.


> It didn't just impact the individuals, it reduced the behavior site-wide.

And? Even a site-wide reduction in the behavior is not sufficient to conclude that deplatforming reduces the prevalence of those views in society. Again, "Deplatforming X views from platform Y resulted in less of X on platform Y" is effectively stating the obvious. To demonstrate the effectiveness of deplatforming, one would have to determine whether it actually results in fewer people believing the views that are being deplatformed. I have not encountered any instance of this occurring. Ask yourself this: when you are banned from a forum for views you believe in, or you witness someone banned for views you agree with, do you tend to turn around and agree with the censor? Or do you become more enthusiastic about that view and lose respect for the censor?

> This is a pretty clear correlation/causation confusion. If factor A is on the rise and triggers reaction B, you cannot use the rise of B to prove that B caused A. Now, your narrative may be correct, but the story you've provided does not demonstrate it.

We're seeing trust in media and tech companies plummet. While the fact that a rise in extremist views is correlated with increases in deplatforming is not hard evidence of causation, it's extremely difficult to claim that deplatforming works to reduce said views in the face of that positive correlation between the two. That's trying to claim a causal relationship in the face of evidence of the opposite correlation.

In other words, if we see A rise alongside B it is indeed jumping the gun to say that A certainly causes B. But it's even more dubious to say that A reduces B in the face of that correlation.


> I have not encountered any instance of this occurring.

How could you witness such a thing occurring? This seems like an unreasonable evidentiary standard.

> We're seeing trust in media and tech companies plummet. While the fact that a rise in extremist views is correlated with increases in deplatforming is not hard evidence of causation, it's extremely difficult to claim that deplatforming works to reduce said views in the face of that positive correlation between the two. That's trying to claim a causal relationship in the face of evidence of the opposite correlation.

Do HIV drugs cause HIV? Do civil rights movements cause racism? De-platforming is a treatment. Of course it's going to co-occur with the thing it's attempting to treat. This is evidence of nothing at all. What you need to do, and what the studies I reference did do, is examine individual communities pre and post treatment. That is how you start to get at causality. The analysis is imperfect, to be sure, but it's a lot better than looking at simple correlation.
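The pre/post logic here can be made concrete with a toy numerical sketch (all figures are hypothetical, invented purely for illustration): a treatment applied to the worst-affected units will look positively correlated with the problem in a cross-sectional snapshot, even while it reduces the problem within every treated unit.

```python
# Hypothetical units: "pre" is the problem level before treatment, "post" after.
# Treatment is applied only to the worst-affected units, as interventions usually are.
units = [
    {"treated": True,  "pre": 90, "post": 60},
    {"treated": True,  "pre": 80, "post": 55},
    {"treated": False, "pre": 20, "post": 22},
    {"treated": False, "pre": 30, "post": 31},
]

# Cross-sectional view: even after treatment, treated units show MORE of the
# problem than untreated ones -- a positive treatment/problem correlation.
avg_treated = sum(u["post"] for u in units if u["treated"]) / 2      # 57.5
avg_untreated = sum(u["post"] for u in units if not u["treated"]) / 2  # 26.5

# Pre/post view: every treated unit improved; the untreated ones did not.
treated_change = sum(u["post"] - u["pre"] for u in units if u["treated"]) / 2  # -27.5

print(avg_treated, avg_untreated, treated_change)
```

The snapshot comparison (57.5 vs 26.5) would suggest treatment co-occurs with the problem, while the within-unit change (-27.5) shows it reducing the problem, which is the distinction the pre/post study design is meant to capture.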


> Do HIV drugs cause HIV?

How is immunodeficiency treatment at all related to deplatforming? Viruses aren't thinking human beings.

> Do civil rights movements cause racism?

The Civil Rights movement did not engage in deplatforming. Many of its members explicitly acknowledged that their opponents also deserved the ability to speak. It was often the Civil Rights movement itself that was subject to deplatforming.

> Of course it's going to co-occur with the thing it's attempting to treat. This is evidence of nothing at all.

The rise of deplatforming preceded the rise of the Alt Right by about a year or two. They didn't always co-occur; one preceded the other. Their continued co-occurrence suggests that the implementation of deplatforming either 1. has no effect on that brand of extremism, or 2. maybe even causes it.

This is not consistent with treating a disease. Usually, a disease is present sometime before treatment is administered. Then as treatment is administered the symptoms are reduced, if the treatment is successful. This is not what we are witnessing with the relationship between deplatforming and the brand of right wing extremism we've been seeing lately.

> What you need to do, and what the studies I reference did do, is examine individual communities pre and post treatment.

Limiting measurement to individual communities is not a good way to measure its overall effect. Again, pointing out that when a community deplatforms a certain view, that view is no longer present there is pointing out an obvious consequence. Of course the platform sees a reduction in the view that was deplatformed. That's basically just restating the definition of deplatforming: kicking a person or group off the platform.

If you want to measure the effect of deplatforming on society, then the analysis has to be society-wide. Otherwise one is effectively just building a bubble of the communities that do engage in deplatforming, and burying one's head with respect to its impact on the rest of society.


> How is immunodeficiency treatment at all related to deplatforming? Viruses aren't thinking human beings.

> The Civil Rights movement did not engage in deplatforming. Many of its members explicitly acknowledged that their opponents also deserved the ability to speak. It was often the Civil Rights movement itself that was subject to deplatforming.

My point is that the type of reasoning you used here would lead you to draw both of those conclusions.

> The rise of deplatforming preceded the rise of the Alt Right by about a year or two. They didn't always co-occur; one preceded the other. Their continued co-occurrence suggests that the implementation of deplatforming either 1. has no effect on that brand of extremism, or 2. maybe even causes it.

That's a statement without factual basis. When did the alt right "rise"? Was it when Mencius Moldbug started writing Unqualified Reservations in 2007? When Richard Spencer joined the National Policy Institute in 2011? During Gamergate in 2014? Similarly, when did de-platforming 'start'? Was it when people first started protesting The Bell Curve when it was published in 1994? Was it when the British National Union of Students adopted a no-platform policy?

The point is, neither of these movements has a well-defined starting point, so any claim of one preceding the other is silly, and has no basis in fact.

> Limiting measurement to individual communities is not a good way to measure its overall effect. Again, pointing out that when a community deplatforms a certain view, that view is no longer present there is pointing out an obvious consequence. Of course the platform sees a reduction in the view that was deplatformed. That's basically just restating the definition of deplatforming: kicking a person or group off the platform.

Your objection suggests a specific causal model, though. You're right that kicking users off of the platform will tautologically reduce the content. However, what if you didn't kick people off the platform? What if instead, as in the example I linked, you banned the sub-communities dedicated to advocacy of the proscribed topics? The people stay, the community goes. Then, you look at the level of the material in other sub-communities on the site. That is what those studies did, and that is why they demonstrate causality.


One can argue about when these terms were coined. But we do have hard data on when they became prevalent in the public mind. Look at the Google Trends results for "deplatforming"[1], "no platforming"[2] and "alt right"[3]. "No platforming" had some blips starting in the late 2000s but begins rising significantly in 2015, "deplatforming" in January of 2016, and "alt right" in August of 2016. There is evidence for the claim that deplatforming (or at least, widespread interest in deplatforming or "no platforming") preceded widespread interest in the alt right.

> The people stay, the community goes. Then, you look at the level of the material in other sub-communities on the site. That is what those studies did, and that is why they demonstrate causality.

Yes, but as I've stated multiple times now, the key limitation here is that they only looked at the material on the same site. Site X bans Y (whether in full or only in some subforums). You observe a reduction of Y on the site. That's not evidence that this action reduced Y in society as a whole. There is a causal relationship between deplatforming and the reduction of the deplatformed view on said platform. Nobody is disagreeing with that - most people would likely read such a statement and think "no kidding, Sherlock".

For example, the fact that racist content on other subreddits was reduced when Reddit banned racist subreddits is proof that banning racist subreddits reduced racist content on Reddit. This is not at all surprising, and is something most would call obvious. But to portray this as proof that banning racist subreddits reduces racist content in society as a whole is a very large misrepresentation. The study did not examine the impact on society as a whole - only the forum that carried out the deplatforming.

And again, I do not claim that the correlation between the rise of deplatforming and the rise of the alt right is irrefutable proof that the former causes the latter. But claiming that the former helps prevent the latter is not backed up by the evidence we do have.

1. https://trends.google.com/trends/explore?date=all&q=deplatfo...

2. https://trends.google.com/trends/explore?date=all&q=no%20pla...

3. https://trends.google.com/trends/explore?date=all&q=alt%20ri...


> One can argue about when these terms were coined. But we do have hard data on when they became prevalent in the public mind. Look at the Google Trends results for "deplatforming"[1], "no platforming"[2] and "alt right"[3]. "No platforming" had some blips starting in the late 2000s but begins rising significantly in 2015, "deplatforming" in January of 2016, and "alt right" in August of 2016. There is evidence for the claim that deplatforming (or at least, widespread interest in deplatforming or "no platforming") preceded widespread interest in the alt right.

The terms themselves don't seem particularly relevant. The idea of deplatforming people has been around and practiced for a while. The alt-right dates back to at least Gamergate, and its roots in neoreaction, TRP, MGTOW, /pol/, etc can be traced back much further. I don't think Google trends really proves much here.

> Yes, but as I've stated multiple times now, the key limitation here is that they only looked at the material on the same site. Site X bans Y (whether in full or only in some subforums). You observe a reduction of Y on the site. That's not evidence that this action reduced Y in society as a whole. There is a causal relationship between deplatforming and the reduction of the deplatformed view on said platform. Nobody is disagreeing with that - most people would likely read such a statement and think "no kidding, Sherlock".

It's not tautological that that should happen. Remember, they're looking at the prevalence of that view elsewhere. It's not at all obvious that it should be the case that when you ban the 'Fat People Hate' subreddit, fat-shaming content elsewhere on reddit decreases.

It would be very hard to prove this effect on general social sentiment even for a site as big as reddit, because society is so much larger. Facebook might be big enough to have a measurable effect on society writ large, but their policing mechanism, and the internal organizational structure of Facebook doesn't really lend itself to these sorts of experiments.

> For example, the fact that racist content on other subreddits was reduced when Reddit banned racist subreddits is proof that banning racist subreddits reduced racist content on Reddit. This is not at all surprising, and is something most would call obvious. But to portray this as proof that banning racist subreddits reduces racist content in society as a whole is a very large misrepresentation.

It didn't just reduce the aggregate racist content on reddit. It reduced the aggregate racist content above and beyond the literal content that was removed. In other words, when they banned r/CoonTown, r/politics got less racist. That is not at all an obvious consequence.


Sure, it reduced toxic content "elsewhere", but that "elsewhere" is limited to the same space administered by the same authority. Banning /r/coontown may have made posters in /r/politics less toxic, likely because they witnessed the shift in moderation policies, and because racist users likely stopped using the service to post racist content. But you're acting as though this means the content wasn't posted at all. For all we know, it was just displaced to 4chan, Gab, or somewhere else.

Again, I agree that banning racist subreddits led to a reduction of racist content across the board on Reddit. But you're treating this as proof that said bans reduced racist content in society as a whole, which is a baseless claim even with the aforementioned analysis of the impact on other subreddits.


I agree that it isn't absolute proof. It is possible that an effect like the one you described took place. But it isn't the only evidence. I'd direct you again to people like Alex Jones. I think it's extremely hard to argue that Alex Jones and his toxic brand of disinformation didn't benefit enormously from access to platforms like Facebook, Twitter, and Youtube. I think you'd be extremely hard pressed to argue that his reach has increased as a result of being de-platformed. You may be able to make the case that it has retrenched the support of his hardcore followers, but that is not the same thing as signal boosting his message in society at large.


Yes, "The idea of free speech predates the existence of the United States."

The idea of the freedom of association also predates the existence of the United States.

The freedom of association means that organizations get to exclude people from the association. Including because of their speech.

The phrase 'deplatform' rejects the freedom of association.

Quoting from John Stuart Mill's "On Liberty", chapter IV, "Of the Limits to the Authority of Society Over the Individual"

> It would be a great misunderstanding of this doctrine, to suppose that it is one of selfish indifference, which pretends that human beings have no business with each other's conduct in life, and that they should not concern themselves about the well-doing or well-being of one another, unless their own interest is involved. ...

> We have a right, also, in various ways, to act upon our unfavourable opinion of any one, not to the oppression of his individuality, but in the exercise of ours. We are not bound, for example, to seek his society; we have a right to avoid it (though not to parade the avoidance), for we have a right to choose the society most acceptable to us. We have a right, and it may be our duty, to caution others against him, if we think his example or conversation likely to have a pernicious effect on those with whom he associates. We may give others a preference over him in optional good offices, except those which tend to his improvement. In these various modes a person may suffer very severe penalties at the hands of others, for faults which directly concern only himself; but he suffers these penalties only in so far as they are the natural, and, as it were, the spontaneous consequences of the faults themselves, not because they are purposely inflicted on him for the sake of punishment.

It is indeed possible "to extend the bounds of what may be called moral police, until it encroaches on the most unquestionably legitimate liberty of the individual", but what I'm pointing out is that you cannot simply look at "freedom of speech" as the sole or even paramount freedom under discussion.


"The big risk is this: when only a few entities funnel so much societal discourse or control our communication infrastructure or process payments, those entities making arbitrary decisions about who they serve has similar impacts/risks to the government imposing similar restrictions through the law."

So the problem isn't kicking haters off a private platform; it's that corporations have grown too large.

I agree.


My take on the situation is this: internet-based communities can be analogized roughly to a privately-owned forum/market. Before I get into specifics, I realize that analogies are never 100% accurate, but I think they can help get the point across.

A business, Patreon, buys a building and sets up shop. They open booths for people to set up shop in and solicit funding for their projects. The general public can walk through the booths and choose whom to support. This being a privately-owned business operating on private property (building -> website or webserver), they reserve the right to deny entry to whomever they wish.

If I were a business owner, I would not want one of those booths to be occupied by someone reciting hate speech and scaring off other patrons (pun not intended, patron in the traditional sense) from other booths and from my business as a whole.

It is entirely possible, and in this day and age fairly easy, for those who were banned from Patreon to set up their own website hosted on their own servers in order to spread their message. I wouldn't say 'go to another platform', because the same thing would happen again. I would say 'make your own platform/website/blog'. You can set up agreements with PayPal or many other providers to solicit funds. At this point, with net neutrality and ISPs not being able to ban you (because they should be a public utility), you cannot be prevented from speaking how you wish. You can easily set up your own server to do this.

Caveats: net neutrality is an ideal in this scenario, since the FCC removed it. I am not 100% sure about payment providers, but I am sure there is some way to solicit funds without them.


> Proponents of free speech are pro free speech as a general concept and principle, beyond what protections are afforded under American law today.

You certainly don't speak for all of us (re: "beyond").


I'd add that I can't help being frightened by the psychology of someone eager to control other people's speech. It is a really toxic behavior in a liberal democracy.


I have a soapbox.

I like to stand up on it and yell out my views. Sometimes I let my friends use it that way, too.

Some guy comes along and would like to use my soapbox to yell out his views. But I don't like him and I don't agree with his views.

Would you like the government to come force me at gunpoint to let him use my soapbox?

What you're arguing, basically, is that once my soapbox gets popular enough that lots of people want to use it, you do want the government to force me, at gunpoint, to let them use it even when I find their views repugnant.

Or, basically, what this person said, and they said it better and in fewer words:

https://twitter.com/lessdismalsci/status/1076488300188307456


> Would you like the government to come force me at gunpoint to let him use my soapbox?

That's easy: are you the only one with that soapbox? Do you own all the soapboxes? Are you a corporation taking advantage of its market dominance and near-monopoly on modern free association to control public discourse?

So yes, if your soapbox networks are now an integral part of public debate, then the government should either regulate you or nationalize your assets for the public good. It's no different from why ISPs should be kept neutral, why power companies should be kept neutral and why public highways should be kept neutral.


So to prevent an angry mob from taking away someone's platform, you insist we need the ability for an angry mob to take away someone's platform.

I think you need to think this through a bit more.


> to prevent an angry mob from taking away someone's platform

What? My reply had nothing to do with defending platforms from angry mobs. To reiterate in case you are misunderstanding something, if a company fits the criteria I stated in my previous reply (aka companies like Google/Alphabet) then it should be either regulated or nationalized.

So no, nothing about preventing angry mobs from taking over someone's platform. Unless you consider the government regulating businesses who are abusing their monopoly on public discourse to be an angry mob.

Honestly can't tell if you are trying to make a weird gotcha here or flat out replied to the wrong post.



