How social-media platforms dispense justice (economist.com)
63 points by privong on Sept 9, 2018 | 28 comments


> Internet firms in America are shielded from legal responsibility for content posted on their services. Section 230 of the Communications Decency Act of 1996 treats them as intermediaries, not publishers—to protect them from legal jeopardy.

> When the online industry was limited to young, vulnerable startups this approach was reasonable.

That's the wrong way to look at it. The size of the company should not matter here; how it operates should determine which regulation applies to it. There are only two reasonable alternatives:

Either they are intermediaries, in which case they should have the duty to indiscriminately intermediate whatever they are asked to intermediate. This means no centralised moderation. Maybe allow moderation at the sub-community level (as Reddit does with its subforums), but no centralised censorship, except maybe upon reception of a warrant.

Or they are editors, in which case they should have full legal responsibility for the content, and suffer the full legal consequences of whatever illegal speech they allow.

I believe such a clean dichotomy is the only way to ensure free speech in the face of such centralised services. 'Cause I think the likes of Facebook, YouTube, Twitter… would rather choose the "intermediary" path than risk taking legal responsibility for their enormous user base.


This implies no suggestion algorithms or news feeds either. If these sites are pure intermediaries, then machine-driven editorializing, controlling and suggesting the content people see, is not a neutral act: those algorithms are biased, and someone chooses how they act.

Imagine a phone company which selectively didn't put calls through to you because its automated checkers determined you weren't interested. If you got nothing but telemarketing, well, they're no longer a pure transit, are they?

Social media owners are already editors and they're trying to shirk their responsibility.


> This implies no suggestion algorithms or news feeds either.

Yep, it does.


Frankly, I'm surprised this doesn't get more attention.

At present, Twitter, et al. are in an incredibly weird position. They claim to be "public spaces", which is the foundation of the lawsuit against Trump for blocking people, and what makes company announcements there SEC-compliant. Further, they claim to be "neutral" but apply "quality filters" and flag bad (but not illegal!) content. But then Twitter revokes blue checks for poor behavior, and they all ban people for bad (but not illegal!) behavior, while still claiming they aren't acting as editors.

So they're one particular thing to a particular audience when they need to be, and something else when it's more convenient.

I don't see how the reasoning fits together.


I don't think such black-and-white thinking is appropriate here. Most websites are neither, or something in between. It would be intellectually nice if there were such a simple distinction, but it's entirely artificial, a remnant of an information society that no longer exists.


> Maybe allow moderation at the sub-community level (as Reddit does with its subforums), but no centralised censorship

Reddit most definitely has centralized censorship. On top of that, it leaves additional censorship to the discretion of subreddit moderators.

Reddit, like most social media sites, has been slowly moving from an intermediary platform to an edited one.

> 'Cause I think the likes of Facebook, YouTube, Twitter… would rather choose the "intermediary" path than risk taking legal responsibility for their enormous user base.

They prefer the intermediary position, but are pressured to act as editors by politicians and the news companies.


The root incentive to moderate is to keep the government from intervening and to keep people from turning against the platform. Government starts interfering, and people turn against them, when there is a moral panic[1].

In democratic societies the best way for the social-media platforms to achieve their goal is to muffle moral panics directly. That's a different goal from the moderation that government and the public actually want from them (policing criminal content or hidden-actor propaganda).

I predict that the attention will be directed towards throttling sudden spikes in moral-panic-inducing content. Legitimate moderation will be just a small part of the moderation effort. Most of the energy will go into protecting the sensibilities of "proper" society.

There will be limiters on the reproduction number of any kind of controversial thinking. It will simply be easier and cheaper.
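A "reproduction number limiter" like the one imagined above could be as crude as a sliding-window share counter that down-ranks anything spreading faster than a cap. This is a hypothetical sketch; the class name, window, and threshold are all made up for illustration and describe no real platform's mechanism:

```python
from collections import deque
import time


class ViralityThrottle:
    """Hypothetical sketch: flag items whose share rate spikes.

    The "reproduction number" is approximated here as the count of
    shares inside a sliding time window; all names and thresholds
    are illustrative assumptions, not any platform's real system.
    """

    def __init__(self, window_seconds=3600, max_shares_per_window=1000):
        self.window = window_seconds
        self.limit = max_shares_per_window
        self.events = {}  # item_id -> deque of share timestamps

    def record_share(self, item_id, now=None):
        now = time.time() if now is None else now
        q = self.events.setdefault(item_id, deque())
        q.append(now)
        # Drop shares that have aged out of the sliding window.
        while q and q[0] < now - self.window:
            q.popleft()

    def should_throttle(self, item_id, now=None):
        now = time.time() if now is None else now
        q = self.events.get(item_id, deque())
        while q and q[0] < now - self.window:
            q.popleft()
        # Throttle distribution once the spike exceeds the cap.
        return len(q) > self.limit
```

Such a limiter is content-blind, which is exactly why it would be "easier and cheaper" than judging each post: it dampens anything that spreads too fast, controversial or not.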

[1]: Moral Panics http://criminology.oxfordre.com/view/10.1093/acrefore/978019...


It will be interesting to see how the platforms fare once they have to exercise editorial control over their content.

It will be especially interesting to see how their assumed competitive advantage over 'traditional media' holds once it is subject to a massive increase in overhead, which surely follows from having an army of moderators.

I wonder if they will start to ebb, consolidate, and find a place alongside other media. Those that are scarcely profitable right now (Twitter for instance) should be the early indicators.


Honestly, I’m surprised that social media is a thing. It appears to make its users miserable, especially twitter and Instagram, and some of them barely break even.


You could say the same about heroin. Or even alcohol, really. Taking immediate pleasure at the price of pain later is a failure mode of the human brain; we all fall for it, to a smaller or greater extent.


Good point.


>"Although most of the moderators work for third-party firms, the growth in their numbers has already had an impact on the firms’ finances."

I was kind of surprised to read this. Wouldn't content policy be much more coherent if these were employees of the company? I'm curious why they elected to use third-party firms. I understand that localization is an issue, but this is a global company with deep resources, so I doubt that could be it.


> I'm curious why they elected to use third-party firms.

Plausible deniability, and no liability for the long-term effects on the mental health of those exposed to extreme imagery. Knowing these companies, almost certainly a tax dodge of some sort is involved too.


Sure, plausible deniability sounds about right for these folks. However, I would hope the mileage on that is limited to a single use. I'm sure they will use this excuse to point blame elsewhere during their next crisis, but I doubt it will work for the one after that.


What if they found the "creative" solution to delegate the policing to the governments in question (as a "third party"), in exchange for not having to pay taxes. Then deniability goes both ways!


Maybe it's easier for them to increase manpower temporarily if it's outsourced?

i.e. there's probably going to be more need to moderate Facebook in the run up to the midterms this November, and even more so with actual elections. There's probably a lot of factors that go into determining how many people are needed to moderate and it could be easier/cheaper to hand over staffing to third party firms.

That and as you mentioned, localisation.



Now these platforms hide behind "external fact checkers", "black box algorithms", etc. The real problem is that every fact is theory-laden, a lesson learned from the history and philosophy of (natural) sciences.

Today, we are in a state of disputing facts, because people are disputing the theories behind stating such facts. Let me give you an example: a reporter finds two dead bodies. The fact that two dead bodies were discovered may not be under dispute. The moment one says that these two were killed and the other says that these two committed suicide, you see two theories (homicide vs. suicide) presenting the phenomenon in two different ways.

Further, accept that these two were indeed killed. Now look at the possible reasons behind these deaths. Multiple people offer competing, often contradictory, reasons for them.

Basically, there are no facts. What we call facts are theory-laden facts that are agreed upon by all sides; in other words, all sides agree on the underlying theory (say, homicide).


There are facts. You said yourself: "The fact that two dead bodies were discovered may not be under dispute." That's the fact. Everything else you said isn't a fact, it's a hypothesis. Just because people don't know the difference between a fact and a hypothesis/conclusion doesn't mean there are no facts; unless you want to go full ontological skeptic and dispute that it is possible to ever know anything (never go full sophist!).

The real issue is that we have a breakdown of good faith in civil conversation. That makes society impossible. At the point where you argue that there are not two bodies in front of me, even though there are two bodies in front of me, I might as well stop arguing with you and instead start punching you. Then you can argue that your pain is just theoretical.


> There are facts. You said yourself: "The fact that two dead bodies were discovered may not be under dispute." That's the fact. Everything else you said isn't a fact, it's a hypothesis. Just because people don't know the difference between a fact and a hypothesis/conclusion doesn't mean there are no facts;

The distinction between fact and hypothesis is a red herring.

First, unless one has verified something themselves, with their own eyes or by carefully checking primary sources, there are no facts, just reports of things claimed to be facts.

In other words, there are no "two bodies in front of you", and you're not being contradicted by some fellow you're debating while you're both looking at them. There is just a report that two such bodies exist somewhere, which one of you believes and the other doesn't. And more often than not, the reporting is of even more abstract things, like statistics (collected with who knows what methodology, and presented and baked to prove who knows what point, using all the tricks one can use to lie with statistics).

Second, supposedly physical-domain facts can be fake as well. "This man was shot by person X" (leaving no room for hypothesis) while X might not have done it, or might have been framed, even if there were witnesses attesting that X did it and a court found it so (many people have later been found innocent, e.g. by DNA or further research decades after the fact, victims of overzealous prosecutors, false testimony, setups, facial similarities, racist bias, and so on). Despite their guilt being not a hypothesis but a fact with "evidence", it was still bogus.

Third, pure hypothesis is often presented as fact, and people are called out for not believing it, all the time.


    > unless you want to go full ontological skeptic 
    > and dispute that it is possible to ever know 
    > anything
The only flaw in claiming that nothing is 100% knowable is that it unsettles people. "Never go full sophist" is just an appeal to "common sense." I don't mean that in a good way; I mean it in an "everyone knows the earth is the center of the universe" way.


> Just because people don't know the difference between a fact and a hypothesis/conclusion doesn't mean there are no facts; unless you want to go full ontological skeptic and dispute that it is possible to ever know anything (never go full sophist!).

Journalists themselves have been blurring the line between fact and opinion for a long time now. The simple act of omitting information because it doesn't fit a specific narrative is already an attempt at building that very narrative.

And journalists have become very good at hiding information to shape points of view. First right-wing media, then left-wing media, broadly adopted that strategy. So it's not simply about facts: you can be factual while not giving the whole picture of an event, and still deceive people.


What if they were conjoined twins who couldn't have previously been separated without killing both?

*edit

Or, in less cutesy terms, considering that much of the pro/anti-abortion argument (on both sides) often rests on tacit and irreconcilable definitions of what a person is, and of the point where the woman's body ends and another begins, when is it appropriate (other than perhaps under the most obvious of circumstances) to charge an assailant with two homicides rather than one? That is, when does the definition become more than personal belief?


> The fact that two dead bodies were discovered may not be under dispute. The moment one says that these two were killed and the other says that these two committed suicide, you see two theories (homicide vs. suicide) presenting the phenomenon in two different ways.

Not to mention a huge history of governments and news organizations all around the world presenting it as one case, when it was the other.

I'm not even talking about "conspiracy theories" here. I'm talking about the tons of later-confirmed cases (when more data became available, when whistleblowers came out, when a regime fell and the new one opened the old locked archives, and so on). Even for the US there's a ton of history, from Watergate and Hoover, to FBI efforts to discredit civil-rights activists, to the Iran-Contra affair (drug dealing, selling guns to Iran), to helping Pinochet, to the WMD claims, to Snowden's revelations, and so on. Imagine less transparent regimes (and you don't need to go that far, to exotic dictatorships; just read Italy's political history from the 70s to 2018).

Young people in the 60s and 70s used to understand this much better and trusted those sources less. Now those same outlets (and the online ones that replaced them) act like their shit doesn't stink and can't ever possibly stink, and many go along with this, treating anyone casting doubt as some deranged conspiracist.


> Today, we are in a state of disputing facts, because people are disputing the theories behind stating such facts. Let me give you an example: a reporter finds two dead bodies. The fact that two dead bodies were discovered may not be under dispute. The moment one says that these two were killed and the other says that these two committed suicide, you see two theories (homicide vs. suicide) presenting the phenomenon in two different ways.

Let me list the facts in your "story":

- There are two dead bodies

- One person claims a killing

- Another person claims a suicide

Those are the facts. The claims themselves are not facts; the fact is that there are claims. They're not theories either, they're just claims, which may themselves be supported by facts not presented here.

What makes something a fact is evidence: I can walk up to the dead bodies and confirm their death, I can talk to person A or B and have them repeat their claim, or maybe there are convincing records of the dead bodies and the people giving the claim. If evidence should show up supporting either person's claim, either the murder or the suicide may become fact, but not sooner.

Ultimately, in such a situation, you cannot 100% rule out that it's all the most elaborate hoax ever, or that we're all in a simulation. We're just gonna have to disregard that to get on with our lives.


> or a photo alleging that Donald Trump wore a Ku Klux Klan uniform in the 1990s (leave it up but reduce distribution of it, and inform users it’s a fake).

If they’re so concerned about misinformation, why would they leave a fake photo up, even with a disclaimer? Nah, I must be reading too much into this, surely this is a policy they apply evenly, and they’d do the same with a similarly fake photo of Obama.


Not sure if you're being sarcastic. I assume they would do the same for doctored photos of Obama. Removing content entirely should only be reserved for when serious crime is likely to occur, I think.


Social media companies must be allowed to do policing this way only if they abide by the national laws pertaining to freedom of expression. I mean, if a company lists itself as a social media company where people share information, and it also polices content, it must be answerable to local courts when it takes down posts, using its own judgment, in a way that goes against the established freedom-of-expression laws of the land, without hiding behind the garb of being a private enterprise and claiming it is its prerogative to manage its platform as it pleases. I don't know if such a thing is already in place.

The companies are having a knee-jerk reaction to misuse by the public, and like all knee-jerk reactions this is heading the wrong way. The fact that companies whose primary aim is to maximize profits are taking it upon themselves to do these sorts of ethically and legally questionable activities will itself lead to bad ramifications.

One obvious solution to controlling fake news is to remove the invisibility cloak of users by forcing them, by law, to authenticate their accounts with government-issued ID. This should make people more responsible before sharing misinformation. However, the fragility of the technology, which could lead to authenticated accounts being abused, is a big enough concern to put the onus on the companies.

I don't care if the companies "employ executives who are thoughtful about the task of making their platforms less toxic while protecting freedom of speech", use third-party authenticating sources, or some fancy AI/ML; unless they are forced to abide by the laws, things will only worsen. Whether they like it or not, social media companies are now entangled in social and legal matters and have to deal with all that comes with that. They have enjoyed exponential growth in profits; now it's time to get real.



