
I don't think you'll find a lot of lawyers who specialize in speech and 230 that agree with this interpretation either, and it would have extremely wide-ranging consequences for the entire Internet that the fiercest advocates of this position would not like at all.


I expect that you are correct. However, allowing an org to act as a publisher by heavily curating who is allowed to use their 'forum' is having wide-ranging consequences for our society. As they are occupying a spot in the regulatory scheme that they no longer deserve, redressing this with regulation is necessary.


I don't think anyone wants another long thread recapitulating the whole debate about Twitter's obligations to society; they can just read the thousands of HN comments that have been written in the last week about it.

What I will say is, however you hope to resolve this problem, eliminating the 230 protections is probably not the right way to go about it if you want providers like Twitter to be less intrusive, or for alternative venues to be viable at all. I think the only coherent "free speech" strategy that involves attacking 230 is accelerationism; that maybe by blowing up the US commercial Internet we'll somehow all migrate to a completely free blockchain Internet run out of the Azores or something.


What do you think about not nuking 230, but making its protections reliant on specific conduct? We can sidestep the 1A issue altogether since these companies would do anything to avoid being held responsible for the libelous, harassing, defamatory, threatening and sometimes terroristic content that their users post every day. That is nothing short of a gift given by the government, and it can be modified or restricted.

Moderation doesn't scale, so I think this is a case of either do what 230 requires or cease existing as a going concern - either of these would be good outcomes, so this is a powerful lever.

Some ideas in no particular order; a platform owner is only shielded from liability inasmuch as they (choose as many as applicable):

1. Provide a forthright accounting of any negative actions taken against an account (no shadowbans, no silent editing or hiding of content from discovery) at the time the action is taken with a forthright explanation of how the conduct broke the stated rules.

2. Provide an appeals process for bans/negative actions run by a neutral third party, with any ambiguity resolved in favor of the appellant.

3. Do not make or enforce ex-post-facto rule changes.

4. Demonstrate no pattern of unfair or unequal application of the stated rules.

5. If a ban is issued, a "wind-down" period must be granted to allow the banned user time to move what they can of their social network somewhere else.

6. Upon request, your own account's data must be provided in full.

OR

7. Remain completely hands-off from a content removal standpoint. Content is removed if it is either literally illegal or breaks the service and under no other circumstance.

This last one would still allow for spam filtering and content categorization, which would allow the user experience to change little from today, and puts the most control in the hands of individual users.


I think policy intervention that is backstopped by threats to remove provider liability protections is going to backfire, because what you get when providers are liable is radically more intrusive moderation, not less moderation.

As a matter of principle, any regulatory regime that would put HN as Dan moderates it at risk is bad, and what you're proposing would seem to threaten HN. All your bullets here seem like things that will pull providers into litigation.


It would make sense to combine these ideas with a circuit breaker that only kicks in once you have a certain MAU count.

Having it apply to every forum everywhere would suck and be unworkable, but once you're at Facebook/Twitter/Reddit/etc. levels of exposure, there are a different set of interests and responsibilities to society in play.


You seem to be describing a scheme where Twitter is legally required to be very careful about moderation, but Parler isn't, which seems crazy to me. Also, it doesn't address other vehicles for suppressing toxic speech; for instance, no 230 change you come up with is going to obligate cloud providers to do business with Parler and StormFront.

I should add a point I should have made earlier, which is that 230 is in no way based on a notion of being "publishers" or "platforms". That's a super common misconception about the law.


Not at all. In my ideal world, Parler would have the same restrictions applied, and would have fallen afoul of point 4 at the very least, given their massive popularity spike. They also weren't completely hands off, so the last option is off the table for them.

That ambiguity is precisely what I try to address. That distinction might not exist now, but it arguably should.


230 isn’t a gift to Twitter, it’s a gift to us. A world where 230 was restricted in the way you describe wouldn’t see Twitter agree to one of your two modes. They’d opt instead to take a heavy hammer to anything even remotely objectionable, so as to avoid having to deal with any of the controls on your list.

It’s also not clear to me why we’d expect a private company to have to answer to you or me or anybody else about decisions they make. We can choose to not use their services if we don’t agree with them (and many people on this site have done exactly that), but any rule that attempts to say “once you’re popular enough, your business has to follow somebody else’s rulebook for how you decide what content you must host” isn’t going to make sense to me.


They can't even enforce their own existing rules against things that are absolutely banned or illegal already - look at Facebook's controversies.

Put plainly, I do not believe it is possible for a social network to moderate hard or fast enough at Facebook/Twitter scale to reject section 230 immunity. Even if they took the step of pre-moderating all content before it appears on the site, there is simply too much content coming in for that to be a realistic option (without losing a ton of users put off by the delay).

To give you an idea of the scale we're talking about, Twitter does about 500,000,000 tweets per day.
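A rough back-of-envelope sketch of what pre-moderating that volume would take. The tweet count is the figure cited above; the review time and shift length are illustrative assumptions, not real Twitter numbers:

```python
# Back-of-envelope estimate of the human workforce needed to pre-review
# every tweet before it posts. SECONDS_PER_REVIEW and the shift length
# are assumptions for illustration only.

TWEETS_PER_DAY = 500_000_000        # figure cited above
SECONDS_PER_REVIEW = 10             # assumed time to judge one tweet
SHIFT_SECONDS = 8 * 60 * 60         # one 8-hour shift per moderator

reviews_per_moderator_per_day = SHIFT_SECONDS / SECONDS_PER_REVIEW  # 2,880
moderators_needed = TWEETS_PER_DAY / reviews_per_moderator_per_day

print(f"{moderators_needed:,.0f} moderators on shift every day")
# → 173,611 moderators on shift every day
```

Even under these generous assumptions (ten seconds per tweet, no breaks, no second opinions, no appeals), you'd need a review workforce larger than most companies' entire headcount.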

>It’s also not clear to me why we’d expect a private company to have to answer to you or me or anybody else about decisions they make.

They answer to society at the end of the day, which can express its desires via the legal system. If society tires of social networks acting as unaccountable gatekeepers to the national conversation, society can act.



