Hacker News

These are Twitter's own servers, and they can choose who may use them. The legal scope of permitted speech does not matter when we're talking about a private business.


> "Ownership does not always mean absolute dominion. The more an owner, for his advantage, opens up his property for use by the public in general, the more do his rights become circumscribed by the statutory and constitutional rights of those who use it."

This is from Marsh v. Alabama, 326 U.S. 501 (1946), a case decided by the United States Supreme Court, in which it ruled that a state trespassing statute could not be used to prevent the distribution of religious materials on a town's sidewalk, even though the sidewalk was part of a privately owned company town. The Court based its ruling on the provisions of the First Amendment and Fourteenth Amendment. https://en.wikipedia.org/wiki/Marsh_v._Alabama


Everything the court has done since Marsh v Alabama has walked that decision back, and I think you'll have a hard time finding legal experts to back the interpretation that Twitter owns the obligations of a public square.

We've had threads about it on HN, but it's also (for obvious reasons) come up recently, and here's Ken White citing a recent SCOTUS decision knocking this idea down:

https://twitter.com/Popehat/status/1141766582382678016

(The whole thread is good).


There's another thing that I think is often glossed over in discussions of Marsh v. Alabama (I'm not a lawyer though, and Ken is probably smarter than me anyhow).

Namely, that in Marsh v. Alabama it was the company that wanted to use a state law to kick people out (and this was repeated in the California case Pruneyard). "The state doesn't need to actively help you kick out people exercising their 1A rights in a place you don't want them" is very, very different from "The state can prevent you from exercising your own autonomy to prevent someone from re-accessing your property".

If the company town put up a fence and a gate, they wouldn't be forced to let anyone in.


I wholly agree with you. They do, however, own the obligations of a public forum if that is how they ask to be regulated.

They are playing cute with political speech. They aren't publishing in the traditional sense. But heavy curation of independent content is (at their volume) publishing - without the regulation accorded publishers. They are, by their actions, espousing certain political ideas by only allowing those ideas to exist in their 'public forum'.

For anyone, even a staunch libertarian, to claim that the government should not get in their kitchen on that basis is naive in my opinion.


I don't think you'll find a lot of lawyers who specialize in speech and 230 that agree with this interpretation either, and it would have extremely wide-ranging consequences for the entire Internet that the fiercest advocates of this position would not like at all.


I expect that you are correct. However, allowing an org to act as a publisher by heavily curating who is allowed to use their 'forum' is having wide-ranging consequences for our society. As they are occupying a spot in the regulatory scheme that they no longer deserve, redressing this with regulation is necessary.


I don't think anyone wants another long thread recapitulating the whole debate about Twitter's obligations to society; they can just read the thousands of HN comments that have been written in the last week about it.

What I will say is, however you hope to resolve this problem, eliminating the 230 protections is probably not the right way to go about it if you want providers like Twitter to be less intrusive, or for alternative venues to be viable at all. I think the only coherent "free speech" strategy that involves attacking 230 is accelerationism; that maybe by blowing up the US commercial Internet we'll somehow all migrate to a completely free blockchain Internet run out of the Azores or something.


What do you think about not nuking 230, but making its protections reliant on specific conduct? We can sidestep the 1A issue altogether since these companies would do anything to avoid being held responsible for the libelous, harassing, defamatory, threatening and sometimes terroristic content that their users post every day. That is nothing short of a gift given by the government, and it can be modified or restricted.

Moderation doesn't scale, so I think this is a case of either do what 230 requires or cease existing as a going concern - either of these would be good outcomes, so this is a powerful lever.

Some ideas in no particular order; a platform owner is only shielded from liability inasmuch as they (choose as many as applicable):

1. Provide a forthright accounting of any negative actions taken against an account (no shadowbans, no silent editing or hiding of content from discovery) at the time the action is taken with a forthright explanation of how the conduct broke the stated rules.

2. Provide an appeals process for bans/negative actions run by a neutral third party, with any ambiguity resolved in favor of the appellant.

3. Do not make or enforce ex-post-facto rule changes.

4. Demonstrate no pattern of unfair or unequal application of the stated rules.

5. If a ban is issued, a "wind-down" period must be granted to allow the banned user time to move what they can of their social network somewhere else.

6. Upon request, your own account's data must be provided in full.

OR

7. Remain completely hands-off from a content removal standpoint. Content is removed if it is either literally illegal or breaks the service and under no other circumstance.

This last one would still allow for spam filtering and content categorization, which would allow the user experience to change little from today, and puts the most control in the hands of individual users.


I think policy intervention that is backstopped by threats to remove provider liability protections is going to backfire, because what you get when providers are liable is radically more intrusive moderation, not less moderation.

As a matter of principle, any regulatory regime that would put HN as Dan moderates it at risk is bad, and what you're proposing would seem to threaten HN. All your bullets here seem like things that will pull providers into litigation.


It would make sense to combine these ideas with a circuit breaker that only kicks in once you have a certain MAU count.

Having it apply to every forum everywhere would suck and be unworkable, but once you're at Facebook/Twitter/Reddit/etc. levels of exposure, there are a different set of interests and responsibilities to society in play.


You seem to be describing a scheme where Twitter is legally required to be very careful about moderation, but Parler isn't, which seems crazy to me. Also, it doesn't address other vehicles for suppressing toxic speech; for instance, no 230 change you come up with is going to obligate cloud providers to do business with Parler and StormFront.

I should add a point I should have made earlier, which is that 230 is in no way based on a notion of being "publishers" or "platforms". That's a super common misconception about the law.


Not at all. In my ideal world, Parler would have the same restrictions applied, and would have fallen afoul of point 4 at the very least, given their massive popularity spike. They also weren't completely hands off, so the last option is off the table for them.

That ambiguity is precisely what I try to address. That distinction might not exist now, but it arguably should.


230 isn’t a gift to Twitter, it’s a gift to us. A world where 230 was restricted in the way you describe wouldn’t see Twitter agree to either of your two modes. They’d opt instead to take a heavy hammer to anything even remotely objectionable, so as to avoid having to deal with any of the controls on your list.

It’s also not clear to me why we’d expect a private company to have to answer to you or me or anybody else about decisions they make. We can choose to not use their services if we don’t agree with them (and many people on this site have done exactly that), but any rule that attempts to say “once you’re popular enough, your business has to follow somebody else’s rulebook for how you decide what content you must host” isn’t going to make sense to me.


They can't even enforce their own existing rules against things that are absolutely banned or illegal already - look at Facebook's controversies.

Put plainly, I do not believe it is possible for a social network to moderate hard or fast enough at Facebook/Twitter scale to reject section 230 immunity. Even if they took the step of pre-moderating all content before it appears on the site, there is simply too much content coming in for that to be a realistic option (without losing a ton of users put off by the delay).

To give you an idea of the scale we're talking about, Twitter does about 500,000,000 tweets per day.
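To make the scale concrete, here's a back-of-envelope sketch of what pre-moderating that volume would take in headcount. The 500M tweets/day figure is from above; the per-tweet review time and shift length are purely my assumptions, not Twitter data.

```python
# Back-of-envelope estimate: full-time moderators needed to pre-review
# every tweet. Only tweets_per_day comes from the thread above; the
# review time and shift length are assumed for illustration.
tweets_per_day = 500_000_000          # figure cited above
seconds_per_review = 10               # assumed average time to review one tweet
work_seconds_per_shift = 8 * 60 * 60  # one 8-hour shift per moderator per day

total_review_seconds = tweets_per_day * seconds_per_review
moderators_needed = total_review_seconds / work_seconds_per_shift
print(f"~{moderators_needed:,.0f} moderators on 8-hour shifts")
```

Even at an optimistic 10 seconds per review, the result is a workforce in the hundreds of thousands, which is the point: pre-moderation at this scale isn't a staffing problem you can hire your way out of.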

>It’s also not clear to me why we’d expect a private company to have to answer to you or me or anybody else about decisions they make.

They answer to society at the end of the day, which can express its desires via the legal system. If society tires of social networks acting as unaccountable gatekeepers to the national conversation, society can act.


Laws can, do, and need to change as technology changes the political reality. No one elected Twitter. Building a pretty website should not give a private entity the power to control political speech.


I was going to rebut this, but thought better of it. My point is just, there's not much you can do with the jurisprudence as it exists, despite what you might think Marsh means.


Isn’t there precedent for people being able to speak in malls as part of freedom of speech under certain state constitutions, as a consequence of Pruneyard v. Robins?


There is, but that case only applies to California (where Twitter is HQ'd, fair enough), and I don't think it's ever been tested for an online service with a ToS.


I don't think many are saying Twitter's decision was illegal. They're just saying they don't think they should have made it.


I'd argue that once X number of users use your service, you are already sort of a public-service company. You can't use "oh we are a business so we can do anything we want" to defend yourself at that point.


It's a little more subtle than that - more strength of network effects than mere number of users - but pretty much.


>It's Twitter's own servers

That are offered to the customers below cost of running them in order to stifle competition, which is only possible as long as the government antitrust body is looking the other way.


> That are offered to the customers below cost of running them in order to stifle competition, which is only possible as long as the government antitrust body is looking the other way.

Twitter, as best we can tell, is not offering services below cost to customers.

The free users are not the customers. The paying users purchasing ads are using the services to derive value from the population of free users on the platform.

If you want to reform this, target how companies convince people to give up their data in exchange for functionality rather than for money.


Twitter's freedom to boot anyone off their platform doesn't mean they are free from consequences.

Same argument used towards hate speech but this is more serious IMO because big tech is the new Standard Oil or Big Tobacco.


Yes they can, but they shouldn't.


> Permitted speech on the legal scope does not matter when talking about businesses.

Sure it matters. It matters because a bunch of people want businesses to allow all legal speech on their platform.

And this group of people is growing, and they might eventually get enough support to force these businesses to do so, using legislative changes such as requiring these major companies to follow common-carrier laws.


It also matters in the sense that one can't avoid certain regulation by claiming to be a public forum rather than a publisher while not being a public forum at all.



