
Chris Anderson of TED asked Elon a lot of the questions and concerns being discussed here. Check it out if you want to understand his perspective and goals with twitter.

“Having a public platform that is maximally trusted and broadly inclusive is extremely important to the future of civilization”

To that end, he committed to transparency. For example, changes to tweets or users would be made visible and apparent so there's no more behind the scenes manipulation. The algorithm itself would be open sourced. Anyone could view it on Github and suggest changes or point out issues.

https://youtu.be/cdZZpaB2kDM?t=666



Those sound like great changes. I'd love to see the algorithm in its current state.


I agree it would be nice, but it would also be a boon for bad actors who want to manipulate the system. Reddit now closely guards its algorithms (it didn't use to) to make it more difficult to game the system. For every honest, curious developer who wants to vet the system, there are plenty more people out to spread misinformation.


At some point this concern becomes invalid in my book. Take democratic elections: hiding the rules of how elections work because you are afraid someone might game them would be absurd. The point of democratic elections is to get results that most people can accept, and for that, transparency and simplicity are crucial. If it turned out someone was gaming the system, it would be time to change the rules and/or how they are enforced.

Now you can't really equate Reddit with a democratic election, but places like it are the closest we have come to a public square in the online world, and hiding the mechanisms that decide who gets how much visibility is not without effect on trust in the system.


The cost of actually administering fairness in elections (maintaining voter rosters, verifying identities, preventing double-voting, and providing public auditability while ensuring voter anonymity, prosecuting fraudsters...) is quite high compared with what an ad-supported global platform can afford. Just look at how tough it's been for Twitter to kick out inauthentic actors, e.g., Russian troll farms or spam bots. Spending more resources on botfighting is difficult from Twitter's standpoint, since it doesn't by itself drive revenue or engagement, and they are fighting determined, permanent attackers, some even state-funded.

Speaking of which, Twitter's primary revenue stream is advertising, which directly competes with fairness and transparency goals: the ad business is predicated on the idea that more $ = more speech, regardless of the intrinsic value of that speech; and since there is no practical way to know where the $ came from, it does an end run around transparency goals.


In democratic elections one voter has one vote. There are no bots, no persons with multiple accounts, no trolls, no denial of service, etc, etc.


Solution: verify the identity of every user to make sure they are real people, and make them accountable for what they say publicly.


They won't do it unless it's regulated across all platforms worldwide.

Adds too much friction and scares away those who want anonymity.

KYC works for financial institutions because it's regulated across the board. And let me tell you, it's a pain.


I know they will never do it. However, I cannot help thinking that a Twitter with only non-anonymous, verified identities would be nice. Personally, I have no special interest in speaking with people who want to be anonymous when I use Twitter.

It's probably another product though :)


I also don't like anonymity, but some people might need to remain anonymous for safety reasons. Those living under repressive political regimes, for example. They'd need anonymity to get their message across.

There's no way for social platforms to tell apart which anonymous users are "good" or "bad"...

KYC wouldn't help against state actors, which are the offenders with the potential to cause the most damage. They're the issuers of their national IDs in the first place, so verifying those IDs is useless.


_just_ verify identity.

you are not presenting a feasible solution.

you can easily verify who this account belongs to; you cannot easily verify that this is the only account I have.

which is the problem.

further complicated by irrational attitudes to official identity such as you see in America. have a fraud-resistant national ID system? fuck no! we want to use an unsafe mechanism never built for this purpose and impossible to safeguard!

good luck with that


I think you have buried the lede, which is that these platforms are no longer about an acceptable good. Votes, or in Twitter's case engagement metrics, are just one part of the magic mixture that drives more engagement. It's not about fair outcomes or social good, and it is hard to hide your engagement optimisations when everyone can see how you are tweaking the system to generate more ad revenue. A cynical take, I know, but we are talking about billion-dollar corporations, not cheeky startups.


> I agree it would be nice but it would also be a boon for bad actors who want to manipulate the system.

Has this ever happened, though? How is it different from giving hackers source code?


Yeah. The issue is that, in the environment they work in, there are no known secure recommendation algorithms that give good recommendations. So the only option is security by obscurity.


reddit is not a paragon here


I agree. Is anyone? Full disclosure: I used to work at Reddit. I was just putting in my 2 cents why Elon's proposal might not be immediately actionable without a lot of hard thought about how to deal with bad actors without the obscurity layer.


It's always the algorithm with you, isn't it, Alan Turing?


if the algorithm is open source, bad actors can see it as well. it'll become remarkably easy to spam and game the system. this could drive users away, decrease revenue and leave Mr. Musk holding a $40B hot potato


Counterpoint: maybe it just accelerates the game of cat and mouse already played between bad actors and algorithm designers, possibly leading to faster iteration and improvement of the algorithm as designers search for hard-to-fake signals.


I feel like this is the outcome that would occur, similar to how some of the most secure software (e.g., Linux) is open source.


Musk could start his own Mastodon right now and do what he wants, but if he wants to bring along his Twitter network (or make it easier for any Twitter user to switch), then perhaps hurrying up Bluesky is another way to get what he wants short of taking Twitter private.

It seems like the focus needs to be on what can fight back misinformation/spam better given the context: more speech or moderation. The need for human moderation will likely still exist, but if Musk wants to accelerate the process as you say, Bluesky seems like a good way to start.

If Twitter is making choices that increase their bottom line at the expense of community, then absolutely, more transparency and open source would help with that.


Get rid of the algorithmic timeline and make everything chronological. Manipulation and gaming over.


That would also get rid of users. It's pretty clear that chronological ordering is non-optimal for engagement in a feed system.


If Musk really cares about free speech, speech should be more important than user engagement and algorithms manipulate the visibility of speech. They should be the first thing to go.


And revenue will shrink accordingly. If that's his plan, he's right that it can only happen in a private company. Otherwise the resulting revenue deceleration will send Twitter into a stock price self-fulfilling tailspin as stockholders start seeing it as a doomed platform.


I just wish we would implement a system where users vouch for each other in order to use the platform. A sort of web-of-trust to stamp out (or at least temporarily punish) whole areas of the social graph that are being used for manipulation and abuse.
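A minimal sketch of what such a vouching scheme might look like (all names, the decay factor, and the posting threshold here are hypothetical choices, not anything any platform actually implements): accounts vouch for one another, and when an account is penalized for abuse, a decayed share of the penalty propagates back to its vouchers, punishing that region of the social graph.

```python
from collections import defaultdict

class VouchGraph:
    """Toy web-of-trust: accounts vouch for each other, and penalties
    for abuse propagate back to the accounts that vouched."""

    def __init__(self, decay=0.5):
        self.vouchers = defaultdict(set)          # account -> accounts that vouched for it
        self.trust = defaultdict(lambda: 1.0)     # every account starts fully trusted
        self.decay = decay                        # fraction of a penalty passed to vouchers

    def vouch(self, voucher, account):
        self.vouchers[account].add(voucher)

    def penalize(self, account, amount=1.0):
        """Dock the abusive account, then dock each voucher a decayed amount."""
        self.trust[account] = max(0.0, self.trust[account] - amount)
        for v in self.vouchers[account]:
            self.trust[v] = max(0.0, self.trust[v] - amount * self.decay)

    def can_post(self, account, threshold=0.25):
        return self.trust[account] >= threshold

g = VouchGraph()
g.vouch("alice", "spambot")   # alice vouched for an account that turns abusive
g.penalize("spambot")         # spambot: 1.0 -> 0.0, alice: 1.0 -> 0.5
print(g.can_post("spambot"), g.can_post("alice"))  # False True
```

A real system would need transitive propagation, rate limits, and a way to recover trust over time, but even this one-hop version shows the core idea: vouching creates shared accountability.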



