It's not weird. I suspect a lot of us have no idea, and the author shouldn't assume the average readers know this. It would help to introduce "Fin" before making the claim that it's obvious.
I've never seen or interacted with Fin that I've noticed. Admittedly I've seen a lot less of Intercom over the last 5 years, they used to be on every SaaS site, but it seems there's a ton of competition there now. One of the banks I use uses Intercom for their support but I've never seen an AI bot there, only humans.
I suspect it's very visible for _Intercom_, but not necessarily so for everyone else.
I think the only gap I’ve come across is that when driving two monitors through a DisplayLink dock, it doesn’t really have the GPU headroom to keep that from being laggy.
It’s unfortunate we haven’t solved the micro-payment problem. Crypto was an obvious solution, but anything would require a hefty network effect. But imagine something like a Starbucks card: you have your micropayment card, and it auto-reloads with 20 bucks or whatever when it hits zero. When you visit the Times, a modal pops up: “This article costs $0.02. Read it? y/n, or $1 for a day pass.” Sure, pirates will get around it, but they already do. Just make it grandma-easy and you’re done. It’s just that the money probably isn’t good enough for VC dollars to roll something out with enough big players jumping in.
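A toy sketch of that auto-reload flow, just to show how little machinery it needs (all names and numbers here are made up):

```python
# Hypothetical auto-reloading micropayment wallet, as described above.
class Wallet:
    def __init__(self, balance=0.0, reload_amount=20.00):
        self.balance = balance
        self.reload_amount = reload_amount

    def charge(self, price):
        # Auto-reload when the balance can't cover the charge.
        if self.balance < price:
            self.balance += self.reload_amount  # bill the card on file
        self.balance -= price

w = Wallet()
w.charge(0.02)  # "This article costs $0.02"
w.charge(1.00)  # "$1 for a day pass"
print(round(w.balance, 2))  # 18.98 left after one $20 reload
```

The user only ever sees the y/n modal; the reload happens behind it, which is the "grandma easy" part.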
It has been tried a bunch of times. I think a core problem is that, unlike most microtransaction opportunities, you're asking customers to pay money to be told bad news; to buy something that will make them miserable. There's a fundamental disconnect there that means people aren't going to be inclined to do it.
The conclusion of that article is that the model doesn't work because of processing fees and friction from entering information.
The author discounts Bitcoin because it has high fees, but some cryptos have 0 fees and others have very low fees. With crypto you also don't need to enter any information, simply scan the QR code and enter the amount you'd like to pay.
If crypto was adopted, the model would work just fine.
Personally, I always donate 10 cents to a dollar in Monero when I read an article[1] that I enjoyed that offers crypto donation addresses. Primal[2] has built a crypto wallet into their app and you can see people send "zaps" of Bitcoin when they appreciate a post and it has adoption.
This is a different model, though. This is a single site doing microtransactions, which I agree doesn’t work. But a global/general one doesn’t exist and probably would be fine. It would have the same friction as buying moves in a phone game or whatever, and reload minimums would handle the fees.
I never saw either as a purchase option on any major newspaper or site. It’s a chicken and egg problem. You need to be large enough to be able to get big name partners on board to seed the network.
Edit: in reality this already exists. Amazon/Apple Pay/Google Play already have reloadable gift cards/accounts. Just like using it on the web, click yes to pay ten cents with whatever. The accounts can still be used to buy whatever. Done. Just have to gate it to gift card accounts.
Blendle was largely targeted in the Netherlands and Germany so if you’re in the US it isn’t surprising that you didn’t see it. But it had major publishers on board and it failed.
But you’re right, it’s a chicken and egg problem that won’t get resolved. If an org is already making money via subscription they have no incentive to do micropayments.
An approach that might work is low cost yearly subscriptions. So $6 a year instead of per month. Cost to the consumer becomes $0.50 a month for services that scale well (like news), but avoids the service fee and money laundering problems of micropayments.
See this sounds excellent to me. In order to make it work for the boardroom though, it'd be more like $0.50/article or $0.99 for "breaking news".
I can imagine the math being roughly: divide the monthly cost by the number of articles an average user reads per month, then slide it up to look round.
Maybe I'm being cynical, but I think the economics would break down pretty quick, right?
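That back-of-envelope math, with made-up numbers:

```python
# Hypothetical numbers: back out a per-article price from a subscription.
monthly_subscription = 17.00  # assumed monthly price in dollars
articles_per_month = 25       # assumed average articles read per user

raw_price = monthly_subscription / articles_per_month  # 0.68
boardroom_price = 0.99  # "slide it up to look round"

print(f"raw: ${raw_price:.2f}/article, sticker: ${boardroom_price}")
```

At that sticker price, anyone reading more than a handful of articles is paying more than the subscription, which is presumably the point.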
There’s no way this scenario doesn’t get walled-gardened off in some way, as the AI SEO market will decimate current AI results in the next 3 to 6 months for sure. The slop is already making organic product hunting impossible.
This is pretty easy to solve. If you present data by algorithm, you are no longer an impartial common carrier and are liable for the content you present. If the user decides what they see, you aren’t, à la social media 1.0.
> If you present data by algorithm, you are no longer an impartial common carrier and are liable for the content you present
Hacker News is a site that presents data by algorithm. Under your definition, Hacker News goes away, too.
A more accurate framing would be that they’re going after personalized recommendation algorithms. It’s not obvious that offering a recommendation algorithm would mean that the site is no longer an impartial common carrier.
Goes away, or is liable for the content promoted to the frontpage under the OP's take?
But I'd agree, that it's personalisation rather than just curation that's the issue.
I think even requiring sites to have a "bring your own algo" version (and where ads are targeted to the algorithm, rather than the person) would cure a lot of ills.
As is, even with something like Spotify, where you _are_ paying, there's no easy way to "reset" your profile to neutral recommendations.
> Goes away, or is liable for the content promoted to the frontpage under the OP's take?
Same thing. There is no Hacker News if Y Combinator becomes liable for user submitted content.
It’s an obvious backdoor play to make sites go away. If a site becomes liable for content posted, you cannot allow users to post content without having the site review and take responsibility for every comment and every post.
The people proposing it haven’t considered how damaging that would be for the ability of individuals to share ideas and their content. When every site with “an algorithm” is liable for content posted, nobody is going to allow you to post something. It’s back to only reading content produced and curated by companies for us. Total own-goal for the individual internet user.
I think you could finesse it by saying that on HN, the users submit the content and the users also determine (by voting) what is popular. Ycombinator doesn't promote or bury any particular post with their own algorithms; they don't exercise any editorial review or control. (I don't think that's exactly true today, but it could be).
But to the larger point, I would actually agree that sites should "review and take responsibility for every comment and every post." They are the ones amplifying and distributing this content; why should they have zero responsibility for it?
Yes that would dramatically change what gets published online, but I think that would be a good thing.
And how do you think any other website decides what to recommend you, if not other users' actions? Remember the Netflix prize? The data set they gave you is how other people rated movies. You can absolutely build a recommendation system without manual input from the operator.
And HN absolutely does promote submissions at the moderators' discretion. The moderators sometimes give old but overlooked submissions a second chance, and they also turn off the flamewar detector on some stories that they think deserve more attention, which effectively promotes them against users' will.
Importantly, all except one of those things is impartial to the user, and even that one is merely binning based on a single category. "Algorithm" here is a red herring, IMO; people are objecting to a couple of fairly specific things. One is personalization carried out by the other party; the other is designs that introduce partisanship or are detrimental to the end user (addiction and other dark patterns, for example).
So do you think the same logic applies to ISPs? Should they be reviewing all the content that they allow to transit their network and ban you if you try to evade their controls by using uncrackable encryption because if they mess up and allow you to distribute copyrighted or defamatory material they will be held liable? Remember that section 230 was originally enacted to protect them from liability.
No I don't think it applies to ISPs. They aren't involved in selecting or soliciting the content, or providing the sofware and platform that creates or distributes the content. They are "just pipes." Their purpose is to move bits.
This is not a correct understanding of ISPs though. They do already have certain obligations to restrict content on their networks. In particular they are required to remove subscribers when they become aware that those subscribers are participating in copyright infringement.
> They are the ones amplifying and distributing this content, why should they have zero responsibility for it?
If LinkedIn started allowing hardcore pornography, many of their advertisers would leave.
With that in mind, are you certain LinkedIn takes “no responsibility” for the content they distribute? It would seem they have a multimillion-dollar stake in the outcome of their efforts to shape their commercial product.
The main difference is that HN uses time to segregate cohorts and TikTok uses interests to segregate cohorts. If enough people within these cohorts upvote / give watch time then the content is shown to more cohorts.
I understand the basic principle. Clearly that's one of the inputs. What I'm questioning is your implied assertion that there's nothing else to it.
I don't for a second believe that tiktok (or facebook or any of the others) employs a primitive algorithm that impartially orders results based on a simple and straightforward metric without consideration for their own interests.
>I don't for a second believe that tiktok (or facebook or any of the others) employs a primitive algorithm
Is your contention that whatever future law would have some mechanism to decide the complexity of the algorithm? How would you design a law such that the Reddit ranking algorithm is "primitive" but TikTok's algorithm is "advanced"?
You're changing the subject. I said nothing about the law, only objected to a claim about the internal mechanisms of tiktok.
If we're discussing hypothetical laws then my preference is for several: banning various dark patterns (what the EU is doing here), banning opaque individualization outside the control of the individual in question, and banning motivated editorialization (such as intentionally promoting a particular political position). And yes, a straightforward application of what I wrote there would make the Netflix recommendation algorithm as it currently stands illegal. I have no problem with that.
I agree with what OOP said. But it’s not my intent to “shut sites down.” I have this view to try to increase diversity of media consumption and break people out of echo chambers. If your business model is so shit you have to exploit weaknesses in human brains to keep people viewing ads and can’t adapt, then that’s your problem.
If you have an algorithm whose sole purpose is “engagement” with your own platform (intentionally and purposely pushing clickbait, ragebait, and media that keeps reinforcing your clicks), you should no longer get Section 230 protections; you are no longer a neutral party. These algorithms exist to create echo chambers and keep you clicking so you can consume more ads.
I would love to hear other ways of solving the problems of social media.
> I have this view to try to increase diversity of media consumption and break people out of echo chambers.
Making sites liable for all user-posted content would do the reverse of this. Every platform that lets people submit content would have to stop doing that, because it’s an impossible liability to manage.
You’d have to host your own site. You wouldn’t be able to share anything about it on a social media site because its user-generated content. No visitors unless you advertise it through paid contracts with companies that can review it and decide to accept the liability.
Newspaper "Letters to the Editor" manage to do this. Users "submit" things to the newspaper, the editor curates and decides what to keep and what not to, and then the newspaper publishes the user generated content. Just like social media: Users submit things to the site, TheAlgorithm curates and decides what to keep and what not to, and then the site publishes the user generated content.
If web sites and social media can't "scale" to do this, then maybe they should scale down. "Making sites liable for all user-posted content" would not kill social media, but would definitely scope it down to what can be effectively curated.
I don't think there are enough dangs to effectively curate much of the internet, and scaling it back by how much would be the result? 95%? That is before settling on definitions of effectively curate I suppose.
"Effectively curate" here simply means "willing to take legal responsibility for" (although in practice I assume there would be an insurance policy involved because that's just how things are done).
I notice that parent describes "engagement" algorithms and you somehow jump to "all sites". So I think we'd see "engagement" algorithms disappear and very primitive approaches with prominent transparency measures in place would replace them. I expect we'd all be better off were that to happen.
"Letters to the editor" curated by employees would become a part of their business model, and regular contributions would go away? Why would that assumption be incorrect? I wouldn't run a website where a casual user having a moment could result in my imprisonment. I would only allow non-LGBTQ content that didn't mention race or immigration, as the chilling effect there is real. A DA would for sure come after me if my site became influential.
If YCombinator has to officially approve every article submitted, then it will become a publisher of a news site, not a social media site. Essentially, it would be a New York Times site with unpaid writers.
Well, the argument was that Hacker News would no longer exist. I asked why, and your response was that it would be like the NY Times; but the NY Times website does exist, so I don't understand what point you are trying to make.
Got it. If the page no longer fulfills the original purpose people went to it for, it ceases being interesting. The fact that the page merely exists is meaningless, much like a blank website.
Well, you pointed to the NYTimes which, again, has not changed, so what is your point? Maybe the NYTimes is not a good example? I don't know, you brought it up. Are you saying the NYTimes is not an interesting website? It seems to also have the news and discussion of the news, so what exactly am I missing?
It's a matter of resources, not corporate status per se. For better or for worse, the current status quo largely democratizes content promotion. You and I can post these two comments here and put our ideas and names in front of a bunch of strangers for $0.
In a world where the risk-adjusted cost of allowing third-party comments on your platform shoots up, someone has to pay that cost. A personal blog hosted on your server might struggle to find any significant reach without a real advertising budget, because distributing speech/content that promotes your platform would no longer be ~free.
I don't necessarily believe that the major social media platforms would fully evaporate, but I'd expect some or all of these changes across the ecosystem:
* Massively scaled up LLM-based moderation/censorship.
* Replacement of direct user content posting with an LLM-based interface (to chat with an LLM about what you want it to write on your behalf).
* Payment-gated public posting, e.g. monthly or per-post fees to cover liability/insurance and/or LLM inference costs. Possibly higher fees for direct authorship vs LLM pair posting.
* Massive rise in adoption of decentralized architectures, either via current mainstream platforms if legally tolerated or via anonymous dark web platforms otherwise. Maybe Tor becomes as normalized as VPNs, or maybe the Western legal environment shifts hard against general-purpose computing.
I understand where this sentiment is coming from, but I think it's taking a lot of the current status quo for granted. What you guys are proposing isn't necessarily a targeted change that would simply make bad guys stop doing bad things. It's more likely a massive structural change that would dramatically alter the social and economic fabric of the internet as we know it, and not in a way that most of us would like.
But still an algorithm. The difference is that we (at least some of us) place a greater trust in the integrity behind how information surfaces on HN. I think that some parts of it are open source, and the moderators are transparent enough about what isn't public + there is a mix of folk knowledge that explains how HN works under the hood.
Depersonalized algorithms or recommender systems aren't inherently better than personalized ones. HN is an exceptional example of the former but I think at scale people would come up with a different crop of complaints for them.
Yes it's still an algorithm. Cable TV programming is another example. Everyone sees the same content. The ads are changed at the local broadcaster level but are not tailored to the individual, and are not harmful in the ways the EU is regulating. If anything, everyone watching the same thing is good for social cohesion. Everyone discusses the latest TV episode the next day at the office.
Right. Setting aside the fact that cable television doesn't appear to be the typical distribution method anymore, how do broadcasters select/schedule their programming?
What's your point? It seems like you're pedantically focusing on a single word without regard for the actual meaning of the broader statement. No one is proposing to regulate things done in the traditional manner of cable tv, nor other uniform and impartial approaches.
@conception (root): "If you present data by algorithm, you are no longer an impartial common carrier and are liable for the content you present"
@Aurornis: "Hacker News is a site that presents data by algorithm. Under your definition, Hacker News goes away, too."
@Aurornis (cont'd): "When every site with “an algorithm” is liable for content posted, nobody is going to allow you to post something. It’s back to only reading content produced and curated by companies for us. Total own-goal for the individual internet user."
@Aurornis (cont'd): "If a site becomes liable for content posted, you cannot allow users to post content without having the site review and take responsibility for every comment and every post."
@tencentshill: "The algorithm is not personalized. It's the same for every user. No issue there..."
Me: "But still an algorithm".
@tencentshill: "Yes it's still an algorithm. Cable TV programming is another example."
Me: "...how do broadcasters select/schedule their programming?"
***
If the "broader statement" that you're referring to is @conception's, then I agree with @Aurornis that this would have negative effects on how websites like Hacker News operate. Failing to distinguish personalized recommendation systems from depersonalized ones, and proposing regulation that affects them the same, is an imprecise approach.
The speculated consequence is that platforms (e.g., Hacker News) will not want to assume liability for the content that users share. [0] If this were to happen only a few platforms would exist, at least on the clear/open web. The general online experience would become something like a pastiche of 60s cable television with three or four providers authorized to broadcast media.
With the direction that democracy is trending across the world that would mean state-run or state-approved media. Or all online communities will have to organize and operate like more traditional institutions like this biking community in London is doing: https://www.lfgss.com/conversations/401988/.
[0]: Some parts of this community already suspect that moderation conveniently buries controversial or subversive submissions. See this one from today! https://news.ycombinator.com/item?id=48110927
Legislation needs to be clear and unambiguous, sure. Nonetheless, no one had chronological sort or raw vote counts or whatever else in mind when they used the term "algorithm" here, so pretending they did is obtuse and pedantic. Misinterpreting the position of the other party does not typically make for enlightening or insightful conversation.
Cable TV is an example of something that no one is objecting to. The EU is targeting specific practices (particularly addictive UX patterns). Some people (myself included) would also like to see algorithms that provide personalized (on the individual or small cohort level) output banned. HN is clearly not that.
I think there's an interesting discussion to be had about where exactly the line is between a general class and a small cohort. Certainly applying more than a few general classes simultaneously can quickly land you back in near-individual territory.
> Nonetheless no one had chronological sort or raw vote count or whatever else in mind when they used the term "algorithm" here so pretending they did is obtuse and pedantic.
No one until you it seems.
> Cable TV is an example of something that no one is objecting to.
@tencentshill's reference to cable TV originates from the question of whether Hacker News operates via algorithm and would be subject to the sweeping regulation proposed by @conception. The answer is yes.
If I wanted to be pedantic I'd try to argue that cable TV operates according to its own kind of algorithm. And I almost did, so you got me there at least. But there are enough factors that contribute to television programming that it's debatable how far it is from using one (or a recommender system, rather), and whether under different circumstances the EU's issue with "endless scroll" and "autoplay" would be aimed at TV.
Of course, the main difference is that television in Europe is probably regulated differently than the internet.
I'm not objecting to the internet being regulated like television. For the record, I don't hold one to the same standard of utility as the other. I'm speculating on what would happen if the internet were to be regulated like television according to the combined scenarios advanced by @conception and @tencentshill. Do you follow?
I believe you are obtusely misinterpreting the other two commenters. The pedantry is your insistence on an overly zealous interpretation of the use of the word "algorithm" by @conception (well really it was @aurornis with the pedantry but you followed on). It's clear enough that the original wording was sloppy; forcing analysis of an unintended scenario isn't fruitful.
> I'm speculating on what would happen if the internet were to be regulated like television ...
You've lost me. Previously you were arguing that all platforms with user generated content would disappear. Now you appear to acknowledge that the scenario as described permits platforms that operate analogously to cable TV, which is to say they don't present individualized content.
I'm no longer clear what your current position is nor what you might be attempting to communicate to others or advocate for here.
The Facebook/Meta algo might be the same for all users, but it has different inputs for each user.
On HN, on the other hand, everyone has the same front page. If I like a post I can favorite it to 'bookmark' it, but HN won't modify my front page based on what I favorite, whereas Facebook will.
I think the GP's argument is, when it comes to social media, "one size fits all" might be less addictive than "custom made" :)
And also the algorithm here is title-blind. The content of the story bears no sway over its place in the rankings. I do not believe dang cherry-picks either except for the very rare sticky?
Is that true? I thought I had seen it said that there were keyword penalties to discourage things like political posts, which could be turned off and on.
It would be a lot better if the user just had more control over the recommendation algorithm, either to replace it with an alternative or tune it. For example, I never want to watch YouTube shorts. Every time I see them, I click "show less often" since it is the only way I can express this preference, and still YouTube shows me them.
Obviously YouTube knows that even among people who do this, they still get good engagement out of YouTube shorts, so they keep showing them, but these users have explicitly asked YouTube to not show them.
It would be like a recovering alcoholic whose landlord comes by every week and leaves free samples of booze, because they get paid by the booze company, even though the alcoholic has asked them to stop.
> Hacker News is a site that presents data by algorithm
Does it though? I mean by "algorithm" in this context we mean "personalized algorithm meant to maximize engagement and retention".
Not e.g. "sort by upvotes and decay by time" or even "filter content based on coarse user location".
Does HN show me a different front page than everyone else based on which articles I have read or upvoted? That would make me feel worse about the site because I don't want a personalized HN feed I want to read what everyone else is reading (which is incidentally why I refuse to give up linear TV).
> Does it though? I mean by "algorithm" in this context we mean "personalized algorithm meant to maximize engagement and retention".
I addressed that in the second half of my comment already.
But yes, HN qualifies as a site that displays by algorithm. If you mean personalized recommendation algorithm then it’s important to call that out. The last thing we want is regulation so broad that it catches every site that ranks things.
No one _ever_ even considers "algorithms" in the CS sense here (such as "sorting"), and even bringing that notion up would be deliberately dumbing down the discussion (yet it keeps happening in this thread over and over again because people are, for some reason, very "well ackshually, sorting is an algorithm").
"Algorithm" in this context is very clear what it is. It is not what the word means in Computer Science or in general. Just from the context and without any clarification needed "algorithms" in social media means "addictive personalized feeds".
I think we need a different word, so that Computer Science grads stop getting wrapped around this axle. We're obviously not talking about Quicksort when we're talking about social media algorithms and other recommendation/discovery algorithms. Heck if I know what that word would be.
Yes absolutely. Sadly I think that ship has sailed. Now if you ask 100 people in the street what "algorithms" are, I bet a majority among those who answer anything at all will answer it's something related to evil social media corporations.
Have you ever browsed by New and seen the firehose of shit which doesn’t make it to the front page? HN sorted by new is effectively useless and you might as well shut the site down at that point.
“Chronological only” might work for something like Twitter where you’re choosing to follow specific individuals to see their posts, it can’t work for curation sites like HN/Reddit.
Yeah, for sure, I see what you were saying. Changing that part might not achieve the desired effect, though, is what I was saying. It's context-dependent on the site, of course, but in a general sense I could see Meta et al. being unfazed by this to a significant extent.
I think you are just reflexively trying to argue your point without even thinking about it.
30 million TikToks are posted per day. What do you mean you are going to allow "users to filter"? This "regulation" will be trivially defeated by TikTok-Videos LLC uploading videos and TikTok-DataScience providing the most popular filtering algo.
At the end of the day, many children will simply default to using the best algorithm, and all this regulation helps no one.
Wow clearly this a problem that can never be solved, better to not regulate these tech giants that have anti-democratic and anti-human beliefs. We're simply too powerless to regulate these entities!
Yeah, I clearly prefer non-hamfisted regulation under the guise of "protect the children". Half-assed attempts like this are worse than useless.
Just like with GDPR, the tech giants will put their foot on the scale and continue to operate how they see fit, and the smaller guys will die under a thousand paper cuts. I'd rather not add more regulation that cements Meta as the sole media platform of the internet.
The difference is you can’t prove that hacker news has a bunch of psychologists on staff who are dreaming up ways to make the website addictive.
If you take TikTok to court and go through discovery you’re going to find internal communications of people talking about ways to get people to stay on the app longer, ways to make the content more addictive, ways to maximize ad reach, etc.
Hacker News just tossed up a simple upvote/downvote system and called it a day.
Plus it has no endless scroll, no graphics at all, limits your comment frequency, has no push notifications, etc.
The majority of terminally addicted people I have interacted with at length have both recognized the terminal nature of their addiction and been unable to do anything about it.
Honestly the damage done by TikTok et al is so severe that I’m okay with a little collateral damage. We will build new things.
But I also see no reason you can’t separate out forums with upvoting from the personalized engagement optimized feed. They are fundamentally different designs. (In other words, Subreddits are safe, the Reddit homepage is regulated unless it changes.)
When we talk about "The Algorithm" in terms of social media, the term has just about taken on a meaning opposite to the original one.
My coder's view of the simplest possible algorithm: "People that I follow, their posts, ordered by most recent first." It's transparent, easy to understand, consistent, and a few lines of code.
Big Social Media's "The Algorithm": a complex, utterly opaque, personally targeted, frequently shifting internal set of rules that they manipulate to maximise engagement and revenue, designed according to business priorities and hidden from the users whose attention is being monetised.
Clearly, if you use the second kind of Algorithm then you are no longer an impartial common carrier.
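For what it's worth, the first kind really is just a few lines. A toy sketch (the post fields and data are made up):

```python
from datetime import datetime

# Toy chronological "follow feed": posts from followed accounts, newest first.
def simple_feed(posts, following):
    return sorted(
        (p for p in posts if p["author"] in following),
        key=lambda p: p["time"],
        reverse=True,
    )

posts = [
    {"author": "alice", "time": datetime(2024, 1, 2), "text": "hi"},
    {"author": "bob",   "time": datetime(2024, 1, 3), "text": "yo"},
    {"author": "carol", "time": datetime(2024, 1, 1), "text": "hm"},
]
feed = simple_feed(posts, following={"alice", "bob"})
print([p["author"] for p in feed])  # ['bob', 'alice']
```

No weights, no history, no per-user model; anyone can audit it by reading it.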
In the case of Instagram: You show the videos from the people you follow on instagram, then no more short videos at all. Possibly a search box.
If you search on youtube then it can rank any way it wants, just not use e.g. anything from the viewing history. No "related videos" column. That's what YouTube used to be. But YouTube (unlike TikTok) worked well before it had rabbit holes.
For TikTok the situation is worse. Their whole app just doesn't exist unless you have the custom feeds. This would make YouTube be 2010 youtube, Instagram be 2010 Instagram (great!) but it would effectively be a ban of TikTok's whole functionality (again, great!).
I think it would be great if all of these apps had an option to function like you propose: Your feed is a simple view of people you’ve chosen to follow. The end.
Then all of the people who have trouble with self-control on infinite feeds can enable this mode, and everyone who wants the recommendation algorithm can leave it on.
This is the optimal outcome that actually serves everyone’s personal goals for using these platforms. If we get into a conversation where some are demanding we don’t allow anyone to use a recommendation algorithm because they feel the need to control what other people see, that’s a different conversation. That conversation usually reveals other motives, like when people defend the algorithm sites they view (Hacker News, Reddit, whatever) but targets sites they don’t like TikTok.
I don’t endorse using these apps, but for what it’s worth, Instagram actually does have this feature (tap “instagram” at the top and select “following”). You get a chronological feed with no ads and no reels. Of course they don’t provide an option to make that the default as far as I know.
Yup so all they need to do is only allow that content feed for anyone under X years in some specific countries. Seems like they'll survive this, and it won't even be very expensive to fix.
Reminder that any regulation that depends on age is a trigger forcing ID checks for everyone.
You can’t put a restriction on people under X years without gathering information about everyone’s age. You can’t confirm everyone’s age without some ID check. You can’t do an ID check based on anonymous tokens (too easily shared) so every age check mechanism has some ID revealing step, either to the company or to a 3rd party like a government entity (which will pinky swear they’re not looking at the data).
Instagram and Facebook both have such features. They’re hidden, though. With Instagram you tap the logo in the top middle of the app and choose “Following”. With Facebook it’s hidden away under the “Feeds” section in the app.
I’d love for there to be an option to have them as default. It’s obvious ($$$) why they won’t do that unless forced to by regulators.
> I think it would be great if all of these apps had an option to function like you propose: Your feed is a simple view of people you’ve chosen to follow. The end.
This is something EU regulation already requires of them. Earlier this year the Dutch courts ruled as much, all the way up to appeal. It's just a matter of time before other European courts repeat this ruling.
Why do you assume the recommendation algorithm should be the default? The algorithm is the dangerous thing, THAT should be the opt-in mode not the other way around.
IMO they should not only be opt-in, but should actually be required to publicly list the parameters and weights they’re using and allow users to tune those weights.
Sure, if that makes the angry mob happy then let’s make it default. Then every new user can click the button once and be back to the app they expect.
> IMO they should not only be opt-in, but should actually be required to publicly list the parameters and weights they’re using and allow users to tune those weights.
I wonder how many people here know that many of the popular apps have rolled out finer controls for recommendation algorithms so you can do this. On Instagram you can go in and see the topics your recommendation algorithm picked up and modify them manually if you like.
I think the goalposts will just continue to move, though.
No they should have to pick every time whether they want to be in follower mode or discovery mode. Dismissing concerns as “the angry mob” is richly ironic considering the entire objection is that recommendation algorithms seem precisely tuned to foster angry mob dynamics. So yeah it will make the angry mob happy because it will be removing the primary mechanism for inciting angry mobs.
People here know that they have finer controls (which are still not actually that fine and also don’t really make the parameters auditable). The problem is these settings are hidden away in places most people will never look. And also, I stress again, none of this is actually auditable because they treat these as some kind of trade secret special sauce and there’s really no reason society should feel obligated to support or enable this business model.
> considering the entire objection is that recommendation algorithms seem precisely tuned to foster angry mob dynamics.
That actually wasn’t the objection in the article we’re discussing at all.
The objection is that recommendation algorithms show people more content they want to view, which leads vulnerable people (kids in this case) to consume more content.
More of what they want to view by showing a feed of largely inflammatory content that plays on easily provoked emotions and encourages commenting and interacting, which keeps people coming back to argue more. The mob dynamics are part of the addiction loop.
Not sure what confiscation would accomplish that regulation couldn’t? I mean we’re all aware that if regulators target TikTok then a new app would pop up and take its place.
But the thing about regulation is that it doesn’t need to be water tight. You can just target a small handful of large players and it will improve the situation in practice. It doesn’t matter if 998/1000 apps use addictive feeds if the largest two apps don’t and they have 90% of users/views.
It’s naive to think that regulation is going to cover the entire global internet.
If you regulated domestic companies out of existence, global options would pop up in their place. You could try to block them all in app stores but people would go to the web views.
I think that's still mostly fine. Youtube is already not an app but a web site (It has apps too but I think it's less app centric than e.g. instagram).
Obviously we need the ability to regulate global options as well. Typically, once these actors truly become big, they have a presence in their "target" countries, such as ad sales.
Do it like a library. When a person walks into a library, they're presented with a short curated list of books suggested from the librarian. All visitors to the library see the same books. From there, the visitor can go about their business searching for what they want.
If they don't know what they want, perhaps a good use case for the newfangled LLM-search we have now would be "What's an interesting or popular topic I haven't searched for before?" to which the AI will respond with a list of newly searchable terms.
The first unwatched video from the user's followed/subscribed channels. Chronological, reverse chronological, sorted alphabetically, by the user's channel prioritisation, by likes, by views... whatever the user chooses. And then an end of feed.
For new users? A search bar and a set of (human? AI?) curated seed recommendations that the platform is comfortable with being held liable for.
If they just signed up they have no followings or subscriptions. So now what, you need to show accounts to follow first? That's the same problem as deciding what the first video to show is. How do you decide who they should follow? Or is the vision that you can only have friends, as if it's 2005, and you can't discover anything serendipitously?
I don't consume any content from my friends on something like tiktok where I'm interested in discovering people that have good content under topics I'm interested in. I don't know who those people are and I want to discover new ones that come up not just follow some already popular accounts.
>So now what, you need to show accounts to follow first
Youtube won't show you anything at all if you have a new account with watch history turned off. It says something like "turn on watch history and watch videos so we can recommend some for you".
Undoubtedly the change needed here will introduce friction and reduce viewing time, and society will be better off for it.
The whole idea here is to make content consumption more deliberate and mindful rather than just opening the app and vegging out to an endless feed of slop.
That’s also an algorithm. An unsophisticated one, but an algorithm nonetheless.
You can (and should) argue that such a simple algorithm doesn’t “count”, but fundamentally the exact wording of the grandparent post never works, legislatively.
> That’s also an algorithm. An unsophisticated one, but an algorithm nonetheless.
The problem always has been "(personalized) opaque algorithms". Time sorted by followers isn't really opaque, nor is "sorted by likes" or whatever. The problem is always pulling in parameters that a users either has no active control over or are so variable they effectively could be random.
Can everyone just please stop saying "well ackshually sorting is done with an algorithm" and just assume at least not-idiotic-intent here? No no one will ban "algorithms" or suggests anything of that kind. Yes it's a terrible name. Yes it will be hard to formulate what's allowed and what isn't. But a very simple litmus test is: what are the inputs to the algorithm?
User's coarse geographic location? Fine.
AI-detected language of the content? Fine.
Global popularity of the video clip? Fine.
User's past behavior: the number of videos with similar content they watched, or the average number of seconds this particular user usually waits before scrolling further? Not fine.
The pattern is obvious: personalized algorithms are what's targeted. Let's keep the discussion intelligent.
Your litmus test isn't correct and your assumption of personalisation isn't correct either. All of the criteria that you see as fine are controlled under the relevant legislation and are considered personalisation, requiring transparency etc.
Furthermore, bills have been brought to EU parliaments that have erroneously attempted to ban all forms of ranking, which would include even the most basic information retrieval algorithms. So it isn't obvious at all what is meant by 'algorithm'.
> Any ordering is an algorithm technically, so yes just "banning algorithms" doesn't work
Algorithm in this context (and presumably in any proposed legal text) is about personalization and purpose.
No one worries about presenting content based on total popularity, coarse geography. user's browser language, or anything like that, regardless of whether the actual ranking algorithm (in the CS sense) is an algorithm. Yes it's a terrible name for what's being discussed, but let's not lose focus on the purpose because of that.
The internet solved the problem of scaling to millions upon millions of users in its implementation details: you share a URL. You follow people, they share URLs, it grows organically, the same way every website worked pre... Instagram? I'm not sure who moved to the algorithmic feed first.
I would say, no *personalised* algorithms other than those based on deliberate user choices would solve the problem. So, what user chooses to follow, or the same for everyone in the country.
This seems to be consciously dishonest. Show them "most recent" or "most upvoted" or "A to Z." Pretending like this is hard is bizarre. People have always selected sort and filter algorithms, until companies started taking them away.
Of course it's easy: such decisions were taken _before_ the feeds were algorithmically built.
You rely on unambiguous, "physical" properties of the videos.
There is a physical property of all the videos: the time of publication.
There is a physical property of all the channels: did you subscribe to it, or not ?
So, you show, in (reverse) chronological order of publication, the list of videos published by the channels you subscribed to.
Now, of course, a brand new user would have no subscription - you show them a search box.
But then your search algorithm has to weigh the various channels that match - but your algo can be relatively transparent, relatively auditable, and the same for all users (unless given explicit preferences, and of course national laws, etc., etc...)
I'm sorry, but, I have a "subscriptions" page in youtube or substack, and they're chronological, and they show me what I want to watch. You keep that.
There is a "home" page in both services that is algorithmically built, and it shows me crap that the algo wants me to watch. You get rid of that.
Do this, and I can consider you a "neutral" actor, and accept that you shift the blame to content producer.
Or, keep the algo feed, but don't take money from advertisers when I watch yet another flat-earther video because YOU decided it was trending.
If you want to decide what I watch, and make money from that decision - congrats, you are an editor. You get the earnings, and the responsibility.
Please don't tell me, with a straight face, that the people who build the algo don't "decide" what I watch. If they want to tweak the algo to downgrade the flamewars and outrage and conspiracy theories and violence and abuse, they can. They do not want to, for business reasons. [1]
That's fair, up to a point - we need publications with editors that agree on having "edgy" content. I'm not advocating for blanket censorship.
I did not like social networks preventing me from _sharing_ articles about Biden's son's laptop (this was actually beyond the law, but somehow they managed to find the resources and programmers to implement _that_, because, at the time, the execs were cozying up to a different administration).
I'm advocating for "accepting your responsibility as an editor".
The conversation has iterated a couple times and one point that people (on this site at least) are stuck on is “well however you rank things—latest, most popular—you’ll need to use some kind of algorithm, maybe quicksort.” This isn’t what the general public or politicians mean when they say “an algorithm” but it does make something of a point, what exactly the general public and politicians mean when they say that… it’s a bit ambiguous.
I think the EU has fully digested this point, and is focusing on the “addictive design” phrase instead, for good reason. It makes it obvious that the problem is a bit fuzzy and related to the behaviors induced, not some cut-and-dry algorithmic thing.
This kind of complex legislation already exists in many areas of the law, revenue collection being the most obvious. We could choose to treat "societal harm" the way we treat "tax collection".
I'm not saying there aren't infinite edge cases and second-order effects - but we tolerate those already for many things. I'm not pretending this is simple or even desirable - I'm merely stating it's possible if we want to do it.
My biggest fear is that (like the UK Online safety act) this acts to favour the huge corporations because they are the only ones that can afford a team of lawyers. Any legislation should aim to carve out exceptions to avoid indirectly helping monopolies.
Great example. These companies are already experts at circumventing taxes, what makes you think they can’t weasel their way around some arbitrary written law?
Just look at the malicious compliance that Apple and Google have around the App Store stuff, they’ll find a way to comply with the law and implement different addictive dark patterns.
I’m not saying that I disagree that these companies need to be regulated, I absolutely do. I just think it’s going to be a complicated process, and not “oh just ban everything that’s an algorithm”.
And I have absolutely 0 faith in companies like Meta willfully complying.
I have a feeling taxes are possible to circumvent only because a government tends to have one arm that wants to collect taxes, and another that wants to reduce them to encourage certain outcomes (like having a business setting up shop within its borders).
The US may have this dual incentive structure since it wants to build its tech giants while limiting their control, but the EU doesn't. The arrival of a foreign tech social media giant might make the legislation a bit more palatable to pass.
It will undoubtedly be complex to regulate all dark patterns away. But there are a few obvious, easy wins. It'd be a shame to make perfect the enemy of good.
But here’s the real problem: people don’t care. And I say that as someone who hasn’t used social media since 2014.
My observation of people’s behavior indicates that when all is said and done, people don’t care—they would rather get the endorphins from posting, liking, following, etc.
But the solution is to allow people to control their own algorithm, and to have open source solutions where communities manage their own social network.
It’s not the algorithm that is the problem it is that people don’t have the choice to curate their own content.
Although it should be noted that Mamdani’s average donation size skewed much smaller than Cuomo’s, so it is possible that Mamdani was “bribed” by the general public.
This is some kind of a meme where people believe things can’t be defined in legal terms and therefore can’t be regulated. These people are usually not lawyers.
Does anyone know where it’s coming from? I can certainly believe that incompetent jurisdictions have a ton of issues with people misapplying the law and using loopholes.
Albert Hirschman wrote a great book ("The Rhetoric of Reaction") about the rhetoric people use to stifle policy proposals 35 years ago. “It’s futile; it won’t ever work” is one common argument. It’s not a meme so much as a cynical reflexive intuition.
> This is some kind of a meme where people believe things can’t be defined in legal terms and therefore can’t be regulated. These people are usually not lawyers.
No they’re engineers who think rules have to function as rigidly in every field as they do in programming.
They either can’t or don’t want to accept that the law is a social construct and what it actually means to you is determined by the weight of precedent, as applied by judges and regulatory bodies. Things are vaguely worded in the law all the time. If people want to dispute how the enforcement is done they sue and judge decides how the rule should be applied.
The point isn't that it can't be regulated. What the original comment said was
> This is pretty easy to solve. If you present data by algorithm, you are no longer an impartial common carrier and are liable for the content you present.
But this is not in fact easy. It's hard to define what "present data by algorithm" means in a coherent way, and it's hard to extend liability for the content you present to liability for the manner in which you present it. You could make it work, if for some reason you really wanted to, but it's easier to pursue the strategy described in the source article of regulating specific abusive patterns.
An easy benchmark to set up: any feed that displays the data in any way other than the following is considered an editorial choice, and thus the platform is liable as a publisher:
1. In a chronological order, and only filtered based on user selected options.
2. In any other order explicitly selected by the user.
An exception can be made to allow filtering out content that violates the platforms terms and conditions.
Alternatively there can be no exception, effectively making these platforms unworkable. This is also a choice. We do not need these platforms, including this one.
If the user selects "sort by algorithm" then I don't see how you've changed anything other than the default. I think it's pretty obvious just changing the default won't work.
That's because the default is 99% the way the app is designed to be used. If the default is regulated, then they will just say "sorry the default is boring, click here to bring back the feed" and everyone will just click.
As a matter of fact, the social media companies will then have an incentive to make the default really bad, which is absolutely what they will do. This would be the malicious compliance I was referring to elsewhere.
I think most people over here are oversimplifying this and underestimating the ability of these companies to get what they want.
"Algorithm" is a method of selecting the content to display. You're listing presentation types, not selection types. Presentation has nothing to do with supervised selection. Selecting the next video in the infinite scroll would be the algorithm, not the infinite scrolling mechanism itself.
Instead, a regulation could mandate the administration of an anonymized, unbiased eval test at the end of every week/bi-week/month, just like the instruments used for psych evaluation (e.g. "Do you feel your <mental-health-metric> has become worse in the last <time-period>?" on a <scale>; "Did you have <mental-health-marker> after watching content on social media?").
The said regulation could then mandate that, after calibration and correction, the feed be pulled back by retraining the algorithm to adjust it via rapid A/B testing.
This is all doable by the companies themselves, but since they won't, the key is to mandate it and publish the aggregate results regularly, for instance as part of the quarterly SEC shareholder reporting requirements.
Everything other than sorting the list of entities by a standard measurement unit (time, length, mass, temperature, amount) needs to be covered by this law.
The moment you add other entities to the list (e.g. ads inbetween posts), then it's also subject to the same restrictions.
This effectively means “every online platform ever” and would also have included MySpace and the OG Yahoo etc, and as such would not really single out the truly bad actors.
And then we’ll end up with another cookie-banner style law which had good intentions but actually missed the point entirely.
Maybe MySpace should be covered. I mean, MySpace probably(?) had the technical capacity to act maliciously in the manner that modern social media sites do; the business model just hadn’t evolved to its modern toxic state yet.
The cookie banner law is fine for the most part. Sites that do the malicious-compliance thing of over-prompting the user for permissions are providing a strong signal that they are bad actors. It’s about as much as we can expect without banning them entirely…
I stopped using facebook around 2015-ish, when they stopped allowing sorting by date. Prior to this, hi5 and the like also allowed sorting by date. So no, not every online platform ever.
This doesn't differ much from the legal reality that I've seen. Terms need to be defined, yes. It will require work to do so. And that work should be done even if it's a bother.
New York did a pretty good job in their law that limits addictive feeds. Here's what their law says:
> "Addictive feed" shall mean a website, online service, online application, or mobile application, or a portion thereof, in which multiple pieces of media generated or shared by users of a website, online service, online application, or mobile application, either concurrently or sequentially, are recommended, selected, or prioritized for display to a user based, in whole or in part, on information associated with the user or the user's device, unless any of the following conditions are met, alone or in combination with one another:
> (a) the recommendation, prioritization, or selection is based on information that is not persistently associated with the user or user's device, and does not concern the user's previous interactions with media generated or shared by other users;
> (b) the recommendation, prioritization, or selection is based on user-selected privacy or accessibility settings, or technical information concerning the user's device;
> (c) the user expressly and unambiguously requested the specific media, media by the author, creator, or poster of media the user has subscribed to, or media shared by users to a page or group the user has subscribed to, provided that the media is not recommended, selected, or prioritized for display based, in whole or in part, on other information associated with the user or the user's device that is not otherwise permissible under this subdivision;
> (d) the user expressly and unambiguously requested that specific media, media by a specified author, creator, or poster of media the user has subscribed to, or media shared by users to a page or group the user has subscribed to pursuant to paragraph (c) of this subdivision, be blocked, prioritized or deprioritized for display, provided that the media is not recommended, selected, or prioritized for display based, in whole or in part, on other information associated with the user or the user's device that is not otherwise permissible under this subdivision;
> (e) the media are direct and private communications;
> (f) the media are recommended, selected, or prioritized only in response to a specific search inquiry by the user;
> (g) the media recommended, selected, or prioritized for display is exclusively next in a pre-existing sequence from the same author, creator, poster, or source; or
> (h) the recommendation, prioritization, or selection is necessary to comply with the provisions of this article and any regulations promulgated pursuant to this article.
Ok so then the "algorithm" must be made available to authorities (or even better, the public at large) and be approved or rejected based on a court or a law. Obviously an algorithm based on "engagement" or "narrative" should be rejected with prejudice every time.
I don't see a single difficult example here. The answer is "NO." It's strange that you couldn't even find one.
I mean "Is including likes an algorithm?" You might as well ask if having a dog in the video is an algorithm. Any question about "likes" would be if you're manipulating the video selection based on likes, or is the user given a control to manipulate the video selection based on likes. If it's you it's an algorithm. If it's the user, it's a control. If you lie about the likes, then it's an algorithm. If you're transparent about the likes, then it is a control.
The other ones aren't even worth discussing. You might as well ask if having a blue logo is an algorithm, or if Comic Sans is an algorithm. "It's all so complicated!"
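The control-vs-algorithm distinction drawn above can be made concrete. A minimal sketch, with hypothetical field names: the ordering is whatever the user explicitly picked, applied the same way for everyone, rather than a hidden ranking the platform tunes behind the scenes:

```python
# Hypothetical post records; field names are illustrative only.
posts = [
    {"title": "Bridges", "likes": 10, "timestamp": 300},
    {"title": "Apples",  "likes": 25, "timestamp": 100},
    {"title": "Cats",    "likes":  5, "timestamp": 200},
]

# Each sort order is a user-facing control: the user picks the key,
# the platform applies it identically for every user.
USER_SORTS = {
    "newest":     lambda p: -p["timestamp"],
    "most_liked": lambda p: -p["likes"],
    "a_to_z":     lambda p: p["title"].lower(),
}

def sort_feed(posts, choice):
    """Apply only the ordering the user explicitly selected."""
    return sorted(posts, key=USER_SORTS[choice])
```

If the platform silently reweights or misreports the like counts, it has crossed from offering a control to running an algorithm, which is exactly the line the comment draws.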
-----
edit: that being said, the EU does not care about this issue at all, and has had plenty of mandate and plenty of time to have done something about it if it did. They are also going to say "it's all so complicated." Because their problem is the unpopularity of center-left neolib governments that are just barely holding on with extreme minority support through bureaucratic means, because they wrote the regulations. They want to keep what came for British Labour during the recent council elections from coming for them.
So I guarantee that content will somehow become an "algorithm." The goal is to keep people who don't like them from speaking to each other.
> For example, by constantly ‘rewarding' users with new content, certain design features of TikTok fuel the urge to keep scrolling and shift the brain of users into ‘autopilot mode'. Scientific research shows that this may lead to compulsive behaviour and reduce users' self-control.
> Additionally, in its assessment, TikTok disregarded important indicators of compulsive use of the app, such as the time that minors spend on TikTok at night, the frequency with which users open the app, and other potential indicators.
I think it’s pretty uncontroversial to say these features are a problem. Anyone who has used these apps knows the feeling of “one more video”. Obviously TikTok will claim otherwise with their business model threatened.
The mechanism would be that if the user has chosen to follow an account then posts from that account fall under common carrier. If the platform chooses to show you other posts then it's under their responsibility.
"this" - you mean, engagement optimization? i think it would be different content. i don't know how much liability matters, people spend all day watching netflix too, and it is "liable."
ironically, i'm only reading this kind of low brow take because people upvote it, not because it makes any sense.
How does this specific horrible take rank so highly on HN whenever something adjacent to big tech gets posted? "Impartial common carrier" is not even an extant legal concept.
It's been argued to death already, I just have to express shock that I'm still seeing this non-starter constantly here.
Interactive computer services under CDA 230 are not the same as common carriers and your lack of sophistication on this point implies your suggestion this is “simple to solve” is not coming from a position of wisdom.
This is a bit of a difference between legal systems. Under a French civil-law system you would write laws to regulate the harms away. Under English common law, liability court cases about the harm would lead to precedents and then to common law derived from them. Though I'm not an expert on this.
This exactly. I find it perplexing that social media companies get to make decisions about what people see but then also get to pretend that they are just a neutral communication medium. They're clearly not.
HN also decides what you see. Yes, it's (mostly) based on other users' upvotes, but on TikTok et al. it's the same, just with a different metric (watch time, interaction, retention, and probably more). Where do you draw the line? Or do you have a different proposal for how these generated feeds should work? I don't think just showing content by users you follow is going to cut it, because ideally the purpose is to show new and interesting stuff nobody in your circle was aware of.
Why would anyone go to a new platform if they didn't know anyone to follow there? I don't see a problem there. I download TikTok and search for SexyDancingDinosaur I heard was on there and press follow.
This is just pedantic. "Algorithm" is obviously shorthand for: a recommendations system that shows me things I didn't explicitly opt into.
Compare e.g. Mastodon vs Twitter or Bluesky. The former simply won't show you anything you didn't explicitly subscribe to, and there's no hidden ranking system.
The law is not a computer program. It is up to human interpretation. The law merely needs to define the intent, which is actually fairly easy to explain: you're not a common carrier if you're mediating and promoting and ranking and pushing beyond what the user has subscribed to with their choices.
You can get technical that "sorting" and "filtering" is a form of that, but you'd be applying the lens of a software engineer, not a lawyer.
It is pedantic, and you have to be pedantic when talking about laws and regulations. The vaguely written laws have the tendency to be interpreted in the most restrictive way possible by the executive branch.
> "Algorithm" is obviously shorthand for: a recommendations system that shows me things I didn't explicitly opt into.
Under that interpretation it is applicable to any form of broadcast, including TV and radio driven by the user ratings of their previous programs.
I understand how you feel but you can't be like "I want the law to make things worse for these businesses" and then when asked to define the boundaries of the law you say "that's pedantic."
In many cases technology laws are myopic in that they only see the most massive websites and forget that there is a whole www outside Facebook. Is sorting by likes/upvotes a recommendation? Is the total of likes from your friends a recommendation? Can only data points from the last week be considered? Can there be a falloff by age?
At which point does the weights of the variables start to constitute a "recommendation"?
Alternative suggestion: Force them to open up the service and allow third party clients. Take Art. 20 GDPR "Right to data portability" and extend it to public content.
People have argued that by censoring what users can say, these platforms made themselves editors. Unless it's flat-out illegal, I don't see why anyone should waste any time trying to police the internet; it's a fool's errand. I've had Facebook's AI ding me for posting literal memes that, out of context, sound ridiculous.
Do you really think a couple of algorithm changes are all that's needed to turn social media into something that won't have a significant negative impact on the average child exposed to it?
Historically, the village 100% changed diapers, fed your children, nursed them, and generally helped out. Aunts, cousins, parents, and friends all pitched in to care for children in the community.
Historically, what you speak of is an idealized and generalized image. What village are you talking about? Where? When? What was the socioeconomic status of the family? Etc.
In reality it would vary a whole lot, not just in terms of time and place in a general sense, but also for individual families. If you had many relatives nearby, perhaps; but in some cases you might not, or you might actually be taking care of not just your children but also your parents-in-law, who are disabled, and your aunt, who is mentally unstable, partially due to her own husband and children dying in the famine a couple of years back.
And maybe you are also poor so you need to work land that isn't even your own, in addition to your own (maybe rented) plot, and you are socially shunned on top of that and your neighbors sure as hell aren't going to help out with your own children. But at least you only have two kids now since two died and you managed to give another away to live his whole life in a monastery.
I think kids and their free labor were the biggest wealth generating asset for the poor and as such wouldn't be given away except in the most extreme circumstances.
People make up history out of romanticized ideas of it all too much. Aunts, friends, cousins, and parents all had their own children and housework to care for. And the young couple was expected to provide more than they took in terms of help.
I have been to many places, in different cultures, and countries. Outside of blood relationships and church, I have not seen a villager change a diaper for another without compensation.