There will be no great migration like we saw in 2010 with users shifting from Digg to Reddit; instead, only a slow trickle of users escaping to more dispersed communities.
Here the human condition can flourish in a more localized way, with more participation (less lurking). No more winner takes all.
It's definitely something that's happened community by community. A lot of space news I care about is still just on Twitter but urbanism stuff has mostly moved to Bluesky, for instance.
The real issue is that none of these alternatives (Threads, Mastodon, Bluesky) offer anything other than "we're not Twitter".
Digg to Reddit was a unique case, because Digg very specifically fucked up their site, badly, with the V4 update. Reddit was in a great spot to pick up users from Digg, not only because it had a similar overarching purpose as a link aggregator, but because of additional features like subreddits, which enabled smaller, more casual link sharing and comment sections. It was a clear upgrade from Digg V4. I do think Reddit would have eventually overtaken Digg anyway; V4 only sped up the process.
Technically and product-wise, there's not a whole lot wrong with Twitter right now. If you're on there to look at funny memes, cat pictures, celebrity news and pornography -- which encapsulates about 98% of Twitter use cases -- it still functions much better than the alternatives. The migrations are happening for meta reasons, either political or ToS-related (specifically, X claiming they can use images you post for AI training). This isn't a recipe for long-term success; it's a precursor to people making a bunch of noise for a month and then heading back to Twitter.
As someone who doesn't really participate in these large social networks -- even modern HN is way too mainstream for me honestly -- I do think it's a good thing people get off them, though. Smaller communities are a good thing. Shouting your loudest, hottest political takes on Twitter so you can pat yourself on the back for 10k likes is a fast track to mental health issues.
> Technically and product-wise, there's not a whole lot wrong with Twitter right now.
The Bluesky app both performs better and uses much less battery than Twitter does. I think that's because it uses Google ads now, but I'm not sure.
Twitter also has disk space leaks - I regularly find the app has gone up to 3GB or so. (And it's not from image caching, seems to be an SQLite db of all accounts I've seen posts from.)
The Bluesky app is becoming a fine React Native exemplar, and it's been a blast watching former Facebook React guy Dan Abramov, now working at Bluesky, start using React Native for the first time. https://bsky.app/profile/danabra.mov
> Digg very specifically fucked up their site, badly,
And Twitter didn't? They use the "nazis at a bar" metaphor for a reason. UX is not the only way to screw up a site. Ashley Madison didn't torpedo because of a bad UI redesign.
Even on a product level, the change to no longer hide tweets from blocked accounts may as well have been a Digg v4 for high-profile people. There was no profit to be had and nothing to gain from this update. It was purely ideologically driven.
> it still functions much better than the alternatives.
In which ways? Genuinely curious. All these social media feeds have, by design, blended into the Instagram/TikTok mush of infinite scrolling and predictive "you might like this!" algorithms to maximize engagement. None feels much easier or harder than the others.
Unless you have two enormous networks where one happens to be libertarian/right-leaning and the other is mostly very left. Both have huge audiences and can likely thrive just fine on their own. I don't particularly think it's healthy, but it seems like that's just how humans are.
I think that if you have a neutral social network and a politically-charged social network, the neutral one will attract more eyeballs. People like to see ideas challenged and debated. Heavily-moderated and single-sided networks (Mastodon, Truth social, etc.) are simply boring compared to celebrity drama and political clashes on twitter.
What you have in the modern internet is that left-leaning users avoid networks that aren't moderated in their favor (in a conscious attempt to prevent moving the overton window), which leads to right-wing takeover (and eventually death of the social network because there are only right-wingers, see the graveyard of reddit alternatives). This trend would have been reversed a decade ago.
AI is going to continue to make incremental progress, particularly now through hardware gains. No one can even define what AGI is or what it will look like, let alone whether it would be something that OpenAI would own. Feature progress is too incremental to suddenly pop out as "AGI". Fighting about it seems like a distraction.
I don't think the chance is 0%, but I do think that the chance is very, very close to 0%, at least if we're talking about it happening with current technology within the next hundred years or so.
Are you one of those people? How can you be so confident? I think everyone should have updated their priors after seeing how surprising the emergent behavior in GPT3+ is.
I don't think GPT3's "emergent behavior" was very surprising, it was a natural progression from GPT2, and the entire purpose of GPT3 was to test the assumptions about how much more performance you could gain by growing the size of the model. That isn't to say GPT3 isn't impressive, but its behavior was within the cone of anticipated possibilities.
Based on a similar understanding, the idea that transformer models will lead to AGI seems obviously incorrect. As impressive as they are, they are just statistical pattern matchers over tokens, not systems that understand the world from first principles. And in case you're among those who believe "humans are just pattern matchers": that might be true, but humans model the world based on real-time integrated sensory input, not on statistical patterns in a selection of text posted online. There's simply no reason to believe that AGI can come out of that.
Please read the paper. The authors use more precise and specific metrics that qualitatively measure the same thing. Instead of exact string match, which is 1 if the answer is 100% correct and 0 if there is any failure, they use per-token error. The crux of their argument is that per-token error is a better choice of metric anyway, and the fact that "emergent abilities" do not occur under this metric is a strong argument that those abilities don't really exist.
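To make the distinction concrete, here's a minimal sketch (my own toy example, not code from the paper) of how the two metrics can tell very different stories about the same near-miss answer:

```python
# Hypothetical illustration: a model answers a 5-digit arithmetic problem.
# Exact-string match scores a near-miss the same as total garbage, while
# per-token accuracy reveals the underlying gradual improvement.

def exact_match(pred: str, target: str) -> float:
    return 1.0 if pred == target else 0.0

def per_token_accuracy(pred: str, target: str) -> float:
    # Compare position by position, padding the shorter string.
    n = max(len(pred), len(target))
    pred, target = pred.ljust(n), target.ljust(n)
    return sum(p == t for p, t in zip(pred, target)) / n

target = "13579"
weak   = "13779"   # one digit wrong

print(exact_match(weak, target))         # 0.0 -- looks like total failure
print(per_token_accuracy(weak, target))  # 0.8 -- most digits already right
```

Under exact match, a model that gets 4 of 5 digits right looks no better than one producing noise, which is exactly why a smooth improvement can masquerade as a sudden jump.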
However thermal energy does not more precisely or specifically measure a phase transition. They are only indirectly linked - nobody would say that thermal energy is a better measure of state-of-matter than solid/liquid/gas. Your argument makes absolutely zero sense. Frankly it seems intentionally ignorant.
Per token error is a fairly useless metric. It's not predictive and it tells you absolutely nothing.
They say it's a superior metric but clearly the wider research community disagrees since no one has cared to adopt per token error as a metric in subsequent papers.
>and the fact that "emergent abilities" do not occur when using this metric is a strong argument that those abilities don't really exist.
If your conclusion is that those abilities don't exist then you clearly didn't read the paper very well.
They never argue those abilities don't exist, they simply argue whether we should call them "emergent" or not.
>However thermal energy does not more precisely or specifically measure a phase transition. They are only indirectly linked - nobody would say that thermal energy is a better measure of state-of-matter than solid/liquid/gas. Your argument makes absolutely zero sense. Frankly it seems intentionally ignorant.
Phase Changes are literally driven by changes in thermal energy.
Water boils when it absorbs enough thermal energy to break intermolecular forces keeping its liquid state together.
solid/liquid/gas is descriptive. It's not a measure of anything.
Anyway, the point is simple. Despite thermal energy driving state change after a certain threshold, that "point" doesn't look like anything special.
Smooth quantitative change sometimes results in sudden qualitative changes.
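A toy model of that point (my own sketch, assuming independent per-token correctness, which real models don't strictly satisfy): if each token is right with probability p, an n-token answer is entirely right with probability p**n, so a smooth climb in p produces a sharp-looking jump in exact match.

```python
# Smooth per-token accuracy p, sharp sequence-level "ability":
# the chance an n-token answer is *entirely* correct is p**n, which
# hugs zero and then shoots up as p improves gradually.
n = 10  # answer length in tokens
for p in [0.5, 0.7, 0.9, 0.95, 0.99]:
    print(f"p={p:.2f}  exact-match rate ~ {p**n:.4f}")
```

At p=0.5 the exact-match rate is under 0.1%; by p=0.99 it's over 90%, even though p itself never jumped.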
Just feed the output through a quality system (with retry if the quality is too bad), scaffold it a bit, then run it back into the LLM. Should work(tm)
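Something like this loop, sketched below with placeholder names (`call_llm` and `quality_score` are stand-ins, not a real API):

```python
# Sketch of the retry-until-good-enough scaffold described above.
# call_llm and quality_score are stand-ins for a real model call and
# whatever checker you plug in (a grader model, unit tests, a regex...).
def generate_with_retries(prompt, call_llm, quality_score,
                          threshold=0.8, max_tries=3):
    best, best_score = None, float("-inf")
    for _ in range(max_tries):
        candidate = call_llm(prompt)
        score = quality_score(candidate)
        if score >= threshold:
            return candidate          # good enough, stop early
        if score > best_score:        # otherwise keep the best so far
            best, best_score = candidate, score
    return best                       # fall back to best attempt seen

# Toy stand-ins so the sketch runs:
answers = iter(["bad", "meh", "good"])
result = generate_with_retries(
    "prompt",
    call_llm=lambda p: next(answers),
    quality_score=lambda c: {"bad": 0.1, "meh": 0.5, "good": 0.9}[c],
)
print(result)  # "good"
```

Whether the quality gate is itself reliable is, of course, the hard part.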
Progress is definitely not incremental, it's exponential.
The same performance (training an LLM to a given perplexity) can be achieved 5x cheaper each year, while the amount of money going into deep learning infrastructure is increasing exponentially right now.
If this method is able to get to AGI (which I believe, though many people debate it), human-level intelligence will mostly just be "skipped" and won't be a clear milestone.
In nature, exponential curves reveal themselves to be sigmoidal on a long enough time scale. Since you're on HN you probably have a mathematical bent, and you should know that.
Could you let me know what use case or challenge you're seeing? I can help to answer this question more specifically. Thanks for taking the time to reach out!
Hi, I've kept the user management server both simple (MVP) and focused on reducing the time it takes for application developers to build/integrate user authentication. My goal right now is to listen and gather any and all feedback developers have in this area (likes/dislikes) so I can understand better.
A general hypothesis I have is that the Deno team is just taking on too much to make the investment work, in terms of, well, everything really: version compatibility to avoid breaking 3rd party libs, one-by-one certification of npm module compatibility, Deno core module rewrites, a hosting company, and maintaining developer tooling and the ecosystem, to name a few.