Because it's a fantasy for an unknown amount of time. 1 year? 10? 50? Never? There hasn't been a single proper breakthrough in continual learning that would enable it. Anyone who studies CL will also get super pissed at it: the problem and the solution counteract each other under our current understanding, yet a fruit fly does it no problem!
Echoing the other comment, they showed another big thing: the output of an AI model is the AI model. If you mass-scrape prompts against their AI, you can recreate it almost exactly.
Very dangerous if you think about it: the product itself is the raw building block for itself.
OpenAI spends $1B on their model, releases it, and instantly it gets scraped by a million bots so some country or company can build their own model.
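The mechanism being described is model distillation from scraped outputs alone. As a toy sketch, assuming a deliberately simple linear "teacher" as a stand-in for API calls to the deployed model (all names here are hypothetical), a student fit only on (input, output) pairs can recover the teacher's behavior without ever seeing its weights:

```python
import numpy as np

# Hypothetical stand-in "teacher": in reality this would be API calls
# to the deployed model being scraped.
rng = np.random.default_rng(0)
W_teacher = rng.normal(size=(4, 3))

def teacher(x):
    # The scraped model's output for a given input.
    return x @ W_teacher

# Step 1: mass-query the teacher to build a training set.
X = rng.normal(size=(1000, 4))
Y = teacher(X)

# Step 2: fit a "student" on the (input, output) pairs alone --
# no access to the teacher's internals is needed.
W_student, *_ = np.linalg.lstsq(X, Y, rcond=None)

# The student reproduces the teacher almost exactly on unseen inputs.
X_test = rng.normal(size=(10, 4))
err = float(np.max(np.abs(X_test @ W_student - teacher(X_test))))
print(err < 1e-8)
```

Real LLM distillation trains a neural student on millions of scraped prompt/response pairs rather than solving a least-squares problem, but the economics are the same: querying is vastly cheaper than the original training run.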
I bet you the predictions are largely correct but technology doesn't care about funding timelines and egos. It will come in its own time.
It's like trying to make fusion happen only by spending more money. It helps, but it doesn't fundamentally change the pace of true innovation.
I've been saying for years now that the next AI breakthrough could come from big tech, but it has just as likely a chance of coming from a smart kid with a whiteboard.
Well, the predictions are tied to the timelines. If someone predicts that AI will take over writing code sometime in the future I think a lot of people would agree. The pushback comes from suggesting it's current LLMs and that the timeline is months and not decades.
> I've been saying for years now that the next AI breakthrough could come from big tech, but it has just as likely a chance of coming from a smart kid with a whiteboard.
It comes from the company best equipped with capital and infra.
If some university invents a new approach, one of the nimble hyperscalers / foundation model companies will gobble it up.
This is why capital is being spent. That is the only thing that matters: positioning to take advantage of the adoption curve.
All of moltbook is the same. For all we know it was literally the guy complaining about it who ran this.
But at the same time, true or false, what we're seeing is a kind of quasi-science fiction. We're looking at the problems of the future here, and to be honest, it's going to suck for future us.
I don't think the elite think all voters are dumb; more like they think voters are easy to manipulate into voting for something (which is largely true). Anecdotally, I easily get manipulated by the type of information I consume. I occasionally catch it after the fact or through a conversation with others, but there's no telling how much I've just accepted that's manipulated.
From that angle it's a game of who has the money, power, and distribution to enact this manipulation.
Twitter being a prime example. Is Elon "right"? Maybe, but the main point is it doesn't matter, as he has the distribution.
If you have money but little to no distribution -> you do what Gary is doing. Maybe he'd be interested in removing rights to vote, but someone like Zuck would NOT, because he already has an outsized ability to influence as he sees fit.
lol pretty much. their reservation price was more than met.
Although considering Brin's interactions with female employees etc, no surprise really. They were full of it from the off. Page is better at hiding it.