This is kind of a hilarious coincidence, because I just watched a podcast with Andrej Karpathy, the guy who was running Tesla's Autopilot program back when the sage "fully autonomous in one year" predictions kicked off their 15+ year long run (and counting).
These days he's moved on to predicting 10 years for AGI and is, I shit you not, citing his 15 year track record of making accurate predictions (timestamped link below if you want to have a laugh).
There's a funny but very crude saying in Slovakia, which is where he's from, so he might know it, lol. I cannot write it here for obvious reasons, but it's related to letting people do things to you for money ...
As others have pointed out, there were a lot of incentives ($$$) for Karpathy to behave like that during his tenure at Tesla.
I think (from personal experience) that they do this because 20-year-olds have few responsibilities, which means they can work longer hours and take more risks.
It’s not “energy”, because the productivity of older, more experienced founders is bound to be higher. Rather, younger, inexperienced founders have to compensate for their lower productivity with increased hours.
I’d say overall the risk factor is more important, because married folks with kids have to worry about losing their job and finding another one a lot more. If your spouse isn’t working or is on reduced income, you have to prioritise job safety.
The only way out of that for older founders is to be rich, and that’s an advantage in general no matter the age. In the UK many founders are highly privileged and can take risks because there is little downside for them.
There’s definitely a gap in the market for a VC that invests in experienced founders with more tolerance for moving more slowly but with higher quality results. Deep tech is one area this would work.
Yeah, because it works as a law of probability, not a law of physics. So out of ten thousand immature 20-somethings, a dozen may be very good at making something that generates a lot of wealth, and in those cases immaturity might be helping, not hurting, them.
There are many ways to do it. How good a way it is depends on what the market rate is for a 20-something possible unicorn/probable dud.
People in that age range sometimes have a nice set of traits: youthful self-confidence, the energy levels needed to bust their ass and sell, and just the right amount of insecurity to drive them to prove themselves. When you combine that with the naivety and inexperience that lets you get a lopsided deal (while your experience means the same doesn’t happen to them after you), you end up in a situation where you can afford to have 10 or 20 duds between each winner. And it’s a lot of fun.
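Back-of-envelope version of that portfolio math (every number here is invented for illustration, not a claim about real check sizes or exits):

    # Hypothetical early-stage portfolio: all figures are made up.
    check_size = 500_000               # assumed investment per founder
    bets_per_winner = 20               # "10 or 20 duds between each winner"
    winner_stake_at_exit = 50_000_000  # assumed value of the one winner's stake

    total_invested = check_size * bets_per_winner
    multiple = winner_stake_at_exit / total_invested
    print(f"${total_invested:,} in, ${winner_stake_at_exit:,} out -> {multiple:.1f}x")
    # $10,000,000 in, $50,000,000 out -> 5.0x

If the assumed exit holds, the one winner returns the whole portfolio several times over, which is why all the duds are affordable.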
Hm. Is Karpathy the fool who convinced Musk that the best sensing technology (lidar) shouldn't be used? That is disappointing. I like Karpathy and always thought the cameras-only mistake came from Musk as chief fool.
It's impossible to know from this data point alone if Karpathy was a leader in this regard, convincing Musk, or if he was tapped to be merely the enabler.
Karpathy was in charge of the autonomous driving unit, while Elon was in charge of playing video games, calling random people on Twitter pedos, making flamethrowers, poaching a bunch of NASA engineers and selling them back to the government, digging holes, gaslighting us about the Hyperloop, and uhhh... yeah, too busy to have been super hands on with this I'd imagine.
He doesn't say that at all on the video. He refers to his experience in the industry in regards to other people making predictions. To be clear, he knows that FSD predictions were too optimistic, and that's why he says that AGI will take a decade at least. Almost everyone else in the industry is predicting AGI "soon", i.e. in 1 or 2 years.
There are literally zero people who know the answer to that question, or even have an estimate based on anything more than a hunch. It could be one year, or it could be ten years or more.
> literally zero people who know the answer to that question
But plenty willing to guess. Folks without domain expertise tend to average the experts' guesses without accounting for the condition of being willing to guess in the first place.
If someone averages only the guesses from this subset, they're ignoring a conditional selection effect: the dataset (the set of guesses) is biased by the very act of being willing to guess, so the "average of guesses" doesn't represent the true expert population.
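A toy simulation of that effect (the distribution and the willingness model are entirely made up, just to show the mechanism):

    import random

    random.seed(0)

    # Hypothetical expert population: each expert's true timeline in years,
    # drawn from a wide, made-up distribution.
    true_views = [random.lognormvariate(2.5, 1.0) for _ in range(10_000)]

    # Assumed selection effect: the shorter your timeline, the more willing
    # you are to guess publicly.
    public_guesses = [t for t in true_views if random.random() < 1 / (1 + t / 10)]

    print(f"mean over all experts:  {sum(true_views) / len(true_views):.1f} years")
    print(f"mean of public guesses: {sum(public_guesses) / len(public_guesses):.1f} years")
    # The second mean comes out noticeably lower: averaging only the
    # willing-to-guess subset doesn't represent the whole population.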
There's also just the reality that most of these experts have a lot of skin in the game, and there's lots of pressure in this world to be a true believer, so I'm not going to buy into it too much.
But together with a bit of reasoning, you have something like an HN comment that people willingly want to discuss, instead of some off-hand "that'll never come".
I'm fairly sure which type of sentiment I'd prefer; at least it'd be half-assed then.
I think less than a decade, on the basis that the hardware requirements to experiment with different algorithms have recently become quite reasonable, so you get things like Karpathy's nanoGPT, which he wrote in a month and which can be trained for a few hundred dollars. People are going to be trying all sorts of ideas to try to get more human-like intelligence.
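Rough sanity check on that cost (the rental rate is an assumption; the ~4 days on an 8x A100 node is roughly what the nanoGPT README quotes for the GPT-2 124M reproduction):

    # Back-of-envelope nanoGPT training cost; the rate is assumed.
    gpus = 8
    hours = 4 * 24          # ~4 days of training
    usd_per_gpu_hour = 1.0  # assumed cloud spot rate

    print(f"~${gpus * hours * usd_per_gpu_hour:,.0f}")  # ~$768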
That is not a basis. LLMs do not even exist within the same plane of intelligence that AGI would. NanoGPT gets us no closer to AGI than Tylenol gets us to curing cancer. And predictions about when we'll "cure cancer" have been about as accurate as I suspect AGI will be. When you don't understand the problem or the solution, you can't estimate the timeline for the solution. LLMs regurgitate and summarize text with unreliable accuracy, and they feign intelligence by reproducing the work of intelligent humans.
To me LLMs seem quite similar to the bit of the brain which does speaking without thinking about it. That would suggest people could build on that to do the other things the brain does like thinking and learning. We shall see I guess.
> LLMs do not even exist within the same plane of intelligence that AGI would. NanoGPT gets us no closer to AGI than Tylenol gets us to curing cancer.
You're saying that with a lot of confidence, but the truth is that we don't know whether paracetamol might be the cure for cancer, and we know even less for sure that LLMs "do not even exist within the same plane of intelligence that AGI would".
Almost all discoveries are surprising, and after being discovered they look kind of obvious in hindsight. It just takes someone stumbling onto the right way of putting the right pieces together, and boom, something new is discovered and invented.
With that said, I personally also don't believe LLMs to be the solution to AGI, but I won't claim with such confidence that they won't be involved in it; it would be kind of outlandish to claim something like that this early on.
> Almost everyone else in the industry is predicting AGI "soon", i.e. in 1 or 2 years.
I've not read any original quote from anyone saying that, either. The closest things I have seen were people playing a game of telephone with Altman's "thousands of days" until it ended up as that, and the 2027 paper, which was predicated on the USA systematically deleting every obstacle rather than vibe-coding a tariff policy and having ICE mass-arrest people building relevant factories, and even then the authors indicated 2027 was on the optimistic side.
No serious person is predicting that. There are a few categories of person who are making those predictions:
- AI CEOs, who require hype to be able to keep hemorrhaging money long enough to... hemorrhage more money
- The poor AI researchers the AI CEOs drag into public to lie in support of their narrative
- People who are not close to the tech, but who are influential in politics, finance, podcasts, etc., who are wholly convinced by the grifters above - they did a little bit of poking around and saw some smart-looking regurgitated boilerplate
- Grifter influencers, entrepreneurs, etc who don't care, they just want to push AI doom or vibe coding dreams for clicks, or sell some shitty AI solutions to companies eager to get in on the hype
The people you don't hear from are the AI researchers who know most of this is bullshit, but they aren't going to go on the record publicly expressing that they don't believe in the tech they're working on, because they like to be employed (and for absurd amounts of money)
I don't recall Andrej making "next year!" claims, it was always Elon. I found Andrej's talks from that time to be circumspect and precise in describing their ideas and approach, and not engaging in timeline speculation.
Do we know that Karpathy was the one making those predictions? Musk is absolutely notorious for throwing completely made-up, detached-from-reality deadlines onto his teams, including publicly committing to them.
Karpathy, the celebrity AI influencer with no relevant autonomous vehicle experience, was hired to make Elon's bad predictions look good. He was an active and willing component of the scam. I'm sure he was paid handsomely.
Autopilot's first director, Sterling Anderson, was fired because he was not willing to go along with the scam.
I watched the same video. Karpathy isn't predicting AGI in 10 years. He's saying we won't have AGI in less than 10 years -- contrary to what industry boosters are promising -- and that what we end up with won't be some sort of god-in-a-box.
I mean this question in good faith, and with all due respect to Karpathy: is there any reason to give this guy any credence beyond his ability to teach about LLMs? The only interesting industry experience of his that I'm aware of is leading Tesla's AI division during the period where they decided on this disastrous and dangerous vision-only approach that has resulted in multiple deaths. That alone makes me think he's not only incompetent but unethical. Am I missing something?
> during the period where they decided on this disastrous and dangerous vision-only approach that has resulted in multiple deaths
To be fair, it was a direct decision from Elon due to covid supply-chain shortages of radar and ultrasonic sensors, not from the engineers (as is common at Elon companies).
But Andrej deserves some of the blame because he was too busy sucking on the $TSLA stock teat to say anything
Yes, the decision was definitely made well before covid. It's just that Musk and the fanboys couldn't stop crowing about it during the covid shortages, because those temporarily made camera-only sensing look like a good idea.
Oh come on. How many of us have been an engineer on a project and had to watch an exec make promises we knew were not reality?
It also doesn’t necessarily mean he was a yes man. Often in these situations people spell out their confidence levels plainly and directly to such execs and it just bounces off.
I’m also seeing people below suggesting he could have publicly voiced his concerns, but that probably wasn’t even a legal option, for multiple reasons.
https://youtu.be/lXUZvyajciY?si=3PyVM476W6k3n-DR&t=181