
I'm old enough to remember when companies worth $1 billion were called "unicorns." Now we have a company raising 122 times that? Valued at nearly 1000 times that...?

At least they're throwing consumers a bone via the ARK deal. It's crazy how little AI exposure is available to anyone who isn't already wealthy and/or connected.


I think this is a reality-distortion field rivaling that of Jobs, and a crisis of faith. Nobody apparently believes that capital is worth investing into anything but AI.

> Nobody apparently believes that capital is worth investing into anything but AI.

This is the main reason we see this insane investment into AI imo. If you imagine having lots of money, where should you invest that currently?

Housing market: Seems very overvalued (at least in Germany). Also, with the current uncertainty and inflation, it's hard to make an investment that pays back over 20-30 years. So building is also difficult.

Stocks: Very volatile currently, and not only since Iran. To me it seems that since the 2008 financial crisis, investors haven't enjoyed stocks as they did before.

Gold: Only if you are paranoid about the collapse of society. It doesn't make sense to invest in something that pays no interest.

Crypto: Same as gold, but better if you like gambling. I would assume most people who are very rich don't gamble with most of their fortune.


Looking around, and especially forward, it would be military tech, e.g. [1], and its supply chain, e.g. [2] :-\ Valuations are not as crazy, but I bet there's going to be a lot of demand in the coming decade, unfortunately.

Chip production, too, of course, but it's overflowing with money already, apparently. It's growing though: there are real, actual shortages of things like RAM and SSDs, so there's money to be made immediately if you can get in. Chinese RAM manufacturers are building out like crazy.

[1]: https://www.ultimamarkets.com/academy/anduril-stock-price-ho...

[2]: https://www.marketscreener.com/quote/stock/RHEINMETALL-AG-43...


> it would be military tech

Anduril is the only company in this sector in the US that has any promise and they aren't even public. Most of us are not going to get our hands on this.

Traditional defense sector looks more like Jeep, or Kodak...


Anduril has yet to deliver anything of consequence. I hope they shake up the industry but to say they are the next hot thing and write off the primes at this stage is premature.

Invest in the Ukrainian drone producers which proved themselves on the literal battlefield! Some of the Gulf states already did.

The demand for more Patriot missiles is large, now that much of the stockpile has suddenly been spent. Raytheon should do fine just based on that.

This admin will probably source Patriots made in China.

> Looking around, and especially forward, it would be military tech, e.g. [1], and its supply chain, e.g. [2]

Only viable if you’re okay with the ethical implications of funding war.


Would you be fine with the ethical implications of funding the industry to fight WWII? Would you consider funding Ukrainian military unethical? Or Taiwanese?

This is, sadly, not theoretical, and I'm afraid we'll soon see more of such choices, not fewer.


> Stocks: Very volatile currently, and not only since Iran. To me it seems that since the 2008 financial crisis, investors haven't enjoyed stocks as they did before.

These returns do not qualify as “enjoying stocks”?

https://investor.vanguard.com/investment-products/etfs/profi...

The returns are higher than before 2008; the previous 15 years are unprecedented.

https://www.macrotrends.net/2526/sp-500-historical-annual-re...


> To me it seems that since the 2008 financial crisis, investors haven't enjoyed stocks as they did before

Maybe in Europe. The US stock market has nearly tripled since then. Literally the best period of stock growth in history.


"The Roaring Twenties roared loudest and longest on the New York Stock Exchange. Share prices rose to unprecedented heights. The Dow Jones Industrial Average increased six-fold from sixty-three in August 1921 to 381 in September 1929. After prices peaked, economist Irving Fisher proclaimed, "stock prices have reached 'what looks like a permanently high plateau.'"

https://www.federalreservehistory.org/essays/stock-market-cr...


You can argue that current market multiples are higher than 1929 [1] - and they're certainly high - but this also ignores the mechanism that drove that crash, focusing only on the symptoms. We simply aren't doing the kind of consumer margin buying that drove the '29 crash. It isn't even close. Average schlubs were leveraged to the stratosphere to buy shares of boring industrial stocks.

[1] https://www.multpl.com/shiller-pe


> The US stock market has nearly tripled since then. Literally the best period of stock growth in history.

The only thing I meant to point out was that a very high stock price by itself is no guarantee that there isn't a crisis around the corner. We plugged a lot of holes after 2008 and then reversed a lot of those fixes, and I hear retail investors talking about their stocks at birthday parties again. Déjà vu... of course this time it will be different. Or not. Let's just say that with the proverbial bull in the earthenware goods store on the loose, if we only end up with another financial crisis, that might actually not be so bad.


I actually calculated wrong. It went up 7.5x, not 3x.

In the Roaring Twenties, stockbrokers allowed clients 10:1 margin. Investors were not as well informed as they are today. There was no deposit insurance.

The SEC didn't even exist yet (it was created in 1934), and there was way more shady shit going on. In that respect, and with the repeal of Glass-Steagall, we're reverting to the pre-Depression era.


Ok second best :-) I wasn't alive in the 1920s though

True, but it is close enough in time that we should heed the lessons learned lest we repeat the experience.

Do you know the actual lessons of that crash? Because we don't allow retail investors to go 10:1 on leverage anymore. There are a lot more lessons and none of them apply to this situation (even Glass-Steagall). This is much closer to the dot com crash in 2001 in how it looks, just a lot more concentrated and probably a bit bigger. If all you got is "number go up too much" then you probably shouldn't be investing your own money.

The good news is that it's almost all rich folks' money on the line here, and only a small amount of dumb money. That's very different from 2008, where it was mostly the indexes that got hit, and those are more middle-class/upper-middle-class concentrated.


you gotta have some of all the above actually.

OpenAI is making $24b a year. It's a 32x revenue multiple. High, but not insane. Spinning this as a story of overinvestment doesn't make sense.

Are you conflating price-to-earnings with price-to-revenue?

32x earnings is high. 32x revenue is probably insane.
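To make the distinction concrete, here's a back-of-envelope sketch using the figures from this thread. The 10% net margin is a purely hypothetical assumption for illustration (OpenAI currently loses money), not a real figure:

```python
# Back-of-envelope: 32x revenue vs. 32x earnings, using the
# thread's figures (~$24B/yr revenue, a 32x multiple).
revenue = 24e9
multiple = 32

implied_valuation = revenue * multiple  # 32x *revenue*
print(f"32x revenue: ${implied_valuation / 1e9:.0f}B")

# At a hypothetical 10% net margin, that same valuation would be
# a 320x *earnings* multiple -- a very different animal:
margin = 0.10
earnings = revenue * margin
pe = implied_valuation / earnings
print(f"Implied P/E at 10% margin: {pe:.0f}x")
```

A revenue multiple always understates the equivalent earnings multiple by a factor of 1/margin, which is why the two are not interchangeable.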

That's the tao of hyper-financialization: irrational exuberance must keep growing, big and up forever like stonks, or it bursts like the dot-com bubble and tulip mania. It's funny money that cannot be liquidated for real value beyond a tiny fraction of the imaginary trillions being thrown around. Similarly, Nvidia's $4T market cap makes absolutely no sense when it has but a few incestuous customer-partner-investors throwing around tens of billions each per year, devoid of fundamentals like essential service offerings that turn a profit. That handful of whale customers could make their own chips or cease buying in large quantities at any time.

I wonder what is not getting invested in, because AI has been crowding out everything else since '22.

It has to be brutal out there for everybody else, if all the money is going to AI.


And not even actual capital either, as much of the investment in AI has been through cloud and GPU credits, so that AWS or Microsoft Azure don't actually have to hand over billions in straight cash.

But they're really cagey about actually handing money over to them today

It's the result of too much echo chambered bullshit floating around daily about how capable LLMs really are. It's literally crypto/blockchain all over again. It's one big lie that a lot of people have bought into which causes it to self-perpetuate, like religion.

> At least they're throwing consumers a bone via the ARK deal.

I had to look this up. There's a venture fund you can invest in with as little as $500 as a consumer -- though it's limited to quarterly withdrawals.

https://www.ark-funds.com/funds/arkvx

The fund is invested in most of the hot tech companies.


ARK was all the rage around early pandemic time when wallstreetbets was in the news a lot. Most people probably know it from then.

ARK funds have a cult-like following, but then again they are a typical high-beta player that outperforms in hot markets and heavily underperforms in cold ones. Fees are high. The CEO (CIO) is a woman who looks for investment advice in the Bible and asks God for his thoughts (I am not joking).

If anything being associated with ARK in any form is a big negative signal.


An ARK ETF is a smell to me. Besides, based on their holdings, I would never invest. 18% of the fund is SpaceX.

I would not call an effective 2.9% expense ratio "throwing a bone".

Also, the valuation for such a debt laden company should be viewed with great skepticism. I'm afraid a lot of mutual funds will end up holding the bags.

It's not that far off from the standard 2% mgmt fee and 20% of excess performance?

The money is worth much, much less than it was before; we live in times of global hyperinflation.

> At least they're throwing consumers a bone via the ARK deal. It's crazy how little AI exposure is available to anyone who isn't already wealthy and/or connected.

It is deliberate. Period.

It's always been known that you make money in the private markets and pre-IPO companies, and that retail is the final exit for insiders and early investors.

Retail is not allowed in early to these companies (because that would ruin the point of being an insider), and this "exposure" has to be near the top.


Who are "these" companies? Did retail get into Google, Facebook, Amazon, Tesla, etc before the top?

Also, aren't AI businesses losing a lot of money each year? Pretty sure there is some risk involved that is not good for retail.


There are ways now for retail to get into these companies; check out Hiive or EquityZen... just beware of massive dilution.

VCX (Fundrise) has way more exposure than ARKVX

It's also trading at a huge premium. Probably worth a read if you're considering it: https://www.morningstar.com/funds/fundrise-innovation-is-not...

Even a billion dollars is crazy money. If you have a company with a subscription service that costs $100 yearly, and you have ~2m customers with a 50% profit margin, your company makes ~$100m every year in profit. Imo that's what is actually worth a billion dollars, maybe even a bit less.

That company is probably worth about $8b, FYI. Obviously that's an estimated average, but a P/E ratio of 80 gives you that valuation.
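Working the parent's round numbers through explicitly (all inputs are the commenters' own figures; the P/E of 80 is the assumed multiple):

```python
# Sketch of the subscription-company valuation from the comments above.
price_per_year = 100        # $100/yr subscription
customers = 2_000_000       # ~2M subscribers
profit_margin = 0.5         # 50%

revenue = price_per_year * customers   # $200M/yr
profit = revenue * profit_margin       # $100M/yr

pe_ratio = 80                          # assumed earnings multiple
valuation = profit * pe_ratio
print(f"Valuation: ${valuation / 1e9:.0f}B")
```

At a more conservative P/E of 10, the same $100M of profit would indeed value the company at the parent's $1B.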

yep, I do a simple version of this in Google Sheets. Very useful to be able to "Ctrl-F" your life, especially when combined with Google Maps location history.


If this was a joke, it certainly flew over most people's heads...


This will happen with GUIs as well, once computer-use agents start getting good. Why bother providing an API, when people can just direct their agent to click around inside the app? Trillions of matmuls to accomplish the same result as one HTTP request. It will be glorious. (I am only half joking...)


> But like humans — and unlike computer programs — they do not produce the exact same results every time they are used. This is fundamental to the way that LLMs operate: based on the "weights" derived from their training data, they calculate the likelihood of possible next words to output, then randomly select one (in proportion to its likelihood).

This is emphatically not fundamental to LLMs! Yes, the next token is selected randomly; but "randomly" could mean "chosen using an RNG with a fixed seed." Indeed, many APIs used to support a "temperature" parameter that, when set to 0, would result in fully deterministic output. These parameters were slowly removed or made non-functional, though, and the reason has never been entirely clear to me. My current guess is that it is some combination of A) 99% of users don't care, B) perfect determinism would require not just a seeded RNG, but also fixing a bunch of data races that are currently benign, and C) deterministic output might be exploitable in undesirable ways, or lead to bad PR somehow.
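A toy illustration of the point: "random" next-token selection becomes fully reproducible once the RNG is seeded. This is a sketch, not any real inference API; the tokens and probabilities are made up:

```python
import random

# Toy next-token sampler: given (token, probability) pairs, draw one
# token. With a fixed seed, the "random" choice is deterministic.
def sample_next_token(probs, seed):
    rng = random.Random(seed)  # seeded RNG -> reproducible draws
    tokens, weights = zip(*probs)
    return rng.choices(tokens, weights=weights, k=1)[0]

probs = [("cat", 0.6), ("dog", 0.3), ("fish", 0.1)]

# Same seed, same output, every time:
a = sample_next_token(probs, seed=42)
b = sample_next_token(probs, seed=42)
assert a == b

# Temperature 0 is even simpler: greedy argmax, no RNG at all.
greedy = max(probs, key=lambda p: p[1])[0]  # -> "cat"
```

The remaining nondeterminism in production systems comes from the numerics (see the batching discussion below this comment), not from the sampling step itself.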


Deterministic output is incompatible with batching, which in turn is critical to high utilization on GPUs, which in turn is necessary to keep costs low.


Batching doesn't mean the computation suddenly becomes non-deterministic. Ideally, it just means you perform the same computation on multiple token streams in the batch simultaneously, without the values interacting with each other. Vectorization, basically.

Batching leads to cross-contamination in practice because of things like MoE load-balancing within the batch, or supporting different batch sizes with different kernels that have different numerical behavior. But a careful implementation could avoid such issues while still benefiting from the higher efficiency of batching.
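The root cause of that numerical divergence is that floating-point addition isn't associative, so kernels that accumulate in a different order (e.g. for different batch sizes) can produce bitwise-different results for the "same" math. A minimal demonstration:

```python
# Floating-point addition is not associative: summing the same three
# values in a different order gives bitwise-different results. This is
# why a different reduction order (from a different kernel or batch
# size) can change an LLM's logits, and hence its sampled output.
left = (0.1 + 0.2) + 0.3
right = 0.1 + (0.2 + 0.3)
print(left == right)   # False
print(left, right)     # 0.6000000000000001 0.6
```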


> This is emphatically not fundamental to LLMs! Yes, the next token is selected randomly; but "randomly" could mean "chosen using an RNG with a fixed seed."

This. Thanks for saying that, because now I don't need to read the article, since if the author doesn't even get that, I'm not interested in the rest.


LLMs are, fundamentally, compressed lookup tables that map input -> input + next token. Or, If you like, input -> input + list of possible next tokens with probabilities.
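A toy version of that framing, with the "table" written out explicitly; a real LLM computes this mapping with a neural network rather than storing it, and the contexts and probabilities here are invented for illustration:

```python
# Toy "lookup table" view of a language model: context -> distribution
# over possible next tokens. The network is, in this framing, a lossy
# compression of a table like this over all possible contexts.
table = {
    "the sky is": {"blue": 0.7, "clear": 0.2, "falling": 0.1},
    "once upon a": {"time": 0.95, "mattress": 0.05},
}

def next_token_greedy(context):
    dist = table[context]
    return max(dist, key=dist.get)  # most likely next token

assert next_token_greedy("the sky is") == "blue"
```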


D) they can’t do rolling deployments of new versions of the models or tweaks to the model if people assume nothing should change for the same prompt


The temperature parameters largely went away when we moved towards reasoning models, which output lots of reasoning tokens before you get to the actual output tokens. I don’t know if it was found that reasoning works better with a higher temperature, or that having separate temperatures for reasoning vs. output wasn’t practical, but that’s my observation of the timing, anyway. And to the other commenter’s point, even a temperature of 0 is not deterministic if the batches are not invariant, which they’re not in production workloads.


IMO this is one of the best use cases for AI today. Each function is like a separate mini problem with an explicit, easy-to-verify solution, and the goal is (essentially) to output text that resembles what humans write -- specifically, C code, which the models have obviously seen a lot of. And no one is harmed by this use of AI; no one's job is being taken. It's just automating an enormous amount of grunt work that was previously impossible to automate.

I'm part of the effort to decompile Super Smash Bros. Melee, and a fellow contributor recently wrote about how we're doing agent-based decompilation: https://stephenjayakar.com/posts/magic-decomp/


And renaming all the variables from the auto-generated ones into something human-readable was always a thankless task that LLMs are really good at.


> And no one is harmed by this use of AI; no one's job is being taken

what about: see cool app, decompile it, launch competing app.

(repeat)


Decompiling seems like the hard way to go here. Lots of clones pop up for popular games and apps all the time. I don't think you need to go down the decompile route to achieve that.


"The steamroller is still many inches away. I'll make a plan once it actually starts crushing my toes."

You are in danger. Unless you estimate the odds of a breakthrough at <5%, or you already have enough money to retire, or you expect that AI will usher in enough prosperity that your job will be irrelevant, it is straight-up irresponsible to forgo making a contingency plan.


What contingency plan is there exactly? At best you're just going from an automated-already job to a soon-to-be-automated job. Yay?

I'm baffled that so many people think that only developers are going to be hit and that we especially deserve it. If AI gets so good that you don't need people to understand code anymore, I don't know why you'd need a project manager anymore either, or a CFO, or a graphic designer, etc etc. Even the people that seem to think they're irreplaceable because they have some soft power probably aren't. Like, do VC funds really need humans making decisions in that context..?

Anyway, the practical reason why I'm not screaming in terror right now is because I think the hype machine is entirely off the rails and these things can't be trusted with real jobs. And honestly, I'm starting to wonder how much of tech and social media is just being spammed by bots and sock puppets at this point, because otherwise I don't understand why people are so excited about this hypothetical future. Yay, bots are going to do your job for you while a small handful of business owners profit. And I guess you can use moltbot to manage your not-particularly-busy life of unemployment. Well, until you stop being able to afford the frontier models anyway, which is probably going to dash your dream of vibe coding a startup. Maybe there's a handful of winners, until there's not, because nobody can afford to buy services on a wage of zero dollars. And anyone claiming that the abundance will go to everyone needs to get their head checked.


My contingency plan is that if AI leaves me unable to get a job, we are all fucked and society as a whole will have to fix the situation and if it doesn’t, there is nothing I could have done about it anyway.


As a fellow chad I concur. Though I am improving my poker skills - games of chance will still be around


You likely already know, but the "Pluribus" poker bot was beating humans back in 2019. Games of chance will be around if people are around, but you'll have to be careful to ensure you're playing against people, unassisted people.

https://en.wikipedia.org/wiki/Pluribus_(poker_bot)


Yeah, thanks, I only play live games. I'm in Australia, so online poker is illegal here. I was thinking of getting a VPN and having a play online; then I saw this recently: https://www.reddit.com/r/Damnthatsinteresting/comments/1qi69...


So many of these degenerate online gambling / "investment" platforms are illegal here for good reason. If you are just a normal person playing fairly, you are being scammed. Same for things like Polymarket; the only winners are the people with insider knowledge.


Even horse racing: it's a solved problem, and if you start winning they'll just cancel your account (happened to a friend of mine).


this has been me ever since my philosophy undergrad.


This is a sensible plan, given your username.


Yeah, seriously. Don't people understand that society is not good at mopping up messes like this? There has been a K-shaped economy for several decades now, and most Americans have something like $400 in their bank accounts. The bottom has already fallen out for them, and help still hasn't arrived. I think it's more likely that white-collar workers, especially the ones on the margin, join this pool, and that there is a lot of suffering for a long time.

Personally, rather than devolving into nihilism, I'd rather try to hedge against suffering that fate. Now is the time to invest and save money (or yesterday).


If white collar workers as a whole suffer severe economic setback over a short term timespan, your savings and investments won’t help you.

Unless you're investing in guns, ammo, food, and a bunker. We're talking worse unemployment than depression-era Germany. And structurally more significant unemployment, because the people losing their jobs were formerly very high earners.


That's the cataclysmic outcome, though. While I deem that certainly possible, and I would put a double-digit percentage probability on it, another very likely outcome is a severe recession, where a lot of, but not all, white-collar work is wiped out. Maybe there's a significant restructuring of the economy in a scenario like that, which also seems to be in the realm of possibility; I think having resources still matters.


It's definitely possible that there's an impact that is bad but not cataclysmic. I figure in that case, though, my regular savings are enough to switch to something else. I could retire now if I was willing to move somewhere cheap and live on $60k a year. There are a lot of things that could cause that level of recession, though, without the need for AI.

I do also think the mid-level bad outcome isn't super likely, because if AI is good enough to replace a lot of white-collar jobs, I think it could replace almost all of them.


> You are in danger. Unless you estimate the odds of a breakthrough at <5%

It's not the odds of the breakthrough, but the timeline. A factory worker could have correctly seen that one day automation would replace him, and yet worked his entire career in that role.

There have been a ton of predictions about software engineers, radiologists, and some other roles getting replaced in months. Those predictions have clearly been not so great.

At this point the greater risk to my career seems to be the economy tanking, as that seems to be happening and ongoing. Unfortunately, switching careers can't save you from that.


We are the French artisans being replaced by English factories. OpenAI and its employees are the factory.


Checking the scoreboard a bit later on: the French economy is currently about the same size as the UK's.


That has little to do with what I wrote, and isn't addressing the central issue.


I'm not worried about the danger of losing my job to an AI capable of performing it. I'm worried about the danger of losing my job because an executive wanted to be able to claim that AI has enhanced productivity to such a degree that they were able to eliminate redundancies with no regard for whether there was any truth to that statement or not.


> Unless you estimate the odds of a breakthrough at <5%

I do. Show me any evidence that it is imminent.

> or you expect that AI will usher in enough prosperity that your job will be irrelevant

Not in my lifetime.

> it is straight-up irresponsible to forgo making a contingency plan.

No, I'm actually measuring the risk, you're acting as if the sky is falling. What's your contingency plan? Buy a subscription to the revolution?


> What's your contingency plan? Buy a subscription to the revolution?

I’ve been working on my contingency plan for a year-and-a-half now. I won’t get into what it is (nothing earth shattering) but if you haven’t been preparing, I think you’re either not paying enough attention or you’re seriously misreading where this is all going.


This ^. I've been a SWE for 20 years and the market is the worst I have seen it: many good devs have been looking for 1-2 years and not even getting a response, whereas 3-4 years ago they would have had multiple offers. I'm still working, but am secure in terms of money, so I will be OK not working (financially at least). But I expect a tsunami of layoffs this year and next; then you are competing with 1000x other devs, and with Indians who will work for 30% of your salary.


That's called an economic crisis; it has nothing to do with AI. My friends also have trouble finding 100% manual jobs which were easily available 2 years ago.

Yes, I said the word that none of these companies want to say in their press conferences.


That's because there are more tech/service workers competing for the manual jobs now.


Tech workers aren't numerous enough to have that effect.

Besides that, why aren't we seeing any metrics change on GitHub? With a supposed increase in productivity so large that a good chunk of the workforce is fired, we would see it somewhere.


A lot of non-AI things have happened though.


So AI is going to steamroll all feasible jobs, all at once, with no alternatives developing over time? That's just a fantasy.


It'd probably be a cold day in Hell before AI replaces veterinary services, for example. Perhaps for mild conditions, but I cannot imagine an AI robot trying to restrain an animal.


All these so-called safe jobs still depend on someone being able to afford those services. If I don't have a job, I can't go see the vet; the fact that no one else can do the vet's job is irrelevant at that point.

I would like to know if there's some kind of inflection point, like the so-called Laffer curve for taxes, where once an economy has X% unemployment, it effectively collapses. I'd imagine it goes: recession -> depression -> systemic crisis and appears to be somewhere between 30-40% unemployment based on history.


Every job deemed "safe" will be flooded by desperate applicants from unsafe jobs.


> it is straight-up irresponsible to forgo making a contingency plan.

What contingencies can you really make?

Start training a physical trade, maybe.

If this is the end of SWE jobs, you'd better ride the wave. Odds are your estimate of when AI takes over is off by half a career, anyway.


Working in the trades won't help you at 40-50% unemployment. Who's going to pay for your services? And even the meager work that remains would be fought over by the hundred million unemployed who are all suddenly fighting tooth and nail for any work they can get.


Isn’t it a bit silly to say AI is going to eat the entire economy, but you have a contingency plan?

It seems kind of like saying “I’m smarter than all the AIs in this one particular way.” If someone posted that, you would probably jump in to say they’re fooling themselves.


Unless I misunderstand your metaphor, there is nothing you can do about the steamroller; it is going to roll no matter what.


I think it's a combination of a) reflexive dislike of any hyped-up tech, mainly due to the crypto era, and b) subconscious ego protection ("this can't be legit, otherwise everything I've built my identity around will be thrown into question").

The best models already produce better code than a significant fraction of human programmers, while also being orders of magnitude faster and cheaper. And the trendlines are stark. Sure, maybe AI can't replace you today. Maybe it will hit that "wall" people are always forecasting, just before it gets good enough to threaten your job. But that's a rather uncomfortable proposition to bet a career on.


> humans still need to have their hands firmly on the wheel if they won’t want to risk their businesses well being

What happens when businesses run by AIs outperform businesses run by humans?


The humans will still own the business (unless you are proposing some alternative version of AI ownership), so in effect there will always be a human who is concerned about their business's well-being.

I doubt that we would get into a world where a company would be allowed to run without human involvement (AI directors and AI management), as you would have nobody to hold accountable.


Well, wasn't this what all these blockchain DAO entities were supposed to be for? :D


Yes, I was just about to bring this up as well. One could argue that they were simply too early. It will be interesting to watch things like ERC-8004.


Let's be real. The sky is blue because God thought it was a pretty color, simple as. All this stuff about wavelengths and resonant frequencies and human color perception got retconned into the physics engine at some point in the past millennium, that's why all these epicycles are needed.


Our lord Zeus always thinks of everything


His noodly appendage touches all.


> thought it was a pretty color

So was blue intrinsically pretty and thus made into the sky, or considered pretty and thus imprinted in the minds of humans that way?

