Survivorship bias. The corporate finance world of today is completely unrecognizable from the one that produced Google and Apple. Just look at the resulting performance of the SPAC craze.
It’s a $200M contract. That’s not nothing, but it’s hardly a huge sum for companies at this scale when they’re spending billions on infrastructure.
I’m sure Anthropic has signed up enough new revenue this week, in response to this debacle, to cover it. Where they’re actually screwed is if the government follows through and declares Anthropic a supply-chain risk.
It's not "just" a $200m contract, it's the start of a lucrative relationship
1. Stargate seemed to require a dedicated press conference by the President to hit its funding targets. Why risk that level of politicization if it weren't necessary?
2. Greg Brockman donated $25M to a Trump MAGA Super PAC last year. Why risk so much political backlash for a low-leverage return of $200M on $25M spent?
3. During WW2, military spend shot from 2% to 40% of GDP. The administration is requesting $1.5T military budget for FY2027, up from $0.8T for FY2025. They have made clear in the past 2 months that they plan to use it and are not stopping anytime soon
If you believe "software eats the world", it is reasonable to expect software companies' share of total military spend to increase dramatically over the next decade. $100B (10% capture) is a plausible domestic military AI TAM for FY2027 if the spending increase is approved (so far, Republicans have not broken rank with the administration on any meaningful policy).
If US military actions continue to accelerate, other countries will also ratchet up military spend, largely on nuclear arsenals and AI drones (France has already announced an expansion of its arsenal). This further increases the TAM.
Given the competition and lack of moat in the consumer/enterprise markets, I am not sure there is a viable path for OpenAI to cover its losses and fund its infrastructure ambitions without becoming the preferred AI vendor for a rapidly increasing military budget. The devices bet seems to be the most practical alternative, but there is far more competition there, both domestically (Apple, Google, Motorola) and globally (Xiaomi, Samsung, Huawei), than there is for military AI.
Having run an unprofitable P&L for a decade, I can confidently state that a healthy balance sheet is the only way to maintain and defend one's core values and principles. As the "alignment" folks in the AI industry are likely to learn, the road to hell (aka a heavily militarized world) is oft paved with the best intentions.
First, I have to say I loved your thoughtful & detailed comment. You have clearly considered this from the financial side; let me add some color from the perspective of someone working with frontier researchers.
> As the "alignment" folks in the AI industry are likely to learn
I will push back here. Dario & co are not the starry-eyed naive idealists implied here. This is a calculated decision to maximize their goal (safe AGI/ASI).
You have the right philosophy on the balance sheet side of things, but what you're missing is that researchers are more valuable than any military spend or any datacenter.
It does not matter how many hundreds of billions you have - if the 500-1000 top researchers don't want to work for you, you're fucked; and if they do, you will win because these are the people that come up with the step-change improvements in capability.
There is no substitute for sheer IQ:
- You can't buy it (god knows Zuck has tried, and failed to earn their respect).
- You can't build it (yet.)
- And collaboration amongst less intelligent people does not reliably achieve the requisite "Eureka" realizations.
Had Anthropic gone forth with the DoD contract, they would have lost this top crowd, crippling the firm. On the other hand, by rejecting the contract, Anthropic's recruiting just got much easier (and OAI's much harder).
Generally, the defense crowd have a somewhat inflated sense of self-worth. Yes, there's a lot of money, but very few highly intelligent people want to work for them. (Almost no top talent wants to work for Palantir, despite the pay.) So, naturally:
- If OpenAI becomes a glorified military contractor, they will bleed talent.
- Top talent's low trust in the government means Manhattan Project-style collaborations are dead in the water.
As such, AGI will likely emerge from a private enterprise effort that is not heavily militarized.
Finally, the Anthropic restrictions will last, what, 2.5 more years? They are being locked out of a narrow subset of use cases (DoD contract work only - vendors can still use it for all other work - Hegseth's reading of the SCR is incorrect) and have farmed massive reputation gains with both top talent and the next administration.
This is an interesting perspective. What happens if there is a large global war? Do researchers who were previously against working with the DoD end up flipping out of duty? Does the war budget go up? Does the DoD decide to lift any ban on Anthropic for the sake of getting the best model and does Anthropic warm its stance on not working with autonomous weapons systems?
I don’t know the answers to these questions, but if the answer is “yes” to at least 1 or 2, then I think the equation flips quite a bit. This is what I’m seeing in the world right now, and it’s disconcerting:
1. Ukraine and Russia have been in a conflict that has dragged on far longer than most people would have guessed. This has created a divide in political allegiance within the United States and Europe.
2. We captured the leader of Venezuela. Cuba is now scared they are next.
3. We just bombed Iran and killed their supreme leader.
4. China and the US are, of course, in a massive economic race for world power supremacy. The tensions have been steadily rising, and they are now feeling the pressure of oil exports from Iran grinding to a halt.
5. The past couple days Macron has been trying to quell tension between Israel and Lebanon.
I really hope we are not headed into war. I hope the fact that we all have nukes and rely on each others’ supply chains deters one. But man does it feel like the odds of one are increasing, and man does that seem to throw a wrench in this whole thing with Anthropic vs. OpenAI.
"We" here clearly means the USA + Israel. There isn't a distinction between the two when they're working towards the same goals, bombing everything in sight, together.
The one who pulled the trigger is irrelevant here, because both have pulled the trigger hundreds or thousands of times in the past few days, dividing up targets between them for the joint operation.
> Given that direct assassination is still prohibited by EO 11905 / 12036 / 12333
It sounds like you think this means something?
Obviously it doesn't when we're talking about an administration that openly breaks laws, much less EOs, and issues whatever EOs they want saying whatever they want, even in violation of previous EOs. There aren't even any repercussions to the president "violating an EO".
So, the pedantry here is irrelevant. The two parties are on the same team, working towards the same goal, doing the same things, divvying up the list of targets to strike.
Given that you totally ignored the substance of my post, and instead focused on attacking me personally, it does seem like you're not interested in a discussion, and not a good fit for the HN culture and guidelines. So yeah, maybe you are right and it would be better if you left.
But! That's not who you always have to be! I'm confident you can coherently articulate your point without resorting to that. Feel free to come back if you're willing to share why you feel the president not complying with a presidential executive order is significant here, rather than insignificant.
that is assuming there will be elections, which many people doubt.
reminder that trump has been flirting with just staying in power (2028 hats and talk of a third term) and attempted a coup the last time he lost.
personally I think there's a possibility where he'll just declare martial law and stay in power at the end of his term.
> researchers are more valuable than any military spend or any datacenter. It does not matter how many hundreds of billions you have - if the 500-1000 top researchers don't want to work for you, you're fucked; and if they do, you will win because these are the people that come up with the step-change improvements in capability.
This is a massive cope imo. The reason that the AI industry is so incestuous is just because there are only a handful of frontier labs with the compute/capital to run large training clusters.
Most of the improvements that we’ve seen in the past 3 years are due to significantly better hardware and software, just boring and straightforward engineering work, not brilliant model architecture improvements. We are running transformers from 2017. The brilliant researchers at the frontier labs have not produced a successor architecture in nearly a decade of trying. That’s not what winning on research looks like.
Have there been some step-change improvements? Sure. But by far the biggest improvement can be attributed to training bigger models on more badass hardware, and to hardware availability that lets you serve them cheaply. To act like the DoD isn’t going to be able to stand up PyTorch or vLLM and get a decent result is hilarious: the reason you use Slurm and MPI and OpenSHMEM is because the national labs and DoD were using them first. NCCL is just GPU-accelerated, scope-reduced MPI. NVSHMEM is just GPU-accelerated, scope-reduced OpenSHMEM.
If anything, the DoD doesn’t have the inference-throughput requirements that the unicorns have and might be able to immediately outperform them by training a massive dense model without optimizing for time to first token or throughput. They don’t have to worry about whether the $/1M tokens makes it economically feasible to serve, which is a primary consideration for the unicorns today when they’re choosing their parameter counts. They can just rate-limit the endpoint and share it, with a 2-hour queue time.
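To make the serving-economics point concrete, here's a back-of-envelope sketch. Every constant below (GPU FLOPS, GPU cost, utilization) is an illustrative assumption, not a real vendor figure; the point is only that decode cost scales roughly linearly with active parameter count, which is why labs that serve at scale care so much about it:

```python
# Back-of-envelope: why $/1M tokens pushes serving labs toward smaller
# (or sparser) models. All constants are illustrative assumptions.

def cost_per_million_tokens(active_params: float,
                            gpu_flops: float = 1e15,      # assumed peak FLOPS per GPU
                            gpu_cost_per_hour: float = 3.0,  # assumed rental price
                            utilization: float = 0.4) -> float:
    """Rough decode cost: ~2 FLOPs per active parameter per generated token."""
    flops_per_token = 2 * active_params
    tokens_per_second = gpu_flops * utilization / flops_per_token
    cost_per_second = gpu_cost_per_hour / 3600
    return cost_per_second / tokens_per_second * 1e6

# A dense model with 1T active parameters vs. one with 100B active:
dense = cost_per_million_tokens(1e12)
sparse = cost_per_million_tokens(1e11)
print(f"1T active params:   ${dense:.2f} per 1M tokens")
print(f"100B active params: ${sparse:.2f} per 1M tokens")
```

Under these toy numbers the 1T-active model is exactly 10x more expensive to serve per token. A buyer that tolerates a 2-hour queue can live with that; a consumer API serving billions of tokens a day cannot.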
The government invented HPC, it’s their world and you’re just playing in it.
> Generally, the defense crowd have a somewhat inflated sense of self-worth.
Sure, the architecture is from 2017. But the gap between GPT-1 and today's frontier models is not simply "more FLOPs", nor as simple as "standing up PyTorch and vLLM" - there's thousands of undocumented decisions about data, alignment, reward modeling, training stability, and inference-time strategies, and lots of tribal knowledge held by a small group of people who overwhelmingly do not want to work on weapons systems.
The dense-model argument is self-defeating long term. Sparsity (MoE etc.) lets you build a smarter model at the same compute budget, so going dense because you can afford to waste FLOPs is how you fall behind: you never come up with the step-function improvements needed.
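The sparsity argument comes down to simple accounting. In the simplified sketch below (which ignores attention, routing overhead, and shared layers - all the numbers are illustrative), per-token compute tracks *active* parameters, so an MoE can hold far more total parameters than a dense model at the same FLOP budget:

```python
# Simplified MoE parameter accounting (illustrative; ignores attention,
# routing overhead, and shared layers).

def moe_total_params(active_params: float, num_experts: int, top_k: int) -> float:
    # Each token activates only top_k of num_experts expert blocks,
    # so total capacity scales by num_experts / top_k at fixed compute.
    return active_params * num_experts / top_k

active = 100e9             # fixed FLOP budget: 100B active params per token
dense_total = active       # dense model: total params == active params
moe_total = moe_total_params(active, num_experts=64, top_k=2)

print(f"dense total params: {dense_total/1e9:.0f}B")
print(f"MoE total params:   {moe_total/1e9:.0f}B (same per-token compute)")
```

In this toy setup, the MoE holds 32x the parameters for the same per-token FLOPs, which is why "we can afford to waste FLOPs on a dense model" is a losing strategy at the frontier.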
Sure, the DoD invented HPC, but it also invented the internet, and then the private sector made it actually useful.
And that is just the direct Pentagon business. They will lose much more, because no defense contractor, subcontractor and so on can use them for anything defense related (even if they use the model to invent a new type of screw, if that screw is going to end up in anything military).
So yeah, they bet a whole lot on “look at us, we have morals”.
There's no legal basis for blocking defense contractors from using them. Trump's claiming he can do so, but the law doesn't back him up. He'll lose in any fair court, or any corrupt court that values billionaire interests over virtue signaling to the orange one (like the Supreme Court).
Also, they got a huge PR win, and jumped to #1 on the Apple App Store. Consumer market share is going to decide which of the AI companies is the market leader, not fickle government contracts.
If you look at what generates cash, it's corp to corp. That's across most industries. While there are markets that are consumer mostly, LLMs have immense and enormous business facing revenue potential. The consumer market is a gnat in comparison.
There are always Executive Orders that can enforce that. It is not like in the movies where they will sort stuff out in 2 weeks in a single trial. It is going to take years, and we'll see if Anthropic survives that.
I don’t care who is in the White House. Snowden revealed the crimes of the NSA in 2013, when Obama was president. They’re all going to want to use AI for mass surveillance.
AI doesn't add anything to the ability to do mass surveillance. That genie was already out of the bottle with clouds and big-data systems. At best AI might take on some of the gruntwork of drawing conclusions from profiles, but it's doing its usual thing of being a powerful interface built on top of other systems.
> AI doesn't add anything to the ability to do mass surveillance
I recommend reading Yuval Noah Harari's Nexus for a deep discussion around this.
He makes the point that what makes this AI age much more dangerous for mass surveillance isn't just the collection of data, which has indeed been possible for a while, but the new ability to have AI sift through that enormous volume of information, an ability which until recently has not been possible in a meaningful way without a ton of manual work to support it.
Older attempts at mass control of a population already involved mass surveillance, even in a large amount of detail, but even when capturing in detail all citizens' activities, there were just not enough people around to be able to dig through that and analyze it. This has been somewhat true even with the help of computers, though computers have certainly already been making this easier.
But now you can just give all that data to an AI with your instructions, and it'll apply some sort of "judgement" on your behalf, completely autonomously, and even perform actions against those folks it finds, again autonomously, without needing to manually build a whole infrastructure to do that with manual rules. That's a very meaningful upgrade for someone wanting to control a population.
That's still actuating through infrastructure that already existed. I agree the summarise-and-decide part may sometimes be quicker, but the bottleneck remains the infrastructure for collection, collation, and actioning.
like saying kids having internet-connected devices with built-in cameras doesn't increase the probability of sexting, they could do the same with film cameras and a fax machine
AI doesn't increase the amount of data captured or the processing throughput; that's the difference with your camera metaphor. As I said, at best it can sometimes summarise things better.
I hate this so much. The NSA’s spying on everyone in 2010 was “legal”, and I can only imagine how much worse it is now, with AI to follow your digital footprint around everywhere. Too bad we don’t have any more whistleblowers like Snowden.
I feel so sad about Snowden sometimes. I tried reading the first few pages of his book, about how, growing up, he could be anyone in a forum; there was this sense of anonymity and, at the same time, freedom. And later, when he saw just how far the government's overreach went, he did what others couldn't.
It wasn't as if there weren't any other contractors like Snowden, but there were no other whistleblowers like Snowden
And where did that leave him? In a country far from his homeland, worried about his safety, being called god knows what by the country back home, while most people don't even care.
Snowden didn't do it for the money, he did it for what he felt was right and that's so rare.
It's so sad that when I searched for Snowden on YouTube, the first thing I found was an ex-CIA agent claiming Snowden wasn't innocent because he had to befriend Russia - when that was only because the US would have literally killed him and made an example out of him for blowing the whistle on such large-scale mass surveillance.
“What kind of asshole reveals the fact we’re the assholes, then doesn’t let us kill him!” is one heck of a comment I found.
Also, we will charge the whistleblower with death, but we will not take any action against the act that was whistleblown in the first place (:
I agree. What people forget is that Snowden didn't intend to end up in Russia. He wanted to go from Hong Kong (where he thought he would be safe, until he realised extradition was still an option) to Ecuador. But he feared the US would intercept his plane if he flew over US or US-allied airspace. So his plan was to go from HK to Russia, then to Cuba, and finally to Ecuador.
Russia stopped him because US had cancelled his passport.
That fear proved well-grounded. While it probably doesn't seem as big of a deal now — in this era when we just serially assassinate heads of state we don't like without any pretense otherwise — the US indeed did direct its European allies to intercept the plane of Bolivian president Evo Morales, based on the (incorrect, as it turned out) suspicion that Snowden was on board.
Capture the student market 100%. I’d buy one for my kids tomorrow. These machines are made with an iPhone chip, so they’re going to be great at browsing the web and studying. I wouldn’t buy one for myself to do actual work on, but for light users it’s the perfect device. Start them early and get them hooked into the ecosystem so they grow up and keep buying iPhones, Apple Watches, AirPods, and iPads.
They look like any other pair of sunglasses. No piece of glass over one eye reminding everyone you meet that you’re wearing a camera. They’re incredibly stealthy
Have you seen them in the wild? They're notably chunky and have an obvious hole where the lens is. You might not notice it in passing but if someone's talking to you it's hard not to notice. I wonder how many of their owners realise how much they're affecting every interaction they have with another human.
It wasn't the tariff. UPS has been tacking on a ridiculously high paperwork fee for the service of processing tariff payments. Other shipping companies have also had fees, but UPS is the main one that's made it exorbitant and disproportionately higher than the tariff itself.
I'm thinking the delivery agents such as UPS, Fedex, USPS now need to sue the United States so they can pay back all the recipients the fees they charged, plus interest.
There are going to be a raft of class action suits based on this.
As one of my lawyers once said, the only winners here are the lawyers.
I suspect that my recent experience confirms this. Our daughter shipped two suitcases home from the UK, paying some local company for "door-to-door" delivery. They contracted with UPS who demanded an additional $32 when the first bag showed up. For the second she paid the same fee online so they wouldn't require a check at the door.