Even if all that about training is true, the bigger cost is inference, and DeepSeek is 100x cheaper there. That destroys OpenAI/Anthropic's value proposition of having a unique secret sauce, so users are quickly fleeing to cheaper alternatives.
Google DeepMind's recent Gemini 2.0 Flash Thinking is also priced at the new DeepSeek level[0]. It's pretty good (unlike previous Gemini models)[1].

[0] https://x.com/deedydas/status/1883355957838897409

[1] https://x.com/raveeshbhalla/status/1883380722645512275
WTF dude, check your source (@deedydas). He seems to be posting garbage. The Gemini 2.0 Flash Thinking price isn't known yet. And on top of that, he gave the wrong number for R1's results on AIME 2024 (it's 79.8%, far ahead of Gemini rather than far behind).
More like OpenAI is currently charging more. Since R1 is open source / open weight we can actually run it on our own hardware and see what kinda compute it requires.
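Back-of-the-envelope, using the published R1 model-card figures (671B total params, ~37B active per token, FP8 weights); the GPU sizing below is my own assumption for illustration, not a benchmark:

    import math

    # Published model-card figures; everything else is assumed.
    TOTAL_PARAMS = 671e9       # R1 total parameters
    BYTES_PER_PARAM = 1        # FP8 = 1 byte per weight
    H100_HBM_GB = 80           # per-GPU memory on an 80GB H100

    weights_gb = TOTAL_PARAMS * BYTES_PER_PARAM / 1e9
    print(f"weights alone: ~{weights_gb:.0f} GB")  # ~671 GB
    print(f"H100s just to hold weights: {math.ceil(weights_gb / H100_HBM_GB)}")  # 9
    # That's a full multi-GPU node before any KV cache or activation
    # overhead; MoE (~37B active params) saves FLOPs, not weight memory.

Actually running it is the only way to get real throughput numbers, but the weights alone already tell you the hardware class involved.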
What is definitely true is that other providers are already offering DeepSeek R1 (e.g. on OpenRouter) at $7/M input and $7/M output, while OpenAI charges $15/M input and $60/M output for o1. So you're already seeing something like 5x cheaper inference with R1 vs o1 on a typical output-heavy workload, with a bunch of confounding factors. But it's hard to say anything truly concrete about efficiency, since OpenAI does not disclose the actual compute required to run inference for o1.
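To put numbers on that, here's a quick sketch using the prices quoted above; the input/output splits are assumptions, not measured workloads:

    # $/M tokens: hosted R1 vs OpenAI o1, prices as quoted above.
    r1 = {"in": 7.00, "out": 7.00}
    o1 = {"in": 15.00, "out": 60.00}

    def blended_cost(price, in_frac):
        """Cost per million tokens for a given input/output token split."""
        return price["in"] * in_frac + price["out"] * (1 - in_frac)

    for in_frac in (0.9, 0.5, 0.3):
        r1_c, o1_c = blended_cost(r1, in_frac), blended_cost(o1, in_frac)
        print(f"input share {in_frac:.0%}: R1 ${r1_c:.2f}/M, "
              f"o1 ${o1_c:.2f}/M, {o1_c / r1_c:.1f}x")
    # 90% input -> 2.8x, 50% -> 5.4x, 30% -> 6.6x: the ~5x figure holds
    # for output-heavy (i.e. typical chat/reasoning) workloads.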
There are even much cheaper services that host it for only slightly more than DeepSeek itself [1]. I'm now very certain that DeepSeek is not offering the API at a loss, so either OpenAI has absurd margins or their model is much more expensive to run.

[1] The cheapest I've found, which also happens to run in the EU, is https://studio.nebius.ai/ at $0.8/million input tokens.

Edit: I just saw that OpenRouter also now lists Nebius.
Yes, sorry, I was being maximally broad in my comment, but I'd think it's very, very likely that OpenAI is currently charging huge margins and markups to help maintain the cachet, exclusivity, and, in some senses, safety of their service. Charging more money for access to their models feels like a pretty big part of their moat.
Also, possibly because of their sweetheart deal with Azure, they've never needed to negotiate enterprise pricing, so they're probably calculating margins based on GPU list prices or something insane like that.
Well, in the first years of AI, no, it wasn't, because nobody was using it.

But at some point, if you want to make money, you have to provide a service to users, ideally hundreds of millions of them.
So you can think of training as CI+TEST_ENV and inference as the cost of running your PROD deployments.
Generally, in traditional IT infra, PROD >> CI+TEST_ENV (10-100 to 1).

The ratio might be quite different for LLMs, but still, any SUCCESSFUL model will have inference > training at some point in time.
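A toy break-even calculation to make that concrete; every number here (training cost, per-query cost, traffic) is an illustrative assumption, not a real figure for any particular model:

    # When does cumulative inference (PROD) spend pass training (CI) cost?
    TRAINING_COST = 5.6e6     # $, the widely quoted DeepSeek-V3 figure,
                              # used purely as a reference point
    COST_PER_QUERY = 0.002    # $ per query served (assumed)
    QUERIES_PER_DAY = 10e6    # a moderately successful consumer service (assumed)

    daily_inference = COST_PER_QUERY * QUERIES_PER_DAY   # $20k/day
    print(f"inference passes training after ~{TRAINING_COST / daily_inference:.0f} days")
    # ~280 days here; at 100M queries/day it's ~28 days, which is where
    # the 10-100x PROD >> CI ratio comes from.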
> The ratio might be quite different for LLMs, but still, any SUCCESSFUL model will have inference > training at some point in time.
I think you're making assumptions here that don't necessarily hold for all successful models. Even without getting into particularly pathological cases, some models can be successful and profitable while only having a few customers. If you build a model that is very valuable to investment banks, to professional basketball teams, or to some other much more limited group than consumers writ large, you might get paid handsomely for a limited amount of inference but still spend a lot on training.
If there is so much value for a small group, it is likely those are not simple inferences but the new expensive kind, with very long CoT chains and reasoning. So not cheap, and it is exactly this trend toward inference-time compute that makes inference > training from a total-resources-needed point of view.
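To see the effect, compare per-query output cost with and without a long reasoning chain; the token counts are assumptions:

    # Reasoning models shift cost toward inference: output tokens balloon.
    PLAIN_OUTPUT_TOKENS = 300    # short direct answer (assumed)
    COT_OUTPUT_TOKENS = 8_000    # long CoT chain + answer (assumed)
    PRICE_PER_M_OUT = 7.00       # $/M output tokens, hosted-R1-level price

    for name, toks in [("plain", PLAIN_OUTPUT_TOKENS), ("CoT", COT_OUTPUT_TOKENS)]:
        print(f"{name}: ${toks * PRICE_PER_M_OUT / 1e6:.4f}/query")
    # ~27x more per query, so even a small, high-value user base can push
    # total inference spend past training.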