
Were people overpaying 30% for Tesla in 2010?

A few facts:

1. Tesla was valued at $2.5B at the end of 2010.

2. Tesla started production of the Model S that year, with nearly 500 km of range and 0-100 in 4.4 seconds, still competitive 16 years later. It was an obvious disruption of a proven market.

3. The car market was valued at half a trillion dollars at the time.

So Tesla being valued at 0.5% of the market, with disruptive technology, seems fine. Of course it was a moonshot, but hindsight is 20/20.

But what is the total market here that it's stepping into? SpaceX seems to have been serving the majority of that market for years, yet it has just $16 billion in revenue. How that gets you to $1.75 trillion, I don't know.


Humanoid robots and lots of memes (this post has only 50% sarcastic content)

Yes, and they still do

Tesla's highest market cap in 2010 was $3.3B. Tesla has earned more than that in net income, sometimes multiples more, every year from 2021 to 2025.

For comparison, it is routine to see sale prices of 3x to 5x revenue for many, many kinds of everyday businesses that have much less potential than Tesla.

There are very, very few businesses whose shares one could have purchased in 2010 that performed better over the subsequent 15 years. That is about as objective as one can get in determining whether something was under- or overvalued (in 2010).


Because not only did the shareholders overpay, the car buyers did too.

Is Musk derangement syndrome a thing?

Yes, and it makes much less sense to me. It boils down to: he's rich on paper, and he doesn't put on a fake PR mask.

Let’s ignore things like the pedoguy incident and his ridiculous defense it was South African slang.

Or how he helped dismantle USAID, which led to real deaths.

You’re being spoiled with the "no fake PR mask" act. He's just spared from real consequences because of his wealth. As soon as real consequences are on the horizon, that changes pretty quickly. It just happens too rarely.


Not having a PR mask is because he doesn't care about cancel culture. I like that.

Cancel culture needs to stop, we are not a hive mind, everyone has different opinions.


Once you see it, it's pretty funny how these people pick weird little hills to die on.

Society seems a lot more full of people trying to broadcast who they are through their opinions on things rather than through what they've done.

Bingo!

Society seems to favor sociopaths who destroy everything for their own benefit.

Do you think DOGE has done something good or did it just help authoritarians to dismantle opposition?

Since Musk, Trump, Thiel & Co. started implementing their vision of society, the world has turned for the worse. And they won't be the ones who have to endure the harsh consequences.



Worth noting that this model, unlike almost all Qwen models, is not open-weight, nor is the parameter count disclosed. Also odd that it is compared against Opus 4.5 even though 4.6 was released about two months ago.

They said in the last paragraph[0]:

"[...] In the coming days, we will also open-source smaller-scale variants, reaffirming our commitment to accessibility and community-driven innovation. [...]"

[0] https://qwen.ai/blog?id=qwen3.6#summary--future-work


> we will also open-source smaller-scale variants

In other words, like GP said, this Qwen3.6-Plus model is not open-weight unlike the other Qwen models.


In a practical sense, I'm primarily interested in small to medium sized models being open. I think that might be common sentiment.

However, my hope is that there will be at least somewhat competitive big and open models as well, from an ethical/ideological perspective. These things were trained on data that was provided by people without their consent, so they should at least be publicly accessible, or even public domain.


Qwen3.5-Plus is the largest variant of the open weight Qwen3.5 model, expanded with a 1M context window and fine-tuned on the Qwen-native harness’ specific tools.

> unlike almost all qwen models

Almost all means there have been ones before that were not open. So, no contradiction there.


> unlike the other Qwen models

Please send the download link for Qwen3.5-Plus.

Also, who cares? If you have the hardware to run a ~400B model, I don't think you count as a home user anymore.


So the Qwen3.6-Plus model is like the Qwen3.5-Plus model?

Qwen 3.5 Plus was closed weights too. It was supposedly the same model as Qwen3.5 397B, just with 1 million context size and only available on the API and their website.

If Opus 4.6 was only released two months ago, then it seems reasonable that Qwen hasn't finished fully comparing against the latest Opus.

Well don't we have numbers from both models on these benchmarks already? What else is there to do except include them in the table?

Do we? I admit ignorance in this.

I wouldn't say "almost all" seeing as -MAX and -Omni models were always closed.

Agreed

As with any AI post lol


This is unfortunately how companies die


Is this a joke


This is of course true as a blanket "gotcha" headline, although I wouldn't call a failed test the CI itself failing. A real failure would be a false positive: a pass where there wasn't coverage, or a failure when there was no breaking change. Covering all of these edge cases can become as tiresome as maintaining the application in the first place (this is a generalization, of course).


> a pass where there wasn't coverage

I always feel obliged to point out that we can have 100% coverage without making a single assertion (beware Goodhart's law)
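To make the point concrete, here is a minimal sketch (all names are illustrative, not from any real project): a test that executes every line of the function under test, so a coverage tool reports 100%, while asserting nothing at all.

```python
# Hypothetical module under test -- classify() is an invented example.
def classify(n):
    if n < 0:
        return "negative"
    return "non-negative"

def test_classify_no_assertions():
    # Both branches of classify() are executed, so line and branch
    # coverage hit 100% -- but nothing checks the return values, so
    # this "test" would still pass if classify() returned garbage.
    classify(-1)
    classify(1)

test_classify_no_assertions()
```

Under Goodhart's law, once the coverage percentage becomes the target, tests like this are exactly what you get.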


True, but you can't have complete tests without 100% coverage. It's a necessary, but not a sufficient condition; as long as it doesn't become the sole goal, it's still a useful metric.


100% coverage is an EXPTIME problem.
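Whatever the precise complexity class, the blow-up is easy to see for *path* coverage (as opposed to the line coverage discussed above): each independent branch doubles the number of distinct execution paths. A tiny sketch of that arithmetic:

```python
def count_paths(num_branches):
    # Each independent if/else in sequence doubles the number of
    # distinct execution paths through the function.
    return 2 ** num_branches

# Ten independent branches already mean 1024 paths to exercise;
# thirty mean over a billion.
paths_for_10 = count_paths(10)
paths_for_30 = count_paths(30)
```

This is why full path coverage is impractical for anything but trivial code, and why line or branch coverage is used as a (leaky) proxy.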


Lol, AI is not search. I will never wrap my head around the pessimism about AI on HN.


Damn, this thread is a gold mine. Wishing you the best. My advice is to prioritize health and exercise, and spend time outside somewhere you can run into interesting new people. Build a circle.


This paid off so well for Anthropic and so poorly for Sam Altman. Optics are everything; look at the comments on the CBS interview.

