A brief note: your numbers are way off here. Astral subsequently raised a Series A and a Series B (as mentioned in the blog post) but did not announce them. We were doing great financially.
It seems you are one of the most active contributors there.
I would sincerely have understood (and even preferred) OpenAI making you a very generous offer personally, as an individual contributor, rather than choosing a strategy where the main winners are the VCs of the acquired company.
From the outside, we see little to no revenue (no pricing page? no contact-us? maybe some consulting?) and millions burned.
Whether that's $4M, $8M, or $15M burned, I have no idea.
Who's going to fill that hole, and when? (Especially since PE funds have roughly 5-year timelines, and the company dates from 2021.)
The end product is nice, but for an investor, being nice is not enough, so they must have deeper motives.
You're upset that uv doesn't yet support something that no other tool in the ecosystem supports?
I'd love for uv to lock build dependencies, but due to the dynamic nature of Python package metadata it's quite a hard problem. It'll be supported eventually though.
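To illustrate why this is hard, here's a hypothetical setup.py-style sketch (illustrative only, not uv code or any real package): because the metadata is produced by running code, the same sdist can declare different dependencies depending on where the build happens, so a locker can't capture it without executing the build backend on every target environment.

```python
# Hypothetical illustration (not uv code): Python build metadata can be
# computed at build time, so the same source distribution can declare
# different dependencies depending on where the build runs.
import os
import sys

def dynamic_requires():
    # The dependency set depends on the interpreter and environment
    # doing the build -- a static lockfile can't capture this without
    # executing the build backend on every target platform.
    requires = ["numpy"]
    if sys.version_info < (3, 11):
        requires.append("tomli")  # TOML parser backport on older Pythons
    if os.environ.get("USE_GPU") == "1":
        requires.append("cupy")  # opt-in accelerator dependency
    return requires
```

Nothing in the static sdist tells you whether `tomli` or `cupy` will end up in the requirement set; you only learn that by running the code above under each target interpreter and environment.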
Please open an issue with some details about the memory usage. We're happy to investigate, and feedback on how it's working in production is always helpful.
We run on-prem k8s and do the pip install stage in a 2CPU/4GB Gitlab runner, which feels like it should be sufficient for the uv:python3.12-bookworm image. We have about 100 deps that aside from numpy/pandas/pyarrow are pretty lightweight. No GPU stuff. I tried 2CPU/8GB runners but it still OOMed occasionally so didn't seem worth using up those resources for the normal case. I don't know enough about the uv internals to understand why it's so expensive, but it feels counter-intuitive because the whole venv is "only" around 500MB.
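One mitigation I haven't tried yet (a sketch, assuming uv's concurrency environment variables behave as documented; whether this actually lowers peak memory in our case is a guess):

```shell
# Sketch for a memory-constrained runner: serialize uv's work so fewer
# wheels are downloaded, built, and unpacked at once. These knobs trade
# speed for lower peak memory; the actual savings are workload-dependent.
export UV_CONCURRENT_DOWNLOADS=1
export UV_CONCURRENT_BUILDS=1
export UV_CONCURRENT_INSTALLS=1
uv pip install -r requirements.txt
```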
I would also put the list of issues this fixes higher up. It makes the point more obvious. (Also, a setuptools update literally broke our company CI last week, so I was like "omg yes" at that point.)
Yes, we let you override our detection of your hardware. We haven't yet implemented dumping detected information on one platform for use on another, but it's definitely feasible; e.g., we're exploring a static metadata format as part of the wheel variant proposal: https://github.com/wheelnext/pep_xxx_wheel_variants/issues/4...
(I work at Astral)