Hacker News | zanie's comments

A brief note: your numbers are way off here. Astral subsequently raised a Series A and a Series B (as mentioned in the blog post) but did not announce them. We were doing great financially.

(I work at Astral)


It seems you are one of the most active contributors there.

I would sincerely have understood it better (and even wished for it) if OpenAI had made a very generous offer to you personally as an individual contributor, rather than choosing a strategy where the main winners are the VCs of the purchased company.

From the outside, we perceive zero to almost no revenue (no pricing? no "contact us"? maybe some consulting?) and millions burned.

Whether it's 4, 8, or 15M burned, no idea.

Who's going to fill that hole, and when? (Especially since PE funds have a five-year timeline, and the company dates from 2021.)

The end product is nice, but as an investor, being nice is not enough, so they must have deeper motives.


I mean, you pirouetted onto the AI hype train before running out of working capital. I guess that's "doing great financially" by some definitions.


Ruff wasn't named after the bird, we just think it's funny that Charlie didn't know it was a bird. He made up the word :)


I've always assumed it was something like:

ruff - "RUst Formatter".

ty - "TYpe checker"

uv - "Unified python packaging Versioner"? or "UniVersal python packaging"


Also note that R, U, and T are one letter away from spelling Rust.


Ah, thanks for demystifying!



As noted in the linked issue:

> At time of writing, many of the remaining rules require type inference and/or multi-file analysis, and aren't ready to be implemented in Ruff.

ty is actually a big step in this direction as it provides multi-file analysis and type inference.
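To illustrate what "requires type inference" means, here's a hypothetical example (not any specific Ruff rule): a purely syntactic, single-file linter sees only an attribute access on a name below, while a type-aware checker can prove the call is invalid.

```python
def get_count() -> int:
    # In a real codebase this value might come from another module entirely,
    # which is why multi-file analysis matters too.
    return 42

value = get_count()

# A type-aware checker can flag this statically: `int` has no `.upper()`.
# Without inferring the return type of get_count(), a syntax-only linter
# has no way to know this attribute access is wrong.
try:
    value.upper()
except AttributeError as exc:
    print(f"runtime failure a type checker would have caught: {exc}")
```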

(I work at Astral)


Are you guys planning to tackle Python debugging next? pydevd could really use a fast native rewrite targeting modern Python.


You're upset that uv doesn't yet support something that no other tool in the ecosystem supports?

I'd love for uv to lock build dependencies, but due to the dynamic nature of Python package metadata it's quite a hard problem. It'll be supported eventually though.
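For anyone wondering why locking build dependencies is hard: build-time metadata can be arbitrary code. A minimal sketch (hypothetical project, not a real setup.py) of a dependency set that only exists once the build backend actually runs:

```python
# Hypothetical setup.py-style logic: the dependency list is computed at
# build time, so a resolver can't lock it without executing the build.
import sys

def compute_requires():
    deps = ["packaging"]
    # Backport needed only on older interpreters (illustrative):
    if sys.version_info < (3, 11):
        deps.append("tomli")
    return deps

print(compute_requires())
```

Because the result can depend on the interpreter, the OS, or anything else visible at build time, a lockfile produced on one machine isn't guaranteed to describe the build on another.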

(I work on uv)


I'm just saying uv isn't "it"; the packaging issue in Python isn't solved yet.

And yes, build dependencies are the elephant in the room of why Python packaging sucks.


It's intentionally distinct from the `uv tool` interface — it won't change `ruff` or `uv tool run` behaviors.


Please open an issue with some details about the memory usage. We're happy to investigate, and feedback on how it's working in production is always helpful.

(I work on uv)


Last time I looked into this I found this unresolved issue, which is pretty much the same thing: https://github.com/astral-sh/uv/issues/7004

We run on-prem k8s and do the pip install stage in a 2CPU/4GB Gitlab runner, which feels like it should be sufficient for the uv:python3.12-bookworm image. We have about 100 deps that aside from numpy/pandas/pyarrow are pretty lightweight. No GPU stuff. I tried 2CPU/8GB runners but it still OOMed occasionally so didn't seem worth using up those resources for the normal case. I don't know enough about the uv internals to understand why it's so expensive, but it feels counter-intuitive because the whole venv is "only" around 500MB.


Thanks, that's helpful.

Did you try reducing the concurrency limit?
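For reference, uv exposes its concurrency limits through environment variables; a sketch of dialing them down to trade install speed for lower peak memory (values are illustrative, check `uv help` for your version):

```shell
# Fewer simultaneous downloads, unpacks, and source builds → lower peak RSS.
export UV_CONCURRENT_DOWNLOADS=4
export UV_CONCURRENT_INSTALLS=2
export UV_CONCURRENT_BUILDS=1
uv pip install -r requirements.txt
```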


Thanks for the feedback!


The combination of

– “client (uv) and server (pyx)” and

– “You can use it to host your own internal packages, or as an accelerated, configurable frontend to public sources like PyPI and the PyTorch index.”

is what really helped me understand what pyx aims to be.


I would also put the list of issues this fixes higher up; it makes the point more obvious. (Also, a setuptools update literally broke our company CI last week, so I was like "omg yes" at that point.)
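On the setuptools breakage: one common mitigation (a sketch, assuming a setuptools-based project; version bounds are illustrative) is pinning the build backend in pyproject.toml so an upstream release can't change build behavior underneath CI:

```toml
[build-system]
# Pin the build backend so a surprise setuptools release can't break builds.
requires = ["setuptools>=75,<76", "wheel"]
build-backend = "setuptools.build_meta"
```

Note this only constrains builds of your own project; transitive source builds of dependencies pick their own build requirements, which is part of the build-dependency-locking problem discussed elsewhere in the thread.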


Yes, we let you override our detection of your hardware. We haven't implemented dumping the detected information on one platform for use on another, but it's definitely feasible; e.g., we're exploring a static metadata format as part of the wheel variant proposal https://github.com/wheelnext/pep_xxx_wheel_variants/issues/4...
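For context, the related knob on the uv side (in recent uv versions; flag spelling may differ in yours, and the backend value below is illustrative) lets you either rely on detection or override it explicitly:

```shell
# Let uv detect the local GPU stack and pick a matching PyTorch index:
uv pip install torch --torch-backend=auto

# Or override detection entirely, e.g. when preparing an environment
# for a different machine:
uv pip install torch --torch-backend=cu128
```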


It's actually not powered by Wheel Variants right now, though we are generally early adopters of the initiative :)


Well, it was just a guess; "GPU-aware" is a bit mysterious to those of us on the outside ;).

