> It's as if someone asked you how many 1s there are in the binary representation of this text.
I'm actually kinda pleased with how close I guessed! I estimated 4 set bits per character, which with 491 characters in your post (including spaces) comes to 1964.
Then I ran your message through a program to get the actual number, and turns out it has 1800 exactly.
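For the curious, the check itself is a one-liner. A quick sketch in Rust (counting set bits over the UTF-8 bytes; this is my own illustration, not the program actually used):

```rust
// Count the total number of set bits in the UTF-8 byte representation of a string.
fn set_bits(text: &str) -> u32 {
    text.bytes().map(|b| b.count_ones()).sum()
}

fn main() {
    // 'h'=3 + 'e'=4 + 'l'=4 + 'l'=4 + 'o'=6 set bits
    println!("{}", set_bits("hello")); // prints 21
}
```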
>I estimated 4 set bits per character, which with 491 characters in your post (including spaces) comes to 1964
And that's exactly the kind of reasoning an LLM does when you ask it about characters in a word. It doesn't come from the word, it comes from other heuristics it picked up during training.
That's like saying Rust has GC because GC libraries/runtimes can be implemented in/for Rust. Rust recognizes allocations at the language level, but it does not provide the same degree of control over them that Zig does. For instance, there is no stable interface for custom allocators in alloc/std.
std may not yet provide a stable interface for custom allocators (the Allocator API is currently available only on nightly), but custom allocators are common in no-std Rust environments like embedded. Reading through https://github.com/irbull/custom_allocators, the Allocator API doesn't seem particularly complicated. I think it's fair to expect that it will stabilize in time.
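Worth noting: what is stable today is the process-wide `GlobalAlloc` trait; the per-collection `Allocator` trait is the nightly-only part. A minimal sketch of a custom global allocator on stable Rust (the counting wrapper here is just an illustration):

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering};

// A custom allocator that wraps the system allocator and counts bytes requested.
struct Counting;

static ALLOCATED: AtomicUsize = AtomicUsize::new(0);

unsafe impl GlobalAlloc for Counting {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        ALLOCATED.fetch_add(layout.size(), Ordering::Relaxed);
        unsafe { System.alloc(layout) }
    }
    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        unsafe { System.dealloc(ptr, layout) }
    }
}

// Every heap allocation in the program now goes through `Counting`.
#[global_allocator]
static A: Counting = Counting;

fn main() {
    let v = vec![0u8; 1024];
    println!("allocated at least {} bytes", ALLOCATED.load(Ordering::Relaxed));
    drop(v);
}
```

This is the "one allocator for the whole process" escape hatch; the unstable Allocator API is what would let you pass different allocators to individual collections, Zig-style.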
> That's like saying Rust has GC because GC libraries/runtimes can be implemented in/for Rust.
Quite a few have already been implemented and are available as libraries exactly as you describe, today. I wouldn't phrase that as "Rust has GC" because that might imply that the GC is required. But if your application might benefit from GC, it's certainly available in the language. I might say "GC is optionally available in Rust if desired" to be more accurate.
> I think it's fair to expect that it will stabilize in time.
The allocator work is facing a lot of obstacles, unfortunately. I prefer it to be unstable for as long as it needs, though.
> Quite a few have already been implemented and are available as libraries exactly as you describe, today. I wouldn't phrase that as "Rust has GC" because that might imply that the GC is required.
This is exactly my point. Rust does not prevent GC by any means, but Rust also does not encourage GC by any means. The same goes for custom allocators, aside from the highly unstable Allocator API.
> The allocator work is facing a lot of obstacles, unfortunately.
Got any links? Sounds like interesting reading. Seriously. Kind of thing I come here for. I'd really appreciate it. Or I can ask the AI for some.
> I prefer it to be unstable for as long as it needs, though.
Sure, same. Fully baking things is a process, and it's nice when things are fully baked. So I agree. I think Rust's async could be a bit more ergonomic, though it wasn't too difficult to wrap my head around, and it was shockingly simple to implement a basic no-std async executor (~10 lines), so maybe I'm coming around. I was pleased to find it was simple enough that I could write it myself and understand how it worked, as I try to do with all my microcontroller code, without depending on a big async runtime like Tokio if I didn't need its features (or bloat).
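To illustrate, a busy-polling `block_on` really does fit in a handful of lines of stable Rust. This is a sketch, not the executor mentioned above, and a real no-std one would wait for interrupts rather than spin:

```rust
use core::future::Future;
use core::pin::pin;
use core::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A minimal executor: repeatedly poll the future with a no-op waker until ready.
fn block_on<F: Future>(fut: F) -> F::Output {
    fn noop_raw_waker() -> RawWaker {
        fn clone(_: *const ()) -> RawWaker { noop_raw_waker() }
        fn noop(_: *const ()) {}
        static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
        RawWaker::new(core::ptr::null(), &VTABLE)
    }
    // SAFETY: the no-op vtable upholds RawWaker's contract trivially.
    let waker = unsafe { Waker::from_raw(noop_raw_waker()) };
    let mut cx = Context::from_waker(&waker);
    let mut fut = pin!(fut);
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
        // A real no-std executor would `wfi` (wait-for-interrupt) here.
    }
}

fn main() {
    let n = block_on(async { 40 + 2 });
    println!("{n}"); // prints 42
}
```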
No, it isn't, because that distinction is significant if you are using the language in an environment where those libraries are not available or suitable, such as the Linux kernel, which uses a custom fork of alloc that provides collections parameterized over different allocators.
So, that is not done at the language level but at the library level. Unless the compiler is modified for Linux, but even if it is, that's entirely bespoke and unstable. This is not comparable to Zig's design. I'm aware that anything can be done if the right things are implemented; I'm talking about what Rust-the-language currently does for control over allocation. If the answer comes down to "we're doing something custom", then the language is somewhat or largely sidestepped. C-the-language certainly doesn't have panics or exceptions, even though longjmp or a custom runtime could be used.
For the time being, I consider alloc and std to be part of the language, and the compiler also has provisions for them if they are used. If the alternative is supposed to be only using core, then the language does not provide for control over allocations the way Zig does. Imposing allocations with no control and not providing allocations at all are both failure modes. With enough effort, anyone could do anything, but programming languages exist to control and enhance certain things under their purview. Rust does not have a comparable facility to control allocations like it controls ownership and borrowing. Just as C-with-tooling being safe isn't the same as C being safe, Rust-with-libraries providing control over allocations isn't the same as Rust providing control over allocations.
Another issue with the measurements taken then is that LLVM was miscompiling, which, to me, calls into question how much we can trust that performance change.
Additionally, it was 10 years ago and LLVM has changed. It could be that LLVM does better now, or it could do worse. I would actually be interested in seeing some benchmarks with modern rustc.
> On the other hand, signed integer overflow being UB would count for C/C++
C and C++ don't actually have an advantage here, because the UB is limited to signed integers unless you use compiler-specific intrinsics. Rust's standard library lets you make overflow UB on any specific arithmetic operation, for both signed and unsigned integers.
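Concretely, the opt-in looks like this (`unchecked_add` is stable as of Rust 1.79 and requires `unsafe`; the values here are just for illustration):

```rust
fn main() {
    let a: u32 = 1_000_000;
    let b: u32 = 2_000_000;

    // Default: panics on overflow in debug builds, wraps in release builds.
    let c = a + b;

    // Explicitly defined wrapping semantics, always.
    let w = a.wrapping_add(b);

    // Opt into UB-on-overflow (like C signed arithmetic), on an unsigned type.
    // SAFETY: a + b = 3_000_000 fits in u32, so no overflow occurs.
    let u = unsafe { a.unchecked_add(b) };

    println!("{c} {w} {u}"); // prints 3000000 3000000 3000000
}
```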
It's interesting because it's a "cultural" thing, like the author discusses, and it's a very good point. Sure, you can do unsafe integer arithmetic in Rust. And you can do safe integer arithmetic with overflow checks in C/C++. But in either case, do you? Probably not.
"Culturally", C/C++ has opted for "unsafe-but-high-perf" everywhere, and Rust has "safe-but-slightly-lower-perf" everywhere, and you have to go out of your way to do it differently. Similarly with Zig and memory allocators: sure, you can do "dynamically dispatched stateful allocators that you pass to every function that allocates" in C, but do you? No, you probably don't, you probably just use malloc().
On the other hand, there's the author's point that the "culture of safety" and the borrow checker in Rust free your hand to try things you might not attempt in C/C++, and that leads to higher perf. I think that's very true in many cases.
Again, the answer is more or less "basically no, all these languages are as fast as each other", but the interesting nuance is in what is natural to do as an experienced programmer in them.
C++ isn't always "unsafe-but-high-perf". Move semantics are a good example. The spec goes to great lengths to ensure safety in a huge number of scenarios, at the cost of performance. This mostly shows up in two ways: one, unnecessary destructor calls on moved-from objects, and two, allowing move constructors to throw exceptions, which prevents most of the optimizations that move constructors were meant to enable in the first place (there was an article here recently on this topic).
Another one is std::shared_ptr. It always uses atomic operations for reference counting and there's no way to disable that behavior or any alternative to use when you don't need thread safety. On the other hand, Rust has both non-atomic Rc and atomic Arc.
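A quick illustration of the two: `Rc` uses plain (non-atomic) reference counts and won't compile if you try to send it across threads, while `Arc` pays for atomics and is shareable.

```rust
use std::rc::Rc;
use std::sync::Arc;
use std::thread;

fn main() {
    // Rc: cheap non-atomic refcount; single-threaded only (it is not Send).
    let local = Rc::new(vec![1, 2, 3]);
    let local2 = Rc::clone(&local);
    assert_eq!(Rc::strong_count(&local), 2);
    drop(local2);

    // Arc: atomic refcount; safe to share across threads.
    let shared = Arc::new(vec![1, 2, 3]);
    let shared2 = Arc::clone(&shared);
    let handle = thread::spawn(move || shared2.len());
    assert_eq!(handle.join().unwrap(), 3);

    // thread::spawn(move || local.len()); // would not compile: Rc is !Send
}
```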
> one, unnecessary destructor calls on moved out objects
That issue predates move semantics by ages. The language has always had very simple object lifetimes: if you write `Foo foo;`, the compiler will call `foo.~Foo()` for you at scope exit, even if you already called `~Foo()` yourself. Anything with more complex lifetimes uses new or placement new behind the scenes.
> Another one is std::shared_ptr.
From what I understand shared_ptr doesn't care that much about performance because anyone using it to manage individual allocations already decided to take performance behind the shed to be shot, so they focused more on making it flexible.
C++11 totally could have started skipping destructors for moved-from values only. They chose not to, and part of the reason was safety.
I don't agree with you about shared_ptr (it's very common to use it for a small number of large/collective allocations), but even if what you say is true, it's still a part of C++ that focuses on safety and ignores performance.
Bottom line - C++ isn't always "unsafe-but-high-perf".
The Rust standard library does make targeted use of unchecked arithmetic where the containing type can ensure that the overflow never happens and benchmarks have shown that it benefits performance, e.g. in various iterator implementations. This means the unsafe code has to be written and encapsulated once; users can then write safe for loops and still get that performance benefit.
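A toy sketch of that encapsulation pattern (a hypothetical `BoundedIter`, not the actual std code): the type's invariant makes the unchecked increment sound, and callers only ever touch the safe API.

```rust
/// Yields indices 0..len. Invariant: `i <= len`, so the increment
/// inside `next` can never overflow.
struct BoundedIter {
    i: usize,
    len: usize,
}

impl Iterator for BoundedIter {
    type Item = usize;
    fn next(&mut self) -> Option<usize> {
        if self.i < self.len {
            let out = self.i;
            // SAFETY: i < len <= usize::MAX, so i + 1 cannot overflow.
            self.i = unsafe { self.i.unchecked_add(1) };
            Some(out)
        } else {
            None
        }
    }
}

fn main() {
    // Callers never see the unsafe: a plain safe for loop gets the benefit.
    let v: Vec<usize> = BoundedIter { i: 0, len: 3 }.collect();
    assert_eq!(v, [0, 1, 2]);
}
```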
And one of the fun things about how unwrap() does that automatically is that if you are working with an orchestrator with retry logic, you won't need to (re-re-re-re-re-)write your own for the entire program: the orchestrator will see the error, log its output, and try again in high-volume workloads, or move on to the next request. This is incredibly nice to use, especially when a failure in one request doesn't need to fail the entire application for all requests.
I shy away from unwrap() in almost all cases (as should anyone!) but if you are running a modular system, then unwrap when placed strategically can be incredibly useful.
Back in 2015 when the Rust project first had to disable use of LLVM's `noalias` they found that performance dropped by up to 5% (depending on the program). The big caveat here is that it was miscompiling, so some of that apparent performance could have been incorrect.
Of course, that was also 10 years ago, so things may be different now. There has been ongoing interest from the Rust project in improving the optimisations `noalias` enables, as well as work in Clang to improve optimisations under C and C++'s aliasing model.
It's interesting that I've heard the same from people involved in Rust: they expected more interest from C++ programmers and were surprised by the number of Ruby/Python programmers interested.
I wonder if it's that Ruby/Python programmers were interested in using these kinds of languages but were being pushed away by C/C++.
The people writing C++ either don't need much convincing to switch, because they see the value, or are unlikely to give it up anytime soon, because they don't see anything Rust does as useful to them; there's very little middle ground. People from higher-level languages, on the other hand, see in Rust a way to break into a space they would otherwise not attempt because it would take too long to reach proficiency. The hard part of Rust is trying to simultaneously have hard-to-misuse APIs and no additional performance penalty (however small). If you relax either of those goals (is it really a problem if you call that method through a v-table?), a Rust-like language becomes much easier to write. I think a GC'd Rust would already be a nice language that I'd love, like a less convoluted Scala; it just wouldn't have filled the free square that ensured a niche for it to exist and grow, and would likely have died on the vine.
I think on average C++ programmers are more interested in Rust than in Go. But C programmers are on average probably not interested in either. I do agree that the accessible nature of the two languages (or at least perception thereof) compared to C and C++ is probably why there's more people coming from higher-level languages interested in the benefits of static typing and better performance.