

Worth clarifying: the build parallelism is limited because the fundamental unit of compilation in Rust is the crate, so the main option available to cut down the compile time of a large crate is to split it into smaller crates.
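For concreteness, a minimal sketch of what that split looks like as a Cargo workspace (the crate names here are hypothetical):

    # Hypothetical top-level Cargo.toml: one large crate split into
    # members that cargo can build in parallel, subject to the
    # dependency graph between them.
    [workspace]
    members = ["core", "parser", "codegen"]

Each member then declares its own dependencies, e.g. parser/Cargo.toml pointing at core with a path dependency.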

I've seen this floated as a response to the current anxieties over LLMs in math: namely, in applied math, LLMs being good at pure math may actually allow pure math techniques to be imported. It's unclear whether that will pan out, but it's interesting to consider.

You can read some motivation for it at the following link:

https://rust-lang.github.io/rust-project-goals/2024h2/Rust-f...

Note that the link also discusses `std::offload`, which might be of interest.


There is a strong anti-QKD bias among experts who understand QKD. It is a fun academic concept, but it does not solve a real-world problem, and it cannot be deployed at costs remotely comparable to classical cryptography. And even if you pay the enormous costs for it, it is trivial for an attacker to completely disrupt your communication in a way that cannot be recovered from without out-of-band communication (e.g. sending a courier, or falling back to computational cryptography).

If you hate the NSA, that's fine. Nobody in the EU cried foul over the NSA's recommendations though (and the NIST-winning schemes are European). Chinese scholars submitted some fundamentally similar schemes, and the Chinese Academy of Sciences has formally recommended lattice-based schemes. While the Chinese (government-run) standardization effort is only starting, it is a very good bet that it will settle on a lattice-based scheme.

So, unless you think all of the world's governments (again, including China) are in a massive cabal to allow the NSA specifically to spy on the entire world, #2 is not a particularly valid question.


BB84 (and QKD overall) requires authenticated channels. You have to get those somewhere. You can get them from an information-theoretically secure MAC, but it has significant downsides. You can get them with computationally secure primitives, but then there's no point in using QKD in the first place. You cannot instantiate QKD securely without one of those two choices though.

> You can get them with computationally secure primitives, but then there's no point in using QKD in the first place.

I don’t entirely agree. You can build a computationally secure authenticated channel using symmetric primitives (e.g. hashes) that are very, very likely to survive for a long time. And you can build comparably secure asymmetric authentication schemes from the same primitives (hash-based signatures are a thing).
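As a concrete (hedged) sketch of that last point, here is a minimal Lamport one-time signature, the simplest hash-based scheme. This is illustrative only; deployed hash-based schemes are variants like XMSS or SPHINCS+. It assumes the `sha2` and `rand` crates:

    use rand::RngCore;
    use sha2::{Digest, Sha256};

    type Block = [u8; 32];

    struct Keypair {
        sk: Vec<[Block; 2]>, // one secret preimage per (bit index, bit value)
        pk: Vec<[Block; 2]>, // their hashes; publishing these is the public key
    }

    fn keygen() -> Keypair {
        let mut rng = rand::thread_rng();
        let mut sk = vec![[[0u8; 32]; 2]; 256];
        let mut pk = vec![[[0u8; 32]; 2]; 256];
        for i in 0..256 {
            for b in 0..2 {
                rng.fill_bytes(&mut sk[i][b]);
                pk[i][b] = Sha256::digest(sk[i][b]).into();
            }
        }
        Keypair { sk, pk }
    }

    // Sign a 32-byte digest by revealing one preimage per message bit.
    // Security is one-time: signing two digests with one key leaks preimages.
    fn sign(kp: &Keypair, digest: &Block) -> Vec<Block> {
        (0..256)
            .map(|i| kp.sk[i][((digest[i / 8] >> (i % 8)) & 1) as usize])
            .collect()
    }

Verification just re-hashes each revealed preimage and checks it against the public key; the only assumption is the hash function.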

But to build a classical key exchange system, you need more exotic primitives (Diffie-Hellman or public-key encryption / KEM schemes), and the primitives of this sort that are supposedly post-quantum secure have not been studied for nearly as long and have much more structure that might make them attackable.

Not to mention that attacking the authenticated channel in QKD cannot give a store-now-decrypt-later attack.


At that point you can just pre-share a key and use AES.

Nope -- that gives neither public-key capabilities nor forward secrecy.

The point is there is no public-key capability in BB84 either; it requires pre-sharing a symmetric key itself.

You absolutely do get forward secrecy with pre-shared keys. You just need to make the protocol derive the next key with a cryptographic hash function, and deliver the iteration count with the packet so the recipient knows which key is the correct one. This is called a hash ratchet (as in SCIMP), and it's used e.g. in the Signal protocol.

(As implementation details, you'll also want to hash the ratchet counter together with the key to prevent theoretical loops, and you'll probably want to encrypt the ratchet counter during delivery with a static header key, or at the very least authenticate it.)
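A minimal sketch of that ratchet step (illustrative only, not Signal's actual construction; assumes the `sha2` crate):

    use sha2::{Digest, Sha256};

    // Derive the next key as H(counter || key). Compromise of the
    // current key reveals nothing about earlier keys, which is the
    // forward secrecy; mixing in the counter also rules out short
    // hash cycles.
    fn ratchet(key: &mut [u8; 32], counter: u64) {
        let mut h = Sha256::new();
        h.update(counter.to_be_bytes());
        h.update(&key[..]);
        key.copy_from_slice(&h.finalize());
    }

A receiver that sees iteration count n in the packet header just applies the same step until its own counter catches up.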


That's not what people say? And pre-quantum crypto is also vulnerable to as-yet-undiscovered classical algorithms?

> That's not what people say?

Well, they should, if they want to be honest and mathematically rigorous.

Take, for instance, NIST's proposed post-quantum cryptography standard Kyber, which relies on lattice-based methods.

> And pre-quantum crypto is also vulnerable to as-yet-undiscovered classical algorithms?

True, and also disconcerting; the most reckless example being fungible currency that relies on such methods.

We should be working on standardising and moving towards methods that are independent of, rather than reliant on, unresolved questions in mathematics.


You cannot be mathematically rigorous with computational lower bounds. It is not possible with current mathematics: no lower bounds relevant to cryptography exist. So any computational cryptography must separate into:

1. provable constructions based on "hard" problems, and
2. best-effort cryptanalysis of "hard" problems.

This is true of lattice-based problems. It is true of EC crypto. It is true of RSA. It is true of McEliece. It is true of AES. That is the nature of things, and there is no avoiding it.

The analysis of Kyber was honest and mathematically rigorous. It's also beside the point, as all of your criticisms hold for EC/AES as well. (EC does have some reasonable lower bounds, e.g. in (extensions of) the generic group model, but these of course rely on the conjecture that EC groups behave like generic groups.)

> We should be working on standardising and moving towards methods that are independent of, rather than reliant on, unresolved questions in mathematics.

There are no known methods that are remotely economically viable. There is (completely seriously) a clearer path towards fixing climate change than towards what you propose. There is also a clearer path towards fixing global hunger. Wanting to rely solely on mathematically provable techniques in cryptography is a complete fantasy, and not one that is worth engaging with.

Furthermore, it's completely pointless. We might as well frame your question as

> We cannot prove that AES is hard, so we should not use it.

Why? It would be cool to prove that AES is hard. Sounds fun. And practically, the hardness assumptions of deployed cryptography are almost never the cause of a security vulnerability. If we care about secure systems, proving AES is hard is so low down on the priority list that it is difficult to think of something less important. Again, completely seriously, we would have MUCH more secure systems if we paid each person in the country to use better passwords.

Given that this is the case, it seems unreasonable to suggest spending Ω(billions) updating our network infrastructure to worse-performing links just to "fix" a problem that doesn't exist. I'm even speaking as a cryptographer who (unreasonably) dislikes heuristics in the field, and tries to replace them with provable alternatives. It is a fun academic exercise. But it is not a real-world issue.


I wouldn't call it "solid theoretical footing". The rough sketch of QKD is:

1. BB84 key exchange requires an authenticated channel.
2. Typically you get that channel from a Carter-Wegman MAC, which is information-theoretically secure, but requires shared randomness that cannot be reused.

Successful protocol execution refreshes the randomness (you can net gain from it), so you can communicate back and forth continuously when everything is working. A MitM who simulates a network failure, though, can expend some of your pre-shared randomness without it being refreshed. If they do this enough, they can exhaust your shared randomness and bring down the link until you exchange more randomness somehow out of band. If you want to maintain information-theoretic security, this might involve e.g. a courier with a USB stick (or a carrier pigeon, who knows).

This is still "secure", but it is also a significant issue that any QKD (even "real" QKD) has and classical cryptography does not, and it has always made me question the "solid" story for QKD.
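To make "shared randomness that cannot be reused" concrete, here's a hedged sketch of a Carter-Wegman-style one-time MAC over the Mersenne prime p = 2^61 - 1 (illustrative parameters, not a vetted construction):

    // The key (r, s) is the pre-shared randomness, sampled uniformly
    // below p: r keys a polynomial-evaluation universal hash, and s
    // one-time-pads the result. Reusing (r, s) on a second message
    // voids the information-theoretic guarantee -- which is why QKD
    // has to keep refreshing shared randomness.
    const P: u128 = (1u128 << 61) - 1;

    fn cw_tag(msg: &[u64], r: u64, s: u64) -> u64 {
        let mut acc: u128 = 0;
        for &m in msg {
            // Horner evaluation of the message polynomial at r, mod p.
            // (A real encoding would keep message words below p.)
            acc = ((acc + (m as u128) % P) * (r as u128)) % P;
        }
        ((acc + s as u128) % P) as u64
    }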


The Rust they've written (so far) is highly unidiomatic (and with a ton of unsafe). I can't speak to the Zig part, but it seems plausible to me that it is line-by-line, horrendous Rust.

Whether or not they can clean it up is an interesting question.


Zig can do some things wrt. compile-time compute that sit somewhere in between Rust const expressions and proc-macro usage. This isn't something Rust (or most languages) has. So even if we are generous and interpret line-by-line as expression-by-expression, this isn't fully doable.
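For a rough sense of the Rust end of that spectrum: const evaluation can compute values (e.g. lookup tables) at compile time, but unlike Zig's comptime it can't freely generate types or declarations. A small illustrative sketch:

    // A lookup table computed entirely at compile time via `const fn`.
    // Zig's comptime can additionally run near-arbitrary code to
    // produce types and declarations, which is roughly the gap
    // described above.
    const fn build_table() -> [u32; 8] {
        let mut t = [0u32; 8];
        let mut i = 0;
        while i < 8 {
            t[i] = (i as u32) * (i as u32);
            i += 1;
        }
        t
    }

    static SQUARES: [u32; 8] = build_table();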

But also, telling an LLM to do a line-by-line translation and giving it a file _is guaranteed to never truly be a line-by-line translation_, due to how LLMs work. That's fine: you don't tell it "line-by-line" to actually make it work line by line, but to "convince" it not to do the opposite (like moving things around wholesale, or completely rewriting components based on guesses about what they are supposed to do). In other words, it makes the result more likely to be behavior-compatible (including logic bugs) even though it isn't literally line-by-line. And that then allows you to fuzz the behavior for discrepancies in the initial step, before doing any larger refactoring that may include bug fixes.
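A hedged sketch of what that discrepancy check could look like (the binary names and CLI here are hypothetical placeholders):

    use std::process::Command;

    // Differential testing: run the original and the translated binary
    // on the same input and compare their output byte-for-byte.
    fn outputs_match(input: &str) -> bool {
        let run = |bin: &str| {
            Command::new(bin)
                .arg(input)
                .output()
                .expect("failed to run binary")
                .stdout
        };
        run("./zig-impl") == run("./rust-impl")
    }

    fn main() {
        // A real fuzzer would draw inputs from a corpus and a mutator.
        for input in ["", "hello", "{\"k\":1}"] {
            assert!(outputs_match(input), "discrepancy on {input:?}");
        }
    }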

Though tbh I would prefer it if the Zig -> terrible-Rust part were done with a deterministic, reproducible, debuggable program instead of an LLM. The LLM could then be used to support incremental refactoring. But the initial "bad" transpilation is so much code that using an LLM there seems like a horror story wrt. subtle hallucinations and similar.


If anyone can do it, it's Anthropic. The question is more how long it will take and how many tokens it will burn/how much groundwater.

The Rust port (at least currently) heavily uses unsafe as well:

https://github.com/oven-sh/bun/compare/claude/phase-a-port#d...

That isn't particularly surprising, but the point is that I would expect getting things more stable than the Zig version to take a while.


That's completely normal for the first step of a language translation. Actually, it's required if you do a file-by-file translation first while wanting to maintain interface compatibility.

I'm not sure I would take this kind of path; I would focus much more on refactoring the project into small, easily translatable components with narrow boundaries. But it's cheap to try things.

