What's the status of digital signature algorithms? There's much stuff that doesn't need to be encrypted because it's not secret - anybody can read the URL. It just needs to be protected against forgery.
That's what subresource integrity is for. Only the parent page needs HTTPS. Most of the assets can use HTTP, locked to specific content by subresource integrity.
That's more useful for security than encryption. If someone changes a third party Javascript file, it won't load. This would promote a more stable web.
The point OP is making is not everything requires privacy. We shouldn't pay the (higher and higher) price of keeping privacy for cases where it's just not needed.
But which URLs people visit is very much private information, for example.
Also, one of the ideas in general is to obfuscate the really private communications by burying them in a sea of equally encrypted but trivial data.
If only the very sensitive information gets encrypted, that also gives an attacker a very good filter: ignore everything else and go straight for the highly protected traffic.
I was reading up on how this worked. It’s a really good idea and a step in the right direction, 100%… Cloudflare is flexing a little, and this mainly works because so many people are on Cloudflare.
It never ceases to amaze me how complicated privacy is.
From Cloudflare's blog [1]:
“What about the IP address?
While both DNS queries and the TLS SNI extensions can now be protected from on-path attackers, it might still be possible to determine which websites users are visiting by simply looking at the destination IP addresses on the traffic originating from users’ devices. Some of our customers are protected by this to a certain degree thanks to the fact that many Cloudflare domains share the same sets of addresses, but this is not enough and more work is required to protect end users to a larger degree. Stay tuned for more updates from Cloudflare on the subject in the future”
> That's what subresource integrity is for. Only the parent page needs HTTPS
Once you have already opened a TLS connection for the parent page, you might as well keep using it. You've already paid for it, and that way you get confidentiality in addition to integrity.
P.S. because nobody actually answered. Hashes like what SRI uses are quantum safe.
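For reference, an SRI integrity value is just a base64-encoded hash of the asset's exact bytes; a minimal sketch of computing one (the asset body here is hypothetical):

```python
import base64
import hashlib

def sri_value(data: bytes, algo: str = "sha384") -> str:
    """Compute a Subresource Integrity value like "sha384-..." for an asset."""
    digest = hashlib.new(algo, data).digest()
    return f"{algo}-{base64.b64encode(digest).decode()}"

# Hypothetical asset; a real page would hash the exact bytes served by the CDN.
asset = b"console.log('hello');"
print(sri_value(asset))
# The result goes into: <script src="..." integrity="sha384-..." crossorigin="anonymous">
```

If the CDN ever serves different bytes, the hash won't match and the browser refuses to run the file.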
You're missing the point: it's essentially about third party dependencies/assets (say a popular framework from a CDN), and here you cannot reuse the TLS connection. Also, using HTTPS to connect to the third party doesn't prevent tampering on the third party's side; that's what SRI is for.
Third party resources are the bane of my existence as a web user and as a person responsible for page load time. I would cry some big fat crocodile tears if that feature got removed. It’s already been curtailed a bit.
Of course you can just not do that, and then problem solved.
Historically there was some performance benefit to using a shared CDN, as browsers would share the cache across sites. However, modern browsers no longer do that. In theory the CDN might be geo-distributed to be very close to your user, but the latency of opening a second connection (especially in an HTTP/2 world) would likely outweigh that unless your web server is really bad.
CloudFront and Cloudflare can both present edge-cached resources as from the origin domain, can’t they? Akamai did not, as I recall, but that may have changed after I stopped needing to care about them.
The issue described by the article is only due to the digital signatures occurring in a TLS handshake, not due to the subsequent (symmetric) encryption of the connection (AES-GCM), which as the article explains is already considered quantum safe.
That aside, digital signatures are implemented in terms of asymmetric encryption (with the roles of private and public key reversed), so the quantum safety of asymmetric encryption and of digital signatures, as well as their possible size issues, are really one and the same.
TLS and more general crypto systems largely provide three different things - Authentication, Integrity and Secrecy.
Using a TLS connection should give you all three. Without the TLS connection on assets, you’re missing the secrecy element. Now sure, the data is public, but an attacker could now be in possession of information about what you’re requesting, which may leak information about which pages you’re visiting etc.
> What's the status of digital signature algorithms? There's much stuff that doesn't need to be encrypted because it's not secret - anybody can read the URL. It just needs to be protected against forgery.
This is obviously not true. What leads you to claim the URI is never sensitive or protected information?
True, although it doesn't really work in practice, because your third party resources change for non-malicious reasons a lot (they're wrong to do so, it's bad practice, but bad practices are everywhere; social problems and technical solutions, you know the saying).
And if you use SRI you end up with a broken website every once in a while, so you need to monitor it and ship a fix ASAP (and there's always some downtime if you want to make sure the change wasn't malicious before updating the hashes on your website).
Nobody really cares about security, stakeholders just want a good looking enough security theater.
Right. Subresource integrity guarantees that the other guy didn't have a break-in that put a back door into their Javascript that you're including. HTTPS guarantees the other guy controls the domain.
There are meaningful size and performance challenges to implementing post quantum crypto, but I don't think "oh no, HTTPS handshakes are 20% slower!" is something we should actually worry about.
The real worry is all of the small, secure, embedded devices that literally don't have enough memory or compute to run these algorithms at all. Even the state-of-the-art hardware-accelerated implementations of PQC use a ton of memory and a ton of silicon area, to the point where they're untenable for lower end devices.
> but I don't think "oh no, HTTPS handshakes are 20% slower!" is something we should actually worry about.
Why not? TLS handshake costs are real. That said, we have pretty restrictive TCP windows from 20+ years ago that might need a bump-up at some point.
But importantly, we should not be making too many assumptions about use cases, as if stateful connections like TLS are the holy grail. Small and fast crypto enables new use cases; engineering standards should always take performance and overhead into account, not focus only on existing use.
> The real worry is all of the small, secure, embedded devices that literally don't have enough memory or compute to run these algorithms at all.
Indeed! And more overhead increases surface area for DOS attacks on “high-end devices” as well. So there are already clear examples of how these would break important things.
The reason I’m skeptical is precisely that QR today has known, significant setbacks, but unknown benefits.
I'm no expert but somehow I think post-quantum cryptography will become what IP6 is to IP addressing: we all know very well what problem it solves but the problem doesn't hit us that hard in practice that we are actually forced to make the switch.
It’s not quite the same, because the problems solved by PQC depend on technological developments (practical quantum computers) that may or may not materialize, whereas the problems solved by IPv6 are independent of such developments. And we are already at the point where IPv6 is necessary, due to having run out of IPv4 addresses.
One-time pads need you to have a limited amount of data to encode and a good channel through which to send your one-time pad. However, assuming AES isn't broken by quantum computers (there is no evidence that it is), you could use a relatively small one-time pad to generate keys to encrypt a much bigger bulk of data.
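A sketch of that hybrid, with hypothetical numbers: slice the pre-shared pad into 32-byte chunks and use each chunk exactly once as an AES-256 key (the AES-GCM step itself is omitted here, this only shows the pad budgeting):

```python
import os

CHUNK = 32  # one AES-256 key per chunk

def keys_from_pad(pad: bytes):
    """Yield one-time 32-byte keys from a pre-shared pad.

    Each chunk must be used for exactly one session and then destroyed,
    otherwise the one-time property is lost.
    """
    for i in range(0, len(pad) - CHUNK + 1, CHUNK):
        yield pad[i:i + CHUNK]

# A hypothetical 1 MiB pad, exchanged out of band, yields 32,768 session keys;
# each key can then protect gigabytes of traffic under AES-GCM.
pad = os.urandom(1024 * 1024)
keys = list(keys_from_pad(pad))
print(len(keys))  # 32768
```

The hard part, as the comment says, is distributing (and then physically protecting) the pad itself.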
Not the craziest idea in the world, but you still need a very secure physical key distribution medium, which is hard.
I'm not so sure - QKD needs good SNR optical paths from source to destination, so over any real distance it's pretty hard. Physical medium security is just a matter of money today.
There are ideas of satellites doing QKD, but that will fall firmly into the realm of nation-states, which already spend a shit load on physical security.
There’s quite a bit of research currently ongoing for this exact topic, especially in the domain of embedded devices and smaller compute forms.
Valid concerns are being raised about performance degradation with some of these larger post-quantum models like Dilithium.
But 20 years ago we were using cryptography without it being an undue burden on the network. How much more bandwidth do we have now? How much more storage? Even on phones without wifi available, we're in pretty good shape.
Measuring cryptographic overhead as a percentage of available bandwidth and storage, it'd be interesting to figure how many years we'd regress if we went post-quantum. I'd bet it wouldn't be dramatic.
> TLS handshake would contain 5×2,420 + 2×1,312 = 14,724 bytes of signatures and public keys, an over 10x increase.
> Barring a large-scale quantum computer staring us in the face, this is not a tenable amount of data to send simply to open a connection.
Do we not run the risk of it simply being the case that Dilithium is the best TLS-compatible signature algorithm we're going to get for a while? Then the question becomes: does a rodent of unusual size (cough, a large, fault-tolerant quantum computer) actually exist?
I think the answer is probably no, but if I understand attacks against the current Web PKI, it's not like an attacker would sit on the wire running Shor's algorithm in real time. The work to make a MITM attack possible would be done ahead of time (e.g. running Shor's algorithm against an Ed25519 public key to find the private scalar), and then the attack proceeds as normal.
Which is to say, if they[0] can make it work at all, they can start cracking keys and MITM attacks on connections become a reality. Does that mean that we should implement Dilithium? I don't know, but I also don't know if we should sit on our hands until something better comes along.
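The arithmetic behind the quoted figure is easy to check, assuming Dilithium2's published object sizes and the usual count of signatures and public keys in a certificate chain:

```python
# Dilithium2 object sizes in bytes, per the NIST round-3 submission.
SIG_BYTES = 2420
PK_BYTES = 1312

# A typical handshake carries roughly five signatures (the handshake
# signature, leaf and intermediate certificate signatures, and two embedded
# SCTs) plus two public keys (leaf and intermediate).
total = 5 * SIG_BYTES + 2 * PK_BYTES
print(total)  # 14724 bytes, versus ~1 KB with ECDSA/Ed25519
```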
It's probably the stone-age of PQC right now. Apart from the humongous sizes the amount of scrutiny and standardization is low compared to RSA or even elliptic curve cryptography. So chances are that at least one popular algorithm needs to be swapped out in foreseeable time.
On the other hand speaking about HTTP newer versions allow for more efficient connection reuse and multiplexing so the bulky handshakes aren't needed as often.
One possible tls change that isn't mentioned here:
Initially just send a hash of the intermediate certificate. The client would keep a cache of intermediate certificates, and if it didn't already have the cert in its cache, it would request the full signed intermediate cert.
The downside for that is that you have to maintain a cache, which would be difficult for some non-browser applications.
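A toy sketch of that proposal (the names and message shapes are invented for illustration, this is not a real TLS extension): the server sends only a fingerprint first, and the client asks for the full certificate on a cache miss:

```python
import hashlib

# Hypothetical intermediate certificate; a real one would be DER bytes.
INTERMEDIATE_CERT = b"-----FAKE INTERMEDIATE CA CERT-----"

def fingerprint(cert: bytes) -> bytes:
    return hashlib.sha256(cert).digest()

class Client:
    def __init__(self):
        self.cache = {}  # fingerprint -> full certificate

    def handle_fingerprint(self, fp, fetch_full_cert):
        """Use the cached cert when possible; otherwise spend an extra round trip."""
        if fp not in self.cache:
            cert = fetch_full_cert()
            # Never trust a fetched cert that doesn't match the announced hash.
            assert fingerprint(cert) == fp
            self.cache[fp] = cert
        return self.cache[fp]

client = Client()
fp = fingerprint(INTERMEDIATE_CERT)

fetches = {"count": 0}
def fetch():
    fetches["count"] += 1
    return INTERMEDIATE_CERT

client.handle_fingerprint(fp, fetch)  # first connection: cache miss, full cert sent
client.handle_fingerprint(fp, fetch)  # later connections: only the 32-byte hash sent
print(fetches["count"])  # 1
```

The cache-maintenance burden mentioned below is visible here: the dict has to live somewhere and be evicted sensibly.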
Hasn’t the path to quantum computing with that many qubits always been full of unknown unknowns? I’m happy that researchers are preparing, but it seems like defending against a threat of unknown shape that doesn’t exist is kind of futile. First, do we even have high confidence that QC is the thing that will break current algorithms? Secondly, how much time passes from “the writing is on the wall” until bad actors can deploy them for financial gain? Unless it’s short, why not wait and migrate later? Because, like the xkcd with the $5 wrench, crypto primitives have to be the weakest link in the chain to really be a threat so big it warrants throwing PQC on top of everything just in case. I mean, our security for even critical infrastructure is so bad that ransomware groups are rampant without ever needing to care about primitives. Also, if you’re a high-value target for state actors, you’re not gonna use default TLS anyway. Extra assumption: most harvest-now-decrypt-later data seems completely worthless?
Anyway, rambling aside: is this just naive skepticism from a mortal non-enlightened engineer with a short-sighted YAGNI mindset? Or have the purist academics been selling parachutes to workers in skyscrapers, i.e. a solution to a problem that won’t materialize anytime soon? Please, forgive my ignorance.
It is! Taking into account that only Shor's algorithm is proven to be exponentially faster at factorising numbers, and looking at current progress in using it, the threat to RSA-2048 will become real in a ridiculous 2^2048 years. This can be estimated by taking three events in factorisation:
2002: factorisation of 15 [1]
2012: factorisation of 21 [2]
2022: factorisation of 35 by IBM (although the article is about the number 21, let’s give them credit) [3]
In a plot, the tendency is somewhat linear, with the factorised number increasing by one every year.
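The claimed trend can be checked with a quick least-squares fit over exactly those three data points:

```python
# (year, largest number factored via Shor's algorithm), from the list above.
points = [(2002, 15), (2012, 21), (2022, 35)]

# Ordinary least-squares slope: growth in the factored number per year.
n = len(points)
mean_x = sum(x for x, _ in points) / n
mean_y = sum(y for _, y in points) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in points) / \
        sum((x - mean_x) ** 2 for x, _ in points)
print(round(slope, 2))  # 1.0: the factored number grows by about one per year
```

At one unit per year, RSA-2048 (a ~617-digit number) is indeed comically far away on this trend line.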
A sceptic immediately replies to this thread, saying we already have 100-1000 qubit quantum computers and are on the path to a million qubits in the foreseeable future. That may well be true. The issue is that Shor's algorithm requires the error rate of the qubit operations to decrease exponentially with the number of qubits. This is why IBM used only five qubits in [3], although they already had hundred-qubit QCs at that time; it just produces meaningless results when used with more qubits.
This is also why the line for factorisation is bound to stay linear. If the error rate of the qubit operations decreases by 20% every year (exponential progress), then the usefulness of Shor's algorithm plotted on a log scale grows linearly.
Error correction is not a panacea either. It does not reduce errors to zero for logical qubits; rather, it reduces them only once the physical error rate is below a threshold. Reaching the error rates needed to run Shor's algorithm on a number of interest would require repeated levels of encoding, and thus an unreasonable number of physical qubits.
> The issue is that Shor's algorithm requires the error rate of the qubit operations to decrease exponentially with the number of qubits.
So… traditional silicon computing scales exponentially in time, but QC computes in polynomial time(!), with the minor detail that it scales exponentially in error.
> If the error rate of the qubit operations decreases by 20% every year (exponential progress), then the usefulness of Shor's algorithm plotted on a log scale grows linearly.
Right, which is exactly the type of leeway you need, no? 1-2 qubits per year will take many decades to break ECC or RSA. It’s the same Moore’s-law-style assumption baked into traditional computing getting faster.
> This is why IBM used only five qubits in [3], although they already had hundred-qubit QCs at that time; it just produces meaningless results when used with more qubits.
So IBM has this really cool gf, but she’s at a different school so you won’t know her. Half joking, but putting error propagation in the footnotes seems very off to me as a complete layman. Like thinking we can accurately predict the weather 5 months out with a bit more compute.
I'd say in half the amount of time it took for us to go from the first internal combustion engine to the Model T? Or if you think that's a bad comparison: The Model T to the honda civic?
We don’t have any mainstream physical theory that describes the process of quantum decoherence aka wave function collapse. We know it happens, but don’t know when exactly or how quickly. So if QCs fail, Quantum Mechanics won’t be proven wrong, but rather we’ll have more data on wave function collapse which has been an open question for more than a century.
I personally like Penrose Objective Collapse theory, because it also connects gravity and QM in an unusual way.
Although that is true, QC isn't ruled in or out by other current observations, despite the extreme accuracy with which quantum physics theories match a very wide range of observations. If QCs can't work, QM will need to be amended but very subtly.
It's even possible that useful QC can't work for such a subtle reason that there's no way to be sure.
According to current physics and computability theories, it is thought you need a useful QC to calculate the behaviour of any physical system that has the properties required by a useful QC. Where "useful" means large enough to be out of reach of classical computers, and "calculate" means to sufficient accuracy to test if calculated behaviours match observation.
This means, the universe might deviate from the appealing math of quantum physics just enough to prevent a highly-entangled QC from ever working, at the same time as agreeing with everything we can calculate from what we have observed so far.
Examples of things we can measure but not calculate accurately: Chemical behaviours, material properties and phase transitions (at least some of them) depend on highly-entangled quantum effects, even higher than any QC is aiming for.
You might think, if we know the basic physics, and we can accurately measure temperature, energy, reaction rates and other properties, those measurements surely gives us a good idea if quantum physics theory matches the universe.
But where there is high entanglement, we can't do the calculations accurately enough to say for sure. They are limited by computational complexity on conventional computers. We have good approximations that are very useful, but they aren't precise enough to rule QC feasibility one way or the other.
But then we also have QFT. The difficult things you were talking about fall more on the QFT side of things.
In my view QM predicts QC will work just like math predicts Turing machines will work.
But the practical realization of an advanced Turing machine can indeed be a very complicated thing - in fact we need quantum physics to understand how modern transistors work.
So if practical useful QC don't work, wouldn't that be more of a QFT engineering problem?
I fully agree that we might not be able to build a working QC, just like we are struggling with fusion power for over 50 years now despite theory predicting it should work, and in theory it being a simpler problem than QCs.
Maybe I'm being pedantic with the distinction between QM/QFT...
QM doesn’t say anything about how long a system can stay in quantum entangled state, so it doesn’t say what is the limit to computations that can be performed on a quantum computer. Some computations were successfully performed, as QM predicts, but now it seems that researchers are hitting a limit.
QM says a quantum entangled state will remain like that until disturbed/measured. If there is some limit as you suggest, that means QM is at least partially broken.
That’s right, but it doesn’t specify what exactly counts as disturbance/measurement. We know from observation that interaction with a macroscopic system causes wave function collapse seemingly instantly. But we don’t know much about in-between behavior, like what happens when interacting with another microscopic system, and whether the wave function of a microscopic quantum system collapses on its own given enough time.
Not necessarily. It could be that ECDLP (Elliptic Curve Discrete Log Problem) is not as hard as we thought.
It's not inconceivable that it's in P (Polynomial time).
In fact, if we were to see current algorithms being broken right now, I think it's more likely to be due to a classical algorithm breakthrough than by quantum algorithms running on an actual large scale quantum computer with hundreds or thousands of logical (i.e. practically error-free) qubits.
> First, do we even have high confidence that QC is the thing that will break current algorithms?
Yes?
I mean nothing is certain, but it is the most viable known threat by a wide margin.
> Secondly, how much time from “the writing is on the wall” until bad actors can deploy them for financial gain? Unless it’s short, why not wait and migrate later?
Nobody knows, but it’s difficult to coordinate changes, and it is difficult to know which quantum-resistant algorithms actually work. It can take decades to have confidence a new crypto primitive actually works.
As far as actually changing things, look how long it took to change out MD5. People started warning it was insecure in 1996, it was totally broken in 2005, and in 2012 the Flame malware used it to hack computers.
12 kilobytes is nothing when accessing a web page that's 2MB.
One possible improvement: only negotiate one key per server. When opening a new connection to a server that was very recently accessed, just keep using the existing security channel. In other words, mandate HTTP 2.
No it’s still a lot. The 2 MB can be streamed. In HTTP 3 it can be streamed in parallel. But none of those requests can start before the connection integrity is verified, otherwise you leak the intent of the request to a MITM agent. So you have head of line blocking on that 12k. And on mobile networking that comes with a lot of packet loss.
If server certificates worked more like PGP you could probably avoid that, but that’s not how TLS works.
If the web page is 2MB because it's loading ads, fonts, JavaScript libraries, images, cookie banners, trackers, etc each from a different domain or CDN... the problem is much bigger. This is 14k per connection...
> Large-scale quantum computers are capable of breaking all of the common forms of asymmetric cryptography used on the Internet today.
Is that actually true?
I was under the impression that there exists quite a large class of trapdoor functions (functions that are one-to-one maps, cheap to compute but whose inverse is crazy expensive to compute without some sort of oracle-like information) that could not be attacked by quantum computers?
Are there no asymmetric algorithms that are impervious to QC in use today on the internet?
Or are they specifically talking about the fact that RSA-style algos are vulnerable?
Symmetric algorithms are largely safe-ish; the best known attack is Grover's algorithm, which reduces the effective bits by half (runs in O(sqrt(N)) evaluations). Shor's algorithm is a polynomial-time solution to the discrete logarithm problem, which has traditionally been the basis for asymmetric cryptography.
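In numbers, under the usual assumption that Grover's quadratic speedup is the best quantum attack on a good block cipher:

```python
def post_quantum_security_bits(key_bits: int) -> int:
    """Grover's algorithm needs ~sqrt(2**n) evaluations, so n key bits give
    about n/2 bits of security against a quantum attacker."""
    return key_bits // 2

for n in (128, 256):
    print(f"AES-{n}: ~2**{post_quantum_security_bits(n)} quantum evaluations to find the key")
# This is why the common advice for symmetric crypto is simply: move to AES-256.
```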
> Shor's algorithm is a polynomial-time solution to the discrete logarithm problem
Yes, exactly, and the discrete logarithm problem is but a narrow sliver of a much larger class of trapdoor functions, most of which don't have the equivalent of Shor's algorithm to be attacked with.
A trapdoor function isn't the same as asymmetric cryptography. Trapdoor functions can be used to build asymmetric cryptography, i.e. public-key encryption and signing. In general, hash-based asymmetric cryptography can do it with any hash you consider strong enough, such as SHA-256.
Unfortunately the keys and signatures tend to be large, which is the problem the article is talking about. That's why they are not in common use.
For example, although SHA-256 is 256 bits, which is considered small, the keys and signatures of asymmetric cryptography built with SHA-256 are very much larger.
Making the keys and signatures smaller, yet secure and fast enough, is currently state of the art research.
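A minimal Lamport one-time signature over SHA-256 makes the size problem concrete: keys are 2×256 hash values each (16 KiB of public key), and a signature reveals 256 of the 32-byte secrets, about 8 KiB. This is a textbook sketch, not a production scheme (real hash-based schemes like SPHINCS+ add many layers on top):

```python
import hashlib
import os

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def keygen():
    # 256 message-hash bits, two 32-byte secrets per bit.
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def sign(sk, msg: bytes):
    bits = int.from_bytes(H(msg), "big")
    # Reveal one secret per bit of the message hash; the key must never be reused.
    return [sk[i][(bits >> (255 - i)) & 1] for i in range(256)]

def verify(pk, msg: bytes, sig) -> bool:
    bits = int.from_bytes(H(msg), "big")
    return all(H(sig[i]) == pk[i][(bits >> (255 - i)) & 1] for i in range(256))

sk, pk = keygen()
sig = sign(sk, b"hello")
print(verify(pk, b"hello", sig))  # True
print(len(sig) * 32)              # 8192 bytes of signature
```

Security rests only on the hash being one-way, which is why these are considered quantum safe, and the sizes show why shrinking them is active research.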
I favor betting on proper orders of magnitude more rounds of reliable and proven constructions, optimized for minimum mem/CPU/GPU/FPGA- & ASIC-hard cost factors for a particular use case, over entirely new and unproven constructions (recall SIDH) from untrustworthy intermediaries like NIST (recall the P curves, Dual_EC_DRBG).
PS: We'll sure be in relative trouble if we find faster &| cheaper ways to do prime factorization or invert matrices.
The cryptosystems the article says are too big to use, Kyber and Dilithium, are both lattice-based.
A solution to this problem may be isogeny-based: SQISign is mentioned, though that is too slow. Or it may be something else: Unbalanced Oil and Vinegar, and MAYO, are multivariate rather than lattice-based.
I expected this to include a discussion on forward secrecy [1], but it doesn’t seem to. (I searched the page for “forward” but got no hit.)
A reasonable tradeoff at the current time may be to keep primary keys / certificates classical, and make session key exchange quantum resistant [2]. That would thwart any “harvest now, decrypt later” adversary. It wouldn’t protect against a man in the middle with a big quantum computer, but that’s not a very realistic threat for the foreseeable future.
Don’t know how much it will bring down the handshake size though.
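The hybrid construction described above is simple to state: run a classical and a post-quantum key exchange, then derive the session key from both shared secrets, so an attacker must break both. A sketch with stand-in secrets (no real X25519 or Kyber here, just the concatenate-then-KDF combiner used by the TLS hybrid key-exchange drafts):

```python
import hashlib
import hmac
import os

def hybrid_session_key(classical_ss: bytes, pq_ss: bytes) -> bytes:
    """Derive one session key from both shared secrets. Harvest-now-decrypt-later
    fails unless the recorded traffic's PQ secret can also be recovered."""
    return hmac.new(b"hybrid-kdf", classical_ss + pq_ss, hashlib.sha256).digest()

# Stand-ins for the outputs of X25519 and a PQ KEM such as Kyber.
classical_ss = os.urandom(32)
pq_ss = os.urandom(32)
key = hybrid_session_key(classical_ss, pq_ss)
print(len(key))  # 32
```

Only the ephemeral key exchange grows; the certificates and signatures stay classical, which keeps the handshake-size increase modest.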
It is a strange comparison indeed, especially since some generations of engineers might not even know what a floppy disk was, let alone imagine how 1% of its capacity translates to network protocols.
Quantum Key Distribution (QKD) requires special (expensive) hardware, so it really doesn’t solve the problem of key size.
Moreover, QKD still requires an authenticated channel, so we’ll still need a quantum-resistant authentication scheme.
(All in all, QKD does not have a singular use case)
This is the reason for my sarcastic response above. Attempts at making compact PQC are being broken left and right. If anything, the line of battle is moving towards larger, more costly schemes at the moment. Complaining about proof size or validation speed right now is like pissing into the wind. You’re lucky if your PQC finalist scheme is even secure against a 2010 Mac mini in a year or two. Complaining that its proofs don’t fit into an MTU-sized packet? That’s rich.
Some, yes. Lamport signatures are almost certainly quantum resistant. But the small, compact schemes that are getting attention and competing in the NIST selection? Probably not.
Not with quantum computers, which is the entire argument for moving to post-quantum crypto.
If we can manage to solve the scaling problem and can have, say, 2000 qubits of useful computation to break ES256, then we "only" need to scale to 3000 qubits to break ES384. Which is a lot less daunting than it seems, since we'll all but certainly need breakthroughs in error correction or coherence times to reach that threshold of useful applications of Shor's algorithm.
Quantum computers are like fusion in the 1940's, "it's only going to take 20 more years".
Using arguments in line with klebb, if quantum is linearly difficult, and each qubit takes 1 year, with your thought experiment's numbers that's 1000 years between breaking 256 and 384, let alone 512.
Early on, a decade ago, I was much more accepting of the quantum camp's arguments. But now in retrospect they've oversold, overpromised, and under delivered.
These hypothetical breakthroughs may not come to fruition.
I don't necessarily think that this is coming any time soon, but factoring a 5-bit integer is fundamentally different from factoring a 200-bit integer, which is why I said we probably need a breakthrough in quantum error correction or other methods such that we can actually entangle hundreds of qubits for long enough to perform meaningful computation.
I'm not convinced that we've made meaningful progress on this front in the past 20+ years since IBM first used Shor's algorithm to factor 15 back in 2001. (In 2012, they factored 21. In 2019, they failed to factor 35 due to accumulation of errors.)
If you're willing to accept the premise that quantum computers will exist that can break RSA-4096, then those same quantum computers can break elliptic curve crypto based on P256 or P384 or P521.
If you're not willing to accept that premise, then what's the point of doubling your key length? Wasting CPU cycles?
I actually talked about this with Gavin (who was running the project back then) in 2013. It wasn't a priority, and it almost certainly never will be.
Vitalik, on the other hand, has always made quantum resistance a priority, and the account abstraction model and pluggable signature schemes are just now starting to take off. Basically, you can deploy a smart contract to hold your assets, and part of the smart contract is that moving assets out of the contract requires signing a message with something other than the typical ECDSA signatures. Some wallets today even let you send funds with passkey signatures, which use the keys sitting in hardware on modern phones.
In Bitcoin, public keys are hashed before being publicly exposed; in the optimal case an ECDSA public key is visible only for the few minutes between the broadcast of a transaction and its confirmation.
It is not an incompatible change to add new signature methods. A conservative implementation would have the public key commit to both an ECDSA key and a quantum-safe key, and use the ECDSA signature until such a time as it is no longer safe. This would result in no gain in signature size today, but allows for instant in-place upgrades in the future with no additional transactions required.
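The commit-to-both idea can be sketched as follows (key formats and hash choice are simplifications for illustration, not Bitcoin's actual address rules):

```python
import hashlib
import os

def address(ecdsa_pk: bytes, pq_pk: bytes) -> bytes:
    """Commit to both keys up front. The address is no bigger than hashing a
    single key, and the large PQ key is only revealed when finally needed."""
    return hashlib.sha256(ecdsa_pk + pq_pk).digest()

# Stand-in keys: a 33-byte compressed ECDSA point and a (much larger) PQ key.
ecdsa_pk = os.urandom(33)
pq_pk = os.urandom(1312)

addr = address(ecdsa_pk, pq_pk)
# Today: spend by revealing both keys plus an ECDSA signature; verifiers check
# the commitment and the ECDSA signature. Post-quantum day: same reveal, but
# verifiers demand the PQ signature instead. No extra on-chain bytes until then.
print(len(addr))  # 32
```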
We will just need to use quantum to beat the quantum.
QKD (quantum key distribution) is much more mature compared with quantum computers, which are still far from cracking crypto algorithms from a practical standpoint.
What is the problem that QKD is solving? On the one hand you need a totally separate point-to-point network for the quantum connection. On the other hand you get relatively short symmetric keys on both ends. I’ve yet to see a proposal that wouldn’t be better served by transferring the key material via DVD/USB stick/whatever and an armed guard. I’m certain I’m missing something rather obvious, but I don’t see what.
I do not think you are missing anything that would change your cost-benefit analysis, but here are two things that might be of interest:
- A QKD link would be much lower latency than transmitting a physical token over an authenticated channel (same type of advantage as with asymmetric key encryption, but without the drawback of relying on assumptions about computational complexity)
- It does not need to be point-to-point if you have a network of quantum memories/repeaters (which are probably much easier to build than quantum computers).
Thanks for the reply. I’ll give you latency, but that’s almost never a problem in the first place; you need to (re)authenticate anyhow. I don’t think your second point holds, though. Even if we assume memories/repeaters exist (iiuc this should contradict the no-cloning theorem), you’d need to trust them not to listen in, so you’d be back to square one with electronic key distribution schemes?
There might be an argument that one is unable to secure the two sets of key material (at least on one end and at least long term) and the destination is hard to reach (e.g. James Web or so). But at that point I’d also not trust that organization to implement their end of QKD correctly..
> Even if we assume memories/repeaters to exist (iiuc this should contradict the no-cloning-theorem) you’d need to trust them not to listen in
The whole point of having repeaters is that it's impossible for them to listen in. For the same reason why it's impossible to just "listen in" on a fiber transmitting the QKD quantum signals. Repeaters don't contradict non-cloning.
The utility of a cryptosystem is the extent to which it force-multiplies compute in the defensive direction. Crypto that depends on equally computationally leveraged parties is broken crypto.
But how far away are we from hardware for stable quantum key generation/storage that would fit on a tabletop, much less a home laptop or smartphone? Almost certain that consumer devices will stay in classical computing and use PQE.
QKD is not battle-tested. Like RSA, a naive textbook implementation would be worthless: messy physical implementations will leak information through side-channels.
If quantum cryptography is the solution, it won't arrive immediately, or for free.