> how about promoting a transparent decentralized system for certificate signing
Honest question: with a decentralized system for certificate signing, what would be the trust root?
The current system has the browser makers as the root of trust; this trust is delegated to a set of certificate authorities through the list of root certificates which comes with the browser; these certificate authorities then delegate to their intermediates, which finally certify a server as trusted for a fully-qualified domain name.
Without a root of trust, anyone could say "I'm example.gov, this is my certificate", and present "proof" of that. A trust root is necessary to prevent this.
So far, the only working proposals I've seen for decentralized trust (which don't do away with the human-readability side of Zooko's triangle) are based on distributed proof-of-work systems like Bitcoin's, where the trust root is the distributed "chain". Has anyone ever tried to apply a system like that to certificate signing for TLS?
Achieving real adoption of crypto has historically been made a lot harder than it should be, because too many problems are being solved at the same time.
Separate the problems! It is much easier to find realistic solutions when the requirements are narrower. The remaining needs can be solved later on. Once some usable infrastructure has been established, it might be possible to leverage that infrastructure to add back in some of the missing features.
For HTTPS, a good start would be PHK's suggestion of simply auto-generating self-signed certs in apache by default, as a replacement for plaintext. Authenticating those certs can happen later.
After keys are everywhere, a potential solution might be to allow both PKI authorities and some sort of web-of-trust (or other methods? blockchain? something new?), and to expose the source of trust to the user in a way they can manage.
There is no one-size-fits-all solution to the trust problem, so let the user decide because they know what their requirements are. If I'm browsing to some bank, a well-known PKI root might be a good trust source. If I'm chatting on some local forum, a web-of-trust auth might be better (it's a local forum, so fingerprints can be exchanged manually, friend-to-friend).
There are middle grounds that would still be better than a plaintext internet. Cert pinning, even of self-signed certs, would be better.
1. First time to https://example.com: I get a prompt and a UI element telling me it's self-signed. I accept the risk ("This may not be who you think it is!" - but it probably is).
2. 2nd through nth time to https://example.com: the UI element tells me it's self-signed, but the same cert as before. Whether it's the NSA or the site, it's the same party at the other end.
3. Next: does the cert change to a PKI-trusted one? Then great! I get a UI element (no prompt) showing the site is trusted. Does the site get a new self-signed cert? Back to the first step.
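This flow is trust-on-first-use ("TOFU") pinning. A minimal sketch of the state a browser would keep, using a plain dict as a stand-in for persistent storage and raw bytes as stand-ins for real DER certificates:

```python
import hashlib

class PinStore:
    """Trust-on-first-use store: remembers the first certificate
    fingerprint seen for each host and compares later visits against it."""

    def __init__(self):
        self.pins = {}  # host -> sha256 fingerprint of the cert

    def check(self, host, cert_der):
        fp = hashlib.sha256(cert_der).hexdigest()
        if host not in self.pins:
            self.pins[host] = fp
            return "first-visit"      # prompt the user, then remember
        if self.pins[host] == fp:
            return "same-as-before"   # same party as every prior visit
        return "changed"              # new cert, or possibly a MITM

store = PinStore()
print(store.check("example.com", b"cert-A"))  # first-visit
print(store.check("example.com", b"cert-A"))  # same-as-before
print(store.check("example.com", b"cert-B"))  # changed
```

A real implementation would also need a story for legitimate cert rotation, which is exactly the weak point discussed further down.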
I believe this, and the parent notion of default cert generation by apache installs, are better than no SSL. It's not as good as fully verifiable auth.
---
And son of a gun, I know this isn't an original idea, but I can't believe it took another post to remind me that this is exactly how SSH works. Sure, you can get the server key from your DO host and transfer it to your client, but how many people do that? They accept the fingerprint they see the first time, assume it's good, and probably raise an eyebrow if it changes. Don't like it? OK, let's go back to telnet.
If state-level actors wanted to MITM all SSH connections to make you accept the wrong key, they could. But it has a high chance of being caught, because you'll generally pay attention when the key changes. Plus you can reasonably verify the key out of band. (And one could imagine an extension adding a bit of CA-style key verification, along the lines of "was this key generated by my account with my VPS provider?", allowing you to manually verify one key and then be set for the rest.)
Whereas if you do this on HTTP: (a) you're constantly having a lot more "first time" connections, and (b) you've got no real way to know when the key changed legitimately. Users would quickly grow accustomed to key-change warnings and ignore them. Or you'd see banners on sites like "ignore the key warning; I reinstalled my blog and the key changed". Attacks could even inject such a banner.
If you want to manually verify every cert, you can already do that today: just go and add certs to your browser!
Unauthed HTTPS-by-default just adds complexity and a false sense of security and isn't worth pushing out on the public.
I think the problem is that HTTPS carries the connotation of security; renaming this proposal would make it obvious to users what the expectations are.
Let's change the name to something like HTTPE: E for encrypted, versus S for secure.
Let's change the behavior of httpe to automatically accept the first key it sees for a domain.
Browsers would have the option of uploading their lists of domains and keys to their maker (Mozilla, Google, etc.), which could then collate them and push updates back out to browsers.
This data could be used to spot where MITM attacks are taking place.
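One way such collation might work (the report format here is invented for illustration): flag any domain for which different users reported different keys in the same window. A key rotation would also trip this, so it's a signal for investigation, not proof of an attack.

```python
from collections import defaultdict

def find_suspect_domains(reports, threshold=1):
    """reports: iterable of (domain, key_fingerprint) pairs uploaded by
    browsers. A domain observed with more than `threshold` distinct keys
    is flagged; it may just have rotated its key, so this is a signal,
    not proof of a MITM."""
    seen = defaultdict(set)
    for domain, fp in reports:
        seen[domain].add(fp)
    return sorted(d for d, fps in seen.items() if len(fps) > threshold)

reports = [
    ("example.com", "aa11"), ("example.com", "aa11"),
    ("example.com", "ff99"),            # one user saw a different key
    ("forum.local", "bb22"),
]
print(find_suspect_domains(reports))  # ['example.com']
```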
This is the first time I've heard of HTTPE and I love it! However, I'd like to sit on it for a bit. Let's Encrypt should come out later this year; I want to give it a chance.
Encryption without authentication addresses a majority of the threat vectors that would allow people to look at your private communication. (Authentication here means knowing that key K belongs to some organization, not authenticated encryption.) If, by default, everything were encrypted, the internet would be a much better place. Once encryption is in place, active monitoring becomes risky for the attacker: even if only a small number of people actually verify that the remote party is who they say they are, a bulk MITM will still be detected.
Is this meant to be a general statement or just a response to some comment?
One thing I do not like about the popular "strong" encryption solutions is that they are tied to relatively "weak" authentication solutions. Instead of two programs that each do one thing, we are instructed to use one program that does two things.
I would prefer that encryption and authentication were viewed as distinct programs. If desired, they can be used together. Sometimes we may not wish to rely on the hope of an "encrypted channel"; instead we might just want to send an encrypted blob over an untrusted channel (i.e. the internet).
Obviously it makes sense to send your encrypted blob to the correct destination, but that does not mean you _must_ use encryption to verify the destination is the correct one; it is an option, but not the only one.
For example, it is possible to do the authentication part via some old-fashioned method that does not require the internet.
Sure, but at least it means simple optical splitters can no longer read the contents of your web traffic. That's arguably not worth a whole lot, and may trick people into feeling safer when they shouldn't. But if nothing else changed (no browser indications, etc), wouldn't using DH on all HTTP be strictly better than nothing? Theoretically. In practice, I suppose entities just replace their splitters with a retransmitter and we're back to zero.
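The unauthenticated key agreement being discussed is just classic Diffie-Hellman. A toy sketch with a deliberately tiny prime (for illustration only; real deployments use 2048-bit+ groups such as those in RFC 3526):

```python
import secrets

p = 0xFFFFFFFB  # 2^32 - 5, a small prime; NOT secure, illustration only
g = 5

a = secrets.randbelow(p - 2) + 2      # client's private value
b = secrets.randbelow(p - 2) + 2      # server's private value

A = pow(g, a, p)                      # sent in the clear
B = pow(g, b, p)                      # sent in the clear

# Both ends derive the same secret. A passive wiretap that sees only
# p, g, A, B cannot recover it (that's the discrete-log problem),
# which is why a simple optical splitter gets nothing.
assert pow(B, a, p) == pow(A, b, p)
```

This is exactly the "theoretically better than nothing" case: it defeats passive taps, and nothing else.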
I have a secret. I want to share this secret with only my best friend. I do this by telling the secret to everybody I meet, whether they look like my friend or not. Eventually I meet my friend, and tell them too.
This is confidentiality without authenticity. It is an incoherent idea.
It's incoherent at a high level, and it stays incoherent as you delve deeper into the theory. For instance, systems that lack authentication tend to lose confidentiality to error oracles.
Without authentication your connection is susceptible to undetectable man-in-the-middle attacks that DHE does nothing to prevent. That they're separable is superficially true, but not interesting, as encryption alone doesn't stop people from reading your traffic, which is the whole point.
This argument comes up so regularly, one might speculate that some people are trying to keep the internet in plaintext[1].
To reiterate: encryption here is a replacement for plaintext, which is the only thing it should be compared to. Of course you can MITM it, but that's not something that is easily done in bulk.
Simple encryption raises the cost of an attack from "trivial wiretaps, DPI optional" to the time, money, and effort required to do a targeted MitM attack. Additionally, while it is generally impossible to detect wiretaps, MitM can leak information that betrays the presence of an attack.
Remember, this isn't intended to stop all types of attacks. It is simply a very easy-to-implement feature that lets you replace plaintext with something resistant (not proof) against eavesdropping in general, and proof against some types of bulk surveillance.
Note: I haven't said anything about presenting this type of non-authenticated communication to the user as "secure".
It doesn't raise the bar high enough to make the people who are currently snarfing internet traffic wholesale bat an eye.
It doesn't matter how secure the phone line is when you have no idea who you're actually talking to. Especially when there are people with money, means and access to make sure that you're always talking to them.
To follow up on what apendleton has said - I have been involved in implementing standard protocols (e.g. IKEv2) that involve DHE to set up an encrypted connection. The first step in all of these is ALWAYS to verify that the Diffie-Hellman value you got from the other side is actually from who you want to talk to, otherwise it is trivial to run a MITM.
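The attack is easy to sketch: without verifying where the DH value came from, an in-path attacker simply runs two independent exchanges, one with each victim (toy group, same caveats as any illustration):

```python
import secrets

p, g = 0xFFFFFFFB, 5   # toy group; real protocols use 2048-bit+ groups

a = secrets.randbelow(p - 2) + 2   # client
b = secrets.randbelow(p - 2) + 2   # server
m = secrets.randbelow(p - 2) + 2   # attacker in the middle

A, B, M = pow(g, a, p), pow(g, b, p), pow(g, m, p)

# The attacker intercepts A and B and substitutes M in both directions.
client_secret = pow(M, a, p)   # client thinks this is shared with server
server_secret = pow(M, b, p)   # server thinks this is shared with client

# The attacker holds both secrets and can decrypt, read, and
# re-encrypt everything in transit, invisibly to either endpoint.
assert client_secret == pow(A, m, p)
assert server_secret == pow(B, m, p)
```

This is why every serious protocol binds the DH values to an identity (certificate, pre-shared key, pinned fingerprint) before using them.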
Right. Because active MITM attacks -- at least if limited to cases where the attacker doesn't know how / if the connection is authenticated -- carry zero risk of alerting those being attacked.
An untargeted attacker cannot know that there is no authentication. Dragnetting connections where they don't recognize any authentication therefore risks detection.
That risk to the attacker is not present when observing plaintext connections.
I'm not safe from muggers because I have eyes in the back of my head to see them trying to sneak up on me. I'm (largely) safe because someone else is likely to see them (or catch them on camera) and get them caught.
One of the ways to detect MitM is authentication. It is a particularly good method, which I recommend whenever possible, but it is not the only method.
Suspicious changes in the environment might be another, as would detecting data that leak past the middleman. Key pinning would be an example of a change in the environment, unexpected changes in important network topology or routing could be another. An example of a leaking middleman might be detection of the real (non-poisoned) "duplicate" packet in a DNS-poisoning packet race.
These methods are nowhere near as good as proper authentication, of course. Reliability of detection is probably very low. The point is that it is better than the case of sending plaintext that anybody can trivially wiretap with zero chance of detection[1].
As always, it is important to define your threat model. If you are defending against any kind of targeted attack, then yes, authentication is a firm requirement. If your threat model is only concerned with avoiding the trivial surveillance that can be done in bulk, anything that forces the opponent to use a more complicated ("expensive") MitM attack is a success.
[1] Modulo any still-very-hypothetical quantum communication methods. We can reevaluate our options if those technologies ever work well enough for common use.
> Without a root of trust, anyone could say "I'm example.gov, this is my certificate", and present "proof" of that.
Most SSL certs in the wild are legitimate and trust has already been established. So if you hit FooCo's corporate website and get one certificate, and some other guy hits the same website and they get another, it's likely something fishy is happening. Replace this model of two people with a few million, and you have a pretty decent verification system happening.
Really, what we have now isn't okay. We're training users to click past SSL warnings which, 99.999% of the time, are due to misconfiguration or BS reasons (e.g. an expired cert).
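The multi-observer idea is roughly what the Perspectives/Convergence notary model does: accept the certificate you saw only if enough other vantage points saw the same one. A minimal sketch, with invented fingerprints and a hypothetical quorum parameter:

```python
def verify_by_consensus(my_fp, notary_fps, quorum=0.75):
    """Accept the fingerprint we saw only if at least `quorum` of the
    notaries (other vantage points) report the same one. A MITM near
    one client then has to fool most notaries too, not just that client."""
    if not notary_fps:
        return False
    agree = sum(1 for fp in notary_fps if fp == my_fp)
    return agree / len(notary_fps) >= quorum

notaries = ["aa11", "aa11", "aa11", "aa11"]
print(verify_by_consensus("aa11", notaries))  # True
print(verify_by_consensus("ff99", notaries))  # False: only we saw ff99
```

The hard parts, as the replies note, are trusting the notaries themselves and handling legitimate key rotation.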
> So if you hit FooCo's corporate website and get one certificate, and some other guy hits the same website and they get another, it's likely something fishy is happening. Replace this model of two people with a few million, and you have a pretty decent verification system happening.
In this model, the trust root is the verification system which compares your "hit" with other people's "hits". If an attacker can pretend to be the verification system and tell you "everything's fine", the system won't work. Also, it's centralized: the verification system itself becomes the central component.
What's to stop an attacker from poisoning such a system: pretend to be a thousand different people all saying "Yep, the cert with signature 0xBADBADBAD was what I saw"? How does someone rotate certs without breaking all their existing clients?
Convergence and DNSChain are two interesting proposals to replace the CA system.
IMO, it's more important to emphasize the idea of secure origins, and HTTPS hits that note. TLS could be swapped out, the CA system could be changed, but what matters is the expectation that connections across the web are secure by default.
Agreed. It's a shame that I can't just publish a public key as part of a DNS entry for a domain so that, as long as the DNS chain is secure (DNSSEC), that key can be trusted.
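For what it's worth, this is essentially what DANE (RFC 6698) standardizes: a TLSA record in a DNSSEC-signed zone can carry, e.g., the SHA-256 of the server's public key, which the client matches against what the server presents. A sketch of just the matching step, with made-up record contents (the DNSSEC validation that makes the record trustworthy is assumed to have happened already):

```python
import hashlib

def tlsa_matches(record_hash_hex, presented_spki_der):
    """Compare a DANE-style record (hex SHA-256 of the public key bytes,
    as in TLSA usage '3 1 1') against the key the server presented."""
    return hashlib.sha256(presented_spki_der).hexdigest() == record_hash_hex

spki = b"hypothetical-DER-encoded-public-key"
record = hashlib.sha256(spki).hexdigest()   # what the domain owner publishes

print(tlsa_matches(record, spki))                      # True
print(tlsa_matches(record, b"attacker-supplied-key"))  # False
```

Of course, this just moves the trust root from the CA list to the DNSSEC root and registrars, which some consider an improvement and others do not.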