
"you wouldn't be able to otherwise introspect them"

Can't implement it until you define its behavior. If you define its behavior you can emulate it (which, outside this discussion, is really useful). If you can emulate it, you can single step it, breakpoint it, dump any state of the system including memory, reboot it into "alternative" firmware...

Your only hope is playing games with timing. So here's a key, and it's only valid for one TCP RTT. Well, if they want to operate over satellite they must allow nearly a second, so move your cracking machine next door, emulate a long TCP path, and you've got nearly a second to work with. On the other hand, if instead of running over the internet you merely wanted to prove Bluetooth distances or Google Wallet NFC distances, suddenly the attack goes from something I can literally do at home to a major lab project.
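The timing trick above is essentially distance bounding: a key is honored only if the challenge-response round trip fits a time budget, and the speed of light turns that budget into a hard ceiling on how far away the responder can be. A minimal sketch (the function name and budgets are illustrative, not from any real protocol):

```python
# Distance bounding: a challenge-response is only honored if the reply
# arrives within a round-trip-time budget. Light travels ~300,000 km/s,
# so the RTT puts a hard ceiling on the responder's physical distance.

C_KM_PER_S = 299_792.458  # speed of light in vacuum, km/s

def max_distance_km(rtt_seconds: float) -> float:
    """Upper bound on responder distance implied by a measured RTT."""
    # One-way travel time is at most half the RTT; multiply by c.
    return (rtt_seconds / 2) * C_KM_PER_S

# A ~1 s satellite-grade budget leaves an attacker enormous slack:
print(round(max_distance_km(1.0)))      # on the order of 150,000 km
# A Bluetooth/NFC-grade budget of a few microseconds does not:
print(round(max_distance_km(4e-6), 2))  # well under a kilometer
```

This is why the NFC case jumps from "do it at home" to "major lab project": shaving the relay path down to microseconds requires purpose-built radio hardware, not just a nearby machine.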

Another thing that works is "prove you're the fastest supercomputer in the world by solving this CFD simulation in less than X seconds". Emulating that would take a computer much faster than the supposedly fastest computer. So this is pretty useful for authenticating the TOP500 supercomputer list, but worthless for consumer goods.



> Your only hope is playing games with timing.

This is inane. My question was about mathematically provable secure computation, not kludges that any old advanced alien civilization could bypass by sticking your agent-computer in a universe simulator. :)

Let's ignore the computers. You are a spy dispatched from Goodlandia to Evildonia. You want to meet with your contact and exchange signing keys. You can send a signal at any time to Goodlandia that will tell them to cut off all contact with you, because you believe you have been compromised. (A certificate revocation, basically.)

Your contact, thus, expects one of three types of messages from you:

1. a request for a signing key with an attached authentication proof;

2. a message, signed with a key, stating you have been compromised and to ignore all further messages sent using that key;

3. or a message, signed with a non-revoked key, containing useful communication.
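Message types 2 and 3 above are the mechanically easy part; type 1's "authentication proof" is the open question. A minimal sketch of the easy part, using HMAC as a stand-in for a real signature scheme (all names and fields here are invented for illustration):

```python
import hmac, hashlib

# Illustrative envelope for message types 2 and 3. HMAC stands in for
# a real signature; the hard part (the type-1 authentication proof)
# is deliberately absent.

def sign(key: bytes, kind: str, body: str) -> dict:
    payload = f"{kind}|{body}".encode()
    return {"kind": kind, "body": body,
            "mac": hmac.new(key, payload, hashlib.sha256).hexdigest()}

def verify(key: bytes, msg: dict, revoked: set) -> bool:
    payload = f"{msg['kind']}|{msg['body']}".encode()
    ok = hmac.compare_digest(
        msg["mac"], hmac.new(key, payload, hashlib.sha256).hexdigest())
    if not ok or key in revoked:
        return False
    if msg["kind"] == "revoke":
        revoked.add(key)   # type 2: ignore all further use of this key
    return True

key, revoked = b"agent-key", set()
assert verify(key, sign(key, "comm", "useful message"), revoked)    # type 3
assert verify(key, sign(key, "revoke", "compromised"), revoked)     # type 2
assert not verify(key, sign(key, "comm", "late message"), revoked)  # rejected
```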

Now, is there any possible kind of "authentication proof" that you could design, such that, from the proof, it can be derived that:

1. you have not yet been compromised;

2. you will know when you have been compromised;

3. and that, in the case of compromise, you will be allowed to send a revocation message before any non-trusted messages are sent?

You can assume anything you like about the laws of Evildonia to facilitate this--like that it is, say, single-threaded and cooperatively multitasking--but only if those restrictions can also carry over to the land of Neoevildonia, a version of Evildonia running inside an emulator. :)


It might be possible to exclude enough realistic current day threats to eventually end up with something that "works" but I don't think that's useful in any way.

Nonetheless, if you want to exclude computers, the human equivalent of "stick it in an emulator" is the old philosophical "brain in a vat" problem. That's well-traveled ground, and there is no proof you're not in a vat.

There is no way to prove you have not been compromised, because there is no way to prove that no theoretical advancement will ever occur in the field (or, short of advancement, an NSA declassification, etc.). So you're limited to one snapshot in time, at the very least.

You're asking for something that's been trivially broken innumerable times outside the math layer.

It's hard to say whether you're asking for steganography (which isn't really "math"), an actual math proof, or just a Wikipedia pointer to the Kerberos protocol, which is easily breakable but might eventually fit your requirements if you add enough constraints.


> It's hard to say whether you're asking for steganography (which isn't really "math"), an actual math proof, or just a Wikipedia pointer to the Kerberos protocol, which is easily breakable but might eventually fit your requirements if you add enough constraints.

None of those; I know the current state of the art in cryptography/authentication, and that it doesn't quite cover what I'm asking for. I'm basically just waiting for you to say that the specific kind of designed proof I asked for is impossible even in theory, so I can go and be sad that my vision for a distributed equivalent to SecondLife[1] will never happen.

My own notion would be that the Goodlandian agent would simply request that his contact come and look at the machine itself, outward-in, and verify to him that he's running on a real, trusted piece of hardware with no layers of emulation, at which point the contact gives him an initial seed for a private key he will use to communicate with from then on. The agent stores that verification on his TPM as a shifting nonce (think rolling codes in garage-door openers), so that whenever the TPM is shut down it immediately becomes invalid as far as the contact is concerned, and must be revalidated by the contact again coming and looking at the physical machine. All we have to guarantee after that is that any method of introspecting the TPM on a piece of currently-trusted hardware fries the keys. Which is, I think, a property TPMs already generally have?
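The shifting-nonce idea can be sketched as a rolling-code chain: each attestation is HMAC(seed, counter), and the verifier only ever moves its expected counter forward. A device that loses its counter state (say, the TPM was powered down) can never produce the next expected code and must be physically re-inspected. A toy sketch, with invented names and an assumed small acceptance window:

```python
import hmac, hashlib

# Rolling-code sketch: device and contact share a seed set at physical
# inspection. Code i = HMAC(seed, counter i). The verifier advances
# only forward, so old codes die permanently and a device that lost
# its counter state can never resynchronize on its own.

def code(seed: bytes, counter: int) -> str:
    return hmac.new(seed, counter.to_bytes(8, "big"),
                    hashlib.sha256).hexdigest()

class Verifier:
    def __init__(self, seed: bytes, window: int = 8):
        self.seed, self.next, self.window = seed, 0, window

    def check(self, presented: str) -> bool:
        # Accept any code in a small forward window (tolerates missed
        # messages), then advance past it so replays are dead forever.
        for i in range(self.next, self.next + self.window):
            if hmac.compare_digest(presented, code(self.seed, i)):
                self.next = i + 1
                return True
        return False

seed = b"set-at-physical-inspection"
v = Verifier(seed)
assert v.check(code(seed, 0))
assert not v.check(code(seed, 0))   # replayed/stale code is rejected
assert v.check(code(seed, 3))       # skipping ahead within the window is fine
```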

Besides being plain-ol' impractical [though not wholly so; it'd be fine for, say, inspecting and then safe-booting military hardware before each field-deployment], I'm sure there's also some theoretical problem even here that renders it all moot. I'm not a security expert. :)

---

[1] More details on that: picture a virtual world (technically, a MOO) to which any untrusted party can write and hot-deploy code to run inside an "AI agent"--a self-contained virtual-world object that gets a budget of CPU cycles to do whatever it likes, but runs within its own security sandbox. Also picture that people who are in the same "room" as each AI agent are running an instance of that agent on their own computers, and their combined simulation of the agent is the only "life" the agent gets; there is no "server-side canonical version" of the agent's calculations, because there are no servers (think Etherpad-style collaboration, or Bitcoin-style block-chain consensus.)

Problematically, AI agents could sometimes be things like API clients for out-of-virtual-world banks. Now how should they go about repudiating their own requests?


"Bitcoin-style block-chain consensus"

Majority rule, not consensus. Given a mere majority-rule protocol, I think your virtual world idea could work.


Eh, either way, it's the same problem. Imagine you're an agent for BigBank, thinking you're running on Alice's computer. If you authenticate yourself to BigBank, BigBank gives you a session key you can use to communicate securely with them--and then you will take messages from Alice and pass them on to BigBank.

But you could also be running, instead, on an emulator on Harry's computer--and Harry wants Alice's credit card info. So now Harry reaches in and steals the key BigBank gave you, then deploys a copy of you back into the mesh, hardcoded to use that session key. Alice then unwittingly uses Harry's version of you--and Harry MITMs her exchange.

In ordinary Internet transactions, this is avoided because Alice just keeps an encryption key (a pinned cert) for BigBank, and speaks to them directly. If you, as an agent, are passed a request for BigBank, it's one that's already been asymmetrically encrypted for BigBank's eyes only. And that works... if the bank is running outside of the mesh.
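The pinned-key defense works because the relaying agent only ever handles opaque ciphertext. A toy illustration using textbook RSA with tiny primes (deliberately insecure, no padding; the numbers are the classic p=61, q=53 example, and the function names are invented):

```python
# Toy textbook RSA (tiny primes, no padding -- illustration only, NOT
# secure) showing why pinning BigBank's public key defeats a malicious
# relay: Alice encrypts to the pinned key, the agent forwards opaque
# ciphertext, and only BigBank's private exponent can open it.

p, q = 61, 53                  # BigBank's secret primes (toy-sized)
n, e = p * q, 17               # (n, e) is the pinned public key
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)            # private exponent, known only to BigBank

def encrypt_for_bank(m: int) -> int:
    return pow(m, e, n)        # anyone (Alice) can do this with (n, e)

def bank_decrypt(c: int) -> int:
    return pow(c, d, n)        # only BigBank holds d

msg = 1234                     # plaintext must be < n in the toy scheme
c = encrypt_for_bank(msg)
assert c != msg                # the forwarding agent sees only this
assert bank_decrypt(c) == msg  # round-trips at the bank
```

The agent in the middle can drop or replay the ciphertext, but it can't read or alter Alice's request without detection, which is exactly the property that evaporates once "BigBank" is itself a service running inside the mesh.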

But if the bank is itself a distributed service provided by the mesh? Not so much. (I'm not sure how much of a limitation that is in practice, though, other than "sadly, we cannot run the entire internet inside the mesh.")


There is no way to prove you have not been compromised, as it's possible to be compromised without knowing it. E.g.: listening to the EM leakage as information is sent from one chip to another.
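EM leakage can't be demonstrated in code, but its close cousin, the timing side channel, can: a naive comparison bails out at the first mismatch, so how long it runs leaks how much of the secret a guess got right, all without the victim ever noticing. A sketch (the character count stands in for measurable running time; the secret is invented):

```python
import hmac

# Timing side-channel sketch: an early-exit comparison leaks the length
# of the correctly guessed prefix through its running time. We count
# characters examined as a deterministic stand-in for elapsed time.

def naive_equal(secret: str, guess: str) -> tuple[bool, int]:
    """Early-exit compare; returns (equal?, chars examined)."""
    examined = 0
    for s, g in zip(secret, guess):
        examined += 1
        if s != g:
            return False, examined
    return len(secret) == len(guess), examined

secret = "hunter2"
# "Time" grows with each correctly guessed prefix character, so an
# attacker can recover the secret one character at a time:
assert naive_equal(secret, "xxxxxxx")[1] == 1
assert naive_equal(secret, "huxxxxx")[1] == 3
assert naive_equal(secret, "huntexx")[1] == 6
# hmac.compare_digest compares in constant time, closing this channel:
assert not hmac.compare_digest(secret.encode(), b"huxxxxx")
```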



