
Ah yes, enforcing the law, which, “in its majestic equality, forbids rich and poor alike to sleep under bridges, to beg in the streets, and to steal their bread.”


So you think there is no difference in guilt between being a war criminal's arms dealer and being a war criminal's grocer?


ICE isn't a war criminal. They're a law enforcement agency.


The similarity is in the fact that, like war criminals, they are committing acts that many would classify as crimes against humanity.


The hype and angst that some people are raising over fake news articles are reaching a fever pitch. I don't think I've ever seen such a dichotomy between what the left and the right believe to be true.


I might be really uneducated here - I'm not American. All I know about ICE is that they're immigration and customs enforcement - they're basically the US's border force. The quality of the conditions they keep migrants in is poor, and has been for years.

OK. Why is that worthy of a boycott? Your democratically elected governments create the laws, and determine the funding for things like beds and toys etc. Boycotts in some cases may make those conditions worse instead of better.

There will always need to be a border force. Wouldn't it be better to focus on improving conditions, increasing funding, and stopping people from crossing the border in the first place?


ICE isn't the border force. All border security operations, stopping people from crossing and customs inspections and such, are done by a different agency called CBP.


OK. Why not change the law, if you want to make undocumented migration legal?


Most people trying to boycott ICE don't think undocumented migration should be legal. They just don't think ICE is a good agency. (Again, ICE is not the only agency responsible for undocumented migration, so undocumented migration isn't automatically legal if they can't do their job.)


OK, but we're getting close to circular logic here. If enforcing border laws isn't what makes ICE bad, then why is ICE bad?


Indeed, Customs and Border Protection (CBP) are the ones on the border. ICE are the folks who find and deport people already inside the US.

Under the last administration ICE was only targeting folks who had committed serious crimes, but now they're targeting all undocumented folks. A lot of our economy actually depends on these people, as they tend to be the ones picking vegetables and working in factory farms, preparing livestock for sale - nasty jobs most people don't want. Trump's golf courses and hotels have also knowingly employed undocumented folks, but enforcement rarely touches the people doing the employing (which is actually illegal - being undocumented is not).

Oh, and we're a country of immigrants. (Except the native folk.)

See https://www.nytimes.com/2018/07/03/us/politics/fact-check-ic... for a bit more detail.


It's a very interesting gradient that you bring up.

Most of us probably have no issue with the baker who sold Hitler his daily loaf of bread.

But many of us do have problems with Hugo Boss for designing Nazi uniforms, even though the design work would have been before most of the Nazi war crimes had occurred.

Does not resisting to the fullest of your ability constitute enabling evil?

Would you take Pablo Escobar's donation to build a children's orphanage?

Should gun store owners share the blame when a gun purchased in their business is used in a mass shooting? If you say no, what if the gun is used in a mass shooting within 30 minutes of the sale, the shooter came across as distressed, and the gun store owner was worried enough to call in a warning to the authorities?


A gradient is a gradient; mapping its many values down to just two - blame/no blame - will be problematic. In your gun store owner example, the blame itself has a gradient.


As for the donation, who is going to disagree with the idea of getting money out of bad hands and into good ones? What do you want to do otherwise, burn it? And that's got nothing to do with removing the Escobars of this world.


Plenty of people have issues with taking money from bad people.

Why did Bernie Sanders have to return Martin Shkreli's donation? Even if the donation doesn't buy any influence or soft power, it allows the "bad" actor to clean up their reputation.


> Why did Bernie Sanders have to return Martin Shkreli's donation?

From a political perspective there is also the reputation cost to consider. Even if you're morally fine with taking "bad" people's money, having to answer "why is Shkreli funding your campaign" all the time has a pretty big political cost, even if you have a perfectly legitimate answer.


>So you think there is no difference in guilt between being a war criminal’s arm dealer and being a war criminal’s grocer?

No, can you explain it? It sounds like a way to justify to yourself that "those people are bad", but you're "just doing your job". Are you principled about who you do business with, or not?


Hi Bron. I'm a customer of both Fastmail and GSuite, and I have enjoyed your service for a few years now. I still use Fastmail for some things, like Sieve, and will very much continue paying just for the ongoing development of open-standard email like JMAP. But there are definitely a few things I haven't been able to shake since learning about them - things that pertain very much to security mindset - and they keep me from moving my primary email onto Fastmail.

Security paradigms have been steadily moving beyond a hard-boundary, soft-center model to a defense-in-depth, distrust-your-own-services model. I was alarmed to learn last year, for example, that you use OpenVPN in static-key mode (--secret, a fixed pre-shared symmetric key with no forward secrecy) rather than TLS mode (ephemeral key exchange, optionally hardened with --tls-auth) for the VPN between your NYI and AMS datacenters. https://blog.fastmail.com/2016/12/19/secure-datacentre-inter...

Presumably, running data links like this means you would have to have perfect trust in your long-term key management and rotation. Is that something you plan on improving in the future?

Similarly -- I stumbled on this entirely by accident after your blog post about moving datacenters -- your head of security ops & infrastructure tweeted "I will probably root my phone soon because Samsung's emoji set is worse than not having convenient OTA updates" https://twitter.com/robn/status/919194089920311296

I don't want to conflate anything -- a tweet on an engineer's own time about their personal devices isn't by itself a security problem. But it does reflect on the security mindset. If you had a BYOD policy, and this phone did end up flashed to Lineage and running 3 patch levels behind (especially with Android's track record of RCE-via-media CVEs), this could definitely become a weakness in your entire infrastructure, and thereby for all of us as customers.

This is the type of thing I couldn't shake after learning about it. Of course, trust has to be placed somewhere. You have to be able to place trust in your ops and your infrastructure, but that's also a process, not a checkbox. People and devices can be trusted a little less within the overall security system, to provide redundant security. Could you clarify how your staff are trained about the human weak points, how security-as-a-lifestyle is expected of security and ops people, and how your security mindset incorporates defense in depth?


If an Android phone connecting to the company’s WiFi or the user’s email and whatnot is enough to compromise the infrastructure, then the company has bigger problems.

I've worked in companies with liberal BYOD policies for portable devices, but I've also tasted really restricted environments, and such environments are basically highly regulated security theater.

Users do stupid things of course and in corporations it’s worth it to restrict their devices, but restricting developers on what they can install and do on their own devices has a negative ROI and doesn’t go well. If you can’t trust a dev to manage his own phone, you can’t trust him to build your infrastructure either.

And yes, we make mistakes as we are only human, which is why a phone should not be enough to compromise that infrastructure anyway.

PS: your mention of that Twitter account is creepy.


Absolutely! Our wifi network in the office is treated like an untrusted network. All authentication is done directly from our work laptop or desktop machines and requires a second factor (TOTP, not SMS!)


> PS: your mention of that Twitter account is creepy.

With no context, I agree. But I'm not exactly stalking engineers here - there was literally a direct link to that Twitter account from the Fastmail updates mailing list that went out when customers were notified of the NYI datacenter move. Made me do a double take.


We don't consider looking at our staff's public Twitter accounts to be creepy, FYI. We mention that we're at FastMail, and we do indeed link to our own Twitter accounts occasionally.

Cheers.


Flippant comments on twitter definitely don't reflect security policies! That phone doesn't have production access for obvious reasons.

You're right that security is a process. We're always working to harden and segment our internal services, as is best practice these days.

Ongoing professional development and training is important for our security staff (indeed, all our staff, because everyone matters for security). The security landscape is always changing, and it's not something that's ever "solved" - it's a situation to stay on top of.


> I’m not willing to stick my neck out

> empathy gap

Maybe if you were more open to helping them with threats to their livelihood, dignity, and well-being, they’d be more open to helping you with yours?

I don’t know, just a thought.


@bascule addresses deterministic password managers as a category here:

https://tonyarcieri.com/4-fatal-flaws-in-deterministic-passw...

Certainly you've considered these points, and I'm interested in hearing your responses if you're using this scheme yourself and want others to use it.

I'm particularly interested in point 4: unlike traditional password managers like 1Password and LastPass, where you need both the master password and some form of data access, the leakage of my master password (say, to a shoulder-surfing security camera at any time) literally means every credential can be derived from scratch at any time.
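To make the concern concrete, here's a minimal sketch of how a generic deterministic scheme derives per-site passwords (my own illustration, not the parent's actual scheme; the KDF parameters and the site-naming convention are assumptions). Because there is no vault, anyone who learns the master password can re-run the derivation for every site:

  // Hypothetical deterministic derivation: master password + site name in, password out.
  // No stored state means nothing else is needed to regenerate every credential.
  const crypto = require('crypto');

  function sitePassword(masterPassword, siteName, length = 20) {
    // The "salt" is just the site name, so the output is fully determined by the inputs.
    const key = crypto.pbkdf2Sync(masterPassword, 'site:' + siteName, 100000, 32, 'sha256');
    const alphabet = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789';
    let out = '';
    for (let i = 0; i < length; i++) {
      out += alphabet[key[i] % alphabet.length]; // modulo bias ignored for brevity
    }
    return out;
  }

  console.log(sitePassword('correct horse battery staple', 'example.com'));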


I read it indeed. Before I respond, please let me say that this is just my opinion, and I'm totally happy if other people think differently. Also, the point that I wrote and would like to stress is that there are some passwords, the most critical ones (i.e. Google, Facebook, banks), that I never want to store and just remember. The reason is that I've found myself in situations where my devices aren't available and I have an urgent need to access these services.

For all the other passwords I honestly don't care; deterministic or vault, both have pros and cons, but in reality it doesn't really matter. First, I never experienced the same urgent need of access. Second, I could temporarily reset the password provided that I can access my Gmail.

On point 4 specifically I have two things to say.

> With a traditional encrypted password vault scheme, we need two things to obtain site-specific passwords: the ciphertext of the password vault, and the master password.

To me this is not a feature at all. I want exactly the opposite, meaning I don't want to depend on a file to retrieve my passwords, at least not the most critical ones. So, I guess this is the critical distinction. Are you ok with this dependency? Then definitely go for a vault. Are you against it? Then you can't use a vault.

Second, and to your point: it's worth noting that, because you need the vault file, you probably have it replicated in multiple places or at least accessible by multiple devices. To me, this makes the probability of getting access to your vault file higher than the probability of finding out the master password. Or, said another way, I wouldn't base the security of the system on the assumption that the attacker can access the master password but not the vault file.

To limit the attack surface you have to create multiple groups with different master passwords. The ones that you type more often are the ones more exposed, so you want to group sites with similar security risk and frequency of login (this is another thing that often seems overstated: I don't log out and back in to every site every day, I typically keep things logged in).

I hope this answers your question, happy to chat more.


This is so obvious that the first thing I would do is look to see if they've addressed it in some way, instead of assuming incompetence.

If you have gone through the process of being charitable-first, instead of dismissive-first, then you would notice that they have explicitly spent engineering hours on this exact problem by using an SRP-based session key exchange for mutual authentication (and additional session encryption, in addition to TLS). [1] [2]

It's not easy to engineer for both security and usability, so I especially appreciate it when someone spends the time to accomplish both.

[1] https://blog.agilebits.com/2015/11/11/how-1password-for-team... [2] https://1password.com/files/1Password%20for%20Teams%20White%...


Because they don't transmit your encryption password.

Authentication is not done by sending them your encryption password, but instead the derivation of an SRP static secret (https://en.wikipedia.org/wiki/Secure_Remote_Password_protoco...) from your password (PBKDF, XOR'd with HKDF of the entropy-boosting pepper that they call the "Secret Key"), and performing a session key exchange handshake, basically like a (non-ephemeral) Diffie Hellman. They then encrypt all future communications (inside of TLS) with the transient session key.
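As a rough sketch of just the derivation step (my own illustration of the pattern described in their white paper, not their code; the salt handling, HKDF info string, and iteration count here are assumptions, and crypto.hkdfSync needs a recent Node):

  // Sketch: combine the account password and the "Secret Key" pepper into the SRP secret x.
  // Only values derived from x (the verifier, session proofs) ever cross the network.
  const crypto = require('crypto');

  function deriveSrpSecret(password, secretKey, salt) {
    const fromPassword = crypto.pbkdf2Sync(password, salt, 100000, 32, 'sha256');
    const fromSecretKey = Buffer.from(
      crypto.hkdfSync('sha256', secretKey, salt, 'illustrative-info-string', 32)
    );
    const x = Buffer.alloc(32);
    for (let i = 0; i < 32; i++) x[i] = fromPassword[i] ^ fromSecretKey[i]; // XOR the two halves
    return x; // fed into the SRP handshake; never transmitted itself
  }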

This gets you three things in one swoop:

- Authentication of user

- Authentication of the server (if the remote server doesn't have the stored verifier corresponding to your derived SRP static secret, the exchange can't complete)

- An additional encrypted tunnel independent of TLS, so transport security isn't reliant solely on TLS (Cloudbleed, etc). (The contents being moved around are encrypted yet again)

And:

- User doesn't have to remember a separate password.

- The password and pepper never touch the network, only (non-reversible) session tokens do.

- Having access to traffic inside of TLS (corporate or malicious TLS endpoint interception, for example) still gets you nothing.

There are valid criticisms of 1Password, but you're literally criticizing them for something they've explicitly gone out of their way - and spent engineering hours on - to solve in a way that not many services have even bothered thinking about.


Thanks! I am so glad to see I was wrong on this!


That's pretty much correct, yeah. Due to exponentiation, length is almost everything in password security. Which means there's going to be a bunch of lengths at which brute force cracking is trivial, and then a very sharp rise in complexity, after which brute force cracking quickly becomes astronomical, and then absolutely impossible.

If you look at the current cracking benchmarks of GPUs (https://gist.github.com/epixoip/a83d38f412b4737e99bbef804a27...), there is an easily quantifiable difference between bcrypt and MD5: 21 bits. (https://www.wolframalpha.com/input/?i=log2(200*%5E9)-log2(10...)

That means under current GPU architecture, bcrypt is basically like "adding 3-4 characters (or 1.5 diceware words)" for free to your password. Can you basically just add 3-4 characters to your password? Sure, but not without user friction, and certainly you can't think that way as the developer of the system, because you're trying to give a small leg up to even the most vulnerable by salting and bcrypt/PBKDF2/Argon hashing.
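If you want to check that arithmetic yourself (the hash rates are the approximate 8x GTX 1080 figures from the linked gist):

  const md5Rate = 200e9;    // ~200 GH/s for MD5
  const bcryptRate = 100e3; // ~100 kH/s for bcrypt (cost 5)

  const bitsGained = Math.log2(md5Rate / bcryptRate); // ~20.9 bits, the "21 bits" above
  console.log(bitsGained / Math.log2(95));            // ~3.2 printable-ASCII characters
  console.log(bitsGained / Math.log2(7776));          // ~1.6 diceware words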

What about theoretical limits? Well, there is another way to approach this: Landauer's principle (https://en.wikipedia.org/wiki/Landauer%27s_principle), which considers the theoretical minimum energy of a bit flip of information - so this even covers future computing technologies. Even if you used up all available mass-energy in the entire sun, it is only theoretically possible to perform 2^225.2 operations (https://security.stackexchange.com/questions/6141/amount-of-...). 225 bits of entropy is roughly a 35-character (printable ASCII) password.
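Same back-of-the-envelope math for the Landauer bound:

  // ~2^225.2 bit flips from the sun's entire mass-energy, per the linked answer
  console.log(225.2 / Math.log2(95)); // ~34.3, i.e. roughly a 35-character printable-ASCII password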

(Note that you can't get there with MD5 - it has only a 128-bit hash space before preimage attacks, the best of which lowers it to about 123 bits.)

So the lesson is: use slow hashes to give some protection to the vulnerable and people whose password complexity is "on the edge". Use a password manager so that the rest of your passwords can be comfortably > 128 bits in complexity, without reuse. And then forget about passwords because after that, every other part of the security system becomes more important.


A fantastic overview - clear and informed. Thanks very much for this.


Wow. This is yet another example of the fatal combination of Rolling Your Own Crypto + Using OpenSSL Directly And Blowing Your Own Foot Off Because It Lets You.

  var cipher = crypto.createCipher('aes-256-ctr', key.toString('hex'))
Besides the completely fatal error of using derived and non-unique IVs (fatal as in: if you encrypt more than one item with it, it is about as good as plaintext, because XORing any two items encrypted with the same key+IV in CTR mode cancels the keystream and leaves the XOR of the plaintexts), isn't using hex encoding vastly constraining the possible complexity-per-byte of the key?
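To make the keystream-reuse point concrete, here's a toy demonstration (using createCipheriv with an explicitly reused IV, which is effectively what createCipher's password-derived key+IV amounts to here):

  // Two messages, same key, same IV, CTR mode: XORing the ciphertexts cancels the
  // keystream and yields the XOR of the plaintexts - no key required.
  const crypto = require('crypto');
  const key = crypto.randomBytes(32);
  const iv = Buffer.alloc(16); // reused IV, standing in for createCipher's derived IV

  const enc = (msg) => {
    const c = crypto.createCipheriv('aes-256-ctr', key, iv);
    return Buffer.concat([c.update(msg, 'utf8'), c.final()]);
  };

  const c1 = enc('attack at dawn!!');
  const c2 = enc('defend at dusk!!');
  const xored = Buffer.alloc(16);
  for (let i = 0; i < 16; i++) xored[i] = c1[i] ^ c2[i]; // equals plaintext1 XOR plaintext2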

A single hard-coded salt for key derivation:

  const key = crypto.pbkdf2Sync(auth, '0945jv209j252x5', 100000, 512, 'sha512');
Again, the salt is only lowercase alphanumeric. This makes this 120-bit salt really just a 77-bit salt. But since it's hard-coded and not randomly generated, it's a 0-bit salt.

Can everyone who is developing crypto apps Just Use NaCl/Libsodium?
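For contrast, a minimal sketch of what the symmetric part could look like with nothing but Node's own crypto module - a random salt per derivation, a random nonce per message, and an authenticated mode. (This illustrates the pattern, not a drop-in fix for this project; libsodium's secretbox would be the even simpler choice.)

  // Sketch only: random salt, random nonce, authenticated encryption (AES-256-GCM).
  const crypto = require('crypto');

  function encrypt(password, plaintext) {
    const salt = crypto.randomBytes(16);                  // fresh random salt, stored with the ciphertext
    const key = crypto.pbkdf2Sync(password, salt, 100000, 32, 'sha512');
    const nonce = crypto.randomBytes(12);                 // fresh random nonce for every encryption
    const cipher = crypto.createCipheriv('aes-256-gcm', key, nonce);
    const ciphertext = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
    return { salt, nonce, ciphertext, tag: cipher.getAuthTag() };
  }

  function decrypt(password, { salt, nonce, ciphertext, tag }) {
    const key = crypto.pbkdf2Sync(password, salt, 100000, 32, 'sha512');
    const decipher = crypto.createDecipheriv('aes-256-gcm', key, nonce);
    decipher.setAuthTag(tag);
    return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString('utf8');
  }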


As an infosec guy, I'm honestly getting really tired of this. Virtually every time someone submits some new security tool to Hacker News, the author has made trivial, catastrophic, and what should be completely avoidable security mistakes.

So for the hundredth time, if you're not a cryptographer or experienced security engineer, please stop releasing and promoting your crypto-related projects before they have been vetted by someone who is. If this is something you intend to release, ideally run the basic idea by someone qualified first. By not doing so, you are doing active harm. Someone's life and/or liberty may very well depend on the software you write, and when you fail them in this regard you are ethically and morally responsible when these things are taken from them.


Is there a good way to find people who are qualified to do such a review? This paper was written by a Ph.D. student and professor of Computer Science at a respected university. The professor teaches a crypto course on Udacity (https://www.udacity.com/course/applied-cryptography--cs387). If they don't meet the criteria for being cryptographers, I wonder how many people in the world do?


Not many. There's a Venn diagram to draw about academic cryptographers and practicing cryptographic engineers and people qualified to do cryptographic reviews and the overlap is less than you might think.


I'm going to push back on this a bit.

Thomas responded alongside this comment to talk about how academic cryptographers are not necessarily qualified to implement original crypto, and I largely agree with that; however, I don't actually think that's the issue here. Rather I would pin this on a lack of peer review.

I could be wrong, but I don't believe the author of this paper has had it published or even accepted in any journal or conference proceedings. Since eprints are posted with an endorsement rather than peer review, you can expect mistakes like this to happen often, even if the authors are ostensibly qualified. When you submit original research for publication you generally go back and forth a bit with adjustments as needed, and as long as there is nothing egregious you don't need to redo it all.

In this specific case, I believe the author fully understands the issue (or would, were it presented to them) and is fully capable of fixing it. A qualified peer review would (hopefully :) have caught this and other latent issues - an HN commenter did, after all.

We see this in the broader mathematics and computer science communities, and we especially see it in sub-disciplines like machine learning as well. It's absolutely true that academic cryptographers should not be assumed capable of rolling their own crypto a priori, but in my (educated) opinion I would certainly place far more weight on crypto developed by an academic cryptographer than a software engineer without any particular training.

My platonic ideal for someone who is capable of developing original crypto is something like an academic with a PhD in math or computer science (focusing on crypto), who can develop software very well and who joins an applied lab for crypto engineering and development (like NCC's) or a top cryptanalysis firm like Riscure. Failing that, I'd probably place the most weight on someone who had a lot of training in crypto engineering or practical cryptanalysis over an academic with no implementation experience.

(I apologize if any of this is patronizing, I don't know what your background or familiarity with the academic process is w/r/t peer review, etc).


> A qualified peer review would (hopefully :) have caught this and other latent issues - an HN commenter did, after all.

Would the qualified peer-review necessarily be reading the NodeJS code, or just checking the theoretical soundness of the paper? I'm not so certain about the former...


Almost certainly not the former, no. At best the code would be "supplementary material", which reviewers are not required to go over.


You absolutely need both skills.


What blows my mind is that most of the errors we see in these tools submitted to HN were covered in the 'Introduction to Cryptography' course I took while getting my CS degree.

People seriously need to stop rolling their own crypto.


5c from me:

1) Why was CTR mode chosen? I would probably go with something like GCM: privacy plus an integrity check.

2) The IV should ideally be regenerated on every encryption. It doesn't have to be secret, but it has to be random (securely random).


GCM has exactly the same problem with respect to nonces (GCM, like CTR, has a nonce, not an IV, but the terms are unfortunately used interchangeably).

The secrecy/predictability/uniqueness rules for IVs and nonces depend on the specific cipher mode you're using, so be careful about writing generic recommendations. Also, be very careful with the word "ideally", because if you get an IV or nonce wrong, chances are your problems are much worse than "not ideal".


The good news appears to be that this has prompted some upstream discussion.

https://github.com/nodejs/node/issues/13801


Even a conventional VPN is not enough. The Great Firewall of China (https://en.wikipedia.org/wiki/Great_Firewall) is a mix of DNS poisoning, deep packet inspection, and traffic and usage analysis based on real-time ML. It is very smart and adaptive, and will block most mainstream VPN and tunneling protocols, including IPsec, standard OpenVPN, SSH tunnels, and (of course) SOCKS and HTTP proxies.

Besides blocking entire sections of the net outright (like Google's address blocks) and poisoning controversial domains, even if it can't directly inspect the traffic due to good encryption (say, OpenVPN or IPsec), it will slowly degrade and eventually null-route your traffic over the course of minutes, depending on its judgement of the likelihood (based on packet structure and history) that your activity isn't "normal" usage.

Currently the only functional ways of getting around the GFW are VPN through stunnel (TCP OpenVPN traffic re-wrapped in TLS, thus pretending to be HTTPS traffic, and incurring triple-TCP performance penalties), similarly convoluted setups like Shadowsocks, obfsproxy, and other China-specific tools.

