
Unless I'm not understanding this correctly, every package manager is vulnerable to this attack (along with many others). I'm not sure why someone bothered to write this down and make an official "disclosure". Maybe someone more knowledgeable can explain?

I mean really the idea is just that if someone got somebody else's password, they could use it to trick other people into installing a program. Even email has this problem. So really the only thing NPM could be accused of here is not doing more to make publishing secure (like using two-factor authentication).



Similar problems exist in most package management systems. Registries that have a manual review process mitigate this danger, but there's still always a risk of malicious code getting into the wild.

Having said this, we'd like to make exploits such as those discussed in #319816 as difficult as possible. We're exploring support for new authentication strategies, such as two-factor authentication, SAML, and asymmetric-key-based authentication (some of these features are already available in our Enterprise product, but haven't made it to the public registry yet). npm's official response has more details on this subject:

http://blog.npmjs.org/post/141702881055/package-install-scri...


Unfortunately I don't think that many (if any) of the programming-language package repositories have manual review processes, or even automated checking for things like known malware...

Linux package managers are a different story of course.


Firefox has automated "malware checking" for extensions (the Mozilla AMO Validator), and it's been basically torn to pieces by the community for not actually being secure [1], plus being a major hassle for developers [2], to the point that large extensions with hundreds of thousands of users have stopped using the official Firefox extensions repository.

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1227867#c2

[2] https://forums.zotero.org/discussion/28847/zotero-addon-is-n...


Yep, it's a really nasty problem for any package manager that operates at scale.

Without any centralized validation of packages, checking falls to each developer who uses the libraries, and from an effort standpoint that only makes things worse: if validation is hard for the repo owner, it's that same work multiplied by the number of users when it's left to end users.


The problem is basically how the centralized validation is supposed to work. For e.g. the Linux kernel, it's doable because all code in the kernel must (almost by definition) interact with some other part of the kernel. Thus someone other than the code owner, being responsible for those other parts of the kernel, can be tasked with signing off on the new code being good and non-malicious.

But for NPM or PyPI, where anyone can upload anything, how's that supposed to work? It's perfectly fine for someone to put a package called "removeallfiles" on PyPI which executes "sudo rm -rf /". This isn't (by itself) malicious code. The same code, but obfuscated and put in the package name "isarray", is perhaps obviously malicious. But what about something in the middle, e.g. some form of practical joke package? What central authority decides what is allowed and what is not on PyPI?

Signing is a tangential issue. As long as you're trusting the dev who uploaded the code, what difference does it make whether they used password or public key auth (effectively)?


Well if there's no central validation, that leaves all individual users to validate packages before use (which is a huge amount of work)...

The problem is that companies are using these packages as though they were already vetted (i.e. not validating them before use). That's part of the value proposition in the first place (it's easier to use this package than to write it myself), but it leaves out the cost of validation.

On signing, I'm not sure we're talking about the same thing. I'm referring to developers cryptographically signing packages before pushing to the repository, with a key that the end user can validate. The idea is to protect against a compromise of the repository. There's a good discussion of the risks and potential solutions on The Update Framework's site (https://theupdateframework.github.io/)
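To make the distinction concrete, here's a minimal sketch of the sign-then-verify flow that developer signing enables, using openssl as a stand-in for whatever signing toolchain a repository might actually adopt (all file names here are invented):

```shell
# Developer side: generate a signing key pair (done once, kept off the repo)
openssl genrsa -out dev_key.pem 2048
openssl rsa -in dev_key.pem -pubout -out dev_pub.pem

# Build the package tarball and sign it before uploading
echo 'console.log("hello");' > index.js
tar -czf pkg.tgz index.js
openssl dgst -sha256 -sign dev_key.pem -out pkg.tgz.sig pkg.tgz

# End-user side: verify against the developer's public key, so a tarball
# tampered with on a compromised repository fails verification
openssl dgst -sha256 -verify dev_pub.pem -signature pkg.tgz.sig pkg.tgz
```

The point is that the repository only ever holds the tarball and the signature; the key the end user trusts never touches the repository, so a repository compromise alone can't forge a valid package.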


I completely agree on the first two paragraphs.

Wrt. signing: I'm assuming we're talking about PyPI and NPM here. I'm also assuming the major threat vector for a repository compromise is that (some of) the devs' accounts on other services (most likely email) are somehow compromised. In that case, it comes down to the dev's OPSEC practices whether the repository can be compromised using data from $OTHER_SERVICE. If the dev has poor OPSEC and would reuse a password across accounts in a user/pass repo-auth scenario, it's reasonable to assume this person would also have emailed themselves the keys used for signing packages, e.g. for transferring to another location behind a firewall. In either case, you're down to trusting the dev's OPSEC.

IMO, the threat models for other kinds of compromises which signing protects against are much more far fetched. AFAICT neither PyPI nor NPM use third-party mirrors, which basically leaves MitM attacks. If an attacker is capable of successfully MitM-ing the connections you make to PyPI/NPM over https, you have much bigger problems.

Or am I missing your point here?


Ah yes, so the threat model for developer signing is compromise of the repository. Here we're looking at the OpSec of the repository owner (e.g. PyPI, npm, Rubygems etc.), and also the risk of deliberate compromise by the repo owner (for example, where a state with authority over the repository compels them to modify a package).

In terms of compromise, there's already been the attack on Rubygems in 2013, but in general the thought here is that these repositories are extremely tempting targets for well-funded attackers. A compromise of npm, for example, would give an attacker direct access to a very wide range of targets.

Combine this with the very limited resources of the repository owners (most are free resources, likely constraining the money available for defence) and you get a realistic risk of attack, which is mitigated by an appropriate use of signing by the developer.

Docker Hub has deployed an implementation of The Update Framework to address this, although the interesting point now is whether people will actually use it, as it's not compulsory...


Unfortunately I don't think that many (if any) of the programming-language package repositories have manual review processes, or even automated checking for things like known malware...

It depends on what kind of repository you're trying to build.

If you're talking about something like NPM, PyPI or CPAN, then sure, these are relatively open systems where anyone can contribute, but that includes bad people.

An example from the other end of the spectrum would be Boost for C++, which is heavily curated and peer reviewed, good enough in quality that its libraries sometimes become part of the full C++ standard at a later date, and tiny compared to the others I mentioned before.


I don't think this is unfortunate at all. I shouldn't have to wait for someone to review my code before publishing an important bugfix. This is the primary thing that drove me away from mobile apps.


On the flip side, how is someone who's using a package from one of these repositories meant to validate that it's secure and non-malicious?

Without central validation, each user would have to do it, and that's frankly impractical...

The alternative is that no one actually does the validation and everyone runs the risk of insecure or malicious packages. To me, that's totally fine as long as they're doing it knowingly; however, I'd suggest that most companies making use of NPM, PyPI, Rubygems etc. are not doing it knowingly...


On update, can you send email to the purported author, telling them they've updated the package? (Similar to those "you have logged into some site from a new computer" emails.)

An easy way to undo a publish would also be useful.


Totally agree with your first suggestion. But just a reminder: this whole look into npm began because someone deleted his published packages. I don't know if that's something we should be adding. Deleting versions, to me, sounds like rebasing public git history.


How about a delay before making the package publicly available so that there could be an undo window? I'm not sure what an appropriate duration would be, as there are obviously situations where someone might want to publish an urgent bugfix quickly.

This could also save publishers from their own "whoops" moments, akin to gmail's super-handy "undo send" feature[1].

[1]: https://support.google.com/mail/answer/1284885?hl=en


Looking forward to 2-factor authentication in npm! For what it's worth, I find Google Authenticator offers a better user experience than text-message-based MFA.


Until you lose your phone. There is no way to back up/recover, so it's tied to that particular device forever. This has been reported years ago and never fixed. Use Authy or SMS.


I don't think that's accurate. I've changed phones multiple times and the worst I had to do was find my list of 10 recovery passwords to use one to get in and change the phone I use with Authenticator.

Usually I would just login and add the new phone.


Agreed; I've lost phones with many of my 2FA tokens on them, pulled out my backup codes, and got back into all 10-15 or so sites on which I use 2FA.

With the exception of an SMS-to-a-registered-phone recovery method, if I can lose my 2FA token, and lose my backup codes, and still access my account, then the 2FA implementation is IMO deeply flawed.


On a personal note, I despise apps like Authy. Every site I've tried to use 2FA on that didn't use standard HOTP/TOTP/U2F used an entirely different proprietary app. I have zero interest in having multiple 2FA apps, and even less interest in remembering which app I used on which site. While this clearly isn't Authy's fault, it's nevertheless a problem.

In other words, please just use standards based 2FA - or SMS/Phone calls.


I just save + encrypt the original QR code somewhere at the time of scanning it.
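For instance (file names and passphrase are illustrative, and `gpg --symmetric` on the image file works just as well as openssl):

```shell
# Stand-in for the QR code's contents: the otpauth URI holding the TOTP secret
echo "otpauth://totp/npm:alice?secret=EXAMPLESECRET" > totp-backup.txt

# Encrypt with a passphrase before stashing it in backups/cloud storage
openssl enc -aes-256-cbc -pbkdf2 -salt -pass pass:long-backup-passphrase \
  -in totp-backup.txt -out totp-backup.txt.enc
rm totp-backup.txt

# After losing the phone: decrypt and re-enter the secret in a new authenticator
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:long-backup-passphrase \
  -in totp-backup.txt.enc
```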


The note offers this workaround for npm: "Use npm shrinkwrap to lock down your dependencies", which will prevent the worm from spreading purely because of an install of a checked out app.

Any application package manager with a lockfile-based workflow (like Bundler, CocoaPods, Cargo, etc.) would at least have this mitigation as a default part of the workflow.
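For readers unfamiliar with it, here's an abridged, illustrative sketch of what `npm shrinkwrap` writes out (the package name, version, and URL are invented): every transitive dependency gets pinned to an exact version and resolved URL, so a routine install can't silently pick up a newly published release.

```shell
cat > npm-shrinkwrap.json <<'EOF'
{
  "name": "my-app",
  "version": "1.0.0",
  "dependencies": {
    "some-dep": {
      "version": "1.2.3",
      "from": "some-dep@^1.2.0",
      "resolved": "https://registry.npmjs.org/some-dep/-/some-dep-1.2.3.tgz"
    }
  }
}
EOF
# A "^1.2.0" range in package.json would happily float to 1.3.x;
# the lockfile won't.
grep '"version": "1.2.3"' npm-shrinkwrap.json
```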


Shrinkwrap might work for a while, but if you regenerate the file you'll run into the same issue.

A way to protect yourself 100% against the problem is to define your dependency as a link to a specific commit or tarball.
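Concretely, npm accepts git URLs with a commit-ish and plain tarball URLs in place of a semver range (the names, URLs, and SHA below are invented for illustration):

```shell
cat > package.json <<'EOF'
{
  "name": "my-app",
  "version": "1.0.0",
  "dependencies": {
    "some-lib": "git+https://github.com/example/some-lib.git#5a3c2f1e",
    "other-lib": "https://example.com/archives/other-lib-1.2.3.tgz"
  }
}
EOF
```

Pinned this way, nobody republishing or mutating the registry entry can change what your build fetches.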


Or a specific version since they can't be written to twice in the npm repo.


According to https://news.ycombinator.com/item?id=11341142 , this is now no longer true in at least one case.


Except the exact same code was republished, so the point still stands.


This time


IMHO the primary issue at play here is that publishing to the npm repository doesn't currently require proof of user presence, which enables a worm to propagate to other packages automatically.

The npm team is working on 2FA (https://twitter.com/seldo/status/713623991349411840), which should be an adequate solution to this issue.


Take Maven as an example: it's not vulnerable to this attack for several reasons:

1) No install scripts. Fetching a dependency in NPM will execute arbitrary code. Fetching a dependency in Maven doesn't execute any code from the dependency. Obviously, when I run my project I'm expecting to call code in that dependency, so this is a mitigation, not a complete fix. But that does lead on to the next point.

Corollary: You have to change the code in my project to spread the worm, not just add a new dependency, otherwise your worm code won't get executed. This is probably a bit more tricky to get right.

2a) Code deployed internally from CI servers, not local machines. It's got to be code-reviewed before it gets pushed to my employer's package repository.

2b) Code needs to be signed before being uploaded to Maven Central. I'm not going to start typing my GPG key into random unexpected prompts.

Malicious code is still a possibility, but the scope for a worm is much less.
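For comparison, npm's hook from point 1 is nothing more than an entry in `package.json`; a minimal (hypothetical) example of a package that runs code the moment it's installed:

```shell
cat > package.json <<'EOF'
{
  "name": "innocent-looking-pkg",
  "version": "0.0.1",
  "scripts": {
    "postinstall": "node setup.js"
  }
}
EOF
# Anything setup.js does runs with the installing user's privileges,
# before the package's actual code is ever require()d.
```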


Unfortunately, it's not as simple as disabling `postinstall` hooks. In dev, especially, the Node process likely runs as the same user as the one who publishes packages. There is nothing stopping the code from spawning `npm` and publishing a malicious project as soon as it is require()d. And of course, you're requiring it at some point; otherwise, why would you install it?

A better fix to this issue is to require publishers to enter a two-factor token, to email them to confirm publishing, or the like.

Yeah, it makes everyone a bit uneasy with how much trust is involved in the ecosystem. Is there a better solution?


Rather than 2FA, Maven requiring a GPG signature provides that extra security for me. Neither is infallible -- malware could infect your system sufficiently to intercept your next legitimate authentication.

Also, disabling install hooks in NPM would make things really difficult for packages that rely on native code as they've traditionally been compiled on install. I consider that an anti-pattern, but it's one that's unlikely to be removed any time soon.


Not every package manager is: for example, the Solaris 11 package manager explicitly does not support install-time scripting, for this reason among many others:

https://blogs.oracle.com/sch/entry/pkg_1_a_no_scripting


With Maven, dependencies are plain JAR files. Adding a dependency doesn't do anything but simple file manipulation. To affect the build process, Maven uses a distinct kind of dependency called plugins.

Actually, I'm surprised that npm uses scripts of any kind. All I want is to download some JS files. Why are there any scripts at all? I guess it's needed for native compilation, but that's a lazy solution; there could be better ones.


There are a surprising number of npm packages that provide a wrapper around a native library to expose bindings to node devs. I use node-sass on the dev side and mmmagic on the production side, both of which require the presence of binaries.

I understand the danger inherent in this system, and actually do keep an eye on dependencies I require. All that said, it's certainly a lot easier to have npm install handle fetching and building native libraries than it is to figure out a way to manually get those libraries attached to the node package (wait, did I install that in /opt, /usr/local, etc etc).

Ultimately, I'm downloading code someone else wrote and executing it. Yes, post- and pre-install hooks are low hanging fruit for malicious exploitation, but so is installing any large library, you can just as easily put Bad Code in a library you distribute for any other language and wait for someone to run it. The difference here is that there's an exploit possible at install time, rather than runtime.


I would say Debian is not vulnerable (to step 6), even for users of the rolling "unstable" release, since maintainers need to sign package uploads with their PGP key, which is usually protected by a separate password.


One of the problems is that npm (and others) put their credentials or some form of API token into dotfiles in the developer's home directory, meaning that if you can execute code as the user (via social engineering or malware), you can push new packages.

In some cases it's even accepted practice to put the actual username/password in the clear in a dotfile, which means anyone who can even just read a file from the user's home directory can gain persistent access to push packages as them...
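An illustrative example of the kind of `~/.npmrc` in question (written to a local file here, and the token is fake):

```shell
cat > npmrc.example <<'EOF'
registry=https://registry.npmjs.org/
//registry.npmjs.org/:_authToken=00000000-0000-0000-0000-000000000000
EOF
# Anything that can read this file can publish as this user until the
# token is revoked; tightening permissions is the bare minimum.
chmod 600 npmrc.example
```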


Well, isn't proper authentication a solution in and of itself? Using keys with pass phrases or requiring sudo to publish would theoretically mitigate this issue.


No, because it can just sit in the background and wait until you type your passphrase at some point. As soon as you run malicious code, it’s all over; no workarounds.

It would be nice if npm didn’t run arbitrary install scripts by default…
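It can at least be turned off per machine: a one-line `.npmrc` setting (written to a local file here for illustration) makes `npm install` skip preinstall/postinstall scripts, at the cost of having to build native modules explicitly.

```shell
# Equivalent to `npm config set ignore-scripts true`
cat > .npmrc <<'EOF'
ignore-scripts=true
EOF
grep 'ignore-scripts' .npmrc
```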



