iOS code signs all code that runs on the system. Compared to most other operating systems, this is a huge security win: without a second exploit to bypass code signing, even a standard remote code execution vulnerability is useless. Nitro generates code at runtime, so it by definition needs to run unsigned code. With Safari, Apple can audit all the code involved, but with third-party apps, it would be easy for a developer to have their app exploited, or even to use the ability to run unsigned code to download new, possibly malicious code after the App Store review process.
So far, there have been two userland exploits of the iOS browser (Safari) since Apple implemented code signing in 2008, both delivered through JailbreakMe.com. Both required far more work than they would have otherwise.
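For what it's worth, the Nitro point above comes down to a single system call: a JIT needs a memory page that is both writable (to emit machine code into) and executable (to jump into), and that is exactly the mapping pervasive code signing denies to third-party apps. A minimal POSIX sketch (Python here, for illustration; this request succeeds on a typical desktop OS, and it is the one iOS rejects for unentitled processes):

```python
import mmap

PAGE = mmap.PAGESIZE

# A JIT compiler emits native instructions into memory at runtime, so it
# needs a page that is (at some point) both writable and executable.
# On iOS, a third-party app cannot obtain such a mapping: the generated
# code carries no signature, so the kernel refuses it. On a desktop OS
# the same request is normally granted:
buf = mmap.mmap(-1, PAGE,
                prot=mmap.PROT_READ | mmap.PROT_WRITE | mmap.PROT_EXEC)

# A real JIT would now write machine code into `buf` and call into it;
# here we only show that the W+X mapping itself was granted. These bytes
# are placeholders (x86 NOPs) and are never executed.
buf.write(b"\x90" * 16)
print("got a writable+executable page of", PAGE, "bytes")
buf.close()
```

The point is not the Python specifics but the kernel policy: deny this one mapping and no process can run code it generated itself, which is why enabling Nitro for apps would punch a hole in the whole model.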
It's not a "weak argument"; it's the reason why iOS is one of the most secure platforms out there, and way more secure than Android, Windows, (most) desktop Linux, and OS X. Other preventative technologies (e.g. ASLR, DEP/W^X) exist on those platforms, but none of them have the same kind of pervasive code signing that you see on iOS (and often on video game consoles).
Code signing is the reason why iOS and game consoles are closed platforms, where only the first party controls what you can run on them. They're just protecting their profits.
It has yet to be shown that code signing helps significantly to improve security. It relies too much on the competence of those who write said signed software. Measures like sandboxing are much more effective.
> It has yet to be shown that code signing helps significantly to improve security.
Hah! I needed a good laugh today.
No, Apple has not, as far as we know, blacklisted a developer's certificate to the extent that pre-existing apps on an arbitrary user's phone stop working due to validation failure, but they have revoked certificates from developers, effectively preventing them from ever submitting another app to the store[1]. You're right that sandboxing is a stronger security measure in a general sense, but security isn't a black-or-white thing where you can throw a single buzzword at it and have all your problems taken care of. Code signing is an additional layer of protection for the average user, so they don't download a fake AV program that grinds their device to a halt while offering to get rid of itself for the low price of $39.99. Yes, code signing also ensures that the manufacturer of the device can get a cut of the profits of other people's hard work, but that is not its only reason for existence, and it doesn't always have to be employed as a revenue stream.
I'm not an expert at this so please correct me if I'm wrong, but if I remember correctly, all I had to do to jailbreak my iPhone was go to jailbreakme.com and click the button on it. That was it.
So it seems that if I can trick someone into clicking that button, then it is possible for me to not only jailbreak their phone but also run any sort of malicious code afterwards (because instead of Cydia it can install just about anything).
JailbreakMe exploited a vulnerability in the FreeType library used while rendering PDF files. That has long since been patched. While I'm sure there are other exploits we haven't discovered yet, a one-click jailbreak exploit hasn't happened since JailbreakMe; if it had, it would be ubiquitous. The odds of someone accidentally jailbreaking their phone are tremendously low.
I don't recall ever hearing a story where someone accidentally jailbroke their phone, although I could be ill informed.
UIWebView runs in-process. There's no way for "part of" a process to be able to map executable pages without the other parts being able to as well, so either they let any app run whatever code it wants, disable Nitro, or rewrite large chunks of the UI to move WebKit out-of-process.
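To make the "part of a process" point concrete: when code asks to make a page executable, the kernel evaluates the request against the process (on iOS, against its entitlements), not against whichever library issued the call. A hedged sketch, assuming a desktop POSIX system where the call is permitted (on iOS, granting this ability to the in-process UIWebView would grant it to every line of app code linked into the same process):

```python
import ctypes
import ctypes.util
import mmap

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)

PAGE = mmap.PAGESIZE
# Start with an ordinary writable, non-executable page.
buf = mmap.mmap(-1, PAGE, prot=mmap.PROT_READ | mmap.PROT_WRITE)
addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))

# mprotect() flips permissions on a page in the process's address space.
# The kernel's decision is per-process: there is no mechanism to allow
# this call only when WebKit makes it, which is why in-process Nitro
# would mean letting all app code generate and run unsigned code.
rc = libc.mprotect(ctypes.c_void_p(addr), ctypes.c_size_t(PAGE),
                   mmap.PROT_READ | mmap.PROT_EXEC)
print("mprotect(PROT_READ|PROT_EXEC) returned", rc)  # 0 means it was allowed
```

That's the whole tradeoff in the comment above: an out-of-process WebKit could hold the JIT privilege alone, but UIWebView shares its address space with the host app, so the privilege can't be scoped to it.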