Lars Bak, who leads Chrome's V8 team and was tech lead for Java's HotSpot VM, argues that JavaScript is a competitive 'bytecode' even compared to Java or CLI bytecode [1]. Among other things, Bak claims that JavaScript source is more compact than traditional bytecode.
[1] about 3/4 into this interview: http://channel9.msdn.com/Shows/Going+Deep/Expert-to-Expert-Erik-Meijer-and-Lars-Bak-Inside-V8-A-Javascript-Virtual-Machine
I'm personally in favor of leaving JS mostly alone, especially at the syntax layer, and letting transcompiler solutions like CoffeeScript evolve to relieve the syntax burden and add semantic improvements.
CoffeeScript is far from perfect--it has syntax quirks of its own--but it's more pleasant for me to write in, being used to Python and Ruby. (And, yes, I understand JavaScript too; I just don't like the syntax.)
Eventually transcompiler languages will evolve to take advantage of different JS engine improvements. So far, this isn't a goal of CoffeeScript, but other abstraction layers might already be doing that.
> Eventually transcompiler languages will evolve to take advantage of different JS engine improvements.
I wonder what opportunities there are here that haven't been exploited yet and that don't require replacing JS with a new language. For example, could JS implementors define a more-easily-optimizable subset of JS? Then transpilers seeking performance could target just that subset.
It's an interesting idea. My assumption is that stronger JS engines already do a good job of optimizing simply-written JS, whether the JS is hand written or produced by a transcompiler.
Target-specific transcompilers could probably target server-side JS at first. If you're just building a node.js app, there's no reason for the compile-to-JS language to make any concessions to IE. So one "optimization" is simply avoiding legacy cruft in the generated output. But I could also see exploiting specific features of cutting-edge JS engines.
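As a rough sketch of what "avoiding legacy cruft" might mean, compare two outputs a hypothetical transcompiler could emit for the same mapping operation (the function names here are made up for illustration):

```javascript
// Legacy target (old IE): the compiler must ship its own iteration
// helper because Array.prototype.map may be missing.
function mapLegacy(list, fn) {
  var out = [];
  for (var i = 0; i < list.length; i++) out.push(fn(list[i]));
  return out;
}
var doubledLegacy = mapLegacy([1, 2, 3], function (x) { return x * 2; });

// Server-side/modern target (node.js, V8): lean directly on the
// engine's built-in ES5 method, which it can optimize internally.
var doubledModern = [1, 2, 3].map(function (x) { return x * 2; });
```

The generated code shrinks and the engine gets a call pattern it already knows how to make fast.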
One of the stated reasons for Dart is that the V8 team was hitting a wall with making JS fast, beyond which a language with cleaner and more optimizable semantics is needed. I wonder, though, whether the nature of that wall has been clearly written up anywhere. I'd like to know if there are tricks in the category of "This would be annoying to write by hand in JS, but would be easy for a compiler targeting JS" that could be used to get around it.
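One known example in that "annoying by hand, easy for a compiler" category is the `x|0` idiom: bitwise-or with zero truncates a JS number to a 32-bit integer, so sprinkling it everywhere tells the engine it can use integer arithmetic. A sketch (function names are mine, not from the source):

```javascript
// Plain version: the engine must infer types on its own.
function sumIntsPlain(n) {
  var total = 0;
  for (var i = 0; i < n; i++) total += i;
  return total;
}

// Hinted version: "|0" coerces every value to a 32-bit int, which is
// tedious for a human but trivial for a compiler to emit everywhere.
function sumIntsHinted(n) {
  n = n | 0;
  var total = 0;
  for (var i = 0; (i | 0) < (n | 0); i = (i + 1) | 0) {
    total = (total + i) | 0;
  }
  return total | 0;
}
```

Both return the same result for small inputs; the hinted form just makes the integer-only intent machine-checkable.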
> Transcompilers that were target-specific could probably target server-side JS
I had an idea recently that (to me at least) is super exciting: someone should make a good language that compiles (à la CoffeeScript/Parenscript) to JS but also to Lua. JS and Lua are close semantically, so it might not be so hard. (If it did turn out hard, it probably wouldn't be worth doing.) That would be a really interesting server-side alternative to both Dart (whose philosophy appears to be "run our VM on the server and compile to JS for the client") and Node.js.
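To make the semantic closeness concrete, here's a closure-based counter in JS with the line-by-line Lua equivalent in comments (the mapping is my own illustration, not from the source):

```javascript
// Both languages share first-class functions, closures, and a single
// "number" type; JS objects map roughly onto Lua tables.
function makeCounter() {          // Lua: local function makeCounter()
  var n = 0;                      //        local n = 0
  return function () {            //        return function()
    n = n + 1;                    //          n = n + 1
    return n;                     //          return n
  };                              //        end
}                                 //      end

var c = makeCounter();
var first = c();                  // Lua: local c = makeCounter(); c(c())
var second = c();
```

A front-end targeting both would mostly have to paper over the genuinely different bits: 0- vs 1-based indexing, `undefined`/`null` vs `nil`, and the standard libraries.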
JS is becoming the compiler target "bytecode", without needing verifiers or new and complex standards to be adopted by multiple browser vendors.
JS has gaps as a target language, for sure. We are working on filling them (e.g. 64-bit and other int types).
These are easy bugs to fix compared to creating a new, portable, and future-friendly bytecode standard in addition to keeping up with a competitive JS engine. Since JS is incumbent, it's hard for any browser vendor to justify a new thing with zero users at first, and too much risk of non-standardization or JVML-like albatross status.
Can we reconsider using this thing as the assembler for the web? Can we come up with a sane, cross-platform bytecode standard?