Hacker News

WASM is specifically intended to run on just about any CPU, which makes JIT compilation a particularly valid strategy for running it. Hardware-accelerating the unoptimized WASM bytecode would have a lot of limitations, since the compiler that produced the bytecode does not know what hardware it will run on, so it can't do any of the hardware-specific optimizations that a normal compiler would do.


A single compiler pass that takes the WASM and optimises it for your machine is still AOT compilation.

What makes a JIT a JIT is not its ability to execute bytecode optimised for the platform, but its ability to perform optimisations for your current input problem, with its memory and execution characteristics. E.g. stuff like polymorphic inline caching or dynamic deoptimization.
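Polymorphic inline caching is exactly the kind of runtime-input-driven optimisation meant here. A toy sketch (all names illustrative, not from any real VM): a call site remembers the method it resolved for each receiver type it has seen, so repeat calls on the same types skip the full lookup.

```python
# Toy polymorphic inline cache (PIC): a call site caches method
# lookups per receiver type. Illustrative only, not a real VM design.

class CallSite:
    def __init__(self, method_name):
        self.method_name = method_name
        self.cache = {}  # receiver type -> resolved method

    def invoke(self, receiver, *args):
        klass = type(receiver)
        fn = self.cache.get(klass)
        if fn is None:                         # cache miss: slow lookup
            fn = getattr(klass, self.method_name)
            self.cache[klass] = fn             # memoize for this type
        return fn(receiver, *args)             # cache hit path is cheap

class Circle:
    def __init__(self, r): self.r = r
    def area(self): return 3.14159 * self.r * self.r

class Square:
    def __init__(self, s): self.s = s
    def area(self): return self.s * self.s

site = CallSite("area")
shapes = [Circle(1.0), Square(2.0), Circle(3.0)]
areas = [site.invoke(s) for s in shapes]
# The site has now seen two receiver types, so its cache has degree 2:
# it is "polymorphic", and only the first call per type paid for lookup.
```

This is why the optimisation depends on the running program: the cache contents are a property of the actual inputs, not something an AOT pass could know.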

Wasm is pretty well AOT-compilable, and I don't see a reason why a WASM CPU shouldn't use a hybrid approach with a hardware AOT step, with WASM kinda being its microcode.
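The "single AOT pass" idea can be sketched with a toy stack bytecode (standing in for WASM, which it is not): the pass translates the bytecode into host code once, up front, and execution never reinterprets the bytecode afterwards.

```python
# Toy AOT step for a tiny stack bytecode. The bytecode is walked
# exactly once to build a host-level expression; the result is then
# handed to the host compiler. Hypothetical format, not real WASM.

def aot_compile(code):
    # code: list of ("const", n), ("add",), ("mul",) instructions
    stack = []
    for op in code:
        if op[0] == "const":
            stack.append(str(op[1]))
        elif op[0] == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(f"({a} + {b})")
        elif op[0] == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(f"({a} * {b})")
    # Compile the translated expression with the host's compiler.
    return eval(f"lambda: {stack[-1]}")

prog = [("const", 2), ("const", 3), ("add",), ("const", 4), ("mul",)]
f = aot_compile(prog)
# f() computes (2 + 3) * 4 without ever touching the bytecode again.
```

No profiling or runtime feedback is involved, which is what makes this AOT rather than JIT, even though it runs on the target machine.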


> with WASM kinda being it's microcode

Not to be too pedantic, just because you make a good point that's worth clarifying: I think you meant WASM being the processor's *ISA*, which is translated to microcode instructions that are more optimized for actual execution. Exactly like an Intel CPU isn't "really" executing x86 instructions anymore.


You're correct, my brain skipped a couple steps when writing that sentence ^^', thanks for the clarification!


JIT compilers do a bit less optimization than rumored (of course web browsers and HotSpot do some of the heaviest.) The results have to be worth the analysis time, which they're not, because CPUs are already extremely fast and it's memory that's slow, and JIT isn't a technique that can optimize memory layout.
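The memory-layout point can be made concrete: whether data sits as an array of structs or a struct of arrays is fixed by how the program allocates it, and it is not something a JIT can rearrange after the fact. A toy contrast:

```python
# Array-of-structs: each point is its own object, so the x fields of
# different points are scattered across the heap.
points_aos = [{"x": float(i), "y": float(i)} for i in range(4)]
xs_from_aos = [p["x"] for p in points_aos]

# Struct-of-arrays: all x values live in one contiguous sequence,
# which caches like much better for a scan over x alone. But this
# layout had to be chosen up front by the programmer; a JIT sees the
# objects only after they have been laid out.
points_soa = {"x": [float(i) for i in range(4)],
              "y": [float(i) for i in range(4)]}
xs_from_soa = points_soa["x"]
```

Both layouts yield the same values; only the second gives the contiguous access pattern, and nothing in the JIT pipeline converts one into the other.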


> The results have to be worth the analysis time

On modern systems it may be possible for a JIT compiler to run in the background on cores not being used by the application code.
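A minimal sketch of that idea (all names hypothetical), using tiered execution: the application keeps running a baseline version of a function while a background thread produces an "optimized" version and swaps it in when ready.

```python
# Sketch of background recompilation on a spare core/thread. The
# "optimization" here is fake; the point is the concurrent swap.
import threading
import time

def baseline_square(x):
    return x * x  # stands in for slow baseline/interpreted code

current = {"square": baseline_square}  # dispatch table the app uses

def background_compile():
    time.sleep(0.05)                   # pretend optimization takes time
    optimized = lambda x: x * x        # same semantics, "better" code
    current["square"] = optimized      # swap in the new tier

t = threading.Thread(target=background_compile)
t.start()
before = current["square"](3)          # served while compiling
t.join()
after = current["square"](3)           # served by the swapped-in tier
```

The application never blocks on the compiler; it just starts hitting the faster version once the swap lands.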

> JIT isn't a technique that can optimize memory layout

As I understand it, compiler optimizations for memory layouts aren't very well researched in general. Why would the JIT model be a hindrance?


> On modern systems it may be possible for a JIT compiler to run in the background on cores not being used by the application code.

Only as long as the app itself is on-core. It's not worth running background superoptimization pretty much ever, particularly on a battery-powered device, but also in general because optimizations aren't real unless they're reliable. I've had Lisp programmers brag to me about how some implementation can spend 30 minutes optimizing, but in that case you can't tell what's going to happen to your program…

> As I understand it, compiler optimizations for memory layouts aren't very well researched in general. Why would the JIT model be a hindrance?

It's not a help either. Well, function specialization might be helped in some cases.


It doesn't need to be like microcode; it suffices to be like a portable executable format.

This was pretty common in big iron, and survives to this day on IBM and Unisys mainframes.

Also, some Java vendors take this approach by AOT-compiling .class files into native code for embedded deployment, as an option.



