Hacker News

> Does anybody know why JIT isn't done in classically AOT compilers?

One (admittedly incomplete) answer is that AOT compilers try to replicate many of the wins that JIT compilers get from runtime specialization by including a profile-guided optimization pass instead, which specializes ahead of time, using data logged from what you hope is a representative example of runtime.

Good JIT compilers can do things like optimizing fast paths, discovering latent static classes in highly dynamic languages, etc. These kinds of optimizations can also be done AOT, if you have good profile data and suitable analysis & optimization passes.

The pros/cons of each approach are not entirely resolved, and you will find varying opinions. Part of the problem with making a direct comparison is that there are large infrastructural inconveniences with switching from one approach to the other. A good JIT is a quite pervasive beast, not something you can just tack on as a nice-to-have. PGO is somewhat infrastructurally easier to add to an existing AOT compiler. Therefore, if you can do most of what JIT does via PGO, you would prefer to do that, were you the maintainer of an existing AOT compiler. Whether you really can is afaik a bit of an open question.



I think something that's often overlooked in this discussion is the difference in language semantics. We're not just comparing AOT with JIT (or why not JIT an AOT-compiled app...); we're almost always also comparing C++ to the JVM/CLR worlds.

And then the point is that most optimizations a JIT can do that an AOT compiler cannot are particularly important where the language semantics are "too" flexible. If your code has lots and lots of virtual calls, or lots of exceptions with unpredictable control flow - well, sure, it's really important to elide that flexibility where it's not actually used. That's kind of like how JS VMs nowadays speculatively type their untyped objects - it's a huge win, and not possible statically.

But the point is - these optimizations are critical because those languages don't allow (or encourage) code to disable these dynamic features. In C++ this can be helpful too; but how often is dynamic devirtualization really going to matter? I mean, you can statically devirtualize certain instances (e.g. whole-program optimization reveals only two implementations and replaces a virtual call with an if), but the real "code could be any subtype but actually isn't" scenario just isn't one that comes up often.

The consequence is that C++ gets most of the benefits of a JIT without one, simply because a JIT is largely solving problems C++ compilers don't need to solve. The cost is that the compiler spends inordinate amounts of time compiling your entire program as optimally as it can, even though only a few hotspots matter.



