We report results both with and without just-in-time compilation. The specific focus of this work was pure interpreter performance in the context of metacompilation systems, i.e., before JIT compilation has had a chance to kick in.
For both RPython and Truffle/Graal, it's possible to disable the JIT compilers and measure pure interpreter speed.
So the "baseline" is Java - is that Java compiled or interpreted? And if the latter, is the non-JIT-ted Graal interpreter compiled (as Java) and interpreting the script, or is it interpreted itself?
The JIT-compiled numbers were obtained on a standard HotSpot JVM with JIT compilation enabled.
The interpreter numbers were obtained on a standard HotSpot JVM with the -Xint flag, i.e., using only the Java bytecode interpreter.
The TruffleSOM interpreter is AOT-compiled, so it is a native binary, which then interprets the SOM code.
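For concreteness, the interpreter-only configurations can be selected roughly as follows (the jar and benchmark names are illustrative, not from the actual setup; RPython-built VMs conventionally accept a --jit option):

```shell
# HotSpot: -Xint restricts execution to the bytecode interpreter,
# so no JIT compilation happens at all.
java -Xint -jar benchmarks.jar

# RPython-built interpreter (e.g. a SOM VM): disable the meta-tracing JIT
# to measure pure interpreter speed. Binary name is hypothetical.
./som-rpython --jit off Benchmark.som
```

With these flags, both systems run the guest program through their interpreter loop only, which is what the interpreter-speed comparison measures.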