
RE: Fwd: Octave JIT consultancy


From: Daryl Maier
Subject: RE: Fwd: Octave JIT consultancy
Date: Wed, 18 Sep 2019 16:24:14 -0400

Hi,

I will try to answer your questions as best I can.  Since I have limited knowledge of Octave internals at this point (though I'm happy to be educated!), some of my answers will be somewhat abstract.

>>> Using your program, would it involve the current interpreter in any way? Or would it be parallel to the interpreter?

It could work either way, but it depends on what the Octave interpreter is capable of doing.

Generally speaking, we have used the OMR compiler technology successfully both in environments where every method is compiled before execution and in mixed-mode environments that combine interpreted and JITed methods.  Which technique a language environment chooses depends on the capabilities of the VM (for example, supporting mixed mode may require handling four different transition types between methods: (I)nterp->Interp, I->(J)itted, J->I, and J->J) and on the performance objectives (for example, mixed mode is important if you care about the cost of compile time and can't afford to compile every method).

The OMR compiler also supports multiple compilation threads which can run in parallel with your application and VM (including the interpreter).  This is how it is used in OpenJ9.  It also works just as well in a synchronous mode where methods are compiled on demand on the application thread before proceeding.  This is how the POC implementations in Lua and WebAssembly work, for instance.

>>> How would your program deal with the fact that Octave variables can change type during their lifetimes?

This would depend on the representation of the program as it is presented to the compiler.  Presumably, by the time the program reaches the JIT, the JIT's perspective is that it is dealing with distinct variables, each with its own type.

>>> How much work do you estimate it would be to create an m-file-compiler? In lines of code. Can you describe the process?

It depends on what the starting point is.  However, for some perspective, the proof-of-concept Lua JIT I mentioned in my earlier email implements all but one of the PUC-Rio Lua VM internal opcodes in about 2500 LOC.  This includes using the JitBuilder interface to translate the internal Lua representation of a function into the IL expected by the OMR compiler, and glue logic in Lua to initiate JIT compilation and look up compiled targets.  Doing so yielded a number of noticeable performance improvements over the interpreter with a minimal amount of work.

The general process for any language environment looking to use OMR to compile is to first have some representation of the method that you wish to compile.  For instance, an AST (abstract syntax tree) with type information, or an intermediate representation of your method (for example, in bytecode form).  You could then either use JitBuilder or write your own custom "IL generator" to translate that representation into the internal tree-based, typed representation that OMR works with.

Deciding when to compile is another design decision, as you've alluded to in an earlier question.  Do you compile everything up front before the application executes (like a static compiler), or do you compile at runtime?  If you compile at runtime you will need to introduce some control logic to determine when and how a method is compiled as the program executes.  For example, in its simplest form, at the point where one method calls another you will need to check whether the target is compiled or not.  If it is compiled, then you dispatch to the compiled body.  If it is not compiled then you can either begin the process to initiate a compile, or do some other heuristic checks to defer that process (e.g., compile a method after 10 invocations).  This is an over-simplification of the process, but I think you get the idea.

Once you've decided when and what to compile, you then have to decide "how" to compile.  The OMR compiler allows you to define different "optimization strategies" that describe the optimizations you wish to apply to this compilation.  Some strategies may favour quick compiles while others may have more involved optimizations and consume more memory.  When the OMR compiler is used in OpenJ9, it uses a tiered-compilation approach and allows methods to be recompiled with increasing levels of optimization depending on the "hotness" of the method.

During the compilation process the JIT sometimes needs answers from the language "front end" in order to optimize or generate code (because language environments tend to differ, each with its own rules and configuration).  For example: how do I fold a constant floating-point expression at compile time?  What is the size of an address on the target machine?  What is the address of a method, given a method handle?  Many of these queries have defaults, but a language environment would have to override them where necessary.  These are implemented in a "FrontEnd" interface and a compiler environment interface in OMR that you would have to provide.

The OMR compiler by default writes the generated code to an executable code buffer.  If the compilation was successful it simply returns the entry point address of the method just compiled.  At the moment, the language environment is responsible for managing these code addresses and providing a means to look up the compiled version of a method (though we may be introducing some infrastructure to generalize this in the near future).  There is also some less-tested (slightly above experimental) code that will permit the compiled functions to be persisted in an ELF representation for linking with other object files.

If you're interested in tinkering with this some more, last year at SPLASH'18 we ran a hands-on tutorial workshop that demonstrated how to integrate OMR into a WebAssembly environment.  You can find the tutorial here -> https://github.com/omr-turbo/wasmjit-omr/blob/turbo/turbo.md.  Although this is a simplified tutorial, it will give you a feel for what is involved in integrating your own compiler into an existing language environment.

>>> How do you assess the possibility to create a compiler from m-script to an ECU with your program? There is a looot of money here....

It might be possible to do what you're asking, but it's hard for me to say for sure without understanding more about both ends of the pipeline (i.e., the "m-script" end and the "ECU" end).  If the ECU end executes "instructions" like a CPU would then it should be possible to generate code for it.

Hope that helps.

Cheers,
..daryl



From: GoSim <address@hidden>
To: address@hidden
Date: 2019/09/16 01:06 PM
Subject: [EXTERNAL] Re: Fwd: Octave JIT consultancy
Sent by: "Octave-maintainers" <octave-maintainers-bounces+maier=address@hidden>





Hello,

I'm not an octave dev but interested in this topic. How would your program
deal with the fact that Octave variables can change type during their lifetimes?
Using your program, would it involve the current interpreter in any way? Or
would it be parallel to the interpreter? My guess is parallel which is why
no dev is answering you, maybe it is enough just to know the syntax?
How much work do you estimate it would be to create an m-file-compiler? In
lines of code. Can you describe the process?

ECU = electronic control unit, automotive industry
How do you assess the possibility to create a compiler from m-script to an
ECU with your program? There is a looot of money here....

kind regards









