steve-ALUG@hst.me.uk wrote:
Not wishing to be too picky, however, this is how I understand it works under Windows; Linux and Mono may be different. The .NET languages are compiled, but not to native machine code: they are compiled to CIL, the Common Intermediate Language (it used to be called Microsoft Intermediate Language, MSIL), which is what the Common Language Runtime (CLR) runs. This intermediate code is then compiled at runtime to native machine code. That second compilation step is performed not by a traditional compiler but by a Just In Time compiler (JIT compiler), which compiles the intermediate code into machine code just in time, before it is needed. To be honest, I've never seen why this is supposed to be an advantage, as the end result seems to be slower than native machine code.
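To make that concrete (this assumes you have Mono installed; mcs, monodis and mono are the stock Mono tools, and hello.cs / Hello are just names I've picked for the example):

// hello.cs: a minimal example, nothing special in the source itself
using System;

class Hello
{
    static void Main()
    {
        // mcs hello.cs        compiles this to CIL inside hello.exe
        // monodis hello.exe   shows you that intermediate code
        // mono hello.exe      runs it, with the JIT turning the CIL into
        //                     native code method by method as it executes
        Console.WriteLine("Hello from the CLR");
    }
}

The JIT step is the last one, at run time rather than build time; if memory serves, mono --aot will do that translation ahead of time instead, which is one way round the start-up cost.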
IBM got this right with the AS/400 back in the late '80s. Every application (and even bits of the OS itself) comes compiled into TIMI (Technology Independent Machine Interface) objects; TIMI is a virtual instruction set.
However, these instructions are never directly interpreted. There is a sort of JIT-like final compile step that translates them into the current architecture's instruction set. Once that is done, the native code is appended to the runnable object, and it then behaves (on that box) just as if it were a native binary. Move it to a different AS/400 on a different architecture (say, from an early CISC machine to the newer PPC-based machines) and the process is repeated, and it is now a native application for that machine.
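Just to sketch the idea (a loose C# illustration of the translate-once-and-cache pattern, not anything to do with how TIMI is actually implemented, and every name in it is made up): the runnable object carries its portable instructions plus whatever native translation has already been done, so only the first run on a new architecture pays the translation cost.

using System;
using System.Collections.Generic;

// Illustration only: a "program object" that keeps its portable
// instructions and caches one translation per target architecture.
class PortableProgram
{
    readonly string[] portableInstructions;
    readonly Dictionary<string, string[]> nativeCache =
        new Dictionary<string, string[]>();

    public PortableProgram(string[] instructions)
    {
        portableInstructions = instructions;
    }

    public string[] NativeCodeFor(string architecture)
    {
        // Translate once per architecture and keep the result with the
        // object; later requests for the same architecture reuse it.
        if (!nativeCache.ContainsKey(architecture))
        {
            nativeCache[architecture] = Array.ConvertAll(
                portableInstructions, op => architecture + ":" + op);
        }
        return nativeCache[architecture];
    }
}

class Demo
{
    static void Main()
    {
        var prog = new PortableProgram(new[] { "LOAD", "ADD", "STORE" });
        // First use on the old box: the translation happens here.
        Console.WriteLine(string.Join(" ", prog.NativeCodeFor("CISC-48")));
        // "Moved" to a new architecture: translated again, then cached.
        Console.WriteLine(string.Join(" ", prog.NativeCodeFor("PPC-64")));
    }
}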
Because of this you can take application code from an '80s System/38 (which was a 48-bit CISC machine), copy it straight to a brand new iSeries box (64-bit, multi-CPU, PowerPC-based) and it will just work, but with all the advantages of being a native application for the newer machine. Not only does this work, IBM pretty much promises that it *always* works, with 100% compatibility.