JITs were far more important in the 1980s when people regularly used grossly different computing systems.
Think of the popular 1980s computers: IBM PC (Intel 8086), Amiga (Motorola 68000), Commodore 64 (MOS Technology 6510), TRS-80 (Zilog Z80), Apple II (WDC 65C02), Acorn Electron (Synertek SY6502A).
The solution at that time was to use p-code ("portable" code), such as https://en.wikipedia.org/wiki/UCSD_Pascal
In fact, Pascal's popularity in the 1980s was probably due to the large number of p-code interpreters and Pascal -> p-code compilers.
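To make the p-code idea concrete: the heart of such a system is just a bytecode loop that any host CPU can run. Here's a minimal sketch in C with made-up opcodes (the real UCSD p-machine instruction set was far richer):

    /* Minimal sketch of a stack-based "p-code" interpreter.
     * The opcodes are invented for illustration; the real UCSD
     * p-machine instruction set was much larger. */
    #include <stdio.h>

    enum { OP_PUSH, OP_ADD, OP_MUL, OP_PRINT, OP_HALT };

    static void run(const int *code) {
        int stack[256];
        int sp = 0;   /* stack pointer */
        int pc = 0;   /* program counter */
        for (;;) {
            switch (code[pc++]) {
            case OP_PUSH:  stack[sp++] = code[pc++];         break;
            case OP_ADD:   sp--; stack[sp - 1] += stack[sp]; break;
            case OP_MUL:   sp--; stack[sp - 1] *= stack[sp]; break;
            case OP_PRINT: printf("%d\n", stack[--sp]);      break;
            case OP_HALT:  return;
            }
        }
    }

    int main(void) {
        /* (2 + 3) * 4 -- the same "p-code" runs on any host CPU
         * that has an interpreter, 6502 or 8086 alike. */
        int program[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD,
                          OP_PUSH, 4, OP_MUL, OP_PRINT, OP_HALT };
        run(program);
        return 0;
    }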
--------------
Ironically, we still use "p-code", now called bytecode, today. But we really don't move between systems aside from ARM and x86. GPU assembly is special: it's an entirely different execution model, so you can't really port Java or Python to the GPU.
I guess LLVM shows that translating high-level code into the LLVM intermediate representation simplifies compiler optimization enough that it's useful even when you're generating code for a single, specific machine.
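As a rough illustration (file names are hypothetical), you can ask clang to stop at the IR stage and look at the target-independent form the optimizer works on:

    /* square.c -- hypothetical example file.
     * Something like
     *     clang -S -emit-llvm -O2 square.c -o square.ll
     * stops compilation at LLVM IR, the machine-independent form
     * that the optimization passes rewrite before any x86 or ARM
     * instruction selection happens. */
    int square(int x) {
        return x * x;
    }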
EDIT: I think modern CPUs have more or less settled on the same features. They're all multithreaded, cache-coherent, 64-bit modified-Harvard machines with out-of-order execution, a superscalar front end with speculative branch prediction, ~6-uop dispatch per clock tick, roughly 2 or 3 load/store units, 64 kB of L1 cache with 64-byte cache lines, and a dedicated 128-bit SIMD unit.
The above describes the ARM Cortex-A72, Intel Skylake, AMD Zen, and POWER9... except Skylake has 256-bit SIMD units I guess, and Apple's A12 has 96 kB of L1 cache. The differences between CPUs just aren't very big anymore.
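One practical upshot of that convergence is that portable code can bake in numbers like the 64-byte cache line. A common sketch (C11, assuming 64-byte lines, which holds for current x86-64 and most 64-bit ARM cores) is padding per-thread counters to avoid false sharing:

    #include <stdalign.h>

    /* Sketch: with 64-byte cache lines, giving each thread's counter
     * its own 64-byte slot keeps two threads from ping-ponging the
     * same line back and forth ("false sharing"). */
    struct padded_counter {
        alignas(64) unsigned long count;   /* sizeof == 64 after padding */
    };

    struct padded_counter per_thread[8];   /* one cache line per thread */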
"By 1978, there existed over 80 distinct Pascal implementations on hosts ranging from the Intel 8080 microprocessor to the Cray-1 supercomputer. But Pascal’s usefulness was not restricted to educational institutions; by 1980 all four major manufacturers of workstations (Three Rivers, HP, Apollo, Tektronix) were using Pascal for system programming. Besides being the major agent for the spread of Pascal implementations, the P-system was significant in demonstrating how comprehensible, portable, and reliable a compiler and system program could be made. Many programmers learned much from the P-system, including implementors who did not base their work on the P-system, and others who had never before been able to study a compiler in detail. The fact that a compiler was available in source form caused the P-system to become an influential vehicle of extracurricular education." (Niklaus Wirth)