I have the feeling that immutable data (by which I don't mean copy-on-write) has enormous potential for optimization, but there doesn't seem to be a good central collection of techniques.
Some tips from me: mmap works really well and easily with immutable data, since the operating system will manage the memory for you. If you store strings deduplicated in a string table, you can compare the pointers for blazingly fast string comparison.
I think if you have a large immutable lookup table, the compiler will put it in the .rodata section, which means the kernel can evict those pages when they're unused and repopulate them via a page fault when they're needed again (the pages are backed by the executable file, so nothing has to be written to swap).
mmap'ing constant tables is nice, but it still wastes your caches, which makes it much slower than a few tricks that save both data and CPU, such as using proper algorithms and data structures. I'm working on an optimizing data compiler for such one-shot lookups in static const data. Range or interval queries would be different and easier (B-trees only), but one-shot lookups need to try much more.
You can use proper data structures in mmapped files. The problem is that you need offset-based pointers (like position-independent code, but for data) and fixed, strict layouts, which have very little support in programming languages.