Yeah, that’s true. But because it’s built on the runtime’s Map it inherits those performance characteristics and that maximum size: in V8, a Map is hard-limited to 2^24 = 16,777,216 keys. A heavily loaded Map can also cause performance spikes that delay the event loop. If you expect higher load, using two levels of Map (e.g. Map<k1, Map<k2, V>>) can help with both issues.
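To make the two-level idea concrete, here’s a minimal sketch of sharding one logical map across several inner Maps so no single Map approaches V8’s ~2^24 key ceiling. The class name, shard count, and hash function are my own choices for illustration, not anything from the library being discussed:

```typescript
// Sketch: one logical map spread over N inner Maps, so no single
// Map approaches V8's ~2^24 key limit and each sweep/rehash is smaller.
class ShardedMap<V> {
  private shards: Map<string, V>[];

  constructor(private shardCount = 16) {
    this.shards = Array.from({ length: shardCount }, () => new Map());
  }

  // FNV-1a string hash to pick a shard; any stable hash works here.
  private shardFor(key: string): Map<string, V> {
    let h = 0x811c9dc5;
    for (let i = 0; i < key.length; i++) {
      h ^= key.charCodeAt(i);
      h = Math.imul(h, 0x01000193);
    }
    return this.shards[(h >>> 0) % this.shardCount];
  }

  get(key: string): V | undefined { return this.shardFor(key).get(key); }
  set(key: string, value: V): void { this.shardFor(key).set(key, value); }
  delete(key: string): boolean { return this.shardFor(key).delete(key); }
  get size(): number { return this.shards.reduce((n, m) => n + m.size, 0); }
}
```

The same key always hashes to the same shard, so reads and writes stay O(1); the per-shard key count is what stays bounded.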
I went to look at how the library accounts for these issues, and it doesn’t. I didn’t check whether it uses timer coalescing or intervals, but I wouldn’t want O(millions) of setTimeout calls on my service either.
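For contrast, here’s a rough sketch of the interval approach: one sweeper timer for the whole cache (plus lazy eviction on read) instead of a setTimeout per key. This is a generic illustration of the technique, not how the library in question works:

```typescript
// Sketch: TTL cache with a single sweep interval instead of one
// setTimeout per key, so the timer count stays O(1) at any key count.
interface Entry<V> { value: V; expiresAt: number; }

class TtlCache<V> {
  private entries = new Map<string, Entry<V>>();
  private sweeper: ReturnType<typeof setInterval>;

  constructor(sweepMs = 1000) {
    // One timer total, regardless of how many keys are stored.
    this.sweeper = setInterval(() => this.sweep(), sweepMs);
    (this.sweeper as any).unref?.(); // Node: don't keep the process alive
  }

  set(key: string, value: V, ttlMs: number): void {
    this.entries.set(key, { value, expiresAt: Date.now() + ttlMs });
  }

  get(key: string): V | undefined {
    const e = this.entries.get(key);
    if (!e) return undefined;
    if (e.expiresAt <= Date.now()) {
      // Lazy eviction covers the window between sweeps.
      this.entries.delete(key);
      return undefined;
    }
    return e.value;
  }

  private sweep(): void {
    const now = Date.now();
    for (const [k, e] of this.entries) {
      if (e.expiresAt <= now) this.entries.delete(k);
    }
  }

  stop(): void { clearInterval(this.sweeper); }
}
```

The tradeoff is expiry granularity (keys can linger up to one sweep period), which the lazy check on `get` papers over for readers.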
I think it’s fine in the browser or on a hobby server, but given Map’s limitations I wouldn’t use it on my production servers.
Redis can handle 2^32 = 4,294,967,296 keys (256× Map’s limit), and you can get the “multi-level map” effect by sharding your key space across multiple Redis or Memcached processes. Redis et al. also have the big advantage that the cache survives an application deploy. Again, not everyone needs this, but to me the main benefit of a remote cache is consistently low latency, versus an in-memory cache whose latency spikes after every deploy while the cache refills.
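Client-side sharding across processes is just the same hashing trick applied to endpoints. A minimal sketch, with made-up endpoint URLs and no real Redis client involved:

```typescript
// Sketch: route each key to one of several cache endpoints by hashing
// the key, spreading the key space across independent processes.
// The endpoint list is a placeholder, not a real deployment.
const endpoints = [
  "redis://cache-0:6379",
  "redis://cache-1:6379",
  "redis://cache-2:6379",
];

function endpointFor(key: string): string {
  // FNV-1a hash; any stable hash gives a deterministic key -> node mapping.
  let h = 0x811c9dc5;
  for (let i = 0; i < key.length; i++) {
    h ^= key.charCodeAt(i);
    h = Math.imul(h, 0x01000193);
  }
  return endpoints[(h >>> 0) % endpoints.length];
}
```

One caveat worth knowing: with plain modulo sharding, adding or removing a node remaps most keys, which is why production setups tend to use consistent hashing or Redis Cluster’s fixed hash slots instead.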