I'd expect the same hash map implementations to work well in both scenarios (many hash maps with few entries and one hash map with many entries). I'd try dense_hash_map, and if you're feeling experimental, this new thing.
A more significant factor may be sizeof(pair<Key, Value>). std::unordered_map has a separate allocation for each entry, which is quite horrible with small values in terms of memory usage and CPU cache effectiveness. With larger values, keep in mind that because of the load factor of these hashes, you end up with more buckets than you have entries. It may be better to have that pointer indirection (either through std::unordered_map or maybe dense_hash_map<Key, std::unique_ptr<Value>>) than to have each empty bucket cost sizeof(pair<Key, Value>). Or you could use something like Rust's ordermap, which handles that indirection with a 32-bit index (i.e., half the size of a pointer on 64-bit platforms) into an array, making it more memory-efficient still. Having the entries be adjacent in that array also keeps iteration cache-friendly.
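To make the indirection idea concrete, here's a rough sketch of that layout in C++: the bucket side only ever pays for a 32-bit index per entry, while the key/value pairs sit contiguously in a vector. I'm using std::unordered_map as a stand-in bucket array (which duplicates the key; as I understand it, the real ordermap keeps the key only in the array and stores just a hash plus index per bucket), and all the names are made up for illustration:

    #include <cstdint>
    #include <unordered_map>
    #include <utility>
    #include <vector>

    // Rough sketch of the index-indirection layout: buckets hold a 32-bit
    // index, the key/value pairs live contiguously in a vector.
    template <class Key, class Value>
    class IndexedMap {
     public:
      Value* find(const Key& key) {
        auto it = index_.find(key);
        return it == index_.end() ? nullptr : &entries_[it->second].second;
      }

      void insert(Key key, Value value) {
        auto it = index_.find(key);
        if (it != index_.end()) {
          entries_[it->second].second = std::move(value);  // overwrite in place
          return;
        }
        index_.emplace(key, static_cast<std::uint32_t>(entries_.size()));
        entries_.emplace_back(std::move(key), std::move(value));
      }

     private:
      std::unordered_map<Key, std::uint32_t> index_;  // stand-in bucket array
      std::vector<std::pair<Key, Value>> entries_;    // contiguous payloads
    };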
Another factor might be the cost of hashing each key. If it's cheap (ints or the like), your ideal hash map class probably doesn't store the hash values. If it's expensive (long strings, for example) and you have the memory to burn, your ideal hash map class probably does store them, so that collisions cost less on lookup and so that resizing on insert costs less.
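Roughly what storing the hash buys you, as a sketch of the entry layout rather than a full table (names made up): lookups can reject a slot with an integer compare before touching the expensive key, and a resize never re-hashes the strings:

    #include <cstddef>
    #include <functional>
    #include <string>
    #include <utility>

    // Entry layout when the hash is cached alongside the key/value.
    struct Entry {
      std::size_t hash;  // computed once at insert time
      std::string key;
      int value;
    };

    inline Entry make_entry(std::string key, int value) {
      std::size_t h = std::hash<std::string>{}(key);  // hash the string once
      return Entry{h, std::move(key), value};
    }

    inline bool slot_matches(const Entry& e, std::size_t h, const std::string& k) {
      // Cheap integer compare rejects most non-matches before the
      // (potentially long) string comparison runs.
      return e.hash == h && e.key == k;
    }

    inline std::size_t bucket_after_resize(const Entry& e, std::size_t new_bucket_count) {
      // Resizing reuses the cached hash instead of re-hashing every key.
      return e.hash % new_bucket_count;
    }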
As an alternative, if you're just holding ints or something, you might be better off with a sorted std::vector<Entry>.
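Something like this, where Entry and the helpers are made-up names: keep the vector sorted by key, find things with std::lower_bound, and accept an O(n) shift on insert, which is cheap while the map stays small and everything sits in one cache-friendly allocation:

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    struct Entry {
      std::uint32_t key;
      std::uint32_t value;
    };

    // Binary search over the sorted vector.
    const Entry* find(const std::vector<Entry>& entries, std::uint32_t key) {
      auto it = std::lower_bound(
          entries.begin(), entries.end(), key,
          [](const Entry& e, std::uint32_t k) { return e.key < k; });
      return (it != entries.end() && it->key == key) ? &*it : nullptr;
    }

    void insert_or_assign(std::vector<Entry>& entries, Entry e) {
      auto it = std::lower_bound(
          entries.begin(), entries.end(), e.key,
          [](const Entry& x, std::uint32_t k) { return x.key < k; });
      if (it != entries.end() && it->key == e.key) {
        it->value = e.value;
      } else {
        entries.insert(it, e);  // O(n) shift, fine while the map stays small
      }
    }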