How does the domination of a few big game engines (Unity/Unreal) change this? I get the impression that they handle more and more of the actual compute-intensive stuff, and 'nobody' writes their own engines anymore?
So then the economies of scale change things a bit, and maybe they can make abstractions which use many cores under the hood, hiding the complexity?
Yes, certain problems are very typical for Unreal and can be seen in many games using it, especially stutters from on-demand shader compilation and "traversal stutter" when entering new areas (so mostly problems with on-demand texture loading). These problems can be fixed, but there's no magic bullet, it simply requires a lot of work, so often this is relegated to later patches (if even that).
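To make the "can be fixed, but it's a lot of work" part concrete: the usual fix is to enumerate the shader/material permutations a level can use and compile them behind a loading screen instead of on first draw. A minimal C++ sketch with a made-up renderer API (not Unreal's actual one):

```cpp
// Minimal sketch of on-demand vs. up-front shader compilation.
// The "renderer" here is invented for illustration, not any real engine API.
#include <chrono>
#include <cstdio>
#include <thread>
#include <unordered_map>
#include <vector>

using ShaderKey = int;  // stands in for a material/pass/vertex-format permutation

struct CompiledShader { /* driver-specific blob in a real engine */ };

std::unordered_map<ShaderKey, CompiledShader> g_cache;

CompiledShader CompileShader(ShaderKey key) {
    // Simulate an expensive driver compile (tens of ms = a visible hitch).
    std::this_thread::sleep_for(std::chrono::milliseconds(50));
    std::printf("compiled shader %d\n", key);
    return {};
}

// On-demand path: the first draw with an uncached shader stalls the frame.
const CompiledShader& GetShaderForDraw(ShaderKey key) {
    auto it = g_cache.find(key);
    if (it == g_cache.end())
        it = g_cache.emplace(key, CompileShader(key)).first;  // <- the stutter
    return it->second;
}

// Loading-screen path: pay the cost once, before gameplay starts.
void PrecompileShaders(const std::vector<ShaderKey>& allPermutationsInLevel) {
    for (ShaderKey key : allPermutationsInLevel)
        GetShaderForDraw(key);  // warms the cache so later draws never compile
}

int main() {
    PrecompileShaders({1, 2, 3});  // done behind a loading screen
    GetShaderForDraw(2);           // now a cheap cache hit during gameplay
}
```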
But there are certain games which, on top of that, are heavily bound by single-thread performance even though they use Unreal; probably the most prominent recent example is Star Wars Jedi: Survivor, which isn't fixed to this day. You can watch Digital Foundry's video for details: https://www.youtube.com/watch?v=uI6eAVvvmg0
Why exactly that is, no one apart from the developers themselves can say.
> Yes, certain problems are very typical for Unreal and can be seen in many games using it, especially stutters from on-demand shader compilation and "traversal stutter" when entering new areas (so mostly problems with on-demand texture loading).
This problem is so obvious and widespread now that I wish there were a toggle/env var letting me opt into longer loading screens instead of in-game stutters and "magically appearing objects", just like we had in the good old days.
Some games use only a small percentage of my available VRAM and/or RAM, yet they stream in levels anyway, even though it isn't needed for any resource-usage reason.
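Something like this is what I'm imagining, a rough C++ sketch (the env var name and loading functions are made up, nothing engine-specific):

```cpp
// Sketch of an opt-in "preload everything" toggle instead of streaming.
// GAME_PRELOAD_ALL and the loading functions are placeholders for illustration.
#include <cstdio>
#include <cstdlib>
#include <string>
#include <vector>

std::vector<std::string> AllAssetsForLevel(const std::string& level) {
    return {level + "/terrain", level + "/props", level + "/npcs"};  // placeholder list
}

void LoadAssetBlocking(const std::string& asset) {
    std::printf("loaded %s\n", asset.c_str());  // real code would hit disk here
}

void QueueAssetForStreaming(const std::string& asset) {
    std::printf("queued %s for streaming\n", asset.c_str());
}

void LoadLevel(const std::string& level) {
    // Opt-in: GAME_PRELOAD_ALL=1 trades a longer loading screen for no
    // traversal stutter, assuming the assets actually fit in RAM/VRAM.
    const char* preload = std::getenv("GAME_PRELOAD_ALL");
    const bool preloadAll = preload && std::string(preload) == "1";

    for (const auto& asset : AllAssetsForLevel(level)) {
        if (preloadAll)
            LoadAssetBlocking(asset);        // all paid for behind the loading screen
        else
            QueueAssetForStreaming(asset);   // default: load on demand during play
    }
}

int main() { LoadLevel("level_01"); }
```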
I don't think the problem is that easy to solve in general. For a lot of modern games, there are only two amounts of total memory that would really matter: enough to run it at all, and enough to hold all game assets uncompressed. The former is usually around 4-16 GB depending on settings and the latter is often over 200 GB.
Very few gamers have that much RAM, none have that much VRAM. Many assets also aren't spatially correlated or indexed, so even though a whole "level" might be discrete enough to load specifically, the other assets that might be needed could still encompass nearly everything in the game.
For these games, amounts of memory between those two thresholds aren't especially beneficial. They'd still require asset streaming; they'd just be able to hold more assets at once. That sounds better, and in some cases it is, but the issue really boils down to knowing which assets are needed and having them loaded before the player sees them. That's a caching and prediction problem much more than a memory-size problem.
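To illustrate why it's a prediction problem: a bigger memory budget only changes the cache size below, while whether you stutter depends on the guess about which cells to prefetch. A rough C++ sketch with made-up names, not any engine's real API:

```cpp
// More memory just raises the cache budget; the hard part is the prediction.
#include <cstdio>
#include <deque>
#include <set>
#include <string>
#include <vector>

struct Vec2 { float x, y; };

// Pretend each world cell maps to a bundle of assets.
std::string AssetBundleForCell(int cx, int cy) {
    return "cell_" + std::to_string(cx) + "_" + std::to_string(cy);
}

class StreamingCache {
public:
    explicit StreamingCache(size_t budget) : budget_(budget) {}

    void Prefetch(const std::string& bundle) {
        if (resident_.count(bundle)) return;          // already loaded
        if (loadOrder_.size() >= budget_) {           // evict the oldest loaded bundle
            resident_.erase(loadOrder_.front());
            loadOrder_.pop_front();
        }
        std::printf("loading %s\n", bundle.c_str());  // real code: async disk I/O
        resident_.insert(bundle);
        loadOrder_.push_back(bundle);
    }

private:
    size_t budget_;                   // memory budget in "bundles"; more RAM raises this
    std::set<std::string> resident_;
    std::deque<std::string> loadOrder_;
};

// The actual hard problem: guess where the player will be a few seconds from now.
void PredictAndPrefetch(StreamingCache& cache, Vec2 pos, Vec2 vel, float lookaheadSec) {
    const float cellSize = 64.0f;
    Vec2 future{pos.x + vel.x * lookaheadSec, pos.y + vel.y * lookaheadSec};
    int cx = static_cast<int>(future.x / cellSize);
    int cy = static_cast<int>(future.y / cellSize);
    for (int dx = -1; dx <= 1; ++dx)                  // prefetch a ring around the prediction
        for (int dy = -1; dy <= 1; ++dy)
            cache.Prefetch(AssetBundleForCell(cx + dx, cy + dy));
}

int main() {
    StreamingCache cache(16);
    PredictAndPrefetch(cache, {100.f, 100.f}, {10.f, 0.f}, 3.0f);
}
```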
> How does the domination of a few big game engines (Unity/Unreal) change this? I get the impression that they handle more and more of the actual compute-intensive stuff, and 'nobody' writes their own engines anymore?
You still have to know what you're doing. Cities: Skylines 2 is a good example: the first installment had awful performance with bigger cities, and it wasn't very good at parallelizing that work.
For the second game they seem to have gone all in on Unity ECS, which changes your entire architecture (especially if you adopt it wholesale, as the Cities: Skylines developers did) and is something you have to do explicitly. The second game is a lot better at using all available cores, but it does introduce a lot of complexity compared to the approach they took in the first game.
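Unity's ECS is C#, but the core idea translates. A generic C++ sketch (not Unity's API) of why the data-oriented layout makes it easy to spread work across cores:

```cpp
// Generic ECS-style sketch: components live in flat arrays, and a "system" is a
// loop over those arrays that can be split across threads because each entity's
// update is independent. Not Unity's API, just the underlying pattern.
#include <algorithm>
#include <cstdio>
#include <thread>
#include <vector>

struct Position { float x, y; };
struct Velocity { float x, y; };

// Structure-of-arrays component storage, indexed by entity id.
struct World {
    std::vector<Position> positions;
    std::vector<Velocity> velocities;
};

// A movement "system": a pure data transform with no per-entity virtual calls or
// locks, so chunks of the entity range can be processed on different threads.
void MovementSystem(World& world, float dt, unsigned threadCount) {
    const size_t n = world.positions.size();
    std::vector<std::thread> workers;
    for (unsigned t = 0; t < threadCount; ++t) {
        workers.emplace_back([&, t] {
            for (size_t i = t; i < n; i += threadCount) {   // simple strided split
                world.positions[i].x += world.velocities[i].x * dt;
                world.positions[i].y += world.velocities[i].y * dt;
            }
        });
    }
    for (auto& w : workers) w.join();
}

int main() {
    World world;
    world.positions.resize(100000, {0.f, 0.f});
    world.velocities.resize(100000, {1.f, 0.5f});
    unsigned threads = std::max(1u, std::thread::hardware_concurrency());
    MovementSystem(world, 0.016f, threads);
    std::printf("first entity at (%f, %f)\n", world.positions[0].x, world.positions[0].y);
}
```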
> So then the economies of scale change things a bit, and maybe they can make abstractions which use many cores under the hood, hiding the complexity?