Games run in a tight loop; they don’t (typically) yield execution. Without preemption, a game will use 100% of all the resources all the time, if given the chance.
Games absolutely yield. Even if the rendering thread tries to go 100%, you'll likely still be sleeping in the GPU driver as it waits for back buffers to free up.
Even non-rendering systems usually run at game tick rates, since running them full-tilt can starve adjacent cores through false sharing, cache misses, bus bandwidth limits and the like.
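In practice that's usually just a sleep in the tick loop, something like this (a minimal sketch; update_physics and the 60 Hz rate are placeholders, not from any particular engine):

```cpp
#include <atomic>
#include <chrono>
#include <thread>

// Placeholder for the real per-tick work.
void update_physics(double /*dt*/) { /* ... */ }

void physics_loop(std::atomic<bool>& running) {
    using clock = std::chrono::steady_clock;
    const auto tick = std::chrono::duration<double>(1.0 / 60.0);  // 60 Hz tick rate

    auto next = clock::now();
    while (running.load(std::memory_order_relaxed)) {
        update_physics(tick.count());
        next += std::chrono::duration_cast<clock::duration>(tick);
        // Sleep until the next tick instead of spinning, so the thread
        // actually yields the core between updates.
        std::this_thread::sleep_until(next);
    }
}
```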
I can't think of a single title I worked on that did what you describe. Embedded stuff, for sure, but that's a whole different class that's likely not even running a kernel.
Maybe on DOS. Doing any kind of I/O usually implies “yielding”, which most interactive programs do very often. A task that exhausts its quantum without doing any I/O gets its priority lowered in a classic multilevel feedback queue scheduler, but that’s not typical behavior for interactive programs.
Pinning on a core like this is done in areas like HPC and HFT. In general you want good assurance that your hardware matches your expectations, plus some kernel tuning.
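For reference, on Linux the pinning itself boils down to something like this (glibc's pthread_setaffinity_np; the helper name and the choice of core are just for illustration, and those setups usually pair it with kernel-side isolation like the isolcpus boot parameter):

```cpp
#ifndef _GNU_SOURCE
#define _GNU_SOURCE  // needed for pthread_setaffinity_np on glibc
#endif
#include <pthread.h>
#include <sched.h>

// Restrict the calling thread to a single core (Linux/glibc only).
// Returns 0 on success, an errno value otherwise.
int pin_current_thread(int core) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    return pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}
```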
I haven't heard of it being done with PC games. I doubt the environment would be predictable enough. On consoles tho..?
We absolutely pinned on consoles. Anywhere you have fixed, known hardware, tuning for that specific hardware usually nets you some decent benefits.
From what I recall we mostly did it for predictability, so that things that might run long wouldn't interrupt deadline-sensitive work (audio, physics, etc.).
Thinking about it, the threads in a game that normally need more CPU time are the ones doing lots of syscalls. You'd have to use a fair bit of async and atomics to split the work into compute and chatting with the kernel. Might as well figure out how to do it 'right' and use 2+ threads so it can scale. Side note: the compute-heavy, low-syscall-frequency stuff like terrain gen normally belongs in the pool of background threads.
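For the background-pool side of that, here's a rough sketch of what farming terrain gen out to worker threads might look like (a generic work queue, not from any particular engine; gen_terrain_chunk is a made-up placeholder):

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Generic background worker pool: game threads push jobs
// (e.g. terrain chunk generation) and the workers run them off-thread.
class WorkerPool {
public:
    explicit WorkerPool(unsigned n) {
        for (unsigned i = 0; i < n; ++i)
            workers_.emplace_back([this] { run(); });
    }

    ~WorkerPool() {
        {
            std::lock_guard<std::mutex> lock(m_);
            done_ = true;
        }
        cv_.notify_all();
        for (auto& w : workers_) w.join();
    }

    void submit(std::function<void()> job) {
        {
            std::lock_guard<std::mutex> lock(m_);
            jobs_.push(std::move(job));
        }
        cv_.notify_one();
    }

private:
    void run() {
        for (;;) {
            std::function<void()> job;
            {
                std::unique_lock<std::mutex> lock(m_);
                cv_.wait(lock, [this] { return done_ || !jobs_.empty(); });
                if (done_ && jobs_.empty()) return;
                job = std::move(jobs_.front());
                jobs_.pop();
            }
            job();  // compute-heavy, low-syscall work runs here
        }
    }

    std::vector<std::thread> workers_;
    std::queue<std::function<void()>> jobs_;
    std::mutex m_;
    std::condition_variable cv_;
    bool done_ = false;
};

// Usage (gen_terrain_chunk is hypothetical):
//   pool.submit([=] { gen_terrain_chunk(x, z); });
```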