This is why games ship interpreters and script a lot of the gameplay (e.g. in Lua) rather than writing it all in C++. You get fast release builds with a decent framerate and the ability to iterate on much of the “business logic” from within the game itself.
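For context, the embedding side of that is small; a minimal sketch using Lua's C API (the script path and the `update` function are made up, error handling abbreviated):

    // Embedding Lua in a C++ game loop. Link against the Lua library
    // (e.g. -llua); <lua.hpp> wraps the C headers in extern "C".
    #include <lua.hpp>

    int main() {
        lua_State* L = luaL_newstate();
        luaL_openlibs(L);                   // expose the standard Lua libs

        // Load the gameplay script once; designers can edit and reload it
        // without recompiling the C++ side.
        if (luaL_dofile(L, "gameplay.lua") != LUA_OK) {
            // error message is at lua_tostring(L, -1) ...
        }

        // Per frame: call a Lua function, e.g. update(dt)
        lua_getglobal(L, "update");
        lua_pushnumber(L, 1.0 / 60.0);      // dt
        if (lua_pcall(L, 1, 0, 0) != LUA_OK) {
            lua_pop(L, 1);                  // pop the error message
        }

        lua_close(L);
    }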
That's only true for badly written game code though ;)
Even with C++ and heavy stdlib usage it's possible to have debug builds that are only around 3-5x slower than release builds. And you need that wiggle room anyway to account for the lower-end CPUs your game had better be able to run on.
I've never done it, but I just find it hard to believe the slowdown would be that large. Most of the computation is on the GPU, and you can set your build up so that you link against libraries built at different optimization levels, and those libraries are likely the ones doing most of the heavy lifting. You're not rebuilding all of the underlying libs because you're not typically debugging them.
EDIT:
If you're targeting a console, why would you not debug using higher-end hardware? If anything, it's an argument in favor of running on an interpreter on a very high-end computer for the majority of development.
Yeah, around 3x slower is what I saw when I was still working on reasonably big PC games (around a million lines of 'orthodox C++' code) until around 2020, with MSVC and without stdlib usage.
This was building the entire game code with the same build options though; we didn't bother building parts of the code in release mode and parts in debug mode, since debug performance was fast enough for the game to still run in real time. We also didn't use a big integrated engine like UE, only some specialized middleware libraries that were integrated into the build as source code.
We did spend quite a bit of effort to keep both build time and debug performance under control. The few cases where debug performance became unacceptable were usually caused by 'accidentally exponential' problems.
> Most of the computation is on GPU
Not in our case, the GPU was only used for rendering. All game logic was running on the CPU. IIRC the biggest chunk was pathfinding and visibility/collision/hit checks (e.g. all sorts of NxM situations in the lower levels of the mob AI).
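A hypothetical sketch of the kind of NxM loop meant here (types and names invented for illustration): every mob tests every potential target, and the small math helpers that inline away in release become real function calls in debug:

    #include <cmath>
    #include <vector>

    struct Vec3 { float x, y, z; };

    // Inlines away in a release build; a real call per iteration in debug.
    inline float dist_sq(const Vec3& a, const Vec3& b) {
        const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return dx * dx + dy * dy + dz * dz;
    }

    struct Mob { Vec3 pos; int visible_targets = 0; };

    // The NxM pattern: with thousands of mobs and targets this body runs
    // millions of times per frame, so per-call overhead adds up quickly.
    void update_visibility(std::vector<Mob>& mobs,
                           const std::vector<Vec3>& targets,
                           float view_range) {
        const float range_sq = view_range * view_range;
        for (Mob& m : mobs) {                  // N mobs ...
            m.visible_targets = 0;
            for (const Vec3& t : targets) {    // ... times M targets
                if (dist_sq(m.pos, t) <= range_sq) {
                    ++m.visible_targets;
                }
            }
        }
    }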
Ah okay, then that makes sense. It really depends on your build system. My natural inclination would be to not have a pathfinding system in DEBUG unless I was actively working on it. But it's not always easy to set things up that way.
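One rough sketch of that inclination (names and the preprocessor switch are hypothetical): compile the real pathfinder only into builds that need it and substitute a trivial stub elsewhere:

    #include <vector>

    struct Vec3 { float x, y, z; };

    // Real A*, defined in its own translation unit / library.
    std::vector<Vec3> find_path_full(const Vec3& from, const Vec3& to);

    std::vector<Vec3> find_path(const Vec3& from, const Vec3& to) {
    #if defined(DEBUG_STUB_PATHFINDING)
        // Debug stub: walk in a straight line. Wrong, but cheap and predictable.
        return {from, to};
    #else
        return find_path_full(from, to);
    #endif
    }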
The slowdown can be enormous if you use SIMD. I believe MSVC emits a write to memory after every single SIMD op in debug, which makes the code incredibly slow (more than 10x).
If you have any SIMD kernels you will suffer, and will likely switch to release builds, using manual markings to disable optimizations per function or file.
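A sketch of both halves of that (function names invented): the intrinsics kernel below is the kind of code that collapses in MSVC debug builds, and `#pragma optimize` is MSVC's per-function marking; treat the exact slowdown as workload-dependent:

    #include <immintrin.h>
    #include <cstddef>

    // In an MSVC debug build each intrinsic's result is spilled to memory
    // instead of staying in a register, so this can run 10x+ slower.
    void scale_floats(float* data, std::size_t count, float factor) {
        const __m128 f = _mm_set1_ps(factor);
        std::size_t i = 0;
        for (; i + 4 <= count; i += 4) {
            const __m128 v = _mm_loadu_ps(data + i);
            _mm_storeu_ps(data + i, _mm_mul_ps(v, f));
        }
        for (; i < count; ++i) {   // scalar tail
            data[i] *= factor;
        }
    }

    // The workaround: build in release overall, then locally switch off
    // optimization for whatever you're currently debugging.
    #ifdef _MSC_VER
    #pragma optimize("", off)      // functions below compile unoptimized
    #endif
    void debug_me() {
        // locals aren't optimized away; stepping behaves like a debug build
    }
    #ifdef _MSC_VER
    #pragma optimize("", on)       // restore the command-line settings
    #endif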
If you target consoles, you’re already on the lowest end hardware you’ll run on. It’s extremely rare to have much (if any) headroom for graphically competitive games. 20% is a stretch, 3x is unthinkable.
Then I would do most of the development work that doesn't directly touch console APIs against a PC version of the game, developing and debugging on a high-end PC. Some compilers also have debug options which apply optimizations while keeping the code debuggable (e.g. on MSVC: https://learn.microsoft.com/en-us/visualstudio/debugger/how-...) - I don't know about Sony's toolchain though. Another option might be to optimize only the most performance-sensitive parts while keeping 90% of the code in debug mode. In any case, I would try everything before giving up debuggable builds.
It doesn’t matter. You will hit (for example) rendering bugs that only happen on a specific version of a console with sufficient frequency that you’ll rarely use debug builds.
Debug builds are used sometimes, just not most of the time.