An atomic increment/decrement takes so little time as to make this irrelevant. If you're in such a tight loop that you care about a single increment when calling a function (to pass a parameter in), you should have inlined that function and preallocated the memory you're dealing with.
I'm talking about general use of smart pointers, where passing the smart pointer by value already means a function call plus a copy, and throwing an atomic increment in on top of that is trivial by comparison.
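To make that concrete, a minimal C++ sketch (Enemy and the function names are made up; std::shared_ptr stands in for whatever smart pointer you use):

    #include <memory>

    struct Enemy { int hp = 100; };

    // Pass by value: the copy constructor does one atomic increment and
    // the destructor one decrement on return -- noise next to the cost
    // of the call itself.
    void damage_by_value(std::shared_ptr<Enemy> e) { e->hp -= 10; }

    // Pass by const reference: no copy, no atomic traffic at all. The
    // usual choice when the callee doesn't need to keep the pointer.
    void damage_by_ref(const std::shared_ptr<Enemy>& e) { e->hp -= 10; }

    int main() {
        auto e = std::make_shared<Enemy>();
        damage_by_value(e);
        damage_by_ref(e);
    }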
>whichever module happens to drop the last reference to an object which is the last gateway to a large graph of objects
When writing games, I don't think I ever had a "large graph of objects" get dropped at some random time. Typically when you drop a "large graph" it's because you're clearing an entire game level, for instance. Glitches aren't as important when the user is just watching a progress bar.
And you can still apply "ownership semantics" to graphs like that: the world graph logically "owns" the objects, and when it releases an object, it does so by placing it on a "to be cleared" list instead of just nulling the reference.
Then in the rare case where something is still holding a reference to the object, it won't just crash when it tries to do something with it. Instead, when that straggler finally releases its reference, the release could trigger a surprise extra deallocation chain, as you've suggested.
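Roughly what I have in mind, as a C++ sketch -- WorldGraph, add, and release are names I made up, and std::shared_ptr is just the stand-in handle:

    #include <memory>
    #include <unordered_map>
    #include <vector>

    struct GameObject { /* ... */ };

    class WorldGraph {
        std::unordered_map<int, std::shared_ptr<GameObject>> objects_;
        std::vector<std::shared_ptr<GameObject>> to_be_cleared_;

    public:
        void add(int id, std::shared_ptr<GameObject> obj) {
            objects_[id] = std::move(obj);
        }

        // Instead of nulling the reference (which could destroy the
        // object immediately, plus anything reachable only through it),
        // park it on the to-be-cleared list. A straggler still holding
        // a reference keeps a valid object until it lets go.
        void release(int id) {
            auto it = objects_.find(id);
            if (it == objects_.end()) return;
            to_be_cleared_.push_back(std::move(it->second));
            objects_.erase(it);
        }
    };

    int main() {
        WorldGraph world;
        world.add(1, std::make_shared<GameObject>());
        world.release(1);  // parked, not destroyed
    }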
If that's ever determined to be an issue (via profiling!), you can ensure other objects hold weak references to each other (which is safer anyway), in which case only the main graph is ever in danger of releasing objects -- and it can queue up the releases and time-box how many it processes per frame.
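The time-boxing could be as simple as this sketch (flush_releases and the budget value are mine; the premise is that everything outside the graph holds weak references, so these pops are the only place destruction actually fires):

    #include <chrono>
    #include <memory>
    #include <vector>

    struct GameObject { /* ... */ };

    // Hypothetical helper: destroy pending releases until the frame
    // budget is spent. Each pop_back() that drops the last strong
    // reference runs the real destructor (and whatever chain hangs off
    // it); anything left over waits for the next frame.
    void flush_releases(std::vector<std::shared_ptr<GameObject>>& pending,
                        std::chrono::microseconds budget)
    {
        using clock = std::chrono::steady_clock;
        const auto deadline = clock::now() + budget;
        while (!pending.empty() && clock::now() < deadline) {
            pending.pop_back();
        }
    }

    int main() {
        std::vector<std::shared_ptr<GameObject>> pending;
        pending.push_back(std::make_shared<GameObject>());
        flush_releases(pending, std::chrono::microseconds(500));
    }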
Honestly, having objects reference each other directly isn't typically the best answer anyway; object listeners, broadcast channels, and the like are much better. There you define the semantics of a "listener" to always use a weak reference, and every time you broadcast on a channel you cull any dead listeners.
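A bare-bones version of that pattern with std::weak_ptr (Channel, Listener, subscribe, and broadcast are all invented names):

    #include <memory>
    #include <vector>

    struct Listener {
        virtual ~Listener() = default;
        virtual void on_event(int payload) = 0;
    };

    class Channel {
        std::vector<std::weak_ptr<Listener>> listeners_;

    public:
        void subscribe(const std::shared_ptr<Listener>& l) {
            listeners_.push_back(l);  // weak: the channel keeps no one alive
        }

        // Deliver to live listeners and cull dead ones in the same pass.
        void broadcast(int payload) {
            auto it = listeners_.begin();
            while (it != listeners_.end()) {
                if (auto l = it->lock()) {
                    l->on_event(payload);
                    ++it;
                } else {
                    it = listeners_.erase(it);  // listener died; drop the slot
                }
            }
        }
    };

    struct Player : Listener {
        void on_event(int) override { /* react */ }
    };

    int main() {
        Channel ch;
        auto p = std::make_shared<Player>();
        ch.subscribe(p);
        ch.broadcast(1);  // p hears it
        p.reset();        // p dies; no explicit unsubscribe needed
        ch.broadcast(2);  // dead slot gets culled, nothing crashes
    }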
Aside from all of that, if you're using object pools, you'd need to deallocate thousands, maybe tens of thousands, of objects in one go for it to take enough time to glitch a frame -- which means in typical game usage you pretty much never see that. A huge streaming world might hit those thresholds, but a huge streaming world has a whole lot of interesting challenges to overcome -- and it would likely thrash a GC-based system pretty badly.
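For reference, this is why pool "deallocation" is so cheap -- a minimal pool sketch (ParticlePool is an invented name) where releasing an object is a single push onto a free list:

    #include <cstddef>
    #include <vector>

    struct Particle { float x, y, vx, vy; };

    // Minimal fixed-capacity pool: "freeing" an object is one push onto
    // a free list, so releasing even thousands per frame is a pile of
    // pointer-sized writes, not thousands of trips to the allocator.
    class ParticlePool {
        std::vector<Particle> storage_;
        std::vector<std::size_t> free_;

    public:
        explicit ParticlePool(std::size_t n) : storage_(n) {
            free_.reserve(n);
            for (std::size_t i = n; i > 0; --i) free_.push_back(i - 1);
        }

        Particle* acquire() {
            if (free_.empty()) return nullptr;  // pool exhausted
            std::size_t i = free_.back();
            free_.pop_back();
            return &storage_[i];
        }

        void release(Particle* p) {
            free_.push_back(static_cast<std::size_t>(p - storage_.data()));
        }
    };

    int main() {
        ParticlePool pool(1024);
        if (Particle* p = pool.acquire()) pool.release(p);
    }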