That is a pretty theoretical scenario; at least I usually have no idea in advance what the user input will be. And garbage collectors pausing program execution is not a fundamental problem: there are garbage collectors that run entirely in parallel with normal program execution. Beyond that, garbage collection itself seems not to be the main source of those pauses, but rather heap compaction, which you also lose if you use automatic reference counting (or have to do manually).
I didn't say you had to know the inputs in advance. What's known in advance is when deallocation will occur, i.e. it's deterministic: you know exactly when deallocation will happen.
Heap compaction is another part of GC that results in long, non-deterministic pauses. But the real problem is having fragmentation in the first place, and this is something manual allocation strategies can go a long way toward mitigating.
How can you know when a deallocation will occur? Did the user add this product to a single invoice, so the product object gets deallocated when closing this invoice form, or is there a second invoice still referencing the product? At best you can know when a deallocation might possibly occur, but that is whenever a reference changes or goes away, which is usually not very helpful.
For any sufficiently complex real-world application you will not, in general, know when a deallocation occurs, and that is one of the reasons you decided to use automatic reference counting in the first place. Short-lived objects within a function are non-issues to begin with and are easily handled manually if you want to, but the interesting bits and pieces are objects with unpredictable lifetimes and usage patterns, because they are heavily influenced by the inputs.
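To make that concrete, here's a minimal Swift sketch (Product and Invoice are made-up types, just for illustration): whether closing the first invoice actually frees the product depends entirely on what the user did earlier.

```swift
// Minimal sketch; Product and Invoice are made-up types for illustration.
final class Product {
    let name: String
    init(name: String) { self.name = name }
    deinit { print("Product \(name) deallocated") }
}

final class Invoice {
    var items: [Product] = []
}

func demo(addToSecondInvoice: Bool) {
    var invoiceA: Invoice? = Invoice()
    let invoiceB = Invoice()

    do {
        let widget = Product(name: "widget")
        invoiceA?.items.append(widget)
        if addToSecondInvoice {
            invoiceB.items.append(widget)
        }
    } // the local `widget` reference goes away here

    // "Closing" invoice A releases its reference to the product.
    invoiceA = nil
    // If the product was only on invoice A, its deinit has already run by now.
    // If the user also added it to invoice B, it stays alive until invoiceB
    // itself goes away. Which of the two happens depends on the input.
    print("invoice B still holds \(invoiceB.items.count) item(s)")
}

demo(addToSecondInvoice: false) // deinit fires at `invoiceA = nil`
demo(addToSecondInvoice: true)  // deinit fires only when invoiceB goes away
```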
When will a deallocation occur? When release is called and the reference count drops to zero. This is deterministic, unlike GC, which performs the deallocation whenever it feels like it.
I think you've confused the determinism of a program with the inputs to that program. Any retain/release program is deterministic because it always behaves in the same manner: given any possible input, I can tell you exactly when and on which lines of code the deallocations will occur. You can't know that with GC, especially when other libraries and threads enter the equation.
Which deallocations occur, and in which order, depends on the input to the program, but we still know in advance all the possible lines of code where specific deallocations can occur, and we have complete control over that. With GC we wouldn't know that, and we couldn't control it.
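A rough Swift sketch of what I mean (Session and the event strings are invented for illustration): the input decides which deallocations happen and in which order, but every line where one *can* happen is right there in the source.

```swift
// Sketch of the claim above; Session and the event strings are invented.
final class Session {
    let id: Int
    init(id: Int) { self.id = id }
    deinit { print("Session \(id) deallocated") }
}

var sessions: [Int: Session] = [:]

func handle(event: String, id: Int) {
    switch event {
    case "open":
        sessions[id] = Session(id: id)  // possible release site #1: replacing an
                                        // existing entry drops the old reference
    case "close":
        sessions[id] = nil              // possible release site #2: removing the
                                        // entry drops the last reference, so the
                                        // deinit runs right here
    default:
        break
    }
}

// Which sessions go away, and in which order, depends on the input stream,
// but the two assignments above are the only places it can happen.
handle(event: "open",  id: 1)
handle(event: "open",  id: 2)
handle(event: "close", id: 2)  // Session 2's deinit fires inside this call
handle(event: "close", id: 1)  // Session 1's deinit fires inside this call
```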
Knowing when deallocations can occur and having control over that is the reason to choose ARC over GC in the first place. Not the other way around. With respect to manual retain/release, ARC doesn't take anything away: I can still retain an extra count of a large object to prevent its untimely release and share that object via ARC at the same time.
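For example, something along these lines (LargeBuffer and Pipeline are hypothetical): the pipeline keeps its own strong reference, so the buffer can be handed out freely via ARC but can't be released before we decide to let it go.

```swift
// Hypothetical LargeBuffer/Pipeline, just to sketch the extra-retain idea.
final class LargeBuffer {
    let bytes: [UInt8]
    init(size: Int) { bytes = [UInt8](repeating: 0, count: size) }
    deinit { print("LargeBuffer deallocated") }
}

final class Pipeline {
    // This stored reference is the "extra count": while the pipeline holds it,
    // consumers coming and going can never trigger the deallocation early.
    private var pinned: LargeBuffer?

    func start() -> LargeBuffer {
        let buffer = LargeBuffer(size: 64 * 1024 * 1024)
        pinned = buffer      // the extra strong reference, under our control
        return buffer        // shared with callers via ordinary ARC
    }

    func finish() {
        pinned = nil         // the release site we chose; if no consumer still
                             // holds the buffer, its deinit runs on this line
    }
}
```

A consumer can hold the returned buffer as long as it likes; the deinit can only run once finish() has dropped the pinned reference and the last consumer has let go.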
Two things. First, most GCs don't actually deallocate anything, but that's a minor nitpick. Second, and more important, you are now describing a system where everything is garbage collected. This needn't be the case. In a language like Go, you have some control over what gets stack-allocated and what ends up on the GC-managed heap. When that choice can be made, you really can tell when given objects are deallocated. Of course, since the stack is a fixed-size construct, I guess you don't really deallocate those things either...
It's also worth noting that GCs rarely do anything because they feel like it; they run because you are allocating past a certain limit. Knowing the size of your objects, and given that the allocator can tell you the current size of allocated objects, you should be able to determine whether a GC cycle is likely to occur in the next lines of code. This determination becomes harder when threads enter the picture, unless you're using a GC scheme like Erlang's, which has one GC per process (lightweight thread). Some GCs also allow you to pause and resume them for when you really want to avoid GC cycles; this can of course cause your program to crash if you allocate past your limit.
> At best you can know when a deallocation might possibly occur, but that is whenever a reference changes or goes away, which is usually not very helpful.
After a while working with refcounting, I found myself writing code whose natural flow meant I was actually pretty sure, at least for deallocations large enough to care about. I don't even think about it any more, really; I just continue to enjoy my objects going away when I expect them to.