I think your analysis just ignored the dummy qubits they added to the algorithm to fix the problem you mentioned. So the chance of being ON can be more than 96%; they said it can be high, e.g. 99.999999...%. What is the effect of that on the upper bound you wrote?
In the case of a satisfiable instance, it just increases the chance of a false negative.
When a solution exists, you want the check to be extremely strict: 0% let through rather than $0.99999999^{n^6}$.
When there's no solution, you want relaxed checks that allow non-satisfying solutions to get through after not too many retries (so the algorithm can terminate).
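To put a number on that asymmetry, here is a minimal sketch (assuming, as in the discussion above, that a run is accepted only when all $n^6$ qubits read ON independently, each with probability $p$; the function name and the sample values of $n$ are mine):

```python
import math

def pass_probability(p: float, n: int) -> float:
    """Chance that all n**6 independent per-qubit checks read ON in one run."""
    # p ** (n ** 6) underflows for large n, so work in log space.
    return math.exp(n**6 * math.log(p))

# Even p = 0.99999999 decays quickly once n**6 dominates 1/(1 - p):
for n in (5, 10, 20, 50):
    print(n, pass_probability(0.99999999, n))
# 5  -> ~0.99984
# 10 -> ~0.990
# 20 -> ~0.527
# 50 -> ~1.7e-68
```

So a per-qubit probability of 99.999999% still drives the per-run acceptance probability to essentially zero well before $n$ reaches realistic problem sizes.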
I think you missed an important point in your analysis: the probability of $a_x$ increases every iteration. Assume, as an example, $p_1(a_x)=0.97$, $p_2(a_x)=0.98$, and so on until $p_r(a_x) \approx 1$. Then the expected number of iterations $E$ until the algorithm can start is
$$E=\frac{1}{p_1}+\frac{1}{p_1 p_2}+\cdots+\frac{1}{p_1 p_2 \cdots p_r}.$$
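As a quick sanity check on that series, a small sketch (the function name is mine, and I truncate the example sequence at $p_4 = 1$ just to keep $r$ small):

```python
def expected_iterations(ps):
    """E = 1/p_1 + 1/(p_1 p_2) + ... + 1/(p_1 p_2 ... p_r), the series above."""
    total, prod = 0.0, 1.0
    for p in ps:
        prod *= p
        total += 1.0 / prod
    return total

# The example sequence from the comment, truncated once it reaches ~1:
print(expected_iterations([0.97, 0.98, 0.99, 1.0]))  # ~4.2 iterations
```

With a high starting probability and a short sequence, $E$ stays tiny, which is exactly the favourable case described next.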
I think the main trick here will be the rate of increase of $p(a_x)$. If $p_1(a_x)$ starts small and the rate of increase is really slow, then $E$ will be exponential. But that will not be the case if $p(a_x)$ starts very high, e.g. 0.99, and the rate of increase is "OK"; then $E$ will be polynomial. Someone needs to check the rate of increase of $p(a_x)$. That will be the killing point.
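To illustrate the two regimes, here is a sketch comparing two hypothetical schedules for $p_i(a_x)$ plugged into the same series (both schedules are my own assumptions for illustration, not taken from the paper):

```python
def expected_iterations(ps):
    """E = 1/p_1 + 1/(p_1 p_2) + ... + 1/(p_1 p_2 ... p_r)."""
    total, prod = 0.0, 1.0
    for p in ps:
        prod *= p
        total += 1.0 / prod
    return total

def schedule(p_start, decay, r):
    """Hypothetical p_i approaching 1 geometrically: 1 - (1 - p_start) * decay**i."""
    return [1 - (1 - p_start) * decay**i for i in range(r)]

r = 200
fast = schedule(0.99, 0.5, r)    # starts high, gap to 1 halves each step
slow = schedule(0.50, 0.999, r)  # starts low, gap to 1 shrinks very slowly
print(expected_iterations(fast))  # ~204: E grows roughly linearly in r
print(expected_iterations(slow))  # explodes (order 1e52 here): exponential in r
```

The series itself doesn't decide anything; the schedule of $p_i(a_x)$ does, which is why pinning down that rate of increase is the decisive check.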