They set the limit at (on my computer) 0.1ms. Also, their implementation is roughly 100x slower than it should be. Should Python also error out on loops that run for more than 1000 iterations? The problem isn't that int parsing is broken; the problem is that web servers aren't validating their inputs.
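To illustrate the "validate your inputs" point: bounding untrusted numeric strings at the boundary is a one-liner check. A minimal sketch (the helper name and the 100-digit cap are my own choices, not from any library) that rejects oversized input before handing it to int(), since str-to-int conversion cost grows quadratically with the digit count under CPython's schoolbook algorithm:

```python
def safe_parse_int(s: str, max_digits: int = 100) -> int:
    """Parse a decimal integer from untrusted input, capping its length.

    str -> int conversion is O(n^2) in the number of digits with
    CPython's classic algorithm, so bound n before calling int().
    """
    s = s.strip()
    body = s[1:] if s[:1] in "+-" else s
    # isdecimal() matches the digit characters int() itself accepts
    if not body or not body.isdecimal():
        raise ValueError("not a decimal integer")
    if len(body) > max_digits:
        raise ValueError(f"too many digits: {len(body)} > {max_digits}")
    return int(s)

print(safe_parse_int("-12345"))
```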
It's different layers. You're in control of the loops, so no. (Guess what, though: recursion depth is limited.) You're not in control of the int() implementation, so that falls on Python.
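For reference, the recursion cap mentioned above is a real, queryable runtime guard; this snippet just demonstrates the default behaviour, nothing application-specific:

```python
import sys

# CPython refuses unbounded recursion out of the box: the interpreter
# raises RecursionError once the call stack exceeds a configurable
# limit (1000 frames by default, see sys.setrecursionlimit).
print(sys.getrecursionlimit())

def recurse(n: int) -> int:
    return recurse(n + 1)

try:
    recurse(0)
except RecursionError as e:
    print("hit the limit:", e)
```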
It's similar to what we did with hash collisions: we couldn't count on everyone fixing that in every place already released, so all the major languages added hash randomisation at startup. That one, fortunately, had no side effects the way this one does.
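To see that randomisation in action: string hashes differ from one interpreter process to the next, so an attacker can't precompute colliding dict keys. A small sketch that spawns fresh interpreters (forcing PYTHONHASHSEED=random in case the parent environment pins a seed):

```python
import os
import subprocess
import sys

# Each new interpreter picks a fresh hash seed at startup, so the
# same string hashes differently in every process.
env = dict(os.environ, PYTHONHASHSEED="random")
hashes = {
    subprocess.run(
        [sys.executable, "-c", 'print(hash("payload"))'],
        capture_output=True, text=True, env=env,
    ).stdout.strip()
    for _ in range(5)
}
print(len(hashes))  # almost certainly 5 distinct values
```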
> They set the limit at (on my computer) 0.1ms.
That seems reasonable to me. I'm not sure if you're saying it's too low or too high?
> the problem is that web-servers aren't validating their inputs.
And they won't. This will be repeated many times in various forms, affecting web servers, job queues, parsers, etc., and keep coming back for decades. We already have that with Java XML entities: they weren't fixed at the source, and we get a new implementation with that bug every day.
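The XML-entity point isn't Java-specific, by the way: Python's own stdlib parsers expand internal DTD entities too, which is exactly the "billion laughs" amplification primitive. A deliberately tiny version, two doubling levels instead of the usual ten-plus:

```python
import xml.etree.ElementTree as ET

# Each entity level doubles the payload; with ~10 levels a
# few-hundred-byte document expands to gigabytes in memory.
doc = """<?xml version="1.0"?>
<!DOCTYPE d [
  <!ENTITY a "ha">
  <!ENTITY b "&a;&a;">
  <!ENTITY c "&b;&b;">
]>
<d>&c;</d>"""
root = ET.fromstring(doc)
print(root.text)  # four copies of "ha"
```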