A practical consideration might be that a disk experiences a fault in one of the executions: it works fine a hundred times but fails on the 101st (e.g. hits a disk-full error or a bad block).
But that simply means it's harder to implement true idempotency when it comes to disk usage.
This is why the problem is usually simplified to ignore unhappy paths.
I fear y'all (or I) may be dancing a bit too deep into hypotheticals here...
The idempotent version of this function doesn't blindly write. Most of Ansible, for example, is Python code doing 'state machines' or whatever: checking whether changes are needed and applying them if so.
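A minimal sketch of that check-before-write pattern (the function name and return convention here are mine, not Ansible's):

```python
import os

def ensure_file_content(path: str, content: bytes) -> bool:
    """Write `content` to `path` only if it differs; return True if a change was made."""
    if os.path.exists(path):
        with open(path, "rb") as f:
            if f.read() == content:
                return False  # already in the desired state, nothing to write
    with open(path, "wb") as f:
        f.write(content)
    return True
```

Run it twice and the second call is a no-op, which is the whole point: the write only happens when the observed state differs from the desired state.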
What y'all are treating as one function is, in reality, an entire script/library. Perhaps I'm fooled by party tricks?
Beyond all of this, the disk-full example is nebulous. On Linux (at least)... if you open an existing file for writing without truncating it, the space it's using stays reserved. Writing the same data back out should always be possible... just kind of silly.
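That in-place overwrite looks roughly like this (a sketch; it assumes the file already exists, and the reserved-space claim holds on traditional filesystems like ext4 rather than copy-on-write ones):

```python
import os

def overwrite_in_place(path: str, content: bytes) -> None:
    # "r+b" opens for writing WITHOUT truncating, so the file's
    # already-allocated blocks are reused instead of freed and reclaimed
    with open(path, "r+b") as f:
        f.write(content)
        f.truncate()          # trim trailing bytes if the old version was longer
        f.flush()
        os.fsync(f.fileno())  # push to the device before returning
```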
It brings about the risk of corruption in transit; faulty connectors or what have you. Physical failure like this isn't something we can expect software to handle/resolve, IMO. Wargames and gremlins combined!
To truly tie all this together, I think we have to consider atomicity.
> Writing the same data back out should always be possible...
Depends on the implementation: maybe you open a temp file and then mv it into place after you are done writing (for increased atomicity)?
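Something like this sketch, say (using `os.replace` for the rename, which is atomic on POSIX as long as the temp file lives on the same filesystem):

```python
import os
import tempfile

def atomic_write(path: str, content: bytes) -> None:
    # Write to a temp file in the same directory, then atomically swap it in.
    # Note this needs room for a SECOND copy of the data, so even rewriting
    # identical content can fail with ENOSPC on a full disk.
    dir_name = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=dir_name)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(content)
            f.flush()
            os.fsync(f.fileno())
        os.replace(tmp_path, path)  # readers see either the old file or the new one
    except BaseException:
        os.unlink(tmp_path)  # clean up the orphaned temp file on failure
        raise
```

Which is exactly the point: the "safer" implementation trades away the guarantee that rewriting the same data always succeeds.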
But as I already said, in practice we ignore these externalities because it makes the problem a lot harder for minor improvements — not because this isn't something software can "handle/resolve".