This seems misplaced in a discussion about generative AI. Automated systems for handling appeals are already a thing and don't necessarily rely on generative AI like LLMs.
I agree that ban and appeal handling needs a human in the loop to work as intended, so there's some way of catching the false positives. Without a human in the loop, the appeal process might as well be removed entirely, because it won't fulfill its purpose.
Whether it's misplaced kinda depends on how big of a deal you think recent and upcoming generative AIs are. If they're really that capable, their recognition as something that makes automated moderation cheaper and/or better is likely to inject new energy into the preexisting drive to automate tasks like moderation and policy appeals. Likewise, if they make it possible to automate things that strictly required humans before (which already seems to be the case for LLMs), whole new classes of tasks will be automated that weren't before. Both trends make something like a general right to appeal before a competent human reviewer more urgent, even if it's been a good idea for a long time.
On the other hand, if you don't see recent and upcoming generative AIs as that world-changing, they don't really change the picture on whether enshrining such a right into policy is warranted.