But then could you trust it not to hallucinate functionality that doesn't exist? That seems as risky as out-of-date comments, if not more so.
What I'd really like is an AI linter that noticed if you've changed some functionality referenced in a comment without updating that comment. Then the worst-case scenario is that it doesn't notice, and we're back where we started.
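Something along those lines could probably be hacked together today. A rough sketch in Python of what I mean, where the actual model call is left as a hypothetical ask_llm() placeholder (everything here is illustrative, not a real tool):

```python
# Rough sketch of a "stale comment" linter: flag diff hunks where code changed
# but a nearby, unchanged comment might now be out of date.
import re
import subprocess

COMMENT_RE = re.compile(r"^\s*(#|//|/\*|\*)")  # crude comment detector

def changed_hunks(base: str = "HEAD~1") -> list[str]:
    """Return unified-diff hunks for code changed since `base`."""
    diff = subprocess.run(
        ["git", "diff", "-U3", base],
        capture_output=True, text=True, check=True,
    ).stdout
    return [h for h in diff.split("\n@@") if h.strip()]

def touches_comment_context(hunk: str) -> bool:
    """True if the hunk changes code while nearby unchanged lines contain comments."""
    lines = hunk.splitlines()
    changed_code = any(
        line.startswith(("+", "-")) and not COMMENT_RE.match(line[1:])
        for line in lines
    )
    nearby_comment = any(
        not line.startswith(("+", "-")) and COMMENT_RE.match(line)
        for line in lines
    )
    return changed_code and nearby_comment

def lint(base: str = "HEAD~1") -> None:
    for hunk in changed_hunks(base):
        if touches_comment_context(hunk):
            # Placeholder for the AI part: ask the model whether the unchanged
            # comment still describes the changed code, e.g.
            #   verdict = ask_llm("Does this comment still match the code?", hunk)
            print("Possible stale comment near:\n", hunk[:300])

if __name__ == "__main__":
    lint()
```

The nice property is exactly the failure mode described above: if the model misses a stale comment, nothing gets worse; it only ever adds a warning, never rewrites the comment itself.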