I'd say it's because, psychologically (and also based on CS theory), creating something and verifying it draw on related but distinct skills.
It's like NP: solving an NP-complete problem is (as far as we know) very hard, but verifying that a proposed solution is correct is easy.
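Subset-sum is a standard illustration of that gap. Finding a subset that hits the target takes exponential time in the worst case, but checking a candidate answer is a linear scan. A toy sketch in Python (function names are mine, just for illustration):

```python
from itertools import combinations

def find_subset(nums, target):
    # Creation is hard: brute-force search over all subsets,
    # exponential in len(nums).
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return list(combo)
    return None

def verify_subset(nums, target, candidate):
    # Verification is easy: confirm each element really comes
    # from nums and that the sum matches. Linear-ish time.
    pool = list(nums)
    for x in candidate:
        if x in pool:
            pool.remove(x)
        else:
            return False
    return sum(candidate) == target

print(find_subset([3, 34, 4, 12, 5, 2], 9))        # a subset summing to 9
print(verify_subset([3, 34, 4, 12, 5, 2], 9, [4, 5]))  # True
```

The asymmetry is the whole point: you can be unable to produce the answer yourself and still be fully equipped to check one.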
You might not know which statements are required, but once the AI reminds you which statements are available, you can check that the logic built on them makes sense.
Yes, there is a pitfall of getting lazy and forgetting to verify the output. That's where a lot of vibe-coding problems come from, in my opinion.
The biggest problem with LLMs is that they are very good at presenting something that looks like a correct solution to a reader who lacks the knowledge to confirm whether it actually is.
So my concern is more "do you know how to verify" rather than "did you forget to verify".