The difference I've noticed is that the first shot is generally cleaner, but the ceiling on what it can correct is limited. If the things it's asked to correct are independent or simple, then once you point them out you're usually golden. But if the thing it has to correct interacts with other constraints, then when it shifts approach to fix the issue you flagged, it often forgets those other constraints and breaks them. Typically this happens on the more complex (as in highly interrelated) problems; for complex as in "just a lot of stuff needs to be done," it does fine.
You can, but as I said, the ceiling on what it can correct seems limited, particularly in the situations described above. GPT-4 doesn't seem to have broken that barrier much beyond GPT-3.5 in my use so far. I posted some examples of this experience over here: https://news.ycombinator.com/item?id=35158149