I'm looking forward to trying the SwiftUI preview integration, though based on my experience using xcodebuildmcp and axe to let agents run simulators and capture screenshots, my expectations are low. The models seemed capable of identifying issues like "a button that should be there is not displayed," but not of noticing that the layout is wrong or that some element is too big.
If you want the ghosts to hallucinate less on things like this, hook them up to the sosumi MCP. It's been very helpful to me, since Apple's newer APIs don't seem to be in the training set of today's models.
When working on my own projects, I've found a good rule of thumb: if you're being told to use something low-level and unintuitive like a semaphore in Swift for something that ought to be easy, you're probably either reading a Stack Overflow answer from an Objective-C developer or in the middle of an LLM session that's gone sideways. Low-level libraries might need those primitives; they're approximately never right for application code. Just throw it out and start over (as you did); it saves your sanity.
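For anyone who hasn't run into this, the classic version is a semaphore bolted onto a completion-handler API to fake a synchronous call. Here's a minimal sketch of what I mean (the function names and usage are mine, purely for illustration):

```swift
import Foundation

// Anti-pattern: using DispatchSemaphore to make a completion-handler API
// "synchronous". This is the shape of many old Stack Overflow answers.
func fetchBodyBlocking(from url: URL) -> String? {
    let semaphore = DispatchSemaphore(value: 0)
    var body: String?
    URLSession.shared.dataTask(with: url) { data, _, _ in
        body = data.flatMap { String(data: $0, encoding: .utf8) }
        semaphore.signal()
    }.resume()
    semaphore.wait() // the calling thread sits blocked until the callback fires
    return body
}

// Idiomatic Swift concurrency: make the caller async and await the result.
func fetchBody(from url: URL) async throws -> String? {
    let (data, _) = try await URLSession.shared.data(from: url)
    return String(data: data, encoding: .utf8)
}
```

The semaphore version burns a blocked thread (and freezes the UI if you call it from the main thread); the async version says the same thing in three lines with none of the footguns.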