Including a coding style guide helps the generated code look the way you want, as does an explanation of the project structure and the overall design of the code base. Always specify which libraries it should use (otherwise it will pull in anything, or reimplement things a library already provides).
You can also make the AI review itself. Have it modify code, then ask it to review that code, then ask it to address the review comments, and iterate until it has no more comments.
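The self-review loop described above can be sketched in TypeScript. Note this is a hypothetical sketch: `askModel` is a stand-in for whatever API or CLI call you actually use, not a real library function, and the "LGTM" stopping convention is an assumption.

```typescript
// Hypothetical sketch of the modify -> review -> address-comments loop.
// `askModel` is an invented stand-in for your actual model call.
async function reviewLoop(
  askModel: (prompt: string) => Promise<string>,
  code: string,
  maxRounds = 5, // cap iterations so a nitpicking model cannot loop forever
): Promise<string> {
  let current = code;
  for (let round = 0; round < maxRounds; round++) {
    // Ask for a review of the current code.
    const review = await askModel(
      `Review this code and list concrete issues, or reply "LGTM":\n${current}`,
    );
    // Stop once the model has no more comments.
    if (review.trim() === "LGTM") break;
    // Otherwise ask it to address its own review comments.
    current = await askModel(
      `Address these review comments and return the full revised code:\n${review}\n\n${current}`,
    );
  }
  return current;
}
```

The round cap matters in practice: without it, a model that always finds something to nitpick would iterate indefinitely.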
Use an agentic tool like Claude Code or Amazon Q CLI. Then ask it to run the tests after code changes and to address all issues until the tests pass. Make sure to tell it not to change the test code.
I found that presenting your situation and asking for a plan/ideas, plus "do not give me code. Make sure you understand the requirements and ask questions if needed.", works much better for me.
It also lets me more easily control what the LLM will do, instead of ending up reviewing and throwing away 200 lines of code.
In a Next.js + vitest context, I try to outline exactly which tests I want and give it proper data examples, so that it does not cheat by mocking fake objects.
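A minimal sketch of what "proper data examples" might mean here: instead of letting the model mock a fake shape, you hand it a realistic fixture up front. The `Product` type, `formatPrice` helper, and sample values below are all invented for illustration; only the vitest workflow itself comes from the comment above.

```typescript
// A realistic fixture handed to the model, so it tests against real
// data shapes rather than inventing mocks. All names here are
// hypothetical examples, not from any actual project.
type Product = {
  id: string;
  name: string;
  priceCents: number;
  currency: "EUR" | "USD";
};

const sampleProducts: Product[] = [
  { id: "p_001", name: "Desk lamp", priceCents: 2499, currency: "EUR" },
  { id: "p_002", name: "USB-C cable", priceCents: 999, currency: "USD" },
];

// The function under test: formats integer cents as a display price.
function formatPrice(p: Product): string {
  const amount = (p.priceCents / 100).toFixed(2);
  return p.currency === "EUR" ? `€${amount}` : `$${amount}`;
}
```

In the outlined vitest test you would then assert directly against the fixture, e.g. `expect(formatPrice(sampleProducts[0])).toBe("€24.99")`, which leaves the model no room to substitute a mocked object with a made-up shape.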
I do not buy into the whole "you're a senior dev" persona prompt. Most people use Claude for coding, so I guess it's ingrained by default.