It does write its own tests, at least to the extent that it checks whether the generated programs work on the data provided and discards the ones that don't. I imagine many of the coding challenges it's trained on come with a few tests as well.
I meant actually coming up with examples consisting of specific problems and their correct solutions (and maybe some counterexamples.)
Ironically, I had just replaced 'test cases' with 'tests', because I thought that the former might seem too generic, and arguably satisfiable merely by rephrasing the problem statement as a test case to be satisfied.
That would imply the AI has already solved the problem, since it needs the solution in order to generate tests; e.g. an AI can't test addition without being able to add, and so on.
The problem here is to write a program, not to solve the problem that this program is intended to solve. Clearly people can write tests for programs without necessarily being able to write a program that meets the specification, and in some cases without being able to solve the underlying problem themselves (e.g. plausibly, a person could write tests for a Sudoku-solving program without being able to solve Sudoku puzzles, and it is possible to test programs that will be used to find currently-unknown prime numbers.)
Having said that, your point is kind of what I was getting at here, though in a way that was probably too tongue-in-cheek for its own good: when we consider all the activities that go into writing a program, the parts that AlphaCode does not have to do are not trivial. Being given solved test cases is what allows it to succeed (sometimes) with an approach that involves producing a very large number of mostly-wrong candidates, and searching through them for the few that seem to work.
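That generate-and-filter approach can be sketched in a few lines. This is a toy caricature under my own assumptions (the function names and the doubling "problem" are made up for illustration), not AlphaCode's actual pipeline:

```python
# Toy sketch of filtering many mostly-wrong candidate programs against
# provided example tests, keeping only the ones that pass everything.

def filter_candidates(candidates, tests):
    """Keep candidate programs that pass every (input, expected) pair."""
    survivors = []
    for program in candidates:
        try:
            if all(program(x) == expected for x, expected in tests):
                survivors.append(program)
        except Exception:
            pass  # mostly-wrong candidates may crash; discard them too
    return survivors

# The "problem" here is doubling a number; two of three candidates are wrong.
candidates = [lambda x: x + 2, lambda x: x * 2, lambda x: x ** 2]
tests = [(1, 2), (3, 6)]
good = filter_candidates(candidates, tests)
```

The point the comment makes falls out directly: without those solved example tests, there is nothing to filter against, and the huge pool of candidates is useless.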