For a while now I've been dreaming about a system that lets me link tests to documentation. Much as citations to other documents support claims in many domains, claims about software could be backed up by tests that demonstrate the claim. Then, when a test fails, the system could surface the relevant pieces of documentation (whether developer-facing or user-facing) so the developer knows the _why_ they're trying to preserve as they update some combination of the code, the test, and the documentation.
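As a rough illustration of what I mean, here's a minimal sketch in Python. The `documents` decorator, the doc paths, and the test itself are all hypothetical; the point is just that each test declares which documented claims it backs, and a failure surfaces those claims:

```python
# Hypothetical sketch: tests declare the documentation claims they back,
# and a failing test surfaces those claims. All names/paths are made up.

import traceback

# Registry mapping test functions to the doc sections they support.
DOC_LINKS: dict = {}

def documents(*doc_refs):
    """Mark a test as evidence for one or more documentation claims."""
    def wrap(test_fn):
        DOC_LINKS[test_fn] = doc_refs
        return test_fn
    return wrap

@documents("docs/billing.md#proration", "user-guide/upgrades.md")
def test_midcycle_upgrade_is_prorated():
    # Docs claim: upgrading mid-cycle charges only the remaining days.
    charged = 15  # stand-in for the real calculation
    assert charged == 15

def run(tests):
    """Run tests; on failure, print the documentation whose claims are at risk."""
    for test in tests:
        try:
            test()
        except AssertionError:
            traceback.print_exc()
            for ref in DOC_LINKS.get(test, ()):
                print(f"  claim at risk: {ref}")

if __name__ == "__main__":
    run([test_midcycle_upgrade_is_prorated])
```

In a real setup this would more likely hang off an existing test runner's metadata (markers, attributes, tags) rather than a hand-rolled registry, but the linkage idea is the same.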
Automate the tests. Sure, the QA group holds the final say, but if you have a test that breaks you don't have to ask them. Or at least you have something to base a conversation on.
Note that requirements change, and bad tests will break even when no requirements have changed. Telling the difference is an exercise for the reader.
I tend to agree. When I am capable of summoning the energy to care at this point, I make a run at #1. I have recently decided to actively pursue #2, as well.
I have multiple QA teams. Some are straight-up regular users who cannot write hello world in any language. Others are programmers hired to write and maintain our end-to-end QA suite. A good end-to-end QA suite for a large project is complex enough that it needs everything any large software project needs: architects, tech leads, and all the other technical staff.
I've seen a recent multi-month training course for testers. Half of it is manual testing (culminating in a website-related project); the other half is automated testing, which includes learning some Python.
Really don't wish to go back to this. We don't have QA teams or QAs where I work anymore. We can get changes to prod much faster and our test suites are growing to be pretty high quality for the stuff our team owns.