There's always human review; there should always be. The real goal is to reduce the amount of stuff we have to trust, whether human or machine. High-assurance security does it as follows:
1. Formal specs of what it does, in terms of features and security, written so there's no ambiguity.
2. Formal specs of how it does that.
3. Formal, machine-checked proof that the how embodies the what (a toy sketch of 1-3 follows the list).
4. Formal proof, or even extraction, of the implementation. These techniques can go down to machine code or gates now.
5. Covert channel analysis to find any leaks in any of that (a toy timing-leak demo follows the list).
6. Testing of every execution trace under a variety of inputs to show equivalence with the formal model during success and failure (see the sketch after this list)!
7. Trustworthy distribution of above artifacts to both evaluators and users.
8. Optional, trustworthy checking of above artifacts on-site by users with diverse tooling (a hash-check sketch follows the list).
9. On-site generation of system from distributed artifacts.
10. Proper guidance w/ automation where possible on secure initialization, configuration, maintenance, and termination.
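To make steps 1-3 concrete, here's a minimal sketch in Lean 4 (an illustration I'm adding, not part of the original process docs): a toy "what" spec saying the result of a max function dominates both inputs and is one of them, a concrete "how", and a machine-checked proof tying them together. Real systems state such properties over whole state machines, not a single function.

    -- Step 1, the "what": an unambiguous property the result must satisfy.
    def Spec (a b r : Nat) : Prop := a ≤ r ∧ b ≤ r ∧ (r = a ∨ r = b)

    -- Step 2, the "how": a concrete implementation.
    def impl (a b : Nat) : Nat := if a ≤ b then b else a

    -- Step 3: a machine-checked proof that the how embodies the what.
    theorem impl_meets_spec (a b : Nat) : Spec a b (impl a b) := by
      unfold Spec impl
      split <;> omega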
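For step 5, here's a toy Python demo of the kind of timing leak such analysis hunts for (names and secret are made up): an early-exit comparison whose running time grows with the matching prefix of a guess, next to the constant-time fix.

    import hmac
    import time

    SECRET = b"0123456789abcdef"  # made-up secret for the demo

    def leaky_equal(a, b):
        # Early-exit compare: time reveals how long the matching prefix is,
        # so an attacker can recover the secret one byte at a time.
        if len(a) != len(b):
            return False
        for x, y in zip(a, b):
            if x != y:
                return False
        return True

    def time_guess(guess, reps=50000):
        t0 = time.perf_counter()
        for _ in range(reps):
            leaky_equal(guess, SECRET)
        return time.perf_counter() - t0

    bad  = time_guess(b"x" * 16)            # mismatch at byte 0
    near = time_guess(b"0123456789abcdeX")  # mismatch at byte 15
    print(f"prefix-0: {bad:.3f}s  prefix-15: {near:.3f}s")

    # The fix: hmac.compare_digest inspects every byte regardless of mismatches.
    assert hmac.compare_digest(SECRET, SECRET)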
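For step 6, a minimal Python sketch of what trace equivalence means, using an invented bounded-stack example: a pure model acts as the formal spec, the implementation is driven with the same random traces, and the two must agree on results and visible state in both the success and failure cases.

    import random

    # Reference model: a bounded stack specified as a pure function.
    def model_push(state, x, bound=4):
        if len(state) >= bound:
            return state, "error"      # specified failure behavior
        return state + [x], "ok"

    class ImplStack:
        """Implementation under test (stands in for the real system)."""
        def __init__(self, bound=4):
            self.items, self.bound = [], bound
        def push(self, x):
            if len(self.items) >= self.bound:
                return "error"
            self.items.append(x)
            return "ok"

    # Drive model and implementation with identical traces; demand equal
    # results on success *and* failure, plus equal visible state after.
    for trial in range(1000):
        state, impl = [], ImplStack()
        for _ in range(random.randint(0, 8)):
            x = random.randint(0, 9)
            state, expected = model_push(state, x)
            actual = impl.push(x)
            assert (expected, state) == (actual, impl.items)
    print("all traces match the model")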
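For steps 7-8, a small Python sketch of on-site checking with diverse tooling: each artifact's SHA-256 is computed by two independent tools (Python's hashlib and coreutils' sha256sum) and both must match the vendor's published manifest. The manifest entry here is hypothetical (the value shown is the hash of an empty file).

    import hashlib
    import subprocess
    import sys

    # Hypothetical signed manifest: artifact name -> published SHA-256.
    MANIFEST = {
        "kernel.img": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
    }

    def sha256_stdlib(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 16), b""):
                h.update(chunk)
        return h.hexdigest()

    def sha256_coreutils(path):
        # Independent second tool to cross-check the first.
        out = subprocess.run(["sha256sum", path], capture_output=True,
                             text=True, check=True)
        return out.stdout.split()[0]

    for name, expected in MANIFEST.items():
        a, b = sha256_stdlib(name), sha256_coreutils(name)
        if not (a == b == expected):
            sys.exit(f"{name}: MISMATCH stdlib={a} coreutils={b} manifest={expected}")
    print("all artifacts verified by two independent tools")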
Shortest summary I can do of a process that goes back to the 1980s for countering the three problems you mentioned. Many key issues have been grand-slammed out by tooling and checklists. Others are still evolving, with mixed success. Clever attackers might always embed a new backdoor you don't see. Plus, specs, tests, or key tools might be wrong. Hence, human review of each of the above by many smart minds is the most important assurance activity.
EDIT: I should note that, while it looks like a waterfall process, it can and probably should be done as a mix of top-down & bottom-up development. The important thing is that you can link the various pieces together into a believable assurance argument.