I'm not suggesting one should use the original Boyer-Moore theorem prover today. It's of historical interest. I made it runnable again because I used it when it was state of the art, and I'd previously ported it once. But it's 1980s technology.
I like their constructive approach to mathematics. They build number theory up from something close to the Peano axioms, one proof at a time, which is good for soundness. I built on that and didn't add axioms: McCarthy's "axioms" for arrays could be proved as theorems within Boyer-Moore theory, and they were. Instead of explicit quantifiers, you have recursive functions that are proved to terminate. That gets you out of many annoying logic knots associated with quantifiers, like undecidability.
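To make that concrete, here's a hypothetical sketch (in Lean rather than Boyer-Moore's Lisp, purely for familiarity): instead of the quantified claim "every element of xs is below n", you write a recursive Boolean function, and the system accepts it only because the structural recursion visibly terminates.

    -- Hypothetical sketch, with Lean standing in for Boyer-Moore's logic.
    -- No quantifier: the property is a total, executable function, and
    -- termination is evident from structural recursion on the list.
    def allBelow (n : Nat) : List Nat → Bool
      | []      => true
      | x :: xs => decide (x < n) && allBelow n xs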
I've been out of this for a long time. There's been considerable progress. But the tooling still seems not to be all that programmer-friendly. It could be. We had that partly figured out. What we didn't have was enough CPU power to make it go fast enough to be useful.
I'd hoped Rust would go in for more verification. C/C++ has so much undefined behavior that huge efforts have to be made just to nail down the language. Rust is much better in that respect.
People get carried away with abstraction in this area. I wouldn't try to verify macros or generics before they're instantiated. That requires elaborate and iffy type theory. Instantiate them, look at the generated code (say, at the LLVM input level) with hard types for everything, and work on that. That's unambiguous. Then build a system with enough automation that the user doesn't have to do work for each instantiation. That's what computers are for. Don't try to be too clever.
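In a language with explicit instantiation (Ada, say) the point is easy to see; a hypothetical sketch with invented names:

    --  The generic is only a template, but each instance is a concrete
    --  unit with hard types that a verifier can work on directly.
    generic
       type Elem is private;
    procedure Swap (A, B : in out Elem)
      with Post => A = B'Old and B = A'Old;

    procedure Swap (A, B : in out Elem) is
       T : constant Elem := A;
    begin
       A := B;
       B := T;
    end Swap;

    --  After instantiation there is nothing generic left to reason about:
    procedure Swap_Integers is new Swap (Elem => Integer);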
Automated proof is what you get with SparkAda. With the 2014 version you get executable contracts (thanks to the design-by-contract features added in Ada 2012) and lots of help from the gnatprove tool, which acts as a front end to Why3 and other provers and static-analysis tools (meaning it implements all the run-time checks the compiler implements; see the recent inclusion of CodePeer). Once it has generated all the verification conditions, it runs Z3, CVC4, Alt-Ergo... You define how much effort you're allowing, and if they fail to prove your VCs you get immediate feedback in your dev IDE. Once you've spent some time doing it, you can quickly drill down to 'why' and 'how to help the prover' (most of the time it boils down to 'state your unstated assumptions'). And if you're not lucky but still motivated, you can jump straight into Coq or Isabelle with your VCs.
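For flavor, a minimal sketch of an executable contract (invented names; the aspect syntax is standard Ada 2012):

    package Counters with SPARK_Mode is
       procedure Increment (X : in out Integer)
         with Pre  => X < Integer'Last,   --  becomes a VC at each call site
              Post => X = X'Old + 1;      --  becomes a VC against the body
    end Counters;

    package body Counters with SPARK_Mode is
       procedure Increment (X : in out Integer) is
       begin
          --  gnatprove emits an overflow VC here; the precondition
          --  lets Z3/CVC4/Alt-Ergo discharge it
          X := X + 1;
       end Increment;
    end Counters;

Because the contracts are ordinary executable Ada, you can also just run them as assertions before you ever try to prove them.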
They're very pragmatic. They went for a subset of Ada that is deemed 'provable' but is still largely usable. You get lots of nice typing features and run-time checks from Ada.
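A small hypothetical example of the payoff: a constrained subtype turns every implicit range check into a verification condition rather than just a run-time trap.

    subtype Percent is Integer range 0 .. 100;

    function Blend (A, B : Percent) return Percent is
    begin
       return (A + B) / 2;   --  range check provably within 0 .. 100
    end Blend;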
There are still some difficult areas (aliasing, floating point, user-friendliness), but the tech is working.
As for generics, that's the tactic SparkAda adopted: prove each instantiation. It's disturbing at first ('they went the lazy way!'), but it's pragmatic and it works great. In fact, sometimes you'd be inclined to turn a procedure, function, or package into a generic, because with the right restricted, specific types and parameters the proof effort is easier.
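A hedged sketch of why (invented names): the generic body by itself is not provable for every possible actual type, but instantiated with a narrow range the overflow VC becomes easy.

    generic
       type Elem is range <>;
    function Average (A, B : Elem) return Elem;

    function Average (A, B : Elem) return Elem is
    begin
       --  overflow VC on A + B: trivial once Elem is known to be small
       return (A + B) / 2;
    end Average;

    subtype Score is Integer range 0 .. 100;
    function Average_Score is new Average (Elem => Score);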
We're at a point where proof of absence of run-time errors is almost achievable by mere programmers. Soon we'll be asking: 'do we go for test + fuzzing + QuickCheck for robustness, or for proof?'
As for automation, they're always working on small but useful things. The latest one I can think of is loop unrolling. It might look funny, but writing loop invariants and variants is tedious. They're slowly (lead bullets, not silver ones) building up small automation steps.
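For context, a hypothetical example of that tedium: even a trivial loop needs an invariant before gnatprove can prove the postcondition, since the prover otherwise forgets what earlier iterations did. Unrolling small constant loops removes exactly this kind of ceremony.

    type Buffer is array (Positive range <>) of Integer;

    procedure Zero_Fill (A : in out Buffer)
      with Post => (for all J in A'Range => A (J) = 0)
    is
    begin
       for I in A'Range loop
          A (I) := 0;
          --  delete this pragma and the postcondition fails to prove
          pragma Loop_Invariant
            (for all J in A'First .. I => A (J) = 0);
       end loop;
    end Zero_Fill;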