> Turning program correctness into just a matter of type safety, at which point it is just compositional.
It isn't. "Truth-preservation" in all deduction systems is also compositional (if the premises are true, so is the conclusion); that doesn't make the propositions compositional with respect to program components (if I conclude that X > 3 and Y > 2, then I can also conclude that X + Y > 5; the conclusion preserves the truth of the premises, not the property itself, as neither X nor Y is greater than 5). If you have a property of a program that isn't a property of a component, then the property is not compositional (note that type safety is: A∘B is type-safe iff A and B are; but type safety here is merely the inference step, similar to truth preservation). If it's not compositional, it can be arbitrarily hard to prove.
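To put the same point in code (a toy Haskell sketch of mine, nothing more): composing two well-typed functions is well-typed by construction, but a property like "the sum exceeds 5" belongs to the combination, not to either component on its own.

```haskell
-- Type safety composes: if 'f' and 'g' typecheck, so does their composition.
f :: Int -> Int
f x = x + 1

g :: Int -> Int
g y = y * 2

h :: Int -> Int
h = f . g            -- accepted purely because its components are well-typed

-- A non-compositional property: "the sum exceeds 5".
-- It can be deduced from premises about the components (x > 3, y > 2),
-- but neither x alone nor y alone has the property.
exceedsFive :: Int -> Int -> Bool
exceedsFive x y = x + y > 5

main :: IO ()
main = print (h 4, exceedsFive 4 3)
```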
> Maybe there are important cases that we can't solve this way, but I've not encountered any yet.
If you're able to affordably prove any correctness property of any real-world program, this knowledge is worth billions. I am not aware of anyone who's ever made a similar claim.
> I'd argue that we already do huge amounts of program verification with deductive proofs - that is, typechecking - many orders of magnitude more code is typechecked than has any other formal method applied - and the frontier of progress is how much we can represent important program properties within that.
But the properties proven are easy. It's like saying: I can jump 50cm, so jumping all the way to the moon is just a matter of degree. No. There are real, fundamental, theoretical barriers between proving compositional properties and non-compositional ones (or between properties that require one or two inference steps and those that require many); between proving properties with zero or one quantifier and properties with two quantifiers.
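To make the quantifier point concrete, here is a schematic illustration (my own, deliberately simplified): a single-quantifier property can be checked input by input, while in a two-quantifier property the inner witness depends on the outer variable, and that alternation is what makes the proof qualitatively harder, not just bigger.

```latex
% One quantifier: a per-input safety property.
\[ \forall x.\; P(f(x)) \]
% Two quantifiers: the witness n depends on x
% (e.g. "f halts on every input within some number of steps").
\[ \forall x.\; \exists n.\; \mathit{halts}(f, x, n) \]
```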
Also, while you can be pleased with Haskell's formal accomplishments, have you compared them to those of model checking in safety-critical real-time systems? Because once you do, Haskell doesn't look so good in comparison.
And your comparison of Haskell and Java is particularly interesting. You say that Haskell's type system is "streets ahead". I agree (although I think this comes at a great cost). However, Haskell's correctness or development cost is not "streets ahead", or even noticeably ahead. This shows that strengthening some deductive property is no guarantee of improving quality. Why? Because software correctness is very complicated. Its theory is complicated, and its practice is even more complicated.
> I don't see that using types forces anything any more than any other approach to specification
I think dependent types are just redundant. You can get all or most of what they give you, without the inflexibility and the bad defaults, using other means.
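For one small, made-up illustration of such other means (a sketch, not a claim about any real system): an invariant that a dependent type would put into the type, say "a percentage lies between 0 and 100", can be enforced by an ordinary run-time check behind a smart constructor.

```haskell
-- In a dependently typed language the 0..100 bound could live in the type;
-- here the same invariant is enforced once, at run time, by a smart constructor.
newtype Percentage = Percentage Int
  deriving Show

mkPercentage :: Int -> Maybe Percentage
mkPercentage n
  | n >= 0 && n <= 100 = Just (Percentage n)
  | otherwise          = Nothing

main :: IO ()
main = mapM_ (print . mkPercentage) [42, 150]   -- Just (Percentage 42), Nothing
```

You lose the compile-time guarantee, but you keep the flexibility to change the invariant without threading it through every type signature.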
But look, I'm personally not too fond of dependent types in particular and deductive proofs in general, and consider them something you use when all else fails and you have the money to spare. Others can be just as bearish on model checking or concolic testing, paths that I think are more promising and that I believe are seeing bigger research investment. But what is certain is that no one has any idea whether we will find a way to drastically increase software correctness in the next twenty years, let alone which method will get us there. The one thing you can be sure of is that anyone who tells you they do is simply unfamiliar with the different avenues explored or with many of the challenges involved (including social challenges).
> And your comparison of Haskell and Java is particularly interesting. You say that Haskell's type system is "streets ahead". I agree (although I think this comes at a great cost). However, Haskell's correctness or development cost is not "streets ahead", or even noticeably ahead.
I'd argue correctness is in fact streets ahead: orders of magnitude fewer defects (in terms of the code not doing what the programmer intended; the programmer not understanding what some other human intended is still as big a problem as ever) in practice. It's interesting how little value that actually brings in ordinary commercial development, but the correctness techniques will be there when the industry is ready for them. Development cost is much harder to measure; I'd argue that some systems developed with Haskell-like type systems would simply not have been possible to make without them, but that's the sort of question that's very difficult to answer objectively.
> orders of magnitude fewer defects (in terms of the code not doing what the programmer intended; the programmer not understanding what some other human intended is still as big a problem as ever) in practice.
I don't see that, and I'm not aware of any report that supports it.
> It's interesting how little value that actually brings in ordinary commercial development, but the correctness techniques will be there when the industry is ready for them.
Except that in some cases it's already true. Companies like Amazon and Microsoft are seeing great value in using TLA+ and other formal methods, and now management encourages engineers to use them. I just don't see it for Haskell. Facebook is a good example, as they invest a lot in correctness research (both formal methods, like Infer, and inductive methods for automatic bug fixing and codegen[1]) and they also use some Haskell. They're pushing hard on formal and inductive methods, but not so much on Haskell. I don't think correctness is a problem that cannot be addressed, and I believe there are techniques that address it. I just don't see that Haskell is one of them.
> Development cost is much harder to measure; I'd argue that some systems developed with Haskell-like type systems would simply not have been possible to make without them, but that's the sort of question that's very difficult to answer objectively.
It isn't. "Truth-preservation" in all deduction systems is also compositional (if the premises are true, so is the conclusion); that doesn't make the propositions compositional with respect to program components (if I conclude that X > 3 and Y > 2, then I can also conclude that X + Y > 5; the conclusion preserves the truth of the premises, not the property itself, as neither X nor Y is greater than 5). If you have a property of a program that isn't a property of a component, then the property is not compositional (note that type safety is: A∘B is type-safe iff A and B are; but type safety here is merely the inference step, similar to truth preservation). If it's not compositional, it can be arbitrarily hard to prove.
> Maybe there are important cases that we can't solve this way, but I've not encountered any yet.
If you're able to affordably prove any correctness property of any real-world program, this knowledge is worth billions. I am not aware of anyone who's ever made a similar claim.
> I'd argue that we already do huge amounts of program verification with deductive proofs - that is, typechecking - many orders of magnitude more code is typechecked than has any other formal method applied - and the frontier of progress is how much we can represent important program properties within that.
But the properties proven are easy. It's like saying, I can jump 50cm, so jumping all the way to the moon is just a matter of degree. No. There are real, fundamental, theoretical barriers between proving compositional properties and non-compositional ones (or those that require one or two inferences and ones that are more); between proving properties with zero or one quantifier, and properties with two quantifiers.
Also, while you can be pleased with Haskell's formal accomplishments, have you compared them to those of model checking in safety-critical real-time systems? Because once you do, you realize it doesn't look so good in comparison.
And your comparison of Haskell and Java is particularly interesting. You say that Haskell's type systems is "streets ahead". I agree (although I think this comes at a great cost). However, Haskell correctness or development cost is not "streets ahead" or even noticeably ahead. This shows that strengthening some deductive property is no guarantee to improving quality. Why? Because software correctness is very complicated. Its theory is complicated, and its practice is even more complicated.
> I don't see that using types forces anything any more than any other approach to specification
I think depdedent types are just redundant. You can get all or most of what give you without the inflexibility and the bad default using other means.
But look, I'm personally not too fond of dependent types in particular and deductive proofs in general, and consider them something you use when all else fails, and you have the money to spare. Others can be as bearish on model checking or concolic testing, paths that I think are more promising and I believe are seeing a bigger research investments. But what is certain is that no one has any idea whether we will find a way to drastically increase software correctness in the next twenty years, let alone which method will get us there. The one thing you can be sure of anyone who tells you they do is that they are simply unfamiliar with the different avenues explored or with many of the challenges involved (including social challenges).