>>In a typed language, only polymorphism which can be proved correct is admissible. In an untyped language, arbitrarily complex polymorphism is expressible.
>This misses the point, which is that “arbitrary” has proven to be unlikely to be correct. Ultimately, this argument comes down to “you should trust human programmer intuition,” which I gladly admit to being an assumption I’m unwilling to make.
It's true that “arbitrary” has proven to be unlikely to be correct in the general case.
However, allowing untyped types(?) has also proven to be useful. I think this is because there is yet to be a language that makes expressing complex types easy.
We work in an environment where FizzBuzz is considered a good way of filtering people who can program.
I'm generally a proponent of "strongly typed" languages. But I do worry about the cognitive overhead the syntax causes, and I do wonder if there is a "better way".
> I'm generally a proponent of "strongly typed" languages. But I do worry about the cognitive overhead the syntax causes, and I do wonder if there is a "better way".
What about dynamic language + runtime type checking + (nearly) 100% test coverage?
You get the flexibility of a dynamic language and the safety of a statically typed language (in fact you get more, since runtime type checking is more powerful).
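For example, a minimal sketch of what a runtime type check might look like in JavaScript (the function is hypothetical):

    // Validate the arguments' types at the function boundary, at runtime.
    function area(width, height) {
      if (typeof width !== "number" || typeof height !== "number") {
        throw new TypeError("area: expected numbers");
      }
      return width * height;
    }

And since the check is ordinary code, it can enforce any predicate you can write, not just membership in a static type.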
100% test coverage means that every line of code is executed by the test suite. This is probably a good thing.
Yet it does not mean that the code is correct for any definition of "correct" other than "passes the associated test suite." One unit test makes a line of code covered. It does not make the domain of inputs covered - 100% coverage does not mean that corner cases have been handled correctly - or that they have been considered at all. Again, all it means is that every line of code was executed at least once.
There are useless tests:
    function test167() {
      // Executes foo(8), so its lines count as "covered", but the result
      // is never compared against an expected value: this can never fail.
      foo(8);
      return "passed";
    }
It can be very costly to achieve 100% test coverage and static verification can reduce this cost by giving you some tests "for free". For example, you never need to write a test to check for null-pointer-exceptions if you code in Haskell.
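A small illustration of that (the lookup table is made up):

    -- Possible absence is explicit in the type, and the compiler insists
    -- that both cases be handled, so there is no "null case" left to test.
    lookupAge :: String -> Maybe Int
    lookupAge name = lookup name [("alice", 30), ("bob", 25)]

    describe :: String -> String
    describe name = case lookupAge name of
      Just age -> name ++ " is " ++ show age
      Nothing  -> "unknown person"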
Static types also help in situations other than testing. For example, you can use "guardrail programming" to be much more liberal when refactoring things: want to change a function signature? Just change it and the compiler will point out what other parts of the code need to be updated. This is even if you haven't written a test suite yet or are doing big changes that invalidate the old test suite.
That said, I think the perfect world would be one that lets you mix dynamic and static typing in a single program (some things are hard to verify statically, but would be nice if we could at least express them as types) but I think we are still a way off that being a practical alternative.
Yes, C# and lots of other languages have that feature. Where it gets really tricky is if you want to dynamically check the type of functions or objects, since you need to run the assertions when the function/method gets called rather than when the function/method gets assigned to some reference. It's also tricky to give a parametric type to code that is implemented with dynamic types.
Probably you are right, but for a library I think it should be an obligation to provide 100% coverage. For a generic program, maybe I should at least provide 100% coverage of the "critical paths".
I still don't understand the notion that dynamic types give you anything more than static types. Are there really useful programs that can't be written in, say, Haskell?
IMO, it's not just about what useful programs you can write. It's also about how much of the type system you need to learn to get the program to compile. For example:
    // f is applied to an array of strings AND an array of numbers,
    // so f itself has to be polymorphic.
    function foo(f){
      return [f(["1", "2", "3"]), f([1, 2, 3])];
    }

    function length(xs){
      return xs.length;
    }

    foo(length); // [3, 3]
Anyone can see what this JavaScript program is doing, but to write the Haskell equivalent you would need to understand higher-rank types and enable the corresponding language extension, which gets in the way of beginners.
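For the record, a sketch of what that Haskell equivalent might look like (my own rendering):

    {-# LANGUAGE RankNTypes #-}

    -- f must itself be polymorphic, since it is applied to both a list of
    -- strings and a list of ints; the nested forall is a rank-2 type.
    foo :: (forall a. [a] -> Int) -> (Int, Int)
    foo f = (f ["1", "2", "3"], f [1, 2, 3 :: Int])

    main :: IO ()
    main = print (foo length)  -- (3,3)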
Unfortunately I do not know Haskell well enough to give you an answer. I was referring to this:
> I think this is because there is yet to be a language that makes expressing complex types easy.
Since I'm obliged to use a dynamic language (JavaScript) I'm looking for a solution in it.
And in JavaScript it is really easy to express a complex type, say "the set of all the even integers between 2 and 14 but 10", using runtime checking to enforce the constraints, something like this quick sketch:
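    // Runtime enforcement: any predicate you can write in code will do.
    function checkEvenSet(n) {
      if (!Number.isInteger(n) || n < 2 || n > 14 || n % 2 !== 0 || n === 10) {
        throw new RangeError(n + " is not in the set");
      }
      return n;
    }

Is it simple in Haskell to define such a set in its type system?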
Simple? Yes. Pleasant? Not really. I'm not a Haskell expert but I can write that type in Scala (the Haskell version probably has less boilerplate, but still some):
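    // (a sketch of one encoding: one case object per allowed value,
    // so the compiler knows exactly which numbers can exist)
    sealed abstract class EvenSet(val value: Int)
    case object Two      extends EvenSet(2)
    case object Four     extends EvenSet(4)
    case object Six      extends EvenSet(6)
    case object Eight    extends EvenSet(8)
    case object Twelve   extends EvenSet(12)
    case object Fourteen extends EvenSet(14)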
Like I said, it's cumbersome, but there's nothing actually complicated about writing this, it's just boilerplate to represent it in the type system.
On the other hand, in Idris or the other languages mentioned in the article, you should be able to write that kind of constraint at the type level just as easily as you can at the value level. That's where it gets really exciting.
In JavaScript, any method that relies on receiving a value of "the set of all the even integers between 2 and 14 but 10" should probably check that it did in fact receive such a thing, or it might be buggy. In a statically typed language, that check can easily be centralized to wherever an `EvensFrom2To14Without10` value is constructed.
Here's an example in Rust[0]. If the `print_bare` method relies on only receiving even numbers between 2 and 14 excluding 10, then it is buggy, because it isn't checking its input, but the `print_tagged` method is correct without needing to check!
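Roughly, the shape of it (a sketch along those lines, with the check centralized in a constructor):

    // Trusts its caller, so it is buggy if the invariant actually matters.
    fn print_bare(n: u32) {
        println!("{}", n);
    }

    // The invariant is established once, at construction time.
    struct EvensFrom2To14Without10(u32);

    impl EvensFrom2To14Without10 {
        fn new(n: u32) -> Option<EvensFrom2To14Without10> {
            if n >= 2 && n <= 14 && n % 2 == 0 && n != 10 {
                Some(EvensFrom2To14Without10(n))
            } else {
                None
            }
        }
    }

    // Correct without checking: the type carries the guarantee.
    fn print_tagged(n: &EvensFrom2To14Without10) {
        println!("{}", n.0);
    }

    fn main() {
        if let Some(n) = EvensFrom2To14Without10::new(8) {
            print_tagged(&n);
        }
    }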
That's not really expressing a complex type in JavaScript, it's expressing a complex (such as it is) runtime constraint -- which you can do easily in Haskell as well. JS doesn't give you anything Haskell doesn't there.
It's true that to make that a proper, statically-enforced type in Haskell takes some complexity, but if you aren't willing to do that, you could make it a runtime-enforced constraint on a statically-enforced type of integers as easily in Haskell as, in JavaScript, you could make it a runtime-enforced constraint on a value that is not statically enforced to be anything more specific than a valid JS value.
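For instance, a minimal sketch of that in Haskell (the names are mine):

    -- Statically an Int, with the extra constraint enforced at runtime,
    -- centralized in a single "smart constructor".
    newtype EvenSet = EvenSet Int deriving Show

    mkEvenSet :: Int -> Maybe EvenSet
    mkEvenSet n
      | n >= 2 && n <= 14 && even n && n /= 10 = Just (EvenSet n)
      | otherwise                              = Nothing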
I do not know Haskell well enough either. But I didn't really mean "using the type system everywhere". As for your question, from what I've seen you can always go symbolic: define an abstract thing, say twofourteenexceptten, that represents that idea, and use it. Or just use a filter everywhere to ensure that a function requiring a 2and14but10 won't be called on something with other numbers. In that case you're not losing anything compared with a dynamic language.
"But I do worry about the cognition overhead the syntax causes, and I do wonder if there is a "better way"."
I won't claim this is the full answer, but part of the better way is to simply cut away the syntax, revealing it to have been accidental complexity all along, and never a fundamental part of static typing. Even just working in a language with "auto" or "var" or whatever local construct creates a variable based on the type of the expression on the right without you having to puzzle it out yourself makes static typing go from an enormous pain to barely more work than dynamic typing.
I learned C++/Java in school in the 90s and spent most of the 200Xs in the dynamic world precisely because the paperwork of the static languages of the time was absurd. However, working with Go or a modern C# is totally different. (I haven't done C++ with "auto", so I can't speak to that myself, but others can.) In my opinion it makes static typing easy enough that it tips the balance back in favor of it; you put this tiny effort in and you get a ton back out.
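For instance, Go's `:=` removes nearly all of the old annotation paperwork:

    package main

    import "fmt"

    func main() {
        // ":=" infers the type (map[string][]int here); nothing to
        // puzzle out or write down by hand.
        m := map[string][]int{"evens": {2, 4, 6, 8, 12, 14}}
        fmt.Println(m)
    }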
Go's structural typing-ish automatically-satisfied interface is even easier; do you have a concrete class with some method you like, but need to be able to switch out the actual object with something else? Just declare the relevant interface, done. You can even "duck type", after the fact, standard library code that you shouldn't directly modify under pretty much any circumstances, with no hassle. The costs of the static type system are almost (but not quite!) eliminated, and in return you get, well, static types. Weak by P. Snively's standards, sure, but still better than nothing!
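A quick sketch (the interface name is mine; `bytes.Buffer` from the standard library already happens to have a `Len()` method):

    package main

    import (
        "bytes"
        "fmt"
    )

    // Declared after the fact; nothing anywhere says "implements".
    type Lengther interface {
        Len() int
    }

    func report(l Lengther) {
        fmt.Println("length:", l.Len())
    }

    func main() {
        var b bytes.Buffer
        b.WriteString("hello")
        report(&b) // bytes.Buffer satisfies Lengther automatically
    }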
Haskell, of course, well, you fight with the static typer for a few weeks at first, but once you get good at Haskell (and I don't even mean "great", just good) you eventually get to the point where you discover the static type errors are fundamentally correct and trying to tell you something important, at which point you stop hating it for telling you how wrong you are and start loving it for telling you about a problem long before any other language would have.

Personally I think everyone who considers themselves a professional programmer should clock at least six months with Haskell, not for any of the reasons you usually hear, but because it will teach you the rare skill of listening to your code. Static type systems, even the simpler ones such as Go or Java, aren't just slapping your hands because you've failed to satisfy their disconnected, academic ideals... they are saying things, and not just things like "you screwed up this method call" but very often things like "this design is fundamentally flawed, you've failed to account for how this value needs to be communicated to that function". It's important to get to the point that you understand this feedback as more than just the computer being a computer.
This stuff often ends up explained in very academic language, but I have found it to be a brutally, brutally practical skill. And I write "real code" for things that do things on networks at scale, in the cloud, etc.
The nice thing about dynamic types is that they give you the expressive power to code things that would not be allowed (or at least not encouraged) in your type system, even though they might be allowed in a different type system. Back in the day, one notable example of this was parametric polymorphism: it comes "for free" in even the simplest dynamic language but took a while to become mainstream in static languages.
IMO, in an ideal world you would able to choose between dynamic and static verification of types depending on what your program is doing. This might even let us work with richer types than we do now! For example, most typed languages still do dynamic tests for array indexes because doing those statically requires dependent types (which are hard to use and not always worth the trouble). But if we had a language that lets us talk about the array lengths in the type system we would at least be able to give better error messages if something goes wrong, even if we only find out about it at runtime.
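As a taste of what that could look like, here's the standard length-indexed vector encoding in GHC Haskell (a sketch, not a full library):

    {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

    data Nat = Z | S Nat

    -- The length of the vector is part of its type...
    data Vec (n :: Nat) a where
      Nil  :: Vec 'Z a
      Cons :: a -> Vec n a -> Vec ('S n) a

    -- ...so "head of an empty vector" is not even expressible:
    safeHead :: Vec ('S n) a -> a
    safeHead (Cons x _) = x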
I understand everything you are saying (didn't I make that clear in my original comment?). But I think you missed the point: I'm not arguing type systems aren't good. I think they are great. (re-reading - maybe I'm unfair on this. Your point about modern languages like Go and C# being better is a fair one)
I'm arguing that the single-dimensional view the original article had on the speed of implementation vs correctness lacked some useful counter-perspectives. In fact, I think it was a pretty cheap shot, since it didn't recognize it as a critical tradeoff at all.
> Go's structural typing-ish automatically-satisfied interface is even easier
Yes, I think the Go type system has some nice features.
> Haskell, of course, well, you fight with the static typer for a few weeks at first
And that's exactly the cognitive overhead I'm talking about. Some productive programmers barely understand an if statement inside a loop. Using recursion is a doubtful prospect.
I think the idea of people like that fighting a type system for weeks because it's good for them is completely unrealistic.
Note I'm not arguing that it isn't good for them: of course it is. And of course a type system "is telling you things". But people are writing real programs quicker without it.
I do like optional type systems. I think that some combination of optional type systems, test systems and tooling integration could lead to a system than slowly adds types to improve the reliability of programs, without the cognitive load that the traditional upfront typing workflow imposes.
But I'm just speculating about that.
I do know that untyped languages are often more useful than typed languages precisely because of the increased difficulty of programming that type safety brings. I think that tradeoff is disappointing.
> This stuff often ends up explained in very academic language,
What, that software should be correct according to the specification (either a formal one or an informal and implicit one)? Because that's the academic motivation I've found for things like type systems and program specification.
I mean like "Curry-Howard isomorphism" and "state space" and "monad" and such. It makes it very easy to dismiss as academic wankery, which is helped by the fact that, well, yeah, some of it is, honestly. But under that academic wankery is some brilliantly practical stuff that is hard to learn from the communities that are over-focused on "practicality".