It's good to have error recovery built in, but that doesn't mean one should take less care to write correct code.
I believe static typing is simply a must for today's high-reliability software, and going with a dynamic language seems short-sighted.
The remaining dynamic languages are mostly JavaScript and Python. The former survives because JavaScript is mostly used for UI code and people expect less quality from GUIs; the latter because it is the status quo for scientific computing and a Swiss Army knife, since we have so many libraries across so many domains.
Dynamic languages are excellent for prototyping, where you keep your constraints in your head and errors are not critical. There was once an argument that dynamic languages allowed for more succinct code, but nowadays modern languages can be both correct and succinct, as we have learned to exploit theorems about computation and type theory.
[/static hat off]
We are replacing our entire Python backend with Go, and indeed static typing is a good way to give your data a well-defined structure.
Regarding reliability, I would prefer Elixir, for three reasons:
#1 Functional programming. In Go it's really easy to make mistakes by sharing object references over channels (see the first sketch after this list). And the funny thing is that it's not so easy to make a deep copy of an object.
#2 Goroutine leaks and supervision are really difficult to keep track of (see the second sketch below). How do you manage them in production? Thanks to Erlang, Elixir provides everything you need to monitor your green threads. You can even hot-fix issues.
#3 BEAM preemption is clearly a strong winning point.
Bonus point: Go's scalability is limited to a single server. In Elixir you can remove server boundaries.
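A minimal sketch of the aliasing pitfall from #1 (type and field names made up for the example). Only the pointer travels over the channel, so sender and receiver share one object, and a late mutation on the sending side leaks through; with a concurrently running receiver this is a data race:

    package main

    import "fmt"

    type Order struct{ Items []string }

    func main() {
        ch := make(chan *Order, 1)
        o := &Order{Items: []string{"widget"}}
        ch <- o // only the pointer is sent, not a copy of the Order

        // The sender keeps mutating the object it already "handed off".
        o.Items = append(o.Items, "oops")

        got := <-ch
        fmt.Println(got.Items) // [widget oops]: the receiver sees the mutation
    }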
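And the goroutine leak from #2, in its classic form (function name invented for the sketch): the worker blocks forever on a send once the caller has timed out and stopped listening. In production you would typically watch runtime.NumGoroutine or the pprof goroutine profile to spot this:

    package main

    import (
        "fmt"
        "runtime"
        "time"
    )

    // lookup stands in for some slow query; the channel is unbuffered,
    // so the send inside the goroutine needs someone to be reading.
    func lookup() chan string {
        ch := make(chan string)
        go func() {
            time.Sleep(time.Second) // pretend this is a slow RPC
            ch <- "result"          // blocks forever once the caller gives up
        }()
        return ch
    }

    func main() {
        select {
        case r := <-lookup():
            fmt.Println(r)
        case <-time.After(100 * time.Millisecond):
            // the caller moves on, but the worker goroutine is now leaked
        }
        time.Sleep(200 * time.Millisecond)
        fmt.Println("goroutines:", runtime.NumGoroutine()) // 2: main plus the leaked worker
    }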
For a new product, I would definitely bet on Elixir.
No, static typing just gives a false sense of reliability to people who are not experts in reliability. There is even research showing no correlation between static and dynamic typing and bug rates [1]. People tend to ignore it, though.
From the conclusion in that paper: "The data indicates functional languages are better than procedural languages; it suggests that strong typing is better than weak typing; that static typing is better than dynamic; and that managed memory usage is better than unmanaged. Further, that the defect proneness of languages in general is not associated with software domains. Also, languages are more related to individual bug categories than bugs overall."
> No, static typing just gives a false sense of reliability
That is not true; there are real benefits, and there are classes of bugs that can be caught before runtime. Historically the drawback was that catching these bugs was computationally infeasible for some programs, but nowadays the situation is much better.
IMHO the only positive aspects of dynamic languages are their syntactic beauty and their large existing ecosystems.
I admire Elixir's and Erlang's approach to concurrency and think this model (don't share data, and engineer for the failures that will happen in distributed systems) is great, but it could be combined with static typing.
Dynamic languages tend to let you express yourself more easily and cleanly with less code. Paul Graham wrote years ago about how Lisp lets you write more maintainable code by letting you express yourself close to the problem domain. The power of DSLs is that it's easy to read and write code for that particular sort of task.
Another way to state the advantage of expressive languages is that the less code you have to write, the fewer bugs you will have. And the less code written, the less code needs to be maintained, and the less needs to be read to understand what's going on.
Of course this all depends on writing clean code, not obfuscated hacks or undocumented messes with needless complexity. But that goes for all programming.
There have been some advances in language theory since Lisp, though.
In particular, modern type inference allows you to practically eliminate type declarations and approximate the terseness and expressiveness of dynamic languages. Recent languages such as Crystal and Nim have demonstrated that this is entirely feasible. (I mentioned Go earlier, but Go is, if anything, regressive when it comes to language theory, and not a good example of how to do things right.)
The Lisp example is only convincing if you've not seen what a really impressive type system can do; Haskell and the ML family come to mind, and Rust and Swift both seem promising in that regard, as does Crystal.
"Less code means less bugs" is a powerful meme, but it's a misleading metric in the context of type systems. Dynamic languages necessarily require less code since they eliminate information that otherwise allows you to fully reason at compile time about the program's future execution, thereby eliminating the ability to guarantee its safety; and the opposite is also true: "less information means more bugs".
Ultimately, it's less logic we want: fewer decisions that can go wrong. In Lisp, or any other dynamically typed language, you can pass a variable of the wrong type to a function and won't know it's wrong until you run the program; how did fewer lines of code help you there? The equivalent program in a statically typed language might be slightly longer (though it could also be the same size), but the additional type declarations have made your program less buggy, not more.
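To make that concrete, a trivial Go sketch (function and values invented for illustration); the mistake never survives compilation:

    package main

    import "fmt"

    func area(width, height int) int { return width * height }

    func main() {
        fmt.Println(area(10, 5))
        // area("10", 5) would be rejected before the program ever runs:
        //   cannot use "10" (untyped string constant) as int value in argument to area
    }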
> In particular, modern type inference allows you to practically eliminate type declarations and approximate the terseness and expressiveness of dynamic languages.
Not of dynamic languages.
> require less code since they eliminate information that otherwise allows you to fully reason
The idea of Lisp is to have a lot of information at runtime/development time. One develops running software, not a program as text. The running software gives the feedback.
These discussions are thirty years old now. In every era since, people have seen the type systems of the day as advanced and modern. Haskell itself is actually quite old by now...
Lisp and Haskell are simply used in completely different ways to develop software. I would also bet that there are much larger Lisp programs than Haskell programs in use. A large Lisp program can be several million lines of code, like some Lisp-based CAD applications.
> if your goal is to ship code that was never executed.
That's cute, but no. There's a lot of code that only gets executed once in a blue moon, and if my experience[1] is anything to go by, this is where types really shine. (Before you say "tests": the number of code bases I've seen in dynamically checked languages with no meaningful test coverage is shocking. That, and tests can only show the presence of bugs.)
Types are also hugely valuable as compiler-checked API documentation, especially if side effects can be documented, as in e.g. Haskell or Idris.
(I'm obviously assuming non-trivial type inference, as in e.g. Haskell or OCaml. If we're talking about anemic type systems like Java's or C#'s, then the trade-off becomes a lot harder to justify, at least for me: there you can indeed strip out a non-trivial amount of "ceremonial" code by going dynamic, and the type system gives you very few useful guarantees in return, not even non-nullness and such.)
[1] We're all trading anecdotes here, let's be honest about that.
But the whole point of type systems is that you can theoretically verify your code at a high level (though very few computer languages are able to approximate anything like a mathematical proof).
I wrote about this in another comment, in the context of writing tests.
In theory, practice is the same as theory. In practice, it isn't. I've seen only toy examples of some non-trivial program properties being encoded in type systems. It nowhere constituted proof of an entire program. Nobody actually does that. A lot can remain wrong in code which passes type checks. There is no "if it compiles, ship it". Not to mention that compilers can have bugs, and that hidden performance and resource problems can lie latent in high level languages.
Or the property 'this variable is never used after a free' in a language with manual memory management (Rust encodes this) or 'this program is free from data races' (also Rust)?
No one is arguing that strong static type systems make it impossible to write programs with bugs, but eliminating whole classes of bugs at compile time certainly makes it easier.
"less information means less bugs" is not the same thing as "less code means less bugs". Your entire argument there is a non-sequitur, no one who says "less code means less bugs" is talking about the mechanical aspect of typing out code.
They're talking about a combination of using battle tested code and writing at a higher level of abstraction.
I've never understood what people actually mean when they say things like "dynamic languages let you express yourself more easily."
I tend to find it significantly harder in dynamic languages. Yeah, there is way more flexibility in small decisions (like, right now, is it easier to return a string or an integer?), but it's not free. Your little decision interacts, directly or otherwise, with potentially the entire rest of the program that will ever exist. There is probably a best choice, and it's hard not to try to find it every single time. Doing the easiest thing right now might mean you need to go and adjust many other things. To know what is easier overall, you need to keep the rest of the program in mind.
I find dynamic languages kind of exhausting to write anything but one-off scripts in. I can't make good choices without holding far more of the program in my head than I would in a static language. Sometimes I don't even realize I'm making such a decision, because I don't have a correct or full view of the rest of the program. To make matters worse, those errors won't even show up as a problem at the decision site.
I feel like I express myself much more easily in a language with a strong static type system. I can write down clearly exactly what the model is, and the compiler and types let me know when I try to stray from it. From there I can decide whether it's better to adjust what I'm doing in the small or in the large.
Because languages like Lisp, Smalltalk, and Ruby give you greater control over bending the language to match the problem domain more closely, instead of making you jump through extra hoops to express what's needed. The result is a more readable program, because it's expressed in terms of what the program is trying to accomplish.
The entire history of computing is abstraction away from the machine, offloading as much of the work as possible to the machine so that humans can focus on solving problems instead of worrying about lower-level details. Dynamic languages tend to be better at that.
Now if Haskell is the comparison, then maybe not. But Haskell has an advanced type system with excellent composability, so it's quite capable of expressing high-level abstractions and domain-specific code.
That's the general idea, but clearly not everyone agrees. Or perhaps, other concerns are considered more important.
It's also commonly accepted — and, in my opinion, true — that dynamically typed languages move the burden of verifying your program to tests. I waste a lot more time in Ruby tests throwing bad data at my code than in Go, whereas in Go I can concentrate on real use cases, because static typing eliminates a whole class of abuse. In other words, the language forces me to do my own type validation at runtime and in tests.
Moreover, one big reason why static languages are more robust is that static typing changes one's entire approach to handling data. For example, in Ruby it's common to just do JSON.parse(raw_string) and then pluck out whatever it parsed (and again, the burden is on the programmer to verify that the parsed data structure is the correct one). The equivalent in Go is to declare a schema in the form of a compile-time struct annotated with the field names that you expect in the input; the unmarshaler will catch type errors and blow up if something goes wrong (sketched below). In other words, there's a completely different methodology: when Go has parsed your data, you know all the types are satisfied, whereas Ruby is stuck with duck typing. Maybe someone clever has written a strict JSON library for Ruby, but I don't know of one. Ruby libraries tend to be very un-strict.
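A minimal sketch of that schema-as-struct approach (field names invented for the example). The unmarshaler refuses data of the wrong shape instead of handing back whatever the JSON happened to contain:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type User struct {
        Name string `json:"name"`
        Age  int    `json:"age"`
    }

    func main() {
        raw := []byte(`{"name": "Ada", "age": "not a number"}`)
        var u User
        if err := json.Unmarshal(raw, &u); err != nil {
            // json: cannot unmarshal string into Go struct field User.age of type int
            fmt.Println("rejected:", err)
            return
        }
        fmt.Println(u.Name, u.Age) // on success, the fields have the declared types
    }

(To be fair, unknown keys are still ignored by default; for those you have to opt in with json.Decoder's DisallowUnknownFields. Type mismatches, though, fail out of the box.)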
The dark side of duck typing is that anything you don't explicitly recognize can get silently ignored. If you have a method with "def foo(options = {})" and someone calls "foo(non_existent: 42)", typically nobody will complain. This sort of thing is why ActiveSupport has "assert_valid_keys", and why Ruby now has proper keyword arguments (which only solve the problem of valid keys, not valid data). This stuff is hard to catch and leads to code rot, which is a big maintenance problem, exacerbated by the fact that if you change or remove an argument like this, you won't know anything fails until you run the code, and "grep" is the only tool you have for finding out who actually uses it. Contrast the Go sketch below.
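For contrast, the closest Go idiom to an options hash is an options struct (names invented here). A misspelled or removed field is a compile error, so grep isn't your only refactoring tool:

    package main

    import "fmt"

    // FooOptions is a hypothetical stand-in for Ruby's options = {}.
    type FooOptions struct {
        Retries int
        Verbose bool
    }

    func foo(opts FooOptions) {
        fmt.Println("retries:", opts.Retries, "verbose:", opts.Verbose)
    }

    func main() {
        foo(FooOptions{Retries: 3})
        // foo(FooOptions{NonExistent: 42}) would not compile:
        //   unknown field NonExistent in struct literal of type FooOptions
    }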
I understand where you are coming from; there are certainly cases where there is too much flexibility, and there is a general need for a better quality culture: always writing unit tests with good coverage, and so on. Not all languages do this well, and many could benefit from some extra optional compiler strictness. However, bug rates per line of code remain roughly comparable between very expressive dynamic high-level languages and lower-level ones, which makes quite a big difference in the total number of bugs for problems of the same complexity. Basically, having to write, say, 3x less code in something like Perl than in Go to solve the same problem leaves you with 3x fewer bugs, and takes a lot less time.