What's even worse, when typing is treated as an indisputable virtue (and not a tradeoff), pretty much every team starts sacrificing readability for the sake of typing.
And lo and behold, they end up with _more_ design bugs. And the sad part is that they will never even recognize that too much typing is to blame.
Nonsense. You might consider it a tradeoff, but it's a very heavily skewed one. Minor downsides on one side, huge upsides on the other.
Also I would say type hints sacrifice aesthetics, not readability. Most code with type hints is easier to read, in the same way that graphs with labelled axes and units are easier to read. They might have more "stuff" there which people might think is ugly, but they convey critical information which allows you to understand the code.
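To make that concrete, here's a toy sketch (all names invented): the hints are the axis labels.

    from dataclasses import dataclass

    @dataclass
    class Record:
        name: str
        score: float

    # Unlabelled axes: the reader has to guess what `records` holds
    # and what comes back.
    def top_names(records, threshold):
        return [r.name for r in records if r.score > threshold]

    # Labelled axes: the signature documents the data flow by itself.
    def top_names_typed(records: list[Record], threshold: float) -> list[str]:
        return [r.name for r in records if r.score > threshold]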
That has not been my experience in the past few years.
I've always been a fan of type hints in Python: the intention behind them was to contribute to readability, and when developers kept that intention in mind, they worked really well.
However, with the release of mypy and TypeScript, engineering culture largely shifted towards a "typing is a virtue" mindset. Type hints are no longer a documentation tool; they are a constraint-enforcement tool. And that tool is often at odds with readability.
Readability is subjective and ephemeral; type constraints (and IntelliSense) are very tangible. Naturally, developers fail to find a balance between the two.
I write a lot of typescript and rust. In those languages, when I want to understand some code I haven’t seen before, I always start by reading the types. Understanding what and how the data moves through a system is usually key to understanding everything. And usually I lean heavily on my editor for this - in typescript there’s a lot of value in the simple act of hovering over values to see what type they are.
I’m working with a medium-size Python program at the moment. It’s mostly written by someone smart but early career, and they’ve made a rabbit warren of classes and mixins that get combined in complex ways. I’ve been encouraging them to add types - and wherever those types exist, the code becomes 100% more legible to my code editor, and ultimately to me.
I don’t think I’d bother with types in Python for small programs. But my experience is that good type hints lay out a welcome mat to anyone who comes along later to figure the code out. And honestly, a lot of the time that person is the original author, just months or years after the code was written.
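As a rough sketch of what that welcome mat looks like in practice (assuming a checker like mypy or pyright; the loader here is hypothetical):

    from typing import reveal_type  # Python 3.11+; checkers also accept it without the import

    def load_scores(path: str) -> dict[str, float]:
        return {}  # hypothetical loader; the signature is the point here

    scores = load_scores("data.csv")
    reveal_type(scores)  # the checker reports: dict[str, float]
    # ...which is exactly what hovering over `scores` shows in the editor.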
> Writing software without types lets you go at full speed. Full speed towards the cliff.
Isn't it strange that back when Python (or Ruby) didn't even have type hints (not type checkers, type hints!), it would easily outperform pretty much every heavily typed language?
Somehow when types weren't an option we weren't going towards the cliff, but now that they are, not using them means jumping off a cliff? Something doesn't add up.
It's because the nature of typing has changed drastically over the last decade or so in well-known languages: from C++/Java's `FancyObject *fancyObject = new FancyObject()` (which was definitely annoying to type, and was seen as a way to "tell the compiler how to arrange memory" rather than "how do we ensure constraints hold?") to modern TypeScript, where large well-typed programs can be written with barely a type annotation in sight.
There's also a larger understanding that as programs get larger and larger, they get harder to maintain and more importantly refactor, and good types help with this much more than brittle unit tests do. (You can also eliminate a lot of busywork tests with types.)
Large programs are harder to maintain because people don't have the balls to break them into smaller ones with proper boundaries. They prefer incremental bandaids like type hints or unit tests that make it easier to deal with the big ball of mud, instead of not building the ball in the first place.
No it hasn’t? The C++ type system has hardly changed (until concepts), and it's one of the most powerful available.
A certain generation of devs thought types were academic nonsense and then relearned the existence of those features in other languages. Now they are zealots about using them.
I think the point is that in newer languages like typescript, the price paid for static typing is lower because type inference does so much of the leg work. You get all the benefits of static typing, and the cost is usually tiny - you just need to define your types (a valuable exercise regardless) and add them to function signatures.
We’ve come a long way from the C++ or Java I wrote when I was young, where types were named and renamed constantly. As I understand it, even C++ has the `auto` keyword now.
    #include <string>

    // C++20 abbreviated function template: each `auto` parameter becomes
    // its own template parameter, and the return type is deduced.
    auto func(auto x, auto y) -> auto {
        return x + y;
    }

    auto main() -> int {
        auto i = func(1, 2);                               // func<int, int>, returns int
        auto s = func(std::string("a"), std::string("b")); // returns std::string
    }
`int` is required as a return type from `main`, but everything else is inferred. This works because `func` becomes a template function where each parameter type is a separate template type, so you get compile-time duck typing. It also works with concepts (e.g. `std::integral auto x`).
It's quite neat, but I don't think anyone actually writes code this way, except for lambdas.
Every single typed system I have ever worked on, no matter how poorly designed, has been easier to alter than the vast majority of the Ruby, Python, Perl, PHP, and Elixir code I've worked on.
Inserting a library that wraps an existing one to add new features has been a nightmare in every statically typed language I've used, including cases where it's virtually impossible because you'd need the underlying library to understand the wrapper type in its methods.
In Python (with duck typing), that’s a complete non-issue.
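For instance, a toy sketch (names invented): anything with the right methods can stand in for the library's client, and the library never has to know the wrapper type exists.

    class Client:  # pretend this comes from a third-party library
        def get(self, url):
            return f"response from {url}"

    class LoggingClient:  # our wrapper: same shape, extra behaviour
        def __init__(self, inner):
            self.inner = inner

        def get(self, url):
            print(f"GET {url}")
            return self.inner.get(url)

    def fetch_all(client, urls):  # "library code": only cares that .get() exists
        return [client.get(u) for u in urls]

    fetch_all(LoggingClient(Client()), ["https://example.com"])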
Can you give an example? I think part of the problem is that mixins and such are so hard to do in most statically typed languages that programmers just don’t code things that way.
I see your point - I certainly find myself reaching for clever high level patterns less in typescript than I do in JavaScript because complex typing can get in the way. But also, programs that make heavy use of metaprogramming are often, also, harder to read and debug. There’s something very nice and straightforward about explicit, concrete types.
I'm not the person you asked, but I had an unpleasant experience with TypeScript recently.
I used an HTTP request library in a Nuxt.js app (probably Nuxt's native one), and I spent too much of my time conjuring request and response types that would please the type checker. It was extremely frustrating because the code would work as JavaScript, but the compiler wouldn't accept it because of the typing.
I can't give you the details because I'm not at my computer now, but the type was a mix of HTTP verbs and the structure of the JSON response. I gave up after a while and rewrote the code using fetch and no types. If they stand between me and the final result, they can go down the drain.
Sounds like a bug in the type definitions. That's an unfortunate consequence of TypeScript being glued to JavaScript: if you import a package that is authored in JavaScript, there's no guarantee that the type definitions are written correctly or kept up to date. Sometimes they don't exist at all.
It doesn't happen too often, but it's definitely annoying.
In cases like this, the easiest way is to just add `as any` to your expression, which essentially turns off type checking for that expression. Maybe that's what you did?
fetch('foo.json', {...} as any)
I don't think there's anything wrong with this. Using typescript types for only 95% of your code rather than 100% still provides a lot of value in my opinion.
You can also ctrl+click on functions like this and read the actual types they're expecting.
Something like that: I rewrote it with fetch and then parsed resData myself. The JSON in the response is still type-checked, but I don't have to fight with the HTTP library anymore. I can't remember the exact type, as it never made it into a commit.
Worth mentioning that I've also had the opposite experience: wrapping/using a library in vanilla JS, having the type signatures change and break unexpectedly with an update, and only finding out when parts of the app suddenly broke.
It can be slightly laborious to manually wrap a bunch of operations so you can override something, but it's more of an annoyance/inefficiency than something that adds cognitive overhead. That said, in many languages (e.g. structurally typed ones like TS) it should be a non-issue.
> back when Python (or Ruby) didn't even have type hints (not type checkers, type hints!), it would easily outperform pretty much every heavily typed language?
No it didn't. It outperformed Java 1.2, and people thought that Java 1.2 was what a typed language looked like. Python always sucked compared to OCaml (let alone OCaml with a decent IDE), but OCaml had a weird syntax and the documentation was in French, so no one cared. Now that we finally have a copy of OCaml with curly braces and a critical mass of obnoxious fanboy hype, more people have noticed.
I would expect the dynamic-typing crowd to embrace microservices first, given how everybody says that dynamic codebases are a huge mess.
Regardless, to me enterprise represents legacy, bureaucracy, incidental complexity, heavy typing, and stagnation.
I understand that some people would like to think that heavy type-reliance is a way for enterprise to address some of its inherent problems.
But I personally believe that it's just another symptom of enterprise mindset. Long-ass upfront design documents and "designing the layout of the program in types first" are clearly of the same nature.
It's no surprise that TypeScript was born at Microsoft.
You want your company to stagnate sooner? Hyperfixate on types. Now your startup can feel the "joys" of enterprise even at the seed stage.
Eh. The amount of work it takes to specify your types in a typescript program is tiny. Type inference does almost all of the work. And the benefit of that work is largely felt in maintenance & onboarding, since the code is easier to read when you’re new and come back to later. Refactoring large JavaScript programs is a nightmare.
The real enterprise death doesn’t come from types. It comes from tasteless overuse of classes, especially once you have a complex web of long-lived objects that all reference each other. Significant portions of code in these codebases end up dedicated to useless tasks like lifecycle management instead of the actual work of your application. It’s kind of the code version of corporate bureaucracy: classes everywhere devoted to doing BS jobs.
It’s not complicated, people. Just write the code that tells the computer what you want it to do. No more. Unnecessary encapsulation and premature abstraction will kill your velocity dead.
Proper engineering isn't that much of a concern when you have 0 customers, and by the time you have some it's too late to change.
Besides, nobody is claiming that it's impossible to build a successful product with dynamic typing. It's just not as good. You can build a successful product with zero comments in your codebase; that doesn't mean it's a good idea.
Again, the evidence (as limited as it is) suggests otherwise. You are more likely to succeed if you go with a dynamic language and skip the "proper engineering". This was widely accepted before the type-checker era, and I see no reason why it would be different now. Utilize the type checker when it's free, but don't waste time on type puzzles.
"Proper engineering" doesn't get you to product-market fit faster. All it does is tickle your ego.
A lot of the old robust code tended to have guard statements like “if not isinstance(…): raise ValueError”, which does a great job of surfacing mistakes before they can compound too much. We all wrote scads of production Python over the decades before typing caught on. I think it’s much easier to do a good job of it now. Having your IDE yell at you before you’ve even finished saving the file sure beats running it and hoping for the best.
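Roughly the shift from the first style to the second (a minimal sketch):

    # The old defensive style: fail fast at runtime.
    def mean(values):
        if not isinstance(values, list):
            raise ValueError(f"expected list, got {type(values).__name__}")
        return sum(values) / len(values)

    # The modern equivalent: mypy/pyright (and the IDE) flag mean_typed("oops")
    # before you've even run the file.
    def mean_typed(values: list[float]) -> float:
        return sum(values) / len(values)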
This strategy is usually presented as a way for billionaires to avoid paying taxes on their wealth, but that's a blatant manipulation. The reality is that it allows you to take on a bit of risk to potentially save taxes on day-to-day expenses. Which, even for billionaires, are not high enough to have a noticeable impact on society.
Nobody is going to take the loan described in the reddit example, since even a minor market fluctuation would trigger a margin call and cause you to lose all the assets used as collateral (and pay taxes on them).
The higher the loan amount, the higher the risks. The longer you have left to live, the higher the risks.
Since you have to commit to this strategy until the end of your life for it to work, you're essentially betting that your asset will always appreciate faster than interest compounds on the loan. Making a bet like that for the rest of your life is quite the gamble (unless you're planning to die in the near future).
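Back-of-the-envelope version of that bet (all numbers invented for illustration):

    # Purely illustrative: a $10M asset vs. a $1M loan, both compounding.
    asset, loan = 10_000_000.0, 1_000_000.0
    growth, rate = 0.07, 0.06  # assumed asset growth vs. loan interest

    for year in range(1, 31):
        asset *= 1 + growth
        loan *= 1 + rate
        if loan > 0.5 * asset:  # stand-in for a broker's margin threshold
            print(f"margin call territory in year {year}")
            break
    else:
        print(f"after 30 years: asset ${asset:,.0f}, loan ${loan:,.0f}")

Flip growth below the loan rate (say 2% vs 6%) and the ratio slowly compounds against you; the bet only works as long as growth stays ahead.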
This strategy is only viable if you use it for a tiny fraction of your wealth, so it can potentially be used to fund your day-to-day expenses. But it's still a lifelong gamble. What if you happen to die during a market crash?
If an asset was worth $50k in 1950 and is now worth $100k, it would feel really weird when capital gains tax is collected on an asset that actually lost more than 80% of its value (after inflation) over the years.
First of all, this is a really dumb point. If it were so easy to find good assets, why not buy those, take a loan against them like the reddit post suggests, buy even more assets with that money, take another loan, and just keep going to generate an infinite amount of money? The only thing stopping you is admitting that every asset carries risk.
But whether it's on you or not is beside the point. Being taxed on top of a loss is not a good thing.
Also, there are plenty of reasons not to sell a depreciating asset: for example, a CEO wanting to maintain control over the company, or simply not wanting to send a bad signal to the public by selling their shares (because that would depress the asset even more).
A better argument would be to adjust basis for inflation instead of resetting it.
> What you're doing by breaking things into functions is trying to prevent it's eventual growth into a bug infested behemoth
Not every piece of code grows into a bug-infested behemoth. A lot of code doesn't grow for years. We're biased to think that every piece of code needs to "scale", but the reality is that most of it doesn't.
Instead of trying to fix issues in advance, you should build a culture where issues are identified and fixed as they come up.
This piece of code will be a pain to maintain when the team gets bigger? So fix it when it actually gets bigger. Create space for engineers to talk about their pains and give them time to address those. Don't assume you know all their future pains and fix them in advance.
> In my experience, nearly every case where an area of a code base has become unmaintainable - it generally originates in a large, stateful piece of code that started in this fashion
In my experience it gets even worse with tons of prematurely abstracted functions. Identifying and fixing large blocks of code that are hard to maintain is way easier than identifying and fixing premature abstractions. If you have to choose between the two (and you typically do), you should always choose large blocks of code.
The great thing about big blocks of code is that their flaws are so obvious. Which means they are easy to fix when the time comes. The skill every team desperately needs is identifying when the time comes, not writing code that scales from scratch (which is simply impossible).
> For example, lets say I gave a simple coding puzzle (think leetcode) to 10 python engineers. I would get at least 8 different responses. A few might gravitate around similar concepts but they would be significantly different.
This is true for any language. Arguably, what's different about Python is that the more senior the engineers you're interviewing, the more likely their solutions are to converge.
Just because something is in the Zen of Python doesn't mean it automatically gets followed by every Python engineer. It's Zen, after all: you don't automatically get enlightened just by reading it.
I want to increment some counter on the webpage. Which approach feels natural?
No one wakes up saying "please let me mutate simple state with function calls".