1. The concept of the nation state is historically very young, a few centuries old at best in its modern form, and that's when most modern nation states formed: in the last few centuries. That's true anywhere from Asia to the Balkans to the Americas and Africa.
2. Italy has been seen by Italian populations as one entity for millennia, essentially since the Roman empire, when all the Italian tribes wanted to be elevated to the same status of citizenship as the Romans. And there's an overabundance of historical proof of it, which you'll find across centuries and centuries of Italian literature and diplomacy. Every single major writer in Italian history since the Middle Ages has written openly about Italia; Dante Alighieri dedicated an entire canto in the early 14th century to the woes of the Italian people ("Ahi serva Italia").
3. You are absolutely right in stating that the major promoters of Italian unification have been the elites, but that's common to most such historical events.
> Milan and Napoli would both be happier not having to deal with each other.
I'm Italian and this is absolutely false.
While any political and historical event will have people saying that it would have been better if X or Y had stayed Z, the overwhelming majority of Italians want a united Italy.
Federalist parties never carried much political weight (and the major one, Lega, won the most votes precisely when it dropped that narrative entirely), and separatist ones get a negligible number of votes.
You could say the same about Germany, where there are also huge regional differences (even bigger ones, due to religion).
It's very hard to imagine European and world history without a united Italy.
The success of Italian unification was a major source of pressure on Prussian elites (albeit not the primary one, which was the rivalry with Austria) [1].
We look at history at a very abstract level, but in 1861/62 Prussian newspapers were full of "look at what Cavour achieved in Italy, unlike that incompetent Bismarck".
I admit I'm one of those students who never used Racket in a non-academic setting (but mostly because I needed to contribute to already-existing projects written in different languages), and I was taught Racket by one of its main contributors, John Clements at Cal Poly San Luis Obispo. However, learning Racket planted a seed in me that would later grow into a love of programming languages beyond industry-standard imperative ones.
I took a two-quarter series of classes from John Clements: the first was a course on programming language interpreters, and the second was a compilers course. The first course was taught entirely in Racket (then called DrScheme). As a guy who loved C and wanted to be the next Dennis Ritchie, I remember hating Racket at first, with all of its parentheses, and feeling restricted by immutability and by needing to express repetition using recursion. However, we gradually worked our way toward building a Scheme meta-circular evaluator. The second course was language-agnostic. Our first assignment was to write an interpreter for a subset of Scheme. We were allowed to use any language. I was tired of Racket and wanted to code in a much more familiar language: C++. Surely this was a sigh of relief, right?
It turned out that C++ was a terrible choice for the job. I ended up writing a complex inheritance hierarchy of expression types, which could have easily been implemented using Racket's pattern matching capabilities. Additionally, C++ requires manual memory management, and this was before the C++11 standard with its introduction of smart pointers. Finally, I learned how functional programming paradigms make testing so much easier, compared to using object-oriented unit testing frameworks and dealing with mutable objects. I managed to get the project done and working in C++, but only after a grueling 40 hours.
I never complained about Racket after that.
In graduate school, I was taught Scala and Haskell by Cormac Flanagan, who also contributed to Racket. Sometime after graduate school, I got bitten by the Smalltalk and Lisp bugs hard. Now I do a little bit of research on programming languages when I'm not busy teaching classes as a community college professor. I find Futamura projections quite fascinating.
I'm glad I was taught programming languages by John Clements and Cormac Flanagan. They planted seeds that later bloomed into a love for programming languages.
C++ is one of my favourite languages, and I got into a few cool jobs because of my C++ knowledge.
However, given the option, I would mostly reach for managed compiled languages as a first choice, and only reach for something like C++ if really, really required; even then, probably as a native library that gets consumed, rather than 100% pure C++.
I didn’t know you liked C++. I’ve been reading your posts, and your advocacy of the Xerox PARC way of computing, for a few years now. I’ve found that most Smalltalkers and Lispers are not exactly fond of C++. To be fair, many Unix and Plan 9 people are also not big C++ fans, despite C++ also coming from Bell Labs.
Back when C++ was becoming famous, my favourite programming language was Object Pascal, in the form of Turbo Pascal, having been introduced to it via the TP 5.5 OOP mini booklet.
Shortly thereafter Turbo Pascal 6 was released and I got into Turbo Vision, followed the next year by Turbo Pascal for Windows 1.5 on Windows 3.1.
I was a big Borland fan, so when you bought the whole stack it was Object Pascal/C++; naturally C was there just because all C++ vendors started out as C vendors.
In Windows and OS/2 land, C++ IDEs shared a lot with Smalltalk and Xerox PARC ideas in developer experience; it wasn't the vi + command line + "debuggers are for the weak" kind of experience.
See Energize C++, as Lucid was pivoting away from Common Lisp, with Cadillac being what we would call an LSP nowadays, where you could do incremental compilation at the method level and hot reload.
You're right: although C++ was born on UNIX at Bell Labs, there is that point of view, and it's also a reason why I always had much more fun with C++ across Mac OS, OS/2, Windows, BeOS, and Symbian, with their full-stack frameworks and IDE tooling.
However, with time I moved to managed application languages, where it is enough to make use of a couple of native libraries if really required, and that is where I still reach for C++.
that's an often repeated misconception about lisps.
lisps are pretty good at low-level programming, but then you'll need to make some compromises like abandoning the reliance on the GC and managing memory manually (which is still a lot easier than in other languages due to the metaprogramming capabilities).
there are lisps that can compile themselves to machine code in 2,000-4,000 LoC altogether (i.e. compiler and assembler included; https://github.com/attila-lendvai/maru).
i'm not saying that there are lisp-based solutions that are ready for use in the industry. what i'm saying is that the lisp language is not at all an obstacle for memory-limited and/or real-time programs. it's just that few people use them, especially in those fields.
and there are interesting experiments for direct compilation, too:
BIT: A Very Compact #Scheme System for #Microcontrollers (#lisp #embedded)
http://www.iro.umontreal.ca/~feeley/papers/DubeFeeleyHOSC05....
"We demonstrate that with this system it is clearly possible to run realistic Scheme programs on a microcontroller with as little as 3 to 4 KB of RAM. Programs that access the whole Scheme library require only 13 KB of ROM."
"Many of the techniques [...] are part of the Scheme and Lisp implementation folklore. [...] We cite relevant previous work for the less well known implementation techniques."
People always point this out as a failure, when it is the contrary.
A programming language being managed doesn't mean we need to close the door to any other kind of resource management.
Unless it is something hard real-time (and there are options there as well), we get to enjoy the productivity of high-level programming while at the same time having the tools at our disposal to do low-level systems stuff, without having to mix languages.
I use it professionally. My favorite is its seemingly complete lack of bad behavior:
"3" + 1 is neither "4", "31", nor 4. It's illegal.
0 is not false (a behavior that, in other languages, causes endless confusion with filters and &&s).
Loops don't leave a variable captured in a closure set to its final value by the time the closure is eventually called.
And some positives:
Immutable/functional is the default, but mutability is easy too.
Nice optional, keyword, and variable arity support.
Straightforward multithreading, semaphores, shared state, and unshared state.
Excellent module system:
- renames both in and out, including prefixes, all applied to arbitrary scopes of identifiers (I may be using inaccurate terminology)
- nested submodules
- automatic tests and/or "main" submodules
.....etc.......
If I could be granted a wish, though, it would be for nice struct syntax, but I think that's in Racket's successor Rhombus; I haven't personally tried it yet.
I also sometimes wish it was slightly Haskell-ier in various ways, as did the talented individual who created Hackett.
If I were to guess why it's not used, it's because it's not used, which has a kind of downward-spiral force thing going on with it. If you're a random guy in charge of 200 dudes at BigCo, your first thought probably isn't "We should rewrite this whole thing in Racket!", it's probably more like "We should fire everyone and have Claude rewrite our codebase into Rust!" and tell your boss you saved 200*0.5M a year and ask for a commensurate bonus. But if you're solo and in charge of designing, implementing, and maintaining a system with 1 or 2 people over the next 20 years, you can use whatever language you want, and Racket's a pretty good choice.
University is there to open people's horizons, to teach you how to learn, to see computing systems in action that most people in programming bootcamps never deem possible unless they are curious enough to learn about computing history.
Sometimes it takes a couple of years before a seed grows. I for one had a professor who said: "I am not here to teach you C or Java. I am here to teach you computer programming," and then went on to take us on a tour through various paradigms, including Prolog, back then DrScheme (which turned into Racket), C, Java, and Python. At the time I didn't understand Scheme at all. I didn't understand the idea of passing a function as an argument, so deeply rooted in the imperative world was I. But a couple of years later, I came upon HN and comments mentioning SICP ... Didn't that professor teach us something about that? What if I took a look and started learning from this book everyone is recommending?
And there it was. I worked through approximately 40% of SICP's exercises and gained a very solid grasp of Scheme and Racket, and for any hobby project I would take out Racket and try to build it. Along the way I learned many things that I would still not know today had I stuck with only mainstream imperative languages. I wouldn't be half the computer programmer that I am today without going the SICP and Scheme way. I also worked through The Little Schemer. What an impressive little book it is!
So it is far from what you claim. In fact even a little exposure to Scheme once upon a time can make all the difference.
Everyone gets to choose which language they use for their personal projects.
Where are all the Racket personal projects?
N.B. I say this as someone who personally contributed small fixes to Racket in the 90s (when it was called mzscheme) and 00s (when it was called PLT-Scheme).
I view Racket as an academic language used as a vehicle for education and for research. I think Racket does fine in its niche, but Racket has a lot of compelling competitors, especially for researchers and professional software engineers. Those who want a smaller Scheme can choose between plenty of implementations, and those who want a larger language can choose Common Lisp. For those who don't mind syntax different from S-expressions, there's Haskell and OCaml. Those who want access to the Java or .NET ecosystems could use Scala, Clojure, or F#.
There's nothing wrong with an academic/research language like Racket, Oberon, and Standard ML.
I wish Standard ML had a strong ecosystem and things like a good dependency/package manager. I really liked it. But there is even less of an ecosystem around it than around some other niche languages, and I've gone down the rabbit hole of writing everything myself often enough to know that at some point I will hit either the limit of my energy, burning out, or the limits of my mathematical understanding to implement something. For example, how to generate a normal distribution when the standard library only gives you a uniform one: there are so many approaches to approximating it, but to really understand them you need to understand a lot of math.
Anyway, I like the language. It felt great writing a few Advent of Code puzzles in SML/NJ.
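For the curious, the classic answer is the Box-Muller transform, which turns two independent uniform samples into two independent standard-normal ones. A minimal sketch (written in TypeScript here for brevity rather than SML, with Math.random standing in for whatever uniform source the standard library provides):

```typescript
// Box-Muller transform: two uniform(0,1) samples in, two N(0,1) samples out.
function boxMuller(uniform: () => number = Math.random): [number, number] {
  const u1 = 1 - uniform(); // map [0,1) to (0,1] so Math.log never sees 0
  const u2 = uniform();
  const r = Math.sqrt(-2 * Math.log(u1));
  const theta = 2 * Math.PI * u2;
  return [r * Math.cos(theta), r * Math.sin(theta)];
}
```

Understanding why it works (a change to polar coordinates) does take some math, which is exactly the point above.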
Racket is my first choice for most code I write these days and I've published a fair number of libraries into the raco package manager ecosystem in hopes other people using Racket might find them useful too.
> There is no law; the court can do anything it wants, arrest anyone for any reason. There is no expiry date for their terms.
The judiciary does not write the laws, only applies them.
I'm quite sure that removing a tracking bracelet and trying to flee is against the law.
While it's true that the judiciary holds lots of weight in Brazil, let's not forget that different branches fighting over their boundaries is the norm in any functioning government and democracy.
We're merely more used to the judiciary bending to the executive; Brazil is an exception in this regard.
> The judiciary does not write the laws, only applies them.
The Magnitsky sanctioned judge is known to have made "suggestions" to our elected representatives regarding the "fake news" censorship laws that were proposed years ago. Our lawmakers rejected that law, and the courts abused their power to ram the regulations down our throats anyway via their own "resolutions".
The Brazilian judiciary is rife with "activist" judges. Every single act of "judicial activism" is a coup against the Brazilian population. Not a single Brazilian voted for these judges.
> TypeScript is a wonderfully advanced language though it has an unfortunately steep learning curve
An extremely steep one.
The average multi-year TypeScript developer I meet can barely write a basic utility type, let alone has any general (non TypeScript related) notion of cardinality or sub typing. Hell, ask someone to write a signature for array flat, you'd be surprised how many would fail.
Too many really stop at the very basics.
And even though I consider myself okay at TypeScript, the gap with the more skilled of my colleagues is still impressively huge.
I think there's a dual problem: on one side, type-level programming isn't taken seriously by the average dev and is generally not nurtured.
On the other hand, the amount of ideas, theory, and, even worse, implementation detail of the TypeScript compiler is far from negligible.
Oh, and it really doesn't help that TypeScript is insanely verbose; this can easily balloon when your signatures have multiple type dependencies (think composing functions that can have different outputs and different failures).
That is far from basic TypeScript. The average TypeScript dev likely doesn't need to understand recursive conditional types. It's a level of TypeScript one typically only needs for library development.
Not only have I never been expected to write something like this for actual work, I'm not sure it's been useful when I have, since most of my colleagues consider something like this nerd sniping and avoid touching/using such utilities, even with documentation.
If I saw that in a PR I would push very hard to reject; something like that is a maintenance burden that probably isn’t worth the cost, and I’ve been the most hardcore about types and TypeScript of anyone of any team I’ve been on in the past decade or so.
Now, that said, I probably would want to be friends with that dev. Unless they had an AI generate it, in which case the sin is doubled.
I think there’s a difference between what’s expected/acceptable for library code vs application code. Types like this might be hard to understand, but they create very pleasant APIs for library consumers. I’ve generally found it very rare that I’ve felt the need to reach for more complex types like this in application code, however.
RXJS’s pipe function has a pretty complex type for its signature, but as a user of the library it ‘just works’ in exactly the type-safe way I’d expect, without me having to understand the complexity of the type.
> If I saw that in a PR I would push very hard to reject; something like that is a maintenance burden that probably isn’t worth the cost
As someone who came from a CS background, this kind of attitude is deeply mysterious. That seems like a type expression I'd expect a CS undergrad to be able to write - certainly if an SDE with 1-2 years experience was confused by it, I'd be advocating against their further promotion.
The everyday practice of software engineering has little to do with the academic discipline of computer science. What makes a good software engineer is not usually the same thing that makes a good CS major.
Sure, but basic CS knowledge is an expectation in much of the software field (albeit less since the mid-2010s JavaScript boom). A lot of companies aren't going to hire you if you don't know the basics of data structures and algorithms.
but then you wind up with an entire repo, or an entire engineering team utterly hobbled by a lack of expressive typing (or advanced concepts generally) and debased by the inelegance of basic bitch programming.
Disclaimer: I'm not the OP, and there are certainly places where using recursive type definitions is justified.
My interpretation of OP's point is that excessive complexity can be a "code smell" on its own. You want the complexity of the solution to match the complexity of the job, the team that is building it, and the one that is likely to maintain it.
As amused as I am by the idea of a dev team being debased by the inelegance of basic bitch programming, the daily reality of the majority of software development in industry is "basic bitch" teams working on "basic bitch" problems. I would argue this is a significant reason why software development roles are so much at risk of being replaced by AI.
To me, it's similar to the choice one has as they improve their vocabulary. Knowing and using more esoteric words might allow adding nuance to ideas, but it also risks excluding others from understanding them or more wastefully can be used as intelligence signalling more than useful communication.
tldr: Complexity is important when it's required, but possibly detrimental when it's not.
I’d say it depends. I always advocate for code that is easy to read and to understand, but in extremely rare conditions, hard to read code is the better solution.
Especially when it comes to signatures in Typescript, complex signatures can be used to create simple and ergonomic APIs.
But anyway, you shouldn’t be allowed to push anything like this without multiple lines of comments documenting the thing. Unreadable code can be balanced with good documentation, but I’ve rarely seen that done, unfortunately.
If it's correct, it's not a maintenance nightmare, and it will alert you to problems later when someone wants to use it incorrectly.
If you're writing first-party software, it probably doesn't matter. But if you have consumers, it's important. The compiler will tell you what's wrong all downstream from there unless someone explicitly works around it. That's the one you want to reject.
> If it's correct, it's not a maintenance nightmare, and it will alert you to problems later when someone wants to use it incorrectly.
You're confusing things. It is a maintenance nightmare because it is your job to ensure it is correct and remains correct in spite of changes. You are the one owning that mess and held accountable for it.
> If you're writing first-party software, it probably doesn't matter. But if you have consumers, it's important.
Yes, it is important that you write correct and usable code. That code doesn't fall into your lap though; you need to be the one writing and maintaining it. Whoever feels compelled to write unintelligible character soup that makes even experienced, seasoned devs pause and focus is failing their job as a software engineer.
> Whoever feels compelled to write unintelligible character soup...
I see it differently. That's the name of the game. Language design is always striving toward making it more intelligible, but it is reasonable to expect pros to have command of the language.
> I see it differently. That's the name of the game. Language design is always striving toward making it more intelligible, but it is reasonable to expect pros to have command of the language.
No, that's an extremely naive and clueless opinion to have. Any basic book on software engineering will tell you in many, many ways that the goal of any software engineer is to write simple code that is trivial to parse, understand, and maintain, and that writing arcane and overly complex code is the hallmark of an incompetent developer. The goal of a software engineer is to continuously fight complexity and keep things as simple as they can be. Just because someone can write cryptic, unintelligible code doesn't make them smart or clever: it only makes them bad at their job.
looking back at them, they're also really hard to debug. you don't get a particularly nice error message, and a comment or a test would tell you better than the type what the thing should look like
The alternative is what shows in the comment: go on HN and tell the world you think TS and JS are crap and it's not worth your time, while writing poor software.
To answer this we probably need more details, otherwise it's gonna be an XY Problem. What is it that I'm trying to do? How would I type this function in, say, SML, which isn't going to allow incorrect types but also doesn't allow these kinds of type gymnastics?
We don't have to deal in hypotheticals - we have a concrete example here. There's a method, array.flat() that does a thing that we can correctly describe in TypeScript's type system.
You say you would reject those correct types, but for what alternative?
It's hugely beneficial to library users to automatically get correctly type return values from functions without having to do error-prone casts. I would always take on the burden of correct types on the library side to improve the dev experience and reduce the risk of bugs on the library-consumption side.
There's nothing I can do about the standard JavaScript library, but in terms of code I have influence over, I very simply would not write a difficult-to-type method like Array.prototype.flat(), if I could help it. That's what I mean by an XY Problem - why are we writing this difficult-to-type method in the first place and what can we do instead?
Let's suppose Array.prototype.flat() wasn't in the standard library, which is why I'm reviewing a PR with this gnarly type in it. If I went and asked you why you needed this, I guess you'd say the answer is: "because JavaScript lets me make heterogenous arrays, which lets me freely intermix elements and arrays and arrays of arrays and... in my arrays, and I'm doing that for something tree-like but also need to get an array of each element in the structure". To which I'd say something like "stop doing that, this isn't Lisp, define an actual data type for these things". Suddenly this typing problem goes away, because the type of your "flatten" method is just "MyStructure -> [MyElements]".
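To make that concrete, here is a sketch of the "actual data type" route (Tree and flatten are made-up names for illustration):

```typescript
// An explicit tree type instead of ad-hoc nested heterogeneous arrays.
type Tree<T> = { value: T; children: Tree<T>[] };

// The signature is now boring on purpose: Tree<T> -> T[]
function flatten<T>(node: Tree<T>): T[] {
  // Pre-order walk: this node's value, then every descendant's.
  return [node.value, ...node.children.flatMap((c) => flatten(c))];
}
```

No conditional types, no infer, and the compiler still checks everything.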
Sure, if you're living fully in your own application code, and you don't need to consume things from an API you don't control, it's easy to live in a walled garden of type purity.
I can recognize that most people are going to go for inaccurate types when fancier semantics are necessary to consume things from the network.
But we also have the real world where libraries are used by both JS devs and TS devs, and if we want to offer semantics that idiomatic for JS users (such as Array.prototype.flat()) while also providing a first-class experience to TS consumers, it is often valuable to have this higher-level aptitude with the TS type system.
As mentioned earlier, I believe 90% of TS devs are never in this position, or it's infrequent enough that they're not motivated to learn higher-level type mechanics. But I also disagree with the suggestion that such types should be avoided because you can always refactor your interface to provide structure that allows you to avoid them; You don't always control the shape of objects which permeate software boundaries, and when providing library-level code, the developer experience of the consumer is often prioritized, which often means providing a more flexible API that can only be properly typed with more complex types.
> Suddenly this typing problem goes away, because the type of your "flatten" method is just "MyStructure -> [MyElements]".
How is that less maintenance burden than a simple Flatten type? Now you have to construct and likely unwrap the types as needed.
And how will you ensure that you're flattening your unneeded type anyways? Sure you can remove the generics for a concrete type but that won't simplify the type.
It's simple. It's just recursively flattening an array in 4 lines. Unlikely to ever change, unlike the 638255 types that you'd have to introduce and maintain for no reason.
There are many reasons not to do that. Say your business logic changes and your type no longer needs one of the alternatives: you are unlikely to notice because it will typecheck even if never constructed and you will have to deal with that unused code path until you realize it's unused (if you ever do).
You made code harder to maintain and more complex for some misguided sense of simplicity.
Right, from the structure you get an array with one element, which is likely a union type, judging from that naming.
Honestly, you sound more like you're arguing from the perspective of a person unwilling to learn new things, considering you couldn't even get that type correct.
To begin with, that flat signature wasn't even hard to understand?
What I wrote would be a syntax error in TypeScript (no name for the argument, wrong arrow), not a function that returns array with one element; I used Haskell-ish notation instead of TypeScript's more verbose "(structure: MyStructure) => MyElement[]".
I thought it was clear enough that I was being informal and what I meant was clear, but that was admittedly probably a mistake. But to infer an implication from that that I'm "unwilling to learn new things" is a non sequitur and honestly kind of an unnecessarily dickish accusation.
Brah, if you have a type with that many characters in it that isn’t a super long string name, it’s not easy to understand unless you are the 1% of 1% when it comes to interpreting this specific language.
On top of that, I fully agree with the poster you’re responding to. In general application code that’s an extremely complicated type, generally written by someone being as clever as they can be. And if the code you’ve written while being as clever as possible has a bug in it, you won’t be clever enough to debug it.
For one, the simple answer is incomplete. It gives the fully unwrapped type of the array but you still need something like
type FlatArray<T extends unknown[]> = Flatten<T[number]>[]
The main difference is that the first, rest logic in the complex version lets you maintain information TypeScript has about the length/positional types of the array. After flattening a 3-tuple of a number, boolean, and string array TypeScript can remember that the first index is a number, the second index is a boolean, and the remaining indices are strings. The second version of the type will give each index the type number | boolean | string.
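Spelled out, the tuple-preserving version looks something like this (FlatArr is an illustrative name, not the lib.d.ts one):

```typescript
// Depth-1 flatten that keeps positional (tuple) information,
// using the first/rest recursion described above.
type FlatArr<T extends readonly unknown[]> =
  T extends readonly [infer First, ...infer Rest]
    ? First extends readonly unknown[]
      ? [...First, ...FlatArr<Rest>] // spread array elements in place
      : [First, ...FlatArr<Rest>]    // keep scalar elements where they are
    : [];

// The 3-tuple from the example: number, boolean, string[]
// flattens to [number, boolean, ...string[]]
const t: FlatArr<[number, boolean, string[]]> = [1, true, "a", "b"];
```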
The answer above actually gets the type union of all non-array elements of a multi-level array.
In other words
Flatten<[1,[2,'a',['b']]]>
will give you a union type of 1, 2, 'a', and 'b'
const foo: Flatten<[1,[2,'a',['b']]]> = 'b'; // OK
const bar: Flatten<[1,[2,'a',['b']]]> = 'c'; // Error: Type '"c"' is not assignable to type '1 | 2 | "a" | "b"'
Technically the inference is unnecessary there, if that's your goal:
type Flatten<T> = T extends Array<unknown> ? Flatten<T[number]> : T
I don't really consider this the type of flattening an array, but `Array<Flatten<ArrType>>` would be. And this would actually be comparable to the builtin Array.prototype.flat type signature with infinite depth (you can see the typedef for that here[1], but this is the highest level of typescript sorcery)
My solution was for flattening an array with a depth of 1 (most people using Array.prototype.flat are using this default depth I'd wager):
Here’s the fun part that I suspect many here are forgetting: if you want to write the function body, it will probably (or at the very least can) look very similar!
function flat() {
  if (this.length > 0) {
    const [first, ...rest] = this;
    if (Array.isArray(first)) {
      return [...first, ...flat.call(rest)];
    } else {
      return [first, ...flat.call(rest)];
    }
  } else {
    return [];
  }
}
I still wouldn’t call it basic TypeScript, but it’s not conceptually that advanced; you just need to know about infer and extends.
Now in reality, Array.prototype.flat has a more complex definition, partly because (like most of Array’s methods) the method is generic (it works on array-like objects that have a length property and numeric indexing), and partly because of the depth parameter. From lib.es2019.array.d.ts:
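For readers without lib.es2019.array.d.ts handy, the helper type in question reads approximately like this (reproduced from memory under a different name to avoid clashing with the real global FlatArray; check the actual file for the exact text):

```typescript
// Approximate reproduction of TypeScript's lib.es2019.array.d.ts FlatArray.
// Depth is "decremented" by indexing into a hard-coded tuple, which is why
// only depths up to 20 are supported.
type FlatArrayApprox<Arr, Depth extends number> = {
  done: Arr;
  recur: Arr extends ReadonlyArray<infer InnerArr>
    ? FlatArrayApprox<
        InnerArr,
        [-1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10,
          11, 12, 13, 14, 15, 16, 17, 18, 19, 20][Depth]
      >
    : Arr;
}[Depth extends -1 ? "done" : "recur"];
```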
Ouch. Don’t like the { done, recur }[Depth extends -1 ? "done" : "recur"] at all, no idea why it wasn’t written as `Depth extends -1 ? Arr : Arr extends ReadonlyArray<…`. And as for hard-coding support for depths up to 20 then bailing… probably pragmatic, it’s possible to support all values, but rather messy: https://stackoverflow.com/q/54243431.
Notably, the provided type for Array.prototype.flat doesn't actually provide the flat type for arrays of a known structure.
In other words, if you flatten `[string, [string, number]]` my example would give you `[string, string, number]` whereas the one in lib.es2019.array.d.ts would give you `(string | number)[]`
(my example, on the other hand, would only flatten to a depth of 1, though a completely flattened type instead of just depth 1 could be represented by changing `[...First, ...FlatArr<Rest>] :` to `[...FlatArr<First>, ...FlatArr<Rest>] :`)
I haven't tried supporting a depth param, but I suspect it's possible.
You're missing the specialisation of Object/Any. For example Array.flat called with [int, [bool, string]] returns a type [int, bool, string]. Admittedly this is somewhat niche, but most other languages can't express this - the type information gets erased.
You're missing the input type, essentially. Those are just array types. The TypeScript type signature is more of a function type: it expresses flattening an n-dimensional array (input type) into a flat array (output type).
I don’t think that means it has a steep learning curve. It just means the basics suffice for a ton of TypeScript deployments. Which I personally don’t see as the end of the world.
Yes, to me this is the biggest feature of Typescript: a little goes a long way, while the advanced features make really cool things possible. I tend to think of there being two kinds of Typescript: Application Typescript (aka The Basics: `type`, `interface`, `Record`, unions, etc.) and Library Typescript, which is the stuff that e.g. Zod or Prisma does to give the Application Typescript users awesome features.
While I aspire to Library TS levels of skill, I am really only a bit past App TS myself.
On that note I've been meaning to take the Type-Level TypeScript course [0]. Has anyone taken it?
As someone who knows slightly more than the basics, and enough to know about the advanced stuff that I don't know about, this is the correct place to stop.
I would much rather restructure my javascript than do typescript gymnastics to fit it into the type system.
TypeScript codebases I've seen generally seem to have the widest demonstration of skill gap versus other languages I use.
For example, I don't ever see anyone using `dynamic` or `object` in C#, but I will often see less skilled developers using `any` and `// @ts-ignore` in TypeScript at every possible opportunity even if it's making their development experience categorically worse.
For these developers, the `type` keyword is totally unknown. They don't know how to make a type, or what `Omit` is, or how to extend a type. Hell, they usually don't even know what a union is. Or generics.
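For reference, the basics being described here really are small; the names below are just illustrative:

```typescript
// A union of string literals.
type Role = "admin" | "editor" | "viewer";

interface User {
  id: string;
  name: string;
  role: Role;
  passwordHash: string;
}

// Omit builds a new type by dropping keys, e.g. a client-safe shape.
type PublicUser = Omit<User, "passwordHash">;

// A minimal generic: works over any element type.
function first<T>(xs: readonly T[]): T | undefined {
  return xs[0];
}

const pub: PublicUser = { id: "1", name: "Ada", role: "admin" };
```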
I sometimes think that in trying to just be a superset of JavaScript, and being constantly advertised as such, TypeScript does not/did not get taken seriously enough as a standalone language, because it's far too easy to just slot sloppy JavaScript into TypeScript. TypeScript is a lot better now about shipping a saner tsconfig.json, but it still isn't strict enough by default.
This is a strong contrast with other languages that compile to JavaScript, like https://rescript-lang.org/ which has an example of pattern matching right there on the home page.
Which brings me to another aspect I don't really like about TypeScript: it's constantly own-goaling itself because of its "we don't add anything except syntax and types" philosophy. I don't think TypeScript will ever get pattern matching as a result, which is absurd, because it has unions.
> For example, I don't ever see anyone using `dynamic` or `object` in C#, but I will often see less skilled developers using `any` and `// @ts-ignore` in TypeScript at every possible opportunity even if it's making their development experience categorically worse.
I think you're confusing things that aren't even comparable. The primary reason TypeScript developers use the likes of `any` is because a) TypeScript focuses on adding static type checking to a language that does not support it, instead of actually defining the underlying types; b) TypeScript developers mostly focus on onboarding and integrating TypeScript into projects and components that don't support it; c) TypeScript developers are paid to deliver working projects, not vague and arbitrary type-correctness goals. Hence TypeScript developers tend to use `any` for third-party components, add user-defined type guards to introduce typing in critical areas, and iterate over type definitions when time allows.
Discriminating a function or promise based on return type is never going to work, because JavaScript is dynamically typed and TypeScript erases types at compile time, so there's no way to know at runtime what type a function or promise is going to return.
You're right, but that begs the question: does a type system really require such complexity?
I'm aware that type theory is a field in and of itself, with a lot of history and breadth, but do developers really need deep levels of type flexibility for a language to be useful and for the compiler to be helpful?
I think TypeScript encourages "overtyping" to the detriment of legibility and comprehension, even though it is technically gradually typed. Because it is so advanced, and itself Turing complete, a lot of brain cycles and discussion are spent on implementing and understanding type definitions. And you're definitely right that it being verbose also doesn't help.
So it's always a bittersweet experience using it. On one hand it's great that we have mostly moved on from dynamically typed JavaScript, but on the other, I wish we had settled on a saner preprocessor / compiler / type system.
The idea is to make libraries preserve as much type information as possible, as a principle. Once type information is erased it can't be restored. For regular application code you don't need to use those features.
But regular application code also contains libraries. Type information is useful even if you're the only user of those APIs.
My point was more related to the level of expressiveness required of a type system in order to allow a programmer to produce reliable code without getting in their way. I think TypeScript leans more towards cumbersome than useful.
For example, I'm more familiar with Go's type system, which is on the other side of that scale. It is certainly much less expressive and powerful than TypeScript, and I have found it frustrating and limiting in many ways, but in most day-to-day scenarios it's reasonably adequate. Are Go programs inherently worse off than TypeScript programs? Does a Go programmer have a worse experience overall? I would say: no.
They're completely different languages, Javascript is dynamically typed, not sure how useful such a comparison is. TS's type system evolved out of a desire to encode the type relations of JS functions, often native ones, which are very dynamic and polymorphic. When writing application code you can keep things simple, but trying to represent all the ways types can change for the native libraries is harder.
It's also terribly documented. As an example, I don't think `satisfies` is in the docs outside of release notes. There's lots more stuff like that, which makes using it kind of frustrating.
> "The basics," well understood and judiciously applied, is where the bulk of TypeScript's value lies.
Yes, precisely. OP is also completely oblivious to the fact that TypeScript is designed to help developers gradually onboard legacy JavaScript projects and components, which definitely don't require arcane and convoluted type definitions to add value.
These are things most developers don't know how to do in most languages' type systems. I think only Rust, with its functional roots, has seen a similar focus on utilizing its type system to its fullest extent.
I have mixed feelings about Typescript, I hate reading code with heavy TS annotations because JS formatters are designed to keep line widths short, so you end up with a confusing mess of line breaks. Pure JS is also just more readable.
Also you can so easily go overboard with TS and design all sorts of crazy types and abstractions based on those types that become a net negative in your codebase.
However it does feel really damn nice to have it catch errors and give you great autocomplete and refactoring tooling.
> I have mixed feelings about Typescript, I hate reading code with heavy TS annotations because JS formatters are designed to keep line widths short, so you end up with a confusing mess of line breaks. Pure JS is also just more readable.
That's not a TypeScript issue, it's a code quality issue and a skill issue. Anyone can put together an unintelligible mess in any language.
In our defense, we want to build stuff, not become TS wizards. Also, I've worked with libraries with such heavy typing that it was a nightmare if you wanted to use the lib in any way other than what the authors had imagined.
> The average multi-year TypeScript developer I meet can barely write a basic utility type, let alone has any general (non TypeScript related) notion of cardinality or sub typing. Hell, ask someone to write a signature for array flat, you'd be surprised how many would fail.
I think you're both exaggerating your blanket accusations of incompetence and confusing learning curve with mastering extremely niche techniques akin to language gotchas.
Honestly I just use TypeScript to prevent `1 + [] == "1"` and check that functions are called with arguments. I don't care about type theory at all and the whole thing strikes me as programmers larping (poorly) as mathematicians.
then you're creating a giant mess of a soup where the state of your program could have a result, be loading, and be an error all at the same time. If you recognise that the state of your program is a sum of possible states (loading | success | error), and not their product as in the type above, you can greatly simplify your code, add more invariants, and reduce the number of bugs.
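A minimal sketch of that sum-of-states idea (the type and field names here are illustrative):

```typescript
// The state is a sum (union), not a product: exactly one variant at a time.
type RequestState =
  | { kind: "loading" }
  | { kind: "success"; result: string }
  | { kind: "error"; message: string };

function render(state: RequestState): string {
  switch (state.kind) {
    case "loading":
      return "spinner";
    case "success":
      return state.result;  // `result` only exists on this variant
    case "error":
      return state.message; // `message` only exists here
  }
}
```

The impossible combination "has a result and an error at the same time" simply can't be constructed.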
And that is a very simple and basic example; you can go *much* further, as in encoding through branded types that some value isn't merely a number but a special kind of number, be it a positive number between 2 and 200, or dollars vs celsius, avoiding again an entire class of bugs caused by treating everything as just an integer or float.
Take a function setVelocity() that can accept 1..<200. If you call it with numbers you enter directly, the type tells you something that would otherwise be a comment on the function; or you do runtime checks elsewhere, and the type becomes proof that you checked them before handing the value to the function.
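One way to sketch this with a branded type; the brand field and `toVelocity` name are illustrative, the 1..<200 range and setVelocity come from the comment above:

```typescript
// A Velocity is a number plus a compile-time-only brand.
type Velocity = number & { readonly __brand: "Velocity" };

// The single place where the runtime check happens.
function toVelocity(n: number): Velocity {
  if (n < 1 || n >= 200) throw new RangeError(`velocity out of range: ${n}`);
  return n as Velocity;
}

// Callers can't pass a raw, unchecked number here.
function setVelocity(v: Velocity): number {
  return v; // the type is proof the range check already ran
}

setVelocity(toVelocity(42));
// setVelocity(42); // compile error: number is not Velocity
```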
Btw, using “autism” to mean “pedantry” leaves a bit of a bad taste in my mouth. Maybe you could reconsider using it that way in the future.
Pushing everything to types like this creates a different burden where you're casting between types all over the place just to use the same underlying data. You could just clamp velocity to 200 in the callee and save all that hassle.
> Pushing everything to types like this creates a different burden where you're casting between types all over the place just to use the same underlying data.
TypeScript does not perform any kind of casting at all. What TypeScript supports is structural typing, which boils down to allowing developers to specify type hints in a way that allows the TypeScript compiler to determine which properties or invariants are met in specific code paths.
Literal types address a very common and very mundane use case: asserting what can and cannot be done with an object depending on what value one of its fields has.
Take authorization headers, for example. When they are set, their prefix tells you which authorization scheme is being used by clients. With TypeScript you can express those strings as a prefix-constrained string type, and use them to have the TypeScript compiler prevent you from accidentally passing bearer tokens to the function that handles basic authentication.
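A sketch of that with template literal types (TS 4.1+); the function and type names are mine, not from any real library:

```typescript
// Prefix-constrained string types for the two schemes.
type BearerToken = `Bearer ${string}`;
type BasicCredentials = `Basic ${string}`;

function handleBasicAuth(header: BasicCredentials): string {
  return header.slice("Basic ".length); // strip the scheme prefix
}

const basic: BasicCredentials = "Basic dXNlcjpwYXNz";
const bearer: BearerToken = "Bearer abc123";

handleBasicAuth(basic);
// handleBasicAuth(bearer); // compile error: wrong scheme prefix
```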
Literal types shine when you use them to specify discriminant fields in different types. Say you have a JSON object with a `version` field. With literal types you can define different types discriminated by what string value appears in its `version` field, and based on that alone you can implement fully type-safe code paths.
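A sketch of the `version` discriminant (the payload shapes are invented for illustration):

```typescript
type ConfigV1 = { version: "1"; path: string };
type ConfigV2 = { version: "2"; paths: string[] };
type Config = ConfigV1 | ConfigV2;

// Checking the literal field narrows to the matching variant.
function allPaths(config: Config): string[] {
  return config.version === "1" ? [config.path] : config.paths;
}
```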
If you have some `ConstrainedNumber` type, you will need to cast between it and `number`, either with a type assertion or with a type guard. In either case, when you use bespoke types everywhere you kill code reuse.
Casting? Not really - I think you'd only need a couple of type checks.
Imo this is mostly useful for situations where you want to handle input validation (and errors) in the UI code and this function lives far away from ui code.
Your point about clamping makes sense, and it’s probably worth doing that anyway, but without it being encoded in the type you have to communicate how the function is intended to be used some other way.
Ah, yeah you’re right. I somehow thought typescript could do type narrowing based on checks - like say:
if (i >= 1) {
  // i’s type now includes >= 1
}
But that is not the case, so you’d need a single cast to make it work (from number to ClampedNumber<1,200>) or however exactly you’d want to express this.
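That single cast can at least be confined to a user-defined type guard, which then gives you the `if`-based narrowing described above; names here are illustrative, reusing the setVelocity example from upthread:

```typescript
type ClampedVelocity = number & { readonly __inRange: true };

// A type guard: the cast semantics live here and nowhere else.
function isVelocity(n: number): n is ClampedVelocity {
  return n >= 1 && n < 200;
}

function setVelocity(v: ClampedVelocity): number {
  return v;
}

const i = 42;
if (isVelocity(i)) {
  setVelocity(i); // narrowed inside the branch, no cast at the call site
}
```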
Tbf having looked more closely into how typescript handles number range types, I don’t think I would ever use them. Not very expressive or clear. I think I hallucinated something closer to what is in this proposal: https://github.com/microsoft/TypeScript/issues/43505
I still think that the general idea of communicating what acceptable input is via the type system is a good one. But the specifics of doing that with numbers isn’t great in typescript yet.
How would you implement it in other languages that support it better? Can you literally just do a range check and the compiler infers its range for types? If so thats actually pretty neat.
Yeah, that’s probably the best possible implementation outside of the type system. The issue with docstrings is that they aren’t checked. So you’ve communicated the api to the coder, and maybe even to the ide, but not to the compiler.
I stopped doing anything advanced because I realized nobody actually wanted to do all of the advanced stuff. None of the NPM community does anything more than the basics.
Frankly, I prefer it that way. A lot of the advanced stuff doesn’t actually enable any new functionality. It only shortens the code paths for implementing it.
Being a JS/TS one-trick pony all my career, how does it compare to other languages? I don't really see much difference, except when comparing with some C++ shenanigans.
In an interview I'd be happy just seeing some reasoning.
IRL I'd be happy with someone at least searching for a definition and trying to learn from it.
I've asked this question multiple times as implementing array flatten used to be our go to ice breaker question, and many devs had no issues reasoning and finding an okay type definition.
TypeScript is largely a result of solving a non-existent problem. Yeah, JS is finicky and has foot-guns, but there are ways around those foot-guns that don't involve TypeScript.
Rich Hickey lays this out in "10 Years of Clojure", "Maybe Not", and "The Value of Values" - though aimed not at TypeScript specifically but at static types in general.
The thing is, most people don't have proper JavaScript fundamentals.
Function signatures: JSDoc works
Most types - use Maps | Arrays
If a value doesn't exist in a map we can ignore it. There's also the safe navigation (optional chaining) operator.
Instead of mutable objects - there are ways around this too, negating the need for types again.
I never understood how eating at McDonald's can be cheaper than cooking your own meals.
I'm not from the US, but checking US grocery stores, you can eat meals made of chicken breast, bread, and vegetables for well below $5 per person, well below $20 in total for a family of 4.
Yet every time I see those discussions, fast food is always presented as a cheaper option?
Well, you see, the entire wink-wink, nudge-nudge premise of America was this: everything is cheap, but you don't have the best safety net in the world. This sort of worked, for a while, especially because we actually DID have an okay social safety net here and there for people: Medicare/Medicaid, SNAP, Social Security, ACA, etc. Now, though, these programs are being gutted, AND ALSO the Big Macs are expensive. So now, the people who run this country (like the people who own McDonald's) are giving us neither our bread nor our circuses, and that makes the F150 class very angry...
But sure, I guess there's no cost of living crisis, actually, because you have the perfect shopping list. I'll inform the nation.
When you are a single person, the math changes. It can be cheaper, or at least break even, to eat out every day. Especially if you lack the ability to eat and/or store the minimum package amount before it goes bad. This was a massive issue in my youth, before I could save up to buy stuff that makes things last longer in the freezer while still tasting good. It was literally break even to cook vs eat out. Thus it was actually more expensive to cook, because we have to factor in the time I spent cooking, and I ruined food a lot as I learned how to make, and store well, what I liked.
As someone with a family now, it could never work. Even setting aside being better at cooking and preserving food, I can buy bulkier items that have a lower cost per unit.
I guess if I were truly destitute as a young adult, I would have cooked, but I wasn't. I wanted to have a nice salad wrap and/or hot meals fancier than beans and rice.
I'm not sure where you're getting numbers but I can't buy just chicken breast for two people for less than $5 at my grocery store unless I buy in bulk. And I do not have the space to store it.
I literally see chicken breast at Walmart at $2.57/lb; that's well below $5 for two servings.
Add some simple mashed potatoes and you're still below $5 to feed two people in one meal.
You can also eat beans, rice, lentils, eggs, add some cheese. There's countless simple, cheap, non-processed food around.
The reality is that it's "more convenient", or at least it was, because if you had to choose between spending $3 for a complete meal you still had to cook, and some $5-6 McDonald's processed tasty food, you'd go with #2.
But stating that it's cheaper because of "economies of scale" is just false; eating out isn't and never was cheaper. Let alone the health impact of eating such junk food.
Cool. My local grocery store chicken is between $5-6/lb today. $2 if I buy a whole chicken and butcher it myself.
Whatever prices you're seeing are not my reality, living in a major urban area in the United States. Maybe if I bought at the Walmart in Iowa next to a factory farm I could buy it for $2.50/lb, but I can't.
Simple economy of scale. McDonald's buys chicken in bulk from vendors and gets a better deal than Safeway/wherever you shop does; they get industrial bulk handling deals, and yeah, they have to pay employees to cook the food, but that's amortized over all the other customers. So it can be cheaper, the same way that it's cheaper to get oil out of the ground, refine it, do a bunch of chemistry to it, form it into plastic knives and forks, make a box for it, decorate that box, put the cutlery into the box, ship that box halfway around the world, and put it in a store, and all of that's still cheaper than getting someone to wash a metal fork for you to eat dinner with.
What blew my mind is when someone explained to me the cultural difference with some places in south east Asia. In the US, eating out at restaurants is what rich people do. But in certain places in south east Asia, having a kitchen, having appliances like a fridge, having electricity for them, having dining space, having the time to go to the market to haggle with vendors, all of that adds up, so it's the rich who can afford to eat at home, and everyone else eats out. So it's location dependent.
While I absolutely understand the entirety of this post, and I understand why large organizations have such ladders, I want to emphasize that you don't necessarily have to aim for this vertical climbing.
It is absolutely fine to make good money and limit your responsibilities.
It is also fine to work in smaller organizations and unknown companies where you enjoy the work and it would make no sense to have more labels and roles than personnel.
There's more to life than money. I'm a freelancer/independent consultant, and as such I'm never considered for anything but senior or tech lead roles and...it's fine.
I make more money than I spend (I have the same lifestyle I had when I was making €2,500 per month), but I can optimize for choosing projects and things that I care for.
That's not really doable when you aim to climb the ladder; you need to "play" the game, and you don't get to set its rules.
I'm free from those political shenanigans.
I also know plenty of senior devs who enjoy simply being senior devs and declining further responsibilities.
Eventually, by the same Peter principle, organizations too should not necessarily aim to push everybody to grow (albeit I understand that in big tech, managers do have incentives to grow and promote people).
There are many elements of freedom in forgoing this ladder climbing, and everybody should set their own priorities and goals regardless of what the industry and society say. It's liberating.
Thanks for this perspective—I'm the author, and I'm actually back to freelancing/solo now.
You're absolutely right. Staying senior and not chasing Staff+ roles is a totally valid choice. I was CTO at this company, but in previous roles, I personally aimed for more impact, not out of ambition or to play the game, but because I wanted to.
I wanted to move the product, shape the direction. I felt I could contribute beyond just my dev capacity, and that brought me satisfaction.
More importantly, we built this career path to give people who wanted to grow an alternative to pure management. That was the whole point: creating options.
But everyone finds fulfillment differently. I'm not saying this path is for everyone. Just sharing what worked for me and how we structured it at Malt.
The thing that gets on my nerves is that this is really an unsolvable political problem.
When governments do try to push and make it law to have X amount of bushes and unfarmed land, in a way that makes sense for wildlife to thrive, you instantly get angry farmers on your roads, lose their votes, and get publicly accused of starving the nation.
And farmers, due to the difficulty of their job (in time, investment, and returns) and their role in society's lifecycle, get instant empathy.
There are areas in Italy where farms have polluted water to insane levels, and this further compounds with heavy pollution of drinkable-water wells (which should always be at least 120 meters from the closest farm, from what I know).
Regular citizens, who end up getting heavily sick (often fatally) from all these farms, never get any kind of support, as they lack the political and financial weight; and as soon as the argument scales, you're back at populist "you want to starve our people and kill our economy" arguments.
I had a house in the country. By law there shouldn't be more than one chicken farm in a 4-mile radius, yet they built two within 2 miles, one of them 300 meters from my house. It smells in disgusting ways 24/7, from ammonia to rot. I had to sell it for pennies (5,000 euros, renovated), as nobody but the farmers in the area had the slightest intention of moving to such a beautiful yet disgusting place. And I haven't even mentioned that, to avoid having to bring the hundreds of dead animals to a registered incinerator (as the law requires), they just dig mass graves in 5 minutes and cover over the land.
Wildlife has absolutely disappeared.
It's really a tough, tough political battle.
Everything from agriculture to cattle to fishing is insanely polluting and bad for the environment, but the idea of really tracking and controlling how those industries operate is beyond naive. The labels on your tuna can saying it didn't kill dolphins are worthless, as there's no way to check what happens on those boats; so are the labels saying your coffee or cocoa didn't use child labor, or that your food is organic. It's all absolutely fake and a matter of money.
> When governments do try to push and make it law to have X amount of bushes and unfarmed land, in a way that makes sense for wildlife to thrive, you instantly get angry farmers on your roads, lose their votes, and get publicly accused of starving the nation.
The problem is... farmers are a pretty split bunch. On one side you have the last few small holdouts trying to make ends meet with a few dozen cows or so, who already get swamped in ever-increasing bureaucracy; on the other side you've got the megafarms, which not only have the benefits of scale available to them (in everything from machines to sheer purchasing power for feedstock) but also have dedicated full-time employees just taking care of getting government handouts.
Of course the small ones get up in arms whenever anything changes, they don't have the capacity and resiliency left anymore.
Here in the Netherlands, as soon as you try to do something, the farmers start flying upside-down flags. I call them the 'head in the sand' flags, since they stand for ignoring the problems.
I fear the problem is just that the earth suffers from an infestation of humans and the equilibrium will be restored in the same way all infestations end. It won't be pretty (already isn't in lots of places).