I feel like that's the biggest question I have about Astral. I wonder what they have in the tank. All of this software is great, but I'd like to see them get some kind of benefit, if only to assure me that they'll continue to exist and make awesome software.
(And also so they'll implement the `pip download` functionality I'd like!)
I think Astral and Meta were both working on their own type checkers independently. My current understanding is that Meta released pyrefly when they did so they could preempt the initial release of ty, and it seems like pyrefly is a bit further ahead in development. Not sure if there are going to be any real differences between the two down the line.
Sure, but in this case they are both implementations of a spec defined by PEPs, so a bit more like gcc vs clang (less tightly bound than those, of course, in design decisions). Neither company is trying to invent a new language here.
The current major type checkers, mypy and pyright, are also based on the same PEPs, but you can still see differences between them. For example, my codebase passes pyright in strict mode, but mypy reports a bunch of type errors. I'd expect pyrefly and ty to be slightly different from each other.
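For a concrete (and hedged) illustration of the kind of divergence I mean, here's a small sketch of a pattern where checkers have historically made different inference choices; the exact behavior depends on the checker version and settings, so treat it as an example of the category rather than a guaranteed repro:

```python
from typing import reveal_type  # available in Python 3.11+

# A heterogeneous list literal: the typing spec does not dictate how to infer
# the element type. Pyright has historically inferred a union (list[int | str]),
# while mypy has historically widened to a join (list[object]).
items = [1, "two"]
reveal_type(items)  # prints the runtime/static view of the inferred type

def first_upper(values: list[int | str]) -> str | None:
    # Code like this can pass one checker and fail another, depending on
    # whether `items` above was inferred as list[int | str] or list[object].
    for v in values:
        if isinstance(v, str):
            return v.upper()
    return None

first_upper(items)  # may or may not type-check cleanly, depending on the checker
```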
Sure, I agree. I’m just saying that most of their de facto disagreements come from ambiguity in the specs, not because one of them is (intentionally) choosing to fork.
Fwiw I think performance and features are kind of intertwined here, since Pyright’s extra speed makes it possible to infer more things that are not easy for mypy, especially in an LSP implementation.
Pyright is very impressive in that second link (conformance check), very much a product of Eric Traut & others dedicating a lot of energy to this single problem.
the specs are still evolving, and the various type checker implementations are what is driving them forward. in general, capturing the dynamic typing semantics of python in a gradually typed system is not a fully solved problem, and the type checkers are experimenting with various approaches to it.
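To make "dynamic typing semantics" concrete, here's a hedged sketch of a pattern that is perfectly valid at runtime but that most static checkers will flag, because attributes attached outside the class body aren't part of the declared type; how strictly to treat patterns like this is one of the judgment calls each checker makes differently:

```python
class Config:
    """A bare container; attributes get attached dynamically at runtime."""

def load_defaults(cfg: Config) -> None:
    # Legal Python at runtime, but most checkers report something like
    # 'Config has no attribute "debug"', since the attribute was never
    # declared on the class.
    cfg.debug = True

cfg = Config()
load_defaults(cfg)
print(cfg.debug)  # runs fine: prints True
```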
The vast majority of the time people question whether an image or piece of writing is "AI", they're really just calling it bad, and somehow not realizing that they could just call the output bad and have the same effect.
Every day I'm made more aware of how terrible people are at identifying AI-generated output, and also of how obsessed they are with GenAI-vestigating things they don't like or wouldn't buy, when the underlying issue is simply that those things are bad.