> Man, it's easy to be fast when you're wrong. But of course it is fast because Rust, not because it just skips the hard parts of dependency constraint solving and hopes people don't notice.
I think you're misunderstanding why we do this: it's a security feature. pip's design is inherently vulnerable to dependency confusion attacks, since packages of the same name across indexes are considered equally trusted by pip. You can look up the torchtriton attack to learn more.
> Stuff like this seems unlikely to contribute to overall runtime, but it does decrease flexibility.
I think you're misinformed. We support all of these features: system- and per-user configuration files, environment variables, etc. We just don't read _pip's_ configuration file, which is intended for pip, not uv.
I'm not super familiar with Bundler's architecture but I think the most impactful thing would be adopting uv's cache design, which is a big part of what makes uv so fast and should be replicable in other languages and ecosystems.
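As a toy sketch of the general idea (a global, content-addressed store of unpacked packages that gets hard-linked into each environment rather than copied): this is only an illustration of the concept, with made-up paths and names, not uv's actual cache layout.

```python
import hashlib
import os
from pathlib import Path

CACHE = Path.home() / ".cache" / "toy-installer"  # hypothetical global cache root

def cache_package(name: str, version: str, files: dict[str, bytes]) -> Path:
    """Unpack a package once into a content-addressed cache directory."""
    key = hashlib.sha256(f"{name}-{version}".encode()).hexdigest()[:16]
    dest = CACHE / f"{name}-{version}-{key}"
    if not dest.exists():
        for relpath, data in files.items():
            target = dest / relpath
            target.parent.mkdir(parents=True, exist_ok=True)
            target.write_bytes(data)
    return dest

def install_into_env(cached: Path, site_packages: Path) -> None:
    """'Install' by hard-linking cached files into the environment (cheap, no copy)."""
    for src in cached.rglob("*"):
        if src.is_file():
            dst = site_packages / src.relative_to(cached)
            dst.parent.mkdir(parents=True, exist_ok=True)
            if not dst.exists():
                # A real installer would fall back to copying (or reflinks) when
                # hard links aren't possible across filesystems.
                os.link(src, dst)
```

Because each package version lives in the cache exactly once, creating the tenth environment that needs it costs roughly the price of creating links rather than re-downloading and re-unpacking.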
> Ignoring requires-python upper bounds. When a package says it requires python<4.0, uv ignores the upper bound and only checks the lower. This reduces resolver backtracking dramatically since upper bounds are almost always wrong.
I don't think that ignoring upper bounds has a significant impact on uv's performance. We do this for a totally different reason, which is that it leads to better solves. For example, if you say your project requires Python 3.8 or later, but some dependency said it works for ">=3.8,<4", then suddenly your project isn't installable on Python 4, and you'd be implicitly required to put a "<4" bound on your own project. uv solves for all of your supported Python versions, not a single version, so discounting the upper bounds doesn't actually save us any time in the solve.
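To make the Python 4 example concrete, here's a small illustration using the `packaging` library (the machinery typically used to evaluate requires-python specifiers):

```python
from packaging.specifiers import SpecifierSet

dependency = SpecifierSet(">=3.8,<4.0")   # a dependency's requires-python
my_project = SpecifierSet(">=3.8")        # my own, deliberately open-ended

print(dependency.contains("3.12"))  # True
print(dependency.contains("4.0"))   # False: the upper bound excludes a hypothetical Python 4
print(my_project.contains("4.0"))   # True, yet the dependency's bound would force "<4" on me too
```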
If there's anything else accompanying the error, do you mind filing an issue? I've been using the ty extension with Cursor for weeks and am having trouble reproducing right now.
That's the full error. It shows up in a dialog box when I press the install button. I'm on macOS, connected with the Anysphere Remote SSH extension to a Linux machine.
If I choose "install previous version" I am able to install the pre-release version from 12 hours ago without issue. Then on the extension page I get a button labeled "Switch to Release Version" and when I press it I get an error that says "Can't install release version of 'ty' extension because it has no release version." Filed a GitHub issue with these details.
In the meantime, the previous version appears to be working well! I like that it worked without any configuration. The Pyrefly extension needed a config tweak to work.
https://forum.cursor.com/t/newly-published-extensions-appear... suggests that there's some kind of delayed daily update for new VSCode extension versions to become available to Cursor? It seems likely that's what is happening here, since ty-vscode 0.0.2 was only published an hour or two ago.
We actually do want ty to be a first-class LSP (i.e., a complete alternative to Pylance and others), and it already supports nearly all of the features you'd expect. I use it as my primary LSP today in lieu of Pylance!
The PEP includes the ability to enable (or disable) lazy imports globally via a command-line flag or environment variable, in addition to the import syntax.
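For comparison, the standard library already exposes an opt-in building block for deferred loading; this is roughly the documented `importlib.util.LazyLoader` recipe (the closest existing analogue, not the mechanism the PEP proposes):

```python
import importlib.util
import sys

def lazy_import(name):
    """Return a module whose body only executes on first attribute access."""
    spec = importlib.util.find_spec(name)
    loader = importlib.util.LazyLoader(spec.loader)
    spec.loader = loader
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module
    loader.exec_module(module)  # execution is deferred until the module is first used
    return module

json = lazy_import("json")           # nothing heavy has run yet
print(json.dumps({"lazy": True}))    # the real import happens here, on first use
```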
Lazy imports have been proposed before, and were rejected most recently back in 2022: https://discuss.python.org/t/pep-690-lazy-imports-again/1966.... If I recall correctly, lazy imports are a feature supported in Cinder, Meta's version of CPython, and the PEP was driven by folks that worked on Cinder. Last time, a lot of the discussion centered around questions like: Should this be opt-in or opt-out? At what level? Should it be a build-flag for CPython itself? Etc. The linked post suggests that the Steering Council ultimately rejected it because of the complexity it would introduce to have two divergent "modes" of importing.
I hope this proposal succeeds. I would love to use this feature.
I also hope this proposal succeeds, but I'm not optimistic. This will break tons of code and introduce a slew of footguns. Import statements fundamentally have side effects, and when and how these side effects are applied will cause mysterious breakages that will keep people up for many nights.
This is not fearmongering. There is a reason why the only flavor of Python with lazy imports comes from Meta, which is one of the most well-resourced companies in the world.
Too many people in this thread hold the view of "importing {pandas, numpy, my weird module that is more tangled than an eight-player game of Twister} takes too long and I will gladly support anything that makes them faster". I would be willing to bet a large sum of money that most people who hold this opinion are unable to describe how Python's import system works, let alone describe how to implement lazy imports.
PEP 690 describes a number of drawbacks. For example, lazy imports break code that uses decorators to add functions to a central registry. This behavior is crucial for Dash, a popular library for building frontends that has been around for more than a decade. At import-time, Dash uses decorators to bind a JavaScript-based interface to callbacks written in Python. If these imports were made lazy, Dash would break. Frontends used by thousands, if not millions of people, would immediately become unresponsive.
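A stripped-down illustration of the pattern (not Dash's actual code): the registration only happens because the defining module's body runs at import time.

```python
CALLBACKS = {}

def callback(name):
    """Decorator that registers a handler in a central registry as an import-time side effect."""
    def register(func):
        CALLBACKS[name] = func
        return func
    return register

# Imagine this lives in a separate module that the app imports purely for its side effects:
@callback("on_click")
def handle_click(event):
    return f"clicked {event}"

# Dispatch by name relies on registration having already happened at import time.
print(CALLBACKS["on_click"]("button-1"))
# If the defining module were imported lazily and never otherwise touched,
# CALLBACKS would still be empty here and the lookup would raise KeyError.
```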
You may cry, "But lazy imports are opt-in! Developers can choose to opt-out of lazy imports if it doesn't work for them." What if these imports were transitive? What if our frontend needed to be completely initialized before starting a critical process, else it would cause a production outage? What if you were a maintainer of a library that was used by millions of people? How could you be sure that adding lazy imports wouldn't break any code downstream? Many people made this argument for type hints, which is sensible because type hints have no effect on runtime behavior*. This is not true for lazy imports; import statements exist in essentially every nontrivial Python program, and changing them to be lazy will fundamentally alter runtime behavior.
This is before we even get to the rest of the issues the PEP describes, which are even weirder and crazier than this. This is a far more difficult undertaking than many people realize.
---
* You can make a program change its behavior based on type annotations, but you'd need to explicitly call into typing APIs to do this. Discussion about this is beyond the scope of this post.
Product manager for Dash here. At Plotly we're actually pretty excited about the potential for lazy-loaded imports, as they could help out a lot with the import performance of Plotly.py.
As this comment mentions, Dash apps would not support lazy-loaded imports until the underlying Dash library changes how it loads callbacks and component libraries (the two features that would be most impacted here), but that doesn't mean there's no path to success. We've been discussing some ways we could resolve this internally, and if this PEP is accepted we'd certainly go further to see whether we can fully support lazy-loaded imports (both of the Dash library itself and Dash component libraries, and of relative imports in Dash apps).
So they're not entitled to hold the opinion that their imports take too long if they don't know the inner workings of Python's import system? Do you listen to yourself?
Right now in Python, you can move an import statement inside a function. Lazy imports at the top level are not needed. All lazy imports do is make you think less about what you are writing. If you like that, then just vibe-code all of your stuff and leave the language spec alone.
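For example, the existing idiom:

```python
def export_report(rows):
    # pandas is only imported (and paid for) the first time this function runs
    import pandas as pd
    return pd.DataFrame(rows).to_csv(index=False)
```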
I.e., with lazy imports, the import happens at the site of usage. Since this is clearly code that could already be written, it only breaks things in the sense that someone could already write broken code. And since it is opt-in, if using it breaks some code, people will notice that and choose not to rewrite that code using it.
Some of these worries make sense, but wouldn't it be relatively trivial to pass a flag to the interpreter (or something similar) to force all imports to evaluate eagerly, as in the current behavior? But to be a bit cheeky: if some of these issues cause serious production outages for you, it might be time to consider moving on from a scripting language altogether.
The issue is that some imports can be made lazy and some cannot. A binary, all-or-nothing approach does not address that. (I also think there is zero basis to claim that adding such a flag is trivial, since there's no reference implementation of this flavor of lazy imports.)
What if we have a program where one feature works only when lazy imports are enabled and one feature only when lazy imports are disabled?
This is not a contrived concern. Let’s say I’m a maintainer of an open-source library and I choose to use lazy imports in my library. Because I’m volunteering my time, I don’t test whether my code works with eager imports.
Now, let’s say someone comes and builds an application on top of this library. It doesn’t work with lazy imports for some unknown reason. If they reach for a “force all imports” flag, their application might break in another mysterious way because the code they depend on is not built to work with eager imports. And even if my dependency doesn’t break, what about all the other packages the application may depend on?
The only solution here would be for the maintainer to ensure that their code works with both lazy and eager imports. However, this imposes a high maintenance cost and is part of the reason why PEP 690 was rejected. (And if your proposed solution was “don’t use libraries made by random strangers on the Internet”, boy do I have news for you...)
My point is that many things _will_ break if migrated to lazy imports. Whether they should have been written in Python in the first place is a separate question that isn’t relevant to this discussion.
Maybe a package that requires lazy imports could somehow declare that requirement, so another package that tries to force eager imports would fail early and realize it needs to replace this dependency with something compatible or change its ways. It definitely adds complexity, though.
Or check at runtime if it's running with the lazy import feature active. Then instead of breaking in mysterious ways in production it would crash on startup, during development.
Theoretically, the implementation could take an "as lazy as possible" approach: traverse lazy imports until you encounter a regular one.
I doubt it will make much difference, but at least it gives an option.
I don't see how. It adds a new, entirely optional syntax using a soft keyword. The semantics of existing code do not change. Yes, yes, you anticipated the objection:
> What if these imports were transitive? ... How could you be sure that adding lazy imports wouldn't break any code downstream?
I would need to see concrete examples of how this would be a realistic risk in principle. (My gut reaction is that top-level code in libraries shouldn't be doing the kinds of things that would be problematic here, in the first place. In my experience, the main thing they do at top level is just eagerly importing everything else for convenience, or to establish compatibility aliases.)
But if it were, clearly that's a breaking change, and the library bumps the major version and clients do their usual dependency version management. As you note, type hints work similarly. And "explicitly calling into typing APIs" is more common than you might think; https://pypistats.org/packages/pydantic exists pretty much to do exactly this. It didn't cause major problems.
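For instance, a minimal pydantic-style sketch (assuming pydantic v2 and its default lax coercion): the annotations sit there inertly until the library explicitly reads them and acts on them at runtime.

```python
from pydantic import BaseModel

class User(BaseModel):
    id: int          # pydantic reads this annotation at runtime...
    name: str

user = User(id="42", name="Ada")   # ...and uses it to coerce and validate the input
print(user.id, type(user.id))      # 42 <class 'int'>
```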
> Import statements fundamentally have side effects, and when and how these side effects are applied will cause mysterious breakages that will keep people up for many nights.
They do have side effects that can be arbitrarily complex. But someone who opts in to changing import timing and encounters a difficult bug can just roll back the changes. It shouldn't cause extended debugging sessions unless someone really needs the benefits of the deferral. And people in that situation will have been hand-rolling their own workarounds anyway.
> Too many people in this thread hold the view of "importing {pandas, numpy, my weird module that is more tangled than an eight-player game of Twister} takes too long and I will gladly support anything that makes them faster".
I don't think they're under the impression that this necessarily makes things faster. Maybe I haven't seen the same comments you have.
Deferring imports absolutely would allow, for example, pip to do trivial tasks faster — because it could avoid importing unnecessary things at all. As things currently stand, a huge fraction of the vendored codebase will get imported pretty much no matter what. It's analogous to tree shaking, but implicitly, at runtime and without actually removing code.
Yes, this could be deferred to explicitly chosen times to get more or less the same benefit. It would also be more work.
> Libraries such as PyTorch, Numba, NumPy, and SciPy, among others, did not seamlessly align with the deferred module loading approach. These libraries often rely on import side effects and other patterns that do not play well with Lazy Imports. The order in which Python imports could change or be postponed, often led to side effects failing to register classes, functions, and operations correctly. This required painstaking troubleshooting to identify and address import cycles and discrepancies.
This isn't precisely the scenario I described above, but it is a concrete example of how deferred imports can cause issues that are difficult to debug.
Regarding performance benefits:
> At Meta, the quest for faster model training has yielded an exciting milestone: the adoption of Lazy Imports and the Python Cinder runtime. ... we’ve been able to significantly improve our model training times, as well as our overall developer experience (DevX) by adopting Lazy Imports and the Python Cinder runtime.
It's been explained many times before why this is not possible: the library doesn't actually have a version number. The distribution of source code on PyPI has a version number, but the name of this is not connected to the name of any module or package you import in the source code. The distribution can validly define zero or more modules (packages are a subset of modules, represented using the same type in the Python type system).
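You can see that the two namespaces are distinct with the standard library alone (Python 3.10+; the examples assume Pillow and PyYAML happen to be installed):

```python
from importlib.metadata import packages_distributions, version

# Import names map to the distribution(s) that provide them; they need not match.
mapping = packages_distributions()
print(mapping.get("PIL"))      # e.g. ['Pillow'] if Pillow is installed
print(mapping.get("yaml"))     # e.g. ['PyYAML']
print(version("Pillow"))       # the version number belongs to the distribution, not the module
```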
You got three other responses before me all pointing at uv. They are all wrong, because uv did not introduce this functionality to the Python ecosystem. It is a standard defined by https://peps.python.org/pep-0723/, implemented by multiple other tools, notably pipx.
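For reference, a minimal PEP 723 script; runners such as `uv run` and `pipx run` read the commented metadata block and provision the listed dependencies before executing the file:

```python
# /// script
# requires-python = ">=3.11"
# dependencies = [
#   "requests<3",
# ]
# ///
import requests

print(requests.get("https://peps.python.org/pep-0723/").status_code)
```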
> It's been explained many times before why this is not possible: the library doesn't actually have a version number. The distribution of source code on PyPI has a version number, but the name of this is not connected to the name of any module or package you import in the source code.
You're making the common mistake of conflating how things currently work with how things could work if the responsible group agrees to change how things work. Something being the way it is right now is not the same as something else being "not possible".
No, changing this breaks the world. A huge fraction of PyPI becomes completely invalid overnight, and the rest fails the expected version checks. Not to mention that the language is fundamentally designed around the expectation that modules are singleton. I've written about this at length before but I can't easily find it right now (I have way too many bookmarks and not nearly enough idea how to organize them).
Yes, you absolutely can create a language that has syntax otherwise identical to Python (or at least damn close) which implements a feature like this. No, you cannot just replace Python with it. If the Python ecosystem just accepted that clearly better things were clearly better, and started using them promptly, we wouldn't have https://pypi.org/project/six/ making it onto https://pypistats.org/top (see also https://sethmlarson.dev/winning-a-bet-about-six-the-python-2...).
The hard part is making the change. Adding an escape hatch so older code still works is easy in comparison.
Nobody is claiming this is a trivial problem to solve, but it's also not an impossible problem. Other languages have managed to figure out how to achieve this while still maintaining backwards compatibility.
Note that you will be expected to have familiarized yourself generally with previous failed proposals of this sort, and proactively considered all the reasonably obvious corner cases.
I’m not going to spend an entire weekend drafting a proposal instead of spending time with my kids, just to win “internet points”.
If you want examples then just look at one of the other languages that have implemented compiler / runtime dependency version checks.
Even Go has better dependency resolution than Python, and Go is often the HN poster child for how not to do things.
The crux of the matter is that this is a solvable problem. The real issue isn't that it's technically impossible; it's that it's not an annoying enough day-to-day problem for the people who are in a position to influence this change. I'm not that person and don't aspire to be that person (I have plenty of other projects on my plate as it is).
In spite of the 'You're welcome to bring', this does not actually sound like encouragement, but more like a veiled statement that some non-technical reason will be found to shoot down the proposal if it were made, so you might as well not bother.
It's an allusion to the fact that there is a very long history establishing that the problem is not as simple as it sounds, even if you get past the most basic issues, and it's hard to explain it all in a single coherent post.
Nobody suggested it was easy. We were just arguing against your claim that it's impossible.
Saying something is possible isn’t the same as saying something is easy.
If you had said “it's a different problem to solve because of…” then you wouldn't have had any pushback. But you didn't. You said “this is not possible”. And that's the part that people were disputing.
No, the point is that most people in this thread do not appreciate the complexity of implementing lazy imports. If you disagree, your energy is better spent talking to a CPython core developer about implementation details than making baseless assertions from an ivory tower.
There are many people here who think enabling lazy imports is as simple as flipping a light switch. They have no idea what they're talking about.
And actually, people do appreciate the complexities of changes like this. We were responding to a specific comment that said “it's impossible”. Saying something is “possible” isn't the same as saying “it's easy”.
> Not to mention that the language is fundamentally designed around the expectation that modules are singleton.
Modules being singletons is not a problem in itself I think? This could work like having two versions of the same library in two modules named like library_1_23 and library_1_24. In my program I could hypothetically have imports like `import library_1_23 as library` in one file, and `import library_1_24 as library` in another file. Both versions would be singletons. Then writing `import library==1.23` could be working like syntax sugar for `import library_1_23 as library`.
Of course, having two different versions of a library running in the same program could be a nightmare, so all of that may not be a good idea at all, but maybe not because of module singletons.
I know I'm missing something but wouldn't it be possible to just throw an import error when that happens? Would it even break anything? If I try:
import numpy==2.1
And let's say numpy didn't expose a version number in a standard field (which could be agreed upon in a PEP); then it would just throw an import exception. It wouldn't break any old code. And only packages with that explicit field would support the pinned-version import.
And it wouldn't involve trying to extract and parse versions from older packages with some super spotty heuristics.
It would make new code impossible to use with older versions of Python and older packages, but that's already the case.
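Something close to this is already expressible today without new syntax, by checking the installed distribution's version before importing; a rough sketch using the standard library plus the `packaging` package:

```python
from importlib.metadata import version
from packaging.specifiers import SpecifierSet

def require(dist_name: str, spec: str) -> None:
    """Approximate 'import numpy==2.1': fail fast if the installed version doesn't match."""
    installed = version(dist_name)
    if installed not in SpecifierSet(spec):
        raise ImportError(f"{dist_name} {installed} does not satisfy {spec}")

require("numpy", "==2.1.*")
import numpy  # only reached if the installed numpy matches
```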
> And let's say numpy didn't expose a version number in a standard field (which could be agreed upon in a PEP); then it would just throw an import exception. It wouldn't break any old code. And only packages with that explicit field would support the pinned-version import.
Yes, this part actually is as simple as you imagine. But that means in practical terms that you can't use the feature until at best the next release of Numpy. If you want to say for example that you need at least version 2 (breaking changes, after all), well, there are already 18 published packages that meet that requirement but are unable to communicate that in the new syntax. This can to my understanding be fixed with post releases, but it's a serious burden for maintainers and most projects are just not going to do that sort of thing (it bloats PyPI, too).
And more importantly, that's only one of many problems that need to be solved. And by far the simplest of them.
If versioned imports were added to the language, versioned library support would obviously have to become part of the language as well.
However, it isn't trivial. The first problem that comes to mind:
module a imports somelib>=1.2.0 first and then b, and b in turn requires somelib>1.2.1, with both versions available. Will they resolve to the same module, or will I end up with a mess from combining them?
You could absolutely have this be part of the language. The question then becomes how to implement it in a reasonable way. I think every package should have a __version__ attribute you can read; then you could have versioned imports.
In fact there are already many packages defining __version__ at the package level.
Edit: What they are solving with uv is versioning at the moment of standing up an environment; you're more concerned about code-level protection, whereas they're more concerned about protecting versioning at environment setup.
> In fact there are already many packages defining __version__ at the package level.
This only helps for those that do, and it hasn't been any kind of standard the entire time. But more importantly, that helps only the tiniest possible bit with resolving the "import a specific version" syntax. All it solves is letting the file-based import system know whether it found the right folder for the requested (or worse: "a compatible") version of the importable package. It doesn't solve finding the right one if this one is wrong; it doesn't determine how the different versions of the same package are positioned relative to each other in the environment (so that "finding the right one" can work properly); it doesn't solve provisioning the right version. And most importantly, it doesn't solve what happens when there are multiple requests for different versions of the same module at runtime, which incidentally could happen arbitrarily far apart in time, and also the semantics of the code may depend on the same object being used to represent the module in both places.
> It's been explained many times before why this is not possible: the library doesn't actually have a version number.
That sounds like it is absolutely fixable to me, but more a matter of not having the will to fix it, based on some kind of traditionalism. I've used Python a lot. But it is stuff like this, maddeningly broken for no good reason at all, that has turned me away from it. So as long as I have any alternative I will avoid Python, because I've seen way too many accidents on account of stuff like this, and many lost nights of debugging only to find out that an easily avoidable issue became - once again - the source of much headscratching.
> a matter of not having the will to fix it based on some kind of traditionalism
Do you know what happens when Python does summon the will to fix obviously broken things? The Python 2->3 migration happens. (Perl 6 didn't manage any better, either.) Now "Python 3 is the brand" and the idea of version 4 can only ever be entertained as a joke.
Yes, good point. Compared to how the Erlang community has handled decades of change, Python does not exactly deserve the beauty prize. The lack of forethought - not to be confused with a lot of hot air - on some of these decisions is impressive. I think the ability to track developments in near realtime is in conflict with that, though. If you want your language to be everything to everybody, then there will be some broken bones along the way.
> It's been explained many times before why this is not possible: the library doesn't actually have a version number.
Not possible? Come on.
Almost everyone already uses one of a small handful of conventional ways to specify it, e.g. the `__version__` attribute. It's long overdue that this be standardized so library versions can reliably be introspected at runtime.
Allowing multiple versions to be installed side-by-side and imported explicitly would be a massive improvement.
I believe the charitable interpretation is that it is not possible without breaking an enormous amount of legacy code. Which does feel close enough to “not possible”.
Some situations could be improved by allowing multiple library versions, but this would introduce new headaches elsewhere. I certainly do not want my program to have N copies of numpy, PyTorch, etc. because some intermediate library claims to have a just-so dependency tree.
What do you do today to resolve a dependency conflict when an intermediate library has a just-so dependency tree?
The charitable interpretation of this proposed feature is that it would handle this case exactly as well as the current situation, if the situation isn't improved by the feature.
This feature says nothing about the automatic installation of libraries.
This feature is absolutely not about supporting multiple simultaneous versions of a library at runtime.
In the situation you describe, there would have to be a dependency resolution, just like there is when installing the deps for a program today. It would be good enough for me if "first import wins".
> What do you do today to resolve a dependency conflict when an intermediate library has a just-so dependency tree?
When an installer resolves dependency conflicts, the project code isn't running. The installer is free to discover new constraints on the fly, and to backtrack. It is in effect all being done "statically", in the sense of being ahead of the time that any other system cares about it being complete and correct.
Python `import` statements on the other hand execute during the program's runtime, at arbitrary separation, with other code intervening.
> This feature says nothing about the automatic installation of libraries.
It doesn't have to. The runtime problems still occur.
I guess I'll have to reproduce the basic problem description from memory again. If you have modules A and B in your project that require conflicting versions of C, you need a way to load both at runtime. But the standard import mechanism already hard-codes the assumptions that i) imports are cached in a key-value store; ii) the values are singleton and client code absolutely may rely on this for correctness; iii) "C" is enough information for lookup. And the ecosystem is further built around the assumption that iv) this system is documented and stable and can be interacted with in many clever ways for metaprogramming. Changing any of this would be incredibly disruptive.
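The first three of those assumptions are easy to observe:

```python
import sys
import json
import json as j2

assert json is j2                     # (ii) modules are singletons; both names bind the same object
assert sys.modules["json"] is json    # (i)/(iii) the cache is a plain dict keyed by the bare name
json.FLAG = "set by one client"       # every other importer of json now sees this attribute
```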
> This feature is absolutely not about supporting multiple simultaneous versions of a library at runtime.
> and support having multiple simultaneous versions of any Python library installed.
Which would really be the only reason for the feature. For the cases where a single version of the third-party code satisfies the entire codebase, the existing packaging mechanisms all work fine. (Plus they properly distinguish between import names and distribution names.)
> and support having multiple simultaneous versions of any Python library installed.
Installed. Not loaded.
The reason is to do away with virtual environments.
I just want to say `import numpy@2.3.x as np` in my code. If 2.3.2 is installed, it gets loaded as the singleton runtime library. If it's not installed, load the closest numpy available and print a warning to stderr. If a transitive dependency in the runtime tree wants an incompatible numpy, tough luck; the best you get is a warning message on stderr.
You already have the A, B, C dependency resolution problem you describe today. And if it's not caught at the time of installing your dependencies, you see the failure at runtime.
You'd have to invent a different way, within existing Python syntax, to communicate the version, but you can do this today with sys.path and sys.meta_path hacks.
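A crude sketch of the sys.path flavor of that hack, assuming each version has been unpacked into its own directory (the directory names here are made up):

```python
import sys
import importlib

def import_pinned(name: str, vendored_dir: str):
    """Prepend a per-version directory so the next import of `name` resolves there."""
    sys.path.insert(0, vendored_dir)          # e.g. "./vendor/numpy-2.3.2"
    try:
        # Note: if `name` is already in sys.modules, the cached module wins regardless.
        return importlib.import_module(name)
    finally:
        sys.path.pop(0)

np = import_pinned("numpy", "./vendor/numpy-2.3.2")
```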
But virtual environments are quite simply not a big deal. Installed libraries can be hard-linked and maybe even symlinked between environments and this can be set up very quickly. A virtual environment is defined by the pyvenv.cfg marker file; you don't need to use or even have activation scripts, and you especially don't (generally) need a separate copy of pip for each one, even if you do use pip.
On the flip side, allowing multiple versions of a library in a virtual environment has very little effect on package resolution; it just allows success in cases of conflict, but normally there aren't conflicts (because you're typically making a separate environment for a single "root" package, and it's supposed to be possible to use that package in Python as it actually exists, without hacks). The installer still has to scrounge up metadata (and discover it recursively) and check constraints.
I should be able to do "python foo.py" and everything should just work. foo.py should define what it wants and python should fetch it and provide it to foo. I should be able to do "pyc foo.py; ./foo" and everything should just work, dependencies balled up and statically included like Rust or Go. Even NodeJS can turn an entire project into one file to execute. That's what a modern language should look and work like.
The moment I see "--this --that" just to run the default version of something you've lost me. This is 2025.
NO!
I don't want my source code filled with this crap.
I don't want to lose multiple hours debugging why something went wrong because I am using three versions of numpy and seven of torch at the same time and there was a mixup.
From merely browsing through a few comments, people seem to have mostly positive opinions of this proposal. So why did it fail many times before, but not this time? What drives the success of this PEP?
All of our tools can be used independently and in coexistence with other tools. You can use `uv` with other build backends; you can use `virtualenv` to create your virtual environments, and `uv pip` to install into them; you can use `ruff` as a linter and `black` as a formatter, or `ruff` for both, or whatever. Here, similarly, you can use `uv` with `ruff`, or bring your own formatter. It's intentional for us that you can use the pieces that you want, and interoperate with other tools. But it's also intentional that we want using our tools _together_ to be a great experience. I think we can achieve both of these things. Or, at least, we're going to try.
We already develop a formatter: Ruff (https://github.com/astral-sh/ruff). Ruff and uv are built by the same team. `uv format` is just an optional front-end to `ruff format`.
But, for example, my project already uses Ruff, and I have to worry about having a "managed" extra copy of Ruff that subtly alters the normal functioning both of "uv tool run" and of ruff itself.
We ignore upper bounds because it leads to a better solve. You can read my comment here: https://news.ycombinator.com/item?id=46464453. There's significant discussion about this, e.g., here: https://discuss.python.org/t/requires-python-upper-limits/12....
> Ambiguity detection is important.
I think you're misunderstanding why we do this: it's a security feature. pip's design is inherently vulnerable to dependency confusion attacks, since packages of the same name across indexes are considered equally trusted by pip. You can look up the torchtriton attack to learn more.
> Stuff like this seems unlikely to contribute to overall runtime, but it does decrease flexibility.
I think you're misinformed. We support all of these features: system- and per-user configuration files, environment variables, etc. We just don't read _pip's_ configuration file, which is intended for pip, not uv.