greut's comments on Hacker News


> The script uses systemd underneath.

Just show us that mess of yours :)


It's a bit of an old post, and yes, Pipenv is not the go-to tool anymore. pip-tools is okay for people who really, really love their requirements.txt; otherwise we tend to go with Poetry at work.

Any folks having a good experience with PDM https://github.com/pdm-project/pdm ?


After a series of bad experiences with Poetry, I switched the packages I maintain to PDM. Although I have hit a few minor snags, the maintainer and other users in the project's GitHub discussions have never failed to help with a fix, workaround, or advice. It's pleasant to use, and the only feature I cared about that was in Poetry but not PDM (a publish-to-PyPI command) has proven easier to do with the Twine tool anyway.

My situation is, I think, unusual, in that I need to use a private PyPI repo which requires mTLS for both fetching and publishing. Had it not been for that, I suspect I'd still be using Poetry, but given the experiences I've had with PDM I wouldn't switch back even if the situation with my repo changed.


See my above comment, something to be aware of.


How is the maturity/stability of Poetry these days? I despise Pipenv, and was hoping to push for a switch to Poetry at my place of work a couple years ago, but I ran into blocking bugs across multiple versions (the latest N versions affected by bug A, the prior M versions affected by bug B). Had to chalk it up to "not yet mature enough" and resign myself to the absurd lock times and countless terrible behaviors of Pipenv.


We use Poetry for all Python projects. I haven't seen an actual Poetry internal bug in quite a while, but using Poetry effectively does require keeping some things in mind that are probably non-obvious to newcomers:

1. Poetry's default assumption that packages respect semver simply does not hold up in reality. Very few packages actually stick to semver. Thus the `^x.y.z` default version range is quite often too loose. I've found that using `~x.y.z` for most packages is far more stable.
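As a rough sketch (ignoring Poetry's special handling of 0.x versions, where caret behaves differently), the two shorthands expand to these ranges:

```python
# Illustrative only, not Poetry's actual code: what the caret and
# tilde shorthands mean for a full x.y.z version with major >= 1.

def caret_range(version: str) -> str:
    """^x.y.z allows anything up to (excluding) the next major version."""
    major, _minor, _patch = version.split(".")
    return f">={version},<{int(major) + 1}.0.0"

def tilde_range(version: str) -> str:
    """~x.y.z only allows patch-level updates within the same minor."""
    major, minor, _patch = version.split(".")
    return f">={version},<{major}.{int(minor) + 1}.0"

print(caret_range("1.4.2"))  # >=1.4.2,<2.0.0
print(tilde_range("1.4.2"))  # >=1.4.2,<1.5.0
```

With tilde, a dependency that breaks its API in a minor release can't sneak into your lock file.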

2. Imho, `poetry update` is a footgun. Without a specifier, it will attempt to update the entire dependency tree. Not only is this slow; together with 1) it's all too likely one ends up with dependencies that are actually incompatible at runtime. I'd much rather have a `poetry update --all` flag for the rare instance I do want to update everything. The default behaviour should be to require a list of packages to update.
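Until such a flag exists, naming the packages you want updated keeps the rest of the tree untouched (commands below assume a recent Poetry; check `poetry update --help` for your version):

```shell
# Footgun: updates EVERY dependency within the pyproject.toml constraints
poetry update

# Safer: only update the packages you actually care about
poetry update requests urllib3

# Preview what would change without touching the lock file
poetry update --dry-run requests
```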

3. There are some common packages that cause very long resolution times if they are not restricted. Case in point: boto3. Even if one doesn't use boto3 oneself, it's very likely a transitive dependency. Many packages simply specify `'*'` as their version dependency (they shouldn't, but it's the unfortunate reality that many do). This will cause Poetry to consider every possible boto3 version. With hundreds of versions (boto3 has a release every other day), this gets unwieldy fast. So I often end up specifying boto3 myself with some sensible range in my toml file, even when it's not a strict dependency of my own project.
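For example, a constraint like this in pyproject.toml (the exact range here is illustrative) caps the search space even when boto3 is only a transitive dependency:

```toml
[tool.poetry.dependencies]
python = "^3.10"
# Not imported directly; only constrains resolution of the
# transitive boto3 dependency to a small window of releases.
boto3 = ">=1.26,<1.29"
```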

4. The data science ecosystem needs particular attention. Best to simply pin those packages, as every pandas update is guaranteed to break something. ABI changes to numpy are a particular nightmare. This is again due to too many packages simply specifying `'*'` for their numpy dependency, which is further complicated by the fact that most don't distinguish between build-time and run-time dependencies. The numpy ABI is only forward compatible, hence one should build with the oldest supported numpy[0].
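One common pattern (for setuptools-based projects; adapt the backend to your own build system) is to put the meta-package in the PEP 518 build requirements, so wheels are compiled against the oldest ABI and run against newer numpy releases:

```toml
[build-system]
# Build against the oldest numpy ABI each Python version supports,
# so the resulting wheel also runs against newer numpy releases.
requires = ["setuptools", "wheel", "oldest-supported-numpy"]
build-backend = "setuptools.build_meta"
```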

[0]: https://pypi.org/project/oldest-supported-numpy/


I'll take the behavior of 2 over Pipenv's "any re-lock aggressively updates everything" approach. What's the point of lock files if you have to pin everything in order to have stability and control over versions?


Updating dependencies (#2) does seem needlessly painful. I have wondered if I am missing some obvious workflow.


Seems very good, except they do seem to have some long-running pre-releases going, which is troubling. Just ship it already!


PDM is based on PEP-582, which is only a draft, so you will hit edge cases where it's not supported, or projects that refuse to support it because of this. I'd avoid it for that reason personally.

virtualenvs are much better supported.


You can opt out of PEP-582, and in PDM 2.0 (just released), it becomes opt-in.


What's the point of using it over Poetry then? Why do we need another?


It's been removed from AUR packages as well, https://lists.archlinux.org/pipermail/aur-requests/2020-Marc...


Ups (opening keynote) and downs (closing keynote), and interesting ones in the middle. It's hard to unsee the “2nd - 4TH”.


To make Python fast in that regard, you'll have to rely on C-based libraries (C as in Cython) like httptools or libuv. http://magic.io/blog/uvloop-blazing-fast-python-networking/
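For instance, swapping in uvloop on top of ordinary asyncio code is a one-liner; the guarded import below keeps the sketch runnable even without uvloop installed:

```python
import asyncio

try:
    import uvloop  # C/libuv-based event loop, drop-in asyncio replacement
    uvloop.install()  # subsequent asyncio loops will use uvloop
except ImportError:
    pass  # fall back to the pure-Python default event loop

async def echo(data: bytes) -> bytes:
    """Stand-in coroutine for real network I/O."""
    await asyncio.sleep(0)
    return data

print(asyncio.run(echo(b"ping")))  # b'ping'
```

The application code doesn't change at all; only the event loop implementation underneath does.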


With `aiohttp` you don't need to do much to build your bot using `asyncio`. Here's a sample bot to vote on stuff, and an article explaining the gist of it:

- https://github.com/HE-Arc/votebot

- https://medium.com/@greut/a-slack-bot-with-pythons-3-5-async...

This is a sample project for a course on Python, in which you'll find a mix of French/English...


`await` replaces most of those `yield from`s in 3.5, right?
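Right: inside 3.5's `async def` coroutines, `await` takes over the delegation role that `yield from` played in generator-based coroutines (the old `@asyncio.coroutine` style has since been removed from the standard library). A minimal side-by-side illustration:

```python
import asyncio

# Pre-3.5 flavour: delegation with `yield from`, shown here with plain
# generators since asyncio's decorator-based form no longer exists.
def inner_gen():
    yield 1
    return "done"

def outer_gen():
    result = yield from inner_gen()  # delegates, then captures the return value
    return result

gen = outer_gen()
print(next(gen))             # 1
try:
    gen.send(None)
except StopIteration as exc:
    print(exc.value)         # done

# 3.5+ flavour: `await` does the same job inside `async def`.
async def inner():
    await asyncio.sleep(0)
    return "done"

async def outer():
    return await inner()

print(asyncio.run(outer()))  # done
```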


It looks like (some parts of) the source code of this project is there: https://github.com/allesblinkt/riverbed-vision



The evolution of web technologies is an organic growth that won't be stopped in time to say "HTML5 certified". Browser vendors push innovations through channels other than the W3C (which sometimes causes problems), but what's needed is driven by the applications (that's why companies like Facebook push test suites like Ringmark). Looking at browsers only through the HTML5 lens is rather restrictive, as people and application developers may have other needs, whether high-tech (NaCl, 3D) or slow-tech (assistive technologies). No browser can do it 100% right, because nobody will ever agree on what those 100% are, now or in the future.


It's not about stopping the evolution of web technologies, as you say; it's about guaranteeing a minimum baseline of features that we can be sure will be present. A minimum target that you can count on. Most sites have no need for the advanced features you mention, but if they do, there's nothing to stop them requiring that support, and browser manufacturers will certainly keep rolling out new features as they fight for market share. In the end, it's about moving the baseline forward.


> The evolution of the web technologies is an organic growth that won't be stopped in time to say "HTML5 certified".

At the same time now you have a way to say "When I say HTML 5 I refer to these features, at least". Without a spec fixed in stone it is hard to say "I want to use a browser that does HTML 5".

Without fixed specs we are currently regressing to pre-2000 sites "optimized for" a certain browser; "Sorry, this site works only with Chrome" is a common sight on Show HN submissions, but also on sites for a more general audience. With the comeback of something that resembles a fixed spec, call it a snapshot spec, this situation may get better, or at least not get worse.


100% spec implementation doesn't stop innovation. One browser vendor just needs to respect the spec. If one wants to add more features, then I don't see how respecting the spec makes innovation hard. You can still build on top of the spec. Just respect it.

But it is my duty as a web developer to ensure the features I use will be widely available; I don't want to get trapped into this or that platform while other major browsers never implement some "innovative" features.

So innovate if you want, but I care only about stability.

