If you can instead construct a list of existing instances to grandfather in, that doesn't suffer from this problem. Of course many linting tools do this via "ignore" code comments.
That feels less arbitrary than a magic number (because it is!) and I've seen it work.
We used this approach to great effect when we migrated a huge legacy project from JavaScript to TypeScript. It gives you enough flexibility in the in-between stages that you're not forced to immediately change weird code you don't yet understand, while enforcing enough structure to eventually make it out alive at the end.
That might be fine in your context. People's problems are real, though. What they're almost always missing is separating the source code from the compiled output ("lock files"). Pick a tool to help with that, commit both files to your ("one's") project, problem solved.
People end up committing either one or the other, not both, but:
- You need the source code, else your project is hard to update ("why did they pick these versions exactly?" - the answer is the source code).
- You need the compiled pinned versions in the lock file, else if dependencies are complicated or fast-moving or a project goes unmaintained, installing it becomes a huge mindless boring timesink (hello machine learning, all three counts).
Whenever I see people complaining about Python dependencies, most of the time it seems that somebody lacked this concept, didn't know how to do it with Python, or was put off by too many choices? That, plus that ML projects move quickly and may have heavy "system" dependencies (CUDA).
In the source code - e.g. requirements.in (in the case of pip-tools or uv's clone of that: uv pip compile + uv pip sync), one lists the names of the projects one's application depends on, with a few version constraints explained with comments (`someproject <= 5.3 # right now spamalyzer doesn't seem to work with 5.4`).
In the compiled output - i.e. the lock file (pip-tools and uv pip compile/sync use requirements.txt for this) - one makes sure every package is pinned to one specific version, to form a set of versions that work together. A tool (like uv pip compile) generates the lock file from the source, picking versions that are declared (in PyPI metadata) to work together.
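For concreteness, a rough sketch of what the two files end up looking like (package names and pinned versions here are made up, apart from the example above):

```
# requirements.in - the hand-written source
someproject <= 5.3      # right now spamalyzer doesn't seem to work with 5.4
someotherproject

# requirements.txt - generated by `uv pip compile` / `pip-compile`, never edited by hand
someproject==5.3.2
someotherproject==1.9.0
somedependency==2.4.1   # pulled in as a dependency of someotherproject
```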
My advice: pip-tools (pip-compile + pip-sync) does this very nicely - even better, uv's clone of pip-tools (uv pip compile + uv pip sync), which runs faster. Goes nicely with:
- pyproject.toml (project config / metadata)
- plain old setuptools (works fine, doesn't change: great)
- requirements.in: the source for pip-tools (that's all pip-tools does: great! uv has a faster clone)
- pyenv to install python versions for you (that's all it does: great! again uv has a faster clone)
- virtualenv to make separate sandboxed sets of installed python libraries (that's all it does: great! again uv has a faster clone)
- maybe a few tiny bash scripts, maybe a Makefile or similar just as a way to list out some canned commands
- actually write down the commands you run in your README
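Concretely, the whole "canned commands" layer can be as small as this (a sketch - names and paths are arbitrary):

```
# one-off: create and activate the sandboxed environment
python -m venv .venv && source .venv/bin/activate

# re-pin after editing requirements.in
uv pip compile requirements.in -o requirements.txt

# make the venv match the lock file exactly (see the PS below on what sync means)
uv pip sync requirements.txt
```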
PS: the point of `uv pip sync` over `uv pip install -r requirements.txt` is that the former will uninstall packages that aren't explicitly listed in requirements.txt.
uv also has a poetry-like do-everything 'managed' everything-is-glued-together framework (OK you can see my bias). Personally I don't understand the benefits of that over its nice re-implementations of existing unix-y tools, except I guess for popularizing python lockfiles - but can't we just market the idea "lock your versions"? The idea is the good part!
> I think ['The Secret War' by Brian Johnson is] an earlier version of ['Most Secret War' by R V Jones]
Most Secret War is R V Jones' memoir of his personal involvement in radar, aerial navigation, aerial surveillance etc. in British military intelligence. I read it a long time ago but I don't remember it as a dry read at all - dramatic times after all.
Brian Johnson was involved in writing and producing the BBC documentary The Secret War and its accompanying book. So it's not an earlier version of the same thing. The documentary is on YouTube I think - a good watch, but I'd recommend reading Jones' original book.
For the benefit of anybody thinking "with GitLab I'm safe from this": if you're saying (and perhaps you're not) that some other git hosting service
- gives you control over gc-ing their hosted remote?
- does not to your knowledge have a third-party public reflog or an events API or brute-forceable short hashes?
If so, especially the second of those seems a fragile assumption, because this is "just" the way git works (I'm not saying the consequences aren't easy to mentally gloss over). Even if GitLab lacks those things currently (but I think, for example, it does support short hashes), it's easy to imagine them showing up somehow retroactively.
If you're just agreeing with the grandparent post that github's naming ("private") is misleading or that the fork feature encourages this mistake: agreed.
Curious to know if any git hosting service does support gc-ing under user control.
I found the post you're replying to helpful (and it made me laugh): I've come across the abbreviation POLA many times, with its non-jokey meaning "principle of least authority". I've also come across "principle of least astonishment" (Larry Wall or some other Perl contributor, maybe?) but I'd never noticed that it was (presumably?) a jokey reference to the principle of least authority - I guess because I came across the joke first, back when I was barely a programmer, and I've never seen it abbreviated.
But maybe it never was a reference to POLA proper - "principle of least privilege" is more widespread I think, outside of the object capability community. And maybe "least astonishment" came first!
You're saying there's a github API that takes as an argument a secret, and creates a git commit containing that secret? I'm very surprised. Can you provide a reference to the API call?
To clarify, it doesn’t create a commit and is only usable within actions. I have always used the GH action VSCode extension for it, but I believe from the API, you would call the below endpoint using a classic/non-fine grained PAT that has the “repo” grant.
It uses types to get quoting right? Or does it quote everything (regardless of whether it's already quoted)?
Ironically, the first time I saw the former was in a Python templating library (in the early 2000s -- from distant memory I think it might have been the work of the MemsExchange team?)
Formatters basically differentiate the literal parts of the string from the template arguments. There's also a neat Postgres library that does the same for SQL quoting.
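psycopg's `sql` module in Python works this way, for example - the type of each fragment decides how it gets quoted:

```python
from psycopg2 import sql

table = "users"
name = "o'brien"

query = sql.SQL("select * from {} where name = {}").format(
    sql.Identifier(table),  # rendered as a quoted identifier: "users"
    sql.Literal(name),      # rendered as an escaped literal: 'o''brien'
)

# cursor.execute(query) renders the composed query using the connection's
# quoting rules; plain strings are never interpolated directly, so nothing
# gets quoted twice.
```

Because `SQL`, `Identifier` and `Literal` are distinct types, the formatter knows which quoting rule to apply to each piece.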
People deploying low-traffic side projects - to VPSes or similar hosting that allows price capping - and who do use a database: what deployment stack do you use these days?
I assume Docker + something is most popular, but what something? Does terraform work sanely for cheap virtual hosting? Ansible? I don't want to manually install any more stuff than the minimum I can get away with!
For my own projects I do Terraform/Pulumi + Ansible. I use Hetzner & DigitalOcean and this setup works great with both.
I don't use Docker for my projects, as I deploy on RHEL-like systems which I'm intimately familiar with configuring (and have snippets I mix and match).
You use both Terraform and Ansible in the same project? I always thought of them as competitors filling more or less the same role - do you find it useful to use them together? Is it that the Hetzner and DigitalOcean TF providers do a good job but provide limited functionality, and Ansible fills in the gaps for you?
I don't think they are competitors, and if someone uses Ansible for infra I think they are using the wrong tool (mind you, the last time I checked Ansible's infra tooling, a few years back, it wasn't adequate in my opinion).
I use Terraform to spin up the infra (VMs, storage, firewalls, load balancers, DNS, etc.) that the cloud services offer. Then when the VM is up I either run Ansible via the local-exec Terraform provisioner, or after the fact via separate invocations.
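A sketch of the "separate invocations" variant (file and inventory names are placeholders):

```
# 1. create or update the infrastructure
terraform apply

# 2. configure the machines Terraform just brought up
ansible-playbook -i inventory/production site.yml
```

With local-exec it's the same ansible-playbook command, just triggered by Terraform once the VM resource exists.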
I use Ansible to install, configure and deploy software on Linux VMs exclusively. For client projects, or those that need fast scalability on-demand, I will also use Packer+Ansible to build preconfigured VM images which I can then spin up via Terraform separately.
The Hetzner and DigitalOcean providers are first-party (partnered with HashiCorp), so you have assurance that what's in the docs works. This is true for most mid/large cloud providers.
Not sure if that's intended as irony, but of course, if somebody is taking multiple years off work, you would be less likely to hear about it, because by definition they're not going to join the company you work for.
I don't think long-term unemployment among people with a disability or other long-term condition is "fantastically rare", sadly. This is not the frequency by length of unemployment, but:
Nice. I expected clicking on the different "filter" buttons would update the search results right away: I didn't expect to have to repeat the search (though I can see why you'd do that).
I went here looking for more info about payperrun https://payperrun.com/%3E/welcome and clicked on the "Spotlight" section and saw 4 popups blocked - I never see popups anywhere these days and have to admit that sends me away pretty quickly.