> It's just that function calls can only appear in very limited places in the program (only inside `steps`), and to define a function, you have to create a Git repository.
FYI, there is also `on: workflow_call`, which you can use to define reusable workflows. You don't have to create a new repository for these.
https://docs.github.com/en/actions/writing-workflows/workflo...
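A minimal sketch of what that looks like (all the file names, job names, and inputs here are made up):

# .github/workflows/reusable.yml -- the "function definition"
on:
  workflow_call:
    inputs:
      environment:
        type: string
        required: true
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - run: echo "Building for ${{ inputs.environment }}"

# .github/workflows/main.yml -- the "call site"
on: push
jobs:
  deploy:
    uses: ./.github/workflows/reusable.yml
    with:
      environment: production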
All the Nix commands that take an 'installable' can take GitHub URLs. For instance:
$ nix run github:NixOS/nixpkgs#hello
Hello world!
That command will download nixpkgs from GitHub, evaluate the `hello` flake attribute, build it (or download it from a cache), and run it.
> But to find out what the flake exposes at all, reading the flake (or its documentation) is pretty much necessary.
If the flake exposes default packages or apps, then you do not need to provide a flake attribute:
$ nix run github:NixOS/nixpkgs
error: flake 'github:NixOS/nixpkgs' does not provide attribute 'apps.aarch64-darwin.default', 'defaultApp.aarch64-darwin', 'packages.aarch64-darwin.default' or 'defaultPackage.aarch64-darwin'
Nixpkgs itself does not define a default app, as the error above shows, but many flakes do. So you can run e.g. Alejandra [1], a Nix formatter, like so:
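$ nix run github:kamadorueda/alejandra -- --help

(Assuming Alejandra's flake still lives at github:kamadorueda/alejandra; everything after `--` is passed as arguments to the program.)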
EDIT: For what it's worth, I think this feature can be useful sometimes, but it does also suffer from the same typosquatting problems as we see in other ecosystems.
The first uni assignment I did for CS101 was a Mandelbrot set renderer. I got it to work, but that's all the merit it had; I didn't have a clue about what I was actually doing.
When I read this post a couple of months later, it answered questions I didn't even know I had. Ever since, I try to keep digging whenever I get that feeling of "there must be more to this..."
Or processes running with the CAP_NET_BIND_SERVICE capability! [1]
Capabilities are a Linux kernel feature. Granting CAP_NET_BIND_SERVICE to nginx means you do not need to start it with full root privileges: this capability gives it the ability to bind to ports below 1024.
With systemd, you can grant the capability like this:
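A minimal sketch of a unit file (the paths and the `nginx` user are placeholders):

[Unit]
Description=nginx

[Service]
# Run as an unprivileged user, but allow binding to ports below 1024.
User=nginx
AmbientCapabilities=CAP_NET_BIND_SERVICE
ExecStart=/usr/sbin/nginx -g 'daemon off;'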
Working on that API gateway with you was a lot of fun :)
To be fair, we did run into some issues with the GHC garbage collector performance initially. That took some time to figure out and wasn't the easiest thing ever. Like all tools, there are rough edges sometimes.
I still maintain that the Haskell we wrote at the time was pretty cheap in terms of operational load / bugs to fix (especially compared to the systems that they replaced). When I was back at the office for a reunion, I heard that things were still pretty nice in this respect, but maybe someone still at Channable can chime in with more recent stories! (Or complain to me about the code I wrote back then)
Here's a way you can do this with git. This trick relies on `git merge --allow-unrelated-histories`.
Assuming you have repos `foo` and `bar` and want to merge them into a new repo `mono`:
$ ls
foo
bar
# Prepare for import: we want to move all files into a new subdir `foo` so
# we don't get conflicts later. This uses Zsh's extended globs. See
# https://stackoverflow.com/questions/670460/move-all-files-except-one for
# bash syntax.
$ cd foo
$ setopt extended_glob
$ mkdir foo
$ mv ^foo foo
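# Note: Zsh globs do not match hidden files by default, so `.git` stays in
# place. That also means dotfiles like `.gitignore` are not moved and could
# still conflict later; move those by hand if needed.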
$ git add .
$ git commit -m "Prepare foo for import"
# Follow those "move to subdir" steps for `bar` as well.
# Now make the final monorepo
$ cd ..
$ mkdir mono
$ cd mono
$ git init
$ touch README.md
$ git add README.md
$ git commit -m "Initial commit in mono"
$ git remote add foo ../foo
$ git fetch foo
$ git remote add bar ../bar
$ git fetch bar
# Replace `main` with `master` or whatever branch you want to import.
$ git merge --allow-unrelated-histories -m "Import foo" foo/main
$ git merge --allow-unrelated-histories -m "Import bar" bar/main
# Inspect the final history:
$ git log --oneline --graph
* 8aa67e5 (HEAD -> main) Import bar
|\
| * eec0abd (bar/main) Prepare bar for import
| * 9741d6d More stuff in bar
| * 634ba3d Initial commit bar
* 43be6e9 Import foo
|\
| * d4805a0 (foo/main) Prepare foo for import
| * 4d2ca10 More stuff in foo
| * 72072a1 Initial commit foo
* bfcb339 Initial commit in mono
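# Optionally, remove the temporary remotes now that the import is done:
$ git remote remove foo
$ git remote remove bar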
Do you think this will speed things up? I tried the above suggestion and it's already been running for four hours to merge two repos into one (3 years' worth of git history).
From a corporate perspective: 2FA would still force a unique secret per user. That can be useful when your users tend to reuse passwords across sites or choose poor ones.
I have seen folks use password managers to store their poor non-autogenerated passwords.
For users who do use the PW manager properly, having it store the TOTP secrets is indeed "putting all of your eggs in one basket".
I've been using this on and off over the last two months (I tend to forget I have it installed and fall back to old habits), but it's really, really cool stuff!