I moved all my pre-commit hooks to pre-push hooks. I don’t need a spellchecker disrupting my headspace when I’m deep into a problem.
My developer environments are set up to reproduce CI tests locally, but if I need to resort to “CI-driven development” I can bypass the pre-push hooks with `--no-verify`.
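For example, a pre-push hook is just a script in `.git/hooks/`. This is a sketch; the Rust commands are placeholders for whatever checks your project actually runs:

```shell
#!/bin/sh
# .git/hooks/pre-push: run the slow checks at push time, not on every commit.
set -e
cargo fmt --check            # placeholder checks; substitute your own
cargo clippy -- -D warnings
cargo test
```

Then `git push --no-verify` skips the hook when falling back to CI-driven development.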
It highlights a classic management failure that I see again and again and again: Executing a project without identifying the prerequisite domain expertise and ensuring you have the right people.
Is your bottom row backwards? Surely the server learns that the client can transmit when it receives a SYN and that it must have received the SYN-ACK when the ACK comes back.
1. Stop using API keys. Configure SSO integration for developers and OIDC for automation. For example, this is very easy to set up with AWS.
2. If the above is not possible, then store credentials encrypted at rest and decrypt them only at runtime. For example, use SOPS to store encrypted credentials in the repo, with AWS KMS holding the decryption key. The SOPS README is very helpful.
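A minimal sketch of that setup (the paths and the KMS ARN are invented placeholders, not from the original comment). SOPS picks the key from a `.sops.yaml` at the repo root:

```yaml
# .sops.yaml: encrypt anything under secrets/ with an AWS KMS key.
creation_rules:
  - path_regex: secrets/.*
    kms: arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID
```

```shell
sops --encrypt --in-place secrets/prod.env   # encrypt before committing
sops exec-env secrets/prod.env 'my-app'      # decrypt only at runtime, into the process env
```

`sops exec-env` keeps the plaintext out of files on disk entirely; the decrypted values exist only in the child process's environment.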
I was curious about this and went sniffing around and it seems that their instance metadata[1] doesn't include anything that demonstrably associates the instance with Hetzner nor your specific account, making chain of custody ... tricky.
The best workaround I could come up with (not having a Hetzner account to actually kick the tires on) is to inject a private key that you control into the instances via cloud-init (or volume attachment) and then sign any subsequent JWTs with it. It certainly wouldn't meet all threat models, but it wouldn't be nothing either. I was hoping there was some chain of custody through Vault[2], but until Hetzner implements ANY IAM primitives, I'm guessing it's going to be a non-starter, since the instances themselves don't have any identity.
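To sketch what that signing step might look like once the key is on the instance (the key generation stands in for cloud-init injection, and the claim values are invented; a real setup would also distribute the public key out of band for verification):

```shell
# Stand-in for a key injected at boot via cloud-init or volume attachment.
openssl genrsa -out instance.pem 2048 2>/dev/null

# base64url encoding as required by the JWT spec (RFC 7515).
b64url() { openssl base64 -A | tr '+/' '-_' | tr -d '='; }

header=$(printf '{"alg":"RS256","typ":"JWT"}' | b64url)
payload=$(printf '{"iss":"my-hetzner-instance","iat":%d}' "$(date +%s)" | b64url)
sig=$(printf '%s.%s' "$header" "$payload" | openssl dgst -sha256 -sign instance.pem | b64url)
jwt="$header.$payload.$sig"
echo "$jwt"
```

The weak link remains exactly the chain-of-custody problem above: nothing ties the key to Hetzner or to your account, only to whoever ran the injection.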
We had an integration test that involved many toolchains and languages to prepare: Rust compilation, WASM compilation, Docker builds, shell scripts that produced a VM image, Python was in there somehow too, and then a test harness, written in Rust, that took all of that to actually run the test.
With Bazel set up (and it was a beast to set up), developers could run all of that, with remotely cached incremental builds, efficient parallel execution, all reproducible (and identical to CI), from a local developer environment with just one command.
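For illustration only (the target pattern and cache endpoint are placeholders, not details from the original setup), the "one command" experience looks roughly like:

```shell
# Build and test everything, sharing the same remote cache CI uses.
# The cache URL is a made-up placeholder.
bazel test //... --remote_cache=grpcs://cache.example.com
```

Because Bazel keys cache entries on the hashes of all inputs, local and CI runs that agree on toolchains hit the same cache entries.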
I recently left Google and I do miss blaze. I'm working on some stuff that's rust and wasm and typescript and there are containers and I'm very tempted to use bazel for basically correct caching and stuff.
But even as a xoogler I think the common bits of wisdom, "don't use bazel until you can afford a team" or "use bazel if you are the only developer", are probably right.
My somewhat janky series of dockerfiles and bash scripts is at least debuggable to my business partner.
I'm just not ready to commit our future frontend developers to having to use bazel yet.
I'm sort of sticking to a lowest common denominator technology stack for most things except where we are spending our innovation tokens in wasm land.
But someday, probably. In the meantime I may set up sccache.
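If I do, the setup is small; this sketch uses sccache's documented `RUSTC_WRAPPER` integration:

```shell
cargo install sccache         # or install via your package manager
export RUSTC_WRAPPER=sccache  # route rustc invocations through sccache
cargo build                   # repeat builds now hit the compilation cache
sccache --show-stats          # inspect cache hit rates
```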
The rules are open source. We did run into some speed bumps in rules_rust that required patches or clever workarounds. But I believe the community has made great strides to improve Bazel’s Rust support in the last five years.
To be frank, I would avoid introduction of Bazel unless you are experiencing major pains with your current system and have at least a small centralized team willing to build expertise and provide support.
Having led a successful Bazel migration, I'd still recommend that many projects stick with the native or standard supported toolchain until there's a good reason to migrate to a build system (and I don't consider GitHub Actions to be a build system).