
I moved all my pre-commit hooks to pre-push hooks. I don’t need a spellchecker disrupting my headspace when I’m deep into a problem.

My developer environments are set up to reproduce CI tests locally, but if I need to resort to “CI-driven development” I can bypass pre-push hooks with --no-verify.
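
For the curious, the hook itself is tiny; a minimal sketch, assuming a Rust project (the specific checks are whatever your CI runs):

  #!/bin/sh
  # .git/hooks/pre-push: run the same checks as CI, but only on push
  set -e
  cargo fmt --check
  cargo clippy -- -D warnings
  cargo test

And `git push --no-verify` skips it when you really need to.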


CI-driven development results in so many shitty commits, though, and it's so slow. I find it very miserable.

Pre-commit hooks should be much, much faster than most CI jobs; they should collectively run in less than a second if possible.


Wittgenstein’s ruler in action.


It highlights a classic management failure that I see again and again and again: Executing a project without identifying the prerequisite domain expertise and ensuring you have the right people.


Well, understanding the problem and finding competent people is hard; riding on tool marketing and hiring bootcampers to do as you say is easy.


The information can be compressed into a table showing what knowledge (columns) each party (rows) gains at each stage (cells) of the connection.

            Can          Can
            Transmit     Receive

  Client    SYN-ACK      SYN-ACK
  Server    ACK          SYN


Is your bottom row backwards? Surely the server learns that the client can transmit when it receives a SYN and that it must have received the SYN-ACK when the ACK comes back.


In priority order:

1. Stop using API keys. Configure SSO integration for developers and OIDC for automation. For example, this is very easy to set up with AWS.

2. If the above is not possible, store credentials encrypted at rest and decrypt them only at runtime. For example, use SOPS to store encrypted credentials in the repo while AWS KMS holds the decryption key. The SOPS README is very helpful. Sketches of both approaches follow below.
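
As a sketch of the first option, the AWS CLI v2 supports SSO profiles natively; every value below is a placeholder:

  # ~/.aws/config (all values hypothetical)
  [profile dev]
  sso_start_url  = https://example.awsapps.com/start
  sso_region     = us-east-1
  sso_account_id = 111122223333
  sso_role_name  = DeveloperAccess
  region         = us-east-1

`aws sso login --profile dev` then hands out short-lived credentials; no long-lived key ever touches disk.

And for the second option, a minimal SOPS flow (the KMS ARN, file names, and `my-app` are made up):

  # encrypt once and commit the ciphertext; the KMS key ARN is a placeholder
  sops --encrypt --kms arn:aws:kms:us-east-1:111122223333:key/EXAMPLE \
      secrets.yaml > secrets.enc.yaml

  # decrypt only at runtime, piping plaintext to the process rather than disk
  sops --decrypt secrets.enc.yaml | my-app --config /dev/stdin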


Let's say you're not on a major public cloud. Let's say you're on Hetzner. How do you set up something like OIDC for workloads (workload identity)?


I was curious about this and went sniffing around, and it seems that their instance metadata[1] doesn't include anything that demonstrably associates the instance with Hetzner or your specific account, making chain of custody ... tricky.

The best workaround I could come up with (not having a Hetzner account to actually kick the tires on) is that you could inject a private key that you control into the instances via cloud-init (or volume attachment) and then sign any subsequent JWTs with it. For sure it would not meet all threat models, but it wouldn't be nothing either. I was hoping there was some chain of custody through Vault[2], but until Hetzner implements ANY IAM primitives, I'm guessing it's going to be a non-starter, since the instances themselves do not have any identity.
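
A minimal sketch of that workaround, assuming the key was injected at /etc/instance-key.pem via cloud-init; the paths and claims are all hypothetical:

  #!/bin/sh
  # base64url-encode stdin (JWTs use the URL-safe alphabet, unpadded)
  b64url() { openssl base64 -A | tr '+/' '-_' | tr -d '='; }

  header=$(printf '{"alg":"RS256","typ":"JWT"}' | b64url)
  claims=$(printf '{"iss":"my-fleet","sub":"instance-1234","exp":%d}' \
      "$(( $(date +%s) + 300 ))" | b64url)

  # RS256 is RSA PKCS#1 v1.5 over SHA-256, which is what `openssl dgst -sign`
  # produces for an RSA private key
  sig=$(printf '%s.%s' "$header" "$claims" \
      | openssl dgst -sha256 -sign /etc/instance-key.pem -binary | b64url)

  echo "$header.$claims.$sig"

The verifier holds the matching public key, so the trust anchor is whatever injected the key in the first place, which is exactly the weakness conceded above.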

1: https://docs.hetzner.cloud/#server-metadata

2: https://github.com/hashicorp/vault/blob/v1.14.7/website/cont...


We had an integration test that involved many toolchains and languages to prepare: Rust compilation, WASM compilation, Docker builds, shell scripts that produced a VM image, Python was in there somehow too, and then a test harness, written in Rust, that took all of that to actually run the test.

With Bazel set up (and it was a beast to set up), developers could run all of that with remotely cached incremental builds and efficient parallel execution, all reproducible (and identical to CI) from a local developer environment with just one command.
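
Concretely, something like this, where the target label and cache endpoint are invented:

  # one command builds the Rust, WASM, and image steps, then runs the harness;
  # the shared remote cache makes unchanged steps nearly free
  bazel test //integration:e2e --remote_cache=grpcs://cache.example.internal:443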


I recently left Google and I do miss blaze. I'm working on some stuff that's rust and wasm and typescript and there are containers and I'm very tempted to use bazel for basically correct caching and stuff.

But even as a xoogler I think that the common wisdom of "don't use bazel until you can afford a team" or "use bazel if you are the only developer" are probably right.

My somewhat janky series of dockerfiles and bash scripts is at least debuggable by my business partner.

I'm just not ready to commit our future frontend developers to having to use bazel yet.

I'm sort of sticking to a lowest common denominator technology stack for most things except where we are spending our innovation tokens in wasm land.

But someday, probably. In the meantime I may set up sccache.
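
For what it's worth, wiring sccache into cargo is small:

  cargo install sccache          # or grab a prebuilt binary
  export RUSTC_WRAPPER=sccache   # cargo routes rustc invocations through the cache
  cargo build
  sccache --show-stats           # confirm the cache is actually being hit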


> Rust compilation, WASM compilation, Docker builds shell scripts that produced a VM image, Python was in there somehow too

Was it through Bazel rules? The worry is that if some of those rules have a bug or are missing a necessary feature, it will be a PITA to deal with.


This is one reason why I’m quite interested in buck2; there are no built-in rules at all, so if you need any changes, you can just make them.

Unfortunately the open source set of rules is a bit rough right now, with a decent happy path but a cliff if you’re off of it, but given some time…


The rules are open source. We did run into some road bumps in Rules Rust that required patches or clever workarounds. But I believe the community has made great strides in improving Bazel’s Rust support over the last five years.

To be frank, I would avoid introducing Bazel unless you are experiencing major pains with your current system and have at least a small centralized team willing to build expertise and provide support.


Bazel is definitely like Kubernetes: you don't need it, until you need it.

If you've got a big polyglot application, it's perfect for you. If your app is largely in one language, then you don't need that complexity in your life.


Long-lived credentials are a security red flag.

We set up our AWS organization’s service control policies (SCPs) to prohibit long-lived tokens. Access instead goes through SSO or OIDC.

It’s difficult to track usage behind access tokens, prevent leaks, and effectively revoke them.
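
For illustration (not our exact policy), an SCP can simply deny the IAM actions that mint long-lived credentials, leaving SSO/OIDC sessions as the only path in:

  {
    "Version": "2012-10-17",
    "Statement": [{
      "Sid": "DenyLongLivedCredentials",
      "Effect": "Deny",
      "Action": [
        "iam:CreateAccessKey",
        "iam:CreateServiceSpecificCredential"
      ],
      "Resource": "*"
    }]
  }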


Yup. TTL (leases) must become the norm for All The Things.


Having led a successful Bazel migration, I'd still recommend that many projects stick to the native or standard supported toolchain until there's a good reason to migrate to a build system (and I don't consider GitHub Actions to be a build system).

