
Why would they do this? It's pure negligence. I don't even sign anything important and still worry about my keys.


I have been asked more than once why I insisted on not using a Continuous Integration environment for publishing some software releases that are installed by third parties.

My team was automating the infrastructure to build internal software and naturally they wanted to be able to simplify things.

The idea that was proposed to me was the following: once I push a new version tag to GitHub, the deployment CI server is going to build and release it as an unstable version.

One important detail here: I use the same key to sign packages regardless of whether they are released as unstable or stable. That means that if someone somehow managed to push a malicious tag upstream to GitHub, they could hypothetically gain access to consumers' machines (basically, developers') when the consumers update after being notified that a new version is available. There is no way I'd allow this to happen, but I would not be surprised if most people just took it as an acceptable risk.


Depending on your threat model, I think that signing packages directly from your CI is acceptable, assuming that your CI runs in a reasonably isolated environment (e.g. on your company's LAN) and the people who are able to trigger a release are properly vetted.

If I understand the parent comment correctly they were somehow shipping the release signing key on their production environment which is a whole other level of bad.


> That would mean that if someone, somehow, managed to push a tag that was pushed upstream to GitHub

You have to define what the signature means.

IMHO it is fine for it to mean "this software was built on our build server from a well-defined state of the source code, which is only changeable by our employees and contractors, and for which we have the full change log". So I deploy the code signing key to the build servers, which is the only place where it is used.
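As a rough illustration of that meaning, the signed statement can bind the artifact to the exact source revision it was built from. This is a minimal sketch only: HMAC stands in for a real code-signing scheme, and the key value, builder id, and field names are all invented for the example.

```python
import hashlib
import hmac
import json

# Illustrative stand-in for the code signing key, which in this model
# exists only on the build server.
BUILD_SERVER_KEY = b"present only on the build server"

def sign_release(artifact_digest: str, source_commit: str) -> str:
    # The signed statement ties the artifact to the well-defined source
    # state it was built from, which is the proposed meaning of the signature.
    statement = json.dumps(
        {
            "artifact": artifact_digest,
            "source": source_commit,
            "builder": "build-server-01",  # hypothetical builder identity
        },
        sort_keys=True,
    )
    return hmac.new(BUILD_SERVER_KEY, statement.encode(), hashlib.sha256).hexdigest()
```

A verifier holding the same trust anchor can then check that a given artifact really corresponds to the claimed source revision.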

I'm interested in what alternative meaning you would give to a signature. I have considered the possibility of tying it to the QA processes, but then a build can only be signed after checking it manually, which is problematic when many signatures are needed at multiple packaging layers (exe/dll, msi, setup.exe).


The problem is that when a malicious package is produced, whether because a flaw was introduced into the code, a dev machine was compromised, or _the CI machine itself was compromised_, the malicious package will be signed as if it were legit.

One middle ground between automated and manual signing is, as usual, key rotation: have the signing keys expire after a short time (say two weeks) and manually push new ones every week, so that the window of attack is as small as possible.
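The rotation policy above can be sketched as a validity-window check. This is a hedged sketch only: the two-week lifetime comes from the comment, and everything else (names, the check itself) is illustrative rather than any particular signing tool's behavior.

```python
from datetime import datetime, timedelta

# Two-week key lifetime, per the rotation policy described above.
KEY_LIFETIME = timedelta(days=14)

def key_is_valid(issued_at: datetime, now: datetime) -> bool:
    """A build may only be signed while the key is inside its validity window.

    A key stolen from the build server stops being useful once the window
    closes, which is the point of rotating frequently.
    """
    return issued_at <= now < issued_at + KEY_LIFETIME
```

With weekly re-issuance, there are always at most two overlapping valid keys, and a leaked key is dead within two weeks at worst.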


What does a key rotation solve? Either your build server is compromised or it's not.


You add another stage.

1. A release candidate X is tested in a CI

2. If tests pass, the CI sends a notification to the build server. The notification is "prep for release package <hashid>"

3. Build server pulls code from the repo, matches it against "ready, CI passed" notification and builds the package/packages.

A compromise of the entire CI/dev chain would be contained, because the builders act as a separate pipeline entry point running in parallel with the CI and using a pull model. To compromise keys located on a build server, one would need either to get access to it via whatever method of remote access the server has (which should be close to none), or to figure out how to compromise the code running on the builders using input from a repo that already passed CI.
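The staged flow above can be sketched as follows. This is a toy model, not a real pipeline: the CI only sends a notification, and the build server pulls the code itself and refuses to build anything it was not told passed CI. All names are illustrative.

```python
# Commit hashes the CI has reported as "tests passed" (step 2's notification).
ci_approved: set[str] = set()

def notify_ci_passed(commit_hash: str) -> None:
    # Step 2: the CI sends "prep for release package <hashid>".
    ci_approved.add(commit_hash)

def build_if_approved(pulled_hash: str) -> bool:
    # Step 3: the build server pulls code from the repo itself and matches
    # it against a "ready, CI passed" notification before building.
    if pulled_hash not in ci_approved:
        return False  # unknown revision: refuse to build or sign
    # ... build and sign the package(s) here ...
    return True
```

The key property is that a compromised CI can, at worst, approve revisions; it never touches the signing key, and the build server never builds anything that did not arrive through the repo it pulls from.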


Unless you audit the entire codebase prior to a manual build, from a machine you know hasn't been compromised, with a key you know hasn't leaked, how is a manual build different from CI/CD security-wise?


Auditing the entire codebase is hard, and it is not among the reasons I'd rather not use a CI/CD in my specific case.

This is what I gain by not using the CI/CD the rest of my team uses:

* Isolation: I build applications to be delivered to the personal computers of software engineers (mostly), while they build applications to run on our own internal servers.

* My SSH client requires that I have my SSH key with me. While I believe I could achieve something similar with a web-based CI/CD, client-side certificates aren't as "production ready" out of the box as 24-year-old SSH is.

* If someone manages to push malicious code to my code base, I am going to notice during a manual check: yes, I manually check the diff commits to see if anything weird came up (mostly thinking about bugs). In practice, I basically check whether the commit hash is the same as the one I just pushed (usually containing the release notes). If it is, I build. Otherwise, I check what is going on (most likely, I forgot to check out the tag).
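The manual pre-build check described in the last point boils down to a single comparison. A hedged sketch, with placeholder hashes; in practice the two values would come from something like `git rev-parse <tag>` locally and `git ls-remote origin` for the remote side:

```python
def ok_to_build(pushed_commit: str, remote_tag_commit: str) -> bool:
    """Build only while the tag on the remote still points at the commit
    I just pushed; any mismatch means stop and investigate first."""
    return pushed_commit == remote_tag_commit
```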

You can say that this doesn't rule out my machine being compromised, and I must agree... However, besides it being very unlikely that I am the target of such a complex attack, I try to do my best to maintain a secure development environment.

If I were a high-profile target, I would just use a spare, known-safe machine for deployments (I believe Linus Torvalds uses something like this to vet the security of the Linux kernel, but I couldn't find the reference).



