We had this "deploy" Jenkins box set up with limited access for devs, because it had assume-role privs to an IAM role to manage AWS infra with Terraform. The devs run their tests on a different Jenkins box, and when they pass, they upload artifacts to a repo and trigger this "deploy" Jenkins box to promote the new build to prod. The devs can do their own CI, but CD is on a box they don't have access to, hence less chance for accidental credential leakage. Me being Mr. Devops-play-nice-with-the-devs, I let them issue PRs against the CD box's repo. Commits to PRs get run on the deploy Jenkins in a stage environment to validate the changes.
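To make the split concrete, the deploy side boils down to "assume the role, hand Terraform short-lived credentials." Here's a minimal Python/boto3 sketch of that step; the role ARN, session name, and terraform invocation are made up for illustration, not our actual setup:

    import os
    import subprocess

    import boto3

    # Only the deploy box's instance profile is trusted to assume this role;
    # the dev CI box has no path to these credentials.
    sts = boto3.client("sts")
    creds = sts.assume_role(
        RoleArn="arn:aws:iam::123456789012:role/terraform-deploy",  # hypothetical ARN
        RoleSessionName="cd-pipeline",
    )["Credentials"]

    # Hand the temporary credentials to Terraform via the environment.
    env = {
        **os.environ,
        "AWS_ACCESS_KEY_ID": creds["AccessKeyId"],
        "AWS_SECRET_ACCESS_KEY": creds["SecretAccessKey"],
        "AWS_SESSION_TOKEN": creds["SessionToken"],
    }
    subprocess.run(["terraform", "apply", "-auto-approve"], env=env, check=True)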
This one dev wanted to change something in AWS. But for whatever reason, they didn't ask me (maybe because they knew I'd say no, or at least ask questions about it?). So instead the dev opens a PR against the CD jobs, proposing some syntax change. Then the dev modifies a script that gets included as part of the CD jobs, making it download some binaries and make AWS API calls (I found out via CloudTrail). Once they've made the calls, they rewrite Git history to remove the AWS API commits and force-push to the PR branch, erasing any evidence that the code ever existed. Then they close the PR with "need to refactor".
In the morning I'm looking through my e-mail, and I see all these GitHub commits with code that looks like it's doing something in AWS... so I go look at the PR, and the code in my e-mails isn't anywhere in any of the commits. He actually tried to cover it up. And I would never have known about any of this if I hadn't enabled 'watching' on all commits to the repo.
Who'd have thought e-mail would be the best append-only security log?
Our deploy scripts make a call to a separate box that actually does the deployment, ostensibly to avoid this sort of problem and have some more control over simultaneous deployments. But it is very hard to explain to anyone how to diagnose a deployment failure on such a system, and once in a while the log piping gets gummed up and you don't get any status reports until the job completes.
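The wrapper is essentially "trigger the job on the remote box over SSH and relay its output." A rough Python sketch of that loop (host name and script path are invented); the relay is the part that occasionally goes quiet until the remote job finishes:

    import subprocess
    import sys

    # Kick off the deployment on the remote runner and stream its output back.
    proc = subprocess.Popen(
        ["ssh", "deploy-runner", "/opt/deploy/run.sh", "myapp", "prod"],  # hypothetical host/script
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
        bufsize=1,   # line-buffered on our side
        text=True,
    )

    for line in proc.stdout:
        # Relay each line as it arrives so the Jenkins console shows progress;
        # if anything upstream buffers, this is where the status reports stall.
        sys.stdout.write(line)
        sys.stdout.flush()

    sys.exit(proc.wait())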
I've maybe managed to explain this process to one other current employee, so pretty much everybody bugs me or one of the operations people any time there's an issue. That could be a liability in an outage situation, but I don't have a concrete suggestion for how to avoid it.
Can you enable just enough access on the deploy box so people can view the logs there and do nothing else? As opposed to shipping logs off the box, which I assume is what's getting gummed up.
I didn't tell his boss. I did tell my boss, in an e-mail with evidence. We both had a little chat with the dev where we made it clear that if this happened under slightly different circumstances (if he was trying to access data/systems he wasn't supposed to, if it was one of the HIPAA accounts, etc) he'd not only be shitcanned, he'd be facing serious legal consequences. We were satisfied by his reaction and didn't push it further.
I was actually fired early in my career as a contractor when an over-zealous security big-wig decided to go over my boss's boss's head. I had punched a hole in the firewall to look at Reddit, and because I also had a lot of access, this meant I wasn't trustworthy and had to go. People (like me) make stupid mistakes; we should give them a second chance.