AdrienPoupa's comments | Hacker News

My exact reaction when they override my cmd+e shortcut and change the default layout every two months :)

Don't forget to pin your GitHub Actions to SHAs instead of tags, which may or may not be immutable!


Frustratingly, hash pinning isn’t good enough here: that makes the action immutable, but the action itself can still make mutable decisions (like pulling the “latest” version of a binary from somewhere on the internet). That’s what trivy’s official action appears to do.

(IOW You definitely should still hash-pin actions, but doing so isn’t sufficient in all circumstances.)
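To make the pinning advice above concrete, here is a minimal workflow sketch; the SHA is a placeholder, not a real commit, so substitute the full commit hash of the release you audited:

```yaml
steps:
  # Mutable: the v4 tag can be moved to point at different code at any time.
  - uses: actions/checkout@v4

  # Immutable: a full 40-character commit SHA cannot be reassigned.
  # The trailing comment records which release the SHA corresponded to.
  - uses: actions/checkout@0123456789abcdef0123456789abcdef01234567 # vX.Y.Z (placeholder)
```

Tools like Dependabot can still bump SHA-pinned actions, so pinning does not mean freezing forever.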


That's true. This specific attack was mitigated by hash pinning, but some actions like https://github.com/1Password/load-secrets-action default to using the latest version of an underlying dependency.


This attack was not mitigated by hash pinning. The setup-trivy action installs the latest version of trivy unless you specify a version.
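As I understand the setup-trivy README, the mitigation is to pass an explicit version instead of relying on the default of "latest"; the input name and values below are taken from the action's docs as I recall them, so verify against the README (SHA is a placeholder):

```yaml
- uses: aquasecurity/setup-trivy@0123456789abcdef0123456789abcdef01234567 # placeholder SHA
  with:
    version: v0.58.0   # explicit trivy version; 'latest' is the mutable default
```

This way both the action code (via the SHA) and the binary it installs (via the version input) are fixed.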


Oh, I was referring to `aquasecurity/trivy-action` that was changed with a malicious entrypoint for affected tags. Pinned commits were not affected.


I'm pretty sure the trivy action does not do that.


FWICT, it pulls the latest version of trivy by default. If that latest tag is a mutable pointer (and it typically is), then it exhibits the problem.


Then why do they hard code the trivy version and create PRs to bump it?

https://github.com/aquasecurity/trivy-action/blob/57a97c7e78...

https://github.com/aquasecurity/trivy-action/pull/519

Edit: ah, I see you are referring to the setup-trivy action rather than the trivy-action. Yeah, that looks like a bad default, although to be fair it is a setting that they document quite prominently, and direct usage of the setup-trivy action is a bit atypical as-is.


I gotta admit you had me thinking this was serious until the `Remove lockfiles` section ;)


Not "you can always rewrite it yourself in Rust over a weekend"?


"If it has been mass maintained by some random person in Nebraska since 2003, that is battle-tested infrastructure." comes before that.


I stopped there and had to read the answers to my comment to find out and revisit it. In hindsight, this is absolutely hilarious. Might be one of my new favorite pieces of software satire (because of how realistic, albeit absurd, it is).


Every 2 years? More like every 2 days for GitHub Actions or Git operations these days :(


Grass is always greener.


I just read one of the linked articles about saturation divers, it was absolutely fascinating https://www.atlasobscura.com/articles/what-is-a-saturation-d...


I'd argue the easiest way to achieve this is to refrain from opening any ports, and using Tailscale to get remote access.


I doubt it would be that easy at the level of accessibility the GP suggests. It would help to have integrated firewall management that only exposes ports 443/80 for the reverse proxy and handles communication with Docker networks. It could also help set up a VPN server and disallow access to the server except via approved clients.

Someone suggested Cosmos in the comments. I think that is the closest to what I am describing. However, I have been into self-hosting for a couple of years now and have development experience, so I am biased; it would probably be different for the average person without deep knowledge.


But then, your firewall or Cosmos is exposed to the internet, waiting for a 0-day to be released, and chances are they will not be updated as soon as a patch comes out.

A VPN server is already what Tailscale provides at this point. I'm not a shill, by the way, just a regular user impressed by the ease of installation and use of their product.


Tailscale is awesome, but requiring anyone you want to share data or apps with to install Tailscale leaves a lot of simple interactions off the table.


Promising, but is the layout responsive? https://ibb.co/jRLRmD4


Thanks, yes it is: there is a button at the bottom of the sidebar to minimize it. It looks like the black bar at the bottom is covering it, but I will look into this to avoid further confusion.


I love my work-provided M1 MBP Max and would possibly consider getting a personal Air at this price range, but the 8GB RAM is still a no-go for me, even for $699. My SO has a 2015 MBP that's still solid, and I credit that to its SSD and 16GB RAM. I can't see 8GB of RAM being usable in 2034.


For normal "office use" (web browser, an app or two) 8GB does work well enough. More RAM is always nice, of course, but the $500 you save compared to the cheapest M2 16GB Air now available is quite a bit toward the later purchase of a new laptop in 3-4 years.

Of course, sometimes refurbished M1s will appear.


RTINGS comes to mind


They do amazing first party research, but that's not quite the same thing as an aggregator like Rotten Tomatoes.


PHP dev here, we have extensions for development that make no sense in production, xdebug for example. You need it for breakpoints and debugging in general but it should not be installed in production. So we extend our production image and install it on top of it. Similarly, we include Composer (package manager) in the dev image only as we only need the installed dependencies in production but not the package manager. Our dev image is a flavor of the production one, really.


Would multi-stage Docker builds not help here? Composer executes in one step and the result artefacts are copied into a "clean" PHP image without Composer installed.
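For what it's worth, a minimal sketch of that multi-stage approach; the image tags, paths, and flags here are illustrative assumptions, not the commenter's actual setup:

```dockerfile
# Stage 1: install dependencies with Composer (the tool lives only here)
FROM composer:2 AS deps
WORKDIR /app
COPY composer.json composer.lock ./
RUN composer install --no-dev --optimize-autoloader

# Stage 2: clean runtime image; Composer itself is never installed here
FROM php:8.3-fpm AS prod
WORKDIR /var/www/html
COPY --from=deps /app/vendor ./vendor
COPY src/ ./
```

Only the installed vendor/ tree is copied forward, so the final image carries the dependencies but not the package manager.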


Based on the description they are doing a multi-stage build, but using the prod container as a base and then building the dev container atop that. But yes, you could easily go the other way: dev builds an artifact and adds it to a secure, locked-down container. This is less typical with dynamic languages, which don't usually produce a single binary, but it still comes up. The downsides are that your prod container is now significantly different, and for dynamic languages the fast feedback loop now includes a slow-ish build step.


This is what we are doing for the prod container, which does not have Composer installed, yes.

But in development it's much easier to have it in the image. Additionally, we do not bundle the code in the dev image but bind mount it in Docker Compose, which is much faster than rebuilding the image to test changes; PHP not being compiled allows us to do that, shortening the feedback loop.
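A sketch of what that bind-mount setup might look like in a Compose override file; service names, build targets, and paths are assumptions for illustration:

```yaml
# Hypothetical docker-compose.override.yml for development:
# the bind mount shadows the code baked into the image, so edits
# on the host are visible immediately without a rebuild.
services:
  app:
    build:
      context: .
      target: dev            # dev stage extends the prod image
    volumes:
      - ./src:/var/www/html  # host code mounted over the image's copy
```

Because PHP is interpreted, the mounted files are picked up on the next request with no build step.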


In my experience, Xdebug absolutely made sense in production, just not enabled by default for all requests. A lot of its functionality can be enabled via a cookie for a single session, and it's made debugging production much easier, as well as identifying bottlenecks in code or production infrastructure.
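That trigger-only behavior maps to Xdebug 3's configuration; the directives below are real Xdebug settings, but the specific values are just one reasonable choice:

```ini
; Xdebug stays loaded but only activates when a trigger
; (cookie, GET/POST parameter, or environment variable) is present.
xdebug.mode = debug
xdebug.start_with_request = trigger
; Optionally require a shared secret as the trigger value,
; so random visitors cannot switch debugging on:
xdebug.trigger_value = some-shared-secret
```

With no trigger on the request, the per-request overhead is small compared to running Xdebug unconditionally, though it is not zero.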


That's certainly possible but we have other tools for that such as NewRelic that served us well.

