
Not necessarily. You are still running within the same kernel.

If your images use the same base image, then the libraries exist only once on disk and you get the same benefits as a non-docker setup.

This depends on the storage driver, though. It is true at least for overlayfs, the default and most common driver. [1]
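You can see the sharing yourself. A rough sketch, assuming the overlay2 driver; the image and tag names are just examples:

    # both images reference the same debian:bookworm-slim layers,
    # which are stored only once under /var/lib/docker/overlay2
    docker pull debian:bookworm-slim
    printf 'FROM debian:bookworm-slim\nRUN apt-get update && apt-get install -y curl\n' \
      | docker build -t app-a -
    printf 'FROM debian:bookworm-slim\nRUN apt-get update && apt-get install -y jq\n' \
      | docker build -t app-b -
    # the verbose view reports a "SHARED SIZE" that counts the base layers only once
    docker system df -v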

[1] https://docs.docker.com/engine/storage/drivers/overlayfs-dri...


The difference between a native package manager provided by the OS vendor and docker is that a native package manager allows you to upgrade parts of the system underneath the applications.

Let's say another Heartbleed (which primarily affected OpenSSL) happens. With native packages, you update the package, restart the few services that link against it via shared libraries, and you're patched. OS vendors are highly motivated to ship this update, and they often get pre-announcement info about security issues, so it tends to go quickly.

With docker, someone has to rebuild every container that contains a copy of the library. This will necessarily lag and arrive piecemeal: if you have 5 containers, each needs its own update, which, unless you self-build and self-update, can take a while and is substantially more work than `apt-get update && apt-get upgrade && reboot`.
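Concretely, the two flows look something like this (a rough sketch, assuming a Debian-ish host where OpenSSL is packaged as libssl3, and upstream images that have already published rebuilds):

    # native packages: one upgrade covers every dynamically linked service
    sudo apt-get update && sudo apt-get install --only-upgrade openssl libssl3
    sudo systemctl restart nginx postfix      # whatever links against it
    # containers: every image embedding the library needs its own rebuild,
    # and you can only pull once each upstream has actually published one
    docker compose pull && docker compose up -d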

Incidentally, the same applies to most languages that prefer/require static linking.

As mentioned elsewhere in the thread, it's a tradeoff, and people should be aware of the tradeoffs around update and data lifecycle before making deployment decisions.


> With docker, someone has to rebuild every container that contains a copy of the library.

I think you're grossly overblowing how much work it takes to refresh your containers.

In my case, I have personal projects with nightly builds that pull the latest version of the base image, and the services are just redeployed without anyone noticing. All it took was adding a cron trigger to the same CI/CD pipeline.
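If you don't have a CI system handy, a plain cron entry on the host gets you most of the way (a sketch; the path and schedule are made up):

    # crontab -e: nightly at 04:00, rebuild against the freshest base images,
    # redeploy whatever changed, then clean up old layers
    0 4 * * * cd /srv/myapp && docker compose build --pull --quiet && docker compose up -d && docker image prune -f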


I'd argue that the share of homelab folks who have a whole CI/CD pipeline to update code and rebuild every container they use is very small. Most probably YOLO `docker pull` it once and never think about it again.

TBH, a slower upgrade cycle may be tolerable inside a private network that doesn't face the public internet.


> I'd argue that the share of homelab folks who have a whole CI/CD pipeline to update code and rebuild every container they use is very small.

What? You think the same guys who take an almost militant approach to how they build and run their own personal projects would somehow fail to be technically inclined to automate tasks?


Yes, because there are a stunning number of people within r/homelab who simply want to run Plex and torrent clients.


> I think you're grossly overblowing how much work it takes to refresh your containers.

The last thing I want is to build my own CI/CD pipeline and tend it.


I don't know what kind of breakage the parent was talking about.

My experience is that as the car gets older, it is common for the vents to lose the ability to stay pointed where I place them. As in: you point them where you want and they flip all the way back to one side as soon as you let go.

(Hot climate here, with several months of "a/c set to max during the whole trip" per year)


I've been in many cars where they don't stay pointed, and others where the plastic of the moving mechanism broke off from where it connects, so it doesn't move the vent fins at all.


You may have reasons to require separate profiles. However, keep in mind that Firefox Multi-Account Containers [1] address many of the use cases for separate profiles in Chrome, with an IMHO better UX.

[1] https://support.mozilla.org/ca/kb/how-use-firefox-containers


There's no car identification in this protocol, meaning that impersonation/mitm attacks are trivial. Try again :)


I don't see it. Give an example of how this attack can be executed, a practical application.

I approach my car, I press the button on the fob to open it, and your attack does what exactly?


> The recurring joke was that general AI was just 20 years away, and had been for the last few decades.

You seem to think that joke is out of date now. Many others don't ;)


Random here means PRNG. They still cannot cache, though, because they do many read passes through the same data.

If you build a cache that gets hits on the first pass, then it won't work for the second and later passes.


I don't think this is worth it unless you are setting up your own CDN or similar. In the article, they exchange 1 to 4 stat calls for:

- A more complicated nginx configuration. This is no light matter. You can see in the comments that even the author had bugs in their first try. For instance, introducing an HSTS header now means you have to remember to add it in all those locations.

- Running a few regexes per request. This is probably still significantly cheaper than the stat calls, but I can't tell by how much (and the author hasn't checked either).

- Returning the default 404 page instead of the CMS's for any URL under the defined "static prefixes". This is actually the biggest change, both in user-visible behavior and in performance (particularly if a crazy crawler starts checking non-existent URLs in bulk or similar). The article doesn't even mention this.

The performance gains for regular accesses are purely speculative, because the author didn't make any effort to try to quantify them. If somebody has quantified the gains, I'd love to hear about it though.
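For what it's worth, quantifying it wouldn't take much (a sketch, assuming `wrk` is installed and both configs can be run side by side; the hostnames are placeholders):

    # same static asset under each config, 30 seconds each
    wrk -t4 -c64 -d30s https://old-config.example.com/static/app.css
    wrk -t4 -c64 -d30s https://new-config.example.com/static/app.css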


I agree. But on that final point, I have to say I hate setups where bots hitting thousands of non-existent addresses send every one of them to a dynamic backend to produce a 404. A while back I made a Rails setup that dumped the routes into an nginx map of valid first-level paths, but I haven't seen anyone else do that sort of thing.


See Varnish Cache, among others... Or use a third-party CDN that offers the feature.

Lots of ways to configure them with route-based behavior/invalidation.


I've been thinking about that exact problem and solution with the map module. On the off chance you see this, do you happen to have your solution published somewhere?


Apache's .htaccess was much worse performance-wise because it checked (and processed, if it existed) every .htaccess file in every folder along the path. That is, you opened example.com/some/thing/interesting and Apache would check (and possibly process) /docroot/.htaccess, /docroot/some/.htaccess, /docroot/some/thing/.htaccess and /docroot/some/thing/interesting/.htaccess.

Separating the API and the "front" into different domains does run into CORS issues though. I find it much nicer to reserve myapp.com/api for the API and route that accordingly. You also avoid having to juggle an "API_URL" env definition across your different envs (you can just call /api/whatever, no matter which env you are in).


Was that really so bad in terms of performance? Surely the .htaccess files didn't exist most of the time, and even if they did, they would have been cached by the kernel, so each lookup by the Apache process wouldn't hit the disk directly to check for file existence on every HTTP request it processes. Or maybe I am mistaken about that.


The recommendation was to disable it because:

a) If you didn't use it (the less bad case you are considering) then why pay for the stat syscalls at every request?

b) If you did use it, apache was reparsing/reprocessing the (at least one) .htaccess file on every request. You can see how the real impact here was significantly worse than a cached stat syscall.

Most people were using it, hence the bad rep. Also, this was at a time when it was more common to have webservers reading from NFS or other networked filesystems. Stat calls then involve the network, and you can see how even the "mild" case could wreak havoc in some setups.


redux-query is not popular by any means. [1]

Both react-query (now TanStack Query) [2] and rtk-query [3] are extensively configurable regarding their caching behavior, including the ability to turn off caching entirely. [4,5]

Your story sounds like a usage error, not a library issue.

[1] https://www.npmjs.com/package/redux-query

[2] https://tanstack.com/query/latest/

[3] https://redux-toolkit.js.org/rtk-query/overview

[4] https://tanstack.com/query/latest/docs/framework/react/guide...

[5] https://redux-toolkit.js.org/rtk-query/usage/cache-behavior


> redux-query seems a popular library for dealing with API calls in react.

I'm having a real hard time being polite right now. Do we have an education problem? Because where is this person getting the idea that redux-query is popular?


Probably he made a typo and meant react-query.


> redux-query is not popular by any means

Yeah, I don't know where the parent comment got this from. Every few weeks I seem to see these low-effort posts that basically boil down to "javascript bad", but they get a lot of upvotes. And when you read into them, you see the author often has a poor grasp of JS or its ecosystem, and has set up some unholy abstraction, assuming that's how everyone does it.

Use the right tool for the job lol.


This is the same in Spain: ISP-provided ONT/router combos are fine, but they must have a bridge mode (you may have to call support to enable it).

