quag's comments | Hacker News

How do you update the software in the containers when new versions come out or vulnerabilities are actively being exploited?

My understanding is that when using containers, updating is an ordeal, and you avoid the need by never exposing the services to the internet.


If you're the one building the image, rebuild with newer versions of constituent software and re-create. If you're pulling the image from a public repository (or use a dynamic tag), bump the version number you're pulling and re-create. Several automations exist for both, if you're into automatic updates.
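To make that concrete, a minimal sketch of both paths with Docker Compose (the image name and tags are invented for illustration):

    # pulled image: bump the tag in compose.yaml, then re-create
    #   image: nextcloud:29.0.1  ->  image: nextcloud:29.0.2
    docker compose pull && docker compose up -d

    # self-built image: rebuild against fresh base layers, then re-create
    docker compose build --pull --no-cache && docker compose up -d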

To me, that workflow is no more arduous than what one would do with apt/rpm - rebuild package & install, or just install.

How does one do it on Nix? Bump the version in a config and install? Seems similar.


Now do that for 30 services plus system config such as firewall, routing (if you do that), DNS, and so on. Nix is a one-stop shop to have everything done right, declaratively, and with an easy lock file, unlike Docker.

Doing all that with containers is a spaghetti soup of custom scripts.
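For what it's worth, a minimal sketch of that flow on a flake-based NixOS machine (the hostname "myhost" is invented; this assumes services, firewall, DNS, etc. are all declared in the one flake):

    # bump the lock file pinning nixpkgs and other inputs
    nix flake update

    # rebuild and activate the whole declared system in one step
    sudo nixos-rebuild switch --flake .#myhost

    # and if the new generation misbehaves:
    sudo nixos-rebuild switch --rollback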


> How do you update the software in the containers when new versions come out or vulnerabilities are actively being exploited?

You build a new image with updated/patched versions of the packages and then replace your vulnerable container with a new one, created from the new image.
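A minimal sketch of that cycle with plain docker (the image and container names are hypothetical):

    # rebuild; --pull refreshes the base image layers too
    docker build --pull -t myapp:patched .

    # swap the vulnerable container for one from the new image
    docker stop myapp && docker rm myapp
    docker run -d --name myapp -p 8080:8080 myapp:patched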


Am I the only one surprised that this is a serious discussion in 2025?

Perhaps. There are many people, even in the IT industry, that don't deal with containers at all; think about the Windows apps, games, embedded stuff, etc. Containers are a niche in the grand scheme of things, not the vast majority like some people assume.

Really? I'm a biologist, I just do some self-hosting as a hobby, and I need a lot of FOSS software for work. I have experienced containers as nothing short of pervasive. I guess my surprise stems from the fact that I, a non-CS person, even know about containers and see them as almost unavoidable. But what you say sounds logical.

I'm a career IT guy who supports biz in my metro area. I've never used Docker nor run into it with any of my customers' vendors. My current clients are Windows shops across med, pharma, web retail and brick/mortar retail. Virtualization here is Hyper-V.

And this isn't a non-FOSS world. BSD powers firewalls and NAS. About a third of the VMs under my care are *nix.

And as curious as some might be at the lack of dockerism in my world, I'm equally confounded at the lack of compartmentalization in their browsing - using just one browser and that one w/o containers. Why on Earth do folks at this technical level let their internet instances constantly sniff at each other?

But we live where we live.


Self-hosting and bioinformatics are both great use cases for containers, because you want "just let me run this software somebody else wrote," without caring what language it's in, or looking for rpms, etc etc.

If you're e.g: a Java shop, your company already has a deployment strategy for everything you write, so there's not as much pressure to deploy arbitrary things into production.


The world is too complex, and life paths too varied, to reliably assume "everyone" in a community or group knows about some fact.

You're usually deep within a social bubble of some sort if you find yourself assuming otherwise.



Your understanding of containers is incorrect!

Containers decouple programs from their state. The state/data live outside the container, so the container itself is disposable and can be discarded and rebuilt cheaply. Of course there need to be some provisions for when the state (i.e. the schema) needs to be updated by the containerized software, but that is the same as for non-containerized services.
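A small illustration of that decoupling, using a named volume for the state (postgres picked arbitrarily; the volume name is invented):

    # state lives in the volume, not in the container
    docker run -d --name db -v pgdata:/var/lib/postgresql/data postgres:16.3

    # dispose of the container and re-create it from a patched image;
    # the data in the volume survives (same major version, so the
    # on-disk format is compatible)
    docker rm -f db
    docker run -d --name db -v pgdata:/var/lib/postgresql/data postgres:16.4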

I'm a bit surprised this has to be explained in 2025, what field do you work in?


It's not that easy.

First I need to monitor all the dependencies inside my containers, which is half a Linux distribution in many cases.

Then I have to rebuild and deal with all the potential issues of the software build ...

Yes, in the happy path it is just a "docker build" that updates packages from a Linux distro repo and then builds only what is needed. But as soon as the happy path fails, this can become really tedious really quickly, since everyone writes their Dockerfile differently, handles build steps differently, uses a different base Linux distribution, ...

I'm a bit surprised this has to be explained in 2025, what field do you work in?


It does feel like one of the side effects of containers is that now, instead of having to worry about dependencies on one host, you have to worry about dependencies for the host (because you can't just ignore security issues on the host) as well as in every container on said host.

So you go from having to worry about one image + N services to up to N images + N services.


I think you are not too wrong about this.

Just that state _can_ live outside the container, and in most cases should; it doesn't have to. A process running in a container can also write files inside the container, in a location not covered by any mount or volume. The downside (or upside) of this is that once you take the container down, that data is basically gone, which is why the state usually does live outside, like you are saying.
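You can see that behaviour directly with a throwaway container (names invented):

    # write a file into the container's own writable layer
    docker run --name scratch alpine sh -c 'echo hello > /data.txt'

    # the file is still there in the stopped container
    # (docker cp streams it out as a tar archive)
    docker cp scratch:/data.txt -

    # ...but removing the container removes the file for good
    docker rm scratch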


Your understanding of not-containers is incorrect.

In non-containerized applications, the data and state also live outside the application, stored in files, a database, a cache, S3, etc.

In fact, that is the only way containers can decouple programs from state: it has to already be done by the application. But with containers you have the extra steps of setting up volumes, virtual networks, and port translation.

But I’m not surprised this has to be explained to some people in 2025, considering you probably think that a CPU is something transmitted by a series of tubes from AWS to Vercel that is made obsolete by NVidia NFTs.


Pull the new image, stop the old container, and start a new one. You can also make containers immutable.

Thank you! I’ll have to take a look.


I recommend starting here: https://youtu.be/nKCT-Cdk0xY

Once you understand and use this approach, you can figure out most other approaches you need to use.


Thank you for the description of CBC.

I'm curious about it and your thinking on how to track things over time and see what has surprised us since we got started. It is useful to note down every time you (or your team) sets an expectation with someone (or another team) and then make sure you don't forget about that. It's also useful to be deliberate when setting expectations.

Having a public journal could well work for noting down when expectations are set and whenever there is a meeting of minds. I've found when tracking things like this that the amount of data can quickly grow to the point where you can no longer quickly and easily reason about it. The success seems to live and die on the data visualization or UI/UX.


Ok, I'll bite. From the article I can't really figure out what collaborating by contract (CBC) is, how it works in practice or how to introduce it to an organization.

A search in Google for "Collaborate by contract" gives three results, all from the same person, all in the last few weeks. Including this new article it's 1776 words in total on CBC. It doesn't seem to be real or something that has been tried out in an organization. It appears to be Al Newkirk's idea for a system that could work, but has not been tried.

Specifically, I'd like to see an example of a contract and who agrees to it; what the journal of contracts looks like; what happens when after an agreement everyone learns something that they didn't know when the agreements were made; what are the leaders committing to and what happens when they fail to deliver that?

Links found on CBC: https://www.alnewkirk.com/bidirectional-accountability/ https://www.alnewkirk.com/understanding-collaborate-by-contr... https://www.alnewkirk.com/maybe-its-time-to-change-the-way-w... https://www.reddit.com/r/productivity/comments/1n04s5z/comme...


Okay, I can work with this.

Many teams have working agreements; and companies have employee handbooks.

I don’t know if you ever read these in detail, but they’re generally in one direction. Other than dating/relationships (manager and direct reports, etc) and some generally applicable guidelines, it’s favorable to management.

One thing I make very clear to my direct reports is that I expect them to hold me to account when I fail to do something or hinder the team; even going above me if needed.

But this is ad-hoc. It’s not consistent across the board, and I see managers who are active hindrances to their team or their mission.

This is also the norm in many companies, and it’s a problem.


It sounds like you've got something specific in mind when you say "modeling". The term is used in a lot of different situations to mean different things: it could mean making a 3D model in Blender; it could mean posing for someone to paint you or take a photo; with databases it means modeling the data; with statistics it means finding a way to simply represent and reason about the data (creating a model of it).

The things you've listed out make me guess you want to write 2d or 3d image rendering software. Is that right?

If that's the case, there's no substitute for trying to recreate certain algorithms or curves using a language or tool that you're comfortable with. It'll help you build an intuition about how the mathematical object behaves and what problems it solves (and doesn't). All of these approaches were created to solve problems; understanding the theory alone doesn't quite get you there. If you don't have a good place to try out functions, I recommend https://thebookofshaders.com/05/ , https://www.desmos.com/calculator , or https://www.geogebra.org/calculator .

A good place to start is linear interpolation (lerp). It seems dead simple, but it's used extensively to blend two things together (say positions or colors) and the other things you listed are mostly fancier things built on top of linear interpolation.

https://en.wikipedia.org/wiki/Linear_interpolation
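As a tiny sketch (plain Python, nothing library-specific), lerp is just one line, applied per component when blending positions or colours:

    def lerp(a, b, t):
        # returns a at t=0, b at t=1, and blends in between
        return a + (b - a) * t

    print(lerp(0.0, 10.0, 0.25))  # 2.5

    # blend two RGB colours channel by channel
    red, blue = (255, 0, 0), (0, 0, 255)
    print(tuple(round(lerp(c0, c1, 0.5)) for c0, c1 in zip(red, blue)))
    # (128, 0, 128)

A quadratic Bezier, for instance, is then just a lerp of two lerps: lerp(lerp(p0, p1, t), lerp(p1, p2, t), t).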

For bezier curves and surfaces here are some links I've collected over the years: https://ciechanow.ski/curves-and-surfaces/ https://pomax.github.io/bezierinfo/ https://blog.pkh.me/p/33-deconstructing-be%CC%81zier-curves.... http://www.joshbarczak.com/blog/?p=730 https://kynd.github.io/p5sketches/drawings.html https://raphlinus.github.io/graphics/curves/2019/12/23/flatt...

A final note: a lot of graphics math involves algebra. Algebra can be fun, but it can also be frustrating and tedious, particularly when you're working through something large, make a silly mistake, and the result doesn't work. I suggest using sympy to rearrange equations, do substitutions, and so on. It can seem like overkill, but the first time it saves you a few hours of debugging it's worth it. It also does differentiation and integration for you, along with simplifying equations.

https://docs.sympy.org/latest/tutorials/intro-tutorial/intro...
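For example, a small sketch of the kind of grunt work sympy takes over (the equations here are made up for illustration):

    import sympy as sp

    t, x, p0, p1, p2 = sp.symbols('t x p0 p1 p2')

    # expand a quadratic Bezier into polynomial form in t
    bezier = (1 - t)**2 * p0 + 2*(1 - t)*t * p1 + t**2 * p2
    print(sp.expand(bezier))

    # rearrange a lerp for t instead of doing the algebra by hand
    print(sp.solve(sp.Eq(x, p0 + (p1 - p0)*t), t))
    # -> t = (x - p0)/(p1 - p0), up to sympy's term ordering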


Thanks for that! Here is a longer video about the Scanimate, including demos of a currently working machine and an interview with an operator and an engineer.

https://youtu.be/i1aT_CqhyQs


Yes and no. The H-1B visa is "dual intent" [1] and you are allowed to apply for and receive a green card (permanent resident card) while on an H-1B. After 5 years with permanent residence you can apply for citizenship. It is a common path, and the intention for the majority of people on an H-1B visa.

[1]: https://isss.temple.edu/faculty-staff-and-researchers/intern...


Yes. Most MacBooks used in businesses don’t have an iCloud account associated with them. The store doesn’t work, but that doesn’t seem to be an issue.

Downloading and installing applications by dragging them from the installer to the Applications folder works fine.


That sounds interesting. Can you say a little more about how this works?


It's a trick I stole from ext2, and simplified. In that filesystem there are three bitsets: one for reading, one for writing, one for fsck. If you don't understand a bit you can't do that action.

For most protocols there's only reading and writing, so you can use odd bits to mean "backwards-compatible feature, you can read even if you don't understand it" and even bits to mean "stop, we broke compat".
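A sketch of the read-side check under that convention (masks assume 32-bit flags; this is the general idea, not the actual ext2 layout):

    # even bit positions (0, 2, 4, ...): breaking changes, must be understood
    INCOMPAT_MASK = 0x5555_5555
    # odd bit positions (1, 3, 5, ...): backwards compatible, safe to skip

    def can_read(advertised: int, understood: int) -> bool:
        unknown = advertised & ~understood
        # unknown odd bits are optional extras; any unknown even bit
        # means the format broke compatibility, so refuse to proceed
        return (unknown & INCOMPAT_MASK) == 0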


That's a good idea for filesystems. But OpenTimestamps Proofs aren't really "written to". They're created, and then later validated. Also, being cryptographic proofs, my philosophy is the validator should almost always understand them 100%, or not at all, to avoid any false proofs.

That's also why I picked a binary encoding: it's difficult to parse an OTS proof incorrectly. An incorrect implementation will almost always fail to parse the proof at all, with a clear error, rather than silently parse the proof incorrectly.
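A generic illustration of that fail-closed parsing style (the tag values and ops are invented for the sketch, not the real OTS opcodes):

    def parse_op(data: bytes, pos: int):
        tag = data[pos]
        if tag == 0x08:                      # hypothetical: sha256 op
            return ('sha256',), pos + 1
        if tag == 0xf0:                      # hypothetical: append op, length-prefixed
            n = data[pos + 1]
            return ('append', data[pos + 2:pos + 2 + n]), pos + 2 + n
        # anything unrecognized is a hard error, never a best-effort guess
        raise ValueError(f'unknown opcode {tag:#04x}')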


We use the same for Lightning: even bits for incompatible changes, odd for backwards compatible changes.

