Hacker News

I interviewed a few folks on the topic some time ago and published the results here: https://highops.com/insights/immutable-infrastructure-6-ques...

I particularly like the definition that emerged:

'I see it as conceptually dividing your infrastructure into "data" and "everything else". Data is the stuff that's created and modified by the services you're providing. The elements of your infrastructure that aren't managed by the services can be treated as immutable: you build them, use them, but don't make changes to them. If you need to change an infrastructure element, you build a new one and replace the old one'

There's more in the full transcript and in the video.




This is all well and good, but the devil is in the details. As rdeboo says, what happens when you do need to change the datastore config? Databases famously need plenty of care and attention to achieve optimal performance; they are decidedly not fire-and-forget systems. How do I tweak my PostgreSQL performance parameters in the immutable world?
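One common answer is to treat the tuned config as part of the immutable artifact: the tuning change becomes a new image build that you roll out, rather than an edit on a live box. A minimal sketch, assuming the official `postgres` Docker image and a tuned `postgresql.conf` kept in version control (both are illustrative choices, not something from the thread):

```dockerfile
# Sketch: performance tuning as a new immutable image, not a live edit.
# Assumes the official "postgres" image; tag and paths are illustrative.
FROM postgres:16

# The tuned config is baked into the image. The data itself lives on an
# external volume, so replacing the image doesn't touch it.
COPY postgresql.conf /etc/postgresql/postgresql.conf

# Point the server at the baked-in config.
CMD ["postgres", "-c", "config_file=/etc/postgresql/postgresql.conf"]
```

Changing `shared_buffers` or `work_mem` then means editing the config in version control, building a new image, and swapping containers.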


Not to mention the mere act of transferring data as part of the migration from old to new can take hours on its own.


You could keep Postgres in an immutable image, with only /var/lib/postgres in a separate volume. Upgrading the PG config would just be a matter of unmounting it, replacing the image and re-mounting. (Docker automates this with its "data volumes", but you can do it manually too).
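That workflow can be sketched with Docker named volumes; the image tags below are hypothetical, and note the replacement image should be the same major PostgreSQL version, since a major-version jump would also require a data migration:

```shell
# Sketch of the "immutable image, mutable data volume" workflow.
# Assumes a Docker daemon; image names/tags are illustrative.
docker volume create pgdata

# Run v1 of the image with the data directory on the named volume.
docker run -d --name pg \
  -v pgdata:/var/lib/postgresql/data \
  myorg/postgres-tuned:v1

# To change the config: replace the container, keep the data.
docker stop pg && docker rm pg
docker run -d --name pg \
  -v pgdata:/var/lib/postgresql/data \
  myorg/postgres-tuned:v2
```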


In theory, yes - but that strategy doesn't always work. Sometimes the data store's on-disk format changes between releases, requiring an upgrade or a data migration.

For large datasets that can take hours or days.


That may prevent the simple unmount/replace image/mount workflow, but it doesn't prevent the separation between the mutable and immutable parts of the DBMS.
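For PostgreSQL specifically, that separation survives a major-version migration: `pg_upgrade` operates on the data directories while the binaries come and go with the image. A sketch with illustrative paths (not from the thread):

```shell
# Sketch: the mutable data directories are migrated, while the
# immutable parts (binaries, config) are swapped with the image.
# Paths and versions are illustrative.
pg_upgrade \
  --old-bindir=/usr/lib/postgresql/15/bin \
  --new-bindir=/usr/lib/postgresql/16/bin \
  --old-datadir=/var/lib/postgres/15/data \
  --new-datadir=/var/lib/postgres/16/data
```

For large datasets even this can be slow, which is the commenter's point - the data side is never fully "fire-and-forget".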



