I've probably spent way too much time thinking about Linux backup over the years. Thankfully, I found a setup in 2018 or so that works really well for me, have used it for the last few years, and wrote up a detailed blog post about it just a month ago: https://amontalenti.com/2024/06/19/backups-restic-rclone
The tools I use on Linux for backup are restic + rclone, storing my restic repo on a speedy USB3 SSD. For offsite, I use rclone to incrementally upload the entire restic repository to Backblaze B2.
The net effect: I have something akin to Time Machine (macOS) or Arq (macOS + Windows), but on my Linux laptop, without needing to use ZFS or btrfs everywhere.
Using restic + some shell scripting, I get full support for de-duplicated, encrypted, snapshot-based backups across all my "simpler" source filesystems. Namely: across ext4, exFAT, and (occasionally) FAT32, which is where my data is usually stored. And pushing the whole restic repo offsite to cloud storage via rclone + Backblaze completes the "3-2-1" setup straightforwardly.
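For the curious, the core of that setup boils down to roughly two commands. This is only a sketch: the repo path, password file, rclone remote name, and bucket name below are hypothetical placeholders, not the exact ones from my script.

```
# Local backup: de-duplicated, encrypted snapshots onto the USB3 SSD.
export RESTIC_REPOSITORY=/mnt/ssd/restic-repo
export RESTIC_PASSWORD_FILE="$HOME/.config/restic/password"
restic backup "$HOME" --exclude "$HOME/.cache"

# Offsite copy: mirror the whole restic repository (already encrypted) to Backblaze B2.
rclone sync /mnt/ssd/restic-repo b2-remote:my-backup-bucket/restic-repo
```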
One problem with file-based backups is that they are not atomic across the filesystem. If you ever back up a database (or really any application that expects atomicity while it’s running), you might corrupt the database and lose data. This might not seem like a big problem, but it can affect e.g. SQLite, which is quite popular as a file format.
Then again, the likelihood that the backup will be inconsistent is fairly low for a desktop, so it’s probably fine.
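One common workaround, not mentioned above and only a hedged sketch here, is to ask SQLite itself for a consistent copy before the file-level backup runs, using its built-in `.backup` dot-command; the paths are hypothetical:

```
# Hypothetical database and staging paths. SQLite's online backup
# (the .backup dot-command) writes a transactionally consistent copy
# even while the application is still running.
sqlite3 ~/.local/share/myapp/data.db ".backup '/mnt/ssd/staging/myapp-data.db'"
```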
I think the optimal solution is:
1) A filesystem-level atomic snapshot (ZFS, btrfs, etc.)
2) Back up that snapshot at the file level (restic, borg, etc.)
This way you get atomicity as well as a file-based backup which is redundant against filesystem-level corruption.
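As a rough illustration of that two-step recipe (hypothetical pool/dataset names; ZFS exposes each snapshot read-only under `.zfs/snapshot/`, and the restic repo settings are assumed to already be in the environment):

```
# 1) Take an atomic, filesystem-level snapshot.
SNAP="restic-$(date +%F-%H%M)"
zfs snapshot rpool/home@"$SNAP"

# 2) Back up the frozen snapshot at the file level.
restic backup "/home/.zfs/snapshot/$SNAP"

# Drop the short-lived snapshot once the backup finishes.
zfs destroy rpool/home@"$SNAP"
```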
I agree with you, of course. On macOS, Arq uses APFS snapshots, and on Windows, it uses VSS. It'd be nice to use something similar on Linux with restic.
In my linked post above, I wrote about this:
"You might think btrfs and zfs snapshots would let you create a snapshot of your filesystem and then backup that rather than your current live filesystem state. That’s a good idea, but it’s still an open issue on restic for something like this to be built-in (link). There’s a proposal about how you could script it with ZFS in this nice article (link) on the snapshotting problem for backups."
The post contains the links with further information.
My imperfect personal workaround is to run the restic backup script from a virtual console (TTY) occasionally with my display server / login manager service stopped.
I run this from a ZFS snapshot. What I want backed up from my home dir lives on the same volume, so I don't have to launch restic multiple times. I have dedicated volumes for what I specifically want excluded from backups and ZFS snapshots (~/tmp, ~/Downloads, ~/.cache, etc).
I've been thinking of somehow triggering restic from zrepl whenever it takes a snapshot, but I haven't figured out a way of securely grabbing credentials for it to unlock the repository and to upload to S3 without requiring user intervention.
Personally I've never found this to be an issue, as I increase volume sizes based on need rather than allocating 100% from the get-go. The space needed for short-lived snapshots is not that big, though that of course can depend on the system.
This also helps with runaway (or long-running) processes eating disk space, as you always have some extra space set aside.
Only a little (as much as data will change during the backup). And default filesystems nowadays support resizing downwards so you can make space after initial partitioning.
You have to know in advance not to allocate 100% to root and home, otherwise you are SOL when you want to make space later. If you're lucky you can disable swap and temporarily use its allocation to do it, provided it is large enough for the changes.
That's not the case: as I said, you can shrink either of those filesystems and its containing volume and use the freed space for this.
(Also, I think LVM doesn't need a volume's blocks to be contiguous on the physical volume. So you might have N of free space after volume a and M after volume b, and LVM would let you create a new volume of size N+M.)
The easiest way to resize the root partition is probably to boot from e.g. an Ubuntu live/install USB stick. The home partition you can unmount (but not while non-root user sessions are using it).
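A hedged sketch of that shrink-to-make-room approach with LVM (hypothetical VG/LV names; ext4 can only be shrunk while unmounted, so for the root LV you'd do this from a live USB):

```
# Shrink the filesystem and the LV together, freeing 20G in the volume group.
umount /home
lvresize --resizefs -L -20G /dev/vg0/home
mount /home   # assumes an /etc/fstab entry for /home

# The freed space can now hold short-lived snapshot volumes.
lvcreate --snapshot --name home-snap --size 20G /dev/vg0/home
```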
Windows' Volume Shadow Copy Service[1] allows applications like databases to be informed[2] when a snapshot is about to be taken, so they can ensure their files are in a safe state. They also participate in the restore.
While Linux is great at many things, backups is one area I find lacking compared to what I'm used to from Windows. There I take frequent incremental whole-disk backups. The backup program uses the Volume Shadow Copy Service to provide a consistent state (as much as possible). Being incremental they don't take much space.
If my disk crashes I can be back up and running like (almost) nothing happened in less than an hour. Just swap out the disk and restore. I know, as I've had to do that twice.
It's just such a low-effort peace of mind. Just a few clicks and I know that regardless what happens to my disk or my system, I can be up and running in very little time with very little effort.
On Linux it's always a bit more work, but backup and restore is one of those things I prefer not to be too complicated, as the stress level is usually high enough when you need to do a restore without also worrying about forgetting some incantation steps.
It depends. Doing a complete disaster recovery of a Windows system can IMHO be a real struggle, especially if you have to restore a system to different hardware, which the system state backup that Microsoft offers does not support AFAIK.
Backing up a Linux system in combination with ReaR (Relax-and-Recover) and a backup utility of your choice for the regular backups has never failed me so far. I've used it to restore Linux systems to completely different hardware without any trouble.
For my cases it's been quite easy, but then I've mostly had quite plain hardware so didn't need vendor drivers to recover.
While I've had to recover in anger twice, I've used the same procedure to migrate to new hardware many times. Just restore to the new disk in the new machine, and let Windows reboot a few times and off I went.
I don't think the diffs are usable that way. They're actually more like an "undo log": the snapshot space is filled with "old blocks" as the actual volume takes writes. It's useful for the same reasons as Volume Shadow Copy: a consistent snapshot of the block device. (Also, this can be very bad for write performance, as every write is doubled: to the snapshot and to the real device.)
I think block-level snapshots would be very difficult to use this way.
I just make full, dedupped backups from LVM snapshots with kopia, but I've set that up on only one system; on the others I just use kopia as-is.
It takes some time, but that's fine for me. The previous backup of 25 GB, an hour ago, took 20 minutes. I suppose if it only walked files it knew had changed it would be a lot faster.
Thanks, sounds interesting. So you create a snapshot, then let kopia process that snapshot rather than the live filesystem, and then remove the snapshot?
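Presumably something along these lines, to sketch the create/back-up/remove cycle; the VG, LV, and mount-point names are hypothetical, and this is not claimed to be the parent's exact script:

```
# Create a short-lived copy-on-write snapshot of the live volume.
lvcreate --snapshot --name root-snap --size 10G /dev/vg0/root

# Mount it read-only and let kopia back up the frozen view.
mkdir -p /mnt/root-snap
mount -o ro /dev/vg0/root-snap /mnt/root-snap   # a dirty ext4 journal may need -o ro,noload
kopia snapshot create /mnt/root-snap

# Clean up: unmount and drop the snapshot so it stops accumulating COW blocks.
umount /mnt/root-snap
lvremove -y /dev/vg0/root-snap
```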
> I suppose if it only walked files it knew were changed it would be a lot faster.
Right, for me I'd want to set it up to do the full disk, so it could be millions of files and hundreds of GB. But this trick should work with other backup software, so perhaps it's a viable option.
While I do that, is that really the case? I can imagine snapshots of a database are consistent most of the time, but it can't be guaranteed, right? In the end it's like a server crash: the database suddenly stops.
That's why you do a filesystem snapshot before the backup, something supported by all systems. The snapshot doesn't change under the backup tool, so read order and subsequent writes don't matter.
The main difference is that Windows and MacOS have a mechanism that communicates with applications that a snapshot is about to be taken, allowing the applications (such as databases) to build a more "consistent" version of their files.
In theory, of course, database files should always be in a logically consistent state (what if power goes out?).
Well, supported by Windows and MacOS. Linux only if you happen to use zfs or btrfs, and also only if the backup tool you use happens to rely on those snapshots.
That works if the backup uses a snapshot of the filesystem or a point in time. Then the backup state is equivalent to what you'd get if the server suddenly lost power, which all good ACID databases handle.
The GP is talking about when the backup software reads database files gradually from the live filesystem at the same time as the database is writing the same files. This can result in an inconsistent "sliced" state in the backup, which is different from anything you get if the database crashes or the system crashes or loses power.
The effect is a bit like when "fsync" and write barriers are not used before a server crash, and an inconsistent mix of things end up in the file. Even databases that claim to be append-only and resistant to this form of corruption usually have time windows where they cannot maintain that guarantee, e.g. when recycling old log space if the backup process is too slow.
Do you have much of an opinion on why you went with Restic over Borg? The single Go binary is an obvious one; perhaps that alone is enough. I remember some people having unbounded memory usage with Restic, but that might have been a very old version.
I use both to try to mitigate the risk of losing data due to a backup format/program bug[1]. If I wasn't worried about that, I'd probably go with Borg but only because my offsite backup provider can be made to enforce append-only backups with Borg, but not Restic, at least not that I could find.[2] Otherwise, I have not found one to be substantially better than the other in practice.
1 - some of my first experiences with backup failures were due to media problems -- this was back in the days when "backup" pretty much meant "pipe tar to tape" and while the backup format was simple, tape quality was pretty bad. These days, media -- tape or disk -- is much more reliable, but backup formats are much more complex, with encryption, data de-dup, etc. Therefore, I consider the backup format to be at least as much of a risk to me now as the media. So, anyway, I do two backups: the local one uses restic, the cloud backup uses borg.
I use both, and I've never had problems with either of them. Restic has the advantage that it supports a lot more endpoints than ssh/borg, e.g. S3 (or anything that rclone supports). Also, borg might be a little more complicated to get started with than restic.
I chose restic over borg for the simple reason that restic can back up directly to S3-compatible cloud storage and borg can't. Commodity S3-compatible storage is cheaper than borg-compatible cloud storage. I back up to both B2 ($0.006/GB) and S3 (Intelligent Tiering, ~$0.004/GB) and the two combined are still cheaper than rsync.net ($0.012/GB). I don't see that borg is any better than restic, so this seems like a straightforward win to me. I'd trust restic with my life. It's among the highest-quality software I've ever used.
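For reference, restic talks to both backends natively via its repository URL schemes; bucket names and credentials below are placeholders:

```
# Backblaze B2 backend.
export B2_ACCOUNT_ID="<key id>"
export B2_ACCOUNT_KEY="<application key>"
restic -r b2:my-bucket:restic-repo backup ~/Documents

# Any S3-compatible backend.
export AWS_ACCESS_KEY_ID="<access key>"
export AWS_SECRET_ACCESS_KEY="<secret key>"
restic -r s3:s3.amazonaws.com/my-bucket/restic-repo backup ~/Documents
```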
One question, why use rclone for the Backblaze B2 part? I use restic as well, configured with autorestic. One command backs up to the local SSD, local NAS, and B2.
I explain in the post. Here's a copypasta of the relevant paragraph:
"My reasoning for splitting these two processes — restic backup and rclone sync — is that I run the local restic backup procedure more frequently than my offsite rclone sync cloud upload. So I’m OK with them being separate processes, and, what’s more, rclone offers a different set of handy options for either optimizing (or intentionally throttling) the cloud-based uploads to Backblaze B2."
So you did! Sorry, hadn't read the post beforehand. Oh, and I too mourned the loss of CrashPlan. Being in Canada, I didn't have the option offered to have a restore drive sent if needed, but thought it was a brilliant idea. On the other hand, I think Backblaze might!
For home backup, I have a similar setup with dedup, local+remote backups.
Borgbackup + rclone (or aws) [1]
It works so well, I even use this same script on my work laptop(s). rclone enables me to use whatever quirky file sharing solution the current workplace has.
I've been mulling over setting up restic/kopia backups, and having recently discovered that httm[1] supports restic directly in addition to zfs (and more), I think I finally will.
I only discovered httm thanks to this thread, and I'll definitely be trying it out for the first time today. Maybe I'll add an addendum to my blog post about it.
Enjoyed the post, thanks. One question: why don’t you use restic+rclone on macOS? They both support it and I’d assume you could simplify your system a bit…
I only have one macOS system (a Mac Mini) and Arq works well for me. Also I prefer to use Time Machine for the local backups (to a USB3 SSD) on macOS since Apple gives Time Machine all sorts of special treatment in the OS, especially when it comes time to do a hardware upgrade.
I’ve also found Arq to be brilliant on MacOS. It’s especially nice on laptops, where you can e.g. set it to pause on battery and during working hours. Also, APFS snapshots are a nice thing given how many Mac apps use SQLite databases under the hood (Photos, Notes, Mail, etc.).
On Linux, the system I liked best was rsnapshot: I love its brutal simplicity (cron + rsync + hardlinks), and how easy it is to browse previous snapshots (each snapshot is a real folder with real files, so you can e.g. ripgrep through a date range). But when my backups grew larger I eventually moved to Borg to get better deduplication + encryption.
rsnapshot was definitely my favorite Linux option before restic. I find that restic gives me the benefits of chunk-based deduplication and encryption, but via `restic find` and `restic mount` I can also get many of the benefits of rsnapshot's simplicity. If you use `restic mount` against a local repo on a USB3 SSD, the FUSE filesystem is actually pretty fast.
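Roughly what that looks like in practice (the repo path and search patterns here are hypothetical):

```
# Find a file across all snapshots without restoring anything.
restic -r /mnt/ssd/restic-repo find 'notes-2023*.md'

# Mount the repo as a FUSE filesystem (stays in the foreground)...
restic -r /mnt/ssd/restic-repo mount /mnt/restic
# ...then, from another terminal, browse snapshots with ordinary tools:
grep -r "TODO" /mnt/restic/snapshots/latest/home/
```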
Thanks for the info, I’ll have a closer look at Restic then. Borg also has a FUSE interface, but last time I tried it I found it abysmally slow – much slower than just restoring a folder to disk and then grepping through it. I used a Raspberry Pi as my backup server though, so the FUSE was perhaps CPU bound on my system.
Yea, I don't want to oversell it. The restic FUSE mount isn't anywhere near "native" performance. But, it's fast enough that if you can narrow your search to a directory, and if you're using a local restic repo, using grep and similar tools is do-able. To me, using `restic mount` over a USB3 SSD repo makes the mount folder feel sorta like a USB2 filesystem rather than a USB3 one.
I backup everything except for scratch/tmp/device style directories. Bytes are cheap to store, my system is a rounding error vs my /home, and deduping goes a long way.
I'm less worried about the size and more about something breaking when doing a recovery.
Let's say you're running Fedora with Gnome and you want to switch to KDE without doing a fresh install. You make a backup, then go through the dozens of commands to switch, with new packages installed, some removed, display managers changed etc. Now something doesn't work. Would recovering from the restic backup reliably bring the system back in order?
The tool from the original post seems to be geared towards that, while most Restic and rclone examples seem to be geared towards /home backup, so I wonder how much this is actually an alternative.
Oh, I see what you're saying. I personally wouldn't use it to do a 100% filesystem restore. For the sake of simplicity, I'd just use dd/ddrescue to make a .img file and then load that .img file directly into a partition to boot from a new piece of hardware. Likewise if I were doing a big system change like GNOME to KDE or vice versa, I'd just make an .img file before and restore from it if it went wrong.
I think of restic system backups covering something like losing a customized /etc file in an apt upgrade and wanting to get it back.
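In case it's useful, the whole-disk image approach described above is just plain dd from a live USB; the device and output paths here are hypothetical:

```
# Image the whole disk while booted from a live USB (the disk must not be mounted).
dd if=/dev/nvme0n1 of=/mnt/ssd/laptop-pre-kde.img bs=4M status=progress conv=fsync

# Restoring is the same command with if= and of= swapped.
dd if=/mnt/ssd/laptop-pre-kde.img of=/dev/nvme0n1 bs=4M status=progress conv=fsync
```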
I prefer using openSUSE, which is tightly integrated with snapper[0], making it simple to recover from a botched update. I've only ever had to use it when an update broke my graphics drivers, but when you need it, it's invaluable.
Snapper on openSUSE is integrated with both zypper (package manager) and YaST (system configuration tool) [1], so you get automatic snapshots before and after destructive actions. Also, openSUSE defaults to btrfs, so the snapshots are filesystem-native.
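In day-to-day use that boils down to a few snapper commands; the snapshot number below is hypothetical:

```
sudo snapper list                                    # show existing snapshots, incl. zypper pre/post pairs
sudo snapper create --description "before KDE test"  # take a manual snapshot
sudo snapper rollback 42                             # roll the default btrfs subvolume back to snapshot 42
```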
And it's also integrated into the bootloader (if you use one of the supported ones). The bootloader shows you one boot entry per snapshot so you can boot an old snapshot directly.
I haven't tried it though so I don't know for sure. (I have my own custom systemd-boot setup that predates theirs, and since my setup uses signed UKIs and theirs doesn't, I don't care to switch to theirs. I can still switch snapshots manually with `btrfs subvol` anyway; it just might require a live CD in case the default snapshot doesn't boot.)
I'm using Tumbleweed with btrfs snapshots, systemd-boot and transparent disk encryption (using TPM + measured boot), works fine.
Currently this needs to be set up semi-manually (select some options in the installer, then run some commands after install), but it'll be automatic soon.
openSUSE honestly is so criminally underrated. I've been using Tumbleweed for a few years for my dev/work systems and YaST is just great. Also that they ship fully tested images for their rolling release is just so much saner. OBS is another fantastic tool that I see so few people talking about, despite software distribution still being such a sore point in the linux ecosystem.
Because it's not very popular in the US, which has mostly cemented around Fedora/Ubuntu/Arch, so you don't hear much about any other distros; and most other countries around the world tend to just adopt what they learn from the US, due to the massively influential gravitational field the US has on the tech field.
But in the German-speaking world many know about it. It's a shame that, despite the internet being relatively borderless, it's still quite insular and divided. I'm not a native German speaker, but it helps to know it, since there's a lot of good Linux content out there written in German.
I use btrfs-assistant with Kubuntu because I can't get Timeshift to work properly. It's basically some kind of front-end for snapper and btrfsmaintenance.
I adore Timeshift. It has made my time on Linux so much more trouble free.
I have used Linux for 10+ years, but over the years I have spent hours, days, and weeks trying to undo or fix little issues I introduced by tinkering around with things. Often I seem to break things at the worst times, right as I am starting to work on some new project or something that is time-sensitive.
Now, I can just roll back to an earlier stable version if I don't want to spend the time right then on troubleshooting.
I've enabled this on all my family members' machines and teach them to just roll back when Linux goes funky.
I enabled this four months ago and I have had the same experience.
It’s not that I couldn’t retype the config file I accidentally wrote over while tinkering, but I like the safety that comes with Timeshift to try and fail a few times.
Hard lessons come hard. This softens those lessons a little while maintaining the learning.
While it's not quite average-user-friendly (YET), one of the reasons I switched to NixOS is because it provides this out-of-the-box. I was frustrated with every other Linux for the reasons you cite, but NixOS I can deal with, since 1) screwing up the integrity of a system install is hard to begin with, 2) if you DO manage to do it, you can reboot into any of N previous system updates (where you set N).
Linux is simultaneously the most configurable and the most brittle OS IMHO. NixOS takes away all the brittleness and leaves all the configurability, with the caveat that you have to declaratively configure it using the Nix DSL.
NixOS also has out-of-the-box support for ZFS auto snapshots, where you can tell it to keep 3 monthly, four weekly, 24 hourly, and "frequent" snapshots every fifteen minutes, so you can time-shift your home directory, too.
This reminds me of the default behavior of NixOS. Whenever you make a change in the configuration for NixOS and rebuild it, it takes a snapshot of the system configurations and lets you restore after a reboot if you screw something up.
Similarly, it doesn't do anything with regard to user files.
In fairness, this app supports snapshotting your home directory as well, and that's not solvable with Nix alone. In fact, I'm running NixOS and I've been meaning to set up Timeshift or Snapper for my homedir, but alas, I haven't found the time.
Is there something about your home directory that you'd want to back up that is not covered by invoking home-manager as a Nix module as part of nixos-rebuild?
To me, it's better than a filesystem-backup because the things that make it into home manager tend to be exactly the things that I want to back up. The rest of it (e.g. screenshots, downloads) aren't something I'd want in a backup scheme anyhow.
I want to keep snapshots of my work. I run nightly backups which have come in handy numerous times, but accessing the cloud storage is always slow, and sometimes I've even paid a few cents in bandwidth to download my own files. It would be a lot smoother if everything was local and I could grep through /.snapshots/<date>/<project>.
Data (documents, pictures, source code, etc.) is not handled by home-manager. Backing up home.nix saves your config, but the data is just as if not more important.
Hmm, different strokes I guess. Maybe it's just that too much kubernetes has gone to my head, but I see files as ephemeral.
Code and docs are in source control. My phone syncs images to PCloud when I take them. Anything I download is backed up... wherever I downloaded it from.
Cloud sync != backup. Cloud sync won't help if you accidentally delete the file, backups will. Cloud sync won't help if you make an undesired edit, backups will.
But I can just rebuild the file, or restore it from a previous commit (or if I'm having a particularly bad day, restore its inputs from a previous commit and _then_ rebuild it).
The problem, unfortunately, is that Nix often finds itself in a chicken and egg scenario where nixpkgs fails to provide a lot of important packages or has versions that are old(er). But for there to be more investment in adding more packages, etc. you need more people using the ecosystem.
That's definitely true, but maybe I've just been lucky, pretty much every proprietary program I've wanted to install in NixOS has been in Nixpkgs.
Skype, Steam, and Lightworks are all directly available in the repos and seem to work fine as far as I can tell. I'm sure there are proprietary packages that don't work or aren't in the repo, but I haven't really encountered them.
I've unfortunately encountered a few. TotalPhase's Data Center software for their USB protocol analyzers is my current annoyance, someday I'll figure out how to get it to work but thus far it's been easier to just dedicate a second laptop to it.
I am hoping that Flakes will work to fix this problem somewhat; at least in theory it can be a situation of "get it to work once then it'll work forever", and then trivially distributed later (even without a blessing from Nixpkgs).
For managing your configuration.nix file itself you can just use whichever VCS you want, it's a text file that describes one system configuration and managing multiple versions and snapshots within that configuration file is out of scope.
For the system itself, each time you run "nixos-rebuild switch" it builds a system out of your configuration.nix, including an activation script which sets environment variables and symlinks and stops and starts services and so on, adds this new system to the grub menu, and runs the activation script. It specifically doesn't delete any of your old stuff from the nix store or grub menu, including all your older versions of packages, and your old activation scripts. So if your new system is borked you can just boot into a previous one.
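The day-to-day commands for that are short; a sketch:

```
# Build the system described by configuration.nix and activate it as a new generation.
sudo nixos-rebuild switch

# If the new generation is borked, flip back to the previous one...
sudo nixos-rebuild switch --rollback

# ...or simply pick an older generation from the boot menu at the next reboot.
```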
Imagine installing an entirely new window manager without issue, and then undoing it without issue.
NixOS does that. And I'm pretty sure that no other flavor of Linux does. First time I realized I could just blithely "shop around window managers" simply by changing a couple of configuration lines, I was absolutely floored.
NixOS is the first Linux distro that made me actually feel like I was free to enjoy and tinker with ALL of Linux at virtually no risk.
There is nothing else like it. (Except Guix. But I digress.)
Completely agree; being able to transparently know what the system is going to do by just looking at a few lines of text is sort of game-changing. It's trivial to add and remove services, and you can be assured that you actually added and removed them, instead of just being "pretty sure" about it.
Obviously this is just opinion (no need for someone to supply nuance) but from my perspective the NixOS model is so obviously the "correct" way of doing an OS that it really annoys me that it's not the standard for every operating system. Nix itself is an annoying configuration language, and there are some more arcane parts of config that could be smoothed over, but the model is so obviously great that I'm willing to put up with it. If nothing else, being able to trivially "temporarily" install a program with nix-shell is a game-changer to me; it changes the entire way of how I think about how to use a computer and I love it.
Flakes mostly solve my biggest complaint with NixOS, which was that it was kind of hard to add programs that weren't merged directly into the core nixpkgs repo.
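To illustrate the nix-shell point above (the package choices are arbitrary examples):

```
# Drop into a shell where ripgrep and jq exist, without installing them system-wide.
nix-shell -p ripgrep jq

# Or run a single one-off command in such a shell.
nix-shell -p ffmpeg --run 'ffmpeg -version'
```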
> but from my perspective the NixOS model is so obviously the "correct" way of doing an OS that it really annoys me that it's not the standard for every operating system
- Literally every person who's read the Nix paper and drank the kool-aid thinks this lol.
I STILL don't completely understand every element of my Nix config, but it's still quite usable. Adding software requires adding it to the large-ish config file, largely because I created overlay namespaces of "master.programname", "unstable.programname" and "stable.programname" (with the default being "unstable" in my case). Ideally those would all be moved out into two text files, one for the system level (maybe called system_packages.txt) and one for a named user (perhaps called <username>_packages.txt), and if those could be imported somehow into the configuration.nix, I think that would make things a bit easier for end-users, at least initially.
The commandline UI (even the newer `nix` one) could still use an overhaul IMHO. The original CL utils were CLEARLY aimed directly at Nix developers, and not so much at end-users...
I've been working on my own wrapper to encapsulate the most common use-cases I need the underlying CLI for: https://github.com/pmarreck/ixnay (and that's it so far).
> similar to the System Restore feature in Windows and the Time Machine tool in Mac OS
This makes no sense! System Restore is a useless wart that just wastes time making "restore points" at every app/driver install and can rarely (if ever) produce a working system when used to "restore" anything. It does not back up user data at all. Time Machine is a whole-system backup solution that seems to work quite well and does back up user data.
To me the quoted statement might as well read "a tool similar to knitting needles (in hobby shops) and dremels (in machine shops)"
Reading their description further, it seems like they are implementing something similar to TimeMachine (within the confines of what linux makes possible), and not at all like "System Restore". This seems sane as this implements something that is actually useful. They, sadly, seem to gloss over what the consequences are of using non-btrfs FS with this tool, only mentioning that btrfs is needed for byte-exact snapshots. They do not mention what sort of byte-inexactness ext4 users should expect...
I believe System Restore takes a registry backup and can recover from a bad driver install but it's been years since I used it last. I think just about anything System Restore does can be replicated by "just fixing it" in Safe Mode but I think System Restore is geared for less technical folks.
Newer versions of Windows have File History to backup user data (I don't think they have an integrated system/file solution quite like Time Machine though).
However it makes some sense to keep system/user data separate. You don't want to lose your doc edits because you happened to have a bad driver upgrade at the same time. Likewise, you don't want to roll your entire system back to get an old version of a doc.
Time Machine is trivial to implement (without the UI) with disk snapshots (that's what it does: store disk snapshots on an external disk).
My main use of System Restore was to return to a “clean” install plus just the bare minimum installs I needed, back when Windows was more likely to atrophy over time. I agree it is mostly useless today.
ZFS Snapshots + Sanoid and Syncoid to manage and trigger them is what people should be doing. Unfortunately, booting from ZFS volumes seems to be some form of black art unless things have changed over the last couple of years.
The license conflict, and OpenZFS always having to chase kernel releases (often resulting in delayed releases for new kernels), means I cannot confidently use it with rolling-release distros on the boot drive. If I muck something up, the data drives will be offline for a few minutes till I fix the problem. Doing the same with the boot drive is pain I can live without.
I am somewhat wary of trying this, mucking something up and wasting a lot of time wrestling with it. Will probably play around with it in a vm and use it during the next ssd upgrade.
It would have been so much better if the distros had shown more interest in ZFS.
In principle there's no reason you can't install this next to GRUB in case you're wary. If you're not using ZFS native encryption, and make sure not to enable some newer zpool features, GRUB booting should work for ZFS-on-root.
That said, I've been using the tool for a while now and it's been really rock solid. And once you have it installed and working, you don't really have to touch it again, until some hypothetical time when a new backward-incompatible zpool feature gets added that you want to use, and you need a newer ZFSBootMenu build to support it.
Because it's just an upstream Linux kernel with the OpenZFS kmod, and a small dracut module to import the pool and display a TUI menu, it's mechanically very simple, and relying on core ZFS support in the Linux kernel module and userspace that's already pretty battle tested.
After seeing people in IRC try to diagnose recent GRUB issues with very vanilla setups (like ext4 on LVM), I'm becoming more and more convinced that the general approach used by ZFSBootMenu is the way to go for modern EFI booting. Why maintain a completely separate implementation of all the filesystems, volume managers, disk encryption technologies, when a high quality reference implementation already exists in the kernel? The kernel knows how to boot itself, unlock and mount pretty much any combination of filesystem and volume manager, and then kexec the kernel/initrd inside.
The upsides to ZFSBootMenu, OTOH:
* Supports all ZFS features from the most recent OpenZFS versions, since it uses the OpenZFS kmod
* Select boot environment (and change the default boot environment) right from the boot loader menu
* Select specific kernels within each boot environment (and change the default kernel)
* Edit kernel command line temporarily
* Roll back boot environments to a previous snapshot
* Rewind to a pool checkpoint
* Create, destroy, promote and orphan boot environments
* Diff boot environments to some previous snapshot to see all file changes
* View pool health / status
* Jump into a chroot of a boot environment
* Get a recovery shell with a full suite of tools available including zfs and zpool, in addition to many helper scripts for managing your pool/datasets and getting things back into a working state before either relaunching the boot menu, or just directly booting into the selected dataset/kernel/initrd pair.
* Even supports user mode SecureBoot signing -- you just need to pass the embedded dracut config the right parameters to produce a unified image, and sign it with your key of choice. No need to mess around with shim and separate kernel signing.
Sounds very interesting. I will try it out. Thanks.
GRUB can become a nightmare very quickly. Currently I am on systemd-boot + ext4 for the boot drive. Has been working without any major issues. But boot drive backup with rsnapshot is very underwhelming
Hmm, this doesn't appear to be what I hoped it was:
> Timeshift is similar to applications like rsnapshot, BackInTime and TimeVault but with different goals. It is designed to protect only system files and settings. User files such as documents, pictures and music are excluded.
On the other hand, a quick search looking for "that zfs based time machine thing" did reveal a new (to me) project that looks very interesting:
The root partition / and the home partition /home are different.
There's a /home/etc/ folder with a very small set of configuration files I want to save, everything else is nuked on reinstall.
When I do a reinstall, the root partition is formatted, the /home partition is not.
This allows me to test different distros and not be tied to any particular distro or any particular backup tool, if I test a distro and I don't like it, then it is very easy to change it.
The implication here is that your home directory can actually work across distros? How in the world do you do that? Surely you have to encounter errors sometimes when cached data or configs point to nonexistent paths, or other incompatibilities come up?
Typically ~ contains user specific config files for applications, which are (usually) programmed to be distro agnostic. If you're installing the same applications across distros, I don't see why this wouldn't work without too much effort. After all, most distros are differentiated by just two things:
- their package management tooling
- their filesystem layout (eg where do libraries etc go)
It is a backup directory owned by root. The reason is that it sits in the partition that is preserved, and it is outside my user folder, because it is better organized that way.
Timeshift saved my system so many times over the past 6-7 years. Botched upgrades, experimenting with desktop environments, destroying configuration defaults, it works and does what it says on the tin.
I may have had only one update that went wrong in 30 years of using Linux and that was just a bug introduced by a gfx driver in a new minor kernel version. I downgraded it and waited for the bug to be fixed upstream and that was it.
That is not on me but on the distro maintainers. They do a really good job IMHO, and apart from kernel driver issues or hardware failure (a drive), it is hard to break a distro.
I've found Debian Stable to be extremely stable, especially in recent years, I honestly don't think about system restore as much as I worry about a drive crashing or a laptop getting stolen. I assumed Linux Mint LTS was similarly stable.
Folks who have run into issues, what was the root cause?
Currently the local folder is a samba mount so it's off-site.
The only tip I'd have for people using Borg is to verify your backups frequently. They can get corrupted without much warning. Also, if you want quick and somewhat easy monitoring of backups being created, you can use webmin to watch for modifications in the backup folder and send an email if a backup hasn't arrived in a while. Similarly, you can regularly scan the Borg repo and send an email in case of failures for manual investigation.
This is low tech, at least lower tech than elastic stack or promstack, but it gets the job done.
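A hedged example of that verification step (the repo path is hypothetical):

```
# Quick periodic integrity pass over the repository metadata.
borg check /mnt/backup/borg-repo

# Slower, thorough pass that re-reads and verifies every data chunk.
borg check --verify-data /mnt/backup/borg-repo
```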
A bit of a side note and a bit of an old-man reveal: it would be nifty to have the backup system write the snapshots to CD/DVD/Blu-ray disc.
I remember working at a company that had a robotic WORM system. It would grab a disc, the disc would be processed, then it would take it out and place it among the archives. If a restore was needed, the robot would find the backup and read off the data.
I never worked directly on the system, and I seem to remember there was a window that the system could keep track of (naturally), but older discs were stored off-site somewhere for however long that window was.
(Everything was replicated to a fully 100% duplicate system geographically highly separated from the production system.)
My first "real" experience with Linux was with Wubi (Ubuntu packaged as a Windows program). I think it was based on Ubuntu version 6 or 8.
I also tried to update it when the graphical shell displayed a message saying that an update was available. Of course, it bricked the system.
I've switched from Ubuntu to Mint to Debian to Fedora to Arch to Manjaro for personal use and had to support a much wider variety of distributions professionally. My experience so far has been that upgrades inevitably damage the system. Most don't survive even a single upgrade. Arch-like systems survive several major package upgrades, but also start falling apart with time. Every few years enough problems accumulate that merit either a complete overhaul or just starting from scratch.
With this lesson learned, I don't try to work with backups for my own systems. When the inevitable happens, I try to push forward to the next iteration, and if some things are lost, then so be it. To complement this, I try to make my personal data as small and as simple to replicate and modify going forward as possible. I.e., I would rule against using filesystem snapshots in favor of storing the file contents. I wouldn't use symbolic links (in that kind of data) because they can either break or not be supported by the archive tool. I wouldn't rely on file ownership or permissions (god forbid ACLs!). I try to remove as much "formatting" information as possible... so I end up with either text files or images.
This is not to discourage someone from building automated systems that can preserve much richer assembly of data. And for some data my approach would simply be impossible due to requirements. But, on a personal level... I think it's less of a software problem and more of a strategy about how not to accumulate data that's easy to lose.
The highly awaited Linux Mint 22.0 release marks a very exciting chapter for one of the most popular Ubuntu-based distributions. This new version is not just a regular update; it's an LTS release with promises of updates and support all the way to 2029. Both experienced Linux users and newcomers will find a lot worth paying attention to in its changes and new features. Let's break down what's new and what makes this release stand out.
Timeshift does not work for me because I encrypted my SSD and decrypt it on boot, but Linux sees every file twice, once encrypted and once decrypted, thinks my storage is full, and thus Timeshift refuses to make backups due to lack of space. At least that's as far as I understand it at the moment.
I use BackInTime, which works in a similar way but is much more configurable. I have hourly backups of all my code for the past day, then a single daily for the past week, etc.
Sounds like rsnapshot (rsync with hardlinks and scheduling) but the BackInTime repo doesn't mention any comparison of how it's different, though Timeshift says they're similar. Anyone have experience with BiT vs rsnapshot?
BackInTime works similarly to Apple's Time Machine. It uses hardlinks + new files. Plus, it keeps the settings for each backup inside the repository itself, so you can install the tool, point it at the folder, and start restoring.
On top of that, BiT supports network backups and multiple profiles. I've been using it on my desktop systems with multiple profiles for years and it's very reliable.
However, it's a GUI-first application, so for server use Borg is a much better choice.
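The underlying hardlink trick (used by BackInTime, rsnapshot, and Time Machine's pre-APFS design) is essentially this rsync invocation, with hypothetical paths and dates:

```
# Files unchanged since the previous snapshot become hardlinks to it, so each
# dated directory looks like a full copy but only costs the space of what changed.
rsync -a --delete --link-dest=/backups/2024-06-01 /home/ /backups/2024-06-02/
```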
I've used BackInTime since 2010. I loved that, even without using the tool, you could just poke through the file structure, and get an old version of any backed up file.
- rsync is not a snapshot tool, so while in most cases we can rsync a live volume on a desktop without issues, it's not a good idea to do so
- zfs support in 2024 is a must; btrfs honestly is the proof of how NOT to manage storage, like stratis
- it doesn't seem to be much of a backup tool, which is perfectly fine, but since the target seems to be end users who aren't very IT-literate, that should be stated clearly...
Different categories of app. Duplicity is geared toward backing up files to a separate machine, and this tool snapshots your filesystem on the same machine.
I know it won't have the atomicity of a CoW fs, but I'd be fine with that, as the important files on my systems aren't often modified, especially during a backup - I'd configure it to disable the systemd timers while the backup process is running.
Yep, been using it for a while, including on ext4. You can have scheduled snapshots too. It saved my arse a few times, especially when you install something that cannot be easily uninstalled, like Hyprland or similar.
Can’t you also snapshot LVM volumes directly? So if you have an LVM volume, it shouldn’t matter what the filesystem is, provided it is sync’d… in theory.
(I’ve only done this on VMs that could be paused before the snapshot, so YMMV.)
Yeah, you can take live snapshots with LVM. You can use wyng-backup to incrementally back them up somewhere outside LVM. This has been working pretty well for me to back up libvirt domains backed by LVs.
Can someone recommend a solution that works well with immutable distros such as Project Bluefin or Fedora Kinoite/Silverblue? We just need to backup maybe the etc and dotfiles. Also great if it can backup NixOS too.
I've just got a simple script that uses rclone for most of my home directory to my NAS. For nearly everything else, I don't mind if I have to start mostly from scratch.
Directories and hardlinks take up space, just very little.
It would make sense to hardlink a directory if everything in that tree was unchanged, but no filesystem will allow hardlinking a directory due to the risk of creating a loop (hardlinking to a parent directory), so directories are always created new and all files in the tree get their own hardlink.
Apple's Time Machine was given an exception in their filesystem to allow it, since they have control over it and can ensure no such loops are created. So it doesn't have that penalty creating hardlinks for every single individual file every time.
The magical thing about Timeshift is that you can use it straight from a live CD. It will find your root partition and backups, and restore them together with the boot partition.
Oh, this brings back memories. I found a script that did this about 15 years ago; it kept three versions of backups using rsync and hardlinks to avoid duplication.