Say more?
NixOS supports headless LUKS, which was an improvement for me in my last distro-hop. The NixOS wiki even has an example of running a Tor onion service from initrd to accept a LUKS unlock credential.
Thank you for sharing updates about your progress. Good luck rummaging around in found.000. :(
Regulation is slow, full of drama, scales poorly, & can result in a legal thicket that teams of lawyers can navigate better than the individuals it’s intended to advocate for. Decriminalizing interoperability is faster & can handle most of the small/simple cases, freeing up our community/legislative resources to focus on the most important regulatory needs.
X11 for xdotool. ydotool doesn’t support (& can’t really support with its current architecture) retrieving information like the current mouse location, current window, window dimensions & titles. Also, using ydotool as a normal (unprivileged) user requires udev rules or session scripts and/or running a ydotool daemon, & many distros don’t yet ship with this Just Working.
X11 for Alt-F2 r to restart Gnome Shell without ending the whole session. This is a useful workaround for a variety of Gnome bugs.
There are so many ways to handle backups, so many tools, etc. You’ll find something that works for you.
In the spirit of sharing a neat tool that works well for me, addressing many of the concerns you raised, in case it might work for you too: Maybe check out git annex. Especially if you already know git, and maybe even if you don’t yet.
I have one huge git repository that in spirit holds all my stuff. All my storage devices have a check-out of this git repo. So all my storage devices know about all my files, but only contain some of them (files not present show up as dangling symlinks). git annex tracks which drives have which data and enforces policies like “all data must live on at least two drives” and “this more-important data must live on at least three drives” by refusing to delete copies unless it can verify that enough other copies exist elsewhere.
Running git annex fsck on a drive will verify that the content stored there is still intact (checksums match) & that its location-tracking records are accurate.
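In case a concrete sketch helps picture the day-to-day (a rough illustration only; repo layout, paths & the copy counts are just examples, and each drive needs a normal git annex init first):

    # Require at least 2 copies of everything, repo-wide:
    git annex numcopies 2

    # Require at least 3 copies of anything under important/ :
    echo '* annex.numcopies=3' >> important/.gitattributes

    # Fetch content this drive is missing from whatever other repos are reachable:
    git annex get .

    # Drop a local copy -- refused unless enough verified copies exist elsewhere:
    git annex drop some-big-file.iso

    # Check that content present on this drive is intact & tracking is accurate:
    git annex fsck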
Sounds fine?
Yes: Treat the two enclosures independently and symmetrically, such that you can fully restore from either one (the only difference would be that the one in the safe is slightly stale) and the ongoing upkeep is just swapping which enclosure is plugged in on whatever schedule you like.
If I assume a normal incremental backup setup, both enclosures would have a full backup and a pile of incremental backups. For example, if swapped every three days:
Enclosure A            Enclosure B
-----------------      -----------------
a-full-2023-07-01
a-incr-2023-07-02
a-incr-2023-07-03
                       b-full-2023-07-04
                       b-incr-2023-07-05
                       b-incr-2023-07-06
a-incr-2023-07-07
a-incr-2023-07-08
a-incr-2023-07-09
                       b-incr-2023-07-10
                       b-incr-2023-07-11
                       b-incr-2023-07-12
a-incr-2023-07-13
....
The thing taking the backups need not even detect or care which enclosure is plugged in – it just uses the last incremental on that enclosure to determine what’s changed & needs to be included in the next incremental.
Nothing need care about the number or identity of enclosures: You could add a third if, for example, you found an offsite location you trust. Or when one of them eventually fails, you’d just start using a new one & everything would Just Work. Or, if you want to discard history (eg: to get back the storage space used by deleted files), you could just wipe one of them & let it automatically make a new full backup.
Are you asking for help with software? This could be as simple as dar and a shell script.
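As a rough sketch of that “use whatever is already on the plugged-in enclosure” logic with dar (paths, naming scheme & options here are assumptions, not a tested script; check dar’s man page before trusting it with real data):

    #!/bin/sh
    # Wherever the currently-plugged-in enclosure is mounted:
    BACKUPS=/mnt/enclosure/backups
    TODAY=$(date +%F)

    # Newest archive already on this enclosure, if any (dar names slices <basename>.1.dar):
    LAST=$(ls -t "$BACKUPS"/*.1.dar 2>/dev/null | head -n1 | sed 's/\.1\.dar$//')

    if [ -z "$LAST" ]; then
        # Empty enclosure: take a full backup of /home.
        dar -c "$BACKUPS/full-$TODAY" -R /home -z
    else
        # Otherwise: an incremental against whatever is newest on this enclosure.
        dar -c "$BACKUPS/incr-$TODAY" -R /home -z -A "$LAST"
    fi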
My personal preference is to tell the enclosure to not try any fancy RAID stuff & just present all the drives directly to the host, and then let the host do the RAID stuff (with lvm or zfs or whatever), but I understand opinions differ. I like knowing I can always use any other enclosure or just plug the drives in directly if/when the enclosure dies.
I notice you didn’t mention encryption, maybe because that’s obvious these days? There’s an interesting choice here, though: You can do normal full-disk encryption, or you can encrypt the archives individually.

Dar actually has an interesting feature here I haven’t seen in any other backup tool: If you keep a small --aux file with the metadata needed for determining what will need to go in the next incremental, dar can encrypt the backup archives asymmetrically to a GPG key. This allows you to separate the capability of writing backups from the capability of reading backups. This is neat, but mostly unimportant, because the backup is mostly just a copy of what’s on the host. It comes into play only when accessing historical files that have been deleted on the host but are still recoverable via point-in-time restore from the incremental archives – that becomes possible only with the private key, which is not used or needed by any of the backup automation, and so is not kept on the host.

(You could also, of course, do both full-disk encryption and per-archive encryption if you want the neat separate-credential-for-deleted-files trick and also don’t want to leak metadata about when backups happen and how large the incremental archives are / how much changed.)

(If you don’t full-disk-encrypt the enclosure & rely only on the per-archive encryption, you’d want to keep the small --aux files on the host, not on the enclosure. The automation would need to keep one --aux file per enclosure, & for this narrow case it would need to identify the enclosures to make sure it uses that enclosure’s --aux file when generating the incremental archive.)
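If the capability-separation idea is unfamiliar, here is the underlying trick in isolation, illustrated with plain GnuPG rather than dar’s built-in support (the key id & filenames are placeholders): the host encrypts each finished archive to a public key it holds, & only the separately-stored private key can read the result back.

    # Host side: can create & encrypt archives, but cannot read them back afterwards.
    # backups@example.org is a placeholder recipient; produces some-archive.1.dar.gpg
    gpg --encrypt --recipient backups@example.org some-archive.1.dar

    # Restore side, wherever the private key lives:
    gpg --decrypt some-archive.1.dar.gpg > some-archive.1.dar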
Any sane compiler will simplify this into
function cosmicRayDetector() {
while(true) {
}
}
C++ may further ‘simplify’ this into (an infinite loop with no side effects is undefined behavior, so the compiler is allowed to assume it never happens)
function cosmicRayDetector() {
return
}
I accidentally the timezone.
It’s worth watching.