jortan

Everything posted by jortan

  1. I haven't watched space invader's video - how was the SMB share created? I shared a ZFS dataset by adding the share to /boot/config/smb-extra.conf:

     [sharename]
     path = /zfs/dataset
     comment = zfs dataset
     browseable = yes
     public = yes
     writeable = yes
     vfs objects =

     If I remember correctly, you can then restart samba with:

     /etc/rc.d/rc.samba restart
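     If the share doesn't show up, you can sanity-check the config and list the exported shares - assuming testparm and smbclient are included in unRAID's samba packages, which I believe they are:

     # confirm smb-extra.conf is picked up and the syntax is valid
     testparm -s
     # list shares from the server itself, no password
     smbclient -L localhost -N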
  2. They're saying, in the nicest possible way, that BTRFS is not stable in RAID5 mode:

     >> btrfs today is most reliable when configured in RAID 1 or RAID 10

     Seems like all these features will make it into unRAID eventually and they are just polling in order to set their priorities.
  3. Vote for native ZFS support in unRAID here:
  4. >> I guess just refers to its connection on the local network?

     Yes. This is common even in an enterprise environment (though they would generally have a more segmented internal network). One of the selling points of reverse proxies is "SSL offloading", meaning the reverse proxy handles the SSL workload and the application server behind it doesn't need to.
  5. Consider using the SWAG docker for this. It would give you a single place to configure a LetsEncrypt certificate, a single port to forward, and it can be configured as a front-end to multiple non-SSL backend web servers.

     edit: just saw this: "behind cloudflare and a reverse proxy" - if you have SSL on the reverse proxy, then it's not really necessary to have SSL enabled on Sonarr? Or is the reverse proxy outside your network?
  6. 545G scanned at 4.30M/s, 58.6G issued at 473K/s, 869G total

     This should give you some idea - 869G is allocated in the array, 545G has been scanned and 58.6G has been written to the replacement disk so far.

     >> Hopefully this doesn't confuse the resilvering.

     It won't cause any problems, but it will slow down the resilvering process. There are some zfs tunables you can modify to change the io priority, but the safest thing is probably just to let it complete. Consider turning off any high-io VMs/dockers that you don't need to have running.
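     For example, one of the OpenZFS module parameters that affects resilver pacing is zfs_resilver_min_time_ms. This is only how I'd poke at it if I really had to - the value below is arbitrary, not a recommendation:

     # current value (min milliseconds spent on resilver I/O per txg, default 3000)
     cat /sys/module/zfs/parameters/zfs_resilver_min_time_ms
     # temporarily raise it to give the resilver more time per txg
     echo 5000 > /sys/module/zfs/parameters/zfs_resilver_min_time_ms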
  7. It's because ZFS pools might not import on startup if the device locations have changed: https://openzfs.github.io/openzfs-docs/Project and Community/FAQ.html#selecting-dev-names-when-creating-a-pool-linux

     My not having any issues with this might be down to the fact that unRAID doesn't have a persistent zpool.cache (as far as I know). To each their own!
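     If a pool ever does refuse to import after a device reshuffle, re-importing by stable names generally sorts it out ("poolname" is just a placeholder here):

     # re-import the pool using /dev/disk/by-id paths instead of sdX names
     zpool export poolname
     zpool import -d /dev/disk/by-id poolname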
  8. zpool replace poolname origdrive newdrive

     Just to clarify, "origdrive" refers to whatever identifier ZFS currently has for the failed disk. So yes, in this case it's 3739555303482842933 (a ZFS id - apparently the drive in that location has failed to the point where it wasn't assigned a /dev/sdx device). So the command should be:

     zpool replace MFS2 3739555303482842933 sdi

     As long as you understand that these are how you refer to drives when replacing disks using zpool, there's not much chance of replacing the wrong drive. I understand that using disk ids is a common recommendation, but in my experience I just reference the normal drive locations (/dev/sda, /dev/sdb) and ZFS never seems to have any issue finding the correct disks, even when the order has changed. In my array, ZFS has switched by itself to using drive ids - this may only occur if the disk order has changed?
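     If you'd rather refer to the new disk by a stable name instead of sdi, you can find it under /dev/disk/by-id (sdi here is just from the example above):

     # see which by-id names point at the replacement disk
     ls -l /dev/disk/by-id/ | grep -w sdi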
  9. This person quite recently had the same issue on the ARM version of Plex: https://forums.plex.tv/t/failed-to-run-packege-service-i-tried-many-solutions-but-did-not-work/726127/29

     In the end they used a different version of Plex to do the install. Might be worth forcing an older version of the Plex docker?

     edit: Failing that, it might be worth a post on the Plex forums with a reference to the above thread, noting that you seem to have the same issue on the x86 docker. The few issues I've had with dockers and ZFS seem to involve applications doing direct storage calls that ZFS doesn't support. Maybe the latest versions of Plex are doing something similar during new installs? Have you also tried to copy a working database created on non-ZFS storage?
  10. I have a very similar setup to you (nested datasets for appdata and then individual dockers) and I've never run into this issue. I just checked and mine is using read/write - I'm not aware of that causing any issues either? Are there any existing files in the plex appdata folder that you've copied from elsewhere? Could it be permissions related?

      chown -R nobody:users /mnt/Engineering/Docker/Plex

      Same issue with an empty /mnt/Engineering/Docker/Plex/ folder owned by nobody?
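      You can confirm what the folder is actually owned by with something like this (99/100 being nobody:users on unRAID, as far as I recall):

      # show numeric owner/group of the Plex appdata folder
      ls -ldn /mnt/Engineering/Docker/Plex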
  11. Nope, mine was also youtube-dl.subfolder.conf and I know I never enabled this, as I only use *.subdomain.conf. I think somehow in a previous version of the swag docker a non-sample conf must have been pushed out. Possibly even from back before this docker was renamed?

      edit: judging by the file date, this happened early July 2020.
  12. I have a very basic setup and I've just experienced this as well - all sites returning "refused to connect", nothing logged in access.log or error.log. Something broke between 1.17.0-ls70 and 1.17.0-ls71.

      For anyone else seeing this, edit the swag docker and change the repo to: linuxserver/swag:1.17.0-ls70

      edit: Don't do the above, instead rename:

      swag/nginx/proxy-confs/youtube-dl.subfolder.conf

      to this:

      swag/nginx/proxy-confs/youtube-dl.subfolder.conf-notused

      (unless you do actually use this config file, in which case remove the line containing: proxy_redirect off;)
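      From the unRAID shell that rename is just the following - the path assumes the default appdata location and a container named "swag", so adjust to match your setup:

      # disable the broken proxy conf, then restart swag to pick up the change
      mv /mnt/user/appdata/swag/nginx/proxy-confs/youtube-dl.subfolder.conf \
         /mnt/user/appdata/swag/nginx/proxy-confs/youtube-dl.subfolder.conf-notused
      docker restart swag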
  13. That did it - thank you! For anyone else with this issue, these lines were added:

      <boot order='1'/>
      <alias name='usbboot'/>

      You'll want to check any other instances of the "boot order" setting in the XML and make everything else something other than "1".
  14. I forked a script called borgsnap a while ago to add some needed features for Unraid and my use-case. This allows you to create automated incremental-forever backups using ZFS snapshots to a local and/or remote borgbackup repository. I've posted a guide here. This includes pre/post snapshot scripts so you can automate shutting down VMs briefly while the snapshot is taken.
  15. This is a guide for users of the ZFS Plugin who want to back up their ZFS pools using borgbackup. Pre/post scripts can be used to briefly shut down dockers/VMs while snapshots are taken. The backup can then proceed using the ZFS snapshot while the dockers/VMs are running. (Personally I haven't bothered with this, a crash-consistent snapshot backup is good enough for me.)

      Use case

      I use sanoid/syncoid but also wanted an encrypted copy of my non-encrypted zfs pool backed up incrementally off-site. I came across borgsnap, which was further improved here. borgsnap wasn't suitable for large numbers of nested datasets, so I've created a fork that adds recursive ZFS snapshots, some modifications for Unraid and other features here: https://github.com/jortan/borgsnap/blob/master/borgsnap

      borgsnap can back up to a local destination, a remote destination (via borg over SSH) or both. It will:

      - Read configuration file and encryption key file
      - Validate output directory exists and a few other basics
      - For each ZFS filesystem configured:
        Initialize borg repository if it doesn't exist
        Take a ZFS snapshot of the filesystem (recursively if enabled)
        Run borg for the local output if configured
        Run borg for the rsync.net output if configured
        Delete old ZFS snapshots (recursively if enabled)
        Prune local and remote borg backups if configured and needed

      Disclaimer

      I barely speak bash and have modified a script written by someone who does. Use at your own risk!

      Remove existing borgbackup from NerdPack

      The "borgbackup" package in NerdPack has broken a couple of times (including as of now). I recommend uninstalling borgbackup from NerdPack if you've installed this previously. The instructions below include downloading a linux binary of borgbackup from here - as far as I can tell it works fine, though I'm not sure if it's relying on python / other packages I have installed via NerdPack. Please post any requirements you come across and I'll update this post.

      Create location for borg repository

      You can store the borgsnap repository on an array share, unassigned disk, remote mount, or another ZFS pool. You just need an empty folder somewhere. You don't need to create a borg repository, borgsnap will handle that for you.

      Create single disk ZFS pool for LOCAL borgsnap backup (optional)

      wipefs -a /dev/sdxx
      zpool create -o ashift=12 -m /mnt/borgpool borgpool /dev/sdxx
      zfs create borgpool/borgrepo

      Configure ZFS tunables to taste - borgbackup is going to compress backups, so no need to compress this pool/dataset.

      zfs set compression=zle atime=off recordsize=1m xattr=sa borgpool

      Download borgbackup/borgsnap

      Check for the latest 1.x release here, copy the URL for the "borg-linux64" binary.
      Download the borgbackup binary:

      mkdir /boot/config/borgsnap
      wget -O /boot/config/borgsnap/borg-linux64 https://github.com/borgbackup/borg/releases/download/1.1.16/borg-linux64

      Download the borgsnap scripts to unRAID:

      wget -P /boot/config/borgsnap/ 'https://github.com/jortan/borgsnap/raw/master/borgsnap'
      wget -P /boot/config/borgsnap/ 'https://github.com/jortan/borgsnap/raw/master/borgwrapper'

      Install binaries/scripts

      cp /boot/config/borgsnap/borg-linux64 /usr/local/sbin/borg
      cp /boot/config/borgsnap/borgsnap /usr/local/sbin/
      cp /boot/config/borgsnap/borgwrapper /usr/local/sbin/
      chmod +x /usr/local/sbin/borg
      chmod +x /usr/local/sbin/borgsnap
      chmod +x /usr/local/sbin/borgwrapper
      echo "alias borgwrap='/usr/local/sbin/borgwrapper /boot/config/borgsnap/borgsnap.conf'">>/etc/profile

      Install borgbackup/borgsnap on startup

      Add these commands to /boot/config/go:

      # borgbackup / borgsnap setup #
      cp /boot/config/borgsnap/borg-linux64 /usr/local/sbin/borg
      cp /boot/config/borgsnap/borgsnap /usr/local/sbin/
      cp /boot/config/borgsnap/borgwrapper /usr/local/sbin/
      chmod +x /usr/local/sbin/borg
      chmod +x /usr/local/sbin/borgsnap
      chmod +x /usr/local/sbin/borgwrapper
      echo "alias borgwrap='/usr/local/sbin/borgwrapper /boot/config/borgsnap/borgsnap.conf'">>/etc/profile

      Create a borg passphrase file

      IMPORTANT: borgbackup stores the decryption key with the backups (inside the "config" file) but requires a passphrase to decrypt the key. You will not be able to restore your borg backups if you lose access to the passphrase.

      Enter a long random passphrase here: /boot/config/borgsnap/passphrase.key

      Or generate a passphrase:

      cat /dev/urandom | tr -dc 'a-zA-Z0-9' | head -c 128 >/boot/config/borgsnap/passphrase.key

      Store a copy of this somewhere accessible in a DR scenario! No really, do it now!

      Create borgsnap config file

      Note: I have only tested this with first-level zfs datasets (i.e. poolname/dataset)

      /boot/config/borgsnap/borgsnap.conf

      FS="pool/appdata pool/data pool/vm"
      BASEDIR="/mnt/user/appdata/borgsnap"
      LOCAL="/mnt/borgpool/borgrepo"
      LOCAL_READABLE_BY_OTHERS=false
      LOCALSKIP=false
      RECURSIVE=true
      COMPRESS=zstd
      CACHEMODE="mtime,size"
      REMOTE=""
      REMOTE_BORG_COMMAND=""
      PASS="/boot/config/borgsnap/passphrase.key"
      MONTH_KEEP=1
      WEEK_KEEP=4
      DAY_KEEP=7
      PRE_SCRIPT=""
      POST_SCRIPT=""

      IMPORTANT: All options must be present in the borgsnap.conf file, even if they have no value.

      BASEDIR is required for Unraid. This may take a few gigabytes, so put it anywhere persistent. It's not required for a restore, but you don't want to lose borgbackup's cache every reboot.

      If you don't specify a "LOCAL" path, borgsnap will create a repository inside the dataset being backed up. You can skip doing LOCAL backups entirely and only create a REMOTE backup by setting LOCALSKIP=true

      If you want to create borg backups on rsync.net:

      REMOTE="[email protected]:myhost"

      In my case I installed borg on a remote Ubuntu machine and configured an ssh key to access the remote server. From unraid:

      ssh-keygen
      ssh-copy-id -i /root/.ssh/id_rsa.pub [email protected]

      You can then specify the remote backup destination like this:

      REMOTE="[email protected]:/remote/path/borgrepo"

      By default, the remote borg command is configured for rsync.net - if using your own borg installation, you will probably need to set:

      REMOTE_BORG_COMMAND="borg"

      Run borgsnap!

      You can run borgsnap manually with this command:

      /usr/local/sbin/borgsnap run /boot/config/borgsnap/borgsnap.conf

      Note that if borgsnap is interrupted, it won't run again properly until the next day.
      You may want to add this to User Scripts and use "Run in background" so that closing your shell won't stop borgsnap.

      If something didn't work and you want to try again, you can use the "tidy" command - this will attempt to unmount borgsnap's ZFS snapshots and remove any local/remote backups for the current day. This isn't perfect but helps when initially trying to configure borgsnap:

      /usr/local/sbin/borgsnap tidy /boot/config/borgsnap/borgsnap.conf

      Pre/Post Scripts

      You can nominate a script to run before taking a ZFS snapshot and another to run after. Specify the full path to the script:

      PRE_SCRIPT="/mnt/user/appdata/borgsnap/prescript.sh"
      POST_SCRIPT="/mnt/user/appdata/borgsnap/postscript.sh"

      Note: You can't make files executable in /boot/config, so these need to go somewhere else persistent. Keep in mind these scripts are going to be run as root, so you may want to set appropriate permissions with something like:

      chmod 700 /mnt/user/appdata/borgsnap/prescript.sh
      chmod 700 /mnt/user/appdata/borgsnap/postscript.sh

      The same script is run for every FS (pool/dataset) configured in borgsnap.conf. The FS name is passed to the scripts as $1, so you can use this to run commands specific to a pool/dataset. Example:

      /mnt/user/appdata/borgsnap/prescript.sh

      #!/bin/bash
      if [[ "$1" = "pool/appdata" ]]; then
        echo pool/appdata - Stopping all dockers
        docker stop $(docker ps -aq)
        sleep 10
      fi

      /mnt/user/appdata/borgsnap/postscript.sh

      #!/bin/bash
      if [[ "$1" = "pool/appdata" ]]; then
        echo pool/appdata - Starting all dockers
        docker start $(docker ps -aq)
        sleep 10
      fi

      Note: This starts all containers, not just those configured to auto-start in the docker service. There's presumably a better command for this.

      User Scripts

      borgsnap-run

      Used to run borgsnap on a schedule.

      /usr/local/sbin/borgsnap run /boot/config/borgsnap/borgsnap.conf

      borgsnap-local-backup-summary

      Inside the LOCAL or REMOTE repo folder, borgsnap creates:

      - A folder for each zfs pool being backed up
      - A sub-folder for each dataset being backed up

      Each of those sub-folders contains a borg repository. This script will parse the LOCAL folder and give a summary of borg backups and repo sizes:

      #!/bin/bash
      source /boot/config/borgsnap/borgsnap.conf
      echo $'\n Summary of all borgsnap backups\n'
      echo " Source datasets: $FS"
      echo $''
      echo " Backup repository: $LOCAL"
      echo $''
      echo " Archives sizes: Original size, Compressed size, Deduplicated size"
      echo $''
      for z in $LOCAL/*; do
        for f in $z/*; do
          if [ -d "$f" ]; then
            echo $'-------------------------------------------------------\n'
            /usr/local/sbin/borgwrapper /boot/config/borgsnap/borgsnap.conf list "$f" | cut -d" " -f1 | xargs -d "\n" -L1 echo $f:: | tr -d ' '
            echo $''
            /usr/local/sbin/borgwrapper /boot/config/borgsnap/borgsnap.conf info "$f" | awk 'NR==8'
            echo $''
          fi
        done
      done

      Example output:

      Summary of all borgsnap backups

      Source datasets: pool/appdata pool/vm

      Backup repository: /mnt/borgpool

      Archives sizes: Original size, Compressed size, Deduplicated size

      -------------------------------------------------------

      /mnt/borgpool/pool/appdata::month-20210328
      /mnt/borgpool/pool/appdata::week-20210328
      /mnt/borgpool/pool/appdata::day-20210329

      All archives: 7.63 MB 134.86 kB 46.04 kB

      -------------------------------------------------------

      /mnt/borgpool/pool/vm::month-20210328
      /mnt/borgpool/pool/vm::week-20210328
      /mnt/borgpool/pool/vm::day-20210329

      All archives: 209.32 GB 105.99 GB 27.76 GB

      I'll add a REMOTE version of this script at some point.
      Run borgbackup commands interactively

      You can use "borgwrapper" to run arbitrary borg commands using the keyfile passphrase in borgsnap.conf:

      borgwrapper /boot/config/borgsnap/borgsnap.conf <borg commands>

      We added an alias in /boot/config/go to make this even easier to run (restart your shell if this doesn't work yet):

      borgwrap <borg commands>

      Example commands:

      List the contents of a borg repository:

      borgwrap list /mnt/borgpool/pool/appdata
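      To actually restore something, borg's extract command unpacks into the current directory, so something like the following should work. The archive name is taken from the example output above and the restore path is just a scratch location I made up - use whatever suits you:

      # pick an archive from "borgwrap list", then extract it into a scratch directory
      mkdir -p /mnt/user/restore-test
      cd /mnt/user/restore-test
      borgwrap extract /mnt/borgpool/pool/appdata::day-20210329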
  16. Not a big deal, but I figured it might be a one-liner to order these so sdaa comes after sdz. This becomes more of an issue on another system where I have a lot of "unassigned devices" that are used in ZFS pools. Disks plugged into that system drop into the middle of a long list of unassigned devices instead of the bottom of the list.
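      For what it's worth, the ordering itself is easy enough to produce in the shell - sort by name length first, then alphabetically. This is just a sketch of the logic, not a patch for the plugin:

      # list sd* devices so sdz sorts before sdaa
      ls /dev | grep -E '^sd[a-z]+$' | awk '{ print length, $0 }' | sort -n | cut -d' ' -f2-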
  17. Very minor thing, but on systems with many disks, unassigned devices will show disks "out of order" when sdz rolls over to sdaa, sdab, etc. Is there some logic that could be added to show disks in the "correct" order?
  18. This should be all you need:

      #!/bin/bash
      docker restart binhex-nzbget

      To restart this at 2am each night, set a custom schedule with:

      0 2 * * *

      https://crontab.guru/#0_2_*_*_*
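      If your container isn't named binhex-nzbget, you can check the exact name to use with:

      # list the names of all containers (running or not)
      docker ps -a --format '{{.Names}}'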
  19. Nice, that will probably fix @Deep Insights' issue (as he's passing through a block device in XML) but not mine (as I'm passing through a USB controller). I'll do some playing around and see if I can get this to work though.
  20. I was able to boot the VM with the alternative OVMF code/vars files, but the problem remains. Very frustrating!
  21. I've recreated the VM multiple times, so multiple fresh vars files have shown the issue. Perhaps an alternative OVMF version might help - though messing around with the OVMF paths in my VM XML is what stopped me from starting the unraid VM service yesterday.
  22. What I mean is that we're talking about devices passed through to the guest. In my case it's an entire USB controller, so the boot device isn't listed at all in the XML file. In that sense QEMU isn't aware of the boot device when it's starting the VM (even though the guest definitely sees it). Maybe the OVMF settings are being ignored/reset for that reason? I will try to post my XML soon, but I've messed up my libvirt service by poking at things and can't start it currently - will try restarting unraid later when it's not being relied on.
  23. Appreciate the feedback - but I had already tried configuring EFI boot using bcfg. Still not persistent. Seems like this is related to passthrough specifically - perhaps because qemu can't see the device when the machine is starting?
  24. I'm also having this problem. In my case I'm passing through a PCIE usb controller with a separate unraid USB drive. I can boot from this if I select it manually. I can change the boot order and save. "Continue" will boot correctly, but upon restarting the VM it boots back to the EFI shell. I can see that the nvram file is being updated at /etc/libvirt/qemu/nvram, so it seems to be saving the settings - but why is it not applying them? Has anyone else found a solution to this?