jortan

Members
  • Content Count

    205
  • Joined

  • Last visited

  • Days Won

    1

jortan last won the day on February 12

jortan had the most liked content!

Community Reputation

29 Good

About jortan

  • Rank
    Member

Converted

  • Gender
    Undisclosed

  1. Consider using the SWAG docker for this. It gives you a single place to configure the LetsEncrypt certificate and a single port to forward, and it can act as a front-end to multiple non-SSL backend web servers (see the sketch after this list). edit: just saw this - "behind cloudflare and a reverse proxy" - if you have SSL on the reverse proxy, then it's not really necessary to have SSL enabled on Sonarr? Or is the reverse proxy outside your network?
  2. "545G scanned at 4.30M/s, 58.6G issued at 473K/s, 869G total" - this should give you some idea: 869G is allocated in the array, 545G has been scanned, and 58.6G has been written to the replacement disk so far. "Hopefully this doesn't confuse the resilvering" - it won't cause any problems, but it will slow the resilvering process down. There are some ZFS tunables you can modify to change the I/O priority (see the sketch after this list), but the safest thing is probably just to let it complete. Consider turning off any high-I/O VMs/dockers that you don't need to have running.
  3. It's because ZFS pools might not import on startup if the device locations have changed: https://openzfs.github.io/openzfs-docs/Project and Community/FAQ.html#selecting-dev-names-when-creating-a-pool-linux My not having had any issues with this might be down to the fact that unRAID doesn't keep a persistent zpool.cache (as far as I know). To each their own! (A by-id import is sketched after this list.)
  4. zpool replace poolname origdrive newdrive - just to clarify, "origdrive" refers to whatever identifier ZFS currently has for the failed disk. So yes, this is 3739555303482842933 (a ZFS id; apparently the drive located there has failed to the point where it wasn't assigned a /dev/sdX device). So the command should be: zpool replace MFS2 3739555303482842933 sdi (see the sketch after this list). As long as you understand that this is how you refer to drives when replacing disks with zpool, there's not much chance of replacing the wrong drive. I understand that's a...
  5. This person ran into the same issue quite recently on the ARM version of Plex: https://forums.plex.tv/t/failed-to-run-packege-service-i-tried-many-solutions-but-did-not-work/726127/29 In the end they used a different version of Plex to do the install. It might be worth forcing an older version of the Plex docker (see the sketch after this list)? edit: Failing that, it might be worth a post on the Plex forums with a reference to the above thread, noting that you seem to have the same issue on the x86 docker. The few issues I've had with Dockers and ZFS seem to involve applications doing direct storage calls that ZFS...
  6. I have a very similar setup to yours (nested datasets for appdata and then individual dockers) and I've never run into this issue. I just checked and mine is using read/write - I'm not aware of that causing any issues either. Are there any existing files in the Plex appdata folder that you've copied from elsewhere? Could it be permissions related? chown -R nobody:users /mnt/Engineering/Docker/Plex - do you get the same issue with an empty /mnt/Engineering/Docker/Plex/ folder owned by nobody? (A test sequence is sketched after this list.)
  7. Nope, mine was also youtube-dl.subfolder.conf, and I know I never enabled this, as I only use *.subdomain.conf. I think a previous version of the swag docker must somehow have pushed out a non-sample conf - possibly even from back before this docker was renamed? edit: judging by the file date, this happened in early July 2020 (see the sketch after this list).
  8. I have a very basic setup and I've just experienced this as well - all sites returning "refused to connect", with nothing logged in access.log or error.log. Something broke between 1.17.0-ls70 and 1.17.0-ls71. For anyone else seeing this, edit the swag docker and change the repo to: linuxserver/swag:1.17.0-ls70 edit: Don't do the above; instead rename swag/nginx/proxy-confs/youtube-dl.subfolder.conf to swag/nginx/proxy-confs/youtube-dl.subfolder.conf-notused (see the sketch after this list), unless you do actually use this config...
  9. That did it - thank you! For anyone else with this issue, these lines were added: <boot order='1'/> <alias name='usbboot'/> You'll want to check any other instances of the "boot order" setting in the XML and set everything else to something other than "1" (see the sketch after this list).
  10. I forked a script called borgsnap a while ago to add some features needed for Unraid and my use case. It lets you create automated, incremental-forever backups using ZFS snapshots to a local and/or remote borgbackup repository (the underlying idea is sketched after this list). I've posted a guide here. It includes pre/post snapshot scripts so you can automate briefly shutting down VMs while the snapshot is taken.
  11. Not a big deal, but I figured it might be a one-liner to order these so sdaa comes after sdz (a possible sort is sketched after this list). It becomes more of an issue on another system where I have a lot of "unassigned devices" that are used in ZFS pools. Disks plugged into that system drop into the middle of a long list of unassigned devices instead of at the bottom of the list.
  12. Very minor thing, but on systems with many disks, unassigned devices will show disks "out of order" when sdz rolls over to sdaa, sdab, etc. Is there some logic that could be added to show disks in the "correct" order?
  13. This should be all you need:
        #!/bin/bash
        docker restart binhex-nzbget
      To restart it at 2am each night, set a custom schedule of 0 2 * * * (https://crontab.guru/#0_2_*_*_*).
  14. Nice, that will probably fix @Deep Insights' issue (he's passing through a block device in XML) but not mine (I'm passing through a USB controller). I'll do some playing around and see if I can get this to work, though.
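
Code sketches referenced above

For the SWAG suggestion in item 1: a minimal sketch of enabling one of the bundled reverse-proxy configs, assuming the default Unraid appdata path; the Sonarr example and the paths are illustrative, not taken from the post.

    # enable the bundled Sonarr proxy config by copying the shipped sample
    cp /mnt/user/appdata/swag/nginx/proxy-confs/sonarr.subdomain.conf.sample \
       /mnt/user/appdata/swag/nginx/proxy-confs/sonarr.subdomain.conf
    # restart the container so nginx picks up the new config
    docker restart swag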
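
For the resilver in item 2: a hedged sketch of watching progress plus one commonly referenced tunable; the pool name and value are placeholders, and the parameter path assumes a current OpenZFS module, so check your version before changing it.

    zpool status -v poolname   # shows the scanned / issued / total figures and an ETA
    # optionally let resilver I/O use more time per txg (default is 3000 ms); value is illustrative
    echo 5000 > /sys/module/zfs/parameters/zfs_resilver_min_time_ms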
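
On item 3: a sketch of re-importing a pool by persistent identifiers instead of /dev/sdX names, along the lines of the linked OpenZFS FAQ; the pool name is a placeholder.

    zpool export poolname
    zpool import -d /dev/disk/by-id poolname   # import using stable by-id device paths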
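
For the disk replacement in item 4: the command from the post, wrapped in a before/after status check; the identifiers come from the post, and zpool also accepts the /dev/-prefixed form of the new device.

    zpool status MFS2                            # confirm the failed disk shows the numeric ZFS id
    zpool replace MFS2 3739555303482842933 sdi   # start resilvering onto the new disk
    zpool status MFS2                            # watch the resilver progress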
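
On item 5: forcing an older version means pinning the docker's Repository field to a specific tag; the image name and tag below are placeholders, since the post doesn't name a particular build, so substitute a real tag from the image's release list.

    # placeholder image:tag - replace with your Plex image and an older tag you trust
    docker pull linuxserver/plex:SOME_OLDER_TAG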
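
For the permissions check in item 6: a short test sequence using the path from the post, run before starting the container.

    mkdir -p /mnt/Engineering/Docker/Plex
    chown -R nobody:users /mnt/Engineering/Docker/Plex
    ls -ld /mnt/Engineering/Docker/Plex   # should now show nobody:users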
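
On item 7: a quick way to spot enabled (non-sample) proxy confs and their file dates; the appdata path is an assumption based on a default Unraid setup.

    ls -l /mnt/user/appdata/swag/nginx/proxy-confs/ | grep -v '\.sample'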
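
For the fix in item 8: the rename expressed as shell commands; the appdata prefix is an assumption, the relative paths come from the post.

    cd /mnt/user/appdata/swag/nginx/proxy-confs
    mv youtube-dl.subfolder.conf youtube-dl.subfolder.conf-notused
    docker restart swag   # reload nginx with the conf disabled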
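
On item 9: a hedged libvirt XML sketch showing where those two lines sit inside a USB hostdev; the vendor and product ids are placeholders, not from the post. Any other <boot order='...'/> entries in the same VM XML would then be set to '2' or higher.

    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x0781'/>   <!-- placeholder vendor id -->
        <product id='0x5583'/>  <!-- placeholder product id -->
      </source>
      <boot order='1'/>
      <alias name='usbboot'/>
    </hostdev>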
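
On item 10: not the borgsnap script itself, just a rough sketch of the underlying approach - snapshot a dataset, back up the snapshot's hidden .zfs directory with borg, then drop the snapshot; the pool, dataset, and repository paths are placeholders.

    SNAP=backup-$(date +%Y%m%d)
    zfs snapshot pool/appdata@$SNAP
    borg create /mnt/backupdisk/borgrepo::appdata-$SNAP \
        /mnt/pool/appdata/.zfs/snapshot/$SNAP
    zfs destroy pool/appdata@$SNAP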
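
On items 11 and 12: one way to get sdz ahead of sdaa is to sort device names by length first and then alphabetically; a hedged one-liner illustrating the idea (not the Unassigned Devices plugin's actual code).

    ls /dev | grep -E '^sd[a-z]+$' | awk '{ print length, $0 }' | sort -n | cut -d' ' -f2-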