
ConnectivIT

Members
  • Content Count

    113
  • Joined

  • Last visited

Community Reputation

3 Neutral

About ConnectivIT

  • Rank
    Advanced Member


  1. I'm on unRAID 6.8.3 but the plugin still shows as version 0.8.2, though that is explained by the plugin notes: "2020.01.09 Rewrote the plugin so it does not need to be updated every time unRAID is upgraded. It checks if there is already a new build available and installs that." I rebooted unRAID today and "zfs version" returns zfs-0.8.3-1. I was hoping to get persistent L2ARC, which has apparently been merged into OpenZFS (https://github.com/openzfs/zfs/pull/9582), though it isn't mentioned in the recent OpenZFS changelogs? PS: A big thank you for getting ZFS into unRAID and for the fantastic primer in the first post. Having per-VM and per-docker snapshots has already saved my bacon.
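     A minimal sketch of how to check what the plugin actually installed and whether the persistent L2ARC work is present in that build (the l2arc_rebuild_enabled tunable only appears in builds that include that PR, so on 0.8.3 the second check is expected to report it missing):

        # Kernel module version that is actually loaded
        cat /sys/module/zfs/version

        # Persistent L2ARC exposes this tunable when the feature is compiled in
        ls /sys/module/zfs/parameters/l2arc_rebuild_enabled 2>/dev/null \
          || echo "persistent L2ARC not present in this build"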
  2. Do you want to use external storage as a backup destination? If so, get the external storage mounted in unRAID first, then configure "Set backup location:" in the VM Backup plugin accordingly. Note the caveat that you'll need to type the path manually or disable restrictive validation. Edit: I haven't actually done this myself, but I can't see why it wouldn't work.
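     As a rough illustration only (the mount point and folder names below are made up; Unassigned Devices mounts appear under /mnt/disks/):

        # Example: external drive mounted by Unassigned Devices at /mnt/disks/usb_backup
        mkdir -p /mnt/disks/usb_backup/vmbackup
        # then enter /mnt/disks/usb_backup/vmbackup as "Set backup location:" in the plugin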
  3. Really appreciate this script/plugin. I seem to get random snapshot failures - usually only a single VM out of many, and not always the same one. Around 50% of backup runs produce no failures at all. Given that, I suspect that if the script retried the snapshot after a short delay, it would probably succeed. I'm not sure whether this is related to my using ZFS for VM storage.

     2020-05-21 06:02:14 information: able to perform snapshot for disk /mnt/zfspool/vm/vmname/vdisk1.qcow2 on vmname. use_snapshots is 1. vm_state is running. vdisk_type is qcow2
     2020-05-21 06:02:15 failure: snapshot command failed on vdisk1.snap for vmname.
     2020-05-21 06:02:16 warning: snapshot_fallback is 1. attempting backup for vmname using fallback method.

     Would it be possible to add some options to retry taking the snapshot before going to the fallback method (see the sketch after this post)?
       • Retry snapshots - yes/no
       • Number of times to retry - integer
       • Number of seconds between retries - integer
     Or alternatively, make retrying snapshots the default behaviour in the script.

     PS: I've had a few instances where the script left a VM turned off and unable to start. First example:
       • The log indicated that the vdisk1 snapshot failed.
       • The VM wouldn't turn on because the VM XML for vdisk1, 2 and 3 was still pointing to .snap files.
       • vdisk1 had an orphaned .snap file (even though it was logged as failing); the .snap files for vdisk2 and vdisk3 had already been removed.
       • I ended up deleting the .snap file for vdisk1 and fixing the XML to point to the .qcow2 files for all three vdisks.
       • The VM started up fine (I probably lost changes to vdisk1 between the failed snapshot and the shutdown, but I wasn't too concerned about that).
     I couldn't see anything in the logs that indicated what went wrong other than the vdisk1 snapshot failure, and I did end up with successful (fallback) backups of all three vdisks. Log for that VM on that backup run:

     2020-05-20 05:31:01 information: vmname can be found on the system. attempting backup.
     2020-05-20 05:31:01 information: creating local vmname.xml to work with during backup.
     2020-05-20 05:31:01 information: /mnt/disks/localbackup/vm/vmname exists. continuing.
     2020-05-20 05:31:01 information: skip_vm_shutdown is false and use_snapshots is 1. skipping vm shutdown procedure. vmname is running. can_backup_vm set to y.
     2020-05-20 05:31:01 information: actually_copy_files is 1.
     2020-05-20 05:31:01 information: can_backup_vm flag is y. starting backup of vmname configuration, nvram, and vdisk(s).
     2020-05-20 05:31:01 information: copy of vmname.xml to /mnt/disks/localbackup/vm/vmname/20200520_0500_vmname.xml complete.
     2020-05-20 05:31:01 information: copy of /etc/libvirt/qemu/nvram/a65cdc4d-0bcb-ef2f-0cd4-21e5bda55dfd_VARS-pure-efi.fd to /mnt/disks/localbackup/vm/vmname/20200520_0500_a65cdc4d-0bcb-ef2f-0cd4-21e5bda55dfd_VARS-pure-efi.fd complete.
     2020-05-20 05:31:01 information: able to perform snapshot for disk /mnt/zfspool/vm/vmname/vdisk1.qcow2 on vmname. use_snapshots is 1. vm_state is running. vdisk_type is qcow2
     2020-05-20 05:31:01 information: qemu agent found. enabling quiesce on snapshot.
     2020-05-20 05:31:18 failure: snapshot command failed on vdisk1.snap for vmname.
     2020-05-20 05:31:18 warning: snapshot_fallback is 1. attempting backup for vmname using fallback method.
     2020-05-20 05:31:18 information: skip_vm_shutdown is false. beginning vm shutdown procedure.
     2020-05-20 05:31:18 infomration: vmname is running. vm desired state is shut off.
     2020-05-20 05:31:19 information: performing 20 30 second cycles waiting for vmname to shutdown cleanly.
     2020-05-20 05:31:19 information: cycle 1 of 20: waiting 30 seconds before checking if the vm has entered the desired state.
     2020-05-20 05:31:49 information: vmname is shut off. vm desired state is shut off. can_backup_vm set to y.
     2020-05-20 05:37:38 information: copy of /mnt/zfspool/vm/vmname/vdisk1.qcow2 to /mnt/disks/localbackup/vm/vmname/20200520_0500_vdisk1.qcow2.zst complete.
     2020-05-20 05:37:38 information: backup of /mnt/zfspool/vm/vmname/vdisk1.qcow2 vdisk to /mnt/disks/localbackup/vm/vmname/20200520_0500_vdisk1.qcow2.zst complete.
     2020-05-20 05:37:38 information: able to perform snapshot for disk /mnt/zfspool/vm/vmname/vdisk2.qcow2 on vmname. use_snapshots is 1. vm_state is running. vdisk_type is qcow2
     2020-05-20 05:37:38 information: qemu agent not found. disabling quiesce on snapshot.
     2020-05-20 05:37:38 information: snapshot command succeeded on vdisk2.snap for vmname.
     2020-05-20 05:38:48 information: copy of /mnt/zfspool/vm/vmname/vdisk2.qcow2 to /mnt/disks/localbackup/vm/vmname/20200520_0500_vdisk2.qcow2.zst complete.
     2020-05-20 05:39:03 information: backup of /mnt/zfspool/vm/vmname/vdisk2.qcow2 vdisk to /mnt/disks/localbackup/vm/vmname/20200520_0500_vdisk2.qcow2.zst complete.
     2020-05-20 05:39:08 information: commited changes from snapshot for /mnt/zfspool/vm/vmname/vdisk2.qcow2 on vmname.
     2020-05-20 05:39:08 information: forcibly removed snapshot /mnt/zfspool/vm/vmname/vdisk2.snap for vmname.
     2020-05-20 05:39:08 information: able to perform snapshot for disk /mnt/zfspool/vm/vmname/vdisk3.qcow2 on vmname. use_snapshots is 1. vm_state is running. vdisk_type is qcow2
     2020-05-20 05:39:09 information: qemu agent not found. disabling quiesce on snapshot.
     2020-05-20 05:39:09 information: snapshot command succeeded on vdisk3.snap for vmname.
     2020-05-20 05:47:56 information: copy of /mnt/zfspool/vm/vmname/vdisk3.qcow2 to /mnt/disks/localbackup/vm/vmname/20200520_0500_vdisk3.qcow2.zst complete.
     2020-05-20 05:47:56 information: backup of /mnt/zfspool/vm/vmname/vdisk3.qcow2 vdisk to /mnt/disks/localbackup/vm/vmname/20200520_0500_vdisk3.qcow2.zst complete.
     2020-05-20 05:48:01 information: commited changes from snapshot for /mnt/zfspool/vm/vmname/vdisk3.qcow2 on vmname.
     2020-05-20 05:48:01 information: forcibly removed snapshot /mnt/zfspool/vm/vmname/vdisk3.snap for vmname.
     2020-05-20 05:48:01 information: extension for /mnt/user/isos/Windows Server 2019/en_windows_server_2019_x64_dvd_3c2cf1202.iso on vmname was found in vdisks_extensions_to_skip. skipping disk.
     2020-05-20 05:48:01 information: extension for /mnt/user/isos/virtio-win-0.1.173-2.iso on vmname was found in vdisks_extensions_to_skip. skipping disk.
     2020-05-20 05:48:01 information: the extensions of the vdisks that were backed up are qcow2.
     2020-05-20 05:48:01 information: vm_state is shut off. vm_original_state is running. starting vmname.
     2020-05-20 05:48:01 information: backup of vmname to /mnt/disks/localbackup/vm/vmname completed.
     2020-05-20 05:48:01 information: number of days to keep backups set to indefinitely.
     2020-05-20 05:48:01 information: cleaning out backups over 3 in location /mnt/disks/localbackup/vm/vmname/
     2020-05-20 05:48:01 information: removed '/mnt/disks/localbackup/vm/vmname/20200517_0500_vmname.xml' config file.
     2020-05-20 05:48:01 information: removed '/mnt/disks/localbackup/vm/vmname/20200517_0500_a65cdc4d-0bcb-ef2f-0cd4-21e5bda55dfd_VARS-pure-efi.fd' nvram file.
     2020-05-20 05:49:27 information: removed '/mnt/disks/localbackup/vm/vmname/20200517_0500_vdisk3.qcow2.zst' vdisk image file.
     2020-05-20 05:49:27 information: removed '/mnt/disks/localbackup/vm/vmname/20200517_0500_vdisk2.qcow2.zst' vdisk image file.
     2020-05-20 05:49:27 information: removed '/mnt/disks/localbackup/vm/vmname/20200517_0500_vdisk1.qcow2.zst' vdisk image file.
     2020-05-20 05:49:27 information: did not find any vm log files to remove.
     2020-05-20 05:49:27 information: removing local vmname.xml.

     On another occasion, with a two-vdisk VM:
       • The vdisk1 snapshot failed.
       • The VM was backed up using the fallback method.
       • No orphaned snapshot files were left.
       • The VM XML for vdisk2 was left pointing to a .snap file, so the VM failed to start.
     I simply updated the XML and the VM started up fine.
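     The retry sketch referenced above - a rough illustration of the requested behaviour, not code from the actual script. The option names (snapshot_retries, snapshot_retry_delay) are made up, and the disk target "hdc" is a placeholder; the script's real snapshot command may differ:

        vm="vmname"
        snapshot_retries=3        # hypothetical "number of times to retry"
        snapshot_retry_delay=10   # hypothetical "seconds between retries"
        attempt=1

        # keep retrying the external, disk-only snapshot until it succeeds or we give up
        until virsh snapshot-create-as "$vm" "${vm}-backup" \
              --diskspec "hdc,file=/mnt/zfspool/vm/${vm}/vdisk1.snap" \
              --disk-only --atomic --quiesce
        do
          if [ "$attempt" -ge "$snapshot_retries" ]; then
            echo "snapshot still failing after ${snapshot_retries} attempts - falling back"
            break
          fi
          attempt=$((attempt + 1))
          sleep "$snapshot_retry_delay"
        done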
  4. Not what you want to hear, but: dedicated hardware for pfSense? I'm all for the consolidation that unRAID brings, but you want your network/internet to "just work", and ideally not to go offline just because you have to bring down your unRAID array for some reason.
  5. Really appreciate this script/plugin. [snapshot failures] I've moved this post to the plugin thread as that's what I'm using:
  6. I was experiencing terrible speeds using Syncthing in Docker on unRAID... until I pinned the docker to only a couple of CPU cores (switch Basic View to Advanced in the docker's "edit" screen). I'm now seeing bursts of 20-30 MB/s where previously I never saw anything higher than 2-3 MB/s. I can only assume this was a CPU scheduling issue (i.e. is the Syncthing docker waiting for all cores to be available for some reason?). Edit: I may have spoken too soon - occasionally I'm seeing up to 20 MB/s, but it mostly settles back to <2 MB/s after a while. Very frustrating!
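     For anyone who prefers the command line, the UI pinning amounts to constraining the container's cpuset; something like this (container name and core numbers are just examples):

        docker update --cpuset-cpus="4,5" syncthing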
  7. I use pfSense on dedicated hardware and didn't want DNS resolution for my whole network to be reliant on the lancache docker, or on anything else running on unRAID. You can achieve this and still use lancache-bundle by overriding the DNS Resolver for specific hosts under pfSense | Services | DNS Resolver. Assuming your lancache-bundle docker is using 192.168.1.202 as in the docker example, add this to "Custom Options":

     # Configuration for arenanet
     local-data: "assetcdn.101.arenanetworks.com. A 192.168.1.202"
     local-data: "assetcdn.102.arenanetworks.com. A 192.168.1.202"
     local-data: "assetcdn.103.arenanetworks.com. A 192.168.1.202"
     local-data: "live.patcher.bladeandsoul.com. A 192.168.1.202"
     # Configuration for blizzard
     local-data: "dist.blizzard.com. A 192.168.1.202"
     local-data: "dist.blizzard.com.edgesuite.net. A 192.168.1.202"
     local-data: "llnw.blizzard.com. A 192.168.1.202"
     local-data: "edgecast.blizzard.com. A 192.168.1.202"
     local-data: "blizzard.vo.llnwd.net. A 192.168.1.202"
     local-data: "blzddist1-a.akamaihd.net. A 192.168.1.202"
     local-data: "blzddist2-a.akamaihd.net. A 192.168.1.202"
     local-data: "blzddist3-a.akamaihd.net. A 192.168.1.202"
     local-data: "blzddist4-a.akamaihd.net. A 192.168.1.202"
     local-data: "level3.blizzard.com. A 192.168.1.202"
     local-data: "nydus.battle.net. A 192.168.1.202"
     local-data: "edge.blizzard.top.comcast.net. A 192.168.1.202"
     local-data: "cdn.blizzard.com. A 192.168.1.202"
     local-zone: "cdn.blizzard.com." redirect
     local-data: "cdn.blizzard.com. A 192.168.1.202"
     # Configuration for bsg
     local-data: "cdn-11.eft-store.com. A 192.168.1.202"
     local-data: "cl-453343cd.gcdn.co. A 192.168.1.202"
     # Configuration for cityofheroes
     local-data: "cdn.homecomingservers.com. A 192.168.1.202"
     local-data: "nsa.tools. A 192.168.1.202"
     # Configuration for daybreak
     local-data: "pls.patch.daybreakgames.com. A 192.168.1.202"
     # Configuration for epicgames
     local-data: "epicgames-download1.akamaized.net. A 192.168.1.202"
     local-data: "download.epicgames.com. A 192.168.1.202"
     local-data: "download2.epicgames.com. A 192.168.1.202"
     local-data: "download3.epicgames.com. A 192.168.1.202"
     local-data: "download4.epicgames.com. A 192.168.1.202"
     # Configuration for frontier
     local-data: "cdn.zaonce.net. A 192.168.1.202"
     # Configuration for hirez
     local-data: "hirez.http.internapcdn.net. A 192.168.1.202"
     # Configuration for neverwinter
     local-data: "level3.nwhttppatch.crypticstudios.com. A 192.168.1.202"
     # Configuration for nexusmods
     local-data: "filedelivery.nexusmods.com. A 192.168.1.202"
     # Configuration for nintendo
     local-data: "ccs.cdn.wup.shop.nintendo.com. A 192.168.1.202"
     local-data: "ccs.cdn.wup.shop.nintendo.net.edgesuite.net. A 192.168.1.202"
     local-data: "geisha-wup.cdn.nintendo.net. A 192.168.1.202"
     local-data: "geisha-wup.cdn.nintendo.net.edgekey.net. A 192.168.1.202"
     local-data: "idbe-wup.cdn.nintendo.net. A 192.168.1.202"
     local-data: "idbe-wup.cdn.nintendo.net.edgekey.net. A 192.168.1.202"
     local-data: "ecs-lp1.hac.shop.nintendo.net. A 192.168.1.202"
     local-data: "receive-lp1.dg.srv.nintendo.net. A 192.168.1.202"
     local-zone: "wup.shop.nintendo.net." redirect
     local-data: "wup.shop.nintendo.net. A 192.168.1.202"
     local-zone: "wup.eshop.nintendo.net." redirect
     local-data: "wup.eshop.nintendo.net. A 192.168.1.202"
     local-zone: "hac.lp1.d4c.nintendo.net." redirect
     local-data: "hac.lp1.d4c.nintendo.net. A 192.168.1.202"
     local-zone: "hac.lp1.eshop.nintendo.net." redirect
     local-data: "hac.lp1.eshop.nintendo.net. A 192.168.1.202"
     # Configuration for origin
     local-data: "origin-a.akamaihd.net. A 192.168.1.202"
     local-data: "lvlt.cdn.ea.com. A 192.168.1.202"
     # Configuration for renegadex
     local-data: "rxp-lv.cncirc.net. A 192.168.1.202"
     local-data: "cronub.fairplayinc.uk. A 192.168.1.202"
     local-data: "amirror.tyrant.gg. A 192.168.1.202"
     local-data: "mirror.usa.tyrant.gg. A 192.168.1.202"
     local-data: "renx.b-cdn.net. A 192.168.1.202"
     # Configuration for riot
     local-data: "l3cdn.riotgames.com. A 192.168.1.202"
     local-data: "worldwide.l3cdn.riotgames.com. A 192.168.1.202"
     local-data: "riotgamespatcher-a.akamaihd.net. A 192.168.1.202"
     local-data: "riotgamespatcher-a.akamaihd.net.edgesuite.net. A 192.168.1.202"
     local-zone: "dyn.riotcdn.net." redirect
     local-data: "dyn.riotcdn.net. A 192.168.1.202"
     # Configuration for rockstar
     local-data: "patches.rockstargames.com. A 192.168.1.202"
     # Configuration for sony
     local-data: "gs2.ww.prod.dl.playstation.net. A 192.168.1.202"
     local-data: "gs2.sonycoment.loris-e.llnwd.net. A 192.168.1.202"
     # Configuration for steam
     local-zone: "content.steampowered.com." redirect
     local-data: "content.steampowered.com. A 192.168.1.202"
     local-data: "content1.steampowered.com. A 192.168.1.202"
     local-data: "content2.steampowered.com. A 192.168.1.202"
     local-data: "content3.steampowered.com. A 192.168.1.202"
     local-data: "content4.steampowered.com. A 192.168.1.202"
     local-data: "content5.steampowered.com. A 192.168.1.202"
     local-data: "content6.steampowered.com. A 192.168.1.202"
     local-data: "content7.steampowered.com. A 192.168.1.202"
     local-data: "content8.steampowered.com. A 192.168.1.202"
     local-data: "cs.steampowered.com. A 192.168.1.202"
     local-data: "steamcontent.com. A 192.168.1.202"
     local-data: "client-download.steampowered.com. A 192.168.1.202"
     local-zone: "hsar.steampowered.com.edgesuite.net." redirect
     local-data: "hsar.steampowered.com.edgesuite.net. A 192.168.1.202"
     local-zone: "akamai.steamstatic.com." redirect
     local-data: "akamai.steamstatic.com. A 192.168.1.202"
     local-data: "content-origin.steampowered.com. A 192.168.1.202"
     local-data: "clientconfig.akamai.steamtransparent.com. A 192.168.1.202"
     local-data: "steampipe.akamaized.net. A 192.168.1.202"
     local-data: "edgecast.steamstatic.com. A 192.168.1.202"
     local-data: "steam.apac.qtlglb.com.mwcloudcdn.com. A 192.168.1.202"
     local-zone: "cs.steampowered.com." redirect
     local-data: "cs.steampowered.com. A 192.168.1.202"
     local-zone: "cm.steampowered.com." redirect
     local-data: "cm.steampowered.com. A 192.168.1.202"
     local-zone: "edgecast.steamstatic.com." redirect
     local-data: "edgecast.steamstatic.com. A 192.168.1.202"
     local-zone: "steamcontent.com." redirect
     local-data: "steamcontent.com. A 192.168.1.202"
     local-data: "cdn1-sea1.valve.net. A 192.168.1.202"
     local-data: "cdn2-sea1.valve.net. A 192.168.1.202"
     local-zone: "steam-content-dnld-1.apac-1-cdn.cqloud.com." redirect
     local-data: "steam-content-dnld-1.apac-1-cdn.cqloud.com. A 192.168.1.202"
     local-zone: "steam-content-dnld-1.eu-c1-cdn.cqloud.com." redirect
     local-data: "steam-content-dnld-1.eu-c1-cdn.cqloud.com. A 192.168.1.202"
     local-data: "steam.apac.qtlglb.com. A 192.168.1.202"
     local-data: "edge.steam-dns.top.comcast.net. A 192.168.1.202"
     local-data: "edge.steam-dns-2.top.comcast.net. A 192.168.1.202"
     local-data: "steam.naeu.qtlglb.com. A 192.168.1.202"
     local-data: "steampipe-kr.akamaized.net. A 192.168.1.202"
     local-data: "steam.ix.asn.au. A 192.168.1.202"
     local-data: "steam.eca.qtlglb.com. A 192.168.1.202"
     local-data: "steam.cdn.on.net. A 192.168.1.202"
     local-data: "update5.dota2.wmsj.cn. A 192.168.1.202"
     local-data: "update2.dota2.wmsj.cn. A 192.168.1.202"
     local-data: "update6.dota2.wmsj.cn. A 192.168.1.202"
     local-data: "update3.dota2.wmsj.cn. A 192.168.1.202"
     local-data: "update1.dota2.wmsj.cn. A 192.168.1.202"
     local-data: "update4.dota2.wmsj.cn. A 192.168.1.202"
     local-data: "update5.csgo.wmsj.cn. A 192.168.1.202"
     local-data: "update2.csgo.wmsj.cn. A 192.168.1.202"
     local-data: "update4.csgo.wmsj.cn. A 192.168.1.202"
     local-data: "update3.csgo.wmsj.cn. A 192.168.1.202"
     local-data: "update6.csgo.wmsj.cn. A 192.168.1.202"
     local-data: "update1.csgo.wmsj.cn. A 192.168.1.202"
     local-data: "st.dl.bscstorage.net. A 192.168.1.202"
     local-data: "cdn.mileweb.cs.steampowered.com.8686c.com. A 192.168.1.202"
     # Configuration for teso
     local-data: "live.patcher.elderscrollsonline.com. A 192.168.1.202"
     # Configuration for twitch
     local-data: "d3rmjivj4k4f0t.cloudfront.net. A 192.168.1.202"
     local-data: "addons.forgesvc.net. A 192.168.1.202"
     local-data: "media.forgecdn.net. A 192.168.1.202"
     local-data: "files.forgecdn.net. A 192.168.1.202"
     # Configuration for uplay
     local-zone: "cdn.ubi.com." redirect
     local-data: "cdn.ubi.com. A 192.168.1.202"
     # Configuration for warframe
     local-data: "content.warframe.com. A 192.168.1.202"
     # Configuration for wargaming
     # Configuration for xboxlive
     local-data: "assets1.xboxlive.com. A 192.168.1.202"
     local-data: "assets2.xboxlive.com. A 192.168.1.202"
     local-data: "dlassets.xboxlive.com. A 192.168.1.202"
     local-data: "xboxone.loris.llnwd.net. A 192.168.1.202"
     local-zone: "xboxone.loris.llnwd.net." redirect
     local-data: "xboxone.loris.llnwd.net. A 192.168.1.202"
     local-data: "xboxone.vo.llnwd.net. A 192.168.1.202"
     local-data: "xbox-mbr.xboxlive.com. A 192.168.1.202"
     local-data: "assets1.xboxlive.com.nsatc.net. A 192.168.1.202"
     # Configuration for windowsupdates
     local-zone: "windowsupdate.com." redirect
     local-data: "windowsupdate.com. A 192.168.1.202"
     local-data: "windowsupdate.com. A 192.168.1.202"
     local-zone: "dl.delivery.mp.microsoft.com." redirect
     local-data: "dl.delivery.mp.microsoft.com. A 192.168.1.202"
     local-data: "dl.delivery.mp.microsoft.com. A 192.168.1.202"
     local-zone: "update.microsoft.com." redirect
     local-data: "update.microsoft.com. A 192.168.1.202"
     local-zone: "do.dsp.mp.microsoft.com." redirect
     local-data: "do.dsp.mp.microsoft.com. A 192.168.1.202"
     local-zone: "microsoft.com.edgesuite.net." redirect
     local-data: "microsoft.com.edgesuite.net. A 192.168.1.202"
     local-data: "amupdatedl.microsoft.com. A 192.168.1.202"
     local-data: "amupdatedl2.microsoft.com. A 192.168.1.202"
     local-data: "amupdatedl3.microsoft.com. A 192.168.1.202"
     local-data: "amupdatedl4.microsoft.com. A 192.168.1.202"
     local-data: "amupdatedl5.microsoft.com. A 192.168.1.202"

     Edit: If you do this, make sure your lancache-bundle docker isn't using your pfSense box as its UPSTREAM_DNS resolver! Use a public DNS resolver like 9.9.9.9 or 1.1.1.1. You can generate your own list using this: https://github.com/zeropingheroes/lancache-dns-pfsense If you do this and get "error: local-data in redirect zone must reside at top of zone" when loading it into pfSense, just remove the entry it's complaining about, as there is a parent-domain redirect entry which makes it unnecessary anyway.
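     A quick way to sanity-check that the overrides are actually being served. The addresses follow the example above: 192.168.1.202 is the lancache-bundle docker, and pfSense at 192.168.1.1 is an assumption - substitute your own resolver address:

        dig +short @192.168.1.1 dist.blizzard.com
        # expected output: 192.168.1.202 - anything else means the override isn't loaded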
  8. It used to be possible to mount /cardigann in the container and Jackett would load definitions from /cardigann/definitions. At some point this stopped working. From the Jackett log:
     Info Loading Cardigann definitions from: /home/nobody/.config/cardigann/definitions/, /etc/xdg/cardigan/definitions/, /usr/lib/jackett/Definitions
     This can be fixed with a new container path:
     Container path: /home/nobody/.config/cardigann
     Host path: /mnt/user/appdata/binhex-jackett/cardigann
     You want your YAML definitions in this folder: /mnt/user/appdata/binhex-jackett/cardigann/definitions
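     The same mapping expressed as plain Docker flags, in case it helps anyone outside the unRAID template (the image name, port, and /config mapping are assumed from the standard binhex-jackett setup, not taken from the post above):

        docker run -d --name=binhex-jackett \
          -p 9117:9117 \
          -v /mnt/user/appdata/binhex-jackett:/config \
          -v /mnt/user/appdata/binhex-jackett/cardigann:/home/nobody/.config/cardigann \
          binhex/arch-jackett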
  9. OK, so it looks like it does ignore the GUI settings in config.xml and forces listening on 0.0.0.0. I even switched briefly to the syncthing/syncthing docker and used the STGUIADDRESS variable to force it to the custom bridged IP. Still no joy. It seems there is something inherent in the Syncthing docker that doesn't like custom bridge interfaces. Has anyone been able to get this working?
  10. Anyone else running this in a custom bridged network? I installed this docker yesterday and configured everything. Today the service is still running/syncing, but the GUI isn't responding. I suspect it broke after a restart triggered by the backup overnight. I've changed the GUI to 0.0.0.0:8384 in config.xml. According to the log, the GUI appears to be listening on all interfaces, as are the other services. I assume the last line is just informational and it should be listening on the bridged IP as well as localhost?
     [xxxxx] 12:28:26 INFO: TCP listener ([::]:22000) starting
     [xxxxx] 12:28:26 INFO: QUIC listener ([::]:22000) starting
     [xxxxx] 12:28:26 INFO: GUI and API listening on [::]:8384
     [xxxxx] 12:28:26 INFO: Access the GUI via the following URL: http://127.0.0.1:8384/
     I've tried replacing the GUI setting of 0.0.0.0:8384 with the bridged IP:port, but no change. In fact the log still reports that it's listening on [::]:8384:
     [xxxxx] 12:39:52 INFO: GUI and API listening on [::]:8384
     [xxxxx] 12:28:26 INFO: Access the GUI via the following URL: http://127.0.0.1:8384/
     All I can think of at this point is that the setting is being ignored and access to the GUI is being restricted to localhost:8384? I've enabled GUI debugging:
     <gui enabled="true" tls="false" debugging="true">
     But this doesn't seem to have added anything to the logs that would indicate a problem.
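     For comparison, this is roughly how the official image can be run on the custom bridge from the command line (a sketch only - the network name, IP, container name, and appdata path are examples; STGUIADDRESS is the documented Syncthing override for the GUI listen address):

        docker run -d --name=syncthing-test \
          --network br0 --ip 192.168.1.210 \
          -e STGUIADDRESS="0.0.0.0:8384" \
          -v /mnt/user/appdata/syncthing-test:/var/syncthing \
          syncthing/syncthing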
  11. This is the exact text I have in the "Excluded folders" setting (comma-delimited):
     /mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Metadata,/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Cache,/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Media
     Note that this was typed into the field; I didn't use the Excluded Folder Browser.
  12. It was a while ago that I did this, but I believe so. Edit: if you're using the internal docker name "binhex-nzbget" inside Sonarr to reference the network name of nzbget, then obviously you'll need to change it to just "nzbget".
  13. I've done exactly this. Just rename this folder: /mnt/user/appdata/binhex-nzbget to this: /mnt/user/appdata/nzbget
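     In other words, with the container stopped, a single rename from the unRAID console does it (then point the container's /config host path at the new folder):

        mv /mnt/user/appdata/binhex-nzbget /mnt/user/appdata/nzbget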
  14. Thanks for letting me know. For anyone else facing this issue, I ended up excluding these in CA Appdata Backup:
     /mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Metadata, /mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Media
     and then backing up those paths separately using this Borg script: https://www.reddit.com/r/unRAID/comments/e6l4x6/tutorial_borg_rclone_v2_the_best_method_to/ There are many options for this; the above script is one possible solution, and the post discusses some of the others too.
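     For reference, the Borg side can be as simple as something like this (a minimal sketch; the repository location /mnt/disks/localbackup/borg is an example and not taken from the linked tutorial):

        # assumes the repository was created beforehand, e.g.
        #   borg init --encryption=repokey /mnt/disks/localbackup/borg
        borg create --stats --compression zstd \
          /mnt/disks/localbackup/borg::plex-{now:%Y%m%d} \
          "/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Metadata" \
          "/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Media"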
  15. Thanks for all the work you've put into this, @Squid. I think a number of people are having issues with how long backup/verify takes (particularly users of the Plex docker with hundreds of thousands or even millions of files). I had an idea that might resolve this: if we think of the current backups as an "offline backup" (taken while the dockers are offline), it would be nice if we could specify a list of folders for an "online backup" (i.e. paths that don't contain databases and are safe to copy while the dockers are online). So in the settings you could specify something like:
     online backup folders: /mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Metadata, /mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Media
     These paths would automatically be excluded from the normal "offline backup". Once the offline backup is completed, the dockers would be restarted, and the script would then copy the folders above into a second .tar.gz file (see the sketch below). For the purposes of restore / trimming old backups, the pair of .tar.gz files could be treated as a single backup. Edit: or just make the "online backup" append to the same .tar.gz file? That would mean far fewer changes elsewhere in the script.
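     A rough sketch of what I mean by the offline/online split (the container name "plex" and the /mnt/backup destination are placeholders, not plugin settings; and since plain tar can't append to an already-gzipped archive, this shows the two-archive variant):

        docker stop plex

        # offline backup: appdata minus the big Plex folders
        tar -czf /mnt/backup/appdata-offline.tar.gz \
          --exclude="*/Plex Media Server/Metadata" \
          --exclude="*/Plex Media Server/Media" \
          -C /mnt/user appdata

        docker start plex

        # online backup: the excluded folders, copied while Plex is running again
        tar -czf /mnt/backup/appdata-online.tar.gz \
          -C "/mnt/user/appdata/plex/Library/Application Support/Plex Media Server" \
          Metadata Media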