Everything posted by Andiroo2

  1. The Plex docker is no longer hardware transcoding on Intel Quick Sync as of v1.32.6.7468. No other changes...the docker updated after backing up last night, and now I see crazy CPU usage because 4K transcodes are using the CPU instead of the GPU.
  2. Docker failed to start when I tried this, so I rebooted the server. When it came back up, I was able to delete the container and re-install it from the “Previous Apps” section of Community Applications. Everything is working now.
  3. Same thing happening to me. I had an orphaned image and I deleted it, but still can’t start or delete the main Docker container for the same app.
  4. I recently changed my cache from BTRFS to XFS, and now I can't seem to scan the root of /mnt/cache in this app any longer. I get a greyed-out list of folders; perhaps it's a permissions issue? I can scan folders individually, but not at the root level of the cache drive(s).
  5. Failed for me now too, on 6.12.1. I feel like it started failing before the upgrade to 6.12, though...
  6. Feature request (is this the right place to ask?): An option to invoke the mover once I hit a certain threshold (95% in my case) and move the OLDEST files from the cache to the array until I hit a free space threshold on the cache (say 80% free), and then stop moving. I like to keep as many files on my cache as possible for fast access and to keep my array spun down whenever I can. I do this today with the Mover Tuning plugin by only running the mover once I hit 95% cache utilization and only moving files over a certain age, but with the above feature I would be able to keep the cache exactly where I want it at all times. A rough sketch of what I mean is below.
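     Something along these lines, as a sketch only (the thresholds and the whole-pool find are illustrative, and a real version would skip cache-only shares like appdata). It leans on the stock /usr/local/sbin/move helper, which reads file paths on stdin:

         #!/bin/bash
         # Sketch: move the oldest files off the cache until usage drops below a floor.
         START=95   # begin moving at this % used
         STOP=80    # stop once usage falls below this % used

         used() { df --output=pcent /mnt/cache | tail -1 | tr -dc '0-9'; }

         [ "$(used)" -ge "$START" ] || exit 0

         # Oldest files first (by modification time), moved one at a time
         find /mnt/cache -type f -printf '%T@ %p\n' | sort -n | cut -d' ' -f2- |
         while IFS= read -r f; do
           [ "$(used)" -ge "$STOP" ] || break
           printf '%s\n' "$f" | /usr/local/sbin/move -d 1
         done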
  7. Not urgent. I will watch for this next time and if it's a recurring problem then I'll reach out. Thanks for the offer.
  8. OK, so...I'm posting this in the Dynamix plugins thread. Can anyone tell me where the Cache Dirs plugin logs are stored?
  9. More on this one...it looks like Mover ignored the Tuning settings for file ages. It moved everything available, not just the files older than 30 days. I'm not over the "move all cache-yes" files threshold either. If you are interested, the logs I DM'd show the mover using a 30-day cut-off, yet many more files were moved.
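     For what it's worth, a quick way to see the problem (the share path here is just an example) is to list anything younger than the cut-off and check whether it survived the run:

         # files modified within the last 30 days, which a 30-day cut-off should have left on the cache
         find /mnt/cache/Media -type f -mtime -30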
  10. Update: Mover was running this morning when I woke up. Looks like my issue was a rounding error: Mover Tuning was reporting 95% usage while Unraid was reporting 96%. I must have been sitting right at 95% usage, which wasn't quite enough to trigger the mover.
  11. Same issue for me this morning. Mover not working:

          May 23 13:52:02 Tower root: ionice -c 2 -n 0 nice -n 0 /usr/local/emhttp/plugins/ca.mover.tuning/age_mover start 30 1 0 /mnt/user/system/Mover_Exclude.txt ini '' '' no 95 '' ''
          May 23 13:52:02 Tower root: Log Level: 1
          May 23 13:52:02 Tower root: mover: started
          May 23 13:52:02 Tower root: Hard Link Status: false
          May 23 13:52:02 Tower root: mover: finished
          May 23 13:52:02 Tower root: Restoring original turbo write mode
          May 23 13:52:02 Tower kernel: mdcmd (55): set md_write_method auto

      My settings are set to move at 95% usage of cache and I am at 96%. I installed a new plugin version right before running this latest attempt. Let me know if I can provide anything to help here.
  12. I tried searching but can't find it here...what's the path to the plugin logs? Specifically, I'm looking for my Cache Dirs logs to troubleshoot disks spinning up. Thanks!
  13. Hi all, for the last few weeks I've been getting an error when trying to save library settings. I'm simply trying to change the Visibility settings, but when I save I get the error "Your changes could not be saved". Other than this, Plex runs perfectly. Thoughts? Posting the permissions of my Plex appdata folder for reference. Thanks!
  14. Is there any support for converting a BTRFS cache pool to ZFS, or will it need to be a complete wipe and reformat?
  15. Any timeline on fixing iotop dependencies?
  16. I pulled my hair out for a few hours trying to figure out why the MOUNT button was greyed out for one of my disks. Clicking this refresh button fixed it for me.
  17. On the Unraid dashboard, my Docker gauge is showing 90%. When I used a Docker image, this meant I had filled the image to 90% of its specified size. What does this number mean for the directory-formatted Docker data?
  18. Oh, I keep those on cache…I'm referring specifically to my backups that I want to move off the server quickly after creation and then archive on the array for safekeeping. Edit: I've manually implemented this by calling the per-share mover script via a scheduled task:

         find "/mnt/cache/CommunityApplicationsAppdataBackup" -depth | /usr/local/sbin/move -d 1
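     The scheduled task itself can be as simple as a cron entry (the schedule shown is illustrative):

         # nightly at 03:00: push the backup share from the cache to the array
         0 3 * * * find "/mnt/cache/CommunityApplicationsAppdataBackup" -depth | /usr/local/sbin/move -d 1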
  19. I just received my QNAP QM2-4P-384 and installed it in my Unraid server. It worked right out of the box: 4x NVMe and no bifurcation required on my motherboard. I even moved an existing cache pool off the motherboard onto this card, and Unraid booted up and recognized the pool without issue. I paid $170 USD plus shipping to Canada.
  20. Pinning CPUs makes sense in your case where the performance of other things isn’t acceptable when the issue occurs. My experience is different though…I get the high IOWait but the rest of the system doesn’t hang.
  21. I'd love to see per-share mover settings as well. I have a similar use case...I want my Appdata backups (~300GB file) to write to the cache, and then stay on the cache while it's transferred to my external storage. I want to benefit from the fast read and write for the backup processes. Once that's done, the mover can move that huge file onto the array so it's not soaking up valuable NVMe cache space.
  22. I'm having this issue when trying to export data from the array to another server for backup. Speeds start around 30MB/s and drop to around 5-7MB/s, with IOWAIT sitting around 33.3 according to glances. No other activity on the server at the same time. What's interesting is that the array shows ~100MB/s of reads, but only a trickle goes over the wire. It's like the system is spinning its wheels trying to get the data ready to send, but can only send really slowly. For reference, I am "pulling" data from Unraid to macOS: I run the rsync commands on macOS, connected to Unraid over the network. I have tried rsync over SSH and plain SMB, with no real difference.
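     For reference, the pull looks something like this (host and paths are placeholders):

         # run from the macOS side; pulls from the Unraid box over SSH
         rsync -avh --progress -e ssh root@tower.local:/mnt/user/Backups/ /Volumes/External/Backups/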
  23. Further to this, it did not persist after a reboot, so I saved a copy of the root.pubkeys file on the flash and added a command to the go file to copy it into the /etc/ssh/ folder on boot.
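     The go file addition is just a copy at boot. A sketch, assuming the key file was saved to /boot/config/ssh/ on the flash:

         # appended to /boot/config/go: restore the SSH public keys lost at reboot
         cp /boot/config/ssh/root.pubkeys /etc/ssh/root.pubkeys
         chmod 600 /etc/ssh/root.pubkeys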
  24. Fixed it. I had to edit this file and manually add my public key: /etc/ssh/root.pubkeys
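     As far as I can tell, the file takes standard authorized_keys entries, one public key per line (the key value below is a placeholder):

         # /etc/ssh/root.pubkeys
         ssh-ed25519 AAAAC3Nza... user@laptop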