Andiroo2 (Members, 169 posts)

Posts posted by Andiroo2

  1. On 9/5/2023 at 10:25 AM, Andiroo2 said:

    The Plex docker is no longer hardware transcoding on Intel Quick Sync as of v1.32.6.7468.  No changes otherwise...the docker updated after backing up last night, and now I see crazy CPU usage because 4K transcodes are using the CPU instead of the GPU.

     

    I rolled back to a stable version and it works again…see attached. 

     

    IMG_0408.jpeg

  2. On 7/18/2023 at 11:25 PM, ljm42 said:

     

    Does it help if you set the container to not autostart, then go to Settings -> Docker and disable the Docker service, then enable it?

     

    Docker failed to start when I tried this, so I rebooted the server. When it came back up, I was able to delete the container and re-install it from the “Previous Apps” section of Community Applications. 

     

    Everything is working now. 

    • Like 2
  3. 6 hours ago, PeterSoto said:

    Hello, I'm sorry to hear about the issue you're having with your Plex setup and Docker containers. It looks like you encountered an error when trying to update Plex and configure Nvidia GPU pass-through. The error message "Cannot delete image, in use by other container(s)" indicates that the Plex image is currently being used by other running containers, which prevents you from deleting it. The "No such container" error indicates that the container you are trying to delete may not exist or is not running. To work around this issue, you can try the following steps:

    • Check running containers: Verify which containers are currently running on your system. You can use the docker ps command to see a list of active containers.
    • Stop running containers: If any containers are actively using Plex images, you should first stop them using the docker stop <container_name> command, where <container_name> is the name or ID of the running container.
    • Remove containers: After stopping the relevant containers, you can delete them using the docker rm <container_name> command.
    • Attempt to delete the image: After deleting the containers, try deleting the Plex image again with docker rmi <image_name>.
    • Reboot the server: If you still encounter issues, reboot your server again and ensure that the shutdown is clean to avoid any potential corruption.
    • Check the logs: Review the logs of both Plex and Docker for any additional error messages or warnings that may help determine the root cause of the problem.

    If the problem persists after following these steps, you should provide more information, such as the specific error message from the log and the Docker command you used to update Plex with Nvidia GPU pass-through.

    AI wrote this, right?

  4. I recently changed my cache from BTRFS to XFS, and now I can't scan the root of /mnt/cache in this app any longer.  I get a greyed-out list of folders; perhaps a permissions issue?  I can scan them individually, but not at the root level of the cache drive(s):

     

    Screenshot 2023-06-29 (attached)

  5. Feature request (is this the right place to ask?):  An option to invoke the mover once I hit a certain threshold (95% in my case) and move the OLDEST files from the cache to the array until I hit a free-space threshold on the cache (say 80% free), then stop moving.

     

    I like to keep as many files on my cache as possible for fast access and to keep my array spun down whenever I can. I do this today with the mover tuning plugin by only running the mover once I hit 95% cache utilization and only moving files over a certain age, but with the above feature I would be able to keep the cache exactly where I want it at all times. 

    • Like 1
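    A rough sketch of the requested "move oldest until a free-space target" policy, demonstrated on a throwaway temp directory so it runs anywhere. This is not the mover tuning plugin's actual logic, and a real version would compare df output on the pool against the thresholds instead of a file count:

    ```shell
    # Create a demo source with three files; file1 has the oldest mtime.
    SRC=$(mktemp -d); DEST=$(mktemp -d)
    for i in 1 2 3; do
      touch -d "2023-01-0$i" "$SRC/file$i"
    done

    TARGET_COUNT=1   # stand-in for the "stop at 80% free" threshold

    # List files oldest-first (mtime epoch, then path), then move until
    # the source is down to the target.
    find "$SRC" -type f -printf '%T@ %p\n' | sort -n | cut -d' ' -f2- |
    while IFS= read -r f; do
      [ "$(find "$SRC" -type f | wc -l)" -le "$TARGET_COUNT" ] && break
      mv "$f" "$DEST/"
    done

    ls "$DEST"   # file1 and file2 were moved; file3 (the newest) stays put
    ```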
  6. 18 hours ago, hugenbdd said:

    I'm not able to reproduce.  I'm willing to do a google meeting sometime next week if you want me to look at it.

     

    Not urgent.  I will watch for this next time and if it's a recurring problem then I'll reach out.  Thanks for the offer.

  7. On 5/20/2023 at 11:23 AM, Squid said:

    Most plugins will log into the syslog (if they log / have an option to).  For those that don't log to syslog (CA is the prime example), they have their own specific area and there is no requirement / convention that they be placed anywhere

     

    OK, so...I'm posting this in the Dynamix plugins thread.  Can anyone tell me where the Cache Dirs plugin logs are stored?

  8. On 5/24/2023 at 9:33 AM, Andiroo2 said:

     

    Update: Mover was running this morning when I woke up.  Looks like my issue was a rounding error.  Mover tuning was reporting 95% usage but Unraid was reporting 96%.  I must have been right on 95% usage and not enough to trigger the mover.

     

    More on this one...it looks like Mover ignored the Tuning settings for file ages.  It moved everything available, and not just the files older than 30 days.  I'm not over the "move all cache-yes" files threshold either.  If you are interested, the logs I DM'd show the mover using a 30-day cut-off, but many more files were moved. 

  9. 19 hours ago, Andiroo2 said:

    Same issue for me this morning.  Mover not working:

     

    May 23 13:52:02 Tower kernel: 
    May 23 13:52:02 Tower root: ionice -c 2 -n 0 nice -n 0 /usr/local/emhttp/plugins/ca.mover.tuning/age_mover start 30 1 0 /mnt/user/system/Mover_Exclude.txt ini '' '' no 95 '' ''
    May 23 13:52:02 Tower root: Log Level: 1
    May 23 13:52:02 Tower root: mover: started
    May 23 13:52:02 Tower root: Hard Link Status: false
    May 23 13:52:02 Tower root: mover: finished
    May 23 13:52:02 Tower root: Restoring original turbo write mode
    May 23 13:52:02 Tower kernel: mdcmd (55): set md_write_method auto
    May 23 13:52:02 Tower kernel: 

     

    My settings are set to move at 95% usage of cache and I am at 96%.  I installed a new plugin version right before running this latest attempt.  Let me know if I can provide anything to help here.

     

    Update: Mover was running this morning when I woke up.  Looks like my issue was a rounding error.  Mover tuning was reporting 95% usage but Unraid was reporting 96%.  I must have been right on 95% usage and not enough to trigger the mover.

    • Like 1
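    The rounding mismatch described above is easy to reproduce with integer arithmetic (the numbers here are made up for illustration):

    ```shell
    # A pool at 95.6% usage: truncation reports 95, round-to-nearest reports 96,
    # so a "move when usage is above 95" rule never fires on the truncated value.
    used=956; size=1000
    pct_floor=$(( used * 100 / size ))               # 95: integer truncation
    pct_round=$(( (used * 100 + size / 2) / size ))  # 96: rounded to nearest
    echo "$pct_floor $pct_round"   # prints: 95 96
    ```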
  10. Same issue for me this morning.  Mover not working:

     

    May 23 13:52:02 Tower kernel: 
    May 23 13:52:02 Tower root: ionice -c 2 -n 0 nice -n 0 /usr/local/emhttp/plugins/ca.mover.tuning/age_mover start 30 1 0 /mnt/user/system/Mover_Exclude.txt ini '' '' no 95 '' ''
    May 23 13:52:02 Tower root: Log Level: 1
    May 23 13:52:02 Tower root: mover: started
    May 23 13:52:02 Tower root: Hard Link Status: false
    May 23 13:52:02 Tower root: mover: finished
    May 23 13:52:02 Tower root: Restoring original turbo write mode
    May 23 13:52:02 Tower kernel: mdcmd (55): set md_write_method auto
    May 23 13:52:02 Tower kernel: 

     

    My settings are set to move at 95% usage of cache and I am at 96%.  I installed a new plugin version right before running this latest attempt.  Let me know if I can provide anything to help here.

  11. Hi all, for the last few weeks I've been getting an error when trying to save library settings.  I'm simply trying to change the Visibility settings, but when I save I get the error "Your changes could not be saved".  Other than this, Plex runs perfectly.  

     

    (image attached)

     

    Thoughts?  Posting the permissions of my plex appdata folder for reference.  

     

    (image attached)

     

    Thanks!

  12. On 2/7/2023 at 6:08 PM, dlandon said:

    If you make a change to the flash config file, you'll have to click the double arrows icon on the UD page to refresh the ram file that holds the config file.  When UD is operating, the configuration file is in ram and copied to the flash drive when a change is made.

     

    I pulled my hair out for a few hours trying to figure out why the MOUNT button was greyed out for one of my disks.  Clicking this refresh button fixed it for me.  

  13. 1 hour ago, trurl said:

    Actually, you want appdata, domains, and system shares to stay on cache or other fast pool and not on the array, so Docker/VM performance won't be impacted by slower array, and so array disks can spin down since these files are always open.

     

    Oh, I keep those on cache…I’m referring specifically to my backups, which I want to move off the server quickly after creation and then archive on the array for safekeeping. 

     

    Edit:  I’ve manually implemented this by calling the per-share mover script via scheduled task:

    find "/mnt/cache/CommunityApplicationsAppdataBackup" -depth | /usr/local/sbin/move -d 1
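    The same per-share invocation generalizes to a loop in a scheduled script (the share names below are placeholders; the find pipeline and /usr/local/sbin/move are exactly as in the one-liner above):

    ```shell
    # Run the built-in mover against each listed share's cache directory;
    # the guard skips shares that have nothing on the cache.
    for share in CommunityApplicationsAppdataBackup Media Backups; do
      [ -d "/mnt/cache/$share" ] || continue
      find "/mnt/cache/$share" -depth | /usr/local/sbin/move -d 1
    done
    ```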

     

  14. I just received my QNAP QM2-4P-384 and installed it in my Unraid server. It worked right out of the box: 4x NVMe, and no bifurcation required on my motherboard. I even took an existing cache pool off the motherboard and put it right into this card, and Unraid booted up and recognized the pool without issue.  I paid $170 USD + shipping to Canada.   

    • Like 1
  15. I'd love to see per-share mover settings as well.  I have a similar use case...I want my Appdata backups (~300GB file) to write to the cache, and then stay on the cache while it's transferred to my external storage.  I want to benefit from the fast read and write for the backup processes.  Once that's done, the mover can move that huge file onto the array so it's not soaking up valuable NVMe cache space.  

  16. I'm having this issue when trying to export data from the array to another server for backup.  Speeds start around 30MB/s and drop to around 5-7MB/s.  IOWAIT sitting around 33.3 according to glances.  No other activity on the server at the same time.  

     

    What's interesting is that the array shows ~100MB/s of reads, but there is only a trickle going over the wire.

     

    (image attached)

     

    (image attached)

     

    It's like the system is spinning its wheels trying to get the data ready to send, but can only send really slowly. For reference, I am "pulling" data from Unraid to macOS.  I am running the rsync commands on macOS, connected to Unraid via the network.  I have been trying with rsync over SSH and plain SMB, but there is no real difference.

     

    (image attached)
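    One variable worth isolating here is rsync's delta-transfer algorithm: when files already exist on the destination, computing deltas can eat CPU on large media files. The sketch below demonstrates the relevant flags on throwaway local directories; for the network pull you would swap the source for something like user@tower:/mnt/user/... (host and paths are hypothetical):

    ```shell
    # Local demo of -W/--whole-file (skip delta computation) and --progress
    # (per-file throughput). Falls back to cp if rsync isn't installed.
    SRC=$(mktemp -d); DST=$(mktemp -d)
    printf 'hello' > "$SRC/big.mkv"   # stand-in for a large media file
    if command -v rsync >/dev/null 2>&1; then
      rsync -a --whole-file --progress "$SRC/" "$DST/"
    else
      cp "$SRC/big.mkv" "$DST/big.mkv"
    fi
    cat "$DST/big.mkv"   # prints: hello
    ```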

     
