LumpyCustard

Members
  • Posts: 48



Community Answers

  1. I did enable mover logging some weeks back for testing purposes; it has now been disabled. That wouldn't be the cause of the shutdown issue, would it?
  2. OK, I had to stop the diagnostic; after 20+ minutes my browser had reached 32GB of memory usage and things were getting weird. Why is the diagnostic tool killing itself by going through my NVMe cache / UrBackup folder?
  3. I posted a screenshot above. I have generated diagnostics dozens of times and I've never seen it do this before. UrBackup is a Docker container I use to handle backups for my personal PC and for a friend of mine who backs up over the internet. I've never seen it go through tens of thousands of files to generate a diagnostic before. It's been generating a diagnostic for the last 5 minutes now and it's still going. Edit: 15 minutes now and it's still going lol.
  4. No. When I boot my server after failing to force a shutdown it SOMETIMES says improper shutdown; other times it doesn't. When I generate diagnostics they complete in about 10 seconds. Once the array has started, though, the diagnostics take a long while to complete, as they seem to go through thousands of files in my UrBackup directory.
  5. Since moving to an Intel-based server and utilising QSV/NVENC, I opted to retire my Server 2016 VM. I haven't done anything funky with IOMMU groups either. When I WAS using a VM it was simply named "Windows Server 2016". I currently have no VMs in Unraid and have disabled the VM service.
  6. SOLUTION, found on the Arch Linux forums. First open a terminal and run intel_gpu_top to confirm your iGPU's identifier (card0 or card1). In my case the iGPU is card1/renderD129, which means my Nvidia GPU (by process of elimination) is card0/renderD128. Next, edit your Docker container and enable advanced view. Under "Extra Parameters" enter the following; in my case I am mapping card1 to card0 and renderD129 to renderD128 in order to stop Docker from targeting my Nvidia GPU when trying to run QSV operations: --device=/dev/dri/card1:/dev/dri/card0 --device=/dev/dri/renderD129:/dev/dri/renderD128 Once you've saved, run a test and you should see activity on your iGPU.
  7. My Unraid server used to run on a 3600X/B450 combo about a month ago, and I experienced an error where both graceful AND forced shutdown failed and the server would just hang indefinitely at the terminal. I can type into the terminal and interact with Unraid, but it just can't shut down. My only option is to power off the system by holding the power button. I sold off my parts and moved to a 12700/Z690 combo, and I am still experiencing the same problem. When I connect to my KVM I see the following: Waiting up to 90 seconds for graceful shutdown... [...] Forcing shutdown... Starting diagnostics collection... sh: -c: line 1: unexpected EOF while looking for matching `" Diagnostics attached (devoraid-diagnostics-20230306-2008.zip); I created them immediately after my server booted up. Can someone point me toward what is causing shutdown to hang? Thanks.
  8. I've never directly accessed the pool (by pointing at the actual directory on the individual disk) or modified my Docker image location since building this server, so I'm not entirely sure what happened. I've been having issues with unBALANCE reporting file-permission problems with containers such as Krusader, Nginx, and others; despite running New Permissions to fix the issue, unBALANCE continues to complain about permission issues. Most recently, when my cache failed (due to running in RAID0 mode), I used unBALANCE to completely empty my cache after getting the SSD professionally repaired. After moving all the files and creating a 4-disk cache in mirror mode with new NVMe drives, I noted that mover was refusing to move some files back onto the NVMe cache -- again, mover left some /appdata/ files for Krusader, Nginx and some others in the disk array rather than moving them to the NVMe cache. I've run and re-run New Permissions multiple times and it just doesn't want to fix the issue, so I'm at a loss. I ended up manually moving the files myself using the Unraid File Browser plugin.
  9. Thanks, I'll give it a try. And yes, the NVMe cache is a 4-disk array so it's "protected", hence why I noticed the difference between the shares that live on my array and the shares that use Prefer Cache. I'll wait for the rebuild to finish in 12 hours and give it a try. Thanks again.
  10. I recently upgraded my Unraid box, and while using unBALANCE I noticed that my docker.img is living in two different locations: one copy is on disk13, the other is in my NVMe cache. Both of the Docker vDisk files are 53.7GB, despite Docker settings being configured for a max of 50GB. The Docker image file location is set to: /mnt/user/system/docker/docker.img I am in the process of upgrading 2 disks in my array, so the shares on that array are marked as "unprotected" while the array rebuilds. I note that even though /system/ is set to prefer the NVMe cache, and even though docker.img is on my NVMe cache, Unraid thinks it's unprotected because the img file supposedly exists in 2 locations at once. How do I correct this issue? Do I just modify the vDisk location, manually point it to /mnt/nvme-cache/system/docker/docker.img, and then delete the other file on disk13? Thanks.
  11. Thank you SO MUCH for this. Your post resolved issues with a small production server I created for a family friend running 11 individual Windows 10 VMs. Before (all VMs sitting on the login screen, completely idle) / After:
  12. CPU: i7 12700 (non-K). Mobo: Gigabyte Z690 Gaming X DDR4. GPU: Nvidia 2080 Super. Unraid: 6.11.5. BIOS: latest, set to iGPU as main/boot GPU; PiKVM connected to the motherboard HDMI port. I've got NVENC working in Plex, Tdarr, Unmanic, and HandBrake, but I can't seem to get my iGPU to function in any Docker container. I've installed Intel GPU TOP. modprobe i915 returns nothing, so no errors there. intel_gpu_top shows the iGPU is detected. Navigating to /dev/dri shows card0 and card1. I have tried starting containers with the extra parameter "--device=/dev/dri", and I have also tried adding a device to the container (though this seems outdated and no longer necessary?). Every single container -- Plex, Tdarr, Unmanic, and HandBrake -- fails to utilise the iGPU: Plex ignores it and falls back to CPU transcode; Tdarr and Unmanic fly through transcodes at 30,000FPS and fail, with intel_gpu_top showing 0% utilisation; HandBrake fails to initialise QSV. What am I doing wrong here? Is my PiKVM causing issues? Are you meant to boot headless for the iGPU to work properly in Unraid? I have not seen anyone make reference to "card1" when reading through other threads across Reddit and the Unraid forums; almost every screenshot and tutorial I've seen only shows card0. Any help would be appreciated. Thanks.
  13. Edit: Upon further investigation, it seems that if I'm transcoding a 1080p movie Plex leans on the CPU, but when I transcode a 4K movie Plex uses the GPU. This is reproducible with any movie I try. Is this expected behaviour in Plex?
     -----------------
     I'm having difficulty getting Plex hardware transcoding to work despite the GPU being detected and used by Plex. I migrated my Plex %appdata% folder from a Windows VM to binhex-plexpass, and everything worked fine after pointing it to the new media locations and syncing.
     - I've installed the 'nvidia driver' plugin and downloaded the latest driver.
     - The driver has a HWID.
     - I have copied the ID from 'nvidia driver' and pasted it into the binhex-plexpass container settings.
     - I have set --runtime=nvidia in 'extra parameters'.
     - I have set 'nvidia_visible_devices' to 'all'.
     - I have enabled hardware acceleration under Transcoder settings.
     - I have confirmed the transcoder temporary directory is working (I can see transcoding files in the directory I have mapped on my cache).
     - When I launch Plex, the GPU Statistics plugin shows my GPU is being utilised by Plex.
     - When I launch a terminal and type "watch nvidia-smi" I can see the GPU is being used by Plex.
     However, with all of that working properly and looking good, when I watch a movie and set Plex to transcode (rather than direct stream), my CPU gets slammed and Plex does not show "transcode (hw)" in the now-playing section; instead it just shows "transcode", which means it is not HW transcoding. Anyone have an idea? Has this been caused by transferring my Windows-based %appdata% folder into Docker? To be clear, I took the 'Plex Media Server' folder from Windows %appdata%, deleted all of the folders in binhex Plex's /appdata/, and copied it in. Thanks.
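The device remap from answer 6 can be sketched as a small shell snippet. The card1/renderD129 values are taken from that answer and are assumptions about your hardware; verify them with intel_gpu_top first, since they differ per system.

```shell
#!/bin/sh
# Sketch of the fix from answer 6, assuming the iGPU was detected as
# card1/renderD129. The container is handed the host's iGPU nodes but sees
# them under the names card0/renderD128, which is what most QSV-aware
# container images expect by default.
IGPU_CARD=card1          # host card node for the Intel iGPU (assumption)
IGPU_RENDER=renderD129   # matching host render node (assumption)

EXTRA_PARAMS="--device=/dev/dri/${IGPU_CARD}:/dev/dri/card0 --device=/dev/dri/${IGPU_RENDER}:/dev/dri/renderD128"
echo "$EXTRA_PARAMS"
```

The printed string is what goes into the container's "Extra Parameters" field under advanced view.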
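The shutdown log in answer 7 ends in a shell syntax error, not a shutdown error as such. Unraid's actual diagnostics script isn't shown here, but the error class is easy to reproduce: an unbalanced double quote handed to `sh -c`/`bash -c` aborts parsing before the command ever runs.

```shell
#!/bin/sh
# Minimal reproduction of the error class from the shutdown log: an
# unclosed double quote inside the -c string makes bash fail with
# "unexpected EOF while looking for matching", and the command never runs.
bash -c 'echo "diagnostics started' 2>/tmp/quote-err.txt
status=$?
echo "exit status: $status"   # non-zero: parsing failed before execution
cat /tmp/quote-err.txt
```

In the log this appears right after "Starting diagnostics collection...", which suggests the hang and the broken quoting may come from whatever command string the diagnostics step builds at shutdown.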
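For the card0/card1 confusion in answer 12, the PCI vendor ID under /sys/class/drm identifies which node belongs to which GPU without guesswork (0x8086 is Intel, 0x10de is Nvidia). This is a generic Linux sketch, not an Unraid-specific tool:

```shell
#!/bin/sh
# Map a PCI vendor ID to a GPU maker; 0x8086 = Intel, 0x10de = Nvidia.
vendor_name() {
    case "$1" in
        0x8086) echo "Intel iGPU" ;;
        0x10de) echo "Nvidia GPU" ;;
        *)      echo "unknown vendor $1" ;;
    esac
}

# Print which vendor owns each /dev/dri card node on this host.
for card in /sys/class/drm/card[0-9]; do
    [ -r "$card/device/vendor" ] || continue
    printf '%s: %s\n' "$(basename "$card")" "$(vendor_name "$(cat "$card/device/vendor")")"
done
```

Running this alongside intel_gpu_top confirms which node to pass (or remap) in the container's extra parameters.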
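The container settings listed in answer 13 can be assembled into a single docker run sketch. The image name matches the binhex-plexpass container from the post; NVIDIA_DRIVER_CAPABILITIES is an additional nvidia-container-runtime variable worth checking when the GPU is visible but transcodes stay on the CPU (an assumption here, since the post doesn't say whether the template already sets it).

```shell
#!/bin/sh
# Sketch only: the flags from answer 13 expressed as one docker run command.
# GPU_UUID stands in for the HWID copied from the 'nvidia driver' plugin;
# "all" is the fallback value the post used.
GPU_UUID="all"   # or the specific GPU UUID from the plugin (assumption)

CMD="docker run -d --name=plex --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=${GPU_UUID} -e NVIDIA_DRIVER_CAPABILITIES=all binhex/arch-plexpass"
echo "$CMD"
```

In the Unraid UI these pieces map onto the template fields: --runtime=nvidia goes in "Extra Parameters" and the two NVIDIA_* values are container variables.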