[6.6.2] Various concerns to sort out: Proper cache usage, general docker performance, docker GPU passthrough, etc


cr08


Coming in as a newbie user. Been using a bit of a kludgey setup up until now that consisted of a Proxmox install and mergerfs, with the pooled drive fed to various VMs via bind mounts. Finally got around to 'cleaning up' this setup a bit. Long story made short, I am trying to give unRAID a chance and have it running on a Dell PowerEdge T20. Here's my full specs:

 

Dell PowerEdge T20

Xeon E3-1225 v3 quad core no HT

12GB ram

2x 8TB WD Red's

120GB Kingston SSD

2TB Hitachi HDD

1TB Toshiba HDD

All disks except the 1TB are attached to the onboard SATA controller; the 1TB is attached to an add-on ASMedia card. All running at 6Gb/s.

 

I'll spare you the unneeded details except to say that so far everything is pretty clean. Originally installed unRAID at version 6.6.1, then updated to .2. I moved data over gradually via Midnight Commander from unassigned device mounts to the user share, and once everything was moved I fully set up the array. It is currently configured with all HDDs as a 4-disk array with no parity (temporarily, until I pick up another 8TB disk) and the SSD as cache. Got a few dockers set up, namely Plex, Sonarr, Radarr, SABnzbd, and DelugeVPN, all binhex versions, following Spaceinvader One's YouTube videos to set them up.

 

Here's where the issues come in that I've run into, and hopefully I can get them sorted before my trial runs out.

 

(1) First is the cache disk and proper usage. Between my media and downloads shares I tried setting the cache to both Yes and Prefer, and after running into the issues below I eventually set it to No for now to get things operating smoothly. What happened on either Yes or Prefer is that, as content was being acquired (primarily via SABnzbd), it would constantly fill up the cache, and upon bumping up against that limit SABnzbd would repeatedly stop and complain about lack of disk space, often during unpacking, with the error message itself indicating it was from a lack of disk space. It was my understanding that the cache setting of either Yes or Prefer should allow writes to simply fail over to the array automatically if the cache disk(s) is full. That is not what I saw, and I had to manually run mover to free up some space before continuing. Eventually I do intend to add a parity disk and would like to have a cache disk configured at that time.

 

(2) Initially I had set up DelugeVPN despite having run only usenet previously. After a little bit of deducing, simply in the form of shutting off that docker, I have narrowed a big issue down to this particular docker/application. Basically, while it is running and downloading anything with the VPN enabled, the entire docker stack will seemingly freeze every few minutes. All dockers become inaccessible, and even the docker page in the unRAID UI will spin endlessly trying to load the docker list. What I did notice during this time is that the dmesg log showed an error regarding a potential SYN flood. As a test I disabled the Deluge docker and everything has been fine since. For now I am leaving it off until I can sort things out.

 

(3) As part of this whole process, and as a result of adding more disk space and acquiring more 4K content, I wanted to have Plex make use of the onboard HD 4600 iGPU if at all possible. All the existing search results and documentation seem pretty straightforward except for one massive hiccup that I don't know if I am simply overlooking. All mentions of this process have you going into a mysterious 'Extra Parameters' section of the Plex docker and adding "--device=/dev/dri:/dev/dri" to it. What I am running into is that nowhere, at least in 6.6.2, can I find this 'Extra Parameters' field. I can definitely provide screenshots of any page if needed. I've even looked for fields with similar-sounding names in case the wording has changed, but nada.

 

(4) Lastly, and this is very application-specific and not really unRAID-specific, but I figured I'd add it to this list anyway for the hell of it: something that feels like a new occurrence related to the operation of SABnzbd and/or Sonarr that I don't recall happening on my previous system. Essentially I have it set up with the usual tv and movies categories and configuration. I keep tv as high priority and movies as low. What happened a few times last night is this: say a movie begins downloading first and gets to the unpacking stage. If a tv download starts in that time, completes downloading and unpacking, it kind of gets stuck there while the movie is still unpacking. Sonarr will refuse to pick up that episode until the movie is complete, all despite the set priorities.

 

Let me know if there's any additional info, screenshots, or logs needed. I spent the majority of my evening last night getting this all set up to a relatively stable state, staying up much later than I wanted, so I didn't have the forethought to grab any error messages or screenshots.

40 minutes ago, cr08 said:

First is the cache disk and proper usage. Between my media and downloads shares I tried setting the cache to both Yes and Prefer, and after running into the issues below I eventually set it to No for now to get things operating smoothly. What happened on either Yes or Prefer is that, as content was being acquired (primarily via SABnzbd), it would constantly fill up the cache, and upon bumping up against that limit SABnzbd would repeatedly stop and complain about lack of disk space, often during unpacking, with the error message itself indicating it was from a lack of disk space. It was my understanding that the cache setting of either Yes or Prefer should allow writes to simply fail over to the array automatically if the cache disk(s) is full. That is not what I saw, and I had to manually run mover to free up some space before continuing.

Cache-yes and cache-prefer are exactly the opposite of each other in terms of what mover does. Mover moves cache-yes shares from cache to array, mover moves cache-prefer shares from array to cache. Cache-prefer means you prefer for the files to always be on cache if there is room, and any files that overflowed to the array would be moved back to cache when room is available.

 

Each User Share has a Minimum Free setting. Cache also has a Minimum Free setting in Global Share Settings. The intention of these settings is very similar.

 

Unraid has no way to know how large a file will become when it begins to write the file. If a disk has less than Minimum Free, Unraid will choose another disk for the write. If a disk has more than Minimum Free, Unraid can choose the disk and if the file is too large the write will fail.

 

In order for cache to overflow to the array, Unraid has to see that the cache disk has less than Minimum Free remaining. Then it will choose another disk.

 

You should set Minimum Free for cache, and for each User Share, to larger than the largest file you expect to write.
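The decision trurl describes can be sketched as a tiny shell function. This is illustrative only (not Unraid's actual code), and the example sizes are hypothetical: if the disk's free space is below Minimum Free, Unraid picks another target; otherwise the disk is eligible even if the incoming file later outgrows it.

```shell
# Illustrative sketch of the Minimum Free comparison.
# choose_target FREE_KB MIN_FREE_KB -> prints "array" or "cache"
choose_target() {
    if [ "$1" -lt "$2" ]; then
        echo "array"    # below Minimum Free: overflow to another disk
    else
        echo "cache"    # enough headroom: cache is eligible for the write
    fi
}

# Example: 5 GiB free on cache, Minimum Free set to 10 GiB
choose_target 5242880 10485760    # -> array
```

The key point the sketch makes visible: the comparison happens *before* the write starts, against a fixed threshold, which is why the threshold must exceed your largest expected file.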

1 hour ago, cr08 said:

What happened on either Yes or Prefer is that, as content was being acquired (primarily via SABnzbd), it would constantly fill up the cache, and upon bumping up against that limit SABnzbd would repeatedly stop and complain about lack of disk space,

In addition to what trurl said, I suspect that you told SAB to directly use the cache drive, by specifying a /mnt/cache path. Unless you have a specific reason to do that, I'd recommend using /mnt/user instead. Using the cache directly means that SAB can't use the array for overflow even if the share is set for cache prefer, because only paths in /mnt/user can use both array and cache disks. /mnt/cache tells it to bypass the user share system and go directly to ONLY the cache drive, no matter the user share setting.
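To make the distinction concrete, here is a hedged sketch of the two container path mappings. The share name `Downloads`, container path `/data`, and the `binhex/arch-sabnzbd` image are examples; the rest of the `docker run` flags are elided.

```shell
# User-share path: writes go through the user share layer, honor the
# share's cache setting, and can overflow from cache to the array.
docker run -v /mnt/user/Downloads:/data ... binhex/arch-sabnzbd

# Direct cache path: bypasses the user share layer entirely; writes go
# ONLY to the cache drive, regardless of the share's cache setting.
docker run -v /mnt/cache/Downloads:/data ... binhex/arch-sabnzbd
```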

6 hours ago, jonathanm said:

In addition to what trurl said, I suspect that you told SAB to directly use the cache drive, by specifying a /mnt/cache path. Unless you have a specific reason to do that, I'd recommend using /mnt/user instead. Using the cache directly means that SAB can't use the array for overflow even if the share is set for cache prefer, because only paths in /mnt/user can use both array and cache disks. /mnt/cache tells it to bypass the user share system and go directly to ONLY the cache drive, no matter the user share setting.

Nope. /data in the SAB container is pointed at /mnt/user/Downloads. I will give it another go after making sure the min free space options are set, which is where I think I was running into issues.

13 hours ago, cr08 said:

(2) Initially I had set up DelugeVPN despite having run only usenet previously. After a little bit of deducing, simply in the form of shutting off that docker, I have narrowed a big issue down to this particular docker/application. Basically, while it is running and downloading anything with the VPN enabled, the entire docker stack will seemingly freeze every few minutes. All dockers become inaccessible, and even the docker page in the unRAID UI will spin endlessly trying to load the docker list. What I did notice during this time is that the dmesg log showed an error regarding a potential SYN flood. As a test I disabled the Deluge docker and everything has been fine since. For now I am leaving it off until I can sort things out.

This can happen if you haven't put a cap on your upload speed, and maybe have a crappy router. Torrenting is very taxing on the router as it opens hundreds of connections. The first thing you need to do is limit your upload speed in the torrent client to 80% of what your connection can handle. If you don't do this, symptoms like what you're seeing can happen. If you still have issues after doing that, it could mean that your router is having trouble with that many connections, and you should limit the maximum number of connections in the torrent client as well.
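The 80% figure is quick to work out. A hypothetical 10 Mbit/s upstream (as in the OP's later post) works out like this, remembering that client speed limits are usually entered in KB/s, not Mbit/s:

```shell
# Hypothetical line: 10 Mbit/s upstream, capped at 80% as suggested.
UPLOAD_MBIT=10
# Mbit/s -> 80% -> KB/s (divide by 8 bits per byte)
CAP_KBPS=$(( UPLOAD_MBIT * 80 / 100 * 1000 / 8 ))
echo "set the client upload limit to roughly ${CAP_KBPS} KB/s"
```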

Edited by strike
22 hours ago, cr08 said:

(3)...All mentions of this process have you going into a mysterious 'Extra Parameters' section of the Plex docker and adding "--device=/dev/dri:/dev/dri" to it. What I am running into is that nowhere, at least in 6.6.2, can I find this 'Extra Parameters' field

See the second post in this thread for full guidance on setting this up, specifically the part where it says "you have to enable Advanced View on the docker edit page to see this".
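For context, the Extra Parameters field simply appends flags to the `docker run` command unRAID builds behind the scenes. On a plain Docker CLI the equivalent would look something like the sketch below (the image name is illustrative, and the other flags are elided); note the `=` (or a space) between `--device` and the path.

```shell
# Pass the host's DRM render device through to the container so Plex
# can use Intel Quick Sync for hardware transcoding.
docker run --device=/dev/dri:/dev/dri ... plexinc/pms-docker
```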

 

13 hours ago, strike said:

This can happen if you haven't put a cap on your upload speed, and maybe have a crappy router. Torrenting is very taxing on the router as it opens hundreds of connections. The first thing you need to do is limit your upload speed in the torrent client to 80% of what your connection can handle. If you don't do this, symptoms like what you're seeing can happen. If you still have issues after doing that, it could mean that your router is having trouble with that many connections, and you should limit the maximum number of connections in the torrent client as well.

Oddly enough, all the connection count settings Deluge defaulted to are much lower than what is recommended for my connection (100Mb/10Mb). The only things not set are the upload and download speed limits, which I've never really had an issue leaving unset in the desktop version. Also, my router is relatively decent, a Ubiquiti EdgeRouter X with Smart Queue enabled. Overall the internet connection hasn't been affected by this, JUST the ability to access the dockers themselves.

With that said, I have gone in and updated the bandwidth/connection settings in Deluge and will see how it fares going forward. So far with a few torrents it has been running fairly smoothly.

3 hours ago, Ascii227 said:

See the second post in this thread for full guidance on setting this up, specifically the part where it says "you have to enable Advanced View on the docker edit page to see this".

 

I could have sworn I turned that advanced settings toggle on and still did not see the option. However, I have noticed that for whatever reason I have been having intermittent issues with Chrome not behaving with the unRAID WebUI. Right off the bat I had issues with the initial array setup: when choosing disks from the dropdowns the selection didn't register fully, so it wouldn't let me start the array because it thought there were no disks selected. I have essentially broken down and use Edge instead whenever anything feels wonky, so this may be related. With that said, much thanks for that link! It was one I wasn't able to find earlier, and it actually has a screenshot and a little more info on exactly what I should be looking for. :)
