Hostile_18 (posted July 1)

Hi all. I see there are a few ways to set up the download paths for Sonarr/Radarr via Sabnzbd to the media directory: RAM unpacking, direct to the array, or direct to the cache drive and then moved over. Shares can also be given primary and secondary locations and handled by the mover (I have a download share and a media directory share), and then there's hardlinking.

So what would be the optimum path for me? I have 32 GB RAM, a 4 TB NVMe Gen 4 cache drive, and a 120 TB hard drive array with parity. My internet connection can reach 135 MB/s. However I set it up, I seem to hit bottlenecks that bring everything to a halt after a few hours, or I end up downloading straight to the array, which is super slow. A 4 TB cache is pretty big for normal use, but at the moment I need to download terabytes of data daily.
ConnerVT (posted July 1)

Your system looks to be very similar to mine (without knowing what your platform's hardware is). I also run Sabnzbd and the Arrs, with 1G/1G internet. This is how I have things configured, and I have zero issues or complaints:

- appdata and system shares = cache only
- Download share = cache --> array
- Media share = array only

The logic behind this choice: all docker appdata and the docker.img run on NVMe for speed. Sabnzbd downloads and then unpacks the file, all within the Download share on cache (NVMe), so moving from one folder in Download to another is fast. Sonarr/Radarr/Lidarr then move the unpacked file from Download/completed to the appropriate Media share folder once notified.

Doing any writes to the array other than that last one doesn't make sense, as it just means a lot of writing/moving data from one location in the array to another, at spinning-rust speeds and with parity. I chose not to have my Media share use the cache, as almost all additions to my Media share come from the Arrs. Having it cached would just add an extra set of writes to the NVMe [Download/completed --> Media (cache) --> Media (array)].
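The reason moves within the Download share are fast is that a move inside one filesystem is just a rename, while a move across filesystems is a full copy plus delete (the same property is what makes hardlinking possible). Here is a minimal sketch of how to check this, assuming typical Unraid-style mount points; the paths are hypothetical, so adjust them to your own shares:

```python
import os

# Hypothetical Unraid-style paths; adjust to your own shares.
DOWNLOAD_INCOMPLETE = "/mnt/cache/Download/incomplete"
DOWNLOAD_COMPLETE = "/mnt/cache/Download/completed"
MEDIA = "/mnt/user/Media"

def same_filesystem(a: str, b: str) -> bool:
    """Two paths share a filesystem iff their device IDs match.
    A move between them is then an instant rename (and hardlinks work);
    across filesystems it becomes a full copy followed by a delete."""
    return os.stat(a).st_dev == os.stat(b).st_dev

for src, dst in [(DOWNLOAD_INCOMPLETE, DOWNLOAD_COMPLETE),
                 (DOWNLOAD_COMPLETE, MEDIA)]:
    kind = "rename (fast)" if same_filesystem(src, dst) else "copy+delete (slow)"
    print(f"{src} -> {dst}: {kind}")
```

If both the download and media folders sit on one filesystem, the Arrs can hardlink instead of copying, which is why hardlink-friendly setups typically map everything under a single mount.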
Hostile_18 (posted July 1, author)

5 minutes ago, ConnerVT said: […]

Thank you for taking the time to explain all that; great for someone new to Unraid like myself. I'll copy your setup and see how I get on. All my downloading is done via the Arrs now too. I haven't actually ever touched the system share, so it's just on the array, and that's an easy change (not sure what it does, to be honest).

Last question, if that's OK: with appdata and system being on cache only, are there any files I should be backing up elsewhere so they're protected in case of failure? As you can tell, I'm new to this!
itimpi (posted July 1)

17 minutes ago, Hostile_18 said: […]

You need to back up the appdata share - you can use the Appdata Backup plugin to automate this.
ConnerVT (posted July 1)

In Community Applications (CA), get the Appdata Backup plugin. It backs up appdata (and the flash drive) on a schedule, and it's very configurable.

The appdata and system shares are generally best kept on your fastest drive (NVMe/SSD). Appdata holds the configuration files for your dockers; system is where all of the executables/binaries for your dockers and VMs live.
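For a sense of what such a backup boils down to, here is a minimal sketch with assumed paths; the plugin is still the better route, since it also stops containers before archiving, which this sketch skips:

```python
import datetime
import subprocess

# Assumed paths -- adjust to your system. The plugin is the safer route:
# it also stops your containers before archiving, which this sketch skips.
APPDATA_PARENT = "/mnt/cache"   # directory containing the appdata share
DEST = "/mnt/user/Backups"      # hypothetical backup share on the array

stamp = datetime.date.today().isoformat()
archive = f"{DEST}/appdata-{stamp}.tar.gz"

# Tar up the appdata share; -C keeps paths relative inside the archive.
subprocess.run(["tar", "-czf", archive, "-C", APPDATA_PARENT, "appdata"],
               check=True)
print(f"Wrote {archive}")
```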
Hostile_18 (posted July 1, author)

15 minutes ago, itimpi said: […]
13 minutes ago, ConnerVT said: […]

Brilliant, thank you both. I'll do that now.
Hostile_18 (posted July 1, author)

I've got everything set up now as @ConnerVT described, and I've also installed the Appdata Backup plugin. It's downloading and unpacking well; it's just the transfer over to the array that's causing a build-up. Presuming there's no way to speed up that transfer, is it best to limit my download speed to match the rate files are moved onto the array? If I left it as is, the cache would fill up in a few hours because the download rate is too fast.
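The arithmetic behind that concern, as a quick back-of-envelope sketch (the rates here are assumptions loosely based on figures mentioned in this thread; measure your own):

```python
# Back-of-envelope: how fast does the cache fill when downloads outpace
# the move to the array? All figures are assumptions -- measure your own.
download_rate = 135    # MB/s, saturated internet connection
array_write_rate = 70  # MB/s, parity-protected spinning-disk writes
free_cache_gb = 3500   # GB of cache left over for downloads

net_fill = download_rate - array_write_rate          # MB/s piling up on cache
hours_until_full = free_cache_gb * 1000 / net_fill / 3600

print(f"Cache gains {net_fill} MB/s; full in ~{hours_until_full:.0f} hours")
# Capping downloads at or below array_write_rate makes net_fill <= 0,
# so the cache never fills no matter how long the queue runs.
```

With these numbers the cache buys roughly 15 hours of headroom, which matches the "fills up after a few hours of downloading" experience once less free space is available.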
PPH (posted July 1)

You could also try enabling "turbo writes" (Settings -> Disk Settings -> Tunable (md_write_method), set to "reconstruct write"). This can improve write performance to the array, but it will spin up all array disks while performing write operations. Give it a try, and if there's no difference in write performance, switch the setting back to the default.

Other options I have used within Sabnzbd are disabling the "direct unpack" option and enabling the "pause downloading during post processing" option, to slow Sabnzbd down so it only processes one item at a time. This can help when the disks can't keep up. It might take longer overall, but it can save you from reaching the point where the download location is full, which will stop all downloads!
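Why turbo write needs every disk spinning: with single parity, the default read-modify-write only touches the target disk and the parity disk, while reconstruct write re-derives parity from all the other data disks, trading a whole-array spin-up for faster streaming writes. A toy sketch of the parity math (pure illustration, not Unraid's actual code):

```python
from functools import reduce
from operator import xor

# Toy model of single parity: the parity byte is the XOR of the data bytes.
disks = [0b1010, 0b0110, 0b1111]      # data disks
parity = reduce(xor, disks)           # parity disk

target, new_value = 1, 0b0001         # overwrite disk 1 with a new byte

# Default read-modify-write: read old data + old parity (2 disks spin),
# then new_parity = old_parity XOR old_data XOR new_data.
rmw_parity = parity ^ disks[target] ^ new_value

# Reconstruct ("turbo") write: read every OTHER data disk and recompute
# parity from scratch -- faster streaming, but the whole array spins up.
others = [d for i, d in enumerate(disks) if i != target]
turbo_parity = reduce(xor, others + [new_value])

assert rmw_parity == turbo_parity     # both methods agree
print(f"new parity: {rmw_parity:04b}")
```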
Hostile_18 (posted July 1, author)

29 minutes ago, PPH said: […]

Thank you, I'll look into those. For the moment I've reduced my download speed from 135 MB/s to 99 MB/s, and it's now downloading at very close to the rate it transfers. I think this might actually be a good long-term option, as it keeps bandwidth free for other users and activities, rather than swinging between full utilisation and nothing.
ConnerVT (posted July 1)

You have a 4 TB NVMe? Mine is only 2 TB (less ~130 GB of stuff that always lives there). I have never filled the cache from Sabnzbd downloads, and I have grabbed some sizable things at times. I do keep Sabnzbd limited to 80% (96 MB/s) of my 1G internet speed, just so I don't saturate the line and others have some bandwidth. (My wife will let me know if she can't stream her YouTube.) Writes to my array are in the 60-80 MB/s range.
Hostile_18 (posted July 2, author)

On 7/2/2024 at 12:56 AM, ConnerVT said: […]

Yeah, 4 TB NVMe. I've had downloads running all night, and it's only filling up marginally at 80% download speed. I've got non-stop downloading for the next few weeks, but after that it will return to almost nothing and I'll be able to set the speed back up higher.

I keep getting this notice in Sonarr, but I'm not sure why, as everything appears to be working correctly. I must have some setting slightly wrong? "Remote download client SABnzbd places downloads in /data/Complete but this is not a valid Windows path. Review your remote path mappings and download client settings."

Edit: That warning message is gone now, without any changes. Before, I only saw it when the cache got full. Perhaps it appears when a file very occasionally fails to be grabbed.

Edit 2: And it's back again, a minute after I wrote this, lol.
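For context on that warning: an Arr's remote path mappings translate the path the download client reports into a path the Arr itself can reach. A toy sketch of the idea, with an entirely hypothetical mapping table (in a Docker setup the usual fix is giving both containers matching volume mounts rather than adding mappings):

```python
# Toy model of an Arr's "remote path mappings": translate the path the
# download client reports into one the Arr can see. The table below is
# entirely hypothetical -- in Docker the usual fix is matching volume
# mounts in both containers, not adding mappings.
REMOTE_PATH_MAPPINGS = {
    # (download client host, remote folder): local folder
    ("sabnzbd", "/data/Complete/"): "/data/Complete/",
}

def translate(host: str, reported: str) -> str:
    for (h, remote), local in REMOTE_PATH_MAPPINGS.items():
        if h == host and reported.startswith(remote):
            return local + reported[len(remote):]
    # No mapping matched: the Arr uses the path as-is and warns if it
    # doesn't look valid for the platform it thinks it's running on.
    return reported

print(translate("sabnzbd", "/data/Complete/Show.S01E01.mkv"))
```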