[Solved] Speed Up Cache To Disk Write



That would require a lot of manual work or a shell script, which I don't really care about that much considering they're gone in a few days/weeks anyway. I have about 2.5 TB free and will be adding another 8 TB once my new motherboard arrives in the next few weeks. My biggest issues were the cache-to-disk transfer rate and the cache drive constantly filling up; at least the latter seems to be resolved (I haven't been able to test it since I can't find anything huge to download at the moment).

 

Edit: decided to grab a bunch of freeleech torrents to help bump up my ratio, so I currently have 224 GB of torrents downloading (only 7 torrents). It's not enough to fill up the cache drive all at once, but let's see if anything freaks out.

 

Edit 2: Looks like we're good to go. 2 of them wouldn't download so I canceled them, and 4 are done, which is roughly 100-150 GB that went to the cache. Checking now, there's only 113 GB on the cache drive, 55 GB of which are torrents; one hasn't been moved yet and one is still downloading.

Edited by brando56894
On 8/15/2017 at 9:14 AM, brando56894 said:

At a few points there, IDK if the mover got confused or what, but it looked like it was moving data back into the cache almost as quickly as I was moving stuff off of it... and it wasn't stuff that I was currently downloading or post-processing (for example, it had the whole series of South Park on there for some reason) :S I don't know if this was a result of cache=yes, so I changed it to cache=prefer, cleared out everything in downloads, movies and shows (the majority of the data; the other shares total maybe 50 GB), and let it continue downloading for a few hours. I checked on it hours later and once again the cache drive was 100% full...

 

Right now my only solution is to disable caching on the downloads, movies and shows shares, which sucks because those are the most active shares and would definitely benefit from the write cache. I just had Radarr post-process 11 HD/UHD movies and it took 2 freaking hours to do so!

 

 

The trick is to make a share to handle your downloads and set Share Settings -> Use cache disk to "only". Configure all of your post-download automation tasks (par, unrar, repair, unzip, rename) to run in a folder on this cache-only share. It will be unprotected, but lightning fast, and it won't put a strain on the protected array.
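If you want to verify it outside the GUI: Unraid keeps per-share settings in small config files on the flash drive, so a cache-only downloads share should look roughly like this (a sketch; the share name is just an example and the other fields vary by version):

  /boot/config/shares/downloads.cfg
    shareUseCache="only"   <- "only" keeps the files on the cache; the mover leaves this share alone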

 

Secondly, train your Sonarr and Radarr to fetch the finished product from the "downloads" share and save it on the media share. The mover isn't involved on my media share at all; it wouldn't make any sense there.
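In practice that just means giving the containers a path on each share. With the usual Unraid Docker volume mappings it could look something like this (container paths and share names are examples, not a prescription):

  /downloads  ->  /mnt/cache/downloads   (the cache-only share)
  /media      ->  /mnt/user/media        (the array share)

Sonarr and Radarr then import from /downloads and write the final files to /media.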

 

That's how I have it set up, and imho it's the right way to do it.

Edited by zonderling

Thanks for the suggestion. My problems seemed to be with how I had the cache set up on the shares. I understand why you have yours set up the way that you do, but I think having caching enabled on my multimedia shares is useful so that the whole process is sped up (moving from downloads to media is quick since it's just going from one sector on the SSD to another) and Sonarr/Radarr/NZBget can quickly move on to the next one, while the mover does its job in the background to clear space from the cache. I don't think I'll be running out of cache space soon since I have most of the gigantic things downloaded, and it will only be downloading maybe one or two at a time now, which definitely won't fill up 500 GB.

5 hours ago, brando56894 said:

Thanks for the suggestion. My problems seemed to be with how I had the cache set up on the shares. I understand why you have yours set up the way that you do, but I think having caching enabled on my multimedia shares is useful so that the whole process is sped up (moving from downloads to media is quick since it's just going from one sector on the SSD to another) and Sonarr/Radarr/NZBget can quickly move on to the next one, while the mover does its job in the background to clear space from the cache. I don't think I'll be running out of cache space soon since I have most of the gigantic things downloaded, and it will only be downloading maybe one or two at a time now, which definitely won't fill up 500 GB.

I think I understand your setup, more or less... I don't quite get the part where you describe how your process is sped up by enabling the cache: "moving from one sector to another on the SSD."

The way I do it, I keep all of the, let's call it "workload", on the cache drive all the time (the cache typically being an SSD device). Your process has the workload partly on the cache and partly on the array, because each time the mover runs, it moves some of your unprocessed rar files to the slower spinners.

 

Next phase: when SABnzbd (or whatever other docker you use for that matter) starts unpacking, it's not on your super fast SSD but on the slower spinners. To make matters worse, your CPU needs to calculate parity, which causes even more load that you can easily avoid.

 

The final point is that unRAID really excels when you can organize its writes to the array sequentially, in other words one by one. My workflow does that.
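To put rough numbers on that (back-of-the-envelope, assuming unRAID's default read/modify/write parity method): every write to the protected array first reads the old data block and the old parity block, then writes the new data and the new parity. Two reads plus two writes per block means a spinner that streams at, say, 150 MB/s on its own often lands somewhere around 50-75 MB/s for array writes, while the cache SSD writes at full speed with no parity overhead.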

 

I'm not saying yours is no good, I'm only offering a way to optimize ;)

 

Edit: reading it back, I come to the conclusion that bjp999 said the same as me :)

Edited by zonderling
21 hours ago, zonderling said:

I think I understand your setup, more or less... I don't quite get the part where you describe how your process is sped up by enabling the cache: "moving from one sector to another on the SSD."

 

All I meant was that it's faster to transfer data locally on the SSD than it is to go from SSD to HDD.

 

21 hours ago, zonderling said:

The way I do it, I keep all of the, let's call it "workload", on the cache drive all the time (the cache typically being an SSD device). Your process has the workload partly on the cache and partly on the array, because each time the mover runs, it moves some of your unprocessed rar files to the slower spinners.

 

I understand what you're saying, and I guess I'll give it a try. I just have to reconfigure my shares, since downloads is one share and I would have to split it into usenet and torrents.

Edited by brando56894
2 hours ago, brando56894 said:

I understand what you're saying, and I guess I'll give it a try. I just have to reconfigure my shares, since downloads is one share and I would have to split it into usenet and torrents.

Sure, whatever works for you :)

Just out of curiosity... why would you split usenet and torrents into different shares? Why not make the distinction at the folder level, i.e. a folder for your usenet downloads and a folder for your torrents, both on the same "downloads" share? Just out of personal interest in the subject; I only use usenet, no torrents.

1 hour ago, zonderling said:

Sure, whatever works for you :)

Just out of curiosity... why would you split usenet and torrents into different shares? Why not make the distinction at the folder level, i.e. a folder for your usenet downloads and a folder for your torrents, both on the same "downloads" share? Just out of personal interest in the subject; I only use usenet, no torrents.

There are typically seeding requirements for torrents. Seeding is where others are able to get the files from you; you're supposed to "pay forward" for your downloads. While they're seeding, the files cannot be moved or altered in any way.


Might not help, but I got sick of cache limitations when I was doing major updates (upgrading a season of a big show to Blu-ray when it's released, etc.), so I finally just bit the bullet and used an old 240 GB SSD as an unassigned device for NZB downloading and an old 4 TB spinner to host my torrents. It was just less painful than the alternatives: an SSD cache with enough room (a large $$$ commitment) or a large spinner that is slow.

 

Obviously, I had the spare SATA ports to make this work, but I've found it's MUCH more stable... now my NZB downloads don't impact my production array even if I get a set of Remux-1080p downloads that don't post-process properly, and I don't have to spend $$$ on large SSD space for torrents that would see no benefit from it.

 

Obviously the solution depends on the situation, but think of it more as finding the trade-offs that work for you rather than trying to stick to one method of operation... just my recommendation.
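If anyone wants to copy this: the Unassigned Devices plugin mounts drives outside the array under /mnt/disks/, so the docker mappings end up something like this (the disk names are just examples):

  /mnt/disks/nzb_ssd   ->  NZB client download/post-processing paths
  /mnt/disks/torrents  ->  torrent client download/seeding paths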

16 hours ago, zonderling said:

Sure, whatever works for you :)

Just out of curiosity... why would you split usenet and torrents into different shares? Why not make the distinction at the folder level, i.e. a folder for your usenet downloads and a folder for your torrents, both on the same "downloads" share? Just out of personal interest in the subject; I only use usenet, no torrents.

 

 

As the other guys said, there's no reason to cache torrents, and I have to seed mine; I currently have 2 TB of seeding torrents. So usenet will be cache-only and torrents will be no-cache, since they'll be written directly to downloads and then copied to either movies or shows.
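In share-settings terms (the standard "Use cache disk" option), the plan is roughly:

  usenet:   Use cache disk = Only  (all download and post-processing work stays on the SSD)
  torrents: Use cache disk = No    (written straight to the array, so seeding files never get moved)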

On 8/19/2017 at 6:17 AM, brando56894 said:

 

 

As the other guys said, there's no reason to cache torrents, and I have to seed mine; I currently have 2 TB of seeding torrents. So usenet will be cache-only and torrents will be no-cache, since they'll be written directly to downloads and then copied to either movies or shows.

Thx guys.  Sure sounds like an optimized workflow to me.

On 8/18/2017 at 4:36 PM, Tybio said:

Might not help, but I got sick of cache limitations when I was doing major updates (upgrading a season of a big show to Blu-ray when it's released, etc.), so I finally just bit the bullet and used an old 240 GB SSD as an unassigned device for NZB downloading and an old 4 TB spinner to host my torrents. It was just less painful than the alternatives: an SSD cache with enough room (a large $$$ commitment) or a large spinner that is slow.

 

Obviously, I had the spare SATA ports to make this work, but I've found it's MUCH more stable... now my NZB downloads don't impact my production array even if I get a set of Remux-1080p downloads that don't post-process properly, and I don't have to spend $$$ on large SSD space for torrents that would see no benefit from it.

 

Obviously the solution depends on the situation, but think of it more as finding the trade-offs that work for you rather than trying to stick to one method of operation... just my recommendation.

I had a similar limitation in my first setup: my 250 GiB cache drive would fill up to 100%.

Now I have it configured so that SABnzbd's "incomplete" folder is on the cache drive and the "complete" folder is on another share on the array.

For very big downloads, all the reconstructive work (par, unrar, repair, rename) is done on the SSD, and the final copy goes to a share on the array; I've called mine "scratchdisk". As the last step, Sonarr and Radarr pick up the payload from the scratchdisk and move it to the definitive media store.
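In SABnzbd those are just the two folder settings under Config -> Folders; on a typical Unraid setup it would look something like this (the exact paths are examples, not a prescription):

  Temporary Download Folder:  /mnt/cache/downloads/incomplete
  Completed Download Folder:  /mnt/user/scratchdisk/complete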

 

This workflow works for me, but as the saying goes, there are many roads to Rome ;)

