6.4.1: Gigabit Internet Downloads Saturating HDD Processing



I upgraded to gigabit internet speeds recently, and I'm noticing some new issues.

 

My setup: 2x Xeon E5-2670, 128GB RAM, 24TB array of mixed drives (7200rpm HGST and Seagate IronWolf) + 6TB IronWolf parity, 256GB SSD for cache and VM storage

 

When using SABnzbd for downloading, my very first download will max out the connection at 110MB/sec, but subsequent downloads drop as low as 5MB/sec, normally hovering between 10-25MB/sec. SAB is set to automatically do post-processing by decompressing and then moving the file.

 

I believe the hard drives are the bottleneck: my CPU load is low, RAM usage is low, and speeds only drop while files are being processed. If nothing is processing, downloads immediately go back up to max speed.

 

I don't really want to use the cache drive since I have two gaming VMs on there for my kids.

 

In what ways can I optimize this? I suppose I could add another 256GB SSD, run them in RAID-0, put one VM onto each SSD, and then use the cache pool for my downloads? Is this a workable setup? It seems like it wouldn't impact the VMs too much.

 

I would appreciate any advice! Thanks!


Do you need all that RAM all the time? Not sure if this will work as I don't have this much RAM to test it. Just a theory here...

Apply the Plex transcoding method here. Let the usenet downloads extract and repair to RAM. Sonarr and Radarr will scoop out of the RAM and move to the array. The key is to not let it blow out so you may have to throttle the downloads a bit to find that sweet spot. But in this case less is more speed! No SSD wear either with this method.
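The "find that sweet spot" sizing above can be roughed out with some quick arithmetic. This is a minimal sketch; the 2x multiplier (compressed download plus its extracted copy both living in RAM at once) and the headroom figure are assumptions for illustration, not measured values:

```python
# Rough sizing for an extract-in-RAM download workflow.
# Assumption: peak tmpfs usage is about 2x the largest job
# (the compressed download plus its extracted copy), padded
# with some headroom so the RAM disk doesn't "blow out".

def ram_disk_size_gb(largest_job_gb, headroom_factor=1.25):
    """Suggested RAM-disk size: 2x the largest job, plus headroom."""
    return 2 * largest_job_gb * headroom_factor

def ram_left_for_system_gb(total_ram_gb, ram_disk_gb, vm_ram_gb):
    """RAM remaining for the OS and I/O buffering after carving out
    the download RAM disk and the VMs' allocations."""
    return total_ram_gb - ram_disk_gb - vm_ram_gb

disk = ram_disk_size_gb(20)                   # 20GB largest download -> 50GB RAM disk
left = ram_left_for_system_gb(128, disk, 32)  # e.g. two gaming VMs at 16GB each
print(disk, left)                             # 50.0 46.0
```

With 128GB total, even a generous RAM disk leaves plenty for the VMs, which is why this box is a good candidate for the approach.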

26 minutes ago, itimpi said:

You could have a drive mounted outside the main array (using the Unassigned Devices plugin) for the downloading and only have the files moved to the main array for final storage.  

 

Thank you for the suggestion. Wouldn't I still run into similar issues though since it would be downloading and extracting/processing simultaneously? Or are you suggesting something like a separate SSD?

1 minute ago, digiblur said:

Do you need all that RAM all the time? Not sure if this will work as I don't have this much RAM to test it. Just a theory here...

Apply the Plex transcoding method here. Let the usenet downloads extract and repair to RAM. Sonarr and Radarr will scoop out of the RAM and move to the array. The key is to not let it blow out so you may have to throttle the downloads a bit to find that sweet spot. But in this case less is more speed! No SSD wear either with this method.


Interesting, I'd like to read more about the Plex transcoding method. Is this something that has been documented somewhere? Thanks!

Just now, bnr32jason said:

 

Thank you for the suggestion. Wouldn't I still run into similar issues though since it would be downloading and extracting/processing simultaneously? Or are you suggesting something like a separate SSD?

A disk that is outside the array will have much better write performance than one in the array (where each write operation actually involves 2 reads and 2 writes). An SSD would give even better performance than a HDD, but if you already have the HDD spare, you could see how much that improves things before deciding whether to invest in an SSD for this purpose.
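The "2 reads and 2 writes" overhead comes from the read-modify-write parity update: to change one data block, the array reads the old data and old parity, recomputes parity with XOR, then writes both blocks back. A toy sketch of that bookkeeping (single parity, byte-sized "blocks" purely for illustration):

```python
# Read-modify-write parity update for a single-parity array.
# Overwriting one data block requires: read old data, read old
# parity, compute new_parity = old_parity ^ old_data ^ new_data,
# then write the data block and the parity block = 2 reads + 2 writes.

def rmw_write(data_disks, parity, disk_idx, new_block):
    old_block = data_disks[disk_idx]                 # read #1: old data
    old_parity = parity[0]                           # read #2: old parity
    parity[0] = old_parity ^ old_block ^ new_block   # write #1: new parity
    data_disks[disk_idx] = new_block                 # write #2: new data
    return 4  # total disk operations for one logical write

data = [0b1010, 0b0110, 0b0001]
parity = [data[0] ^ data[1] ^ data[2]]
ops = rmw_write(data, parity, 1, 0b1111)
# Parity still equals the XOR of all data blocks after the update:
assert parity[0] == data[0] ^ data[1] ^ data[2]
print(ops)  # 4
```

A disk outside the array skips all of this: one logical write is one physical write, which is why the unassigned device is so much faster for download scratch space.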

8 minutes ago, digiblur said:

Do you need all that RAM all the time? Not sure if this will work as I don't have this much RAM to test it. Just a theory here...

Apply the Plex transcoding method here. Let the usenet downloads extract and repair to RAM. Sonarr and Radarr will scoop out of the RAM and move to the array. The key is to not let it blow out so you may have to throttle the downloads a bit to find that sweet spot. But in this case less is more speed! No SSD wear either with this method.

Any RAM used up for transcoding would be RAM unavailable for I/O buffering, which is where that initial burst of speed is coming from in the first place.


Thanks for the advice everyone.

 

I took my 1TB 2.5" drive, which is only 5400rpm, set it up as an unassigned device, and got it set up with SAB.

It's working better, but not to my satisfaction just yet. I'm not getting any dips down to 5MB/sec anymore, but it's still dipping to around 30MB/sec, mostly staying around 50-60. So that's about twice as fast as before.

 

So, now that I know this works, I'm debating going to a 7200rpm drive or an SSD. Would a 7200rpm drive have a significant impact over 5400rpm? I have a 256GB Crucial M4 MLC SSD I could use, but I don't have a spare 7200rpm drive lying around to test. From what I've read, MLC drives don't have as many wear issues as TLC SSDs, so I think I should be fine even using it as a cache drive.
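The wear question can be sanity-checked with back-of-the-envelope math. Treat the numbers below as assumptions: the 72TB total-bytes-written figure is commonly quoted for the Crucial M4 line but should be checked against the drive's actual spec sheet, and the 2x factor assumes each download is written roughly twice (download plus extracted copy):

```python
# Back-of-the-envelope SSD endurance estimate for a download cache.
# Assumptions: a 72TB total-bytes-written (TBW) rating, and each
# download hitting the SSD roughly twice (download + extraction).

def years_of_endurance(tbw_tb, daily_download_gb, write_amplification=2.0):
    """Years until the rated TBW is exhausted at a given daily workload."""
    daily_writes_tb = daily_download_gb * write_amplification / 1000
    return tbw_tb / daily_writes_tb / 365

years = years_of_endurance(72, 50)  # 50GB of downloads per day
print(round(years, 1))              # ~2.0 years
```

Even a heavy 50GB/day habit takes a couple of years to burn through a 72TB rating, and lighter use stretches that proportionally, which supports the "MLC should be fine" conclusion.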

13 hours ago, bnr32jason said:

So, now that I know this works, I'm debating going to a 7200rpm drive or an SSD. Would a 7200rpm drive have a significant impact over 5400rpm? I have a 256GB Crucial M4 MLC SSD I could use, but I don't have a spare 7200rpm drive lying around to test.

 

I learned long ago that if I'm considering two doable options, knowing one would perform better than the other, and I end up choosing the lesser of the two, it'll bother me forever and I'll always wonder how much better the other option would be. More than likely, I'll end up switching to the better of the two :P.

 

You've got the M4 already; I'd put that in and call this solved.
Caveat: unless that's the SSD you mentioned in your first post, the one serving as the cache drive and hosting the VMs. If that's the case, I would keep that SSD for the VMs.

 

If you're looking for more suggestions, personally, this is what I would do (and have done for myself).

  1. One large SSD (256GB or more). Install it as the cache drive and save downloads there.
  2. A separate SSD for each VM. Since most data should be getting saved to the array, these drives don't need to be large; big enough for the OS and apps (256GB or less, and 120GB should be fine). If the VMs are not used concurrently you could even store them on the same SSD, but if they are, separate drives avoid any potential performance bottlenecks.

 

