Fluctuating transfer speed to unraid (smbd/shfs issue?)



Hi. I did my best to search through the forums, and I couldn't find anyone with the same transfer-related problem as me.

 

Transfers to the unraid server from my Windows computer fluctuate, dropping to 0 MB/s and then back up again. CPU usage of shfs and smbd also skyrockets to near 100% at the same time. (CPU usage fluctuates too; I'm assuming shfs/smbd may be causing the problem.)

 

I'm running 5.0-rc8a with an empty cache drive.

File transfer is to a user share using the cache drive.

Vanilla install with only Unmenu and cache-dirs added (I've tried disabling cache-dirs)

I've tried rebooting.

I've verified it's not a cable or Windows issue: transfers are fine to an Ubuntu box from Windows using the same cables, and transfers from the Ubuntu box to unraid also fluctuate in the same way.

Transfers from unraid to the other boxes are consistent and speedy.

unraid box is an HP Proliant N40L with a 1.5GHz AMD Turion II

 

Let me know if any further info is needed. I absolutely love unraid otherwise. My license was money well spent, so far!

[Attached screenshot: UnraidTransfer.png]


The files I was transferring were 4-5GB+ each, queued and transferring one at a time in a single Explorer copy dialogue. (And like I mentioned before, I tested another large transfer from an Ubuntu installation, with the exact same fluctuating result.) I'm not sure of the exact timing, but it seemed like every 5-10 seconds the transfer speed would drop to 0 or close to it.

 

I made sure nothing else was accessing the server when I noticed the issue, and I was using unmenu pages to check the syslog/top processes.

 

I've been busy, but when I have free time I was planning on seeing if I get the same results with a direct transfer to the cache drive. Could it be an issue with the user share layer?

 

Let me know what else I can check, troubleshooting-wise.

 

Thanks for your response.


Well, I just tried a transfer directly to the cache drive with the same poor result. Transfer speed still drops low, or to 0, every few seconds. The only difference I see is that without the user share layer, it's the smbd process alone that has its CPU usage spike to 95%+.
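For anyone wanting to attribute the spikes from a shell on the server rather than through the unmenu pages, something like this should work; these are standard Linux tools (procps), nothing unraid-specific, and the process names are just the ones mentioned in this thread:

```shell
# One-shot snapshot of the busiest processes, sorted by CPU.
# Run while a transfer is stalling to see whether smbd/shfs/nfsd
# are the ones pegging the CPU.
ps -eo pid,pcpu,pmem,comm --sort=-pcpu | head -n 10

# Filter for just the suspects ("|| true" keeps the pipeline
# succeeding even when none of them are currently running).
ps -eo pid,pcpu,comm | grep -E 'smbd|shfs|nfsd' || true
```

Running it a few times in a row (or under `watch -n1`) during a transfer makes the correlation between the stalls and the CPU spikes easy to see.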

 

Windows 8 seems to have zero support for NFS, so I couldn't try that from this box... but I did try a large transfer from the Ubuntu box to the user share, with the same result (fluctuating transfer speed, and high CPU usage by shfs and 4 nfsd processes this time).

 

Now I'm kind of at a loss as to what to try next, or what the problem might be.

 

Any thoughts?

 


What happens when you transfer between Ubuntu and Windows?

 

I get steady transfer speeds of 50-60 MB/s in either direction.

 

Read speeds from unraid are fine and steady; the issue seems to be something with actually writing to the server.

 

I just realized I didn't attempt a direct write to one of the array disks rather than the cache drive.  I've been assuming that if it were a controller/cable/drive issue, there would be errors, or reads would be affected as well.

I'll try that later.


I tested an N36L with LAN Speed Test (Lite) software. I see similar behavior, but it never drops to zero. However, throughput does not seem to be affected adversely; I get about 36MBps writes and 68MBps reads. I think it may be due to caching on the server filling the buffers, which causes TCP to back off. Once the server's disk buffers are emptied, TCP is able to ramp up again until the buffers are full. What is the overall write speed?
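One way to sanity-check this buffer-fill theory (my suggestion, not something tested in this thread) is to watch the kernel's dirty-page counters on the server while a transfer runs. These files exist on any Linux box:

```shell
# Amount of written data sitting in RAM waiting to be flushed to disk.
# If the "buffers fill, TCP backs off, buffers drain" theory holds,
# "Dirty" should climb while the transfer is fast and fall during stalls.
grep -E '^(Dirty|Writeback):' /proc/meminfo

# The writeback thresholds (percent of reclaimable memory) that govern
# when background and blocking flushes kick in.
cat /proc/sys/vm/dirty_background_ratio /proc/sys/vm/dirty_ratio
```

Wrapping the first command in `watch -n1` during a copy makes the fill/drain cycle visible in real time, if that is in fact what's happening.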


Thanks for the reply. I think I was getting averaged write speed somewhere around 25-30MBps. Not awful, but not great. I could actually live with that (though faster is always better!) if things seemed normal otherwise, but the fluctuation combined with high CPU makes me wonder how trustworthy things are right now.

 

I actually had been doing some reading on other forums, and I was planning on testing some settings this weekend when I get a chance to set the server up with a monitor. When I first set it up, I can't recall if write caching was enabled in the BIOS. Most setup guides seem to recommend enabling it.

 

hdparm says that write caching is currently disabled for all my drives. Not sure if that's due to BIOS settings, or if unraid has it disabled by default.
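For reference, the hdparm invocations involved look like this (`-W` alone reports the drive's write-cache flag, `-W1` enables it, `-W0` disables it; run as root, and the `/dev/sd?` device names are just the usual Linux SATA naming):

```shell
# Report the on-drive write-cache setting for every SATA disk present.
# The guards make this a harmless no-op on a machine with no such
# disks or without hdparm installed.
command -v hdparm >/dev/null 2>&1 || { echo "hdparm not installed"; exit 0; }
for disk in /dev/sd?; do
    [ -b "$disk" ] || continue        # glob didn't match a real block device
    echo "== $disk =="
    hdparm -W "$disk"                 # "write-caching = 0 (off)" means disabled
    # To enable it from the OS side:
    # hdparm -W1 "$disk"
done
```

Note that a setting applied with `hdparm -W1` doesn't persist across reboots on its own; the BIOS/controller setting is the durable place to change it.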

 

Anyway, I'll report back after I check my BIOS.


Well, it turns out having write caching disabled in the BIOS was to blame.

With caching enabled, I can copy to the cache drive via user share at 55-60MB/s steady.

 

I'm curious how most people have caching set up in their servers. I plan on setting up a UPS this week, but does having caching enabled make a likely catastrophe if I were to lose power while the mover script was running?

 

Thanks for your responses and help.
