
SMB Shares Slow, Unresponsive, and Crashing on Multiple Client Machines


Afrobaron


I have two Windows 10 machines and a Mac on my network. All three have recently been having issues with my SMB shares from Unraid. The common issue is that I can open Windows Explorer (or Mac Finder) and browse to a shared folder; sometimes the share will open and load its contents quickly, other times it takes up to 30 seconds. Browsing further into subfolders is hit or miss on whether I get a quick load. Additionally, opening files, even small text files, can take an extended period, and the same goes for saving file(s) from within an application like Excel, Notepad++, etc.

 

I have user accounts on Unraid, have assigned them to shares, and have Windows Credential Manager set up with the same user accounts (I used tutorials from here to do this).
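
As a quick sanity check on the accounts themselves, timing a directory listing with smbclient from the Unraid console (or any Linux machine) can separate credential problems from browsing slowness. The server, share, and user names below are placeholders:

# Time a directory listing over SMB; smbclient will prompt for the
# account password. Replace "Hades", "MyShare", and "someuser" with
# your own server, share, and user names.
time smbclient //Hades/MyShare -U someuser -c 'ls'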


This evening it got to the point where Windows Explorer had to be force-closed in order to recover. The Unraid syslog shows the entries in the code block below from the same time frame.

 

Added note: all Unraid pages, and any Docker container with a web GUI, work perfectly fine and show no sign of anything wrong. It is just the SMB shares.

 

This issue may have started after I used unBALANCE so I could remove a drive that was starting to show errors. I followed the procedures for the drive removal, and the array reconfiguration went fine with no data loss. I am not sure if my unBALANCE run moved anything from other drives or pools that shouldn't have moved (I don't think it did). However, it was after this move that I started to experience the slow responses, and it has gotten worse over time.
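
For reference, Unraid mounts each array disk at /mnt/diskN and each pool at /mnt/<poolname>, so a small loop like the sketch below shows which devices actually hold a share's files after an unBALANCE run (the share name "Media" is a placeholder):

# Show which disks/pools contain files for a given share.
# "Media" is a placeholder; add other pool mount points as needed.
for d in /mnt/disk* /mnt/cache*; do
    [ -d "$d/Media" ] && du -sh "$d/Media"
done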

 

 

Running Unraid 6.9.2 (diagnostics attached)

 

 

Sep 20 19:50:06 Hades smbd[14454]:   sys_path_to_bdev() failed for path [.]!
Sep 20 19:50:06 Hades smbd[14454]: [2021/09/20 19:50:06.980729,  0] ../../source3/lib/sysquotas.c:565(sys_get_quota)
Sep 20 19:50:06 Hades smbd[14454]:   sys_path_to_bdev() failed for path [.]!
Sep 20 19:50:06 Hades smbd[14454]: [2021/09/20 19:50:06.980753,  0] ../../source3/lib/sysquotas.c:565(sys_get_quota)
Sep 20 19:50:06 Hades smbd[14454]:   sys_path_to_bdev() failed for path [.]!
Sep 20 19:50:06 Hades smbd[14454]: [2021/09/20 19:50:06.980777,  0] ../../source3/lib/sysquotas.c:565(sys_get_quota)
Sep 20 19:50:06 Hades smbd[14454]:   sys_path_to_bdev() failed for path [.]!
Sep 20 19:50:06 Hades smbd[14454]: [2021/09/20 19:50:06.982969,  0] ../../source3/lib/sysquotas.c:565(sys_get_quota)
Sep 20 19:50:06 Hades smbd[14454]:   sys_path_to_bdev() failed for path [.]!
Sep 20 19:50:06 Hades smbd[14454]: [2021/09/20 19:50:06.983008,  0] ../../source3/lib/sysquotas.c:565(sys_get_quota)
Sep 20 19:50:06 Hades smbd[14454]:   sys_path_to_bdev() failed for path [.]!
Sep 20 19:50:06 Hades smbd[14454]: [2021/09/20 19:50:06.983039,  0] ../../source3/lib/sysquotas.c:565(sys_get_quota)
Sep 20 19:50:06 Hades smbd[14454]:   sys_path_to_bdev() failed for path [.]!
Sep 20 19:50:06 Hades smbd[14454]: [2021/09/20 19:50:06.983068,  0] ../../source3/lib/sysquotas.c:565(sys_get_quota)
Sep 20 19:50:06 Hades smbd[14454]:   sys_path_to_bdev() failed for path [.]!
Sep 20 19:54:24 Hades smbd[14454]: [2021/09/20 19:54:24.170706,  0] ../../source3/smbd/smb2_read.c:255(smb2_sendfile_send_data)
Sep 20 19:54:24 Hades smbd[14454]:   smb2_sendfile_send_data: sendfile failed for file Light Saber/_Completed Saber Configs/_Proffie/ProffieOS/blades/blade_id.h (Connection reset by peer) for client ipv4:10.0.0.34:53979. Terminating
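
For context, the repeated sys_path_to_bdev() lines come from Samba's disk-quota probing and are mostly log noise; the last line shows a sendfile transfer dying when the client reset the connection. One experiment I have not tried yet is disabling Samba's sendfile path, using the standard "use sendfile" global option in the Samba extra configuration (Settings > SMB on Unraid):

# Possible addition to the Samba extra configuration (smb-extra.conf).
# Rules out the sendfile code path, at a small throughput cost.
[global]
    use sendfile = no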

 

hades-diagnostics-20210920-2235.zip

  • 1 year later...

I discovered that most of the time I experienced this issue, I had one or both of the following things going on:
1. A parity check was running, tying up the disk the data was sitting on.

- My fix here was to install the Parity Check Tuning plugin to pause the check during daytime hours, when I was using the server more actively. (A quick way to confirm a check is running is shown in the first sketch after this list.)

 

2. My disks were spun down due to inactivity overnight. This was just a slow initial access, with normal access speeds afterward.

- I was going to create a job that ran each morning to force a read or write on the pool to spin the disks up (roughly the second sketch below), but I ultimately decided against the idea, as there are days, like weekends, when I don't need to access the pool early. So I am just living with the slow initial access when my server has been idle for a long period.
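
On the parity side, a quick way to confirm a check is actually running when shares feel slow is to query the array driver from the console. This assumes Unraid's mdcmd tool and its usual status fields, which can vary between versions:

# Print array state and parity-check progress; mdResyncPos stays
# non-zero while a check or sync is in progress.
/usr/local/sbin/mdcmd status | grep -E 'mdState|mdResync'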
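
And for completeness, the morning spin-up job I considered would have looked something like this (for example, run on weekday mornings via the User Scripts plugin). It reads one block directly from each array device so the read cannot be served from cache; the /dev/md* names follow Unraid 6.9's array numbering, so treat it as a sketch rather than a drop-in script:

#!/bin/bash
# Spin up every array disk with one small uncached read each.
# iflag=direct bypasses the page cache so the read hits the disk.
for dev in /dev/md*; do
    dd if="$dev" of=/dev/null bs=4k count=1 iflag=direct 2>/dev/null
done

A weekday-only cron schedule (e.g. 0 7 * * 1-5) would have sidestepped the weekend objection, but living with one slow first access was simpler.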

 

The only time I saw both conditions at once was when my VM pool was spun down (no VMs running) and my data pool was running a parity check. The VM pool had a share on it that I was trying to move a file to from the data pool, and the combined delays caused some issues. There is very little chance of that happening now that the parity check does not run while I am active.
