the network share is no longer available



I had posted this before and have seen many reports of it, so I thought I would put it up as an issue.

 

Looking for some help.  I have an HP MicroServer with six 3TB HDDs, and up until I got to just over 50% used capacity I was able to transfer large files with no problems - by large I mean 40GB ISO files.  What happens is that when I start the transfer, Windows just times out with "the network share is no longer available".  What I see in unRAID, via unMENU's open files, is the file I am trying to transfer being allocated its storage and counting up in file size, but before the file reaches the size it needs to be, Windows times out and no actual file transfer has taken place.  Hope that makes sense.

 

Update to this: I have moved to new hardware, so that's not the issue, as it is still happening; I am now at 70% on each drive.  Also, once the transfer has failed and the file has grown to the size it was going to be, doing another copy that overwrites the file works fine.  Is there anything I can log or debug to see why it's taking so long to allocate the file storage, causing the timeout?  I have seen it even to the point where the file size is still zero when the error comes up, but then say 30 seconds later the size starts increasing.
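In the meantime, one way to watch the allocation happen from the unRAID console (the disk and file name here are just examples) is:

watch -n 1 ls -ls "/mnt/disk1/Movies/bigfile.iso"    # -s shows allocated blocks alongside the apparent size

You can see the size counting up long before any data actually arrives over the network.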

 

I was also going to do some tests, possibly with a cache drive and a fresh empty drive, to see if it has something to do with allocating the space once the first x number of blocks have been filled.  I don't want to drop my parity to see if it's the parity causing it, if I can help it.

 

Any help greatly appreciated.

Link to comment

Hi,

 

I too am seeing this issue. I recently installed 5.0RC4, and on one particular drive, copying files or doing other things like mkdir over SMB either times out or causes Win7 to error with "Could not find this item. This is no longer located in \\192.168.1.1\disk6\folder".

 

It looks like either Win7 or SMB is disconnecting mid-transfer or mid-operation. Is there a way to debug this to get down to what the issue is?
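One thing I might try (just a sketch; where the Samba config lives may differ by unRAID version) is raising Samba's log level so the disconnect shows up in a log, e.g. adding:

log level = 3
log file = /var/log/samba.%m    # one log file per client machine

to the Samba config and restarting Samba.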

 

Cheers

 

Jfp

Link to comment
  • 3 weeks later...

I have got a few more bits to add to this.  It does not seem to matter if disks are spun down or up.  If I try the copy within 5 minutes after a reboot, the copy is fine; anything after that fails.  If there is a dummy file with the same name, even say 100KB in size, that is going to be overwritten by the large file, it will work (this is my current workaround: copy a small file with the same name to the share, then copy the bigger file over it).  If I use a program that streams the file without trying to create the final file size first, it never fails - e.g. when using Pavtube to transcode something, it writes the file as it's transcoding and never fails even though the file size ends up being 9GB, but if I try to copy a 9GB file it will fail.  I have not been able to test a drive with less than 50% used space, and I have not been able to test a cache drive yet.  I will be adding a cache drive shortly, and I gather this is not going to have the issue, but it will be interesting to see if the mover then has the issue when moving the data from the cache drive.
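The console-side version of that workaround looks like this (the path and name here are just examples):

touch "/mnt/user/Movies/bigfile.iso"    # pre-create an empty file with the target name

and then copy the real file from Windows as normal; since it is now an overwrite, the slow up-front allocation is skipped.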

Link to comment

Ok, so with a cache drive I can copy as large a file as I want, and the process seems to work well.  The file is created at its final size instantly, the copy begins, and obviously I get the full speed that a cache drive can deliver.  The mover script also does not seem to have any issues, but what I noticed is that when the mover script is moving the large files (using rsync by the looks of it), the full file size is never created during the transfer; the file is streamed, i.e. the file size grows as the file is transferred.
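That fits rsync's default behaviour as far as I understand it (a sketch, not necessarily the mover's exact command line): it streams into the destination, so the file grows only as data arrives, with no up-front allocation:

rsync -av /mnt/cache/Movies/big.iso /mnt/disk3/Movies/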

 

So where can I go now to try and find out what is happening?

Are many others seeing this as an issue, or is it only me?

 

Next test I have is to put in a new 3TB drive so I have an empty drive and see what happens.

Link to comment

I have 4 unRAID servers that experienced this, and the only fix I found (after 3 years of unRAID usage) was to add a cache drive. They don't time out anymore, and after the data is transferred to the cache drive, the mover will move the files to the array internally, so it can't time out. This is definitely not related to the 5.0 beta/RC; I've experienced this for years. It is very normal, and I believe most people use cache drives, so they just don't experience it.

 

My advice is to keep using the cache drive as you are. Set it to move files over weekly (or daily). If your share settings are set up correctly, you should never have to transfer directly to a disk.

Link to comment

I had a similar issue - i.e. when copying large multi-GB files, the network seemed to drop out. This never happened on release 4.7, but I was experiencing it on 5.0RC2/3/4. I reverted to 5.0RC1 and it seemed to resolve the issue.

 

More recently I installed an Intel NIC (to replace the onboard Realtek) AND updated to 5.0RC5. I've not seen the issues reappear (i.e. the system seems to be working well); however, I am unsure if that is due to RC5, the Intel NIC, or both.

 

I do not use a cache drive, so it looks like my particular issue was due to 5.0RCs (2/3/4).

 

Alex

Link to comment

I don't think it is a network issue, for a few reasons. The first is that there are no issues when using a cache drive; the second is the way the issue shows itself. When I transfer a file, you can actually watch it build via unMENU: I initiate a copy of a file of, say, 20GB, and I can watch the file size grow before any data starts to be transferred. The timeout happens before the file reaches the full 20GB, and the file continues to grow even after the timeout error, at which point still no data has been transferred. Once the file has finished growing, you do the copy again, tell it to overwrite, and the copy goes fine. If I transfer a small file to the server it goes fine, and if I then overwrite that small file with a 20GB file it also works fine, as the file then seems to build as the data is being copied. It only seems to happen when you transfer a new large file and it attempts to create the file at its final size before data is transferred. It also did not start happening until my disks were at 50 percent.
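A test I can do to confirm the allocation itself is the slow step (paths here are examples) is to time building a file directly on the array from the console:

time dd if=/dev/zero of=/mnt/disk1/Movies/alloctest bs=1M count=0 seek=20480    # sparse 20GB file; should return instantly
time dd if=/dev/zero of=/mnt/disk1/Movies/alloctest2 bs=1M count=2048           # 2GB of real zeros; shows the true write speed
rm /mnt/disk1/Movies/alloctest /mnt/disk1/Movies/alloctest2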

Link to comment

Please try this.  Copy the attached file, 'smb-extra.conf' to the 'config' directory of your flash, then Stop array and Start array.

 

This file sets an option called "strict allocate" to "yes".  Let me know if this has any effect.
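(The relevant line in the file is simply:

strict allocate = yes

unRAID pulls smb-extra.conf into the [global] section of smb.conf, so a bare line like this should be all that's needed.)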

 

After the array has started, you can verify the Samba setting of this flag by typing this command:

 

testparm -sv | grep "strict allocate"
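If the option took effect, you should see it reported as:

strict allocate = Yes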

smb-extra.conf

Link to comment

Please try this.  Copy the attached file, 'smb-extra.conf' to the 'config' directory of your flash, then Stop array and Start array.

 

This file sets an option called "strict allocate" to "yes".  Let me know if this has any effect.

 

After the array has started, you can verify the Samba setting of this flag by typing this command:

 

testparm -sv | grep "strict allocate"

 

I just checked and I already have a file of that name in there with

max protocol = SMB2

in it.  Do I just add that line from your file to my existing file?

Link to comment

This file sets an option called "strict allocate" to "yes".  Let me know if this has any effect.

The description given for this option looks like it could be helpful for all versions of unRAID, not just the latest RC. Is there any reason NOT to use it on 4.7?

 

It requires a certain level of the Samba software and glibc in order to have any effect, so it likely isn't helpful for all versions of unRAID.

 

http://wiki.samba.org/index.php/Linux_Performance#How_do_I_get_the_patch_.3F
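You can check what your server has from the console:

smbd -V          # Samba version
ldd --version    # glibc version; the fallocate() wrapper arrived in glibc 2.10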

Link to comment

Please try this.  Copy the attached file, 'smb-extra.conf' to the 'config' directory of your flash, then Stop array and Start array.

 

This file sets an option called "strict allocate" to "yes".  Let me know if this has any effect.

 

After the array has started, you can verify the Samba setting of this flag by typing this command:

 

testparm -sv | grep "strict allocate"

 

I just checked and I already have a file of that name in there with

max protocol = SMB2

in it.  Do I just add that line from your file to my existing file?

 

Yes, you can add that line, but you can also just delete your existing smb-extra.conf file since, starting with -rc1 (I think), SMB2 is already set as the "max protocol".
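If you'd rather keep both settings, the merged smb-extra.conf is just the two lines:

max protocol = SMB2
strict allocate = yes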

Link to comment

Will make the change and do a test on a non-cached share and see what happens.  Also, with the info in that wiki, I can start to sniff traffic to see if the 1-byte SMBwriteX/SMB2_WRITE requests are being seen in the traffic at all, or if they are being seen but still actioned before the SMB timeout.
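Something like this on the server should capture what I need (the interface name and client IP are examples); I can then open the capture in Wireshark and filter on smb or smb2:

tcpdump -i eth0 -s 0 -w /boot/smb-capture.pcap host 192.168.1.50 and port 445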

Link to comment

Well, I made the change and found it made no difference.  But I did find out what seems to be causing the issue, and maybe why other people don't see it.  I have a Movie share that has 200-odd files in the root - no folders, just 200-odd files.  If I try to copy a new file in, I get the share error and the slow file build.  If I make a directory in the share, which has no files when you go into it, and then try the copy again, it works perfectly; the data transfer starts right away.  I delete the folder and try the copy again to the root, where there are 200 files, and it fails.

 

Moved on to the next share.  This share has 50 folders in the root but no files; I started a file transfer and it worked perfectly, the transfer starting straight away.

 

On to another share that has 70 files in the root.  Again I try the transfer and it fails; I create a folder, which is empty, and the transfer into the folder works perfectly.

 

So my trying to link it to the array filling up and disk space running low was incorrect; it was filling up because of the number of files, and the file count in a directory now seems to be the issue.  I am going to try some further tests: take a folder that copies fine when it has no files in it, add x files, and see what x has to be before the issue appears.
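The test plan is roughly this from the console (paths are examples), trying the large copy from Windows after each step:

mkdir -p /mnt/disk1/Movies/filetest                                        # empty folder: the copy should start instantly
for i in $(seq 1 25); do touch /mnt/disk1/Movies/filetest/dummy$i; done    # add 25 dummy files, then retry the copy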

Link to comment

So my trying to link it to the array filling up and disk space running low was incorrect; it was filling up because of the number of files, and the file count in a directory now seems to be the issue.  I am going to try some further tests: take a folder that copies fine when it has no files in it, add x files, and see what x has to be before the issue appears.

 

For those shares with a large number of files in them already - is it possible that those files are on separate disks, where one or more of those disks are spun down when you try to write your new file?

Link to comment

Nope, for those tests every write was going to the same disk, as all my shares are set to most-free space; this was also confirmed by watching the open file through unMENU, and for my testing I started from a fresh reboot with all disks spun up. I have so far found that the file build starts to slow down when there are about 26 files: with no files, the file is built instantly and the transfer starts straight away; with 26 files, the file takes 10 seconds to grow to full size before the transfer starts. Going to continue testing to see what else I can find.

Link to comment
  • 3 weeks later...

So where do I go from here?  Is this just expected behaviour that I should move on from, or is there something that can be done about it?  I currently have a cache disk, so I am insulated from the error, but I would rather it didn't exist.

 

I've never seen an unRAID system that isn't affected by it, and I have access to a total of 5 unRAID systems.

 

Just use your cache drive and ignore it in my opinion.

Link to comment
  • 4 months later...

I am using the Basic version (rc8a), which does not support a cache drive.

 

I have two Windows 7 machines and both experience the same error. In the syslog there is no error for this.

 

This error can happen during both reads and writes for me.

 

I am wondering what the root cause of this is. Besides adding a cache drive, what other solution is available?

 

I notice that on one Windows 7 machine, every time I copy big files (tried 2 files over 1.7GB) I get this error. Small files are fine.

 

I have upgraded Samba to 3.6.8; still the same.

 

Update:

 

Good news here: I turned on NFS and also installed the Windows 7 NFS client. Now I am able to copy those big files (over 1.5GB) using NFS shares without errors. So it looks like the issue reported here is mostly caused by Samba.
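Roughly what that looks like (the server IP and share name here are examples):

exportfs -v    # on the unRAID console: confirm the share is exported over NFS

and then on Windows 7 with the 'Client for NFS' feature enabled:

mount \\192.168.1.1\mnt\user\Movies Z: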

syslog-2012-12-13.txt

Link to comment

So my trying to link it to the array filling up and disk space running low was incorrect; it was filling up because of the number of files, and the file count in a directory now seems to be the issue.  I am going to try some further tests: take a folder that copies fine when it has no files in it, add x files, and see what x has to be before the issue appears.

 

For those shares with a large number of files in them already - is it possible that those files are on separate disks, where one or more of those disks are spun down when you try to write your new file?

 

A small clarification here: yes, it is the case that the shares with large numbers of files in them are spread out across multiple disks, but no, it does not seem to make any difference whether they are all spun up or not, or a combination of both.  The issue is only seen where there are large numbers of files in the folder; if it is empty, I cannot reproduce the fault.

Link to comment
