Issue writing huge files in rc16c



I think it's very likely that your issue is simply that your server is so full that the initial file write by ImgBurn is written okay on one drive, but then it attempts to expand that file and the drive is too full for that.

 

I don't think that this is the reason. There's no drive expansion during the copy because there's enough room on the drive that was selected by unRAID for that particular file. And I can recreate that issue even when using a single drive - instead of a shared folder.

 

 

But I'm still impressed. This morning I had to create a lot of huge images on my two LimeTech towers and when using an empty folder it always works. When using my usual folders it always fails (100% ok vs. 100% fail on huge files).

 

 

What does ImgBurn do? There's an option to preallocate files, which is turned on by default. I will switch that option off and see what happens. Here's a quote from their manual:

 

Allocate Files On Creation: Files created in 'Read/Build Mode' will be preallocated. This cuts down on fragmentation.
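As a rough illustration of what that option does at the filesystem level, here is a minimal sketch, assuming a Linux system (it uses Python's os.posix_fallocate; the file name is made up):

```python
import os

def preallocate(path, size_bytes):
    # Ask the filesystem to reserve all blocks up front, which is
    # (roughly) what ImgBurn's "Allocate Files On Creation" does.
    # posix_fallocate fails immediately with ENOSPC if the space
    # is not actually available on the target drive.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
    try:
        os.posix_fallocate(fd, 0, size_bytes)
    finally:
        os.close(fd)

preallocate("test.img", 1024 * 1024)  # reserve 1 MiB up front
print(os.path.getsize("test.img"))
```

With the option on, the whole 50GB is claimed in one shot before any data is written; with it off, the file grows write by write, which is why turning it off can sidestep the timeout.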

 

 

I know that this issue has to do with nearly full arrays. But I want to fill my array to its max and not waste 2TB per machine (=4TB) because of the nature of unRAID/ReiserFS.

 

This pretty much sums up what I found. If it were a "your server is nearly full" error, then just creating an empty folder before doing the copy should not help either, since the amount of free space has not changed. It seems to have something to do with the number of files in the folder you are copying to, which is why the empty folder trick seems to work 100% of the time.  I'm not sure why this has been marked as solved, though; a workaround has been found, but it does not explain the original problem.  I no longer suffer from this issue since I use a cache drive, but it would be a nice one to fix.

Link to comment

FWIW I just finished doing an image directly to the server from Image for Windows on one of my other machines.  54GB file ... sent directly to a well-nested folder, and it worked perfectly.

 

However, none of my drives are close to full on my backup server (at least 1TB free on all of them).

 

My Media server is a different story ... several of those drives are VERY full [in a couple cases only 20MB or so of free space].  But I wrote directly to those drives to fill the last bit up ... and they're all static data (DVDs), so once they were full, they're never modified again.    The last few writes to the drives were quite slow.

 

One other thought:  Have you had this problem with Windows 7?

 

As above, the issue is not a nested-folder issue; it seems to depend on how many files are in the folder you are copying the file to.  I can have a 5-deep nested folder with 200 files in it and make the copy fail.  If I then create a new empty folder inside that 5-deep nested folder (now 6 deep), the copy will work.  I have watched the file being created; when I was looking into this with a 40GB file (which obviously cannot all fit in RAM), the PC would time out at, I think, 90 seconds (it's been a long time since I looked at this issue), and I have seen it take 3-4 minutes to preallocate the file.

 

The other thing I did test was putting in a new, precleared 3TB drive; the writes were being done to this new empty drive, but the issue still persisted.

Link to comment

For some programs and/or processes you will not have any choice about preallocation, so whilst this might be fixed for ImgBurn (with some set of client-side options ticked), if this is a problem with preallocation in general then it's not really solved.

 

Unless you want to tag it as a feature/limitation of unRAID, i.e. that doing this can cause problems and is something to be avoided. That would be fine, but it would at least be an acknowledgement of the issue, and (presumably) documented.

Link to comment

Thanks for your answers. All drives are pretty full, but I did not see these problems with 16b. The share in question spans 12 of the 14 data drives. allocation_method=highwater with min_free_space=0 and split_level=999.

 

Thanks

 

Did you read that min_free_space SHOULD BE equal to approximately twice the size of the largest file to be written to the share?  If that is not the source of your problem, it soon will be!

 

Really!?!? Double?? I have tons of 50GB files on my unRAID1 from ImgBurn, but I have mine set to a minimum of 20GB. I guess I should increase it to at least 50GB. So far the lowest my drives have gotten down to was around 90GB, but then I added a few more, so those drives have not been touched in a while. I need to keep track of the free space on those drives; ideally I should swap the old 1.5TB drives in that array for some 2TB drives.
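To put numbers on that rule of thumb, a tiny sketch (the helper name is made up, not part of unRAID):

```python
GB = 1024 ** 3

def recommended_min_free(largest_file_bytes):
    # Rule of thumb from the discussion above: min_free_space should be
    # roughly twice the largest file written to the share, leaving
    # headroom for preallocation plus the final write.
    return 2 * largest_file_bytes

# 50GB ImgBurn images -> min_free_space of about 100GB, so a 20GB
# floor is far too low for this workload.
print(recommended_min_free(50 * GB) // GB)  # prints 100
```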

Link to comment

Why did I mark this thread as "Solved"? I did bring this issue up and for me it's closed. I found or got at least two workarounds (ImgBurn option and empty directory trick).

 

I don't have enough knowledge to decide if this is an unRAID issue, an ImgBurn issue, or a general filesystem issue. Perhaps LimeTech will jump in and offer an idea.

 

I've been an unRAID user for 5 years now. I'm really happy, but in my experience there are things under the hood that can pop up in special situations. I know them all - at least I think so ;-) For some of them I know how to work around the situation. For me this is ok.

 

Just an example: sometimes, before a copy starts, I see thousands of reads on a drive before an unknown network error cancels the copy operation. I brought it up in a thread for the RC11 release. This happens occasionally - I can no longer reproduce it at will. So I always wait for the real copy to start and leave the office a little bit later. Most software has those glitches. As long as you know how to work around them, everything is fine. This is how I work with software. Perhaps that's not ok for you, but for me it is.

 

Link to comment

Did you read that min_free_space SHOULD BE equal to approximately twice the size of the largest file to be written to the share?

 

Really!?!? Double??

 

The recommendation was not mine!  It came from the 'Un-Official' unRAID Manual.  Read this entry from that source for an explanation as to why:

 

    http://lime-technology.com/wiki/index.php/Un-Official_UnRAID_Manual#Min._Free_Space

Link to comment
  • 4 weeks later...

For that "folder trick", I've noticed that it doesn't have to be a folder.  If I just create a new empty file (from Windows Explorer: right-click inside the server folder and create a new text document), once that goes through, copying the big file(s) runs at full speed and without any time-out problems.
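A minimal sketch of that workaround, using temp directories as stand-ins for the real share paths (all names here are hypothetical):

```python
import pathlib, shutil, tempfile

# Self-contained stand-ins: a temp source file and a temp destination
# directory replace the real unRAID share paths.
src_dir = pathlib.Path(tempfile.mkdtemp())
dest = pathlib.Path(tempfile.mkdtemp())
src = src_dir / "big.img"
src.write_bytes(b"\0" * 1024)  # stand-in for a 50GB image

# The trick reported above: first create an empty file (or folder) in
# the destination, then start the large copy.
(dest / "warmup.txt").touch()
shutil.copy2(src, dest / "big.img")
print((dest / "big.img").stat().st_size)  # prints 1024
```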

 

 

Link to comment

hi,

I never had this issue with unRAID, but I have seen it on other systems.

The issue is that when programs like ImgBurn process images, they need to allocate almost double the size of the final image.

So if you are writing an image that totals 50GB, the initial allocation will be at least 100GB, and sometimes even more. You do not see this because the temp file(s) these programs create are usually invisible until the process is done.

I saw this once on a Windows XP system when I was trying to burn an ISO image to a data drive that was almost full (not an unRAID share, but an actual second HDD in the system), coincidentally using ImgBurn.

The DVD was 3.5GB and the drive had 6GB of space left, but the process kept failing with a "not enough space" error.

I had to clear out a little more than 1.5GB (that's 7.5GB free on the data drive) before I could burn the image.

 

I am not sure why creating and using an empty folder or file works sometimes

Link to comment

I am not sure why creating and using an empty folder or file works sometimes

 

If the filesystem has a lot of files and is at high capacity, the kernel and filesystem routines need to walk down all the trees to find space for allocating the directory entry and data space.

 

Once this is done, most of the filesystem entries are in RAM, so successive adds to the filesystem use cached values rather than searching the filesystem on disk again.
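A small self-contained sketch of the effect described above (it builds its own temp directory; the names are made up). Both walks see the same files, but on a real on-disk filesystem the second pass typically runs much faster because the dentry/inode caches are warm:

```python
import os, tempfile, time

def walk_count(path):
    # Walk the whole tree, as the filesystem must when it searches
    # for a slot for a new directory entry.
    return sum(len(files) for _, _, files in os.walk(path))

# Build a directory with many files to walk.
root = tempfile.mkdtemp()
for i in range(200):
    open(os.path.join(root, "f%03d" % i), "w").close()

t0 = time.perf_counter(); n1 = walk_count(root); cold = time.perf_counter() - t0
t0 = time.perf_counter(); n2 = walk_count(root); warm = time.perf_counter() - t0

# Same count both passes; the second pass hits cached entries.
print(n1, n2)  # prints 200 200
```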

Link to comment