unRAID Server Release 5.0-beta14 Available


limetech


I'm waiting for someone with an LSI controller and a dev/test array to give it a try before moving on from Beta 12. I'm rather enjoying my stable system now, with an uptime of 144 days.

 

I too would like to see if it works with LSI controllers.

 

+1 - I'm still sticking with b11 because of the LSI and NFS problems.

Link to comment

What are the specs of the rest of your system (CPU/RAM/motherboard)?

 

It's a Supermicro X7SBL motherboard with a Core 2 Quad Q6600 and 8GB of ECC DDR2.

 

Is anyone else who's having problems with their LSI cards using an expander? I've got a 12-bay enclosure/expander; maybe that's having an impact.

 

-A

Link to comment

Tried b12a; still hitting the "Stale NFS file handle" issue.

 

I guess I'll have to think about reverting to 4.7. NFS is fairly critical for me, as I cannot use my decoder over SMB/Samba (too slow even over the gigabit network), which leaves me with NFS.
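Editor's note: on a Linux client, a stale NFS handle generally persists until the export is force-remounted, so a quick workaround sketch (server name and paths are examples only, not from this thread):

    # Force-remount a share that has gone stale (adjust names/paths to your setup)
    umount -f /mnt/tower
    mount -t nfs tower:/mnt/user/Media /mnt/tower

This only clears the symptom on the client; it does nothing about whatever makes the beta hand out stale handles in the first place.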

Link to comment

I'm waiting for someone with an LSI controller and a dev/test array to give it a try before moving on from Beta 12. I'm rather enjoying my stable system now, with an uptime of 144 days.

 

Still badly broken on my system (see sig) :(

 

Went back to 12a, which has been running stable for months... :)

Link to comment

Looks like there will need to be community-configured editions of unRAID for future releases if the position is "whatever hardware works, works; whatever doesn't, doesn't." For the next beta/RC/final of unRAID 5.0 there would then be Limetech's version, plus one that works on LSI hardware, where the only change would be using the Linux 3.0.x kernel series.

Link to comment

Regarding the new kernel: there are still the same issues with NFS.

 

Peter ----

 

An interesting comment. I read it several times in an attempt to determine its meaning. Are you saying that there is an acknowledged issue in the 3.3 kernel that is causing the NFS problems?

 

Frank

Link to comment

An interesting comment. I read it several times in an attempt to determine its meaning. Are you saying that there is an acknowledged issue in the 3.3 kernel that is causing the NFS problems?

 

I don't know about Peter, but I did revert to 4.7, and it seems that I have finally solved all my NFS issues.

 

It is a shame that I can't run 5.0-beta14, as it seemed pretty stable for me apart from the NFS issue, but I really do need NFS available over my network, so be it.

Link to comment

mejutty,

 

I experienced the same issue, usually once the drives start to fill up. I posted the same issue in another thread, where someone offered some recommendations on how to potentially fix it; see the posts here:

 

http://lime-technology.com/forum/index.php?topic=4500.msg85277#msg85277

 

Maybe they will work for you? They didn't for me, but given that it only happens every now and then and doesn't appear to create a hole in my parity, I keep going as normal. It also usually only occurs the first time I write to a particular drive; i.e., if I have shares spanning multiple drives, the first write to one drive may fail and then work, then the next write to another drive will fail and then work. Once it has failed once, every subsequent write to that drive works fine.

 

If you figure out a solution, I would be keen to hear it.

 

Well, I found that if I use TeraCopy I get more consistent results in copying files up. I am, however, no longer able to build ISO files on the fly directly to the server. Did a bit more reading, and the issue may have something to do with Samba's "strict allocate" behaviour, in that it preallocates the file with zeros at the full size you are trying to write. Using NFS may get around the issue, but it is a pain, as it is another thing to install/configure.
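Editor's note: "strict allocate" is a standard smb.conf parameter, so anyone who wants to confirm what their server is actually running with can check from the console. A quick sketch:

    # Print the effective value; testparm -v includes built-in defaults
    testparm -sv 2>/dev/null | grep -i "strict allocate"

If it reports "strict allocate = Yes", that matches the preallocation behaviour described above.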

Link to comment

mejutty,

 

I experienced the same issue, usually once the drives start to fill up. I posted the same issue in another thread, where someone offered some recommendations on how to potentially fix it; see the posts here:

 

http://lime-technology.com/forum/index.php?topic=4500.msg85277#msg85277

 

Maybe they will work for you? They didn't for me, but given that it only happens every now and then and doesn't appear to create a hole in my parity, I keep going as normal. It also usually only occurs the first time I write to a particular drive; i.e., if I have shares spanning multiple drives, the first write to one drive may fail and then work, then the next write to another drive will fail and then work. Once it has failed once, every subsequent write to that drive works fine.

 

If you figure out a solution, I would be keen to hear it.

 

Well, I found that if I use TeraCopy I get more consistent results in copying files up. I am, however, no longer able to build ISO files on the fly directly to the server. Did a bit more reading, and the issue may have something to do with Samba's "strict allocate" behaviour, in that it preallocates the file with zeros at the full size you are trying to write. Using NFS may get around the issue, but it is a pain, as it is another thing to install/configure.

 

I've got the same problem here, too. As you said, the server seems to allocate the needed space, and the copy times out before it has finished. But it only happens if the target disk you are copying to is low on free space (20-30% free).

 

There is a thread for this problem here: http://lime-technology.com/forum/index.php?topic=19207.0

 

Is there any simple way to mount an NFS share in Windows Explorer, to check whether that would "resolve" the issue? It's just annoying if you start copying 2TB overnight and in the morning you see that it stopped at 10% because of this error.
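Editor's note: Windows 7 Ultimate and Enterprise include a "Client for NFS" (Control Panel > Programs > Turn Windows features on or off > Services for NFS). Once it is enabled, a share can be mapped to a drive letter from a command prompt; the server name and path below are examples only:

    REM Map an unRAID user share to Z: with anonymous access
    mount -o anon \\tower\mnt\user\Media Z:
    REM Remove the mapping when done
    umount Z:

The mapped drive then appears in Explorer like any other drive letter.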

Link to comment

Maybe it would be helpful for now if we gathered a current overview of which versions work with which hardware:

 

* Which beta should I run if I have LSI cards?

* Which beta should I run if I have the Supermicro AOC-SASLP-MV8? (b12?)

 

Finished my first build, using a Supermicro MBD-X8SIL-F-O board with two Supermicro AOC-SASLP-MV8s [currently only one card in use], running Pro 5.0-beta14. Jumped in with both feet, but maybe I didn't study as well as I thought I did. It seems stable, but it has only been 4 days, as I have been copying data and adding drives one by one. I currently have 1 parity, 5 data, and 1 cache drive. I plan to start a preclear on another 2TB drive tonight. Any issues I should be watching for?
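Editor's note: assuming the preclear mentioned here is Joe L.'s community preclear_disk.sh script (it is not part of stock unRAID), the usual invocation from a console or telnet session is sketched below. Double-check the device name first, because the script wipes the target disk:

    # List disks that are candidates for preclearing (not assigned to the array)
    /boot/preclear_disk.sh -l
    # Preclear the chosen disk -- THIS DESTROYS ITS CONTENTS; /dev/sdX is a placeholder
    /boot/preclear_disk.sh /dev/sdX

A full preclear of a 2TB drive takes many hours, so starting it overnight is sensible.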

Link to comment

I'm waiting for someone with an LSI controller and a dev/test array to give it a try before moving on from Beta 12. I'm rather enjoying my stable system now, with an uptime of 144 days.

 

I'm running beta14 on an LSI 9690SA with 5 drives. It's not a dev/test array, because I wasn't aware of any issues with LSI controllers until I started reading this thread. All of my drives are configured to spin down, and I've never seen a single error on any of them.

 

I realize this isn't what most people are reporting. I'm not sure what the difference could be. Anyway, there you go: someone who's successfully run beta14 on an LSI controller :)

 

-A

 

I think the reason your 9690SA card is working is that it is really a 3ware card. LSI bought 3ware back in 2009 but still sells 3ware cards. I am using an LSI SAS 9211-8i, which uses the LSI SAS2008 chipset, and I am also stuck on 5.0-beta12a because of the LSI problem.

 

Drew
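Editor's note: if you are unsure which family a rebadged card belongs to, lspci on the server shows the underlying chipset (e.g. a 3ware entry versus an LSI SAS2008 one). A quick check:

    # Identify the storage controller chipset behind the marketing name
    lspci | grep -iE "lsi|3ware|sas|raid"

The distinction matters here because the two families use different Linux drivers (3w-9xxx/3w-sas for 3ware versus mpt2sas for the SAS2008 parts).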

Link to comment

Maybe it would be helpful for now if we gathered a current overview of which versions work with which hardware:

 

* Which beta should I run if I have LSI cards?

* Which beta should I run if I have the Supermicro AOC-SASLP-MV8? (b12?)

 

Finished my first build, using a Supermicro MBD-X8SIL-F-O board with two Supermicro AOC-SASLP-MV8s [currently only one card in use], running Pro 5.0-beta14. Jumped in with both feet, but maybe I didn't study as well as I thought I did. It seems stable, but it has only been 4 days, as I have been copying data and adding drives one by one. I currently have 1 parity, 5 data, and 1 cache drive. I plan to start a preclear on another 2TB drive tonight. Any issues I should be watching for?

 

For what it's worth, I have that exact setup with 3 of the AOC-SASLP-MV8s and the same MB, with no problems on 5.0-beta14...

Link to comment

mejutty,

 

I experienced the same issue, usually once the drives start to fill up. I posted the same issue in another thread, where someone offered some recommendations on how to potentially fix it; see the posts here:

 

http://lime-technology.com/forum/index.php?topic=4500.msg85277#msg85277

 

Maybe they will work for you? They didn't for me, but given that it only happens every now and then and doesn't appear to create a hole in my parity, I keep going as normal. It also usually only occurs the first time I write to a particular drive; i.e., if I have shares spanning multiple drives, the first write to one drive may fail and then work, then the next write to another drive will fail and then work. Once it has failed once, every subsequent write to that drive works fine.

 

If you figure out a solution, I would be keen to hear it.

 

Well, I found that if I use TeraCopy I get more consistent results in copying files up. I am, however, no longer able to build ISO files on the fly directly to the server. Did a bit more reading, and the issue may have something to do with Samba's "strict allocate" behaviour, in that it preallocates the file with zeros at the full size you are trying to write. Using NFS may get around the issue, but it is a pain, as it is another thing to install/configure.

 

I've got the same problem here, too. As you said, the server seems to allocate the needed space, and the copy times out before it has finished. But it only happens if the target disk you are copying to is low on free space (20-30% free).

 

There is a thread for this problem here: http://lime-technology.com/forum/index.php?topic=19207.0

 

Is there any simple way to mount an NFS share in Windows Explorer, to check whether that would "resolve" the issue? It's just annoying if you start copying 2TB overnight and in the morning you see that it stopped at 10% because of this error.

 

 

 

My 3TB drives are only 60% full, but then, given that I am copying 40GB files, it figures that it would start failing sooner. I hope I can find a fix, because it's only going to get worse as the drives fill up.

Link to comment

Maybe it would be helpful for now if we gathered a current overview of which versions work with which hardware:

 

* Which beta should I run if I have LSI cards?

* Which beta should I run if I have the Supermicro AOC-SASLP-MV8? (b12?)

 

So b12 for LSI chipsets

and b14 for SASLP-MV8 cards

 

NFS issues not taken into account, as I just want to get a clear view of the hardware support (with regard to data integrity).

 

 

Link to comment

So I just tested a free NFS client for Windows, and the problem with "network resource not available any more" really does seem to be an SMB problem (strict allocate), as mejutty found out.

Copying starts immediately, and you can see the file size growing on the server (with SMB you see the final size as soon as copying starts).

I don't suppose there is a way for us to change that Samba setting?
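Editor's note: unRAID merges /boot/config/smb-extra.conf into its generated smb.conf if that file exists (assuming your build includes that hook), so the setting can in principle be overridden without patching the release. A sketch, untested on these betas:

    # Append a [global] override and restart Samba (unRAID is Slackware-based)
    cat >> /boot/config/smb-extra.conf <<'EOF'
    [global]
        strict allocate = no
    EOF
    /etc/rc.d/rc.samba restart

Samba's own default for "strict allocate" is "no", so if a beta reports it as "yes" it was presumably set deliberately; treat this as an experiment rather than a fix.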

Link to comment

mejutty,

 

I experienced the same issue, usually once the drives start to fill up. I posted the same issue in another thread, where someone offered some recommendations on how to potentially fix it; see the posts here:

 

http://lime-technology.com/forum/index.php?topic=4500.msg85277#msg85277

 

Maybe they will work for you? They didn't for me, but given that it only happens every now and then and doesn't appear to create a hole in my parity, I keep going as normal. It also usually only occurs the first time I write to a particular drive; i.e., if I have shares spanning multiple drives, the first write to one drive may fail and then work, then the next write to another drive will fail and then work. Once it has failed once, every subsequent write to that drive works fine.

 

If you figure out a solution, I would be keen to hear it.

 

Well, I found that if I use TeraCopy I get more consistent results in copying files up. I am, however, no longer able to build ISO files on the fly directly to the server. Did a bit more reading, and the issue may have something to do with Samba's "strict allocate" behaviour, in that it preallocates the file with zeros at the full size you are trying to write. Using NFS may get around the issue, but it is a pain, as it is another thing to install/configure.

 

I've got the same problem here, too. As you said, the server seems to allocate the needed space, and the copy times out before it has finished. But it only happens if the target disk you are copying to is low on free space (20-30% free).

 

There is a thread for this problem here: http://lime-technology.com/forum/index.php?topic=19207.0

 

Is there any simple way to mount an NFS share in Windows Explorer, to check whether that would "resolve" the issue? It's just annoying if you start copying 2TB overnight and in the morning you see that it stopped at 10% because of this error.

 

Unfortunately, that is not what I found with my test bed. As you can see from the equipment list in my signature, I have two very small data disks, so filling them up was not much of an issue. I used ImgBurn to burn DVD ISOs to the server. The 80GB drive filled up first (final tally: 77.7GB used, 2.29GB free, 3% free). Then the next group of files was written to the 20GB drive (final tally: 18.2GB used, 1.84GB free, 9% free). For the complete array, the stats were 95.9GB used, 4.13GB free, 4% free.

 

I deliberately did not use Windows Explorer to simply copy the files, as I have TeraCopy installed. The Windows machine runs Win 7 (64-bit). While my network backbone is gigabit, the NIC in the test bed is 100Mb/s. (Makes for fast writes until all of the caches are filled!)

Link to comment

* Which beta should I run if I have the Supermicro AOC-SASLP-MV8? (b12?)

 

and b14 for SASLP-MV8 cards

 

 

Hmm, I'm not yet sold on full SASLP support:

 

1) With b14, I now get boot-up and periodic "libata" errors (see this post: http://lime-technology.com/forum/index.php?topic=16840.msg163947#msg163947). So far there doesn't appear to be any direct indication of the hardware failures, on drives attached to the SASLP, that this misleading error would suggest. All drives were 2TB.

 

2) I finally upgraded one drive on the SASLP to 3TB, and now I'm getting "random" errors on a different drive on the card (same SAS port). Although it's possible that the other drive just happened to start failing right after that upgrade, I don't believe in coincidences. The suspect drive gets a clean bill of health from the long smartctl report, and I can rebuild data and run a parity check with no errors. But eventually write or read errors will occur, even though I don't directly transfer files to that specific drive. I'm on the second "red ball" incident, so I will now replace the suspect drive with a new 3TB and see how the system behaves...

 

Otherwise, for <=2TB drives, I've had no other problems.
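Editor's note: the "long smartctl report" above is the drive's extended self-test. One way to run it, with /dev/sdX as a placeholder (drives behind some SAS HBAs may also need a -d option to be addressed):

    # Start the extended self-test; it runs on the drive itself in the background
    smartctl -t long /dev/sdX
    # smartctl prints an estimated duration; once that has elapsed, review the results
    smartctl -a /dev/sdX

A drive that passes the self-test but still throws bus errors points more at the controller/cabling path than at the drive itself, which fits the pattern described here.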

Link to comment

mejutty,

 

I experienced the same issue, usually once the drives start to fill up. I posted the same issue in another thread, where someone offered some recommendations on how to potentially fix it; see the posts here:

 

http://lime-technology.com/forum/index.php?topic=4500.msg85277#msg85277

 

Maybe they will work for you? They didn't for me, but given that it only happens every now and then and doesn't appear to create a hole in my parity, I keep going as normal. It also usually only occurs the first time I write to a particular drive; i.e., if I have shares spanning multiple drives, the first write to one drive may fail and then work, then the next write to another drive will fail and then work. Once it has failed once, every subsequent write to that drive works fine.

 

If you figure out a solution, I would be keen to hear it.

 

Well, I found that if I use TeraCopy I get more consistent results in copying files up. I am, however, no longer able to build ISO files on the fly directly to the server. Did a bit more reading, and the issue may have something to do with Samba's "strict allocate" behaviour, in that it preallocates the file with zeros at the full size you are trying to write. Using NFS may get around the issue, but it is a pain, as it is another thing to install/configure.

 

I've got the same problem here, too. As you said, the server seems to allocate the needed space, and the copy times out before it has finished. But it only happens if the target disk you are copying to is low on free space (20-30% free).

 

There is a thread for this problem here: http://lime-technology.com/forum/index.php?topic=19207.0

 

Is there any simple way to mount an NFS share in Windows Explorer, to check whether that would "resolve" the issue? It's just annoying if you start copying 2TB overnight and in the morning you see that it stopped at 10% because of this error.

 

Unfortunately, that is not what I found with my test bed. As you can see from the equipment list in my signature, I have two very small data disks, so filling them up was not much of an issue. I used ImgBurn to burn DVD ISOs to the server. The 80GB drive filled up first (final tally: 77.7GB used, 2.29GB free, 3% free). Then the next group of files was written to the 20GB drive (final tally: 18.2GB used, 1.84GB free, 9% free). For the complete array, the stats were 95.9GB used, 4.13GB free, 4% free.

 

I deliberately did not use Windows Explorer to simply copy the files, as I have TeraCopy installed. The Windows machine runs Win 7 (64-bit). While my network backbone is gigabit, the NIC in the test bed is 100Mb/s. (Makes for fast writes until all of the caches are filled!)

 

I have no issues (ATM) copying files around the 4-10GB mark, as the file size grows and reaches that mark before the timeout (when I try to copy a 40GB file, I see the file get to around 30GB before the timeout). Did you copy the files using TeraCopy, or write them directly to the server using ImgBurn?

Link to comment

 

 

I have no issues (ATM) copying files around the 4-10GB mark, as the file size grows and reaches that mark before the timeout (when I try to copy a 40GB file, I see the file get to around 30GB before the timeout). Did you copy the files using TeraCopy, or write them directly to the server using ImgBurn?

 

I used ImgBurn to create the ISO, and that file was written directly into its destination (a user share) on the server as it was being compiled. I will delete a Blu-ray file from the server later today and use ImgBurn to generate a Blu-ray ISO on the server, to see if a larger file size makes any difference.

Link to comment

 

 

I have no issues (ATM) copying files around the 4-10GB mark, as the file size grows and reaches that mark before the timeout (when I try to copy a 40GB file, I see the file get to around 30GB before the timeout). Did you copy the files using TeraCopy, or write them directly to the server using ImgBurn?

 

I used ImgBurn to create the ISO, and that file was written directly into its destination (a user share) on the server as it was being compiled. I will delete a Blu-ray file from the server later today and use ImgBurn to generate a Blu-ray ISO on the server, to see if a larger file size makes any difference.

 

As I promised, I have generated a large file, which resulted in the drive/user share being almost full. I deleted a couple of Blu-ray ISOs and then copied a couple of DVD ISOs over, to bring the drive to the point where the Blu-ray ISO I was about to create would almost fill it.

 

The Blu-ray I chose was Pirates of the Caribbean (Part 3), and I again used ImgBurn to create and store the resulting file directly on the server. The movie-only ISO turned out to be 38.6GB, and the drive held 78.3GB of data (1.69GB, or 2%, free) when the copy was done. This test bed server has only a 100Mb/s NIC, and it took 2 hours 52 minutes to make the copy. I checked the progress periodically, and the transfer rate remained reasonably consistent throughout. (The initial prediction was 2 hours 45 minutes.)
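Editor's note: as a sanity check on those numbers, 38.6GB in 2 hours 52 minutes works out to roughly 3.8MB/s, about a third of what a 100Mb/s link can carry, so something other than the wire (presumably the disc read or the parity-protected write) was pacing the transfer:

    # Effective throughput of the reported copy: 38.6 GB in 2 h 52 min
    echo "scale=2; 38.6 * 1024 / (2*3600 + 52*60)" | bc   # ~3.83 MB/s
    echo "scale=2; 3.83 * 8" | bc                         # ~30.6 Mbit/s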

 

I can only say that I didn't have any of the issues that other people have reported. Perhaps the issue is a problem with the network rather than with the actual disk transfer...

Link to comment

I'm running an updated version with kernel 3.2.9 and Samba 3.6.3.

 

Feel free to test -> http://www.filefactory.com/file/c35c621/n/bzroot_3.2.9-samba3.6.3.rar

 

I'm running beta 14 with kernel 3.3; if anyone wants to try this, feel free -> http://www.filefactory.com/file/2qlr5qxok92v/n/bzimage_rar
Including Samba 3.6.3.

 

//Peter

 

New kernel 3.3.1, and I also verified that TUN/TAP and bridge are enabled as modules, for OpenVPN.
K10temp is also added, for AMD temperature sensors.

 

 

http://www.filefactory.com/f/8dc509eb6d610f88/
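Editor's note: the usual way to test builds like these is to back up and replace bzimage/bzroot on the unRAID flash and reboot; the advertised modules can then be checked from the console. A sketch, with the extracted download names as placeholders:

    # Back up the stock kernel/initrd on the flash, then drop in the test build
    cp /boot/bzimage /boot/bzimage.stock
    cp /boot/bzroot  /boot/bzroot.stock
    cp bzimage /boot/bzimage    # extracted file names may differ per download
    cp bzroot  /boot/bzroot
    reboot
    # After the reboot, confirm the extra modules are available
    modprobe tun && modprobe k10temp && lsmod | grep -E "tun|k10temp"

If anything misbehaves, restoring the .stock copies and rebooting brings back the stock release.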

 

 

Link to comment
