jaybee Posted June 26, 2012

This is looking good then. Is there anything outstanding now to fix that RC5 does not address? Could this become FINAL!?
jaybee Posted June 26, 2012

I thought NFS was fixed, some people reported. What user share problem?
boof Posted June 26, 2012

> I thought NFS was fixed, some people reported. What user share problem?

Please see the first post from Limetech in this thread:

> Here's the current state of 5.0 along with what still needs to be done for "final" release. First some known issues:
>
> - NFS stale file handles when accessing user shares. This is not fixed in -rc5, but finally I know what the problem is. Until this is solved I recommend not using NFS to access user shares. I'll add more info about this in a sticky...

...and the thread with further background: http://lime-technology.com/forum/index.php?topic=20301.0
jaybee Posted June 26, 2012

Oh yes. So we await the sticky with Tom's further info and findings.
PeterB Posted June 26, 2012

> I thought NFS was fixed, some people reported. What user share problem?

Some problem cases have been fixed in RC5, but there is still a problem in other cases (I found that disabling the cache drive resolved these). There is also a separate issue outstanding, which is to be fixed in the next release, to do with duplicate files occurring on user shares.
ftp222 Posted June 27, 2012

I upgraded my backup system from 5.0 Beta 10 to 5.0 RC5 and everything is working great. Parity check speeds are exactly what they were in Beta 10. The only issue I noticed is in the upgrade notes: they state the array should not start automatically, so that you can verify every drive's partition type; however, the array did start automatically, so I did not have the chance to check the partition types before the array started.
abs0lut.zer0 Posted June 27, 2012

> There is also a separate issue outstanding, which is to be fixed in the next release, to do with duplicate files occurring on user shares.

+1, this happens to me quite a bit, not sure why?
tyrindor Posted June 27, 2012

> ...however the array did start automatically so I did not have the chance to check the partition type before starting the array.

You need to set "Enable auto start" to No in the disk settings.
PeterB Posted June 27, 2012

> +1, this happens to me quite a bit, not sure why?

Read here.
tazman Posted June 27, 2012

Regarding the SAS2LP-MV8 support: I recently switched motherboards from a Supermicro X9SCL+-F to an X9SCM-F, which resolved the problems with my three cards. There is a new Linux driver available at ftp://ftp.supermicro.com/driver/SAS/Marvell/MV8/SAS2/Driver/Linux/4.0.0.1534/. Tom: are you planning to integrate that into the next release? Does it resolve the problems you are still experiencing with those cards?
limetech (Author) Posted June 29, 2012

> The only issue I noticed is in the upgrade notes: they state the array should not start automatically... however the array did start automatically.

Fixed that in -rc6, thanks for pointing that out.
limetech (Author) Posted June 29, 2012

> +1, this happens to me quite a bit, not sure why?

That issue is probably not what accounts for most "duplicates". Usually duplicates are caused by accessing storage both via disk shares and user shares. For example, suppose you have:

disk1/Movies
disk2/Movies

and the corresponding user share:

Movies

If you navigate to disk1/Movies and cause a file to get created there (say, .DS_Store), and you also navigate to disk2/Movies and the same file name gets created there, well, now there's a duplicate reported when you navigate to Movies.
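As a quick console check for the scenario Tom describes, something like the following sketch can list share-relative paths that exist on more than one data disk (this is an illustration only, not an official tool; `/mnt/disk1` and `/mnt/disk2` are the usual unRAID disk mount points, so adjust the arguments for your own array):

```shell
# find_dupes: print relative paths that exist under more than one disk root.
# Arguments are the per-disk roots, e.g. /mnt/disk1 /mnt/disk2 ...
find_dupes() {
    for d in "$@"; do
        # List each file relative to its disk root, so the same share path
        # on two different disks produces identical output lines.
        (cd "$d" && find . -type f)
    done | sort | uniq -d   # -d keeps only lines that occur more than once
}

# Hypothetical usage: report files present on both disk1 and disk2.
# find_dupes /mnt/disk1 /mnt/disk2
```

Any path it prints (e.g. `./Movies/.DS_Store`) is a file name the user share file system will see twice.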
limetech (Author) Posted June 29, 2012

> Tom: are you planning to integrate that into the next release? Does it resolve the problems you are still experiencing with those cards?

Very, very reluctant to incorporate drivers not supplied by the Linux kernel release, because typically they lock us into only the set of kernels they happen to compile with. This has been the case, for example, with the Realtek r8168 driver vs. the kernel-supplied r8169 driver. But I'm pretty sure I'll be staying on the 3.0.x kernel for a while, so I'll look at what those packages are...
abs0lut.zer0 Posted June 29, 2012

> Usually duplicates are caused by accessing storage both via disk shares and user shares.

What if I say for definite that my system is ONLY accessed via user shares? Just asking?
PeterB Posted June 29, 2012

> What if I say for definite that my system is ONLY accessed via user shares? Just asking?

I would say that it's likely you have an application accessing files on your user shares which is doing the same as on my system: creating a file, then renaming it to the same name as an existing file.
limetech (Author) Posted June 29, 2012

> ...creating a file, then renaming it to the same name as an existing file.

Yes, that's possible. I can probably verify it if you want to send your system log that shows the 'duplicate' file names to: [email protected]. This bug is fixed in -rc6.
chickensoup Posted June 29, 2012

Thanks for the updates Tom, I'm sure you've been working hard. Take a holiday after 5.0 goes final, you've earnt it. Doesn't sound like it's far away now.
pantner Posted July 2, 2012

Just reporting in (I've been a bit slack: didn't upgrade to RC4 but went from RC3 to RC5) and all looks good on my end. Loving the 5 series releases so far!
burnaby_boy Posted July 3, 2012

5.0-rc5 seems to be working fine for me, except that copying files over the network to the server seems to be somewhat slower than with rc4. With rc4 I was averaging about 30 MB/second, whereas with rc5 I'm topping out at about 24 MB/second.
optiman Posted July 6, 2012

Has anyone had to replace a drive yet with version 5? Is the process the same as before, or has it changed? In short, this is what I've done in the past:

- Run a parity check
- Power down and replace the drive
- Power up and confirm unRAID can see the new drive
- Some people run a pre-clear process, but I've never done that and don't even know how... if we should do this step, I hope there is a button for it and no command-line junk is needed.
- Rebuild the drive
- When all done, run another parity check
- Done

Thanks
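On the pre-clear step: there is no button for it in the webGui; it is done from the console with Joe L.'s preclear_disk.sh script from the forum. A rough sketch of how it is typically run is below (check the preclear thread for the current syntax and options, and triple-check the device name, since the script wipes the target drive; `/dev/sdX` here is a placeholder):

```shell
# Copy preclear_disk.sh onto the flash drive, then from the console:
cd /boot

# List candidate (unassigned) drives first -- double-check the device name,
# because preclearing destroys everything on the drive.
./preclear_disk.sh -l

# Run the pre-clear on the new drive (expect it to take many hours on a 2TB disk).
./preclear_disk.sh /dev/sdX
```

The payoff is that the drive gets a full write/read stress test before it holds data, and unRAID can add it to the array without a lengthy on-line clearing step.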
Tybio Posted July 6, 2012

I just replaced a 750GB with a 2TB yesterday and it looks good. Adding a new 2TB drive to the array today... looks good so far.
optiman Posted July 6, 2012

Cool, did you follow the same steps as I listed? Did you do the pre-clear, and if yes, how?
Tybio Posted July 6, 2012

Nope, no pre-clear... yeah, I know that means I'm foolish. I ran those steps, but before the final parity check I also added a new 2TB drive to the array and expanded it. That add is ongoing... but I did check the replaced drive before adding the new one, and the data rebuilt properly.
JonathanM Posted July 6, 2012

> Nope, no pre-clear... yeah, I know that means I'm foolish.

Just to emphasize the point: you are jeopardizing all the data on your server by using one untested drive. Any single drive failure requires perfect performance from every other drive in the array.