unRAID Server Release 5.0-rc2 Available



I thought the idea was that after beta 14 there would be two RC releases and then a stable version. I recall something like that; going to look it up ;)

 

But the way it looks now, it seems more RCs will be released... :(

I hope we don't get 14 RC releases... if you know what I mean.

 

I hope that we get a solid, stable release - however many betas or RCs it takes. We just need faults ironed out - even if that simply means published workarounds. More delays are being caused by feature creep!

Link to comment


I hope that we get a solid, stable release - however many betas or RCs it takes. We just need faults ironed out - even if that simply means published workarounds. More delays are being caused by feature creep!

 

+1

Link to comment

I seem to be solid on RC2, although I saw no errors with RC1 or b14 either; I built my new server on b14. Parity check has been 65MB/s-plus on this machine since I turned it on.

 

I am also for getting a released version out and moving on with the features from there. If all of the fixes for anything that could cause data loss or corruption are in RC2, then STOP and make it final.

 

I do have one question, and it is driving me around the bend. When copying files from either Win7 or WinXP I use TeraCopy, and I still get transfer rates that jump around quite a bit: a copy will start at 50-60MB/s, drop to 15-16, then go to 25, up to 40, down to 10, and so on. I am not sure if this is a Windows issue, an unRAID issue, a hardware issue, or a network issue. I would love to figure it out, though; if anyone has ideas on troubleshooting, I am open to suggestions.

All my machines are on a gigabit network. The main switch is a managed gigabit switch, and there are two 8-port gigabit switches in the system: one behind my TV for the TV, BD player, video game consoles, XBMC, etc., and another in my office for my main PC, second PC, test XBMC machine, and pre-clear machine. All the cables are Cat-5e and terminated by me; I cannot recall whether that is T568A or T568B, but it is one of those two, the same on both ends, and all tested. This was all sorted out back when I was testing LMCE and having network issues.

 

THX,

 

Dave

Link to comment
When copying files from either Win7 or WinXP I use TeraCopy and I still get the transfer rates jumping around quite a bit [...] I am not sure if this is a Windows, unRAID, hardware or network issue.

 

I use TeraCopy as well and sustain high transfer rates without an issue, so I would suggest it is something at your networking layer. I would break it down item by item: for example, plug a PC/laptop into the SAME switch that unRAID is on and see what your transfer speeds are like (perhaps this is the managed switch?), and then gradually add each extra switch/device/ethernet cable until the issue shows itself.
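One crude way to do that hop-by-hop measurement without extra tools is sketched below; the receiver IP, port, and sizes are placeholder examples, nothing here comes from the thread. Pushing zeros through dd and netcat takes disk speed and SMB overhead out of the picture, so you measure only the network path:

```shell
# Receiver (run on the box nearest unRAID; BSD netcat syntax shown,
# traditional netcat needs `nc -l -p 5001` instead):
#   nc -l 5001 > /dev/null
# Sender (placeholder IP; dd's summary line reports MB/s for that path):
#   dd if=/dev/zero bs=1M count=1000 | nc 192.168.1.10 5001
# Move the sender one switch closer to the server per run; the hop where
# the rate drops is the suspect. First, baseline the sender with no
# network involved at all:
dd if=/dev/zero of=/dev/null bs=1M count=1000 2>&1 | tail -n 1
```

If even this no-network baseline is slow, the sending machine itself is the bottleneck rather than any switch or cable.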

 

My setup is a bit simpler than yours - a single 24 port managed HP Procurve with everything connected to it via Molex CAT6 ethernet cables (I didn't terminate them myself though!)

 

I'd probably suggest breaking that out into its own thread though - I think I'm reading from your post that the issue has been present across unRAID versions, so it's not really RC2-related (and likely not even unRAID-related).

Link to comment

The array is now starting during boot. Seems to be working great, flying through the parity check at ~95MB/s. Will post the syslog when it's done (<6 hrs for 13TB of parity-protected data).
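For context on that figure: a parity check reads all disks in parallel, so its duration tracks the size of the largest single disk, not the 13TB total. Assuming 2TB drives (a guess; the post doesn't state disk sizes), the quoted time works out:

```shell
# hours ≈ largest disk (MB) / rate (MB/s) / 3600, integer arithmetic
echo $(( 2000000 / 95 / 3600 ))   # prints 5, i.e. just under 6 hours
```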

 

Unrelated, and I'm sure this is a fix for whoever developed the custom UI you are using (which looks fantastic, btw), but shouldn't that say "Array of seven protected data disks"?

 

a) There are technically seven data drives which are protected

b) Parity drive is not included in the total size (nor should it be)

c) Cache & Flash are also not protected disks

 

Alternatively it could be reworded to something like "Array of eight protected disks (including parity)"  or "Array of eight (including parity) protected disks" 8)

 

This is the Simple Features add-on interface, not the "stock" unRAID interface. Post this in the Simple Features thread.

Link to comment

I do have one question and it is driving me around the bend. When copying files from either Win7 or WinXP I use TeraCopy and I still get the transfer rates jumping around quite a bit [...]


Is there other network traffic during your test?

Link to comment

I've been testing RC2 on my test array with 1x M1015 (LSI) with no issues. I've now migrated it to my production array (ESXi-based) and it works 100%. The only minor issue I'm seeing is that the cache drive's indicator flashes green as if spun down (but this could be SimpleFeatures?). It persisted after two reboots but has not affected functionality in any way. Ignore that: after a spin-down/spin-up cycle of the array, the issue has gone.

 

Great work, it looks like this is a nice stable release for LSI-based controllers :)

Link to comment

I noticed something weird in RC2. Yesterday I lost power because I played around with a power cable where I shouldn't have; as a result, the system started a parity check that now needs to finish.

 

Parity has been rebuilding for some time; I notice, however, that periodically the array spins down my disks (of course they spin up again during the parity rebuild).

 

I have no idea if this is intended behaviour, but it would seem more sensible not to spin down disks during a parity check...

Link to comment

 

I haven't seen one RC2 issue so far, or I must have overlooked it???

 

 

I cannot see one either. Everything people are posting about is unrelated.

What is the verdict on NFS performance and reliability in RC2 so far? Still broken?

If RC2 is indeed stable with everything else and write performance is acceptable (need some more tests, including with cache drives), then it must be time to just draw a line and call it 5.0 Final. Fixing NFS should be the next priority, and hence version 5.01 / 5.1.

Of course, if Tom can fix NFS and any write issues by releasing an RC3, then great, but any more than an RC3 and it's going to start getting tedious again.

Link to comment

 

I haven't seen one RC2 issue so far, or I must have overlooked it???

I cannot see one either. Everything people are posting about is unrelated. [...]

 

 

I absolutely agree!

Link to comment

As mentioned above, RC2 seems stable so far on my M1015 (LSI) based ESXi system. No issues with transfer speeds; here's a screenshot of TeraCopy to the array (the cache drive is an RDM'ed SSD, and I have two gigabit ports in an EtherChannel from the sending PC).

 

qpJqC.jpg

 

 

Link to comment
What is the verdict on NFS performance and reliability in RC2 so far? Still broken?

If RC2 is indeed stable with everything else and write performance is acceptable (need some more tests, including with cache drives), then it must be time to just draw a line and call it 5.0 Final. Fixing NFS should be the next priority, and hence version 5.01 / 5.1.

 

So, you're advocating releasing a 'stable' version where one of the advertised features may not work reliably?

Link to comment


So, you're advocating releasing a 'stable' version where one of the advertised features may not work reliably?

 

+1

Agreed, it does not make sense.

Link to comment

I just upgraded my 5.0b6a machine to RC2.

 

I'm getting errors similar to the following in the syslog that I've never seen before:

May  6 10:35:58 Hyperion kernel: ata11: sas eh calling libata port error handler (Errors)
May  6 10:35:58 Hyperion kernel: ata12: sas eh calling libata port error handler (Errors)
May  6 10:35:58 Hyperion kernel: ata13: sas eh calling libata port error handler (Errors)
May  6 10:35:58 Hyperion kernel: ata14: sas eh calling libata port error handler (Errors)

 

The array booted up and came online automatically, and the drives all list the correct alignment. I have an AOC-SASLP-MV8 SATA card as well as a pair of Monoprice 2-port SATA cards, in addition to my 4 on-board ports. I'm using 9 drives total at this point (7 in the array, 1 parity, 1 mounted but unaffiliated drive).

 

As far as I can tell, the data is all OK. A parity check is running, but I notice it's at about 2/3 of normal speed (44MB/s). I was getting parity check speeds of 66-68MB/s on beta 6a at the same point in the check (about 0%-2%).

 

Any ideas on what is going on?  Should I cancel my parity check and revert to 6a, which was stable for me?

 

Edit: Oops, forgot to attach the full syslog.

syslog-2012-05-06.txt.zip

Link to comment

I'm getting errors similar to the following in the syslog that I've never seen before:

May  6 10:35:58 Hyperion kernel: ata11: sas eh calling libata port error handler (Errors)

[...] Any ideas on what is going on? Should I cancel my parity check and revert to 6a, which was stable for me?

Those are not errors. They are informational messages describing how the "sas" driver for each of ata11, 12, 13 and 14 will use the libata port error handler.
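For anyone wanting to check their own log, one way to separate informational lines like these from genuine faults is to grep for the keywords libata uses on real failures. The failure line in the sample below is a hypothetical illustration for contrast, not taken from the attached syslog:

```shell
# Two sample lines: the first is the informational message quoted above,
# the second is a hypothetical libata failure.
cat > /tmp/syslog.sample <<'EOF'
May  6 10:35:58 Hyperion kernel: ata11: sas eh calling libata port error handler (Errors)
May  6 10:36:02 Hyperion kernel: ata11.00: exception Emask 0x10 SAct 0x0 SErr 0x4010000 action 0xe frozen
EOF
# Only the real fault matches:
grep -E 'exception|failed command|media error' /tmp/syslog.sample
```

If that grep comes back empty on a full syslog, the "sas eh calling" chatter on its own is nothing to worry about.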

 

 

Link to comment

A quick round-up of my experience with RC2:

1.) Ubuntu NFS hangs can be circumvented by mounting the shares over UDP rather than TCP (although this does seem to slow array writes slightly).

2.) NFS stale-file-handle errors seem to persist, although I cannot reproduce them as easily as I could with b11. However, they can be circumvented by an umount/mount at the client.

3.) No problems now with my LSI controller.

4.) Array now starts automatically from boot.

5.) Crashes on shutdown (kernel oops) seem to be a thing of the past.

6.) Reads from the array are running at around 95MB/s.

7.) Writes to the array (cache drive) are running at 75MB/s (slightly slower than the 95+ I've seen when using TCP).

8.) Parity check runs at 108MB/s (at 2% completion on four protected drives).

9.) Several add-ons/plugins are running without any difficulty (Logitech/Squeeze/Slim server 7.7.2, Dovecot mail server, mpop mail fetcher, p910nd print server, apcupsd UPS monitor/control).

So, apart from the question marks over NFS/TCP hangs and stale file handles, I'm very happy with RC2.
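For anyone wanting to try the UDP workaround from item 1, it is just a mount option on the client; the server name and paths below are placeholders, not from this post, and note that UDP is only available with NFSv3 (NFSv4 runs over TCP):

```shell
# One-off mount over UDP:
mount -t nfs -o udp,vers=3 tower:/mnt/user/media /mnt/media

# Or persistently, in the client's /etc/fstab:
# tower:/mnt/user/media  /mnt/media  nfs  udp,vers=3  0  0
```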

Link to comment

I notice something weird in the RC-2 version. [...] Parity has been rebuilding for some time; I notice however that periodically the array spins down my disks [...]

 

Two things: post the syslog! And try the following: set spin-down to 5 mins for all drives (parity included). Throw a large file copy at it that will take more than 5 mins to complete, then post the syslog for Tom, showing that attempts are being made to spin down the drives during drive usage.
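To size that test copy, the file just has to outlast the 5-minute timer; the arithmetic below assumes ~60MB/s writes (an assumption, not a figure from this thread), and the commands in the comments use placeholder paths and a guessed log keyword:

```shell
# MB needed so the copy runs ~5.5 minutes at 60 MB/s:
echo $(( 60 * 330 ))   # prints 19800
# Then (placeholder path):
#   dd if=/dev/zero of=/mnt/user/test/big.bin bs=1M count=19800
# and watch for spin-down attempts while it runs:
#   tail -f /var/log/syslog | grep -i spindown
```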

 

For all others: thank you for wasting everyone's time by mentioning plugins. If you want to keep Tom to his word, YOU have to follow the instructions. So now there will clearly be more RCs in order to reach a final.

Link to comment

 

Madburg: what did I do to deserve this "thank you for wasting time" remark???

 

He said 'for all others', so not you ;)

 

But I agree with madburg: anyone who wants to check out this RC (or any beta) should do so on a clean unRAID install without any plugins. Verify that everything is working fine, and only after that should you install any needed plugins and see if anything breaks.

Test results from installs with plugins offer no information for Tom, as the plugin itself can be the culprit.

Link to comment

OK, here is an interesting bug or issue.

 

I decided to expand my RC2 array.

I added a new drive and hit the "yes I want to add the new drive" button; it decided to add the drive AND run a parity check at the same time.

Then on top of that, it looks like it tried to spin the drives down while running a parity check and formatting a drive.

 

I'll admit I am not running 100% clean unRAID.

 

 

I have VM tools, plus the following unMenu packages: apcupsd, Monthly Parity Check, Clean Powerdown, overtemp powerdown, screen, SSMTP and mail alerts. None of these should have caused this glitch.

Running the default GUI.

 

Syslog attached.

Everything else seems to be OK: LSI spin-up/spin-down works, LSI on an expander works, and the server booted both bare-metal and as a VM.

 

Test speeds seem OK, about on par with beta 12 so far.

 

 

 

 

 

syslog-2012-05-06.txt

Link to comment
