unRAID Server Release 5.0-rc15a Available



No worries; the only mention of the super.dat file was below.

 

> Version 5.0-beta3, 5.0-beta4, 5.0-beta5, 5.0-beta5a, 5.0-beta5b
>
> 1. Prepare the flash: either shut down your server and plug the flash into your PC, or stop the array and perform the following actions referencing the flash share on your network: copy the files bzimage and bzroot from the zip file to the root of your flash device, overwriting the same-named files already there.
>
> Delete the file config/super.dat. You will need to re-assign all your hard drives.

 

 

So I assume you went from one of these to RC4? You should not have had to do that if you were upgrading from RC4.

Link to comment
  • Replies 147

Damn it, I just went back and re-read the upgrade notes. I was reading from the beta section, not the RC section. So you're right, I didn't need to delete the super.dat file.

 

Once this parity sync finishes, I will copy over syslinux.cfg and reboot the server, then run the Utils / New Permissions utility, as it probably needs that too.

Hope it doesn't mess with any plugins or shares.

Thanks again.


Updated to 5.0-rc15a. A new month means a parity check, so here are the numbers.

 

Last checked on Mon Jul 1 11:44:40 2013 SGT (today), finding 0 errors.

> Duration: 11 hours, 14 minutes, 39 seconds. Average speed: 98.8 MB/sec

 

Not the best I've had, but then again, not the worst.


> Updated to 5.0-rc15a. A new month means a parity check, so here are the numbers.
>
> Last checked on Mon Jul 1 11:44:40 2013 SGT (today), finding 0 errors.
> Duration: 11 hours, 14 minutes, 39 seconds. Average speed: 98.8 MB/sec
>
> Not the best I've had, but then again, not the worst.

 

Yeah, I ran mine too and got an average of around 106 MB/s. Pretty good if you ask me.


> > Updated to 5.0-rc15a. A new month means a parity check, so here are the numbers.
> >
> > Last checked on Mon Jul 1 11:44:40 2013 SGT (today), finding 0 errors.
> > Duration: 11 hours, 14 minutes, 39 seconds. Average speed: 98.8 MB/sec
> >
> > Not the best I've had, but then again, not the worst.
>
> Yeah, I ran mine too and got an average of around 106 MB/s. Pretty good if you ask me.

 

+1   

 

Last checked on Mon Jul 1 16:36:01 2013 CDT (today), finding 0 errors.

> Duration: 7 hours, 49 minutes, 14 seconds. Average speed: 106.6 MB/sec

 


So with 5.0-rc15a on a Core i5 with 4 GB RAM, what are the recommended values for md_num_stripes / md_write_limit / md_sync_window?

I hear the default values provided by unRAID are for legacy/older hardware...

Right now mine are set at: 2816 / 1792 / 1024

 


> So with 5.0-rc15a on a Core i5 with 4 GB RAM, what are the recommended values for md_num_stripes / md_write_limit / md_sync_window?
>
> I hear the default values provided by unRAID are for legacy/older hardware...
>
> Right now mine are set at: 2816 / 1792 / 1024

 

There aren't any "recommended" values; you simply need to experiment to see what works best with YOUR configuration. This can be affected by the disks (both type and how many), your add-ons, how much RAM you have, etc. The first number should be >= the sum of the other two (which I assume you know, since it's currently equal to the sum), but otherwise you simply need to try adjusting md_sync_window and see what yields your best parity check time.
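The rule of thumb above (the first value must cover the sum of the other two) is easy to sanity-check before applying new numbers. A minimal sketch in Python; the helper name is mine, not an unRAID tool:

```python
# Sanity-check unRAID md_* tunables before applying them.
# Rule from this thread: md_num_stripes must be >= md_write_limit + md_sync_window.
# (Hypothetical helper for illustration, not part of unRAID.)

def check_tunables(num_stripes: int, write_limit: int, sync_window: int) -> bool:
    """Return True if the stripe pool can cover both limits."""
    return num_stripes >= write_limit + sync_window

# The poster's current values, 2816 / 1792 / 1024, pass exactly at the limit:
print(check_tunables(2816, 1792, 1024))  # True (2816 == 1792 + 1024)
```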

 


 

Going back to RC15. On RC15a, unmenu randomly becomes unresponsive. I've got to reload or go back to tower:8080 to be able to access it again. I can't have this with the WIFE; she gets mad about stuff like this!

 

I understand; they are not impressed with the server, but when something doesn't work right, all hell breaks loose! (I may be exaggerating slightly...)


> > So with 5.0-rc15a on a Core i5 with 4 GB RAM, what are the recommended values for md_num_stripes / md_write_limit / md_sync_window?
> >
> > I hear the default values provided by unRAID are for legacy/older hardware...
> >
> > Right now mine are set at: 2816 / 1792 / 1024
>
> There aren't any "recommended" values; you simply need to experiment to see what works best with YOUR configuration. This can be affected by the disks (both type and how many), your add-ons, how much RAM you have, etc. The first number should be >= the sum of the other two, but otherwise you simply need to try adjusting md_sync_window and see what yields your best parity check time.

 

 

I have no idea what the numbers mean, or how they are related. I fully rely on this thread to guide me :) A while back it was suggested to change them to help performance. So mine are at:

stripes = 12800

limit = 7680

window = 3840

 

See my hardware in my sig. My last parity check:

 

 

Last checked on Mon Jul 1 11:52:19 2013 PDT (yesterday), finding 0 errors.

> Duration: 11 hours, 52 minutes, 18 seconds. Average speed: 70.2 MB/sec 

 

I guess my speeds are ok, given my drive setup.


> > > So with 5.0-rc15a on a Core i5 with 4 GB RAM, what are the recommended values for md_num_stripes / md_write_limit / md_sync_window?
> > >
> > > I hear the default values provided by unRAID are for legacy/older hardware...
> > >
> > > Right now mine are set at: 2816 / 1792 / 1024
> >
> > There aren't any "recommended" values; you simply need to experiment to see what works best with YOUR configuration. This can be affected by the disks (both type and how many), your add-ons, how much RAM you have, etc. The first number should be >= the sum of the other two, but otherwise you simply need to try adjusting md_sync_window and see what yields your best parity check time.
>
> I have no idea what the numbers mean, or how they are related. I fully rely on this thread to guide me :) A while back it was suggested to change them to help performance. So mine are at:
>
> stripes = 12800
> limit = 7680
> window = 3840
>
> See my hardware in my sig. My last parity check:
>
> Last checked on Mon Jul 1 11:52:19 2013 PDT (yesterday), finding 0 errors.
> Duration: 11 hours, 52 minutes, 18 seconds. Average speed: 70.2 MB/sec
>
> I guess my speeds are OK, given my drive setup.

 

I don't think 70 MB/s is very good... I've seen plenty of 90-120 MB/s ones... have you tried adjusting it down?


No, because I don't know what to change them to.

I know that my speeds are not at the top of the pack, but it was pointed out that the most likely cause was the different drive sizes. I've seen speeds in the high 90s, but then much slower. 70 MB/s as my current average, with 3 TB and 2 TB drives and one last 1 TB drive, is not too bad.

> I have no idea what the numbers mean, or how they are related. I fully rely on this thread to guide me :) A while back it was suggested to change them to help performance. So mine are at:
>
> stripes = 12800
> limit = 7680
> window = 3840
>
> See my hardware in my sig. My last parity check:
>
> Last checked on Mon Jul 1 11:52:19 2013 PDT (yesterday), finding 0 errors.
> Duration: 11 hours, 52 minutes, 18 seconds. Average speed: 70.2 MB/sec
>
> I guess my speeds are OK, given my drive setup.

 

Quote from Lime Tech

There are some tuneables related to parity sync on the Disk Settings page:

md_num_stripes

md_write_limit

md_sync_window

 

For each of these, it will either say "default" or "user-set" to the right of the input field.  If you set an input field to blank and click Apply, it sets that value back to the default.

 

Current defaults are:

md_num_stripes 1280

md_write_limit 768

md_sync_window 384

 

md_num_stripes - is going to impact total memory used by the unRAID driver.  This memory is used to perform the parity calculations both for normal writes and for reconstruct writes (writing to an array with a missing/disabled disk), and for parity sync/check.  Roughly, each stripe requires 4096 x N bytes, where N is the number of disks in the array.  You can leave this number at its default unless you want to really increase the other two values.  This value must always be bigger than either of the other two.

 

md_write_limit - determines the maximum number of stripes allocated for write operations.  This is to prevent the entire stripe pool from getting allocated when a large write is taking place, so that reads can still take place.  Increasing this number will increase write throughput, but only up to a limit.

 

md_sync_window - the one we're interested in for parity sync/check.  You can think of this as the number of parity sync/check stripes permitted to be "in-process" at any time.  The larger this number, the faster parity sync/check will occur, again up to a limit.  Making this too big however, may introduce unacceptable latencies for normal read/write occurring during parity sync/check.

 

So I suggest experimenting with increasing md_sync_window - I have this set to 512 for in-house servers.

 

stripes = 2560

limit = 768

window = 1024

 

gets me

 

Last checked on Mon Jul 1 16:36:01 2013 CDT (today), finding 0 errors.

> Duration: 7 hours, 49 minutes, 14 seconds. Average speed: 106.6 MB/sec
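Lime Tech's rough formula quoted above (about 4096 × N bytes per stripe, where N is the number of array disks) makes it easy to estimate how much RAM a given md_num_stripes setting ties up. A small sketch of that arithmetic; the function name and the 10-disk example are my own:

```python
# Estimate unRAID md driver stripe-pool memory from Lime Tech's rough formula:
# each stripe needs about 4096 * N bytes, where N is the number of array disks.
# (Illustrative helper; the 10-disk array is an assumed example.)

def stripe_pool_bytes(md_num_stripes: int, num_disks: int) -> int:
    return md_num_stripes * 4096 * num_disks

# Default md_num_stripes (1280) on a 10-disk array:
default_mb = stripe_pool_bytes(1280, 10) / 2**20
print(f"{default_mb:.0f} MB")   # 1280 * 4096 * 10 bytes = 50 MB

# The 12800-stripe setting discussed above, on the same array:
big_mb = stripe_pool_bytes(12800, 10) / 2**20
print(f"{big_mb:.0f} MB")       # ten times larger: 500 MB
```

This is why the very large settings discussed in this thread can cause memory pressure on a 4 GB box once add-ons are running too.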


It will never run faster than your slowest drive can read, or than your parity drive can write.

Once you hit some number of drives, it will also slow down due to bus contention (PCI or PCI-Express).

Finally, it will hit your DMA-to-RAM bandwidth limit.
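The three ceilings above can be put into a toy model: the check can go no faster than the slowest drive's read speed, the parity drive's write speed, each drive's share of bus bandwidth, or the DMA-to-RAM limit. All the figures below are invented examples, not measurements:

```python
# Toy model of the parity-check speed ceilings described above.
# All numbers are invented examples for illustration, not measurements.

def parity_ceiling_mb_s(drive_read_speeds, parity_write_speed,
                        bus_bandwidth, dma_bandwidth):
    """The check can go no faster than the slowest of these limits (MB/s)."""
    # All data drives plus the parity drive share the bus during a check.
    per_drive_bus = bus_bandwidth / (len(drive_read_speeds) + 1)
    return min(min(drive_read_speeds), parity_write_speed,
               per_drive_bus, dma_bandwidth)

# Example: three data drives, with an old 70 MB/s drive as the slowest reader.
print(parity_ceiling_mb_s([150, 120, 70], 140, 1000, 800))  # 70
```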


> I have no idea what the numbers mean, or how they are related. I fully rely on this thread to guide me :) A while back it was suggested to change them to help performance. So mine are at:
>
> stripes = 12800
> limit = 7680
> window = 3840

 

I'm surprised you're not having other significant issues if you're allocating that much to disk buffers.

I'd try 2048 / 768 / 1024 and see what your parity check results are... I suspect they'll be as good as what you're seeing now, and you won't be nearly as likely to encounter other memory contention issues.

 


> > I have no idea what the numbers mean, or how they are related. I fully rely on this thread to guide me :) A while back it was suggested to change them to help performance. So mine are at:
> >
> > stripes = 12800
> > limit = 7680
> > window = 3840
>
> I'm surprised you're not having other significant issues if you're allocating that much to disk buffers.
>
> I'd try 2048 / 768 / 1024 and see what your parity check results are... I suspect they'll be as good as what you're seeing now, and you won't be nearly as likely to encounter other memory contention issues.

 

 

Thanks, I just changed them to your recommendations. :)  I have had some issues with Plex Media Server, but that's about it. 


 

> > Going back to RC15. On RC15a, unmenu randomly becomes unresponsive. I've got to reload or go back to tower:8080 to be able to access it again. I can't have this with the WIFE; she gets mad about stuff like this!
>
> I understand; they are not impressed with the server, but when something doesn't work right, all hell breaks loose! (I may be exaggerating slightly...)

 

I found something interesting. If I spin down the array and then try to access anything else in the menu section, such as Syslog, unmenu locks up. The same happens if I spin up the array. So this isn't specific to RC15 or RC15a; something is making unmenu become unresponsive.

 

I am going to switch back to RC12a and see if the issue remains.

UPDATE: Switched to RC12a, and unmenu doesn't lock up or become unresponsive when I spin the hard drives down or up. It appears that, once again, I will be going back to RC12a.  :o


> > > I have no idea what the numbers mean, or how they are related. I fully rely on this thread to guide me :) A while back it was suggested to change them to help performance. So mine are at:
> > >
> > > stripes = 12800
> > > limit = 7680
> > > window = 3840
> >
> > I'm surprised you're not having other significant issues if you're allocating that much to disk buffers.
> >
> > I'd try 2048 / 768 / 1024 and see what your parity check results are... I suspect they'll be as good as what you're seeing now, and you won't be nearly as likely to encounter other memory contention issues.
>
> Thanks, I just changed them to your recommendations. :) I have had some issues with Plex Media Server, but that's about it.

 

To follow up on this one: after making the changes and running a parity check, the results are

Last checked on Wed Jul 3 23:57:04 2013 PDT (today), finding 0 errors.

> Duration: 12 hours, 49 minutes, 24 seconds. Average speed: 65.0 MB/sec

So, close to what they were before. I did have an issue after making the initial change: the webGui became unresponsive again about 2 to 3 hours later. I had to use unmenu to stop the array, and then unmenu became unresponsive as well. I had to use the console to run the powerdown script, which didn't complete. I then had to run the reboot command, and it rebooted. I lost the syslog in the process, and I have no idea if this issue is related to the changes made above. I did have the webGui issue in previous RCs, but not in 15a until now.

 

I would like feedback on these settings and my test results. Thanks.


> I would like feedback on these settings and my test results. Thanks.

 

Leave the settings as they are now ... you had them far too high, and it's hard to say what issues that could have ultimately caused.

 

As for the current issues with 15a: don't worry about them; update to RC16b and see if that resolves things... at a minimum, it fixes a significant potential data-loss problem in RC15a.

This topic is now closed to further replies.