unRAID Server Release 5.0-rc14 Available



I will now go and test my NFS stale file handle problem when running mkvmerge.

 

I have set 'Tunable (fuse_remember)' to -1.

 

For the first time (other than with versions rc4-rc10) I am able to access my Movies share after I have run an mkvmerge with the output file written to that Movies share.
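For anyone else chasing this: my understanding is that 'Tunable (fuse_remember)' gets passed through as the FUSE "remember" option on the /mnt/user shfs mount (an assumption on my part; verify on your own build). You can check what the mount is actually using from the console:

mount | grep /mnt/user    # shows the shfs/FUSE mount and its options
# With remember=-1, FUSE keeps its inode-to-path mappings indefinitely,
# which is what stops NFS clients from hitting ESTALE ("stale file
# handle") errors after a file is rewritten by a tool like mkvmerge.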

 

Great stuff - thank you Tom!

Link to comment

 

Madburg, may I ask you for a favor, please? I see from your screenshots that you've allocated more than 4GB of memory to that machine. Would you mind, when you have a chance, starting that machine with less than 4GB and seeing if you have any performance differences? And I don't mean just parity sync; I mean actual copying of large amounts of files (and big ones) to the server. Thanks.

 

I could, once I am done with all my tests. Parity won't be complete until tomorrow, and I won't have a chance until after work, plus all the prep work for Father's Day coming up :)

 

If you provide some details on what you consider large file copies, and from a client to where (cache or array?), create a scenario and I will do it when I have a moment.

 

How much memory do you want me to allocate, and which boot option should I boot with at your chosen memory size?

 

Whenever you have a chance. From the hypervisor, allocate about 3.9GB of memory to the VM and start it, without other changes. Boot unRAID normally, with NO "mem=" options and such.

 

What I can tell you is that once I fire up all my clients and workload against the unRAID server and kick off the mover, my memory utilization goes above 5GB. So there is no way in hell I would settle for, or live with, a non-PAE build on a permanent basis.

 

Oh, no, that's not really 5GB used; that figure includes all the discardable cache buffers. That's just how the Linux kernel works: if you have 100GB of physical RAM in your machine, you'll see it using 100GB. I can bet my old running shoes that your server will handle exactly that same load with 2GB of memory allocated to it. You'll be surprised.
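A quick way to see this on the server itself (the figures below are made up for illustration):

free -m
#              total   used   free  shared  buffers  cached
# Mem:          5968   5800    168       0      210     4900
# -/+ buffers/cache:    690   5278
# The "-/+ buffers/cache" row is the real application usage; "cached"
# is discardable page cache. To prove it, drop the caches (harmless,
# they just get rebuilt) and watch "used" collapse:
sync && echo 3 > /proc/sys/vm/drop_caches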

 

OK, I will test with 3GB of memory.

 

I don't believe it's not using the 5GB when it's taxed. You say it uses everything you give it, but I have never seen it take all 6GB in total. During this parity check, for example, it's using about 1.5GB, not 6GB and not 5GB. Call it buffers, flushes, whatever; if there's no memory for it, performance suffers. I've witnessed it first hand plenty of times messing around with unRAID over these two years.

There is no ballooning: when you pass through a device (in my case a controller) to the VM, it forces the allocation of all that memory to that guest, reserved on the host and excluded from ballooning/overcommitment by any other guest. VMware warns you of this, as their goal is to avoid such dependencies so one can utilize vMotion, HA/DRS, etc. Once you pass a physical device through to a guest, all that technology and flexibility is gone. Which in our case is perfectly fine: we are committing a chunk of our physical hardware to unRAID and leaving the rest to be utilized for other virtualization purposes. This way another chassis, PSU, proc, etc. does not have to be purchased.

 

ESXi makes it very easy to see utilization, especially when the host is managed under vSphere, which stores historical data. So you can look back and see what happened at 3am when the mover kicked off: how long it ran and what resources (CPU cycles, memory) were utilized. Versus not knowing much about Linux and unRAID, and making guesses based on a few glances at whatever some plugin or command shows at the point in time you ran it.

 

 

Link to comment

As for four years => not at all true.

You just have to pick up a fight about everything, don't you?

Very well then, here are some facts:

- He started working on v.5 in March 2009.

- The first v.5 beta was due out for Christmas 2009.

- v.5-beta1 was out in July 2010.

- We are having this discussion in June 2013.

So yes, it has been four years of work on v.5. :)

 

Come on, what do you want from a guy who posts how every release has been rock solid, yet won't even reboot since booting up RC13 in fear for his array, then waits until RC14 gets released and asks Tom, how he can minimize screwing something up shutting down RC13 in order to go to RC14.  ;D

Link to comment

I don't believe it's not using the 5GB when it's taxed. You say it uses everything you give it, but I have never seen it take all 6GB in total. During this parity check, for example, it's using about 1.5GB, not 6GB and not 5GB.

Oh, it's using it, but not in the sense how Windows apps are using memory. Give Linux 200GB memory, and then start reading/copying lots of files around, and you'll see it "using" all that memory. It caches about everything it can possibly cache. And that's a good thing. But all those buffers are immediately discardable when needed. So don't let it give you the false impression that it's struggling for memory.

No disagreement with you on these points.

ME no thinks linux -> windows the same  ;)

Link to comment

Parity check at 45% (more than halfway through the 2TB drives now):

 

Observation: went upstairs to watch a show via Mac mini (Plex), and now that the parity check is running, I cannot connect to unRAID via AFP; the syslog shows constant disconnects. When I initially tested after first bringing up RC14, before running a parity check, I was able to connect no problem. Tested with a second AFP client, same thing; the screenshot shows the syslog tail with the AFP disconnects logged.

 

The 250GB drive spun down, and the ISO finished being extracted, so the cache drive spun down as well. ESXi shows memory utilization dropped (I bet right after the ISO finished extracting); I don't know whether the drives spinning down would free up any memory on unRAID as well...

 

A change in Low mem though: 48560 is the lowest I have seen so far during this parity check (also in the screenshot). Too tired; I thought I saw 4856.. (it's 48560). Going to bed. A bit low, but not that low. Just checked again: it's 80388.
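For reference, that "Low" figure is the 32-bit kernel's low-memory zone, which you can read straight out of /proc (the LowTotal value below is illustrative; LowFree is the number quoted above):

grep -i '^Low' /proc/meminfo
# LowTotal:   869096 kB
# LowFree:     80388 kB
# On a PAE kernel certain structures must live in low memory, so LowFree
# running out can hurt even while plenty of high memory remains free.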

 

 

 

 

[Screenshot attached: RC14-45_PercentInParityCheck.jpg]

Link to comment

You almost talked me into it, but I'll only get 4 hours of sleep if I go to bed now.

 

If I start this adventure I will have zero. I would have to log off all clients and servers, shut down unRAID, readjust the memory and reservation (it's static, not like on a traditional VM), and only then boot back up, re-login with a few clients and servers, kick off the ISO extraction, connect via AFP (it should work), log off the AFP clients, kick off parity, screenshot, wait a while, screenshot, attempt the AFP connects again. It won't be a 15 min. thing. So I have to make the executive decision to let this finish while I get a bit of sleep, and take it up tomorrow. I can try to VPN in from work tomorrow and do this test (as long as time permits at work). I'll let you know; you seem to be glued here arguing with the gary man anyway  ;D

 

 

Edit: Just make very sure you're not selecting any "mem=" boot options on this restart.
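For anyone unsure where those "mem=" options live: they're append parameters in syslinux.cfg on the flash drive. A rough sketch based on the stock layout (label names and paths vary a bit between releases, so treat this as illustrative):

default unRAID OS
label unRAID OS
  kernel bzimage
  append initrd=bzroot
label unRAID OS (mem limited)
  kernel bzimage
  append mem=4095M initrd=bzroot

Pointing "default" at the entry you want also avoids having to pick one at the console on every boot.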

 

No problem will do.

Link to comment

As for four years => not at all true.

You just have to pick up a fight about everything, don't you?

Very well then, here are some facts:

- He started working on v.5 in March 2009.

- The first v.5 beta was due out for Christmas 2009.

- v.5-beta1 was out in July 2010.

- We are having this discussion in June 2013.

So yes, it has been four years of work on v.5. :)

 

Come on, what do you want from a guy who posts how every release has been rock solid, yet won't even reboot since booting up RC13 in fear for his array, then waits until RC14 gets released and asks Tom, how he can minimize screwing something up shutting down RC13 in order to go to RC14.  ;D

 

"release" => Official release (i.e. v4.7).    My v4.7 media server has been booted precisely 5 times since I installed it in Jan 2011 ... initially;  twice to add drives; and twice when it was shut down by the APC UPS package due to extended power failures.    I'd call that "rock solid" (Current UpTime is 187 days and counting).    But even my v5 system has been very solid, although the uptime has never been more than 45 days due to fairly constant releases (current uptime is only a bit over 7 hrs, as I tested the shutdown after it finished my initial post-load parity check.    Yes, I was very cautious r.e. RC13, as there were already LOTS of notes about how it was messing up flash drives and causing a variety of issues when you shut down before I had even finished my initial parity check [first thing I always do after loading a new version].    Didn't see any reason to mess up the system if I didn't need to, as it was obvious a new release would be out very quickly to resolve that significant issue (THAT is one thing Tom is quite good at -- note the number of "a" releases due to him forgetting Realtek drivers).

 

As for working on v5 for over 4 years ... of course he has; but that's not what you said.  In fact he's been working on unRAID for over 7 years.    Your criticism, however, was "... He can't bring one kernel to final in over four years," ==> implying he hadn't made a final release in that time.  But the last "official" final release was in Jan 2011.    I'd call THAT the "clock" you start when counting for the next release.    We had four children over a span of 17 years, but I'd hardly say we worked on making our youngest one for 17 years  :)

 

 

Link to comment

upgraded to rc14

 

but swapped the linux.cfg file for the one from rc12a again

 

with the new linux.cfg file I had only 3064440k RAM in top;

with the old one I have 4064440k in top again.
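To confirm how much RAM the kernel actually sees after swapping config files, you don't need top:

grep MemTotal /proc/meminfo   # total RAM visible to the kernel
cat /proc/cmdline             # shows whether a mem= limit was applied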

 

powerdown with the stock script works

 

the custom powerdown script still doesn't work :(

so we will urgently need a new one

 

can anybody tell me what command runs when we press Stop (the array) in the GUI?

 

I would like to use that command in a cron job for the moment, until somebody makes a better powerdown script

uninstalling the old script now, as it's of no use
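On the stop-the-array question: the GUI goes through emhttp, which does more than run a single command (it unmounts shares and disks first). A hedged sketch of the console sequence commonly cited for v5 -- mdcmd is the same tool visible in the syslogs in this thread, but verify this before trusting it in a cron job:

fuser -km /mnt/disk* /mnt/user   # kick processes holding the mounts
umount /mnt/disk* /mnt/user      # unmount user shares and data disks
/root/mdcmd stop                 # stop the md array (assumption: "stop"
                                 # is accepted; check your release first)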

Link to comment

As for four years => not at all true.

You just have to pick up a fight about everything, don't you?

Very well then, here are some facts:

- He started working on v.5 in March 2009.

- The first v.5 beta was due out for Christmas 2009.

- v.5-beta1 was out in July 2010.

- We are having this discussion in June 2013.

So yes, it has been four years of work on v.5. :)

 

Come on, what do you want from a guy who posts how every release has been rock solid, yet won't even reboot since booting up RC13 in fear for his array, then waits until RC14 gets released and asks Tom, how he can minimize screwing something up shutting down RC13 in order to go to RC14.  ;D

 

"release" => Official release (i.e. v4.7).    My v4.7 media server has been booted precisely 5 times since I installed it in Jan 2011 ... initially;  twice to add drives; and twice when it was shut down by the APC UPS package due to extended power failures.    I'd call that "rock solid" (Current UpTime is 187 days and counting).    But even my v5 system has been very solid, although the uptime has never been more than 45 days due to fairly constant releases (current uptime is only a bit over 7 hrs, as I tested the shutdown after it finished my initial post-load parity check.    Yes, I was very cautious r.e. RC13, as there were already LOTS of notes about how it was messing up flash drives and causing a variety of issues when you shut down before I had even finished my initial parity check [first thing I always do after loading a new version].    Didn't see any reason to mess up the system if I didn't need to, as it was obvious a new release would be out very quickly to resolve that significant issue (THAT is one thing Tom is quite good at -- note the number of "a" releases due to him forgetting Realtek drivers).

 

As for working on v5 for over 4 years ... of course he has; but that's not what you said.  In fact he's been working on unRAID for over 7 years.    Your criticism, however, was "... He can't bring one kernel to final in over four years," ==> implying he hadn't made a final release in that time.  But the last "official" final release was in Jan 2011.    I'd call THAT the "clock" you start when counting for the next release.    We had four children over a span of 17 years, but I'd hardly say we worked on making our youngest one for 17 years  :)

 

While I have to agree that what Garycase is saying is the fairer view, we need to keep in mind that the promised 4.7.1 was never forthcoming.  Whilst the problems it was going to address did not affect Garycase, that does not change the fact that the last supported stable release was two years ago...

 

Having said all that, it's obvious that a great effort is being put into finalizing v5 at present.  Regardless of whether this is being driven by the desire for hardware sales or by different kernels being tried, it does appear that we are advancing and that Tom is trying his best, with frequent communication.  Now is not the time to complain about how long things are taking.  If Tom goes silent again for weeks or months then I will be pissed off too... but while he is working on it and communicating regularly, what more can you ask?

Link to comment

I only have 4GB, but I'm still running the "mem=4096" option.  Not sure if that makes a difference in either direction, BUT I am seeing much better write speeds (~75-80MB/s)

Your better speeds are because you've started the kernel in non-PAE mode.  My observations exactly! Even with 4GB of physical RAM installed, you are still much better off in non-PAE mode.  Thank you!

 

Unlikely, since I've been running with the mem parm on rc13 in an attempt to solve the problem.

 

Sent from my Nexus 7 using Tapatalk HD

 

Sorry, I didn't quite understand you: what did you mean by "Unlikely"?

Your speeds are better with the mem parameter, isn't that correct?

 

Ack, I screwed up that post ...  so here is what I said NOT hidden inside a quote:

 

I had already added the mem parameter with rc13.  So, the only change between rc13 and rc14 for me is everything else Tom might have changed EXCEPT for the presence of the mem parameter.

 

Link to comment

Parity check at 99%

I forgot, I have backups that start at 11pm being written to the unRAID cache drive. I had several movies, TV shows and backups scheduled to be moved to the array on the default mover schedule (all still moving, due to the parity check running), so this parity check will not be my best time, but it's a good exercise for RC14.

 

 

Update:

Jun 13 08:08:24 PNTower kernel: md: sync done. time=37046sec
Jun 13 08:08:24 PNTower kernel: md: recovery thread sync completion status: 0

[Screenshot attached: RC14-99_PercentInParityCheck.jpg]

Link to comment

I tried out rc14 on my unRAID1 setup. I replaced the three files, but it got stuck partway through the boot: something about losing a connection, which I guess means my external enclosures. It never got to the point of recognizing the 17 drives in my external port-multiplier enclosures. I tried a second time with the same results. So I copied the three files from rc12a back onto the flash drive and it booted normally. I have no idea what the issue was. I'll need to try it on my second unRAID setup tomorrow, which only has eight drives in external enclosures, to see if it has a similar issue.

 

 

 

How much RAM do you have, and which boot option did you select?

Even on my ESX system, there's a really long pause while it's attaching drives, until a failure shows up.

Without the mem= parameter it comes up normally. My system only has 4 drives though.

 

I have 4GB of memory. I didn't select any boot option; I thought if you had 4GB or less there was no need to select one? I guess I need to hook up a monitor to be able to select an option, which would be kind of a pain since I don't normally run the unRAID server 24/7; I boot it every day. So I would need to select an option every time I boot?

 

I'll need to check it out with my second unRAID (which is connected to a TV) tonight after it finishes with the files I am copying over. I think I have 2GB of memory in that one, maybe 4GB, I'm not sure. I don't run any add-ons, so I don't need a lot of memory in either of my unRAID setups.

Link to comment

Parity check at 99%

I forgot, I have backups that start at 11pm being written to the unRAID cache drive. I had several movies, TV shows and backups scheduled to be moved to the array on the default mover schedule (all still moving, due to the parity check running), so this parity check will not be my best time, but it's a good exercise for RC14.

 

 

Update:

Jun 13 08:08:24 PNTower kernel: md: sync done. time=37046sec
Jun 13 08:08:24 PNTower kernel: md: recovery thread sync completion status: 0

 

37k seconds at 99% looks like a normal time for a 3TB parity drive to me; is that correct?
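For reference, the average rate works out from the syslog figures above (md counts in 1 KiB blocks):

# 2,930,266,532 blocks x 1 KiB = ~2.93 TB, i.e. a full 3TB parity pass
echo $(( 2930266532 / 37046 ))   # -> 79098 KiB/s, roughly 81 MB/s average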

Link to comment

Something caught my eye while scrolling through my PuTTY windows. The syslog is 27MB, so I barely got it to load in order to search through it.

 

Since the parity check started yesterday evening (with the mover running), these SAS events got logged (for just 1 second, as far as I can tell; they appear nowhere else in the entire 27MB syslog), and I am not sure what to make of them. I've never seen this type of event before.
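Incidentally, a 27MB syslog doesn't need to be loaded into an editor; the stock tools stream it:

grep -n 'task abort' /var/log/syslog | head   # locate the SAS events
grep -c 'dsi_stream_write' /var/log/syslog    # count the AFP errors
tail -n 200 /var/log/syslog                   # just the recent entries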

 

The SAS events (errors?) are at the bottom:

Jun 12 21:50:56 PNTower kernel: mdcmd (61): check CORRECT
Jun 12 21:50:56 PNTower kernel: md: recovery thread woken up ...
Jun 12 21:50:56 PNTower kernel: md: recovery thread checking parity...
Jun 12 21:50:57 PNTower kernel: md: using 4096k window, over a total of 2930266532 blocks.
Jun 12 23:09:42 PNTower afpd[3665]: afp_disconnect: primary reconnect failed
Jun 12 23:12:20 PNTower afpd[6406]: afp_disconnect: primary reconnect failed
Jun 12 23:14:56 PNTower afpd[9086]: afp_disconnect: primary reconnect failed
Jun 12 23:16:25 PNTower afpd[3665]: dsi_stream_write: Broken pipe
Jun 12 23:16:25 PNTower afpd[3665]: afp_alarm: connection problem, entering disconnected state
Jun 12 23:17:32 PNTower afpd[11852]: afp_disconnect: primary reconnect failed
Jun 12 23:20:08 PNTower afpd[14502]: afp_disconnect: primary reconnect failed
Jun 12 23:20:32 PNTower afpd[3914]: dsi_stream_write: Broken pipe
Jun 12 23:20:32 PNTower last message repeated 2 times
Jun 12 23:21:01 PNTower kernel: mdcmd (62): spindown 11
Jun 12 23:21:23 PNTower afpd[6406]: dsi_stream_write: Broken pipe
Jun 12 23:21:23 PNTower afpd[6406]: afp_alarm: connection problem, entering disconnected state
Jun 12 23:22:22 PNTower afpd[11852]: dsi_stream_write: Broken pipe
Jun 12 23:22:22 PNTower afpd[11852]: afp_alarm: connection problem, entering disconnected state
Jun 12 23:22:44 PNTower afpd[17394]: afp_disconnect: primary reconnect failed
Jun 12 23:25:20 PNTower afpd[19892]: afp_disconnect: primary reconnect failed
Jun 12 23:25:25 PNTower afpd[9086]: dsi_stream_write: Broken pipe
Jun 12 23:25:25 PNTower afpd[9086]: afp_alarm: connection problem, entering disconnected state
Jun 12 23:28:53 PNTower afpd[14502]: dsi_stream_write: Broken pipe
Jun 12 23:28:53 PNTower afpd[14502]: afp_alarm: connection problem, entering disconnected state
Jun 12 23:31:06 PNTower afpd[17394]: dsi_stream_write: Broken pipe
Jun 12 23:31:06 PNTower afpd[17394]: afp_alarm: connection problem, entering disconnected state
Jun 12 23:35:42 PNTower afpd[19892]: afp_alarm: child timed out, entering disconnected state
Jun 12 23:35:48 PNTower afpd[30238]: afp_disconnect: primary reconnect failed
Jun 12 23:38:24 PNTower afpd[32740]: afp_disconnect: primary reconnect failed
Jun 12 23:41:00 PNTower afpd[2821]: afp_disconnect: primary reconnect failed
Jun 12 23:43:36 PNTower afpd[5377]: afp_disconnect: primary reconnect failed
Jun 12 23:46:12 PNTower afpd[7876]: afp_disconnect: primary reconnect failed
Jun 12 23:48:49 PNTower afpd[10379]: afp_disconnect: primary reconnect failed
Jun 13 00:09:17 PNTower afpd[30238]: dsi_stream_write: Broken pipe
Jun 13 00:09:17 PNTower afpd[30238]: afp_alarm: connection problem, entering disconnected state
Jun 13 00:13:56 PNTower afpd[7876]: dsi_stream_write: Broken pipe
Jun 13 00:13:56 PNTower afpd[7876]: afp_alarm: connection problem, entering disconnected state
Jun 13 00:14:09 PNTower afpd[10249]: dsi_stream_write: Broken pipe
Jun 13 00:14:09 PNTower last message repeated 2 times
Jun 13 00:15:12 PNTower afpd[32740]: dsi_stream_write: Broken pipe
Jun 13 00:15:12 PNTower afpd[32740]: afp_alarm: connection problem, entering disconnected state
Jun 13 00:16:29 PNTower afpd[2821]: dsi_stream_write: Broken pipe
Jun 13 00:16:29 PNTower afpd[2821]: afp_alarm: connection problem, entering disconnected state
Jun 13 00:16:29 PNTower afpd[5377]: dsi_stream_write: Broken pipe
Jun 13 00:16:29 PNTower afpd[5377]: afp_alarm: connection problem, entering disconnected state
Jun 13 00:16:29 PNTower afpd[10379]: dsi_stream_write: Broken pipe
Jun 13 00:16:29 PNTower afpd[10379]: afp_alarm: connection problem, entering disconnected state
Jun 13 00:16:29 PNTower afpd[12910]: dsi_stream_write: Broken pipe
Jun 13 00:16:29 PNTower last message repeated 2 times
Jun 13 00:16:29 PNTower afpd[16885]: dsi_stream_write: Broken pipe
Jun 13 00:16:29 PNTower last message repeated 2 times
Jun 13 00:16:29 PNTower afpd[15540]: dsi_stream_write: Broken pipe
Jun 13 00:16:29 PNTower last message repeated 2 times
Jun 13 00:18:10 PNTower afpd[13938]: dsi_stream_write: Broken pipe
Jun 13 00:18:10 PNTower last message repeated 2 times
Jun 13 00:18:10 PNTower afpd[18362]: dsi_stream_write: Broken pipe
Jun 13 00:18:10 PNTower last message repeated 2 times
Jun 13 00:49:32 PNTower kernel: mdcmd (63): spindown 11
Jun 13 01:14:19 PNTower afpd[10202]: afp_alarm: child timed out, entering disconnected state
Jun 13 01:16:59 PNTower afpd[22841]: dsi_stream_write: Broken pipe
Jun 13 01:16:59 PNTower last message repeated 2 times
Jun 13 01:17:00 PNTower afpd[28229]: dsi_stream_write: Broken pipe
Jun 13 01:17:00 PNTower last message repeated 2 times
Jun 13 01:17:16 PNTower afpd[20103]: dsi_stream_write: Broken pipe
Jun 13 01:17:16 PNTower last message repeated 2 times
Jun 13 01:17:16 PNTower afpd[17249]: dsi_stream_write: Broken pipe
Jun 13 01:17:16 PNTower last message repeated 2 times
Jun 13 01:17:49 PNTower afpd[25537]: dsi_stream_write: Broken pipe
Jun 13 01:17:49 PNTower last message repeated 2 times
Jun 13 01:17:49 PNTower afpd[30858]: dsi_stream_write: Broken pipe
Jun 13 01:17:49 PNTower last message repeated 2 times
Jun 13 01:21:42 PNTower emhttp: shcmd (144): /usr/sbin/hdparm -y /dev/sde &> /dev/null
Jun 13 01:38:29 PNTower afpd[9906]: afp_alarm: child timed out, entering disconnected state
Jun 13 02:52:14 PNTower kernel: mdcmd (64): spindown 11
Jun 13 02:53:54 PNTower emhttp: shcmd (145): /usr/sbin/hdparm -y /dev/sde &> /dev/null
Jun 13 03:40:01 PNTower logger: mover started
Jun 13 03:40:01 PNTower logger: skipping Backup/
Jun 13 03:40:01 PNTower logger: moving Backups/
Jun 13 03:40:01 PNTower logger: ./Backups/Exchange/NYCEMX001/Daily Incremental NYCEMX001_WED_6_12_2013-11.00.03.PM.bkf
Jun 13 03:40:01 PNTower logger: .d..t...... ./
Jun 13 03:40:01 PNTower logger: .d..t...... Backups/
Jun 13 03:40:01 PNTower logger: .d..t...... Backups/Exchange/
Jun 13 03:40:01 PNTower logger: .d..t...... Backups/Exchange/NYCEMX001/
Jun 13 03:40:01 PNTower logger: >f+++++++++ Backups/Exchange/NYCEMX001/Daily Incremental NYCEMX001_WED_6_12_2013-11.00.03.PM.bkf
Jun 13 04:03:48 PNTower kernel: sd 0:0:10:0: attempting task abort! scmd(f2545240)
Jun 13 04:03:48 PNTower kernel: sd 0:0:10:0: [sdl] CDB: cdb[0]=0x2a: 2a 00 03 eb a4 20 00 04 00 00
Jun 13 04:03:48 PNTower kernel: scsi target0:0:10: handle(0x0014), sas_address(0x5001517e85bfcfe5), phy(5)
Jun 13 04:03:48 PNTower kernel: scsi target0:0:10: enclosure_logical_id(0x5001517e85bfcfff), slot(5)
Jun 13 04:03:48 PNTower kernel: sd 0:0:10:0: task abort: SUCCESS scmd(f2545240)
Jun 13 04:03:48 PNTower kernel: sd 0:0:10:0: attempting task abort! scmd(e5370f00)
Jun 13 04:03:48 PNTower kernel: sd 0:0:10:0: [sdl] CDB: cdb[0]=0x2a: 2a 00 03 eb a8 20 00 04 00 00
Jun 13 04:03:48 PNTower kernel: scsi target0:0:10: handle(0x0014), sas_address(0x5001517e85bfcfe5), phy(5)
Jun 13 04:03:48 PNTower kernel: scsi target0:0:10: enclosure_logical_id(0x5001517e85bfcfff), slot(5)
Jun 13 04:03:48 PNTower kernel: sd 0:0:10:0: task abort: SUCCESS scmd(e5370f00)
Jun 13 04:03:48 PNTower kernel: sd 0:0:10:0: attempting task abort! scmd(f1d69b40)
Jun 13 04:03:48 PNTower kernel: sd 0:0:10:0: [sdl] CDB: cdb[0]=0x2a: 2a 00 03 eb ac 20 00 04 00 00
Jun 13 04:03:48 PNTower kernel: scsi target0:0:10: handle(0x0014), sas_address(0x5001517e85bfcfe5), phy(5)
Jun 13 04:03:48 PNTower kernel: scsi target0:0:10: enclosure_logical_id(0x5001517e85bfcfff), slot(5)
Jun 13 04:03:48 PNTower kernel: sd 0:0:10:0: task abort: SUCCESS scmd(f1d69b40)
Jun 13 04:03:48 PNTower kernel: sd 0:0:10:0: attempting task abort! scmd(f5c47780)
Jun 13 04:03:48 PNTower kernel: sd 0:0:10:0: [sdl] CDB: cdb[0]=0x2a: 2a 00 03 eb b0 20 00 04 00 00
Jun 13 04:03:48 PNTower kernel: scsi target0:0:10: handle(0x0014), sas_address(0x5001517e85bfcfe5), phy(5)
Jun 13 04:03:48 PNTower kernel: scsi target0:0:10: enclosure_logical_id(0x5001517e85bfcfff), slot(5)
Jun 13 04:03:48 PNTower kernel: sd 0:0:10:0: task abort: SUCCESS scmd(f5c47780)
Jun 13 04:03:48 PNTower kernel: sd 0:0:10:0: attempting task abort! scmd(f5c47300)
Jun 13 04:03:48 PNTower kernel: sd 0:0:10:0: [sdl] CDB: cdb[0]=0x2a: 2a 00 03 eb b4 20 00 01 b8 00
Jun 13 04:03:48 PNTower kernel: scsi target0:0:10: handle(0x0014), sas_address(0x5001517e85bfcfe5), phy(5)
Jun 13 04:03:48 PNTower kernel: scsi target0:0:10: enclosure_logical_id(0x5001517e85bfcfff), slot(5)
Jun 13 04:03:48 PNTower kernel: sd 0:0:10:0: task abort: SUCCESS scmd(f5c47300)
Jun 13 04:03:48 PNTower kernel: sd 0:0:10:0: attempting task abort! scmd(f5c47480)
Jun 13 04:03:48 PNTower kernel: sd 0:0:10:0: [sdl] CDB: cdb[0]=0x2a: 2a 00 00 00 a7 40 00 00 70 00
Jun 13 04:03:48 PNTower kernel: scsi target0:0:10: handle(0x0014), sas_address(0x5001517e85bfcfe5), phy(5)
Jun 13 04:03:48 PNTower kernel: scsi target0:0:10: enclosure_logical_id(0x5001517e85bfcfff), slot(5)
Jun 13 04:03:48 PNTower kernel: sd 0:0:10:0: task abort: SUCCESS scmd(f5c47480)
Jun 13 04:04:16 PNTower logger: ./Backups/Exchange/NYCEMX001/Daily Incremental NYCEMX001_WED_6_12_2013-11.00.03.PM.log
Jun 13 04:04:16 PNTower logger: .d..t...... Backups/Exchange/NYCEMX001/
Jun 13 04:04:16 PNTower logger: >f+++++++++ Backups/Exchange/NYCEMX001/Daily Incremental NYCEMX001_WED_6_12_2013-11.00.03.PM.log
Jun 13 04:04:16 PNTower logger: ./Backups/Exchange/NYCEMX001
Jun 13 04:04:16 PNTower logger: ./Backups/Exchange
Jun 13 04:04:16 PNTower logger: .d..t...... Backups/Exchange/
Jun 13 04:04:16 PNTower logger: ./Backups/
Jun 13 04:04:16 PNTower logger: .d..t...... Backups/
Jun 13 04:04:16 PNTower logger: skipping Handbrake/
Jun 13 04:04:16 PNTower logger: moving Movies/
Jun 13 04:04:17 PNTower logger: ./Movies/Flight (2012) BRu/Flight.2012.1080p.BluRay.DTS.DL.x264-HDC.nfo
Jun 13 04:04:17 PNTower logger: .d..t...... ./
Jun 13 04:04:17 PNTower logger: .d..t...... Movies/
Jun 13 04:04:17 PNTower logger: cd+++++++++ Movies/Flight (2012) BRu/
Jun 13 04:04:17 PNTower logger: >f+++++++++ Movies/Flight (2012) BRu/Flight.2012.1080p.BluRay.DTS.DL.x264-HDC.nfo
Jun 13 04:08:33 PNTower logger: ./Movies/Flight (2012) BRu/Flight (2012) BRu.mkv
Jun 13 04:08:33 PNTower logger: .d..t...... Movies/Flight (2012) BRu/
Jun 13 04:08:33 PNTower logger: >f+++++++++ Movies/Flight (2012) BRu/Flight (2012) BRu.mkv
Jun 13 04:34:16 PNTower kernel: mdcmd (66): spindown 16
Jun 13 04:34:16 PNTower kernel: mdcmd (67): spindown 18
Jun 13 04:34:57 PNTower kernel: mdcmd (68): spindown 14
Jun 13 04:37:37 PNTower kernel: mdcmd (69): spindown 12
Jun 13 04:38:08 PNTower kernel: mdcmd (70): spindown 15

 

The parity check just finished: no errors, no red balls, all seems well, so I'm not sure what the logged SAS events mean. Maybe a glitch that it recovered from...?

 

The AFP disconnects are mostly from the Mac server trying to connect to unRAID, which it did successfully prior to the parity check.

 

Update: Mover completed; I verified everything moved off the cache drive; no red balls (drive sdl is fine); all looks well.

SAS hiccup?

 

Link to comment

I had already added the mem parameter with rc13.  So, the only change between rc13 and rc14 for me is everything else Tom might have changed EXCEPT for the presence of the mem parameter.

Unlikely since I've been running with the mem parm with 13 in an attempt to solve the problem.

Which is exactly how that parameter is solving your problem -- by making your kernel boot in non-PAE mode. Try removing it and you'll see those speeds nose-dive.

 

Then why didn't it speed things up in rc13?  Mind you, I did run rc13 at first without mem=4095 and only added it because folks were suggesting it as a possible fix.  Also, to be clear, I was not getting SLOOOOOW speeds on the order of single digits like others; I was getting very herky-jerky speeds ranging from 100 down to 0 for a single large file, averaging out to about 30.  That was on rc13, with and without mem=4095.  Now, with mem=4095, I get a fairly consistent 80.  But for grins, I will indeed go home and try some speed tests with mem=4095 removed, just for completeness.

 

Also note that I only have 4GB in my machine.
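When you run those tests, two quick checks make the before/after comparison unambiguous (stock commands; exact dmesg wording varies by kernel):

cat /proc/cmdline           # confirms whether the mem= limit was applied
free -lm                    # -l splits Low/High memory; with mem= capping
                            # RAM below 4GB, the High row stays near zero
dmesg | grep -i highmem     # the kernel's own report of the highmem split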

Link to comment

37k seconds at 99% looks like a normal time for a 3TB parity drive to me; is that correct?

 

Seems a bit slow (a bit over 10 hours) ... my D525 Atom-based setup does a parity check in 7:33 => I think a rebuild would probably take about the same.    [That's with 6 3TB WD Reds]

 

... but it's probably fine. Considering this system was also running the mover (with a fair amount of stuff to move, apparently); was also running some apps doing an ISO extraction; and had been used a bit from another PC (in an apparently unsuccessful attempt to watch a movie); I'd say it's not a bad time.    Also, my system took about 8:15 until I changed some of the disk tuning parameters [thanks to Pauven for his work on those, which motivated me to experiment and find the optimal parameters for my system].    Doing ANYTHING during a parity check can add a significant amount of time to the process, since it results in a lot of disk thrashing.
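The tuning parameters referenced here are the md driver tunables under Settings -> Disk Settings (md_num_stripes, md_sync_window, and friends). A hedged sketch for inspecting them from the console -- mdcmd is the same tool logged throughout this thread, but confirm the syntax on your release before scripting anything:

/root/mdcmd status | head -40       # dumps the md driver's current state
# Reportedly the tunables can also be set at runtime, e.g.:
# /root/mdcmd set md_sync_window 2048    (assumption -- verify first)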

 

Link to comment

Parity check at 99%

I forgot, I have backups that start at 11pm being written to the unRAID cache drive. I had several movies, TV shows and backups scheduled to be moved to the array on the default mover schedule (all still moving, due to the parity check running), so this parity check will not be my best time, but it's a good exercise for RC14.

 

 

Update:

Jun 13 08:08:24 PNTower kernel: md: sync done. time=37046sec
Jun 13 08:08:24 PNTower kernel: md: recovery thread sync completion status: 0

 

37k seconds at 99% looks like a normal time for a 3TB parity drive to me; is that correct?

Considering what I had going on, it's a very good time.

That is due to the disk tunable parameters; with the defaults it would have taken an additional 2-3 hours.

 

Link to comment

37k seconds at 99% looks like a normal time for a 3TB parity drive to me; is that correct?

 

Seems a bit slow (a bit over 10 hours) ... my D525 Atom-based setup does a parity check in 7:33 => I think a rebuild would probably take about the same.    [That's with 6 3TB WD Reds]

 

Yeah, your 6 equally sized drives with nothing going on, versus what my config is and what I had going on. Really dude... go away.

Link to comment

Yeah, your 6 equally sized drives with nothing going on, versus what my config is and what I had going on.

 

Read the whole post => I noted that with all the other stuff you had going on it wasn't bad.  Any other activity on the system always causes a notable extension of the time;  and in your case that old 250GB drive has to be a major "drag" at the start ... plus you've also got to contend with the 2TB drives.    Given all that, it's not a bad time at all.    But the question was whether it was a good time for a 3TB parity drive ... and in that context it's not -- but there's good reason why it's slower.    BTW, the number of drives makes almost no difference in the time -- it's their relative sizes and speeds.

 

As a matter of interest, why do you keep that old 250GB drive in there??

 

Link to comment

Can someone confirm what I need to do in order to boot and test RC14? My machine has 4GB of RAM installed, no more, no less. I think the answer is nothing, but if I want, I can edit the file talked about earlier with the dollar sign in front of a PHP variable. Correct?

Link to comment

Can someone confirm what I need to do in order to boot and test RC14? My machine has 4GB of RAM installed, no more, no less. I think the answer is nothing, but if I want, I can edit the file talked about earlier with the dollar sign in front of a PHP variable. Correct?

 

With your hardware config (4GB RAM)... yes, the answer is technically nothing, other than following the upgrade instructions in the 5.0-rc14 release notes.  You edit the file to include the "$" once unRAID boots, from a console or telnet session.

Link to comment
