unRAID Server release 4.5-beta6 available



As a counterpoint data point, I have to say that the 4.5 betas have given me a huge boost in write speed.  This is writing to disk shares only, I do not use User Shares, which appear to be involved in all or most of the reports above.  My writes have always averaged about 9MB/s, but now have increased to almost 15MB/s, an enormous improvement!

 

The speed increase is so much better for me that, if Tom were to revert something back in order to restore the User Share writing speed, and that reversion also restored the previous slower speed on writes directly to disks, I would have to stick with the current version.  Hopefully, the problem is just with a module that is User Share related.

 

I too noticed this on my build which I found very interesting. Is there a reason why you're using disk shares instead of user shares other than full control over where your data is going (avoiding 6 drives spinning up all at once for accessing your movie library vs. having them all consolidated onto 1 drive)?


Is there a reason why you're using disk shares instead of user shares other than full control over where your data is going (avoiding 6 drives spinning up all at once for accessing your movie library vs. having them all consolidated onto 1 drive)?

 

Total control over their placement is a primary reason.  But also, when I started with unRAID, User Shares were just being announced, and remained a very beta item that I did not fully trust for quite a while, long after they were out of beta.  I've been working with computers a long time, and been through too many disasters, which makes me much slower now to pick up new things.  I'd much rather have you guys doing the guinea pig thing for me, and I'll be happy to help apply the bandages you will need.  Plus, it would be a big job to convert over to them, and to retrain SageTV on where everything is.


In testing out 'cache_dirs' by Joe I've found a few strange behaviors in spinning drives up/down.

 

If some drives are spun up and I click [Spin Down], then the remaining drives are spun up first, and then all are spun down except the parity drive. Re-clicking [Spin Down] has no effect after that.

 

If I click [Spin Up] first and then click [Spin Down], it seems to work correctly.

 

I can confirm at least part of this; I did not try spinning up then down.

 

I had 5 or 6 drives spun up and was not expecting further disk activity, so I clicked the Spin Down button, and watched the other drives spin up first, and then most of the drives spin down.  Interestingly, 2 of the drives that were previously not spinning were left spun up!  They probably received their spin down command while they were still spinning up.  I too am running cache_dirs, although I don't know of any related cause.  I suspect it is the preceding sync (a Good Thing) that is causing the spin ups.  For now, it is just as easy to use MyMain to individually spin a few drives down.


Ok, thanks - it's either the linux kernel or new Samba release causing the slowdown... I'll figure out which it is and try to get a fix in for next release.

 

Tom, I also tried NFS, with the same results. Based on that, I guess it's not the new Samba release.

Using NFS I can copy directly to the cache drive at 80+MB/s, but via a user share only 12-13MB/s.

 

Or is Samba somehow involved in NFS transfers too?


Well, speaking of us beta testing the shares functionality... I have another very, very strange issue. The other day I discovered that my log file was full of duplicate-file warnings. Confused, I started going into my drives and moving files around to get rid of the problem. Disk 1 and Disk 2 in my array were the only drives affected, which I found very peculiar.

I have around 25,000 MP3s on the server. Rather than hunt down which ones were affected, I moved all the music files off the array, made sure none were left on any of the drives, then put them back on. When I woke up the next morning, the exact same thing had happened again. Then, as a test, I manually moved the files onto a different disk (not 1 or 2), and the files transferred over with no duplicates.

At this point I was like, what the heck is going on? I use Directory Opus, so I set it up for a split folder view of both \\tower\disk1\ and \\tower\disk2\, then manually right-clicked and created a new text document in disk1; instantaneously the file appeared on disk2 as well. For some reason disk1 and disk2 are mirroring each other for new files (they have different amounts of space available, so I'm assuming the files that were already on the drives before this started are not affected by this mirroring). My question is: how do I fix this? I don't want this happening, lol.

 

Disk1 is a 1TB Seagate 7200.11 Drive

Disk2 is a 1TB Western Digital RE2-GP Drive

 

I have my shares set up in 3 categories: Media, Backups, and Software. Each is split-level 2 with the high-water allocation method, and I have NCQ disabled. The only thing I have installed is unMenu, and here is my go file.

http://www.charlesjorourke.com/hosting/go.txt

 

I'm going to restart the server now and get a fresh log going, because this one is completely full of duplicate/removal noise (I was messing with torrents), but you can view it anyway:

http://www.charlesjorourke.com/hosting/syslog-2009-05-15(2).txt

 

Once the server restarts I'll upload a new syslog and then I'll upload a 3rd later on when it has been running for a while.

 

edit: Here's the fresh reboot log http://www.charlesjorourke.com/hosting/syslog-freshreboot.txt

 


There does appear to be something wrong with your system, possibly with the User Shares system, so I would recommend not doing any file management with them for the time being.  You might try reverting to an older version, such as v4.5-beta4 or v4.4.2, and testing.  For sure, when you see those dupe messages appear, or otherwise detect corruption, don't do anything but capture the syslog and reboot a fresh system.

 

The incorrect dupe errors are all consistent with your test write of Disk 1 and 2.  They also seem to indicate that files on Disk 2 are mirrored on Disk 1, not physically, but within the presented file systems, and the User Share system rightly reports them as dupes.

 

You do have a misconfiguration in your go file that may be complicating things, although I don't think it is the cause of your problems.  You added the line that starts unMenu in the middle of the do loop that sets the readahead values for the data drives.  Move it below the done line.  As it is now, you are starting unMenu 6 times, once for each data drive!
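A sketch of the fix follows. The device list is illustrative and a counter stands in for launching unMenu (the real go file would call /boot/unmenu/uu and set readahead with blockdev); the only point is that a one-time command belongs below the loop's done line:

```shell
# Illustrative sketch, not the actual go file. A counter stands in for
# starting unMenu so the effect of loop placement is visible.
count=0
start_unmenu() {                  # stand-in for '/boot/unmenu/uu'
    count=$((count + 1))
}
for disk in sdb sdc sdd sde sdf sdg; do
    :   # per-drive setup goes here, e.g. blockdev --setra 2048 /dev/$disk
done
start_unmenu                      # after 'done': runs exactly once, not once per drive
echo "unMenu started $count time(s)"
```

With the call inside the loop, the counter would end up at 6; below the done line it runs once.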


Hah thanks for catching that! I will most likely revert back to 4.4.2... that way I get my write speeds back and avoid this issue at least.

 

Edit: Not good... even after reverting to 4.4.2 I am still getting the mirroring. There's no way this has been going on the whole time without me noticing... and yes, the cache write speed issues were fixed when I reverted. I'm very concerned about the mirroring, though... any more insight would be wonderful. I'm not at home, so I'm doing this all remotely until after 6pm EST.

 

It's definitely share related, and only with certain folders too... in some areas of the drive I can create a file or copy one over and it won't mirror.

 

Edit 2: This reminds me of symbolic link behavior to be honest... now that I think of it. When I originally set this server up before shares were in full swing I was playing with symbolic links at one point. That might be exactly what this is... left over links.

 

Edit 3: Haha, yep: leftover symbolic links that I hadn't noticed until now. I'm wondering why the duplicate messages didn't arrive earlier than the 4.5 betas... All should be well now and I'm glad I sorted this crap out.
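For anyone else hunting the same thing, here is a small sketch for tracking such leftovers down. The helper name is made up, and on an unRAID box the arguments would be the disk mount points, e.g. /mnt/disk1 /mnt/disk2:

```shell
# List every symbolic link under the directories given as arguments, so
# stale links like the ones described above can be found and removed.
list_links() {
    for dir in "$@"; do
        find "$dir" -type l -exec ls -ld {} \;
    done
}
# Example: list_links /mnt/disk1 /mnt/disk2
# Then remove a confirmed-stale link with:  rm /mnt/diskN/path/to/link
```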


Actually, no.  The files you are referring to are called '.reiserfs_priv' and they exist only in the root directory of a file system.  They are used to store "extended attributes" which are necessary to support Active Directory correctly.  All releases prior to 4.5-beta2 should never create these files.  All releases starting with 4.5-beta3 will only possibly create these files if you select 'Active Directory' as the security mode.  So if you went directly from 4.3.3 to 4.5-beta6 (and are not using Active Directory) I don't see how you could see these files.

I can only report what I experienced. I did upgrade from a version prior to 4.5-beta2, and I did see these ".reiserfs_priv" files in the root of every harddisk (except the one which unRAID couldn't mount, IIRC, but I'm not 100% sure on that one) after having experienced the problems I reported and then going back to 4.3-something. And I never ever activated Active Directory support. These are facts, regardless of whether you find them plausible or not...  ;)

What I can not say with certainty is exactly when those files were created. I noticed them for the first time after having done 4.3 -> 4.5 -> 4.3; I never noticed them before. It's possible that they were created before the upgrade to 4.5, but I believe they were created during the upgrade.

 

What I can say is that after having noticed - and then manually deleted - those '.reiserfs_priv' files I never saw these files again. I'm back on 4.5-beta6 now and unRAID didn't recreate them this time. Seems to me that they were only created during the first 4.3 -> 4.5-beta6 upgrade process.

 

Again, need to see a system log.

Would love to give you one. But how am I supposed to provide you with a system log of a problem that I no longer experience?

 

Also, I don't see how a file system with 0 free blocks would not mount - never have seen that.

Ok, so it seems that I experienced 3 bugs at once:

 

(1) Disks that cannot be mounted or unmounted are displayed as "unformatted".

(2) For whatever reason, unRAID created '.reiserfs_priv' files during the 4.3 -> 4.5 upgrade, although I never even touched the "Active Directory" security mode.

(3) Mounting one of the two harddisks with 0 bytes free failed.

 

I still find it a plausible explanation that the non-mountability was caused by the harddisk being full and unRAID failing to create that '.reiserfs_priv' file. Maybe you could try to reproduce that specific situation, just to make sure.

 

Anyway, for me the problem is gone, so I'm happy enough.


Just upgraded from 4.4.2.

 

Seems to have issues with cache drive speed:

 

- using 4.4.2 write to cache drive via user share was ~30MB/sec (still too low for my full satisfaction)

- using 4.5-beta6 write to cache drive via user share is between 9-14MB/sec

   However, if I try to write to cache drive directly (not via user share) I can reach the level I had with 4.4.2 at ~30MB/sec

 

Is it a bug, or is the problem on my side?

 

Thank you for your support in advance!

 

After reading this, I thought I would try it myself.  If I write to \\tower\movies (using cache) I get 18-20MB/s.  If I write the same file to \\tower\cache\movies, I get 55-60MB/s.

 

 

 

I realized I had backups of 4.4.2 on my flash drive, so I renamed the files and rebooted.  I get 46-50 MB/s to \\tower\movies and 55-60 MB/s to \\tower\cache\movies.  So, something has definitely changed.
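For anyone wanting to repeat this comparison, here is a rough sketch of a single-file write test using dd. The target path and sizes are arbitrary; on a Linux client the share (e.g. \\tower\movies) would need to be mounted locally first:

```shell
# Rough single-file write benchmark: write zeros and let dd report the
# throughput on its last output line. TARGET is wherever you want to test,
# e.g. a locally mounted share or /mnt/cache/movies on the server itself.
TARGET=${TARGET:-/tmp}
dd if=/dev/zero of="$TARGET/speedtest.bin" bs=1M count=16 conv=fsync 2>&1 | tail -1
rm -f "$TARGET/speedtest.bin"     # clean up the test file
```

conv=fsync forces the data to disk before dd reports, so cached writes don't inflate the number.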

 

Please try this experiment & let me know the result.  In 4.5-betaX there is a config parameter on the Settings page, under 'Disk settings' called 'Force NCQ disabled'.  Please set this to 'No' and let me know if there is any difference in cache write speed.

 

Unfortunately, that did not help.  I changed it to "No", rebooted and tried it...still about 20MB/s. 

 

Ok, thanks - it's either the linux kernel or new Samba release causing the slowdown... I'll figure out which it is and try to get a fix in for next release.

 

Just to confirm this, it didn't solve the issue for me either. Thank you for taking care.

 

 

same here.

 

writes of 60MB/s to cache drive, but only 10-15 on a share.

 

 

ET


I have the same issues as everyone else with the cache drive writes.  However, I've also noticed another issue going back to beta 3.  I thought it might have been due to a Windows update, but I was encountering stutters during video playback.  The format of the video didn't matter... it occurred with mkv, mp4, avi, etc.  The audio would always be fine, but every so often the video would just pause for a second or two and then fast forward to catch up with the audio.  It would only happen once or twice during an entire movie, but it was still annoying.  I kept hoping it would go away with each beta release, but it never did.

 

Anyway, I reverted back to 4.4.2 final late last week and have not encountered a single instance of the skipping video since.  I had never encountered this previously and I've been solely on unRAID for about a year now.


bjp999 recommended that I post the problem here about getting error messages with more than 15 drives (the 1st post is in the software area). Here is the error I get in the syslog:

 

May 15 07:13:00 Tower2 kernel: BUG: unable to handle kernel paging request at 6d614e8e
May 15 07:13:00 Tower2 kernel: IP: [<f82e525c>] xor_block+0x76/0x84 [md_mod]
May 15 07:13:00 Tower2 kernel: *pdpt = 0000000002de0001 *pde = 0000000000000000
May 15 07:13:00 Tower2 kernel: Oops: 0000 [#1] SMP
May 15 07:13:00 Tower2 kernel: last sysfs file: /sys/devices/pci0000:00/0000:00:1c.3/0000:03:00.0/host0/target0:4:0/0:4:0:0/block/sda/stat
May 15 07:13:00 Tower2 kernel: Modules linked in: md_mod ata_piix sata_promise e1000 sata_sil24 libata
May 15 07:13:00 Tower2 kernel:
May 15 07:13:00 Tower2 kernel: Pid: 1906, comm: unraidd Not tainted (2.6.29.1-unRAID #2)
May 15 07:13:00 Tower2 kernel: EIP: 0060:[<f82e525c>] EFLAGS: 00010202 CPU: 0
May 15 07:13:00 Tower2 kernel: EIP is at xor_block+0x76/0x84 [md_mod]
May 15 07:13:00 Tower2 kernel: EAX: 00001000 EBX: 6d614e76 ECX: c2890000 EDX: c28fb000
May 15 07:13:00 Tower2 kernel: ESI: c2893000 EDI: c28fb000 EBP: f63cbefc ESP: f63cbedc
May 15 07:13:00 Tower2 kernel:  DS: 007b ES: 007b FS: 00d8 GS: 0000 SS: 0068
May 15 07:13:00 Tower2 kernel: Process unraidd (pid: 1906, ti=f63ca000 task=f6fff740 task.ti=f63ca000)
May 15 07:13:00 Tower2 kernel: Stack:
May 15 07:13:00 Tower2 kernel:  c2892000 c2893000 c2868000 6d614e76 00001000 00000001 c28f11e0 c28f13a0
May 15 07:13:00 Tower2 kernel:  f63cbf3c f82e7474 00000001 00000005 00000011 00001000 00001000 f5aab000
May 15 07:13:00 Tower2 kernel:  c28fb000 c2890000 c2892000 c2893000 c2868000 c28f1910 00000000 c28f11e0
May 15 07:13:00 Tower2 kernel: Call Trace:
May 15 07:13:00 Tower2 kernel:  [<f82e7474>] ? compute_parity+0x101/0x2cc [md_mod]
May 15 07:13:00 Tower2 kernel:  [<f82e7ff0>] ? handle_stripe+0x8cc/0xc5a [md_mod]
May 15 07:13:00 Tower2 kernel:  [<c0344f57>] ? schedule+0x5e3/0x63c
May 15 07:13:00 Tower2 kernel:  [<f82e8804>] ? unraidd+0x9e/0xbc [md_mod]
May 15 07:13:00 Tower2 kernel:  [<f82e8766>] ? unraidd+0x0/0xbc [md_mod]
May 15 07:13:00 Tower2 kernel:  [<c01301bc>] ? kthread+0x3b/0x63
May 15 07:13:00 Tower2 kernel:  [<c0130181>] ? kthread+0x0/0x63
May 15 07:13:00 Tower2 kernel:  [<c01035db>] ? kernel_thread_helper+0x7/0x10
May 15 07:13:00 Tower2 kernel: Code: f8 8b 71 0c 89 45 ec 75 13 56 8b 45 f0 89 d1 53 8b 5d ec 89 fa ff 53 14 5a 59 eb 15 ff 71 10 89 d1 8b 45 f0 89 fa 56 53 8b 5d ec <ff> 53 18 83 c4 0c 8d 65 f4 5b 5e 5f 5d c3 55 89 e5 57 31 ff 56
May 15 07:13:00 Tower2 kernel: EIP: [<f82e525c>] xor_block+0x76/0x84 [md_mod] SS:ESP 0068:f63cbedc
May 15 07:13:00 Tower2 kernel: ---[ end trace 6c2b49c6389e0999 ]---

 

This error occurs when adding a Western Digital WD10EACS or WD10EAVS as the 16th drive, and the main web page hangs on "mounting..." However, adding a 300GB or 500GB Maxtor drive works fine with no errors. Any ideas? Thanks in advance.


The post above is referring to this thread, which includes the following important info:

The first line in the error suggests you might have some corruption with the file-system on disk12.  You might try a reiserfsck check of it.

Joe, Thanks for the suggestion.

I ran reiserfsck and it found no corruptions on disk12. Putting the array back to 15 disks eliminates the error. It only occurs when I try to expand the system past 15 drives.

 

However, the post above says that adding a Maxtor as the 16th drive works fine, a fact that significantly alters the problem analysis and is NOT found in the other thread.

 

[ALR: thank you for posting here, as I had thought to post a link here to that thread, but had not gotten around to it.]


RobJ

I came across this while experimenting with my 2nd server over the last couple of days. Since it has smaller drives, the parity check doesn't take as long. I might try a 750GB drive next to see what happens. Strange problem....


More spin down weirdness ...

 

In testing out 'cache_dirs' by Joe I've found a few strange behaviors in spinning drives up/down.

 

If some drives are spun up and I click [Spin Down], then the remaining drives are spun up first, and then all are spun down except the parity drive. Re-clicking [Spin Down] has no effect after that.

 

If I click [Spin Up] first and then click [Spin Down], it seems to work correctly.

 

I can confirm at least part of this; I did not try spinning up then down.

 

I had 5 or 6 drives spun up and was not expecting further disk activity, so I clicked the Spin Down button, and watched the other drives spin up first, and then most of the drives spin down.  Interestingly, 2 of the drives that were previously not spinning were left spun up!  They probably received their spin down command while they were still spinning up.  I too am running cache_dirs, although I don't know of any related cause.  I suspect it is the preceding sync (a Good Thing) that is causing the spin ups.  For now, it is just as easy to use MyMain to individually spin a few drives down.

 

The Spin Down button does insist on spinning all other drives up first, and then some of them often stay spun up.  So each time I click it, I may end up with a different set of drives spun up, until I finally get them all spun down.  The most reliable method to more quickly spin them all down is to spin them all up first, then spin them all down.

 

Spin down problem #2:  I am getting spurious spin down commands, showing in my syslogs, usually upon starting a file operation to a drive that has not been accessed recently, and usually (but not always) is already spun down.  At least in some cases, this delays the initiation of the operation.  I now have seen at least 5 of these, a spin down command in the syslog at the exact moment it was supposed to spin up for access.

 

I just had another 'spurious' spin down.

May 20 16:33:58 JacoBack emhttp: shcmd (89): /usr/sbin/hdparm -y /dev/sdg >/dev/null
May 20 16:40:46 JacoBack emhttp: shcmd (90): /usr/sbin/hdparm -y /dev/sdg >/dev/null

I had started a copy of 3 files (12 gigabytes) from Disk 5 (sdg), walked away to do something else, and came back too late: it had spun down at 16:33:58.  I then marked the same 3 files and deleted them at 16:40:46, at which point sdg received another spin down command; then Disk 5 and the parity drive spun up and performed the delete.  Yesterday, in a similar operation with all drives spun down, I selected some files to delete, and it was the parity drive instead that received the spurious spin down command.
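To collect all such events from a saved syslog at once, a small filter helps. This sketch assumes the log format matches the lines quoted above; the here-doc holds sample lines and would be replaced by reading the real syslog:

```shell
# Print the timestamp and target device of every spin-down command
# (emhttp issues them via 'hdparm -y'). Replace the here-doc with
# '< /var/log/syslog' on a live server.
awk '/hdparm -y/ { print $1, $2, $3, $(NF-1) }' <<'EOF'
May 20 16:33:58 JacoBack emhttp: shcmd (89): /usr/sbin/hdparm -y /dev/sdg >/dev/null
May 20 16:40:46 JacoBack emhttp: shcmd (90): /usr/sbin/hdparm -y /dev/sdg >/dev/null
EOF
```

Lining the output up against the times you started file operations makes a spurious spin down easy to spot.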


The most reliable method to more quickly spin them all down is to spin them all up first, then spin them all down.

 

... or use the unmenu / myMain spindown command.  It does not spin up drives first and works almost instantly.

- Fix ntp startup problem.

 

At first, this was still not working for me.  Then I discovered that the /etc/resolv.conf file I had been adding my nameserver to had been overwritten with the single line, "# Generated entries:".  I found rc.inet1.conf, which apparently now tries to generate a resolv.conf but expects to find DNS servers configured, and mine weren't; they were all empty.  Then after a second reboot, I discovered that the first DNS server had somehow been filled in with my nameserver, so I rebooted again, and sure enough, my NTP is now working!

 

One minor item: when I checked the DNS server settings one last time, the second DNS server had also been filled in, with a second copy of the same nameserver IP, which is perhaps unnecessary.
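For reference, a hand-maintained resolv.conf boils down to one nameserver line per DNS server. The address below is a placeholder, and this sketch writes the example to /tmp so it cannot clobber a live /etc/resolv.conf:

```shell
# Example resolv.conf contents; 192.168.1.1 is a placeholder address.
# Note that unRAID's rc.inet1.conf may regenerate the real file at boot,
# as described above, wiping any hand-added entries.
cat > /tmp/resolv.conf.example <<'EOF'
# Generated entries:
nameserver 192.168.1.1
EOF
```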


Tested the power button this afternoon, when a thunderstorm began popping off quite nearby, and I have to say: outstanding job!  I was expecting to wait for drives to be spun up first, before terminating, and/or transactions to be replayed and a parity check to start on the next boot, but it actually behaved exactly the way I want it to, when I want to shut it down quickly.  I actually would not have minded a few transactions being replayed, but there were none, and I would not have minded a parity check started, but it didn't.  I'm really happy it does not bother spinning drives up either, but quickly and seemingly safely shuts down the array and terminates.


I had a quick test to see if writing to the disk shares was faster than the user shares, and indeed it was.

 

I was regularly getting over 20MB/s using the disk shares and never more than 10MB/s using the user shares....

 

Justin


I'm in quite big trouble with my 4.5-beta6 unRAID array:

 

One harddisk has failed. So I've done exactly what the documentation says. But now "http://tower" is extremely slow and all harddisks (old + new) show as "unformatted". The old harddisks are all "green", the new harddisk is "orange".

 

Estimated speed:	24	KB/sec
Estimated finish:	654760.5	minutes

 

What can I do? Is this going to repair itself?


DO NOT PRESS THE FORMAT BUTTON.

 

Capture and post a syslog.

 

Shutdown your server cleanly (using the Web GUI or Powerdown script)

 

Reboot the server

 

See if everything is back to normal.  If not, capture and post another syslog.


DO NOT PRESS THE FORMAT BUTTON.

We cannot stress this enough.  The drives are incorrectly reported as "unformatted."  Do NOT press the format button: pressing it tells the array to format those drives, wiping data you do not intend to lose.

Capture and post a syslog.

On 4.5-beta6, you can type this into your browser:

http://tower/log/syslog

Then, you can save the page as a text file and attach it to your next post.  Or from your telnet prompt, you can follow the directions in the wiki here:

http://lime-technology.com/wiki/index.php/Troubleshooting#Capturing_your_syslog

Shutdown your server cleanly (using the Web GUI or Powerdown script)

 

Reboot the server

 

See if everything is back to normal.  If not, capture and post another syslog.

Good instructions.

 

Usually, when drives show as unformatted (and they were previously part of your array) it is because they were not mounted at the time the display looked for them.

 

Have you tried pressing the "refresh" button in your browser?  It could be you caught the status as it was starting to mount the drives.  A "refreshed" look at the status may show a different picture.

 

Also, you said "One harddisk has failed. So I've done exactly what the documentation says."

Since there is a lot of documentation, and some of it is not easy to understand... could you tell us exactly what you did?  This is critical to understanding what has happened.

 

Also, if you have ANY add-ons, or any scripts running, or anything special about your array, please let us know.  Were you logged in via telnet when you restarted the array?  Had you "changed directory" to any of the disks before attempting to stop the array and seeing everything "unformatted"?

 

Did you initially see "unformatted" or was that when you attempted to stop the array?

 

Joe L.


Capture and post a syslog.

See:

 

http://madshi.net/syslog1.txt  (too big to be attached)

 

I've also attached my "go" script.

 

Have you tried to press the "refresh" button on your browser.

I had tried that; no difference. After a few hours I checked back, and now the array seems to be started, at least I have the option to stop it. But all harddisks still show as unformatted. The old disks are all "green"; the new one is now "red".

 

Shutdown your server cleanly (using the Web GUI or Powerdown script)

 

Reboot the server

 

See if everything is back to normal.  If not, capture and post another syslog.

After a reboot the array starts automatically and the "new" disk is shown "red". See attachment "syslog2.txt".

 

I'm beginning to think that maybe the controller for "disk 8" (the one where the failed harddisk was attached) has gone bad, and maybe the old disk hasn't failed at all?

 

Since there is a lot of documentation, and some of it is not easy to understand... could you tell us exactly what you did.

1. Replaced failed harddisk.

2. Start server -> array is in stopped state.

3. Saved disk.cfg and super.dat to my PC.

4. Ticked the "I'm sure" checkbox, and pressed "Start will bring the array on-line, start Data-Rebuild, and then expand the file system."

5. Got what I described in my previous post.

 

Also, if you have ANY add-ons, or any scripts running, or anything special about your array, please let us know.

I've a small adjustment in the go script, but nothing special otherwise. No plugins, no scripts other than the go tweak.

 

Were you logged in via telnet when you restarted the array? Had you "changed directory" to any of the disks before attempting to stop the array and seeing everything "unformatted"?

No and no. After having pressed the "start Data-Rebuild" button, however, I was not patient enough to wait until I got an updated status page. After about 10-15 seconds nothing had happened, so I pressed F5 to refresh the page. It took another 10-15 seconds, and then finally I saw the unformatted disks etc...

 

Did you initially see "unformatted" or was that when you attempted to stop the array?

I never tried to stop the array before writing my previous post. Now I've done that, to reboot the server (as you suggested), and the new disk showed as "not installed" with a red LED. The old disks are all green and properly mounted now. So basically I have the same situation as before replacing the failed harddisk.

 

Thanks for your help!!


I think I need to ask Tom if he could add the drive status to the import list in the syslog - because I'm looking here at a perfect syslog, with all 11 drives identified, assigned, mounted, with all of their Reiser file systems mounted and their transaction logs checked without any issues.  If you had not said otherwise, I would have to say there are 11 green balls visible.  The recovery thread even woke up and checked, and found nothing to do!  If it's true that this is a replacement Disk 8, then right now, it looks to the system like a good data disk.  You are sure it did not have time to fully rebuild?  If not, then you may have to un-assign it, reboot, and re-assign it, then start another rebuild.

 

An off-topic comment:  your go script ends with a single find command.  You may want to read the Keep directory entries cached section of the Improving unRAID Performance page, and in particular, this thread.
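For readers who haven't seen cache_dirs, the core idea can be sketched in a few lines. This is an illustration only, not Joe L.'s actual script, and the helper name and loop interval are made up:

```shell
# Re-walk a directory tree so its entries land in the kernel's dentry
# cache; once cached, browsing a share's folders needn't spin drives up.
warm_cache() {
    find "$1" -type d 2>/dev/null | wc -l    # touch every directory entry
}
# A real script would loop forever over the data disks, e.g.:
#   while true; do
#       for d in /mnt/disk*; do warm_cache "$d" >/dev/null; done
#       sleep 10
#   done
```

A single find at boot (as in the go script above) warms the cache once; the point of cache_dirs is to keep repeating it so the entries never age out.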


I think I need to ask Tom if he could add the drive status to the import list in the syslog - because I'm looking here at a perfect syslog, with all 11 drives identified, assigned, mounted, with all of their Reiser file systems mounted and their transaction logs checked without any issues.  If you had not said otherwise, I would have to say there are 11 green balls visible.  The recovery thread even woke up and checked, and found nothing to do!  If it's true that this is a replacement Disk 8, then right now, it looks to the system like a good data disk.  You are sure it did not have time to fully rebuild?

Yes, I'm sure. And Disk 8 definitely has a red LED. And if I stop the array, Disk 8 is listed as "not installed". However, before I stop the array, Disk 8 is listed with the correct model name and serial number.

 

But please check out syslog1. There are loads of errors listed in there. Only syslog2 looks clean to me.

 

If not, then you may have to un-assign it, reboot, and re-assign it, then start another rebuild.

Will give that a try later.

 

An off-topic comment:  your go script ends with a single find command.  You may want to read the Keep directory entries cached section of the Improving unRAID Performance page, and in particular, this thread.

Thanks! Sounds like a very useful improvement. Will definitely look at this, once the disk8 problem is solved.

This topic is now closed to further replies.