Auggie

Members · Posts: 387 · Days Won: 1
Everything posted by Auggie

  1. I've been running the v6 beta on my backup unRAID for a while now but hadn't upgraded to the newest releases until recently, so I just saw the new interface changes and I'm wonderfully surprised and impressed! So much more information about the server is now available in the GUI, though I'm not yet decided whether I like the way "Array Operation" has been moved to its own tab. Still, it sure is much more informative than v5, and since I have had zero issues through the v6 beta releases on my backup unRAID, I'm wondering if everyone else has also experienced reliable operation; enough so that it would be safe to upgrade my media unRAID to version 6? EDIT: I also just discovered that the ReiserFS drive format is being discontinued and unRAID is switching to XFS as the default format. Since I'm gradually swapping out to 6TB drives, I'm curious whether it's better to go to XFS formatting with version 6, further adding impetus to upgrade my media server to v6.
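     A minimal sketch of how a ReiserFS-to-XFS migration is commonly staged on a v6 array, assuming disk13 is an empty drive already formatted as XFS, disk5 is an existing ReiserFS disk, and rsync is available on the server (all placeholders and assumptions, not details from this post):

        # Confirm which filesystem each array disk is currently using (assumed mount points)
        mount | grep '/mnt/disk'

        # Copy the ReiserFS disk's contents onto the XFS-formatted disk,
        # preserving permissions and timestamps; re-run to pick up anything missed.
        rsync -av /mnt/disk5/ /mnt/disk13/

        # Spot-check used space on both disks before repurposing the old one.
        df -h /mnt/disk5 /mnt/disk13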
  2. So far, no good:

        root@UnRAID:~# reiserfsck /dev/md13
        reiserfsck 3.6.24
        Will read-only check consistency of the filesystem on /dev/md13
        Will put log info to 'stdout'
        Do you want to run this program?[N/Yes] (note need to type Yes if you do):Yes
        ########### reiserfsck --check started at Tue Jan 20 07:01:39 2015 ###########
        Replaying journal: Done.
        Reiserfs journal '/dev/md13' in blocks [18..8211]: 0 transactions replayed
        Checking internal tree.. Bad root block 0. (--rebuild-tree did not complete)
        Aborted (core dumped)

     I'm hoping it just spit out the results of the previous reiserfsck run, which ended in a complete lockup of the computer (something I'd never experienced before). Running it a second time in hopes it will initiate a brand-new analysis and subsequent recovery recommendations...
     -- UPDATE: Nope. Same result, and the process aborted. So basically all data is now gone from the drive.
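     For reference, when --check dies on a bad root block like this, the usual next steps with reiserfsprogs are rebuilding the superblock and then rebuilding the tree with a whole-partition scan. This is only a sketch of that sequence, assuming the underlying drive is still readable; both steps rewrite metadata, so the disk should be imaged (e.g., with dd_rescue/ddrescue) before trying them:

        # Rebuild the superblock first if it is suspect (interactive; read the prompts carefully).
        reiserfsck --rebuild-sb /dev/md13

        # Then rebuild the internal tree, scanning the whole partition for surviving leaves.
        reiserfsck --rebuild-tree --scan-whole-partition /dev/md13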
  3. Well, the first problem I encountered is that unRAID no longer sees the drive as part of the array ("not installed"), though it acknowledges its presence as "unformatted". I'm starting the reiserfsck "check disk" option, but I'm afraid that if unRAID does not treat it as part of the array, any tree rebuilding may not be reflected on the parity drive (if needed). Maybe any files will still be recoverable and viewable, but outside of the array, requiring that I manually transfer them to a new replacement drive added to the array itself (taking up slot 13, which the suspect drive previously held).
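     One low-risk way to see whether anything is still recoverable before attempting any rebuild is a read-only mount of the device. A sketch, assuming the array is started in maintenance mode so /dev/md13 still maps through parity, and using /mnt/test as a scratch mount point (both assumptions):

        mkdir -p /mnt/test
        mount -t reiserfs -o ro /dev/md13 /mnt/test   # read-only, so nothing is written
        ls /mnt/test                                  # are the top-level folders visible?
        umount /mnt/test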
  4. Hmmm... So the possibility exists that I may still be able to recover at least some of the data off the drive, then. I had just packaged up the drive, but I will reinstall it and initiate another reiserfsck. Hopefully it will complete successfully, but regardless, I will then RMA the drive.
  5. I've been running a pair of 6TB drives (1 parity Seagate Desktop, 1 data WD Red) on a fully loaded 24-drive setup for several months now with no problems. I got a third 6TB WD Red, performed a parity check with no issues, swapped in the new 6TB, and successfully completed a data rebuild/expansion. Then I started to move large files around from drive to drive via telnet command lines, and a completely different drive in the array (a Hitachi Deskstar 4TB) started coughing up errors into the thousands. I stopped the array, performed a SMART test with zero hardware problems, and started up in maintenance mode to perform a reiserfsck, which recommended a --rebuild-tree. I commenced the tree rebuild; after the first pass, the telnet session froze with syslog messages:

        Linux 3.9.11p-unRAID.
        root@UnRAID:~# reiserfsck /dev/md13
        reiserfsck 3.6.24
        Will read-only check consistency of the filesystem on /dev/md13
        Will put log info to 'stdout'
        Do you want to run this program?[N/Yes] (note need to type Yes if you do):Yes
        ########### reiserfsck --check started at Mon Jan 19 15:10:20 2015 ###########
        Replaying journal: Done.
        Reiserfs journal '/dev/md13' in blocks [18..8211]: 0 transactions replayed
        Checking internal tree..
        block 901775361: The level of the node (0) is not correct, (1) expected the problem in the internal node occured (901775361), whole subtree is skipped
        block 901811789: The level of the node (8466) is not correct, (2) expected the problem in the internal node occured (901811789), whole subtree is skipped
        block 955285505: The level of the node (0) is not correct, (1) expected the problem in the internal node occured (955285505), whole subtree is skipped
        block 823409822: The level of the node (59748) is not correct, (1) expected the problem in the internal node occured (823409822), whole subtree is skipped
        block 878530878: The level of the node (8966) is not correct, (1) expected the problem in the internal node occured (878530878), whole subtree is skipped
        block 829670944: The level of the node (18829) is not correct, (2) expected the problem in the internal node occured (829670944), whole subtree is skipped
        block 976390105: The level of the node (56870) is not correct, (2) expected the problem in the internal node occured (976390105), whole subtree is skipped
        block 911754612: The level of the node (36899) is not correct, (1) expected the problem in the internal node occured (911754612), whole subtree is skipped
        block 876981361: The level of the node (36338) is not correct, (1) expected the problem in the internal node occured (876981361), whole subtree is skipped
        block 896106497: The level of the node (0) is not correct, (1) expected the problem in the internal node occured (896106497), whole subtree is skipped
        block 911919319: The level of the node (26663) is not correct, (1) expected the problem in the internal node occured (911919319), whole subtree is skipped
        block 911928423: The level of the node (20605) is not correct, (1) expected the problem in the internal node occured (911928423), whole subtree is skipped
        block 912060140: The level of the node (54955) is not correct, (2) expected the problem in the internal node occured (912060140), whole subtree is skipped
        block 830930955: The level of the node (36553) is not correct, (1) expected the problem in the internal node occured (830930955), whole subtree is skipped
        block 893485059: The level of the node (0) is not correct, (1) expected the problem in the internal node occured (893485059), whole subtree is skipped
        block 923205665: The level of the node (32534) is not correct, (1) expected the problem in the internal node occured (923205665), whole subtree is skipped
        block 953745426: The level of the node (0) is not correct, (1) expected the problem in the internal node occured (953745426), whole subtree is skipped
        block 953768738: The level of the node (5301) is not correct, (2) expected the problem in the internal node occured (953768738), whole subtree is skipped
        block 823409792: The level of the node (45337) is not correct, (1) expected the problem in the internal node occured (823409792), whole subtree is skipped
        block 823409817: The level of the node (34870) is not correct, (1) expected the problem in the internal node occured (823409817), whole subtree is skipped
        block 896106502: The level of the node (51456) is not correct, (1) expected the problem in the internal node occured (896106502), whole subtree is skipped
        block 896113600: The level of the node (18524) is not correct, (1) expected the problem in the internal node occured (896113600), whole subtree is skipped
        block 829600305: The level of the node (23230) is not correct, (1) expected the problem in the internal node occured (829600305), whole subtree is skipped
        block 829600306: The level of the node (29734) is not correct, (2) expected the problem in the internal node occured (829600306), whole subtree is skipped
        block 960669156: The level of the node (65485) is not correct, (1) expected the problem in the internal node occured (960669156), whole subtree is skipped
        block 960669157: The level of the node (44947) is not correct, (2) expected the problem in the internal node occured (960669157), whole subtree is skipped
        block 955285512: The level of the node (0) is not correct, (1) expected the problem in the internal node occured (955285512), whole subtree is skipped
        block 955325781: The level of the node (61706) is not correct, (2) expected the problem in the internal node occured (955325781), whole subtree is skipped
        block 956704846: The level of the node (32477) is not correct, (1) expected the problem in the internal node occured (956704846), whole subtree is skipped
        block 956704854: The level of the node (20546) is not correct, (2) expected the problem in the internal node occured (956704854), whole subtree is skipped
        block 961210160: The level of the node (40442) is not correct, (3) expected the problem in the internal node occured (961210160), whole subtree is skipped
        finished
        Comparing bitmaps..vpf-10640: The on-disk and the correct bitmaps differs.
        Bad nodes were found, Semantic pass skipped
        31 found corruptions can be fixed only when running with --rebuild-tree
        ########### reiserfsck finished at Mon Jan 19 15:59:29 2015 ###########
        root@UnRAID:~# reiserfsck --rebuild-tree /dev/md13
        reiserfsck 3.6.24
        *************************************************************
        ** Do not run the program with --rebuild-tree unless       **
        ** something is broken and MAKE A BACKUP before using it.  **
        ** If you have bad sectors on a drive it is usually a bad  **
        ** idea to continue using it. Then you probably should get **
        ** a working hard drive, copy the file system from the bad **
        ** drive to the good one -- dd_rescue is a good tool for   **
        ** that -- and only then run this program.                 **
        *************************************************************
        Will rebuild the filesystem (/dev/md13) tree
        Will put log info to 'stdout'
        Do you want to run this program?[N/Yes] (note need to type Yes if you do):Yes
        Replaying journal: Done.
        Reiserfs journal '/dev/md13' in blocks [18..8211]: 0 transactions replayed
        ########### reiserfsck --rebuild-tree started at Mon Jan 19 16:07:51 2015 ###########
        Pass 0:
        ####### Pass 0 #######
        Loading on-disk bitmap .. ok, 968504505 blocks marked used
        Skipping 38019 blocks (super block, journal, bitmaps) 968466486 blocks will be read
        0%
        block 143556206: The number of items (644) is incorrect, should be (1) - corrected
        block 143556206: The free space (4105) is incorrect, should be (209) - corrected
        pass0: vpf-10110: block 143556206, item (0): Unknown item type found [2214592900 17828098 0x2040501 (15)] - deleted
        block 144990516: The number of items (643) is incorrect, should be (1) - corrected
        block 144990516: The free space (33792) is incorrect, should be (209) - corrected
        pass0: vpf-10110: block 144990516, item (0): Unknown item type found [42139649 2264924417 0x101ff (15)] - deleted
        block 356483528: The number of items (65412) is incorrect, should be (1) - corrected
        block 356483528: The free space (23895) is incorrect, should be (3793) - corrected
        pass0: vpf-10110: block 356483528, item (0): Unknown item type found [143327325 2231392093 0x75d088b005d57ff (5)] - deleted
        block 434313585: The number of items (65279) is incorrect, should be (1) - corrected
        block 434313585: The free space (65023) is incorrect, should be (3280) - corrected
        pass0: vpf-10110: block 434313585, item (0): Unknown item type found [4261381888 33618687 0x2000a00 (15)] - deleted
        block 434970917: The number of items (29710) is incorrect, should be (1) - corrected
        block 434970917: The free space (62977) is incorrect, should be (2416) - corrected
        pass0: vpf-10110: block 434970917, item (0): Unknown item type found [2906740068 54575346 0x320a83ffb4060 (] - deleted
        block 666515829: The number of items (643) is incorrect, should be (1) - corrected
        block 666515829: The free space (8243) is incorrect, should be (1233) - corrected
        verify_directory_item: block 666515829, item 234980096 506135064 0xf010e18022a050f DIR (3), len 2815, location 1281 entry count 165, fsck need 0, format new: All entries were deleted from the directory
        block 870973441: The number of items (1) is incorrect, should be (0) - corrected
        block 870973441: The free space (0) is incorrect, should be (4072) - corrected
        block 878009699: The free space (61166) is incorrect, should be (0) - corrected
        pass0: vpf-10200: block 878009699, item 0: The item [4093200062 1050670404 0x1ef00002552d87d IND (1)] with wrong offset is deleted
        block 911639975: The number of items (1) is incorrect, should be (0) - corrected
        block 911639975: The free space (0) is incorrect, should be (4072) - corrected
        left 0, 20157 /sec
        33531 directory entries were hashed with "r5" hash.
        "r5" hash is selected
        Flushing..finished
                Read blocks (but not data blocks) 968466486
                        Leaves among those 812392
                                - corrected leaves 69
                                - leaves all contents of which could not be saved and deleted 9
                        pointers in indirect items to wrong area 12884 (zeroed)
                Objectids found 34861
        Pass 1 (will try to insert 812383 leaves):
        ####### Pass 1 #######
        Looking for allocable blocks .. finished
        0% left 809732, 176 /sec

        Message from syslogd@UnRAID at Tue Jan 20 05:31:09 2015 ...
        UnRAID kernel: EIP: [<f84703b0>] mvs_slot_task_free+0xf/0x139 [mvsas] SS:ESP 0068:f7715e38
        Message from syslogd@UnRAID at Tue Jan 20 05:31:09 2015 ...
        UnRAID kernel: Stack:
        Message from syslogd@UnRAID at Tue Jan 20 05:31:09 2015 ...
        UnRAID kernel: Code: 41 10 b9 00 00 02 00 89 04 24 89 d8 ff 96 c0 00 00 00 31 c0 83 c4 34 5b 5e 5f 5d c3 55 89 e5 57 89 c7 56 89 d6 53 89 cb 83 ec 14 <83> 79 08 00 0f 84 18 01 00 00 f6 42 14 05 75 48 8b 49 0c 85 c9
        Message from syslogd@UnRAID at Tue Jan 20 05:31:09 2015 ...
        UnRAID kernel: Call Trace:
        Message from syslogd@UnRAID at Tue Jan 20 05:31:09 2015 ...
        UnRAID kernel: Process scsi_eh_1 (pid: 862, ti=f7714000 task=ed2ab600 task.ti=f7714000)
        Message from syslogd@UnRAID at Tue Jan 20 05:31:09 2015 ...
        UnRAID kernel: Process scsi_eh_9 (pid: 1065, ti=f7732000 task=ed2fdb00 task.ti=f7732000)
        Message from syslogd@UnRAID at Tue Jan 20 05:31:09 2015 ...
        UnRAID kernel: EIP: [<f84703b0>] mvs_slot_task_free+0xf/0x139 [mvsas] SS:ESP 0068:f7733e38
        Message from syslogd@UnRAID at Tue Jan 20 05:31:09 2015 ...
        UnRAID kernel: Code: 41 10 b9 00 00 02 00 89 04 24 89 d8 ff 96 c0 00 00 00 31 c0 83 c4 34 5b 5e 5f 5d c3 55 89 e5 57 89 c7 56 89 d6 53 89 cb 83 ec 14 <83> 79 08 00 0f 84 18 01 00 00 f6 42 14 05 75 48 8b 49 0c 85 c9
        Message from syslogd@UnRAID at Tue Jan 20 05:31:09 2015 ...
        UnRAID kernel: Call Trace:
        Message from syslogd@UnRAID at Tue Jan 20 05:31:09 2015 ...
        UnRAID kernel: Stack:

     So now I'm at a crossroads: do I (a) attempt another reiserfsck, or (b) RMA the drive and rebuild the data onto a replacement drive?
     -- UPDATE: After a forced reset due to unresponsiveness, unRAID reports an unformatted drive in the target slot (sigh)... This will be my very first complete loss of all the data on an unRAID drive since I started with version 4.x five years ago. It's a media server, so I can recover the lost videos (time-consuming and laborious), but what could I have done differently to avoid this? I've performed several reiserfsck --rebuild-tree commands in the past with success: should I have instead installed a brand new drive and rebuilt the data? There were no indications I could find of detected hardware failure of any sort, nor was the drive near or beyond its maximum recommended power-on hours.
     syslog-2015-01-15.txt.zip
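     The reiserfsck banner's own advice is the relevant takeaway here: when a drive may have bad sectors, clone it to a known-good drive before any --rebuild-tree. A sketch of that cloning step using GNU ddrescue, which is not part of stock unRAID and would have to be installed separately; /dev/sdX (failing source), /dev/sdY (good target), and the map file path are placeholders, not devices from this thread:

        # First pass: grab everything that reads cleanly, skipping the slow scraping phase.
        ddrescue -f -n /dev/sdX /dev/sdY /boot/ddrescue.map
        # Second pass: retry the bad areas a few times, resuming from the map file.
        ddrescue -f -r3 /dev/sdX /dev/sdY /boot/ddrescue.map
        # Then run reiserfsck --rebuild-tree against the clone, not the failing disk.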
  6. The SAS2LP in the x4 slot may be overkill, but at only $85 it's not much more of an expense than getting a single Norco SFF-8087 reverse breakout cable at $30 shipped, or two for $60 if I wanted to utilize all six mobo SATA ports. I can always run a SASLP-versus-SAS2LP speed test in the x4 slot to see if there really is no appreciable speed increase.
  7. Yes, that has always been in the back of my mind: 24 drives going through one PCIe x8 HBA/slot versus spread across two PCIe x8 HBAs/slots (16 drives) and one PCIe x4 HBA/slot (8 drives). An RES2SV240 can be had for $245 plus tax (free shipping) from Staples, versus a pair of SAS2LPs for $210 shipped from SuperBiiz. Decisions, decisions... EDIT: Found a 15% SuperBiiz coupon bringing the total shipped to $196, so I went the three-SAS2LP route. I will still eventually get the RES2SV240, but for my other unRAID Atom-based X7SPA-HF mobo since it only has one PCIe slot. I may try a parity check speed test between the three SAS2LPs and a single SAS2LP with the RES2SV240 to get a definitive answer to this question of performance hits between the x4 slot and a SAS expander...
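     A rough back-of-the-envelope check of that x4-versus-x8 question, assuming PCIe 2.0 slots at roughly 500 MB/s of usable bandwidth per lane and roughly 150 MB/s sustained per spinning drive (both are assumptions, not measured figures from this build):

        per_lane=500   # MB/s, approx. usable PCIe 2.0 bandwidth per lane (assumed)
        per_drive=150  # MB/s, approx. sustained throughput per drive (assumed)
        echo "x4 slot ceiling : $((4 * per_lane)) MB/s"    # ~2000 MB/s
        echo "x8 slot ceiling : $((8 * per_lane)) MB/s"    # ~4000 MB/s
        echo "8 drives demand : $((8 * per_drive)) MB/s"   # ~1200 MB/s, fits within x4
        echo "24 drives demand: $((24 * per_drive)) MB/s"  # ~3600 MB/s, close to the x8 ceiling

     On those assumed numbers, eight drives behind the x4 slot shouldn't be the bottleneck during a parity check; the drives themselves would be.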
  8. I found and downloaded the user's manual, and it appears the card is always powered through a Molex connector; from all the photos I've seen, the PCIe edge tabs don't even have any traces on them at all. Anywho, I think I'm going to go with the RES2SV240 simply because it's cheaper!
  9. The Chenbro also doesn't necessarily need to be plugged into a PCIe slot, but unlike the Intel, it doesn't appear to come with a Molex power connector. The Chenbro does have external ports, but I don't need that feature. Since the Intel appears to be cheaper, I think I will go with the RES2SV240 expander. Does it come with a backplane bracket? I don't foresee needing to add any other PCIe cards, so mounting the SAS expander in the x4 slot will ensure it's securely mounted.
  10. Quote: "I have to look into those as I really don't want to spend close to $500 for the LSI, which only allows up to 16 drives, whereas a Chenbro CK22803 offers up to 28 drives and can be had for as low as $300 shipped. I am assuming those SAS expanders you mentioned are natively supported by unRAID? EDIT: From my limited research, it seems the SAS expanders require a "source" SAS port to expand, meaning if my mobo only has one PCIe slot, these PCIe-based SAS expanders cannot be used as they do not themselves provide a "standalone" SAS port?"
      Quote: "Correct, the SAS expanders indeed need an "input" SAS port, which then gets expanded. The motherboard I use only has one PCIe slot, and that is the slot I have my M1015 in. However, since the expander does nothing but multiply the one port, it requires only power, meaning it doesn't use the PCIe slot for anything but power. If you only have one slot like me, you can put your SAS card in the available slot, then buy a (cheap) separate PCIe x8 power adapter. I use a PCIe x2-to-x8 riser from eBay that I believe was under $10."
      Can you provide some links to PCIe riser cards similar to yours? I'm not familiar with them, so I'm not sure what to look for and get.
  11. I just found out about SAS expanders and am now looking at a Chenbro CK22803 28-port expander. I'm thinking this will be much more efficient and faster alongside a single SAS2LP, versus going to three SAS2LPs where one will be in an x4 slot, yes? I can even plug the Chenbro into the x4 slot since the card only uses the PCI bus for power, correct? I'm really loving the noticeable speed improvements with the one SAS2LP and I'm about to pull the trigger on two more SAS2LPs, but if for $100 more the Chenbro will be much faster than having an HBA in the x4 slot, I'd rather go the Chenbro route.
  12. I have to look into those as I really don't want to spend close to $500 for the LSI, which only allows up to 16 drives, whereas a Chenbro CK22803 offers up to 28 drives and can be had for as low as $300 shipped. I am assuming those SAS expanders you mentioned are natively supported by unRAID? EDIT: From my limited research, it seems the SAS expanders require a "source" SAS port to expand, meaning if my mobo only has one PCIe slot, these PCIe-based SAS expanders cannot be used as they do not themselves provide a "standalone" SAS port?
  13. For my media server, the "desktop" class is suitable for me. Though I would prefer a WD Red 6TB over the Seagate desktop-class 6TB, I just don't see WD (the "brand") releasing anything close to a 6TB this year since they can't even get a 5TB out the door yet, and I'm ready to buy now at the $300 price point for entry into the 6TB world: my server currently has fifteen 4TB and nine 3TB drives, with data not yet on the server stored across four 2TB drives due to lack of free space. Of course, I would need to buy at least two 6TBs to initially realize the tremendous capacity of 6TB data drives...
  14. Uh, there appears to be a lot of back and forth on proper scientific notation, but let's get back to the meat of the OP: I am surprised and excited that such relatively low-cost 6TB drives are available NOW, with Newegg listing them at $299, free shipping, and in stock!
      My previous experiences with Seagate have been horrible; I experienced close to 100% failure rates back when I was a heavy user of them in the PATA days. But at $299 for 6TB, I'm sorely tempted to jump on one, since the Hitachi 6TBs at $695 shipped from Memory Express are more than double the price. Can anyone share their current experiences with Seagate SATA drives in general, good or bad?
      I have been a dedicated WD SATA user after my Seagate and Hitachi PATA experiences, especially because the WD Greens ran much cooler and quieter with lower power consumption. But ever since WD bought Hitachi, they've been seriously late to the table with new offerings, and I'm still waiting for their 5TB drives to hit the market. In addition, even though I've never received a WD DOA, I'm now experiencing close to 50% failure rates (mostly rising bad-block reallocations, but a few I/O errors) on 20 of my 2TB drives as they reach their end-of-warranty periods and beyond, plus two of my newer WD 3TB drives. So even though I have been able to get most of them replaced under warranty, I'm no longer confident in the long-term reliability of my existing crop of WD 3TB and handful of WD 4TB drives. Most of my 4TB drives are Hitachi; my very first one was DOA, another showed signs of pending failure several months later, and both were replaced under warranty. Given my current experiences with WD and Hitachi SATA drives, I'm willing to give Seagate another chance...
  15. I couldn't find definitive information on this, but is the LSI Logic 9201-16i (LSI00244) fully compatible with unRAID out of the box (i.e., no additional drivers need to be manually installed)? It seems to be the only 16-port SAS/SATA HBA I've come across that may be usable with unRAID. Expensive relative to the "standard" SuperMicros, but for mobos with limited on-board SATA ports and PCIe slots, it's the only viable option to maximize a "Pro" registration.
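     A quick way to confirm an LSI SAS2-family HBA like this has been picked up by the kernel's bundled mpt2sas driver, assuming a console/telnet session on the server and that the unRAID kernel includes mpt2sas (it does for the commonly used M1015 mentioned elsewhere in these posts):

        lspci | grep -i lsi          # does the controller show up on the PCIe bus?
        dmesg | grep -i mpt2sas      # did the in-kernel driver claim it?
        cat /proc/partitions         # are the attached drives enumerated?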
  16. I still have one of the 3TB drives to copy over, untouched. I was about to copy its contents to a 2TB drive after the current file transfer concludes, but I'm willing to plug the 3TB back into the unRAID box for the sole purpose of recreating the errors in a syslog for public review. Right now the server is at 9 drives and humming along, so I don't presently believe the issue is with the hardware: the same parity and data drives that initially threw errors have been up and running with a file transfer underway for the past day or so, and, as I said, I was able to copy the first 3TB drive's contents to another drive just fine inside my OS X box, with subsequent disk checks always clean (except for the notice of those initial pending/reassigned bad blocks). During my last test, the system would always vomit errors on the parity drive within a few moments of initiating a "cp -pR" operation, no matter what I did to the hardware.
  17. Specs already posted, but I forgot to include the SM AOC-SASLP-MV8. Syslog is a no-go, but when I was reviewing it under unMENU, all the "red" errors were basically listed as I/O errors on the various files that unRAID choked on (no other descriptive errors were present); first on the source drive, then on the target drive, and finally concentrating solely on the parity drive during the later phases of my testing, with generic I/O errors listed but no errors posted on either the source or target drives.
  18. Quote: "As I mentioned, I replaced both the HBA card and cables, but the issue remains whenever there is a 3TB drive on the system with a 2TB parity drive. I haven't checked the PSU, but it worked just fine powering 14 drives (three 4TB, six 3TB, and five 2TB). The system had seven 2TB drives in the array plus the 3TB source drive when the errors occurred (and two different 3TB drives were involved). It now has eight 2TB drives and another 2TB source drive being copied over as I write this. All drives are WD Greens on a SuperMicro X7SPA-HF mobo, 4GB G.Skill RAM, Cooler Master Silent Pro Gold M800."
      Quote: "There you go..... You can not have a data drive that is larger than the parity drive."
      I concur, though someone else does not. Well, to clarify, I concur that you cannot perform file transfer operations to the array if the source drive is larger than the parity drive and the source drive is NOT part of the array (so technically, it is not a "data" drive within the array). I just want to be clear on the terminology used.
  19. As I mentioned, I replaced both the HBA card and cables, but the issue remains whenever there is a 3TB drive on the system with a 2TB parity drive. I haven't checked the PSU, but it worked just fine powering 14 drives (three 4TB, six 3TB, and five 2TB). The system had seven 2TB drives in the array plus the 3TB source drive when the errors occurred (and two different 3TB drives were involved). It now has eight 2TB drives and another 2TB source drive being copied over as I write this. All drives are WD Greens on a SuperMicro X7SPA-HF mobo, 4GB G.Skill RAM, Cooler Master Silent Pro Gold M800.
  20. I came across a horrible defect when I installed a drive (3TB) larger than the parity drive (2TB) for the sole purpose of manually copying its entire contents (1.5TB) onto an empty drive (2TB) within the array. I had just completed this process successfully for two other 2TB HFS+ drives, which are now incorporated into the array. I initiated all the commands via the console, and unRAID kept aborting with numerous I/O errors. At first I thought it might be due to non-printing characters in the file names, because the error output always had a solid square character appended to the file names listed with I/O errors (e.g. "This File NameBLOCKCHARACTERHERE"). I renamed all the files listed, but the errors persisted. There were also OS X-specific "invisible" files listed, so I cleaned those off the drive, to no avail, even though I had no problems copying those types of files during the previous 2TB drive copies.
      I tried copying individual folders but kept getting errors, and then all of a sudden the target drive in the array started showing I/O errors, and then the parity drive too. I was getting really alarmed at this point. I started a brand new config without the suspect target drive, rebuilt parity, expanded the array to include the suspect target drive, cleared and formatted it, then ran a parity check. Everything completed with zero errors. Cautiously, I restarted the manual copy but began getting errors on the parity drive again. I changed ports, HBA cards (all from another unRAID box with 3/4TB drives attached), and SAS-to-SATA breakout cables, but the parity errors persisted. I again rebuilt parity and ran a parity check, and both completed successfully. I then performed directory and disk checks on the 3TB source drive, but those came up clean. I then ran a SMART check and saw numerous "pending bad blocks" on what had previously scanned as a healthy drive with no bad blocks, so I believed the source drive was going bad.
      I had another 3TB drive whose data I wanted copied over, so I decided to work on that instead. Before installing it, I did directory, disk, and SMART checks, and it came out clean (no bad blocks whatsoever), but once it was installed in the unRAID box, attempts to initiate the copy again produced I/O errors on the parity drive. I then began to suspect it had something to do with the fact that the source was larger than the parity drive. I also rechecked the first 3TB source drive's integrity and, to my dismay, saw that it now had several new "bad sectors pending". To verify my suspicions, I copied all the contents (1.5TB) of the first 3TB drive onto a 2TB drive (not a single hiccup in the OS X box), put that in the unRAID box, and was able to copy all contents to the array with zero errors.
      After almost a week of this, my conclusion is that unRAID somehow attempts some sort of parity syncing with any drive from which files are being transferred, even when that drive is not incorporated into the array itself; if that drive is larger than the parity drive, it errors out and may mark the parity drive, target drive, and/or source drive as problematic, perhaps leading the drive to incorrectly relate the I/O errors to bad sectors and reassign them to spare blocks. This Atom system was previously my media server with a mix of 2, 3, and 4TB drives, so all of its components had worked flawlessly before; I only moved the hard drives to the new server.
      In the past, I've performed many manual file transfers via the console successfully under version 5, but the source drives were never larger than the parity drive; this is the first time I attempted file transfers from a larger source drive, so I'm not sure whether this bug existed prior to version 6. But the end result is that I have two 3TB drives now showing up in SMART utilities as potentially failing due to reassigned or pending bad blocks that I suspect were incorrectly marked, directly or indirectly, because of this bug.
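     A sketch of how this kind of console copy is often done in a more restartable way, using rsync instead of cp so an interrupted run can be resumed and re-verified. The mount point /mnt/source, the device /dev/sdX1, and disk5 as the target are all placeholders, and mounting the source read-only assumes the kernel can read its filesystem (the HFS+ drives mentioned here would need hfsplus support):

        mkdir -p /mnt/source
        mount -o ro /dev/sdX1 /mnt/source              # source disk outside the array, read-only
        rsync -av --progress /mnt/source/ /mnt/disk5/  # copy onto one array data disk
        rsync -av --progress /mnt/source/ /mnt/disk5/  # re-run; a clean second pass copies nothing new
        umount /mnt/source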
  21. I've been trying to nail down a pattern. These sluggishness episodes affect both reads (extremely long delays in accessing a drive or a user share spanning all drives, which may result in all server shares being summarily unmounted by OS X due to unresponsiveness, even though I can usually remount them just fine) and writes (extremely slow writes resulting in file transfer failures, but never leading to OS X unmounting the server shares). If I suspect a specific drive and have a spare, I replace the drive, or I try switching it to a different SASLP; I've even replaced a SASLP with a brand new one. That may appear to resolve issues specific to that drive/HBA, but overall the sluggishness continues.
      When WD was absent from the 4TB market, I started adding Hitachi Deskstar 5K4000 drives; all drives under 4TB are WD Greens, a mixture of 2 and 3TB. I did not experience performance issues with these same drives when they were installed in the Atom setup, so I don't believe the drive models are the culprit. I expanded the new server with WD 2TB Greens that I had plenty of on hand from when I gradually upgraded the Atom to larger 3TB and then 4TB drives, and I'm now slowly replacing those 2TBs with WD 4TB Greens, delaying as long as I can for the WD 5TBs to be released.
      I was told that unRAID will only recognize one of multiple mobo network interface ports and that it's non-negotiable. How unRAID determines which port is unclear (for mine it's port #2), but it will always be that same port. Regarding switches and cabling: when I replaced the Atom server, I used the same cables and gigabit switches and physically placed the new server in the same spot; as I stated, I had no issues with the Atom server on the exact same network equipment, while the new server has had performance issues from the start. I even have the Atom server attached to the same switch now with no issues. However, I will replace the cable to the new server just in case...
      I will try this. I assume unRAID has sufficient power management features to negate the need for BIOS-managed power management? I would prefer to be as power-efficient as possible, but of course I will disable APM if need be.
      I have been running the new SAS2LP for a week now, ensuring that the parity drive and the most-accessed data drive (containing the YAMJ media library data files) are attached to it, and there is definitely a remarkable speed improvement on drives attached to it. I still see the sluggish performance episodes, but so far they appear to be on drives not attached to the SAS2LP (I access the data drives directly when moving/transferring media files). I will continue observing the performance and, if it continues as I've described, will get a second SAS2LP.
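     One way to separate drive-side sluggishness from network-side sluggishness is to benchmark the disks locally at the console while the shares feel slow. A sketch using hdparm and dd, assuming hdparm is present on the server; /dev/md5 (i.e., disk5) is just a placeholder for whichever array device is under suspicion:

        # Raw sequential read speed of each physical drive (cached and buffered).
        for d in /dev/sd?; do echo "== $d"; hdparm -tT "$d"; done

        # Read a few GB through one array device; slow here points at disk/controller, not network.
        dd if=/dev/md5 of=/dev/null bs=1M count=4096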
  22. Well, isn't that why we are using unRAID, so drive size shouldn't be too much of a concern? Or are you planning to use these larger 5TBs in a non-RAID regular PC/workstation? And what is this "Gary's backup solution"?
  23. Already waiting anxiously. So far, WD has been wildly optimistic in its release schedule for the 5TB. We are already almost midway through April, and so far, nada.
  24. I will first replace a suspect SASLP card with the new SAS2LP card I just ordered. The suspect card never POSTs during boot-up (it never displays its port statuses or the option to enter its setup page), yet in all other respects it seems to work, and all drives attached to it are accessible with no errors, SMART or otherwise. I've pulled all the other SASLPs and tried it in all four different slots, but it never POSTs.
  25. I just came back home and saw that the reiserfs check was clean, so I proceeded to view every individual drive, both via the web GUI and by mounting them on my desktop: no problems. Next, I inspected and mounted every other User share: no problems. Finally, I held my breath and tried the problematic User share: it mounted like a charm. I now believe this is somehow related to my suspicions of hardware performance issues, even though absolutely no errors are ever reported by unRAID or the SMART plug-in. I had just posted a new thread regarding my concerns with the hardware when I experienced this apparently temporary issue with the User share: http://lime-technology.com/forum/index.php?topic=32687.msg300061#msg300061. I am now pursuing a course of action to systematically replace components of the system one by one to track down any potential hardware-related issues.