ednigma

Members
  • Posts: 28
  • Joined
  • Last visited
  • Gender: Undisclosed

ednigma's Achievements

Noob (1/14)

Reputation: 0
  1. @johnnie.black reiserfsck prompted me to rebuild the superblock, so I carefully followed the unRaid FAQ instructions. In the end there was enough data loss that I decided to shrink the array and rebuild parity in order to upgrade to v6, then add disk5 back in and restore from backup. Thanks for your help
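For reference, a minimal sketch of the superblock-rebuild step discussed above, assuming disk5 sits behind the parity-protected device /dev/md5 and its filesystem is not mounted (the device name is an assumption; follow the unRAID FAQ and make sure backups exist before writing anything):

```bash
# Rebuild the ReiserFS superblock only when a prior check explicitly recommends it.
# Answer the prompts about block size / format version carefully.
reiserfsck --rebuild-sb /dev/md5
```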
  2. Thanks johnnie.black, I'll do this and see how much I can recover and compare it to my backup, and then mark as solved. Thanks again
  3. Thanks for the replies. I should not have been tinkering with the array so early in the AM, but the preclear had just finished and I was hoping the rebuild would finish overnight. @johnnie.black I see your point that formatting the disk was where I screwed up and a rebuild won't work, but then you go on to say I could try a rebuild from parity that would restore all precleared data. Could you clarify? How do I force a rebuild to occur? I would like to go through the exercise of running reiserfsck to learn about using it, since I've never had to before. Thanks again..
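A minimal sketch of a read-only reiserfsck pass, for anyone wanting to try the exercise mentioned above; it assumes the filesystem in question is reachable as /dev/md5 and is not mounted (the device name is an assumption):

```bash
# Non-destructive check; it reports problems and suggests a repair option
# (--fix-fixable or --rebuild-tree) without modifying the filesystem.
reiserfsck --check /dev/md5
```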
  4. Hello, I am running version 4.7pro with 6 data drives and a parity drive. I had a disk for which SMART was reporting pending sectors. I copied the data to some free space on another Windows machine and precleared the drive, which cleared the pending sectors (strangely, the reallocated sectors count remained 0). I put the drive back into the array and booted the server; the array started and showed an unformatted disk. I thought to myself that unRaid just saw the same config, so I stopped the array, unassigned the disk, reassigned the disk, and started the array. The status page still showed the disk as unformatted, so I formatted it, and when it finished the array showed the disk as having all its space free, no rebuild. I guess I made a mistake in unassigning and reassigning the disk. Is there a way to force unRaid to rebuild the disk? I was intending to rebuild the disk and compare it to the data I saved before I precleared it. If not, I can copy the data back, but will the shares just automatically connect? For example, I noticed that in a DVD share directory, some DVDs seemed split over different disks. Can I just use, for example, Teracopy to copy all of the data saved on my Win machine to the Disk 5 share? For example, do I copy the DVD subdirectory to Disk5, followed by the TV subdirectory, etc? Will the DVD share directory link back up? Will the TV share? I hope I'm making sense. Of course, I'd rather rebuild the data; I hope there's a way to get unRaid to do it. Thanks
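To the question about shares linking back up: user shares are simply the union of same-named top-level folders across the data disks, so copying a DVD folder onto disk5 makes its contents appear under the DVD share again. A small illustration with assumed paths and hypothetical folder names:

```bash
# Top-level folders with the same name on different disks...
ls /mnt/disk1/DVD   # -> MovieA  MovieB
ls /mnt/disk5/DVD   # -> MovieC  MovieD

# ...are presented as one merged user share
ls /mnt/user/DVD    # -> MovieA  MovieB  MovieC  MovieD
```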
  5. In the past year or so, I have rarely turned on my unRaid server (4.7Pro) - mainly because I have been out of state for extended periods. After a clean parity check, I added a disk and wrote several hundred G of data to the array. Before I left I ran a parity check (NOCORRECT) and it showed parity updated 4 times, which I understand means that the parity verification thread detected 4 parity mismatches but no actual updates occurred. I looked at the SMART reports and only disk5 showed 16 pending sectors, 0 reallocated events, and its short offline test completed without error. There was no time to debug further. Back in town, I reran the parity check (NOCORRECT) and it showed 1 sync error updated, and the syslog window showed handle_stripe read error; disk1 read error. I cancelled the parity check. I checked the SMART report for disk1 and it showed 9 pending and 5 reallocated events, and the short SMART test showed read failure. Disk 5 still showed 16 pending sectors, but its log and short test were clean. Since I didn't have a replacement drive available I couldn't attend to the problem and shut the array down. I finally replaced disk1 and rebuilt the array. Upon completion, I got a message that the last parity check (<1 day ago) updated parity 1 time to address sync errors. Rebuilding a disk only reads parity and the other disks to write the replacement, so the parity drive has still not been updated, right? So where did this parity error come from? Is it from a disk5 read error, and if so, chances are that the rebuilt drive has at least 1 bit in error, right? The syslog doesn't seem to show any errors from the rebuild:
     Jan 4 23:39:58 Tower emhttp: unRAID System Management Utility version 4.7
     Jan 4 23:39:58 Tower emhttp: Copyright (C) 2005-2011, Lime Technology, LLC
     Jan 4 23:39:58 Tower emhttp: Pro key detected, GUID: 05DC-A560-1010-153813190906
     Jan 4 23:39:58 Tower emhttp: shcmd (1): udevadm settle
     Jan 4 23:39:58 Tower emhttp: Device inventory:
     Jan 4 23:39:58 Tower emhttp: pci-0000:00:1f.2-scsi-0:0:0:0 host3 (sdb) Hitachi_HDS723015BLA642_MN1B20F304G19D
     Jan 4 23:39:58 Tower emhttp: pci-0000:00:1f.2-scsi-0:0:1:0 host3 (sdc) ST1500DL003-9VT16L_5YD8YMY3
     Jan 4 23:39:58 Tower emhttp: pci-0000:00:1f.2-scsi-1:0:0:0 host4 (sdd) Hitachi_HDS723015BLA642_MN1B21F303G5BD
     Jan 4 23:39:58 Tower emhttp: pci-0000:00:1f.2-scsi-1:0:1:0 host4 (sde) ST1500DL003-9VT16L_5YD8ZKC2
     Jan 4 23:39:58 Tower emhttp: pci-0000:00:1f.5-scsi-0:0:0:0 host5 (sdf) Hitachi_HDS5C3015ALA632_ML0020F002NZ8D
     Jan 4 23:39:58 Tower emhttp: pci-0000:00:1f.5-scsi-1:0:0:0 host6 (sdg) SAMSUNG_HD154UI_S1Y6J1KS802855
     Jan 4 23:39:58 Tower emhttp: pci-0000:02:00.0-scsi-0:0:0:0 host0 (sda) Hitachi_HDS723015BLA642_MN1B21F301SEVA
     Jan 4 23:39:58 Tower emhttp: shcmd (2): modprobe -rw md-mod 2>&1 | logger
     Jan 4 23:39:58 Tower emhttp: shcmd (3): modprobe md-mod super=/boot/config/super.dat slots=8,16,8,48,8,32,8,64,8,80,8,96,8,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 2>&1 | logger
     Jan 4 23:39:58 Tower kernel: xor: automatically using best checksumming function: pIII_sse
     Jan 4 23:39:58 Tower unmenu-status: Starting unmenu web-server
     Jan 4 23:39:58 Tower kernel: pIII_sse : 8869.600 MB/sec
     Jan 4 23:39:58 Tower kernel: xor: using function: pIII_sse (8869.600 MB/sec)
     Jan 4 23:39:58 Tower kernel: md: unRAID driver 1.1.1 installed
     Jan 4 23:39:58 Tower kernel: md: import disk0: [8,16] (sdb) Hitachi HDS72301 MN1B20F304G19D size: 1465138552
     Jan 4 23:39:58 Tower kernel: md: import disk1: [8,48] (sdd) Hitachi HDS72301 MN1B21F303G5BD size: 1465138552
     Jan 4 23:39:58 Tower kernel: md: disk1 wrong
     Jan 4 23:39:58 Tower kernel: md: import disk2: [8,32] (sdc) ST1500DL003-9VT1 5YD8YMY3 size: 1465138552
     Jan 4 23:39:58 Tower kernel: md: import disk3: [8,64] (sde) ST1500DL003-9VT1 5YD8ZKC2 size: 1465138552
     Jan 4 23:39:58 Tower kernel: md: import disk4: [8,80] (sdf) Hitachi HDS5C301 ML0020F002NZ8D size: 1465138552
     Jan 4 23:39:58 Tower kernel: md: import disk5: [8,96] (sdg) SAMSUNG HD154UI S1Y6J1KS802855 size: 1465138552
     Jan 4 23:39:58 Tower kernel: md: import disk6: [8,0] (sda) Hitachi HDS72301 MN1B21F301SEVA size: 1465138552
     Jan 4 23:39:58 Tower kernel: mdcmd (1): set md_num_stripes 1280
     Jan 4 23:39:58 Tower kernel: mdcmd (2): set md_write_limit 768
     Jan 4 23:39:58 Tower kernel: mdcmd (3): set md_sync_window 288
     Jan 4 23:39:58 Tower kernel: mdcmd (4): set spinup_group 0 0
     Jan 4 23:39:58 Tower kernel: mdcmd (5): set spinup_group 1 0
     Jan 4 23:39:58 Tower kernel: mdcmd (6): set spinup_group 2 64
     Jan 4 23:39:58 Tower kernel: mdcmd (7): set spinup_group 3 0
     Jan 4 23:39:58 Tower kernel: mdcmd (8): set spinup_group 4 0
     Jan 4 23:39:58 Tower kernel: mdcmd (9): set spinup_group 5 0
     Jan 4 23:39:58 Tower kernel: mdcmd (10): set spinup_group 6 4
     Jan 4 23:39:58 Tower emhttp: Spinning up all drives...
     Jan 4 23:39:58 Tower kernel: mdcmd (11): spinup 0
     Jan 4 23:39:58 Tower kernel: mdcmd (12): spinup 1
     Jan 4 23:39:58 Tower kernel: mdcmd (13): spinup 2
     Jan 4 23:39:58 Tower kernel: mdcmd (14): spinup 3
     Jan 4 23:39:58 Tower kernel: mdcmd (15): spinup 4
     Jan 4 23:39:58 Tower kernel: mdcmd (16): spinup 5
     Jan 4 23:39:58 Tower kernel: mdcmd (17): spinup 6
     Jan 4 23:39:59 Tower emhttp: stale configuration
     Jan 4 23:39:59 Tower emhttp: shcmd (4): rm /etc/samba/smb-shares.conf >/dev/null 2>&1
     Jan 4 23:39:59 Tower emhttp: _shcmd: shcmd (4): exit status: 1
     Jan 4 23:39:59 Tower emhttp: shcmd (5): cp /etc/exports- /etc/exports
     Jan 4 23:39:59 Tower emhttp: shcmd (6): killall -HUP smbd
     Jan 4 23:39:59 Tower emhttp: shcmd (7): /etc/rc.d/rc.nfsd restart | logger
     Jan 4 23:40:00 Tower emhttp: shcmd (7): cp /var/spool/cron/crontabs/root- /var/spool/cron/crontabs/root
     Jan 4 23:40:00 Tower emhttp: shcmd (8): echo '# Generated mover schedule:' >>/var/spool/cron/crontabs/root
     Jan 4 23:40:00 Tower emhttp: shcmd (9): echo '40 3 * * * /usr/local/sbin/mover 2>&1 | logger' >>/var/spool/cron/crontabs/root
     Jan 4 23:40:00 Tower emhttp: shcmd (10): crontab /var/spool/cron/crontabs/root
     Jan 4 23:40:05 Tower ntpd[1437]: synchronized to 204.9.54.119, stratum 1
     Jan 4 23:40:04 Tower ntpd[1437]: time reset -0.863208 s
     Jan 4 23:44:15 Tower emhttp: shcmd (12): /usr/local/sbin/set_ncq sdb 1 >/dev/null
     Jan 4 23:44:15 Tower emhttp: shcmd (13): /usr/local/sbin/set_ncq sdd 1 >/dev/null
     Jan 4 23:44:15 Tower emhttp: shcmd (14): /usr/local/sbin/set_ncq sdc 1 >/dev/null
     Jan 4 23:44:15 Tower emhttp: shcmd (15): /usr/local/sbin/set_ncq sde 1 >/dev/null
     Jan 4 23:44:15 Tower emhttp: shcmd (16): /usr/local/sbin/set_ncq sdf 1 >/dev/null
     Jan 4 23:44:15 Tower emhttp: shcmd (17): /usr/local/sbin/set_ncq sdg 1 >/dev/null
     Jan 4 23:44:15 Tower emhttp: shcmd (18): /usr/local/sbin/set_ncq sda 1 >/dev/null
     Jan 4 23:44:15 Tower emhttp: writing mbr on disk 1 (/dev/sdd) with partition 1 offset 64
     Jan 4 23:44:15 Tower emhttp: re-reading /dev/sdd partition table
     Jan 4 23:44:15 Tower kernel: sdd: sdd1
     Jan 4 23:44:16 Tower kernel: mdcmd (18): start UPGRADE_DISK
     Jan 4 23:44:16 Tower kernel: unraid: allocating 38840K for 1280 stripes (7 disks)
     Jan 4 23:44:16 Tower kernel: md1: running, size: 1465138552 blocks
     Jan 4 23:44:16 Tower kernel: md2: running, size: 1465138552 blocks
     Jan 4 23:44:16 Tower kernel: md3: running, size: 1465138552 blocks
     Jan 4 23:44:16 Tower kernel: md4: running, size: 1465138552 blocks
     Jan 4 23:44:16 Tower kernel: md5: running, size: 1465138552 blocks
     Jan 4 23:44:16 Tower kernel: md6: running, size: 1465138552 blocks
     Jan 4 23:44:17 Tower emhttp: shcmd (19): udevadm settle
     Jan 4 23:44:17 Tower emhttp: shcmd (20): mkdir /mnt/disk4
     Jan 4 23:44:17 Tower emhttp: shcmd (20): mkdir /mnt/disk5
     Jan 4 23:44:17 Tower emhttp: shcmd (20): mkdir /mnt/disk1
     Jan 4 23:44:17 Tower emhttp: shcmd (20): mkdir /mnt/disk3
     Jan 4 23:44:17 Tower emhttp: shcmd (20): mkdir /mnt/disk2
     Jan 4 23:44:17 Tower emhttp: shcmd (20): mkdir /mnt/disk6
     Jan 4 23:44:17 Tower kernel: mdcmd (19): check
     Jan 4 23:44:17 Tower kernel: md: recovery thread woken up ...
     Jan 4 23:44:17 Tower kernel: md: recovery thread rebuilding disk1 ...
     Jan 4 23:44:17 Tower emhttp: shcmd (21): set -o pipefail ; mount -t reiserfs -o noacl,nouser_xattr,noatime,nodiratime /dev/md4 /mnt/disk4 2>&1 | logger
     Jan 4 23:44:17 Tower emhttp: shcmd (22): set -o pipefail ; mount -t reiserfs -o noacl,nouser_xattr,noatime,nodiratime /dev/md3 /mnt/disk3 2>&1 | logger
     Jan 4 23:44:17 Tower emhttp: shcmd (23): set -o pipefail ; mount -t reiserfs -o noacl,nouser_xattr,noatime,nodiratime /dev/md6 /mnt/disk6 2>&1 | logger
     Jan 4 23:44:17 Tower emhttp: shcmd (24): set -o pipefail ; mount -t reiserfs -o noacl,nouser_xattr,noatime,nodiratime /dev/md2 /mnt/disk2 2>&1 | logger
     Jan 4 23:44:17 Tower emhttp: shcmd (25): set -o pipefail ; mount -t reiserfs -o noacl,nouser_xattr,noatime,nodiratime /dev/md5 /mnt/disk5 2>&1 | logger
     Jan 4 23:44:17 Tower emhttp: shcmd (26): set -o pipefail ; mount -t reiserfs -o noacl,nouser_xattr,noatime,nodiratime /dev/md1 /mnt/disk1 2>&1 | logger
     Jan 4 23:44:17 Tower kernel: md: using 1152k window, over a total of 1465138552 blocks.
     Jan 4 23:44:17 Tower kernel: REISERFS (device md6): found reiserfs format "3.6" with standard journal
     Jan 4 23:44:17 Tower kernel: REISERFS (device md6): using ordered data mode
     Jan 4 23:44:17 Tower kernel: REISERFS (device md4): found reiserfs format "3.6" with standard journal
     Jan 4 23:44:17 Tower kernel: REISERFS (device md4): using ordered data mode
     Jan 4 23:44:17 Tower kernel: REISERFS (device md3): found reiserfs format "3.6" with standard journal
     Jan 4 23:44:17 Tower kernel: REISERFS (device md3): using ordered data mode
     Jan 4 23:44:17 Tower kernel: REISERFS (device md1): found reiserfs format "3.6" with standard journal
     Jan 4 23:44:17 Tower kernel: REISERFS (device md1): using ordered data mode
     Jan 4 23:44:17 Tower kernel: REISERFS (device md2): found reiserfs format "3.6" with standard journal
     Jan 4 23:44:17 Tower kernel: REISERFS (device md2): using ordered data mode
     Jan 4 23:44:17 Tower kernel: REISERFS (device md5): found reiserfs format "3.6" with standard journal
     Jan 4 23:44:17 Tower kernel: REISERFS (device md5): using ordered data mode
     Jan 4 23:44:17 Tower kernel: REISERFS (device md6): journal params: device md6, size 8192, journal first block 18, max trans len 1024, max batch 900, max commit age 30, max trans age 30
     Jan 4 23:44:17 Tower kernel: REISERFS (device md6): checking transaction log (md6)
     Jan 4 23:44:17 Tower kernel: REISERFS (device md4): journal params: device md4, size 8192, journal first block 18, max trans len 1024, max batch 900, max commit age 30, max trans age 30
     Jan 4 23:44:17 Tower kernel: REISERFS (device md4): checking transaction log (md4)
     Jan 4 23:44:17 Tower kernel: REISERFS (device md3): journal params: device md3, size 8192, journal first block 18, max trans len 1024, max batch 900, max commit age 30, max trans age 30
     Jan 4 23:44:17 Tower kernel: REISERFS (device md3): checking transaction log (md3)
     Jan 4 23:44:17 Tower kernel: REISERFS (device md2): journal params: device md2, size 8192, journal first block 18, max trans len 1024, max batch 900, max commit age 30, max trans age 30
     Jan 4 23:44:17 Tower kernel: REISERFS (device md2): checking transaction log (md2)
     Jan 4 23:44:17 Tower kernel: REISERFS (device md5): journal params: device md5, size 8192, journal first block 18, max trans len 1024, max batch 900, max commit age 30, max trans age 30
     Jan 4 23:44:17 Tower kernel: REISERFS (device md5): checking transaction log (md5)
     Jan 4 23:44:17 Tower kernel: REISERFS (device md1): journal params: device md1, size 8192, journal first block 18, max trans len 1024, max batch 900, max commit age 30, max trans age 30
     Jan 4 23:44:17 Tower kernel: REISERFS (device md1): checking transaction log (md1)
     Jan 4 23:44:17 Tower kernel: REISERFS (device md4): Using r5 hash to sort names
     Jan 4 23:44:17 Tower kernel: REISERFS (device md6): Using r5 hash to sort names
     Jan 4 23:44:17 Tower kernel: REISERFS (device md3): Using r5 hash to sort names
     Jan 4 23:44:17 Tower kernel: REISERFS (device md5): Using r5 hash to sort names
     Jan 4 23:44:17 Tower kernel: REISERFS (device md2): Using r5 hash to sort names
     Jan 4 23:44:17 Tower kernel: REISERFS (device md1): Using r5 hash to sort names
     Jan 4 23:44:18 Tower emhttp: shcmd (32): rm /etc/samba/smb-shares.conf >/dev/null 2>&1
     Jan 4 23:44:18 Tower emhttp: shcmd (33): cp /etc/exports- /etc/exports
     Jan 4 23:44:18 Tower emhttp: shcmd (34): mkdir /mnt/user
     Jan 4 23:44:18 Tower emhttp: shcmd (35): /usr/local/sbin/shfs /mnt/user -o noatime,big_writes,allow_other,default_permissions
     Jan 4 23:44:30 Tower emhttp: get_config_idx: fopen /boot/config/shares/DVD.cfg: No such file or directory - assigning defaults
     Jan 4 23:44:30 Tower emhttp: get_config_idx: fopen /boot/config/shares/FromTOS1000.cfg: No such file or directory - assigning defaults
     Jan 4 23:44:30 Tower emhttp: get_config_idx: fopen /boot/config/shares/PBS.cfg: No such file or directory - assigning defaults
     Jan 4 23:44:30 Tower emhttp: get_config_idx: fopen /boot/config/shares/Q9400-DDrive.cfg: No such file or directory - assigning defaults
     Jan 4 23:44:30 Tower emhttp: get_config_idx: fopen /boot/config/shares/Sam154.cfg: No such file or directory - assigning defaults
     Jan 4 23:44:30 Tower emhttp: get_config_idx: fopen /boot/config/shares/TV.cfg: No such file or directory - assigning defaults
     Jan 4 23:44:30 Tower emhttp: get_config_idx: fopen /boot/config/shares/VRDsave-G5BD.cfg: No such file or directory - assigning defaults
     Jan 4 23:44:30 Tower emhttp: shcmd (36): killall -HUP smbd
     Jan 4 23:44:30 Tower emhttp: shcmd (37): /etc/rc.d/rc.nfsd restart | logger
     Jan 4 23:48:43 Tower ntpd[1437]: synchronized to 204.9.54.119, stratum 1
     Jan 5 08:24:30 Tower kernel: md: sync done. time=31212sec rate=46941K/sec
     Jan 5 08:24:30 Tower kernel: md: recovery thread sync completion status: 0
     I started a new parity check (NOCORRECT) and right away it shows: sync errors 3 (corrected). Am I right to assume these 3 sync errors are due to disk5? I guess my next step is to preclear the drive I removed, to try to get the pending sectors reallocated, and use that drive to replace drive 5 and rebuild. Any guidance would be very much appreciated. Thanks Ed
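For anyone following the pending/reallocated sector counts above, a minimal sketch of pulling just those SMART attributes from a drive, assuming disk5 is /dev/sdg as in the device inventory (adjust the device node to your system):

```bash
# Show the SMART attribute table and filter for the counters discussed above
smartctl -A /dev/sdg | grep -Ei 'pending|reallocated'

# Kick off a short self-test, then review the result a few minutes later
smartctl -t short /dev/sdg
smartctl -l selftest /dev/sdg
```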
  6. Thanks Alex, I just started to have this problem where several computers on my network could not see my unRaid server in the Explorer Network. I thought it might have been a master browser problem, as I encountered one years ago when I had XP systems. You'd think that Microsoft would have solved it for Win7/8. Your post about having the router as master browser was spot on, since it is never switched off and you don't get into the master browser election BS. @GreggP I have an ASUS N66u with Merlin firmware and it is under the USB Application / Network share tab; I don't know if your router has this. I read in another forum that some ASUS routers will act as master browser if DLNA Media Server is enabled. Hope this helps Ed
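If someone wants to confirm which box actually won the master browser election, a minimal sketch using Samba's nmblookup from the unRAID console; WORKGROUP is an assumption, substitute your own workgroup name:

```bash
# Ask the network which host is acting as master browser for the workgroup
nmblookup -M WORKGROUP
```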
  7. Hi, I am running unRAID Server Pro 4.7 that I put together several years ago; I have been away from my system and have not used it for several years. I went to add a new drive to the array and wanted to preclear it first. The new disk is a Seagate ST1500DL003 and preclear wanted to set the partition to 4K even though I did not use the -A option (is this normal behavior?). I looked up the drive specs and it is internally 4K and uses SmartAlign for older OSes. This got me thinking about my other drives, a 1.5T parity and 2 x 1.5T data drives. My parity is a Hitachi HDS723015BLA642, native 512; my first data drive is a Hitachi HDS5C3015ALA632, native 512; my second data drive is a Seagate ST1500DL003, native 4K. My device settings are MBR: 4K-aligned. When I click on the disk link from the main page, the two Hitachi drives show MBR: 4K-aligned, but the Seagate drive shows unknown. Why is this? I'm pretty sure I started with the 2 Hitachi drives and added the Seagate later. I typically preclear all drives. I think that I initially built the array on an older version of unRaid and upgraded to 4.7. I don't remember if I forced MBR 4K using preclear on the Hitachi drives (512 native), and I don't remember if I forgot to add -A when preclearing the Seagate (did it format to 512? does it matter?). hdparm from myRaid reports: Logical Sector size: 512 bytes; Physical Sector size: 4096 bytes; Logical Sector-0 offset: 0 bytes. What is the best way to fix this? In general, is it best to force MBR 4K for all drives when preclearing if you know you will have a mix of 512 and 4K aligned drives? Thanks Ed
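A minimal sketch of how the sector size and partition alignment can be checked from the console, assuming the drive of interest is /dev/sdX (substitute the real device node; a 4K-aligned unRAID partition starts at sector 64, an unaligned one at sector 63):

```bash
# Logical vs physical sector size of the drive
hdparm -I /dev/sdX | grep -i 'sector size'

# Starting sector of partition 1 (64 = 4K-aligned, 63 = not)
fdisk -lu /dev/sdX
```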
  8. Thanks guys. As noted above, I manually installed and it was able to download the pkg and install successfully, and I see the local copy on the flash. The confusing thing was that I could log in as root (directly on the server, not telnet) and was able to ping the googlecode address. Regards
  9. Did a quick search and couldn't find anything specific to this. Trying to install SimpleFeatures on a test build to try it out. Made a new flash with v5.0-rc8a with unmenu. Booted and everything seems fine. Downloaded the SimpleFeatures zip files and made a plugins dir inside the config dir, copied all files from the unzipped SimpleFeatures file. When I rebooted, I see the following 12 times - once for each plg file: wget: unable to resolve host address 'unraid-simplefeatures.googlecode.com'. I pinged unraid-simplefeatures.googlecode.com from the server and got a response from 74.125.142.82. Upon reboot, it should automatically download all the necessary files and install them, right? There's a simpleFeatures directory under plugins, but it is empty. Ok, tried a manual install of the core pkg and it seems to work. Do I have to manually invoke installplg for each plugin? Thanks Ed
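On the manual route being asked about, a minimal sketch of invoking installplg once per plugin file; this assumes the .plg files live in /boot/config/plugins and that installplg is on the PATH (both assumptions, and name resolution still has to work for any packages a plugin downloads):

```bash
# Install each SimpleFeatures .plg by hand instead of waiting for the boot-time pass
for p in /boot/config/plugins/*.plg; do
    installplg "$p"
done
```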
  10. Thanks a bunch Joe! How did I know that you would answer and answer so clearly? I guess I was confused by reading something about AF drives performing better when aligned, but now that I read that passage again in light of your answer, it was specifically pertaining to the WD EARS drive and not to AF drives in general. Regards.. Ed
  11. I've been away from unRaid for a while and want to build a new array. I have 2 Hitachi 1.5T drives, one is 7200rpm and the other is Coolspin (5400?), and I just purchased a Seagate ST1500DL003 1.5T spinning at 5900rpm that I understand is an AF drive (Seagate SmartAlign?). Which should be my parity drive? I was thinking the Hitachi 7200rpm, but I've since read that AF drives may perform better. Is this true for the parity function? Will the Hitachi 7200rpm still perform better vs the 5900rpm AF? Do I format each drive 4K-aligned or only the Seagate? When preclearing, do I have to specify the alignment, or is this only selected at the time of formatting? Thanks Ed
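On where the alignment gets chosen: with Joe L.'s preclear script it is picked at preclear time via a flag, not at format time. A minimal sketch, assuming the script sits on the flash drive as preclear_disk.sh and the target is /dev/sdX (both assumptions; double-check the device before running, since preclearing erases the disk):

```bash
# Preclear with a 4K-aligned MBR (partition 1 starting at sector 64)
/boot/preclear_disk.sh -A /dev/sdX

# Or with the default unaligned layout (partition 1 starting at sector 63)
/boot/preclear_disk.sh -a /dev/sdX
```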
  12. So this is somewhat common, to get address sync errors at the beginning of a parity check? Are you saying that these errors are from differences in journal entries of the data drives? I failed to mention that I mounted disk9 on my XP desktop using a PATA to USB2 adapter and YAReG-1.0 to read the disk and see if the data was there at all. I've never seen these address sync errors in any parity check before. Are you also saying that parity w.r.t. the data drives is intact? I was afraid that the reported errors resulted in the parity being updated. I already used the Trust procedure to get to this point (which includes the Restore). Before seeing your reply, I decided to unassign disk9, start the array, and copy the rest of the data from the array to some space I freed on another desktop. Since I've now unassigned disk9 and restarted the array, I'm committed to the above rebuilding procedure. I'm just still hung up on those address sync errors possibly having changed parity, in which case the rebuild will write incorrect data. I have this sinking feeling that trying to run Parity -nocorrect as a sanity check was not a good idea and I should have just started a rebuild from the start. Thanks.. Ed
  13. My original PATA unRaid server developed write errors to one of the disks a while back, but for various reasons I have not had the time to debug it. After some months, I started the server and now I had 4 drives missing -- Aha! These drives are "paired" -- the parity drive and disk 1, and disk 8 and disk 9 (disk 9 had the original drive errors). I opened the case and realized that for disk 9 the Y power splitter was suspect. I replaced the power splitters and reseated the IDE cable for the 4 drives and rebooted the system. Now parity, disk 1, and disk 8 are OK, but disk 9 was still marked disabled. I copied about 80G of data to my desktop, letting unRaid correct the data. I physically pulled disk 9 and ran SpinRite, which found no errors, so I figure that I only had cabling errors and all the data is OK. So my plan was to put the drive back in, use the Trust My Array procedure to initialize the array, and run a parity check -nocorrect as verification that no data is actually in error. After starting the array, a parity check started which I wanted to stop (so I could start a no-correct check), so I mistakenly pressed Stop array instead of Cancel parity. I restarted the array and unMenu says that "Parity updated 130 times to address sync errors". So now my questions... Since I feel that all my data was OK to begin with, where are these errors coming from? Parity was only running a very short time - I pressed stop as soon as I could after the array started from the Trust My Array procedure. Does this mean that my parity disk has now changed, and my only option is to forget about running a Parity -nocorrect, assume that my disk 9 is valid, and just run a normal parity check (using the restore array) letting the parity disk get updated? Disk 9 is a 250G drive, almost full, of which I could only copy about 80G to free space on my desktop. Thanks.. Ed
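For anyone trying to see the same counters from the console rather than unMenu, a rough sketch; the mdcmd interface is what unMenu itself reads, though the path, the NOCORRECT syntax, and the variable names shown here come from later unRAID releases and are assumptions for a server this old:

```bash
# Dump the md driver state, including the sync/parity-check error counters
mdcmd status | grep -i sync

# Start a non-correcting parity check (syntax from later releases; an assumption here)
mdcmd check NOCORRECT
```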
  14. Running Server 3.0. Can I mix SATA and PATA using the built-in SATA ports on the stock Intel D865GLCLK MB? I would like to add a SATA drive to one of the two built-in SATA ports on the MB and assign it as the parity drive, since this should give me somewhat better parity write performance. Presently I have eight IDE drives. Assuming I can populate the two MB SATA ports, am I limited to only adding two more IDE drives, for a total of 12 drives (until I upgrade to 4.0)? I only have 1 Promise TX4 card installed right now. Thanks.. Ed
  15. What version of unRaid are you using? I got the same behavior before I upgraded to ver3.0. Don't replace the motherboard yet; it's most likely that your MB is OK. I think what happened is that, because of the disk error, samba (which provides file services for connected Windows machines) did not start. I don't know why the unRaid server management page did not start. Using a monitor and keyboard attached to the server, I tried doing a shutdown from the command prompt several times, but still couldn't get the management page up on my Windows machine. I shut off main power (the rocker switch on the PS), reseated my Promise TX2 adapter card, and replugged the IDE cables. When I restarted the server, it seemed to boot OK, and after waiting a bit I could get the web page status up. The 2 drives plugged into one of the cables on my Promise adapter came up as new. I was careful not to allow it to reformat the drives; I just started a sync to rebuild parity and didn't lose any data. Since then I upgraded to version 3.0 and it has not happened since (though I only updated recently). HTH. Regards.. Ed
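If the shares or the management page go missing again, a rough sketch of restarting them from the attached console before resorting to a power cycle; the paths are the usual Slackware/unRAID ones but are assumptions for a release this old:

```bash
# Restart Samba so Windows machines can see the shares again
/etc/rc.d/rc.samba restart

# Relaunch the unRAID web management interface if it is no longer running
/usr/local/sbin/emhttp &
```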