javier911

  1. Thanks for the reply. I was getting some help from people in a chat and I've gotten most of the data off now. Part of it was done the way you suggested: I set up another PC with an Ubuntu live USB, pulled the unraid disk, and read/copied it there. There's still that other disk that suddenly went "unformatted". Attempts to run reiserfsck --check on it fail, both when it was in the array using the md device and under Ubuntu; it exits with an error along the lines of "bread: cannot read from block". Any ideas how that could be recovered? (Rough sketch of what I'm considering below.) I guess at this point, with little to lose, I could set up the unraid array again, force it to re-accept that first failed disk that was always actually OK, and then let it simulate the "unformatted" disk's contents? I'd like to have unraid going again, but purchasing a bunch of new large disks just isn't possible for me right now. I'll probably use some spare parts I have to build a new unraid box but re-use most of the same disks; I'll be sure to pre-clear and check them extensively first. To speed that up I'll look into running the pre-clears and checks on another, faster machine with a temporary unraid install or some kind of live Linux. I understand why you suggested ditching the drives, though. If I make a "new" array I won't think of it as a backup. I know that's a bad idea, and admittedly I've been guilty of it in the past, which is why I always had to move so slowly and carefully once I started having problems, and why this has dragged out over years :).
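     In case it helps to be concrete, here's roughly what I'm thinking of trying from the Ubuntu live USB. The device name and paths are placeholders I made up, and it assumes GNU ddrescue and reiserfsprogs are installed and there's room somewhere for a 2TB image, so treat it as a sketch rather than a known-good recovery procedure:
         # Image the failing disk first, then run the filesystem check against the copy,
         # so reads that hit bad sectors don't keep aborting the check.
         # /dev/sdX1 is a placeholder for the "unformatted" disk's partition.
         sudo apt-get install gddrescue reiserfsprogs
         sudo ddrescue -d -r3 /dev/sdX1 /mnt/recovery/disk.img /mnt/recovery/disk.map
         # Attach the image to a loop device and check that instead of the raw disk.
         LOOP=$(sudo losetup -f --show /mnt/recovery/disk.img)
         sudo reiserfsck --check "$LOOP"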
  2. Yes, the array started automatically when it rebooted after I connected more disks to the external PSU. I can see and copy files from it, and if I go to the direct share for the disk that's red-balled, \\unraid\disk2, I can see and open the files in there. The sizes of some directories (the total of all the file sizes in them) seem smaller than I remember, but that could just be my memory. I'm using a utility to copy some files now, and looking at the list of failed copies there's a bunch, but they're small files in very long paths, so maybe it's just a quirk Windows can't handle (rough workaround sketch at the end of this post).
     UPDATE: There was some kind of error several times in the logs while copying those files, text file attached. unraid_errors_during_copy1.txt
     UPDATE 2: I should not have powered it down yesterday, but with this external PSU setup I felt it wasn't a good idea to leave it running overnight. Unfortunately there are now more problems: there are Buffer I/O errors when it's starting up, and I don't recall seeing them (or at least not this many) before. Another disk now says "unformatted", so there goes another 2TB of stuff. It's just getting worse and worse.
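     If the failed copies really are down to Windows path-length limits, something like this from a Linux machine might sidestep it. The mount options and local paths are made up, so this is only a sketch:
         # Mount the disk share read-only over SMB and copy with rsync; long paths are
         # not a problem on the Linux side, and failed files end up in the log.
         # //unraid/disk2 is the share mentioned above; the other paths are placeholders,
         # and "guest" assumes the share allows guest access.
         sudo mount -t cifs //unraid/disk2 /mnt/disk2 -o ro,guest
         rsync -rtv --log-file=/data/rescue/copy.log /mnt/disk2/ /data/rescue/disk2/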
  3. Sorry, it's 5.0.5. But I guess I don't need to try moving it to the PCI-E card now, since the missing disk came back. Any other advice on how to proceed would be greatly appreciated though. I looked up old posts from when I first started having the original failed-disk issue. Initially I did the "Trust My Array" procedure and screwed it up a bit, but then someone told me to do this and it worked: Should I do that again now that I'm testing with an external PSU, and see if the disk red-balls again? Another suggestion after I solved the trust mess-up was this: I don't know if that has any relevance now. Thanks for all the help.
  4. UPDATE: The "missing" disk came back when I added more disks to the external PSU. So it's probably a power issue, correct? That still leaves the one failed disk, and I could still use some advice on how to proceed. Thanks.
     Unraid 5.0.5 on an old DDR2 AMD system (I forget exactly which CPU). 10 disks total, 9x 2TB data and 1x 4TB parity. I can't recall the PSU specs and I can't see the label on it now, but it's an Antec.
     A long time ago, one of my disks "red balled". I can't remember everything I did now, but the disk always seemed to be OK after tests (rough example of the checks I mean below) and whatever steps I took, yet it kept going back to a red ball, so I'd shut off the box and months would pass before I'd try again. In the end I decided it must be either a cable issue or an insufficient PSU.
     Now I'm trying again. I had that one failed (red-balled) disk, same as in the past. I disconnected all the SATA cables because it was a mess in there, which let me check the power connectors and all the SATA cables carefully. I put them all back and then another disk started reporting as "missing", as I talked about in the other thread. I changed the missing disk's SATA cable: still missing. I switched a few disks to an external PSU to see if it was insufficient power: still missing. I thought the SATA port on the motherboard might be the culprit, but the disk shows up in POST. The "missing" disk has now returned.
     On that note, does unraid care what controller or port a disk is connected to? I could move the missing disk onto my PCI-E card, which still has 3 ports free. I could also try swapping the PSU for the most powerful one I have on hand, a roughly 1-2 year old 650W unit. Another option is to just abandon this box entirely: with the external PSU connected for a few days, get whatever I can off the array.
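     For reference, the kind of check I can run again on the red-balled disk while it's on the external PSU would be something like this (just a sketch; it assumes smartmontools is available on the box and /dev/sdX is a placeholder for the suspect disk):
         # Overall SMART health, plus the attributes that usually separate a cabling/power
         # problem (UDMA_CRC_Error_Count) from a genuinely failing disk
         # (Reallocated_Sector_Ct, Current_Pending_Sector).
         smartctl -H /dev/sdX
         smartctl -A /dev/sdX | egrep -i 'Reallocated|Pending|CRC'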
  5. I'm going to set up a Linux PC so I can pull my unraid disks and try to recover whatever I can (rough plan below). Any recommendations for a specific Linux distro that might be best for this purpose? I was just going to go with Ubuntu, but if anyone has a better suggestion I'd appreciate it.
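     To be clear about what I'm planning once a disk is in the Linux box: mount it read-only and copy off whatever is readable, roughly like this. The device name and destination path are placeholders, and I'm assuming the data disks are ReiserFS on the first partition, which is what my unraid 5 data disks use:
         # Mount one unraid data disk read-only, then copy its contents elsewhere.
         sudo mkdir -p /mnt/unraid_disk
         sudo mount -t reiserfs -o ro /dev/sdX1 /mnt/unraid_disk
         rsync -rtv /mnt/unraid_disk/ /data/rescue/diskN/   # destination is a placeholder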
  6. I have a disk that shows up in POST, but unraid says it's "missing". What does that mean exactly?
  7. Any problems with running an additional external power supply for some hard drives? I was thinking of putting some HDDs on a separate external power supply temporarily to determine whether a problem I'm having is due to insufficient power.
  8. That worked, thank you very much. I was pretty sure the data would be intact but not totally sure; now I can sleep.
  9. I decided to use that procedure on my 5.0.5 unraid box. Unfortunately I misunderstood some of it: I didn't realize the "recent 5.x" warning applied to 5.0.5, and I screwed up my array, probably by refreshing the webpage like it said not to do; I thought that warning only applied to 4.7. I know I should have just let it rebuild the disk, but the failed disk passed its tests and I didn't want my array to be down or super slow for days. I later found this thread: http://lime-technology.com/forum/index.php?topic=19385 ...and I'm in the same situation as the poster there whose drives all became unassigned with "blue" status. So I manually assigned them all again, making sure the parity drive is the same, which is easy because it's the largest drive. Then I check "Parity is already valid" and start the array, but it never starts. In the log it seems to repeat the same sequence over and over while trying to start the array:
     Oct 14 04:32:43 unraid1 kernel: mdcmd (16): import 15 0,0
     Oct 14 04:32:43 unraid1 kernel: mdcmd (17): import 16 0,0
     Oct 14 04:32:43 unraid1 kernel: mdcmd (18): import 17 0,0
     Oct 14 04:32:43 unraid1 kernel: mdcmd (19): import 18 0,0
     Oct 14 04:32:43 unraid1 kernel: mdcmd (20): import 19 0,0
     Oct 14 04:32:43 unraid1 kernel: mdcmd (21): import 20 0,0
     Oct 14 04:32:43 unraid1 kernel: mdcmd (22): import 21 0,0
     Oct 14 04:32:43 unraid1 kernel: mdcmd (23): import 22 0,0
     Oct 14 04:32:43 unraid1 kernel: mdcmd (24): import 23 0,0
     Oct 14 04:32:43 unraid1 emhttp_event: driver_loaded
     Oct 14 04:33:12 unraid1 emhttp: shcmd (4650): rmmod md-mod |& logger
     Oct 14 04:33:12 unraid1 emhttp: shcmd (4651): modprobe md-mod super=/boot/config/super.dat slots=24 |& logger
     Oct 14 04:33:12 unraid1 kernel: md: unRAID driver removed
     Oct 14 04:33:12 unraid1 emhttp: shcmd (4652): udevadm settle
     Oct 14 04:33:12 unraid1 kernel: md: unRAID driver 2.2.0 installed
     Oct 14 04:33:12 unraid1 kernel: read_file: error 2 opening /boot/config/super.dat
     Oct 14 04:33:12 unraid1 kernel: md: could not read superblock from /boot/config/super.dat
     Oct 14 04:33:12 unraid1 kernel: md: initializing superblock
     Oct 14 04:33:12 unraid1 emhttp: Device inventory:
     Oct 14 04:33:12 unraid1 emhttp: Hitachi_HDS5C3020ALA632_ML0220F311UPBD (sdb) 1953514584
     Oct 14 04:33:12 unraid1 emhttp: WDC_WD20EARS-00MVWB0_WD-WCAZA6010408 (sdc) 1953514584
     Oct 14 04:33:12 unraid1 emhttp: WDC_WD20EARS-00J99B0_WD-WCAWZ0950511 (sdd) 1953514584
     Oct 14 04:33:12 unraid1 emhttp: WDC_WD20EARS-00J99B0_WD-WCAWZ0929154 (sde) 1953514584
     Oct 14 04:33:12 unraid1 emhttp: WDC_WD20EARS-00J2GB0_WD-WCAYY0152863 (sdf) 1953514584
     Oct 14 04:33:12 unraid1 emhttp: ST2000DM001-1CH164_Z1E24FA6 (sdg) 1953514584
     Oct 14 04:33:12 unraid1 emhttp: ST2000DM001-1CH164_Z1E24FLV (sdh) 1953514584
     Oct 14 04:33:12 unraid1 emhttp: WDC_WD20EARS-00MVWB0_WD-WCAZA1284898 (sdi) 1953514584
     Oct 14 04:33:12 unraid1 emhttp: WDC_WD20EARX-00PASB0_WD-WMAZA9865830 (sdj) 1953514584
     Oct 14 04:33:12 unraid1 emhttp: HGST_HDS5C4040ALE630_PL1331LAGHYSWH (sdk) 3907018584
     Oct 14 04:33:12 unraid1 kernel: mdcmd (1): import 0 8,160 3907018532 HGST_HDS5C4040ALE630_PL1331LAGHYSWH
     Oct 14 04:33:12 unraid1 kernel: md: import disk0: [8,160] (sdk) HGST_HDS5C4040ALE630_PL1331LAGHYSWH size: 3907018532
     Oct 14 04:33:12 unraid1 kernel: md: disk0 new disk
     Oct 14 04:33:12 unraid1 kernel: mdcmd (2): import 1 8,32 1953514552 WDC_WD20EARS-00MVWB0_WD-WCAZA6010408
     Oct 14 04:33:12 unraid1 kernel: md: import disk1: [8,32] (sdc) WDC_WD20EARS-00MVWB0_WD-WCAZA6010408 size: 1953514552
     Oct 14 04:33:12 unraid1 kernel: md: disk1 new disk
     Oct 14 04:33:12 unraid1 kernel: mdcmd (3): import 2 8,48 1953514552 WDC_WD20EARS-00J99B0_WD-WCAWZ0950511
     Oct 14 04:33:12 unraid1 kernel: md: import disk2: [8,48] (sdd) WDC_WD20EARS-00J99B0_WD-WCAWZ0950511 size: 1953514552
     Oct 14 04:33:12 unraid1 kernel: md: disk2 new disk
     Oct 14 04:33:12 unraid1 kernel: mdcmd (4): import 3 8,64 1953514552 WDC_WD20EARS-00J99B0_WD-WCAWZ0929154
     Oct 14 04:33:12 unraid1 kernel: md: import disk3: [8,64] (sde) WDC_WD20EARS-00J99B0_WD-WCAWZ0929154 size: 1953514552
     Oct 14 04:33:12 unraid1 kernel: md: disk3 new disk
     Oct 14 04:33:12 unraid1 kernel: mdcmd (5): import 4 8,80 1953514552 WDC_WD20EARS-00J2GB0_WD-WCAYY0152863
     Oct 14 04:33:12 unraid1 kernel: md: import disk4: [8,80] (sdf) WDC_WD20EARS-00J2GB0_WD-WCAYY0152863 size: 1953514552
     Oct 14 04:33:12 unraid1 kernel: md: disk4 new disk
     Oct 14 04:33:12 unraid1 kernel: mdcmd (6): import 5 8,128 1953514552 WDC_WD20EARS-00MVWB0_WD-WCAZA1284898
     Oct 14 04:33:12 unraid1 kernel: md: import disk5: [8,128] (sdi) WDC_WD20EARS-00MVWB0_WD-WCAZA1284898 size: 1953514552
     Oct 14 04:33:12 unraid1 kernel: md: disk5 new disk
     Oct 14 04:33:12 unraid1 kernel: mdcmd (7): import 6 8,144 1953514552 WDC_WD20EARX-00PASB0_WD-WMAZA9865830
     Oct 14 04:33:12 unraid1 kernel: md: import disk6: [8,144] (sdj) WDC_WD20EARX-00PASB0_WD-WMAZA9865830 size: 1953514552
     Oct 14 04:33:12 unraid1 kernel: md: disk6 new disk
     Oct 14 04:33:12 unraid1 kernel: mdcmd (8): import 7 8,96 1953514552 ST2000DM001-1CH164_Z1E24FA6
     Oct 14 04:33:12 unraid1 kernel: md: import disk7: [8,96] (sdg) ST2000DM001-1CH164_Z1E24FA6 size: 1953514552
     Oct 14 04:33:12 unraid1 kernel: md: disk7 new disk
     Oct 14 04:33:12 unraid1 kernel: mdcmd (9): import 8 8,112 1953514552 ST2000DM001-1CH164_Z1E24FLV
     Oct 14 04:33:12 unraid1 kernel: md: import disk8: [8,112] (sdh) ST2000DM001-1CH164_Z1E24FLV size: 1953514552
     Oct 14 04:33:12 unraid1 kernel: md: disk8 new disk
     Oct 14 04:33:12 unraid1 kernel: mdcmd (10): import 9 8,16 1953514552 Hitachi_HDS5C3020ALA632_ML0220F311UPBD
     Oct 14 04:33:12 unraid1 kernel: md: import disk9: [8,16] (sdb) Hitachi_HDS5C3020ALA632_ML0220F311UPBD size: 1953514552
     Oct 14 04:33:12 unraid1 kernel: md: disk9 new disk
     Oct 14 04:33:12 unraid1 kernel: mdcmd (11): import 10 0,0
     Oct 14 04:33:12 unraid1 kernel: mdcmd (12): import 11 0,0
     Oct 14 04:33:12 unraid1 kernel: mdcmd (13): import 12 0,0
     Oct 14 04:33:12 unraid1 kernel: mdcmd (14): import 13 0,0
     Oct 14 04:33:12 unraid1 kernel: mdcmd (15): import 14 0,0
     Oct 14 04:33:12 unraid1 kernel: mdcmd (16): import 15 0,0
     Oct 14 04:33:12 unraid1 kernel: mdcmd (17): import 16 0,0
     Oct 14 04:33:12 unraid1 kernel: mdcmd (18): import 17 0,0
     Oct 14 04:33:12 unraid1 kernel: mdcmd (19): import 18 0,0
     Oct 14 04:33:12 unraid1 kernel: mdcmd (20): import 19 0,0
     Oct 14 04:33:12 unraid1 kernel: mdcmd (21): import 20 0,0
     Oct 14 04:33:12 unraid1 kernel: mdcmd (22): import 21 0,0
     Oct 14 04:33:12 unraid1 kernel: mdcmd (23): import 22 0,0
     Oct 14 04:33:12 unraid1 kernel: mdcmd (24): import 23 0,0
     Oct 14 04:33:12 unraid1 emhttp: shcmd (4653): /usr/local/sbin/emhttp_event driver_loaded
     Oct 14 04:33:12 unraid1 emhttp_event: driver_loaded
     I see that section repeated over and over, every minute or so. When I ran initconfig it gave the message about renaming super.dat to super.bak, but I cannot find any file named super.bak; I did find a file named super.old. Do I have to restore that or something? Anyway, I'm lost now, so basically 2 questions. 1) Did I lose all my data??!!? 2) How do I resolve this? Thanks
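     One more detail that might matter: here's what I'd check on the flash before touching anything. I'm not doing any of it until someone confirms it's safe, so the second command is just my guess at what a restore would look like:
         # List whatever superblock files are left on the flash after running initconfig.
         ls -l /boot/config/super*
         # If restoring the old one turns out to be the right move, I assume it would be
         # something like this -- commented out until someone confirms:
         # cp /boot/config/super.old /boot/config/super.dat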
  10. Gigabyte GA-MA785GM-US2H
      2GB DDR2 RAM (2x1 Corsair)
      1x Hitachi and 3x WD EARS (one not in array)
      Antec Basiq 550 Plus
      syslog1.txt
  11. This is my second problem with a similar result, but this time it happened during a simple Windows XP file copy. The result is basically the same as in my last post: http://lime-technology.com/forum/index.php?topic=14751.msg139521
      I was copying a directory of mostly photos, about 40GB. After a while I went to check and the copy had failed; the unraid server was then unresponsive but not totally crashed. I could attempt to connect to the web server but pages would never load, and I could telnet in only as far as the "connected... escape character.." prompts. With no other choice I forced it off with the power switch and started it up again. The problem this time is that I don't have any previous logs; the syslog file simply starts when I rebooted. I found nothing old in /var/log. Are previous logs stored elsewhere? (I've sketched at the end of this post how I plan to capture the log next time.)
      My first problem, the one in the previous post I linked above, was perhaps a somewhat uncommon case: FileZilla with 3 transfers writing to the array, worked around by dropping to 2 transfers. This time, however, it was a completely normal and probably very common case that really shouldn't crash the server. So the question is becoming not so much how to fix these, but whether I should give up on this. I basically thought that, with some exceptions, unraid was pretty widely compatible with hardware and a good semi-DIY setup. Am I wrong to expect reliability on my own hardware? Is unraid really not reliable unless you buy a pre-built box? Please note I'm not complaining or trying to insult anyone; I'd just like some honest opinions so I don't keep working on this box, thinking I can fix something when I can't, while even basic use causes crashes. I realize I could build a new system from the exact recommended hardware, but for me that would be going too far cost-wise. I built my box from mostly existing hardware and only bought the recommended case and cages to support a lot of drives, since I'd planned to run this server for some time, adding drives as needed.
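      Since the old log was gone after the forced power-off, my plan for next time is to keep snapshotting the syslog to the flash drive so there's something to post even after a hard reset. A rough sketch, assuming the flash is mounted at /boot as usual:
          # Copy the live syslog to the flash every 5 minutes; after a crash or forced
          # power-off, the most recent copy survives on /boot. Run in the background.
          while true; do
            cp /var/log/syslog /boot/syslog-latest.txt
            sleep 300
          done &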
  12. I ran the fsck checks on both md vols; no errors. So would the final word on this be that the server simply couldn't handle 3 simultaneous transfers? Would increasing the RAM make any difference? Actually, I don't think I can add RAM since I have a monster heatsink and fan over the first 2 slots, but it would still be good to know whether this was simply caused by insufficient RAM (next time I'll try watching memory during a transfer; rough sketch below). I'm still in my learning/evaluation stage on the free version, but I think I'll be buying Pro soon and adding more drives, at which time I'll probably be back here for help again. Thanks to all.
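      By "watching memory" I just mean leaving something like this running in a telnet session while the next big queue copies, so if the box really is running out of RAM it should show up before everything locks up:
          # Print a timestamp and free/used memory every 10 seconds during the transfer.
          while true; do date; free -m; sleep 10; done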
  13. I didn't run the fsck because I wasn't sure whether it was OK to unmount one of the md volumes; I didn't know if it would mess up the array or something. When the array is stopped they don't exist, so was I supposed to run fsck on the individual drive devices instead? Anyway, I disabled all the addons and finished that FTP queue with one transfer at a time. Then I ran another FTP queue of about 30GB using 2 transfers at a time, and that also completed successfully. I'll now re-enable the addons one at a time and see what happens. I don't know if I'll bother trying 3 transfers again; 2 is good enough as long as I can complete large queues without killing unraid.