javier911

Everything posted by javier911

  1. Thanks for the reply. I was getting some help from people in a chat and I've gotten most of the data off now. Part of that was done the same way you suggested: I set up another PC with an Ubuntu live USB, pulled the unraid disk, and read/copied it there. There's still that other disk that suddenly went "unformatted". Attempts to run reiserfsck --check on it do not work, neither when it was in the array using the md device nor on Ubuntu; both exit with an error something like "bread: cannot read from block". Any ideas how that could be recovered? I guess at this point, with little to lose, I could set up the unraid array again, force it to re-accept that first failed disk that was always actually OK, and then let it simulate the "unformatted" disk's contents? I'd like to have unraid going again, but purchasing a bunch of new large disks just isn't possible for me at this time. I'll probably use some extra parts I have to build a new unraid box but re-use most of the same disks; I'll be sure to pre-clear and check them extensively though. To speed that up I'll look into running those pre-clears and checks on another fast machine with a temporary unraid install or some kind of live Linux. I understand why you suggested ditching the drives though. If I make a "new" array I won't think of it as a backup. I know that's a bad idea, and admittedly I've been guilty of that in the past, which is why I always had to move so slowly and carefully when I started having problems, which led to this being drawn out over years :).
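     For that "unformatted" disk, the approach I'm considering (based on general Linux recovery advice rather than anything unraid-specific, and with device names and paths that are only placeholders for my setup) is to image it with GNU ddrescue first, so the filesystem check never has to fight the failing hardware, and then run reiserfsck against the image through a loop device:

       # copy whatever is readable, keeping a map of the bad areas (placeholder paths)
       ddrescue -n /dev/sdX1 /mnt/bigdisk/disk.img /mnt/bigdisk/disk.map
       # go back and retry the bad areas a few times using the same map file
       ddrescue -r3 /dev/sdX1 /mnt/bigdisk/disk.img /mnt/bigdisk/disk.map
       # attach the image to a loop device and check it there instead of on the dying disk
       losetup /dev/loop0 /mnt/bigdisk/disk.img
       reiserfsck --check /dev/loop0

     If anyone sees a problem with that order of operations, please say so before I try it.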
  2. Yes, the array automatically started when it rebooted after I connected more disks to the external PSU. I can see and copy files from it, and if I go to the direct share for the disk that's red-balled, \\unraid\disk2, I can see and open the files in there. The size of some directories, that is, the total of all the file sizes in them, seems smaller than I remember, but that could just be my memory. I'm using a util to copy some files now and I'm looking at the list of failed copies; there's a bunch, but they're small files in very long paths, so maybe it's just some quirk Windows cannot handle.
     UPDATE: There was some kind of error several times in the logs while copying those files, text file attached. unraid_errors_during_copy1.txt
     UPDATE 2: I should not have powered it down yesterday, but with this external PSU setup and such I felt it wasn't a good idea to leave it running overnight. Unfortunately now there are more problems: there are Buffer I/O errors when it's starting up, and I don't recall seeing them, or at least not as many, before. Another disk now says "unformatted", so there goes another 2TB of stuff. It's just getting worse and worse.
  3. Sorry, it's 5.0.5. But I guess I don't need to try moving it to the PCI-E card now since the missing disk came back. Any other advice you have on how to proceed would be greatly appreciated though. I looked up old posts from when I first started having the original failed-disk issue. Initially I did the "Trust My Array" procedure and I screwed it up a bit, but then someone told me to do this and it worked: Should I do that again now that I'm testing with an external PSU, and see if the disk red-balls again? Another suggestion after I solved the trust mess-up was this: I don't know if that has any relevance now. Thanks for all the help.
  4. UPDATE: The "missing" disk came back when I added more disks to the external PSU. So it's probably a power issue, correct? So now I still have the one failed disk, and I could still use some advice on how to proceed. Thanks.
     Unraid 5.0.5 on an old DDR2 AMD system; I forget exactly which CPU it is. 10 disks total, 9x 2TB data and 1x 4TB parity. I cannot recall the PSU specs and I can't see the label on it now, but it's an Antec.
     A long time ago, one of my disks "red-balled". I cannot remember everything I did now, but the disk always seemed to be OK after tests and whatever steps I took, yet it kept going to red ball, so I'd shut off the box and months would pass before I'd try again. In the end I decided it must be either a cable issue or an insufficient PSU.
     Now I'm trying again. I had that one failed (red-balled) disk, same as in the past. I disconnected all the SATA cables because it was a mess in there; this allowed me to check the power connectors and all the SATA cables carefully. I put them all back and now another disk is reporting as "missing", as I talked about in the other thread. I've changed the missing disk's SATA cable, still missing; I've switched a few disks to an external PSU to see if it's due to insufficient power, still missing. I thought about the SATA port on the motherboard as the possible culprit, but it shows up in POST. The "missing" disk has returned.
     On that note, does unraid care what controller or port a disk has been connected to? I could move the missing disk onto my PCI-E card, which still has 3 ports free. I could also try swapping out the PSU with the most powerful PSU I have on hand, which is an approximately 1-2 year old 650W. Another option is to just abandon this box entirely: with the external PSU connected for a few days, get whatever I can off the array.
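     I can't remember exactly which tests I ran on that disk back when it first red-balled, but before I trust it again I'll probably check it from the console with smartctl, something along these lines (the device name is just a placeholder):

       # dump the SMART attributes; reallocated and pending sector counts are what I'd watch
       smartctl -a /dev/sdX
       # kick off an extended self-test and check the result later with -a
       smartctl -t long /dev/sdX

     If the attributes stay clean, that would at least support the cable/power theory over a genuinely bad disk.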
  5. I'm going to set up a Linux PC so I can pull my unraid disks and try to recover whatever I can. Any recommendations for a specific Linux distro that might be best for this purpose? I was just going to go with Ubuntu, but if anyone has a better suggestion I'd appreciate it.
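     In case it matters for the distro choice, what I'm picturing (and please correct me if my understanding is off) is just attaching each data disk, mounting its first partition read-only as ReiserFS, and copying the files off, roughly:

       mkdir -p /mnt/recover
       mount -t reiserfs -o ro /dev/sdX1 /mnt/recover    # device name is a placeholder

     So mainly I need a distro whose kernel has solid ReiserFS support.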
  6. I have a disk that shows up in POST, but unraid says it's "missing". What does that mean exactly?
  7. Any problems with running an additional external power supply for some hard drives? I was thinking of putting some hdds on a separate external power supply temporarily to determine if a problem I'm having is due to insufficient power.
  8. That worked, thank you very much. I was pretty sure the data would be intact, but not totally sure. Now I can sleep.
  9. I decided to use that procedure on my 5.0.5 unraid box. Unfortunately I misunderstood some of it; for example, I didn't realize that the "recent 5.x" warning applied to 5.0.5, and I screwed up my array. Probably by refreshing the webpage like it said not to do, but again, I just didn't understand; I thought that only applied to 4.7. I know I should have just let it rebuild the disk, but the failed disk passed tests and I didn't want my array to be down or super slow for days.
     I later found this thread: http://lime-technology.com/forum/index.php?topic=19385 ...and I'm in the same situation as the one poster there: my drives all became unassigned with "blue" status. So I manually assigned them all again, making sure my parity drive is the same, which is easy because it's the largest drive. Then I tried checking "Parity is already valid" and starting the array, but it never starts. In the log it seems to be doing the same stuff over and over to attempt to start the array.

       Oct 14 04:32:43 unraid1 kernel: mdcmd (16): import 15 0,0
       Oct 14 04:32:43 unraid1 kernel: mdcmd (17): import 16 0,0
       Oct 14 04:32:43 unraid1 kernel: mdcmd (18): import 17 0,0
       Oct 14 04:32:43 unraid1 kernel: mdcmd (19): import 18 0,0
       Oct 14 04:32:43 unraid1 kernel: mdcmd (20): import 19 0,0
       Oct 14 04:32:43 unraid1 kernel: mdcmd (21): import 20 0,0
       Oct 14 04:32:43 unraid1 kernel: mdcmd (22): import 21 0,0
       Oct 14 04:32:43 unraid1 kernel: mdcmd (23): import 22 0,0
       Oct 14 04:32:43 unraid1 kernel: mdcmd (24): import 23 0,0
       Oct 14 04:32:43 unraid1 emhttp_event: driver_loaded
       Oct 14 04:33:12 unraid1 emhttp: shcmd (4650): rmmod md-mod |& logger
       Oct 14 04:33:12 unraid1 emhttp: shcmd (4651): modprobe md-mod super=/boot/config/super.dat slots=24 |& logger
       Oct 14 04:33:12 unraid1 kernel: md: unRAID driver removed
       Oct 14 04:33:12 unraid1 emhttp: shcmd (4652): udevadm settle
       Oct 14 04:33:12 unraid1 kernel: md: unRAID driver 2.2.0 installed
       Oct 14 04:33:12 unraid1 kernel: read_file: error 2 opening /boot/config/super.dat
       Oct 14 04:33:12 unraid1 kernel: md: could not read superblock from /boot/config/super.dat
       Oct 14 04:33:12 unraid1 kernel: md: initializing superblock
       Oct 14 04:33:12 unraid1 emhttp: Device inventory:
       Oct 14 04:33:12 unraid1 emhttp: Hitachi_HDS5C3020ALA632_ML0220F311UPBD (sdb) 1953514584
       Oct 14 04:33:12 unraid1 emhttp: WDC_WD20EARS-00MVWB0_WD-WCAZA6010408 (sdc) 1953514584
       Oct 14 04:33:12 unraid1 emhttp: WDC_WD20EARS-00J99B0_WD-WCAWZ0950511 (sdd) 1953514584
       Oct 14 04:33:12 unraid1 emhttp: WDC_WD20EARS-00J99B0_WD-WCAWZ0929154 (sde) 1953514584
       Oct 14 04:33:12 unraid1 emhttp: WDC_WD20EARS-00J2GB0_WD-WCAYY0152863 (sdf) 1953514584
       Oct 14 04:33:12 unraid1 emhttp: ST2000DM001-1CH164_Z1E24FA6 (sdg) 1953514584
       Oct 14 04:33:12 unraid1 emhttp: ST2000DM001-1CH164_Z1E24FLV (sdh) 1953514584
       Oct 14 04:33:12 unraid1 emhttp: WDC_WD20EARS-00MVWB0_WD-WCAZA1284898 (sdi) 1953514584
       Oct 14 04:33:12 unraid1 emhttp: WDC_WD20EARX-00PASB0_WD-WMAZA9865830 (sdj) 1953514584
       Oct 14 04:33:12 unraid1 emhttp: HGST_HDS5C4040ALE630_PL1331LAGHYSWH (sdk) 3907018584
       Oct 14 04:33:12 unraid1 kernel: mdcmd (1): import 0 8,160 3907018532 HGST_HDS5C4040ALE630_PL1331LAGHYSWH
       Oct 14 04:33:12 unraid1 kernel: md: import disk0: [8,160] (sdk) HGST_HDS5C4040ALE630_PL1331LAGHYSWH size: 3907018532
       Oct 14 04:33:12 unraid1 kernel: md: disk0 new disk
       Oct 14 04:33:12 unraid1 kernel: mdcmd (2): import 1 8,32 1953514552 WDC_WD20EARS-00MVWB0_WD-WCAZA6010408
       Oct 14 04:33:12 unraid1 kernel: md: import disk1: [8,32] (sdc) WDC_WD20EARS-00MVWB0_WD-WCAZA6010408 size: 1953514552
       Oct 14 04:33:12 unraid1 kernel: md: disk1 new disk
       Oct 14 04:33:12 unraid1 kernel: mdcmd (3): import 2 8,48 1953514552 WDC_WD20EARS-00J99B0_WD-WCAWZ0950511
       Oct 14 04:33:12 unraid1 kernel: md: import disk2: [8,48] (sdd) WDC_WD20EARS-00J99B0_WD-WCAWZ0950511 size: 1953514552
       Oct 14 04:33:12 unraid1 kernel: md: disk2 new disk
       Oct 14 04:33:12 unraid1 kernel: mdcmd (4): import 3 8,64 1953514552 WDC_WD20EARS-00J99B0_WD-WCAWZ0929154
       Oct 14 04:33:12 unraid1 kernel: md: import disk3: [8,64] (sde) WDC_WD20EARS-00J99B0_WD-WCAWZ0929154 size: 1953514552
       Oct 14 04:33:12 unraid1 kernel: md: disk3 new disk
       Oct 14 04:33:12 unraid1 kernel: mdcmd (5): import 4 8,80 1953514552 WDC_WD20EARS-00J2GB0_WD-WCAYY0152863
       Oct 14 04:33:12 unraid1 kernel: md: import disk4: [8,80] (sdf) WDC_WD20EARS-00J2GB0_WD-WCAYY0152863 size: 1953514552
       Oct 14 04:33:12 unraid1 kernel: md: disk4 new disk
       Oct 14 04:33:12 unraid1 kernel: mdcmd (6): import 5 8,128 1953514552 WDC_WD20EARS-00MVWB0_WD-WCAZA1284898
       Oct 14 04:33:12 unraid1 kernel: md: import disk5: [8,128] (sdi) WDC_WD20EARS-00MVWB0_WD-WCAZA1284898 size: 1953514552
       Oct 14 04:33:12 unraid1 kernel: md: disk5 new disk
       Oct 14 04:33:12 unraid1 kernel: mdcmd (7): import 6 8,144 1953514552 WDC_WD20EARX-00PASB0_WD-WMAZA9865830
       Oct 14 04:33:12 unraid1 kernel: md: import disk6: [8,144] (sdj) WDC_WD20EARX-00PASB0_WD-WMAZA9865830 size: 1953514552
       Oct 14 04:33:12 unraid1 kernel: md: disk6 new disk
       Oct 14 04:33:12 unraid1 kernel: mdcmd (8): import 7 8,96 1953514552 ST2000DM001-1CH164_Z1E24FA6
       Oct 14 04:33:12 unraid1 kernel: md: import disk7: [8,96] (sdg) ST2000DM001-1CH164_Z1E24FA6 size: 1953514552
       Oct 14 04:33:12 unraid1 kernel: md: disk7 new disk
       Oct 14 04:33:12 unraid1 kernel: mdcmd (9): import 8 8,112 1953514552 ST2000DM001-1CH164_Z1E24FLV
       Oct 14 04:33:12 unraid1 kernel: md: import disk8: [8,112] (sdh) ST2000DM001-1CH164_Z1E24FLV size: 1953514552
       Oct 14 04:33:12 unraid1 kernel: md: disk8 new disk
       Oct 14 04:33:12 unraid1 kernel: mdcmd (10): import 9 8,16 1953514552 Hitachi_HDS5C3020ALA632_ML0220F311UPBD
       Oct 14 04:33:12 unraid1 kernel: md: import disk9: [8,16] (sdb) Hitachi_HDS5C3020ALA632_ML0220F311UPBD size: 1953514552
       Oct 14 04:33:12 unraid1 kernel: md: disk9 new disk
       Oct 14 04:33:12 unraid1 kernel: mdcmd (11): import 10 0,0
       Oct 14 04:33:12 unraid1 kernel: mdcmd (12): import 11 0,0
       Oct 14 04:33:12 unraid1 kernel: mdcmd (13): import 12 0,0
       Oct 14 04:33:12 unraid1 kernel: mdcmd (14): import 13 0,0
       Oct 14 04:33:12 unraid1 kernel: mdcmd (15): import 14 0,0
       Oct 14 04:33:12 unraid1 kernel: mdcmd (16): import 15 0,0
       Oct 14 04:33:12 unraid1 kernel: mdcmd (17): import 16 0,0
       Oct 14 04:33:12 unraid1 kernel: mdcmd (18): import 17 0,0
       Oct 14 04:33:12 unraid1 kernel: mdcmd (19): import 18 0,0
       Oct 14 04:33:12 unraid1 kernel: mdcmd (20): import 19 0,0
       Oct 14 04:33:12 unraid1 kernel: mdcmd (21): import 20 0,0
       Oct 14 04:33:12 unraid1 kernel: mdcmd (22): import 21 0,0
       Oct 14 04:33:12 unraid1 kernel: mdcmd (23): import 22 0,0
       Oct 14 04:33:12 unraid1 kernel: mdcmd (24): import 23 0,0
       Oct 14 04:33:12 unraid1 emhttp: shcmd (4653): /usr/local/sbin/emhttp_event driver_loaded
       Oct 14 04:33:12 unraid1 emhttp_event: driver_loaded

     I see that section repeated over and over, every minute or so. When I ran initconfig it gave that message about renaming super.dat to super.bak. I cannot find any file named super.bak, but I did find a file named super.old. Do I have to restore that or something? Anyway, yes, I'm lost now, so basically two questions. 1) Did I lose all my data??!!? 2) How do I resolve this? Thanks
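     Before I try anything else with initconfig or super.dat, I'm going to copy the current config directory off the flash from the console, just so I can get back to exactly this state if I make things worse. Nothing fancy, roughly this (the backup name is just whatever I pick):

       cp -r /boot/config /boot/config-backup

     That should grab super.dat, super.old and the .cfg files in one go, and then I can experiment with the array assignments a bit less nervously.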
  10. Gigabyte GA-MA785GM-US2H
      2GB DDR2 RAM (2x1 Corsair)
      1x Hitachi and 3x WD EARS (one not in array)
      Antec Basiq 550 Plus
      syslog1.txt
  11. This is my second problem with a similar result, but this time it happened during a simple Windows XP file copy. The result is basically the same as in my last post: http://lime-technology.com/forum/index.php?topic=14751.msg139521
     I was copying a directory of mostly photos, about 40GB. After a while I went to check and the copy had failed; the unraid server was then unresponsive but not totally crashed. I could attempt to connect to the webserver but pages would never load, and I could also attempt to telnet but I'd only get to the "connected... escape character.." prompts. With no other choice I forced it to power off with the power switch and started it up again. The problem this time is I don't have any previous logs; the syslog file simply starts when I rebooted. I found nothing old in /var/log, so are there previous logs stored elsewhere?
     My first problem, the one in my previous post noted above, was perhaps somewhat of an uncommon case with FileZilla and 3 transfers writing to the array, which was worked around by dropping to 2 transfers. This time however it's a completely normal and possibly very common case that really shouldn't crash the server. So the question is becoming not necessarily how to fix these, but more: should I give up on this? I basically thought that, given some exceptions, unraid was pretty widely compatible with hardware and a good semi-DIY setup. Am I wrong to expect reliability on my own hardware? Is unraid really not so reliable unless you buy a pre-built box? Please note I am not complaining or trying to insult, I merely would like some honest opinions so I don't keep working on this box, having even basic use cause crashes, thinking I can maybe do something about it when I cannot. I realize I could build a new system based on the exact recommended hardware, but for me that would be going too far cost-wise. I built my box with mostly existing hardware; I only bought the recommended case and cages to support a lot of drives, as I had planned to run this server for some time, adding drives as needed.
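     If it turns out the logs really do only live in RAM (which would explain why /var/log starts fresh after a hard power-off), then the next time I try to reproduce this I'll keep copying the syslog to the flash drive while the copy runs, so I still have something after a forced power-off. Something crude like this from a telnet session (the path and interval are just my choices):

       mkdir -p /boot/logs
       while true; do cp /var/log/syslog /boot/logs/syslog-latest.txt; sleep 300; done &

     That way, even after the hang I'd have a log from within about five minutes of it.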
  12. I ran the fsck checks on both md vols, no errors. So would the final word on this be that the server simply couldn't handle 3 simultaneous transfers? Would increasing the RAM make any difference? Actually, I don't think I can add RAM since I have a monster HS+fan over the first 2 slots, but anyway it would be good to know whether this was simply caused by insufficient RAM. I'm still in my learning/evaluation stage on the free version, but I think I'll be buying Pro soon and adding more drives, at which time I will probably be back here for help again. Thanks to all.
  13. I didn't run the fsck because I wasn't sure if it was OK to unmount one of the md volumes; I wasn't sure if it would mess up the array or something. When the array is stopped they don't exist, so was I supposed to run fsck on the individual drive devices? Anyway, I disabled all addons and finished that FTP queue with one transfer at a time. Then I ran another FTP queue of about 30GB using 2 transfers at a time and that also completed successfully. I will now re-enable some addons one at a time and see what happens. I don't know if I will bother to try 3 transfers again; 2 is good enough as long as I can successfully complete large queues without killing unraid.
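     For my own notes, the way I understand the filesystem-check advice (and please correct me if this is wrong): with the array started, unmount just the one disk and run the read-only check against the md device, so anything that later needs repairing goes through the unRAID driver and parity stays in sync. Roughly:

       umount /mnt/disk1
       reiserfsck --check /dev/md1

     rather than pointing reiserfsck at /dev/sdc1 directly, which as I understand it would bypass the md driver, so parity wouldn't reflect any repairs it made.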
  14. Do I check sdc's filesystem using the standard Linux tools from the terminal? And what do you mean by "see if the problem waits until 4 transfers"? Thanks.
  15. I will find one and try. There have to be other users out there using FileZilla and doing probably the exact same thing; maybe I will ask for experiences in the forums. Do the LimeTech people look at these forums? I think the fact that this messes up the server in such a way that you cannot perform a clean shutdown, and/or crashes it, is pretty serious.
  16. I set it to 60GB, so even with FileZilla writing 3 files at a time I doubt that would have exceeded 60GB; most movies don't even go above 8GB. This did make me think, though: maybe writing 3 files at a time is too much for unraid? The speed is max 16Mbit/s, so that's very slow, but maybe it's something to do with slow, sustained writing of multiple files; just throwing out some guesses. When using Windows Explorer or Mac Finder, I have previously copied directories of hundreds of GBs to the array, so in that way it does seem to work OK.
  17. Well, it looks like I am able to reproduce this. After doing the memtest, rebooting, etc. I decided to try the same FTP transfer again, which had 3 files remaining to be resumed. This time, 1 of those 3 completed, but again it eventually failed. Same as before: the array is readable and I am even able to write test files via Windows Explorer, but FileZilla cannot resume the 2 remaining files. I didn't mention this in the first post, but after this happens I try to stop the array from the web admin and the drives' status changes to "unmounting", but the array never stops. Shortly after, the unraid server becomes unresponsive and I have to force it to power off.
     To recap, I have the unraid share as a mapped drive on Windows 7 64-bit. I am downloading from an FTP server using FileZilla, and FileZilla is downloading the files directly to the unraid box via the mapped drive. Just to be clear, this has nothing to do with FTP actually running on the unraid box. The files were movies, about 4GB-15GB. I just thought about the fact that resuming a file might cause problems with unraid's allocation needs, but I have 1.4TB free space on the array, and also I wouldn't expect such a failed write to cause other problems like not being able to stop the array and then crashing the server.
     So, am I expecting too much to be able to FTP directly to an unraid box, treating it the same as any drive? I do this all the time with my small 2-bay NAS and it works fine, but I do realize there's a big difference between those hardware-based dedicated appliances and unraid.
  18. I added plain text. Not sure if you read my post, but the log was purposely uploaded as HTML with the highlighting for quick scanning. Sorry if that's not appropriate.
  19. I was writing a long, detailed message when a storm rumbled in and the power went out, so this is actually the 'short' version. This could be a Samba bug, but the similar one people reported was supposedly fixed in early 2010. I see many "not tainted" error posts in the forum but none specifically for smbd.
     I was FTP'ing directly to the array: FileZilla, Windows 7, mapped drive. It went OK for most of it, then I came back to find it stopped and could not resume the file transfers, though resume had worked earlier. I checked the array: I could read it, and I tested writing a small file to it, but I cannot resume writing those last few files. Did a memtest just to be sure; no errors over almost 2 complete passes. Below is the error from the time I had the problem; attached is the full log in HTML with highlighting etc. Been running this 4.7 free version for only about a week and have written about 2TB to it. Hardware:
     Gigabyte GA-MA785GM-US2H
     2GB DDR2 RAM (2x1 Corsair)
     1x Hitachi and 3x WD EARS (1 not in array until I get Pro license)
     Antec Basiq 550 Plus

       Aug 18 06:14:02 unraid1 kernel: Pid: 4679, comm: smbd Not tainted (2.6.32.9-unRAID # GA-MA785GM-US2H
       Aug 18 06:14:02 unraid1 kernel: EIP: 0060:[<c1133477>] EFLAGS: 00210246 CPU: 0
       Aug 18 06:14:02 unraid1 kernel: EIP is at radix_tree_lookup_element+0x47/0x68
       Aug 18 06:14:02 unraid1 kernel: EAX: 00000000 EBX: 00000000 ECX: 00000000 EDX: 01000010
       Aug 18 06:14:02 unraid1 kernel: ESI: 00133800 EDI: 00000001 EBP: c20cbdd0 ESP: c20cbdc0
       Aug 18 06:14:02 unraid1 kernel: DS: 007b ES: 007b FS: 00d8 GS: 0033 SS: 0068
       Aug 18 06:14:02 unraid1 kernel: Process smbd (pid: 4679, ti=c20ca000 task=f77006e0 task.ti=c20ca000)
       Aug 18 06:14:02 unraid1 kernel: Stack:
       Aug 18 06:14:02 unraid1 kernel: 00000001 c509a220 c3703770 00133800 c20cbdd8 c11334a5 c20cbdf8 c1048a7f
       Aug 18 06:14:02 unraid1 kernel: <0> c3703774 001337ff 00133800 c509a220 c3703770 00133800 c20cbe0c c1048c59
       Aug 18 06:14:02 unraid1 kernel: <0> c509a220 00001000 ffffffff c20cbe30 c1048eb0 000000d0 00133800 c3703770
       Aug 18 06:14:02 unraid1 kernel: Call Trace:
       Aug 18 06:14:02 unraid1 kernel: [<c11334a5>] ? radix_tree_lookup_slot+0xd/0xf
       Aug 18 06:14:02 unraid1 kernel: [<c1048a7f>] ? find_get_page+0x1d/0x79
       Aug 18 06:14:02 unraid1 kernel: [<c1048c59>] ? find_lock_page+0x13/0x4c
       Aug 18 06:14:02 unraid1 kernel: [<c1048eb0>] ? grab_cache_page_write_begin+0x32/0x8e
       Aug 18 06:14:02 unraid1 kernel: [<c111cb81>] ? fuse_file_aio_write+0x286/0x4fa
       Aug 18 06:14:02 unraid1 kernel: [<c1225643>] ? sock_common_recvmsg+0x31/0x4a
       Aug 18 06:14:02 unraid1 kernel: [<c106c46d>] ? do_sync_write+0xbb/0xf9
       Aug 18 06:14:02 unraid1 kernel: [<c103391d>] ? autoremove_wake_function+0x0/0x30
       Aug 18 06:14:02 unraid1 kernel: [<c103391d>] ? autoremove_wake_function+0x0/0x30
       Aug 18 06:14:02 unraid1 kernel: [<c106cc4b>] ? vfs_read+0xfd/0x114
       Aug 18 06:14:02 unraid1 kernel: [<c106c3b2>] ? do_sync_write+0x0/0xf9
       Aug 18 06:14:02 unraid1 kernel: [<c106cac4>] ? vfs_write+0x8c/0x116
       Aug 18 06:14:02 unraid1 kernel: [<c106d095>] ? sys_pwrite64+0x44/0x5d
       Aug 18 06:14:02 unraid1 kernel: [<c1002935>] ? syscall_call+0x7/0xb
       Aug 18 06:14:02 unraid1 kernel: Code: f6 75 41 eb 38 89 c2 83 e2 fe 8b 02 3b 34 85 10 46 3f c1 89 45 f0 77 2c 6b c0 06 8d 58 fa 89 f0 88 d9 d3 e8 83 e0 3f 8d 54 82 10 <8b> 02 85 c0 74 13 ff 4d f0 74 07 83 eb 06 89 c2 eb e1 85 ff 0f
       Aug 18 06:14:02 unraid1 kernel: EIP: [<c1133477>] radix_tree_lookup_element+0x47/0x68 SS:ESP 0068:c20cbdc0
       Aug 18 06:14:02 unraid1 kernel: CR2: 0000000001000010
       Aug 18 06:14:02 unraid1 kernel: ---[ end trace 2a8e219fb36cf29d ]---

     unraid_log_snip.html.txt
     unraid_log_snip.txt
  20. Just to let everyone know the final result: the RMA date was July 20th, and I received the drives on August 3rd. August 1st was a Canadian holiday, but that's still 9 business days, and 14 calendar days overall. Definitely not what I expected for an advance RMA.
  21. Well, I was just looking for people's experiences, and the next step was to call, which I did. UPS says they don't have any indication the package is en route to me; WD claims it definitely is on the way and cited the answer above. So the UPS tracking numbers are actually for the delivery between some intermediary in Ontario and me. That makes sense then, thanks for pointing that out. I should mention, the WD person on the phone completely failed at making that clear; he simply indicated they come from California, over and over. I was hoping the advance RMA would be much faster, and I expected the replacements to come from Ontario, which means shipping time would be a maximum of 2 days; all together I expected maybe 6 business days absolute max. Oh well, guess I won't bother with it next time.
  22. Normally I don't opt for the advance RMA, which for those who may not know means you give them your credit card info and they ship you the replacement drives before they get your old ones back. This time, because it was holding up my unRAID build and everything else planned for it, I decided to do it for two 2TB Greens. I'm in Canada, by the way. I quickly received the shipping notice with UPS tracking numbers; oh nice, this seems to be going well. Wrong: it's been a full week now, so 5 business days, and the drives still haven't shipped. The tracking still shows that the shipment/label was merely created and no packages have been received by UPS. I figure there have to be a lot of people here that have done RMAs with WD, and particularly advance RMAs. Anyone have the same experience? I think I've only ever RMA'd one drive with WD a long time ago; I rarely bought WD until only the last few years, but in any case I've never done an advance RMA for any drives. Really disappointed, and wondering if this is typical.