jfeeser

Everything posted by jfeeser

  1. Looks like that got it sorted. The filesystem was repaired, the drive mounted properly, and aside from one 0KB file, lost+found was empty, so I don't think I even lost anything. Thank you so much! As an aside, any idea what could have caused this, or what I could do to prevent it in the future?
  2. Ah, I misunderstood the directions and used /dev/sda3. I'll re-run with /dev/md3 and report back!
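For anyone following the thread later, the fix above comes down to which device name the repair is pointed at. A minimal sketch, assuming unRAID's standard naming where array disk 3 is exposed as /dev/md3 (substitute your own disk number):

```shell
# The important difference (device names assumed from this thread):
xfs_repair /dev/sda3   # wrong on unRAID: writes bypass parity and desync the array
xfs_repair /dev/md3    # right: the md device keeps parity updated as it repairs
```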
  3. Apologies for the delay in replying - it was a 6TB drive and the XFS repair took a long time to run. Unfortunately, after all that it spit out "Sorry, could not find valid secondary superblock. Exiting now." Is it possible to set the drive up as "new" and rebuild it from parity? If so, what's the best way to go about that? Also, what is the best practice for preventing something like this from happening in the future?
  4. Yeah, I was aware of the unassigned bit - that was part of a failed attempt to use a single unRAID server to run my ESXi datastore and my fileserver "all under one roof". Even with the datastore share restricted to that drive I couldn't get the throughput I wanted, so I scrapped the idea. I'll run the filesystem check and report back. Thanks!
  5. Here's the file the diagnostics spat out: feezfileserv-diagnostics-20161123-0759.zip
  6. Hi all, having a bit of weirdness with my server. It's been humming along without incident for ages, but after I rebooted it for an unRAID OS update, one of my drives came up saying "unmountable - no file system". The strange thing is that the array doesn't seem to be in a "degraded" mode (I'm used to seeing a drive shown as "emulated" when it has gone bad). I ran SMART checks on the drive and they come back clean. I know from reading previous forum posts that formatting it is a bad idea, since parity would then be rewritten to record the drive as empty, so I haven't done that. What are the next steps?
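In case it helps anyone searching later, the usual first step for an "unmountable" XFS disk is a read-only check from the console before changing anything on the drive. A sketch, where the disk number (/dev/md3 here) is an assumption - substitute the affected disk:

```shell
# 1. Confirm the kernel's view of why the mount failed.
grep -i 'xfs\|mount' /var/log/syslog | tail -20

# 2. With the disk unmounted, do a read-only check first (-n reports, never writes).
#    On unRAID, point it at the parity-protected md device, not the raw sdX partition.
xfs_repair -n /dev/md3
```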
  7. All, I had a catastrophic failure on my array last night. I plugged a drive into a (at the time) unused backplane on my Norco 4224 and POP - the system shut down and all my drives were dead. This is the _second_ time this has happened - they sent me a whole new set of backplanes and I replaced the power supply - still dead. Complaints about Norco aside, it looks like it's just the drives' controller boards that are dead. I swapped the controller board from a "living" drive onto a dead one, and the drive spins up, but when I try to attach it to a Linux box it doesn't mount. Loading it up in GParted on an Ubuntu live CD produces a "Can't have a partition outside the disk!" error. If I click "Ignore", the drive doesn't mount, and when I click on it for details I get the screenshot here: http://imgur.com/BlL2Lmo I tried manually mounting it at the command prompt, but it reports that the drive has a "bad superblock". When I run xfs_check, it tries to look for a secondary superblock and can't find one. The question here is: am I hosed? Are there more steps I can take? It looks like the data is still on the drive; I just currently have no way to access it. Normally I wouldn't care, but there are some pictures of my son on there that I'd love to get back.
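For readers hitting the same wall: a hedged sketch of what is commonly tried next in this situation. All device names and paths below are hypothetical, and the board-swap capacity mismatch is only one possible cause of the "partition outside the disk" error:

```shell
# 1. Before experimenting any further, image the drive and work on the copy.
#    ddrescue keeps a map file so an interrupted pass can resume.
ddrescue /dev/sdb /mnt/scratch/dead_drive.img /mnt/scratch/dead_drive.map

# 2. "Can't have a partition outside the disk" can occur when a swapped
#    controller board reports a different capacity than the original, so the
#    old partition table no longer fits. testdisk can scan the image for the
#    real partition boundaries and offer to rewrite the table:
testdisk /mnt/scratch/dead_drive.img
```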
  8. I had the CrashPlan addon installed - I'm wondering if that was what was doing it. I'll try removing it before the next reboot and see if stability improves. In the meantime I'll keep an eye on the syslog until the rebuild is complete. Thanks!
  9. So it would seem. Since I can't reboot it to install 5.0.5 yet, any answer to my original question?
  10. All, I recently had a drive failure and needed to rebuild - but that's not really the issue here. I put in a new drive, started the rebuild, and all was well until the webGUI crashed. So now I seemingly don't have a way to monitor the progress of the rebuild. Judging by the drive activity on the front of the server (lots of blinky lights!) and the fact that I can still access the files on it, the server itself is still humming along just fine. Is there an easy way to restart the webGUI without restarting the server (since I'm going to go ahead and assume that restarting the server during a data rebuild is a bad idea)? I'm running version 5.0.4. As a side note - is there a compelling reason (other than having the latest and greatest) to upgrade my installation from 5.0.4 to 5.0.5?
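For the record: on unRAID v5 the web GUI is served by the emhttp process, so if only that process has died it can sometimes be relaunched from a telnet or console session without touching the running array. A sketch, where the binary path is an assumption based on a standard v5 install:

```shell
# Check whether the GUI process is still alive (the bracket excludes grep itself):
ps aux | grep '[e]mhttp'

# If it is gone, relaunch it in the background (path is an assumption):
/usr/local/sbin/emhttp &
```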
  11. I have more than one of those PCI cards, so maybe I'll try sticking a second one in and moving some of the drives to it. I'm pretty cognizant of the fact that eventually I'll be looking for a new motherboard; I'm just trying to exhaust all my options before I have to explain to my wife why I dropped a couple hundred on one.
  12. Hate to resurrect the thread so soon, but it looks like I spoke too soon. The drive detected and cleared properly (it's Disk 11 in that image). It goes through the whole clearing process, but when it's done, the drive is no longer properly recognized. When I click on the (sdk) in the dropdown, it churns for a second, then reloads the page with the drive unassigned. If I unplug the drive and hot-plug it again, it will recognize the drive, but then it wants to clear it again. If I click the "Log" button in the upper right, this is what it contains:
      /usr/bin/tail -f /var/log/syslog
      Aug 6 19:17:33 FeezFileServ kernel: sd 5:0:0:0: [sdk]
      Aug 6 19:17:33 FeezFileServ kernel: Result: hostbyte=0x04 driverbyte=0x00
      Aug 6 19:17:33 FeezFileServ kernel: sd 5:0:0:0: [sdk] CDB:
      Aug 6 19:17:33 FeezFileServ kernel: cdb[0]=0x8a: 8a 00 00 00 00 00 03 b8 e4 80 00 00 00 a8 00 00
      Aug 6 19:17:33 FeezFileServ kernel: sd 5:0:0:0: [sdk] Unhandled error code
      Aug 6 19:17:33 FeezFileServ kernel: sd 5:0:0:0: [sdk]
      Aug 6 19:17:33 FeezFileServ kernel: Result: hostbyte=0x04 driverbyte=0x00
      Aug 6 19:17:33 FeezFileServ kernel: sd 5:0:0:0: [sdk] CDB:
      Aug 6 19:17:33 FeezFileServ kernel: cdb[0]=0x8a: 8a 00 00 00 00 00 03 b8 e5 28 00 00 00 a8 00 00
      If I leave the drive unassigned, I can start the array with the remaining drives no problem. Any thoughts?
  13. Hey guys, just wanted to let you know that the hot-plug method worked like a charm. The drive is preclearing now, and it looks like everything should be golden. That should tide me over until I can afford a new motherboard. Thanks everyone for all of your help!
  14. Thanks for the replies, all. Thornwood, I'll try what you suggest - so I can just hot-plug the drive once unRAID is completely loaded, stop the array, and it will pick up the new drive? Any potential downsides to doing this? Also, what motherboard did you end up switching to? The build I'm currently running is an "on the cheap" build, and now I'm gearing up for a more robust one, so any advice is appreciated.
  15. Actually, sorry, I mistyped in the original post - it *is* a 6-port card, not a 4-port. I always forget the external ports are there. Garycase is correct: I already have another 3TB drive, and it's plugged into the controller card and recognized fine, so I'm not positive the controller card is the problem. As an experiment I moved the new 12th drive to the motherboard (and swapped one of the existing drives onto the controller card), and the same thing happens - with 11 drives, USB works; with 12 drives, the USB vanishes.
  16. Hi all, I've been using unRAID for a while now without a problem - every time I run out of space, I just add another drive to the array and we're back up and running. This time, however, I ran into an interesting issue. First off, the relevant hardware: Motherboard: Asrock 970DE3/U3S3 http://www.asrock.com/mb/AMD/970DE3U3S3/?cat=Specifications Attached to this is a cheap 4-port SATA expansion card: http://www.vantecusa.com/en/product/view_detail/391 I can post the specs of the individual drives as well as any other hardware, but those are the most relevant parts for this issue. Up until now I'd been adding drives without a problem, and had filled 11 of the 12 SATA ports with drives of various sizes (all 8 ports on the board full, and 3 of the 4 ports on the SATA expansion card filled). As usual, I ran out of space, so I wandered down to the local MicroCenter and picked up another WD Caviar Green 3TB. I unpacked it, installed it into the last open port of the SATA card, and fired the system back up. "Error: No Bootable Devices Found". Odd. I checked everything with the USB stick, and nothing had changed (I even went so far as to use that as an excuse to upgrade to the latest version of unRAID - still no dice). I decided to go back to the last configuration that worked, so I unplugged the new drive from the SATA port. The system started right up and booted into unRAID like nothing ever happened. Out of curiosity, I checked the BIOS - it sees the USB stick as a bootable device. Same if I hit F11 at boot for the one-time boot menu: the USB device is listed as an available option. If I plug the 12th drive back in, the BIOS and the one-time boot menu no longer see the USB device at all. Thinking the SATA port might be bad, I shuffled a couple of drives around so that the port in question was occupied. The system fired right up, no issues. I also plugged the 12th drive into another system, and it was recognized just fine. So I know all the hardware is functioning.
The question here is: what wall am I hitting? Is there such a thing as a maximum number of supported drives? Is there a BIOS switch I'm missing that suddenly disables USB booting when a 12th drive is plugged in? Any help you guys could give would be greatly appreciated - my almost-full server thanks you in advance!
  17. Happy to report that upgrading to 5.0B9 fixed the problem (or just rebooting did; either way, I'm back up and running!). Thanks a ton, guys!
  18. Apologies, I should've thought to include that to begin with. I'm currently running unRAID 5.0B7 (it happened to be the latest version available at time of purchase), and the syslog pastebin is available here: http://pastebin.com/XWtD9EyF The relevant log segment begins 7/18 at around 11:30, as that's when the preclear script finished and I added the drive to the array. /dev/sdf is the 1.5TB drive in question.
  19. I did some searching on the forum and wasn't able to come up with an answer on my own, so pardon the new post. Here's the situation: my old media server was running Windows 2003, with four 1.5TB drives in a software RAID-5. I recently purchased unRAID and built out a new server that is currently running five 2TB drives, 4 for data and 1 for parity. All was/is running perfectly. After quite a long time of transferring data, I emptied the old server's contents into the new one. Now, after breaking down the old RAID and adding one of the 1.5TB drives to the unRAID box, I can't seem to add it to the existing media server share. I precleared the drive using the script found on this forum, added it in the web GUI, and it was recognized as valid; I can access it via the "disk5" share. However, even though I have the share set to use all available drives, it doesn't add the disk5 space to the available space on the media server share. I also tried explicitly telling the share to use "disk1,disk2,disk3,disk4,disk5", but I still don't get the extra space from the 1.5TB drive. Is this normal? The only thing I can think of that would cause this behavior is that it's a smaller drive, but I wanted to make sure. Your thoughts, gentlemen? Thanks in advance! -J
  20. @lionelhutz - That's correct. The situation as it stands right now is this (the breakdown is just anime and TV at the moment, with a small number of movies - most of my TV and anime DVDs/BDs are already ripped, so there's not much more to add there; I'm still working on ripping the movies, so that's where most of the "new" content will come from):
      Disk 1 (2TB): TV - 100GB, Anime - 1.25TB, Movies - 161GB, Free - 98.27GB
      Disk 2 (2TB): TV - 164GB, Anime - 1.12TB, Movies - 10.5GB, Free - 90.64GB
      Disk 3 (2TB): TV - 1.32TB, Anime - 410GB, Movies - 0GB, Free - 83.99GB
      They were written using the incorrect split level, with the "most free" allocation method. I've also got a 4th 2TB drive sitting in the box that I've yet to add to the array. Seeing this listed out, it's amazing how quickly DVDs and BDs add up when they're ripped.
  21. Thanks for the advice, guys. I didn't even think about moving it to the cache drive and back. I've got a shiny new 2TB drive coming in today, so temp space shouldn't be an issue. Basically, just to clarify (although I don't think it'll change the answer), I have:
      Television (share)
        \Complete active shows
          \30 rock
            \Season 1 - episode, episode, episode
            \Season 2 - episode, episode, episode
      Ideally, I'd like to have all of 30 Rock together. Instead (for example), it sorted it like this:
      Disk 1: \Complete active shows\30 rock (empty)
      Disk 2: \Complete active shows\30 rock (empty)
      Disk 3: \Complete active shows\30 rock\Season 1 and \Season 2, with all the episodes
      What would the proper split level be to make sure that everything at \30 rock and below ends up on the same drive? Sorry for the newb question, but in the multiple FAQs and searches I've done I've seen conflicting answers. Thanks again!
  22. All, after a mass copy from my old fileserver to my new one, I realized I set the split level for my TV share incorrectly, and now the folders are split incorrectly across drives. After I set the proper split level, is there a way, short of manually copying folders from disk to disk, to quickly "reorganize" the folders into the correct structure? Thanks in advance for any help you guys can provide!
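For anyone landing here later: after correcting the split level, existing folders still have to be consolidated by hand (the mover only handles cache-to-array). A minimal sketch of the copy-verify-delete pattern, with hypothetical paths - copy between the /mnt/diskN mounts directly, and never copy from a disk mount into the user share view of the same files:

```shell
# Hypothetical example: consolidate "30 rock" from disk1 onto disk3.
SRC="/mnt/disk1/Television/Complete active shows/30 rock"
DST="/mnt/disk3/Television/Complete active shows"
mkdir -p "$DST"
cp -a "$SRC" "$DST"/           # copy, preserving timestamps and permissions
diff -r "$SRC" "$DST/30 rock"  # verify the copy before deleting anything
rm -rf "$SRC"                  # only after the diff comes back clean
```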