Everything posted by jfeeser

  1. Thought you'd like to know I just pulled the trigger on this over the weekend. Believe it or not, the PassMark score of the CPU in the eBay server is _more than double_ that of the one in my file server now (a Sempron 145!). I guess "this thing is a file server and nothing else" eases a lot of my CPU needs.
  2. Oh, sound is definitely an issue. I've seen a bunch of posts about modding that case to remove the housing for the server-grade PSUs and put in a desktop one (which is actually what I'm doing with my current one as well). I'm going for as close to silent as I can get, considering my home office and my rack share a room.
  3. Yep - I was actually researching that after your last reply. Apparently all of the Supermicro backplanes that end in "TQ" are straight passthroughs, which would explain why the back of it has no SAS or breakout connectors - just 24 individual SATA ports. That's fine; I've already got plenty of reverse breakout cables lying around.
  4. It's funny, I didn't even think to ask, but that's a great idea. I figure I can transplant my existing hardware for now (which works fine but was built on a cheap single-core, single-thread CPU I had lying around - parity checks take literally a day and a half!).
  5. Thanks for the tip! I'll head over there. I figure even if just the backplane is good and I rip out the rest of the internals, I'm still coming out ahead. Thanks again!
  6. Looks like I can get three LSI SAS9201-8i cards for like $80 on eBay, so that at least solves the _controller_ problem - at eight drives per card, three of them cover all 24 bays. One step in the right direction!
  7. Sound advice - figured the price was too good to be true. I'm mostly concerned about the controllers and the chassis; everything else I was probably going to rip out and replace anyway. Any recommendations for something that would accommodate 24 drives around that price point? (Trying to spend $500 or less, and to avoid Norco if I can - I had a Norco case wreck 13 drives simultaneously.)
  8. Hi all, looking to upgrade my UnRAID rig, as I'm physically out of places to put drives in it. I'm looking to go from a 2U, loud-as-hell, 12-bay server with drives ranging from 3TB Reds to 8TB Reds/whites, to a 24-bay box. I'd transplant the 12 drives from the existing box and scale up from there (probably keeping the 12-bay as a backup box). I'm looking at this one I just dug up on eBay: https://www.ebay.com/itm/Supermicro-24-Bay-Chassis-SAS846TQ-Server-AMD-QC-2-1GHz-2372HE-16GB-H8DME-2/202174284803?epid=1403640796&hash=item2f1286c403:g:2ggAAOSwkvFaTs00 Can anyone take a look and see if there are any potential issues with this box? I'm looking to run vanilla UnRAID - no Docker or VMs outside of a couple of very low-footprint apps - and it will be serving content to a Plex server with about six users, running on a separate box.
  9. (Same post as above, originally placed in the wrong board.) **EDIT: Just realized this board is for *finished* builds and that I posted in the wrong place. Mods, feel free to delete this one.**
  10. Hi all, I'm rebuilding a drive from parity after some issues that a few of you helped me with the other day. Being the foolish person that I am, I decided to tinker with the network settings while the rebuild was going on and change the IP of the box, since I had previously switched it to DHCP to fix some network issues. When I set the IP back to static, all of a sudden I can't ping anything from the UnRAID box anymore, not even the loopback address. I tried a "/etc/rc.d/rc.inet1 restart", but it didn't seem to help. That's fine - a reboot will probably clear it up - but I can't reboot until the parity rebuild is done. So, other than watching the drive lights and waiting for them to stop blinking, is there a way from the console to watch the progress of the rebuild? Thanks!
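      (Following up on my own question, in case anyone lands here later: this sketch should work from the telnet session, assuming your UnRAID build exposes the md driver's resync counters through /proc/mdstat the way the 6.x builds do - the exact field names here are my assumption.)

```
# UnRAID's md driver reports rebuild state through /proc/mdstat.
# mdResyncPos is the current position and mdResync the total size
# (both in 1K blocks), so their ratio gives percent complete.
# NOTE: field names assumed from the UnRAID 6.x /proc/mdstat format.
awk -F= '
  /^mdResync=/    { total = $2 }
  /^mdResyncPos=/ { pos   = $2 }
  END {
    if (total > 0) printf "rebuild %.2f%% complete\n", pos * 100 / total
    else           print "no rebuild in progress"
  }
' /proc/mdstat
```

      (And for the loopback problem: `ip addr show lo` will show whether the rc.inet1 restart left the interface down, and `ip link set lo up` brings it back without touching the array - assuming iproute2 is on the box, which it is on the UnRAID builds I've seen.)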
  11. Just to close the thread up: when I got home I disabled INT13 on the card side and IOMMU on the motherboard side, and now the array is rebuilding properly. I'll probably replace the card in the long run, but in the short term those suggestions got me back up and running. Thanks again for the help, you guys!
  12. Thanks for the heads-up. I think I'm going to try all the motherboard BIOS updates/settings changes, moving the SASLP to a different slot, and flashing _that_ card's BIOS (all mentioned here), and see where we get before I throw money at it. Chances are I'll still be shopping for a new card, but I figure let's try to make what I have work first.
  13. After searching the forums for some recommended models: do you think this would be a suitable replacement? https://www.amazon.com/3P0R3-Controller-PCI-E-mini-SAS-PowerEdge/dp/B00ZSXK1YO/ref=sr_1_1?s=electronics&ie=UTF8&qid=1494866284&sr=1-1&keywords=dell+perc+h310
  14. Strange. Any idea what could've caused the sudden change? I've been using the server in this configuration for almost a year without incident.
  15. Looks like I may have spoken too soon... the parity rebuild for Disk 9 seems to have just stopped itself, and the drive is back to a red X. Here's a new diagnostic dump - any thoughts? feezfileserv-diagnostics-20170515-1057.zip
  16. Okay. After all of that, Disk 4 re-detected properly, and Disk 9 is rebuilding. Time to re-verify that all my backups are up to date. Thank you SO MUCH for all your help, and for putting up with this novice.
  17. Thanks. When doing the xfs_repair on md4, it spits this out (this is after stopping the array and restarting it in maintenance mode):
      root@feezfileserv:/boot/logs# xfs_repair -v /dev/md4
      Phase 1 - find and verify superblock...
              - block cache size set to 663264 entries
      Phase 2 - using internal log
              - zero log...
      zero_log: head block 776022 tail block 775959
      ERROR: The filesystem has valuable metadata changes in a log which needs to
      be replayed. Mount the filesystem to replay the log, and unmount it before
      re-running xfs_repair. If you are unable to mount the filesystem, then use
      the -L option to destroy the log and attempt a repair. Note that destroying
      the log may cause corruption -- please attempt a mount of the filesystem
      before doing this.
      Should I just go ahead and do "xfs_repair -Lv /dev/md4", or is there something else I should try first?
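      (Follow-up for anyone hitting the same error: here's the order of operations the message itself is asking for, as I understand it - /dev/md4 is the device name from my setup, and the "mount" step happens by starting the array normally from the GUI.)

```
# The error means the XFS journal still holds unreplayed metadata.
# Mounting the filesystem once replays the log cleanly, so:
#   1. start the array normally so disk4 mounts (this replays the log),
#   2. stop it, restart in maintenance mode, and run the repair again:
xfs_repair -v /dev/md4

# Only if the mount itself fails, zero the log as a last resort.
# This throws away the metadata changes still sitting in the journal:
#xfs_repair -Lv /dev/md4
```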
  18. Here you go. Of note: when I started the array this time, a _different_ disk showed up as unmountable, in addition to the one that had red-X'ed previously. SMART status for _all_ of my drives (even the X'ed-out one) is green. feezfileserv-diagnostics-20170515-1009.zip
  19. Here's the diagnostics file. feezfileserv-diagnostics-20170515-1003.zip
  20. Hah, silly me. I assumed that safe mode was CLI-only and never actually bothered to check whether the webGUI worked. Guess the caffeine hasn't kicked in yet. I'll pull the diagnostics and report back.
  21. Right - what I'm saying is that I'm currently in safe mode and would like to start the array to run the diagnostics you guys mentioned. Can that be done from safe mode, or do I need to reboot into "normal" mode? If I can start the array from safe mode, what are the commands to do so?
  22. I at least got that far - I mean the commands to start the array from within safe mode.
  23. Apologies - how do I accomplish that? It's sad: I know Windows and network gear backwards and forwards, but anything beyond the basics in *nix and I'm kind of out of my depth.
  24. Sounds good. Can I do either of those things remotely from safe mode? I only ask because I don't have physical access to the server right now; it's booted into safe mode, and I'm telnetted in.
  25. Hi all, last night I went to upgrade my UnRAID system from 6.2.2 to the latest and greatest, and noticed some escalating odd behavior.
      First, the "automated" (click-here-to-upgrade) upgrade didn't work - it said something to the effect of "unable to write to flash". I thought that was odd, so, being normally a Windows guy, I decided to reboot, because a "reboot fixes everything". I stopped the array, and as soon as I did, one of the drives became unavailable (red X) and two more became "unknown" (with the expected drive name there and a dropdown to choose a disk). Very odd.
      I rebooted the server; UnRAID and all the drives came back up fine, and I was able to upgrade the OS. All drives reported as available. Since the box had been rebooted several times at this point, I elected to run a parity check. This ran all day (the server doesn't have the fastest proc in the world), and when I came back in the evening to check on it, I found the parity check listed as "incomplete" and one of the drives unavailable again. On the display attached to the server I noticed a ton of XFS and I/O errors.
      I shut down the server and checked all of the drive mountings and the cabling; all seemed well. I reseated the cables and the drives just to be on the safe side and fired the server back up. When I did, the display was reporting similar I/O errors, and now the web GUI is unresponsive (the page doesn't even load). I've got it rebooted into safe mode now as a precautionary measure, since I'm not home and would like to troubleshoot remotely.
      Can anyone advise what my next steps are? Thanks in advance!
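      (Follow-up for anyone else stuck troubleshooting over telnet: UnRAID 6.x ships a command-line `diagnostics` tool that bundles the syslog, SMART reports, and config into a zip on the flash drive - the same kind of zip I attached above. I'm assuming it behaves the same in safe mode, since safe mode mainly skips plugins.)

```
# Capture the full diagnostics bundle from a telnet/SSH session;
# the zip is written under /boot/logs on the flash drive:
diagnostics

# Grab the newest zip to copy off and attach to a forum post:
ls -t /boot/logs/*-diagnostics-*.zip | head -1

# The XFS / I/O errors scrolling on the console also land in the syslog:
tail -n 50 /var/log/syslog
```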