Posts posted by jfeeser

  1. Thought you'd like to know I just pulled the trigger on this over the weekend.  Believe it or not, the PassMark score of the CPU in the eBay server is _more than double_ that of the one in my fileserver now (a Sempron 145!)

     

    I guess the benefit of "this thing is a fileserver and nothing else" is that it eases a lot of my CPU needs :)

  2. Oh, sound is definitely an issue.  I've seen a bunch of posts about modding that case to remove the housing for the server-grade PSUs and put in a desktop one (which is actually what I'm doing with my current one as well).  I'm going for as close to silent as I can get, considering my home office and my rack share a room :D

  3. Yep - I was actually researching that after your last reply.  Apparently all of the SuperMicro backplanes that end in "TQ" are straight passthroughs, which would explain why there are no SAS or breakout connectors on the back of it - just 24 individual SATA ports.  Which is fine; I've already got plenty of reverse breakout cables lying around.

  4. It's funny, I didn't even think to ask, but that's a great idea.  I figure I can transplant my existing hardware for now (it works fine but was built on a cheap single-core, single-thread CPU I had lying around - parity checks take literally a day and a half!).

  5. Sound advice.  I figured the price was too good to be true.  I'm mostly concerned about the controllers and the chassis - everything else I was probably going to rip out and replace anyway.  Any recommendations for something that would accommodate 24 drives around that price point?  (I'm trying to spend $500 or less, and to avoid Norco if I can - I had a Norco case wreck 13 drives simultaneously.)

  6. Hi all, looking to upgrade my unRAID rig, as I'm physically out of places to put drives in it.  I'm looking to go from a 2U, loud-as-hell, 12-bay server with drives ranging from 3TB Reds to 8TB Reds/whites, to a 24-bay box.  I'd be transplanting the 12 drives from the existing box and then scaling up from there (and probably keeping the 12-bay as a "backup box").  Looking at this one I just dug up on eBay:

     

    https://www.ebay.com/itm/Supermicro-24-Bay-Chassis-SAS846TQ-Server-AMD-QC-2-1GHz-2372HE-16GB-H8DME-2/202174284803?epid=1403640796&hash=item2f1286c403:g:2ggAAOSwkvFaTs00

     

    Can anyone take a look and see if there are any potential issues with this box?  I'm looking to run vanilla unRAID with no Docker containers or VMs outside of a couple of very low-footprint apps, and it will be serving content to a Plex server with about 6 users, running on a separate box.

  7. Hi all, looking to upgrade my unRAID rig, as I'm physically out of places to put drives in it.  I'm looking to go from a 2U, loud-as-hell, 12-bay server with drives ranging from 3TB Reds to 8TB Reds/whites, to a 24-bay box.  I'd be transplanting the 12 drives from the existing box and then scaling up from there (and probably keeping the 12-bay as a "backup box").  Looking at this one I just dug up on eBay:

     

    https://www.ebay.com/itm/Supermicro-24-Bay-Chassis-SAS846TQ-Server-AMD-QC-2-1GHz-2372HE-16GB-H8DME-2/202174284803?epid=1403640796&hash=item2f1286c403:g:2ggAAOSwkvFaTs00

     

    Can anyone take a look and see if there are any potential issues with this box?  I'm looking to run vanilla unRAID with no Docker containers or VMs outside of a couple of very low-footprint apps, and it will be serving content to a Plex server with about 6 users, running on a separate box.

     

    **EDIT:  Just realized this board is for *finished* builds and that I posted in the wrong place.  Mods, feel free to delete this one.**

  8. Hi all, I'm rebuilding a drive from parity after some drive issues that some of you helped me with the other day.  Being the foolish person that I am, I decided to tinker with the network settings while the rebuild was running and change the IP of the box, since I'd previously had to switch it to DHCP to fix some network issues.  As soon as I set the IP back to static, I couldn't ping anything from the unRAID box anymore, not even the loopback address.  I tried "/etc/rc.d/rc.inet1 restart", but it didn't seem to help.  That's fine; a reboot will probably clear it up, but I can't reboot until the parity rebuild is done.


    So, other than watching the drive lights and waiting for them to stop blinking, is there a way to watch the rebuild's progress from the console?  Thanks!
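    For anyone finding this later: one possible way to watch the rebuild from the console, assuming unRAID's `mdcmd status` prints key=value lines that include `mdResyncPos` and `mdResyncSize` (worth verifying on your release; the `resync_pct` helper name is just for illustration):

```shell
#!/bin/sh
# Sketch only: compute rebuild progress from unRAID's md driver status.
# Assumes `mdcmd status` emits key=value lines such as mdResyncPos=...
# and mdResyncSize=... (check this on your unRAID version).

resync_pct() {
  # args: current position, total size (in whatever units mdcmd reports)
  awk -v p="$1" -v s="$2" 'BEGIN {
    if (s + 0 > 0) printf "%.1f%%\n", 100 * p / s
    else           print "no resync running"
  }'
}

pos=$(mdcmd status 2>/dev/null | awk -F= '/^mdResyncPos=/  {print $2; exit}')
size=$(mdcmd status 2>/dev/null | awk -F= '/^mdResyncSize=/ {print $2; exit}')
resync_pct "$pos" "$size"
```

    Wrapping the last three lines in `watch -n 60 '...'` would poll once a minute without touching the array.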

  9. Just to close the thread up: when I got home I disabled INT13 on the card side and IOMMU on the motherboard side, and now the array is rebuilding properly.  I'll probably replace the card in the long run, but in the short term those suggestions got me back up and running.  Thanks again for the help, guys!

  10. Thanks for the heads-up.  I think I'm going to try all the mainboard BIOS updates/settings changes, move the SASLP to a different slot, and flash _that_ card's BIOS (all mentioned here), and see where we get before I throw money at it.  Chances are I'll still end up shopping for a new card, but I figure let's try to make what I have work first.

  11. Thanks.  When doing the xfs_repair on md4, it spits this out:

     

    root@feezfileserv:/boot/logs# xfs_repair -v /dev/md4
    Phase 1 - find and verify superblock...
            - block cache size set to 663264 entries
    Phase 2 - using internal log
            - zero log...
    zero_log: head block 776022 tail block 775959
    ERROR: The filesystem has valuable metadata changes in a log which needs to
    be replayed.  Mount the filesystem to replay the log, and unmount it before
    re-running xfs_repair.  If you are unable to mount the filesystem, then use
    the -L option to destroy the log and attempt a repair.
    Note that destroying the log may cause corruption -- please attempt a mount
    of the filesystem before doing this.

    (this is after stopping the array and restarting it in maintenance mode)

     

    Should I just go ahead and run "xfs_repair -Lv /dev/md4", or is there something else I should try first?
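    For reference, the sequence that error message is asking for - mount first so the log gets replayed, and only fall back to -L if the mount itself fails - might be sketched like this.  The mountpoint and the `try_log_replay` helper are made up for illustration; it only echoes the commands to run rather than running the repair itself, and on unRAID the array would need to be in maintenance mode:

```shell
#!/bin/sh
# Sketch, not a tested procedure: attempt the safe route first (mounting
# an XFS filesystem replays its log), and only fall back to the
# destructive -L repair if the mount itself fails. DEV and MNT are
# illustrative values.
DEV=/dev/md4
MNT=/tmp/xfs-replay

try_log_replay() {
  mkdir -p "$MNT"
  if mount -t xfs "$DEV" "$MNT" 2>/dev/null; then
    umount "$MNT"    # log was replayed during the mount
    echo "log replayed; now re-run: xfs_repair -v $DEV"
  else
    echo "mount failed; last resort: xfs_repair -Lv $DEV (zeroing the log can lose recent metadata)"
  fi
}

try_log_replay
```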
     

     

  12. Hi all, 

     

    Last night I went to upgrade my unRAID system from 6.2.2 to the latest and greatest, and noticed some escalating odd behavior.  First, the "automated" (click here to upgrade) upgrade didn't work - it said something to the effect of "unable to write to flash".  I thought that was odd, so, being normally a Windows guy, I decided to reboot, because a "reboot fixes everything".  I stopped the array, and as soon as I did, one of the drives became unavailable (red X) and two more became "unknown" (showing the expected drive name and a dropdown to choose a disk).  Very odd.

     

    I rebooted the server; unRAID and all the drives came back up fine, and I was able to upgrade the OS.  All drives reported as available.  Since the box had been rebooted several times at this point, I elected to run a parity check.  This ran all day (the server doesn't have the fastest processor in the world), and when I came back in the evening to check on it, I found the parity check listed as "incomplete" and one of the drives unavailable.  On the display attached to the unRAID server I noticed a ton of XFS and I/O errors.

     

    I shut down the server and checked all of the drive mountings and the cabling; everything seemed fine.  I reseated the cables and the drives just to be on the safe side and fired the server back up.  When I did, the display was reporting similar I/O errors, and now the web GUI is unresponsive (the page doesn't even load).

     

    I've got it rebooted into "safe mode" now as a precautionary measure, since I'm not home and would like to troubleshoot remotely.  Can anyone advise what my next steps are?  Thanks in advance!
