Posts posted by tmp31416

  1. @sota: (quick reply) i am using a two-way fan splitter to power two of the fans, as the JBPWR2 power board only has leads for 2 fans.

    i knew the fans would then be running at full speed (the powerboard has only 3 pin leads, not 4), but my initial "does this turn on, is anything burning?" test gave me the impression the 4 fans weren't that noisy. 

     

    (never test on a couch that will absorb noise)

     

     

     

  2. i recently laid my hands on a working but empty 1u chassis that was being disposed of (a supermicro sc813m), to hold the last few drives i can add to my "backup" server. hey, can't beat *free*, right?  anyway, bought a used power board off ebay (a supermicro JBPWR2) along with an SFF-8088 to 8087 bracket, fished a breakout cable and a fan splitter out of my stash of parts, assembled the beast... tested it in a large room -- hey, not too noisy... and racked it.

     

    when i turned it on again, what the...?!? it's noisier than my 2u jbod drive array! (that quantum dxi6500 i posted about some time ago)

     

    i quickly found out the 40mm fans (FAN-0061L4) are rated at 44db each -- and four identical sources add roughly 6db, so about 50db total... either i get quieter fans or i put in some in-line resistors that came with noctua 120mm fans i bought for my 4u boxes (norco 4224 & rosewill l4500).  for the fans, i know the FAN-0100L4 would fit, though it appears to be costly to purchase in canada and only drops the noise level 10db.  another fan that could be an option is the Sunon 40x40x20mm 3 pin fan #KDE1204PKVX (https://www.amazon.ca/Sunon-40x40x20mm-pin-fan-KDE1204PKVX/dp/B006ODM76C), as it is cheaper & *much* quieter, though i am not sure it could really be used with a supermicro 1u chassis like mine (20mm deep instead of 28mm, 10.x CFM instead of 16.3).
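    for the curious: identical noise sources add on a log scale, not linearly. a quick back-of-the-envelope sketch (plain python, using the rated spec above):

```python
import math

def combined_spl(levels_db):
    """Combine sound pressure levels of independent (incoherent) sources."""
    return 10 * math.log10(sum(10 ** (level / 10) for level in levels_db))

# four FAN-0061L4 fans rated at 44 db each:
print(round(combined_spl([44] * 4), 1))  # about 50 db, i.e. +6 db over a single fan
```

    so four fans are noticeably louder than one, but nowhere near "4 times 44db".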

     

    at the moment, am leaning towards using the resistors (cheapest known solution), but i have to make sure it's safe to do so.
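    before committing, i sketched the math on the in-line resistor idea. big caveat: this treats the fan as a fixed resistance (which it isn't exactly), and both the ~0.1 A fan draw and the 50 ohm resistor value are my guesses, not measured specs:

```python
# rough voltage-divider estimate for an in-line fan resistor
# assumptions (not measured!): fan draws ~0.1 A at 12 V, series resistor ~50 ohm
fan_r = 12.0 / 0.1           # fan modelled as a fixed 120 ohm load
series_r = 50.0              # assumed value for a noctua-style low-noise adapter
v_fan = 12.0 * fan_r / (fan_r + series_r)
print(round(v_fan, 1))       # fan would see roughly 8.5 V instead of 12 V
```

    lower voltage means lower rpm and less noise, but also less airflow -- hence the need to double-check it is safe for a 1u chassis.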

     

    or is there another option i'm missing?

     

    thanks in advance!

     

  3. very, very delayed update to my situation.

    so i ordered a spare backplane off e-bay some time ago and put it aside due to lack of time, family-related circumstances and also because things inexplicably got somewhat better for a brief period of time. then the situation suddenly took a turn for the worse: all drives in the jbod chassis decided to go awol, a good motivator to do some surgery. i finally was able to do the swap yesterday.

     

    this chassis is not easy to work with, i'd say. lots of swearing was involved.

    anyway, if the faster server boot-up, the fact that all drives were again visible *and* that i was able to add a new drive mean anything, it's that the old backplane was indeed defective. the new one appears to run much better than the old one.


    i hope this keeps running without any issues for the foreseeable future.

     

  4. quick update, hastily typed:
    having to deal with a family member who is sadly in his sunset days has kept my attention away in the last week or two, so i haven't been able to continue this thread as diligently as i should have.

     

    @ken-ji: ah-ha!  thanks for noticing this, it does confirm what i observed! i am not sure what could be causing this delayed event; i will have to re-read the backplane's documentation to find anything that could explain it. but from my previous reading of it, i don't remember anything being configurable except a bunch of jumpers of the "don't touch this, leave as-is" kind. (that's when you wonder "why put in jumpers i can't use?")

     

    i might also want to look into the hba's bios documentation to see if i did not accidentally toggle something on/off that i should not have.  maybe there's a "set hba bios to default values" option that i could use to undo some mistake i might have made.

     

    this being said, am not sure if i could perform your test.  are standard consumer sata drives really able to support hotplug? i would not want to fry the drive. (i prefer to err on the side of paranoia, here)

     

    as for moving to higher density drives, i use supermicro CSE-M35T-1B drive cages in my backup server, which, if i'm not mistaken, have issues with 3+tb drives.  that's why i use 2tb drives in that server (my main one has 4 & 3 tb drives).

     

    That's it for now, to be continued.

  5. Due to an illness in the family, was not able to beat on this situation as much as i would like.

    I did observe something, though.  It means something, unsure what it is exactly.

    it does indeed look as if, when you wait long enough (roughly 1.5 hours), the missing drive becomes visible & usable in unraid.

     

    not sure if this makes any sense, but one might think there is a standoff of sorts, with two or more devices waiting for each other before initializing.  or that there is a timer (?) that is triggered and prevents the new drive from completing its initialization.

     

    I did notice what i assume is the H200's bios taking some time to do its post (the spinner that advances, as dots appear -- sorry this is the best description i can formulate right now), more than before the new drive was added to the external storage array.  or is this a red herring?
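    if that ~1.5 hour delay happens again, something like this dumb poll script could at least timestamp when the kernel finally exposes the device node (sketch only; "/dev/sdx" is a placeholder for whatever name the new drive ends up with):

```python
import os
import time

def wait_for_device(path, poll_s=60, timeout_s=3 * 3600):
    """Poll until a device node appears; return elapsed seconds, or None on timeout."""
    start = time.monotonic()
    while time.monotonic() - start < timeout_s:
        if os.path.exists(path):
            return time.monotonic() - start
        time.sleep(poll_s)
    return None

# elapsed = wait_for_device("/dev/sdx")  # placeholder device name
# print("device appeared after", elapsed, "seconds")
```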

  6. curiouser and curiouser...

    after taking care of the homefront (kitchen, etc.), i sat down to check motherboard & bios information via the dashboard, only to end up seeing the new drive visible again. so within an hour or so, it's as if the drive, somehow, decided to wake up and be recognized by unraid.  

     

    i don't believe in problems that sort of fix themselves without any human intervention.

    and last i checked, i don't have a "more magic" switch on the chassis.

     

    i flipped between screens/tabs just in case it was the browser acting up and displaying random incorrect stuff.  even closed it and restarted it.

    nope, chrome is not going non-linear on me and the drive is still visible.

    started the array, and the drive formatted ok.

    was even able to create a new share and add the drive to it.

     

    all this is rather bewildering.

    because of the change in state (i.e., things apparently now working), i ran /tools/diagnostics again and attached the file to this message.

     

    i will take a step back, try to go over everything i've done to see if i can remember something useful, and go from there.

    btw, updating the bios is not an option, i already have the latest & greatest.

     

    cheers.

     

    nasunraid-bis-diagnostics-20190601-0320.zip

  7. after yesterday's unexpected improvements of sorts, i booted that server again tonight to continue the process of adding the new drive (i had shut it down after it finished clearing the disk and was now going to format it and the rest), only to hit the ~"boot media not found" error anew.  after some cajoling, got unraid to boot... only to discover the new drive is again invisible to unraid.

    i am now dealing with two problems:

    (1) running gparted apparently did something to the motherboard's bios, since i cannot boot unraid like before. 

    (2) that missing/invisible drive, as far as unraid is concerned.

     

    not sure why & how, but there appears to be an issue with the motherboard's bios, so i might want to look into Tom's warning about bioses ('update your bios!') even though i wasn't affected in any way until that 9th drive in the external jbod chassis.

     

    i did run /tools/diagnostics again and am attaching the new file to this message, hoping someone else notices something in there.

    will ponder my next step(s) afterwards, to be continued...

     

     

     

    nasunraid-bis-diagnostics-20190601-0150.zip

  8. another evening where external obligations didn't leave me time to do extensive testing.

     

    i did manage to get that 2nd server to boot under gparted live, which didn't see the new drive either (22 drives only).

    and to add insult to injury, could not find a way to get any log file off the box. things are going well -- not.

     

    rebooted with the usb thumbdrive containing unraid and ... boot media not found.

    huh, okay, turn off server, put thumbdrive in other front usb port, turn server back on... boot media not found.

    huh, not okay, reset box and go into motherboard's bios... notice it wants to boot the thumbdrive with uefi, so let's try a boot override (non-uefi)...

    and the unraid boot menu comes up.  that's much better.

     

    once i got into the unraid web gui, i clicked on the drop-down menu beside the new drive slot ... and the new drive shows up?!?!?

    for the life of me, i haven't the foggiest how it could be possible and/or what had changed.

     

    after assigning the new drive to its slot in the main menu, i did download the file created by /tools/diagnostics but am unsure how good / useful it could be.

    am running out of time tonight to elaborate (and edit this) any further, so will attach the file to this message in the hope someone else notices a clue of what happened.

     

    i could also upload the previous syslog (from yesterday) if anyone asks.

    drive clearing is still in progress as i'm typing this, so things appear to be stable.

     

    cheers.

     

     

    nasunraid-bis-diagnostics-20190531-0343.zip

  9. @ken-ji: everything's connected, as i wanted to see if i could find anything useful in the syslog.

    never thought of [ /tools/diagnostics ], will look into it tonight.

     

    also considering booting that box with a live distro (something like 'gparted live') to see what it tells me (dmesg, syslog, etc.).

     

    since i already have a copy of the entire syslog (not that it is that big), i'll have another look to see if it contains messages concerning:
    (1) the h200e, maybe the card's driver said something useful;
    (2) the jbod's backplane -- assuming 'lsilogic sasx28 a.1' as seen in the 'sas topology' is what i should look for (that or simply 'enclosure')
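    since grepping by hand gets tedious, here is a small sketch to pull only the lines i care about out of a saved syslog copy (the keyword list is just my guess at what the driver and enclosure messages would contain):

```python
def filter_log(lines, keywords=("mpt2sas", "sas", "enclosure", "h200")):
    """Return syslog lines mentioning any keyword (case-insensitive)."""
    kws = tuple(k.lower() for k in keywords)
    return [ln for ln in lines if any(k in ln.lower() for k in kws)]

# usage sketch:
# with open("syslog") as f:
#     for ln in filter_log(f):
#         print(ln, end="")
```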

     

     

     

  10. quick update:

     

    was not able to perform any significant testing this evening, but managed to check the unraid syslog to see if there would be any error message, anything that could yield a clue why unraid is not seeing the new drive that was added to the external jbod array.
    whilst the h200e does see the new drive (see the photo in the previous post, it's drive 0 (zero) in the 'sas topology'), there is absolutely no trace of the new 2tb disc in the syslog.

    not sure where and how the disc is getting lost. 
    the only certainty i have at the moment is that this install of unraid v6.7 appears to have a problem dealing with more than 22 drives (14 internal + 8 external) total.

    for reference, the main server is running 24 drives with absolutely no issue (norco 4224 chassis).
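    to keep an eye on the drive count itself, a quick sketch that tallies whole sd disks (sda, sdb, ... without partition digits) from /proc/partitions:

```python
import re

def count_sd_disks(partitions_text):
    """Count whole sd* disks (not partitions) in /proc/partitions output."""
    return sum(1 for line in partitions_text.splitlines()
               if re.search(r"\bsd[a-z]+$", line.strip()))

# usage sketch:
# with open("/proc/partitions") as f:
#     print(count_sd_disks(f.read()))
```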

     

    to be continued.
     

     

  11. @ken-ji:

    a quick update before going to bed (can't call in sick tomorrow am!): 
    (1) removed the new drive from the jbod chassis, and as you recommended, moved one of the existing discs into the 9th bay.  it does show up in unraid.  so the bay itself is not defective.
    (2) put back the existing drive into its normal bay, put the new drive back into the 9th bay and checked the HBA's bios.  all 9 drives are listed in the 'sas topology'.
    so at the lower levels, all seems to be working.
    i'll continue testing tomorrow after work.
    cheers.

     

    p.s.: edit to add the following screenshot:

     

    IMG_20190529_000306.(ed).jpg

  12. @ken-ji:

    when the array is stopped, i see both parity drives and the existing 20 data drives.

    the 21st is still unassigned and the new disc (a wd20ezrz) does not appear in the drop-down menu.

     

    one thing i did not mention previously: at first i thought the bay containing the new drive might be defective, so i shifted the disc to the next bay to see if that was the problem.  surprise: quite a few of the existing drives disappeared, as far as unraid was concerned.  putting the new drive back into its original bay restored the server to a working configuration (no missing drives... except the new one, of course).

     

    i will try moving one of the existing drives into slot 9 of the drive array to see what happens.

    will keep you posted.

     

    p.s.: editing this message to add screenshot of drive list.

     

    test_1 (dsq existant bougé dans baie no9).(ed2).jpg

  13. quickly searched the forums, didn't find anything resembling my situation, so here goes:

     

    some time ago, i bought a direct-attached jbod array (quantum dxi6500, aka: supermicro cse-826, with sas826el1 backplane) off ebay, connecting it to my "backup" server running unraid v6.x via a dell h200e ...  that was because i ran out of available drive bays in my backup server, hence the need to expand externally to keep up with the need for space.

     

    everything has been running fine as new drives were added to shares as required. (*) anyway, after basically filling the 8th drive in the das chassis, i added a 9th drive... that cannot be seen by unraid. (currently: v6.7.0)

    checked the drive (it's ok), looked around in unraid for an overlooked setting that could limit the number of data drives to 21 for some reason (didn't see any), checked documentation for jumpers on the backplane (nothing stood out). right now i want to eliminate the "simple, obvious & dumb reason" that prevents the 9th external drive from being recognized by unraid before i look into swapping the backplane in case it is defective. i prefer to make sure nothing is staring me in the face (current gut feeling), because not all problems are "sexy".

    and frankly, i would like to avoid unracking the jbod if at all possible.

    so has anyone else seen anything like this? does anyone else have an idea what could be going on?


    thanks in advance,


    (*) btw, this storage array totally convinced me that esata is not a good idea for servers and to stick with multilane (sff-8088, etc.) cables for reliability and performance.

  14. so, to semi-answer my question above:

     

    talking to another nerd at the office, it appears he's been using the following card without any issues for a year or two already:

     

    http://www.newegg.ca/Product/Product.aspx?Item=N82E16815158088CVF&Tpk=PEXESATA2

    (StarTech PEXESATA2)

     

    it is also sold as a Rosewill RC-219  or a Koutech IO-PESA220 (also sold on newegg).

     

    anyone else care to say if they've been using this card, in any aforementioned incarnation, successfully or no?

     

    and if there exists a better alternative, i'm all ears.

     

    cheers.

     

    *edit* p.s.: would the HighPoint Rocket 622 (http://www.newegg.ca/Product/Product.aspx?Item=N82E16816115073) work with unraid 4.5.6 or 4.7?

      is it worth it?

     

    well, reading a bit to see if my personal experience was out of the norm, it appears that i am not the only one who has had issues with inexpensive sil3132 pci-e 1x controllers -- syba "i/o crest" cards to be exact.

     

    but after reading quite a few messages (and user reviews on newegg et al), i don't know what card i could use.

    i am running out of drive bays in my server (using 3 'SUPERMICRO CSE-M35T-1B' assemblies in my tower) and will have to expand outside.

     

    thing is, i have only one pci-e 1x slot available.

    so i'll need something that can fit in such a slot and can do port multiplication.

     

    one (or two) sans digital TR4M+BNC enclosures might do the trick (very unsure about their cooling capabilities, though), but i still cannot figure out which esata controller to use.  suggestion / recommendation, anyone?

     

    thanks in advance.

     

     

     

    p.s.: i'm still running unraid 4.5.6 (spot the production support guy in the picture, if it ain't broken, don't fix it), but i might upgrade to unraid 4.7 if it adds necessary drivers for sata controllers.

     

  16. so, third configuration of my rig, a tweak on the second.

     

    changes:

    * replace all three SYBA SY-PEX40015 sata cards with a single Supermicro AOC-SASLP-MV8 card.

    * to free the single pcie 16x slot for the supermicro card, replace the asus passive 4350 graphics card with a powercolor passive 4350 with a pcie 1x connector.  surprise: with similar specs, this card is nonetheless bigger.  the heatsink is *huge* compared to what the asus has!

    * finally put the reset switch inside the case, by re-purposing a startech.com "plate6f" slot cover

     

    all other physical specs are the same, save for the random lock-ups that are no longer part of the package.  well, almost: i did lose the front usb ports because the new graphics card is blocking access to the motherboard leads.  and two of the six sata connectors on the motherboard are blocked by that graphics card.  did i mention it is big?

     

    ...for some reason, knoppix now refuses to boot with this new configuration.  something i will have to look into when i get the time.  but as long as unraid 4.5.6 boots, i'm happy.

     

    some pictures:

     

    ** reset switch, outside:

    6olglg.jpg

     

    ** reset switch, inside (along with the new sata + graphics interfaces):

    24quzdj.jpg

     

    ** the new sata/sas controller plus the re-layed out cabling:

    2v29zcg.jpg

     

    ** another view of the cabling:

    110c6ms.jpg

     

    voilà!

     

     

     

  17. Very clean build!  8)

     

    I love the idea of the aluminum ducting tape on the side panel. I've been wondering for a while what to do about my Centurion's vents; the best I could come up with was clear packaging tape. However, I could never fully bring myself to install it looking that cheap. :-\ I might steal your idea though, if you don't mind!  ;D

     

    What did you put on the sticky side facing the outside of the case? Black construction paper? Did you paint it?

     

    thank you!

    it might be cleaner than some builds, but it's not to the standards set by guitarlp, icoburn, or unraided, to name a few.

    i might make it tidier "real soon now", though, as i will be replacing the 3 syba 2-port raid controllers with an 8-port supermicro one -- see the thread about it in the "hardware" forum (*).  hopefully, i will be happier with the end result.

     

    as for blocking the open/unused vents, i did use black construction paper.  the alu ducting tape was used simply because it will simply stay there and not fall over time.  using anything else was not an option, as it would not have lasted.

     

    cheers.

     

    (*) reader's digest conclusion: don't try to be cheap with your components, you will end up regretting it.  too often, you *do* get what you pay for...

     

  18. Nice! What do you do with the CDROM? Did you recompile the kernel to make it accessible?

    I had planned to put on in my machine so I could rip from it with vmware.

    I think I'm going with the external SCSI route.

     

    thank you.  i think i could make it tidier if i had slightly longer sata cables to route them like some of you guys have done (straight lines, 90 degree angles, being able to go along the sides of the drives, etc.).  the last two trayless bays (KINGWIN KF-1000-BK, added after the fact) sort of spoiled "the look", i might go back and tidy things up in the future.

     

    as for the cd-rom, i used it to test all hardware before running unraid (knoppix and other live cds are a godsend).  if i re-generate another bart-pe cd, i might be able to use it to flash the firmware on new syba cards (all instructions to do so make use of windows xp or 7 for an os).  i haven't used it with unraid; maybe i should try to see if it can see the drive.  though from your question, i doubt it can be done with "plain vanilla" unraid.  maybe we don't have to recompile the kernel to make use of the dvd-rom?  as long as Tom (mr. unraid himself) left a handful of loadable kernel modules with the os, this might just be a question of editing one's "go" script.

     

    i have an usb lcd display (2x20, iirc) that i would love to connect to my nas (usb hd44780 lcd unit from 'lcdmodkit', check on ebay) but i would need to:

    (1) fabricate or find an adapter to give it a standard usb connector to plug it into one of the external usb ports (because it connects to the motherboard usb leads),

    (2) find some sort of little enclosure to house it (it would sit on top of the nas),

    (3) find some software, running under unraid, to give me stats: drive usage, drive & cpu temp, etc.

    but like the cabling it will have to wait (unless someone can list me all the required parts off the top of their heads).

     

    cheers.

     

  19. so here it is, my second build, or more accurately, just a "reincarnation" of my existing/previous rig.

    the idea was to be able to have more drives (i was using an antec 300 and had already 6 disks in it) and especially to make maintenance easier and quicker, i.e. not to have to open the box to replace a failed drive or to add a new one.  just that is worth its weight in gold to me.

     

    the entrails of the box are the same: gigabyte ga-ma770-ud3 motherboard, sempron le-1250 cpu, 2gigs of ddr2-667 ram, asus radeon 4350 passive graphics card, tp-link gigabit nic for "server to server" transfers, an ide dvd burner "just in case", that lets me boot some live cd for testing (if need be). 

     

    the new build adds: 3 syba sil3132 pcie 1x sata cards (adding 6 ports to the motherboard's 6), two supermicro CSE-M35SB drive cages (noctua 92mm fans replacing the stock ones) and two Kingwin KF-1000-BK trayless racks.

     

    it replaces the antec 300 case with a coolermaster 590 case, with [3 arctic cooling AF12Pro PWM + 1 scythe sff21d] fans for cooling, and the ocz ModXStream 500w with a 600w one.

     

    i did work at closing all unnecessary vent holes to try to ensure that a max. of air would be sucked in by & around the drive assemblies. 

     

    i thus went from 6 internal drives to a possible 12, all accessible from the outside.

    the drives have been running fairly cool (max temp. has been 31c, maybe 32c).

     

    the geek factor has obviously shot upwards with all the separate drive activity lights... and the white fans (looks geeky to me).  the only disappointment is that the drive fault lights are not functional with the sata version of the supermicro cse-m35sb.  beyond that, i have been quite happy with the new server, which, whilst not as quiet as the previous incarnation, is not that noisy after all.

     

    here are some pictures, this time in a more manageable size (no need to use any funky software, it turns out win7's built-in picture viewer/editor can resize quite well.  the ui is a pain, but if you are persistent, you can get the job done).

     

    cheers.

     

    2462wpz.jpg

     

    23lbpc0.jpg

     

    6puc1x.jpg

     

    24xj21v.jpg

     

    j9x11j.jpg

     

  20. ...(procrastinating / taking too long a break whilst doing my income tax report)

     

    ...and this is my old rig.

    case is/was an antec 300;

    all fans (save for the top 140mm antec tri-cool) are/were scythe 800rpm fans (SFF21D);

    the psu is/was an ocz mod-xtreme 500w (modular cabling is a blessing!);

    all 6 hard drives are wd sata, with half of them being caviar 500gb and the other half caviar green 1tb;

    an ide lg dvd burner is thrown in there "just in case"

    unraid is held on an 8gb patriot "xporter xt boost" usb thumb drive

    motherboard is a gigabyte ga-ma770-ud3;

    cpu a sempron 1250;

    2gb of kingston ddr2-667 ram;

    video card is an asus passive 4350 (ati)

    the second gigabit nic is a tp-link tg-3269c (realtek pci card)

    the other expansion card seen in the picture is an adaptec 2940 and shows this used to be one of my unix test machines.

     

    the drive with the red thumbscrews is the parity drive.

     

    since a drive failure meant "opening the hood" to get to the drive, and since this case made adding more drives a bit of a challenge, i "reincarnated" my nas.  pictures of the new beast forthcoming.

     

    sorry if tinypics did not shrink the pictures for this post. "my bad", next time i'll endeavour to do a better job uploading them.

    //edit// thank ${deity} for the possibility to edit previous post.  the images have been shrunk and re-uploaded to tinypics.

    so here goes, 5 pics:

     

    k4hst0.jpg

     

    nnqvja.jpg

     

    2435m6f.jpg

     

    28s58n6.jpg

     

    eapqgn.jpg

     

    not the best job overall, but for a first unraid build, it served its purpose very well.

    that configuration was very quiet, even more when unraid spun down the drives.

    it could be left turned on 7/24 even if its temporary digs are near the bedrooms (until the renos are finished).

     

    and before anyone asks, it's "partially dressed" with a sunbeam "lan party bag"

    (http://www.sunbeamtech.com/PRODUCTS/Lan%20Party%20Bag/Lan%20Party%20Bag.html)

    to make a hasty exit more dignified in case bad luck happens.

    shut down, unplug cables, hook up the straps, get out of the house with less fear of dropping the box.

    (i guess i've been involved with way too many disaster recovery exercises at work...)

     

    cheers.

     
