tmp31416

Everything posted by tmp31416

  1. @benson: the powerboard (jbpwr2) doesn't have pwm fan control capabilities; the connectors have only 3 pins on them. it would have been nice if the board could control the fan speed, but it doesn't.
  2. @sota: (quick reply) i am using a two-way fan splitter to power two of the fans, as the JBPWR2 power board only has leads for 2 fans. i knew the fans would then be running at full speed (the powerboard has only 3 pin leads, not 4), but my initial "does this turn on, is anything burning?" test gave me the impression the 4 fans weren't that noisy. (never test on a couch that will absorb noise)
  3. i recently laid my hands on a working but empty 1u chassis that was being disposed of (a supermicro sc813m), to hold the last few drives i can add to my "backup" server. hey, can't beat *free*, right? anyway, bought a used power board off ebay (a supermicro JBPWR2) along with an SFF-8088 to 8087 bracket, fished a breakout cable and a fan splitter out of my stash of parts, assembled the beast... tested it in a large room -- hey, not too noisy... and racked it. when i turned it on again, what the...?!? it's noisier than my 2u jbod drive array! (that quantum dxi6500 i posted about some time ago) i quickly found out the 40mm fans (FAN-0061L4) are rated at 44db each, and there are four of them... either i get quieter fans or i use some of the in-line resistors that came with the noctua 120mm fans i bought for my 4u boxes (norco 4224 & rosewill l4500). for the fans, i know the FAN-0100L4 would fit, though it appears to be costly to purchase in canada and only drops the noise level 10db. another fan that could be an option is the Sunon 40x40x20mm 3-pin fan #KDE1204PKVX (https://www.amazon.ca/Sunon-40x40x20mm-pin-fan-KDE1204PKVX/dp/B006ODM76C), as it is cheaper & *much* quieter, though i'm not sure it could really be used in a supermicro 1u chassis like mine (20mm deep instead of 28mm, 10.x CFM instead of 16.3). at the moment, am leaning towards using the resistors (cheapest known solution), but i have to make sure it's safe to do so. or is there another option i'm missing? thanks in advance!
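
quick aside on the noise math, in case it helps anyone: decibel ratings don't simply add or multiply, so four identical (and uncorrelated) 44db fans come out to roughly 44 + 10*log10(4) ≈ 50db total. a throwaway python sketch of that calculation, using the fan count and rating from the post above:

```python
import math

def combined_spl(levels_db):
    """combine sound pressure levels (db) of uncorrelated sources."""
    return 10 * math.log10(sum(10 ** (level / 10) for level in levels_db))

# four FAN-0061L4 fans at ~44 db each -> ~50 db total
print(round(combined_spl([44] * 4), 1))
```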
  4. very, very delayed update to my situation. so i ordered a spare backplane off e-bay some time ago and put it aside due to lack of time, family-related circumstances and also because things inexplicably got somewhat better for a brief period of time. then the situation suddenly took a turn for the worse: all drives in the jbod chassis decided to go awol, a good motivator to do some surgery. finally was able to do the swap yesterday. this chassis is not easy to work with, i'd say. lots of swearing was involved. anyway, if the faster server boot-up, the fact that all drives are again visible *and* that i was able to add a new drive mean anything, it's that the old backplane was indeed defective. the new one appears to run much better than the old one. i hope it keeps running without any issues for the foreseeable future.
  5. quick update, hastily typed: having to deal with a family member who is sadly in his sunset days has kept my attention away in the last week or two, so i haven't been able to continue this thread as diligently as i should have. @ken-ji: ah-ha! thanks for noticing this, it does confirm what i observed! i am not sure what could be causing this delayed event; i will have to re-read the backplane's documentation to find anything that could explain it. but from my previous reading of it, i don't remember anything being configurable except a pile of jumpers of the "don't touch this, leave as-is" kind. (that's when you wonder "why put in jumpers i can't use?") i might also want to look into the hba's bios documentation to see if i did not accidentally toggle something on/off that i should not have. maybe there's a "set hba bios to default values" option i could use to undo some mistake i might have made accidentally. this being said, am not sure if i could perform your test. are standard consumer sata drives really able to support hotplug? i would not want to fry the drive. (i prefer to err on the side of paranoia, here) as for moving to higher density drives, i use supermicro CSE-M35T-1B drive cages in my backup server, which, if i'm not mistaken, have issues with 3+tb drives. that's why i use 2tb drives in that server (my main one has 4 & 3 tb drives). that's it for now, to be continued.
  6. due to an illness in the family, i was not able to beat on this situation as much as i would have liked. i did observe something, though; it means something, i'm just unsure what exactly. it does indeed look like, if you wait long enough (roughly 1.5 hours), the missing drive becomes visible & usable in unraid. not sure if this makes any sense, but one might think there is a standoff of sorts, with two or more devices waiting for each other before initializing. or that there is a timer (?) that is triggered and prevents the new drive from completing its initialization. i did notice what i assume is the H200's bios taking some time to do its post (the spinner that advances as dots appear -- sorry, this is the best description i can formulate right now), more than before the new drive was added to the external storage array. or is this a red herring?
  7. curiouser and curiouser... after taking care of the homefront (kitchen, etc.), i sat down to check motherboard & bios information via the dashboard, only to end up seeing the new drive visible again. so within an hour or so, it's as if the drive, somehow, decided to wake up and be recognized by unraid. i don't believe in problems that sort of fix themselves without any human intervention. and last i checked, i don't have a "more magic" switch on the chassis. i flipped between screens/tabs just in case it was the browser acting up and displaying random incorrect stuff. even closed it and restarted it. nope, chrome is not going non-linear on me and the drive is still visible. started the array, and the drive formatted ok. was even able to create a new share and add the drive to it. all this is rather bewildering. because of the change in state (i.e., things apparently now working), i ran /tools/diagnostics again and attached the file to this message. i will take a step back, try to go over everything i've done to see if i can remember something useful, and go from there. btw, updating the bios is not an option, i already have the latest & greatest. cheers. nasunraid-bis-diagnostics-20190601-0320.zip
  8. after yesterday's unexpected improvements of sorts, i booted that server again tonight to continue the process of adding the new drive (i had shut it down after it finished clearing the disk, and was now going to format it and the rest), only to get the "boot media not found" error anew. after some cajoling, got unraid to boot... only to discover the new drive is again invisible to unraid. i am now dealing with two problems: (1) running gparted apparently did something to the motherboard's bios, since i cannot boot unraid like before. (2) that missing/invisible drive, as far as unraid is concerned. not sure why & how, but there appears to be an issue with the motherboard's bios, so i might want to look into Tom's warning about bioses ('update your bios!') even though i wasn't affected in any way until that 9th drive in the external jbod chassis. i did run /tools/diagnostics again and am attaching the new file to this message, hoping someone else notices something in there. will ponder my next step(s) afterwards, to be continued... nasunraid-bis-diagnostics-20190601-0150.zip
  9. another evening where external obligations didn't leave me time to do extensive testing. i did manage to get that 2nd server to boot under gparted live, which didn't see the new drive either (22 drives only). and to add insult to injury, could not find a way to get any log file off the box. things are going well -- not. rebooted with the usb thumbdrive containing unraid and ... boot media not found. huh, okay, turn off server, put thumbdrive in other front usb port, turn server back on... boot media not found. huh, not okay, reset box and go into motherboard's bios... notice it wants to boot the thumbdrive with uefi, so let's try a boot override (non-uefi)... and the unraid boot menu comes up. that's much better. once i got into the unraid web gui, i clicked on the drop-down menu beside the new drive slot ... and the new drive shows up?!?!? for the life of me, i haven't the foggiest idea how it could be possible and/or what had changed. after assigning the new drive to its slot in the main menu, i did download the file created by /tools/diagnostics but am unsure how good/useful it could be. am running out of time tonight to elaborate (and edit this) any further, so will attach the file to this message in the hope someone else notices a clue of what happened. i could also upload the previous syslog (from yesterday) if anyone asks. drive clearing is still in progress as i'm typing this, so things appear to be stable. cheers. nasunraid-bis-diagnostics-20190531-0343.zip
  10. @ken-ji: everything's connected, as i wanted to see if i could find anything useful in the syslog. never thought of [ /tools/diagnostics ], will look into it tonight. also considering booting that box with a live distro (something like 'gparted live') to see what it tells me (dmesg, syslog, etc.). since i already have a copy of the entire syslog (not that it is that big), i'll have another look to see if it contains messages concerning: (1) the h200e, maybe the card's driver said something useful; (2) the jbod's backplane -- assuming 'lsilogic sasx28 a.1' as seen in the 'sas topology' is what i should look for (that or simply 'enclosure')
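
for what it's worth, the kind of quick-and-dirty scan of that saved syslog copy i have in mind would look something like the sketch below; the file name and the keyword list are just guesses (the lsi sas2 hba driver usually shows up as mpt2sas or mpt3sas), not anything official:

```python
# rough sketch: scan a saved copy of the syslog for hba / enclosure messages
# "syslog.txt" and the keywords are assumptions -- adjust to your own setup
KEYWORDS = ("mpt2sas", "mpt3sas", "sasx28", "enclosure")

with open("syslog.txt", errors="replace") as log:
    for line in log:
        if any(keyword in line.lower() for keyword in KEYWORDS):
            print(line.rstrip())
```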
  11. quick update: was not able to perform any significant testing this evening, but managed to check the unraid syslog to see if there would be any error message, anything that could yield a clue why unraid is not seeing the new drive that was added to the external jbod array. whilst the h200e does see the new drive (see the photo in the previous post, it's drive 0 (zero) in the 'sas topology'), there is absolutely no trace of the new 2tb disc in the syslog. not sure where and how the disc is getting lost. the only certainty i have at the moment is that this install of unraid v6.7 appears to have a problem dealing with more than 22 drives (14 internal + 8 external) total. for reference, the main server is running 24 drives with absolutely no issue (norco 4224 chassis). to be continued.
  12. @ken-ji: a quick update before going to bed (can't call in sick tomorrow am!): (1) removed the new drive from the jbod chassis, and as you recommended, moved one of the existing discs into the 9th bay. it does show up in unraid. so the bay itself is not defective. (2) put back the existing drive into its normal bay, put the new drive back into the 9th bay and checked the HBA's bios. all 9 drives are listed in the 'sas topology'. so at the lower levels, all seems to be working. i'll continue testing tomorrow after work. cheers. p.s.: edit to add the following screenshot:
  13. @ken-ji: when the array is stopped, i see both parity drives and the existing 20 data drives. the 21st is still unassigned and the new disc (a wd20ezrz) does not appear in the drop-down menu. one thing i did not mention previously: at first i thought the bay containing the new drive might be defective, so i shifted the disc to the next bay to see if that was the problem. surprise: quite a few of the existing drives disappeared, as far as unraid was concerned. putting the new drive back into its original bay restored the server to a working configuration (no missing drives... except the new one, of course). i will try moving one of the existing drives into slot 9 of the drive array to see what happens. will keep you posted. p.s.: editing this message to add screenshot of drive list.
  14. quickly searched the forums, didn't find anything resembling my situation, so here goes: some time ago, i bought a direct-attached jbod array (quantum dxi6500, aka: supermicro cse-826, with sas826el1 backplane) off ebay, connecting it to my "backup" server running unraid v6.x via a dell h200e ... that was because i ran out of available drive bays in my backup server, hence the need to expand externally to keep up with the need for space. everything has been running fine as new drives were added to shares as required. (*) anyway, after basically filling the 8th drive in the das chassis, i added a 9th drive... that cannot be seen by unraid. (currently: v6.7.0) checked the drive (it's ok), looked around in unraid for an overlooked setting that could limit the number of data drives to 21 for some reason (didn't see any), checked documentation for jumpers on the backplane (nothing stood out). right now i want to eliminate the "simple, obvious & dumb reason" that prevents the 9th external drive from being recognized by unraid before i look into swapping the backplane in case it is defective. i prefer to make sure nothing is staring me in the face (current gut feeling), because not all problems are "sexy". and frankly, i would like to avoid unracking the jbod if at all possible. so, has anyone else seen anything like this? does anyone else have an idea what could be going on? thanks in advance. (*) btw, this storage array totally convinced me that esata is not a good idea for servers and to stick with multilane (sff-8088, etc.) cables for reliability and performance.
  15. upgraded one of my servers from 6.6.2, with the nerdpack plugin, no problem that i could see. i'll upgrade the other (main) one asap... that one is still at 6.4.x
  16. i recently bought a used quantum dxi6500 (just a rebadged supermicro cse-826 jbod chassis?) via ebay to add some storage to this server that has only 15 drives in it. i intend to connect the external drive array to the server via a dell h200e card also bought on ebay (will have to flash it to the proper mode). the storage enclosure does not come with caddies, though (this explains why the price was very, very good). so i need to procure a few. now, the same type of enclosure is/was being used at work and quite a few of them are being life-cycled. i might be able to grab some caddies as the drives are being sent to be destroyed (while still attached to the caddies, can you imagine?), but am not betting my paycheque on it. i'm having difficulties identifying the exact model of the missing caddies, as supermicro has quite a few variations. does anyone know what to look for exactly on ebay or elsewhere? thanks in advance,
  17. what i did: built a new server, re-using as many parts from my *old* server as i could (including 15 of the 20 old disks). re-formatted the old usb key, extracted the unraid 6.2.4 zip file onto it, made it bootable. after powering up the server, assigned all the drives but then didn't see an obvious "select the filesystem you want to use" option. i was a bit surprised unraid said right away "ah, those are reiserfs drives". i would have thought that, because i was reassigning all those drives at once under a new "blank" unraid installation, the os would have asked me right away whether i wanted to re-use the filesystem already present on the newly-assigned (data) disks or not. i should have dug around, i'll admit that was somewhat of an EUBF on my part. anyway, after some hesitation, i interrupted the parity rebuild and powered down. using another machine (windows), blew away the partition tables on all the drives, created new partitions & did a quick ntfs format. re-imaged the usb key to try to make sure the previous config would not stick. reinserted the drives in the new server, powered up again... and most drives were still seen as reiserfs. tried a second "cleaning cycle", still traces of reiserfs. that's when i posted my message. got my solution from squid and now i have 19tb of xfs goodness. though one drive has roughly 22gb used (out of 2tb), not sure where that came from -- but that's the topic of another thread. cheers.
  18. thanks to squid's reply, i selected xfs as the filesystem for all my "recycled" drives that somehow kept showing up as reiserfs (even after repeatedly blowing away the partitions & reformatting to ntfs -- how is this possible?) and unraid is currently formatting them to xfs. i guess it was not obvious to me, now i can say TIL. so the functionality is there, just not easy to discover. maybe this could be considered a ui issue to be addressed later? anyway, thanks for the help.
  19. a quick googling and forum search didn't reveal anything like what i am going to ask below -- but if someone else *did* ask for this, sorry about the repeat. long story short: i have finally "reincarnated" my old server from a cm centurion 590 mid-tower into a 4u rackmount rosewill rsv-l4500, moving the 3 supermicro 5-in-3 cages into the new case, etc. it will be my "redundancy server" for my really critical data (going from 20 to only 15 drives, no more external sans digital boxes). because i'm recycling my old reiserfs-formatted hard drives, unraid 6.2.4 saw a pile of usable hard drives and wanted to use them more or less "as is" to build a new array. it appears there's no way to order unraid to format the data drives to xfs when it tries to build a new array (read: the dual parity). am now on my second pass of blowing away partitions and formatting the drives to ntfs before re-inserting them into the case. i did manage to get about half the disks to format to xfs after blowing away the partition tables on all the data drives, but some of them kept sticking to reiserfs despite my attempts at forcing a preclear/format on all the drives. so, my request is: when recycling old data drives to build a new server, could we please get the option to format all the data drives and to specify the filesystem for the new array? thanks in advance.
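
as an aside, a minimal sketch of the "wipe the old signatures" step described in the last few posts, assuming a linux box with the standard util-linux wipefs tool; the device names are placeholders and the operation is destructive, so double-check before running anything like it:

```python
#!/usr/bin/env python3
# sketch: remove stale filesystem signatures (reiserfs, ntfs, ...) from
# recycled data drives before assigning them to a new array.
# DESTRUCTIVE -- the device list below is a placeholder, verify it first.
import subprocess

DRIVES = ["/dev/sdb", "/dev/sdc"]  # example devices only

for dev in DRIVES:
    # wipefs -a erases all known filesystem/partition-table signatures
    subprocess.run(["wipefs", "-a", dev], check=True)
```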
  20. i don't think unraid would replace vmware as a virtualization platform either, but with the clustering-like features mentioned earlier in this thread, it could become a low-cost, highly flexible + reliable form of enterprise network storage. like network shares for corporate desktops. your average cubicle dweller doesn't need super-high throughput to save his/her latest spreadsheet, for example. *edit: forgot the word "enterprise".
  21. i *am* an old-school it guy (trained on punched cards, we still had a pdp-8 with a tty in the corner of the terminal/keypunch room, know the joys of getting a batch job rejected because of a jcl error, etc.), just not in management. in this era of shrinking budgets and gross mismanagement, even i figured out that unraid could become a very capable, low-cost (due to being able to use/re-use generic hardware) solution for enterprise network storage. i kept mentioning to upper management over the years to keep an eye on the various freenices (*bsd, gnu-linux, etc.), as they could become useful and possibly be used to replace more expensive solutions. the suits kept telling me those were toys and they would never use that. guess what's been happening for the last 4-5 years? they have been phasing out hp/ux, tru64, solaris, even vms, in favour of red hat enterprise linux. anything can happen, as long as unraid keeps evolving.
  22. an implicit/"under the hood" commit command to release i/o ops after pattern analysis was just one way to express the idea of not committing all i/o to disc as it comes in. i agree that such an approach (delay actual writes until deemed safe) would need to be low-level enough to be transparent to the user. one has to ask: is this an unraid or a filesystem function? anyway, that's just a side consideration. whilst ransomware attacks are not a subject to dismiss lightly, protecting data from hardware failure is still the main reason behind the use of unraid for many of us. hence my initial request for built-in replication/back-up between unraid servers. and i can see the need for protection from finger/mouse slips, provided by versioning, which could greatly augment the value of data replication between servers. i'm not dismissing resilience against ransomware attacks; i just think such a function builds on previously developed features, and that limetech should make """clustering""" the higher priority. i also have a gut feeling that built-in automated replication/backups coupled with versioning could help sell unraid in enterprise environments. how many times have i heard over the years upper management dismiss gnu-linux or any other floss as toys that would never be deployed in a corporate environment... only to see them nowadays replacing solaris, hp/ux, tru64, vms, etc. wholesale? maybe we'll end up seeing unraid have a significant presence in data centres?
  23. so the points made in the last two posts (re. ransomware attacks) not only put the emphasis on versioning, but also on *version protection*. finding a way to ensure that older versions of files are left untouched is a good idea. i too wonder sometimes if/when i'll fall victim to a reprobate's actions, considering i was once the victim of a "drive-by download" a few years back, despite my precautions. if my network shares were to get mucked up by ransomware, if my server were somehow infected (i wonder how, but anything networked has to be considered vulnerable to malice nowadays), i would definitely not be happy. /* not-fully-thought-out brainstorming below */ i wonder if a mechanism to detect ransomware actions (i.e., sequential mass-encryption of files & directories) could be implemented? and to resist ransomware attacks, could file ops be implemented in an oracle-like fashion, i.e., "<command>, <command>, <command>, commit", with the 'commit' command issued by unraid (transparently/hidden from the user) once a certain volume of ops is reached and no suspicious pattern is detected? /* end of possible brain-fart */ *** so now the feature request becomes "built-in & automated server or share replication & versioning", the optional versioning being built with a level of protection for prior versions. i would also refine & add to my previous ideas of being able to specify which shares to replicate, and for "future unraid" to be able to have multiple replication servers: being able to specify specific network share ---> replication server associations. some shares could be replicated locally, others could be replicated remotely. or am i going overboard with this?
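
to make the "buffer the ops, commit once nothing looks suspicious" brain-fart above a bit more concrete, here is a toy sketch; the class name and the detection heuristic are made up for illustration, and this has nothing to do with how unraid actually handles i/o:

```python
# toy illustration of the "buffer writes, commit later" idea -- made-up class
# and a deliberately naive heuristic; not how unraid or any filesystem works
class DelayedCommitBuffer:
    def __init__(self, threshold=100):
        self.pending = []            # buffered (path, data) write ops
        self.threshold = threshold   # how many ops before auto-commit

    def write(self, path, data):
        self.pending.append((path, data))
        if len(self.pending) >= self.threshold:
            if self.looks_suspicious():
                print("mass-rewrite pattern detected, holding writes")
            else:
                self.commit()

    def looks_suspicious(self):
        # naive heuristic: a burst that rewrites mostly distinct files
        distinct = len({path for path, _ in self.pending})
        return distinct > 0.9 * len(self.pending)

    def commit(self):
        for path, data in self.pending:
            with open(path, "wb") as f:
                f.write(data)
        self.pending.clear()
```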
  24. to sort of quote someone, i thought of "one more thing": replication could be complete or partial. this is because we can't always build two servers with the exact same configuration (number of disks, etc.). i know the redundant/backup server i would like to build as soon as i can would be re-using many parts of my previous unraid build. obvious consequence: less drive space than the current/main server. i also know some of my network shares definitely matter more than others. i'd prefer to replicate/backup the share containing my cv, personal photos and whatnot, instead of, say, the one containing the "reality" tv shows recorded for my gf (the things we do for love...). btw, jonathanm brought up the need for versioning (à la vms, i presume?) to deal with the case of a finger/mouse slip. that would be a good optional feature for replication/backup. cheers.
  25. i have searched this topic but haven't found anything. i cannot believe this appears to never have been requested, i must need new glasses... anywho, just in case i am the first one to ask for this: whilst i understand one should use his/her unraid server as much as possible and not be wasting cpu cycles (hence the vm technology that has been baked in), to me, unraid is still an expandable & fault-tolerant storage technology first and foremost. stable dual-parity makes for a more robust unraid server (and this is much appreciated, thanks Tom!), but the ultimate in fault-tolerance is still redundancy and more specifically *server redundancy*. now, one can already build a second unraid box and set up some scripts to copy over data to the secondary server, but i think we could do better. it would be nice if there was some built-in functionality to automate data replication between servers, between a "master" and a "slave". that functionality would be managed through the gui and in fact could involve more than a single pair of servers -- i would say in my post-holiday dazed state that this could be considered clustering. "cluster" members could be local (same rack) or remote (different building). local data replication could be considered by some as a rather simple problem to tackle, but remote replication might be something different due to available bandwidth and so on. curious to know what others think of this idea. cheers -- and a happy 2017 to everyone.
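
for completeness, the "some scripts to copy over data to the secondary server" i mention above would be roughly something like the sketch below; the share names, destination host and paths are made up for illustration:

```python
#!/usr/bin/env python3
# rough per-share replication via rsync over ssh, the "do it yourself today"
# option mentioned above; shares and destination are illustrative only
import subprocess

SHARES = ["documents", "photos"]       # example shares to replicate
DEST = "backup-server:/mnt/user"       # example rsync-over-ssh destination

for share in SHARES:
    subprocess.run(
        ["rsync", "-a", "--delete", f"/mnt/user/{share}/", f"{DEST}/{share}/"],
        check=True,
    )
```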