
doron

Everything posted by doron

  1. Have you looked at the Cooler Master case I pointed to on eBay in the previous post? You might find it interesting (and they still have a few available as I write this). (Curiously, I also have a CM590, which is what I plan to move away from... Tight.)
  2. Like the rest of you on this thread, I have also been searching for the thing - a tower with 5.25" bays top to bottom(*). I bumped into this one on eBay and grabbed it. They seem to still have a few left, and they are NIB. This one has nine 5.25" bays, all exposed. You can fill them with the 5-in-3 or 3-in-2 cages of your fancy, to get a nice 15 or 12 hot-swap 3.5" drives. It's a bit of a beast (full tower, deep, tons of cooling options). It allegedly supports E-ATX mobos but unfortunately does not fit boards like the X9DR3-F (which is E-ATX by spec) - the I/O plate and PCIe slots do not align. Weird. It does take ATX (and smaller form factor) boards happily. Thought some of you might find this interesting. (*) I was looking at using a CSE846 but dropped it due to its sound signature - too noisy for its designated location.
  3. Not exactly. I'm saying I've been working only with RDMs for years and years and both SMART and spin down were working just fine. SSDs do not actually spin down. SMART works for them, of course. That is a different question that is unrelated to RDM vs. controller p/t. I'm not authoritative on this one - I do use SSD for cache.
  4. Okay, this is very interesting. I just did a brief Google study and indeed found references to this claim - e.g. here, here and here. Honestly, I hadn't even heard of that claim until this thread. Conversely, I've been running all my drives as RDMs for quite a few years; SMART was fine, spin down worked. Ironically, I've recently moved all my mechanical drives to a passed-through controller; the one drive I left as RDM is my cache drive, which is an SSD, which does not spin down. I just re-ran smartctl against this drive and it works, as does hdparm. Does anyone have any supporting data to the contrary (i.e. that either of these does not work)? Or could this be some form of "tribal myth" that has been passed on as common wisdom?
  5. Pretty sure. Is there some common wisdom saying that you don't get SMART and spindown with RDMs?
  6. Why would you not get SMART and spin down in RDM? From my experience, you get both.
  7. This command will definitely add a newline char at the end of your passphrase - not what you want. You want: echo -n "passphrase" > keyfile
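For instance, to create the key file and then verify no newline snuck in (the passphrase and file name here are just examples):

```shell
# Write the passphrase with no trailing newline; printf '%s' behaves the
# same as "echo -n" and is portable across shells.
echo -n "My Good PassPhrase" > /tmp/mykeyfile

# Verify: the byte count should equal the passphrase length exactly
# ("My Good PassPhrase" is 18 characters, so 18 bytes - not 19).
wc -c < /tmp/mykeyfile
```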
  8. Quite hard to select one thing I love about Unraid, but if forced to reduce to one, it'll be back to basics: The ability to add drives, in singles or batches, of different sizes, and get them all protected and still recoverable in case of multi drive failure. This has been Unraid's first differentiator, and I think it still is. Now with encryption and dual parity... I'd like to see SAS drives spun down... 🙂
  9. If it would be more convenient for you, you can use this tool to change your pass key into something that's along @limetech's guidance above.
  10. Uneventful upgrade from rc8 here. A thing of beauty. Thanks.
  11. I believe that article is both quite dated (2/2012) and quite inaccurate. It compares things that should not really be compared; the SAS drives considered are the 10K RPM and 15K RPM little beasts, which are indeed very different animals from the 7.2K RPM spindles in terms of actual drive technology. But this has little to do with SAS per se: they were manufactured only with SAS interfaces, simply because no home user would spend the $$$$ for these ultrafast, yet relatively lower-capacity drives. Before SSDs became the rage, those critters were your tier-1 storage of choice (e.g. cache). When you compare apples to apples, e.g. 7.2K RPM 12TB enterprise-level drives, the difference is only in the attached electronics, as @limetech said. You can actually buy the same drive and select your electronics. Case in point, check out this datasheet - see the bottom for ordering options. As mentioned above, the main differences lie with the bus protocol performance and extremely different configuration and topology options, not the performance or reliability of the actual spindle.
  12. Yes, understood. There are good reasons to prefer SAS over SATA, although admittedly most of them(*) reside in the ballpark of enterprise computing as opposed to home / SOHO, where I'm guessing most of the Unraid install base lives. (I'm a diehard veteran of the former, yet my Unraid is the latter, ergo...) I'll concede that in retrospect I would have been better off if my 12TB HGSTs were SATA - if for no other reason, for this spindown one. (*) Much larger and more complex drive topologies, including multihomed and multi-tier connections and enclosures, faster bus transfer speeds (12Gb/s on SAS3), more reliable performance in the presence of those complex configurations, and more. Again - most/all reside in the enterprise computing realm. That's a good start, yes. Thanks! It is well understood and appreciated that you have bigger fish to fry at this time. Hence: <plea> Would you consider adding, on top of the above (not attempting to spin them down), an upcall hook - in the spirit of the EVENT script calls - for all spindown/spinup actions? This would allow those of us who are knee-deep in this (and paying the electricity bills...) to script it up ourselves (currently this is Hard™, as the action takes place in kernel code). I will definitely take a stab - others might too. Once this is (hopefully) brushed up, you could consider adopting it into the core product. Does this make sense? </plea>
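To make the plea concrete, here is a rough sketch of what such a hook handler could look like on my end. To be clear, everything here is my own invention for illustration - no such upcall exists in Unraid today, and the function name, arguments and log location are all made up:

```shell
# Hypothetical handler for a spinup/spindown event upcall.
# Nothing here is an existing Unraid interface - names and argument
# convention are invented for this sketch.
log_spin_event() {   # $1 = "spinup" | "spindown", $2 = device (e.g. /dev/sdb)
  local logfile="${SPINDOWN_LOG:-/var/log/spindown-events.log}"
  printf '%s %s %s\n' "$(date -u +%FT%TZ)" "$1" "$2" >> "$logfile"
}

# Example: record a spindown of sdb into a scratch log
SPINDOWN_LOG=/tmp/spindown-events.log
log_spin_event spindown /dev/sdb
```

With a hook like this in place, one could tally actual spinup/spindown activity per drive and script around the SAS gap without touching kernel code.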
  13. Interesting. When you say the disk did not spin down - how did you test that hasn't actually spun down?
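For what it's worth, the way I'd check (on a SATA drive) is hdparm -C, which reports "standby" only when the spindle has actually stopped; smartctl's -n standby option is also handy, since it skips the check rather than waking a sleeping drive. A tiny helper to pull the state word out of hdparm's output - the sample text below is canned, not from a live drive:

```shell
# Extract the state word ("standby", "active/idle", ...) from the
# output of "hdparm -C /dev/sdX".
drive_state() {
  sed -n 's/^[[:space:]]*drive state is:[[:space:]]*//p' <<< "$1"
}

# Canned sample of typical "hdparm -C /dev/sdb" output, for illustration:
sample='/dev/sdb:
 drive state is:  standby'
drive_state "$sample"    # prints: standby
```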
  14. You sure do. I did capitalize, but you're right, should have been Soon™. Mea Culpa.
  15. That might be a bit tricky. I thought about wrapping hdparm as a stopgap hack, but for most spinup/spindown actions Unraid does not call the userland hdparm program - it uses its kernel code (and /proc/mdcmd) to issue the relevant ATA command to the drive. Frontending that interface would be a more intrusive hack. Also, there doesn't seem to be an "Event" script upcall for spinup/spindown events (there's a thought...). Let's hope @limetech adds this capability to core Unraid Soon™.
  16. Drive encryption is one of Unraid's many good features. When you encrypt part or all of your array and cache, at some point you might want to change your unlock key. Just how often would depend on your threat model (and on your level of paranoia). At this time (6.8), Unraid does not have a UI for changing the unlock key, so here is a small tool that will let you do it. Each of the current and new unlock keys can be either a text password / passphrase or a binary key file, if you're into those (I am). Your array must be started to use this tool. Essentially, the script validates the provided current key against your drives and, on all drives that can be unlocked with the current key, replaces it with the new one (in fact, it adds the new key to all of them and, upon success, removes the old key from all of them). Important: the tool does not save the provided new (replacement) key on permanent storage. Make very sure you have it backed up, either in memory (...) or on some permanent storage (not on the encrypted array 😜 ). If you misplace the new key, your data is hosed. Currently this script needs to be run from the command line. I may turn it into a plugin if there's enough interest (and time) - although I'm pretty sure Limetech has this feature on their radar for some upcoming version. Usage: unraid-newenckey [current-key-file] [new-key-file] Both positional arguments are optional and may be omitted. If provided, each is either the name of a file (containing a passphrase or a binary key) or a single dash (-). For each argument, if it is omitted or specified as a dash, the respective key will be prompted for interactively. Note: if you provide a key file with a passphrase you later intend to type interactively when starting the array (the typical use case on Unraid), make sure the file does not contain a trailing newline. One good way to ensure that is to use "echo -n", e.g.: echo -n "My Good PassPhrase" > /tmp/mykeyfile This code has been tested, but no warranty is expressed or implied. Use at your own risk. With the above out of the way, please report any issues. unraid-newenckey
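Under the hood this rotation is plain LUKS key-slot management. Here's a stripped-down sketch of the add-then-remove sequence per device; this is not the tool's actual code, and the device/key paths in the usage line are purely illustrative:

```shell
# Add-then-remove key rotation for one LUKS device. Not the tool's
# actual code - just the core cryptsetup sequence it is built around.
rotate_key() {   # $1 = device, $2 = current key file, $3 = new key file
  # 1. Verify the current key actually unlocks this device
  cryptsetup luksOpen --test-passphrase --key-file "$2" "$1" || return 1
  # 2. Add the new key to a free key slot, authorized by the current key
  cryptsetup luksAddKey --key-file "$2" "$1" "$3" || return 1
  # 3. Only after the add succeeded, remove the old key's slot
  cryptsetup luksRemoveKey "$1" "$2"
}

# Usage (illustrative paths): rotate_key /dev/md1 /tmp/current.key /tmp/new.key
```

Doing the add on every device first, and the remove only after all adds succeed, is what makes the operation safe against a mid-run failure.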
  17. Thanks @Hurubaw! ... and there goes my hypothesis. The WD40EFRX certainly is not a 4Kn (*), which is what I thought might be a common theme. That is for the author to answer authoritatively. (*) 4k native sector size, as opposed to 512e or 512, which both present 512 size sectors, either emulated or native.
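For anyone wanting to check their own drives: what distinguishes the three formats is the pair of logical and physical sector sizes, both easy to read (e.g. lsblk -o NAME,LOG-SEC,PHY-SEC, or blockdev --getss --getpbsz /dev/sdX). A small classifier over those two numbers - my own helper, shown here for illustration:

```shell
# Classify a drive's sector format from its logical/physical sector sizes.
sector_format() {   # $1 = logical sector bytes, $2 = physical sector bytes
  if   [ "$1" -eq 4096 ]; then echo "4Kn"    # 4K logical: native 4K
  elif [ "$2" -eq 4096 ]; then echo "512e"   # 4K physical behind 512 logical
  else                         echo "512n"   # plain old 512/512
  fi
}

sector_format 512 4096    # prints: 512e
sector_format 4096 4096   # prints: 4Kn
```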
  18. Could you post the specific model of the HDDs - both the ones failing and the ones completing successfully? ("WD Red" or "WD Black" are marketing names and can point to a bunch of different drive models.) You can find the model name as the left part of the "identification" column on the GUI - examples could be WDC_WD4000F9YZ or WDC-WD60EDAZ or HUH721212AL4200. (Do not post the serial number, which is typically the rightmost part and should be considered private information.) I'm trying to compare this to my case and perhaps suggest a common theme for the failing drives vs. the succeeding ones.
  19. Wizardry huh. More like rustiness. In fact, even that filter was kinda heavy-handed - a better version would have been lsblk | grep "crypt" | sed -r 's/[^[:alnum:]]*([[:alnum:]]*) .*/\1/' At any rate, I did hack up that script. You can find it here. EDIT: Moved the script to its own thread.
  20. While @johnnie.black is obviously correct and this item will do the job perfectly, if you plan to only use SATA 6Gb/s drives on this box (and not SAS3 drives), it might be an overkill. Looking at your source, this may serve you just as well, for a fraction of the cost. If you go that route, you will need two SFF-8087/SFF-8643 cables, such as this.
  21. Umm, apparently the logs are not stored on permanent storage, and the server has been restarted since then... so what I have no longer covers the relevant time window. Sorry about that.
  22. Isn't it?... 🙂 All it does is take the lsblk output lines that contain "crypt" and extract the first alphanumeric token from each. Run the line on your server and see what you get. Might be a good idea to write a script to do a key change en masse. If I get some spare time later today I might hack something up.
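If someone wants to script this before I get to it: a sturdier way to enumerate the crypt devices is to ask lsblk for raw columns and filter on the TYPE field, rather than grepping the pretty-printed output. The helper name and the loop body below are just placeholders of mine:

```shell
# Keep only the NAME column of lsblk rows whose TYPE is exactly "crypt".
filter_crypt() {
  awk '$2 == "crypt" { print $1 }'
}

# Enumerate dm-crypt devices and act on each (placeholder action):
# lsblk -rno NAME,TYPE | filter_crypt | while read -r d; do
#   echo "would re-key /dev/mapper/$d"
# done

# Canned demonstration of the filter itself:
printf 'sda disk\nmd1 crypt\nmd2 crypt\n' | filter_crypt
```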
  23. This has been discussed in other threads, e.g. here, but I didn't find an entry in Feature Requests, so here goes. Unraid does not spin down SAS drives. It appears to try, and the GUI indicates that the drive is spun down (grey ball, temperature not presented), but in reality these drives keep spinning, remaining warm and drawing full power, 24x7. The problem seems to be that hdparm, which is used to spin down drives, does not affect SAS drives. A solution might be to use the sg_start command (I haven't tested this thoroughly but it seems to do the right thing): sg_start -s /dev/sdX <== spin up; sg_start -S /dev/sdX <== spin down. (Unfortunately the above does not seem to do the right thing for SATA drives, so we'll either need conditional logic, or maybe just run both tools in sequence for each spindown/spinup operation.) I'm sure adding SAS spindown capability will be met with massive gratitude from a lot of us.
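The conditional logic could be as simple as keying off the transport lsblk reports (its TRAN column: "sas" vs. "sata"). A sketch of that dispatch - untested against real hardware here, though the commands themselves are standard sg3_utils / hdparm invocations:

```shell
# Choose the spindown command for a drive based on its transport.
# The transport can be read with: lsblk -dno TRAN /dev/sdX
spindown_cmd() {   # $1 = transport ("sas"/"sata"), $2 = device
  case "$1" in
    sas)  echo "sg_start -S $2" ;;   # SCSI STOP UNIT, works on SAS drives
    sata) echo "hdparm -y $2"   ;;   # ATA "standby immediate"
    *)    return 1 ;;                # unknown transport - do nothing
  esac
}

spindown_cmd sas  /dev/sdb    # prints: sg_start -S /dev/sdb
spindown_cmd sata /dev/sdc    # prints: hdparm -y /dev/sdc
```

One would then eval (or directly run) the printed command per drive at spindown time instead of calling hdparm unconditionally.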