Everything posted by doron

  1. That is correct. I proposed a patch to the author, hopefully he will consider merging it in.
  2. As mentioned above, running Unraid as a VM is not a common occurrence, and isn't even officially supported. Some of us do it, though. Reasons vary. For me, it is a combination of a few things. (a) ESXi has been rock solid for me, for years now. It just works - and, in my scenario, it's always on, essentially feeling like part of the hardware. I've also not felt a need to update it (running an ancient 6.x version), so it's really always up - as long as the UPS can carry it. In contrast, I've been updating Unraid occasionally (that's a good thing!), taking it down from time to time for maintenance, and so forth. During those times, my other VMs stayed up and churning along. (b) Historically, for me, Unraid has mainly served as a (great) NAS, with its unique approach to building and protecting the storage array. Its additional services (a few docker containers) have been an upside, running under Unraid as a convenience. Over the years Unraid has developed immensely and moved from being a storage solution into a one-stop shop for your home server needs; I've not boarded that train - at least not completely. The shorter version is: I started with VMware as a hypervisor, it's been rock solid, my other VMs are not affected by Unraid's dynamic nature (updates etc.), and I never saw a good reason to flip that structure.
  3. Perhaps this calls for a new feature of this little tool - back up my LUKS headers. I'll take a look to see what it takes to do that reliably for the entire array. @Jclendineng
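     For anyone who wants to do this by hand in the meantime, here's a minimal sketch of the idea (the device glob and backup path are assumptions - adjust them to your system; this is not something the plugin does today):

        # Back up the LUKS header of each encrypted array device.
        BACKUP_DIR=/boot/config/luks-header-backups   # illustrative destination
        mkdir -p "$BACKUP_DIR"
        for dev in /dev/md[1-9]*; do                  # array devices, not /dev/sdX
            name=$(basename "$dev")
            # luksHeaderBackup refuses to overwrite, so clear any stale copy first
            rm -f "$BACKUP_DIR/$name.luksHeader"
            cryptsetup luksHeaderBackup "$dev" \
                --header-backup-file "$BACKUP_DIR/$name.luksHeader"
        done

     Keep in mind the backup contains the (still passphrase-protected) key slots, so store it as carefully as you would the keyfile itself.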
  4. Yes. When using -S, several drive families then require an explicit spin-up command to get going again (in sg_start jargon, that'd be "-s" - small s). That is not what Unraid expects - it expects an implicit wake-up whenever subsequent I/O is issued against the drive. So while "-S" would cause more drives to spin down, it would also cause some of them to stay spun down, which translates into Unraid marking the drive as faulty and various other small hells breaking loose. Yes, this is typically the solution to spin-ups closely following spin-downs. Glad you found the issue.
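     To see the difference in practice, this is roughly what the two paths look like with sg3_utils (the device name is a placeholder, and the exact flags the plugin sends may differ):

        # Explicit STOP UNIT - the "-S"-style spin-down
        sg_start --stop /dev/sdX

        # Drives that only honor the explicit path now need a matching START UNIT
        # before they serve I/O again - Unraid never sends this on its own:
        sg_start --start /dev/sdX

        # Unraid's expectation instead: any ordinary access should wake the drive, e.g.
        dd if=/dev/sdX of=/dev/null bs=512 count=1 iflag=direct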
  5. So you believe both LUKS headers got corrupted simultaneously? Have you tried a backup copy of the keyfile?
  6. If you have both a data drive and a cache drive, and both stopped being openable with your keyfile at the very same time, I'd bet your key slots are fine. Chances are, either (a) you have some cabling or controller issue, or (b) something happened to the keyfile (have you backed it up? Perhaps use a backup copy). I'd put lower chances on (c) someone changed the key on your drives or (d) some malware played nasty games with your LUKS headers. Another thing - have you run "memtest" recently?
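     One quick way to narrow it down is to test the keyfile against each header without opening or changing anything (a sketch - the device glob and keyfile path are assumptions, adjust to your setup):

        KEYFILE=/root/keyfile                 # illustrative path
        for dev in /dev/md[1-9]*; do
            # --test-passphrase only validates the key material; nothing is mounted
            if cryptsetup open --test-passphrase --key-file "$KEYFILE" "$dev"; then
                echo "$dev: keyfile OK"
            else
                echo "$dev: keyfile rejected (or header unreadable)"
            fi
        done

     If every device rejects the keyfile, the file itself (or the path it's read from) is the prime suspect; if only one device fails, look at that drive's cabling/controller.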
  7. @Jclendineng, if I was unclear, that was a genuine question. Since you suspected a bit flip, that would be a viable hypothesis only if you have exactly one data drive in the array. Each drive has its own LUKS header; the chance of a bit flipping in all of them at once is practically zero.
  8. Need some context - have you been using the script that's the subject of this thread?
  9. That's an interesting thought. It should be easy to add; however, if I do that, it will also need:
     - A way to remove a key (after making sure you are holding one of the remaining keys - it is too easy to lock the door with the keys inside if you're not careful).
     - A monitor on the number of occupied key slots. LUKS has a limited number of key slots, and a careless user (not you) might keep adding keys until the slots fill up. This gets even hairier since different HDDs may have a different number of occupied slots.
     As you can see, there's more to take care of if we want to officially support multiple keys. While it is all doable, and your suggested feature does make sense, I wonder how many people would actually use it. If other people want to weigh in on this, please comment in this thread. (Regardless, if you feel brave enough and/or fluent enough with the Linux CLI, you can use "cryptsetup luksAddKey" on all of your array drives (and cache, if applicable) to add the emergency key - see the sketch below. Make sure you operate against /dev/mdN (for diskN) and not /dev/sdX. Be very careful 🙂 )
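     Roughly, the manual procedure looks like this (a sketch - the keyfile path and device glob are assumptions; luksAddKey asks for an existing passphrase/key before adding the new one):

        # First, see which key slots are already occupied on a device
        cryptsetup luksDump /dev/md1

        # Then add the emergency key to every encrypted array device
        for dev in /dev/md[1-9]*; do
            cryptsetup luksAddKey "$dev" /root/emergency.key
        done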
  10. If you're referring to the SAS Spindown plugin (linked above by @trurl), it was last updated in Aug 2022.
  11. Seagate. Shocking 🙂 (*) That model is indeed a rebranded Seagate ST990080SS. Thanks for reporting! (*) Most hit-and-miss reports with SAS drive spindown seem to involve Seagate-made drives. 🤷‍♂️
  12. Thanks for providing the update. That's very interesting, and it supports the hypothesis that the behavior depends on the specific HDD+controller combination. If you don't mind sharing the make/model of your drives, that'd be helpful too.
  13. I have a couple of USB DVB-T2 adapters and could never get reliable, quality performance from them. The TBS PCIe card has been rock solid (for me). Hey, certainly not your fault - your work on this is so much appreciated! (Not sure I ever understood why these cards can't be supported with in-tree drivers (kmods). I must be missing something.) Very much appreciated. Thank you.
  14. Which models are they? Are they DVB-T2? PCIe or USB? Curious, as I'm expecting to be forced to replace my TBS PCIe DVB-T2 card at some not-too-far point in the future 😞
  15. There's probably actual I/O activity against the drives, which spins them back up. The "read SMART" messages just indicate that Unraid (re)reads a drive's SMART attributes whenever it starts or spins up.
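     If you want to confirm that, one quick check (a sketch - replace sdX with the drive in question) is to watch the device's activity counters while it is supposedly idle:

        # Print per-device read/write activity every 5 seconds; sustained non-zero
        # tps/kB on an "idle" drive means something is really touching it, which is
        # what triggers the spin-up and the subsequent SMART re-read.
        iostat -d -k 5 /dev/sdX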
  16. Thanks for reporting success. Very glad to hear the plugin is helpful to you!
  17. Glad you are finding it helpful! I've seen more than one report, like the one just below your post, saying that reinstalling/rebooting made it work. I don't have a very good explanation as to why that would be the case, but I'd try that first. If it still doesn't work, please post diagnostics.
  18. Are you seeing the actual drives anywhere else? e.g. when you go into BIOS setup? Or, when you are not passing the LSI controller to Unraid, are you seeing the actual drives in ESXi?
  19. Seems like this HP drive, the controller, or both do not support the spin-down command. Unfortunately, support for this command is not ubiquitous among SAS drives.
  20. Indeed. Missed that (probably due to the same problem...). Thank you and sorry for the noise.
  21. This is kind of weird - I often read the Unraid forum via the "Unread" activity stream. For the past couple of days, this stream has come back empty - as if there's "nothing there yet". Is something broken in my browser, or has something gone wrong with the forum server? Thanks for any insight.
  22. Indeed they are, and impressive it is 🙂 Unraid is quite straightforward in what it does with HDDs and, consequently, what it needs from them. This is, IMHO, one of its strengths (it does come at a price, though). It builds an independent filesystem on each data drive, and maintains parity data so as to be able to reconstruct a failed drive. Therefore, as long as there's a valid filesystem on the drive served to it, it will eventually work with that fs and be merry.
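     A nice side effect of this design: any single data disk can be read on a plain Linux box, no Unraid required (a sketch - device name and mount point are placeholders; check the actual partition with blkid first, and for an encrypted array you'd open it with cryptsetup before mounting):

        # Identify the filesystem on the data partition (typically XFS or BTRFS)
        blkid /dev/sdX1

        # Mount it read-only and browse the files directly
        mkdir -p /mnt/recovered
        mount -o ro /dev/sdX1 /mnt/recovered
        ls /mnt/recovered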
  23. @BlueBull, The short answer is - yes, it can be done(*) (so, essentially, no - I'm not confirming your suspicion 🙂 ). Quite simply, in fact. As long as you are passing the drives through (i.e. RDM and not, say, a whole-disk virtual drive), the transition to a passed-through HBA is possible, preserving your data. You will need to be careful with the procedure of making the change - let me know if you need guidance; I think there's some past material on this in this subforum - but assuming you do the right thing, you will not need to start fresh (no issue with UUIDs). (*) Can be done - and I've done exactly that in the past.
  24. At the time of posting, this is still the way to go. Note the preferred method is to install the plugin (off Community Applications) rather than downloading from the top post in this thread.