doron

Community Developer
  • Posts: 635
  • Days Won: 2

Everything posted by doron

  1. Hi @ich777, any plans to make a 6.10.0-rc1 kernel compatible version of the plugin / drivers? Thanks so much for all your work!
  2. Ah yes, should have re-scanned the thread. It's really curious why some setups (e.g. mine) do not experience this phenomenon at all. My drives are WDC 4TB and WD/HGST 12TB, on an LSI/SM SAS controller. They spin up/down perfectly, same as they did in 6.9.1.
  3. Curious: Does anyone have this problem in 6.9.2 or in 6.10.0-rc1 with drives that are not Seagate? Trying to narrow down the root cause.
  4. Somewhat related, the NerdPack has a bug that manifests itself when the Unraid OS version has a minor version number > 9 (i.e. double digit). I posted in the thread and on Github; hopefully the author will pick it up Soon.
  5. The NerdPack plugin code has an issue with Unraid versions whose minor version number is higher than 9: it parses the version string incorrectly, and version 6.10 triggers the bug 😞 I opened an issue on Github several days ago, here (did not submit a pull req so far). See the sketch below for the gist of the pitfall.
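
     The gist of the pitfall, as a hypothetical sketch (not the plugin's actual code): treating the version string as a decimal number makes 6.10 compare lower than 6.9, whereas a version-aware comparison handles it correctly.

        # Hypothetical illustration only -- not the NerdPack code.
        # Naive numeric comparison: 6.10 is read as 6.1, which compares "older" than 6.9.
        awk 'BEGIN { print (6.10 > 6.9) ? "newer" : "older" }'        # prints "older" -- wrong
        # A version-aware comparison gets it right:
        printf '%s\n' 6.9.2 6.10.0 | sort -V | tail -n1               # prints 6.10.0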
  6. Oops - missed Simon's post... 🤦‍♂️
  7. You guys do mean 27/28 not 37/38 I presume.
  9. What Simon said. Basically, as of 6.9.2, some SAS drives (mainly Seagates) and also some SATA drives have an issue with spindown. Essentially they appear to spin down but then immediately spin back up. Not sure yet about the source of this issue; it might be kernel/driver related, and is probably not related to the plugin (as (a) it happens with SATA drives as well, and (b) some SAS drives spin down and up perfectly under 6.9.2).
  10. Hi, thanks for posting. Sure, they're generally decent drives, but you will probably need to live with them spinning 24x7 😞 Basically, there's conflicting data as to their behavior. I started out excluding them, then started a mini-project of collecting data points from users. Since I did receive a couple of positive data points for these drives, I commented the exclusion out, "for now".

      My controller is based on the same chip, and I use HGST drives; they spin down/up without a hitch. So it could be a combination of the controller/HDDs, or just the latter. I tend to believe it's the latter (the HDDs), but the jury's still out. At any rate, these drives have produced many more thumbs-down data points than thumbs-up.

      As I said, I started collecting this data; whatever seemed conclusive is in the exclusions file. Perhaps compiling it into a list of "what works" may indeed be a good idea, time permitting.

      Not as far as I'm aware, but that's up to Limetech to answer authoritatively.
  11. Either the BP will not physically accept SAS drives to begin with (as noted by @JorgeB above) due to the connectors being SATA only, or the controller will see a SAS drive(*). Neither the cables nor the BP will change the protocol spoken by the drive from SAS (essentially, SCSI) to SATA (essentially, ATA). (*) In some cases, speed might be negotiated down (e.g. 6Gb/s SAS instead of 12Gb/s SAS).
  12. Yes, it does. That's why you would opt for New Config - which will have Unraid see the new IDs. The important thing is that the data on the drives, as seen by Unraid, will be bit-identical, and with the presumption above holding, it should be. And of course, you'd need to make very sure the drives maintain their Unraid slots - Or Else 🙂
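
     Optional, but cheap insurance (a sketch; assumes a stock Unraid layout with the flash drive mounted at /boot): record the current assignments before doing the New Config, so you can cross-check the slots afterwards.

        # Keep a copy of Unraid's disk-slot assignment file (super.dat) before the change.
        cp /boot/config/super.dat /boot/config/super.dat.before-newconfig
        # And a human-readable list of drive serials/devices for cross-checking later
        # (filter out the per-partition symlinks).
        ls -l /dev/disk/by-id/ | grep -v -- -part > /boot/disk-ids-before.txt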
  13. I'm not overly familiar with Proxmox specifically, but assuming its HDD passthrough passes the entire physical drive to the VM, you should be fine with the process you described (also check the "parity is valid" checkbox). If the drive seen by Unraid is block/sector-wise identical to what it was seeing previously, this should work. The remaining question would be whether you would actually be able to pass the SATA controller. Not all on-board controllers can be passed through; this would be mobo-dependent. But you'll figure that one out pretty quickly 🙂
  14. @BurntOC, here are some related messages from this forum: This and this.
  15. Not quite. You can pass your SATA drives as RDM (see other posts in this subforum, can't link right now). I've been doing that for years and it has worked very well: SMART, spindown, performance, the works. There's a "tribal knowledge" thing going around that these things do not work in that scenario; not the case in my experience. A rough sketch of the RDM setup is below.
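
     For reference, a rough sketch of how such a physical-mode RDM is typically created on ESXi; physical compatibility mode (-z) is what lets SMART commands reach the drive. The device identifier and datastore path below are placeholders, not values from the original post.

        # Create a physical-compatibility RDM pointer file on a datastore, pointing at the raw SATA disk.
        # t10.ATA_____EXAMPLE_DISK_SERIAL is a placeholder; list /vmfs/devices/disks/ to find yours.
        vmkfstools -z /vmfs/devices/disks/t10.ATA_____EXAMPLE_DISK_SERIAL \
          /vmfs/volumes/datastore1/unraid/disk1-rdm.vmdk
        # Then attach disk1-rdm.vmdk to the Unraid VM as an existing disk.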
  16. Will wait for 6.10 to solve.
  17. Limiting the passphrase characters in a very conservative way was a response to some difficulties people experienced when inputting these phrases via the GUI, through various, not-100%-compatible versions of Unraid. I decided then to make it quite restrictive; I can't recall whether the underscore was a deliberate omission or not. At any rate, you can enter any passphrase you want, with any characters you like, by using a keyfile (see the sketch below).
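
     A minimal example of the keyfile route (assumption: your setup reads the keyfile from /root/keyfile; adjust to wherever yours actually expects it):

        # Write the passphrase -- any characters you like, underscores included -- into a keyfile,
        # with no trailing newline, and lock down its permissions.
        printf '%s' 'Any_pass-phrase you like, with #strange! chars' > /root/keyfile
        chmod 600 /root/keyfile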
  18. It is a 6.9.2 issue. See above for an open bug thread about it. The issue occurs with both SATA and SAS drives (although not all drives), and is seemingly unrelated to this plugin.
  19. I've been running Unraid under ESXi 6 for quite a few years. It's been rock solid. I haven't done any scientific performance measurements of bare iron vs. VM, but have not seen any performance issues. One small oddity that I haven't fully investigated is what appears to be high CPU usage sometimes when an array drive spins up from standby. Not consistently reproducible and not a real issue, so I haven't taken the time.
  20. Thanks for reporting. Just to make sure you're seeing an instance of the recently reported 6.9.2 issue (you probably are; see above for more details): in your system log, do you see the "SAS Assist" spindown messages, immediately followed by a "Read SMART" message for the same drive?
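
     If it helps, a quick way (just a sketch) to pull the relevant syslog lines out for a look:

        # Show spindown / SMART-read activity with timestamps, so a spindown immediately
        # followed by a "read SMART" for the same drive stands out.
        grep -iE 'sas assist|spindown|read smart' /var/log/syslog | tail -n 40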
  21. I'd check the CMOS battery. From the manual of your mobo:

      Losing the System's Setup Configuration
      1. Make sure that you are using a high-quality power supply. A poor-quality power supply may cause the system to lose the CMOS setup information. Refer to Chapter 2 for details on recommended power supplies.
      2. The battery on your motherboard may be old. Check to verify that it still supplies ~3VDC. If it does not, replace it with a new one.
      3. If the above steps do not fix the setup configuration problem, contact your vendor for repairs.
  22. @bonienlfyi - this is still the case - probably worth moving from "prereleases" to proper bug reports?
  23. It kind-of works (if you use it with --no-verify, since gpg isn't available), but it will downgrade your db to 7.0 rather than upgrade it to the latest... I guess it can be tweaked to do the right thing. Or, you can use the above.
  24. To update drivedb.h you can add something like this to your go script:

        wget https://raw.githubusercontent.com/smartmontools/smartmontools/master/smartmontools/drivedb.h -O /usr/share/smartmontools/drivedb.h

      This will install the latest smartmontools drive database. Whether it will cover a given drive is a different question 🙂
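
     If you want to be a little more careful (a sketch, using the same URL and path as above): fetch to a temporary file first, so a failed download doesn't clobber the existing database.

        # Download to a temp file and only replace drivedb.h if the fetch succeeded.
        TMP=$(mktemp)
        if wget -q https://raw.githubusercontent.com/smartmontools/smartmontools/master/smartmontools/drivedb.h -O "$TMP"; then
            mv "$TMP" /usr/share/smartmontools/drivedb.h
        else
            rm -f "$TMP"
        fi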