FlamongOle

Community Developer
  • Posts

    555
  • Joined

  • Last visited

1 Follower

About FlamongOle

  • Birthday June 3

Converted

  • Gender
    Male
  • URL
    https://ubrukelig.net
  • Location
    Norway


FlamongOle's Achievements

Enthusiast (6/14)

87 Reputation

  1. Nah, it's either disabling the cronjob entirely from Disk Location or leaving it as it is: "S.M.A.R.T updates" under Settings.
  2. Best is not to install anything unless you know what you're doing. It's relatively easy if the package is already built; other than that, wait for the next Unraid version that includes a newer smartmontools package. Earlier I used to include a newer version of smartmontools with the Disk Location plugin, but I don't want to maintain that. As for now, Disk Location treats both of your NVMe drives the same and can't tell the difference from the unique ID they have been given.
  3. Might be a related issue: https://github.com/smartmontools/smartmontools/issues/233
  4. Likely not Disk Location per se, but a bug related to smartmontools: https://github.com/smartmontools/smartmontools/issues/233 There is nothing clean and useful I can do about it.
  5. Never mind the first post; this is likely due to a bug in smartmontools, or the compiler used for it (might be related to Unraid OS somehow): https://github.com/smartmontools/smartmontools/issues/233
  6. See the next post and never mind the rest below. I've seen that report from someone else "recently", and I need to get some info from you, as the other person did not provide all the information I needed. In the terminal, please run:
     php /usr/local/emhttp/plugins/disklocation/pages/cron_disklocation.php cronjob
     There you should see which device(s) produce this information; it might be related to smartctl, but that is something I must find out. When you see which device outputs the fault, please give me the output of:
     smartctl -x --all --json /dev/<device>
     where <device> is the wrong device, e.g. sda or nvme1n1 (a short sketch after this list shows how to read that JSON output). Please share that output with me; you can do it in a PM if you don't want to reveal the serial number or other drive data. No content will be revealed.
  7. I'm baffled that you had to find this out the hard way. Try: Tools -> Disk Location -> System -> Backup -> 'click the backup' -> Restore
  8. I see that the "CTRL CMD" is active, but I don't really see why it shouldn't just be auto detected. In the per-disk settings in Unraid, you might have set a "SMART controller type" manually instead of leaving it on "Automatic"; maybe you had to do that for a reason, but usually NVMe and SATA are auto detected. Also, I think "ATA" is not really common these days anymore, as SATA uses the SCSI protocol (correct me if I'm wrong). Under "Disk Settings" (Unraid settings) you will find "Global disk settings"; ensure "SMART controller type" is set to Automatic there as well. The sg0 device is typically the USB drive containing the Unraid OS. Let the OS auto detect the SMART controller type unless you must specify it to get things working, but I think that's more common for special RAID cards, HD enclosures etc. (A sketch after this list shows how to check what smartctl auto detects.)
  9. Hard to say; which versions of things are you running? And is there anything special in the system/PHP log files?
  10. Do not replace and RMA it yet; I'm not sure it's the drive. But to find out, you can try: smartctl -x --all --json /dev/nvme1n1 (see the first sketch after this list for reading the output).
  11. I run Unraid 6.12.9 and don't notice any errors with the cronjob. The first post mentions "corrupted size vs prev_size"; this has nothing to do with Disk Location. You can always try to run the cron manually, either:
      /etc/cron.hourly/disklocation.sh
      or, to get output:
      php /usr/local/emhttp/plugins/disklocation/pages/cron_disklocation.php cronjob
  12. Yes, disklocation and disklocation-master/devel if they exist.
  13. If the first name is "Parity", "Data" or "Cache", it is detected as a traditional Unraid pool. If it just has a name like "Luna" or "Merlin", it is detected as a ZFS pool. "Cache: merlin*" and "Merlin" are not the same pool (or at least they shouldn't be). Unassigned devices are only added if you add them there; for all I know, adding the new drives in the wrong slots might have caused this. Not sure how it would change anything otherwise. The serialized model name + serial number string is assigned to a drive slot (TrayID) and would not move anywhere by itself. There have been modifications to how the different pools are prioritized; as of now the ZFS pools sort of take precedence over the traditional Unraid array. It might be time to end the "traditional Unraid" way of labeling, as Unraid now allows multiple pools etc., which was not a thing "back in the days". I haven't made up my mind what the best option would be here. There are more things to be updated and considered for this at this point. (The naming rule is illustrated in a sketch after this list.)
  14. Currently no. Your observation is correct. For now you can still increase the global setting to cover the "worst case scenario" if it bothers you. I might look into Unraid's settings and use those directly at a later point.
  15. Just to add some more here: if the drive is under a ZFS filesystem, it won't see it, as that information comes from zpool status. It must be under an Unraid array. (See the last sketch after this list.)
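
A few rough sketches follow, tied to the numbered posts above. First, for posts 6 and 10: a minimal PHP sketch of how the smartctl JSON output could be read, assuming the field names recent smartmontools builds produce (model_name, serial_number, smart_status.passed). The $device value and the maskSerial() helper are placeholders for this example only, not part of the Disk Location plugin.

    <?php
    // Minimal sketch: run smartctl with JSON output for one device and print
    // the fields that matter here. $device and maskSerial() are placeholders.
    $device = 'nvme1n1'; // change to the device that reports the wrong data
    $json = shell_exec('smartctl -x --all --json /dev/' . escapeshellarg($device));
    $data = json_decode((string) $json, true);
    if (!is_array($data)) {
        fwrite(STDERR, "smartctl did not return valid JSON\n");
        exit(1);
    }
    printf("Model:        %s\n", $data['model_name'] ?? 'unknown');
    printf("Serial:       %s\n", maskSerial($data['serial_number'] ?? ''));
    printf("SMART passed: %s\n", ($data['smart_status']['passed'] ?? false) ? 'yes' : 'no');

    function maskSerial(string $s): string
    {
        // Hide most of the serial number so the output is safe to post publicly.
        return strlen($s) > 4
            ? substr($s, 0, 2) . str_repeat('*', strlen($s) - 4) . substr($s, -2)
            : '****';
    }

Save it under any filename, run it with php, and compare the model and (masked) serial against what the Disk Location page shows for that tray.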
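For post 8: a small sketch that lists how smartctl auto detects each device's controller type, assuming the usual "smartctl --scan" output format; if it prints nothing on your system, the regular expression may need adjusting.

    <?php
    // Sketch: show how smartctl auto-detects each device's controller type.
    // If a device is listed with the expected type (nvme, sat, scsi, ...),
    // leaving "SMART controller type" on Automatic should normally be enough.
    $scan = (string) shell_exec('smartctl --scan');
    foreach (explode("\n", trim($scan)) as $line) {
        // Typical line: "/dev/sda -d sat # /dev/sda [SAT], ATA device"
        if (preg_match('#^(/dev/\S+)\s+-d\s+(\S+)#', $line, $m)) {
            echo $m[1] . ' => ' . $m[2] . "\n";
        }
    }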
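For post 13: an illustration of the naming rule described there. This is not the plugin's actual detection code, just the idea: slot names starting with Parity, Data or Cache count as the traditional Unraid array, anything else counts as a ZFS pool name.

    <?php
    // Illustration only -- NOT the plugin's actual code.
    function poolKind(string $slotName): string
    {
        // First word of the slot name, lowercased ("Cache: merlin" -> "cache").
        $first = strtolower((string) strtok($slotName, " :"));
        return in_array($first, ['parity', 'data', 'cache'], true) ? 'unraid' : 'zfs';
    }

    echo poolKind('Parity 2') . "\n";       // unraid
    echo poolKind('Cache: merlin') . "\n";  // unraid ("Cache: merlin" is not the ZFS pool "Merlin")
    echo poolKind('Merlin') . "\n";         // zfs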
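For post 15: a sketch of reading pool membership from zpool status, which is where the ZFS information comes from. It assumes "zpool status -P" prints full vdev paths and that the device appears under /dev; pools created with by-id paths would need a different match.

    <?php
    // Sketch: find which ZFS pool (if any) a device belongs to by scanning
    // "zpool status -P" (full vdev paths). $device is a placeholder; members
    // may show up as partitions (e.g. /dev/sdb1) or by-id links.
    $device = 'sdb';
    $pool = null;
    foreach (explode("\n", (string) shell_exec('zpool status -P')) as $line) {
        if (preg_match('/^\s*pool:\s*(\S+)/', $line, $m)) {
            $pool = $m[1]; // remember which pool section we are in
        } elseif ($pool !== null && strpos($line, '/dev/' . $device) !== false) {
            echo "$device is a member of ZFS pool $pool\n";
        }
    }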