FlamongOle

Community Developer
  • Posts
    548


About FlamongOle

  • Birthday
    June 3
  • Gender
    Male
  • URL
    https://ubrukelig.net
  • Location
    Norway


  1. I see that the "CTRL CMD" is active, but I don't really see why it shouldn't just be auto-detected. In the per-disk settings in Unraid, you might have set a "SMART controller type" manually instead of leaving it at "Automatic". Maybe you had to for a reason, but NVMe and SATA are usually auto-detected. Also, I think "ATA" is not really common these days anymore, as SATA uses the SCSI protocol (correct me if I'm wrong)? Under "Disk Settings" (Unraid settings) you will find "Global disk settings"; ensure "SMART controller type" is set to Automatic there as well. The sg0 device is typically the USB drive containing the Unraid OS. Let the OS auto-detect the SMART controller type unless you must specify it to get things working, which I think is mostly needed for special RAID cards, HD enclosures and the like; see the quick check below.
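     If you want to verify what auto-detection picks, a quick check from the console could look like this (device names are placeholders; only force a type if automatic detection fails):

       # Default behaviour, equivalent to "-d auto" / the "Automatic" setting:
       smartctl -i /dev/sdX
       # Forcing a type is normally only needed behind USB bridges or RAID HBAs:
       smartctl -i -d sat /dev/sdX     # SATA drive behind a SCSI-to-ATA (USB) bridge
       smartctl -i -d nvme /dev/nvme0  # NVMe device, normally auto-detected anyway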
  2. Hard to say; which versions of things are you running? And is there anything special in the system or PHP log files?
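     If you're unsure where to look, Unraid writes its system log to /var/log/syslog, so something like this (plugin name assumed here) can surface related errors:

       # Show anything the plugin has logged recently:
       grep -i disklocation /var/log/syslog | tail -n 50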
  3. Do not replace or RMA it yet; I'm not sure it's the drive. To find out, you can try: smartctl -x --all --json /dev/nvme1n1
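     If you have jq available, a sketch like this pulls out the fields that usually tell you whether the drive itself is failing (the JSON key below is what recent smartctl versions emit for NVMe devices):

       # Extract the NVMe health section from the JSON report:
       smartctl -x --json /dev/nvme1n1 | jq '.nvme_smart_health_information_log'
       # media_errors, available_spare and percentage_used are the usual suspects:
       smartctl -x --json /dev/nvme1n1 | jq '.nvme_smart_health_information_log | {media_errors, available_spare, percentage_used}'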
  4. I run Unraid 6.12.9 and don't notice any errors with the cronjob. The first post mentions "corrupted size vs prev_size"; this has nothing to do with Disk Location. You can always try to run the cron manually, either via /etc/cron.hourly/disklocation.sh or, to get output: php /usr/local/emhttp/plugins/disklocation/pages/cron_disklocation.php cronjob
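     When running it manually, it can help to keep a copy of the output for troubleshooting; one simple way (the log path is just a suggestion):

       # Run the plugin's cron job by hand and save the output:
       php /usr/local/emhttp/plugins/disklocation/pages/cron_disklocation.php cronjob 2>&1 | tee /tmp/disklocation_cron.log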
  5. Yes, disklocation and disklocation-master/devel if it exists.
  6. If the first name is "Parity", "Data" or "Cache", it is detected as a traditional Unraid pool. If it just has a name like "Luna" or "Merlin", it is detected as a ZFS pool; see the sketch below. "Cache: merlin*" and "Merlin" are not the same pool (or at least they shouldn't be). Unassigned devices are only added if you add them there; for all I know, adding the new drives in the wrong slots might have caused this. I'm not sure how it would change anything otherwise. The serialized model name + serial number string is assigned to a drive slot (TrayID) and would not be moved anywhere by itself. There have been modifications to how the different pools are prioritized; as of now, the ZFS pools sort above the traditional Unraid array. It might be time to end the "traditional Unraid" way of labeling, as Unraid now allows for multiple pools etc., which was not a thing back in the day. I haven't made up my mind about what the best option would be here; there are more things to be updated and considered for this at this point.
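     As a minimal sketch of the naming rule described above (not the plugin's actual code), the classification boils down to something like:

       # Names starting with Parity, Data or Cache -> traditional Unraid array;
       # anything else -> ZFS pool.
       classify_pool() {
         case "$1" in
           Parity*|Data*|Cache*) echo "traditional Unraid pool" ;;
           *)                    echo "ZFS pool" ;;
         esac
       }
       classify_pool "Cache"   # -> traditional Unraid pool
       classify_pool "Merlin"  # -> ZFS pool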
  7. Currently no; your observation is correct. For now you can still increase the global setting to treat it as a "worst case scenario" if it bothers you. I might look into Unraid's settings and use those directly at a later point.
  8. Just to add some more here: if the device is under a ZFS filesystem, the plugin won't see it, as it uses info from zpool status. It must be under an Unraid array.
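     Since the plugin reads ZFS state from zpool status, you can compare what it sees by running the same query yourself (the pool name here is a placeholder):

       # Show the pool's devices and their state as ZFS reports them:
       zpool status -v Merlin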
  9. There should be no change with that. Idle is the same as active: the disk is spinning. Standby means it is spun down. It won't show instantly whether a disk has spun up or down, as it might take up to 5 minutes to gather new data, and that is a recent, intended change. This plugin works semi-independently of Unraid's disk statuses.
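     If you want to check the spin state yourself, outside the plugin (the device name is a placeholder):

       # Classic way for SATA drives; reports "active/idle" or "standby":
       hdparm -C /dev/sdX
       # Checks SMART without waking a drive that is already spun down:
       smartctl -i -n standby /dev/sdX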
  10. Update 2024.03.22 Commit #307 - MINOR: Just a minor cleanup of the JavaScript handling and the code of the Locate script. Also added a check for whether a device is assigned or unassigned, so that Locate works when there are unassigned devices in the list, which is kind of the point. Thanks to @warwolf7, who patiently explained the situation well enough for me to finally understand the problem.
  11. Aha, I'll have another look tomorrow or so.
  12. I tried both, and things around it, but whatever I did, it would not start the locate script, nor the blinking of the disk visualization. The next release will revert the "-1" change, as it won't accept a "Stop" click on the last device. I also cleaned up a place where the script was loaded twice in different positions. But that won't change your error code, which I don't seem to get myself. Out of curiosity, which browser do you use? It works without problems on Chrome, Vivaldi and Firefox for me.
  13. Update 2024.03.21b b) Fixed a minor issue when fixing #309 below. Commit #309 - BUG: If disklocation.conf wasn't created (which is expected in some cases), the recent install scripts would fail. Now it checks whether that file exists before trying to pull information out of it; otherwise it uses the default location of the database. Commit #307 - ISSUE: Yet again changed a bit of the JavaScript based upon information from @warwolf7 (I hate JavaScript). This should fix the issues during boot of the server; I did not know it would stall the entire server if a plugin install failed. However, deleting the "disklocation" folder should almost never be necessary, and doing so will also delete the database and the backups created; deleting the .plg file would be enough in this case. You can always restore them from an Unraid backup, if you have one (you should have one...). The JavaScript change is minor, and since I could not get @warwolf7's line working, I instead did a simple "-1" in the loop to skip the empty array. I hate JavaScript.
  14. Update 2024.03.20 Commit #307 - ISSUE: Cleaned up the JavaScript for the Locate functions; it should hopefully work better overall now. Tested with Chromium-based browsers and Firefox. @warwolf7, you can try now with this update; hopefully that'll fix it.
  15. Update 2024.03.19 Commit #305 - IMPROVEMENT: Skip updating devices during install if the database version matches the current database version. This makes updates faster for existing installs, except when the database is getting an upgrade. Commit #304 - IMPROVEMENT: Prioritize ZFS data over Unraid data if it's not a traditional Unraid array. Only visible on the device and dashboard pages (e.g. a device shows as ONLINE (ZFS) instead of ACTIVE (Unraid)). This also uses the ZFS pool name instead of the Unraid type, which defaulted to "Cache".
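     To see the actual ZFS pool names and health that the plugin now prefers over the Unraid "Cache" label, a quick check is:

       # Script-friendly list of pool names and their health:
       zpool list -H -o name,health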