FlamongOle
Community Developer
Posts: 548

Everything posted by FlamongOle

  1. I'm not mounting any NFS shares to Unraid; I am using Unraid as a server only. UD only has NFS sharing enabled on one share, "Data".
  2. disk "sdg" and "sdf" are irrelevant as they are only for local online backup and not really part of the NFS. .60 machine is a windows 10 with NFS client, but dunno if that is affected as this isn't that much used anyway. Look at .0.40 / .5.40 for the correct connection and NFS shares. I use both UD and Unraid NFS to mount with own mount options for my set permissions.
  3. Just to add: I need to "sudo umount" the share when the error occurs (it is user-mountable on the client side) and then remount it to get access again (a regular user mount); example commands are sketched after this list.
  4. Every time I move a file from one location to another within the NFS-shared locations (e.g. from a shared scratch location on NVMe to a cached location on a mechanical disk), I constantly get stale file handles and the client drops out. This can be critical, as some VMs use the same mounted shares; I generally have bad luck with write permissions over 9p in VMs. It is also quite annoying for my regular workstation, as I suddenly lose access due to a stale handle after the Mover has been running. I don't know if this is only related to cached shares (it looks that way), or if it happens whenever something has changed on one of the disks in general. The problem did not happen under 6.7.x or earlier. I have tried fuse_remember values of the standard 330 (which I used successfully in earlier versions), 900 just to try a higher number, and also -1 since I have plenty of memory, though I'm not sure I want that kind of cache to last that long; honestly, I can't find a proper explanation of what this setting actually does. It makes Unraid almost entirely unusable for me, and I can't figure out why this suddenly happens. I hope it's something wrong on my end, but I haven't really changed anything in the last 4-5 years (even before my Unraid time). odin-diagnostics-20200101-2012.zip
  5. Update 2019.12.30 Happy new year! Commit #119 - MINOR ISSUE/BUG: Fixed the dropdown list in "Tray Allocations" to show all custom colors. Only "Empty color" can now be chosen for resetting to default colors, for simplicity. @ICDeadPpl: this should simplify it for you.
  6. The default colors are added; just choose one of the first 4 colors (1=parity / 2=data / 3=cache / 4=unassigned) and it should reset to default (even if you choose the wrong one). Otherwise, a bug was discovered where not all colors are added to the list on that page, but a reset should already be possible.
  7. Follow that thread and you will see it links to another post about a workaround. It seems like I am affected by the same thing: every time something is written to the cache drive from another computer, a Docker container, or a VM, I get a stale handle.
  8. Update 2019.12.24 Happy holidays (for real this time)! Commit #117 - MINOR ISSUE: Fixed LED for empty tray which was misaligned.
  9. Update 2019.12.23 Happy holidays! Commit #114 - IMPROVEMENT: Changed the color to use the default Unraid class instead of the regular "green" and "red". Commit #113 - IMPROVEMENT: Added minor tweaks to the LEDs; mouse-over and design should work better in vertical mode. Commit #112 - MINOR ISSUE: Added forgotten stripslashes and htmlspecialchars for the group name; quotes etc. should now work.
  10. Ever since upgrading to 6.8.0 from the previous stable release I have been getting stale NFS handles all the time on different connected "drives". I see some had the problem under rc5 but solved it with a workaround; the problem seems to persist under the stable release as well. I set this as "urgent" since NFS + cache is about 90% of the reason I use Unraid.
  11. 1. You can only choose a custom color that is -not- already used for another type of drive. If you have a "Data" disk and choose the "Unassigned" drive color, it will not be accepted and will revert to its default. If you want the same color for Data and Unassigned, it must be defined as a default in the Configuration tab, not under "Tray Allocations". 2. Add a group!
  12. I need a bit more information than this. Did you force scan the disks after inserting the new ones, or whatever it was you did? If there are any disk changes, a force scan must be done, and maybe a reassignment afterwards for the new drives. In my mind it seems correct to have 2 before 3; that's how I count. Please post a screenshot of the dashboard issue.
  13. Thanks for the feedback. The quotes are probably stored in the database, but I might have forgotten to strip them. I will fix this when I get some time, which won't be soon, as I have busy days in December. This plugin uses grouping, so use that for separating the different parts of the server. Even if it won't match 100% of the server layout (which isn't what this plugin is for), you will still know which drives are which. Maybe in the future.
  14. Update 2019.11.19 Commit #110 - IMPROVEMENT: Centered the tray config on the Dashboard page because someone got an OCD attack 😛
  15. This plugin is only built for stable releases. However, it's likely you missed the earlier change where the plugin was divided into two sections: "Disk Location", found under "Tools", and "Disk Location (Configuration)", found under "Settings".
  16. Make sure the drives output SMART data with at least a "model name" and "serial number" per drive. If the serial number is hidden or just zeroes, this will fail and will probably overwrite each drive of the same model name, showing only the last one. Check if your controllers can pass the SMART data through; maybe your RAID controllers have a setting for this. Do a SMART check (smartctl -i /dev/sgX) for each drive listed in "lsscsi -u -g"; a small loop for this is sketched after this list.
  17. Update 2019.09.27 Commit #108 - BUG: Missing bracket in the CSS file caused a color error for the LEDs.
  18. Update 2019.09.26 Commit #105 - FEATURE: Added force removal of drives stuck in the database for any reason, under a new tab called "Drives" under Configuration. This has slimmed down the "Information" page, which now just shows the info without the control/operation buttons. Some bugfixes were applied to these buttons as well. Commit #104 - IMPROVEMENT: Tray assignment was unclear because of TrayID assignment in some cases; made it clearer with added information. @Melocco, try this new feature, I hope it works. @tgrdn, maybe the tray allocation is better and clearer now with the new update.
  19. Alright, I see multiple drives have the same name. It's hard to figure out what happened here. What can perhaps cause this issue is replacing multiple drives and assigning them to slots before a force scan has been made (or a reboot, if hot-swapping only partly works) to make sure the drives end up in the correct list (flag). I would probably delete the database and start over; that is the easy solution at least. If you know how to deal with the terminal and SQLite, you can set the flag to "r" for the removed disks manually (you might also need to clear out/null the "devicenode"); a hedged SQLite example is sketched after this list. Just make a backup of the existing database before you edit it. Or just wait long enough until I bother making a "Remove disk" button.
  20. Did you try "Force rescan all"? This should remove old drives and add them to the "Information" view instead, as a history reference.
  21. Hi, I see you have made the same mistake as many others: under the configuration of tray allocations, you assign devices based upon IDs and NOT the actual number you define. Otherwise the colors seem to match all the pictures as far as I can tell. The configuration will always show 1, 2, 3, ... regardless of your settings; this is also stated under "Help" (click that button). Example: you add a group of 5 drives and want it to count backwards so it looks like 5|4|3|2|1. In the configuration, because it is based upon ID, it will show as 1|2|3|4|5. Meaning: you set drive '1' to be the first drive from the left, physically seen. The devices do not rearrange if you suddenly choose different counting properties; only the displayed numbers change, and the drives stay in place because of their IDs.
  22. Ah, and to make it clear: the dates are only transferred if the drives are part of the Unraid array. It does not work with unassigned devices, as that doesn't seem to actually store the data per disk; it looked like it stored the last input entered for -all- of the unassigned devices as one common unit. Because of that, the plugin does not use that data, and you have to specify those dates manually via the plugin.
  23. Make sure you haven't specified a custom color for the drive; just choose any of the default colors and it should reset to the standard scheme. At least make sure of that, as I don't know why else it would behave like that.
  24. I don't know why your SSD shows up like that; it might be lacking some information. The model name and serial number are required and their combination has to be unique, so make sure they are not blank and don't contain odd characters. If "Force Scan All" does not work, it might be because the old name/serial number and the rest of the data are still showing as the old disk. This is something the system itself handles, and the only way I know to fix it is to reboot the server; at least that is often the case with hot-swapping, as far as I have experienced. If it still shows up even after the database is deleted, there's nothing I can do.
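
As a reference for the client-side NFS mount options mentioned in post 2, here is a minimal, hypothetical /etc/fstab line for a Linux client; the server address, export path, mount point, NFS version, and options are assumptions for illustration, not the exact settings from the post.

    # hypothetical client-side fstab entry; adjust server, export path, mount point, and options
    192.168.0.40:/mnt/user/Data  /mnt/data  nfs  rw,hard,user,noauto,vers=3  0  0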
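
A minimal sketch of the workaround described in post 3, assuming the share is user-mountable on the client and mounted at a hypothetical /mnt/data:

    # run on the client when the stale handle appears
    sudo umount /mnt/data    # drop the stale mount
    mount /mnt/data          # remount as a regular user (requires a 'user' entry in /etc/fstab)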
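
For the SMART check suggested in post 16, a small shell loop that prints the model/product and serial number for every SCSI generic device; it assumes smartmontools and lsscsi are installed and that the relevant drives appear as /dev/sg* nodes:

    # list devices together with their /dev/sg nodes
    lsscsi -u -g
    # print identity info for each sg device
    for dev in /dev/sg*; do
        echo "== $dev =="
        smartctl -i "$dev" | grep -Ei 'model|product|serial'
    done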
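
For the manual SQLite fix mentioned in post 19, a hedged sketch only: the database path and table name below are assumptions (only the "flag" and "devicenode" columns are named in the post), so inspect the real schema with .schema first and back up the file before changing anything.

    # back up the plugin database first (path is an assumption)
    cp /boot/config/plugins/disklocation/disklocation.sqlite /boot/config/plugins/disklocation/disklocation.sqlite.bak
    # mark the removed disk and clear its device node (table name and lookup column are assumptions)
    sqlite3 /boot/config/plugins/disklocation/disklocation.sqlite \
      "UPDATE location SET flag='r', devicenode=NULL WHERE serial='REPLACE_WITH_SERIAL';"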