FlamongOle

Community Developer
Everything posted by FlamongOle

  1. Where? Under "Configuration" you should; under "Tray Allocation" you shouldn't. The point is that it shouldn't mix up Unraid-assigned devices with unassigned ones, but you can still manually force a custom color, unless you choose one of the default colors specified under "Configuration". If you pick any color specified under "Configuration" as a "custom color", it will reset, just like choosing the "Gray/Empty" color.
  2. Update 2020.02.19 Commit #124 - MINOR ISSUE/IMPROVEMENT: Added a "Reset All Color" button with accompanying informative text to clarify the coloring of devices.
  3. You have the same colour for unassigned and data drives. The colour picker always lists the chosen colours first, in this order from the left: parity, data, cache, unassigned, empty. The extra two added ones are the default colours for data and cache (which are replaced by your custom colour). This behaviour is expected.
  4. Update 2020.01.27
     Commit #122 - IMPROVEMENT: "Force scan all" should check all drives in the database except manually deleted (hidden) ones. Earlier it scanned only the assigned devices, leaving some removed devices in the assignment list instead of the "not found" list.
     Commit #121 - IMPROVEMENT: Better use of FontAwesome; PNG icon files removed.
  5. @CHBMB I appreciate that it's a complicated plugin; take your time. Thanks! I use it for Emby transcoding and for BOINC with Science United!
  6. Oh sweet! I was thinking about creating a plugin/dashboard thing for this tool as well. Glad I didn't have to do it! My Corsair HX1200i is satisfied. Looks good!
  7. NFS outdated? It is for sure old, but still widely used and the logical choice for Linux users anyway. I tried using Samba without much more luck, really; the problem seemed similar as long as I tried to mount it in fstab. Anyway, to the important bit: changing the hard link tunable to "no" seems to have solved the issue. Just to be sure here, is there anything else affected by this? Does Unraid use hard links, and might this tunable break some functionality? Or is it just that I can't make hard links myself anymore (which I don't use anyway)? fuse_remember has always worked for me at 330, so I will leave it at the default value. Anyway, thanks for the update! Tip: add a note that stale file handles might cause issues with hard link support turned on, maybe even on the NFS page itself; that would make sense? (A quick way to check whether hard links still work on a share is sketched after this list.)
  8. In syslog you would find this:
     Jan 2 21:05:30 Odin emhttpd: req (28): cmdStartMover=Move now&csrf_token=****************
     Jan 2 21:05:30 Odin emhttpd: shcmd (2519): /usr/local/sbin/mover |& logger &
     Jan 2 21:05:30 Odin root: mover: started
     Jan 2 21:05:30 Odin move: move: file /mnt/cache/storage_ole/testfile
     Jan 2 21:05:31 Odin root: mover: finished
     File and folder names might reveal things you don't want released, and they should probably be scrambled regardless of settings (at least everything after the main share name). The syslog file is just included in the diagnostics without this data being anonymized, even when the anonymize option is checked. (A rough clean-up pass one could run over the syslog is sketched after this list.)
  9. Alright, I have to ask again, because something really strange is happening here. The only affected NFS shares are the ones with a "Cache" drive enabled. Reading/writing directly to:
     - UD devices: no problems.
     - Cache/scratch: no problems.
     - Unraid data with cache: stale handles after a file has been moved (but not always; sometimes it goes stale instantly when a file has been created).
     - Unraid data without cache: no problems (I deactivated the cache for the shares which had problems).
     The NFS mount options are identical everywhere (I tried static mounts (my default) and autofs), and I even got the same problem with Samba, but only when it is mounted via fstab on the client side. I simply can't see why this problem would be on my end. Running without a cache drive is NOT a solution, just a workaround. (The steps I use to reproduce it are sketched after this list.)
  10. Changed Status to Closed. Changed Priority to Minor.
  11. It turns out it happens with my Kubuntu 19.10 install; dmesg is filled up with:
      [ 307.567757] NFS: server 192.168.5.10 error: fileid changed fsid 0:59: expected fileid 0xfd00000301de1fda, got 0xfd0500006014b341
      [ 307.567965] NFS: server 192.168.5.10 error: fileid changed fsid 0:59: expected fileid 0xfd00000301de1fda, got 0xfd0500006014b341
      ..when I create a file on a share; after refreshing the folder I get a stale file handle. I can access it through Samba without any problems, and it works with NFS in Win10 Pro (even if it spits out a load of entries into the syslog on Unraid). (A way to see the changing fileid on the server is sketched after this list.)
  12. I don't see why this is related to UD at all. UD only shares one device, "Data", and does NOT mount anything from a remote location. Unraid/UD does not have anything mounted from remote devices at all. So far this breaks the functionality for me, so I see it as correct to use "Urgent".
  13. Must add that the 9000 MTU 10GbE NICs ran entirely on their own local network, directly connected between the two NICs, and should not even conflict with the regular network. But it was also tested with 1500 MTU, like all the other cards. The network connection here is quite solid overall and has never dropped out. Also, someone mentioned the same problem with 6.8.0-rc5 here in the forums as well. He/she "fixed" it by disabling the cache, which is also not a real fix.
  14. There's a reason why I chose to use logging: I want to make sure the Mover works as intended. And as long as we have the option of creating an anonymized diagnostics file, this should be excluded/scrambled as well. Turning it off does not remove the Mover log entries until you reboot, when the syslog is written from scratch again, so that does not solve the problem unless it was off the entire time (which is the default). It still should be fixed.
  15. All NICs are running at 1500 MTU now, no difference whatsoever.
  16. The 1GbE NIC uses the default MTU, and there was no change. However, it worked for over a year without problems with jumbo frames on the 10GbE. I doubt this is a network problem, as it happens on two entirely different network connections and subnets.
  17. Changed Status to Open. Changed Priority to Minor.
  18. I decided to just look through the entire diagnostics file to check whether it really is anonymous. It is not: if "mover" logging is enabled, it writes every move, with path and filename, into the syslog, and the diagnostics do NOT anonymize this data. Quoted from the diagnostics page: Yes, it says it will back up the syslog, but I expected it to run through an anonymizing filter first. This is a serious security flaw for users, not good!
  19. Alright, it doesn't matter whether I use the standard 1GbE connection or 10GbE with jumbo frames. I still get a stale file handle after the mover has run, having created just a simple "test" file with nothing in it on a cached share.
  20. I will try my 1GbE connection with 1500 MTU instead, just to check. I recently replaced one 10GbE card with another brand for a newer PCIe connection.
  21. Client side; only two examples given, but the rest use the same options. First = Unraid, second = UD:
      # mount
      192.168.5.10:/mnt/user/scratch on /mnt/scratch type nfs (rw,nosuid,nodev,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,timeo=14,retrans=2,sec=sys,mountaddr=192.168.5.10,mountvers=3,mountport=38043,mountproto=tcp,local_lock=none,addr=192.168.5.10,user=ole)
      192.168.5.10:/mnt/disks/Data/private/ole on /mnt/private type nfs (rw,nosuid,nodev,noexec,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,timeo=14,retrans=2,sec=sys,mountaddr=192.168.5.10,mountvers=3,mountport=38043,mountproto=tcp,local_lock=none,addr=192.168.5.10,user=ole)
      Server side:
      # exportfs
      /mnt/user/scratch 192.168.5.40
      /mnt/disks/Data 192.168.0.0/20
      (Roughly equivalent fstab entries are sketched after this list.)
  22. I use 9000-byte jumbo frames for my 10GbE connection only. Both cards support 9000 or even above and have worked without issues before. It is a direct connection from the Unraid server to the client. (Checking and changing the MTU is sketched below.)
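
Sketch for post 7: a minimal way to check whether hard links still work on an NFS-mounted share after changing the hard link tunable. The mount point /mnt/scratch is only an example; any writable NFS mount will do.

    cd /mnt/scratch
    touch linktest
    ln linktest linktest.hard && echo "hard links work" || echo "hard links refused"
    stat -c '%h %i %n' linktest linktest.hard    # matching inode and a link count of 2 means they work
    rm -f linktest linktest.hard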
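Sketch for post 8: a rough clean-up pass one could run over a copy of the syslog before sharing it, assuming the goal is to blank out everything after the share name in mover entries. The sed expression is only an illustration, not what the anonymize option actually does.

    # quick and dirty: keeps /mnt/cache|diskN|user/<share> and replaces the rest of the path;
    # paths containing spaces will only be partially scrubbed
    sed -E 's#(/mnt/(cache|disk[0-9]+|user)/[^/ ]+)/[^ ]*#\1/...#g' /var/log/syslog > syslog.scrubbed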
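Sketch for post 9: the steps used to reproduce the stale handle on a cache-enabled share. /mnt/storage_ole is an example client-side mount point; the mover path is the one visible in the syslog quoted in post 8.

    # on the NFS client: create a file on a cache-enabled share
    touch /mnt/storage_ole/testfile

    # on the Unraid server: run the mover manually so the file moves from cache to the array
    /usr/local/sbin/mover

    # back on the client: on affected setups this now returns "Stale file handle"
    stat /mnt/storage_ole/testfile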
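Sketch for post 11: one way to see why the client reports "fileid changed" would be to compare inode numbers on the server before and after the mover runs, assuming the file starts on the cache and ends up on disk1 (disk1 is only an example).

    stat -c '%i  %n' /mnt/cache/storage_ole/testfile   # before the mover runs
    stat -c '%i  %n' /mnt/disk1/storage_ole/testfile   # after the mover runs
    # /mnt/user presents both locations under the same path, which is presumably
    # why the NFS client sees the fileid change for what it thinks is the same file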
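Sketch for post 21: roughly equivalent /etc/fstab entries for the two mounts shown in that post, keeping only the options normally set by hand (the rest are negotiated at mount time). Treat this as an illustration of the static mounts mentioned there, not a copy of the actual fstab.

    192.168.5.10:/mnt/user/scratch            /mnt/scratch  nfs  rw,nosuid,nodev,vers=3,hard,timeo=14,retrans=2         0 0
    192.168.5.10:/mnt/disks/Data/private/ole  /mnt/private  nfs  rw,nosuid,nodev,noexec,vers=3,hard,timeo=14,retrans=2  0 0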
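Sketch for post 22 (and the other MTU posts): checking and temporarily changing the MTU with iproute2. The interface name eth1 is an assumption; substitute the actual 10GbE interface.

    ip link show eth1 | grep -o 'mtu [0-9]*'   # show the current MTU
    ip link set dev eth1 mtu 1500              # drop back to the standard 1500 for testing
    ip link set dev eth1 mtu 9000              # restore jumbo frames afterwards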