
dlandon

Community Developer
  • Posts

    10,389
  • Joined

  • Last visited

  • Days Won

    20

Everything posted by dlandon

  1. I'm trying to help you understand what is causing these messages. They come from CIFS mounts. I asked that question to understand how you are mounting those CIFS shares so we can get to the bottom of why the messages are showing in the log, and I can't help until you provide more information. Your first post was a criticism of how you felt you were not getting the support you deserved. I'm trying to help you get the support you want, but you've got to help by providing additional information. It's very difficult to look at a log snippet of many repeating entries and provide an answer.
  2. Those messages are from a remote mounted SMB share. Are you mounting any in fstab, or are you using UD to mount remote shares?
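To see which mechanism is in play, a quick check from the console shows both; these are standard Linux commands, not UD features, and your fstab may simply have no CIFS entries:

```shell
# List any fstab-managed CIFS entries and any CIFS shares currently mounted.
# (grep exits non-zero when nothing matches, so "|| true" keeps this benign.)
grep cifs /etc/fstab || true   # fstab-managed CIFS mounts, if any
mount -t cifs                  # all currently mounted CIFS shares
```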
  3. I don't think there is a way to do that, since the vfs_recycle functionality is built into Samba and it doesn't support compression.
  4. It is still supported and working for Unraid 6.10. It may look like there is no development because it just works and doesn't need anything.
  5. Wsdd is not needed for SMB1, as you found out. It is used for Windows browsing if NetBIOS is not used.
  6. Can you roll back to 9.2 and let me know if it works? I suspect changes to SMB in Unraid 6.10.
  7. When did your remote shares last work? What version of Unraid and UD? There have been no changes in UD's SMB mounting for a long time. Unfortunately, I think your NAS bay is just too old. SMB1 has a lot of security issues and is no longer recommended. Note: I think you have mover logging turned on, and it is clogging your syslog. If you don't need it, you should turn it off.
  8. Go to Settings->SMB and enable NetBIOS and UD will use SMB1 if none of the other protocols will mount.
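To confirm from the console which SMB dialect a remote device will actually accept, you can try a manual mount with successively older versions. This is only a diagnostic sketch: the //NAS/share name, guest credentials, and /tmp/smbtest mount point are placeholders for your own setup.

```shell
# Try mounting with progressively older SMB dialects; the first that
# succeeds is the newest one the remote server supports.
# //NAS/share, the guest user, and /tmp/smbtest are placeholders.
mkdir -p /tmp/smbtest 2>/dev/null
mount -t cifs //NAS/share /tmp/smbtest -o username=guest,vers=3.0 2>/dev/null || \
mount -t cifs //NAS/share /tmp/smbtest -o username=guest,vers=2.0 2>/dev/null || \
mount -t cifs //NAS/share /tmp/smbtest -o username=guest,vers=1.0 2>/dev/null || \
echo "no SMB dialect mounted - server may only support SMB1 or be unreachable"
```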
  9. Lots of plugins put up the banner, including CA, File Manager, FCP, Recycle Bin, etc. The recent changes were to fix some issues with the Unraid 6.10 release and the changed ids of UD devices. When I see something that causes users issues, I will release an update. I try not to do a lot of unnecessary updates, but sometimes it's really necessary. The idea behind the banner is so users will keep plugins up to date. If that is not done, there will be posts about issues that have already been solved because users don't update.
  10. The latest changes I made changed the way that works. If you have 'Refresh Recycle Bin in the background' set, the recycle bin is not cleared right away; it will clear on the next refresh of the recycle bin sizes. If you set it to not refresh in the background, the recycle bin will clear immediately.
  11. It looks like you didn't unmount the disk before you physically removed it. If the mount button still shows 'Unmount', you'll need to reboot to clear it up. Don't just pull the disk out and expect it to work when you plug it back in.
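A quick way to confirm from the console that a disk is actually unmounted before unplugging it; /mnt/disks/MYDISK is a placeholder for your UD mount point:

```shell
# Check whether anything is still mounted at the UD mount point before
# pulling the disk. /mnt/disks/MYDISK is a placeholder path.
if mountpoint -q /mnt/disks/MYDISK; then
    echo "Still mounted - use the UD Unmount button (or umount) first"
else
    echo "Not mounted - safe to remove"
fi
```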
  12. I see several issues. You are getting general protection faults:

      May 18 11:55:57 Loki kernel: traps: lsof[64428] general protection fault ip:151a2e0966ae sp:6211c4e2f66d19bb error:0 in libc-2.33.so[151a2e07d000+15e000]
      May 18 11:56:25 Loki kernel: traps: lsof[68667] general protection fault ip:153589cc46ae sp:311500c92e34b55e error:0 in libc-2.33.so[153589cab000+15e000]
      May 18 11:56:46 Loki kernel: traps: lsof[71467] general protection fault ip:154c4cef06ae sp:a6ed2d0bfacb66cb error:0 in libc-2.33.so[154c4ced7000+15e000]
      May 18 11:57:10 Loki kernel: traps: lsof[72228] general protection fault ip:14f2dac7a6ae sp:e4e808728812df1a error:0 in libc-2.33.so[14f2dac61000+15e000]
      May 18 11:57:33 Loki kernel: traps: lsof[73329] general protection fault ip:1553577dd6ae sp:43c1fa20eae56adb error:0 in libc-2.33.so[1553577c4000+15e000]

      I don't know where these gpfs are coming from, but I suspect the preclear disk plugin. Remove the preclear disk plugin. There is a replacement that works a lot better; you can install it from CA.

      preclear.disk.plg - 2021.04.11 (Up to date) (Incompatible)

      These log entries are from the remote server being non-responsive, or from network issues:

      May 18 11:57:34 Loki unassigned.devices: Error: shell_exec(/bin/df '/mnt/remotes/DS918_Unraid' --output=size,used,avail | /bin/grep -v '1K-blocks' 2>/dev/null) took longer than 5s!
      ### [PREVIOUS LINE REPEATED 1 TIMES] ###
      May 18 11:57:39 Loki unassigned.devices: Error: shell_exec(/bin/df '/mnt/remotes/DS918_XXX' --output=size,used,avail | /bin/grep -v '1K-blocks' 2>/dev/null) took longer than 5s!
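The 5-second limit UD reports in those errors can be checked by hand with the same df call bounded by `timeout`. This is a diagnostic sketch, not part of UD itself; the mount point is the one from the log and stands in for whichever remote share is hanging:

```shell
# Reproduce UD's bounded df call against a possibly hung remote mount.
# A healthy share answers well under 5 seconds; a dead one trips the timeout.
# The mount point is a placeholder taken from the log above.
timeout 5 df /mnt/remotes/DS918_Unraid --output=size,used,avail 2>/dev/null \
    | grep -v '1K-blocks' \
    || echo "df timed out or failed - remote server likely unresponsive"
```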
  13. PM this file to me. Don't post it here, as it has some private information: /flash/config/plugins/unassigned.devices/unassigned.devices.cfg I suspect it may have an issue, as I can't reproduce your problem.
  14. Those disks are not unassigned disks; you are using them as array disks. It is not recommended to use USB disks as array disks. I do not know how to fix your situation. Hopefully someone can chime in here and help you.
  15. Look carefully at the serial number (id). The historical devices have spaces; the new UD devices have underscores. This is because UD has changed to using the devs.ini id. They are the same devices; UD just thinks they are different devices. Apply your settings to the newly found devices and then remove the historical devices.
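As an illustration of why the two entries look different, the only change is the separator in the id; the serial string below is a made-up placeholder, not a real device:

```shell
# Illustrative only: the historical id uses spaces, the new devs.ini-style
# id uses underscores. The serial below is a placeholder.
old_id="WDC WD40EFRX-68N32N0 WD-WCC7K1234567"
new_id=$(echo "$old_id" | tr ' ' '_')
echo "$new_id"   # WDC_WD40EFRX-68N32N0_WD-WCC7K1234567
```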
  16. The preclear disk plugin is deprecated. Go to CA and look for UD preclear plugin and install that. It is a direct replacement for preclear disk.
  17. There has been a change in UD that affects the device id. Please read this:
  18. Unraid 6.10 has been released. There are some changes to UD. You can see them here: With the latest release of UD, the minimum version of Unraid supported is now 6.9. There are some enhancements in 6.9 and 6.10 that UD uses for new features. I've removed the spin down timer used in Unraid versions prior to 6.9.