Everything posted by dlandon

  1. There is a new release of UD that fixes a long-standing issue with Unassigned Devices disk device ids. The way UD was determining the device id was not consistent with Unraid, so some USB devices did not show the same id that Unraid showed. The change in device id won't cause any loss of data; it only affects the Unassigned Devices id that UD uses. Once you apply the following fix, your device will be back to operating normally. After you update UD, some disk devices will not show the same id as before and will appear as new disk devices. To fix this:
     - Re-apply all your Historical Device settings to the new device, including any script file assignments.
     - Once you have the settings applied to the new device id, remove the Historical device.
     - Be sure to set the mount point the same as the older drive if you use the '/mnt/disks/mount point' in a Docker container or VM.
     Once you've updated the settings for the device, you can unplug the device and re-plug it, and it will auto mount if you have it set to auto mount. This appears to apply to flash drives, some NVMe drives, and older WD portable drives. Those of you who tried external drive bays and found that UD saw the same serial number for all drives while Unraid saw them as separate drives will probably find that the latest UD version helps UD see the disks individually, not all with the same serial number (see the check below).
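     If you want to verify what UD will now see, here is a minimal sketch, assuming your disks enumerate as /dev/sd?: it prints the udev ID_SERIAL for each disk, and each drive in a bay should now show a distinct value.

        # Print the udev ID_SERIAL (the id UD now uses) for each disk.
        # Assumption: disks enumerate as /dev/sda, /dev/sdb, etc.
        for d in /dev/sd?; do
          echo "$d: $(udevadm info --query=property --name=$d | awk -F= '/^ID_SERIAL=/{print $2}')"
        done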
  2. Read the previous post. The inotify watches would not start on that disk, so file activity could not run.
  3. I said update to Unraid 6.10, not UD. You should run a file system check on the drive - click on the check mark next to the mount point when the disk is unmounted. It may have been corrupted when it was removed without being unmounted.
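     If you'd rather check from the command line, a minimal sketch, assuming the partition is XFS and shows up as /dev/sdX1 (run it while the disk is unmounted):

        # -n = no modify: report problems without writing to the disk
        xfs_repair -n /dev/sdX1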
  4. I doubt anyone will be able to help you with such an old version of Unraid. Update to the latest stable version. Then if you have issues, post the diagnostics zip file.
  5. I've made a change in UD to use the actual device id rather than an id cobbled together from ID_MODEL + ID_SERIAL_SHORT, which has been used since UD was originally written. Not sure why the original author chose that way of doing things.
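     To see the difference on a given device, a quick check, assuming /dev/sdX is the device in question: compare the pieces the old id was built from against ID_SERIAL.

        # Show the properties behind the old and new ids.
        # Old UD id: ${ID_MODEL}_${ID_SERIAL_SHORT}; new UD id: ${ID_SERIAL}
        udevadm info --query=property --name=/dev/sdX | grep -E '^ID_(MODEL|SERIAL_SHORT|SERIAL)='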
  6. There is a bug in 6.9.2 where this can happen because Unraid does not assign the proper devX designation. Once you update to 6.10, this issue should clear up.
  7. I would look at the following:
     - Your server is an Intel Atom. That's not a tremendously powerful processor.
     - Client side setup. Is the slowdown in reading from the server or in writing on the client?
     - Install the Tips and Tweaks plugin. See if some disk cache adjustments will help. Also see if any processor scaling governor changes will help.
     - Make the recommended changes to the NICs: disable NIC Flow Control and NIC Offload (a command-line sketch is below).
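     For reference, a rough command-line sketch of the NIC and governor changes, assuming eth0 is the interface; the Tips and Tweaks plugin applies equivalent settings from the GUI.

        # Assumption: eth0 is the NIC carrying the SMB/NFS traffic.
        ethtool -A eth0 rx off tx off              # disable flow control
        ethtool -K eth0 tso off gso off gro off    # disable common offloads
        # Optional: try the 'performance' scaling governor.
        for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
          echo performance > $g
        done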
  8. You should upgrade to 6.10rc8. It will use NFSv4, which is much better than the NFSv3 in 6.9.2. It should also solve the stale file handle issues. Edit: 6.10rc8 also fixes the rpcbind log spamming.
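     If you want to confirm the client actually negotiates NFSv4 after the upgrade, a quick check, assuming 'tower' is the server and /mnt/user/share is the export:

        # Mount with an explicit version, then check what was negotiated.
        mkdir -p /mnt/remote
        mount -t nfs -o vers=4 tower:/mnt/user/share /mnt/remote
        mount | grep nfs    # the 'vers=' option shows the negotiated version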
  9. This is what udev shows. Note the ID_SERIAL:

        [DEVLINKS] => /dev/disk/by-id/nvme-Samsung_SSD_960_EVO_500GB_S3X4NB0K309824V-part1 /dev/disk/by-id/nvme-eui.0025385381b19427-part1 /dev/disk/by-partlabel/Linux\x20filesystem /dev/disk/by-partuuid/12409271-f9ad-4a6d-a35e-2c01ba882106 /dev/disk/by-uuid/526c7d84-3822-4ea8-a730-c0a3c955725c
        [DEVNAME] => /dev/nvme0n1p1
        [DEVPATH] => /devices/pci0000:00/0000:00:01.2/0000:08:00.0/nvme/nvme0/nvme0n1/nvme0n1p1
        [DEVTYPE] => partition
        [DISKSEQ] => 19
        [ID_FS_TYPE] => xfs
        [ID_FS_USAGE] => filesystem
        [ID_FS_UUID] => 526c7d84-3822-4ea8-a730-c0a3c955725c
        [ID_FS_UUID_ENC] => 526c7d84-3822-4ea8-a730-c0a3c955725c
        [ID_MODEL] => Samsung SSD 960 EVO 500GB
        [ID_PART_ENTRY_DISK] => 259:2
        [ID_PART_ENTRY_NAME] => Linux\x20filesystem
        [ID_PART_ENTRY_NUMBER] => 1
        [ID_PART_ENTRY_OFFSET] => 2048
        [ID_PART_ENTRY_SCHEME] => gpt
        [ID_PART_ENTRY_SIZE] => 976771087
        [ID_PART_ENTRY_TYPE] => 0fc63daf-8483-4772-8e79-3d69d8477de4
        [ID_PART_ENTRY_UUID] => 12409271-f9ad-4a6d-a35e-2c01ba882106
        [ID_PART_TABLE_TYPE] => gpt
        [ID_PART_TABLE_UUID] => c6c838bf-0a9b-4fd5-bc49-3dee6cf4ebab
        [ID_SERIAL] => Samsung SSD 960 EVO 500GB_S3X4NB0K309824V

     ID_SERIAL is what UD uses for the serial id. This seems to have changed with rc8, and I suspect the change to collapse the multiple underscores.
  10. rc8 release notes: This release includes some bug fixes and update of base packages. Notable changes:
     - correct device status handling for single-slot pools
     - collapse multiple underscores within nvme /dev/disk/by-id symlinks to single underscore
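     You can see the collapsed names directly in the by-id symlinks; a quick check, assuming an NVMe device is installed:

        # List the NVMe by-id symlinks; with rc8, runs of underscores collapse to one.
        ls -l /dev/disk/by-id/ | grep nvme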
  11. Spend some time cleaning up the following FCP errors:

     May 7 13:53:00 NAS root: Fix Common Problems Version 2022.04.14
     May 7 13:53:01 NAS root: Fix Common Problems: Warning: Share system set to cache-only, but files / folders exist on the array
     May 7 13:53:01 NAS root: Fix Common Problems: Warning: Docker application binhex-krusader has volumes being passed that are mounted by Unassigned Devices, but they are not mounted with the slave option
     May 7 13:53:01 NAS root: Fix Common Problems: Warning: unRaids built in FTP server is running ** Ignored
     May 7 13:53:01 NAS root: Fix Common Problems: Warning: No destination (browser / email / agents set for Warning level notifications
     May 7 13:53:07 NAS root: Fix Common Problems: Warning: Deprecated plugin serverlayout.plg ** Ignored
     May 7 13:53:11 NAS root: Fix Common Problems: Warning: preclear.disk.plg Not Compatible with Unraid version 6.9.2
     May 7 13:53:11 NAS root: Fix Common Problems: Warning: statistics.sender.plg Not Compatible with Unraid version 6.9.2
     May 7 13:53:16 NAS root: Fix Common Problems: Warning: Missing DNS entry for host
     May 7 13:53:17 NAS root: Fix Common Problems: Warning: The plugin openvpn_client_x64.plg is not known to Community Applications and is possibly incompatible with your server
     May 7 13:53:17 NAS root: Fix Common Problems: Warning: The plugin openvpn_server_x64.plg is not known to Community Applications and is possibly incompatible with your server

     Also remove the preclear.disk plugin and replace it with the UD Preclear plugin if you feel you need to preclear disks.

     Edit: You have a disk problem:

     May 8 23:46:06 NAS file.activity: Starting File Activity
     May 8 23:58:48 NAS inotifywait[42835]: Couldn't watch /mnt/disk6: Structure needs cleaning
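     For the cache-only warning, a quick way to see which array disks still hold that share's files, assuming 'system' is the share FCP flagged:

        # List the share's folder on each array disk; any hits are files that
        # should be moved to the cache (or the share's cache setting changed).
        ls -d /mnt/disk*/system 2>/dev/null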
  12. Can you show the output of this command run in a terminal?

        cat /tmp/file.activity/file.activity.disks

     I'm adding some additional logging to try to see what is happening with file activity inotify starting and stopping. I'm also working on trying to determine why inotify is not reporting any file events. This has been reported several times.
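     If you want to test inotify by hand in the meantime, a minimal sketch, assuming /mnt/disks/yourdisk is the UD mount point in question; it should print an event for every file you touch:

        # Watch recursively and print events as they happen (Ctrl-C to stop).
        inotifywait -m -r -e create,modify,delete,move /mnt/disks/yourdisk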
  13. You should always commit a mount point name, even if you want to use the default, so it stays consistent even if the default name were to change. You do this by clicking on the mount point when the disk is unmounted and clicking 'Change'. You don't have to change the mount point to save it. It seems lately that the default names have been changing when updating; not sure why.
  14. All I see is the initial start of file activity. I don't see anything showing you manually started it, and I don't see any stopping of the file activity in the log. It's probably not enough inotify user watches (see the check below).
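     A quick way to check and raise the watch limit, assuming the stock value is too low for your file count; 524288 is just an example value:

        cat /proc/sys/fs/inotify/max_user_watches       # current limit
        sysctl fs.inotify.max_user_watches=524288       # raise it for this boot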
  15. Go to a terminal and give me the output of the 'mount' command. Also, please switch to a lighter theme before you post a screen shot. The dark theme is pretty much impossible to read.
  16. Post a screen shot of the UD page and attach diagnostics to your next post.
  17. With "vfs objects = recycle" set, the copy works? That's a bit odd. Are you using the Recycle Bin plugin?
  18. Correct. After a reboot the disk is not paused, but you can resume where you left off.