Everything posted by dlandon
-
Unraid 6.10 has been released, and there are some changes to UD. You can see them here: With the latest release of UD, the minimum version of Unraid supported is now 6.9. There are some enhancements in 6.9 and 6.10 that UD uses for new features. I've removed the spin down timer used in Unraid versions prior to 6.9.
-
There is a new release of UD that fixes a long-standing issue with Unassigned Devices disk device ids. The way UD was determining the device id was not consistent with Unraid, and some USB devices did not show the same id that Unraid showed. The change in device id won't cause any loss of data; it only affects the Unassigned Devices id that UD uses. Once you apply the following fix, your device will be back to operating normally.

After you update UD, some disk devices will not show the same id as before and will appear as new disk devices. To fix this:

1. Re-apply all your Historical Device settings to the new device, including any script file assignments.
2. Once you have the settings applied to the new device id, remove the Historical device.
3. Be sure to set the mount point the same as the older drive if you use the '/mnt/disks/mount point' in a Docker container or VM.

Once you've updated the settings for the device, you can unplug the device and re-plug it, and it will auto mount if you have it set to auto mount.

This appears to apply to flash drives, some NVMe drives, and older WD portable drives. Those of you who have tried external drive bays and found that UD saw the same serial number for all drives while Unraid saw them as separate drives will probably find that the latest UD version helps UD see the disks individually rather than all with the same serial number.
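If you want to compare the id UD shows against what Linux itself reports, you can list the persistent by-id symlinks udev creates; each name embeds the model and serial. The listing command is standard, but the example link name below is hypothetical, taken from the style of names udev generates, and the parsing is only an illustration:

```shell
# On a real system, list the persistent device ids udev created:
#   ls -l /dev/disk/by-id/
# Each symlink name embeds bus, model, and serial. For example,
# given a link name like this (illustrative, not from any server):
example_link="nvme-Samsung_SSD_960_EVO_500GB_S3X4NB0K309824V-part1"
# the serial portion can be recovered by stripping the bus prefix
# and the partition suffix:
serial_part="${example_link#nvme-}"
serial_part="${serial_part%-part1}"
echo "$serial_part"
```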
-
Bad R/W performances and high shfs CPU usage, NFS stale file handle
dlandon replied to marc0777's topic in General Support
I would look at the following:

1. Your server is an Intel Atom. That's not a tremendously powerful processor.
2. Client side setup. Is the slowdown in reading from the server or in writing on the client?
3. Install the Tips and Tweaks plugin. See if some disk cache adjustments help, and whether any processor scaling governor changes help. Make the recommended changes to the NICs: disable NIC Flow Control and NIC Offload.
-
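For reference, the kinds of adjustments Tips and Tweaks applies can also be made by hand. This is a sketch only: the interface name eth0 and the specific values are assumptions, not the plugin's actual defaults, and all of these require root on a real system:

```shell
# Disk cache tuning (values are illustrative, not recommendations):
sysctl -w vm.dirty_background_ratio=5
sysctl -w vm.dirty_ratio=10

# Processor scaling governor (requires cpufreq support):
echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

# NIC changes mentioned above: disable flow control and offloads
# (eth0 is an assumed interface name):
ethtool -A eth0 rx off tx off autoneg off
ethtool -K eth0 tso off gso off gro off
```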
Bad R/W performances and high shfs CPU usage, NFS stale file handle
dlandon replied to marc0777's topic in General Support
You should upgrade to 6.10rc8. It will use NFSv4, which is much better than the NFSv3 in 6.9.2, and should also solve the stale file handle issues. Edit: 6.10rc8 also fixes the rpcbind log spamming.
-
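On the client side you can verify which NFS version was actually negotiated. A sketch, assuming a Linux client; the server name "tower", export path, and local mount point are placeholders:

```shell
# Mount explicitly requesting NFSv4 (names/paths are placeholders):
mount -t nfs4 tower:/mnt/user/share /mnt/remote

# Confirm the negotiated version; look for vers=4.x in the output:
nfsstat -m
# or, alternatively:
mount | grep nfs
```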
This is what udev shows. Note the ID_SERIAL:

[DEVLINKS] => /dev/disk/by-id/nvme-Samsung_SSD_960_EVO_500GB_S3X4NB0K309824V-part1 /dev/disk/by-id/nvme-eui.0025385381b19427-part1 /dev/disk/by-partlabel/Linux\x20filesystem /dev/disk/by-partuuid/12409271-f9ad-4a6d-a35e-2c01ba882106 /dev/disk/by-uuid/526c7d84-3822-4ea8-a730-c0a3c955725c
[DEVNAME] => /dev/nvme0n1p1
[DEVPATH] => /devices/pci0000:00/0000:00:01.2/0000:08:00.0/nvme/nvme0/nvme0n1/nvme0n1p1
[DEVTYPE] => partition
[DISKSEQ] => 19
[ID_FS_TYPE] => xfs
[ID_FS_USAGE] => filesystem
[ID_FS_UUID] => 526c7d84-3822-4ea8-a730-c0a3c955725c
[ID_FS_UUID_ENC] => 526c7d84-3822-4ea8-a730-c0a3c955725c
[ID_MODEL] => Samsung SSD 960 EVO 500GB
[ID_PART_ENTRY_DISK] => 259:2
[ID_PART_ENTRY_NAME] => Linux\x20filesystem
[ID_PART_ENTRY_NUMBER] => 1
[ID_PART_ENTRY_OFFSET] => 2048
[ID_PART_ENTRY_SCHEME] => gpt
[ID_PART_ENTRY_SIZE] => 976771087
[ID_PART_ENTRY_TYPE] => 0fc63daf-8483-4772-8e79-3d69d8477de4
[ID_PART_ENTRY_UUID] => 12409271-f9ad-4a6d-a35e-2c01ba882106
[ID_PART_TABLE_TYPE] => gpt
[ID_PART_TABLE_UUID] => c6c838bf-0a9b-4fd5-bc49-3dee6cf4ebab
[ID_SERIAL] => Samsung SSD 960 EVO 500GB_S3X4NB0K309824V

ID_SERIAL is what UD uses for the serial id. This seems to have changed with rc8, and I suspect the change to collapse the multiple underscores.
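A dump like the one above can be produced with udevadm; the query command below is the standard way to get it (the device path is taken from the example above). The parsing step is a hypothetical illustration of pulling ID_SERIAL out of a captured property list, using the values from this post:

```shell
# On a real system, query all udev properties for a device:
#   udevadm info --query=property --name=/dev/nvme0n1p1
# udevadm emits KEY=VALUE lines. Extracting ID_SERIAL from a
# captured dump (values copied from the example above):
props='ID_MODEL=Samsung SSD 960 EVO 500GB
ID_SERIAL=Samsung SSD 960 EVO 500GB_S3X4NB0K309824V'
serial=$(printf '%s\n' "$props" | sed -n 's/^ID_SERIAL=//p')
echo "$serial"
```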
-
Spend some time cleaning up the following FCP errors:

May 7 13:53:00 NAS root: Fix Common Problems Version 2022.04.14
May 7 13:53:01 NAS root: Fix Common Problems: Warning: Share system set to cache-only, but files / folders exist on the array
May 7 13:53:01 NAS root: Fix Common Problems: Warning: Docker application binhex-krusader has volumes being passed that are mounted by Unassigned Devices, but they are not mounted with the slave option
May 7 13:53:01 NAS root: Fix Common Problems: Warning: unRaids built in FTP server is running ** Ignored
May 7 13:53:01 NAS root: Fix Common Problems: Warning: No destination (browser / email / agents set for Warning level notifications
May 7 13:53:07 NAS root: Fix Common Problems: Warning: Deprecated plugin serverlayout.plg ** Ignored
May 7 13:53:11 NAS root: Fix Common Problems: Warning: preclear.disk.plg Not Compatible with Unraid version 6.9.2
May 7 13:53:11 NAS root: Fix Common Problems: Warning: statistics.sender.plg Not Compatible with Unraid version 6.9.2
May 7 13:53:16 NAS root: Fix Common Problems: Warning: Missing DNS entry for host
May 7 13:53:17 NAS root: Fix Common Problems: Warning: The plugin openvpn_client_x64.plg is not known to Community Applications and is possibly incompatible with your server
May 7 13:53:17 NAS root: Fix Common Problems: Warning: The plugin openvpn_server_x64.plg is not known to Community Applications and is possibly incompatible with your server

Also remove the preclear.disk plugin and replace it with the UD Preclear plugin if you feel you need to preclear disks.

Edit: You have a disk problem:

May 8 23:46:06 NAS file.activity: Starting File Activity
May 8 23:58:48 NAS inotifywait[42835]: Couldn't watch /mnt/disk6: Structure needs cleaning
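"Structure needs cleaning" is how XFS reports filesystem corruption. The usual remedy on Unraid is a filesystem check from Maintenance mode; a sketch, assuming disk6 maps to /dev/md6 (verify the actual device node on your system before running anything that writes to the disk):

```shell
# Dry run first: report problems without modifying anything.
xfs_repair -n /dev/md6

# If problems are reported, run the actual repair with the array
# in Maintenance mode and the disk unmounted:
xfs_repair /dev/md6
```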
-
Can you show the output of this command run in a terminal?

cat /tmp/file.activity/file.activity.disks

I'm adding some additional logging to try to see what is happening with File Activity inotify starting and stopping. I'm also working on trying to determine why inotify is not reporting any file events. This has been reported several times.
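To check whether inotify itself is delivering events on a disk, you can watch a path by hand with inotifywait, the same tool File Activity uses. The path below is an example; substitute one of your disk mounts:

```shell
# Watch a disk recursively and print each file event as it happens.
# In another terminal, touch a file under the watched path and
# confirm an event appears here. Ctrl-C to stop.
inotifywait -m -r -e create -e modify -e delete /mnt/disk1
```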
-
You should always commit a mount point name, even if you want to use the default, so it will remain consistent even if the default name changes. Do this by clicking on the mount point while the disk is unmounted, then clicking 'Change'. You don't have to change the mount point in order to save it. It seems lately that the default names have been changing when updating; I'm not sure why.
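When a UD mount point is passed into a Docker container, it should also be passed with the slave option so remounts propagate into the container (this is what the FCP warning about Unassigned Devices volumes refers to). A hedged sketch; the container name, image, and paths are all placeholders:

```shell
# Pass a UD mount into a container with slave bind propagation
# (names and paths are placeholders):
docker run -d --name example \
  -v '/mnt/disks/backup:/backup:rw,slave' \
  some/image
```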
-
With "vfs objects = recycle", the copy works? That's a bit odd. Are you using the Recycle Bin plugin?
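For context, "vfs objects = recycle" enables Samba's recycle VFS module, which is what the Recycle Bin plugin configures. A minimal share fragment showing the module's standard options; the exact settings the plugin writes may differ:

```ini
[share]
  vfs objects = recycle
  ; moved deleted files go here instead of being removed
  recycle:repository = .Recycle.Bin/%U
  ; preserve the original directory tree under the repository
  recycle:keeptree = Yes
  ; keep numbered copies when the same name is deleted again
  recycle:versions = Yes
```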