dlandon

Community Developer
Everything posted by dlandon

  1. Maybe. The new delay timer only helps when the network is up and running but the remote server needs a bit more time to come online. This is only an issue when the server is first started, and does not apply once remote shares have been mounted. It has nothing to do with the disk 'df' timeouts in your log. The general protection faults have nothing to do with this either. If you have a new server and are experiencing GPFs, I'd run a memory test.
  2. You're mounting this share on the Unraid server using NFSv3:
     May 28 13:34:24 GnaXServer unassigned.devices: Mounting Remote Share '192.168.2.130:/volume1/Andi'...
     May 28 13:34:24 GnaXServer unassigned.devices: Mount NFS command: /sbin/mount -t 'nfs' -o rw,noacl '192.168.2.130:/volume1/Andi' '/mnt/remotes/Jabba'
     May 28 13:34:24 GnaXServer nfsrahead[4700]: setting /mnt/remotes/Jabba readahead to 128
     May 28 13:34:24 GnaXServer unassigned.devices: Successfully mounted '192.168.2.130:/volume1/Andi' on '/mnt/remotes/Jabba'.
     You can mount this using NFSv4 with a setting in UD. This is a client mount setting used when UD mounts NFS shares. NFSv3 is known to have stale file handle issues with Unraid because of the Mover moving files from the cache to the array. Several attempted workarounds have been only partially effective; NFSv4 is the best solution.
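     For comparison, a manual NFSv4 mount of the same export would look roughly like this (same source and mount point as in your log; the UD setting is the proper way to do it, this is only to illustrate the difference):
     /sbin/mount -t nfs4 -o rw '192.168.2.130:/volume1/Andi' '/mnt/remotes/Jabba'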
  3. New version of UD. Notable change: Added a setting in UD Settings to set a time delay to wait after the Network is detected to be up and running before Remote Shares are mounted. This gives time for remote servers to be ready after the network is up. This should help in those cases where some extra time is needed before attempting to mount remote shares. The default is 5 seconds.
  4. I don't see the remote share issue you first started with. Unmount your transport data disk and run a file system check on it. The timeouts indicate a possible issue with the disk. The NFS warning just notes that UD devices are not set up to be shared with NFS, so users know why UD devices are not showing up as NFS shares. You can ignore it.
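     If you'd rather run the check from the command line, a read-only pass, assuming the disk is formatted XFS and its partition is /dev/sdX1 (both assumptions; substitute your actual file system and device), would be something like:
     xfs_repair -n /dev/sdX1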
  5. What version of Unraid? You're mounting an Unraid NFS share from an Ubuntu VM?
  6. Start by turning off the UD debug logging. It is flooding the log with messages and making it very hard to read. Once you've done that, reboot and post new diagnostics so I can better see what is happening.
  7. How are you mounting them? You need to do the following: Use Unraid 6.10 or later for NFSv4 support. Specify NFSv4 in your mount command: -t 'nfs4'
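     For example, a full mount command along those lines, with hypothetical server, share, and mount point names (adjust to your actual export), would be:
     mount -t nfs4 'tower:/mnt/user/share' /mnt/share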
  8. There's a new recycle bin setting to control that:
  9. I'm not sure what you've done, but try to start the array and the disk should be emulated.
  10. That drive should be emulated, assuming you have parity, and you can start the array. Use 'diagnostics' on a command line to get diagnostics. The preclear log can be downloaded from the UD Preclear page. Click on the download icon.
  11. Try the following: Post your diagnostics and the UD Preclear log so I can see if I can find a reason for the failure. Install the Binhex Preclear Docker container. Run UD preclear and select the docker from the dropdown to run the preclear. See if the results are the same.
  12. UD also allows changing the UUID on BTRFS disks. Maybe I need to block that on pool disks.
  13. Yes, the pool devices all have to have the same UUID. There is a way to do it from the command line, but I would be afraid to offer that up because I'd probably screw it up. @JorgeB is who could help with this. Be patient though, he lives in Europe and it's late at night there.
  14. New version of UD. The notable change is with root shares. Changes were made in the configuration to be more aligned with how other SMB and NFS shares are configured. This change will cause any existing root shares to fail to mount. The fix is to remove any root shares and add them back. Save any script files you have on the root shares, as they will be deleted when the root share is removed. The other change fixes pools failing to mount with a false 'share name is already being used' message. I broke this yesterday while making a change to address a case where a duplicate share name was not detected. It should be all sorted out now.
  15. That won't work. Have UD mount them with this command: /usr/local/sbin/rc.unassigned mount 'source', where source is the SMB/NFS source.
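     For example, for a hypothetical SMB remote share (use the source string exactly as UD shows it for the share), that would look something like:
     /usr/local/sbin/rc.unassigned mount '//SERVER/Share'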
  16. I see the problem and will issue an update to UD as soon as I figure out what is happening.
  17. You don't need to change anything. This feature is intended to be a help with zvol issues in 6.12. In 6.12, you can unassign a ZFS pool device that might have zvols and mount it in UD so you can do a file system check, extract files, and just generally work with the zvol. The best use case is when you have a VM on a zvol and are having issues with the VM.
  18. I have found an additional issue in UD where the Unraid wave continues to show and does not go away. This comes from ISO files and Remote Shares that use unprintable characters (from languages like Chinese) or PHP reserved characters to identify the share. This is mostly an issue when English is not used on the server, but it will occasionally occur on an English language server with characters like '#' in the ISO file name or Remote Share name. If you experience this issue, do the following: Delete the '/flash/config/plugins/unassigned.devices/iso_mount.cfg' file, click on the double arrows in the upper right hand corner of the UD page, and refresh the UD page. If the Unraid wave continues, delete the '/flash/config/plugins/unassigned.devices/samba_mount.cfg' file, click on the double arrows in the upper right hand corner of the UD page, and refresh the UD page. You will then have to add your ISO and Remote Shares back to UD. The next release of UD will take care of both ISO and Remote Shares.
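     If you prefer to delete the files from the command line, using the paths above (the second file only if the wave persists after the first step), that would be:
     rm /flash/config/plugins/unassigned.devices/iso_mount.cfg
     rm /flash/config/plugins/unassigned.devices/samba_mount.cfg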
  19. I found a problem and I am fixing it now. Wait for the newest version of UD, update UD, and then remove the samba_mount.cfg file and add your remote shares back.