dlandon

Community Developer
Everything posted by dlandon

  1. Why do you need to make that change? Post your diagnostics so I can investigate further. I need to see what version of Unraid and NFS you are using, plus some other details.
  2. It looks like you are using a default mount point - /mnt/disks/S3ESNX0K197057Y. Why don't you use a mount point that makes more sense than a cryptic number and set it in UD so it won't change? Click on the partition mount point to change it. I have no idea about your docker container issues.
  3. @Lev Are you showing the complete 'ls' output? It appears something is missing.
  4. @Lev I have an idea. Can you show me the results of the command 'ls /dev/disk/by-id'? EDIT: With the scsi plugin installed.
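     For anyone following along, running 'ls -l /dev/disk/by-id' shows how each disk ID maps to an sdX device. The entries below are hypothetical examples of the output format, not output from Lev's system:

       ls -l /dev/disk/by-id
       # ata-Samsung_SSD_860_EVO_1TB_S3ESNX0K197057Y -> ../../sdb
       # ata-WDC_WD40EFRX-68N32N0_WD-WCC7K0XXXXXX -> ../../sdc
       # scsi-35000c500a1b2c3d4 -> ../../sdd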
  5. Your log is so full of these messages, I can't find anything about UD mounting a disk:

     Jan 4 16:35:38 Tower kernel: eth0: renamed from vetheb13b06
     Jan 4 16:35:38 Tower kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth33e51aa: link becomes ready
     Jan 4 16:35:38 Tower kernel: docker0: port 1(veth33e51aa) entered blocking state
     Jan 4 16:35:38 Tower kernel: docker0: port 1(veth33e51aa) entered forwarding state
     Jan 4 16:35:38 Tower kernel: docker0: port 1(veth33e51aa) entered disabled state
     Jan 4 16:35:38 Tower kernel: vetheb13b06: renamed from eth0
     Jan 4 16:35:38 Tower kernel: docker0: port 1(veth33e51aa) entered disabled state
     Jan 4 16:35:38 Tower kernel: device veth33e51aa left promiscuous mode
     Jan 4 16:35:38 Tower kernel: docker0: port 1(veth33e51aa) entered disabled state
     Jan 4 16:36:38 Tower kernel: docker0: port 1(veth447cdf3) entered blocking state
     Jan 4 16:36:38 Tower kernel: docker0: port 1(veth447cdf3) entered disabled state
     Jan 4 16:36:38 Tower kernel: device veth447cdf3 entered promiscuous mode
     Jan 4 16:36:38 Tower kernel: docker0: port 1(veth447cdf3) entered blocking state
     Jan 4 16:36:38 Tower kernel: docker0: port 1(veth447cdf3) entered forwarding state
     Jan 4 16:36:38 Tower kernel: docker0: port 1(veth447cdf3) entered disabled state
     Jan 4 16:36:38 Tower kernel: eth0: renamed from veth430d7c6

     These happen about every minute and swamp the log. You'll need to clear this up. You also need to update to the latest version of UD.
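     If you're not sure which container is cycling and generating those veth messages, a couple of stock Docker commands can narrow it down (a generic sketch, not specific to UD):

       # list containers with status; one restarting every minute will show a very recent "Up" time
       docker ps -a --format 'table {{.Names}}\t{{.Status}}'

       # watch container start/die events live
       docker events --filter type=container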
  6. I'm not going to be able to do much more without hardware I can use to troubleshoot it. Thanks for the feedback.
  7. As you found out, there is no issue at all with doing that. That's a rather long and cryptic mount point, but if it works for you that's good.
  8. Those are stored on the flash drive in the 'preclear_reports' folder. If you still have issues, post a screen shot showing all the disk info.
  9. There should only be one mount point. The 'Samsung...' mount point does not make sense; that disk's actual mount point is the all-numeric one. The best way to handle this is to click on the mount point (to the right of the icons) and name it something that makes sense, then reboot your server. You should only see one mount point at /mnt/disks/. If not, post diagnostics.
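     As a quick check after the reboot, you can confirm from the command line that only one mount point remains (the name below is just an example):

       ls /mnt/disks/
       # expect a single entry, e.g.:
       # backup_disk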
  10. I've redone the network check and mount of remote shares to run in the background so the Unraid startup is not delayed. I've also extended the timeout to 120 seconds so UD does not give up too soon. This is in release 2022.01.02c.
  11. I've rewritten the network check to stop at a firm 30 seconds. The 2022.01.02b release has the fix for this.
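     For the curious, the check is conceptually a bounded wait loop like the sketch below. This is a simplified illustration only, not the actual UD code, and SERVER is a placeholder:

       SERVER="192.168.1.10"
       TIMEOUT=30
       START=$(date +%s)
       # ping once per second until the server answers or the timeout expires
       while ! ping -c 1 -W 1 "$SERVER" > /dev/null 2>&1; do
           if [ $(( $(date +%s) - START )) -ge "$TIMEOUT" ]; then
               echo "Network check gave up after ${TIMEOUT}s"
               break
           fi
           sleep 1
       done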
  12. Set the debug level to none in UD settings. It does nothing to help with this issue and just spams the log. Only set a debug level when asked to for support. I'm looking at the code and may have an idea why this is happening.
  13. I need to see more of the screenshot; there isn't enough there to tell what is going on. In general, if there is a partition that does not have a file system, the mount button will show but will be grayed out. If there is no partition, the format button will show. I'm not sure exactly when preclear creates the partition, but I think if you stopped it before it completed, the partition wouldn't be created. You also have to erase the preclear log before the format button will be active. I don't understand your comment about sdi being in limbo, because I can't see any device designations except for sdi at the top. Do you have today's release of UD?
  14. Good. I see that you are using the server name now and not the IP address. TRUENAS is responding to a ping (that's why UD lets you click the mount button), but it looks like TRUENAS is not allowing that NFS share to be mounted. Check your TRUENAS and verify that the share '/mnt/Skywalker/Nikki' is set for NFS sharing. From what I recall, you started having these issues after a sudden power-down of the Unraid and TrueNAS servers; something probably got messed up by that.
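     One way to verify from the Unraid command line what TRUENAS is actually exporting over NFS (the share path here is the one from your post):

       showmount -e TRUENAS
       # the export list should include /mnt/Skywalker/Nikki; if it doesn't,
       # the share is not enabled for NFS on the TrueNAS side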
  15. I'm trying to solve the problem with scsi disks not showing up when the dynamix.scsi.devices plugin is installed. UD was previously including only certain disk device types (sd, md, and nvme), and the scsi devices were being excluded this way. I changed the logic to instead exclude certain devices (namely CD and DVD devices), but more got included than I expected. I've now excluded 'dm' devices as well, so maybe my idea will work. I didn't see this on either of my servers because I don't have any 'dm' devices.
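     The change is conceptually a switch from an include-list to an exclude-list when scanning block devices. Here's a rough shell equivalent, an illustration only, not the actual UD code:

       # old approach: include only known name prefixes
       lsblk -ndo NAME | grep -E '^(sd|md|nvme)'

       # new approach: accept everything lsblk reports as a disk,
       # which drops optical drives (type "rom") and dm devices
       lsblk -ndo NAME,TYPE | awk '$2 == "disk" {print $1}'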
  16. I just applied a fix. Update to the 2022.01.02a release and those should not show up.
  17. I've applied a fix for this. I'll refresh today's release once I see if anyone else comes up with additional devices. For the time being, just ignore it.
  18. Update to today's release of UD, delete your NFS remote shares and then re-add them using the server name. Click on the 'Search for Servers' button and choose the server you want.
  19. New release of UD:
      • Pool devices are now handled better by UD. Pool devices must all have the same mount point in UD, and any pool device's 'Mount' button will mount the pool. When mounted, the primary device has an active 'Unmount' button and the secondary devices are marked 'Pool'. The primary device is used to unmount the pool. Pooled devices unassigned from Unraid pools can be mounted by UD.
      • The UD check for mount point duplicates now works even with pool devices. The setting to not check UD devices for duplicates has been removed; it is no longer necessary.
      • Ability to save a blank mount point. This removes the UD mount point setting and can be helpful with a disk pool device that has a UD historical mount point. Pooled devices will use the disk label (common to all pooled devices) as the mount point.
      • Fix for CIFS mounts on Windows computers that sometimes won't mount because of local name resolution. It is recommended to have SMBv1 disabled on all Windows computers and NetBIOS disabled on your Unraid server.
      • Fix for disks showing in sdX order; they were sometimes out of order. SMB/NFS, ISO, and historical devices are now sorted in alphabetical order.
      • Applied a fix that should show SCSI disks in UD with the dynamix.scsi.devices plugin installed. UD now excludes CD and DVD disks and accepts all other devices. This was changed from including only sd, hd, and nvme devices. I have no way to test, though, so I don't know if this works. Feedback would be appreciated. If you see a device in UD that should not be there, let me know, as I may need to exclude it.
      • SMB and NFS server lookups now display all servers by name and not by IP address. It is best to use the server name and not the IP address; if the IP address of a server is not static, it can change, causing remote mounts to fail.
      • Some minor GUI changes.
      With the changes in resolving local servers and the lookup now showing server names instead of IP addresses, there should be no need to use IP addresses for remote shares, especially NFS shares. I recommend updating all remote shares to use the server name and not the IP address. A manual sanity check is sketched below.
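      If you want to sanity-check a remote share from the command line before adding it in UD, a manual mount by server name looks roughly like this (server, share, and target paths are all placeholders):

        # NFS
        mkdir -p /mnt/remotes/MYSERVER_share
        mount -t nfs MYSERVER:/mnt/pool/share /mnt/remotes/MYSERVER_share

        # SMB/CIFS
        mount -t cifs -o username=myuser '//MYSERVER/share' /mnt/remotes/MYSERVER_share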