
dlandon
Community Developer · 10,289 posts · 20 days won

Everything posted by dlandon

  1. Not a dumb question. When you're on the UD or settings pages, click 'Help' and you'll see some documentation. The other place to look is the first and second posts of this very long thread.
  2. Are the log messages continuing? If they are, stop those dockers and find out which one has the problem.
  3. Stop all your dockers and then start them one at a time to see which one is causing the logging.
  4. Go to the UD settings and set 'NFS Security' to 'Private'. Then enter this line in the 'Rules' field: *(sec=sys,rw,insecure,anongid=100,anonuid=99,no_root_squash)
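The rule above uses the standard exports(5) host(options) syntax. Assuming UD passes it through to the NFS exports file, the resulting entry would look roughly like this sketch (the share path is an example, not one from this thread):

```
# Sketch of an NFS export entry built from the rule above (example path).
# '*' matches any client host; anonuid=99/anongid=100 map anonymous access
# to Unraid's 'nobody:users'; no_root_squash keeps remote root as root.
"/mnt/user/myshare" *(sec=sys,rw,insecure,anongid=100,anonuid=99,no_root_squash)
```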
  5. Why do you need to make that change? Post your diagnostics so I can investigate further. I need to see what version of Unraid and NFS you are using, plus some other details.
  6. It looks like you are using a default mount point - /mnt/disks/S3ESNX0K197057Y. Why don't you use a mount point that makes more sense than a cryptic number and set it in UD so it won't change? Click on the partition mount point to change it. I have no idea about your docker container issues.
  7. @Lev Are you showing the complete 'ls' output? It appears something is missing.
  8. @Lev I have an idea. Can you show me the results of the command 'ls /dev/disk/by-id'? EDIT: With the scsi plugin installed.
  9. Your log is so full of these messages, I can't find anything about UD mounting a disk:

     Jan 4 16:35:38 Tower kernel: eth0: renamed from vetheb13b06
     Jan 4 16:35:38 Tower kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth33e51aa: link becomes ready
     Jan 4 16:35:38 Tower kernel: docker0: port 1(veth33e51aa) entered blocking state
     Jan 4 16:35:38 Tower kernel: docker0: port 1(veth33e51aa) entered forwarding state
     Jan 4 16:35:38 Tower kernel: docker0: port 1(veth33e51aa) entered disabled state
     Jan 4 16:35:38 Tower kernel: vetheb13b06: renamed from eth0
     Jan 4 16:35:38 Tower kernel: docker0: port 1(veth33e51aa) entered disabled state
     Jan 4 16:35:38 Tower kernel: device veth33e51aa left promiscuous mode
     Jan 4 16:35:38 Tower kernel: docker0: port 1(veth33e51aa) entered disabled state
     Jan 4 16:36:38 Tower kernel: docker0: port 1(veth447cdf3) entered blocking state
     Jan 4 16:36:38 Tower kernel: docker0: port 1(veth447cdf3) entered disabled state
     Jan 4 16:36:38 Tower kernel: device veth447cdf3 entered promiscuous mode
     Jan 4 16:36:38 Tower kernel: docker0: port 1(veth447cdf3) entered blocking state
     Jan 4 16:36:38 Tower kernel: docker0: port 1(veth447cdf3) entered forwarding state
     Jan 4 16:36:38 Tower kernel: docker0: port 1(veth447cdf3) entered disabled state
     Jan 4 16:36:38 Tower kernel: eth0: renamed from veth430d7c6

     These happen about every minute and swamp the log. You'll need to clear this up. You also need to update to the latest version of UD.
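While hunting for the UD entries in a log swamped like that, a small filter helps. This is a hedged sketch: the `unassigned.devices` tag and sample lines are illustrative, and the patterns only target the docker veth/bridge messages shown above.

```shell
# Hedged sketch: drop the docker veth/bridge chatter shown above so other
# entries (such as UD's) stand out. Patterns match only the noisy lines.
filter_docker_noise() {
  grep -vE 'veth|docker0: port|ADDRCONF\(NETDEV_CHANGE\)'
}

# Illustrative input: one noisy kernel line and one line worth keeping.
printf '%s\n' \
  'Jan 4 16:35:38 Tower kernel: eth0: renamed from vetheb13b06' \
  'Jan 4 16:35:40 Tower unassigned.devices: mounting disk' \
  | filter_docker_noise
# -> Jan 4 16:35:40 Tower unassigned.devices: mounting disk
```

On the server you would run something like `filter_docker_noise < /var/log/syslog | less`; anything UD logs survives the filter.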
  10. I'm not going to be able to do much more without hardware I can use to troubleshoot it. Thanks for the feedback.
  11. As you found out, there is no issue at all with doing that. That's a rather long and cryptic mount point, but if it works for you that's good.
  12. Those are stored on the flash drive in the 'preclear_reports' folder. If you still have issues, post a screen shot showing all the disk info.
  13. There should only be one mount point. The 'Samsung...' mount point does not make sense; the disk's actual mount point is the all-numeric one. The best way to handle this is to click on the mount point (to the right of the icons) and name it something that makes sense. Then reboot your server. You should only see one mount point at /mnt/disks/. If not, post diagnostics.
  14. I've redone the network check and mount of remote shares to run in the background so the Unraid startup is not delayed. I've also extended the timeout to 120 seconds so UD does not give up too soon. This is in release 2022.01.02c.
  15. I've rewritten the network check to enforce a hard stop at 30 seconds. The 2022.01.02b release has the fix for this.
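The mechanism described in the two posts above can be sketched with the standard `timeout` utility; `mount_remote_shares` is a hypothetical stand-in for UD's real mount step, not an actual command, and 120 seconds is the limit mentioned above.

```shell
# Hedged sketch: run the remote-share mount in the background under a hard
# time limit so startup is not delayed. 'mount_remote_shares' is a
# hypothetical placeholder for UD's actual mount step.
mount_remote_shares() { sleep 2; }       # stand-in for real mount work
( timeout 120 mount_remote_shares ) &    # boot continues without waiting

# GNU timeout exits with status 124 when the limit is exceeded:
timeout 1 sleep 5
echo "exit=$?"                           # -> exit=124
```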
  16. Set the debug level to none in UD settings. It does nothing to help with this issue and spams the log with entries. Only set a debug level when asked to for support. I'm looking at the code and may have an idea why this is happening.
  17. I need to see more of the screenshot; there isn't enough there to tell what is going on. In general, if there is a partition that does not have a file system, the mount button will show but will be grayed out. If there is no partition, the format button will show. I'm not sure exactly when preclear creates the partition, but I think if you stopped it before it completed, the partition wouldn't be created. You also have to erase the preclear log before the format button will be active. I don't understand your comment about sdi being in limbo, because I can't see any device designations except for sdi at the top. Do you have today's release of UD?
  18. Good. I see that you are using the server name now and not the IP address. The TRUENAS must be responding to a ping, because UD lets you click the mount button, but it looks like TRUENAS is not allowing that NFS share to be mounted. Check your TRUENAS and verify that the share '/mnt/Skywalker/Nikki' is set for NFS sharing. From what I recall, you started having these issues after a sudden power down of the Unraid and the TRUENAS. Something probably got messed up by that.
  19. I'm trying to solve the problem with scsi disks not showing up when the dynamix.scsi.devices plugin is installed. UD was previously just including certain disk device types (sd, md, and nvme), but the scsi devices were being excluded this way. I changed the logic to exclude certain devices (namely CD and DVD devices), but more got included than I expected. I've now excluded 'dm' devices, so maybe my idea will work. I didn't see this on either of my servers because I don't show any 'dm' devices.
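Assuming the device list comes from kernel names, the include-to-exclude switch described above can be sketched in shell; the prefixes (`dm-` for device-mapper, `sr` for CD/DVD) are standard Linux naming conventions, not UD's actual code:

```shell
# Hedged sketch of the exclude-list approach: rather than keeping only
# sd/md/nvme names (which dropped scsi aliases), remove only the types
# that should never be offered: device-mapper (dm-*) and CD/DVD (sr*).
list_candidate_devices() {
  grep -vE '^(dm-|sr[0-9])'
}

printf '%s\n' sda md1 nvme0n1 dm-0 sr0 | list_candidate_devices
# -> sda
#    md1
#    nvme0n1
```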
  20. I just applied a fix. Update to the 2022.01.02a release and those should not show up.