Everything posted by dlandon

  1. When you stop the array, Unraid will detect unassigned disks as soon as a change is made in the array assignments. When the array is started, UD will detect a hot plug event and ask Unraid to update the unassigned disks. If Unraid doesn't handle the hot plug event properly, the device will show as sdX instead of Dev X. Unraid controls the assignment of the Dev X designation, which will always be the same for each disk. Something is going on with the hot plug detection. Is this a USB disk?
  2. UD would not be causing your network issues. It is just a plugin that adds additional functionality, like mounting remote SMB and NFS shares. Go to 'Apps', which is Community Applications (CA for short), search for Unassigned Devices, and install it. You will then see it on the 'Unassigned Devices' tab of the Main page. You can manage your remote mounts from there. Your network issue could be caused by cabling, a bad switch, a bad NIC, etc. If your motherboard has multiple NIC ports, try a different port. Try different cabling. Try rebooting your network switch. Check your network switch configuration if it is managed. Edit: I just noticed in your diagnostics that you are using eth3. Switch to the eth0 port.
  3. It shouldn't show as sdg; it should be 'Dev X' (which may or may not be Dev 4). Click on the double arrows icon in the upper right of the UD page and see if it clears up.
  4. Jumbo frames are a setting on your network devices. If you don't know what it is, you're not using them. UD is an Unraid plugin to manage disks and devices outside of the array. This includes SMB and NFS remote shares.
  5. Your network is dropping offline:

     Nov 18 22:09:52 PTR1-NAS-1 dhcpcd[2238]: br0: rebinding lease of 192.168.20.50
     Nov 18 22:09:56 PTR1-NAS-1 wsdd[12079]: udp_send: Failed to send udp packet with Network is unreachable
     Nov 18 22:09:56 PTR1-NAS-1 upsmon[9622]: UPS [[email protected]]: connect failed: Connection failure: Network is unreachable
     Nov 18 22:09:57 PTR1-NAS-1 dhcpcd[2238]: br0: probing address 192.168.20.50/23
     Nov 18 22:09:58 PTR1-NAS-1 wsdd[12079]: udp_send: Failed to send udp packet with Network is unreachable
     ### [PREVIOUS LINE REPEATED 4 TIMES] ###
     Nov 18 22:10:01 PTR1-NAS-1 upsmon[9622]: UPS [[email protected]]: connect failed: Connection failure: Network is unreachable
     Nov 18 22:10:03 PTR1-NAS-1 wsdd[12079]: udp_send: Failed to send udp packet with Network is unreachable
     Nov 18 22:10:03 PTR1-NAS-1 dhcpcd[2238]: br0: leased 192.168.20.50 for 7200 seconds
     Nov 18 22:10:03 PTR1-NAS-1 dhcpcd[2238]: br0: adding route to 192.168.20.0/23
     Nov 18 22:10:03 PTR1-NAS-1 dhcpcd[2238]: br0: adding default route via 192.168.20.1
     Nov 18 22:10:03 PTR1-NAS-1 rpcbind[27514]: connect from 192.168.20.30 to getport/addr(mountd)
     Nov 18 22:10:03 PTR1-NAS-1 dnsmasq[13177]: reading /etc/resolv.conf
     Nov 18 22:10:03 PTR1-NAS-1 dnsmasq[13177]: using nameserver 192.168.20.1#53
     Nov 18 22:10:03 PTR1-NAS-1 dnsmasq[13177]: using nameserver 192.168.20.40#53
     Nov 18 22:10:03 PTR1-NAS-1 rpcbind[27553]: connect from 192.168.20.32 to getport/addr(mountd)
     Nov 18 22:10:04 PTR1-NAS-1 ntpd[2312]: Listen normally on 3 br0 192.168.20.50:123
     Nov 18 22:10:04 PTR1-NAS-1 ntpd[2312]: new interface(s) found: waking up resolver
     Nov 18 22:10:05 PTR1-NAS-1 rpcbind[27576]: connect from 192.168.20.30 to getport/addr(mountd)
     Nov 18 22:10:06 PTR1-NAS-1 upsmon[9622]: Communications with UPS [email protected] established
     Nov 18 22:10:10 PTR1-NAS-1 rpcbind[27670]: connect from 192.168.20.32 to getport/addr(mountd)
     Nov 18 22:10:15 PTR1-NAS-1 rpcbind[27701]: connect from 192.168.20.30 to getport/addr(mountd)
     Nov 18 22:10:20 PTR1-NAS-1 rpcbind[27732]: connect from 192.168.20.32 to getport/addr(mountd)
     Nov 18 22:10:25 PTR1-NAS-1 rpcbind[27763]: connect from 192.168.20.30 to getport/addr(mountd)

     This is causing your remote NFS mounts to fail. Are you using jumbo packets on your network? I'd also recommend that you use the Unassigned Devices plugin (called UD). It makes it a lot easier to mount and manage SMB and NFS remote shares.
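If you're not sure whether jumbo packets are in play, the interface MTU tells you: anything above the standard 1500 bytes means jumbo frames are configured. A minimal sketch to check (the is_jumbo helper is my own, not part of Unraid):

```shell
# Hypothetical helper, not part of Unraid: an MTU above the standard
# 1500 bytes means jumbo frames are configured on that interface.
is_jumbo() {
    [ "$1" -gt 1500 ]
}

# On a live server you would feed it the bridge's MTU, e.g.:
#   is_jumbo "$(cat /sys/class/net/br0/mtu)" && echo "br0 uses jumbo frames"
```

If the Unraid server and the NFS clients disagree on MTU, large packets get dropped and mounts fail in exactly this intermittent way.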
  6. It's a little more involved than that. UD collects all the disks by-id into an array; wwn ids are ignored. It then removes any entry whose device number (hdX, sdX, or nvmeX) is the same as a device in the array - that entry is assumed to be an array disk. What's left are the unassigned disks. Any duplicates in the by-id list are taken as unassigned because each one has a unique device number. Any that are skipped are assumed to be in the array if the device number is the same as a disk in the array.
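The filtering described above can be sketched roughly like this. This is a simplification with names I made up, not UD's actual code:

```shell
# Sketch only, not UD's actual code. Reads "id device" pairs on stdin
# (as you might derive from /dev/disk/by-id/ symlinks via readlink) and
# prints the entries left over as unassigned. Array device nodes are
# passed as arguments.
filter_unassigned() {
    array_devs=" $* "
    while read -r id dev; do
        # wwn ids are ignored
        case "$id" in wwn-*) continue ;; esac
        # same device node as an array disk: assumed to be an array disk
        case "$array_devs" in *" $dev "*) continue ;; esac
        # what's left is unassigned
        echo "$id $dev"
    done
}
```

For example, piping `ata-Samsung_SSD sdc` through `filter_unassigned sda sdb` would print it, while `ata-WDC_WD40 sda` would be dropped as an array disk.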
  7. You are at times using the environment variable $LABEL as if it were the $MOUNTPOINT variable. $LABEL is the physical label of the partition on the disk and may or may not match the mountpoint UD is using. Don't use it as the mountpoint /mnt/disks/$LABEL; you will have issues because it is not necessarily the actual mountpoint. UD will unmount the $MOUNTPOINT it set, and $MOUNTPOINT may not be the same as /mnt/disks/$LABEL. The script that gets executed is defined in the UD settings for the device, and its name may or may not match the $MOUNTPOINT or the $LABEL. Don't do this when you mount it:

     if [ ! -n "${MOUNTPOINT}" ]; then
         MOUNTPOINT="/mnt/disks/${LABEL}"
     fi

     Let UD handle it. EDIT: You can check whether the disk is mounted with this:

     if mountpoint -q "$MOUNTPOINT"; then
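Putting that together, a minimal device-script skeleton along those lines (my sketch, assuming the ACTION and MOUNTPOINT variables UD exports; the function name is mine, not UD's):

```shell
# Sketch of a UD device script that trusts the mountpoint UD passes in
# instead of rebuilding a path from $LABEL.
handle_event() {
    action=$1; mnt=$2
    case "$action" in
        ADD)
            if mountpoint -q "$mnt"; then
                echo "mounted at $mnt"    # safe to work under "$mnt" here
            else
                echo "not mounted: $mnt"
            fi
            ;;
        REMOVE)
            echo "removed: $mnt"
            ;;
    esac
}

# In a real UD script you would call: handle_event "$ACTION" "$MOUNTPOINT"
```

The point is that the only path the script ever touches is the one UD handed it.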
  8. UD depends on the id that udev picks up. UD uses /dev/disk/by-id/ to determine what is in the array and what is unassigned, and that becomes the id UD uses.
  9. While I'm looking at your issue, update to the latest UD and see if it helps. I've been doing a lot of code cleanup and I may have messed something up.
  10. There is nothing you need to cleanup. Someone else has posted something similar. Post the complete diagnostics zip and I'll try to reproduce the issue.
  11. UD has to have unique serial numbers for each disk. Your external drive bays are presenting disks with a common serial number.
  12. I am adding some logging to UD to show that the NFS mount will not work until NFS is enabled. I am also disabling the Penguin icon when adding an NFS mount, with a warning on the dialog, when NFS is not enabled. NFS is required to be enabled on Unraid for NFS client mounts to work.
  13. File activity limits the log to 20,000 entries, but it looks like the logic that limits the entries is removing the latest entries instead of the oldest ones. I'm doing some testing now.
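For what it's worth, the correct truncation keeps the newest entries, which in shell is just a tail. A sketch, not the plugin's actual code:

```shell
# Keep only the newest $2 entries of log file $1, dropping the oldest,
# which is the behavior a 20,000-entry cap should have.
trim_log() {
    tail -n "$2" "$1" > "$1.tmp" && mv "$1.tmp" "$1"
}

# e.g. trim_log /var/log/file_activity.log 20000
```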
  14. You have sdv assigned to the array as a cache disk. What I suspect is that UD sees all the disks as being the same serial number and since sdv is assigned to the array, UD doesn't recognize sdt and sdu as being unassigned. Do this and let's see if this is the case. Stop the array. Unassign sdv as a cache drive. Look at the UD page and see if UD sees the disks. If it does, show a screen shot. You can then assign sdv back to the array.
  15. We can't do much with the limited information provided. Post your complete Diagnostics zip file and provide more information on your specific issue. Manual or timed spin down? It would also help to know about the disk drive that is not spinning down. I don't have any issues with UD disks spinning down, except when a parity check is running, which is a known issue.
  16. UD goes through a sequence of SMB versions when mounting remote CIFS shares until the remote share mounts. The idea is to mount the remote share with the most secure version of SMB the server will support. The sequence is as follows:

      1. No version specified. The remote server mounts with the most secure SMB version it supports; the idea is that the remote server handles the SMB version setting on its own. Some servers insist on the SMB version being specified, though, so this step is controlled by a UD setting and can be disabled. If it's not enabled in UD settings, this step is just skipped.
      2. SMB v3.1.1. A very secure version that also offers some read/write speed enhancements.
      3. SMB v3.0. In case the remote server doesn't support v3.1.1, it will hopefully at least support v3.0.
      4. SMB v2.0.
      5. SMB v1.0, if NetBIOS is enabled on the Unraid server; if not, SMB v1.0 won't work. SMB v1.0 is not very secure and is not recommended unless the remote server is very old and doesn't support newer versions of SMB.

      The issue you had was that I changed v2.0 to v2.1 and your ASUS router did not accept v2.1. It didn't mount with v1.0 because NetBIOS was disabled on the Unraid server. This was fixed in a later release of UD. SMB v1.0 won't work unless Unraid and the remote server both have NetBIOS enabled.
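That fallback sequence looks roughly like this. This is a sketch, not UD's actual code; the mount options beyond vers= are illustrative:

```shell
# Build the cifs option string for a given SMB version; an empty
# version means "let the server negotiate" (step 1 above).
cifs_opts() {
    opts="rw,iocharset=utf8"
    [ -n "$1" ] && opts="$opts,vers=$1"
    echo "$opts"
}

# Try versions from most to least secure until one mounts.
try_cifs_mount() {
    share=$1; mnt=$2
    for vers in "" 3.1.1 3.0 2.0 1.0; do
        if mount -t cifs -o "$(cifs_opts "$vers")" "$share" "$mnt" 2>/dev/null; then
            echo "mounted with SMB ${vers:-server default}"
            return 0
        fi
    done
    return 1
}
```

A server that rejects every version in the list simply leaves the share unmounted, which is why an old device that only speaks v1.0 fails when NetBIOS is off.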
  17. Probably because it can't connect. Failure code 115 is "Operation now in progress". UD is able to ping the remote server; that's why the 'Mount' button is active. I read some comments about port 445 and firewalls with Strato. You should investigate that.
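To check the port-445 angle from the Unraid console, a raw TCP probe is enough. A sketch using bash's /dev/tcp redirection; the helper name is mine:

```shell
# Returns 0 only if TCP port 445 on host $1 accepts a connection within
# 5 seconds. A firewall blocking 445 makes this fail even though ping
# (which is all the 'Mount' button tests) still succeeds.
check_smb_port() {
    timeout 5 bash -c "exec 3<>/dev/tcp/$1/445" 2>/dev/null
}

# e.g. check_smb_port cifs.hidrive.strato.com || echo "445 blocked"
```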
  18. Please try this. I've added the uid and gid. See if this causes it to not work.

     mount -t cifs -v -o rw,noserverino,nounix,iocharset=utf8,file_mode=0777,dir_mode=0777,uid=99,gid=100,credentials='/tmp/stratocreds' '\\CIFS.HIDRIVE.STRATO.COM/root' /mnt/strato
  19. When you mount the remote cifs device on another server, how long does it take to mount? Can you show the mount parameters on the other server so I can see what might be different?
  20. Cloud storage? Has it worked before? Did something change recently? It seems to be taking more than 10 seconds to connect, and I would think 10 seconds is enough time.
  21. I tested this on my ASUS RT-5300 router and I see the same. It does not want to mount SMB v2.1. I too have the latest version.