dlandon
Community Developer
  • Posts: 10,381
  • Days Won: 20

Everything posted by dlandon

  1. There was at least one version of UD that had a bug that showed 'Array' when it shouldn't. Updating UD fixed that bug.
  2. The only change was to respect the NFS version setting in the UD Settings. Just switch that to NFSv3 and it will mount them the same as before the change. This really surprises me though, because NFSv3 is known to cause the issues you are describing.
  3. You won't be able to do anything until you clear the 'Array' status. Reboot and it should clear up. Those disks were probably not unmounted before being removed, or they were part of the array and fell out of the array.
  4. Ok, that shows that there are no deleted files being logged. Show the output of this command:

     cat /var/log/samba/log.smbd

     This will show if any logging is being done at all.
  5. I found a bug in UD that was preventing NFSv4 from working. That's why I thought you were only using NFSv3. I just released a fix. I suggested this fix because I wanted to be sure there was no interaction between the NFS and CIFS mounts. I totally get it, but I can't explain the log entries you are getting. There is very little information available about this log message, except that it has something to do with an interrupted system call. You might check to be sure your network is not having any issues. Are you using /mnt/remotes/ or the SMB share to transfer data?
  6. I'm having a little trouble understanding why you are cross mounting root shares, but for starters you need to be using NFSv4 and not NFSv3. The mount command in your log is for NFSv3:

     Apr 11 17:01:43 9900K unassigned.devices: Mount NFS command: /sbin/mount -t 'nfs' -o rw,noacl 'NAS1:/mnt/user/Media2' '/mnt/remotes/NAS1_Media2'

     Set NFSv4 in the UD Settings, then unmount and remount your NFS shares.
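After remounting, the mount table shows which NFS protocol version is actually in use. A minimal sketch of checking it; the host, share, and option string below are hypothetical stand-ins for whatever `mount -t nfs,nfs4` prints on your system:

```shell
# Hypothetical mount-table entry, as printed by `mount -t nfs,nfs4`;
# your actual host and share names will differ.
line="NAS1:/mnt/user/Media2 on /mnt/remotes/NAS1_Media2 type nfs4 (rw,vers=4.2,noacl)"

# Pull the vers= option out of the parenthesized option list.
vers=$(echo "$line" | grep -o 'vers=[0-9.]*' | cut -d= -f2)
echo "NFS protocol version: $vers"   # prints: NFS protocol version: 4.2
```

A `type nfs4` filesystem with `vers=4.x` confirms the NFSv4 setting took effect; `type nfs` with `vers=3` means the share is still mounted the old way.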
  7. Show the output of this command:

     cat /var/log/samba/log.smbd | grep unlink
  8. Your USB disconnected and Linux assigned it a new designation, thinking it was a new disk. The UD update did not have anything to do with the issue. UD does not take over any disks. Unraid did not see it as the flash drive, so it ended up as unassigned and UD then shows it.
  9. If I recall, changing the mount point on the primary pool device will change them all.
  10. Did you see my suggestion about removing all the historical information for the devices? Enter the disk label as the mount point. A pool of devices is where all the disks have the same label and the same UUID. Because UD isn't pool aware, you have to trick it into finding the pool.
  11. It probably works, but you should not assume [global]. Add the [global] tag ahead of your settings.
  12. Did you copy your old configuration to the new disk?
  13. I'm not sure of your sequence of events, but probably. Look in the log after your unmount attempts and see if it's because the mounts are busy.
  14. In order to support legacy devices using SMB2 and connecting to Unraid shares, the implementation of these security settings will have to be configurable. Because of the desire to get 6.10 released, it is being held up for now. For the time being, you can put those settings with a [global] tag in smb-extra.conf.
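A minimal sketch of adding settings under an explicit [global] tag. The Samba option shown is illustrative (not the exact settings held back from 6.10), and the example writes to a temp file so it is self-contained; on Unraid the target would be /boot/config/smb-extra.conf:

```shell
# Sketch: put the settings under an explicit [global] tag instead of
# assuming one. On Unraid the real target is /boot/config/smb-extra.conf;
# a temp file is used here so the example is self-contained, and
# "server min protocol" is just an illustrative Samba option.
conf=$(mktemp)
cat >> "$conf" <<'EOF'
[global]
   server min protocol = SMB2
EOF
head -n 1 "$conf"   # prints: [global]
```

Settings placed before any section tag land in an implicit [global], but spelling the tag out keeps them from silently attaching to whatever share section another include happens to open first.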
  15. Try blanking the mount point and it should pick up the default pool label. If that doesn't work, remove all the pool devices and delete each one in Historical devices. Then re-install them. Edit: The mount point has to be the disk label on the pool devices.
  16. All devices in the pool must have the same mount point. Click on the mount point when the devices are unmounted and make them all the same. UD will mount an existing pool, but pool management is beyond UD's scope.
  17. Probably. Just clear the disk; don't do an erase. Erase is not a preclear.
  18. That's a waste of time and doesn't solve anything. It's telling you that you have a credentials issue.
  19. While on the UD Preclear page, click on the Help icon or press F1. It will explain the different options of the script. Erase does not write zeroes to the disk. It writes random patterns so a disk can be disposed of. Only run 'Clear Disk' once and you will be good to go.
  20. This is a waste of your time and very rarely fixes anything. The disk is failing because of this:

      Apr 8 23:45:16: Error: shell_exec(/usr/sbin/smartctl --info --attributes -d auto '/dev/sdh' 2>/dev/null) took longer than 10s!
      Apr 09 23:46:19 preclear_disk_ZA1JXPN2_3743: Post-Read: dd command failed, exit code [141].
      Apr 09 23:46:19 preclear_disk_ZA1JXPN2_3743: Post-Read: dd output: 33827061760 bytes (34 GB, 32 GiB) copied, 141.91 s, 238 MB/s
      Apr 09 23:46:19 preclear_disk_ZA1JXPN2_3743: Post-Read: dd output: 17442+0 records in
      Apr 09 23:46:19 preclear_disk_ZA1JXPN2_3743: Post-Read: dd output: 17441+0 records out

      So you are running the "Clear Disk" option, and the disks are failing during verification? I'm not a disk expert, but my first impression is a disk cabling or controller issue. Post the full SMART report for one of the disks that is failing. Also, if you are running more than one preclear at a time, run only one and see if there is a difference.
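One detail worth decoding from that log: when a shell reports an exit code above 128, it is 128 plus the number of the signal that killed the process. A quick sketch for the dd failure above:

```shell
# dd exited with code 141; shells report 128 + N when a process dies
# from signal N, so 141 decodes to signal 13 (SIGPIPE).
code=141
sig=$((code - 128))
echo "dd was killed by signal $sig"   # prints: dd was killed by signal 13
kill -l "$sig"                        # in bash, prints the name: PIPE
```

A SIGPIPE death means dd's reader went away mid-stream, which fits the read side of the transfer stalling (cabling/controller) rather than dd itself misbehaving.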
  21. Does the log show any deleted files? Post your diagnostics.