Everything posted by dlandon

  1. All devices in the pool must have the same mount point. With the devices unmounted, click on the mount point for each one and set them all to the same name. UD will mount an existing pool, but pool management is beyond UD's scope.
  2. Probably. Just clear the disk; don't do an erase. Erase is not a preclear.
  3. That's a waste of time and doesn't solve anything. It's telling you that you have a credentials issue.
  4. While on the UD Preclear page, click on the Help icon or press F1. It will explain the different options of the script. Erase does not write zeroes to the disk. It writes random patterns so a disk can be disposed of. Only run 'Clear Disk' once and you will be good to go.
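     To illustrate the difference, here is a minimal sketch only (this is not the preclear script, which also does pre-reads, post-read verification, and SMART checks; /dev/sdX is a placeholder and both commands destroy all data on the disk):
     # "Clear Disk" style: write zeroes across the whole device
     dd if=/dev/zero of=/dev/sdX bs=1M status=progress
     # "Erase" style: write random data so the disk can be disposed of
     dd if=/dev/urandom of=/dev/sdX bs=1M status=progress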
  5. This is a waste of your time and very rarely fixes anything. The disk is failing because of this:
     Apr 8 23:45:16: Error: shell_exec(/usr/sbin/smartctl --info --attributes -d auto '/dev/sdh' 2>/dev/null) took longer than 10s!
     Apr 09 23:46:19 preclear_disk_ZA1JXPN2_3743: Post-Read: dd command failed, exit code [141].
     Apr 09 23:46:19 preclear_disk_ZA1JXPN2_3743: Post-Read: dd output: 33827061760 bytes (34 GB, 32 GiB) copied, 141.91 s, 238 MB/s
     Apr 09 23:46:19 preclear_disk_ZA1JXPN2_3743: Post-Read: dd output: 17442+0 records in
     Apr 09 23:46:19 preclear_disk_ZA1JXPN2_3743: Post-Read: dd output: 17441+0 records out
     So, you are running the "Clear Disk" option and the disks are failing when they are verified? I'm not a disk expert, but my first impression is a disk cabling or controller issue. Post the full SMART report for one of the disks that is failing. Also, if you are running more than one preclear at a time, run only one and see if there is a difference.
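     If it helps, one way to capture the full report from the command line (assuming /dev/sdh is the failing disk; adjust the device name for your system):
     # full SMART report (info, attributes, and self-test log) saved to a file you can attach
     smartctl -a /dev/sdh > smart_sdh.txt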
  6. Does the log show any deleted files? Post your diagnostics.
  7. That's a lot cleaner also. NFSv4 might be better at handling this situation.
  8. The unmount is failing with a mount timeout:
     Apr 9 17:21:52 unRAID unassigned.devices: Unmount cmd: /sbin/umount -fl '10.253.0.2:/mnt/user/ricketts' 2>&1
     Apr 9 17:22:22 unRAID unassigned.devices: Error: shell_exec(/sbin/umount -fl '10.253.0.2:/mnt/user/ricketts' 2>&1) took longer than 30s!
     Apr 9 17:22:22 unRAID unassigned.devices: Unmount of 'ricketts' failed: 'command timed out'
     The '-fl' option does a force lazy unmount on an NFS mount. The idea is to force the unmount even if the remote server is off-line. It appears that is not working, and from all the research I've done, it is supposed to. I don't have an answer.
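     If you want to experiment manually, this is the same operation UD is attempting (a sketch; use the export shown in your own log):
     # -f forces the unmount; -l detaches the mount point immediately and cleans up
     # references later, so in theory it should not hang on an unreachable NFS server
     umount -f -l '10.253.0.2:/mnt/user/ricketts'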
  9. Maybe this will help: https://www.thomasmaurer.ch/2019/09/how-to-enable-ping-icmp-echo-on-an-azure-vm/
  10. Several things I see:
      Apr 1 13:29:12 unRAID kernel: eth0: renamed from veth47e1ca0
      Apr 1 22:00:32 unRAID rsync[13809]: connect from 192.168.0.12 (192.168.0.12)
      Apr 1 22:00:32 unRAID rsync[13810]: connect from 192.168.0.12 (192.168.0.12)
      Apr 1 22:00:37 unRAID rsyncd[13809]: name lookup failed for 192.168.0.12: Name or service not known
      Apr 1 22:00:37 unRAID rsyncd[13809]: connect from UNKNOWN (192.168.0.12)
      Apr 1 22:00:37 unRAID rsyncd[13810]: name lookup failed for 192.168.0.12: Name or service not known
      Apr 1 22:00:37 unRAID rsyncd[13810]: connect from UNKNOWN (192.168.0.12)
      Apr 1 22:00:37 unRAID rsync[13854]: connect from 192.168.0.12 (192.168.0.12)
      Apr 1 22:00:42 unRAID rsyncd[13854]: name lookup failed for 192.168.0.12: Name or service not known
      Apr 1 22:00:42 unRAID rsyncd[13854]: connect from UNKNOWN (192.168.0.12)
      Apr 1 22:00:42 unRAID rsyncd[13854]: rsync allowed access on module backups from UNKNOWN (192.168.0.12)
      Apr 1 22:00:44 unRAID rsyncd[13854]: rsync to [email protected] from UNKNOWN (192.168.0.12)
      Apr 1 22:00:44 unRAID rsyncd[13854]: receiving file list
      Apr 1 22:00:44 unRAID rsync[13924]: connect from 192.168.0.12 (192.168.0.12)
      You need to stop your rsync if the remote goes off-line. It's spamming the log. You are using NFSv3. NFSv4 is more robust. Upgrade to 6.10 and enable NFSv4 in UD Settings. I don't see any UD unmount attempts in the log. There isn't an Unraid shutdown in the log.
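      To confirm which NFS version your remote shares are actually mounted with, you can check from the command line (a sketch; nfsstat comes from nfs-utils):
      # each NFS mount is listed with its negotiated options -- look for vers=3 or vers=4.x
      nfsstat -m
      # the same information is visible in the mount table
      mount | grep ' type nfs'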
  11. The remote server must respond to a ping or UD thinks it is off-line and marks the share as off-line.
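      A quick manual check (a minimal sketch; replace 192.168.1.100 with your remote server's address):
      # one ping with a 2 second timeout -- if this fails, UD will treat the server as off-line
      ping -c 1 -W 2 192.168.1.100 && echo "server answers ping" || echo "no ping reply"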
  12. Edit the ownCloud Docker template and select php 7.3 or 7.4. 7.4 would be preferable to prevent this in the future.
  13. What version of Unraid are you using? If 6.10, are you using NFSv3 or NFSv4?
  14. When it happens again, check the disk ball. If it is grayed out, the remote server is not responding to a ping and UD thinks it's off-line.
  15. When the remote server goes off-line, the check for size, used, and free space fails and zeroes are returned. That's why you see 0 bytes for the size. This is generally due to the remote server going off-line or a network issue. Post diagnostics the next time it happens, before you reboot.
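      You can reproduce the same check manually (a sketch, not necessarily exactly what UD runs internally; the mount point is an example of a UD remote share under /mnt/remotes):
      # with the remote server reachable this prints real size/used/free numbers;
      # with it off-line the command errors or hangs, and UD falls back to zeroes
      df -h /mnt/remotes/SERVER_share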
  16. It doesn't appear that there is anything for me to fix here. You need to understand how the age days work. In your case, aging files at 4 days actually removes files older than 4*24 hours, not files from 4 calendar days ago. If the cron runs at 3:00 AM, any files older than 4*24 hours will be removed. A file that was deleted 4 days ago at 3:30 AM would not be aged at the 3:00 AM run because it is less than 4*24 hours old, even though it was deleted 4 calendar days ago. If you want better granularity, run the cron every hour. That will remove any files older than 4*24 hours every hour.
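      A hedged illustration of that logic (this is not the plugin's actual script; the path and script name are examples):
      # crontab entry: run the age check at the top of every hour instead of once a day
      # 0 * * * *  /usr/local/sbin/age_recycle_bin.sh
      # the age check itself: remove files whose age exceeds 4*24 hours (5760 minutes),
      # measured from "now" rather than from calendar days
      find /mnt/user/.Recycle.Bin -type f -mmin +$((4*24*60)) -delete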