
dlandon

Community Developer
Everything posted by dlandon

  1. What you want is to have your script run before VMs and Dockers are shut down. Did you check the log to verify that the remote share unmounted properly?
  2. Post in the User Scripts forum and ask whether that is the right point to apply the script. It may be too late in the shutdown sequence to do what you want.
  3. You don't uninstall UD. UD just unmounts all devices in the shutdown sequence. Try setting up a user script that runs when the 'At Stopping of Array' event occurs, with this command:

     /usr/local/sbin/rc.unassigned umount //SERVER/share

     This will unmount the remote share '//SERVER/share' and should occur before the VM is shut down. The '//SERVER/share' is the SOURCE shown in UD. (A minimal user-script sketch appears after this list.)
  4. UD has no way of knowing when a particular VM is started and that it needs to wait for that VM. The shutdown process is controlled by Unraid, and UD devices are unmounted as one of the last steps. VMs and Dockers are shut down first. This is done because when a user has a VM or their Dockers on a UD disk, it can't be unmounted until the VMs and Dockers are shut down. You'll have to find another way of doing this.
  5. Yes, if the router shares them with SMB. UD can mount them as remote shares. No. You can only add physical disks to the array.
  6. Your disk issues are probably caused by a cable problem, which is why you don't see anything in the SMART report.
  7. No. The FFmpeg in the Zoneminder docker is the version built into Ubuntu. You would be better served by posting all of this on the Zoneminder forum, where you can get the attention of the Zoneminder developers and the ES author.
  8. A) Nothing has changed. B) The CUDA version depends on your NVIDIA driver and CUDA's support for it. I have no experience with CUDA. I put together the script with input from pliablepixels (the author of ES). You should go to the Zoneminder forums and discuss your issue there.
  9. You are having some disk errors:

     Nov 14 16:18:28 STORAGE kernel: ata4.00: exception Emask 0x10 SAct 0xfe00 SErr 0x190002 action 0xe frozen
     Nov 14 16:18:28 STORAGE kernel: ata4.00: irq_stat 0x80400000, PHY RDY changed
     Nov 14 16:18:28 STORAGE kernel: ata4: SError: { RecovComm PHYRdyChg 10B8B Dispar }
     Nov 14 16:18:28 STORAGE kernel: ata4.00: failed command: READ FPDMA QUEUED
     Nov 14 16:18:28 STORAGE kernel: ata4.00: cmd 60/40:48:38:b6:2a/05:00:d5:00:00/40 tag 9 ncq dma 688128 in
     Nov 14 16:18:28 STORAGE kernel: ata4.00: status: { DRDY }
     Nov 14 16:18:28 STORAGE kernel: ata4.00: failed command: READ FPDMA QUEUED
     Nov 14 16:18:28 STORAGE kernel: ata4.00: cmd 60/40:50:78:bb:2a/05:00:d5:00:00/40 tag 10 ncq dma 688128 in
     Nov 14 16:18:28 STORAGE kernel: ata4.00: status: { DRDY }
     Nov 14 16:18:28 STORAGE kernel: ata4.00: failed command: READ FPDMA QUEUED
     Nov 14 16:18:28 STORAGE kernel: ata4.00: cmd 60/40:58:b8:c0:2a/05:00:d5:00:00/40 tag 11 ncq dma 688128 in
     Nov 14 16:18:28 STORAGE kernel: ata4.00: status: { DRDY }
     Nov 14 16:18:28 STORAGE kernel: ata4.00: failed command: READ FPDMA QUEUED
     Nov 14 16:18:28 STORAGE kernel: ata4.00: cmd 60/d0:60:f8:c5:2a/00:00:d5:00:00/40 tag 12 ncq dma 106496 in
     Nov 14 16:18:28 STORAGE kernel: ata4.00: status: { DRDY }
     Nov 14 16:18:28 STORAGE kernel: ata4.00: failed command: READ FPDMA QUEUED
     Nov 14 16:18:28 STORAGE kernel: ata4.00: cmd 60/f0:68:c8:c6:2a/04:00:d5:00:00/40 tag 13 ncq dma 647168 in
     Nov 14 16:18:28 STORAGE kernel: ata4.00: status: { DRDY }
     Nov 14 16:18:28 STORAGE kernel: ata4.00: failed command: READ FPDMA QUEUED
     Nov 14 16:18:28 STORAGE kernel: ata4.00: cmd 60/40:70:b8:cb:2a/05:00:d5:00:00/40 tag 14 ncq dma 688128 in
     Nov 14 16:18:28 STORAGE kernel: ata4.00: status: { DRDY }
     Nov 14 16:18:28 STORAGE kernel: ata4.00: failed command: READ FPDMA QUEUED
     Nov 14 16:18:28 STORAGE kernel: ata4.00: cmd 60/40:78:f8:d0:2a/05:00:d5:00:00/40 tag 15 ncq dma 688128 in
     Nov 14 16:18:28 STORAGE kernel: ata4.00: status: { DRDY }
     Nov 14 16:18:28 STORAGE kernel: ata4: hard resetting link
     Nov 14 16:18:29 STORAGE kernel: ata4: SATA link down (SStatus 0 SControl 300)
     Nov 14 16:18:34 STORAGE kernel: ata4: hard resetting link
     Nov 14 16:18:35 STORAGE kernel: ata4: SATA link down (SStatus 0 SControl 300)
     Nov 14 16:18:37 STORAGE kernel: ata4: hard resetting link
     Nov 14 16:18:47 STORAGE kernel: ata4: softreset failed (device not ready)
     Nov 14 16:18:47 STORAGE kernel: ata4: hard resetting link
     Nov 14 16:18:51 STORAGE kernel: ata4: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
     Nov 14 16:18:51 STORAGE kernel: ata4.00: configured for UDMA/133
     Nov 14 16:18:51 STORAGE kernel: ata4: EH complete

     I'm not exactly sure what you have done, but it looks like preclear is running on that disk at the same time you are trying to format it:

     Nov 15 18:20:51 STORAGE unassigned.devices: Error: shell_exec(/sbin/blockdev --getsz /dev/sdi | /bin/awk '{ print }' 2>/dev/null) took longer than 2s!
     Nov 15 18:25:17 STORAGE preclear.disk: Pausing preclear of disk 'sdi'
     Nov 15 18:25:21 STORAGE preclear.disk: Resuming preclear of disk 'sdi'
     Nov 15 18:25:58 STORAGE preclear.disk: Pausing preclear of disk 'sdi'
     Nov 15 18:26:04 STORAGE preclear.disk: Resuming preclear of disk 'sdi'
     Nov 15 18:30:20 STORAGE preclear.disk: Pausing preclear of disk 'sdi'
     Nov 15 18:30:23 STORAGE preclear.disk: Resuming preclear of disk 'sdi'
     Nov 15 18:31:48 STORAGE preclear.disk: Pausing preclear of disk 'sdi'
     Nov 15 18:31:48 STORAGE preclear.disk: Resuming preclear of disk 'sdi'
     Nov 15 18:43:48 STORAGE preclear.disk: Pausing preclear of disk 'sdi'
     Nov 15 18:43:50 STORAGE preclear.disk: Resuming preclear of disk 'sdi'
     Nov 15 18:45:30 STORAGE preclear.disk: Pausing preclear of disk 'sdi'
     Nov 15 18:45:37 STORAGE preclear.disk: Resuming preclear of disk 'sdi'
     Nov 15 18:49:41 STORAGE preclear.disk: Pausing preclear of disk 'sdi'
     Nov 15 18:49:44 STORAGE preclear.disk: Resuming preclear of disk 'sdi'
     Nov 15 18:50:01 STORAGE preclear.disk: Pausing preclear of disk 'sdi'
     Nov 15 18:50:06 STORAGE preclear.disk: Resuming preclear of disk 'sdi'
     Nov 15 18:50:47 STORAGE preclear.disk: Pausing preclear of disk 'sdi'
     Nov 15 18:50:50 STORAGE preclear.disk: Resuming preclear of disk 'sdi'
     Nov 15 18:51:10 STORAGE preclear.disk: Pausing preclear of disk 'sdi'
     Nov 15 18:51:19 STORAGE preclear.disk: Resuming preclear of disk 'sdi'

     If you are preclearing any disks, let them finish. Then remove the preclear plugin, reboot, and see if you can format the disk. Watch the log for the disk errors and solve that (a log-watching command is sketched after this list).
  10. I am releasing a new version of UD today that should stop those php errors. What I meant was take the USB flash drive to a Windows computer and check the file system using Windows.
  11. I'm not sure I understand what you are doing here. How are you doing that? UD will mount a remote share if you turn on the 'Auto Mount' switch. Is the VM not mounting it, or is it the UD mount that's the problem?
  12. Can you post diagnostics? It looks like UD cannot get the disk information. All the '-' indicate that something is missing and UD just filled in a '-'.
  13. Your disk is mounted fine. The issue is with remote shares and maybe iso mounts. This will occur if the configuration files are corrupted. Delete the following files and then reboot (see the sketch after this list):

      /flash/config/plugins/unassigned.devices/samba_mount.cfg
      /flash/config/plugins/unassigned.devices/iso_mount.cfg

      If the problem persists, you need to remove your Unraid flash drive, check it on a computer, and fix any file problems. Once that is cleared up, we can take a look at why your app can't see the mounted nvme disk.
  14. I don't think the disk is going offline. I don't see anything in the log that indicates it went offline. It looks like UD is throwing php errors when getting the unassigned disks. I've fixed the php errors; the fix will be in the next release. You won't see the errors, but if the process of getting the unassigned disks fails, none of your disks will show. I suspect a problem with the disk information when a disk is hot plugged. Does this issue show up at random, or is there an event, like a disk being hot plugged, after which the problem shows? Start with your server freshly booted and wait for the issue to happen. When it does, post here and I'll walk you through some troubleshooting steps.
  15. The automount of disks is done on the server event "disks_mounted" so UD disks are mounted when VMs and Docker containers are started. The automount of remote shares is done on the server event "started" so the network is available and the mounts will be successful. When you say a user script is not ideal, what does that mean?
  16. All UD mounts will remain available in /mnt/disks. I will be implementing /mnt/remotes for remote shares, with symlinks to /mnt/disks so nothing will have to change. The use of the /mnt/remotes mount point will be optional.
  17. Maybe your script should be written with the mount point as a global variable so it can be changed in one place without rewriting the script. I'd suggest you do that for future compatibility (see the sketch after this list).
  18. UD will think any disk not assigned to the array (data, parity, or cache) is a UD device. In this case you should be able to mark the disk as passed through but because the disk doesn't seem to show a serial number, UD doesn't enable that switch. The php error indicates a potential problem with the serial number. That has come up before when there is a character in the serial number like a quote. You have an older version of UD. Update UD and then post diagnostics.
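
A minimal sketch of the user script from item 3, assuming the remote share source is '//SERVER/share' (substitute the SOURCE shown on the UD page) and that the script is set to run on the 'At Stopping of Array' event:

    #!/bin/bash
    # Runs on the 'At Stopping of Array' event (User Scripts plugin).
    # '//SERVER/share' is a placeholder; use the SOURCE shown in UD.
    /usr/local/sbin/rc.unassigned umount //SERVER/share
    rc=$?

    # Record the result in the syslog so you can confirm the unmount
    # happened before the VM was shut down.
    logger "user script: UD unmount of //SERVER/share returned $rc"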
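
For watching the log as suggested at the end of item 9, one option from the console is to follow the syslog and filter for the affected port (ata4 in the log above; adjust the pattern to match your own errors):

    # Follow the system log and show only lines mentioning ata4 or 'error';
    # press Ctrl-C to stop.
    tail -f /var/log/syslog | grep -iE 'ata4|error'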
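
The file deletion from item 13 can be done from the console. This is a sketch using the paths given in that post, and it assumes you reboot immediately afterwards so UD rebuilds clean configuration files:

    # Delete the two UD configuration files named in item 13, then reboot.
    rm -f /flash/config/plugins/unassigned.devices/samba_mount.cfg
    rm -f /flash/config/plugins/unassigned.devices/iso_mount.cfg
    reboot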
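
A sketch of the suggestion in item 17; the share name and the rsync destination are hypothetical and only illustrate defining the mount point once:

    #!/bin/bash
    # Define the mount point in one place; if UD later moves remote shares
    # to /mnt/remotes, only this line needs to change.
    MOUNT_POINT="/mnt/disks/SERVER_share"

    # Example use further down in the script:
    rsync -a "$MOUNT_POINT/" /mnt/user/backups/share/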