dlandon

Community Developer
  • Posts: 10,398
  • Joined
  • Last visited
  • Days Won: 20

Everything posted by dlandon

  1. When it has its own IP address, port 443 will not conflict with other dockers because they are on different IP addresses. Yes, it is expected that port 443 is used.
  2. Then give Zoneminder a static IP address. You won't have the 443 conflict, and all ports will be open.
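     If it helps, here is a minimal sketch of assigning a dedicated IP from the command line with a macvlan network. The subnet, gateway, parent interface, network name, IP, and image name are all assumptions; substitute your own. On Unraid you would normally just set 'Network Type' to a custom network in the docker template and fill in a fixed IP:

        # Create a macvlan network on the LAN (assumed subnet and interface).
        docker network create -d macvlan \
          --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
          -o parent=eth0 homenet

        # Run Zoneminder on its own static IP so its port 443 cannot
        # collide with other dockers on the host IP.
        docker run -d --name Zoneminder --network=homenet \
          --ip=192.168.1.50 dlandon/zoneminder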
  3. Change 'Network Type' to 'Host' and see if that helps.
  4. Plugin is alive and well. There has been nothing to work on for a while.
  5. Working fine for me. Remove the /flash/config/plugins/file.activity/ folder and try again.
  6. Enter the IP address in the ownCloud config.
  7. I did the face recognition separately because it loads a lot into the docker, and those not using it can save some docker space. This sucker is getting a bit bloated. Just a heads up: if you set INSTALL_HOOK to "1" and run the docker, and then set INSTALL_FACE to "1" and restart the docker, the face recognition won't load. It's best to set both to "1" before you run the docker the first time. You can recover from this by forcing a re-load of the docker with both set, and it will load everything.
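     As a hedged example, this is roughly what the first run looks like from the command line with both variables set. The appdata path and image name here are assumptions; on Unraid these are normally set as docker template variables:

        # First run with both install flags set so the hook code and
        # face recognition are both loaded.
        docker run -d --name Zoneminder \
          -e INSTALL_HOOK="1" \
          -e INSTALL_FACE="1" \
          -v /mnt/user/appdata/zoneminder:/config \
          dlandon/zoneminder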
  8. I just updated the docker to include the zmes_hook_helpers files. Please update and let me know if there are any other adjustments I need to make.
  9. I figured out there was something missing. I'm working on an update to correct this. Once I get it ready, please check it out and let me know if there are any other changes. You'll have to add the helper files to the /config/hook/ folder. Update the docker without the INSTALL_HOOK variable, then copy the helper files to the /hook/zmes_hook_helper/ folder, then restart the docker with INSTALL_HOOK="1". It should be up and running at that point. Edit: The zmes_hook_helpers files are included in the docker now.
  10. So I need to create a /config/hook/zme_hook_helpers/ folder and copy those contents to /usr/bin/? Do I copy the files or the entire folder? i.e. copy /config/hook/zme_hook_helpers/ /usr/bin/zme_hook_helpers/ Edit: Looks like it is zmes_hook_helpers. Give me the copy command you used to copy the files.
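     For what it's worth, copying the entire folder would look something like this (hypothetical paths based on the discussion above; -r recurses into the folder so it lands as /usr/bin/zmes_hook_helpers/):

        # Copy the whole helper folder, preserving its directory name.
        cp -r /config/hook/zmes_hook_helpers /usr/bin/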
  11. Yes. It should accomplish this for you. Let me know if there are any adjustments required.
  12. Not really. The files have to be deleted with an SMB client like Windows. Deleting them with Linux won't save them in the recycle bin.
  13. I assume you are saying that your dockers are getting hung up when a remote server goes offline. UD is not a service and has no involvement in mounted devices and remote shares once they are mounted. It is only a UI that manages mounting, unmounting, and monitoring unassigned devices and remote mounts. The reason UD looks like it is hanging up your system is that it can't perform some of the work needed to monitor the devices. It does eventually time out if given the chance, but it can't control how long it takes to get the status of the mounted devices. It is not 'hanging up' your dockers; your dockers are hanging themselves. Stop the offending dockers and UD should become responsive again.
      There is very little I can do in UD to handle a CIFS remote mounted share when the server is offline. That capability is built into the Linux kernel, and all I can do is possibly adjust the mount options (see the sketch below). I've done about as much as I can to help with this situation.
      Remote mounting a share for a docker is probably going to cause a lot of issues like this when the remote server goes offline. I'm unclear why you are remote mounting the shares in the first place. So you can have a local mount point for the docker? You should probably find a better way to do this. Move the data to the local server? Posting your diagnostics would help me see where you are having problems.
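      For illustration only, this is the kind of mount-option adjustment that is possible: mounting a CIFS share 'soft' so that I/O errors are returned instead of blocking indefinitely when the server disappears. The server, share, mount point, and credentials below are placeholders:

        # 'soft' returns errors to the caller instead of retrying forever
        # when the remote server stops responding.
        mount -t cifs //remoteserver/share /mnt/remotes/share \
          -o username=user,password=pass,soft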
  14. There have been quite a few changes to zmeventnotification.ini over the last few releases of zmeventnotification. You should review the default .ini file and make appropriate changes to your .ini file if you are not using the default .ini.
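      A quick way to spot what changed is to diff your working .ini against the shipped default; the paths here are assumptions for the sake of the example:

        # Compare your customized config against the shipped default.
        diff /config/zmeventnotification.ini /config/zmeventnotification.ini.default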
  15. My guess is that your dockers are locking up when they lose access to the shares.
  16. Yes. UD will eventually time out when the remote shares are not accessible; how long depends on how many remote shares you have mounted. The issue with the dockers hanging has nothing to do with UD. I can imagine they struggle a lot when the remote mounts go away. Show a screenshot of the UD webgui so I can see what you are mounting remotely. Why so much remote mounting? You are probably asking for problems like this when your network goes down or the remote server is rebooted. Why not move some of the data to the local server?
  17. Be sure you have the latest version of UD installed. Your server should not lock up when remote shares go offline. UD is probably very busy trying to deal with the preclear status updates and your remote shares going offline. Remove the preclear plugin and try again. Post your diagnostics; you can get them from the command line with 'diagnostics'.
  18. I appreciate what it is you are looking for and can understand your frustration. This plugin is offered to help recover deleted files. As you may have seen, the file versions are noted by a copy number. It would be nice to have a recover button to restore the file back to its original location, but that would take a major effort in creating a file browser with that capability. I volunteer my efforts and do not get paid for my time. It's not an undertaking I am interested in investing in for the few times it would be used. While it is a bit cumbersome to have to download the file and copy it to the restored location, this is not an everyday operation.
  19. Thought I had all this worked out. Fixed in the next release.
  20. You didn't read the first post. Start over. This is the setup for your configuration. This is the initial login setup.
  21. Here you go.

          #!/bin/bash
          #
          # Backup MyComputer desktop VM.
          #
          TIMEOUT=${TIMEOUT:-300}

          #
          # Stop the VM.
          #
          echo "Shutdown MyComputer VM"
          virsh shutdown MyComputer

          echo "Waiting for MyComputer VM to shutdown..."
          count=0
          while [ $(/usr/sbin/virsh list --name --state-running | grep "MyComputer" | wc -l) -gt "0" ]; do
              if [ "$count" -ge "$TIMEOUT" ]; then
                  break
              fi
              count=$(expr $count + 1)
              sleep 1
          done

          if [ $(/usr/sbin/virsh list --name --state-running | grep "MyComputer" | wc -l) -gt "0" ]; then
              echo "MyComputerVM did not shut down"
              /usr/local/emhttp/webGui/scripts/notify -e "Unraid Server Notice" -s "VM Backup" -d "MyComputer VM Backup failed" -i "normal"
              exit 1
          else
              echo "MyComputerVM successfully shut down"
          fi

          #
          # Copy VM to array Computer Backups.
          #
          echo "Copy MyComputerVM to array..."
          rsync -a -v --delete /mnt/user/system/domains/MyComputer /mnt/user/Computer\ Backups/VM_BACKUPS/

          #
          # Start the VM.
          #
          echo "Start MyComputer VM..."
          virsh start MyComputer
          /usr/local/emhttp/webGui/scripts/notify -e "Unraid Server Notice" -s "VM Backup" -d "MyComputer VM Backup completed" -i "normal"
          echo "MyComputerVM Backup Done"

      It shuts down the VM normally and waits a certain amount of time to be sure it has stopped, backs up the VM to the array, and then restarts the VM. This performs a full shutdown of the VM and not a hibernate. Change 'MyComputer' to the name of your VM. You'll have to adjust the rsync destination to your NFS share. Over the network, a sizable VM could take a while.
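      If you want this to run on a schedule, the User Scripts plugin can handle it, or a plain cron entry works; a sketch assuming the script is saved as /boot/scripts/backup_vm.sh and made executable:

        # Run the backup nightly at 3 AM (hypothetical script location).
        0 3 * * * /boot/scripts/backup_vm.sh >> /var/log/vm_backup.log 2>&1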