wirenut

Everything posted by wirenut

  1. Thank YOU! This was the solution. Followed the steps in the link, and all dockers re-installed properly. This has been frustrating me for a while now.
  2. Update went smoothly. Booted right up, and the System Drivers screen opens right up again. I still have the situation where my docker container shows an update available after updating, but that issue is posted in the docker engine thread.
  3. Updated Unraid to 6.12.10; the situation persists.
  4. Are you running Avast on a machine that's on the same network as your unRaid server?
  5. If there are any logs or other info I can provide to help troubleshoot this, let me know.
  6. Just read this and thought, huh, let me try. So I did, and I have the same result.
  7. Built in, installed via Community Apps: binhex - Krusader and now binhex - preclear. I've also had the same issue with the linuxserver - tautulli docker, but that one will finish correctly after 2 or 3 tries when it happens.
  8. On 3/18/2024 at 8:41 AM, binhex said: sorry dude but there is nothing i can do about this as it's an unraid related issue, the image is accessible and for me i cannot reproduce the issue you are seeing. you could try deleting the container completely, then re-create it, this may fix it *shrug*. An FYI: with the update last night the situation has returned. Interestingly, it also now affects your preclear docker for me. Over 200 views on my docker engine post, but unfortunately no help. For what it's worth, I have updated to Unraid 6.12.9. If you have any other ideas I could try, suggestions are appreciated.
  9. Future version of which, docker or Unraid? Because I have updated to Unraid 6.12.9 and the issue persists. As of today, I now have the issue with two containers. This is definitely not something I'm doing, as I get a notification that auto update has done its thing overnight as scheduled, then get a notification in the morning that an update is available. I log into the server, go to the docker tab, run the update, and get the same issue. So the docker page isn't open for any period of time other than to run the update.
  10. Well... thanks again. I removed the container, deleted the appdata folder, and installed fresh. The whole process took about 10 minutes to complete, but it installed and is no longer showing an update ready. I'll see how it goes as time passes. Attached is the info from the install command window in case it's of some use to you. And again, thank you! binhex-krusader install 2024.03.18.txt
  11. Thanks for the suggestion, but it's been there a few days with over 100 views showing and not one reply. I see you mention a couple of posts up that a new build is coming. I guess I will wait for that and see if it changes anything for me.
  12. I also have a docker container that persistently has this issue. I posted in the docker support thread and was assured that the container is fine, that this may be an Unraid docker engine related problem, and that I should post the issue there. I did this two days ago but have not yet received any feedback. Maybe my post details aren't clear? They are here if you wouldn't mind taking a look: Thanks in advance!
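      In case it helps anyone else, the clean re-install boils down to roughly the following from the console. I actually did it through the Unraid web UI and Community Apps, so treat this as a sketch; the container name and appdata path are from my setup, adjust for yours:
           # stop and remove the existing container and its image
           docker stop binhex-krusader
           docker rm binhex-krusader
           docker rmi binhex/arch-krusader:latest
           # wipe the old appdata folder (this throws away the container's saved settings)
           rm -rf /mnt/cache/appdata/binhex-krusader
           # then re-add the binhex-krusader template from Community Apps so the container
           # is recreated with a fresh image and fresh appdata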
  13. I posted in the container support thread and was advised to post it here: I got a notification that docker auto updated overnight during the regular schedule for this action. Then I received a Fix Common Problems notice this morning that there was a docker update available; I checked the docker tab and sure enough it said there was. Manually updated, finished successfully. It still shows an update available.
      Pulling image: binhex/arch-krusader:latest
      IMAGE ID [1269367198]: Pulling from binhex/arch-krusader.
      IMAGE ID [ba79b0a00404]: Already exists.
      IMAGE ID [014829145dc4]: Already exists.
      IMAGE ID [8981650a5ede]: Already exists.
      IMAGE ID [3e6f6e9c14bc]: Already exists.
      IMAGE ID [21c1cc086e6b]: Already exists.
      IMAGE ID [d61b27411559]: Already exists.
      IMAGE ID [6bed959144b4]: Already exists.
      IMAGE ID [867955bafe2a]: Already exists.
      IMAGE ID [af760b31737e]: Already exists.
      IMAGE ID [b91d01ffc1e9]: Already exists.
      IMAGE ID [b8db4f2655a2]: Pulling fs layer.Downloading 100% of 3 KB.Verifying Checksum.Download complete.Extracting.Pull complete.
      IMAGE ID [5f904db18c50]: Pulling fs layer.Downloading 100% of 1 KB.Verifying Checksum.Download complete.Extracting.Pull complete.
      IMAGE ID [c6b8e095478d]: Pulling fs layer.Downloading 100% of 684 MB.Verifying Checksum.Download complete.Extracting.
      TOTAL DATA PULLED: 684 MB
      Stopping container: binhex-krusader
      Error:
      Removing container: binhex-krusader
      Successfully removed container: binhex-krusader
      Command execution
      docker run -d --name='binhex-krusader' --net='bridge' --privileged=true -e TZ="America/Chicago" -e HOST_OS="Unraid" -e HOST_HOSTNAME="Tower" -e HOST_CONTAINERNAME="binhex-krusader" -e 'TEMP_FOLDER'='/config/home/.config/krusader/tmp' -e 'WEBPAGE_TITLE'='Krusader' -e 'VNC_PASSWORD'='' -e 'UMASK'='000' -e 'PUID'='99' -e 'PGID'='100' -l net.unraid.docker.managed=dockerman -l net.unraid.docker.webui='http://[IP]:[PORT:6080]/vnc.html?resize=remote&host=[IP]&port=[PORT:6080]&autoconnect=1' -l net.unraid.docker.icon='https://raw.githubusercontent.com/binhex/docker-templates/master/binhex/images/krusader-icon.png' -p '6080:6080/tcp' -v '/mnt/user':'/media':'rw' -v '/mnt/cache/appdata/binhex-krusader':'/config':'rw' 'binhex/arch-krusader'
      bf494d6710cd0ba84e94b716483ae787bbb08c3aead77f316918569d68ccf692
      The command finished successfully!
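      For anyone else chasing the phantom "update available" badge, one thing worth trying is confirming from the console whether the image on the server really is current. A minimal sketch, assuming the standard Docker CLI on the Unraid host (the image name here is just my container's):
           # re-pull from the console; the status line says whether anything new came down
           docker pull binhex/arch-krusader:latest
           # "Status: Image is up to date for ..." means the registry has nothing newer,
           # whatever the badge on the Docker tab claims

           # digest of the copy already on the server, if you want to compare by hand
           docker image inspect binhex/arch-krusader:latest --format '{{.RepoDigests}}'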
  14. Interesting, I have been away a few days and was looking to address this today based on binhex's last response to me, and then saw your post. I tried manually again to see if I had the same issue you describe, and it appears I do. @binhex my repository is correct as you directed me to check; should I still post the issue at https://forums.unraid.net/forum/58-docker-engine/ ?
      Pulling image: binhex/arch-krusader:latest
      IMAGE ID [1269367198]: Pulling from binhex/arch-krusader.
      IMAGE ID [ba79b0a00404]: Already exists.
      IMAGE ID [014829145dc4]: Already exists.
      IMAGE ID [8981650a5ede]: Already exists.
      IMAGE ID [3e6f6e9c14bc]: Already exists.
      IMAGE ID [21c1cc086e6b]: Already exists.
      IMAGE ID [d61b27411559]: Already exists.
      IMAGE ID [6bed959144b4]: Already exists.
      IMAGE ID [867955bafe2a]: Already exists.
      IMAGE ID [af760b31737e]: Already exists.
      IMAGE ID [b91d01ffc1e9]: Already exists.
      IMAGE ID [b8db4f2655a2]: Pulling fs layer.Downloading 100% of 3 KB.Verifying Checksum.Download complete.Extracting.Pull complete.
      IMAGE ID [5f904db18c50]: Pulling fs layer.Downloading 100% of 1 KB.Verifying Checksum.Download complete.Extracting.Pull complete.
      IMAGE ID [c6b8e095478d]: Pulling fs layer.Downloading 100% of 684 MB.Verifying Checksum.Download complete.Extracting.
      TOTAL DATA PULLED: 684 MB
      Stopping container: binhex-krusader
      Error:
      Removing container: binhex-krusader
      Successfully removed container: binhex-krusader
      Command execution
      docker run -d --name='binhex-krusader' --net='bridge' --privileged=true -e TZ="America/Chicago" -e HOST_OS="Unraid" -e HOST_HOSTNAME="Tower" -e HOST_CONTAINERNAME="binhex-krusader" -e 'TEMP_FOLDER'='/config/home/.config/krusader/tmp' -e 'WEBPAGE_TITLE'='Krusader' -e 'VNC_PASSWORD'='' -e 'UMASK'='000' -e 'PUID'='99' -e 'PGID'='100' -l net.unraid.docker.managed=dockerman -l net.unraid.docker.webui='http://[IP]:[PORT:6080]/vnc.html?resize=remote&host=[IP]&port=[PORT:6080]&autoconnect=1' -l net.unraid.docker.icon='https://raw.githubusercontent.com/binhex/docker-templates/master/binhex/images/krusader-icon.png' -p '6080:6080/tcp' -v '/mnt/user':'/media':'rw' -v '/mnt/cache/appdata/binhex-krusader':'/config':'rw' 'binhex/arch-krusader'
      bf494d6710cd0ba84e94b716483ae787bbb08c3aead77f316918569d68ccf692
      The command finished successfully!
  15. Same issue for the last two days. Is this an issue on my end or the update side? I am not sure how to troubleshoot this.
  16. Got a notification that docker auto updated overnight during the regular schedule for this action. Then received a Fix Common Problems notice this morning that there was a docker update available; checked the docker tab and sure enough it said there was. Manually updated, finished successfully. It still shows an update available. Any idea why?
  17. New build on my radar for this year. I've saved it in my wishlist and hope it's still available when I am ready.
  18. Hmmm, looks like just what I am looking for also. If you pull the trigger, please make sure to leave your thoughts once you have it in hand.
  19. Have you looked at the 'mover tuning' plugin? I currently use it for this very purpose: if a parity check is running, the mover doesn't start.
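      For what it's worth, I believe the check amounts to something like the sketch below. This is only a rough illustration of the idea, not the plugin's actual code, and it assumes mdcmd status reports mdResyncPos=0 whenever no parity check or rebuild is active (which matches what I see on my server):
           #!/bin/bash
           # skip the mover while a parity check or rebuild is in progress (assumption:
           # mdResyncPos is non-zero only while a parity operation is running)
           pos=$(/usr/local/sbin/mdcmd status | grep '^mdResyncPos=' | cut -d= -f2)
           if [ "${pos:-0}" -eq 0 ]; then
               /usr/local/sbin/mover        # no parity activity, safe to run the mover
           else
               echo "Parity operation in progress, not starting mover"
           fi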
  20. Hmmmm, damn, it is. I have now added ssh,bash to 'Processes to kill before Array is Stopped'. My error and not a bug, so this is a good thing. Thanks ljm42!
  21. I am one who has had an unclean shutdown after upgrading. I followed the instructions for the upgrade to 6.12.3; I did have to use the command line instruction. The upgrade went fine and the server was up for 6 days. This morning I needed to reboot the server, so I spun up all disks, individually stopped all my dockers and my VM, and hit the reboot button. It is about 4 hours into the unclean shutdown parity check with no errors. The log repeated this while shutting down:
      Jul 22 08:34:04 Tower root: umount: /mnt/cache: target is busy.
      Jul 22 08:34:04 Tower emhttpd: shcmd (5468228): exit status: 32
      Jul 22 08:34:04 Tower emhttpd: Retry unmounting disk share(s)...
      Jul 22 08:34:09 Tower emhttpd: Unmounting disks...
      Jul 22 08:34:09 Tower emhttpd: shcmd (5468229): umount /mnt/cache
      Jul 22 08:34:09 Tower root: umount: /mnt/cache: target is busy.
      Jul 22 08:34:09 Tower emhttpd: shcmd (5468229): exit status: 32
      Jul 22 08:34:09 Tower emhttpd: Retry unmounting disk share(s)...
      Attached are the diagnostics captured after the reboot; not sure what/why it happened. tower-diagnostics-20230722-0834.zip
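      In case anyone else hits the same "target is busy" loop, the thing to do before rebooting is to see what still has the cache pool open. A quick sketch from the console, assuming lsof and fuser are available on the host (they are on mine); adjust the mount point for your setup:
           # list processes that still hold files open on the cache mount
           lsof +f -- /mnt/cache
           # same idea, showing the owning processes and their access type
           fuser -vm /mnt/cache
      Whatever turns up there is what was blocking the umount and causing the retries in the log above.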
  22. I've had the syslog server set up for some time now, and it has been doing its job just fine. I recently made a change (approx. 24 hours ago) to the settings to enable log rotation, due to the current log file size of 471,511,591 bytes, and to see how it works. The rotation to a new file has not occurred yet. I thought maybe making a settings change would trigger the check, but that didn't happen. What is the rotation file size check schedule: daily, weekly, or monthly?
  23. OK then, with that said it's working again. Thank you.
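      If it helps explain the delay: with classic logrotate, a size threshold is only evaluated when the rotation job itself runs (typically once a day from cron), so crossing the limit doesn't rotate the file on the spot. Whether Unraid's syslog server uses logrotate underneath is an assumption on my part, but the behaviour would look like this illustrative snippet (path and values made up):
           # /etc/logrotate.d/remote-syslog -- illustrative only
           /mnt/user/syslog/*.log {
               size 100M      # rotate once the file exceeds 100 MB...
               rotate 3       # ...keeping three old copies
               compress
               missingok
               notifempty
           }
           # note: "size" is only checked when the scheduled (usually daily) logrotate
           # run happens, so rotation is not instantaneous at the threshold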