Posts posted by wirenut

  1. 1 hour ago, ljm42 said:

     

    This is suspect.  Try manually stopping the container (reload the page and be sure it is really stopped) before doing the update and see if it makes a difference.

     

    Otherwise, I agree with the suggestion to recreate the docker image

     

    2 hours ago, Kilrah said:

    Thank YOU! This was the solution.

Followed the steps in the link, all dockers re-installed properly. This has been frustrating me for a while now.
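In case it helps anyone else landing here, "manually stopping" from a terminal looks something like this (a rough sketch; the container name is the one from my setup):

    # stop the container and confirm it actually exited before updating
    docker stop binhex-krusader
    docker ps -a --filter name=binhex-krusader --format '{{.Names}}: {{.Status}}'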

     

     

  2. 21 hours ago, moli said:

Hey there,
as I logged in today via VPN I was kinda forced to go to the FCP page, as I had a red warning popup.
It stated that on 4 days in a row someone might have tried to hack my Unraid server.

    "Possible Hack Attempt on Mar 23
On Mar 23 there were 192 invalid login attempts. This could either be yourself attempting to login to your server (SSH / Telnet) with the wrong user or password, or you could actively be the victim of hack attacks. A common cause of this would be placing your server within your router's DMZ, or improperly forwarding ports.

This is a major issue and needs to be addressed IMMEDIATELY

NOTE: Because this check is done against the logged entries in the syslog, the only way to clear it is to either increase the number of allowed invalid logins per day (if determined that it is not a hack attempt) or to reset your server. It is not recommended under any circumstance to ignore this error. More Information"

Pretty much the same was recorded on the following 3 days.

    I added the diagnostics zip. I am glad for any help.

     


     

molinode-diagnostics-20240403-1902.zip

     

Are you running Avast on a machine that's on the same network as your Unraid server?
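If it isn't Avast, a rough way to see what those entries actually were (a sketch; the exact message text depends on which service logged the failures, so treat the patterns as a starting point):

    # count and sample the failed-login lines FCP is reacting to
    grep -icE 'failed password|authentication failure|invalid user' /var/log/syslog
    grep -iE 'failed password|authentication failure|invalid user' /var/log/syslog | tail -n 20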

  3. On 4/1/2024 at 8:35 AM, primeval_god said:

    Is this a container run via unRAID's built in Dockerman interface or by some other method? 

     

    On 4/1/2024 at 10:18 AM, wirenut said:

Built in, installed via Community Apps.

binhex - Krusader and now binhex - preclear. I've also had the same issue with the linuxserver - tautulli docker, but that one will correctly finish after trying 2 or 3 times when it happens.

If there are any logs or other info I can provide to troubleshoot this, let me know.

  4. 21 minutes ago, PSYCHOPATHiO said:

Nothing huge, but on both servers, if I try to access "System Drivers" under Tools it won't open. I reverted to 6.12.8 and it opens without an issue. Now back again on 6.12.9 on both of my servers, and it gets stuck on loading.

     

I clicked on it by mistake while trying to open System Devices, and I do not care if it works or not; just a note to fix for the next update if the same issue exists for other users.


Just read this and thought, huh, let me try. So I did.

    I have the same result.
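For anyone wanting to dig further, the webGUI's error log would be the first place I'd look when a page hangs like this (a sketch; I'm assuming the stock nginx log location on Unraid):

    # watch for PHP/nginx errors while reopening the System Drivers page
    tail -f /var/log/nginx/error.log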

  5.   On 3/18/2024 at 8:41 AM, binhex said:

Sorry dude, but there is nothing I can do about this as it's an Unraid-related issue; the image is accessible and, for me, I cannot reproduce the issue you are seeing.

You could try deleting the container completely, then re-creating it; this may fix it *shrug*.

     

     

An FYI: with the update last night, the situation has returned. Interestingly, it now also affects your preclear docker for me.

Over 200 views on my Docker Engine post, but unfortunately no help. For what it's worth, I have updated to Unraid 6.12.9.

If you have any other ideas I could try, suggestions are appreciated.

     

  6. On 3/21/2024 at 4:42 PM, Kilrah said:

    Believe the issue's been found and fixed for a future version

A future version of which, Docker or Unraid? Because I have updated to Unraid 6.12.9 and the issue persists. As of today, I now have the issue with two containers.

     

    On 3/21/2024 at 4:42 PM, Kilrah said:

    happens if you leave the Docker page open for a long time and don't reload it before applying updates.

This is definitely not my issue, as I get a notification that auto update has done its thing overnight as scheduled, then get a notification in the morning that an update is available.

I log into the server, go to the Docker tab, run the update, and get the same issue. So the Docker page isn't open for any period of time other than to run the update.

  7. 55 minutes ago, binhex said:

Sorry dude, but there is nothing I can do about this as it's an Unraid-related issue; the image is accessible and, for me, I cannot reproduce the issue you are seeing.

You could try deleting the container completely, then re-creating it; this may fix it *shrug*.

Well... thanks again. I removed the container, deleted the appdata folder, and installed fresh. The whole process took about 10 minutes to complete, but it installed and is no longer showing an update ready. I'll see how it goes as time passes. Attached is the info from the install command window, in case it's of some use to you.

    And again, Thank You!

     

     

    binhex-krusader install 2024.03.18.txt
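For anyone following along, the full reset amounted to something like this from a terminal (a sketch of what I did; the appdata path is the one from my template, and deleting it wipes all the container's settings):

    docker stop binhex-krusader
    docker rm binhex-krusader
    docker rmi binhex/arch-krusader:latest
    # WARNING: removes all saved Krusader config
    rm -rf /mnt/cache/appdata/binhex-krusader

Then reinstall fresh from Community Apps.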

  8. On 3/13/2024 at 4:56 PM, wirenut said:

Ok, done. Thank you, binhex.

Thanks for the suggestion, but it's been there a few days with over 100 views showing and not one reply.

I see you mentioned a couple posts up that a new build is coming. I guess I will wait for that and see if it changes anything for me.

  9. 18 hours ago, Squid said:

    What happens if you manually hit apply update on one of them?

I also have a container that is persistent with this issue. I have posted in the docker support thread and was assured that the container is fine, that this may be an Unraid Docker engine related problem, and that I should post the issue there. I did this two days ago but have not yet received any feedback. Maybe my post details aren't clear? They are here if you wouldn't mind taking a look:

     

    Thanks in advance!

  10. 1 hour ago, binhex said:

I am confident the image is fine, as other people have pulled it down with no issue (including myself), so the issue you are seeing is either specific to your network in some way or it's an Unraid bug, so I think it's worth a post there to see what people say.

Ok, done. Thank you, binhex.

  11. I posted in the container support thread and was advised to post it here:

     

Got a notification that Docker auto-updated overnight during the regular schedule for this action.

Then received a Fix Common Problems notice this morning that there was a docker update available; checked the Docker tab, and sure enough it said there was.

Manually updated, and it finished successfully. It still shows an update available.

     

     

    Pulling image: binhex/arch-krusader:latest

    IMAGE ID [1269367198]: Pulling from binhex/arch-krusader.
    IMAGE ID [ba79b0a00404]: Already exists.
    IMAGE ID [014829145dc4]: Already exists.
    IMAGE ID [8981650a5ede]: Already exists.
    IMAGE ID [3e6f6e9c14bc]: Already exists.
    IMAGE ID [21c1cc086e6b]: Already exists.
    IMAGE ID [d61b27411559]: Already exists.
    IMAGE ID [6bed959144b4]: Already exists.
    IMAGE ID [867955bafe2a]: Already exists.
    IMAGE ID [af760b31737e]: Already exists.
    IMAGE ID [b91d01ffc1e9]: Already exists.
    IMAGE ID [b8db4f2655a2]: Pulling fs layer.Downloading 100% of 3 KB.Verifying Checksum.Download complete.Extracting.Pull complete.
    IMAGE ID [5f904db18c50]: Pulling fs layer.Downloading 100% of 1 KB.Verifying Checksum.Download complete.Extracting.Pull complete.
    IMAGE ID [c6b8e095478d]: Pulling fs layer.Downloading 100% of 684 MB.Verifying Checksum.Download complete.Extracting.

    TOTAL DATA PULLED: 684 MB

     

    Stopping container: binhex-krusader

    Error:

     

    Removing container: binhex-krusader

    Successfully removed container: binhex-krusader

     

Command execution

docker run
      -d
      --name='binhex-krusader'
      --net='bridge'
      --privileged=true
      -e TZ="America/Chicago"
      -e HOST_OS="Unraid"
      -e HOST_HOSTNAME="Tower"
      -e HOST_CONTAINERNAME="binhex-krusader"
      -e 'TEMP_FOLDER'='/config/home/.config/krusader/tmp'
      -e 'WEBPAGE_TITLE'='Krusader'
      -e 'VNC_PASSWORD'=''
      -e 'UMASK'='000'
      -e 'PUID'='99'
      -e 'PGID'='100'
      -l net.unraid.docker.managed=dockerman
      -l net.unraid.docker.webui='http://[IP]:[PORT:6080]/vnc.html?resize=remote&host=[IP]&port=[PORT:6080]&autoconnect=1'
      -l net.unraid.docker.icon='https://raw.githubusercontent.com/binhex/docker-templates/master/binhex/images/krusader-icon.png'
      -p '6080:6080/tcp'
      -v '/mnt/user':'/media':'rw'
      -v '/mnt/cache/appdata/binhex-krusader':'/config':'rw' 'binhex/arch-krusader'

    bf494d6710cd0ba84e94b716483ae787bbb08c3aead77f316918569d68ccf692

    The command finished successfully!
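One way to sanity-check whether an update is genuinely pending after a pull like this (a sketch; compare the local digest against the latest one shown on Docker Hub for the tag):

    # show the digest of the locally stored image
    docker images --digests binhex/arch-krusader
    docker image inspect --format '{{index .RepoDigests 0}}' binhex/arch-krusader:latest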

     

     

     

     

  12. 2 hours ago, Shesakillatwo said:

I experienced this issue the other day as well. So I did nothing for a few days, tried the update again last night, and it worked. It appeared to me that when this issue occurs, one of the larger components/files does not actually reach the "Pull Complete" stage, after which I still have the indicator that an update is available. I have also seen this before... Could it be an Unraid timeout issue?

Interesting. I have been away a few days and was looking to address this today based on binhex's last response to me, and saw your post. I tried manually again to see if I had the same issue you describe, and it appears I do. @binhex, my repository is correct as you directed me to check; should I still post the issue at https://forums.unraid.net/forum/58-docker-engine/ ?

     

    Pulling image: binhex/arch-krusader:latest

    IMAGE ID [1269367198]: Pulling from binhex/arch-krusader.
    IMAGE ID [ba79b0a00404]: Already exists.
    IMAGE ID [014829145dc4]: Already exists.
    IMAGE ID [8981650a5ede]: Already exists.
    IMAGE ID [3e6f6e9c14bc]: Already exists.
    IMAGE ID [21c1cc086e6b]: Already exists.
    IMAGE ID [d61b27411559]: Already exists.
    IMAGE ID [6bed959144b4]: Already exists.
    IMAGE ID [867955bafe2a]: Already exists.
    IMAGE ID [af760b31737e]: Already exists.
    IMAGE ID [b91d01ffc1e9]: Already exists.
    IMAGE ID [b8db4f2655a2]: Pulling fs layer.Downloading 100% of 3 KB.Verifying Checksum.Download complete.Extracting.Pull complete.
    IMAGE ID [5f904db18c50]: Pulling fs layer.Downloading 100% of 1 KB.Verifying Checksum.Download complete.Extracting.Pull complete.
    IMAGE ID [c6b8e095478d]: Pulling fs layer.Downloading 100% of 684 MB.Verifying Checksum.Download complete.Extracting.

    TOTAL DATA PULLED: 684 MB

     

    Stopping container: binhex-krusader

    Error:

     

    Removing container: binhex-krusader

    Successfully removed container: binhex-krusader

     

Command execution

docker run
      -d
      --name='binhex-krusader'
      --net='bridge'
      --privileged=true
      -e TZ="America/Chicago"
      -e HOST_OS="Unraid"
      -e HOST_HOSTNAME="Tower"
      -e HOST_CONTAINERNAME="binhex-krusader"
      -e 'TEMP_FOLDER'='/config/home/.config/krusader/tmp'
      -e 'WEBPAGE_TITLE'='Krusader'
      -e 'VNC_PASSWORD'=''
      -e 'UMASK'='000'
      -e 'PUID'='99'
      -e 'PGID'='100'
      -l net.unraid.docker.managed=dockerman
      -l net.unraid.docker.webui='http://[IP]:[PORT:6080]/vnc.html?resize=remote&host=[IP]&port=[PORT:6080]&autoconnect=1'
      -l net.unraid.docker.icon='https://raw.githubusercontent.com/binhex/docker-templates/master/binhex/images/krusader-icon.png'
      -p '6080:6080/tcp'
      -v '/mnt/user':'/media':'rw'
      -v '/mnt/cache/appdata/binhex-krusader':'/config':'rw' 'binhex/arch-krusader'

    bf494d6710cd0ba84e94b716483ae787bbb08c3aead77f316918569d68ccf692

    The command finished successfully!
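To test the timeout theory, pulling from a terminal instead of the webGUI should rule the UI in or out (a sketch; if the 684 MB layer reaches "Pull complete" here but not in the webGUI, that points at the update page rather than the registry):

    docker pull binhex/arch-krusader:latest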

     

  13. On 3/7/2024 at 1:47 PM, wirenut said:

Got a notification that Docker auto-updated overnight during the regular schedule for this action.

Then received a Fix Common Problems notice this morning that there was a docker update available; checked the Docker tab, and sure enough it said there was.

Manually updated, and it finished successfully. It still shows an update available.

     


     

Any idea why?

Same issue for the last two days.

Is this an issue on my end or on the update side? I am not sure how to troubleshoot this.

Got a notification that Docker auto-updated overnight during the regular schedule for this action.

Then received a Fix Common Problems notice this morning that there was a docker update available; checked the Docker tab, and sure enough it said there was.

Manually updated, and it finished successfully. It still shows an update available.

     


     

Any idea why?
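One check I could have run at this point (a sketch): compare the image the container is actually running against the newest local image; if the two IDs differ, the container was never recreated on the new image even though the pull succeeded.

    # image ID the running container was created from
    docker inspect --format '{{.Image}}' binhex-krusader
    # image ID of the latest locally pulled image
    docker image inspect --format '{{.Id}}' binhex/arch-krusader:latest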

  15. 1 hour ago, ljm42 said:

This is not related to the recent bug fix. Most likely, you had an SSH or a web terminal open and cd'd to the cache drive, like this:

    root@Tower:/mnt/cache/appdata# 

Hmmmm, damn.

     

    1 hour ago, ljm42 said:

    If desired, you can install the Tips and Tweaks plugin, by default it will automatically kill any ssh or bash process when you stop the array.

It is, and I have now added "Processes to kill before Array is Stopped: ssh,bash".

     

My error and not a bug; this is a good thing.

    Thanks ljm42!
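For anyone else caught by this, a quick way to see what is holding the cache drive busy before stopping the array (a sketch; fuser comes from psmisc, which I believe is present on Unraid):

    # list processes with open files under /mnt/cache
    fuser -vm /mnt/cache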

     

     

  16. I am one who has had an unclean shutdown after upgrading.

I followed the instructions for the upgrade to 6.12.3. I did have to use the command line instruction; the upgrade went fine, and the server was up for 6 days. This morning I needed to reboot the server.

I spun up all disks.

I individually stopped all my dockers and my VM.

I hit the reboot button.

It is about 4 hours into the unclean-shutdown parity check with no errors. The log repeated this while shutting down:

     

    Jul 22 08:34:04 Tower root: umount: /mnt/cache: target is busy.
    Jul 22 08:34:04 Tower emhttpd: shcmd (5468228): exit status: 32
    Jul 22 08:34:04 Tower emhttpd: Retry unmounting disk share(s)...
    Jul 22 08:34:09 Tower emhttpd: Unmounting disks...
    Jul 22 08:34:09 Tower emhttpd: shcmd (5468229): umount /mnt/cache
    Jul 22 08:34:09 Tower root: umount: /mnt/cache: target is busy.
    Jul 22 08:34:09 Tower emhttpd: shcmd (5468229): exit status: 32
    Jul 22 08:34:09 Tower emhttpd: Retry unmounting disk share(s)...
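In hindsight, running something like this before rebooting would have shown what was keeping /mnt/cache busy (a sketch, assuming lsof is available on the system):

    # list open files on the cache filesystem
    lsof /mnt/cache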

     

Attached are diagnostics captured after the reboot.

Not sure what happened or why.

    tower-diagnostics-20230722-0834.zip

I've had the syslog server set up for some time now, and it has been doing its job just fine.

I have recently made a change (approx. 24 hours ago) to the settings to enable log rotation, both because of the current log file size of 471,511,591 bytes and to see how it works. The rotation to a new file has not occurred yet. I thought maybe making a settings change would trigger the check, but that didn't happen. What is the rotation file-size check schedule: daily, weekly, or monthly?
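In case the rotation is driven by standard logrotate under the hood (an assumption on my part), these would show what a rotation pass would do and force one immediately:

    # dry run: show what logrotate would rotate, without touching anything
    logrotate -d /etc/logrotate.conf
    # force rotation now even if size/time conditions are not met
    logrotate -f /etc/logrotate.conf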