Posts posted by ljm42

  1. 4 minutes ago, craigr said:

    I logged out of and then uninstalled Unraid Connect.  I cannot sign back in.  This can't be affecting the log, can it?  Should I reinstall the plugin?

     

    Sorry, I'm confused. If you uninstall the Unraid Connect plugin then the sign in box will be removed from the upper right corner of the page and you won't be able to sign in to Unraid Connect from within the webgui. Uninstalling the Unraid Connect plugin will have no effect on logging in to the webgui as root.

     

    If that doesn't answer your question, please share a screenshot to help me understand the issue.

  2. 1 hour ago, craigr said:

    I didn't know healthcheck was enabled by default in unRAID Docker containers. 

     

    It is actually up to the container; it depends on whether the container has code to run healthchecks.

     

    I don't know whether the other containers you mentioned have healthcheck code; I don't think I have seen anyone suggest disabling it on them, but it shouldn't hurt to disable it.

     

    I don't yet know if everyone should disable healthchecks on Plex, but if someone is having issues with containers not being able to restart, I will suggest it.

    For anyone else reading this, if you have the NVIDIA plugin installed be sure it is up to date. Plex healthchecks with older versions of the NVIDIA plugin will definitely cause issues.
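
    If you're curious whether a given container defines a healthcheck at all, here is one way to check from a web terminal (a sketch; substitute your own container name for Plex):

    docker inspect --format '{{json .Config.Healthcheck}}' Plex

    If it prints null, the image has no healthcheck; if it prints something like {"Test":["NONE"]}, the healthcheck has been explicitly disabled.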

     

    52 minutes ago, craigr said:

    I am seeing this in stdout.log.

     

    The unraid-api logs activity to help with troubleshooting, and the logging itself shouldn't be a problem. (Just to repeat for anyone else who stumbles on this: the unraid-api currently logs up to 10MB before rolling the log, and in a future release that limit will be reduced. On its own it is not enough to fill the 128MB log partition on Unraid.)
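
    If you want to verify those numbers on your own system, you can check from a web terminal (the exact log path here is an assumption on my part; adjust if yours differs):

    df -h /var/log                # usage of the log partition
    du -sh /var/log/unraid-api    # space used by the unraid-api logs (path assumed)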

     

    Based on that log snippet I'd suggest signing out of Unraid Connect and then signing back in. If you have further questions about the actual contents of the log, please go to Settings -> Management Access -> Unraid Connect and see the section about Unraid-api logs. They could have confidential information so I'd suggest not posting them publicly.

    • Like 1
  3. 1 hour ago, namida said:

    You can access the webui just after booting up, but you can’t access it after a few hours after booting up.

     

    We are working on issues related to IPv6. For now I'd recommend going to Settings -> Network and configuring the system for IPv4 only.

     

    Also, it looks like you might be having issues when stopping the array. See https://forums.unraid.net/topic/141479-6122-array-stop-stuck-on-retry-unmounting-disk-shares/#comment-1281063

     

  4. 11 hours ago, craigr said:

    Is this the My Server plugin or whatever it's called that now seems to be integrated into unRAID?  I've had loads of issues with it in the past.  

     

    The unraid-api/stdout.log grows to 10MB and is then rolled. In the next Connect plugin release we're going to reduce the size it can grow to, but even so there is no way that it will fill the 128MB log partition by itself. But why don't you go ahead and uninstall the Unraid Connect plugin for now, just to rule it out.

     

    TBH if there are any other plugins you can live without, it may help to simplify the system by removing as many as you can.

     

    You run Plex, right? Go into your Plex Docker container settings, switch to advanced view, and add this to the Extra Parameters:

    --no-healthcheck
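
    After applying that change and recreating the container, one way to confirm the healthcheck is gone (assuming the container is named Plex):

    docker ps --filter name=Plex --format '{{.Names}}: {{.Status}}'

    If the healthcheck is disabled, the status will read just "Up ..." with no "(healthy)" suffix.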

     

    • Thanks 1
  5. Think of Unraid as an appliance: you want to modify the OS as little as possible. You should run apps like this in a Docker container; PiHole is a pretty popular one. We generally recommend that Unraid itself use public DNS servers like 8.8.8.8, to reduce issues with accidental blocking and to keep name resolution working when the container isn't running.
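
    As a quick sanity check, you can see which DNS servers Unraid itself is currently using from a web terminal:

    cat /etc/resolv.conf

    If that points at the PiHole container's IP, Unraid's own name resolution will break whenever that container is stopped.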

  6. 50 minutes ago, comet424 said:

    and none of the dockers work... is it because i had upgraded to 6.12.2

     

    Yes, but there is a quick fix. This is from the 6.12.0 release notes:


    https://docs.unraid.net/unraid-os/release-notes/6.12.0

    If you revert back from 6.12 to 6.11.5 or earlier, you have to force update all your Docker containers and start them manually after downgrading. This is necessary because of the underlying change to cgroup v2 starting with 6.12.0-rc1.
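
    For background, here is a quick way to check which cgroup version the running kernel is using (it prints cgroup2fs on cgroup v2, tmpfs on v1):

    stat -fc %T /sys/fs/cgroup/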

     

     

  7. 10 hours ago, eggman9713 said:

    So now it seems to be behaving itself on my server.

    That is what makes it difficult to track this down : ) but we're working on a solution.

     

    15 minutes ago, wicked_qa said:

    I've been struggling with this issue, and this solved my problem. Thank you!

    Thanks for confirming this resolved the issue with stopping the array; we're working on a solution.

     

    • Like 1
  8. We are working on a fix for nginx with IPv6, but for now you should go to Settings -> Network Settings and change eth0 to "IPv4 only" if that works in your environment.

     

    Also, please see my comment here about browsers potentially causing issues with backgrounded tabs:

    https://forums.unraid.net/bug-reports/stable-releases/612-unraid-webui-stop-responding-then-nginx-crash-r2451/page/3/?tab=comments#comment-25245

     

  9. wanip4.unraid.net doesn't respond to ping, so a failed ping there doesn't indicate a problem.

     

    Your screenshot is cut off; is there a "restart API" option? If so, go ahead and click that. If you don't have that option, open a web terminal and type:

    unraid-api start

    wait two minutes, then:

    unraid-api report

    and paste the results back here.

  10. UPDATE! Please see my comment further down in this thread: https://forums.unraid.net/topic/141479-6122-array-stop-stuck-on-retry-unmounting-disk-shares/#comment-1283203

     

     

    -------------------------------

    Original message:

    -------------------------------

     

    I hit this today when stopping my array.

     

    Here is what worked for me; I'd appreciate it if someone hitting this could confirm it works for them too.

     

    To get into this state, stop the array. If you are having this issue you will see "retry unmounting shares" in the lower left corner.  Note: There are other reasons this message could happen (like if you left an SSH terminal open while cd'd into the array). This discussion assumes none of the usual suspects apply.

     

    In a web terminal or SSH session, type 'losetup'. In my case it showed:

    root@Tower:/etc/rc.d# losetup
    NAME       SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE                           DIO LOG-SEC
    /dev/loop1         0      0         1  1 /boot/bzfirmware                      0     512
    /dev/loop2         0      0         1  0 /mnt/cache/system/docker/docker.img   0     512
    /dev/loop0         0      0         1  1 /boot/bzmodules                       0     512

     

    The problem is that docker.img is still mounted. Note that in my case it is on /dev/loop2.

     

    Then run `/etc/rc.d/rc.docker status` to confirm that docker has stopped:

    # /etc/rc.d/rc.docker status
    status of dockerd: stopped

    (It should be stopped, since you were in the process of stopping the array. But if Docker is still running, you can type `/etc/rc.d/rc.docker stop` and wait a bit, then run status again until it has stopped.)

     

    Then to fix the problem, type:

    umount /dev/loop2

    (use whatever /dev/loopX docker.img is on, as noted above) 
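
    If the losetup output is crowded, you can also ask losetup directly for the loop device associated with a given backing file (using the docker.img path from the output above):

    losetup -j /mnt/cache/system/docker/docker.img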

     

    Once that is unmounted, the array will automatically finish stopping.

     

    We are looking into a fix for this, but it would help if we could reliably reproduce the problem (it has only happened to me once). If anyone is able to identify what it takes to make this happen I'd appreciate it.

    • Like 4
    • Thanks 9
    • Upvote 5
  11. Do you have any Docker containers running on port 80 or port 443? 

     

    nginx is refusing to start because something is already on port 80:

    Jul  5 18:15:20 PLEXBOXX nginx: 2023/07/05 18:15:20 [emerg] 1797#1797: bind() to [::1]:80 failed (99: Cannot assign requested address)

    The usual cause is a port conflict with a Docker container. However, a conflict with port 80 should also affect 6.9.2, so I'm somewhat confused.
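
    Before changing anything, it may help to see what is actually listening on ports 80 and 443 (a sketch, assuming the ss tool from iproute2 is available in your web terminal, which it normally is):

    ss -tlnp | grep -E ':(80|443) '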

     

    I'd suggest carefully editing config/docker.cfg on the flash drive and change this:

    DOCKER_ENABLED="yes"

    to this:

    DOCKER_ENABLED="no"


    Then reboot. The webgui should load after this. Then you can try starting Docker, and one of your containers probably won't be able to start.

     

    You can either change the port that container uses, or go to Settings -> Management Access and change the port that Unraid uses for HTTP and HTTPS. We recommend you pick high numbers between 1000 and 64000.

  12. What version of Unraid are you currently running?

     

    We improved the kill switch in 6.11.2 (after your previous thread), but it does require you to make a dummy change to the tunnel and apply it.

     

    Please follow the first post of this guide closely to set up the tunnel and container:

      https://forums.unraid.net/topic/84316-wireguard-vpn-tunneled-access-to-a-commercial-vpn-provider/

     

     

    If you are able to bypass the kill switch using a tunnel created/modified in 6.11.5 or 6.12.2, please provide details on how to reproduce the issue.
