
Posts posted by Wolfman1138

  1. Hey Ken-ji,

     

    I have reconfigured my server with a standalone XFS drive for Dropbox to use, so I reinstalled the Docker container.  But I have run into some issues, and I am hoping you'd be able to help.

     

    I installed Dropbox and it did not appear to sync.  I went into the Console and ran "dropbox.py status", which said that Dropbox wasn't running.  So I tried "dropbox.py start", which also said the daemon wasn't running.  Then I tried "dropbox.py update" followed by "dropbox.py start -i", and it seemed to come up.
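
    For reference, the exact sequence I ran in the container console was:

       # dropbox.py status
       # dropbox.py start
       # dropbox.py update
       # dropbox.py start -i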

    I left the server, and when I came back, the Docker drive was full.  Apparently, whatever I did put the Dropbox sync folder inside docker.img.  Yikes.  I uninstalled it, wiped all the data, and reinstalled the container.  And here I am.
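
    I assume I could have confirmed where the container was actually writing by inspecting its mounts, with something like this (assuming the container is named Dropbox):

       # docker inspect -f '{{ json .Mounts }}' Dropbox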

     

    My new log file looks like this, and I am back to "Dropbox isn't running":


    dropbox: locating interpreter
    dropbox: logging to /tmp/dropbox-antifreeze-DGRnO6
    dropbox: initializing
    dropbox: initializing python 3.7.5
    dropbox: setting program path '/dropbox/.dropbox-dist/dropbox-lnx.x86_64-96.4.172/dropbox'
    dropbox: setting python path '/dropbox/.dropbox-dist/dropbox-lnx.x86_64-96.4.172:/dropbox/.dropbox-dist/dropbox-lnx.x86_64-96.4.172/python-packages.zip'
    dropbox: python initialized
    dropbox: running dropbox
    dropbox: setting args
    dropbox: applying overrides
    dropbox: running main script
    dropbox: load fq extension '/dropbox/.dropbox-dist/dropbox-lnx.x86_64-96.4.172/cryptography.hazmat.bindings._constant_time.cpython-37m-x86_64-linux-gnu.so'
    dropbox: load fq extension '/dropbox/.dropbox-dist/dropbox-lnx.x86_64-96.4.172/cryptography.hazmat.bindings._openssl.cpython-37m-x86_64-linux-gnu.so'
    dropbox: load fq extension '/dropbox/.dropbox-dist/dropbox-lnx.x86_64-96.4.172/cryptography.hazmat.bindings._padding.cpython-37m-x86_64-linux-gnu.so'
    dropbox: load fq extension '/dropbox/.dropbox-dist/dropbox-lnx.x86_64-96.4.172/psutil._psutil_linux.cpython-37m-x86_64-linux-gnu.so'
    dropbox: load fq extension '/dropbox/.dropbox-dist/dropbox-lnx.x86_64-96.4.172/psutil._psutil_posix.cpython-37m-x86_64-linux-gnu.so'
    dropbox: load fq extension '/dropbox/.dropbox-dist/dropbox-lnx.x86_64-96.4.172/apex._apex.cpython-37m-x86_64-linux-gnu.so'
    dropbox: load fq extension '/dropbox/.dropbox-dist/dropbox-lnx.x86_64-96.4.172/tornado.speedups.cpython-37m-x86_64-linux-gnu.so'
    dropbox: load fq extension '/dropbox/.dropbox-dist/dropbox-lnx.x86_64-96.4.172/PyQt5.QtWidgets.cpython-37m-x86_64-linux-gnu.so'
    dropbox: load fq extension '/dropbox/.dropbox-dist/dropbox-lnx.x86_64-96.4.172/PyQt5.QtCore.cpython-37m-x86_64-linux-gnu.so'
    dropbox: load fq extension '/dropbox/.dropbox-dist/dropbox-lnx.x86_64-96.4.172/PyQt5.QtGui.cpython-37m-x86_64-linux-gnu.so'

     

    Any ideas where to start?

     

    Thank you.

     

    - Wolfman

     

     

  2. Help.  I accidentally killed my script window while running the "clear an array drive" script.  What do I do?

    I know it was about 3TB into a 10TB drive at the time.

    Right now the 10TB drive reports back as 33.6GB on the Main screen.  From the reading I did, it may not report the right size.

     

    I searched the forum, and apparently no one else is as dumb as I am and killed the script window. :)

     

    Can I get back the progress window?  Can I check the processes to see if it is still running?
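
    I assume the clear script zeroes the drive with dd, so perhaps I can look for that process with something like (the brackets keep grep from matching itself):

       # ps aux | grep '[d]d if=/dev/zero'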

     

    I was removing the drive because I saw a few reallocated sectors (pre-fail).  I had already "unbalanced" all the data off the disk onto the rest of the array, so I am not worried about data loss right now.  I just want to finish removing this drive so I can get the array back to normal.

     

    Thanks for the help in advance.

     

     

  3. Cool!  I think I may have found a bug in Unraid.

     

    [Screenshot: the container port field grayed out in the Docker template editor]

     

    This is really weird.  The container port was grayed out and I could not change it.  I could only edit the host port.

     

    I was able to fix the container port with the following steps:

    1. Edit the Docker container and change Network to Host mode.

    2. Set the Host port back to match the container default of 8989.

    3. Restart the container by hitting Apply.  (The app didn't work in Host mode, but this changed the container port back to 8989.)

    4. Edit the container again and change Network back to Bridge.

    5. Restart the container.

    6. Edit the Host port again, and set it to the desired mapped port.

    7. Restart the container.
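
    To double-check the mapping after each restart, I believe something like this works (sonarr being my container's name):

       # docker port sonarr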

     

     

    I can recreate the mismatched container port by switching back and forth between Host and Bridge while the Host port is set to something that doesn't match the container port.

     

    Thanks for the help.

  4. Hi Folks,

     

    Sorry about the title, but I messed up some stuff and I can't seem to find anyone else reporting the same issue.

    I messed around with adding some Docker containers, and two of my existing containers' Web GUIs stopped responding.  They are not accessible via their bridge IP address anymore.

    I didn't make any changes to the two that failed.

    Example:  Long ago I remapped Sonarr to use port 1234.  The WebGui link now reverts to 8989 (the default), but if I type the expected address ending in :1234 manually, it still does not work.  I cannot get to the GUI at all in Bridge mode in this configuration.
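
    I assume a quick way to confirm the port is unreachable (with my server's IP substituted in) is:

       # curl -I http://<server-ip>:1234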

     

    [Screenshot: the Sonarr container's port mapping in the Docker tab]

    I can fire up a console on both failing dockers and they look ok to my untrained eye.

     

    I read the forums and did some experiments.  I can get everything to work if I switch to a fixed IP using br0, but reverting back to Bridge does not fix the issue.  Host mode did not fix it either, and one container can't run in Host mode at all.
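
    For reference, I believe the published port mappings can be listed with:

       # docker ps --format '{{.Names}}\t{{.Ports}}'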

     

    I also tried rebuilding the docker engine network using these commands:

       # rm /var/lib/docker/network/files/local-kv.db
       # /etc/rc.d/rc.docker restart

     

    This did not fix the issue.

     

    I included a txt capture of the output from these network status commands:

        #  ifconfig
        #  docker network ls
        #  docker network inspect bridge
        #  iptables -vnL

    Network_Config_issue.txt

     

    Any help would be appreciated.

  5. Hi Ken-Ji,

     

    I am a new user and I unfortunately (or fortunately if you are looking for debug cases) have seen both the "tornado access" issue that Zangief sees and the loss of Dropbox link issue.

     

    How my install and issues transpired:

    I installed the Dropbox container last week and successfully downloaded the contents of my Dropbox after linking the container (I believe it was March 8th).

    I restarted Unraid several times for various reasons and noticed that Dropbox syncing stopped.

    I re-linked Dropbox again this morning (it used the same computer ID, according to the Dropbox logs).  This seems to have fixed the file syncing issue.

    Now I see the Tornado warnings.

     

    One point to note:  the Dropbox logs have an info button, and it says the app version I linked to last Wednesday is different from today's.  Could that be the source of the warnings?

    Wednesday was Ver 20.4.19 (no warnings)

    Today was Ver 21.4.25 (Tornado warnings)
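
    If it helps, I assume the version running inside the container can be checked against the VERSION file in the install path from my earlier log:

       # cat /dropbox/.dropbox-dist/VERSION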

     

    Thank you for writing and supporting such a great container!

     

    Examples of Warnings:

    WARNING:tornado.access:404 HEAD /blocks/80601104/Pt4BW29AkVG9OHQ0p1mEm9iUohoM4wj5ZrzhMKxWRn8 (172.17.0.1) 7.56ms

    WARNING:tornado.access:404 HEAD /blocks/80601104/Q4Po9VGoWD2phpWKnpiWcgVApVMVlLtXUPpzRMDUfGw (172.17.0.1) 5.03ms

    WARNING:tornado.access:404 HEAD /blocks/80601104/_lLWd7V7qli4yTmNul1-GPcHxv8UAsA9Pzq-DaLMUIY (172.17.0.1) 3.85ms

    WARNING:tornado.access:404 HEAD /blocks/80601104/1YrODn5oJ8vw_8_4DV-ivOg6_ynmEYhGls_mmJQU38U (172.17.0.1) 11.21ms

    WARNING:tornado.access:404 HEAD /blocks/80601104/mmhWSWkX7oaBNO7rWDvp6A3kQ8IG6a491xsLBGieAdo (172.17.0.1) 6.25ms

    WARNING:tornado.access:404 HEAD /blocks/80601104/kM5OdnihEZRTzDHPH6B8-Ejl4adctD7GfwNmFJkzy7k (172.17.0.1) 3.20ms

    WARNING:tornado.access:404 HEAD /blocks/80601104/gfTD7I8RHgNPMNoLd4kNOq6Va7BTvFwojlPRaUu4HS0 (172.17.0.1) 11222.82ms

    WARNING:tornado.access:404 HEAD /blocks/80601104/ZYgl17MFnN9n4pkCRpXeQnmtH4tMVWbtDSll39Ntzbc (172.17.0.1) 11236.61ms

    WARNING:tornado.access:404 HEAD /blocks/80601104/TIxIpqiDCVU5aMefJvd7kUAps0NI5cRbddy5loXs_xM (172.17.0.1) 10209.27ms

    WARNING:tornado.access:404 HEAD /blocks/80601104/vG7IWYDMvbQxTeklh3-IHJz4DAc-iQCzaP7l9Sk8QZE (172.17.0.1) 10215.41ms

    WARNING:tornado.access:404 HEAD /blocks/80601104/t3xKThU191S3SNQc5Brr6zh2IDdq-RDgcunsQd-ThSI (172.17.0.1) 9215.61ms

    WARNING:tornado.access:404 HEAD /blocks/80601104/RD17Z3z1OEDwl46CiPp5K9rrfPL-OfVMYUceleqD2xQ (172.17.0.1) 9225.69ms

     
