Anticast


Posts posted by Anticast

  1. Other things to try...

     

    Try making sure 'nobody' owns the dropbox folder (User 99) by running: 

    chown -R nobody /mnt/disk1/dbox

    IIRC, this is one of the things that mgutt's version does on startup, and I did this as well to my dropbox folder.
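To sanity-check that the chown took, a small helper like this (my own sketch, using the same example path) should report uid 99:

```shell
# Sketch: print the numeric owner of the Dropbox data folder so you can
# confirm the chown took (expect uid 99, i.e. "nobody" on Unraid).
check_owner() {
  stat -c '%u:%g %U %n' "$1"   # uid:gid, owner name, path
}
# e.g. check_owner /mnt/disk1/dbox
```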

     

Also, another nice thing about mgutt's version is that it logs the current Dropbox status on an interval. This version doesn't do that; once the daemon starts, it's all quiet. I had to run the following command inside the container to monitor my first-run sync and make sure it was making progress (it was; indexing my 410k files took about 15 minutes):

    dropbox status

Or, if your container is named 'dropbox_dropbox_1' (as it will be if you launched it via docker compose from a folder called 'dropbox'), you can check the status from outside the container in the unRAID web shell like this:

    docker exec dropbox_dropbox_1 dropbox status
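And if you want something closer to the periodic status logging mgutt's version does, a small polling wrapper is easy enough. This is just my sketch; the function name, container name, and timings are all mine, not part of either image:

```shell
# Sketch: run a status command every INTERVAL seconds, COUNT times.
# Usage: poll_status "docker exec dropbox_dropbox_1 dropbox status" 60 15
poll_status() {
  cmd=$1; interval=$2; count=$3
  i=0
  while [ "$i" -lt "$count" ]; do
    $cmd    # left unquoted on purpose so the command string word-splits
    i=$((i + 1))
    if [ "$i" -lt "$count" ]; then sleep "$interval"; fi
  done
}
```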

     

  2. @rayzor

     

    I don't know anything about the folder renaming error, so I can't help there. The best I can think to do is show you my full compose file and container logs just so you can compare and maybe get some ideas...

     

    My `docker-compose.yml`:

    version: "3.8"
    services:
      dropbox:
        image: janeczku/dropbox:latest
        environment:
          DBOX_UID: 99
          DBOX_GID: 100
        volumes:
          - /mnt/cache/dropbox:/dbox/Dropbox
          - /mnt/cache/dockerdata/dropbox:/dbox/.dropbox

     

Here are my container logs, pulled from Portainer, after the last two restarts (the containers are restarted as part of a backup I run). As you can see, the logs don't give me any interesting info:

    2022-12-29T11:08:21.531538010Z Checking for latest Dropbox version...
    2022-12-29T11:08:27.338688852Z Latest   : 163.4.5456
    2022-12-29T11:08:27.338715502Z Installed: 163.4.5456
    2022-12-29T11:08:27.338721952Z Dropbox is up-to-date
    2022-12-29T11:08:27.339702163Z Starting dropboxd (163.4.5456)...
    2022-12-29T11:08:28.040850317Z dropbox: load fq extension '/opt/dropbox/dropbox-lnx.x86_64-163.4.5456/cryptography.hazmat.bindings._openssl.cpython-38-x86_64-linux-gnu.so'
    2022-12-29T11:08:28.076813609Z dropbox: load fq extension '/opt/dropbox/dropbox-lnx.x86_64-163.4.5456/cryptography.hazmat.bindings._padding.cpython-38-x86_64-linux-gnu.so'
    2022-12-29T11:08:28.116893534Z dropbox: load fq extension '/opt/dropbox/dropbox-lnx.x86_64-163.4.5456/apex._apex.cpython-38-x86_64-linux-gnu.so'
    2022-12-29T11:08:28.242098016Z dropbox: load fq extension '/opt/dropbox/dropbox-lnx.x86_64-163.4.5456/psutil._psutil_linux.cpython-38-x86_64-linux-gnu.so'
    2022-12-29T11:08:28.246191719Z dropbox: load fq extension '/opt/dropbox/dropbox-lnx.x86_64-163.4.5456/psutil._psutil_posix.cpython-38-x86_64-linux-gnu.so'
    2022-12-29T11:08:29.502305456Z dropbox: load fq extension '/opt/dropbox/dropbox-lnx.x86_64-163.4.5456/tornado.speedups.cpython-38-x86_64-linux-gnu.so'
    2022-12-29T11:08:32.567984571Z dropbox: load fq extension '/opt/dropbox/dropbox-lnx.x86_64-163.4.5456/wrapt._wrappers.cpython-38-x86_64-linux-gnu.so'
    2023-01-02T11:00:35.478754170Z 
    2023-01-02T11:00:35.608528761Z Session terminated, terminating shell... ...terminated.
    2023-01-02T11:08:47.893458300Z Checking for latest Dropbox version...
    2023-01-02T11:08:53.778131711Z Latest   : 163.4.5456
    2023-01-02T11:08:53.778158271Z Installed: 163.4.5456
    2023-01-02T11:08:53.778164171Z Dropbox is up-to-date
    2023-01-02T11:08:53.779235301Z Starting dropboxd (163.4.5456)...
    2023-01-02T11:08:54.319299271Z dropbox: load fq extension '/opt/dropbox/dropbox-lnx.x86_64-163.4.5456/cryptography.hazmat.bindings._openssl.cpython-38-x86_64-linux-gnu.so'
    2023-01-02T11:08:54.353710302Z dropbox: load fq extension '/opt/dropbox/dropbox-lnx.x86_64-163.4.5456/cryptography.hazmat.bindings._padding.cpython-38-x86_64-linux-gnu.so'
    2023-01-02T11:08:54.380142041Z dropbox: load fq extension '/opt/dropbox/dropbox-lnx.x86_64-163.4.5456/apex._apex.cpython-38-x86_64-linux-gnu.so'
    2023-01-02T11:08:54.489037207Z dropbox: load fq extension '/opt/dropbox/dropbox-lnx.x86_64-163.4.5456/psutil._psutil_linux.cpython-38-x86_64-linux-gnu.so'
    2023-01-02T11:08:54.490007007Z dropbox: load fq extension '/opt/dropbox/dropbox-lnx.x86_64-163.4.5456/psutil._psutil_posix.cpython-38-x86_64-linux-gnu.so'
    2023-01-02T11:08:55.723200077Z dropbox: load fq extension '/opt/dropbox/dropbox-lnx.x86_64-163.4.5456/tornado.speedups.cpython-38-x86_64-linux-gnu.so'
    2023-01-02T11:08:58.516284166Z dropbox: load fq extension '/opt/dropbox/dropbox-lnx.x86_64-163.4.5456/wrapt._wrappers.cpython-38-x86_64-linux-gnu.so'

     

  3. On 11/14/2022 at 8:44 PM, JustinRSharp said:

    Has anyone had this issue:

     

    Please visit https://www.dropbox.com/cli_link_nonce?nonce=716006b069262d54bbc79ec32c8a1a2 to link this device.
    This computer is now linked to Dropbox. Welcome P
    [ALERT]: Your Dropbox folder is on a file system that is no longer supported.

I had this same issue when I mounted '/mnt/user/dropbox' into the container instead of '/mnt/cache/dropbox'. Changing all my mounted volumes to '/mnt/cache' instead of '/mnt/user' fixed this for me. You can also pick a specific array disk ('/mnt/diskX') if you want your dropbox data to be on the array. Search this thread for more info, as this has already been covered earlier.

  4. On 8/11/2022 at 10:51 PM, jluerken said:

    I also give up. Always having issues with Dropbox out of date, logfile permissions or re-adding the machine to the account. 

    It is not a problem of your container mgutt, it is how Dropbox handles things.

     

    So if you can create a debian container with a dropbox solution inside that would be wonderful ;-)

     

I echo the thanks to mgutt for the effort he spent putting this together.

     

I've also been having issues with this image for about a month now: it crashes every 5 to 60 minutes, and it took ~48 hours to get from "410k files left to index" down to "370k files left to index."

     

Given mgutt's suggestion of trying a debian container, I looked to the image he forked from, 'janeczku/dropbox', which is based on debian.

     

Switching to 'janeczku/dropbox' let the dropbox daemon index my 410k files in about 15 minutes, and it finished syncing about 10 minutes later (though that depends on how out of date the local copy is compared to the online version).

     

I know it's not super popular 'round here, but here is my docker compose file in case it can help someone else:

    version: "3.8"
    services:
      dropbox:
        image: janeczku/dropbox:latest
        environment:
          DBOX_UID: 99
          DBOX_GID: 100
          #DBOX_SKIP_UPDATE: true
        volumes:
          - /mnt/cache/dropbox:/dbox/Dropbox
          - /mnt/cache/dockerdata/dropbox:/dbox/.dropbox

     

    This has so far been working well enough for syncing purposes. But one thing that didn't work "out of the box" was checking on the dropbox daemon status, which can be done like this from an unraid shell:

    docker exec container_name dropbox status

     

Unfortunately, once the daemon was done indexing and started actually downloading/syncing files, checking the status resulted in a python error. This was caused by a text-encoding bug in the 'dropbox-cli' script inside the container. Luckily, it's easy enough to fix by hand, at least partially, by changing line 67 of '/usr/bin/dropbox-cli' from this

    enc = locale.getpreferredencoding()

    to this

    enc = "utf-8"

     

I'm not super docker savvy and wasn't able to figure out how to edit the file inside the container, so I moved '/usr/bin/dropbox-cli' to '/dbox/.dropbox/dropbox-cli' (which is mounted outside the container), edited it with vim from the unRAID web shell, and then copied the file back. Now 'dropbox status' works as expected. I don't expect the above fix to cover everything, because the dropbox-cli script is a bit of a hot mess: other places in the script still call 'locale.getpreferredencoding()' directly instead of reading the 'enc' value, so those may still fail. I also first tried just pulling in the latest official script from here (which appears to have fixed the bug), but it requires python3, which isn't available in the container, and I didn't want to muck with that when I could just change a few lines of python.
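If anyone wants to script the same edit instead of copying the file around and hand-editing, something like this should work. The function is mine, and note that it replaces every occurrence of the call, which goes a bit further than the single line-67 edit:

```shell
# Sketch: patch a local copy of dropbox-cli to hard-code UTF-8 in place
# of the locale lookup that crashes. Run it on the copied-out file,
# then copy the file back into the container.
patch_cli() {
  sed -i 's/locale.getpreferredencoding()/"utf-8"/' "$1"
}
# e.g. patch_cli /dbox/.dropbox/dropbox-cli
```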

  5. On 3/24/2022 at 6:24 AM, Konfitüre said:

    The problem is that my router can only have one local DNS

     

    If your router is like mine (ASUS) then it only lets you set one DNS server in the DHCP settings because it then sets itself as the secondary.

     

    So, if you want two piholes running, then

    • Set router's DHCP DNS server to pihole1. This then sets the DHCP DNS backup to the router.
    • Set the router's first DNS server to pihole2 and its second server to some external fallback, if you want that.

With this setup, if pihole1 goes down, DNS requests will be sent, via your router, to pihole2. Not ideal, as you'll lose the requesting client's identity on pihole2, but at least you're still handling the DNS.

  6. Just in case it helps someone else later...

     

    I found a better solution to this thanks to this post: 

     

    By modifying `smb-extra.conf` I made a share that uses the authenticated user name as part of the path. No need to make shares for each user.

     

I currently have this pointing at a subfolder that is mapped into a Dropbox container, so now when any Windows user saves something to their network drive it syncs to dropbox in a few seconds.

     

    [userhome]
    comment = %U home directory
    path = /mnt/disk2/dropbox/users/%U
    valid users = %U
    browsable = yes
    writable = yes
    create mask = 0777
    directory mask = 0777
    vfs objects =
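One caveat from my experience (not something Samba does for you, as far as I know): the '%U' path has to exist before a user connects, so each user's folder needs to be created ahead of time, e.g. with a helper like this (the function and user names are mine):

```shell
# Sketch: pre-create a private folder per user under the share base.
# Usage: make_user_homes /mnt/disk2/dropbox/users alice bob
make_user_homes() {
  base=$1; shift
  for u in "$@"; do
    mkdir -p "$base/$u"
    chmod 0700 "$base/$u"   # keep each folder private to its owner
  done
}
```

You may also need to chown each folder to its user so they can write to it.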

     

Thanks again for the guidance itimpil! I don't know how I screwed it up before (maybe the wrong order of connecting to samba drives, like you said), but I cleared out the shares, made new shares for each user, logged in to Windows as each user, mapped their share to a drive, and moved their Documents to that share drive. Logging out and logging back in is persisting the drives per user (like you said it would), so I think I'm in business.

     

    Thanks again!

  8. I'm running Windows Server as a VM with accounts for my family. I want to give each user their own *private* folder somewhere on the array and map that location to their documents.

     

I tried creating a share for each user, but I'm not able to have multiple samba connections to unRAID, and I don't want to share a base "users" folder because then all users can modify all other users' documents.

     

    Any ideas on how I can set this up?

  9. I just upgraded from 6.8.3 to 6.9.2 and now my "main" Windows VM keeps crashing (blue screen of death, BSOD) about 2 minutes after the first user logs in.

     

I booted the VM into safe mode, downloaded BlueScreenView http://www.nirsoft.net/utils/blue_screen_view.html and used it to see that the cause is DRIVER_IRQL_NOT_LESS_OR_EQUAL 0x000000d1; sometimes the "Caused By Driver" is ntoskrnl.exe (NT Kernel & System) and sometimes it's ndis.sys (Network Driver Interface Specification).

     

I don't know the right way to fix this. I assume that there has been a KVM change in unRAID from 6.8.3 to 6.9.2 and now I need to update the VM drivers, but I don't know which ones. My first thought is the network driver, due to ndis.sys being listed in the minidump, but running in safe mode with networking seems to work fine with the VirtIO network driver.

     

Any suggestions or directions on something to try would be appreciated.

  10. I pulled the trigger and picked up this item for ~$100 USD: "DS-510 USB to Gigabit Ethernet USB Device Server & AC Power Supply" https://www.amazon.com/dp/B00U9UDSH8

     

The setup was straightforward (I just needed to install "client" software on my VM), and after a few option changes in that software I can now plug a USB device into the hub on one side of my house and it automatically shows up in my VM running on my unRAID server on the other side of the house.

     

    So far I've tested it with a USB Headphone/Microphone combo and a Silhouette Cameo (paper/vinyl cutter) and both work great.

  11. I'm looking for a solution to the same problem, but am hoping to not run any cables.

     

    Is there a wireless or network-based solution (or anything else that doesn't require running new cables) that will let me plug in a USB device away from the server, like near a thin client, and have it "connected" to the passed through USB port in my unRAID VM?

  12. So, I found this post from 2011: https://www.linuxquestions.org/questions/slackware-14/disabling-irq-16-a-879964/

     

    Quote

    I am using Slackware64 13.37 and every once and awhile I will get a notification that the kernel is disabling IRQ #16. When this happens it feels like I lose 3D acceleration. KDE becomes very sluggish and lags like crazy. Even when I type into text boxes they lag. The only way to fix it temporarily is to reboot, but then it will happen again in a few days.

     

    Isn't unRAID based on Slackware? Perhaps this is an issue with the distro and not something specific to unRAID.

This same problem was happening to me. I had my Windows gaming VM with my NVIDIA GPU passed through, but the performance was crap.

     

    I would see "kernel:Disabling IRQ #16" errors on the web terminal but had no idea what it meant.

     

After booting up Windows, the performance of basic operations like right-clicking on the desktop was super laggy (about 1 second), but I figured it was a driver issue. So I reinstalled the NVIDIA drivers and noticed that, for the few seconds between the old NVIDIA drivers being unloaded and the new ones starting up, Windows performance jumped up to what I would normally expect.

     

I assumed I was doing something wrong, since there is a lot of chatter on the internet about screwing up GPU passthrough. I spent 3 full nights fiddling with VM settings, restarting the VM, changing VFIO-PCI settings, rebooting unRAID, trying the VM again, etc., and nothing helped. During this whole time I had one display hooked up to my motherboard output and another display hooked up to the NVIDIA GPU.

     

After I finally gave up on the VM, I unplugged the display hooked up to the motherboard and started fiddling with other unRAID stuff. Several reboots and a week or two later, I decided to boot up the VM again to fiddle a bit more, and this time it just worked, every time. Even VMs that were "slow" before now ran like normal. This made me super nervous, but I was just happy that it was working.

     

About a month later, due to flash drive issues, I needed to hook up the onboard display again so I could see the unRAID output. I got the flash drive issues fixed, started up the VM, and saw that it was slow again. I shut down the VM and saw this in my log (just in case it helps LT):

     

    Dec 18 01:05:48 Tower kernel: irq 16: nobody cared (try booting with the "irqpoll" option)
    Dec 18 01:05:48 Tower kernel: CPU: 2 PID: 0 Comm: swapper/2 Tainted: P O 4.19.107-Unraid #1
    Dec 18 01:05:48 Tower kernel: Hardware name: Gigabyte Technology Co., Ltd. Z170X-Gaming 7/Z170X-Gaming 7, BIOS F20 11/04/2016
    Dec 18 01:05:48 Tower kernel: Call Trace:
    Dec 18 01:05:48 Tower kernel: <IRQ>
    Dec 18 01:05:48 Tower kernel: dump_stack+0x67/0x83
    Dec 18 01:05:48 Tower kernel: __report_bad_irq+0x30/0xa5
    Dec 18 01:05:48 Tower kernel: note_interrupt+0x1d8/0x229
    Dec 18 01:05:48 Tower kernel: handle_irq_event_percpu+0x4f/0x6f
    Dec 18 01:05:48 Tower kernel: handle_irq_event+0x34/0x51
    Dec 18 01:05:48 Tower kernel: handle_fasteoi_irq+0x92/0xfc
    Dec 18 01:05:48 Tower kernel: handle_irq+0x1c/0x1f
    Dec 18 01:05:48 Tower kernel: do_IRQ+0x46/0xd0
    Dec 18 01:05:48 Tower kernel: common_interrupt+0xf/0xf
    Dec 18 01:05:48 Tower kernel: </IRQ>
    Dec 18 01:05:48 Tower kernel: RIP: 0010:cpuidle_enter_state+0xe8/0x141
    Dec 18 01:05:48 Tower kernel: Code: ff 45 84 f6 74 1d 9c 58 0f 1f 44 00 00 0f ba e0 09 73 09 0f 0b fa 66 0f 1f 44 00 00 31 ff e8 7a 8d bb ff fb 66 0f 1f 44 00 00 <48> 2b 2c 24 b8 ff ff ff 7f 48 b9 ff ff ff ff f3 01 00 00 48 39 cd
    Dec 18 01:05:48 Tower kernel: RSP: 0018:ffffc900031dbe98 EFLAGS: 00000246 ORIG_RAX: ffffffffffffffde
    Dec 18 01:05:48 Tower kernel: RAX: ffff88884fa9fac0 RBX: ffff88884faaa100 RCX: 000000000000001f
    Dec 18 01:05:48 Tower kernel: RDX: 0000000000000000 RSI: 000000001fefa611 RDI: 0000000000000000
    Dec 18 01:05:48 Tower kernel: RBP: 0000018f4e3253cb R08: 0000018f4e3253cb R09: 0000000000000001
    Dec 18 01:05:48 Tower kernel: R10: 0000000000000000 R11: 071c71c71c71c71c R12: 0000000000000001
    Dec 18 01:05:48 Tower kernel: R13: ffffffff81e5b120 R14: 0000000000000000 R15: ffffffff81e5b198
    Dec 18 01:05:48 Tower kernel: ? cpuidle_enter_state+0xbf/0x141
    Dec 18 01:05:48 Tower kernel: do_idle+0x17e/0x1fc
    Dec 18 01:05:48 Tower kernel: cpu_startup_entry+0x6a/0x6c
    Dec 18 01:05:48 Tower kernel: start_secondary+0x197/0x1b2
    Dec 18 01:05:48 Tower kernel: secondary_startup_64+0xa4/0xb0
    Dec 18 01:05:48 Tower kernel: handlers:
    Dec 18 01:05:48 Tower kernel: [<0000000008c48ea5>] i801_isr [i2c_i801]
    Dec 18 01:05:48 Tower kernel: [<0000000067bc464a>] vfio_intx_handler
    Dec 18 01:05:48 Tower kernel: [<0000000067bc464a>] vfio_intx_handler
    Dec 18 01:05:48 Tower kernel: Disabling IRQ #16

    Then I started searching and found this post.

     

Based on the info here, I found that I can 100% replicate the "slow Windows VM" just by plugging a display into my motherboard. When I do, GPU utilization goes to near 100% in the Windows VM, and the only fix I've found is to reboot unRAID.
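For anyone else chasing this, the handlers sharing IRQ 16 (the same list that shows up at the bottom of the trace above) can be inspected straight from the unRAID shell. Just a diagnostic sketch, nothing more:

```shell
# Which devices/drivers are attached to IRQ 16 right now:
grep -E '^ *16:' /proc/interrupts || true    # may be empty on other systems
# Any kernel messages about IRQ 16 misbehaving or being disabled:
dmesg 2>/dev/null | grep -i 'irq 16' || true
```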