Anticast

Members

  • Posts: 22



  1. Other things to try... Make sure 'nobody' (user 99) owns the Dropbox folder by running:

         chown -R nobody /mnt/disk1/dbox

     IIRC, this is one of the things that mgutt's version does on startup, and I did this to my Dropbox folder as well. Another nice thing mgutt's version has is that it prints the current Dropbox status to the log on an interval. This version doesn't do that: once the daemon starts, it's all quiet. I had to execute the following command in the container to monitor my first-run sync status and make sure it was making progress (it was; it took about 15 minutes to index my 410k files):

         dropbox status

     Or, if your container is named 'dropbox_dropbox_1' (as it would be if you launched it via docker compose from a folder called 'dropbox'), you can check the status from outside the container in the unRAID web shell like this:

         docker exec dropbox_dropbox_1 dropbox status
  2. @rayzor I don't know anything about the folder-renaming error, so I can't help there. The best I can think to do is show you my full compose file and container logs so you can compare and maybe get some ideas. My `docker-compose.yml`:

         version: "3.8"
         services:
           dropbox:
             image: janeczku/dropbox:latest
             environment:
               DBOX_UID: 99
               DBOX_GID: 100
             volumes:
               - /mnt/cache/dropbox:/dbox/Dropbox
               - /mnt/cache/dockerdata/dropbox:/dbox/.dropbox

     Here are my container logs, pulled from Portainer, after the last two restarts (the containers are restarted as part of a backup I run). As you can see, I don't get any interesting info from the logs:

         2022-12-29T11:08:21.531538010Z Checking for latest Dropbox version...
         2022-12-29T11:08:27.338688852Z Latest : 163.4.5456
         2022-12-29T11:08:27.338715502Z Installed: 163.4.5456
         2022-12-29T11:08:27.338721952Z Dropbox is up-to-date
         2022-12-29T11:08:27.339702163Z Starting dropboxd (163.4.5456)...
         2022-12-29T11:08:28.040850317Z dropbox: load fq extension '/opt/dropbox/dropbox-lnx.x86_64-163.4.5456/cryptography.hazmat.bindings._openssl.cpython-38-x86_64-linux-gnu.so'
         2022-12-29T11:08:28.076813609Z dropbox: load fq extension '/opt/dropbox/dropbox-lnx.x86_64-163.4.5456/cryptography.hazmat.bindings._padding.cpython-38-x86_64-linux-gnu.so'
         2022-12-29T11:08:28.116893534Z dropbox: load fq extension '/opt/dropbox/dropbox-lnx.x86_64-163.4.5456/apex._apex.cpython-38-x86_64-linux-gnu.so'
         2022-12-29T11:08:28.242098016Z dropbox: load fq extension '/opt/dropbox/dropbox-lnx.x86_64-163.4.5456/psutil._psutil_linux.cpython-38-x86_64-linux-gnu.so'
         2022-12-29T11:08:28.246191719Z dropbox: load fq extension '/opt/dropbox/dropbox-lnx.x86_64-163.4.5456/psutil._psutil_posix.cpython-38-x86_64-linux-gnu.so'
         2022-12-29T11:08:29.502305456Z dropbox: load fq extension '/opt/dropbox/dropbox-lnx.x86_64-163.4.5456/tornado.speedups.cpython-38-x86_64-linux-gnu.so'
         2022-12-29T11:08:32.567984571Z dropbox: load fq extension '/opt/dropbox/dropbox-lnx.x86_64-163.4.5456/wrapt._wrappers.cpython-38-x86_64-linux-gnu.so'
         2023-01-02T11:00:35.478754170Z
         2023-01-02T11:00:35.608528761Z Session terminated, terminating shell... ...terminated.
         2023-01-02T11:08:47.893458300Z Checking for latest Dropbox version...
         2023-01-02T11:08:53.778131711Z Latest : 163.4.5456
         2023-01-02T11:08:53.778158271Z Installed: 163.4.5456
         2023-01-02T11:08:53.778164171Z Dropbox is up-to-date
         2023-01-02T11:08:53.779235301Z Starting dropboxd (163.4.5456)...
         2023-01-02T11:08:54.319299271Z dropbox: load fq extension '/opt/dropbox/dropbox-lnx.x86_64-163.4.5456/cryptography.hazmat.bindings._openssl.cpython-38-x86_64-linux-gnu.so'
         2023-01-02T11:08:54.353710302Z dropbox: load fq extension '/opt/dropbox/dropbox-lnx.x86_64-163.4.5456/cryptography.hazmat.bindings._padding.cpython-38-x86_64-linux-gnu.so'
         2023-01-02T11:08:54.380142041Z dropbox: load fq extension '/opt/dropbox/dropbox-lnx.x86_64-163.4.5456/apex._apex.cpython-38-x86_64-linux-gnu.so'
         2023-01-02T11:08:54.489037207Z dropbox: load fq extension '/opt/dropbox/dropbox-lnx.x86_64-163.4.5456/psutil._psutil_linux.cpython-38-x86_64-linux-gnu.so'
         2023-01-02T11:08:54.490007007Z dropbox: load fq extension '/opt/dropbox/dropbox-lnx.x86_64-163.4.5456/psutil._psutil_posix.cpython-38-x86_64-linux-gnu.so'
         2023-01-02T11:08:55.723200077Z dropbox: load fq extension '/opt/dropbox/dropbox-lnx.x86_64-163.4.5456/tornado.speedups.cpython-38-x86_64-linux-gnu.so'
         2023-01-02T11:08:58.516284166Z dropbox: load fq extension '/opt/dropbox/dropbox-lnx.x86_64-163.4.5456/wrapt._wrappers.cpython-38-x86_64-linux-gnu.so'
  3. I had this same issue when I mounted '/mnt/user/dropbox' into the container instead of '/mnt/cache/dropbox'. Changing all my mounted volumes to '/mnt/cache' instead of '/mnt/user' fixed this for me. You can also pick a specific array disk ('/mnt/diskX') if you want your Dropbox data to be on the array. Search this thread for more info, as this has already been discussed earlier.
  4. I echo the thanks to mgutt for the effort he spent putting this together. I've also been having issues with this image for about a month now: it crashes every 5 to 60 minutes and took ~48 hours to get from "410k files left to index" down to "370k files left to index." Given mgutt's suggestion of trying a Debian container, I looked at the image he forked from, 'janeczku/dropbox', which is based on Debian. Switching to 'janeczku/dropbox' allowed the Dropbox daemon to index my 410k files in about 15 minutes, and it finished syncing about 10 minutes later (though that is a function of how out of date the local copy is compared to the online version). I know it's not super popular 'round here, but here is my docker compose file in case it can help someone else:

         version: "3.8"
         services:
           dropbox:
             image: janeczku/dropbox:latest
             environment:
               DBOX_UID: 99
               DBOX_GID: 100
               #DBOX_SKIP_UPDATE: true
             volumes:
               - /mnt/cache/dropbox:/dbox/Dropbox
               - /mnt/cache/dockerdata/dropbox:/dbox/.dropbox

     This has so far been working well enough for syncing purposes. But one thing that didn't work out of the box was checking on the Dropbox daemon status, which can be done like this from an unRAID shell:

         docker exec container_name dropbox status

     Unfortunately, once the daemon finished indexing and started actually downloading/syncing files, checking the status resulted in a Python error. This was caused by a bug in the 'dropbox-cli' script in the container that makes it fail on a text-encoding issue. Luckily, it's easy enough to fix by hand, at least partially, by changing line 67 of '/usr/bin/dropbox-cli' from this:

         enc = locale.getpreferredencoding()

     to this:

         enc = "utf-8"

     I'm not super Docker savvy, so I wasn't able to figure out how to edit the file inside the container. Instead, I moved '/usr/bin/dropbox-cli' to '/dbox/.dropbox/dropbox-cli' (which is mounted outside the container), edited it with vim from the unRAID web shell, and then copied the file back. Now 'dropbox status' works as expected. I don't expect the above fix to work for everything, because the dropbox-cli script is a bit of a hot mess: there are other places in the script that still call 'locale.getpreferredencoding()' directly instead of reading the 'enc' value, so those places may still fail. I also first tried pulling in the latest official script from here (which appears to have fixed the bug), but found that it requires python3, which isn't available in the container, and I didn't want to muck with that when I could just change a few lines of Python.
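A possible shortcut for the dropbox-cli fix above: the same one-line change can likely be applied with sed instead of copying the file out of the container and back. This is only a sketch that runs the substitution on a stand-in file to show what it does; applying it to the real script (e.g. via `docker exec <container> sed -i ...`) assumes sed is available in the image.

```shell
# Run the one-line patch on a stand-in copy of the script; the same sed
# substitution could target /usr/bin/dropbox-cli inside the container.
printf 'enc = locale.getpreferredencoding()\n' > /tmp/dropbox-cli.demo
sed -i 's/locale.getpreferredencoding()/"utf-8"/' /tmp/dropbox-cli.demo
cat /tmp/dropbox-cli.demo   # prints: enc = "utf-8"
```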
  5. Actually, for ASUS routers it looks like you can change what the secondary DNS server is via ssh: https://www.reddit.com/r/pihole/comments/mawsjz/comment/grva31i/?utm_source=share&utm_medium=web2x&context=3
  6. If your router is like mine (ASUS), it only lets you set one DNS server in the DHCP settings, because it then sets itself as the secondary. So, if you want two Pi-holes running:

     • Set the router's DHCP DNS server to pihole1. This makes the router itself the DHCP DNS backup.
     • Set the router's first DNS server to pihole2 and its second server to some external fallback, if you want one.

     With this setup, if pihole1 goes down, DNS requests will be sent, via your router, to pihole2. Not ideal, as you'll lose the requesting client's identity on pihole2, but at least you're still handling the DNS.
  7. Any word on this? I've set up my Windows VMs to also hibernate but appear to be running into the same problems as golli53. I'm running unRAID 6.9.2.
  8. Just in case it helps someone else later... I found a better solution to this thanks to this post: by modifying `smb-extra.conf` I made a share that uses the authenticated user name as part of the path, so there's no need to make a share for each user. I currently have this mapping to a subfolder that is mounted into a Dropbox container, and now when any Windows user saves something to their network drive it syncs to Dropbox in a few seconds.

         [userhome]
         comment = %U home directory
         path = /mnt/disk2/dropbox/users/%U
         valid users = %U
         browsable = yes
         writable = yes
         create mask = 0777
         directory mask = 0777
         vfs objects =
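One caveat worth noting with the `%U` share above: Samba generally won't create the per-user directory on its own, so each user's folder usually has to exist before that user can connect. A sketch for pre-creating them ("alice" and "bob" are hypothetical user names, and BASE defaults to a temporary stand-in rather than the real /mnt/disk2/dropbox/users):

```shell
# Pre-create one directory per Samba user so the [userhome] share's
# path = .../users/%U resolves when they connect. BASE is a stand-in;
# on the server it would be /mnt/disk2/dropbox/users.
BASE="${BASE:-/tmp/userhome-demo}"
for u in alice bob; do
  mkdir -p "$BASE/$u"
  chmod 0777 "$BASE/$u"   # matches the share's create/directory masks
done
ls "$BASE"
```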
  9. Thanks again for the guidance, itimpi! I don't know how I screwed it up before (maybe the wrong ordering of connecting to samba drives, like you said), but I cleared out the shares, made new shares for each user, logged in to Windows as each user, mapped their share to a drive, and moved their Documents to that share drive. Logging out and logging back in is persisting the drives per user (like you said it would), so I think I'm in business. Thanks again!
  10. Awesome, thanks itimpi! I'll reset the shares and unRAID users and give it another shot!
  11. Thanks for the responses! I'm likely doing something wrong, but I was having issues getting multiple connections from multiple users. My assumption was that it's because it's one computer (running Windows Server) and each user is connecting via RDP. If you're saying that this setup should still work, then I'll tear it down and try again.
  12. Yes, if share 'user_1' and share 'user_2' have the same credentials then both users can read/write all other user's data.
  13. I'm running Windows Server as a VM with accounts for my family. I want to give each user their own *private* folder somewhere on the array and map that location to their Documents. I tried creating a share for each user, but I'm not able to have multiple samba connections to unRAID, and I don't want to share a base "users" folder because then all users can modify all other users' documents. Any ideas on how I can set this up?
  14. A little bit more data... I've updated all VirtIO drivers to 0.1.185 and still got BSOD. I removed the bridge interface from the VM and still get the BSOD. I waited at the log in screen and also still get the BSOD (no user logged in).
  15. I just upgraded from 6.8.3 to 6.9.2 and now my "main" Windows VM keeps crashing (blue screen of death, BSOD) about 2 minutes after the first user logs in. I booted the VM into safe mode, downloaded BlueScreenView (http://www.nirsoft.net/utils/blue_screen_view.html), and used it to see that the cause is DRIVER_IRQL_NOT_LESS_OR_EQUAL 0x000000d1; sometimes the "Caused By Driver" is ntoskrnl.exe (NT Kernel & System) and sometimes it's ndis.sys (Network Driver Interface Specification). I don't know the right way to fix this. I assume there has been a KVM change in unRAID from 6.8.3 to 6.9.2 and I now need to update the VM drivers, but I don't know which ones. My first thought is the network driver, due to ndis.sys being listed in the minidump, but running in safe mode with network connectivity seems to work fine with the VirtIO network driver. Any suggestions or directions on something to try would be appreciated.