ericswpark

Community Answers

  1. @itimpi thanks! Found the diagnostics that UnRAID saved before rebooting (attached below). Note that while the timestamp is newer, that's because the diagnostics in the OP were generated and downloaded via my browser, which is in an earlier time zone. This is the capture taken before the one in the OP. I did skim through the logs, but I don't understand what they mean:

        Dec 3 01:15:55 dipper root: rmdir: failed to remove '/mnt/user': Device or resource busy
        Dec 3 01:15:55 dipper emhttpd: shcmd (1343364): exit status: 1
        Dec 3 01:15:55 dipper emhttpd: shcmd (1343366): rm -f /boot/config/plugins/dynamix/mover.cron
        Dec 3 01:15:55 dipper emhttpd: shcmd (1343367): /usr/local/sbin/update_cron
        Dec 3 01:15:55 dipper emhttpd: Retry unmounting user share(s)...
        Dec 3 01:15:58 dipper root: Status of all loop devices
        Dec 3 01:15:58 dipper root: /dev/loop1: [2049]:12 (/boot/previous/bzfirmware)
        Dec 3 01:15:58 dipper root: /dev/loop0: [2049]:10 (/boot/previous/bzmodules)
        Dec 3 01:15:58 dipper root: Active pids left on /mnt/*
        Dec 3 01:15:58 dipper root: USER PID ACCESS COMMAND
        Dec 3 01:15:58 dipper root: /mnt/addons: root kernel mount /mnt/addons
        Dec 3 01:15:58 dipper root: /mnt/cache: root kernel mount /mnt/cache
        Dec 3 01:15:58 dipper root: /mnt/disk1: root kernel mount /mnt/disk1
        Dec 3 01:15:58 dipper root: /mnt/disk2: root kernel mount /mnt/disk2
        Dec 3 01:15:58 dipper root: /mnt/disk3: root kernel mount /mnt/disk3
        Dec 3 01:15:58 dipper root: /mnt/disks: root kernel mount /mnt/disks
        Dec 3 01:15:58 dipper root: /mnt/remotes: root kernel mount /mnt/remotes
        Dec 3 01:15:58 dipper root: /mnt/rootshare: root kernel mount /mnt/rootshare
        Dec 3 01:15:58 dipper root: /mnt/user: root kernel mount /mnt/user
        Dec 3 01:15:58 dipper root: root 1142 ..c.. rpcd_mdssvc
        Dec 3 01:15:58 dipper root: root 2998 ..c.. rpcd_mdssvc
        Dec 3 01:15:58 dipper root: Active pids left on /dev/md*
        Dec 3 01:15:58 dipper root: Generating diagnostics...
        Dec 3 01:16:00 dipper emhttpd: shcmd (1343368): /usr/sbin/zfs unmount -a
        Dec 3 01:16:00 dipper emhttpd: shcmd (1343369): umount /mnt/user
        Dec 3 01:16:00 dipper root: umount: /mnt/user: target is busy.
        Dec 3 01:16:00 dipper emhttpd: shcmd (1343369): exit status: 32
        Dec 3 01:16:00 dipper emhttpd: shcmd (1343370): rmdir /mnt/user
        Dec 3 01:16:00 dipper root: rmdir: failed to remove '/mnt/user': Device or resource busy
        Dec 3 01:16:00 dipper emhttpd: shcmd (1343370): exit status: 1
        Dec 3 01:16:00 dipper emhttpd: shcmd (1343372): rm -f /boot/config/plugins/dynamix/mover.cron
        Dec 3 01:16:00 dipper emhttpd: shcmd (1343373): /usr/local/sbin/update_cron
        Dec 3 01:16:00 dipper emhttpd: Retry unmounting user share(s)...

     I'm guessing PIDs 1142 and 2998 of the process `rpcd_mdssvc` are responsible for blocking the unmount, but I don't know what that process is. Google says it's related to Samba, but I thought UnRAID was supposed to terminate Samba before trying to unmount. Any ideas why that didn't happen here?

     dipper-diagnostics-20231203-0115.zip
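Not part of the original post, but for anyone landing here with the same unmount retry loop: a few quick checks that can be run before the next reboot to see what is still pinning `/mnt/user`. PID 1142 is just the example value from the log above.

```bash
# Show which processes (user, PID, access mode) keep /mnt/user busy --
# fuser -v prints a table like the one emhttpd logged; -m means "treat
# the argument as a mount point".
fuser -vm /mnt/user

# Full command lines of everything with files open on that filesystem.
lsof /mnt/user

# The parent of a suspect PID hints at which service spawned it
# (rpcd_mdssvc appears to be one of Samba's split-off RPC daemons;
# mdssvc is the macOS Spotlight search service).
ps -o ppid=,cmd= -p 1142
```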
  2. Hi, after updating from 6.12.5 to .6, I issued a reboot, and when the server came back up it reported an unclean shutdown. I'm guessing something took too long to unmount and UnRAID had to force the reboot. I don't remember where the logs from the hard shutdown are stored – is there a way to recover them? I've attached a diagnostics zip taken after the reboot, but I'm not sure it includes the logs from before the shutdown. Any advice on pinpointing this would be appreciated! dipper-diagnostics-20231202-1120.zip
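A follow-up note rather than something from the post: as far as I can tell, Unraid saves the diagnostics it collects during a (failed) clean shutdown to the flash drive, so a quick look there may turn up a pre-reboot capture. The `/boot/logs` path is my assumption.

```bash
# Assumed location: diagnostics captured at shutdown end up on the USB
# flash device. Adjust the path if your install differs.
ls -lt /boot/logs/ | head
```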
  3. Please see the original post for more information. Sometimes the update dialog will be blank and not show the current progress (like Docker pulling the new image, etc.). In the console, the following error is logged:

        11:00:27.506 Uncaught TypeError: progress_span[data[1]] is null
            progress_dots[data[1]]< https://dipper:1443/Docker:1494
            setInterval handler* https://dipper:1443/Docker:1494
            emit https://dipper:1443/webGui/javascript/dynamix.js?v=1700087630:15
            onmessage https://dipper:1443/webGui/javascript/dynamix.js?v=1700087630:15
            listen https://dipper:1443/webGui/javascript/dynamix.js?v=1700087630:15
            start https://dipper:1443/webGui/javascript/dynamix.js?v=1700087630:15
            openDocker https://dipper:1443/Docker:252
            updateContainer https://dipper:1443/plugins/dynamix.docker.manager/javascript/docker.js?v=1700087630:108
            i https://dipper:1443/webGui/javascript/dynamix.js?v=1700087630:51
            h https://dipper:1443/webGui/javascript/dynamix.js?v=1700087630:51
            t https://dipper:1443/webGui/javascript/dynamix.js?v=1700087630:51
            u https://dipper:1443/webGui/javascript/dynamix.js?v=1700087630:51
            updateContainer https://dipper:1443/plugins/dynamix.docker.manager/javascript/docker.js?v=1700087630:105
            action https://dipper:1443/plugins/dynamix.docker.manager/javascript/docker.js?v=1700087630:12
            dispatch https://dipper:1443/webGui/javascript/dynamix.js?v=1700087630:5
            handle https://dipper:1443/webGui/javascript/dynamix.js?v=1700087630:5
            4 Docker:1494:59

     Sometimes (about 1 out of 30 attempts) the update dialog functions normally, but I can't tell what the trigger is. I can't test more because I only have a chance to reproduce whenever Docker images have a new update. This is reproducible from the version in the original post (6.12.3) up to 6.12.5. I haven't updated to 6.12.6 yet, but I'm pretty sure the issue is there as well, because the .6 release only updates ZFS if I'm understanding the changelog correctly. I will edit this comment once I update and check.
  4. Hi, is there a way of backing up the Docker apps without stopping the containers? I know that might result in a partial backup, since files can still be modified by the running containers, but stopping everything currently means about 30 minutes of downtime every week.
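One way to shrink that 30-minute window, sketched under the assumption that appdata lives at `/mnt/user/appdata` and that a plain rsync copy is an acceptable backup format: do the long copy while everything is running, then stop the containers only for a short second pass. Paths and the destination are placeholders, not an Unraid feature.

```bash
#!/bin/bash
# Two-pass appdata backup sketch: long first pass while containers run,
# short second pass while they are stopped. Placeholder paths.
SRC=/mnt/user/appdata/
DST=/mnt/user/backups/appdata/

# Remember which containers are currently running so only those restart.
RUNNING=$(docker ps -q)

# Pass 1: bulk copy while containers are still up (may be inconsistent).
rsync -a --delete "$SRC" "$DST"

# Pass 2: stop, copy only what changed since pass 1, start again.
docker stop $RUNNING
rsync -a --delete "$SRC" "$DST"
docker start $RUNNING
```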
  5. Not really. When I said "there's no reason why SMB can't do so as well", I meant that in terms of encryption overhead. I also put in the qualifier: "given improved latency." However, when you put SMB and SFTP next to each other, you can see why SMB doesn't perform very well over VPN. SMB is a very chatty protocol, and the increased latency means slower transfer speeds and the general choppiness that I am experiencing. In terms of protocol overhead, SMB just cannot compete with SFTP (or other protocols). I was hoping there would be some sort of Docker-based solution available, but I think I may have to resort to a VM, as you said. Thanks for the reply.
  6. Tailscale's DERP servers are only used as a last resort, and when one is in use I don't hold the slow speeds against SMB. The server is running a 4600G, and the client device is an M1 Pro, so I don't think encryption overhead is to blame here. If SFTP over the same connection can do 10 MB/s, then there's no reason why SMB can't do so as well, given improved latency. But VPN will always add latency, and SMB suffers over high-latency links. Anyway, I think we've gone off topic. I'm looking for a protocol or something else to use as an alternative to SMB, only for when I'm accessing the server away from home. UnRAID's built-in SFTP (and FTP) servers don't cut it because they only permit root login, which I cannot give out to my users (and which is generally more tedious, as I have to go in and manually fix permissions after the fact). At this point, I don't think trying to improve something that is hitting a theoretical limit of SMB over high-latency links is going to be beneficial.
  7. Sure! So with SMB over VPN, I'm getting around 1-5 MB/s on a good day, and with horrible latency. That means that file listings take a long time to load (around 10-20 seconds), transfers take a long time to start (around 30 seconds minimum), and progress updates are sporadic and "jumpy" (about once every 10-20 seconds). For contrast, on LAN (again, 50 ms ping on average), listings take less than 2 seconds, transfers start in less than 2 seconds, and progress updates come in extremely rapidly (once every second or so). I get 200 Mbps down/up (symmetrical), and the server has 100 Mbps down/up (also symmetrical). I know there are losses and overhead and all that, but I should be getting around 10 MB/s, not 1-5 MB/s. And I do get 10 MB/s if I transfer using UnRAID's built-in SFTP. Tailscale uses DERP relays when a direct connection cannot be established, but most of my transfers are done over a direct link, so VPN server speed shouldn't be an issue; on restrictive networks I take the DERP relay overhead into account. I mean, 1-5 MB/s over VPN is still "fast" in absolute terms, but compared to SFTP it's too slow and too high-latency to be usable. Sure, it works in a pinch, but actually navigating and accessing files on the NAS is painful, to the point where all of my family members complain whenever they have to do it away from home.
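Just to spell out the back-of-the-envelope number behind "around 10 MB/s" (my arithmetic, not from the post): the server's 100 Mbps uplink is the hard ceiling when pulling files away from home.

```bash
# 100 Mbit/s divided by 8 bits per byte = 12.5 MB/s raw; after TCP, VPN
# and protocol overhead, ~10 MB/s is a reasonable expectation.
awk 'BEGIN { printf "%.1f MB/s raw ceiling\n", 100 / 8 }'
```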
  8. @JorgeB thanks, but I'm looking for alternative file-transfer technologies, not VPN protocols. As far as I'm aware, Tailscale uses WireGuard, and I'm using Tailscale right now. I'm looking for a way to get better speeds over the VPN, as SMB doesn't perform very well over high-latency networks bridged via VPN.
  9. When in an SSH session, I sometimes need to modify/move/create a lot of files and directories. As I can only log in as the root user, this messes up the permissions. I know I can technically `chmod` them, but it would be way easier to switch to the `nobody` user with `su nobody`, make the necessary changes, then exit the session and be dropped right back to the root SSH shell.
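For what it's worth, plain `su nobody` may simply return to the prompt because the `nobody` account normally has no login shell; passing a shell explicitly works. A small sketch, assuming the usual Unraid `nobody:users` ownership on shares:

```bash
# Switch to the nobody user with an explicit shell (its default shell is
# typically /bin/false or nologin, so `su nobody` alone gets you nothing).
su -s /bin/bash nobody

# Files created now are owned by nobody:users; `exit` drops you back
# into the original root SSH session.
```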
  10. Basically the title. When I'm at home, I get great performance. But when I'm away from my home network and access my server over a VPN, the added latency makes SMB slow to a crawl. Think "fifteen seconds just to pull up a directory listing" crawl. Or "30 seconds to transfer a 2 MB file" crawl. How much latency? Not much. When I'm at home, I get <100 ms latency. When I'm out and about, the latency varies depending on the connection, but I've noticed the horrible performance even at around 200 ms. I wanted to use something like SFTP, but learned that UnRAID doesn't support SFTP unless you log in as the root user, which is a huge pain because you need to manually set correct permissions afterwards, and the other members of my family obviously cannot be given the root account to access everything on the server. I previously tried Nextcloud, but found the experience very janky: the Nextcloud Docker tends to corrupt the DB, and having Nextcloud as a middleman still feels clunky compared to mounting the server directly using SMB/SSHFS. Anybody have any tips on alternative file access methods? Maybe an FTPS/SFTP server via Docker? How do you access files on your server when you're away from home, over a VPN with high latency?
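For concreteness, the SSHFS route mentioned above looks roughly like the sketch below on a client machine. It rides on the same built-in SSH/SFTP service, so on current Unraid it still requires the root login the post is trying to avoid. Host, share and mount point are placeholders.

```bash
# Mount a share from the NAS over SSH/SFTP (placeholder host and paths).
# reconnect + ServerAlive options keep the mount usable on a flaky VPN link.
mkdir -p ~/nas/media
sshfs root@dipper:/mnt/user/media ~/nas/media \
    -o reconnect,ServerAliveInterval=15,ServerAliveCountMax=3

# Unmount when done: fusermount -u ~/nas/media (Linux) or
# umount ~/nas/media (macOS).
```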
  11. There is an error in the console during the update:

         Uncaught TypeError: progress_span[data[1]] is null
             progress_dots[data[1]]< https://dipper:1443/Docker:1514
             setInterval handler* https://dipper:1443/Docker:1514
             emit https://dipper:1443/webGui/javascript/dynamix.js?v=1680052794:15
             onmessage https://dipper:1443/webGui/javascript/dynamix.js?v=1680052794:15
             listen https://dipper:1443/webGui/javascript/dynamix.js?v=1680052794:15
             start https://dipper:1443/webGui/javascript/dynamix.js?v=1680052794:15
             openDocker https://dipper:1443/Docker:252
             updateContainer https://dipper:1443/plugins/dynamix.docker.manager/javascript/docker.js?v=1678007099:108
             i https://dipper:1443/webGui/javascript/dynamix.js?v=1680052794:51
             h https://dipper:1443/webGui/javascript/dynamix.js?v=1680052794:51
             t https://dipper:1443/webGui/javascript/dynamix.js?v=1680052794:51
             u https://dipper:1443/webGui/javascript/dynamix.js?v=1680052794:51
             updateContainer https://dipper:1443/plugins/dynamix.docker.manager/javascript/docker.js?v=1678007099:105
             action https://dipper:1443/plugins/dynamix.docker.manager/javascript/docker.js?v=1678007099:12
             dispatch https://dipper:1443/webGui/javascript/dynamix.js?v=1680052794:5
             handle https://dipper:1443/webGui/javascript/dynamix.js?v=1680052794:5
             Docker:1514:59

         Uncaught TypeError: progress_span[data[1]] is null
             <anonymous> https://dipper:1443/Docker:1518
             emit https://dipper:1443/webGui/javascript/dynamix.js?v=1680052794:15
             onmessage https://dipper:1443/webGui/javascript/dynamix.js?v=1680052794:15
             listen https://dipper:1443/webGui/javascript/dynamix.js?v=1680052794:15
             start https://dipper:1443/webGui/javascript/dynamix.js?v=1680052794:15
             openDocker https://dipper:1443/Docker:252
             updateContainer https://dipper:1443/plugins/dynamix.docker.manager/javascript/docker.js?v=1678007099:108
             i https://dipper:1443/webGui/javascript/dynamix.js?v=1680052794:51
             h https://dipper:1443/webGui/javascript/dynamix.js?v=1680052794:51
             t https://dipper:1443/webGui/javascript/dynamix.js?v=1680052794:51
             u https://dipper:1443/webGui/javascript/dynamix.js?v=1680052794:51
             updateContainer https://dipper:1443/plugins/dynamix.docker.manager/javascript/docker.js?v=1678007099:105
             action https://dipper:1443/plugins/dynamix.docker.manager/javascript/docker.js?v=1678007099:12
             dispatch https://dipper:1443/webGui/javascript/dynamix.js?v=1680052794:5
             handle https://dipper:1443/webGui/javascript/dynamix.js?v=1680052794:5
             Docker:1518:5

      Could this be related?
  12. Checking devtools, I see the webUI tries to establish a WebSocket connection. A GET request is sent to wss://dipper:1443/sub/docker (the 1443 port is because I have the webUI running on port 1443), and the response is 101 (Switching Protocols). The response tab shows that data is properly flowing in, suggesting that the connection has succeeded? I have no idea why it won't update the window, then. Any ideas on what is failing here? How does the webUI communicate status updates from the Docker service when updating a container?
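Not from the post, just a sanity check one could run to take the browser out of the picture: the same WebSocket upgrade handshake can be exercised from a shell, and a healthy endpoint should answer with the same 101. Host, port and endpoint are as in the post above; `-k` is only because the GUI certificate is assumed to be self-signed here.

```bash
# Exercise the WebSocket upgrade handshake against the /sub/docker
# endpoint; expect "HTTP/1.1 101 Switching Protocols" back.
# Ctrl-C once the headers (and any raw frames) start appearing.
curl -k --http1.1 --include --no-buffer \
  -H "Connection: Upgrade" \
  -H "Upgrade: websocket" \
  -H "Sec-WebSocket-Version: 13" \
  -H "Sec-WebSocket-Key: $(head -c 16 /dev/urandom | base64)" \
  https://dipper:1443/sub/docker
```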
  13. I'm not sure exactly when this bug appeared, but I'm running 6.12.3 and the Docker update screen does not update to show the image being pulled down to the server: trim-start.mov The screen stays like that until the stop-and-restart part of the update kicks in: end.mov Anybody else run into a similar problem? Diagnostics attached. dipper-diagnostics-20230830-1532.zip
  14. Users should be able to access the server over the built-in SSH/SFTP server. Currently, `/etc/passwd` is set up so that shell access is disabled for regular users and no home directory is assigned; only the root user can use the built-in SSH/SFTP server. An option on the user settings page to enable shell access would let users modify and create files over SSH/SFTP, which is much preferable to using the root account and manually checking afterwards that the correct permissions are applied. Plus, SFTP is much more reliable over VPN than SMB.
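To illustrate what such an option might do under the hood (a sketch using stock OpenSSH mechanisms, not how Limetech would necessarily implement it): give the user a real shell and home directory, or confine the account to SFTP only with a `Match` block. The user name, group and paths below are made up, and manual edits like these would not survive an Unraid reboot anyway, which is part of why a built-in option is needed.

```bash
# Give an existing user a usable shell and home directory (example values).
usermod -s /bin/bash -d /mnt/user/home/alice alice

# Or allow SFTP only (no interactive shell) for members of an "sftpusers"
# group. An sshd ChrootDirectory must be root-owned and not group- or
# world-writable, so it needs a dedicated directory, not the share itself.
cat <<'EOF' >> /etc/ssh/sshd_config
Match Group sftpusers
    ForceCommand internal-sftp
    ChrootDirectory /mnt/sftp/%u
    AllowTcpForwarding no
    X11Forwarding no
EOF

# Reload sshd to pick up the change (Slackware-style init script).
/etc/rc.d/rc.sshd restart
```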