Zetas

Everything posted by Zetas

  1. I hope this isn't considered a necropost, but I seem to be having an issue with Unraid connecting to my containers over the br0.5 interface I created. Is this supposed to be blocked? From the Unraid console:

         root@Nexus:~# ping 10.0.1.5
         PING 10.0.1.5 (10.0.1.5) 56(84) bytes of data.
         From 10.0.1.6 icmp_seq=1 Destination Host Unreachable
         From 10.0.1.6 icmp_seq=2 Destination Host Unreachable
         From 10.0.1.6 icmp_seq=3 Destination Host Unreachable
         From 10.0.1.6 icmp_seq=4 Destination Host Unreachable
         From 10.0.1.6 icmp_seq=5 Destination Host Unreachable
         From 10.0.1.6 icmp_seq=6 Destination Host Unreachable
         ^C
         --- 10.0.1.5 ping statistics ---
         7 packets transmitted, 0 received, +6 errors, 100% packet loss, time 6136ms
         pipe 4

     And this is what my routing table looks like:

         root@Nexus:~# ip route
         default via 192.168.1.1 dev br0 proto dhcp src 192.168.1.44 metric 217
         default via 10.0.1.1 dev br0.5 proto dhcp src 10.0.1.6 metric 219
         10.0.1.0/24 dev br0.5 proto dhcp scope link src 10.0.1.6 metric 219
         172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
         172.18.0.0/16 dev br-17bf4a1665ee proto kernel scope link src 172.18.0.1 linkdown
         192.168.1.0/24 dev br0 proto dhcp scope link src 192.168.1.44 metric 217

     Unraid can ping itself (10.0.1.6) and the gateway (10.0.1.1), but not any of the Docker containers on the same br0.5 tagged network. I don't think this is by design; it seems like a configuration issue on my end. I followed all the instructions in the guide, and I'm running a UniFi managed switch, USG, and Cloud Key. Everything else seems to be working: I can access the br0.5 containers from any other device on the network.
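
     (A workaround I'm considering, in case this does turn out to be Docker macvlan host isolation working as designed — Unraid's custom br0.X networks are typically macvlan, and macvlan blocks host-to-container traffic on the same parent interface. This is only a sketch: the interface name shim0 is made up, and it assumes 10.0.1.254 is unused on the VLAN.)

         # Create a host-side macvlan interface on the same parent as br0.5
         ip link add shim0 link br0.5 type macvlan mode bridge
         ip addr add 10.0.1.254/32 dev shim0
         ip link set shim0 up
         # Route the container's address via the shim instead of br0.5 itself
         ip route add 10.0.1.5/32 dev shim0
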
  2. OK, so I should fire off the mount with a sub-script, and I suppose use a different script with a "killall rclone" or something like that? Not as elegant as I was hoping for, but as long as I can make it work, it's fine. I am curious to know why the abort button doesn't actually abort the script; it doesn't seem very useful as it is. If you set a script to "run in the background", it seems reasonable to assume the abort button will cancel that script. Right now it seems to do nothing. A stop script along those lines might look like the sketch below.
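
     (Sketch only: it reuses the mountpoint from my mount script, unmounting before killing so the FUSE mount is released cleanly. "rcloneorig" is the underlying binary name visible in the ps output of my other post.)

         #!/bin/bash
         # Unmount the FUSE mount first so the kernel releases it cleanly
         fusermount -u /mnt/disks/cloud
         # Then kill any rclone processes still hanging around
         killall rclone rcloneorig 2>/dev/null
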
  3. I'm having an issue with User Scripts: when I set a script to run in the background and then abort it, it keeps running. This is my script:

         #!/bin/bash
         #----------------------------------------------------------------------------
         # This script mounts your remote share with the recommended options.        |
         # Just define the remote you wish to mount as well as the local mountpoint. |
         # The script will create a folder at the mountpoint                         |
         #----------------------------------------------------------------------------

         # Local mountpoint
         mntpoint="/mnt/disks/cloud"
         # It's recommended to mount your remote share in /mnt/disks/subfolder -
         # this is the only way to make it accessible to dockers

         # Remote share
         remoteshare="cache:"
         # If you want to share the root of your remote share you have to
         # define it as "remote:" eg. "acd:" or "gdrive:"

         home="/mnt/user/appdata/rclone"
         tmpupload="$home/tmp"
         #----------------------------------------------------------------------------

         mkdir -p $mntpoint
         mkdir -p $tmpupload

         rclone --allow-non-empty --allow-other mount $remoteshare $mntpoint --cache-db-path=$home --cache-tmp-upload-path=$tmpupload --cache-tmp-wait-time=30m -vv

     And after "aborting" the script with the User Scripts button, this is what `ps auxw | grep rclone` shows:

         root  3230  0.0  0.0   9812  2044 pts/0  S+  13:48  0:00 grep rclone
         root 27063  0.0  0.0  11956  2908 ?      SN  12:48  0:00 /bin/bash /tmp/user.scripts/tmpScripts/rclone_mount_plugin/script
         root 27066  0.0  0.0  11960  2908 ?      SN  12:48  0:00 /bin/bash /usr/sbin/rclone --allow-non-empty --allow-other mount cache: /mnt/disks/cloud --cache-db-path=/mnt/user/appdata/rclone --cache-tmp-upload-path=/mnt/user/appdata/rclone/tmp --cache-tmp-wait-time=30m -vv
         root 27068  0.8  0.2 155556 66464 ?      SNl 12:48  0:31 rcloneorig --config /boot/config/plugins/rclone/.rclone.conf --allow-non-empty --allow-other mount cache: /mnt/disks/cloud --cache-db-path=/mnt/user/appdata/rclone --cache-tmp-upload-path=/mnt/user/appdata/rclone/tmp --cache-tmp-wait-time=30m -vv

     The same behavior shows up with my backgrounded plexdrive script: after aborting, it stays running. Am I doing something wrong here?
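
     (One idea I'm experimenting with, in case the abort button only signals the wrapper script and not its children: trap the termination signal in the script itself and clean up the rclone child. Sketch only, under that assumption; it reuses mntpoint from the script above.)

         #!/bin/bash
         mntpoint="/mnt/disks/cloud"

         # Clean up the mount and the rclone child when this script is signalled
         cleanup() {
             fusermount -u "$mntpoint" 2>/dev/null
             kill "$rclone_pid" 2>/dev/null
             exit 0
         }
         trap cleanup SIGTERM SIGINT

         # Start the mount in the background so 'wait' can be interrupted by the trap
         rclone --allow-other mount cache: "$mntpoint" -vv &
         rclone_pid=$!
         wait "$rclone_pid"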