Xoron

Everything posted by Xoron

  1. I resolved my issue by looking deep into the Deluge logs. First, I was getting an error that /config/ssl/daemon.cert could not be read. It had permissions of 600 and was owned by root:root, so I changed the permissions on the file to 666:

     chmod 666 /config/ssl/daemon.*

     Next it was throwing the same error on /config/auth, so I ran the same chmod there:

     chmod 666 /config/auth

     Now I no longer get the error below, and the GUI loads correctly:

     [info] Deluge process started
     [info] Waiting for Deluge process to start listening on port 58846...
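
     A minimal sketch of the same check-and-fix in one go, assuming your appdata is mounted at /config inside the container as in the stock template:

         # Check current ownership/permissions on the files Deluge complained about
         ls -l /config/ssl/daemon.* /config/auth

         # Loosen them so the deluged process can read them; 666 worked for me,
         # though 644 would be the tighter choice if it works in your setup
         chmod 666 /config/ssl/daemon.* /config/auth
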
  2. I'm having the same Deluge no-WebUI issue since updating to 6.11.5 today. I too am using Mullvad VPN with the docker image, and I'm getting the exact same message in the logs that I've seen mentioned elsewhere:

     [info] Deluge process started
     [info] Waiting for Deluge process to start listening on port 58846...

     What I've tried so far:
     • Force updating the DelugeVPN container
     • Removing the container and reinstalling it
     • Generating a new Mullvad OVPN file
     • Changing the DNS servers around in the container config

     I think my VPN is actually up; it's just the WebUI for Deluge that won't load. I can still access the WebUIs of my other dockers that I route through DelugeVPN (a quick check of the daemon itself is sketched below). Following this thread for a solution.
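
     A quick way to confirm whether the daemon ever binds its RPC port, assuming the container is named binhex-delugevpn (adjust to yours); if netstat isn't in the image, ss -lnt does the same job:

         # Is deluged actually listening on 58846 inside the container?
         docker exec binhex-delugevpn netstat -lnt | grep 58846
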
  3. Thanks to @JorgeB for pointing me to this thread, and to @Cessquill for the good work finding, working on, and resolving this obscure issue. It's plagued me for a few months (and I was away traveling, so I was hoping I wouldn't drop my array while away). Simple change, reboot, all good.
  4. Thanks for pointing me to that thread @JorgeB. I had searched the forums, but didn't stumble on it myself.
  5. I have two IronWolf ST8000VN004 7200RPM drives as my parity drives. One is about 9 months old, the other about 4 months old, and both drives are throwing errors. So far I have:
     • Precleared each drive without any errors
     • Replaced the SATA cable (SFF-8087) that is connected to both drives
     • Run an extended SMART self-test on one of the drives; no errors detected (the smartctl steps are sketched after this post)
     • Looked at the SMART data; there are no apparent SMART values out of whack
     • Reviewed the diagnostics for the drive, but nothing obvious jumps out (attached for review)

     Looking for suggestions on what might be wrong and what to look at next to fix this issue. (If it's a false error, then how to suppress it.)
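
     For anyone repeating these checks, a minimal smartctl sequence (the device name /dev/sdX is a placeholder; substitute your drive):

         # Full SMART attribute dump for the drive
         smartctl -a /dev/sdX

         # Kick off an extended self-test, then check the result once it finishes
         smartctl -t long /dev/sdX
         smartctl -l selftest /dev/sdX
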
  6. I'm having a similar issue with my 8TB IronWolf 7200RPM drives. Now I see that the read error really isn't an error, BUT it's throwing errors in the GUI and I'm getting alerts that the drive is failing. Other than disabling the alerts (which I'd rather not do), is there a way to have Unraid return the correct value?
  7. I have the same exact problem. My main network is 10.168.10.0/24, which on my Unraid box is on the br0 interface. I also have interfaces for non-internet-accessible VLANs (br2, br2.200). Whenever I reboot, Unraid changes my default route to the 10.168.200.1 IP address on the br2.200 VLAN interface. I can add a default route to the 10.168.10.1 address using the GUI, but I can't remove the default one on br2.200; I have to SSH in and run:

     root@Nas:~# route del -net 0.0.0.0 gw 10.168.200.1

     This happens on every reboot, with no way to make the default route I set (via 10.168.10.1) stick after a reboot. (A workaround sketch follows this post.)
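
     Until the root cause is fixed, one workaround is to re-run that fix automatically at boot, e.g. from /boot/config/go. A minimal sketch, assuming the same gateways and interface names as my setup (adjust to yours):

         # Drop the default route Unraid put on the DMZ VLAN (ignore if absent)...
         ip route del default via 10.168.200.1 dev br2.200 2>/dev/null

         # ...and install the one that should be there, on the main LAN bridge
         ip route replace default via 10.168.10.1 dev br0
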
  8. Well, I've figured out the issue, and it was network related. Somehow, on a recent reboot of my Unraid server, the default gateway got set to my DMZ interface's default gateway. Of course, my DMZ has much tighter rules and didn't allow outbound traffic to the internet on ports 1194/1195/1300, so the tunnel would never come up. Changed the default route to the correct network segment and, boom, the tunnel came up. @Binhex, thanks for pointing me at it being a networking issue.
  9. @Binhex thanks for the quick reply. I've downloaded the config files from Mullvad multiple times, and the same servers are in the config file but with different ports each time (I've seen 1195, 1197, 1300), so this leads me to believe that Mullvad has multiple ports open. Again, I've used the identical config files in my PC's OpenVPN client and connected without issue, so I think the OpenVPN file is correct. That said, I did manually modify the config file to use port 1194, but got the same results. It's possible that this is some sort of Unraid networking / firewall issue. I've got the DelugeVPN docker set up in bridged mode (which, from what I've read, looks to be the preferred choice). Does this mean that the traffic from Deluge hitting my firewall would have the same IP as my Unraid server? (I want to see if something strange is happening on the firewall side; a quick way to check is sketched below.)
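
     One way to see exactly what the firewall sees, assuming tcpdump is available on your Unraid build. In bridge mode the container's outbound traffic is NATed through the host, so the source address should be the Unraid server's own LAN IP:

         # Watch outbound OpenVPN traffic on the LAN bridge (UDP 1194 here)
         tcpdump -ni br0 udp port 1194
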
  10. First: @binhex, thanks for all the work you put into your dockers and the support you offer on this forum.

      Forum users, I'm looking for help with a VPN issue. I had DelugeVPN set up and working for months with my VPN provider (Mullvad) using OpenVPN, and it's been working great up until now. But since a recent server shutdown, I can't get the VPN tunnel to come up in the docker, and I can't access the Deluge GUI. I'm at a loss as to what the issue is. What I've done so far (with no success):
      • When I set the container variable VPN_ENABLED to no, I can then access the GUI on port 8112.
      • Downloaded and implemented a new OpenVPN config file from Mullvad.
      • Confirmed the OpenVPN config file by using the exact same file in the OpenVPN client on my PC, so I know the file is structured correctly. My PC and Unraid server are also on the same network segment, with the same firewall rules applied to both.
      • Tried Mullvad VPN servers in another region.
      • Pulled an older version of the docker to see if the OpenVPN 2.5.5 client was the issue (binhex/arch-delugevpn:2.0.4-2-01).
      • Ran ifconfig from inside the docker; I don't see tun0 up (which I think is the correct interface for the VPN tunnel; a host-side version of this check is sketched after this post).
      • Completely wiped the docker, cleaned up the files in appdata, and pulled the full DelugeVPN again.
      • Stopped and restarted the docker services on my Unraid server.
      • Looked over supervisord.log so many times; I don't see any errors or anything that could explain why the tunnel isn't coming up (see the attached supervisord.log file).

      What am I missing / what else can I look at to see why this isn't working?

      supervisord.log
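
      To run that tun0 check from the host, assuming the container is named binhex-delugevpn:

          # If the tunnel is up, tun0 exists and carries an address from the VPN
          docker exec binhex-delugevpn ip addr show tun0

          # Watch the container log live for OpenVPN/tunnel messages
          docker logs -f binhex-delugevpn 2>&1 | grep -iE 'openvpn|tun0'
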
  11. I've been running the Cloudberry docker, backing up to a Backblaze bucket, for a while now, but I am missing files and folders from my B2 bucket. I purchased the Linux license of CB, and I'm only trying to back up about 600G of data (well under the limit of the license I have). I've had a look at the settings in CB, and everything looks like it's configured correctly. Below is the retention policy of the backup job, which overrides the global settings:
      • NOT configured to delete any files automatically
      • Keep 3 versions of each file
      • Delete files that have been deleted locally after 30 days
      • My B2 bucket is configured to keep all versions of the file (i.e. allow CB to manage it)

      Yet, with these settings, I'm still not seeing all of my folders backed up. Is this a bad config, or is CB just that unreliable? (A quick way to diff local folders against the bucket is sketched below.)
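
      One way to pin down exactly which files are missing, assuming you have the Backblaze b2 CLI installed and authorized (the bucket name and local path here are placeholders):

          # Everything currently in the bucket, as sorted object keys
          b2 ls --recursive my-backup-bucket | sort > bucket.txt

          # Everything the job should be covering locally, as relative paths
          find /mnt/user/data -type f -printf '%P\n' | sort > local.txt

          # Lines present locally but absent from the bucket; note CloudBerry may
          # prefix object keys (e.g. with the machine name), so strip that first
          comm -23 local.txt bucket.txt
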