strike

Everything posted by strike

  1. Before you lose any more files, ssh into your server or open a terminal in the webui and run this command on all your disks: find /mnt/disk1/ -type f -exec chattr +i "{}" \; Now all your files can not be deleted, renamed or edited. When you have figured out what is causing the issue and how to solve it, you can run this command on all your disks to "unlock" your files again: find /mnt/disk1/ -type f -exec chattr -i "{}" \; To figure out what is causing it, start one container at a time and watch the logs. Maybe your radarr/sonarr etc. has been hacked. You're probably reverse proxying them with swag, right? Look at the nginx access and error logs. Check every container that has access to your files. And change your password for all containers you're accessing through swag asap.
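     For reference, a minimal sketch of looping over every array disk instead of repeating the command per disk (assumes the standard unraid mount points /mnt/disk1, /mnt/disk2, and so on; adjust the glob if it matches anything else on your system):
       # lock all files on every array disk
       for d in /mnt/disk[0-9]*; do
           find "$d" -type f -exec chattr +i "{}" \;
       done
       # run the same loop with chattr -i to unlock everything again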
  2. You should be able to ssh into your server and enter this command: use_ssl no. Then you can access the server webui with the IP address. Go to Settings -> Management Access and create a new cert. Edit: And of course set the static IP again, forgot about that.
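     Roughly what that looks like in practice (the IP is just a placeholder, use your server's actual address):
       ssh root@192.168.1.10   # placeholder IP
       use_ssl no              # turns SSL off so the webui answers on plain http again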
  3. With VPN off there are no iptables rules in place, but with VPN enabled there are very strict iptables rules in place to prevent leaking.
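     If you want to see that for yourself, open a console in the VPN container and list the active rules (should work the same for any of the binhex VPN containers as far as I know):
       iptables -S   # with the VPN on this prints a long, strict rule set; with it off, next to nothing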
  4. You got it wrong this time too, it should be 192.168.178.0
  5. I use Cathy for this. I found it here on the forum, can't remember who mentioned it tho. Just scroll down a bit on that site and you'll find it. Awesome tool!
  6. If you are referring to the corrupt db issue your safest bet is to back up your appdata and just update. If you run into the issue, restore from backup. The truth is, the longer you wait to update the more likely you are to run into issues. This is because major updates bring database changes and the db needs to be migrated to the latest version. Sometimes this can cause issues, especially when you have not been keeping up with updates. There have been many updates to radarr since this issue, and the longer you wait the higher the risk that those updates include more changes to the db. And because you're now so far behind, the migration of the db has a higher risk of failing. So just get it over with already IMHO. This goes for all software updates btw, keep backups and update regularly to avoid issues in the future. Yes, sometimes updates have issues, but you're gonna have even more issues later on if you don't keep up to date.
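     A minimal sketch of backing up the appdata first (container name and backup path are just assumptions, adjust them to your setup; stop the container first so the db isn't being written to mid-copy):
       docker stop radarr
       rsync -a /mnt/user/appdata/radarr/ /mnt/user/backups/radarr-$(date +%F)/
       docker start radarr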
  7. And now I remembered WHY it matters, just to put that out there as well. It's because unraid does not know how big the file you're going to copy is, it only knows how much space is left on the disk. So if the file is bigger than the space left on the disk, the copy will fail if the minimum free space is not set. If unraid sees that there is less than the minimum free space left it will choose another disk IF the split level permits it. If not, it will continue to fill the disk until it runs out of space or files are manually moved to another disk to free up space. Edit: Paraphrasing, unraid does know how big your files are. But when creating a file unraid does not know how big it's going to be before it's created. And when copying/moving a file you're essentially making a new file, just with the same data.
  8. It might be because of the way rsync is copying directories/files. As I said in my previous post, rsync will create the entire directory structure before copying any files, and thus will most likely try to copy all the files into the already created directories. If you do a normal copy I think you will find the cache setting is working as intended. I haven't tested it though, as I never use the cache feature. I have cache set to no on all my shares except the appdata and VM shares, which are set to only.
  9. Just run the mover and it will move the data already copied from cache to the array, then set cache to no and do the rest of the copy. When finished, set cache to yes if you want. Also be sure to set your split level to split any directory; you can change it after the initial copy if you want. Split level is important because rsync will create all the directories first before it copies any files, so if the split level is set to anything other than split any directory on the initial copy, you will run into the same issue with the disk filling up.
  10. I know, but the other containers were running through the binhex vpn container, thus making it a binhex vpn config issue. But glad the faq helped you out.
  11. Have you read the faq for the binhex vpn containers, specifically Q24? https://github.com/binhex/documentation/blob/master/docker/faq/vpn.md If that doesn't solve your issue you should post in the appropriate support thread for the container. It has nothing to do with the unraid OS. To find the correct support thread right click the container and choose support.
  12. Are you using rclone to mount your gdrive by any chance? If so, then rclone is most likely running as root and screwing up your permissions. Add this to your rclone mount script: --uid 99 --gid 99 Btw, you should have mentioned rclone and gdrive (if I'm right and you're using that) in your posts, as this is vital information and that's why nobody was able to help you. I'm only guessing here because the host path in your screenshot says mergerfs/gdrive...
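     As a sketch, a mount line with those flags added could look like this (the remote name, mount point and the extra flags besides --uid/--gid are assumptions, keep whatever else your script already has):
       rclone mount gdrive: /mnt/user/mount_rclone/gdrive \
           --uid 99 --gid 99 \
           --allow-other \
           --daemon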
  13. You had it almost right in your last post. The port is wrong, should be 8118. So change the port and change the IP back to the 198.... IP
  14. Do this: https://github.com/binhex/documentation/blob/master/docker/faq/help.md Also post your docker run command in your next post.
  15. From your log:
         2022-04-03 10:00:34,126 DEBG 'start-script' stdout output:
         2022-04-03 10:00:34 [UNDEF] Inactivity timeout (--ping-restart), restarting
         2022-04-03 10:00:34,127 DEBG 'start-script' stdout output:
         2022-04-03 10:00:34 SIGHUP[soft,ping-restart] received, process restarting
       See Q17: https://github.com/binhex/documentation/blob/master/docker/faq/vpn.md
  16. That looks good. Wrong volume mappings usually cause the symptoms you're seeing, but your mappings look ok. So it's not that. Do you have free space in your Downloads folder? Can you write to it from inside the delugevpn container? Try to open a terminal in the container and do: fallocate -l 100M /data/completed/test.img That should write a test file in your completed folder. If it fails you have a permission issue.
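     If the test does fail, check who owns the folder from the unraid terminal; shares are normally owned by nobody:users on unraid, so something like this (the host path is an assumption, use your actual downloads share):
       ls -ld /mnt/user/downloads/completed        # check the owner and permissions
       chown -R nobody:users /mnt/user/downloads   # reset ownership to the unraid default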
  17. Post your docker run command and a screenshot of your deluge downloads settings
  18. Remove the port forwarding on your router, it serves no purpose when you're using a vpn and is a security risk. Also request a different port from torguard and set that as the incoming port in deluge. Port 58846 is in use by the deluge daemon and I suspect that's why the active port is showing as 58847, which is the next port number in line.
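     If you want to double check what deluge currently has configured for its incoming port, the settings live in core.conf in the container's config folder (path assumes the usual /config mapping):
       grep -E '"random_port"|"listen_ports"' /config/core.conf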
  19. Post your docker run command or a screenshot of your container template settings. Also post a screenshot of the downloads settings in deluge. Edit: Be sure to redact your username/password.
  20. What IP are you trying to reach and what is your unraid IP? You can not use the unraid IP, you have to use the IP you set up in zerotier. Edit: And have you authorized your device in zerotier?
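     To see which IP zerotier has assigned to your unraid machine you can look in the zerotier admin page, or run this from a terminal on the server (assuming zerotier-cli is available from your zerotier container/plugin):
       zerotier-cli listnetworks   # the assigned ip shows up once the device has been authorized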
  21. Have you tried just rebooting? Do you by any chance have wireguard on unraid enabled and/or a VM running?
  22. This usually happens when you run another docker container with the same port assignments. Check your docker tab to ensure that no other container is using the same ports. Or maybe the easier way is to click edit on the container, scroll down and click "show docker allocations". If there are multiple containers running on the same port they will be marked in red IIRC. If there are any containers using the same port, change ports on one of them. There will be no logs created for the container because the container fails to start, but you will see an error in your unraid syslog I think. There could be several other reasons why you get this error as well, so you will have to post your diagnostics in your next post if you want more help.
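     You can also check from the command line which container has grabbed a given port (the port number here is just an example):
       docker ps --format '{{.Names}}\t{{.Ports}}' | grep 8080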
  23. As I said, you can not use openvpn because you can not port forward due to carrier grade NAT. And you're using openvpn wrong anyway. To reach your unraid machine with openvpn, you need an openvpn server on the unraid machine and a client on the machine you're connecting from. But as I said, just forget about openvpn, you can not use it if you can not port forward on your router. Use zerotier instead. You can use the openvpn server on your VPS as a router in a way, so that clients can connect to each other, but that is an advanced feature I think. If you want to try this, google openvpn client-to-client. But I would just use zerotier instead and connect directly to your unraid machine instead of routing everything through a VPS, seems like too much trouble if you ask me.
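     If you do want to experiment with that anyway, the relevant bit is a single directive in the openvpn server config on the VPS (a sketch only, the rest of your server.conf stays as it is):
       # /etc/openvpn/server.conf
       client-to-client   # lets connected vpn clients reach each other through the server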