gacpac

Everything posted by gacpac

  1. I think I found a solution. Inside the Let's Encrypt conf file for Nextcloud I found:

         location / {
             include /config/nginx/proxy.conf;
             resolver 127.0.0.11 valid=30s;
             set $upstream_nextcloud nextcloud;
             proxy_max_temp_file_size 2048m;
             proxy_pass https://$upstream_nextcloud:443;
         }

     which explains why my files stop at 2GB (2048m is the proxy temp-file limit). I'll give it a try and check again.
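     In case it helps anyone following along: the usual way to lift that cap is to raise proxy_max_temp_file_size or set it to 0, which disables nginx's temp-file limit entirely. A minimal sketch, assuming the same Let's Encrypt/SWAG Nextcloud proxy conf as above (only the one directive changes):

         location / {
             include /config/nginx/proxy.conf;
             resolver 127.0.0.11 valid=30s;
             set $upstream_nextcloud nextcloud;
             # 0 removes the 2048m temp-file cap so large transfers are no longer cut off
             proxy_max_temp_file_size 0;
             proxy_pass https://$upstream_nextcloud:443;
         }

     The change only takes effect once nginx reloads its config; restarting the Let's Encrypt container is the simplest way to do that.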
  2. Honestly, for whatever reason my drive wasn't getting unmounted. I uninstalled the Unassigned Devices plugin and installed the Open Files plugin. I was able to unmount it after that, and then put Unassigned Devices back.
  3. @dorgan Hey, I restarted my server today and now I see this. Is this how it's supposed to work? It looks like it only keeps information for the currently running session, and it's lost after a restart.
  4. I know about the VPN option, I use it as well, but if my array goes down, my VPN goes with it. My security setup for Unraid is as follows: I'm forwarding an obscure external port to port 443 and accessing Unraid over SSL with a certificate from Let's Encrypt. I also set a login password, and I'm using DDNS because I don't have a static IP.
  5. Do I need to use the companion plugin to use the app? I don't think so, but just to confirm. I forwarded a port through my router to access the server from outside the network.
  6. @dorgan I also noticed something: does this plugin save bandwidth information from previous months? I can only see information for the current month.
  7. Hey, I have a question. Currently I use the plugin and it tells me my usage, but I have a problem where I need to know how much is INTERNET traffic versus INTERNAL traffic. I think your plugin reports both combined as one figure, right?
  8. I like Krusader, though I don't use it often. But is this normal? I already emptied the trash, deleted the app, and reinstalled it.
  9. Your Plex works fine outside the network. BTW, please edit that picture because everybody can see your IP. You're exposing yourself, be careful.
  10. Thank you so much @Djoss , I'll keep the docker installed and check the forum for the latest updates. I assume the process will get easier over time.
  11. I currently use 3 ways: 1. A redirect using Let's Encrypt or Nginx Proxy Manager. 2. A VM inside Unraid, connecting through RealVNC or TeamViewer. 3. A VPN with the OpenVPN-AS docker. The downside of all of these, and what I struggle with, is that I can't access the server if the array is offline, which is something I want to be able to do.
  12. Of course, I get what you mean: if a bigger file shows up, it goes to the array. Here's what my initial problem was. I was downloading 100GB worth of files, each about 600MB, and the cache's minimum free space was set to 2GB. The cache filled up and everything broke, so my solution was to change it to 500MB, and that somehow worked. But then I was using Time Machine with the cache for added performance, and each of those files is 8MB, so the cache filled up again and broke everything, and I had to change the setting again. Every now and then I copy files to the share, like VM backups, and the cache fills up whenever the files are smaller than the value I set. Mover works fine, but my cache SSD is 128GB, which is fine for most people, even for me; still, when I'm doing multiple things at once it tends to fill pretty quickly and I have to run the mover manually. I'm okay with that because I'm working on the server anyway, but I also download shows or OS ISOs, which vary a lot in size. For example, my anime episodes are about 100MB each. If the setting is 2GB and the size of what I'm downloading is nowhere near that limit, the spillover to the array never kicks in. My solution for now is to keep downloads outside the cache and the array and have Sonarr or Radarr move them; additionally, I created one path for some downloads or operations that goes to the cache and another location that saves to the array. What would be nice is a threshold, meaning if the cache drive is 90 or 95% full, writes go to the array. Hey, maybe I have a wrong setting; I can send a screenshot of what I have. At first I sent the logs, and they quickly figured out it was the cache filling up too quickly.
  13. "The largest file I will ever write." So look, the last few weeks I was using it with Time Machine to complete the first backup, and I had to set the value to 8MB so that when the cache gets full it sends the remaining files to the array. Before that, the same thing happened with a TV series I was downloading: the biggest file was 600MB and some were 300MB, and again I had to set it to 300MB so it sends the files to the array. To me, doing this is a hassle; this is something I want to set and forget. Honestly, the best option that works for me is to keep my downloads outside the array and then let Sonarr or Radarr move them to the array. For files that I want to download to the cache, like programs, I set a different download path that sends them to the cache, and at night the mover runs. Do you get what I mean?
  14. Understood. Now, depending on the use case, file sizes can change at any time. I have it set up for the user shares, but for my cache it varies a lot.
  15. I like the UI and how you can make changes. The web app seems easy, but if I have to apply all my customizations again, then there's no point.
  16. That's what I'm planning right now. I'll get a USB HDD and set the downloads there, then let Sonarr move them to the array. I'll also create another location within the torrent client to download straight to the array when I need it. Currently I'm using the cache for my apps and VMs only. Honestly, it's kind of a bummer that it doesn't route to the next empty location when the disk fills up, and I don't mind much that it fills. What drives me crazy is that the dockers crash and I have to run the mover and then restart the server. If the developers can look into a possible fix for future builds, it would be amazing. But for now, I'll edit the post and mark it as solved.
  17. Hey, I'm a little bit excited about this new app. I might migrate from Let's Encrypt to this one, but I need some help setting up the proxy host. Is there a guide somewhere here or on GitHub?
  18. Honestly, I started using the DuckDNS docker container.
  19. @trurl I see. The only thing is that I might lose performance if I need to manipulate the file. If it's a movie I don't care, but sometimes I download files and keep working on them from the cache, like converting with HandBrake or other work tasks.
  20. Hi, I noticed the cache still fills up and the dockers stop working. This defeats the purpose of spilling over to the array when the cache is getting full. I had the setting to run the mover at 95% and had to take it out, and I also changed the mover schedule to run every 8 hours. I'm sure there must be a better way. I had to install the unBALANCE plugin to manually rsync folders from the cache to the array (a sketch of that manual step is below). Can somebody send me a screenshot of their tuning, or at least give me an idea?
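     For reference, unBALANCE essentially runs rsync under the hood; a minimal sketch of doing the same move by hand, assuming a share named downloads and that disk1 has free space (the paths are only examples, not my actual share names):

         # Move a share's folder from the cache pool to a specific array disk,
         # deleting each source file once it has transferred successfully.
         rsync -avh --progress --remove-source-files /mnt/cache/downloads/ /mnt/disk1/downloads/
         # Remove any now-empty directories left behind on the cache
         find /mnt/cache/downloads -type d -empty -delete

     The important part is to keep the same share name on both sides and never mix /mnt/user with /mnt/cache or /mnt/diskX paths in the same command, otherwise files can be lost.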
  21. @clowrym Thanks for that. I was actually able to make it see my other networks. It still doesn't make sense to me, because I keep using the server IP plus the port, but I had to manually add the virtual network and the VPN network there to access the torrent client. Now, something that really bugs me and I don't know why: currently I have the OpenVPN-AS docker container set up with UDP on port 9443, forwarded through my router, and it works perfectly. But when the Transmission docker is running, my OpenVPN doesn't connect over UDP, so I currently have it running on TCP. If Transmission for PIA uses OpenVPN port 1198, why is it blocking the connection for my own VPN?