strike

Everything posted by strike

  1. What you're seeing is called transcoding. It happens when your client does not support something in what you're trying to play: the video format, the audio format, or maybe the subtitles. That's where transcoding comes in. It converts whatever format the client doesn't support into a format it does support, so the file can play. This is done by the emby server on the fly, and on a default setup it uses the server's CPU. As you're seeing, it takes quite a lot of CPU power to do this on the fly.

By default emby also transcodes your file "dynamically", which means it will only transcode the first part of the file (10 min maybe) to begin with, to reduce CPU utilization, so the file can start playing. When emby sees you're catching up to that point it starts transcoding again; this way emby always stays a few % ahead so the video doesn't buffer. This dynamic setting is also why you can't skip chapters without causing buffering: remember, emby only transcodes the first part of the file and keeps a few % ahead at a time, so when you skip chapters the video buffers because that part is not transcoded yet.

You can turn off this dynamic setting in the emby server settings under transcoding. When disabled, emby will transcode the whole file from start to finish in one go. So instead of the CPU running at 80% for 3-4 min every now and then, it may now take 15-20 min to transcode the whole file. But the nice thing is that you can skip forward, as long as the transcoding has caught up to that point.

Transcoding can also happen if you're streaming remotely (or locally) and the file has too high a bitrate for your connection, so it has to be transcoded to a lower quality in order to play. That usually only happens when playing from a remote location or on crappy wifi locally.
To avoid transcoding altogether you will have to buy a client that supports most formats, so it can direct play the file instead. You can see in the emby server dashboard why a file is transcoding if you want to troubleshoot. Or, if you have a discrete GPU or an integrated GPU on your CPU, you can use that to set up hardware transcoding in emby and avoid using the CPU. GPUs are much faster than the CPU at this: instead of the CPU taking 15 min to transcode a file, the GPU does it in maybe 1 min.
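The direct play vs transcode decision can be sketched roughly like this. This is only a sketch: the codec names and the client "profile" are made-up examples, not emby's real client profiles.

```shell
#!/bin/sh
# Sketch of the direct-play vs transcode decision described above.
# Codec names and the client codec list are made-up examples.

decide() {
    file_codec=$1        # codec of the file on disk
    client_codecs=$2     # comma-separated list the client can play
    case ",$client_codecs," in
        *",$file_codec,"*) echo "direct play" ;;  # client handles it as-is
        *)                 echo "transcode"   ;;  # server converts on the fly
    esac
}

decide hevc "h264,aac"        # client can't do HEVC -> transcode
decide h264 "h264,hevc,aac"   # supported -> direct play
```

Emby runs the same kind of check separately for video, audio and subtitles, which is why the dashboard tells you exactly which part triggered the transcode.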
  2. The last couple of days privoxy is timing out, any reason why that might happen? The tunnel is up and everything is working, except medusa and radarr are not finding anything because of privoxy timing out. This is the error I get in Medusa as an example: HTTPConnectionPool(host='192.168.2.218', port=8118): Read timed out. (read timeout=30) If I disable the proxy in medusa and radarr they are able to reach the indexer, grab torrents and send them to deluge just fine. There is no error in the supervisord log either that I can see.
  3. I meant the docker run command for delugevpn
  4. I wouldn't know since I don't use wireguard.
  5. That looks good. Are you using open or private trackers? If private, maybe the deluge version you're running is blacklisted? Also search for windscribe in this thread, maybe someone has had the same issue. I certainly remember someone mentioning windscribe in this thread.
  6. Yes, you should be able to use it from any browser. Don't know why it's not working. Maybe post your docker run command or a screenshot of your container settings.
  7. What ip are you trying to connect to?
  8. Post your docker run command and a screenshot of the downloads settings in deluge.
  9. How are you trying to reach the webui? If you're using the IP address to try to reach the webui, you're doing it wrong. Like it says in the FAQ, you have to use localhost, as in http://localhost:9696 to reach prowlarr.
  10. Post your docker run command and a screenshot of the downloads settings in deluge.
  11. What do you mean by this? Nothing will download? Post your docker run command and a screenshot of the download settings in deluge. See Q21: https://github.com/binhex/documentation/blob/master/docker/faq/vpn.md
  12. Sorry for the late reply. This is your issue. Switch to normal bridge network and it will work. Or continue to use the thin client.
  13. Do this: https://github.com/binhex/documentation/blob/master/docker/faq/help.md
  14. Set strict port forwarding to "no". That setting is only for PIA users. I can't remember if it does anything when provider is not set to PIA, but worth a shot. Also can you confirm that your router is at 192.168.10.1 ? Since you had a power outage it's not impossible that your router has "reset" and maybe now has a different IP range.
  15. Before you lose any more files, ssh into your server or open a terminal in the webui and run this command on all your disks: find /mnt/disk1/ -type f -exec chattr +i "{}" \; Now your files cannot be deleted, renamed or edited. When you've figured out what is causing the issue and how to solve it, you can run this command on all your disks to "unlock" your files again: find /mnt/disk1/ -type f -exec chattr -i "{}" \; To figure out what is causing it, start one container at a time and watch the logs. Maybe your radarr/sonarr etc. has been hacked. You're probably reverse proxying them with swag, right? Look at the nginx access and error logs. Check every container that has access to your files. And change your passwords for all containers you're accessing through swag ASAP.
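Looped over every disk, the lock/unlock step above looks roughly like this. It's a dry-run sketch: it only prints the find/chattr command for each disk; to run it for real, drop the printf wrapper and run as root. The two-disk list is an example, use your own disks (e.g. /mnt/disk*).

```shell
#!/bin/sh
# Dry-run sketch of locking/unlocking files on every array disk.
# Prints the command for each disk instead of executing it.

chattr_all() {   # $1 is +i to lock or -i to unlock
    for d in /mnt/disk1 /mnt/disk2; do   # example list; use your real disks
        printf 'find %s/ -type f -exec chattr %s {} \\;\n' "$d" "$1"
    done
}

chattr_all +i   # lock: files can't be deleted, renamed or edited
chattr_all -i   # unlock again once the cause is found
```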
  16. You should be able to ssh into your server and enter this command: use_ssl no Then you can access the server webui with the IP address. Go to settings->management access and create a new cert. Edit: And of course set the static IP again, forgot about that.
  17. With VPN off there are no iptables rules in place, but with VPN enabled there are very strict iptables rules in place to prevent leaking.
  18. You got it wrong this time too, it should be 192.168.178.0
  19. I use Cathy for this. I found it here on the forum, can't remember who mentioned it tho. Just scroll down a bit on that site and you'll find it. Awesome tool!
  20. If you are referring to the corrupt db issue, your safest bet is to back up your appdata and just update. If you run into the issue, restore from backup. The truth is, the longer you wait to update the more likely you are to run into issues. This is because major updates include database changes, and the db needs to be migrated to the latest version. Sometimes this can cause issues, especially when you have not been keeping up with updates. There have been many updates to radarr since this issue, and the longer you wait the higher the risk that those updates include more upgrades to the db. And because you're now so far behind, the migration of the db has a higher risk of failing. So just get it over with already IMHO. This goes for all software updates btw: keep backups and update regularly to avoid issues in the future. Yes, sometimes updates have issues, but you're gonna have even more issues later on if you don't keep up to date.
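The back-up-then-update step can be sketched like this. The paths are demo stand-ins created under a temp dir so the sketch runs anywhere; on unraid you'd point it at something like /mnt/user/appdata/radarr and a backup share, and stop the container before taring.

```shell
#!/bin/sh
# Sketch of "back up your appdata and just update": tar up the app's
# appdata with a dated name before updating, so a failed db migration
# can be rolled back. Paths below are demo stand-ins.
set -e

ROOT=$(mktemp -d)                 # sandbox, stands in for /mnt/user
APPDATA="$ROOT/appdata/radarr"
BACKUPS="$ROOT/backups"
mkdir -p "$APPDATA" "$BACKUPS"
echo "fake db" > "$APPDATA/radarr.db"

# 1) back up appdata with a dated name (stop the container first)
tar -czf "$BACKUPS/radarr-appdata-$(date +%F).tar.gz" \
    -C "$(dirname "$APPDATA")" "$(basename "$APPDATA")"

# 2) update the container here...

# 3) if the migration fails, restore from the tarball and retry later
tar -xzf "$BACKUPS/radarr-appdata-$(date +%F).tar.gz" -C "$(dirname "$APPDATA")"
```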
  21. And now I remembered WHY it matters, just to put that out there as well. It's because unraid does not know how big the file you're going to copy is; it only knows how much space is left on the disk. So if the file is bigger than the space left on the disk, the copy will fail if minimum free space is not set. If unraid sees that there is less than the minimum free space left, it will choose another disk IF the split level permits it. If not, it will continue to fill the disk until it runs out of space or files are manually moved to another disk to free up space. Edit: Paraphrasing, unraid does know how big your files are. But when creating a file unraid does not know how big it's going to be before it's created. And when copying/moving a file you're essentially making a new file, just with the same data.
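A minimal sketch of that allocation rule, with made-up numbers. Since unraid can't know how big a new file will end up, it compares the disk's remaining space against the minimum free space setting instead of against the file; the real logic also factors in split level and allocation method.

```shell
#!/bin/sh
# Sketch of the minimum-free-space rule described above.
# Sizes are made-up example numbers in GiB.

pick() {
    free=$1       # space left on the disk
    min_free=$2   # minimum free space setting for the share
    if [ "$free" -lt "$min_free" ]; then
        echo "skip disk"   # below the threshold, try the next disk
    else
        echo "use disk"    # still enough headroom, write here
    fi
}

pick 50 100    # 50 GiB left, min free 100 GiB -> skip disk
pick 500 100   # plenty left -> use disk
```

This is why the usual advice is to set minimum free space to at least the size of the largest file you expect to copy.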
  22. It might be because of the way rsync copies directories/files. As I said in my previous post, rsync will create the entire directory structure before copying any files, and thus will most likely try to copy all the files into the already-created directories. If you do a normal copy I think you will find the cache setting works as intended. I haven't tested it tho, as I never use the cache feature. I have cache set to no on all my shares except the appdata and VM shares, which are set to only.
  23. Just run the mover and it will move the data already copied from cache to the array, then set cache to no and do the rest of the copy. When finished, set cache to yes if you want. Also be sure to set your split level to split any directory; you can change it after the initial copy if you want. Split level is important because rsync will create all the directories first, before it copies any files, so if the split level is set to anything other than split any directory on the initial copy, you will run into the same issue with the disk filling up.