nasbox


  1. If you think your write speeds are slow because you're downloading to the array (i.e. an IO bottleneck), check whether this is the case by changing the download/incomplete and download/complete folders to RAM. You can do this by mapping those folders to either /dev/shm (which gives you access to 50% of RAM) or /tmp (all of RAM). In doing so, make sure you have ample RAM to download your test file without maxing out RAM (that would be bad). If the download then runs at max speed, it's likely an IO issue; I used this method to prove my slow write speeds were IO. (A compose sketch of this mapping, and of the DNS change below, follows these posts.)

     I never download to the array; for me it's full of HDDs, which are much slower than an SSD, and if you have a parity drive you're just wasting IO. Set up a cache drive with an SSD and download to that. I've recently moved to a Samsung EVO, which works nicely for me.

     Added point: if you're using the Binhex VPN containers, I have found in the past that the DNS name servers I was using were very slow, and slow DNS servers resulted in slow download speeds. Because the DNS servers were only intermittently slow, the slow download speeds were also intermittent, which made the problem hard to diagnose. I changed the standard name servers to PIA's DNS servers (I use PIA as my VPN). If you don't use PIA, you could try Cloudflare's or Google's DNS. Good luck.
  2. I've recently started using Compose Manager to orchestrate my containers in Unraid. I used docker-compose for a long time on old servers, but moved away from it when I started using Unraid because I preferred the GUI. I have recently shifted back to compose as my container stack is getting complicated and, frankly, it's easier to manage complicated stacks with docker compose. Compose Manager has some quirks I've come across, and I wasn't able to easily find answers when researching them. In case it helps anyone else, I thought I'd share some of them below (a combined example pulling these fragments together follows these posts).

     1) WebUI link & icon: It is possible to add the WebUI link and icon automatically from a docker-compose.yml, which saves you having to add them manually via the UI Labels section. Just add the following labels to your docker-compose.yml:

     ```yaml
     labels:
       # Unraid Labels #
       net.unraid.docker.managed: "composeman"
       net.unraid.docker.icon: "https://raw.githubusercontent.com/binhex/docker-templates/master/binhex/images/deluge-icon.png"
       net.unraid.docker.webui: "http://[IP]:[PORT:8112]/"
       net.unraid.docker.shell: "shell"
     ```

     I haven't managed to get the Console working (it automatically closes for me); let me know if anyone has a solution for this.

     2) Creating a new custom Docker network with Compose, and naming it as you wish: You can create custom Docker networks from compose.yml files with Compose Manager. To do so, just add "external: false" to the network. If you do this, you will notice that the network name ends up being "[STACK_NAME]_[THE NAME YOU CHOSE]". You can change this behaviour by adding a "name: XXX" variable in your compose file:

     ```yaml
     networks:
       nassy:
         driver: bridge
         external: false
         name: nassy
     ```

     This makes the network name just [THE NAME YOU CHOSE].

     3) Custom Docker networks: the Unraid UI showing the network ID instead of the network name: When using a custom Docker network that has already been created (i.e. external: true), if you define the network and attach the container to it with "networks: - nassy", as shown below:

     ```yaml
     networks:
       nassy:
         external: true
         name: nassy

     services:
       # ----- Core Apps ----- #
       ## watchtower ##
       watchtower:
         image: containrrr/watchtower:latest
         container_name: watchtower
         networks:
           - nassy
     ```

     you will find that Unraid's Docker UI shows the network ID rather than the network name. This is not the case if a) you define the network as "external: false" (having Compose create the new network, even if it already exists), or b) you just use "network_mode: XXXX" in docker compose:

     ```yaml
     services:
       # ----- Core Apps ----- #
       ## watchtower ##
       watchtower:
         image: containrrr/watchtower:latest
         container_name: watchtower
         network_mode: "nassy"
     ```

     Under both a) and b) you see the correct name in the web UI. I am not sure whether this is a Compose Manager or Unraid bug, but when inspecting the containers from the command line, with "external: true" the container's "NetworkMode" label is the network ID, while under both a) and b) described above "NetworkMode" is the network name.

     That is all for now; I may update this as I come up with more. I hope this helps someone.
  3. Thanks for the response. I took your recommendation and bought a Samsung 870 EVO 500GB SSD for my downloads, and it works perfectly; I didn't realise how different SSDs could be. I also reconsidered how SABnzbd downloads: I've set the incomplete directory to the new EVO SSD to minimise IO, and the complete directory to the cache pool (where movies go). This minimises IO while allowing Sonarr/Radarr to use hardlinks when moving files to the media library (a sketch of this layout follows these posts). Thanks again for your help and the SSD recommendation; all works perfectly now.
  4. Thanks for the prompt response. Overnight, I decided to test whether FUSE was causing the issue: I remapped /mnt/user/downloads to /mnt/cache_patriot/downloads. No joy, so it's not FUSE related. After reading your response I remapped downloads and unpacking (.../usenet/incomplete and .../usenet/complete) to RAM (via Unraid's /tmp/... folder). My logic: I wanted to identify whether it was a CPU problem (which I doubted) or an IO issue (which, admittedly, I also doubted). Problem solved! Downloading, unpacking and/or repairing in RAM resolved the yo-yo problem. (Both remaps are sketched after these posts.)

     I have to admit I was surprised. My current cache pool arrangement is:

     - Cache_nvme: 2x NVMe in RAID1, for Docker, VMs, etc.
     - Cache_patriot: 1x cheap Patriot SSD, for downloads, transcoding and anything with high IO
     - Cache_ssd: 2x SSD in RAID1, general cache for data requiring redundancy

     The yo-yo occurred on both the Patriot and the RAID1 SSD cache pools. It didn't occur straight away: for somewhere between 20 minutes and ~1 hour of sustained 88 MB/s downloading, SABnzbd was fine downloading and unpacking files. Then it would start at random and recur regularly, not going away until a restart, after which the cycle would begin again. This pattern of behaviour made me think it was a strange i) Unraid issue, ii) SABnzbd issue, and/or iii) a combination of both. This thinking was compounded by the problem having only started in the last ~2-3 months (it ran fine on the Cache_ssd pool for a long time when I first set up Unraid).

     I'd love to continue downloading to RAM, but my server lives in a Jonsbo N3, which requires an mITX motherboard, so RAM maxes out at 64GB for now. That isn't sufficient for large NZB files. I have moved my downloads folder to the NVMe cache pool, which works fine (just like RAM). This is probably not an ideal long-term solution, as the high IO is wearing the two NVMe drives; the whole point of the Patriot SSD was to avoid this. I may have to find a good SSD which can deal with the IO.

     Thanks for your help. Excuse the length of this post; I wanted to share in the hope it helps others. This was driving me crazy! PS. If anyone can recommend an SSD which works well with SABnzbd, please shout out.
  5. Firstly, Binhex, I've been a long-time user of your docker repositories. Just awesome, thank you. For what it's worth, I too have been seeing unusual activity recently with sabnzbdvpn: after a few downloads my unpacks become inexplicably slow, which coincides with downloads becoming slow, i.e. download speeds start 'yo-yoing'. Thinking this might be related to par2cmdline-turbo, I tried Binhex's updated docker image tagged 'test', which includes par2cmdline-turbo. To be fair, normal unpack speeds do seem quicker, so it seems to work. But I am still getting 'yo-yo' download speeds that coincide with strangely slow unpacks, and they occur after a bit of sustained downloading.

     At first I thought it was my ISP or VPN (PIA), but that wouldn't explain the slow unpacking. This was tested using both 1) a RAID1 BTRFS 2x SSD cache and 2) an XFS 1x SSD cache, so it doesn't seem to be related to storage read/write speeds. I've also raised direct_unpack_threads from the default 3 to 6 to give SABnzbd a bit more grunt. No joy (I'm running a 13500 with 20 threads and 32GB of RAM, which should be plenty).

     As an example of the yo-yoing: speeds go from a locked 88 MB/s (where I cap them; I have a 1 Gbps connection) and start bouncing around, and this behaviour coincides with an unpack process slowing down. Pausing the downloads, letting the unpacks finish and restarting fixes the issue, albeit temporarily; after a while the yo-yoing starts again. Interestingly, the time it takes for the yo-yoing to start varies: downloads can hold constant at 88 MB/s for a while until the strange unpacking/download-speed behaviour starts again.

     I thought I'd share my experience as this seems to be impacting more and more people. My motherboard has a Dragon 2.5G LAN and an Intel® Gigabit LAN, and the same behaviour occurs on both. Hope this post helps.

     EDIT1: I should say I'm new to Unraid (January of this year), but have been using Binhex's repositories on Synology NAS systems for years. Also, Unraid team, you are amazing. An absolutely great platform; thanks to you too!
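
The RAM test described in posts 1 and 4 comes down to overriding the download folders in the container's volume mappings. A minimal sketch, assuming a Binhex-style SABnzbd container; the host folders under /tmp are hypothetical names, and the container-side paths only illustrate the .../usenet/incomplete and .../usenet/complete layout mentioned in post 4:

```yaml
services:
  sabnzbdvpn:
    image: binhex/arch-sabnzbdvpn:latest
    container_name: sabnzbdvpn
    volumes:
      # RAM-backed host paths: /tmp sits on Unraid's root ramdisk,
      # /dev/shm is tmpfs capped at roughly 50% of RAM.
      - /tmp/usenet/incomplete:/data/usenet/incomplete
      - /tmp/usenet/complete:/data/usenet/complete
```

If the same NZB now downloads at full speed, the bottleneck is IO rather than CPU or the connection; leave enough free RAM for the whole test file.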
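The DNS change from post 1 maps onto the NAME_SERVERS environment variable that the Binhex VPN images accept. A sketch, using Cloudflare and Google resolvers as stand-ins for PIA's servers (whose addresses the post does not list):

```yaml
services:
  sabnzbdvpn:
    image: binhex/arch-sabnzbdvpn:latest
    environment:
      # Comma-separated resolvers used inside the container; swap in your
      # VPN provider's own DNS servers if it publishes them.
      - NAME_SERVERS=1.1.1.1,8.8.8.8
```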
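Pulling post 2's fragments together, a self-contained stack that sets the Unraid icon/WebUI labels and creates a named bridge network might look like the sketch below; the icon URL and port come from the post, while the stack layout around them is only illustrative:

```yaml
networks:
  nassy:
    driver: bridge
    external: false
    name: nassy        # keeps the name from being prefixed with the stack name

services:
  watchtower:
    image: containrrr/watchtower:latest
    container_name: watchtower
    networks:
      - nassy
    labels:
      # Unraid Docker page metadata picked up by Compose Manager
      net.unraid.docker.managed: "composeman"
      net.unraid.docker.icon: "https://raw.githubusercontent.com/binhex/docker-templates/master/binhex/images/deluge-icon.png"
      net.unraid.docker.webui: "http://[IP]:[PORT:8112]/"
      net.unraid.docker.shell: "shell"
```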
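Post 3's hardlink point relies on the completed-downloads folder and the media library living on the same pool and being mapped through one shared host path in both containers, so that an import is a rename rather than a copy. A sketch with hypothetical host paths (the pool and folder names are not taken from the post):

```yaml
services:
  sabnzbd:
    image: binhex/arch-sabnzbdvpn:latest
    volumes:
      - /mnt/evo/incomplete:/incomplete   # dedicated download SSD
      - /mnt/cache/data:/data             # completed downloads under /data/usenet

  radarr:
    image: binhex/arch-radarr:latest
    volumes:
      - /mnt/cache/data:/data             # same mapping, so imports into /data/media can hardlink
```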
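Post 4's first test, ruling out FUSE, is just a matter of pointing the container at the pool's direct path instead of the user share. A sketch, with the pool name taken from the post and the container path hypothetical:

```yaml
services:
  sabnzbdvpn:
    image: binhex/arch-sabnzbdvpn:latest
    volumes:
      # /mnt/user/... goes through Unraid's FUSE (shfs) layer;
      # /mnt/cache_patriot/... writes to the pool's filesystem directly.
      - /mnt/cache_patriot/downloads:/data/downloads
```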