Dimtar

Everything posted by Dimtar

  1. It’s a major new version running on an entirely different language; I think this is the right approach. On another note, the container is great. Thanks team.
  2. @johnnie.black Just wanted to report back: I added an additional 8GB of memory to the system. It's only been 24 hours, but I have had no load problems and the memory load in the dashboard hasn't gone above 27%. Thanks a bunch for your help, I learned a lot during this whole process.
  3. Here is a very rough example: rclone mount --cache-db-path /mnt/user/rclonedb --cache-chunk-path /mnt/user/rclonechunk googlecache: /mnt/user/googlemount You can just set it all inline; the above example would only work if you created the shares beforehand and, hopefully, put them on a cache drive. Hope this helps somewhat.
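A minimal sketch of that same mount, split over lines for readability. The paths and the remote name (`googlecache:`) come from the post above and are assumed to already exist as shares and as a configured cache remote:

```shell
# Same command as in the post, one flag per line. The three shares and the
# "googlecache:" remote are assumed to have been created beforehand.
rclone mount \
  --cache-db-path /mnt/user/rclonedb \
  --cache-chunk-path /mnt/user/rclonechunk \
  googlecache: /mnt/user/googlemount
```

Keeping the rclonedb and rclonechunk shares on a cache drive keeps the chunk I/O off the array.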
  4. @Aric There is a command with rclone to set the cache DB location, have you tried that?
  5. Why am I not getting email notifications of your replies? This comment is not directed at you, just thinking out loud. Thanks for the tip; I am pretty sure it's the NZBGet docker, as it's the only container doing anything each day. I also installed iotop to help me. Thanks for your help, I appreciate it.
  6. Load is currently 67, attached is the diagnostics dump.
  7. It's no longer under load, but I'll provide one next time.
  8. Hi all. This problem wasn't happening, but then it started randomly and it's happening more and more. So right now, my server is near unusable; it's currently sitting at a load of 60 (please see screenshot). The server isn't doing anything. The load will at some stage come down and the server will be usable again, but I am trying to work out what exactly is causing this. If I open netdata during this time, it shows iowait as the biggest issue, but I am not sure what is causing that. This is the result of "free -hm"; am I running out of RAM and it's swapping to drives?

     root@ASTRO:~# free -hm
                   total   used   free   shared   buff/cache   available
     Mem:           3.6G   2.9G   190M     473M         558M         14M
     Swap:            0B     0B     0B

     Right now kswapd0 and unraidd are each sitting at around 10%; is that it swapping to the disk? I don't have a cache drive and the server is essentially idle right now. Any ideas on stuff to look at?
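The free output above shows only 14M available, which suggests memory pressure rather than true swapping (no swap is configured). A minimal sketch for checking this from the console, assuming a Linux /proc/meminfo and an illustrative 256 MiB threshold:

```shell
# Read MemAvailable (in kB) from /proc/meminfo. When it stays near zero,
# kswapd0 works constantly to reclaim page cache, which shows up as iowait.
avail_kb=$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo)
if [ "$avail_kb" -lt 262144 ]; then   # below ~256 MiB (illustrative threshold)
  echo "low memory: ${avail_kb} kB available"
else
  echo "memory ok: ${avail_kb} kB available"
fi
```

This matches the eventual fix in the thread: adding RAM made the load problems stop.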
  9. Has anyone installed an RSS plugin? I tried the one from the Deluge site but neither of the python versions installed.
  10. Could you try backing up somewhere else, like B2? Just as a test; don't do the full backup. Just to work out if it's Google Drive or something else.
  11. The docker is now available in CA, just an FYI. It isn't loading for me, though; this just keeps repeating in the log: e":"No such container: f08423aa0af6"}
  12. @DZMM I read your thread a few days ago, I am still impressed. I was hoping to hear from others too.
  13. Hi all. So there have been a lot of videos and talk about having an unRaid box, running a VM and passing through the graphics card. It makes me wonder: who is actually using this day to day? As in, they have a fully functioning desktop PC capable of running Windows, with a keyboard/mouse/monitor, and running unRaid underneath. Keen to hear all stories.
  14. I have hardly even used it; maybe it's just really noisy? Thanks for looking into this, whenever you do.
  15. Thanks for the fast reply. I added the extra parameters you mentioned, and this was enough to give the container a new ID. This deleted the old folder and removed the 11GB file itself. So all should be good now, thanks again.
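The exact extra parameters aren't quoted in the thread; a common way to cap the json-file log that had grown to 11GB is Docker's log-driver options. A hedged sketch (the image name is illustrative):

```shell
# Cap the per-container json-file log at 50 MB, keeping a single file.
# On unRaid, just the two --log-opt flags go in the template's
# "Extra Parameters" field; image name here is an assumption.
docker run -d \
  --log-opt max-size=50m \
  --log-opt max-file=1 \
  binhex/arch-lidarr
```

Changing the run parameters also recreates the container with a new ID, which is why the old log folder disappeared.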
  16. @binhex My containerid-json.log has just reached 11GB. It looks like it's logging everything Lidarr is doing; is that normal? It's going to fill up my docker.img pretty soon.
  17. Nzbget and Sonarr are /downloads in your mappings, whereas Binhex-Lidarr is /data. Is that it?
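If that mismatch is the issue, one sketch of a fix (the host path is an assumption) is to give Lidarr the same container path the other containers use, pointing at the same host share:

```shell
# Illustrative: map the assumed host share into Binhex-Lidarr as /downloads
# so its paths line up with Nzbget and Sonarr.
docker run -d \
  -v /mnt/user/downloads:/downloads \
  binhex/arch-lidarr
```

Containers only agree on file locations when they map the same host folder to the same in-container path.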
  18. It’s a saying, a compliment in this context.
  19. I agree, this project does look solid and I am not even mad they are forking Sonarr.
  20. Just via PuTTY, copied the appdata of the old docker. Is that what you mean?
  21. Thanks @binhex Just installed your version, copied in my database etc. from another docker. All up and running in 5 minutes. Top work.