Everything posted by syrys

  1. Alright, here is my "hacky" solution to the above problem. It works for now; if someone has a better solution, let me know. Install the User Scripts plugin (if you don't have it already) and add the following script:

#!/bin/bash
mkdir /mnt/disks/rclone_volume
chmod 777 /mnt/disks/rclone_volume

Obviously you can add the -p flag to mkdir if you need nested directories or have issues with sub-directories not being there, but from trial and error on my Unraid setup, `/mnt/disks/` already exists at boot (before the array starts). Edit the script to include all the mount folders you want (if you have multiple mounts) and chmod 777 each of them. Set the above user script to run on every array start. Just to make sure my container doesn't start before this finishes (unsure if that can even happen?), I added a random other container above my rclone container (one that doesn't need the drives to be mounted) and set a delay of 5 seconds, so the rclone container waits 5 seconds. This might be unnecessary. Hope it helps someone.
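If you have several mounts, the same idea generalises to a loop; note that "rclone_volume2" below is just a placeholder for whatever extra mount points your containers expect:

```shell
#!/bin/bash
# User Scripts: run at every array start, before the rclone container
# comes up. Pre-create each mount point and open its permissions so the
# container's user can mount over it.
# "rclone_volume" matches the post; "rclone_volume2" is a placeholder.
for dir in rclone_volume rclone_volume2; do
    mkdir -p "/mnt/disks/$dir"
    chmod 777 "/mnt/disks/$dir"
done
```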
  2. Hmm, I've been banging my head against the desk all day; can someone here give me some advice on how to fix this? It's the issue that was already mentioned several times, and I get it too, but the solution mentioned does not survive a server restart.

Executing => rclone mount --config=/config/.rclone.conf --allow-other --read-only --allow-other --acd-templink-threshold 0 --buffer-size 1G --timeout 5s --contimeout 5s my_gdrive: /data
2020/09/02 14:00:21 mount helper error: fusermount: user has no write access to mountpoint /data
2020/09/02 14:00:21 Fatal error: failed to mount FUSE fs: fusermount: exit status 1

First of all, I have the docker installed, with all the settings as mentioned throughout this thread. I do pass a couple of extra rclone flags, but those aren't the issue. So, say the mount point defined is `/mnt/disks/rclone_volume`. When I restart the server (the docker auto-starts), I see the above error. If I stop the docker and do `ls -la`, I see the ownership of `/mnt/disks/rclone_volume` is `root:root`. Alright, sure: `chmod 777` and `chown 911:911` the rclone_volume, then restart the docker, and cool, everything works. `/mnt/disks/rclone_volume` gets mounted correctly (`ls -la` shows 911:911, great), I can browse the files, and there are no errors in the docker logs. Sweet, everything is sorted, right? No, unfortunately not. The moment I reboot the Unraid server (remember, the docker auto-starts), I get the above error in the docker logs, and obviously the drive is not mounted. So, back to `ls -la` on `/mnt/disks/rclone_volume`, and it's back to `root:root` and `755`. So basically, every time I start my server, I have to manually `chmod 777` and/or `chown 911:911` `/mnt/disks/rclone_volume`, and then start the docker? Any idea what's causing this? I can't be the only one having this issue, can I? Essentially, for this docker to successfully mount a drive, the mount destination needs to be either `777` or owned by `911:911`.
But for whatever reason, at reboot/start of Unraid, the ownership of `/mnt/disks/rclone_volume` gets reset to `root:root` even if you had set it to `911:911` before the restart (I assume user 911 doesn't exist at the very start, so it defaults to root?). At the start of the boot, Unraid (?) also resets `/mnt/disks/rclone_volume` to 755 (even if you had it set to 777 before the restart). Could this be related to another plugin I have?
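For reference, the manual workaround described above boils down to a few commands after each boot (911:911 being the user the container runs as; this is only a sketch, and the mkdir is just in case the directory is missing):

```shell
# Inspect the mount point's owner and permissions after a reboot
mkdir -p /mnt/disks/rclone_volume   # in case it doesn't exist yet
ls -ld /mnt/disks/rclone_volume

# Hand it to the container's user and open the permissions; the rclone
# container can then be started again from the Docker tab
chown 911:911 /mnt/disks/rclone_volume
chmod 777 /mnt/disks/rclone_volume
```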
  3. Oh wow, that sounds worrying. Alright, I'll follow the cache pool instructions from that link. As for the docker image recreation, are there any instructions I should be following for this? Or is it basically manually creating a new docker image, re-downloading all the containers I've been using (plex, sonarr, etc.), and setting up the same docker settings for each of them as before?
  4. Sorry for the late response. I waited a little while so the issue would trigger again and I could get the exact error and the diagnostic file when it happens. A week ago, when the server was having similar issues, it didn't even let me download the diagnostic file, sigh. Anyway, the same error happened again. Here are some errors from the system log:

Jul 15 01:08:35 karie kernel: print_req_error: I/O error, dev sdj, sector 6078419056
Jul 17 01:05:49 karie kernel: BTRFS: error (device loop2) in btrfs_finish_ordered_io:3107: errno=-5 IO failure

Hmm, unsure what the first error is either. sdj is my Unassigned Devices drive, the drive all my downloads go to (they then get picked up and moved to the array, so nothing important is on sdj). Any help is appreciated. Thanks in advance.
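For what it's worth, those sector I/O errors on sdj can be checked at the drive level with smartctl; these are read-only health queries, with /dev/sdj taken from the syslog lines above:

```shell
# Overall SMART health verdict for the suspect drive
smartctl -H /dev/sdj

# Attribute table, filtered to the counters that usually flag a dying
# disk (reallocated, pending, and uncorrectable sectors)
smartctl -A /dev/sdj | grep -Ei 'reallocated|pending|uncorrect'
```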
  5. Hey all, I think I need some serious help with my Unraid (cache?). My Unraid setup was running nice and smooth (minor unrelated hiccups aside) for a few years now. I have about 10 drives: 1 parity, 1 Unassigned Devices drive (which I use for downloads etc.), and 2x SSDs (cache). Since a couple of months ago, I've been getting some weirdness. My SABnzbd docker was the first to complain (at least that I noticed), with errors like this on every download:

Traceback (most recent call last):
  File "/usr/share/sabnzbdplus/cherrypy/", line 663, in respond
    self.body.process()
  File "/usr/share/sabnzbdplus/cherrypy/", line 989, in process
    super(RequestBody, self).process()
  File "/usr/share/sabnzbdplus/cherrypy/", line 558, in process
    proc(self)
  File "/usr/share/sabnzbdplus/cherrypy/", line 223, in process_multipart_form_data
    process_multipart(entity)
  File "/usr/share/sabnzbdplus/cherrypy/", line 215, in process_multipart
    part.process()
  File "/usr/share/sabnzbdplus/cherrypy/", line 556, in process
    self.default_proc()
  File "/usr/share/sabnzbdplus/cherrypy/", line 715, in default_proc
    self.file = self.read_into_file()
  File "/usr/share/sabnzbdplus/cherrypy/", line 729, in read_into_file
    fp_out = self.make_file()
  File "/usr/share/sabnzbdplus/cherrypy/", line 512, in make_file
    return tempfile.TemporaryFile()
  File "/usr/lib/python2.7/", line 511, in TemporaryFile
    (fd, name) = _mkstemp_inner(dir, prefix, suffix, flags)
  File "/usr/lib/python2.7/", line 244, in _mkstemp_inner
    fd =, flags, 0600)
OSError: [Errno 30] Read-only file system: '/tmp/tmpoyMrxR'

This happens after a day or so of the system running: the Unraid system somehow "breaks", and then no more downloads work (the above error). Simply restarting the Unraid server fixes it (restarting docker does not; sometimes it won't even allow restarting docker and says something went wrong). Anyway, the last line of the error is interesting. Something is in read-only mode? My cache drive, maybe?
Then, exploring my Unraid logs, I see things like this:

BTRFS: error (device loop2) in btrfs_finish_ordered_io:3107: errno=-5 IO failure

Some btrfs issue? The only btrfs usage is the 2x cache SSDs, I believe, so is something going on with those? I've googled the above error and got to a place where people suggested the cache might be corrupted. But I wasn't really clear on how to know for sure, or how to go about fixing it. Any help will be appreciated; I'm a total noob with all this, just to clarify. When I restart, everything works fine. I've disabled a couple of dockers I don't use as much (radarr, deluge), and with those disabled the weirdness/errors are less frequent. So basically, when I restart the server, everything works fine (the other dockers run fine: plex, sonarr, sabnzbd), but after a couple of days of running I notice weirdness like sonarr no longer importing files, plex sometimes crashing and dying, and sab sometimes failing to download anything (the first error I mentioned).
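To get a first read on whether the cache pool itself is throwing errors, btrfs has a couple of non-destructive checks; this assumes the cache pool is mounted at /mnt/cache (Unraid's default), and neither command modifies anything:

```shell
# Per-device error counters for the pool; non-zero read/write/flush or
# corruption counts point at a failing SSD or a corrupted filesystem.
btrfs device stats /mnt/cache

# Verify every checksum in the pool without changing anything
# (-B keeps the scrub in the foreground, -r makes it read-only).
btrfs scrub start -B -r /mnt/cache
```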
  6. Hmm, interesting. Thanks for the info, guys. So that means if I want 4K transcoding done in my Plex docker, I need to upgrade to something like a 7th/8th-gen Intel CPU/motherboard (since GPUs aren't supported)? If I were to upgrade, does it matter which CPU? Can I just upgrade to a cheap 8th-gen i3 like the i3-8100, and would that be enough? I guess my alternative is to run the Plex server in a Windows VM; that's probably a good option. Are there any instructions on how to pass the GPU through to my Windows VM? Has anyone done this successfully?
  7. Hey guys, I'm trying to get some step-by-step instructions on how to actually install a GPU in my Unraid server and get the Plex server (running as a docker on Unraid) to use that GPU for transcoding. My Unraid server runs an i7 4770 CPU, which isn't enough for 4K H.265 transcoding, so I just installed a GTX 970 (might move to a 1030 or 1050 later) in the box. Although the hardware is installed, I'm not sure how to: 1. Check that the GPU is correctly installed (that the OS/Unraid can actually see it)? 2. Get my Plex docker (linuxserver/plex) to actually make use of this GPU. I've seen a couple of threads about this, but the instructions weren't too clear to me (I'm a noob at these things). Can someone give me some instructions if they can? It would be greatly appreciated. I tried to follow this step:

modprobe i915
chmod -R 777 /dev/dri

I got the following error:

chmod: cannot access '/dev/dri': No such file or directory

There were a couple of other threads about editing syslinux.cfg and the "go" file, but I have no idea where they are, how to edit them, or what to edit them to. So I'm pretty stuck. Any help appreciated. PS: I'm running Unraid 6.6.5.
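For step 1 (checking whether the OS can see the card at all), two console commands are enough, and neither depends on any driver being set up. One caveat worth flagging: i915 is the driver for Intel integrated graphics, so `modprobe i915` won't create /dev/dri for an Nvidia card like the GTX 970, and it can also fail to do so if the iGPU is disabled in the BIOS.

```shell
# List PCI devices and keep only graphics controllers; the GTX 970
# should appear here if it is seated and powered correctly.
lspci | grep -iE 'vga|3d'

# See whether any DRM devices exist yet; this is the directory that the
# "chmod -R 777 /dev/dri" step expects to find.
ls -l /dev/dri 2>/dev/null || echo "/dev/dri does not exist yet"
```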
  8. Ah, fail. I thought it auto-updated the OS =/ Ah well, time to update.
  9. My Unraid is set to auto-update but is currently at v6.4.1. I don't see this option; I assume it's not on the main releases yet.
  10. I know you guys keep suggesting that this is a user/config error time and time again, but after a year or so of struggling with this, my issue turned out to be Jackett. I don't really know much about docker, and no one gave me a good way to calculate disk usage per container, so I just deleted one container after another (in order of least important). My docker size was going up by about 2-3% per day, so I would delete one container each day to see when the increase stopped. It turned out the culprit for me was Jackett. It was likely saving logs or something somewhere it shouldn't have, even though it was configured correctly. As I mentioned, I've been struggling with this for a long time, and I've made sure that every configurable setting in every container was configured correctly. Something else you can try: earlier in this thread, someone answered my question about shelling into a docker container. If you have time to try it, you could go in that way, use a disk/folder usage command to narrow down exactly which folders/files are the cause, and then submit a bug report to the maintainers. Hope this helps.
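On the "shell in and measure folders" idea: with docker you don't even need sshd inside the container, since docker exec can run the usage command directly. The container name (jackett here) is whatever appears in your own docker list:

```shell
# Size of each top-level directory inside the running container, sorted
# smallest to largest; -x stays on the container's own filesystem so
# host-mapped volumes are excluded from the totals.
docker exec jackett du -xh -d 1 / 2>/dev/null | sort -h
```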
  11. The culprit is Radarr. As @John_M suggested, I ended up turning off some containers that I didn't use much, and the % stopped going up. I narrowed the problem down to Radarr. So... now that I know the culprit container, how do I actually go about fixing it? I've spent a lot of time making sure all the settings are correct to the best of my ability; I suspect the storage usage comes from a non-configurable path. Any suggestions?
  12. Yeah, I've thought about that, hence why I asked if anyone knows a way for me to check/monitor docker container space usage. I still don't really have an answer to that. Sure, as you mentioned, I can turn off some of the containers and see if that stops the usage creeping up, but the culprit is likely something I heavily use/rely on (heck, I only have 8 containers running, and it's likely one of the top 4 on the list I posted). I'll leave the turning-off approach as a last resort, and am hoping that Unraid/Lime Tech, or at least someone familiar with docker, can give me a proper way to debug this. I'm at 89% now.
  13. Hey mate, thanks for the link. I did have a look in there and made sure the basic things, like the log file limit, are set. But I don't see anything in the FAQ about a way to find which image uses up how much space, or anything along those lines. So I still have no way of knowing which image is the culprit, or how to go about fixing it. As I mentioned earlier, my settings/configs should be correct, as I have been using them for years without much issue. The issue is likely coming from a recent update to one of these images that is saving files in a location it shouldn't. I just have no way to actually find this out? FYI: I'm at 87% now and have no idea what to do.
  14. Hey all, I've been using Unraid with docker apps for years (I'm still a noob, though). Recently (since a few weeks ago) I've noticed that my docker disk utilization is slowly going up (I'm not installing anything, and all the apps are set to write to a different drive; as mentioned, I've been using this setup without much issue for years). Anyway, I've been getting warning emails saying "Docker image disk utilization X%", where X is now at about 85 and growing slowly (1-2% every day). Between July 9th and July 21st, it went from <70% to 95%; then I deleted a lot of apps I don't use much to get it down well below 70%. Then, between August 17th and today (the 25th), it went from 70% to 85%. This means one of the dockers is clearly doing something strange (something it shouldn't), possibly writing logs into the docker image. So, my question is: how on earth do I find out who the culprit is, so I can look into it further and/or come up with a solution? FYI, here is the list of the 8 apps I'm actually running:

plex: linuxserver/plex
radarr: linuxserver/radarr
sabnzbd: linuxserver/sabnzbd
sonarr: linuxserver/sonarr
deluge: linuxserver/deluge
DuckDNS: coppit/duckdns
jackett-public: dreamcat4/jackett
cadvisor: google/cadvisor

My current allocated space for docker is 20GB. Any help appreciated :)
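Docker itself can answer the "who is the culprit" question from the Unraid console: it can list every container's writable-layer size, which is the part that counts against the 20GB image. A sketch, assuming the standard docker CLI:

```shell
# Show each container with the size of its writable layer; a size that
# keeps growing day over day marks the container writing inside the
# image (e.g. logs outside its mapped /config volume).
docker ps -as --format 'table {{.Names}}\t{{.Size}}'
```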
  15. Sweet, I was actually doing just that right after I posted. When I go to add a container and select the template, it gets populated with all the existing data. So yup, that solved the problem... I think. All is working for now. I think... Thanks heaps for the advice.
  16. Ah, I see; that sucks. Yeah, I just removed Advanced Buttons. I saw the thread discussing this and made a quick post asking if there is any solution. So, essentially, I have to start from scratch with the dockers that I lost? If anyone has any ideas for getting them up and running again, let me know.
  17. I had Advanced Buttons installed and tried to update a couple of dockers, and those dockers have disappeared from the docker menu/list (this issue was covered on the first page of this thread). I have since uninstalled Advanced Buttons. But is there any way to get those dockers up and running again, or are they as good as gone?
  18. Also note: my other dockers are running just fine; it's only when I update them that they disappear. I'm using a plugin to update dockers (Advanced Buttons, which just adds extra buttons), and I caught a glimpse of a "No such file or directory" error at the end of the update. Here are some of my logs from the last 30 minutes or so; I think it's that "The command failed" line, what is that all about?

Feb 13 16:41:08 karie kernel: docker0: port 5(vethbf54628) entered disabled state
Feb 13 16:41:08 karie kernel: device vethbf54628 left promiscuous mode
Feb 13 16:41:08 karie kernel: docker0: port 5(vethbf54628) entered disabled state
Feb 13 16:41:08 karie avahi-daemon[8096]: Withdrawing address record for fe80::e7:21ff:fec9:d396 on vethbf54628.
Feb 13 16:41:09 karie [11757]: The command failed. Error: sh: /usr/local/emhttp/usr/bin/docker: No such file or directory
Feb 13 16:52:23 karie emhttpd: req (1): cmd=/webGui/scripts/share_size&arg1=appdata&arg2=ssz1&csrf_token=****************
Feb 13 16:52:23 karie emhttpd: cmd: /usr/local/emhttp/plugins/dynamix/scripts/share_size appdata ssz1
Feb 13 16:52:28 karie emhttpd: req (2): cmd=/webGui/scripts/share_size&arg1=Docker&arg2=ssz1&csrf_token=****************
Feb 13 16:52:28 karie emhttpd: cmd: /usr/local/emhttp/plugins/dynamix/scripts/share_size Docker ssz1
Feb 13 16:59:23 karie [13986]: The command failed. Error: sh: /usr/local/emhttp/usr/bin/docker: No such file or directory
Feb 13 17:00:02 karie root: move: file /mnt/disk5/system/libvirt/libvirt.img
Feb 13 17:00:02 karie root: move_object: /mnt/disk5/system/libvirt/libvirt.img File exists
Feb 13 17:00:02 karie root: move_object: /mnt/disk5/system/libvirt: Directory not empty
Feb 13 17:00:02 karie root: move_object: /mnt/disk5/system: Directory not empty
  19. I just did an update on my SABnzbd docker on Unraid, and the docker disappeared from my dockers list (I also did the Radarr docker update, and that one disappeared as well). Can anyone give me any advice so I can debug this? Essentially, just 15 minutes ago I clicked update on SABnzbd; once the update finished, the docker was no longer in my dockers list (running or stopped). I was going to post on the docker support page for linuxserver SABnzbd, but to make sure it wasn't specific to that docker, I updated the Radarr docker as well, and that also disappeared, wtf. I don't know what to do now. I also did a server restart; no change. Also, to clarify, these dockers aren't running: the URLs I use to access Radarr and SABnzbd are not responding. Unraid 6.4.1.
  20. Hey, thanks so much for the advice. I tried the Plex add-on in Kodi, and it all works perfectly fine. When I messed around with direct play in the Plex app, it was hit and miss; with Kodi, so far, 100% success (and it's perfect). I will bring my queries to the Plex forums. The only downside with Kodi is that the UI is not as good (and I don't see options like automatic subtitles for things like anime). But maybe I'll get used to it, who knows. Thanks again.
  21. I have the PlexPy docker installed (never really used it much), but I have it disabled at the moment. I've disabled almost every docker that's not 100% necessary, to get maximum performance. I use an Unraid plugin called "Dynamix System Statistics" (I think) to see the server stats (it shows me CPU/network/RAM/HDD activity). Alright, I'll try out Kodi. I didn't realise Kodi had a Plex plugin (that sounds strange, haha). I did install Kodi earlier on the Shield, but got stuck on some stupid thing where, as soon as you launch the app, it just says "waiting for external storage" and nothing happens (can't do anything besides closing the app). I'll look into that further. As for transcoding, I may be using that word incorrectly. I'm just trying to get my 4K content from my Plex server playing on my Plex client on the Shield. Honestly, I have no idea whether the server is transcoding/converting it or sending it direct (direct play?). Is there any way to tell if it's being transcoded? As for 10-bit, is there a way to tell if my content is 10-bit? Also, I can play some 4K content just fine, which is really weird. I have some shows where every (4K) episode works just fine, and other shows where none of the (4K) episodes play properly (they freeze from time to time, and it says the server isn't powerful enough). There seems to be a connection to certain shows.
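Two of those questions can be answered directly. Whether a stream is being transcoded shows in the Plex server dashboard: the Now Playing entry is labelled Direct Play, Direct Stream, or Transcode. Whether a file is 10-bit can be read with ffprobe from any machine that can reach the file; the path below is a placeholder:

```shell
# Print the codec and pixel format of the first video stream.
# pix_fmt=yuv420p10le means 10-bit; yuv420p means 8-bit.
ffprobe -v error -select_streams v:0 \
    -show_entries stream=codec_name,pix_fmt \
    -of default=noprint_wrappers=1 '/path/to/episode.mkv'
```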
  22. Sorry, I'm really not sure where this should be posted; I'll post it here for now, and hopefully someone can link me to somewhere more suitable if this isn't appropriate.
----------------------------------------
I have been using Unraid for a couple of years now. I use a docker for Plex, and I play Plex on an Nvidia Shield (Android TV device). My Plex server runs an i7 4770 with 16GB of RAM. Recently, my Plex client keeps pausing videos, failing to smoothly play back most of my recent 4K videos, anything from movies to TV shows. Plex shows a popup saying my server is not powerful enough to play back the video. Is that really true? An i7 4770 is not powerful enough to transcode 4K video? I use a plugin on Unraid to monitor server stats, and most of the time when it's skipping, the server CPU usage is nowhere near 100%, sometimes barely hitting 30+%. So, what on earth is going on? Can anyone shed some light on this and maybe point me in the right direction? I'm not sure if the cause is the Plex docker/server, the Plex client (unlikely; I've tried multiple clients), Unraid (somehow not allowing the docker full CPU access?), the 4770 really not being powerful enough (if so, what is?), something to do with the media type (some specific encoding?), or something else entirely. Thanks in advance.
  23. Yeah, I did, but to do so you had to turn off docker; I simply followed Squid's instructions. For those who know docker: is there really no way to browse into a docker container's file system and inspect/delete a file? Say you want to locate and delete a log file manually, and you are sure it's being saved inside the docker; is there really no way to browse into the container's file system and try to locate and remove said file(s)? This would solve most of our problems (although hacky, it's still a viable solution).
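For what it's worth, a running container's filesystem can be entered directly with docker exec, without stopping the docker service; the container name and the log path here are purely hypothetical examples:

```shell
# Open an interactive shell inside a running container ("sh" rather
# than bash, since many images don't ship bash).
docker exec -it jackett sh

# ...then, inside the container:
#   du -xh -d 1 / | sort -h     # find the heavy directory
#   rm /tmp/huge-debug.log      # delete the offending file (example)
```

Changes made this way live in the container's writable layer and are lost when the container is recreated, so this is a cleanup tool rather than a fix; the real fix is stopping whatever keeps writing there.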
  24. Haha, I don't mind. Maybe the suggestions you get will help me with my issue too (fingers crossed).
  25. This is my output:

> du -h -d 1 /var/lib/docker/
0       /var/lib/docker/tmp
9.6G    /var/lib/docker/containers
22G     /var/lib/docker/btrfs
12M     /var/lib/docker/image
68K     /var/lib/docker/volumes
0       /var/lib/docker/trust
84K     /var/lib/docker/network
2.4M    /var/lib/docker/unraid
0       /var/lib/docker/swarm
32G     /var/lib/docker/

Not really sure what this is actually telling me, though.
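One way to read that output: btrfs/ (22G) holds the images and each container's writable layer, while containers/ (9.6G) holds per-container metadata and, importantly, each container's JSON console log. Drilling one level down shows which container the 9.6G belongs to, and an oversized log can be emptied in place:

```shell
# Per-container breakdown; each entry is one container ID, and the
# largest directory is the likely culprit.
du -h -d 1 /var/lib/docker/containers | sort -h

# If the space is in a *-json.log file, empty it without having to
# restart the container.
truncate -s 0 /var/lib/docker/containers/*/*-json.log
```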