wickedathletes

Community Developer
  • Posts: 435

Everything posted by wickedathletes

  1. Just looking for a second pair of eyes. I originally thought Radarr was causing the issue because I had my /downloads path set to /mnt/user/downloads instead of /mnt/cache/downloads. When I switched it and restarted Radarr, my docker image usage dropped 20GB; however, it filled up again about 6 hours later. Not really sure what could be causing it. I am sure I am missing something, but I did go through the FAQ on docker space issues.
  2. Thanks, I think I figured it out. Not really sure why (probably my lack of understanding), but it was the container path mapping on Radarr. NZBGet was pointing to /mnt/cache/downloads and Radarr was pointing to /mnt/user/downloads. Despite that just being the user-share view of the cache drive (to my understanding, at least), I switched it over and my docker.img usage immediately went down 20GB (see the mapping sketch after this list). Weirdly enough, CP is pointing to the user share and never caused an issue for me.
  3. Yeah, no idea... unless Radarr does something weird in a temp folder when it renames a file? No settings in Radarr point anywhere unusual. It's handing off to NZBGet without issue, and it's definitely Radarr filling it, because I disabled Radarr while NZBGet was still running and the image stopped filling up. This is the only setting that could maybe do something? Not really sure. Also, below are all my other file management settings.
  4. Yeah, after you said I might have something wrong I thought about that too. I am checking, but so far I'm not sure. The downloads are going to the correct location, so I am not sure what would be filling the image.
  5. Anything out of the ordinary with this? This setup has worked 100% fine on my other 10 or so apps for the last year or so.
  6. Anyone having docker container size issues with this container? I have a 30GB docker image, and within about 20 minutes of rebuilding it and turning Radarr on, it filled to 100% from about 5GB. Radarr is indexing about 2,200 movies for me, so maybe that is why? Should I just grow my docker image, or is this a bug of sorts? (A quick way to see which container is eating the space is sketched after this list.)
  7. Worse than WAF, I have my parents breathing down my neck because they have 3 episodes left of Breaking Bad and my server has been down for 2 days. Damn you, remote-access Plex. Hahaha.
  8. Quote: "/mnt/cache/apps"
     Quote: "Has to be; mover only managed to move half of my Plex app folder before dying, and everything else is working fine. Is there a way to verify Plex is correct? I did have a lot of images that seemed 'broken.'"
     Quote: "Doesn't have to be /mnt/cache on the templates; /mnt/user works just fine. Best guess on the broken images is that Plex is referencing /mnt/user (you can verify by editing the container) and unRAID is grabbing the version of the file(s) on the array and missing the symlinks, because only half the share got copied. Back to what I originally suggested: delete /mnt/user0/apps and restore again, or even better, delete /mnt/user/apps and restore again. Either way you'll be back to where you were a couple of days ago."
     Awesome, I think I will give that a whirl in a day or so... because when sh*t hits the fan, it hits the fan, and of course I had a drive die at the same time. So right now I am preclearing and adding a replacement into the mix, and I definitely don't want to do anything else right now.
  9. Quote: "/mnt/cache/apps"
     Has to be; mover only managed to move half of my Plex app folder before dying, and everything else is working fine. Is there a way to verify Plex is correct? I did have a lot of images that seemed "broken."
     Or maybe it's kind of not working fine... crap. I see my downloads are going to my disk5 now... ugh...
  10. Is this something that could wait until the next backup and then restore from that?
  11. Quote: "As an experiment you could set appdata to 'cache prefer' and run the mover again. It will take just as long, and you may still have to follow squid's directions, but it would be an interesting experiment. I suspect it will fail, but the resulting log file and summary of actions leading up to this could be useful for limetech."
      I am all for testing, but what happens if I nuke something in the meantime? Just use the CA backup and restore again? I don't want to hose my server; it's my production box, so to speak.
  12. Quote: "Everything is up and running now, thank you. Quick one: because mover is only about half finished, I am getting this error in Fix Common Problems: 'Share apps set to cache-only, but files / folders exist on the array.' If I just delete what is on my array, will that go away? Is there a way to tell mover to stop thinking it's 50% done with a process? All of my files are on the cache drive now."
      Quote: "This is actually a weird little edge case, and I'm not 100% sure how the user share system handles copying to a cache-only share when an identical file exists on the array. This is what I would do to make sure everything is OK: the apps folder sitting on the array is going to be from mover yesterday and will be safe to kill. Either use mc or the Dolphin/Krusader apps to kill the folder from /mnt/user0, or rm -rf /mnt/user0/apps. Note that you must use user0 and not user so you only delete the files on the array. Then do the restore again (only files which are missing from the cache drive version will get copied). I'll have to play around tonight to see exactly what CA Backup does in this edge case, as I know it's not something I ever tested."
      If I do the restore again but have changed this setting since then, is that a problem? (A verify-before-delete sketch for the /mnt/user0 step is after this list.)
  13. Everything is up and running now, thank you. Quick one: because mover is only about half finished, I am getting this error in Fix Common Problems: "Share apps set to cache-only, but files / folders exist on the array." If I just delete what is on my array, will that go away? Is there a way to tell mover to stop thinking it's 50% done with a process? All of my files are on the cache drive now.
  14. Cool, yeah, I have everything other than appdata backed up elsewhere as well.
  15. Quote: "Sure that's in your syslog and not on one of the UI pages? If so, which page?"
      Quote: "Tools -> System Log. Screenshot: https://drive.google.com/open?id=0B7baFOqwwO97dVMwVkZiRHRLU3M I think at this point I just want to know how to kill a mover job. If I reboot, will it kill it or will it just start up again?"
      Quote: "It's a UI page that's doing it. If you reboot (and it works from the UI), then mover will not start back up again. I know what the error is (a PHP script attempted to allocate more memory than it was allowed to), and your only real recourse is to reboot. What I think happened here is that you have a sizeable Plex library, and mover logs everything, which basically means that your syslog wound up with an extra 200,000+ lines in it. Since you're already halfway done (hopefully), just start mover back up after the system reboots. Side note: using CA Backup to handle copying the appdata would have been:
      - a far, far faster process, as it doesn't have the overhead that mover does;
      - a way to end up with a backup copy (the original cache drive) just in case something goes wrong;
      - free of syslog spam, for this very reason (it logs the moves elsewhere)."
      I used CA Backup and forgot my backup runs Wednesday AM, so I have a nice clean fresh backup, which is why I don't care about mover at this point. And yeah, my Plex library is kind of large. So since I have a backup point, can I just plug in my new cache drive, format it, and restore?
  16. Quote: "Sure that's in your syslog and not on one of the UI pages? If so, which page?"
      Tools -> System Log. Screenshot: https://drive.google.com/open?id=0B7baFOqwwO97dVMwVkZiRHRLU3M I think at this point I just want to know how to kill a mover job. If I reboot, will it kill it or will it just start up again? (A quick check for whether mover is still running is sketched after this list.)
  17. This is all my syslog says:
      Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 72 bytes) in /usr/local/emhttp/plugins/dynamix/include/DefaultPageLayout.php(300) : eval()'d code on line 73
  18. I am in the process of moving my data off my cache drive so I can put in a new cache drive. The mover appears stuck; it hasn't written to the log in over an hour. The last entry is:
      Dec 8 13:47:33 Hades root: cd+++++++++ apps/Plex/Library/Application Support/Plex Media Server/Media/localhost/a/da0557cb221d85715fb47d468e8bd5cdbb0ac65.bundle/Contents/Thumbnails/
      That folder has 1 image in it. I was following the instructions here: https://lime-technology.com/wiki/index.php/Replace_A_Cache_Drive I do have an app folder backup from last night, so I'm not sure how I can kill this process, whether I shouldn't, or what. It is writing to a drive with space on it, so I'm not sure why it just died out. Any help appreciated; my server is dead in the water as we speak.
  19. I know, but with it split at level 1, the show is still being split across different drives... Should that not have been the case? Ideally, with my scenario, am I to assume switching it to level 2 should fix my issue anyway?
  20. But they are; I have The Flash on numerous drives without issue. Is it because the split should actually apply at the season folder and not the show folder? And if that is the case, I guess I would want level 2?
      \SHARENAME\Show Name\Season ##\file.ext
      \TV Series\The Flash (2014)\Season 03\file.ext
      \\SHARENAME\level 1\level 2\file.ext
      That way multiple seasons can be on multiple disks?
  21. So NZBGet writes to the cache. Sonarr takes from the cache and moves it to a volume on the array such as: /tv = /mnt/user/TV Series/
  22. My real-world example: minimum free space was set to 5GB and 2GB was free on the drive. Sonarr tried writing a 2.2GB video file to it; it failed and left a .partial~ file (I assume that is a Sonarr thing). The split level I have set on my TV Series share is "Automatically split only the top level directory as required," and my directory structure is \TV Series\The Flash (2014)\Season 03\file.ext. When it tried to use that directory, it failed. So, bringing it all back, I guess my questions are now (see the split-level sketch after this list):
      1. Is my structure bad? If so, what would be recommended? The way it appears, shows can be split per drive without issue.
      2. If the split level is what overrides the minimum free space setting, can drives just be excluded from the share but still have their data shown in it? That way I can manually fill a drive and "turn it off" from being written to, but it still shows in the share.
      3. If 1 and 2 don't solve it, I guess I am back to splitting up the space evenly and leaving this completely manual.
  23. So it's not working; maybe I am doing something wrong? On the share I set "Minimum Free Space" to 5GB for that specific share. My docker container is writing to that share specifically, and it placed (or tried to place) a 3GB file when only 1GB was left on the drive. Am I not understanding this correctly? Basically, what I want is to not have to move data around because a drive fills up. In its current state, however, Sonarr still writes to that location regardless...
  24. Quote: "Don't understand what you are trying to accomplish. You should just refer to everything as user shares and let unRAID worry about what drive anything is on. Set minimum free on each user share to be larger than the largest single file you will write to that share, and everything else should take care of itself."
      Honestly, I didn't know that setting existed, haha. So thank you! That was what was frustrating me, because without it things would never copy over and it would backlog, etc. THANKS! (A sketch for sizing that minimum is after this list.)
      Quote: "Turn on Help in the webUI. Lots of things are explained for each page."
      It's on; I've just been here too long, since before Help existed. I set all that stuff up in 4.0.
  25. Quote: "Don't understand what you are trying to accomplish. You should just refer to everything as user shares and let unRAID worry about what drive anything is on. Set minimum free on each user share to be larger than the largest single file you will write to that share, and everything else should take care of itself."
      Honestly, I didn't know that setting existed, haha. So thank you! That was what was frustrating me, because without it things would never copy over and it would backlog, etc. THANKS!
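
The mapping sketch mentioned in post 2: the docker.img growth was traced to Radarr and NZBGet referring to the downloads folder through different host paths (/mnt/user vs /mnt/cache). As a purely illustrative, hedged example (container names, images, and host paths are assumptions, not taken from the posts; on unRAID you would normally set the host path in each container's template rather than run docker by hand), consistent mappings might look like this:

    # Both containers see the same host folder under the same container path,
    # so a completed download is handed off in place instead of being copied
    # through a path that only exists inside the container.
    docker run -d --name nzbget \
      -v /mnt/cache/downloads:/downloads \
      linuxserver/nzbget

    docker run -d --name radarr \
      -v /mnt/cache/downloads:/downloads \
      -v "/mnt/user/Movies:/movies" \
      linuxserver/radarr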
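
The container-size check mentioned in post 6: before growing the docker image, it usually helps to confirm which container is actually writing inside it. A minimal sketch using stock Docker CLI commands (output columns vary by Docker version, and docker system df needs a reasonably recent release):

    # Per-container SIZE column = data written inside the container layer,
    # i.e. writes that did not go through a host path mapping.
    docker ps -s

    # Overall image / container / volume disk usage.
    docker system df

A large and growing SIZE for one container usually points at an unmapped path inside that container (an incomplete-downloads or temp folder, for example) rather than at the size of the library being indexed.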
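
The verify-before-delete sketch mentioned in post 12: the advice in the thread was to remove the stale copy of the apps share from the array side (/mnt/user0) and then restore again. The diff step below is my own cautious addition, not part of the original instructions; the paths match the posts.

    # /mnt/user0 is the array-only view of the share; /mnt/cache holds the cache copy.
    du -sh /mnt/user0/apps /mnt/cache/apps

    # "Only in /mnt/user0/..." lines are files that exist only on the array;
    # review those before deleting anything.
    diff -rq /mnt/user0/apps /mnt/cache/apps | head -n 50

    # Only after checking: remove the stale array-side copy, then run the
    # CA Backup restore again so anything still missing gets copied back.
    rm -rf /mnt/user0/apps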
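
The mover check mentioned in post 16: the answer in the thread was that a reboot stops mover and it will not restart on its own. If you want to see whether it is still doing anything before rebooting, a rough sketch (assumes unRAID 6.x, where mover is a shell script that logs each move to the syslog; exact process names can differ between versions):

    # Is the mover script (or the rsync/mv work it spawns) still running?
    ps aux | grep -i '[m]over'

    # Is it still logging moves? If nothing new appears for a long time,
    # mover is most likely stuck.
    tail -f /var/log/syslog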
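
The split-level sketch mentioned in post 22: the split level controls how many directory levels below the share are allowed to spread across disks; everything deeper stays on whichever disk its parent folder landed on, and, as the failed 2.2GB write in post 22 suggests, that placement can win over the minimum free space setting. The disk assignments below are invented to show the idea, not taken from the thread.

    # Share: TV Series      Structure: /TV Series/<Show>/<Season ##>/<file>
    #
    # Split level 1 ("split only the top level directory"): each show folder is
    # pinned to one disk, so every season of The Flash must fit on that disk.
    #   /mnt/disk1/TV Series/The Flash (2014)/Season 01/...
    #   /mnt/disk1/TV Series/The Flash (2014)/Season 03/...
    #
    # Split level 2 ("split the top two directory levels"): season folders may
    # land on different disks, so minimum free space can steer a new season
    # to a drive with room.
    #   /mnt/disk1/TV Series/The Flash (2014)/Season 01/...
    #   /mnt/disk3/TV Series/The Flash (2014)/Season 03/...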
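
The minimum free space sketch mentioned in post 24: the advice was to set it larger than the largest single file you expect to write to the share. If you are unsure what that is, a rough way to check (paths are examples; assumes the GNU find/df that unRAID ships):

    # Largest file currently in the share (size in bytes, then path).
    find "/mnt/user/TV Series" -type f -printf '%s %p\n' | sort -n | tail -n 1

    # Free space per data disk, to see which drives are close to the limit.
    df -h /mnt/disk*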