wickedathletes

Community Developer

Everything posted by wickedathletes

  1. Worse than WAF, I have my parents breathing down my neck because they have 3 episodes of Breaking Bad left and my server has been down 2 days. Damn you, remote access Plex. hahaha
  2. /mnt/cache/apps Has to be, mover only managed to move half of my Plex app folder before dying; everything else is working fine. Is there a way to verify Plex is correct? I did have a lot of images that seemed "broke." Doesn't have to be /mnt/cache on the templates; /mnt/user works just fine. Best guess on the broken images is that Plex is referencing /mnt/user (you can verify by editing the container) and unRAID is grabbing the version of the file(s) on the array and missing the symlinks because only half the share got copied. Back to what I originally suggested: delete /mnt/user0/apps and restore again, or even better, delete /mnt/user/apps and restore again. Either way you'll be back to where you were a couple of days ago. Awesome, I think I will give that a whirl in a day or so... because when sh*t hits the fan, it hits the fan, and of course I had a drive die at the same time, so right now I am preclearing and adding a replacement in the mix, so I definitely don't want to do anything right now.
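The delete-then-restore advice above boils down to making sure the array-side copy holds nothing unique before removing it. A minimal sketch of that check, assuming the unRAID convention that /mnt/user0 exposes only the array portion of a share while /mnt/cache exposes the cache portion (the helper name is my own, not an unRAID command):

```shell
# Hypothetical helper (not an unRAID command): list files present in the
# array-side copy of a share that are missing from the cache-side copy.
only_on_array() {
    array_copy=$1
    cache_copy=$2
    # Walk every file on the array side; print the ones the cache lacks.
    (cd "$array_copy" && find . -type f) | while read -r f; do
        [ -e "$cache_copy/$f" ] || echo "${f#./}"
    done
}

# On unRAID the two views of the cache-only share would be checked with:
#   only_on_array /mnt/user0/apps /mnt/cache/apps
# An empty result means the array copy is pure duplicate, so
#   rm -rf /mnt/user0/apps
# discards nothing unique before the restore is run again.
```

If the helper prints nothing, deleting the array-side folder loses no data; if it prints paths, those files exist only on the array and should be looked at first.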
  3. /mnt/cache/apps Has to be, mover only managed to move half of my Plex app folder before dying; everything else is working fine. Is there a way to verify Plex is correct? I did have a lot of images that seemed "broke." Or it's kind of not... crap. I see my downloads are going to my disk5 now... ugh...
  4. Is this something that could wait until the next backup and then restore from that?
  5. As an experiment you could set appdata to "cache prefer" and run the mover again. It will take just as long, and you may still have to follow Squid's directions, but it would be an interesting experiment. I suspect it will fail, but the resulting log file and summary of actions leading up to this could be useful for Limetech. I am all for testing, but what happens if I nuke something in the meantime? Just use the CA backup and restore again? I don't want to hose my server; it's my production box, so to speak.
  6. Everything is up and running now, thank you. Quick one: because mover is like half finished now, I am getting this error in Fix Common Problems: "Share apps set to cache-only, but files / folders exist on the array". If I just delete what is on my array, will that go away? Is there a way to tell mover to stop thinking it's 50% done with a process? All of my files are on the cache drive now. This is actually a weird little edge case and I'm not 100% sure how the user system handles copying to a cache-only share when an identical file exists on the array. This is what I would do to make sure everything is OK: the apps folder sitting on the array is going to be from mover yesterday, and will be safe to kill. Either use mc or the Dolphin/Krusader apps to kill the folder from /mnt/user0, or rm -rf /mnt/user0/apps. Note that you must use user0 and not user to only delete the files on the array. Then do the restore again (only files which are missing from the cache drive version will get copied). I'll have to play around tonight to see exactly what CA Backup does in this edge case, as I know it's not something that I ever tested. If I do the restore again but have changed things since then, is that a problem?
  7. Everything is up and running now, thank you. Quick one: because mover is like half finished now, I am getting this error in Fix Common Problems: "Share apps set to cache-only, but files / folders exist on the array". If I just delete what is on my array, will that go away? Is there a way to tell mover to stop thinking it's 50% done with a process? All of my files are on the cache drive now.
  8. Cool, ya, I have everything other than appdata backed up elsewhere as well.
  9. Sure that's in your syslog and not on one of the UI pages? If so, which page? Tools -> System Log. Screenshot: https://drive.google.com/open?id=0B7baFOqwwO97dVMwVkZiRHRLU3M I think at this point I just want to know how to kill a mover job? If I reboot, will it kill it or will it just start up again? It's a UI page that's doing it. If you reboot (and it works from the UI), then mover will not start back up again. I know what the error is (a PHP script attempted to allocate more memory than it was allowed to) and your only real recourse is to reboot. What I think happened here is that you have a sizeable Plex library, and mover logs everything, which basically means that your syslog wound up with an extra 200,000+ lines in it. Since you're already halfway done (hopefully), just start mover back up after the system reboots. Side note: using CA Backup to handle copying the appdata would have been a far, far faster process, as it doesn't have the overhead that mover does; you'd wind up with a backup copy (the original cache drive) just in case something goes wrong; and it doesn't log all moves into the syslog for this very reason (it logs them elsewhere). I used CA Backup and forgot my backup runs Wednesday AM, so I have a nice clean fresh backup, which is why I don't care about mover at this point, and ya, my Plex library is kind of large. So since I have a backup point, can I just plug in my new cache drive, format it, and restore?
  10. Sure that's in your syslog and not on one of the UI pages? If so, which page? Tools -> System Log. Screenshot: https://drive.google.com/open?id=0B7baFOqwwO97dVMwVkZiRHRLU3M I think at this point I just want to know how to kill a mover job? If I reboot, will it kill it or will it just start up again?
  11. This is all my syslog says: Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 72 bytes) in /usr/local/emhttp/plugins/dynamix/include/DefaultPageLayout.php(300) : eval()'d code on line 73
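For reference, that fatal error is the web UI's PHP process exhausting its memory_limit; the byte count in the message corresponds to the common 128M default:

```shell
# 134217728 bytes from the fatal error works out to exactly 128 MB,
# a stock PHP memory_limit value:
echo $((134217728 / 1024 / 1024))
```

So the UI page rendering the mover-bloated syslog ran the PHP process past 128 MB, at which point it aborted rather than allocate more.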
  12. I am in the process of moving my data off my cache to put in a new cache drive. The mover appears stuck; it hasn't written to the log in over an hour: Dec 8 13:47:33 Hades root: cd+++++++++ apps/Plex/Library/Application Support/Plex Media Server/Media/localhost/a/da0557cb221d85715fb47d468e8bd5cdbb0ac65.bundle/Contents/Thumbnails/ --- that folder has 1 image in it. I was using the instructions here: https://lime-technology.com/wiki/index.php/Replace_A_Cache_Drive I do have an app folder backup from last night, so I'm not sure how I can kill this process, or whether I even should. It is writing to a drive with space on it, so I'm not sure why it just died out. Any help appreciated; my server is dead in the water as we speak.
  13. I know, but having it split at level 1, the show is still being split across different drives... should that not have been the case? Ideally, with my scenario, am I to assume switching it to level 2 should fix my issue anyway?
  14. But they are; I have The Flash on numerous drives without issue. Is it because the split should actually be on the season folder and not the show? And if that is the case, I guess I would want level 2? My structure is \\SHARENAME\Show Name\Season ##\file.ext, e.g. \TV Series\The Flash (2014)\Season 03\file.ext, which maps to \\SHARENAME\level 1\level 2\file.ext. This way multiple seasons can be on multiple disks?
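Split level is easier to reason about as "how many directory levels below the share root may be spread across disks": the path prefix at that depth is the unit that stays on a single disk. A rough illustration of the idea using the paths above (my own sketch, not unRAID's actual code):

```shell
# split_unit RELATIVE_PATH LEVEL
# Echoes the directory prefix that must stay together on one disk when
# the share splits only the first LEVEL directory levels. Illustrative only.
split_unit() {
    echo "$1" | cut -d/ -f1-"$2"
}

# Level 1: the whole show is the unit, so every season lands together.
split_unit "The Flash (2014)/Season 03/file.ext" 1

# Level 2: each season is the unit, so seasons may land on different disks.
split_unit "The Flash (2014)/Season 03/file.ext" 2
```

Under this reading, level 2 is the setting that lets one show's seasons spread across multiple disks while keeping each season's files together.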
  15. So NZBGet writes to the cache. Sonarr takes from the cache and moves it to a volume on the array such as: /tv = /mnt/user/TV Series/
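The handoff described above depends on both containers agreeing on paths. A hedged sketch of the volume mappings involved (image names and the /mnt/cache/downloads host path are illustrative, not taken from the post; only the /tv = /mnt/user/TV Series/ mapping is):

```shell
# NZBGet writes completed downloads to a folder on the cache drive:
docker run -d --name nzbget \
  -v /mnt/cache/downloads:/downloads \
  some-nzbget-image

# Sonarr sees the same download area plus the array share it imports into:
docker run -d --name sonarr \
  -v /mnt/cache/downloads:/downloads \
  -v "/mnt/user/TV Series/":/tv \
  some-sonarr-image
```

Because /tv maps to a /mnt/user path, Sonarr's import hands the file to unRAID's user share layer, which then picks a destination disk according to the share's allocation and split-level settings.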
  16. My real-world example: minimum set: 5GB. 2GB was free. It tried writing a 2.2GB video file to the drive. It failed and left a .partial~ file (I assume this is a Sonarr thing). The split level I have set on my TV Series share is "Automatically split only the top level directory as required." My directory structure is \TV Series\The Flash (2014)\Season 03\file.ext. When trying to use that directory it failed. So, bringing it all back, I guess my questions are now: 1. Is my structure bad? If so, what would be recommended? The way it appears, shows can be split across drives without issue. 2. If the split level is an issue with the minimum split, can drives just be excluded from the share but still have their data shown in it? That way I can manually fill a drive and "turn it off" from being written to, but still have it show in the share. 3. If 1 & 2 don't solve it, I guess I am back to splitting up the space evenly and leaving this completely manual.
  17. So it's not working; maybe I am doing something wrong? On the share I set "Minimum Free Space" to 5GB for that specific share. My docker writes to that share specifically, and it placed (or tried to place) a 3GB file when only 1GB was left. Am I not understanding this correctly? Basically what I want is to not have to move data around because a drive fills up. In its current state, however, Sonarr still writes to that location regardless...
  18. Don't understand what you are trying to accomplish. You should just refer to everything as user shares and let unRAID worry about what drive anything is on. Set minimum free on each user share to be larger than the largest single file you will write to that share, and everything else should take care of itself. Honestly, I didn't know that setting existed, haha. So thank you! That was what was frustrating me, because without it things would never copy over, and it would backlog, etc. THANKS! Turn on Help in the webUI. Lots of things are explained for each page. It's on; I've just been here too long, before Help existed. I set all that stuff up in 4.0.
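The minimum-free advice above works because unRAID checks a disk's remaining free space before a file is opened and cannot know the file's final size; a disk is only skipped once its free space has already dropped below the minimum. A toy sketch of that per-disk decision (function name and numbers are illustrative, not unRAID's code):

```shell
# pick FREE_BYTES MIN_FREE_BYTES
# Echoes "write" if the disk would still be chosen, "skip" otherwise.
pick() {
    if [ "$1" -ge "$2" ]; then echo write; else echo skip; fi
}

pick 6000000000 5000000000   # 6GB free clears a 5GB minimum
pick 2000000000 5000000000   # 2GB free is under the minimum
```

This is why the minimum must be larger than the biggest single file you'll write: a disk with 6GB free still accepts a write, and if that write turns out to be a 7GB file it will run the disk out of space mid-copy.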
  19. Don't understand what you are trying to accomplish. You should just refer to everything as user shares and let unRAID worry about what drive anything is on. Set minimum free on each user share to be larger than the largest single file you will write to that share, and everything else should take care of itself. Honestly, I didn't know that setting existed, haha. So thank you! That was what was frustrating me, because without it things would never copy over, and it would backlog, etc. THANKS!
  20. I was wondering about some guidance on overall drive capacity and what is a safe percentage? I have for years been moving stuff around to keep each drive at basically the same capacity level, and I realized I should just take a drive out of rotation once it's hit a limit; that might be easier than moving data back and forth. If this isn't a good idea then I won't, but if it is, what's a good percentage? I have 4TB drives (end goal; 3 are still upgradeable from 2TB), 8 of them at the moment, and about 800GB free. Would performance be affected, or anything else, if I left say 10GB free on each drive? Should it be more? Is less fine? I just know if I keep a drive that low in rotation I run into space issues, since dockers don't understand not to use that drive.
  21. I just noticed (it could have been there for a while though) that I am seeing this in my unRAID logs. Not sure if this is docker related, box related, or what; any thoughts? Nov 2 13:55:49 Hades kernel: Plex Script Hos[26795]: segfault at 0 ip 00002ad34ecbf2f0 sp 00007ffe5d0916d8 error 4 in libc-2.23.so[2ad34eb7a000+1c0000]
  22. Any special permissions need to be set? I can't modify the .sh file to add my settings. The docker is shut down. Thank you for the docker though; can't wait to give it a try! Thanks, fixed now. Update the docker and delete the .sh file; it will then be recreated with the right permissions. Thanks, looks good! Will this grab automatic updates? I know it's still in the works, so I wasn't sure if you are auto-grabbing updates or will need to do releases each time.
  23. Any special permissions need to be set? I can't modify the .sh file to add my settings. The docker is shut down. Thank you for the docker though; can't wait to give it a try!
  24. This would be a great docker for Plex people: ConvertToMKV https://forums.plex.tv/discussion/233133/convert2mkv-now-supporting-mp4-h264-and-x265-hevc-output-updated-13-10-16/p1 https://gitlab.com/ThatGuy/convert2mkv/tree/master How To: https://gitlab.com/ThatGuy/convert2mkv/wikis/howto