wickedathletes

Community Developer

Posts posted by wickedathletes

  1. I think the key question here is how your apps are referencing their config folders: /mnt/cache/apps or /mnt/user/apps?

     

    /mnt/cache/apps

     

    Has to be; mover only managed to move half of my Plex app folder before dying, and everything else is working fine.

     

    Is there a way to verify Plex is correct? I did have a lot of images that seemed "broken."

    Doesn't have to be /mnt/cache on the templates.  /mnt/user works just fine.

     

    Best guess on the broken images is that Plex is referencing /mnt/user (you can verify by editing the container) and unRAID is grabbing the version of the file(s) on the array and missing the symlinks, because only half the share got copied.

     

    Back to what I originally suggested.  Delete /mnt/user0/apps and restore again or, even better, delete /mnt/user/apps and restore again.  Either way you'll be back to where you were a couple of days ago.

     

    Awesome, I think I will give that a whirl in a day or so... because when sh*t hits the fan, it really hits the fan: of course I had a drive die at the same time, so right now I am preclearing and adding a replacement into the mix, and I definitely don't want to do anything right now.

  2. I think the key question here is how your apps are referencing their config folders: /mnt/cache/apps or /mnt/user/apps?

     

    /mnt/cache/apps

     

    Has to be; mover only managed to move half of my Plex app folder before dying, and everything else is working fine.

     

    Is there a way to verify Plex is correct? I did have a lot of images that seemed "broken."

     

    Or it's kind of not... crap. I see my downloads are going to my disk5 now... ugh...

  3. Doing it through mover will take forever due to how it operates, and then you will also have the problem that the appdata won't be in sync with how it's supposed to be (since the backup ran the day before you did the mover thing). That may not be a problem, but you could wind up in a situation where file A references file B, but B isn't there because mover never touched it and it wasn't in the backup set from the day prior.

     

    Personally, I wouldn't bother with the test.  But you do have the backup set just in case.

     

    Is this something that could wait until the next backup and then restore from that?

  4. "Share apps set to cache-only, but files / folders exist on the array"

     

    If I just delete what is on my array, will that go away? Is there a way to tell mover to stop thinking it's 50% done with a process? All of my files are on the cache drive now.

    As an experiment you could set appdata to "cache prefer" and run the mover again. It will take just as long, and you may still have to follow squid's directions, but it would be an interesting experiment. I suspect it will fail, but the resulting log file and summary of actions leading up to this could be useful for limetech.

     

    I am all for testing, but what happens if I nuke something in the meantime? Just use the CA backup and restore again? I don't want to hose my server; it's my production box, so to speak.

  5. So long as there's nothing else on the cache drive. 

     

    Sent from my LG-D852 using Tapatalk

     

    Everything is up and running now, thank you. Quick one:

     

    Because mover is like half finished now, I am getting this error in Fix Common Problems:

     

    "Share apps set to cache-only, but files / folders exist on the array"

     

    If I just delete what is on my array, will that go away? Is there a way to tell mover to stop thinking it's 50% done with a process? All of my files are on the cache drive now.

    This is actually a weird little edge case and I'm not 100% sure how the user system handles copying to a cache-only share when an identical file exists on the array.

     

    This is what I would do to make sure everything is OK:

     

    The apps folder sitting on the array is going to be from mover yesterday, and will be safe to kill.  Either use mc or the dolphin/krusader apps to kill the folder from /mnt/user0, or run:

    rm -rf /mnt/user0/apps

     

    Note that you must use user0 and not user to only delete the files on the array.

     

    Then do the restore again.  (Only files which are missing from the cache drive version will get copied)

     

    I'll have to play around tonight to see exactly what CA Backup does in this edge case, as I know it's not something that I ever tested.

     

    If I do the restore again but have changed things since then, is that a problem?
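    The cleanup suggested above can be sketched as a non-destructive check first. This is only an illustration of the idea in this thread (the share name apps and the /mnt/user0 path come from the posts above); verify everything on your own system before deleting anything:

    ```shell
    # Sketch of the cleanup described above -- check before deleting anything.
    # /mnt/user0 exposes only the array side of each share;
    # /mnt/user merges the array and cache copies together.
    share="apps"
    array_side="/mnt/user0/$share"

    if [ -d "$array_side" ]; then
      # Show what would be removed; the rm stays commented out until you are sure.
      du -sh "$array_side"
      # rm -rf "$array_side"   # deletes ONLY the array copies; cache files are untouched
    else
      echo "no array-side copy of $share"
    fi
    ```

    Once the array-side copy is gone, re-running the restore should only copy files missing from the cache version, per the note above.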

  6. So long as there's nothing else on the cache drive. 

     

    Sent from my LG-D852 using Tapatalk

     

    Everything is up and running now, thank you. Quick one:

     

    Because mover is like half finished now, I am getting this error in Fix Common Problems:

     

    "Share apps set to cache-only, but files / folders exist on the array"

     

    If I just delete what is on my array, will that go away? Is there a way to tell mover to stop thinking it's 50% done with a process? All of my files are on the cache drive now.

  7. This is all my syslog says:

     

    Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 72 bytes) in /usr/local/emhttp/plugins/dynamix/include/DefaultPageLayout.php(300) : eval()'d code on line 73

    Sure that's in your syslog and not on one of the UI pages?  If so, which page?

     

    Tools -> System Log.

     

    screen shot: https://drive.google.com/open?id=0B7baFOqwwO97dVMwVkZiRHRLU3M

     

    I think at this point I just want to know how to kill a mover job. If I reboot, will it kill it or will it just start up again?

    It's a UI page that's doing it.  If you reboot (and it works from the UI), then mover will not start back up again.

     

    I know what the error is (a PHP script attempted to allocate more memory than it was allowed to) and your only real recourse is to reboot.

     

    What I think happened here is that you have a sizeable Plex library, and mover logs everything, which basically means that your syslog wound up with an extra 200,000+ lines in it.

     

    Since you're already halfway done (hopefully), just start mover back up after the system reboots.

     

     

    Side Note: Using CA Backup to handle copying the appdata would have been

    - A far, far faster process, as it doesn't have the overhead that mover does

    - You wind up with a backup copy (the original cache drive) just in case something goes wrong

    - Doesn't log all moves into the syslog for this very reason (it logs them elsewhere)

     

    I used CA Backup and forgot my backup runs Wednesday AM, so I have a nice clean fresh backup, which is why I don't care about mover at this point :) and yeah, my Plex library is kind of large :)

     

    So since I have a backup point, can I just plug in my new cache drive, format it, and restore?

     

     

  8. This is all my syslog says:

     

    Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 72 bytes) in /usr/local/emhttp/plugins/dynamix/include/DefaultPageLayout.php(300) : eval()'d code on line 73

    Sure that's in your syslog and not on one of the UI pages?  If so, which page?

     

    Tools -> System Log.

     

    screen shot: https://drive.google.com/open?id=0B7baFOqwwO97dVMwVkZiRHRLU3M

     

    I think at this point I just want to know how to kill a mover job. If I reboot, will it kill it or will it just start up again?

  9. I am in the process of moving my data off my cache to put on a new cache drive. The mover appears stuck; it hasn't written to the log in over an hour.

     

    Dec 8 13:47:33 Hades root: cd+++++++++ apps/Plex/Library/Application Support/Plex Media Server/Media/localhost/a/da0557cb221d85715fb47d468e8bd5cdbb0ac65.bundle/Contents/Thumbnails/

     

    (That folder has 1 image in it.)

     

    I was using the instructions here: https://lime-technology.com/wiki/index.php/Replace_A_Cache_Drive

     

    I do have an app folder backup from last night, so I'm not sure how I can kill this process, whether I shouldn't, or what. It is writing to my drive with space on it, so I'm not sure why it just died out. Any help appreciated; my server is dead in the water as we speak.

  10. Split level doesn't affect reading, only writing new files.

     

    I know, but with it set to split at level 1, the show is still being split across different drives... Should that not have been the case?

     

    Ideally, with my scenario, am I to assume switching it to level 2 should fix my issue anyways?

  11. Reread the split level help VERY carefully. If you allow only the top level to split, your shows can't get spread as needed.

     

    But they are; I have The Flash on numerous drives without issue. Is it because the split should actually be at the Season folder and not the show? And if that is the case, I guess I would want 2?

     

    \SHARENAME\Show Name\Season ##\file.ext

    \TV Series\The Flash (2014)\Season 03\file.ext

     

    \\SHARENAME\level 1\level 2\file.ext

     

    This way multiple seasons can be on multiple disks?
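    One way to reason about this, assuming split level counts directory levels below the share root (the path and the awk one-liner are illustrative, not an unRAID tool):

    ```shell
    # Count how many directory levels below the share a file sits.
    # With "split only the top level" (level 1), everything under a given
    # show folder must stay together on one disk; at level 2, each season
    # folder can land on a different disk.
    path="The Flash (2014)/Season 03/file.ext"   # relative to the share root
    dirs="${path%/*}"                            # drop the filename
    depth=$(echo "$dirs" | awk -F'/' '{print NF}')
    echo "directory depth below share: $depth"   # prints 2
    ```

    So with this layout, a split level of 2 is what allows Season 01 and Season 02 to sit on different disks while each season's files stay together.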

  12. When it begins to write a file, it doesn't know how large the file will be. You set Minimum Free to be larger than the largest file you ever expect to write.

     

    Just for clarification, here is how it is supposed to work:

     

    Say you set Minimum Free to 5GB, and there is 6GB left. You write a 4GB file. Since there is more than 5GB left, it writes the 4GB file, then there will be only 2GB left. If you try to write another file, it won't write it there because the space remaining is less than the Minimum Free.
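    That rule can be sketched as a tiny shell simulation, using the numbers from the example above (the try_write function is made up for illustration, not an unRAID command):

    ```shell
    # Simulate the Minimum Free rule: a disk only accepts a new file if its
    # free space is currently above the threshold; the size of the incoming
    # file itself is never considered.
    min_free=5   # GB, the share's Minimum Free Space
    free=6       # GB currently free on the disk

    try_write() {
      size=$1
      if [ "$free" -gt "$min_free" ]; then
        free=$((free - size))
        echo "wrote ${size}GB, ${free}GB left"
      else
        echo "refused: ${free}GB free is not above the ${min_free}GB minimum"
      fi
    }

    try_write 4   # accepted: 6GB free > 5GB minimum, leaves 2GB
    try_write 1   # refused: 2GB free is below the minimum, even for a 1GB file
    ```

    This is why Minimum Free must be set larger than the largest file you expect to write: the check happens before the write, when the file's size is unknown.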

     

     

    The other thing that will affect this is Split Level. Split Level has precedence over Minimum Free, so if Split Level says the file belongs with other files, it will write it where the other files are.

     

    My real world example:

     

    Minimum Free set: 5GB. 2GB was free. It tried writing a 2.2GB video file to the drive. It failed and left a .partial~ file (I assume this is a Sonarr thing). The split level I have set on my TV Series share is "Automatically split only the top level directory as required."

     

    My directory structure is \TV Series\The Flash (2014)\Season 03\file.ext

     

    When trying to use that directory it failed.

     

    So, bringing it all back I guess my questions are now:

     

    1. Is my structuring bad? If so, what would be recommended? The way it appears, shows can be split per drive without issue.

    2. If split level takes precedence over Minimum Free, can drives just be excluded from the share but still have their data shown in it? That way I can manually fill a drive and "turn it off" from being written to, but still have it show in the share.

    3. If 1 & 2 don't solve it, I guess I am back to splitting up the space evenly and leaving this completely manual  :-\

  13. Agree -- just let UnRAID handle it.

     

    Technically there's no reason you can't simply fill a drive up entirely.  On my media server (mostly static content), 5 of the 14 drives have less than 1GB of free space.  Clearly they're never written to anymore -- and once they got down to a few GB I selectively saved the last few DVDs to them to take max advantage of the space.  There's no read performance issue with completely full drives.  There is a write performance "hit" with very full drives that use Reiser; much less so with XFS.

     

    So it's not working; maybe I am doing something wrong?

     

    On the share I set "Minimum Free Space" to 5GB for that specific share. My docker is writing to that share specifically, and it placed (or tried to place) a 3GB file when only 1GB was left. Am I not understanding this correctly?

     

    Basically what I want is to not have to move data around because a drive fills up. In its current state, however, Sonarr still writes to that location regardless...

  14. I was wondering about some guidance on overall drive capacity and what a safe percentage is. For years I have been moving stuff around to keep each drive at basically the same capacity level, and I realized I should just take a drive out of rotation once it hits a limit; that might be easier than moving data back and forth. If this isn't a good idea then I won't, but if it is, what's a good percentage?

     

    I have 4TB drives (the end goal; 3 are still upgradeable from 2TBs), 8 of them at the moment, and about 800GB free. Would performance or anything else be affected if I left, say, 10GB free on each drive? Should it be more? Is less fine? I just know if I keep a drive that low in rotation I run into space issues, since dockers don't understand not to use that drive.

    Don't understand what you are trying to accomplish. You should just refer to everything as user shares and let unRAID worry about what drive anything is on. Set minimum free on each user share to be larger than the largest single file you will write to that share and everything else should take care of itself.

     

    Honestly I didn't know that setting existed haha. So thank you! :) That was what was frustrating me, because without it things would never copy over, and it would backlog etc etc. THANKS!

    Turn on Help in the webUI.  ;)

    Lots of things are explained for each page.

     

    It's on; I've just been here too long, from before Help existed. I set all that stuff up in 4.0. :)

  15. I was wondering about some guidance on overall drive capacity and what a safe percentage is. For years I have been moving stuff around to keep each drive at basically the same capacity level, and I realized I should just take a drive out of rotation once it hits a limit; that might be easier than moving data back and forth. If this isn't a good idea then I won't, but if it is, what's a good percentage?

     

    I have 4TB drives (the end goal; 3 are still upgradeable from 2TBs), 8 of them at the moment, and about 800GB free. Would performance or anything else be affected if I left, say, 10GB free on each drive? Should it be more? Is less fine? I just know if I keep a drive that low in rotation I run into space issues, since dockers don't understand not to use that drive.

    Don't understand what you are trying to accomplish. You should just refer to everything as user shares and let unRAID worry about what drive anything is on. Set minimum free on each user share to be larger than the largest single file you will write to that share and everything else should take care of itself.

     

    Honestly I didn't know that setting existed haha. So thank you! :) That was what was frustrating me, because without it things would never copy over, and it would backlog etc etc. THANKS!

  16. I was wondering about some guidance on overall drive capacity and what a safe percentage is. For years I have been moving stuff around to keep each drive at basically the same capacity level, and I realized I should just take a drive out of rotation once it hits a limit; that might be easier than moving data back and forth. If this isn't a good idea then I won't, but if it is, what's a good percentage?

     

    I have 4TB drives (the end goal; 3 are still upgradeable from 2TBs), 8 of them at the moment, and about 800GB free. Would performance or anything else be affected if I left, say, 10GB free on each drive? Should it be more? Is less fine? I just know if I keep a drive that low in rotation I run into space issues, since dockers don't understand not to use that drive.

  17. Convert2MKV: Convert2MKV will convert your videos in a folder to mkv or mp4

     

    Description: A docker to convert your videos to mkv or mp4

    Application: https://gitlab.com/ThatGuy/convert2mkv - https://forums.plex.tv/discussion/233133/convert2mkv-now-supporting-mp4-h264-and-x265-hevc-output-updated-18-10-16/p1

    Docker Hub: https://hub.docker.com/r/bjonness406/convert2mkv/

    Github: https://github.com/bjonness406/Convert2MKV/

     

    Thanks to @ntrevena (plex forum) for the script itself!

     

    Do any special permissions need to be set? I can't modify the .sh file to add my settings. The docker is shut down. Thank you for the docker though; can't wait to give it a try!

    Thanks, fixed now.

    Update the docker, and delete the .sh file. It will then be recreated with the right permissions.

     

    Thanks, looks good! Will this grab automatic updates? I know it's still in the works, so I wasn't sure if you are auto-grabbing updates or will need to do releases each time.

  18. Convert2MKV: Convert2MKV will convert your videos in a folder to mkv or mp4

     

    Description: A docker to convert your videos to mkv or mp4

    Application: https://gitlab.com/ThatGuy/convert2mkv - https://forums.plex.tv/discussion/233133/convert2mkv-now-supporting-mp4-h264-and-x265-hevc-output-updated-18-10-16/p1

    Docker Hub: https://hub.docker.com/r/bjonness406/convert2mkv/

    Github: https://github.com/bjonness406/Convert2MKV/

     

    Thanks to @ntrevena (plex forum) for the script itself!

     

    Do any special permissions need to be set? I can't modify the .sh file to add my settings. The docker is shut down. Thank you for the docker though; can't wait to give it a try!

  19. I don't understand your description. What do you mean by

    leaving myself out of parity

     

    Sorry. Right now I have a drive missing (it failed), and I am waiting for the new drive to arrive (which is today, but it will still need a preclear, etc.) to bring my parity back to a healthy state. So if I lose another drive, my parity protection is broken, correct? Under that assumption, I currently have a second drive throwing errors, which I suspect is a bad seating in my NAS or a loose connection. What I was mainly wondering is: if I put my drive 6 into my drive 3 bay, will unRAID still see it as drive 6, or will it think it's drive 3 and start wigging out?

     

    Parity Drive 1 is fine

    Drive 1 of 7 is fine

    Drive 2 of 7 is fine

    Drive 3 of 7 is dead and has been removed - waiting on replacement to arrive

    Drive 4 of 7 is fine

    Drive 5 of 7 is fine

    Drive 6 of 7 is currently throwing errors; wondering if it's a bad connection

    Drive 7 of 7 is fine

     

     

    Does that make any more sense?