Posts posted by Leifgg

  1. 1 hour ago, Fma965 said:

    I'm aware of this; I was asking specifically how to go back a version (how to find the version number string). As you can see in my edit in the quoted post, I have already found it.

     

    Thanks.

    Have a look in your appdata share and open your Plex folder. Browse to Library/Application Support/Plex Media Server/Crash Reports/

     

    Here you have folders for each version you have had installed, and the name of each folder is the same as the version number string. Just use this name to specify which version you would like to use.
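    If you have shell access, a quick way to list those version folders is shown below. The appdata path is an assumption (the common unRAID layout); adjust it to your own share:

```shell
# NOTE: this path is an assumption based on the default unRAID appdata layout;
# adjust it to match your own appdata share and Plex folder name.
CRASH_REPORTS="/mnt/user/appdata/Plex/Library/Application Support/Plex Media Server/Crash Reports"

# Each sub-folder is named after a version string you can use to pin the container.
ls "$CRASH_REPORTS" 2>/dev/null || echo "Folder not found - adjust CRASH_REPORTS to your appdata share"
```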

  2. 45 minutes ago, carlos28355 said:

    tried a force update and same issue...thanks anyways

     

     

    I'm not sure what I'd like to do either, ha ha. I'm just trying suggestions to get this to work. How much RAM would you recommend for doing the transcoding? I'm going to get an SSD cache drive soon but can't right now. Do you have any suggestions that I could try to get my Plex working again? Thanks

     

    What I would recommend as a first step is to set up transcoding to disk. If you have a cache disk, use that one (any type; it doesn't need to be an SSD); if not, transcode to the array. Ask if you need help.

    You shouldn't try transcoding to RAM until you have Plex working first.

     

    You can read more about transcoding to RAM here:

     

  3. 4 hours ago, carlos28355 said:

    ok, please forgive my ignorance. Is this correct? I also changed Plex back to /transcode

     

    How does this look? If it looks correct, I still have the problem :S

    [Attached screenshots: transcode.jpg, tmp.jpg, plex.jpg]

     

    Not sure I understand what you would like to do: transcoding to disk, or to RAM.

     

    If you have the Transcoder temporary directory in Plex set to /transcode then you should also have a folder mapping with a Container Path: set to /transcode

     

    If the Transcoder temporary directory in Plex is set to something else then the folder mapping for the Container Path: should be set to the same.

     

    If they aren’t the same, you will transcode to disk, in this case inside the docker image. Depending on the size of the docker image, you could run out of space when transcoding.

     

    The setting that configures the transcoding location is the Host Path:. If it’s set to /tmp you will transcode to RAM, and hopefully you have enough RAM to do this.

     

    If you prefer transcoding to disk, you need to configure Host Path: with the location (folder) that you would like to use. It could be something like /mnt/user/appdata/transcode or similar, preferably a folder located on an SSD, but that’s up to you.
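    One way to double-check that the Container Path: really points where you think it does is to inspect the running container's mounts. This is just a sketch; the container name plex is an example, not necessarily yours:

```shell
# Print each mount of the container as "host path -> container path".
# "plex" is an example container name; substitute your own.
docker inspect -f '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{"\n"}}{{end}}' plex
```

    The line containing /transcode on the right-hand side should show your intended Host Path: on the left.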

     

  4. 2 hours ago, tazire said:

    Just a question in relation to the transcode drive. I had a spare 120 GB SSD lying around and I'm looking to use it specifically for transcoding. I'm trying to lighten the writes on my cache drives as much as I can... I have it mounted with Unassigned Devices and I'd like to now use it for the transcodes... I created a container path pointing to it and put the corresponding path into Plex, but when monitoring it, it doesn't appear to be using it at all. I also tried changing the TRANS_DIR container variable, but that just broke it, which I kinda expected.

     

    Anybody know how I go about getting this working?

     

    Here is an example…. I don’t use the binhex container right now, but it shouldn’t be too different.

    I have configured Plex to use the folder /transcode (this is what the Plex app in the container can see).

     

    [Screenshot: Plex transcode setting]

     

    Then I have a folder mapping for the Plex container that maps /transcode inside the container (Container Path:) to a location outside the container (Host Path:).

    In this case I have mounted an unassigned disk (not belonging to the array and not a cache disk). When browsing, unassigned disks will show up at /mnt/disks.

    I have labelled my unassigned disk Unassigned (otherwise I think it will show up with its serial number). I have also created a folder Plex_transcode on that disk that I will use.

     

    [Screenshot: folder mapping]

     

    The Access Mode: should also be set to RW/Slave

     

    [Screenshot: Access Mode set to RW/Slave]
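    On the command line, the RW/Slave Access Mode corresponds to options on the volume mapping. A sketch of the equivalent docker run flag; the image name and both paths are examples, not your exact setup:

```shell
# rw,slave gives read/write access with slave mount propagation, so a disk
# mounted on the host after the container starts still becomes visible inside it.
# The image name and both paths below are examples; adjust them to your setup.
docker run -d --name plex \
  -v /mnt/disks/Unassigned/Plex_transcode:/transcode:rw,slave \
  plexinc/pms-docker
```

    Slave propagation matters here because Unassigned Devices mounts the disk under /mnt/disks on the host, possibly after the container is already running.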

     

    /Leif

     

  5. 1 hour ago, binhex said:

     

    It's what Plex calls "early access", which gives you access to features before they end up in 'plex'. In my opinion this really means beta/release candidates for those who are impatient and can't wait for feature X in 'plex'. You will note that the plex pass version generally is ahead of plex, but plex will catch up as the features get pushed from plex pass to plex.

     

    And not to forget, having a Plex Pass also supports the developers in keeping a great product getting even better…. ;-)

  6. 5 minutes ago, wgstarks said:

    Thanks. I’ll give this a shot. Since my server maxes out at 32GB of RAM, if your worst case is true I wouldn’t be able to transcode many of my movies. I’m guessing almost all of the BD rips are over 32GB.

     

    I have 32 GB as well, with a Windows 10 VM running (4 GB) and a few dockers, and I am able to do transcoding to RAM, but my Samsung TV plays most formats natively.

    I also have remote friends, but they have bandwidth limitations, so files are likely to decrease in size after transcoding.

     

  7. 47 minutes ago, wgstarks said:

    Roughly how much ram is needed for this? I’m wondering if 8GB is enough?

     

    I was just waiting for that one coming…. There are so many parameters involved that it’s hard to predict.

     

    There are typically three reasons for transcoding: the video stream, the audio stream and subtitles. In addition, transcoding might be needed due to the bit rate being too high. All of the above need to be within what the client player can handle, so obviously it will be different for different clients.

     

    The number of simultaneous clients playing can increase the required space needed as well.

     

    It should also be noted that Plex does the transcoding “on the fly” in smaller data blocks, and they are kept for a while (probably the whole session) so that you can stop playing, reverse back in the movie and start playing again.

     

    I haven’t checked actual file sizes for a transcoded video, but I believe a “worst case scenario” would be the total size of the actual movie; once again, I never checked it myself.

     

    My suggestion is that you start with transcoding to a folder first, play a full video and look at the files in the transcoder folder to get a feel for what it actually does with different formats and clients. Let that decide the way to go, and keep in mind that running out of memory could eventually crash Plex, and potentially the server as well.
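    To follow the suggestion above, you can watch the transcode folder grow while a video plays. The path is an example; use whatever your Host Path: mapping points at:

```shell
# Example path; replace with the folder your Host Path: mapping points at.
TRANSCODE_DIR=/mnt/user/appdata/transcode

# Re-print the folder's total size every 5 seconds while a video is playing
# (press Ctrl-C to stop watching).
watch -n 5 du -sh "$TRANSCODE_DIR"
```

    Playing the same video from different clients will show how much the transcoded output varies in size.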

  8. 21 hours ago, BendikHa said:

    Under "Container Path" I added "/transcode" and under "Host Path", I added: "/tmp". Was that the right way to do it? 

     

    That should be correct; also make sure that the "Transcoder temporary directory" in Plex is set to /transcode.

    Transcoding can require a lot of space, so there is a risk of running out of RAM.
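    Before pointing the transcoder at /tmp, you can check how much space is actually available there. On stock unRAID /tmp is typically RAM-backed, but verify this on your own system:

```shell
# Show free space on /tmp; on unRAID this is usually backed by RAM, so the
# "Avail" column is roughly the headroom you have for transcoding.
df -h /tmp
```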

     

  9. 12 hours ago, naturalcarr said:

    Hello, I want to roll back to a previous version of PMS; is there a version list somewhere? The one on the Docker hub shows the updates to the Docker hub (i.e. 95, 96, 97, etc.) but I want to find a list of docker versions (like 1.2.7.2987-1bef33a).

     

    Go to the appdata share and look in Plex\Library\Application Support\Plex Media Server\Crash Reports

    You will find folders for your previous versions, named after the release version.

  10. 11 hours ago, dmacias said:

    So it just stops right there and won't shut down even after that? Do you also have fan control running? Only thing you could check is if you log back in remotely and run "ps aux | grep ipmi" and see if ipmiseld, ipmifan or 2 instances of ipmitail are still running. I have been moving everything from my C2750 to my X10SLL-F and have shutdown a dozen times without fail. I'll double check though and record the console as it shuts down. 

     

    Edit. You could also test starting and stopping the services manually

    /etc/rc.d/rc.ipmiseld start/stop

    /etc/rc.d/rc.ipmitail start/stop

    ipmifan --daemon

    ipmifan --quit

     

    Edit 2. I think it's ipmifan that's not shutting down. I changed some command line variables and the shutdown script uses -q instead of --quit.

     

    Many thanks for the update. I have installed it on my Test server and enabled Event notifications, the Footer setting and Fan control, and everything looks fine, including Power down.

    I will test my Main and Backup servers later as well but I don’t expect any problems.

    Once again thanks!

    /Leif

     

  11. I am having some issues when powering down one of my servers; it simply hangs (see the pic). I had just updated the BMC firmware, so I thought that could be the problem. I downgraded the firmware, but no luck: still the same problem.

    I decided to power down my two other servers and ran into the same problem.

    I decided to uninstall the IPMI plugin on all three servers, and they all power down as they should. Reinstalling the plugin brings the problems back again.

    Can I get some guidance for troubleshooting this, please…. (my signature is updated with my current config)

    [Screenshot: unRAID2 pwr down.png]

  12. 1 minute ago, Leifgg said:

     

    My settings are:

    
              <dataDeDupAutoMaxFileSize>1</dataDeDupAutoMaxFileSize>
              <dataDeDupAutoMaxFileSizeForWan>1073741824</dataDeDupAutoMaxFileSizeForWan>

    I have disabled data de-duplication for files above 1073741824 bytes (1 GByte), since these are often media files with (more or less) unique content; smaller files get de-duplicated.

    De-duplication uses a lot more CPU power, so it’s easy to see which files actually get de-duplicated.

    I uploaded 15 TByte in 4 weeks on a 100 Mbit line.

     

    I set 3 GByte as max memory for my 15 TByte and 800,000 files, and that works for me. The recommendations look like they are a bit "on the safe side".

  13. 4 hours ago, Helmonder said:

     

    I am using: jlesage/crashplan-pro

     

    But hey !  If its on the host then I can do that right away, thanks !!

     

    EDIT: Just did that; it did not immediately work, CrashPlan also started to restart. So I also increased the memory to 8 gigs (it has been working on 4 gigs so far and I have backed up 14 terabytes of files to CrashPlan without a problem...)

     

    CrashPlan's advice is to have 1 gig per terabyte, but that is based on the average number of files on a system, and not on media storage (which typically is a lot of data but not so many files / large files).

     

    If the memory usage extrapolates in the same way, I should be fine until I hit 35 TB (in the already backed-up set there is also my music library, which is a lot of files again).

     

    My settings are:

              <dataDeDupAutoMaxFileSize>1</dataDeDupAutoMaxFileSize>
              <dataDeDupAutoMaxFileSizeForWan>1073741824</dataDeDupAutoMaxFileSizeForWan>

    I have disabled data de-duplication for files above 1073741824 bytes (1 GByte), since these are often media files with (more or less) unique content; smaller files get de-duplicated.

    De-duplication uses a lot more CPU power, so it’s easy to see which files actually get de-duplicated.

    I uploaded 15 TByte in 4 weeks on a 100 Mbit line.
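    As a sanity check of the threshold used above: 1 GByte here means 1 GiB, i.e. 1024 cubed bytes, which is exactly the value in the XML:

```shell
# 1 GiB expressed in bytes; matches the 1073741824 value in my.service.xml.
echo $((1024 * 1024 * 1024))   # prints 1073741824
```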

  14. On 10/19/2017 at 10:47 AM, mbc0 said:

    Have CrashPlan done something to prevent the fast upload we could achieve by editing my.service.xml? I used to get 18mb/s upload when changing AutoMaxFileSizeForWan from 0 to 1, but I am now getting 1mb/s upload on CrashPlan Pro, which will take 3 years to complete instead of a month! I also notice the <dataDeDupAutoMaxFileSize>1073741824</dataDeDupAutoMaxFileSize> line; I do not remember this. Can anyone else here still take advantage of higher upload speeds with CrashPlan Pro like they did on CrashPlan Home?

     

              <dataDeDupAutoMaxFileSize>1073741824</dataDeDupAutoMaxFileSize>
              <dataDeDupAutoMaxFileSizeForWan>1</dataDeDupAutoMaxFileSizeForWan>
              <dataDeDuplication>MINIMAL</dataDeDuplication>

    Yes, you can edit the xml file in CrashPlan Pro in the same way.

  15. This container will auto-update (or actually the app will). Mine updated in June. Check history.log in the log folder.

    I 06/15/17 12:32PM Downloading a new version of CrashPlan.
    I 06/15/17 12:32PM Download of upgrade complete - version 1436674800483.
    I 06/15/17 12:32PM CrashPlan has downloaded an update and will restart momentarily to apply the update.
    I 06/15/17 12:33PM Installing upgrade - version 1436674800483
    I 06/15/17 12:33PM Upgrade installed - version 1436674800483
    I 06/15/17 12:33PM CrashPlan started, version 4.8.3

     
