
Posts posted by jumperalex

  1. If you don't have an SSD or want to limit IO contention to the device used for transcoding, this tweak could help.

     

     Even on an SSD, if you have a bunch of VMs or containers all running on the same device and they are doing lots of IO work, this could impact performance in those instances. You'd have to be doing some pretty intense stuff, but it is possible.

     

    Now that's a reason I can get behind. I personally don't have that concern, but I can surely see others might.

     

     And at the end of the day, flash-based memory will always eventually suffer burnout. Over years and years of use, between both Plex transcoding and front-ending writes to the array, this could add up.  So am I trying to say that by moving transcoding to RAM you "triple SSD life"?  No.  I'm simply stating that RAM does not ever burn out from writes...ever.  So if you are RAM rich, why not implement this tweak? I don't see anything to lose here unless you just don't have enough memory.

     

     Of course. As with all things, it is a question of degrees and use case. I can see the benefits; I just don't know that I'd agree they are very big. The same can be said for the downside. Most people probably won't run into problems with enough RAM / few enough streams. Of course, "enough" RAM is relative: it depends on how many of those other functions (the ones that might create I/O contention) someone is running and how RAM-intensive they are.

     

     I'm really not trying to be completely contrary. I think it is great that you've presented the option, and there are surely valid use cases. My nitpick is only that SSD endurance statistically isn't really one of them. Yes they wear out, but they do it slowly enough not to matter when even a high-use case works out to roughly 20 years of endurance. Cut that in half, or even to a quarter, and you're still looking at a practical useful life beyond how long most of us will keep these drives before upgrading.

     

     Ok, I don't want to beat a dead horse, so I'll bow out until I have something else useful to add to the conversation.  Again, thank you for the attention to Plex; no matter what, I DO appreciate that.

  2. I mean sure, there are other write operations hitting our SSDs, but I think it is fair to say that even if we doubled or tripled our daily SSD write throughput, we're still safe from hitting the endurance limits.

     

     Thank you. There's so much FUD about SSD wear that I see all sorts of goofy workarounds in lots of places. Your write-up was quite succinct.

     

     I think the biggest driver of that is confirmation bias, plus people forgetting that stuff sometimes fails early. They hear about, or experience, an SSD failure and it validates their feeling that SSDs must be treated with kid gloves. But when an HDD fails they call it what it is: poor QC, poor shipping, dumb luck, whatever. Sure, there were some models known for having problems [cough] OCZ [cough], but even those were overblown anecdotes versus actual data, and they were usually failures due to poor firmware, not wear of the actual flash. In any case, it doesn't hurt to have options, so I can't argue with that.

  3. Oh, and you should probably read through that link you posted.  I don't think that was a statement against using /tmp in general, just one guy's issue.  Also, the recommendation in there about "making it as large as your largest piece of media" is just nonsense.  The transcoder doesn't need that much space, as its default setting is to only transcode 60 seconds of content at a time.

     

     Yeah, that was just one link from that search query. Others, like yours, reported success, though they generally admitted to not finding much benefit (usually in stream-initiation speed, which was their goal).

     

     It's not a horrible idea, and you are right that flash does eventually wear out. I just don't know that it wears enough to matter or to drive a change absent other benefits. Change for change's sake and all that [shrug].

     

     For sure, I'm glad to see you posting the method, regardless of use case, because it means Limetech is "on it" with regard to Plex :)

     

     archdraft: if you look at the link / search term I posted, you'll see that it does indeed apply as an alternative method for a general Linux install.  You'll also see discussions about using ramfs vs tmpfs (iirc) and the possibility of rollover into swap if you run out of RAM, etc. That means you need to have swap, of course; I do not, but I'm also not planning to move transcoding to RAM right now, as you can probably tell.

     

     StevenD: can you characterize the memory usage with those three streams? Keep in mind you need to let them run to completion to get the full story: Plex does not discard the .ts segments until the stream is stopped, so usage will grow and grow until that point.
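
     For anyone who wants to measure that themselves, here is a minimal sketch of how a RAM-backed transcode directory could be set up and watched; the mount point, the 4g size cap, and the 5-second refresh are made-up example values, not anything Plex or unRAID requires:

     # create a size-capped tmpfs for Plex transcoding (size is an arbitrary example)
     mkdir -p /tmp/plex-transcode
     mount -t tmpfs -o size=4g tmpfs /tmp/plex-transcode
     # point Plex's transcoder temp directory at /tmp/plex-transcode in its settings,
     # then watch how much space the .ts segments actually consume during a stream
     watch -n 5 df -h /tmp/plex-transcode

     Because it is tmpfs rather than ramfs, the size cap is enforced and pages can spill into swap if you have any configured, which is exactly the trade-off discussed in those threads.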

  4. hmmm an interesting think piece. I can certainly see the benefits in some cases, but I would disagree that SSD wear is of any real concern. By your own stats you're only looking at writing 430 MB to the SSD. I don't know how long that video is, so I started from your 12 Mbps value (a reasonable, if somewhat high, one) and turned that into 129,600 MB/day, or about 127 GB/day, if you were to run a single stream for 24 hours non-stop.

     

     Per this article http://techreport.com/review/27062/the-ssd-endurance-experiment-only-two-remain-after-1-5pb it is reasonable to expect ~1 PB of writes before seeing the start of errors (some drives a bit less, some drives MUCH more). Based on 1 PB = 1,048,576 GB, that means we can push a 127 GB/day stream, non-stop, for about 8,256 days, or 22.6 years (a quick back-of-the-envelope version of this math is at the end of this post).

     

     I mean sure, there are other write operations hitting our SSDs, but I think it is fair to say that even if we doubled or tripled our daily SSD write throughput, we're still safe from hitting the endurance limits.

     

    My real concern with transcoding to RAM is that pushing three or four simultaneous transcoded streams might stretch the memory limits of some people, though it is unlikely for most.

     

     I will add that even in the Plex forums there are discussions about dealing with /tmp (iirc?) writing to RAM instead of disk and being a problem. This is just the first link I found in a Google search for "plex /tmp ram": https://forums.plex.tv/index.php/topic/119669-issues-with-tmp-on-ram/
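
     And here is the back-of-the-envelope endurance math from above as a quick shell snippet, so anyone can plug in their own bitrate; the 12 Mbps stream and the ~1 PB endurance figure are just the example numbers used above, not guarantees for any particular drive:

     # rough estimate: days of continuous 12 Mbps transcode writes before ~1 PB total
     awk 'BEGIN {
         mbps = 12                              # stream bitrate in megabits per second
         gb_per_day = mbps / 8 * 86400 / 1024   # ~127 GB written per day
         days = 1048576 / gb_per_day            # 1 PB = 1,048,576 GB
         printf "%.0f GB/day -> %.0f days (%.1f years)\n", gb_per_day, days, days / 365
     }'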

  5. Hmm, interesting. Back in the early 6beta days there was an inverse version of this with AMD: the CPU was not ramping up enough and was actually impacting parity checks. At first it seemed unbelievable that a 4-core 975 was unable to keep up even downclocked, but sure enough a tweak to the ondemand settings (I still have them commented out in my go script) ramped up the CPU during parity checks (only just barely; you could see a dip occasionally), and parity then ran at the same speed as with the governor set to performance.

     

     Not that it is really related, just an anecdote of how touchy cpu governors can be.
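
     For reference, the kind of ondemand tweak that lived in my go script looked roughly like the sketch below; the exact values are illustrative from memory, and the sysfs paths only exist when the ondemand governor is actually in use on your kernel:

     # make the ondemand governor more eager to ramp up
     for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
         echo ondemand > "$g"
     done
     echo 1  > /sys/devices/system/cpu/cpufreq/ondemand/io_is_busy      # count I/O wait as load
     echo 50 > /sys/devices/system/cpu/cpufreq/ondemand/up_threshold    # ramp up at ~50% load instead of the default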

  6. UPDATE 3 (by JonP): Sorry for editing someone else's post, but I thought it'd be good to post an update here in the OP for others who are seeing this issue.  Eric has a potential fix that he shared further down in this thread.  Here's the info:

     

     In KVM mode, there might be a CPU scaling driver issue with certain hardware combinations.  One of those drivers is called intel_pstate.  This is the chosen driver if your Intel CPU is Sandy Bridge (2011) or newer.  On my Haswell-class CPU (i7-4771) the intel_pstate driver is too sensitive and seems to keep the CPU frequency near the max even when idle, though occasionally it does scale the frequency down.

     

     You can disable the intel_pstate driver by editing your /boot/syslinux/syslinux.cfg and adding an intel_pstate=disable parameter to the append line, as shown below:

     

     ...
     label unRAID OS
       menu default
       kernel /bzimage
       append intel_pstate=disable initrd=/bzroot
     ...

     

     

     Save the file, stop the array, and reboot unRAID.  Doing this on my Haswell machine caused it to use the acpi-cpufreq scaling driver instead of the intel_pstate one.  It scales the frequency down like a rockstar now, usually keeping it around 800-1000 MHz at idle.

     

     On the flip side, my other test machine, with a year-older Intel CPU (i5-3470), was able to scale down to 1600 MHz (its minimum) pretty consistently when using the intel_pstate driver... but when I disabled intel_pstate there wasn't a scaling driver available at all.  For some reason the acpi-cpufreq driver wasn't compatible with this CPU.  Your mileage may vary.

     

     Give this a try and let me know if it helps you.  Either way, whether it helped or not, let me know which CPU you tried it with, using this command:

    grep -m 1 'model name' < /proc/cpuinfo

     

     UPDATE 2: I spoke too soon, and it seems there are indeed some sub-optimal configurations with regard to Intel CPUs and the KVM (not Xen?) environment. Either way, Limetech is aware and working the issue. There are also some tweaks to be found in this thread that may help in the meantime.

     

     UPDATE: this appears to be only a cosmetic issue, in that the GUI is not polling the right place for data. Depending on whether you are using Xen or not, there are methods (in this thread) to confirm that your CPU is in fact ramping up/down as expected.

     

    =================================

    The subject really does say it all. My server specs are in my sig.  However, some additional info:

     

    From the Dashboard I can see my cpu is running at full speed even when load is very low.

     

     To confirm it was not just a GUI issue, I went into the console to look at the CPU frequency, but I can't seem to pull that info. Looking at my old go file I can see I was touching /sys/devices/system/cpu/cpufreq to modify my ondemand parameters.  That folder is no longer present.

     

     Looking at http://docs.slackware.com/howtos:hardware:cpu_frequency_scaling I'd expect to see folders like /sys/devices/system/cpu/cpu*/cpufreq, but there aren't any.

     

    Did something fail to get loaded into the kernel?
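
     For anyone checking the same thing, here is the minimal sketch of where to look; these sysfs entries only exist when a cpufreq scaling driver is actually loaded, which is exactly the question here:

     # which scaling driver and governor are active (missing files = no driver loaded)
     cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver
     cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
     # watch the real clock speed instead of trusting the GUI
     watch -n 1 "grep 'cpu MHz' /proc/cpuinfo"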

  7. First, I'd like to propose a simplification in the "plugin" installer script: do *not* store any -stale- plugins, just forget/ignore older plugin versions. I think that is less confusing for the end-user.

     

     What do we do when, not if, there is a bug in the updated plug-in and we need to roll back to a working version?

     

     Or is that what you mean by forget/ignore? If so, what exactly do you mean by "forget/ignore"?

  8. Another thumbs up for Checksum.  Go ahead and look into md5deep, but I can tell you right now (probably repeating the threads you read and my own posts) that md5deep does not have all the built-in functions that Checksum does.  It generates and validates hashes; that is all.  It will not append new hashes (versus regenerating the entire set of hashes), delete hashes of removed files, nor update hashes of changed files (versus just adding a new hash to the list and leaving an orphaned one behind).

     

     You can certainly script all that, but that is what Checksum does for you (a rough sketch of that scripting is at the end of this post).  I've pinged Corz to encourage him to re-attack the Linux version of Checksum.  It is missing a lot of functionality because he doesn't use it personally anymore.  That said, the Linux version can help you generate that initial set of hashes instead of moving everything over your network for Windows to generate; it is indeed a good bit faster.  Then use the Windows version for later management (synchronization), where only new files must move over the network.  You can also use the Linux version for a faster disaster check (i.e. after a parity failure).

     

     Finally ... if I get the bug, or if you're so inclined, you could create a Docker container (or VM) with Wine and Checksum installed and then operate it all on unRAID using VNC or RDS.
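
     To put a little meat on the "you can certainly script all that" comment, here is a minimal sketch of an append-only pass with md5deep; the share path and the hash-list location are made-up examples, and note that it still leaves stale entries for deleted or changed files behind, which is exactly the housekeeping Checksum handles for you:

     # list files whose current hash is not already in the master list (new or changed)
     cd /mnt/user/Media
     md5deep -r -x /mnt/user/hashes.md5 . > /tmp/new_or_changed.txt
     # hash just those files and append them to the master list
     while read -r f; do
         md5deep "$f" >> /mnt/user/hashes.md5
     done < /tmp/new_or_changed.txt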

  9. Keep in mind the "min free space" setting (when working correctly) only stops you from writing to the cache, and then either (I can't remember which) fails the copy operation or defaults to writing directly to the array.

     

    That is "OK" but it would be a much more elegant and seemless option (yes option, not forced choice) if the mover script could kick off after you've hit that min-free-space (or %) limit so the user doesn't have to deal with it.

     

     I'd also say this should have been, and should be, the behavior all along, regardless of the cache's new redundancy ability. Consider that "moving stuff off the cache as quickly as possible" makes sense if you are of the mindset that the cache might take days to fill.  But what about after large writes (and/or if you're running a smaller SSD as a cache) that can fill it well before the standard daily mover run?  Of course nothing stops us from setting the mover to run more often, but that seems a bit kludgy after all :)

     

     In fact, if you want to talk about getting the data off the cache as quickly as possible, then doing it automatically after a huge amount of data has been copied to it seems like a no-brainer.
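
     Here is the kind of thing I have in mind, as a minimal sketch only; it assumes the cache is mounted at /mnt/cache, that the stock mover script lives at /usr/local/sbin/mover, and the 50 GB threshold is an arbitrary example. Something like this could run from cron every few minutes:

     #!/bin/bash
     # kick off the mover early if cache free space drops below a threshold
     MIN_FREE_KB=$((50 * 1024 * 1024))                      # 50 GB, expressed in KB
     free_kb=$(df -k /mnt/cache | awk 'NR==2 {print $4}')   # available KB on the cache
     if [ "$free_kb" -lt "$MIN_FREE_KB" ]; then
         /usr/local/sbin/mover
     fi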

  10. Sound idea.

     

     Maybe, however, we should just use a free set that mirrors what people are already used to, such as (random Google find):

     

    http://www.jankoatwarpspeed.com/sixpack-status-free-icon-pack/

     

    sixpack_status_icons.jpg

     

    ?!?!?! Wait what?  I mean people have dealt with this problem before?  And have come up with reasonable solutions?

     

    ;)  Sorry this theme has come up a bunch lately IRL so I was just amused.  Good find!!

  11. I think the simplest "solution" is a simple On/Off switch for Cache-Dirs in the Web GUI  :)

     

    I suspect most of us would leave it on all the time except when we wanted to do a parity check or drive rebuild.

     

     I can't imagine it would be that hard to automatically turn off cache-dirs during those two situations, but otherwise provide the switch.
