Everything posted by DaClownie

  1. Why would you configure Unmanic to do that? It's not configured to? The prior version would never do it. My understanding was that once it had processed a file, it gave the file some sort of internal flag so that Unmanic wouldn't want to process it again. It just keeps processing the file over and over and I don't know why lol
  2. I'm having files that are being processed multiple times. It processes the same files on every cycle of the library scanner from the looks of it (set to 15 minutes), or perhaps when the Library File Monitor is enabled (I just disabled it to test). Each of those "successes" is the exact same size as the previous run.
  3. Well, I converted a few files, and so far so good! Reduced file size, converting audio to AAC, creating a stereo clone, checking the container size post-conversion. I'll need more files to convert to fully test whether it's rejecting based on file size, whether all the audio streams are working, changing containers, etc. Thanks a ton man
  4. Perhaps I'm doing something wrong, but I'm failing to add repositories whenever I attempt to use that button. I tried with the full link to the repository, as well as just Josh5/unmanic-plugins
  5. Is it still possible to create a stereo audio layer like in the previous version of Unmanic? I was using it before to create a stereo track from the 5.1 track (if stereo didn't exist) and also to compress to H.265 using NVENC. That's the only functionality I haven't been able to find in the new 0.1.0 version, and it's currently stopping me from using it. I'd really rather not go through the hassle of learning all-new software with Tdarr lol Thanks! EDIT: I'm going to assume it is, but it's going to require custom FFmpeg configuration in the AAC conversion plugin. I just don't know how to properly format an FFmpeg command to do that (see the FFmpeg sketch at the end of this list). I want to keep the existing 5.1 audio and create a duplicate audio layer (that becomes the new primary audio for the file) which is the stereo track. EDIT 2: It appears there's already an issue on GitHub related to this, with a plugin request in. I'll just keep Unmanic turned off until this plugin exists, because that's a major reason why I was using Unmanic in the first place: I wanted to save space on the file itself and also create a stereo audio layer that gave me better-quality playback on my streaming devices, while maintaining the 5.1 audio layer for those with better setups than me.
  6. OK, so there are a couple of things that come into play. 1. One encode at a time with Unmanic is more efficient than multiple. I showed that in prior posts in this thread where one encode was 6 minutes and change, 2 encodes were taking 13 or so minutes, 3 were taking 19, etc. It actually worked out to a few seconds slower per encode that way. 2. Removing subtitles uses a lot of CPU; the GPU can't do that part. 3. Is your temporary encode folder on an SSD, or is it on your array? If it's on your array, you're going to get a lot of CPU usage for I/O as your parity needs to recalculate constantly. If you go to your Docker tab and look at the CPU usage per container, is it showing the full 50% going to Unmanic? If you can't see where all your CPU usage is going from the Docker tab, you can install glances and get a full breakdown (see the docker stats snapshot at the end of this list). My server hits heavy I/O wait when using Unmanic, so I have it set to only run at night when the server isn't being utilized.
  7. It still needs to use some CPU. It also has work to do if it's removing subtitles from files. If you run the watch nvidia-smi command in the unRAID terminal, are you seeing an ffmpeg line showing that Unmanic is using your card? (Example at the end of this list.)
  8. Hey @Josh.5, if I wanted to empty my log of converted files, is there a .db file I should delete and let it recreate? I'm up to 5000 entries, and trying to open the history now cripples the I/O of my whole server
  9. Those are all some awesome changes right there. I have a general question for you... I'm setting a folder to convert. I have a temporary folder used as the encode cache, which is on a separate SSD that's only for download and encode temp folders. However, when it's done encoding, it's not writing the file back to the cache to be moved into the array later; it writes straight back to the array every time. This keeps one hard drive always spun up and also forces the array to perform constant parity calculations. Is this intended? The /mnt/user/TV share is cache-enabled (obviously, as it's user and not user0). I just assumed that since it was writing the conversion to a different location off the array, it would write the file back to the cache and then wait for the mover.
  10. Hey Josh, is there any way to purge the logs? I'm up to 2000 files completed now and the file history is getting to be a bit long. The container is still working great. I appreciate all your work
  11. Just for reference, when I was using multiple workers, I simply saw the conversion happen at the normal speed * number of workers, i.e. 1 worker takes 5 minutes per file, 2 workers take 10 minutes per file, 3 workers take 15, etc. The only difference was the increased CPU usage I saw. The most efficient conversion for me was one file at a time; just let it go.
  12. Did you choose the hevc_nvenc encoder inside Unmanic itself? It's only going to use the GPU if you choose the right encoder.
  13. I think it has to do with the specific files. I'm still getting some files that re-encode to around 70% of the original size, which is the kind of result I saw a lot with libx265 too. I'm doing a folder now and the first 30 files so far have averaged about a 55% size reduction. Before, my server was consuming a ton of power, creating a ton of heat, and taking a ton of time to compress. My goal was simply to stave off the need to start dropping hard drives into my array again. I'll save 2-3TB, I'll set it to watch for all future additions, and I'll be able to save some money in the long run.
  14. As I've been going through my library, I've had some mixed results. Some files shrink 70%, some shrink 25%. The overall goal of re-encoding my library was to save some space. For me, saving all that time and still saving some space is much more valuable than the additional space libx265 would save, given its heat/power/time requirements, so I'm sticking with NVENC. I re-encoded one library from 336GB to 206GB. If I can do that to most of my library, I'll save a couple of terabytes, which is perfect. Especially with how easily the GPU transcodes H.265 content.
  15. Are you encoding multiple files at once? Lowering mine to a single worker dropped my CPU usage significantly. Also, I found there was absolutely no boost in encoding speed: 3 workers took 3x as long per file, so the same overall speed in the end. Just higher wattage from the wall according to my UPS.
  16. Fun fact for anyone who cares: multiple transcodes do not save time. 6 minutes and 7 seconds to transcode one file; 19 minutes to transcode each file if there are 3 transcodes happening at once. 1 worker seems to be best if you're using NVIDIA encoding.
  17. Does this mean I need to change my Docker back to its normal path, dropping the dev-hw-encoding? Not sure when that's being rolled into the normal version.
  18. OK, so here are the full results of the NVIDIA-powered encoding vs. libx265 encoding: Original file: 1.5GB, .mkv container, h264, 1080p, 42 minutes long. Size after conversion with hevc_nvenc / nvenc_hevc: 639.4MB. Total time to convert the media: 6 minutes and 17 seconds. Size after conversion with libx265: 598.9MB. Total time to convert the media: 2 hours, 13 minutes, 58 seconds. Side-by-side comparison: I literally cannot see a difference. I watched about 5 minutes of the file side by side, and I cannot see what is different about them. Mind you, the original isn't the best quality, but it proves the point of what we want to see. Cost savings, figured at $0.30/kWh: a 30W increase in the server's power usage during encoding with NVIDIA, about 0.003kWh per file, or $0.0009 to encode. A 60W increase in power usage during encoding with libx265, roughly 0.13kWh per file, or $0.039 to encode. @Josh.5 You said we should use hevc_nvenc or nvenc_hevc due to errors thrown in the log? Just want to clarify before I change my settings to match and start converting whole folders (example commands at the end of this list).
  19. That's it. I'm on the 6.8.3 version of nvidia unraid. Not sure if it makes a difference.
  20. Currently 20% into converting with libx265; I'll get the full comparison once it's completed. Compelling argument number one: power usage is SIGNIFICANTLY lower. My server with the GPU conversion was pulling 110W from the wall for 6 minutes; I'm pulling 132W from the wall using all CPU, so it's just a matter of how long it needs to run. In theory I could convert an entire season of a show in less than 3 hours with the GPU (23 episodes at 42 minutes each), which I'm assuming is about the same amount of time the libx265 encoder takes per episode. Actually, even faster: current drivers allow for 3 simultaneous streams, so I guess it'd stand to reason that it would allow for 3 simultaneous Unmanic workers as well.
  21. So, I converted the same file with both. Identical output, identical time frame for conversion. 1.5GB file, .mkv container, h264, 1080p, 42ish minutes. Size after conversion: 639MB, 6 minutes and 17 seconds to convert with a GTX 1650 Super (leaving audio tracks untouched). Converting the same file with libx265 now to get a size/speed comparison; then I'll play the files side by side for a quality comparison. Will post results later. libx265 will probably take 4 hours since I'm only allowing it to use 2 cores/2 threads so as not to bog down my server.
  22. Well, I'll work on getting the settings dialed in to the same quality/size profile as your current encoder produces, and then I'll get you access to that .json. When changing settings is available, maybe just mirroring those settings will work well for you. Appreciate the work on the utility!
  23. That's unfortunate, but we can play with it. I don't know much about the inner workings, so I have no idea what modifiers to the settings you have to play with in the background... In testing NVENC in HandBrake, the file sizes were very finicky; I still haven't got it dialed in. Converting a single episode with NVENC was increasing the size from 1.6GB to 2.9GB, but if I turned the quality all the way down, it came out at 150MB. Ideally I'm going to get a single file and duplicate it. I'll convert it using your normal Docker with the libx265 encoder and see the size, then I'll try your NVENC Docker. After that, I'll tune the HandBrake file to get me the same relative size as your libx265 encoder to see the quality comparison. Are the settings with NVENC changeable on your end, or are they locked in?
  24. I will take a stab at this with NVENC tonight to see how it fares. If it works, I'll be using it to convert tons of libraries for some space conservation. Cartoons especially benefit a ton from compression like this, with almost no visible degradation in quality. Should we see the same speed/CPU advantages that we see in Plex? I know when I transcode in Plex it handles the transcode MUCH faster than it would with a normal software transcode.
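
Regarding the stereo-clone question in post 5: below is a minimal FFmpeg sketch for keeping the original 5.1 track untouched while adding an AAC stereo downmix as the new first audio stream. The file names, bitrate, and container are placeholders, and this is not the command Unmanic's plugin builds; it only illustrates the stream mapping.

    # Map the video, the first audio stream twice (one copy becomes the stereo
    # AAC downmix, the other stays as the untouched 5.1), and any subtitles.
    ffmpeg -i input.mkv \
      -map 0:v -map 0:a:0 -map 0:a:0 -map 0:s? \
      -c:v copy -c:s copy \
      -c:a:0 aac -ac:a:0 2 -b:a:0 192k \
      -c:a:1 copy \
      output.mkv

The first output audio stream ends up as the stereo AAC track (so most players pick it by default) and the second is the original 5.1 copy.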
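
For the per-container CPU check mentioned in post 6: if you just want a quick snapshot from the unRAID terminal without installing glances, docker stats shows CPU, memory, and block I/O per running container (the container name below is an assumption; use whatever yours is called).

    # One-off snapshot of CPU / memory / block I/O for every running container
    docker stats --no-stream

    # Or keep watching a single container
    docker stats unmanic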
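
For the GPU check in post 7: the command is just nvidia-smi wrapped in watch. If Unmanic is actually using the card, an ffmpeg process should appear in the process table at the bottom of the output.

    # Refresh nvidia-smi every second; look for an ffmpeg entry under "Processes"
    watch -n 1 nvidia-smi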
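
For the encoder comparison in post 18: in current FFmpeg builds the GPU encoder is named hevc_nvenc (nvenc_hevc was an older alias). Stripped-down commands for the two encoders being compared might look like the sketch below; the -cq / -crf / -preset values are assumptions for illustration, not the settings Unmanic uses internally.

    # GPU encode with NVENC HEVC; -cq sets a constant-quality target
    ffmpeg -i input.mkv -map 0 -c copy -c:v hevc_nvenc -cq 28 output_nvenc.mkv

    # CPU encode with libx265; -crf sets quality, -preset trades speed for size
    ffmpeg -i input.mkv -map 0 -c copy -c:v libx265 -crf 23 -preset medium output_x265.mkv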