DaClownie's Posts
  1. OK, so there are a couple of things that come into play. First, one encode at a time with Unmanic is more efficient than multiple. I showed that in prior posts in this thread, where one encode took 6 minutes and change, 2 simultaneous encodes took 13 or so minutes, 3 took 19, etc. It actually worked out a few seconds slower per encode that way. Second, removing subtitles uses a lot of CPU; the GPU can't do that. Third, is your temporary encode folder on an SSD, or is it on your array? If it's on your array, you're going to see a lot of CPU usage for I/O as your parity is recalculated constantly. If you go to your Docker tab and look at the CPU usage per container, is it showing the full 50% going to Unmanic? If you can't tell where all your CPU usage is going from the Docker tab, you can install glances and get a full breakdown. My server hits heavy I/O wait when running Unmanic, so I have it set to run only at night when the server isn't otherwise in use.
  2. It still needs to use some CPU. It also has work to do if it's removing subtitles from files. If you run the watch nvidia-smi command in the unRAID terminal, do you see an ffmpeg line showing that Unmanic is using your card?
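The nvidia-smi check above just means looking for ffmpeg in the process table at the bottom of the output. A minimal sketch of that check in Python (the helper name and the sample output are made up for illustration, not captured from a real card):

```python
def ffmpeg_on_gpu(smi_output: str) -> bool:
    """Return True if any line of nvidia-smi's process list mentions ffmpeg."""
    return any("ffmpeg" in line for line in smi_output.splitlines())

# Illustrative snippet of an nvidia-smi process table, not real output:
sample = """\
|  GPU   PID   Type   Process name                  GPU Memory |
|   0   1234     C    /usr/local/bin/ffmpeg             211MiB |
"""

print(ffmpeg_on_gpu(sample))  # True while an encode is active
```

In practice you would just eyeball the live watch nvidia-smi output rather than script it; the point is that an active NVENC encode shows up as an ffmpeg process using GPU memory.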
  3. Hey @Josh.5, if I wanted to empty my logs of converted files, is there a .db file I should delete and let it recreate? I'm up to 5000 entries, and opening the history now cripples the I/O of my whole server.
  4. Those are all some awesome changes right there. I have a general question for you... I'm setting a folder to convert, and I have a temporary folder used as the encode cache, which sits on a separate SSD dedicated to download and encode temp folders. However, when it's done encoding, it's not writing the file back to the cache to be moved onto the array later; it's writing straight to the array every time. This keeps one hard drive always spun up and also forces the array to perform constant parity calculations. Is this intended? /mnt/user/TV is a cache-enabled folder (obviously, as it's user and not user0). I just assumed that since it was writing the conversion to a different location off the array, it would write the file back to the cache and then wait for the mover.
  5. Hey Josh, is there any way to purge the logs? I'm up to 2000 files completed now and the file history is getting to be a bit long. The container is still working great. I appreciate all your work!
  6. Just for reference, when I was using multiple workers, I simply saw each conversion take the normal time multiplied by the number of workers, i.e. 1 worker takes 5 minutes per file, 2 workers take 10 minutes per file, 3 workers take 15, etc. The only difference was the increased CPU usage. The most efficient approach for me was converting one file at a time and letting it run.
  7. Did you choose the hevc_nvenc encoder inside Unmanic itself? It's only going to use the GPU if you choose the right encoder.
  8. I think it has to do with the specific files. I'm still getting some files that re-encode to around 70% of the original size, which were the results I saw a lot with libx265. I'm doing a folder now, and the first 30 files so far have averaged about a 55% size reduction. My server was consuming a ton of power, creating a ton of heat, and taking a ton of time to compress before. My goal was simply to stave off the need to start dropping more hard drives into my array. I'll save 2-3TB, I'll set the watch folder for all future additions, and I'll save some money in the long run.
  9. As I've been going through my library, I've had some mixed results. Some files shrink 70%, some shrink 25%. The overall goal of re-encoding my library was to save some space, and for me, saving all that time and still reclaiming space is much more valuable than the extra space libx265 saves at its heat/power/time cost. I'm still going to stick with NVENC. I re-encoded one library from 336GB to 206GB; if I can do that to most of my library, I'll save a couple terabytes, which is perfect. Especially with how easily the GPU transcodes h265 content.
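The "couple terabytes" estimate above checks out with quick arithmetic; a sketch, where the 6 TB library size is a made-up example rather than a figure from the post:

```python
before, after = 336, 206           # GB, the one library from the post
ratio = after / before             # fraction of the original size kept
saved_pct = (1 - ratio) * 100      # ~39% of the space reclaimed

# Hypothetical: apply the same ratio across a 6 TB library
library_tb = 6.0
saved_tb = library_tb * (1 - ratio)

print(round(saved_pct, 1), round(saved_tb, 2))  # 38.7 2.32
```

So a ~39% reduction over a few terabytes of media is indeed on the order of a couple terabytes saved.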
  10. Are you encoding multiple files at once? Lowering mine to a single worker dropped my CPU usage significantly. I also found there was absolutely no boost in encoding speed from extra workers: 3 workers took 3x as long per file, so exactly the same overall speed in the end. Just higher wattage from the wall, according to my UPS.
  11. Fun fact for anyone who cares: multiple transcodes do not save time. 6 minutes and 7 seconds to transcode one file; 19 minutes per file if there are 3 transcodes happening at once. 1 worker seems to be best if you're using NVIDIA encoding.
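The numbers above can be turned into throughput to show there's no gain (and actually a slight loss) from extra workers; a quick check using only the figures quoted in the post:

```python
# From the post: 1 worker finishes a file in 6 min 7 s;
# 3 simultaneous transcodes took ~19 min each.
single = 6 * 60 + 7        # 367 s per file, 1 worker
three = 19 * 60            # 1140 s per file, 3 workers

# Throughput in files per hour for each setup:
tp1 = 3600 / single        # 1 worker
tp3 = 3 * 3600 / three     # 3 workers running in parallel

print(round(tp1, 2), round(tp3, 2))  # 9.81 9.47
```

Three workers actually complete slightly fewer files per hour than one, which matches the "few seconds slower per encode" observation earlier in the thread.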
  12. Does this mean I need to change my Docker back to its normal path and drop the dev-hw-encoding tag? Not sure when that's being rolled into the normal version.
  13. OK, so here are the full results of the NVIDIA-powered encoding vs. libx265 encoding. Original file: 1.5GB, .mkv container, h264, 1080p, 42 minutes long. Size after conversion with hevc_nvenc: 639.4MB; total time to convert: 6 minutes and 17 seconds. Size after conversion with libx265: 598.9MB; total time to convert: 2 hours, 13 minutes, 58 seconds. Side-by-side comparison: I literally cannot see a difference. I watched about 5 minutes of the file side by side, and I cannot tell what is different between them. Mind you, the original isn't the best quality, but it proves the point of what we want to see. Cost savings, figured at $0.30/kWh: a 30W increase in the server's power draw while encoding with NVIDIA works out to about 0.003kWh per file, or $0.0009 to encode; the 60W increase while encoding with libx265 works out to about $0.039 to encode. @Josh.5, you said we should use hevc_nvenc or nvenc_hevc due to errors thrown in the log? Just want to clarify before I change my settings to match and start converting whole folders.
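The per-file cost figures above can be reproduced from the wattage and durations given; a sketch of the arithmetic:

```python
# Rough energy-cost check for the numbers in the post, at $0.30/kWh.
rate = 0.30                                  # $/kWh

# NVENC: ~30 W extra draw for 6 min 17 s
nvenc_kwh = 30 * (6 + 17 / 60) / 60 / 1000   # watts * hours / 1000
nvenc_cost = nvenc_kwh * rate                # ~ $0.0009, as quoted

# libx265: ~60 W extra draw for 2 h 13 min 58 s
x265_kwh = 60 * (2 + 13 / 60 + 58 / 3600) / 1000
x265_cost = x265_kwh * rate                  # ~ $0.040 (the post's $0.039)

print(round(nvenc_cost, 4), round(x265_cost, 3))
```

Either way the per-file cost is tiny; the real savings are the roughly 40x shorter encode time and the heat that never gets generated.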
  14. That's it. I'm on the 6.8.3 version of nvidia unraid. Not sure if it makes a difference.