[Support] Josh5 - Unmanic - Library Optimiser



On 1/21/2019 at 6:43 PM, Josh.5 said:

Yea. The logs are excessive when debugging is enabled. I'd suggest keeping it off unless you are encountering issues. If you really wish to keep debugging enabled you can also truncate your docker logs.

Sent from my ONE E1003 using Tapatalk
 

Hey sir, I found that Unmanic was filling my docker image up. It was taking up 50+ GB of space in the docker image. I do not have debugging enabled, and reinstalling the image cleared the issue for now.

 

I do have file history enabled, but I have noticed that the conversion history showing up is hit or miss.

 

Any idea on what I can do to limit this issue?

Link to comment
Sounds like you have not set a mapped volume for your cache
Link to comment
On 6/20/2021 at 2:48 PM, Josh.5 said:

Sounds like you have not set a mapped volume for your cache

Can you elaborate a bit? If you're talking about the Encoding Cache Directory, I left that blank since I don't have much RAM to spare. Should this be mapped to a folder? Would that limit how much space it uses in the docker.img?

Link to comment
Yes, that should then be mapped to a share
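In plain docker run terms, mapping the cache off docker.img looks something like the sketch below. The host-side paths and the /config and /library mappings are examples (adjust them to match your own template), but the container-side /tmp/unmanic matches the default cache directory that shows up in the conversion logs later in this thread.

```shell
# Sketch only: map the encoding cache out of docker.img and onto a share.
# Host-side paths are examples; point them at your own shares.
docker run -d \
  --name=unmanic \
  -v /mnt/user/appdata/unmanic:/config \
  -v /mnt/user/media:/library \
  -v /mnt/user/unmanic_cache:/tmp/unmanic \
  josh5/unmanic
```

On Unraid this is equivalent to adding a Path mapping in the container template: container path /tmp/unmanic, host path pointing at a share.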
Link to comment

Hey, I have a lot of files failing to convert. Here are the conversion details of one of the files, but all of the ones I have looked at end with the same error.

 

RUNNER:
Default Unmanic Process

COMMAND:

ffmpeg -hide_banner -loglevel info -strict -2 -i /library/movies/Movies/N/National Treasure (2004)/National Treasure (2004) Bluray-1080p.mp4 -max_muxing_queue_size 9999 -map 0:0 -map 0:3 -map 0:1 -c:v:0 hevc_nvenc -c:v:1 hevc_nvenc -c:a:0 copy -y /tmp/unmanic/unmanic_file_conversion-1624899814.9573288/National Treasure (2004) Bluray-1080p-1624899814.9573374-WORKING-1.mkv



LOG:
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x55aa00cbab80] stream 0, timescale not set
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/library/movies/Movies/N/National Treasure (2004)/National Treasure (2004) Bluray-1080p.mp4':
   Metadata:
     major_brand : mp42
     minor_version : 0
     compatible_brands: mp42isomavc1
     creation_time : 2019-10-17T21:40:26.000000Z
     title : National.Treasure.2004.1080p.BluRay.H264.AC3.DD5.1
     artist :
     album :
     comment :
     encoder : DVDFab 11.0.4.2
   Duration: 02:11:04.51, start: 0.000000, bitrate: 4173 kb/s
     Stream #0:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p(bt709), 1920x800 [SAR 1:1 DAR 12:5], 3525 kb/s, 23.98 fps, 23.98 tbr, 24k tbn, 47.95 tbc (default)
     Metadata:
       creation_time : 2019-10-17T21:40:26.000000Z
       encoder : JVT/AVC Coding
     Stream #0:1(eng): Audio: ac3 (ac-3 / 0x332D6361), 48000 Hz, 5.1(side), fltp, 640 kb/s (default)
     Metadata:
       creation_time : 2019-10-17T21:40:26.000000Z
     Side data:
       audio service type: main
     Stream #0:2(eng): Subtitle: dvd_subtitle (mp4s / 0x7334706D), 6 kb/s (default)
     Metadata:
       creation_time : 2019-10-17T21:40:26.000000Z
     Stream #0:3: Video: png, rgba(pc), 640x266, 90k tbr, 90k tbn, 90k tbc (attached pic)
Stream mapping:
   Stream #0:0 -> #0:0 (h264 (native) -> hevc (hevc_nvenc))
   Stream #0:3 -> #0:1 (png (native) -> hevc (hevc_nvenc))
   Stream #0:1 -> #0:2 (copy)
Press [q] to stop, [?] for help
[hevc_nvenc @ 0x55aa00cc0100] dl_fn->cuda_dl->cuInit(0) failed -> CUDA_ERROR_UNKNOWN: unknown error
Error initializing output stream 0:1 -- Error while opening encoder for output stream #0:1 - maybe incorrect parameters such as bit_rate, rate, width or height
Conversion failed!

 

Any help would be appreciated.
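Two things stand out in that log. First, cuInit() failing means ffmpeg could not initialise CUDA at all, so NVENC is unavailable regardless of the source file. Second, the failing output stream 0:1 is the attached PNG cover art (input stream 0:3) being sent to hevc_nvenc. A quick way to test NVENC in isolation, with no source file involved (a sketch; run it inside the container where that ffmpeg build lives):

```shell
# Encode one second of generated test video with NVENC and discard it.
# If this reproduces the cuInit error, the problem is GPU passthrough,
# not the movie file.
ffmpeg -hide_banner -f lavfi -i testsrc=duration=1:size=1280x720:rate=24 \
  -c:v hevc_nvenc -f null -
```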

Link to comment

Hi Josh,

 

Thanks very much for the excellent app.  I've been using this for about a month now, with no notable issues.

 

I do have a couple of suggestions - feel free to ignore of course, as I am grateful for your hard work regardless.

 

1. Allow us to prioritize files by size.  If, for example, I have a mix of remux and h.264 files, I would much prefer to have Unmanic process the remux files first, as I will get much more storage back much sooner that way.  I can of course do this manually, but the pending tasks queue is not super easy to manage and it becomes a bit of a pain.

2. As a previous poster said, if you do develop a plugin to convert ASS to SRT, you will instantly earn yourself another Patreon supporter! I would very much love to see this functionality - it's always bothered me to see Plex transcoding subtitled anime when it shouldn't need to.

3. Add a Pause/Resume button to the web UI.  I feel like it would be cleaner to pause from the web UI vs pausing the entire docker when you need your server's memory back for some reason.
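On suggestion 2: converting a text-based subtitle stream (ASS/SSA) to SRT is something stock ffmpeg can already do, so a plugin would mostly be wiring that up. A sketch (file names are examples; this works for text subtitles, while image-based subs like PGS/VobSub would need OCR instead):

```shell
# Extract the first subtitle stream and convert it to SRT.
ffmpeg -i input.mkv -map 0:s:0 -c:s srt output.srt
```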

 

Thanks for the consideration, and for all of the hard work!

Edited by Valyth
Elaborating a little, fixing some spelling errors
Link to comment
On 6/24/2021 at 5:38 PM, Josh.5 said:

Yes, that should then be mapped to a share

I mapped it to a share as you detailed in the setup guide at the beginning of this thread. However, a new issue has come up...

 

Unmanic runs much faster now and seems to get more done in the same time, but once it fills half my RAM it sits idle until I restart the docker. I don't think it is clearing out old files once the work on a job is completed.

Link to comment
Posted (edited)
On 6/28/2021 at 7:10 PM, Jurak said:

[hevc_nvenc @ 0x55aa00cc0100] dl_fn->cuda_dl->cuInit(0) failed -> CUDA_ERROR_UNKNOWN: unknown error

Error initializing output stream 0:1 -- Error while opening encoder for output stream #0:1 - maybe incorrect parameters such as bit_rate, rate, width or height
Conversion failed!

 

I also have the same issue. It skips (almost) all files with the same error message. Did you find a solution for this, @Jurak?

Edited by hobbis
Link to comment
1 hour ago, hobbis said:

I also have the same issue. It skips (almost) all files with the same error message. Did you find a solution for this, @Jurak?

I solved my issue. It turned out I was not using the Nvidia encoder at all. There was a space character before the GPU UUID in the template, which did not work out well. I removed the space character and now all is good.
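For anyone else hitting this: before digging into encoder settings, it's worth confirming the container can see the GPU at all, and that the template variable carries the UUID with no surrounding whitespace (the container name below is an example):

```shell
# If passthrough works, this lists the GPU inside the container;
# if it errors, NVENC will fail with cuInit errors like the ones above.
docker exec -it unmanic nvidia-smi

# List GPUs with their UUIDs, to copy into the template exactly
# (no leading or trailing spaces around the GPU-... value).
nvidia-smi -L
```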

Link to comment

Yea, I'm having a lot of super weird occurrences after the most recent update also.  The UI/Dashboard for Unmanic becomes pretty much unresponsive, I get "set_mempolicy: Operation not permitted" in my logs, and I have one file that just keeps failing.

[h264 @ 0x5605116f0680] SEI type 195 size 888 truncated at 48
[h264 @ 0x5605116f0680] SEI type 170 size 2032 truncated at 928
[h264 @ 0x5605116f0680] SEI type 81 size 1920 truncated at 32
[h264 @ 0x5605116f0680] SEI type 195 size 888 truncated at 47
[h264 @ 0x5605116f0680] SEI type 170 size 2032 truncated at 927
[h264 @ 0x5605116f0680] SEI type 81 size 1920 truncated at 31
[h264 @ 0x5605116f0680] A non-intra slice in an IDR NAL unit.
[h264 @ 0x5605116f0680] decode_slice_header error
[h264 @ 0x5605116f0680] no frame!
[h264 @ 0x5605116f0680] SEI type 163 size 248 truncated at 32
[h264 @ 0x5605116f0680] non-existing PPS 2 referenced
[h264 @ 0x5605116f0680] SEI type 163 size 248 truncated at 31
[h264 @ 0x5605116f0680] non-existing PPS 2 referenced
[h264 @ 0x5605116f0680] decode_slice_header error
[h264 @ 0x5605116f0680] no frame!
[h264 @ 0x5605116f0680] SEI type 195 size 1448 truncated at 32
[h264 @ 0x5605116f0680] SEI type 195 size 1448 truncated at 30
[h264 @ 0x5605116f0680] top block unavailable for requested intra mode -1
[h264 @ 0x5605116f0680] error while decoding MB 0 0, bytestream 24
[h264 @ 0x5605116f0680] concealing 3600 DC, 3600 AC, 3600 MV errors in I frame
[h264 @ 0x5605116f0680] SEI type 33 size 2024 truncated at 16
[h264 @ 0x5605116f0680] non-existing PPS 2 referenced
Guessed Channel Layout for Input Stream #0.1 : 5.1

That's literally all it says in the log, and it takes a good 10-20 seconds for that to even appear.

The most recent unmanic push... is weird.

Link to comment

I just tried out this docker and I must say it has my attention. I came here to ask if there is a way to convert a file without replacing the original, as I feel that is the dumbest thing ever... Now I have to re-acquire the originals to fix a setting I decided not to go with... sad face. Is there a way to set a dedicated output location, leaving the originals intact and unmolested? I'm not looking forward to trying to understand Tdarr (I've spent about 5 minutes on Tdarr so far), and HandBrake is failing me at the moment.

Link to comment

Can anyone take a look at these 7 log files I have? (I just copied the log from the details for each failed task and made individual log files for them in Notepad.)

They keep failing, and I'm not sure what to look at specifically to figure out why.

Especially log file #7.  It's been failing for the past two weeks, almost instantly, and I have no clue why or what to do to fix it.

I'd greatly appreciate some help, as I'm so confused.

logs.rar

Link to comment

So this is super useful, but my backlog is going to take far too long.  I solved this by running Unmanic docker containers on multiple machines and using the unRAID instance as the primary that tells the other containers what to encode (I now have 6 computers churning through my files).  Just wanted to share in case others find this useful:

 

Sorry this isn't set up as an unRAID package and it's going to require some technical skills; if anyone wants to help out with that I'd welcome it, but that's not part of my normal workflow.

 

To use:

  1. git clone https://github.com/shaenchen/unmanic-distributed to your /mnt/cache/appdata folder.
  2. Install your Unmanic containers on the other machines and ensure that /library/tv and /library/movies are bound to the same paths (I use Linux, Windows, and Mac as sources, so I know it can be done anywhere).
  3. IMPORTANT: all Unmanic instances need to be running the 0.0.5 version tag.  This version exposed a route to get current status that no longer exists.  All secondary instances should have library scanning/watching turned off.
  4. Edit ./src/config.js so that the primary is your unRAID Unmanic instance and there is an entry for each secondary you have.
  5. Run the ./docker_up.sh script on unRAID to start the container.
  6. Browse to http://unraidip:49163 to see the status of all containers.

What it does: it reads the queue from the main unRAID instance, checks all your secondary instances, and sends queue items to the ones that aren't busy.  They can process items exactly as the main instance would (or differently, if you want them to take advantage of hardware encoding etc.), but only run files when told to by this application.  I'll admit this code is a bit hacky, but I've been running it for a month without issue.

 

@Josh.5  please let me know if you'd like me to take this discussion elsewhere. (and thank you for your work on providing this, it's super useful)

Edited by shbr
Link to comment

Hey, I thought I'd ask again since no one answered before.

I have this one file that keeps failing over and over.  When I try to view the details, it brings my browser to a complete stop and lags it really badly.

Here's the log from it. No clue what any of this means.

Thanks!

unmanic_failed_log.txt

Link to comment
On 7/22/2021 at 2:19 AM, shbr said:

So this is super useful but my backlog is going to take far too long. ...
Sounds really cool. Perhaps it would be better if we work on merging this feature into the main source?
Link to comment
On 7/10/2021 at 10:13 PM, Meller said:

Yea, I'm having a lot of super weird occurrences after the most recent update also.  The UI/Dashboard for Unmanic becomes pretty much unresponsive, I get "set_mempolicy: Operation not permitted" in my logs, and I have one file that just keeps failing.

[h264 @ 0x5605116f0680] SEI type 195 size 888 truncated at 48
[h264 @ 0x5605116f0680] SEI type 170 size 2032 truncated at 928
...
[h264 @ 0x5605116f0680] SEI type 33 size 2024 truncated at 16
[h264 @ 0x5605116f0680] non-existing PPS 2 referenced
Guessed Channel Layout for Input Stream #0.1 : 5.1

That's literally all it says in the log, and it takes a good 10-20 seconds for that to even appear.

The most recent unmanic push... is weird.

Agreed. I've been having issues with all encodes failing. Looking at the portion of your log you posted, I decided to take a look at my Unraid system log and found this happening over and over in real time:

Jul 24 23:30:43 Tower kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]
Jul 24 23:30:43 Tower kernel: caller _nv000708rm+0x1af/0x200 [nvidia] mapping multiple BARs
Jul 24 23:30:44 Tower kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]
Jul 24 23:30:44 Tower kernel: caller _nv000708rm+0x1af/0x200 [nvidia] mapping multiple BARs
Jul 24 23:30:46 Tower kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]

 

I had a feeling there was some sort of conflict over the GPU that is causing Unmanic to fail. I don't have any issues using the same GPU for my gaming VM, though, and I always stop Unmanic before launching the gaming VM... so something is happening when I re-launch Unmanic that stops it from interfacing with the GPU.

 

I have rebooted the Unraid server in the past, and I feel that clears up the issue when it occurs. I wonder if using Dynamix S3 Sleep is causing an issue... but I didn't really have these kinds of encoding failures until this year. Will edit this post if/when I learn more.

**Update**
Found this in one of the failed encodes:

[hevc_nvenc @ 0x560418cb3d40] dl_fn->cuda_dl->cuInit(0) failed -> CUDA_ERROR_UNKNOWN: unknown error 
Error initializing output stream 0:0 -- Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height 
Conversion failed! 

 

Also, Unmanic is working again after rebooting the server. 

Edited by Zer0Nin3r
Found more errors
Link to comment
5 hours ago, Zer0Nin3r said:

Agreed. I've been having issues with all encodes failing. ...

[hevc_nvenc @ 0x560418cb3d40] dl_fn->cuda_dl->cuInit(0) failed -> CUDA_ERROR_UNKNOWN: unknown error
Error initializing output stream 0:0 -- Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
Conversion failed!

Also, Unmanic is working again after rebooting the server.
Why is it trying to use nvenc to encode?  I don't have a GPU in my server.  Under Settings > Video Encoding I have the Video Codec set to HEVC and the Video Encoder set to libx265.

I've encoded nearly 25,000 TV show episodes so far, and this is the only one that fails over and over, with a huge log file attached to it.

Edited by Meller
Link to comment
On 7/24/2021 at 2:12 PM, Josh.5 said:

Sounds really cool. Perhaps it would be better if we work on merging this feature into the main source?

If you would be open to it, sure. I'll try to get started on a PR in the near future and will reach out to you if I have any questions.

Link to comment
14 hours ago, shbr said:

If you would be open to it sure. I’ll try to get started on a PR in the near future. Will reach out to you if I have any questions. 

+1 for this, I think it would be awesome. I have multiple Unmanic instances running as well, doing different things and some the same. I could easily see the use case for this.

Link to comment

I'm getting some really bad OOM errors while using Unmanic. The cache dir is set to a share, not to RAM. While I'm converting a .ts H264 video file to H265 using nvenc, memory usage climbs from 100% to 200%, and after this the entire server gets stuck. Unmanic uses almost 32 gigabytes of RAM, so I can't convert this file at all. It doesn't happen with every file. What should I do?

 

I have tried all versions of Unmanic, and I'm also using custom ffmpeg options - yadif enabled and VBR.

Edited by SuberSeb
Link to comment
3 hours ago, SuberSeb said:

I'm getting some really bad OOM errors while using Unmanic. The cache dir is set to a share, not to RAM. While I'm converting a .ts H264 video file to H265 using nvenc, memory usage climbs from 100% to 200%, and after this the entire server gets stuck. Unmanic uses almost 32 gigabytes of RAM, so I can't convert this file at all. It doesn't happen with every file. What should I do?

I have tried all versions of Unmanic, and I'm also using custom ffmpeg options - yadif enabled and VBR.

 

Link to comment
16 minutes ago, Squid said:

 

This just limits RAM usage for the service; it will not fix the problem with converting.

If I limit RAM usage then I get this error:
kernel: Memory cgroup out of memory: Killed process 2733 (unmanic-service) total-vm:15112168kB, anon-rss:4151332kB, file-rss:7820kB, shmem-rss:3832kB, UID:99 pgtables:8324kB oom_score_adj:0

After this, Unmanic can't transfer the converted file from the cache to the library directory. It seems that Unmanic tries to load the full video file into RAM, which is awful (the tmp folder is NOT in RAM).
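One quick way to confirm where the memory is going before blaming the cgroup limit is to watch the container's live usage while a conversion runs (the container name is an example):

```shell
# One-shot snapshot of the container's memory/CPU usage.
# MEM USAGE growing toward the size of the source file would
# support the "whole file loaded into RAM" theory.
docker stats --no-stream unmanic
```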

Edited by SuberSeb
Link to comment
