[Support] Josh5 - Unmanic - Library Optimiser


Recommended Posts

9 hours ago, Can0nfan said:

Hi @Josh.5

I am using the dev-hw-encoding repository version and the previous current master one, but I can't seem to get Unmanic to hardware transcode using Intel Quick Sync and the HEVC codec. I tried both hevc_nvenc and nvenc_hevc; they all fail. However, when I use libx265 it processes using my CPU.

A quick note on the Plex hardware transcode directions vs. yours (in the container install):

For Plex, under Extra Parameters, it reads:

--device /dev/dri:/dev/dri

Yours mentions this:

--device /dev/dri

 

I have tried both and neither is showing anything in my intel-gpu-tools, plus I can see high CPU usage in the Docker tab.

I am only encoding video, not stripping subtitles or doing anything with audio.

Having not used Quick Sync before this, I thought CPU utilisation was normal when enabling QS transcoding. I downloaded the intel-gpu-tools container to take a closer look, and it turns out it's not working in the Unmanic container for me, even with all the prerequisites in the go file, parameters, etc. I also tried all the encoders like you did, and the only one that works is libx265.

 

However, I posted earlier in the forum (when I thought QS was working) that I noticed my processing time was almost cut in half, on average. Whereas it used to take my little Intel 8-12 hours to process a 2 GB 1080p x264 file with 2 workers, that time is now down to about 3-4 hours with QS enabled. It seems QS is not working (unless it involves high CPU usage and the intel-gpu-tools container isn't working), though the change in processing times is definite, if inexplicable.
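As an aside for anyone else chasing this: before blaming the container flags, it is worth confirming the container's ffmpeg build even ships the Quick Sync encoders. A minimal sketch of parsing captured `ffmpeg -hide_banner -encoders` output in Python; the sample text below is illustrative, not taken from the actual Unmanic image:

```python
import re

def list_hw_encoders(encoders_output: str) -> list[str]:
    """Return encoder names that look hardware-backed (qsv/vaapi/nvenc)."""
    # Encoder lines start with a flags column like "V....D" followed by a name.
    names = re.findall(r"^\s*V[\w.]*\s+(\S+)", encoders_output, flags=re.M)
    return [n for n in names if any(t in n for t in ("qsv", "vaapi", "nvenc"))]

# Illustrative sample of `ffmpeg -hide_banner -encoders` output lines.
sample = """\
 V....D libx265              libx265 H.265 / HEVC
 V..... hevc_qsv             HEVC (Intel Quick Sync Video acceleration)
 V..... hevc_vaapi           H.265/HEVC (VAAPI)
"""
print(list_hw_encoders(sample))  # → ['hevc_qsv', 'hevc_vaapi']
```

If the qsv entries are missing from the real output, no amount of `--device` tweaking will help; the ffmpeg build itself lacks QSV support.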

Link to comment
Hi Josh, great job on this project. 
 
Any way to change the DB from SQLite to something more robust like MariaDB? With thousands of files, it just seems like SQLite is really struggling. If not, any thoughts on adding support?
I'd be interested to know how it is struggling. SQLite tables can handle millions of entries without issue. If something is struggling, it's how the query is written, and it needs to be optimised.
It is my opinion that using MySQL for this app is a waste of time, but I'm happy to be wrong.
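Josh's point is easy to demonstrate: an indexed lookup stays fast even on a large SQLite table. A small self-contained sketch with synthetic data (not Unmanic's actual schema):

```python
import sqlite3
import time

# In-memory DB with a couple hundred thousand synthetic rows.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE tasks (id INTEGER PRIMARY KEY, abspath TEXT)")
db.executemany(
    "INSERT INTO tasks (abspath) VALUES (?)",
    ((f"/library/file_{i}.mkv",) for i in range(200_000)),
)
db.execute("CREATE INDEX idx_tasks_abspath ON tasks (abspath)")

# An indexed point lookup is effectively instant, even at this size.
t0 = time.perf_counter()
row = db.execute(
    "SELECT id FROM tasks WHERE abspath = ?", ("/library/file_123456.mkv",)
).fetchone()
print(row, f"{(time.perf_counter() - t0) * 1000:.2f} ms")
```

When a page is slow with tables this size, the usual culprits are unindexed scans or queries that pull the whole table into memory, not the storage engine itself.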
Link to comment
Hi @Josh.5

I am using the dev-hw-encoding repository version and the previous current master one, but I can't seem to get Unmanic to hardware transcode using Intel Quick Sync and the HEVC codec. I tried both hevc_nvenc and nvenc_hevc; they all fail. However, when I use libx265 it processes using my CPU.

A quick note on the Plex hardware transcode directions vs. yours (in the container install):

For Plex, under Extra Parameters, it reads:

--device /dev/dri:/dev/dri

Yours mentions this:

--device /dev/dri
 
I have tried both and neither is showing anything in my intel-gpu-tools, plus I can see high CPU usage in the Docker tab.

I am only encoding video, not stripping subtitles or doing anything with audio.
I don't think QS is configured correctly. And I don't really have a way to test, unfortunately. A PR on GitHub would be nice if anyone wants to make the necessary changes and test for me. It should only be a couple of lines.
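To illustrate what a "couple of lines" change might look like: a hypothetical sketch of building the ffmpeg argument list with the QSV encoder selected. This is not Unmanic's actual code; the function name and flag choices are assumptions for illustration.

```python
def build_ffmpeg_args(src: str, dst: str, use_qsv: bool) -> list[str]:
    """Hypothetical sketch: swap libx265 for hevc_qsv when QSV is requested."""
    args = ["ffmpeg"]
    if use_qsv:
        # QSV generally wants a hardware device initialised up front;
        # without it, ffmpeg can fail rather than use the GPU.
        args += ["-init_hw_device", "qsv=hw"]
    args += ["-i", src, "-c:v", "hevc_qsv" if use_qsv else "libx265", dst]
    return args

print(build_ffmpeg_args("in.mkv", "out.mkv", use_qsv=True))
```

The key detail is that selecting `hevc_qsv` alone is often not enough; the hardware device setup is usually part of the fix too.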
Link to comment
1 hour ago, Josh.5 said:
14 hours ago, Can0nfan said:
Hi @Josh.5

I am using the dev-hw-encoding repository version and the previous current master one, but I can't seem to get Unmanic to hardware transcode using Intel Quick Sync and the HEVC codec. I tried both hevc_nvenc and nvenc_hevc; they all fail. However, when I use libx265 it processes using my CPU.

A quick note on the Plex hardware transcode directions vs. yours (in the container install):

For Plex, under Extra Parameters, it reads:

--device /dev/dri:/dev/dri

Yours mentions this:

--device /dev/dri
 
I have tried both and neither is showing anything in my intel-gpu-tools, plus I can see high CPU usage in the Docker tab.

I am only encoding video, not stripping subtitles or doing anything with audio.

I don't think QS is configured correctly. And I don't really have a way to test, unfortunately. A PR on GitHub would be nice if anyone wants to make the necessary changes and test for me. It should only be a couple of lines.

Hi @Josh.5, I have used both and just can't get Intel Quick Sync accelerated transcoding to work. I'd be happy to test; I've got another 90 TB or so of media to convert to H.265. Once Intel works, I'll see if I can get logs and such and post them to GitHub for you.

Link to comment
9 hours ago, Josh.5 said:
22 hours ago, Can0nfan said:

 

I don't think QS is configured correctly. And I don't really have a way to test, unfortunately. A PR on GitHub would be nice if anyone wants to make the necessary changes and test for me. It should only be a couple of lines.

If anyone wants to make the PR to fix the broken Intel Quick Sync hardware encode, I have both an i5-8600K (Coffee Lake) and an i7-7700K.

Iron Lake (gen 5) through Coffee Lake (gen 9.5) are all supported, as per:

https://trac.ffmpeg.org/wiki/Hardware/QuickSync

 

On GitHub:
https://github.com/intel/intel-vaapi-driver

 

Link to comment
21 hours ago, Josh.5 said:

I'd be interested to know how it is struggling. SQLite tables can handle millions of entries without issue. If something is struggling, it's how the query is written, and it needs to be optimised.
It is my opinion that using MySQL for this app is a waste of time, but I'm happy to be wrong.

Hey Josh, I'm PMing you a video link of me refreshing the screen, going into the console, and showing the debug profiles. Perhaps this will help you understand where it's struggling.

 

The refresh takes 55 seconds.

The Pending Tasks list comes up immediately.

35 seconds to load Completed Tasks.

40 seconds for the Workers to begin loading.

 

This client has a 3900X with 32 GB of memory and an RTX 2070 GPU; the server specs are in my signature. I hope the video helps you figure out where the delays are coming from. If there's anything specific you'd like me to export to help you further, please let me know.

Link to comment
On 7/24/2020 at 3:05 PM, Josh.5 said:

I'd be interested to know how it is struggling. SQLite tables can handle millions of entries without issue. If something is struggling, it's how the query is written, and it needs to be optimised.
It is my opinion that using MySQL for this app is a waste of time, but I'm happy to be wrong.

When opening the webpage, it takes a while to populate the "see all records" view. Then it takes more time to actually view details on a completed item. Choosing "failures" means a massive wait time.

Link to comment
1 hour ago, dertbv said:

When opening the webpage, it takes a while to populate the "see all records" view. Then it takes more time to actually view details on a completed item. Choosing "failures" means a massive wait time.

I've seen something similar. Mine has gotten to the point twice where the page will never load when I try to view all records. I've had to blow away the database to restore functionality to that page. I can almost never get it to open an individual record, even after starting with a fresh database. The entire tab locks up.

Link to comment
2 hours ago, chiefo said:

I've seen something similar. Mine has gotten to the point twice where the page will never load when I try to view all records. I've had to blow away the database to restore functionality to that page. I can almost never get it to open an individual record, even after starting with a fresh database. The entire tab locks up.

If you blow away the entire database, won't it go through every file that you have ever done?

 

Link to comment
1 hour ago, tasmith88 said:

How do we blow it away?

*Not sure if this can cause any issues, although I personally haven't seen any problems. Perform at your own risk.*

 

Stop the Docker container, then delete the following file from your appdata: appdata\unmanic\.unmanic\config\unmanic.db. Then restart the container.
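For anyone scripting that reset, here's a minimal sketch, assuming the config directory layout described above (the path argument and function name are just illustrative; the container must be stopped first):

```python
from pathlib import Path

def reset_unmanic_db(config_dir: str) -> bool:
    """Delete unmanic.db from the given config dir (container must be stopped).

    Returns True if a file was actually removed.
    """
    db = Path(config_dir) / "unmanic.db"
    if db.exists():
        db.unlink()
        return True
    return False

# Example (adjust to your own appdata share):
# reset_unmanic_db("/mnt/user/appdata/unmanic/.unmanic/config")
```

Unmanic will rebuild a fresh database on next start, so this is effectively a factory reset of the records.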

Link to comment
When opening the webpage, it takes a while to populate the "see all records" view. Then it takes more time to actually view details on a completed item. Choosing "failures" means a massive wait time.
The issue is that the page being loaded does not paginate the query, and the query is not optimised as an SQL dataset. Switching to MySQL will not improve page load times; the ORM calls for those pages need to be optimised. This is already on my to-do list and will probably be a few hours of work to do properly when I get the time. No ETA at the moment.
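To illustrate what "paginate the query" means in practice, here is a sketch using the stdlib sqlite3 module with a synthetic table. Unmanic goes through an ORM, so this shows only the shape of the fix, not its actual code:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE completed (id INTEGER PRIMARY KEY, abspath TEXT)")
db.executemany(
    "INSERT INTO completed (abspath) VALUES (?)",
    ((f"/library/file_{i}.mkv",) for i in range(10_000)),
)

def fetch_page(conn, page: int, per_page: int = 50):
    """Return one page of results instead of the whole table at once."""
    offset = (page - 1) * per_page
    return conn.execute(
        "SELECT id, abspath FROM completed ORDER BY id LIMIT ? OFFSET ?",
        (per_page, offset),
    ).fetchall()

page3 = fetch_page(db, page=3)
print(len(page3), page3[0])  # 50 rows, starting at id 101
```

The UI then requests one page at a time, so load time stays flat no matter how many completed tasks accumulate.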
Link to comment
2 minutes ago, Josh.5 said:
4 hours ago, dertbv said:
When opening the webpage, it takes a while to populate the "see all records" view. Then it takes more time to actually view details on a completed item. Choosing "failures" means a massive wait time.

The issue is that the page being loaded does not paginate the query, and the query is not optimised as an SQL dataset. Switching to MySQL will not improve page load times; the ORM calls for those pages need to be optimised. This is already on my to-do list and will probably be a few hours of work to do properly when I get the time. No ETA at the moment.

Not complaining; thank you for all that you do! I just turned on the Nvidia piece and have completed more in 10 days than I have in the last year.

 

Link to comment
Not complaining; thank you for all that you do! I just turned on the Nvidia piece and have completed more in 10 days than I have in the last year.
 
Last week I did overhaul the incoming task list. This overhaul opens up the potential of that list: in the future it will finally give us the ability to start modifying the pending tasks list, and perhaps to shift things around, modify settings per task before it gets to a worker, save the list between application restarts, mark a file in an ignore list so it's never added again, etc.
That was a solid day's work on its own. Fun, though.
Link to comment

Compliments to @Josh.5, this thing is a beast! I'm testing it out right now on a smaller library, but it's averaging a 40-50% reduction in file size.

 

The only issue that I'm running into is how to let Sonarr v3 know that the file has been changed to H.265. I read the thread and didn't see a solution :(


~Spritz

Link to comment
19 hours ago, Spritzup said:

Compliments to @Josh.5, this thing is a beast! I'm testing it out right now on a smaller library, but it's averaging a 40-50% reduction in file size.

 

The only issue that I'm running into is how to let Sonarr v3 know that the file has been changed to H.265. I read the thread and didn't see a solution :(


~Spritz

Sonarr v3 will automatically detect these file changes during scheduled library scans; same for Radarr v3.

Link to comment
2 hours ago, Cpt. Chaz said:

Sonarr v3 will automatically detect these file changes during scheduled library scans; same for Radarr v3.

Do you happen to know how often these scheduled library scans run? I have some shows from a few days ago that still haven't updated.

 

~Spritz

Link to comment

I just installed this to test out, and the first thing I noticed is that other than selecting the codec, there's no method of configuring the quality.

 

Is the quality/resolution of the codecs just arbitrarily selected? What is the conversion actually going to end up being equivalent to: 480p? 720p? 1080p?

 

I don't see anything mentioned anywhere about what the resulting file is actually going to be.

 

Apologies if it's already been addressed somewhere in the last 27 pages, but it seems as though your library is going to be in the preferred format of what Josh5 wants in his library. I'm going to start converting some files and play around a bit, but it seems the only difference between this and Handbrake is that Unmanic can scan your existing libraries, whereas Handbrake scans a watch directory but offers you all the format customisation you want.

 

This is not a diss to Unmanic; I'm just wondering what the codec configuration actually is.

Link to comment

Any idea how I can resolve this error?

 

"2020-08-01T22:00:59:ERROR:Unmanic.Worker-2 - [FORMATTED] - Exception in processing job with Worker-2: - 'utf-8' codec can't decode byte 0xb2 in position 24: invalid start byte"
 

This was caused by the audio. I had to change the settings just to pass it through.
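The error itself is easy to reproduce: byte 0xb2 is not a valid UTF-8 start byte, which is the kind of thing malformed stream metadata can contain. A sketch of the failure and the usual tolerant-decode workaround (whether Unmanic can apply this internally is a separate question):

```python
# Illustrative bytes, not taken from a real media file.
raw = b"Tag: \xb2 bad metadata"

try:
    raw.decode("utf-8")  # raises UnicodeDecodeError on the 0xb2 byte
except UnicodeDecodeError as exc:
    print("decode failed:", exc)

# A tolerant decode substitutes U+FFFD (the replacement character)
# instead of raising, at the cost of losing the original byte.
print(raw.decode("utf-8", errors="replace"))
```

Changing the audio settings to passthrough sidesteps the decode entirely, which is why that worked as a workaround.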

Edited by dertbv
Link to comment
