[Support] Josh5 - Unmanic - Library Optimiser



1 hour ago, Josh.5 said:

What are the characters of this file:
 


Storage Wars - S12E09 - Let's Give \udce2\udc80\udc98Em Something to Tonka About - WEBDL-720p - h264 AAC.mkv

I have a funny feeling that this is no longer a problem, and that somehow the file's name made it through the conversion process and was written to this history in this messed-up state.

The changes made in the last update should prevent this from happening again, so if it does happen again, please report it to me. I will see if I can come up with a tidy way to gracefully handle this error in the WebUI so it does not error out quite so hard...

Not sure to be honest.

 

Just adding that mine has stopped encoding too, same as the above.
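For reference, the \udcXX sequences in that quoted filename are Python surrogate escapes: they appear when raw bytes (here E2 80 98, the UTF-8 encoding of a ' left single quote) get decoded with errors="surrogateescape" under the wrong assumptions. A minimal sketch of how such a name can be repaired (the helper name is mine, not anything in Unmanic):

```python
def repair_surrogates(text: str) -> str:
    """Turn surrogate escapes back into their original raw bytes,
    then decode those bytes as UTF-8 to recover the real characters."""
    # Surrogates in U+DC80..U+DCFF round-trip to the bytes they stand
    # for; everything else encodes as normal UTF-8.
    raw = text.encode("utf-8", errors="surrogateescape")
    return raw.decode("utf-8")

garbled = "Let's Give \udce2\udc80\udc98Em Something to Tonka About"
print(repair_surrogates(garbled))  # Let's Give ‘Em Something to Tonka About
```

This only works when the escaped bytes really are valid UTF-8, as they are here.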


@BomB191

@zAdok

@trekkiedj

I've pushed an update to the docker container again tonight. There was an issue with the new method for parsing progress information: it was piping the output to a temp file to read, and this was somehow falling over on files of any decent size, at random points (anywhere from 30MB to 60MB in). I missed this because all my test files are 10MB or less.

I've refactored this code and added support for elapsed time, time position, frame, fps, speed, bitrate and file size, along with percent. While this information is now available, I have yet to implement it in the worker elements of the WebUI, but I'll see if I can throw that together this week. If you care to look at what info is there, you can open http://{UNMANIC_IP}:{UNMANIC_PORT}/?ajax=workersInfo while you are converting to see all your jobs' information in JSON format.

 

PS.

This is now version 0.0.1-beta3 (displayed in the bottom left of the WebUI)
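That endpoint can also be polled from a script. A hedged sketch — the URL pattern comes from the post above, but the JSON key names (`current_file`, `percent`, `fps`, `speed`) are my guesses at the structure, not confirmed field names:

```python
import json
import urllib.request

def fetch_workers_info(host: str, port: int):
    """Fetch current worker progress from a running Unmanic instance."""
    url = "http://{}:{}/?ajax=workersInfo".format(host, port)
    with urllib.request.urlopen(url, timeout=5) as resp:
        return json.loads(resp.read().decode("utf-8"))

def summarise_worker(worker: dict) -> str:
    """One-line progress summary for a single worker record.

    The key names used here are illustrative assumptions."""
    return "{}: {}% @ {} fps ({}x)".format(
        worker.get("current_file", "idle"),
        worker.get("percent", 0),
        worker.get("fps", 0),
        worker.get("speed", 0),
    )
```

Polling this in a loop during a conversion would show the fields filling in as each job progresses.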

18 minutes ago, Josh.5 said:

I've pushed an update to the docker container again tonight. There was an issue with the new method for parsing progress information. [...] This is now version 0.0.1-beta3 (displayed in the bottom left of the WebUI)

Trying this now. Thanks again Josh

6 hours ago, Josh.5 said:

I've pushed an update to the docker container again tonight. There was an issue with the new method for parsing progress information. [...] This is now version 0.0.1-beta3 (displayed in the bottom left of the WebUI)

You rock @Josh.5

 

So far so good; it's getting past the 40MB point, which it wasn't before.


This is an awesome app! I've been waiting for some time for something like this.

Does anyone else get 100% CPU usage on one core when it's idle?

For testing I've allocated everything else to run on cores 0/4, so there is nothing on the others, and Unmanic has 1/5, 2/6, 3/7. It jumps around which core it's using.

2 hours ago, Ezzy91 said:

This is an awesome app! [...] Does anyone else get 100% CPU usage on one core when it's idle?

I've not seen this myself. I would hazard a guess that it must be Unraid itself.

Does anyone else get 100% CPU usage on one core when it's idle? [...]
To answer your question in a more technical way: the core application of Unmanic is a single-process application, so it can only take advantage of a single processor at a time.

There was a bug that I fixed last week that caused it to slam the CPU while the main process was idle.

Can you confirm that you are running the latest version of the docker image? It should not be doing this any longer.

To be clear, the main unmanic service runs on a single process (as is the nature of any python application), but each worker spawns a separate process that can take advantage of multiple processors for the encoding task.
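A rough illustration of that process model (this is not Unmanic's actual code, and the ffmpeg command line is a generic transcode I've made up for the sketch):

```python
import subprocess

def build_ffmpeg_command(src: str, dst: str) -> list:
    # A generic h264/AAC transcode; ffmpeg's encoder threads will
    # spread the work across all available cores by default.
    return ["ffmpeg", "-y", "-i", src,
            "-c:v", "libx264", "-c:a", "aac", dst]

def spawn_worker_job(src: str, dst: str) -> subprocess.Popen:
    # Popen returns immediately: the encode runs in its own OS
    # process, while the single-process Python service carries on
    # serving the WebUI and scheduling other jobs.
    return subprocess.Popen(build_ffmpeg_command(src, dst),
                            stdout=subprocess.DEVNULL,
                            stderr=subprocess.DEVNULL)
```

So the Python service itself sits on one core, but the spawned encode is free to saturate all of them.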

Sent from my ONE E1003 using Tapatalk


I'm running Unmanic on both servers now. My second server is a dual Xeon setup. Can you think of a reason Unmanic would only use one processor? I'm pegged at 100 percent usage on half my cores, leading me to believe it's only utilising one processor.

 

I did add "--cpu-shares=2" to the extra parameters in an attempt to set Unmanic at a lower priority so it doesn't choke out my system.


Josh.5 said in a previous post: "To be clear, the main unmanic service runs on a single process (as is the nature of any python application), but each worker spawns a separate process that can take advantage of multiple processors for the encoding task."

 

So I don't think you will be able to get it to use more than 1 core per file being converted.

2 hours ago, MMW said:

So I don't think that you will be able to get it to use more than the 1 core per file being converted. [...]

I get all 4 cores used to their max when converting a single file.   I believe the ffmpeg libraries are highly optimised to exploit multi-core systems.




So I don't think that you will be able to get it to use more than the 1 core per file being converted.


No, you will be able to. The spawned ffmpeg process will use all available CPU cores. There is a difference between a "process" and a "processor". The main Python application runs multiple threads, but in a single process (usually this means the application will only use one core at a time while idle). When it starts an ffmpeg job as a separate "subprocess", ffmpeg is able to create a number of subprocesses itself, and those ffmpeg subprocesses are optimised by default to take advantage of all CPU cores (processors) available.
So...
If you run only one worker with one ffmpeg job, it will still convert using all CPU cores available.
The reason for having multiple workers is because you will likely be bottlenecked by other things (such as HDD RW). Therefore running 2-3 workers/jobs at a time should get you the fastest conversion of your library.
If you have a CPU with 16 cores, you probably will not be able to saturate them with 2 jobs; you will need to increase the number of workers. However, a 4-core CPU will likely only handle 1-2 jobs at a time. Any more and you may start to slow down your library conversion.

All this needs testing, BTW, to see what works fastest.
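The multiple-workers idea above, sketched as code (a toy model: the `convert_one` callable stands in for launching and waiting on an ffmpeg subprocess):

```python
from concurrent.futures import ThreadPoolExecutor

def run_library_conversion(files, convert_one, workers=2):
    """Convert a list of files using a fixed-size worker pool.

    Each worker thread handles one file at a time. In the real
    application the thread would mostly be blocked waiting on its
    ffmpeg subprocess, which is where the CPU work actually happens,
    so a handful of workers is enough to keep the cores busy."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(convert_one, files))
```

Raising `workers` past the point where CPU or disk I/O is saturated only adds contention, which matches the 1-2 jobs per 4-core CPU guideline above.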


On 1/20/2019 at 12:24 AM, SimonG said:

I'm also seeing 1 CPU @ 100%.  Currently running Version - 0.0.1-beta3.

I also see one core maxed out when idle. It jumps from core to core too. As soon as I stop the docker, that behaviour stops. @Josh.5, can I get some logs for you on this? Just let me know what log files you need.

Yea. I'm seeing it also. I have an idea of what it is. I'll let you know when I have a fix. Sorry about the delay. I'm pretty busy at work this week.

Sent from my ONE E1003 using Tapatalk

Just now, Josh.5 said:

Yea. I'm seeing it also. I have an idea of what it is. [...]

No dramas mate. Just happy to be able to test this app. Already dropped 2% of array utilisation :) 

No dramas mate. Just happy to be able to test this app. [...]
Yea, same. I have a bunch of series that I keep for rewatching etc. Most of the episodes were around 1GB each; now they are closer to 300MB each.


