Josh.5

[Support] Josh5 - Unmanic - Library Optimiser


I'm taking a look now

 

Edit:

This may require a bit more investigation. I'm leaving some tests running while I head to bed. I'll let you know what I find in the morning.

Edited by Josh.5
1 hour ago, Josh.5 said:

What are the characters of this file:
 


Storage Wars - S12E09 - Let's Give \udce2\udc80\udc98Em Something to Tonka About - WEBDL-720p - h264 AAC.mkv

I have a funny feeling that this is no longer a problem, and that somehow the file's name managed to make it through the conversion process and get written to the history in this messed-up state.

The changes made in the last update should prevent this from happening again. If it does happen again, please report it to me. I will see if I can come up with a tidy way to gracefully handle this error in the WebUI so it does not error out quite so hard...

Not sure to be honest.

 

Just adding that mine has stopped encoding too, same as the above.


@BomB191

@zAdok

@trekkiedj

I've pushed an update to the docker container again tonight. There was an issue with the new method for parsing progress information: it was piping the output to a temp file to read, and this was somehow falling over once you gave it a file of a decent size, but at random points (anywhere from 30MB to 60MB in). I missed this as all my test files are 10MB or less. I've refactored this code and added support for elapsed time, time position, frame, fps, speed, bitrate and file size, along with percent. While this information is now available, I have yet to implement it in the worker elements of the WebUI, but I'll see if I can throw that together this week. If you care to look at what info is there, you can open http://{UNMANIC_IP}:{UNMANIC_PORT}/?ajax=workersInfo while you are converting to see all your jobs' information in JSON format.
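For anyone curious, polling that endpoint from Python is a one-liner with the standard library. The host/port below are placeholders for your own setup, and the sample payload is a made-up illustration of the kind of per-worker progress fields described above, not Unmanic's documented schema:

```python
import json
from urllib.request import urlopen

def fetch_workers_info(host="192.168.1.10", port=8888):
    # Poll the ajax endpoint mentioned above while a conversion is running.
    # Host and port are placeholders for your own Unmanic instance.
    with urlopen(f"http://{host}:{port}/?ajax=workersInfo") as resp:
        return json.load(resp)

# Made-up example of the kind of progress fields the post describes
# (percent, fps, speed, etc.) -- the real key names may differ.
sample = '{"workers": [{"id": 1, "percent": 42.5, "fps": 87.3, "speed": "3.2x"}]}'
info = json.loads(sample)
print(info["workers"][0]["percent"])
```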

 

PS.

This is now version 0.0.1-beta3 (displayed in the bottom left of the WebUI)

Edited by Josh.5
18 minutes ago, Josh.5 said:

This is now version 0.0.1-beta3 (displayed in the bottom left of the WebUI)

Trying this now. Thanks again Josh

6 hours ago, Josh.5 said:

This is now version 0.0.1-beta3 (displayed in the bottom left of the WebUI)

You rock @Josh.5

 

So far so good, getting past the 40MB point which I wasn't before.

1 hour ago, zAdok said:

So far so good, getting past the 40MB point which I wasn't before.

Agreed. So far so good!


Yep, the problem seems to have been fixed now! Thank you.

I'll point it back at the entire kids folder and let it go nuts. Will report any issues.

 


@Josh.5 I have been running this for 24 hours now and everything seems 100% rock solid. Thanks for your hard work!


This is an awesome app! I've been waiting for some time for something like this.

Does anyone else get 100% CPU usage on 1 core when it's idle?

For testing I've allocated everything to run on 0/4 so there is nothing on the others, and Unmanic has 1/5, 2/6, 3/7 - it jumps around which core it's using.

[screenshots of per-core CPU usage]

2 hours ago, Ezzy91 said:

Does anyone else get 100% CPU usage on 1 core when it's idle?

I haven't seen this myself. I would hazard a guess that it must be Unraid itself.

To answer your question in a more technical way: the core application of Unmanic is a single-process application, so it can only take advantage of a single processor at a time.

There was a bug that I fixed last week that caused it to slam the CPU while the main process was idle.

Can you confirm that you are running the latest version of the docker image, as it should not be doing this any longer?

To be clear, the main Unmanic service runs in a single process (as is the nature of any Python application), but each worker spawns a separate process that can take advantage of multiple processors for the encoding task.
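That worker pattern can be sketched in a few lines: a single-process parent runs worker threads, and each thread hands the heavy lifting to a child process. In the sketch below a harmless placeholder command stands in for ffmpeg so it runs anywhere; the real encode subprocess is what gets to use every core.

```python
import subprocess
import sys
import threading

def worker(job, results):
    # Each worker is just a thread inside the single main process, but the
    # actual encode runs in a separate child process (ffmpeg in reality;
    # a placeholder command here so the sketch works without ffmpeg).
    # That child is free to use every CPU core, even though the parent
    # Python service occupies only one.
    cmd = [sys.executable, "-c", f"print('encoding {job}')"]
    out = subprocess.run(cmd, capture_output=True, text=True)
    results[job] = out.stdout.strip()

results = {}
threads = [threading.Thread(target=worker, args=(j, results))
           for j in ("a.mkv", "b.mkv")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)
```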

Sent from my ONE E1003 using Tapatalk


Hi, thought I would give this a go, and it works great apart from the same issue as above: it maxes out a single CPU core when idle.

 

Version in the bottom corner of the webpage says

 

[screenshot showing the reported version]


Thanks, I did a force update just to make sure. 

webUI says 

Version - 0.0.1-beta3 


Happy little update for you @Josh.5: kids' TV started at 892GB and after 10ish days we're down to 748GB. No idea how far along I am; its selection seems rather random. It started at Thomas the Tank Engine and is now doing Adventure Time.

On 1/17/2019 at 8:40 AM, Ezzy91 said:

Version - 0.0.1-beta3

I'm also seeing 1 CPU @ 100%.  Currently running Version - 0.0.1-beta3.


I'm running Unmanic on both servers now. My second server is a dual-Xeon setup. Can you think of a reason Unmanic would only use one processor? I'm pegged at 100 percent usage on half my cores, leading me to believe it's only utilizing one of the two CPUs.

I did add "--cpu-shares=2" to the extra parameters in an attempt to set Unmanic at a lower priority so it doesn't choke out my system.

Edited by munit85


Josh.5 said in a previous post: "To be clear, the main unmanic service runs on a single process (as is the nature of any python application), but each worker spawns a separate process that can take advantage of multiple processors for the encoding task."

 

So I don't think that you will be able to get it to use more than the 1 core per file being converted.

2 hours ago, MMW said:

So I don't think that you will be able to get it to use more than the 1 core per file being converted.

I get all 4 cores used to their max when converting a single file.   I believe the ffmpeg libraries are highly optimised to exploit multi-core systems.


Interesting. Going to see exactly what mine does when I convert just 1 file and not the standard 3 that it was running with.

 




MMW said:

So I don't think that you will be able to get it to use more than the 1 core per file being converted.


No, you will be able to. The spawned ffmpeg process will use all available CPU cores. There is a difference between a "process" and a "processor". The main Python application runs multiple threads, but in a single process (usually this means the application will only use one core at a time while idle). When it starts an ffmpeg job as a separate subprocess, ffmpeg is able to also create a number of subprocesses itself. The ffmpeg subprocesses are optimised by default to take advantage of all CPU cores (processors) available.

So...

If you run only one worker with one ffmpeg job, it will still convert using all available CPU cores.

The reason for having multiple workers is that you will likely be bottlenecked by other things (such as HDD read/write). Therefore running 2-3 workers/jobs at a time should get you the fastest conversion of your library.

If you have a CPU with 16 cores, you probably will not be able to saturate them with 2 jobs; you will need to increase the number of workers. However, a 4-core CPU will likely only handle 1-2 jobs at a time. Any more and you may start to slow down your library conversion.

All this needs testing, BTW, to see what works faster.
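The rule of thumb above could be sketched as a tiny helper. The ratio (roughly one worker per 4 cores, minimum 1) is my own guess at the advice, not anything Unmanic implements, and should be benchmarked against your own disk bottleneck:

```python
import os

def suggested_workers(cores=None):
    # Rough heuristic: a 4-core CPU handles 1-2 jobs, while a 16-core CPU
    # needs more workers to stay saturated. One worker per ~4 cores,
    # clamped to at least 1. This is a guess, not a tested rule.
    cores = cores or os.cpu_count() or 1
    return max(1, cores // 4)

print(suggested_workers(4), suggested_workers(16))
```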

Sent from my ONE E1003 using Tapatalk


Nice. I've only got a four-core AMD myself, but I'm running a few tests to see how it best handles the conversion.

On 1/20/2019 at 12:24 AM, SimonG said:

I'm also seeing 1 CPU @ 100%.  Currently running Version - 0.0.1-beta3.

I also see 1 core maxed out when idle. It jumps from core to core as well. As soon as I stop the docker, that behaviour stops. @Josh.5 can I get some logs for you on this? Just let me know what log files you need.

Yea. I'm seeing it also. I have an idea of what it is. I'll let you know when I have a fix. Sorry about the delay. I'm pretty busy at work this week.

Sent from my ONE E1003 using Tapatalk

Just now, Josh.5 said:

I'll let you know when I have a fix.

No dramas mate. Just happy to be able to test this app. Already dropped 2% of array utilisation :) 

Yea, same. I have a bunch of series that I keep for rewatching etc. Most of the episodes were around 1GB each; now they are closer to 300MB each.

Sent from my ONE E1003 using Tapatalk

