Josh.5

Members
  • Content Count

    45
  • Joined

  • Last visited

  • Days Won

    3

Josh.5 last won the day on January 8

Josh.5 had the most liked content!

Community Reputation

31 Good

3 Followers

About Josh.5

  • Rank
    Advanced Member

  1. This is not hard-coded. This is a request from an open browser. Check which device on your network is 192.168.0.12; it will have an open browser tab viewing the web UI dashboard, which polls every 5 seconds for a worker status update. These logs are already on my to-do list: I will move them to their own file, as I did with the main service logs, and implement a 7-day log rotation.
  2. That may be something... I'll log it and look into it when I get time.
  3. @trekkiedj @itimpi Yea, you guys were right. I made a mistake with the config that caused it to try to export our logger to the settings file... oops. As you predicted, this broke things like running a scan on start, as the settings file was corrupted and everything reverted to defaults. There is an update building now. Once you have it, just open the web UI to the settings tab and click the "Submit" button to save the settings (this will overwrite the buggered settings file). Make sure you also double-check that all the settings are correct, as they have likely reverted to the defaults. I've added an item to my todo list (https://github.com/Josh5/unmanic/issues/21) that will add a layer of protection against this in the future. This will be easy enough to implement. Thanks for the report.
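The protection mentioned in that issue could take a shape like the following: never overwrite the settings file with something that cannot be parsed back. This is a hypothetical sketch, not Unmanic's actual code; the function name and JSON format are assumptions.

```python
import json
import os
import tempfile

def save_settings_safely(settings, path):
    # json.dumps raises TypeError for anything not serialisable
    # (e.g. a logger object accidentally placed in the settings dict),
    # so a bad write is rejected before the file on disk is touched.
    serialised = json.dumps(settings, indent=2)
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    try:
        with os.fdopen(fd, "w") as f:
            f.write(serialised)
        with open(tmp_path) as f:
            json.load(f)            # validate the round-trip
        os.replace(tmp_path, path)  # atomic rename on POSIX
    except Exception:
        os.unlink(tmp_path)
        raise
```

Writing to a temp file and renaming it into place means a crash mid-write leaves the previous good settings file intact, so a corrupted file can never cause the "everything reverted to defaults" failure described above.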
  4. I'll take a look this morning. Sent from my ONE E1003 using Tapatalk
  5. I've partially finished updating the logging. This may not cut down on log sizes yet, but now that this first part is done I can implement a log rotation so logs are only kept for X days.
  6. Yea buddy. Kiwi here.
  7. Thanks for the kind words. I'm glad this tool is useful to others. You can take a look at the commit log on GitHub: https://github.com/Josh5/unmanic/commits/master Anything committed to the master branch will be automatically built and tagged as "josh5/unmanic:latest". I have not yet set up any sort of release notes, but as a rule I do my best to ensure my commit messages on GitHub explain the reason for a code change and what it is fixing (as opposed to just saying what was changed, as many people do).
  8. Actually, I got halfway through the CPU-slamming and logging issues on Wednesday (it was a public holiday over here). There should be a build sitting on :latest that fixes the CPU slamming while idle. I have not been able to finish the other issue yet.
  9. It's possible I severely underestimated the amount of data going into the logs with debugging turned off. Sorry about that.
  10. Make sure you have debugging turned off. Unless you are chasing a fault, there is no point. The application is just logging to the Docker logs at the moment, so there is also no log rotation. Later on I'll update the container to log to a file and include a daily log rotation. With debugging turned off, your log should be about 10KB per file processed. With it enabled, this could be in the MBs, as it logs progress every second for every file.
  11. I'll figure something out for you.
  12. Yea, the logs are excessive when debugging is enabled. I'd suggest keeping it off unless you are encountering issues. If you really wish to keep debugging enabled, you can also truncate your Docker logs.
  13. Yea, same. I have a bunch of series that I keep for rewatching etc. Most of the episodes were around 1GB each; now they are closer to 300MB each.
  14. Yea, I'm seeing it also. I have an idea of what it is and I'll let you know when I have a fix. Sorry about the delay; I'm pretty busy at work this week.
  15. No, you will be able to. The spawned ffmpeg process will use all available CPU cores. There is a difference between a "process" and a "processor". The main Python application runs multiple threads on a single process (usually this means the application will only use one core at a time while idle). When it starts an ffmpeg job as a separate "subprocess", ffmpeg is able to create a number of subprocesses itself. The ffmpeg subprocesses are optimised by default to take advantage of all CPU cores (processors) available. So... if you run only one worker with one ffmpeg job, it will still convert using all available CPU cores. The reason for having multiple workers is that you will likely be bottlenecked by other things (such as HDD read/write speed), so running 2-3 workers/jobs at a time should get you the fastest conversion of your library. If you have a CPU with 16 cores, you probably will not be able to saturate them with 2 jobs and will need to increase the number of workers. However, a 4-core CPU will likely only handle 1-2 jobs at a time; any more and you may start to slow down your library conversion. All this needs testing, BTW, to see what works fastest.