CyrixDX4

Everything posted by CyrixDX4

  1. [quote] I'll need all the logs. There are items in your Post-Processor queue that are either taking a long time or are throwing exceptions and staying in cache. [/quote] I no longer have the logs, as I switched back to 0.0.9. I missed the simplicity of the earlier version, with fewer fiddly bits, and it was incredibly stable. I appreciate your hard work in bringing enhancements to Unmanic. I'll follow along, see what other upgrades come along, and try again later. For those folks who want to roll back to 0.0.9 (I know it's no longer supported), you can add the following to the Docker template under "Repository": josh5/unmanic:0.0.9 (see the sketch below).
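For anyone doing the rollback outside of an Unraid template, a minimal sketch of pinning that tag with plain Docker (the port and volume paths here are assumptions, not taken from the template):

  # Pull and run the legacy image by explicit tag instead of :latest.
  # Port and paths are placeholder assumptions; match your own template.
  docker pull josh5/unmanic:0.0.9
  docker run -d --name unmanic \
    -p 8888:8888 \
    -v /mnt/user/appdata/unmanic:/config \
    -v /mnt/user/media:/library \
    josh5/unmanic:0.0.9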
  2. I get this error repeatedly when I try to process a large library: [quote] 2021-10-10T14:16:46:WARNING:Unmanic.Foreman - [FORMATTED] - Postprocessor queue is over 4. Halting feeding workers until it drops. [/quote] When this happens I have to restart Unmanic, and then it will pick up and start processing files again. I have 4 workers all sitting idle, nothing processing, no scans or anything. I have plenty of disk space available for use.
  3. One of the big pros of Unmanic was its ability to shrink the size of the source file. I seem to be missing that as I run through my library. I have set the CRF to 20 and the preset to "slow", and I barely see any change in file size. Do I need to raise the CRF higher to get a better reduction in size?
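In plain FFmpeg terms (a sketch only; Unmanic assembles its own command, and the file names here are placeholders), those two settings correspond to:

  # Higher -crf = stronger compression, smaller file, lower quality;
  # -preset trades encode speed for compression efficiency, not quality.
  ffmpeg -i input.mkv -c:v libx265 -crf 20 -preset slow -c:a copy output.mkv

So yes: raising the CRF above 20 shrinks the output further, at the cost of quality; the "slow" preset only buys better compression per bit, not a smaller size target.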
  4. [quote] This wouldn't touch your subtitles. This only removes the PNG thumbnail image in the video container. [/quote] Wait, is THAT all it does? I thought there were images put into specific movies that display subtitles or other overlays on the screen (the John Wick movie, when they get into Keanu being Baba Yaga, etc.).
  5. Eh, that will break lots of subtitle files again, and I have a mountain of movies with baked-in subtitles. Do you need me to re-raise the ticket for this, since you closed it out recently?
  6. [quote] Can you post just the FFmpeg command log of the failed task? [/quote] Sorry, I got confused about which logs you wanted. ffmpg_error_chinese_story.txt
  7. You are using hardware encoding; I'm using CPU encoding. Big difference in performance between the two. I'm getting constant errors when converting MP4 -> MKV:

[quote]
Too many packets buffered for output stream 0:1.
x265 [info]: frame I:   25, Avg QP:18.95  kb/s: 5938.62
x265 [info]: frame P: 1018, Avg QP:20.10  kb/s: 4848.96
x265 [info]: frame B: 2053, Avg QP:23.89  kb/s: 1874.06
x265 [info]: Weighted P-Frames: Y:3.8% UV:3.3%
x265 [info]: consecutive B-frames: 16.8% 16.2% 25.8% 35.3% 5.9%
encoded 3096 frames in 3442.70s (0.90 fps), 2885.06 kb/s, Avg QP:22.60
Conversion failed!
[/quote]

I've reduced the buffer down to 2048 and still get this same issue. Why?
  8. Thanks for the info on where to search. These are my two errors: "set_mempolicy: Operation not permitted" and "Too many packets buffered for output stream 0:1". I have 256GB of RAM and was using a packet buffer of 10k; I've reduced that to 4096 and will see if that helps alleviate the packet-buffer error. Unsure why the mempolicy error is occurring.
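For reference, the buffer being tuned here is presumably FFmpeg's muxing queue; as a sketch (file names are placeholders), it is set as an output option like so:

  # -max_muxing_queue_size raises how many packets FFmpeg will queue per
  # output stream before failing with "Too many packets buffered".
  ffmpeg -i input.mp4 -c:v libx265 -c:a copy \
    -max_muxing_queue_size 4096 output.mkv

The set_mempolicy warning, by contrast, is typically just the container's seccomp profile denying x265's NUMA syscall, and is usually harmless.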
  9. Is there any way to see why a job "failed" with the red X by it? There's nothing in the logging or the data dashboard to explain why a file failed.
  10. Like others, I was LIVID that things had changed and nothing was scanning or processing. Then I watched the video and got a better understanding of what is going on and why things have changed. I was very comfortable with how things were, but times change, and the new options give more depth of control, which I'm beginning to come to grips with. Curmudgeonly and all that jazz.
My jobs are now taking ages (upwards of 18 hrs) to process when they used to take a few hours. I've set the preset to "slow" and dropped the CRF to 20 for the best quality per bitrate. I have 18 threads dedicated to these jobs and only 4 workers running.
How do I tell Unmanic to skip processing all audio and just carry it over as-is, like the options before? And have Unmanic skip subtitles and keep them as-is too (I think this is already enabled by default unless I turn the options off)? Any tips, optimizations, or plugins to bring back the old way Unmanic ran would be helpful. (See the sketch below for the FFmpeg pattern in question.)
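For anyone after the same behavior, the underlying FFmpeg pattern being described (a sketch only; Unmanic's plugins build the real command, and file names are placeholders) is to re-encode only the video while passing audio and subtitles through untouched:

  # Re-encode video only; copy all audio and subtitle streams as-is.
  ffmpeg -i input.mkv -map 0 \
    -c:v libx265 -crf 20 -preset slow \
    -c:a copy -c:s copy output.mkv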
  11. Was there any fix for this besides flashing the firmware on the card?
  12. I too am getting this same kernel error. Is there any update to the drivers being used?
  13. I'll wait for 6.9.1. There are always unforeseen bugs that crop up on hardware, and I've learned there's no reason to rush these things. Thanks for all the hard work in finally bringing this release out to the masses.
  14. I'll echo the others in thanking you for making this beautiful set of metrics and panels. My inner monitoring child is going bonkers. Now, forgive me for asking these questions:
1. Where is the install guide? I've combed through several pages, and I only see the Dockers that I need to install and some config files. Is there a step-by-step guide/wiki on how to implement this wonderment?
2. What is the JSON file for? What does it do? Where does it go?
If I missed a page that had the install instructions and full configs, that would be wonderful. I'm a bit lost and don't want to go willy-nilly installing all the Dockers and come back with "Well, now what?"
  15. That's what I tried, using the :VERSION tag, and it looks like everything is gone. Very odd. The only versions up are the 0.0.1-beta7 builds, and I can't pull those versions down for some reason.
  16. I'd love to know how you specified the container version on install, so that I and others can do the same.
  17. I'm still not able to convert mp4 files to mkv. Instead, the files are read by the container and then skipped. I've isolated things down to one specific folder to sidestep the inode 'issue'. Submitted a bug report here: https://github.com/Josh5/unmanic/issues/131
  18. Getting the same error as others:

[quote]
[E 201021 13:43:22 web:1788] Uncaught exception GET /dashboard/?ajax=pendingTasks&format=html (10.100.0.112)
HTTPServerRequest(protocol='http', host='xx', method='GET', uri='/dashboard/?ajax=pendingTasks&format=html', version='HTTP/1.1', remote_ip='xx')
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/peewee.py", line 3099, in execute_sql
    cursor.execute(sql, params or ())
sqlite3.OperationalError: database is locked

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/tornado/web.py", line 1697, in _execute
    result = method(*self.path_args, **self.path_kwargs)
  File "/usr/local/lib/python3.6/dist-packages/unmanic/webserver/main.py", line 59, in get
    self.handle_ajax_call(self.get_query_arguments('ajax')[0])
  File "/usr/local/lib/python3.6/dist-packages/unmanic/webserver/main.py", line 77, in handle_ajax_call
    self.render("main/main-pending-tasks.html", time_now=time.time())
  File "/usr/local/lib/python3.6/dist-packages/tornado/web.py", line 856, in render
    html = self.render_string(template_name, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/tornado/web.py", line 1005, in render_string
    return t.generate(**namespace)
  File "/usr/local/lib/python3.6/dist-packages/tornado/template.py", line 361, in generate
    return execute()
  File "main/main-pending-tasks_html.generated.py", line 5, in _tt_execute
    for pending_task in handler.get_pending_tasks(): # main/main-pending-tasks.html:4
  File "/usr/local/lib/python3.6/dist-packages/unmanic/webserver/main.py", line 103, in get_pending_tasks
    return self.foreman.task_queue.list_pending_tasks(limit)
  File "/usr/local/lib/python3.6/dist-packages/unmanic/libs/taskqueue.py", line 171, in list_pending_tasks
    if results:
  File "/usr/local/lib/python3.6/dist-packages/peewee.py", line 1987, in __len__
    self._ensure_execution()
  File "/usr/local/lib/python3.6/dist-packages/peewee.py", line 1969, in _ensure_execution
    self.execute()
  File "/usr/local/lib/python3.6/dist-packages/peewee.py", line 1886, in inner
    return method(self, database, *args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/peewee.py", line 1957, in execute
    return self._execute(database)
  File "/usr/local/lib/python3.6/dist-packages/peewee.py", line 2129, in _execute
    cursor = database.execute(self)
  File "/usr/local/lib/python3.6/dist-packages/peewee.py", line 3112, in execute
    return self.execute_sql(sql, params, commit=commit)
  File "/usr/local/lib/python3.6/dist-packages/peewee.py", line 3106, in execute_sql
    self.commit()
  File "/usr/local/lib/python3.6/dist-packages/peewee.py", line 2873, in __exit__
    reraise(new_type, new_type(exc_value, *exc_args), traceback)
  File "/usr/local/lib/python3.6/dist-packages/peewee.py", line 183, in reraise
    raise value.with_traceback(tb)
  File "/usr/local/lib/python3.6/dist-packages/peewee.py", line 3099, in execute_sql
    cursor.execute(sql, params or ())
peewee.OperationalError: database is locked
[/quote]

It was working flawlessly for months. I went and wiped my install, because it didn't matter too much what I had already encoded. Also still getting this error:

[quote]
[2020-10-21 16:42:12,454 pyinotify WARNING] Event queue overflowed.
[W 201021 16:42:12 pyinotify:929] Event queue overflowed.
[/quote]

Version - 0.0.1-beta7+752a414

The workers are hit-and-miss on when they decide to pick up and work on a file. Unmanic is not grabbing/converting my mp4s into mkvs like it once did. Not sure what broke and why it's not force-converting them, as I have all the flags set. Here is the error log/output of one of the files I'm trying to convert:

[quote]
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x55c3374e3880] stream 0, timescale not set
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/library/xxx/Project A (1983)/Project A [1080p].mp4':
  Metadata:
    major_brand     : mp42
    minor_version   : 0
    compatible_brands: mp42isomavc1
    creation_time   : 2019-12-25T02:24:21.000000Z
    title           : Project.A.1983.1080p.BluRay.H264.AC3,DD5.1
    artist          :
    album           :
    comment         :
    encoder         : DVDFab 11.0.4.2
  Duration: 01:45:38.02, start: 0.000000, bitrate: 5539 kb/s
    Stream #0:0(und): Video: h264 (avc1 / 0x31637661), yuv420p(bt709), 1920x808 [SAR 1:1 DAR 240:101], 4890 kb/s, 23.98 fps, 23.98 tbr, 24k tbn, 47.95 tbc (default)
    Metadata:
      creation_time   : 2019-12-25T02:24:21.000000Z
      encoder         : JVT/AVC Coding
    Stream #0:1(zho): Audio: ac3 (ac-3 / 0x332D6361), 48000 Hz, 5.1(side), fltp, 640 kb/s (default)
    Metadata:
      creation_time   : 2019-12-25T02:24:21.000000Z
    Side data:
      audio service type: main
    Stream #0:2(eng): Subtitle: dvd_subtitle (mp4s / 0x7334706D), 7 kb/s (default)
    Metadata:
      creation_time   : 2019-12-25T02:24:21.000000Z
    Stream #0:3: Video: png, rgba(pc), 640x269, 90k tbr, 90k tbn, 90k tbc (attached pic)
Multiple -c, -codec, -acodec, -vcodec, -scodec or -dcodec options specified for stream 0, only the last option '-c:v libx265' will be used.
Multiple -c, -codec, -acodec, -vcodec, -scodec or -dcodec options specified for stream 1, only the last option '-c:v libx265' will be used.
Stream mapping:
  Stream #0:0 -> #0:0 (h264 (native) -> hevc (libx265))
  Stream #0:3 -> #0:1 (png (native) -> hevc (libx265))
  Stream #0:1 -> #0:2 (copy)
  Stream #0:2 -> #0:3 (dvd_subtitle (dvdsub) -> subrip (srt))
Subtitle encoding currently only possible from text to text or bitmap to bitmap
[/quote]

I turned on debug and reduced my scanning to genres instead of my entire multi-TB movie directory. Is there an upper limit on file count where Python says "Too many files, can't process"?
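Reading that log, the failure is in the last two mappings: FFmpeg is treating the attached PNG poster (stream 0:3) as a second video stream, and trying to convert the bitmap dvd_subtitle track to text-based SubRip, which it refuses ("text to text or bitmap to bitmap" only). A sketch of a command that avoids both problems (file names are placeholders; this is not the exact command Unmanic generates):

  # Keep all streams except the attached-picture PNG (0:3), and copy the
  # bitmap DVD subtitle into the MKV instead of converting it to SRT.
  ffmpeg -i input.mp4 -map 0 -map -0:3 \
    -c:v libx265 -c:a copy -c:s copy output.mkv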
  19. Any way to 'force' a build through? There's a nasty bug for many private trackers that aren't reading cookie data, which was patched in the last 24 hours. I could wait till tomorrow, or hope you can release a manual build today. I do appreciate all your work.
  20. What's the point, then, of the Devices field in Additional Settings? That doesn't make any sense. I didn't want to have to go the route of manual entry, if that's the only way...
  21. How do I add additional network interfaces to my Docker containers? Use case: Syncthing can be managed over one port but run its replication stream over another, and I want all replication on my 40Gb network instead of my 1Gb network. I've searched all over this forum but am still unsure where to set this up. Adding a device doesn't work, as the container fails to recognize my bridge when I tell it to use "br4". (See the sketch below.)
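One approach that fits this use case (a sketch, assuming the 40Gb NIC sits behind the br4 bridge; the network name, subnet, gateway, and container name are all placeholder assumptions) is to create a second Docker network on that interface and attach the running container to it:

  # Create a macvlan network on the 40Gb bridge, then attach the existing
  # container to it as a second interface alongside its normal bridge.
  docker network create -d macvlan \
    --subnet=10.40.0.0/24 --gateway=10.40.0.1 \
    -o parent=br4 net40g
  docker network connect net40g syncthing

Syncthing's own settings would then need the sync/listen address bound to the new interface's IP while the GUI stays on the original one.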
  22. Keep me posted; I just dropped 40GbE into all my servers and haven't had a chance to mess with them yet. Upgraded from 10GbE.
  23. Just checked:

[quote]
5 TB Storage Limits [PRO]
Data volume you can manage with CloudBerry Backup is 5 TB in the PRO version and 200 GB in the freeware version. CloudBerry Lab doesn't offer storage; you need to buy it from storage providers separately.
[/quote]

Goddammit...
  24. Why are you on an unsupported/old build of UnRaid? 6.1 is deprecated; we are at 6.6.x now.
  25. Is it possible to select the version that I want to use? 3.3.16 works vastly better for some trackers vs. others.