CyrixDX4
Posts posted by CyrixDX4
-
Here is the updated script:
#!/bin/bash
wget https://slackware.uk/slackware/slackware64-14.2/slackware64/l/gd-2.2.1-x86_64-1.txz
upgradepkg --install-new gd-2.2.1-x86_64-1.txz
wget https://slackware.uk/slackware/slackware64-14.2/slackware64/x/fontconfig-2.11.1-x86_64-2.txz
upgradepkg --install-new fontconfig-2.11.1-x86_64-2.txz
wget https://slackware.uk/slackware/slackware64-14.2/slackware64/l/harfbuzz-1.2.7-x86_64-1.txz
upgradepkg --install-new harfbuzz-1.2.7-x86_64-1.txz
wget https://slackware.uk/slackware/slackware64-14.2/slackware64/l/freetype-2.6.3-x86_64-1.txz
upgradepkg --install-new freetype-2.6.3-x86_64-1.txz
wget https://slackware.uk/slackware/slackware64-14.2/slackware64/x/libXpm-3.5.11-x86_64-2.txz
upgradepkg --install-new libXpm-3.5.11-x86_64-2.txz
wget https://slackware.uk/slackware/slackware64-14.2/slackware64/x/libX11-1.6.3-x86_64-2.txz
upgradepkg --install-new libX11-1.6.3-x86_64-2.txz
wget https://slackware.uk/slackware/slackware64-14.2/slackware64/x/libxcb-1.11.1-x86_64-1.txz
upgradepkg --install-new libxcb-1.11.1-x86_64-1.txz
wget https://slackware.uk/slackware/slackware64-14.2/slackware64/x/libXau-1.0.8-x86_64-2.txz
upgradepkg --install-new libXau-1.0.8-x86_64-2.txz
wget https://slackware.uk/slackware/slackware64-14.2/slackware64/x/libXdmcp-1.1.2-x86_64-2.txz
upgradepkg --install-new libXdmcp-1.1.2-x86_64-2.txz
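For what it's worth, the same package list can be driven by a loop instead of repeating the wget/upgradepkg pairs. A minimal sketch; with DRY_RUN=1 it only prints the commands, since wget and upgradepkg need a live Slackware 14.2 box:

```shell
#!/bin/bash
# Loop-based sketch of the upgrade script above. With DRY_RUN=1 it only prints
# the wget/upgradepkg commands; unset DRY_RUN to actually fetch and install.
MIRROR="https://slackware.uk/slackware/slackware64-14.2/slackware64"
DRY_RUN=1

# subdir/package pairs copied from the script above
packages="
l/gd-2.2.1-x86_64-1.txz
x/fontconfig-2.11.1-x86_64-2.txz
l/harfbuzz-1.2.7-x86_64-1.txz
l/freetype-2.6.3-x86_64-1.txz
x/libXpm-3.5.11-x86_64-2.txz
x/libX11-1.6.3-x86_64-2.txz
x/libxcb-1.11.1-x86_64-1.txz
x/libXau-1.0.8-x86_64-2.txz
x/libXdmcp-1.1.2-x86_64-2.txz
"

for p in $packages; do
    pkg="${p#*/}"        # strip the l/ or x/ subdirectory prefix
    if [ -n "$DRY_RUN" ]; then
        echo "wget $MIRROR/$p && upgradepkg --install-new $pkg"
    else
        wget "$MIRROR/$p" && upgradepkg --install-new "$pkg"
    fi
done
```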
-
-
I too started getting this error, oddly enough. No major changes on my end; still using 6.9.2.
Also getting Docker complaining about updates being "not available".
-
An update has been released that fixes the issue. You can upgrade back to "latest" and the webUI is working again.
-
-
8 hours ago, writablevulture said:
Same for me too.
This is the version that you roll back to:
linuxserver/deluge:amd64-2.0.3-2201906121747ubuntu18.04.1-ls127
-
Not having an issue with binhex's deluge, which looks like it got an update at the same time. Seems like whatever was in yesterday's update broke something badly.
-
Latest update broke something:
[WARNING ][deluge.i18n.util :83 ] IOError when loading translations: [Errno 2] No translation file found for domain: 'deluge'
full log output:
[linuxserver.io ASCII-art banner]
Brought to you by linuxserver.io
-------------------------------------
To support LSIO projects visit:
https://www.linuxserver.io/donate/
-------------------------------------
GID/UID
-------------------------------------
User uid: 99
User gid: 100
-------------------------------------
[cont-init.d] 10-adduser: exited 0.
[cont-init.d] 30-config: executing...
[cont-init.d] 30-config: exited 0.
[cont-init.d] 90-custom-folders: executing...
[cont-init.d] 90-custom-folders: exited 0.
[cont-init.d] 99-custom-scripts: executing...
[custom-init] no custom files found exiting...
[cont-init.d] 99-custom-scripts: exited 0.
[cont-init.d] done.
[services.d] starting services
You are using a legacy method of defining umask
please update your environment variable from UMASK_SET to UMASK
to keep the functionality after July 2021
You are using a legacy method of defining umask
please update your environment variable from UMASK_SET to UMASK
to keep the functionality after July 2021
[services.d] done.
09:38:48 [WARNING ][deluge.i18n.util :83 ] IOError when loading translations: [Errno 2] No translation file found for domain: 'deluge'
Exception ignored in: <bound method CorePluginBase.__del__ of <deluge_label.core.Core object at 0x14e75774cb38>>
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/deluge/plugins/pluginbase.py", line 41, in __del__
File "/usr/lib/python3/dist-packages/deluge/component.py", line 490, in get
KeyError: ('RPCServer',)
Exception ignored in: <bound method CorePluginBase.__del__ of <deluge_autoadd.core.Core object at 0x14e7564b0cf8>>
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/deluge/plugins/pluginbase.py", line 41, in __del__
File "/usr/lib/python3/dist-packages/deluge/component.py", line 490, in get
KeyError: ('RPCServer',)
Exception ignored in: <bound method CorePluginBase.__del__ of <deluge_stats.core.Core object at 0x14e7564c4a20>>
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/deluge/plugins/pluginbase.py", line 41, in __del__
File "/usr/lib/python3/dist-packages/deluge/component.py", line 490, in get
KeyError: ('RPCServer',)
[cont-finish.d] executing container finish scripts...
[cont-finish.d] done.
[s6-finish] waiting for services.
[s6-finish] sending all processes the TERM signal.
[s6-finish] sending all processes the KILL signal and exiting.
-
13 hours ago, Josh.5 said:
18 hours ago, CyrixDX4 said:
I get this error repeatedly when I try to process a large library:
2021-10-10T14:16:46:WARNING:Unmanic.Foreman - [FORMATTED] - Postprocessor queue is over 4. Halting feeding workers until it drops.
When this happens I have to restart unmanic and then it will pickup and start processing files again.
I have 4 workers all sitting idle, nothing processing, no scans or anything. I have plenty of disk space available for use.
I'll need all the logs. There are items in your Post-Processor queue that are either taking a long time or are throwing exceptions and staying in cache.
I no longer have the logs, as I switched back to 0.0.9. I missed the simplicity of the earlier version, with fewer fiddly bits, and it was incredibly stable.
I appreciate your hard work in bringing enhancements to unmanic. I'll follow along and see what other upgrades come along and try again later.
For those folks that want to roll back to 0.0.9 (I know it's no longer supported) you can add the following to the docker template under "repository":
josh5/unmanic:0.0.9
-
I get this error repeatedly when I try to process a large library:
2021-10-10T14:16:46:WARNING:Unmanic.Foreman - [FORMATTED] - Postprocessor queue is over 4. Halting feeding workers until it drops.
When this happens I have to restart unmanic and then it will pickup and start processing files again.
I have 4 workers all sitting idle, nothing processing, no scans or anything. I have plenty of disk space available for use.
-
One of the big pros of Unmanic was its ability to shrink the size of the source file. I seem to be missing that feature as I run through my library. I have set the CRF to 20 and the preset to "slow", and I barely see any change in file size.
Do I need to increase CRF to be higher in order to gain a better reduction in size?
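For reference: with x265, a higher CRF means a smaller file at lower quality (the x265 default is 28, so CRF 20 is on the high-quality end, which would explain the minimal shrinkage), while the preset only trades encode time for compression efficiency. A hedged sketch of the kind of ffmpeg invocation involved; paths and values are placeholders, not Unmanic's actual command:

```shell
# Illustrative only: how CRF and preset map onto an ffmpeg/libx265 command.
# Higher CRF -> smaller file, lower quality; the preset does not set a bitrate.
INPUT="input.mkv"     # placeholder path
OUTPUT="output.mkv"   # placeholder path
CRF=23                # try 22-26 if CRF 20 barely shrinks the file

CMD="ffmpeg -i $INPUT -c:v libx265 -preset slow -crf $CRF -c:a copy $OUTPUT"
echo "$CMD"           # dry run; remove the echo/quotes to execute
```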
-
20 hours ago, Josh.5 said:
20 hours ago, CyrixDX4 said:
eh that will break lots of subtitle files again and I have a mountain of movies with baked in subtitles. Do you need me to reraise the ticket for this as you had closed it out recently.
This wouldn't touch your subtitles. This only removes the PNG thumbnail image in the video container.
Wait, is THAT all it does? I thought there were images embedded in specific movies that display subtitles or other graphics on screen (the John Wick scene where they talk about Keanu being Baba Yaga, etc.).
-
35 minutes ago, Josh.5 said:
So the cause of the failure is the PNG video stream.
I will see if I can come up with an improvement to the video encoder plugin, but if you want a quick fix I would suggest installing the "Strip all image streams from file" plugin and putting that first in the worker flow. This will remove the PNG stream from your file.
Eh, that will break lots of subtitle files again, and I have a mountain of movies with baked-in subtitles. Do you need me to reraise the ticket for this, as you closed it out recently?
-
4 minutes ago, Josh.5 said:
17 minutes ago, CyrixDX4 said:
log
Can you post just the FFmpeg command log of the failed task?
Sorry, I got confused about which logs you wanted.
-
3 hours ago, guest_user said:
You are using hardware encoding; I'm using CPU encoding. Big difference in performance between the two.
I'm getting constant errors when converting MP4 to MKV:
[quote]
Too many packets buffered for output stream 0:1.
x265 [info]: frame I: 25, Avg QP:18.95 kb/s: 5938.62
x265 [info]: frame P: 1018, Avg QP:20.10 kb/s: 4848.96
x265 [info]: frame B: 2053, Avg QP:23.89 kb/s: 1874.06
x265 [info]: Weighted P-Frames: Y:3.8% UV:3.3%
x265 [info]: consecutive B-frames: 16.8% 16.2% 25.8% 35.3% 5.9%
encoded 3096 frames in 3442.70s (0.90 fps), 2885.06 kb/s, Avg QP:22.60
Conversion failed!
[/quote]
I've reduced the buffer down to 2048 and still get this same issue. Why?
-
Thanks for the info on where to search. These are my two errors:
set_mempolicy: Operation not permitted
and
Too many packets buffered for output stream 0:1.
I have 256GB of RAM and was using a packet buffer of 10k. I reduced that to 4096. We'll see if that helps alleviate the packet-buffer error.
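Assuming the "packet buffer" setting here maps to ffmpeg's `-max_muxing_queue_size` (a guess on my part), it's worth noting the usual remedy for "Too many packets buffered for output stream" is to raise that value, not lower it. A dry-run sketch with placeholder filenames:

```shell
# Hypothetical: raising -max_muxing_queue_size for the failing conversion.
# Echo only; drop the echo/quotes to run it for real.
QUEUE=9999   # ffmpeg's default is 128; this error usually wants it larger
CMD="ffmpeg -i input.mp4 -max_muxing_queue_size $QUEUE -c:v libx265 output.mkv"
echo "$CMD"
```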
Unsure why the mempolicy error is occurring.
-
Is there any way to see why a job "failed" with the red X by it? There's nothing in the logging or data dashboard explaining why a file failed.
-
Like others, I was LIVID that things had changed and nothing was scanning and processing. Then I watched the video and got a better understanding of what is going on and why things changed. I was very comfortable with how things were, but times change, and the new options give more depth of control, which I'm beginning to come to grips with. Curmudgeonly and all that jazz.
My jobs now take ages (upwards of 18 hrs) when they used to take a few hours. I've set the preset to "slow" for maximum compression and dropped the CRF to 20. I have 18 threads dedicated to these jobs and only 4 workers running.
How do I tell Unmanic to skip processing audio entirely and copy it as-is, like the old options did? And to have Unmanic skip subtitles and keep them as-is (I think this is already enabled by default unless I turn the options off)?
Any tips, optimizations, or plugins to bring back the old way Unmanic ran would be helpful.
-
I'll wait for 6.9.1.
There are always unforeseen bugs that crop up on hardware, and I've learned there's no reason to rush these things.
Thanks for all the hard work in finally bringing this release out to the masses.
-
-
I'll echo others thanking you for making this beautiful set of metrics and panels. My inner monitoring child is going bonkers.
Now, forgive me for asking these questions:
1. Where is the install guide? I've combed through several pages and I only see the dockers that I need to install and some config files. Is there a step-by-step guide/wiki on how to implement this wonderment?
2. What is the JSON file for? What does it do? Where does it go?
If I missed a page with the install instructions and full configs, that would be wonderful. I'm a bit lost and don't want to go willy-nilly installing all the dockers and come back with "Well, now what?"
-
10 minutes ago, Transient said:
As the previous user said, just add :119 to the end of the repository name. This means to pull the one tagged 119. When left off, it will pull the one tagged latest.
To find the tags, you can check Docker Hub. The easiest way IMO is to turn on Advanced View in Unraid (top right) then scroll down to your Unmanic container and click the little link that says By: josh5/unmanic. That'll take you over to Docker Hub and you can go to the Tags tab and see all the previous versions.
...only I just tried it and it looks like josh5 has since removed all tags other than latest so you may be unable to pull down 119. I'm not sure why he would do that. Maybe he didn't and there's an issue with Docker Hub at the moment?
That's what I tried, using the :VERSION suffix, and it looks like everything is gone. Very odd.
The only versions up are the 0.0.1-beta7 ones, and I can't pull those down for some reason.
-
10 minutes ago, Transient said:
It did indeed fix the error, however it appears to have been unrelated. Now I have no errors in the log, but it still doesn't process anything. All the workers are idle even though there are several pending. If I roll back to 119 everything works again.
Is there any information I can provide that would be useful in identifying the problem?
I'd love to know how you specified the container version on install so myself and others can do the same.
-
I'm still not able to convert mp4 to mkv files. Instead, the files are read by the container and then skipped.
I've isolated it down to one specific folder to skip the inode 'issue'.
Submitted a bug issue here:
https://github.com/Josh5/unmanic/issues/131
-
Getting same error as others:
[E 201021 13:43:22 web:1788] Uncaught exception GET /dashboard/?ajax=pendingTasks&format=html (10.100.0.112)
HTTPServerRequest(protocol='http', host='xx', method='GET', uri='/dashboard/?ajax=pendingTasks&format=html', version='HTTP/1.1', remote_ip='xx')
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/peewee.py", line 3099, in execute_sql
    cursor.execute(sql, params or ())
sqlite3.OperationalError: database is locked

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/tornado/web.py", line 1697, in _execute
    result = method(*self.path_args, **self.path_kwargs)
  File "/usr/local/lib/python3.6/dist-packages/unmanic/webserver/main.py", line 59, in get
    self.handle_ajax_call(self.get_query_arguments('ajax')[0])
  File "/usr/local/lib/python3.6/dist-packages/unmanic/webserver/main.py", line 77, in handle_ajax_call
    self.render("main/main-pending-tasks.html", time_now=time.time())
  File "/usr/local/lib/python3.6/dist-packages/tornado/web.py", line 856, in render
    html = self.render_string(template_name, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/tornado/web.py", line 1005, in render_string
    return t.generate(**namespace)
  File "/usr/local/lib/python3.6/dist-packages/tornado/template.py", line 361, in generate
    return execute()
  File "main/main-pending-tasks_html.generated.py", line 5, in _tt_execute
    for pending_task in handler.get_pending_tasks():  # main/main-pending-tasks.html:4
  File "/usr/local/lib/python3.6/dist-packages/unmanic/webserver/main.py", line 103, in get_pending_tasks
    return self.foreman.task_queue.list_pending_tasks(limit)
  File "/usr/local/lib/python3.6/dist-packages/unmanic/libs/taskqueue.py", line 171, in list_pending_tasks
    if results:
  File "/usr/local/lib/python3.6/dist-packages/peewee.py", line 1987, in __len__
    self._ensure_execution()
  File "/usr/local/lib/python3.6/dist-packages/peewee.py", line 1969, in _ensure_execution
    self.execute()
  File "/usr/local/lib/python3.6/dist-packages/peewee.py", line 1886, in inner
    return method(self, database, *args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/peewee.py", line 1957, in execute
    return self._execute(database)
  File "/usr/local/lib/python3.6/dist-packages/peewee.py", line 2129, in _execute
    cursor = database.execute(self)
  File "/usr/local/lib/python3.6/dist-packages/peewee.py", line 3112, in execute
    return self.execute_sql(sql, params, commit=commit)
  File "/usr/local/lib/python3.6/dist-packages/peewee.py", line 3106, in execute_sql
    self.commit()
  File "/usr/local/lib/python3.6/dist-packages/peewee.py", line 2873, in __exit__
    reraise(new_type, new_type(exc_value, *exc_args), traceback)
  File "/usr/local/lib/python3.6/dist-packages/peewee.py", line 183, in reraise
    raise value.with_traceback(tb)
  File "/usr/local/lib/python3.6/dist-packages/peewee.py", line 3099, in execute_sql
    cursor.execute(sql, params or ())
peewee.OperationalError: database is locked
It was working flawlessly for months.
I went and wiped my install because it didn't matter too much what I had already encoded.
Also getting this error still:
[2020-10-21 16:42:12,454 pyinotify WARNING] Event queue overflowed.
[W 201021 16:42:12 pyinotify:929] Event queue overflowed.
Version - 0.0.1-beta7+752a414
The workers are hit/miss on when they decide to pick up and work on a file. Unmanic is not grabbing/converting my mp4s into mkvs like it once did. Not sure what broke and why it's not force-converting them, as I have all the flags set.
Here is the error log/output of one of the files I'm trying to convert:
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x55c3374e3880] stream 0, timescale not set
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/library/xxx/Project A (1983)/Project A [1080p].mp4':
  Metadata:
    major_brand     : mp42
    minor_version   : 0
    compatible_brands: mp42isomavc1
    creation_time   : 2019-12-25T02:24:21.000000Z
    title           : Project.A.1983.1080p.BluRay.H264.AC3,DD5.1
    artist          :
    album           :
    comment         :
    encoder         : DVDFab 11.0.4.2
  Duration: 01:45:38.02, start: 0.000000, bitrate: 5539 kb/s
    Stream #0:0(und): Video: h264 (avc1 / 0x31637661), yuv420p(bt709), 1920x808 [SAR 1:1 DAR 240:101], 4890 kb/s, 23.98 fps, 23.98 tbr, 24k tbn, 47.95 tbc (default)
    Metadata:
      creation_time   : 2019-12-25T02:24:21.000000Z
      encoder         : JVT/AVC Coding
    Stream #0:1(zho): Audio: ac3 (ac-3 / 0x332D6361), 48000 Hz, 5.1(side), fltp, 640 kb/s (default)
    Metadata:
      creation_time   : 2019-12-25T02:24:21.000000Z
    Side data:
      audio service type: main
    Stream #0:2(eng): Subtitle: dvd_subtitle (mp4s / 0x7334706D), 7 kb/s (default)
    Metadata:
      creation_time   : 2019-12-25T02:24:21.000000Z
    Stream #0:3: Video: png, rgba(pc), 640x269, 90k tbr, 90k tbn, 90k tbc (attached pic)
Multiple -c, -codec, -acodec, -vcodec, -scodec or -dcodec options specified for stream 0, only the last option '-c:v libx265' will be used.
Multiple -c, -codec, -acodec, -vcodec, -scodec or -dcodec options specified for stream 1, only the last option '-c:v libx265' will be used.
Stream mapping:
  Stream #0:0 -> #0:0 (h264 (native) -> hevc (libx265))
  Stream #0:3 -> #0:1 (png (native) -> hevc (libx265))
  Stream #0:1 -> #0:2 (copy)
  Stream #0:2 -> #0:3 (dvd_subtitle (dvdsub) -> subrip (srt))
Subtitle encoding currently only possible from text to text or bitmap to bitmap
I turned on debug and reduced my scanning to genres instead of my entire multi-TB movie directory. Is there an upper limit on files where Python says "too many files, can't process"?
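Reading the mapping in the log above, two things stand out: the attached PNG cover (Stream #0:3) is being sent to libx265, and the bitmap dvd_subtitle is being forced to srt, which ffmpeg refuses (bitmap-to-text is not possible). A hedged, manual sketch for testing outside Unmanic that sidesteps both; filenames are placeholders:

```shell
# The capital-V stream specifier selects only real video streams, skipping
# attached pictures such as the PNG cover; subtitles are copied rather than
# converted (dvd_subtitle is bitmap, so it cannot become srt). Echoed as a
# dry run; drop the echo/quotes to execute.
CMD="ffmpeg -i input.mp4 -map 0:V -map 0:a -map 0:s? -c:v libx265 -c:a copy -c:s copy output.mkv"
echo "$CMD"
```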
-
Any way to 'force' a build through? There's a nasty bug for many private trackers that aren't reading cookie data, which was patched in the last 24 hours.
I could wait till tomorrow, or hope you can release a manual build today.
Do appreciate all your work.
-
1 hour ago, bonienl said:
There is no GUI support for connecting a container to multiple networks.
It is possible using CLI and the "docker network connect" command.
What's the point, then, of the Devices field in Additional Settings? That doesn't make any sense.
I didn't want to have to go the route of manual entry, if that's the only way...
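For anyone else landing here, a dry-run sketch of the CLI route bonienl describes; container and network names are placeholders:

```shell
# docker network connect attaches a running container to an additional
# network; repeat the command once per extra network. Echoed as a dry run.
CONTAINER="my-container"   # placeholder container name
NETWORK="br0"              # placeholder network name
CMD="docker network connect $NETWORK $CONTAINER"
echo "$CMD"
```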
Support for [laromicas] ROMVault docker container
in Docker Containers
Posted · Edited by CyrixDX4
Getting a permissions issue:
Not sure where else to put these permissions, as I added both USER_ID and GROUP_ID as variables, gave them the proper numbers/IDs, and then restarted the container with the same issue.