deusxanime

Everything posted by deusxanime

  1. Did something change with this JDownloader (JD) docker container or with RapidGator (RG) lately, or has anyone else noticed this? I frequently use RG because it is about the only host that pops up its captcha in the docker WebGUI, so it was easy to use. But lately (this week?) it has not been popping up the captcha window for me to fill in, so downloads just time out after a while instead. Wondering if something changed on RG's end so that JD can't pop up the captcha to fill out anymore, or if maybe something broke in JD (mine or in general). Edit: Well, luckily I was able to pull the captcha up in the MyJD app on my Android phone. Weird though, as I've always been able to do the RG captchas right in the JD WebGUI in the past. Hopefully they are able to bring that back, as MyJD has not been the most reliable thing to use, but I'm glad it seems to be working today.
  2. Updated plugins, including UD, earlier today and now I'm realizing I don't have the UD SMB shares anymore. Did anything change that could have caused this? I tried disabling SMB sharing on the disk and re-enabling it, but still nothing shows up. Running unRAID 6.9.2. Edit: Huh, I was fiddling around and refreshed and now it has come back again... strange.
  3. Updated this last night and now it doesn't seem to be starting up (or at least I can't tell, as I can't get to it). I start it from the unRAID docker page and it shows as running, but I just get an HTTP ERROR 400 page when trying to access the web GUI. The docker logs don't show any errors though:
        [migrations] started
        [migrations] no migrations found
        usermod: no changes
        [linuxserver.io ASCII banner]
        Brought to you by linuxserver.io
        To support LSIO projects visit: https://www.linuxserver.io/donate/
        User UID: 99
        User GID: 100
        [custom-init] No custom files found, skipping...
        [ls.io-init] done.
     Tried restarting it multiple times and also tried just leaving it running for 30+ minutes to see if it would eventually come up, but still the same thing. Edit: It seems to be something with Chrome. If I go to the web GUI using Edge, it comes up fine.
  4. Since I applied the latest update to this docker on Monday, I have been getting 100+ of these warnings a day:
        2023-02-15 00:54:42 WARNING GENERICQUEUESCHEDULER-UPDATE-RECOMMENDED-ANILIST :: [918cfe7] Could not parse AniDB show, with exception:
        Traceback (most recent call last):
          File "/app/medusa/medusa/show/recommendations/anilist.py", line 95, in fetch_popular_shows
            recommended_show = self._create_recommended_show(show)
          File "</app/medusa/ext/decorator.py:decorator-gen-54>", line 2, in _create_recommended_show
          File "/app/medusa/ext/dogpile/cache/region.py", line 1577, in get_or_create_for_user_func
            return self.get_or_create(
          File "/app/medusa/ext/dogpile/cache/region.py", line 1042, in get_or_create
            with Lock(
          File "/app/medusa/ext/dogpile/lock.py", line 185, in __enter__
            return self._enter()
          File "/app/medusa/ext/dogpile/lock.py", line 87, in _enter
            value = value_fn()
          File "/app/medusa/ext/dogpile/cache/region.py", line 977, in get_value
            value = self._get_from_backend(key)
          File "/app/medusa/ext/dogpile/cache/region.py", line 1265, in _get_from_backend
            self.backend.get_serialized(key)
          File "/app/medusa/ext/dogpile/cache/backends/file.py", line 217, in get_serialized
            with self._dbm_file(False) as dbm_obj:
          File "/usr/lib/python3.10/contextlib.py", line 135, in __enter__
            return next(self.gen)
          File "/app/medusa/ext/dogpile/cache/backends/file.py", line 213, in _dbm_file
            with dbm.open(self.filename, "w" if write else "r") as dbm_obj:
          File "/usr/lib/python3.10/dbm/__init__.py", line 91, in open
            raise error[0]("db type is {0}, but the module is not "
        dbm.error: db type is dbm.gnu, but the module is not available
     Anyone else getting these? I see a possible explanation and solution here, but thought I'd check before going ahead with deleting the *.dbm files from the cache folder, in case anyone has other thoughts or ideas. Edit: Deleted the dbm files (roughly the commands sketched after this list) and it seems to be doing fine; so far no more 100+ warnings a day.
  5. Does WireGuard only work with PIA? I use TorGuard and was thinking of maybe trying to switch to WG to see if it works better / is faster than using OVPN.
  6. I've seen similar things. I've been using this rTorrent container for a while (at least a couple of years, probably more) with TorGuard VPN and it worked great up until recently. In the past few months or so it has just kind of stopped working well. I use Medusa and Radarr to grab stuff, but downloads have really fallen off. It sometimes takes a day or two for the magnets sent from Medusa/Radarr to rTorrent to resolve into actual files (sometimes they never do), and then it is a crapshoot whether they will actually download or not. I frequently have to go in manually and grab .torrent files, generally from rar bg, to catch up, and when I load them into rTorrent it doesn't really seem to work with the .torrent files at all; they just spin and never download or connect to the tracker (which may be because the only tracker in them is rar bg). But I can load the same .torrent files into my uTorrent v2.2.1, which runs on a Windows 10 VM with the TorGuard client, and things work fine there. I'm not really sure if the issue is rTorrent itself, the tracker(s), the VPN, the protocol, or something specific to the container, so it is hard to pinpoint where the problem lies.
  7. I had a bunch of issues last week with it too; the recent updates seem to have been a bit buggy. I couldn't paste any links into JD (I only tried mega ones since that was all I had), and they just never seemed to resolve. I'd see the window pop up for a fraction of a second and then nothing would happen. I searched around, ended up following this support article, and it got things working again. Putting it here for future reference if you don't want to have to blow away your entire JD appdata folder: I think you just need to stop the JD container, delete the Core.jar file and the tmp and update folders, and then start it up again; it should re-download those files/folders and reset JD (rough commands are sketched after this list). I did also have the weird mixed messages I've seen others post here for the my.jdownloader login (both green and red text), but I don't use that much so I didn't look into it. As of now it is only showing the green/successful text, at least.
  8. Just updated my lsio Medusa docker, which has been working fine up to this point, and now it won't run. I just get these errors continuously looping in its log:
        s6-supervise custom-svc-README.txt (child): fatal: unable to exec run: Exec format error
        s6-supervise custom-svc-README.txt: warning: unable to spawn ./run - waiting 10 seconds
     Anyone else getting this, or seen it before and have an idea how to fix it? Edit: It looks like this is due to my using a custom startup script; they've changed how those work now (see the mount sketch after this list). More info here: https://info.linuxserver.io/issues/2022-08-29-custom-files/
  9. Is that only on the RC/6.10? I'm on 6.9.2 and searched for both "dynamix" and "file manager" and I'm not seeing that plugin on CA.
  10. All my libraries point to shares on the array, and the metadata is in my appdata, which is on the cache pool; nothing I can think of is using UD for Plex. I noticed what I replied to was from back in January too, so it might not have to do with the latest update and I'm only just now running into it for different reasons.
  11. I added some new files on my server and when I went to scan for them in Plex, my library scans were just instantly stopping with nothing new added. I saw a bunch of these same errors in my logs. It seemed like it lost track of the mounts for some reason and so couldn't find the folders/files. I restarted the container and it seems to be finding new files again, but I also occasionally see that error pop up again. Very strange, and I hope something didn't break in the latest update. Do you still get those errors, and has your Plex continued to work OK even with them?
  12. I understand, no problem. 😁 Thanks for the help/hints before; they give me some ideas and a simple solution anyway.
  13. Thanks for the suggestion! Is there any benefit to just creating /.cache myself manually? I have a script I run when I first connect to the CLI to create some aliases for me and such. I could easily add a couple of lines in there to create the directory instead, if that would be needed or useful (something like the sketch after this list).
  14. Not sure how much support there is for using the command line directly, but that is how I use yt-dlp and ffmpeg in this container. I recently switched over from a different one I used to use (liquid-dl), since it was still using youtube-dl and most people seem to recommend yt-dlp at this point instead. I start the container in unRAID and then go into it using "docker exec -it MeTube /bin/sh". The error I get when I use yt-dlp at the command line is the following:
        /mnt/misc/_music_staging/chloe $ yt-dlp -x --embed-thumbnail --audio-format mp3 --audio-quality 3 https://www.youtube.com/watch?v=Dyl6EoU0rNY
        [youtube] Dyl6EoU0rNY: Downloading webpage
        [youtube] Dyl6EoU0rNY: Downloading android player API JSON
        [youtube] Dyl6EoU0rNY: Downloading player 495d0f2b
        WARNING: Writing cache to '/.cache/yt-dlp/youtube-sigfuncs/js_495d0f2b_108.json' failed: Traceback (most recent call last):
          File "/usr/local/lib/python3.8/site-packages/yt_dlp/cache.py", line 49, in store
            os.makedirs(os.path.dirname(fn))
          File "/usr/local/lib/python3.8/os.py", line 213, in makedirs
            makedirs(head, exist_ok=exist_ok)
          File "/usr/local/lib/python3.8/os.py", line 213, in makedirs
            makedirs(head, exist_ok=exist_ok)
          File "/usr/local/lib/python3.8/os.py", line 223, in makedirs
            mkdir(name, mode)
        PermissionError: [Errno 13] Permission denied: '/.cache'
     This isn't a big deal since it still downloads the video/music, so it must just be skipping the cache, but I thought I'd check and see whether there is a way to fix it (a possible workaround is sketched after this list).
  15. Also a reminder for those who use a custom server jar: you have to update it manually yourself. I grabbed the latest PaperMC server version that had the fix a few days ago and swapped it out on mine.
  16. OK, that is what I was thinking it might be. I see the default jar is named "minecraft_server.jar", so as long as I name the Spigot jar (or whatever I go with) something else and set the custom jar path, I should be good (roughly the steps sketched after this list). Then any updates will only touch the vanilla "minecraft_server.jar" when the container restarts.
  17. Just following up to say I think I partially found the issue. We built our base area near world spawn (since that is where we started), and all the animals we had bred were causing issues. There were way too many of them, and since vanilla always keeps the spawn area loaded, it was causing lag. We killed off almost all of them, leaving just a few of each, and it improved somewhat. There still might be too much other stuff at spawn causing issues, but at least that helped a ton. I've been doing more reading to see how else I can tweak it, and I see there are much more performance-optimized servers you can run - Bukkit, Spigot, Paper, etc. Can we run those on this container? Would it just be a matter of replacing the vanilla server.jar file with one for Spigot and setting the custom jar path (with a stop/start/restart in between, of course)? If we do run a different server jar, how does this container handle updates? If a new Minecraft version came out, does it auto-download and overwrite the server jar, or do we have to do that manually? Just thinking about how auto-updates would affect running a custom server jar; we would probably want to disable auto-update in that case so it doesn't mess things up.
  18. Hopefully you Minecraft server vets can help or give me ideas of things to check. I installed this docker on my unRAID server about a month ago, and it has been a fun game for the whole family! We're just playing standard stock vanilla MC, and I've gotten myself, my wife, and our 2 kids all playing together. When we started it was good, but it has gotten pretty badly laggy over the past week or two. We have spread out and expanded pretty far and done a decent amount of landscaping (creating paths/tunnels between bases and such) on the server. From the original spawn point, we've built bases 3000+ blocks away (according to F3 coordinates) in various directions and connected them via paths, tunnels, and in some cases rails.
     Now it has gotten to the point, especially when there are multiple people on, where if we mine a block it takes a couple of seconds or more to "pop" and give us the resource. Riding on our railways has gotten REALLY slow and glitchy. Walking, swimming, boating, and especially riding a horse have all gotten bad. If you go any distance, the world stops loading and takes forever to load the next section. We'll ride a horse or row a boat somewhere, go a short distance, and then have to wait 10-15 seconds for the next chunk of land to load, go a little farther, wait again, and so on. It takes FOREVER now to get between points any distance apart; basically anything faster than walking speed and you have to keep pausing and waiting for loading. If I go to the web UI I see a bunch of these messages: "[16:52:21] [Server thread/WARN]: Can't keep up! Is the server overloaded? Running 35616ms or 712 ticks behind". Even with no one on, I see similar messages! (Though usually not quite that high when no one is on, it is still there and starts immediately as soon as I boot up the server.)
     I've tried moving the appdata/config files around. I started with them in my main appdata area, which is on my cache SSD. I have the appdata share set to cache-only, so it should be only on the SSD and not moving stuff to the array disks; my other dockers are on there as well, of course. Now that we are having problems, I've tried moving it to another SSD (an unassigned drive where my two VMs live), and then to another unassigned drive that had nothing else on it, though that one was a standard spinning disk. Neither helped, and for now I've moved it back to my main appdata. I've also increased my starting heap size/memory to 1024M and the max memory to 4096M, and now even 6144M, which didn't seem to help either. We were originally playing over WiFi, but I've wired both mine and my wife's systems to see if it would be better, and it still seems the same even with just the two of us on.
     Since it is a server on our local network, I thought I wouldn't run into these kinds of lag problems, so it has been pretty frustrating. My wife is getting so frustrated she is thinking of quitting, so I hope I can figure something out - the WAF is dropping rapidly! I know this isn't a Minecraft support forum, but since this is unRAID I was wondering if there is anything specific to it that people have seen that may be causing this, or any tweaks/tips for running on unRAID. My specs are in my sig; it is a dual Xeon system from 5 or so years ago, so it should be more than powerful enough.
  19. Thanks, this fixed it for me! The next time I reboot (eventually) will be to install v6.9.2, or whatever is available at the time, so that will be my "permanent" fix. This got it going for now though.
  20. I updated mine to the latest build yesterday (or maybe day before?... recently anyway) and it looks like the issue is fixed. Not sure what caused it, but I can open the webpage in Chrome now again. Hopefully working again for you as well!
  21. I installed the latest update this morning, started the Duplicati container up afterward, and now I cannot get to its admin web page. It just loads forever but nothing ever comes up (just a blank page). Oddly, Chrome just keeps trying to load forever and doesn't even time out. The log doesn't show any errors or anything suspect that I can see:
        [linuxserver.io ASCII banner]
        Brought to you by linuxserver.io
        To support LSIO projects visit: https://www.linuxserver.io/donate/
        User uid: 99
        User gid: 100
        [cont-init.d] 10-adduser: exited 0.
        [cont-init.d] 30-config: executing...
        [cont-init.d] 30-config: exited 0.
        [cont-init.d] 99-custom-scripts: executing...
        [custom-init] no custom files found exiting...
        [cont-init.d] 99-custom-scripts: exited 0.
        [cont-init.d] done.
        [services.d] starting services
        [services.d] done.
     Edit: Possibly an issue with Chrome? I tried opening the site in Edge and it worked. I thought maybe something was wrong with my Chrome, so I exited it completely and reopened it, but I still have the same issue there.
  22. Glad you were able to figure it out. Some others and I have run into similar problems with that setting enabled. Has it even been acknowledged yet?
  23. I noticed a possible bug/issue with WireGuard on unRAID. I have a docker container that runs on a custom network and needed it to talk to a container on bridge, so I went into the docker settings and enabled "Host access to custom networks". After doing so (and all the required stop/start/reboot), the containers could talk on the network and I thought all was well. Later that week, since I was on an untrusted WiFi network, I tried to use my WG VPN tunnel access (LAN access plus tunneling through the server to my home internet WAN) on my laptop and phone, which I'd used previously and which had worked great then. After connecting, I was able to access LAN resources on the unRAID server, but I could not get the WG client systems to reach the internet while WG was turned on. I thought back to what had changed, and all I could think of was the setting above. So today, since I had to restart unRAID to add a disk, I disabled that setting to test it, and after restarting I tried WG tunnel access and lo and behold, it is working again! I can get to LAN resources as well as out to the WAN/internet while connected to WG on the clients. So it seems like enabling the "Host access to custom networks" setting breaks WG's ability to let VPN clients tunnel through the server and use the WAN while connected.
  24. As a counterpoint, I used to run the Plex Pass/latest version and it screwed things up a few times for me. I can't remember exactly what, but it got annoying enough that I just set mine back to the public version to avoid the early-adopter headaches, and it has been better in that regard. Honestly there's very little difference, and usually only a short amount of time before changes and updates get pushed down to public from Plex Pass, assuming they don't break things. Plex itself is pretty stable, so unless there is a bug that made it into public that is affecting me, or some super cool new feature I just can't wait for, I don't see a need to run the Plex Pass/latest/beta/rc/whatever-you-want-to-classify-it-as version.
  25. A new Shoko version is out, and it looks like they've changed the docker path. https://shokoanime.com/blog/shoko-version-4-0-0-released/
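
A rough sketch of the dbm cache cleanup from post 4, assuming a typical unRAID setup; the container name (medusa) and the appdata/cache path are assumptions, so adjust them to match your own mappings.

        # Stop Medusa so nothing is holding the cache files open (container name assumed)
        docker stop medusa
        # Remove the dbm cache files the recommended-shows feature created
        # (appdata path assumed; the files live in Medusa's cache folder)
        rm -v /mnt/user/appdata/medusa/cache/*.dbm
        # Start Medusa back up; it should rebuild the cache on its own
        docker start medusa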
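For post 7, roughly the JDownloader reset described there; the container name and appdata path below are assumptions based on a common unRAID layout, not the only way to do it.

        # Stop JDownloader first (container name assumed)
        docker stop jdownloader-2
        # Delete Core.jar plus the tmp and update folders so JD re-downloads them on next start
        # (appdata path assumed)
        rm /mnt/user/appdata/jdownloader-2/Core.jar
        rm -r /mnt/user/appdata/jdownloader-2/tmp /mnt/user/appdata/jdownloader-2/update
        # Start it back up and let it pull fresh copies of those files/folders
        docker start jdownloader-2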
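For post 8, a sketch of how custom scripts are supplied after the linked change, as I understand the linuxserver.io announcement: scripts now come in through a bind mount to /custom-cont-init.d (and custom services to /custom-services.d) and must be owned by root. The host folder and script name below are made up for illustration; check the linked article for the authoritative details.

        # Put the startup script in a folder outside the container's /config (host path is an example)
        mkdir -p /mnt/user/appdata/custom-scripts/medusa
        cp my-startup-script.sh /mnt/user/appdata/custom-scripts/medusa/
        # The new scheme requires the files to be owned by root
        chown -R root:root /mnt/user/appdata/custom-scripts/medusa
        # Then add a path mapping on the container in the unRAID template:
        #   host: /mnt/user/appdata/custom-scripts/medusa  ->  container: /custom-cont-init.d  (read-only)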
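For post 13, the "couple of lines" could look something like the sketch below, run from the unRAID shell; the container name matches the docker exec example in post 14, while the uid:gid is an assumption, so use whatever "id" reports inside the container.

        # Create /.cache as root inside the container, then hand it to the account yt-dlp runs as
        # (container name from post 14; 1000:1000 is a guess, replace with the real uid:gid)
        docker exec -u root MeTube sh -c 'mkdir -p /.cache && chown 1000:1000 /.cache'

Note that anything created at / inside the container is lost when the container is re-created (e.g. after an image update), so these lines would need to be re-run afterward.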
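For post 14, an alternative to creating /.cache is to point yt-dlp at a directory it can already write to; --cache-dir is a standard yt-dlp option, and the /downloads path below is just an example of a mapped share inside the container.

        # Tell yt-dlp to keep its signature-function cache somewhere writable
        # (example path; any directory the container user can write to works)
        yt-dlp --cache-dir /downloads/.yt-dlp-cache -x --embed-thumbnail --audio-format mp3 --audio-quality 3 https://www.youtube.com/watch?v=Dyl6EoU0rNY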
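For post 16, a sketch of the jar swap being described; the container name, appdata path, jar file name, and the exact name of the custom jar path setting all depend on the specific Minecraft container, so treat every one of them as a placeholder.

        # Stop the server, then copy the downloaded Paper/Spigot jar into its appdata under a name
        # that will not collide with the auto-updated vanilla minecraft_server.jar
        # (container name, path, and jar name are placeholders)
        docker stop minecraftserver
        cp paper.jar /mnt/user/appdata/minecraftserver/paper.jar
        # Point the container's custom jar path setting at the new file (e.g. /config/paper.jar
        # if appdata is mapped to /config), then start the container again
        docker start minecraftserver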