Everything posted by deusxanime

  1. OK, that is what I was thinking it might be. I see the default jar is named "minecraft_server.jar", so as long as I name the Spigot jar (or whatever I go with) something else and set the custom jar path, I should be good. Then any updates will only touch the vanilla "minecraft_server.jar" when the container restarts.
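
     Rough sketch of the idea for anyone following along (the appdata path and jar name here are just my assumptions - adjust to your own mapping):

         # Keep the custom jar under a different name so any auto-update
         # only ever touches the default-named "minecraft_server.jar".
         cd /mnt/user/appdata/minecraft   # host side of the container's /config (assumed)
         cp /path/to/downloaded/spigot.jar ./spigot.jar
         # Then set the container's custom jar path to /config/spigot.jar
         # and restart the container.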
  2. Just following up to say I think I partially found the issue. We built our base area near world spawn (since that is where we started), and all the animals we had bred were causing problems. There were way too many of them, and since vanilla always keeps the spawn area loaded, they were causing lag. We killed off almost all of them, leaving just a few of each, and it improved things some. There may still be too much other stuff in spawn causing issues, but at least that helped a ton.

     I've been doing more reading on how else I can tweak it, and I see there are much more performance-optimized servers you can run - Bukkit, Spigot, Paper, etc. Can we run those on this container? Would it just be a matter of replacing the vanilla server.jar with one for Spigot and setting the custom jar path (with a stop/start/restart in between, of course)? And if we do run a different server jar, how does this container handle updates? If a new Minecraft version comes out, does it auto-download and overwrite the server jar, or do we have to do that manually? If it auto-updates, I'd probably want to disable that when running a custom server jar so it doesn't mess things up.
  3. Hopefully you Minecraft server vets can help or give me ideas to check. I installed this docker on my unRAID server about a month ago and it has been a fun game for the whole family! We're playing stock vanilla MC, and I've gotten myself, my wife, and our 2 kids all playing together.

     It was fine when we started, but it has gotten pretty badly laggy the past week or two. We have spread out pretty far and done a decent amount of landscaping (creating paths/tunnels between bases and such). From the original spawn point we've built bases 3000+ blocks away (according to the F3 coordinates) in various directions and connected them via paths, tunnels, and in some cases rails. Now it has gotten to the point, especially with multiple people on, where mining a block takes a couple seconds or more to "pop" and give us the resource. Riding our railways has gotten REALLY slow and glitchy. Walking, swimming, boating, and especially riding a horse have all gotten bad. Go any distance and the world stops loading: ride a horse or row a boat a short way, wait 10-15 seconds for the next chunk to load, go a little further, wait again, and so on. It takes FOREVER now to travel between points any distance apart - basically anything above walking speed means constantly pausing for chunks to load.

     In the web UI I see a bunch of these messages: "[16:52:21] [Server thread/WARN]: Can't keep up! Is the server overloaded? Running 35616ms or 712 ticks behind". I see similar messages even with no one on - usually not quite that high, but they start immediately as soon as I boot the server.

     Things I've tried so far: I've moved the appdata/config files around. They started in my main appdata share, which is on my cache SSD and set to cache-only, so everything should stay on the SSD rather than moving to the array disks (my other dockers live there too, of course). I then tried moving them to the unassigned SSD where my two VMs live, and after that to an unassigned drive with nothing else on it, though that one is a standard spinning disk. Neither helped, so I've moved everything back to my main appdata for now. I've also increased my starting heap size to 1024M and my max memory to 4096M and now even 6144M, which didn't seem to help either. We were originally playing over WiFi, but I've since wired both my and my wife's systems, and it's the same even with just the two of us on.

     Since this is a server on our local network, I didn't expect to run into this kind of lag, so it has been pretty frustrating. My wife is getting frustrated enough that she's thinking of quitting, so I hope I can figure something out - the WAF is dropping rapidly! I know this isn't a Minecraft support forum, but since this is unRAID I was wondering if anyone has seen anything unRAID-specific that might cause this, or has tweaks/tips for running it on unRAID. My specs are in my sig - it's a dual Xeon system from 5 or so years ago, which I'd think should be more than powerful enough.
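
     For reference, the heap settings I mention map to the standard JVM flags; the exact launch line this container uses may differ, but it's something along these lines:

         # -Xms = starting heap, -Xmx = max heap (values from my latest attempt)
         java -Xms1024M -Xmx6144M -jar minecraft_server.jar nogui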
  4. Thanks, this fixed it for me! The next time I reboot (eventually) will be to install v6.9.2, or whatever is available at the time, so that will be my "permanent" fix. This got it going for now though.
  5. I updated mine to the latest build yesterday (or maybe day before?... recently anyway) and it looks like the issue is fixed. Not sure what caused it, but I can open the webpage in Chrome now again. Hopefully working again for you as well!
  6. I installed the latest update this morning, started the Duplicati container up after, and now I cannot get to the admin web page for it. It just loads forever but nothing ever comes up (just a blank page). Oddly, even Chrome just keeps trying to load forever and isn't timing out. The log doesn't show any errors or anything suspect that I can see:

         -------------------------------------
         [linuxserver.io ASCII banner]
         Brought to you by linuxserver.io
         -------------------------------------
         To support LSIO projects visit:
         https://www.linuxserver.io/donate/
         -------------------------------------
         GID/UID
         -------------------------------------
         User uid: 99
         User gid: 100
         -------------------------------------
         [cont-init.d] 10-adduser: exited 0.
         [cont-init.d] 30-config: executing...
         [cont-init.d] 30-config: exited 0.
         [cont-init.d] 99-custom-scripts: executing...
         [custom-init] no custom files found exiting...
         [cont-init.d] 99-custom-scripts: exited 0.
         [cont-init.d] done.
         [services.d] starting services
         [services.d] done.

     Edit: Possibly an issue on Chrome? I tried opening the site on Edge and it worked. I thought maybe something was wrong with my Chrome so I exited it completely and reopened, and I still have the same issue on it.
  7. Glad you were able to figure it out. Some others and I have run into similar problems with that setting enabled. Has anyone even acknowledged it yet?
  8. I noticed a possible bug/issue with WireGuard on unRAID. I have a docker container that runs on a custom network and I needed it to talk to a container on bridge, so I went into docker settings and enabled "Host access to custom networks". After doing so (and all the required stop/start/reboot), the containers could talk on the network and I thought all was well.

     Later that week I tried to use my WG VPN tunnel access (LAN access plus tunneling through the server to my home internet WAN) on my laptop and phone, which I'd used previously and which had worked great then, since I was on an untrusted WiFi network. After connecting, I was able to access LAN resources on the unRAID server, but the WG client systems could not get out to the internet while WG was turned on. I thought back to what had changed, and all I could think of was the setting above.

     So today, since I had to restart unRAID to add a disk, I disabled that setting to test it, and after restarting I tried WG tunnel access again - lo and behold, it is working! I can reach LAN resources as well as the WAN/internet while connected to WG on the clients. So it seems like enabling the "Host access to custom networks" setting breaks WG's ability to let VPN clients tunnel through it and use the WAN while connected.
  9. As a counterpoint, I used to run the Plexpass/latest version and it screwed up things a few times for me. I can't remember exactly what, but it got annoying enough that I just set mine back to run the public version instead to avoid the early adopter headaches and it has been better in that regard. Honestly there's very little difference and usually only a short amount of time before changes and updates get pushed down to public from plexpass, assuming they don't break things. Plex itself is pretty stable, so unless there is a bug that made it into public that is affecting me, or some super cool new feature I just can't wait for, I don't see a need to run the Plexpass/latest/beta/rc/whatever-you-want-to-classify-it-as version.
  10. Shoko new version is out and looks like they've changed the docker path. https://shokoanime.com/blog/shoko-version-4-0-0-released/
  11. Most likely nobody here codes for the project directly, including @binhex - he just bundles and creates the docker template/container, not the application itself. You'll want to go to their GitHub and create an issue there if you want someone to look at it. You could also try Airsonic-Advanced instead; I believe it's a fork of Airsonic that is being developed more aggressively, including performance enhancements, so perhaps that's something they've fixed. There is a docker template for it for unRAID in Community Apps.
  12. Plex internally makes its own backups of the db as well, I believe - in the same directory as the main db but with the date appended, like "com.plexapp.plugins.library.db-2020-05-02". Have you tried going back to one of those using these instructions? Hopefully the backups aren't corrupt as well.
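
     If you want to see which dated backups you have, something like this from the unRAID console should list them (the appdata path is the usual linuxserver.io layout - yours may differ):

         ls -l "/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Plug-in Support/Databases/" | grep "com.plexapp.plugins.library.db"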
  13. I ran into a couple of annoyances with the update over the weekend (v0.3.13) and thought I'd post here to make others aware of them:

     1. I had a download that didn't seem to be getting processed. It turns out there was a bug with the scheduler: the frequency you set for post-processing your completed download directory was ignored and it just ran every 24h instead. Manually forcing a post-process worked, though. See the Github issue thread. They've already put up a new release, v0.3.14, with a fix, and it has been applied to the docker image - just run another update on your container in the unRAID web interface.

     2. Luckily(?), running that manual post-process is how I found out about the other change. I run a post-processing shell (bash) script, and I noticed it didn't work when I looked at the post-processing log output - I was getting the error "Extra script /config/postproc.sh is not a Python file". Looking back at the release log for v0.3.13, I found they changed post-processing so it can only run Python scripts now. Here is a new wiki entry they put up a few days ago explaining this, along with a small Python snippet you can use as an intermediary between Python and whatever your script is. If you run a bash script like I did, create a Python file similar to the one below and point your post-processing script setting in Settings at it instead.

     /config/postproc-shiv.py:

         import subprocess
         import sys

         subprocess.call(['/bin/bash', '/config/postproc.sh'] + sys.argv[1:])

         # If you need to run more scripts (optional)
         # subprocess.call(['my_cmd2', 'my_script2'] + sys.argv[1:])

     Hope that helps someone who might run into the same issues!
  14. Not sure what happened, but since I updated over the weekend Medusa is no longer letting me log in with my credentials. Is there something in the latest update that could have reset them or caused an issue?

     Edit: No idea what caused it, but I shut down the container, went into config.ini and blanked out the web_password, logged in with no password, and then re-set the same password in Settings. Seems to be working for now, anyway.
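
     For anyone who hits the same thing, the workaround was roughly this (stop the container first; the host path and the exact config.ini line format are assumptions from my setup):

         cd /mnt/user/appdata/medusa       # host side of the container's /config (assumed)
         grep "web_password" config.ini    # note the current value, just in case
         sed -i 's/^web_password = .*/web_password = ""/' config.ini
         # start the container, log in with a blank password,
         # then re-set your password under Settings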
  15. Good to know, I thought it was only Samsung drives affected. Definitely want to do some research before purchasing new ones then to be sure they'll work correctly. I blew $600 (at the time) on two 850 EVO 1TB drives specifically to use as my unRAID cache drives in a mirror and was quite frustrated that it didn't work (and still doesn't a couple years later!). Hopefully others will be spared the pain and expense.
  16. If you were planning to use them for this specific purpose and still have the option to return them, I'd say do that. Right now there's no telling if or when this issue will be resolved. As far as what to replace them with, I think anything non-Samsung will do. I believe the issue is specific to their drives, but someone correct me if I'm wrong.
  17. I use rTorrent-VPN from binhex, which also includes Privoxy, and I use it with JDownloader. I just went into Settings in JDownloader, under Connection Manager, and added the Privoxy address (same as my unRAID server, since I'm not doing anything fancy with my docker network settings) with port 8118. That's what it is set to use in rTorrent-VPN - not sure if that is the default or would be the same address/port for you. Also make sure to uncheck the "No Proxy" entry so it doesn't try to download stuff without the proxy/VPN.
  18. Just having a VPN isn't quite enough - what you need is a proxy to direct it through. If you are using binhex's DelugeVPN container, I believe he usually includes the Privoxy proxy host as part of it. If so, go into JDownloader's Settings and look under Connection Manager. Add the Privoxy proxy there using your unRAID server's host/IP address (or the DelugeVPN container's IP, if you have it set to use one different from the unRAID server's, which is uncommon) and port 8118, with user and password left blank. Also make sure to uncheck the "No Proxy" line so it doesn't try to use your non-VPN'ed internet connection. Once you have that set, you should be good to go.

     Just to add on - to double-check you are using your proxy, look in the Connection column of your Downloads when something is running. I think it will show the IP address you are downloading from (you might have to hover over the icons to see it in a tooltip). Verify that it is different from your home connection's normal IP. If you don't know your home connection's IP, just open a browser on your home system, go to Google, and search "what is my ip".
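
     One extra way to verify the proxy is really in the path, from any machine on your LAN (substitute your own server's IP; 8118 is Privoxy's usual default port):

         # ask an external service for your apparent IP, once directly and once via Privoxy
         curl https://ifconfig.me
         curl -x http://192.168.1.100:8118 https://ifconfig.me
         # the second result should be the VPN endpoint's IP, not your home WAN IP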
  19. I was trying to write up a more detailed tutorial/step-by-step for others who aren't as savvy, but realized one step was more complicated and I was wondering how you handled it: copying the OmbiExternal.db back to the appdata folder (or altering it in place). By default I think the permissions are set for "nobody", so you can't write to the folder or alter the file as your Windows user. I know how to alter permissions via the linux command line, so I did it that way - changed the file, then set the permissions back to what they were before - but how did you do it? I'm trying to think of an easier way to put in my steps for people who may not use the CLI.
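
     For reference, this is roughly what I did from the unRAID terminal (the path, ownership, and permission values are from my setup - check yours with ls -l first):

         cd /mnt/user/appdata/ombi
         ls -l OmbiExternal.db                  # note the original owner/permissions
         chmod 666 OmbiExternal.db              # temporarily allow writes over the share
         # ...copy the edited db back from Windows...
         chmod 644 OmbiExternal.db              # put the permissions back
         chown nobody:users OmbiExternal.db     # typical unRAID ownership (assumed)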
  20. Ran into this too and it looks like this is the issue. Look at my post there (currently the last one) for a fix you can do without having to install anything into your container. It looks like they added a new column to a table, and the container must have jumped versions too far and missed the intermediary release that included the code to add that column - so the current version expects it to be there, but it is missing.

     Edit: My post isn't the bottom one anymore, so to avoid confusion I'll just repost it here: I had this problem also and fixed it by running the previously recommended SQL query against OmbiExternal.db:

         ALTER TABLE PlexServerContent ADD COLUMN RequestId INTEGER NULL;

     I didn't want to install stuff into my container, so I just used the zip version (no install needed on your desktop either) of this SQLite browser/editor. I stopped my Ombi container, opened OmbiExternal.db with the SQLite editor, ran the query above, verified it had added the column to the proper table, saved it, and then started the Ombi container back up. Everything looks like it's working again - movie and TV posters are showing up, and the "no such column" error no longer appears in the logs.
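
     If you have the sqlite3 command available (on the unRAID host, via a plugin, or on another machine - it isn't part of the container), the same fix can be done in one line without a GUI editor:

         # stop the Ombi container first, then add the missing column directly
         sqlite3 /mnt/user/appdata/ombi/OmbiExternal.db "ALTER TABLE PlexServerContent ADD COLUMN RequestId INTEGER NULL;"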
  21. I've said repeatedly I'll do what you suggest if it breaks again. I needed to remotely access my system this week and thus needed it working, so I had rebuilt it from scratch again already before that suggestion was brought up as a troubleshooting step. It is currently functioning as is. Since I don't know what causes it to break, I can't force that to happen yet. Just posted because I saw others posting with the same issue which I thought was worth noting.
  22. Sorry - when you said my appdata was the problem, it seemed you were implying there was an issue with the way it is set up in unRAID, that the drive was failing/corrupting, or something like that, none of which appears to be true from my observations so far. From the error, the issue seems to be that files are randomly going missing in the appdata folder, and as I've said, the only way I've been able to get mine working or "fix" it is to do the same thing as that person: delete the docker container and appdata folder completely and rebuild from scratch. I've had to do this a few times now.

     Since it has happened to multiple separate people with likely disparate systems/builds, whatever is causing the files to go missing or corrupt seems to come from inside the container, or at least isn't specific to my system. What that is I'm not sure - I'm not a dev and it's just a hypothesis - but I figured I'd report it here, and seeing other reports of the same thing tended to confirm my suspicions. As I said, I'll try your suggestion and report back if it happens again, but at least something is documented in case it rears its head again on my or someone else's system.
  23. I don't believe it is an issue with my appdata, unless it is universal to more users than just me. As you can see others are having this problem as well (above and previous pages). If it happens to me again though I can try to change the path of my /config to /mnt/cache/appdata/openvpn-as and see what happens.
  24. Never had an issue with my appdata in the couple of years I've been running unRAID. I have about 25 docker containers on there (about half running usually), and all the others are currently stable with no issues. Single Samsung SSD cache drive with all appdata written to it; the appdata share is set to use the cache drive only. For openvpn-as I just left all parameters at their defaults when loading the container:

         root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d \
             --name='openvpn-as' \
             --net='bridge' \
             -e TZ="America/Chicago" \
             -e HOST_OS="Unraid" \
             -e 'PGID'='100' \
             -e 'PUID'='99' \
             -p '943:943/tcp' \
             -p '9443:9443/tcp' \
             -p '1194:1194/udp' \
             -v '/mnt/user/appdata/openvpn-as':'/config':'rw' \
             --cap-add=NET_ADMIN \
             'linuxserver/openvpn-as'
  25. So I set up the whole container from scratch a couple weeks ago due to the issues I had earlier (see below, or my previous posts a page back). I had to relearn how to do it all since it had been a couple of years, but with @SpaceInvaderOne's videos and some reading on Github I got it going again, and it worked fine a couple of times on my phone and laptop. Then I did an update today and I'm back to the same thing again:

         Brought to you by linuxserver.io
         We gratefully accept donations at:
         https://www.linuxserver.io/donate/
         -------------------------------------
         GID/UID
         -------------------------------------
         User uid: 99
         User gid: 100
         -------------------------------------
         [cont-init.d] 10-adduser: exited 0.
         [cont-init.d] 20-time: executing...
         [cont-init.d] 20-time: exited 0.
         [cont-init.d] 30-config: executing...
         [cont-init.d] 30-config: exited 0.
         [cont-init.d] 40-openvpn-init: executing...
         find: ‘/config/etc/db’: No such file or directory
         /var/run/s6/etc/cont-init.d/40-openvpn-init: line 14: /usr/local/openvpn_as/bin/ovpn-init: No such file or directory
         Stopping openvpn-as now; will start again later after configuring
         cat: /var/run/openvpnas.pid: No such file or directory
         kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]
         [cont-init.d] 40-openvpn-init: exited 0.
         [cont-init.d] 50-interface: executing...
         /var/run/s6/etc/cont-init.d/50-interface: line 9: /usr/local/openvpn_as/scripts/confdba: No such file or directory
         /var/run/s6/etc/cont-init.d/50-interface: line 10: /usr/local/openvpn_as/scripts/confdba: No such file or directory
         /var/run/s6/etc/cont-init.d/50-interface: line 11: /usr/local/openvpn_as/scripts/confdba: No such file or directory
         /var/run/s6/etc/cont-init.d/50-interface: line 12: /usr/local/openvpn_as/scripts/confdba: No such file or directory
         [cont-init.d] 50-interface: exited 127.
         [cont-init.d] 99-custom-scripts: executing...
         [custom-init] no custom files found exiting...
         [cont-init.d] 99-custom-scripts: exited 0.
         [cont-init.d] done.
         [services.d] starting services
         [services.d] done.
         ./run: line 3: /usr/local/openvpn_as/scripts/openvpnas: No such file or directory
         ./run: line 3: /usr/local/openvpn_as/scripts/openvpnas: No such file or directory
         ./run: line 3: /usr/local/openvpn_as/scripts/openvpnas: No such file or directory

     The last line just keeps repeating until I kill the container. I tried a restart of the container as well as a stop/start. I really don't want to have to re-set this thing up from scratch every couple of weeks or whenever there's an update. Any idea what causes this and how to keep it from happening again? I know others previously had the same issue - did you get it again, or has it worked since you set it back up from scratch?

     Edit: Been messing with it, including removing and re-adding the container (with the existing appdata), but no luck. I need to use it tomorrow to connect remotely, so I ended up setting it up from scratch again and it is working... for now.