deusxanime

Members
  • Posts

    128
  • Joined

  • Last visited

About deusxanime

  • Birthday July 3

  • Gender
    Male

Recent Profile Visitors

1085 profile views

deusxanime's Achievements

Apprentice (3/14)

16 Reputation

  1. OK, that is what I was thinking it might be. I see the default jar is named "minecraft_server.jar", so as long as I name the Spigot jar (or whatever I go with) something else and set the custom jar path, I should be good. Then any updates only happen to the vanilla "minecraft_server.jar" when restarting the container.
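     A minimal Python sketch of that logic (hypothetical paths and jar name, not the container's actual start script): launch the custom jar if it exists, otherwise fall back to the auto-updated vanilla jar.

         import os
         import subprocess

         # Hypothetical paths for illustration only; the real template exposes
         # its own custom-jar setting.
         CUSTOM_JAR = '/config/spigot.jar'
         VANILLA_JAR = '/config/minecraft_server.jar'  # overwritten by auto-updates

         jar = CUSTOM_JAR if os.path.isfile(CUSTOM_JAR) else VANILLA_JAR
         subprocess.call(['java', '-Xms1024M', '-Xmx4096M', '-jar', jar, 'nogui'])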
  2. Just following up to say I think I partially found the issue. We built our base area near world spawn (since that is where we started) and all the animals we had bred were causing issues. There were way too many of them, and since vanilla always keeps the spawn area loaded, it was causing lag. We killed off almost all of them, just leaving a few of each, and it improved some. There still might be too much other stuff in spawn causing issues, but at least that helped a ton. I've been doing more reading to see how else I can tweak it, and I see there are much more performance-optimized servers you can run - Bukkit, Spigot, Paper, etc. Can we run those on this container? Would it just be replacing the vanilla server.jar file with one for Spigot and setting the custom jar path? (stop/start/restart in between of course) If we do run a different server jar, how does this container handle updates? If a new Minecraft version comes out, does it auto-download and overwrite the server jar, or do we have to do that manually? Just thinking about how auto-updating would affect running a custom server jar; I'd probably want to disable auto-update in that case so it doesn't mess stuff up.
  3. Hopefully you Minecraft server vets can help or give me ideas to check. I installed this docker on my unRAID server about a month ago. It has been a fun game for the whole family! Just playing standard stock vanilla MC, and I've gotten myself, my wife, and 2 kids all playing together! When we started it was good, but it has gotten pretty badly laggy the past week or two. We have spread out and expanded pretty far and done a decent amount of landscaping (creating paths/tunnels between bases and such) on the server. From the original spawn point, we've built bases 3000+ blocks away (according to F3 coordinates) in various directions and connected them via paths, tunnels, and some with rails.
     Now it has gotten to the point, especially when there are multiple people on, where if we mine a block it takes a couple seconds or more to "pop" and give us the resource. Riding on our railways has gotten REALLY slow and glitchy. Walking, swimming, boating, and especially riding a horse have gotten bad. If you go any distance, the world stops loading and takes forever to load the next section. Like we'll ride a horse or row a boat somewhere, go a short distance, and then have to wait 10-15 seconds for the next chunk of land to load, go a little farther, wait for it to load, etc. It takes FOREVER now to get between points any distance apart; basically anything more than walking speed and you have to keep pausing and waiting for loading. If I go to the web UI I see a bunch of these messages - "[16:52:21] [Server thread/WARN]: Can't keep up! Is the server overloaded? Running 35616ms or 712 ticks behind". Even with no one on, I see similar messages! (Though usually not quite that high when no one is on, it is still there and starts immediately as soon as I boot up the server.)
     I've tried moving the appdata/config files around. I started with it in my main appdata area, which is on my cache SSD drive. I have the appdata share set to cache only, so it should be only on the SSD and not moving stuff to the array disks. My other dockers are on there as well, of course. Now that we are having problems, I've tried moving it to another SSD, an unassigned drive where my two VMs live. Then I tried moving it to another unassigned drive that had nothing else on it, though that one was a standard spinning disk. Neither helped, and for now I've moved it back to my main appdata. I've increased my starting heap size/memory to 1024M and max memory to 4096M, and now even 6144M; that didn't seem to help either. We were originally playing via WiFi, but I've wired both my and my wife's systems to see if it would be better, and it still seems the same even with just us on.
     Being that it is a server on our local network, I thought I wouldn't run into these kinds of lag problems, so it has been pretty frustrating. Not sure what else I can tweak to help this out, but my wife is getting so frustrated she is thinking of quitting playing, so I hope I can figure something out! The WAF is dropping rapidly! I know this isn't a Minecraft support forum, but since this is unRAID I was wondering if there's anything specific to it that people have seen that may be causing this, or tweaks/tips for running on unRAID. My specs are in my sig. It is a dual Xeon system from 5 years or so ago I think, so it should be more than powerful enough.
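     If it's useful to anyone, a quick way to quantify those "Can't keep up!" warnings is to scan the server log for them. A minimal Python sketch, assuming the log path (adjust it to wherever your container writes logs/latest.log):

         import re

         # Scan the server log for "Can't keep up!" warnings and report the
         # worst tick lag seen. The path below is an assumption.
         worst = 0
         with open('/config/logs/latest.log') as log:
             for line in log:
                 m = re.search(r'Running (\d+)ms or (\d+) ticks behind', line)
                 if m:
                     worst = max(worst, int(m.group(2)))
         print('worst tick lag seen:', worst, 'ticks')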
  4. Thanks, this fixed it for me! The next time I reboot (eventually) will be to install v6.9.2, or whatever is available at the time, so that will be my "permanent" fix. This got it going for now though.
  5. I updated mine to the latest build yesterday (or maybe day before?... recently anyway) and it looks like the issue is fixed. Not sure what caused it, but I can open the webpage in Chrome now again. Hopefully working again for you as well!
  6. I installed the latest update this morning, started the Duplicati container up after, and now I cannot get to the admin web page for it. It just loads forever but nothing ever comes up (just a blank page). Oddly, even Chrome just keeps trying to load forever and isn't timing out. The log doesn't show any errors or anything suspect that I can see.
         -------------------------------------
         [linuxserver.io ASCII banner]
         Brought to you by linuxserver.io
         -------------------------------------
         To support LSIO projects visit:
         https://www.linuxserver.io/donate/
         -------------------------------------
         GID/UID
         -------------------------------------
         User uid: 99
         User gid: 100
         -------------------------------------
         [cont-init.d] 10-adduser: exited 0.
         [cont-init.d] 30-config: executing...
         [cont-init.d] 30-config: exited 0.
         [cont-init.d] 99-custom-scripts: executing...
         [custom-init] no custom files found exiting...
         [cont-init.d] 99-custom-scripts: exited 0.
         [cont-init.d] done.
         [services.d] starting services
         [services.d] done.
     Edit: Possibly an issue with Chrome? I tried opening the site in Edge and it worked. I thought maybe something was wrong with my Chrome, so I exited it completely and reopened it, but I still have the same issue there.
  7. Glad you were able to figure it out. Some others and I have run into similar problems with that setting enabled. Has anyone even acknowledged it yet?
  8. I noticed a possible bug/issue with WireGuard on unRAID. I have a docker container that runs on a custom network and I needed it to talk to a container on bridge, so I went into docker settings and enabled "Host access to custom networks". After doing so (and all the required stop/start/reboot), the containers could talk on the network and I thought all was well. Later that week, since I was on an untrusted WiFi network, I tried to use my WG VPN tunnel access (LAN access plus tunneling through the server to my home internet WAN) on my laptop and phone, which had worked great previously. After connecting, I was able to access LAN resources on the unRAID server, but could not get the WG client systems out to the internet while WG was turned on. I thought back to what had changed, and all I could think of was the setting above. So today, since I had to restart unRAID to add a disk, I disabled that setting to test it out, and after restarting I tried WG tunnel access and lo and behold it is working again! I can get to LAN resources as well as out to the WAN/internet while connected to WG on the clients. So it seems like enabling the "Host access to custom networks" setting breaks WG's ability to let VPN clients tunnel through the server and use the WAN while connected.
  9. As a counterpoint, I used to run the Plexpass/latest version and it screwed up things a few times for me. I can't remember exactly what, but it got annoying enough that I just set mine back to run the public version instead to avoid the early adopter headaches and it has been better in that regard. Honestly there's very little difference and usually only a short amount of time before changes and updates get pushed down to public from plexpass, assuming they don't break things. Plex itself is pretty stable, so unless there is a bug that made it into public that is affecting me, or some super cool new feature I just can't wait for, I don't see a need to run the Plexpass/latest/beta/rc/whatever-you-want-to-classify-it-as version.
  10. Shoko new version is out and looks like they've changed the docker path. https://shokoanime.com/blog/shoko-version-4-0-0-released/
  11. Most likely nobody here codes for the project directly, including @binhex - he just bundles and creates the docker template/container, not the application itself. You'll want to go to their GitHub and create an issue there if you want someone to look at it. You could also look into trying Airsonic-Advanced instead; I believe it's a fork of Airsonic that is being developed more aggressively, including performance enhancements, so perhaps that's something they've fixed. There is a docker template for it for unRAID in Community Apps.
  12. Plex internally makes its own backups of the db as well, I believe, in the same directory as the main db but with the date appended, like "com.plexapp.plugins.library.db-2020-05-02". Have you tried going back to one of those using these instructions? Hopefully the backups aren't corrupt as well.
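      If you want to try it, restoring one of those dated backups is basically a copy-over while the Plex container is stopped. A minimal Python sketch, assuming the usual database location under the container's /config mapping (adjust the path to yours):

          import shutil

          # Assumed location of Plex's databases inside the appdata mapping.
          DB_DIR = ('/config/Library/Application Support/Plex Media Server/'
                    'Plug-in Support/Databases/')

          # With the Plex container stopped, copy a dated backup over the live db.
          shutil.copy2(DB_DIR + 'com.plexapp.plugins.library.db-2020-05-02',
                       DB_DIR + 'com.plexapp.plugins.library.db')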
  13. I ran into a couple annoyances with the update over the weekend (v0.3.13) and thought I'd post here to let others be aware of them:
      1. I had a download that didn't seem to be getting processed. It turns out there was a bug in the scheduler where the frequency you set for post-processing your completed download directory was ignored and it just ran every 24h instead. Manually forcing a post-process worked, though. Github issue thread. Looks like they've already put up a new release, v0.3.14, with a fix, and it has been applied to the docker image. Just run another update of your container in the unRAID web interface to fix it.
      2. Luckily(?), because of the above I ran that manual post-process and found out about this other change. I run a post-processing shell (bash) script, and I noticed it didn't work when I looked at the post-processing log output; I was getting the error "Extra script /config/postproc.sh is not a Python file". I looked back at the release log for v0.3.13 due to the above issue and found out they changed it so that only Python scripts can be run for post-processing now. Here is a new wiki entry they just put up a few days ago explaining this, along with a small Python snippet you can use as an intermediary between Python and whatever your script is. If you run a bash script like I did, you'll want to create a Python file similar to this and change your post-processing script in Settings to point at it instead.
      /config/postproc-shiv.py
          import subprocess
          import sys

          subprocess.call(['/bin/bash', '/config/postproc.sh'] + sys.argv[1:])

          # If you need to run more scripts (optional)
          # subprocess.call(['my_cmd2', 'my_script2'] + sys.argv[1:])
      Hope that helps someone who might run into the same issues!
  14. Not sure what happened, but it seems like since I updated over the weekend, Medusa is no longer letting me log in with my credentials. Is there something that could have reset them or caused an issue with the latest update? Edit: No idea what caused it, but I shut down the container, went into config.ini and blanked out the web_password, logged in with no password, and then re-set the same password in Settings. It seems to be working for now, anyway.
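      In case anyone wants to script that workaround, here is a minimal sketch of blanking the saved password (the path and the exact key format are assumptions based on the usual SickBeard-style config.ini; stop the container first):

          import re

          # Blank the saved web UI password in Medusa's config.ini so you can
          # log in without one and re-set it in Settings afterwards.
          path = '/config/config.ini'
          with open(path) as f:
              text = f.read()
          text = re.sub(r'^web_password = .*$', 'web_password = ""', text, flags=re.M)
          with open(path, 'w') as f:
              f.write(text)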
  15. Good to know, I thought it was only Samsung drives affected. Definitely want to do some research before purchasing new ones then to be sure they'll work correctly. I blew $600 (at the time) on two 850 EVO 1TB drives specifically to use as my unRAID cache drives in a mirror and was quite frustrated that it didn't work (and still doesn't a couple years later!). Hopefully others will be spared the pain and expense.