oko2708

Everything posted by oko2708

  1. Just updated to 1.50.0 after coming from 1.48.2. The Unraid GUI went down; the system keeps running and is accessible through SSH, but not through the HTTP GUI. Had to revert back to 1.48.2 to get it working again. I am on Unraid 6.12.3.
  2. @binhex Any idea what could cause this? At first I thought the log was normal, but after comparing it with some other logs in this thread it seems my log just stops without reporting any errors.
  3. Hello, I just installed this container on Unraid. The only changes I made to the default container settings are: - Set VPN username - Set VPN password - Set VPN_PROV to 'custom' - Set VPN_CLIENT to 'openvpn' - Set LAN_NETWORK. I then downloaded the OpenVPN config from my VPN provider and placed that file in the openvpn folder. Everything seems to be working, but I cannot access the WebUI. I could not find anything in the logs myself, but maybe someone else knows what's wrong? I've attached the logs below. supervisord.log
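     For reference, my template settings boil down to roughly the docker run below. The VPN_USER/VPN_PASS names and the LAN subnet are how I understood the template, so treat them as placeholders and check your own settings:

        docker run -d \
          --name=binhex-delugevpn \
          --cap-add=NET_ADMIN \
          -p 8112:8112 \
          -v /mnt/user/appdata/binhex-delugevpn:/config \
          -e VPN_ENABLED=yes \
          -e VPN_USER='<vpn username>' \
          -e VPN_PASS='<vpn password>' \
          -e VPN_PROV=custom \
          -e VPN_CLIENT=openvpn \
          -e LAN_NETWORK=192.168.1.0/24 \
          binhex/arch-delugevpn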
  4. Sorry for the late reply. Using a different browser fixes the problem. Thanks!
  5. All of a sudden I am no longer able to connect to the Web Console. It gives me a 401 error: Bad Request. I've been experimenting with some settings, but I am having the same problem on a clean docker image. Here is the log: supervisord.log
  6. Ahhh, I didn't know it was such a new feature. I was trying to avoid the workaround for the current glibc issue, but I guess that's no longer an option for me. Thanks.
  7. It is purposely out of date. Does it not work for 1.16.5-02?
  8. Of course. Here it is. supervisord.log
  9. Is it possible to use a custom jar with this container? I tried adding "CUSTOM_JAR_PATH: /config/minecraft/spigot-1.16.5.jar" as a variable, but it still launches the default Minecraft jar.
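     In case it helps with reproducing it, this is roughly the docker run equivalent of my template with that variable added (the jar path is just where I put the Spigot jar in appdata):

        docker run -d \
          --name=binhex-minecraftserver \
          -p 25565:25565 \
          -v /mnt/user/appdata/binhex-minecraftserver:/config \
          -e CUSTOM_JAR_PATH=/config/minecraft/spigot-1.16.5.jar \
          binhex/arch-minecraftserver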
  10. Worked like a charm. Thank you very much!
  11. @binhex Did you get a chance to take a look at my logs yet?
  12. Thanks for taking the time to reply, you must be busy. I appreciate it. Here are the logs. Now I understand why the browser was crashing; it doesn't really like 1Mb/s of log output -.- supervisord.log
  13. Just pulled the latest image for Binhex-DelugeVPN since it was no longer working (guessing because of the change), but this latest version doesn't work for me either. I am not running any other containers through it and I am not using privoxy. As far as I am aware, in this case I shouldn't need to change anything, right? Currently DelugeVPN is eating up 8% of my CPU (so I pulled it offline), and opening the container logs causes my entire browser to freeze. It looks like the container is just infinitely trying to boot up. The logs show "Options error: Maximum number of 'remote' options (64) exceeded", repeating infinitely. Any suggestions?
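      I assume that error means the .ovpn file has somehow ended up with more than 64 'remote' lines. This is how I'd check it from the host (path assumes the default appdata location); if there are lots of duplicates, dropping in a clean config from the provider should reset it:

        # count how many 'remote' lines the config has
        grep -c '^remote ' /mnt/user/appdata/binhex-delugevpn/openvpn/*.ovpn

        # list them to spot duplicates
        grep '^remote ' /mnt/user/appdata/binhex-delugevpn/openvpn/*.ovpn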
  14. I have tried this, but I had a bad experience with mineOS (it was using a lot of CPU and could not shut down without being forced). I am currently using binhex-minecraftserver, which works fine, but it is tied to the Arch Linux Minecraft server package and doesn't support custom JAR files, so while it does work, I'd rather move to something that offers a little more flexibility.
  15. @binhex Could you possibly add the option to specify a jar file? I don't know much about Docker, but as far as I can tell it's just this line in /config/nobody/minecraft that needs to be updated, from MAIN_EXECUTABLE="minecraft_server.jar" to MAIN_EXECUTABLE="${SERVER_JAR}". Also, I have set my max-backups to 10, but there is no /config/minecraft/backups folder on my system and it has never created a backup. Am I missing something?
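      To be concrete, this is the kind of change I mean, purely as a sketch. SERVER_JAR would be a new container variable that doesn't exist yet, and I'm assuming /config/nobody/minecraft is the start script the container actually runs:

        # replace the hard-coded jar name with a variable the docker template could pass in
        sed -i 's|MAIN_EXECUTABLE="minecraft_server.jar"|MAIN_EXECUTABLE="${SERVER_JAR}"|' /config/nobody/minecraft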
  16. I want to host a small Minecraft server for me and my friends. I want to do that using a Ubuntu VM. I have a cache drive where my appdata lives, and preferably I would like the Minecraft server to be hosted from that cache drive as well. So my thinking was to create a new share and then somehow mount that share to the VM, so I can have the VM read/write to the cache drive but still control server backups from Unraid. I found this part in the VM configuration. Now that seems like exactly what I want to do, but after some experimentation I couldn't get it to work. Can I in fact mount a share this way, or am I misunderstanding what this part of the configuration is for? And if it is possible, how do I get the share to show up on the VM? If the above is not possible, I could mount the share as a network share in the VM, but I have a feeling that this would cause a lot of overhead for the huge amount of tiny reads/writes that would all have to take place over the network (even if it's on the same server). Is this a legitimate concern, or can I just use the share as a network drive? What is the best way to set up a Minecraft (or any other game) server so that the VM can access the game server files on the cache disk, while it remains possible to control backups using the main Unraid OS?
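      For anyone finding this later: my understanding (which may be wrong) is that a share mounted through the VM settings shows up inside the guest as a 9p filesystem, with the mount tag being whatever target name was set in the VM config. Assuming the tag is 'minecraft', inside the Ubuntu VM it would be mounted with something like this:

        # one-off mount
        sudo mkdir -p /mnt/minecraft
        sudo mount -t 9p -o trans=virtio,version=9p2000.L minecraft /mnt/minecraft

        # or persistently via /etc/fstab:
        # minecraft  /mnt/minecraft  9p  trans=virtio,version=9p2000.L,_netdev  0  0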
  17. It's just the H.265 that is causing problems. I'm using an i7-8700. The issue is also on GitHub: https://github.com/jlesage/docker-handbrake/issues/118
  18. Can you post your conversion.log from /config/log/hb/conversion.log?
  19. That's quite interesting. Your preset looks like it's derived from a default one; maybe that has something to do with it? As for QSV, I've seen multiple people reporting that it's not working for them, but since it's working for you, the update possibly doesn't affect all systems in the same way.
  20. Yeah, I understand. At first I thought it was related to this specific container, but it turns out that was not the case. I still think it was a very poor decision on HandBrake's part. They should at least have added some sort of fallback that attempts to find the preset without the prefix if it is not present. And it's not the only thing that broke after the update: Intel QuickSync also stopped working, which is really disappointing.
  21. I have exactly the same issue. The log just outputs "Conversion failed". Everything was working just fine before I updated; I just noticed today that none of my files are being encoded. Your workaround, however, did not work for me. My custom preset is called "H.265 MKV 1080 Audio Passthrough". I tried prefixing the AUTOMATED_CONVERSION_PRESET value with: - General/ - Custom/General/ - Matroska/ - Custom/Matroska/ - Custom/ None of them worked. This seems like quite a serious issue where a major part of the functionality just stops working. In my opinion this update should be reverted until it can be fixed. In the meantime, does anyone have any other suggestions/workarounds? EDIT: I found the problem. For some reason my custom presets were being split from the default templates into two identical categories, even though they are in the same category. I cloned my preset using 'save as' and put it into a new category called 'Personal'. Then in my docker settings I prefixed the AUTOMATED_CONVERSION_PRESET value with 'Personal/' and now my preset is loaded correctly. You can verify whether you have the same issue as me by viewing /config/log/hb/conversion.log. You should see the following error: Invalid preset <preset name> Valid presets are: It then lists the valid presets. You should see the category that your preset is in listed twice. If this is the case, then you need to create a custom category to prevent this, so HandBrake can find your preset. This issue is being tracked on GitHub: https://github.com/jlesage/docker-handbrake/issues/116
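      For completeness, this is what the relevant container setting looks like for me now (the category and preset names are obviously specific to my setup):

        # docker run fragment - the prefix has to match the custom preset category
        -e AUTOMATED_CONVERSION_PRESET='Personal/H.265 MKV 1080 Audio Passthrough'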
  22. Hello, I just installed this container, however when I restart it the world gets reset. Do I need to map a specific data path? It is currently saving the world to the appdata folder, which seems right, but I might be missing something. Thanks! EDIT: I figured out that if you manually run /save-all it will immediately save all changes, so that when you restart these changes won't be lost. I'm guessing that the command runs on an interval, since /save-on returns "already on". But this means that unless you manually /save-all before a restart, you will probably lose a couple of minutes of data. It would be nice if triggering a container shutdown/restart could also trigger the server /save-all command before shutting down.
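      Purely as a sketch of what I mean: if the server console inside the container can be reached from the host (I'm only assuming a screen session here, adjust to however the image actually exposes the console), a small host-side script could flush the world before stopping the container:

        #!/bin/bash
        # Sketch only: assumes the Minecraft console runs in a screen session
        # called "minecraft" inside the container - this is an assumption.
        docker exec binhex-minecraftserver screen -S minecraft -p 0 -X stuff "save-all$(printf '\r')"
        sleep 10    # give the server a moment to finish writing to disk
        docker stop binhex-minecraftserver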