ich777
Community Developer
Posts: 15,753 · Days Won: 202
Everything posted by ich777

  1. No, but why use luckyBackup then? You could just as easily do this from Unraid itself if you want to.
  2. I don't remember where I found it; somewhere on Steam I think. Simply search for Ark multihome.
  3. Please read the container description again; it is required for a schedule because luckyBackup was not designed to run in a Docker container... Exactly.
  4. Debian, but why should that be distribution dependent? This is clearly a game server thing and has nothing to do with the underlying OS or with whether it's running in Docker or not (if you're running a game server in a Docker container it is almost the same as running it on bare metal). Try to append: ?Multihome=YOU.RIP.ADD.RES (replace YOU.RIP.ADD.RES with your VPS's public IP address; see the launch-line sketch after this list).
  5. When the container is running and it says that it has fully started up, you can connect with the server's IP and the query port. If you want the server to show up in the server list, you have to forward the ports in your router with the appropriate protocol, that's it (typical ports are sketched after this list).
  6. Exactly, 777 basically means allow everything for everyone… rwxrwxrwx (see the short example after this list).
  7. The ports are also forwarded properly? I'm really sorry, but I can't help here…
  8. I initially designed the container to upload files to Mega. Anyway, isn't there an option within the Mega app to set the permissions, or am I mistaken? You could also try rclone from the CA App and see if it behaves the same there.
  9. No. Only minor changes: a base image update and some other small improvements that haven't changed how the container works.
  10. I think this Cluster thing is run with only one ARK directory, or am I wrong? If yes, try to fire up only one container with validation on; maybe all the containers tried to update the game at once and made a mess of it, that is my only guess. Maybe @Cyd will answer, since I'm not into making a Cluster or modding Ark.
  11. What permissions do the files have when you upload them? Do you run the container with UID 99, GID 100 and UMASK 000? (See the sketch after this list.)
  12. Which container? luckyBackup? Make sure that you select preserve privileges in the advanced options of the sync.
  13. Even if you disable the iGPU with modprobe.d (so that the driver will not load), it will still output the console, so you can use PiKVM just fine (see the sketch after this list).
  14. Even if you disable the iGPU with modprobe.d, it will still output the console, so you can use PiKVM just fine.
  15. From what I see in your Diagnostics everything seems fine to me; nvidia-smi reports your card correctly. I can also see that your first card is recognized as an amdgpu and your second card is your Nvidia. Have you tried removing nvidia-persistenced from your go file and/or disabling your iGPU to see if that helps (see the sketch after this list)? Do you use your iGPU for something on your system?
  16. No, you can run multiple instances of luckyBackup to sync different folders, but it's not able to run them in parallel. However, when you are using the schedule you should be able to set up multiple syncs to start at the same time (don't forget to tick the ConsoleMode box).
  17. Then I really don't know what the issue is. Please double check every setting… You can also try to download a fresh copy from the CA App, change only the port so that it doesn't interfere with other containers, and see if it works there. As said before, over here everything is working just fine and all settings are preserved like they should be.
  18. Is your appdata share set to use cache Only or Prefer in the Share settings? If not, I could imagine that your mover moves the files to the array, but the application is still looking for them on the cache and can't find the files, so they are reverted; that's my best guess…
  19. No, not a public IP, but I think you are behind some kind of CG-NAT and someone inside your CG-NAT is using up all the GitHub API calls. I would recommend that you create a thread for that in the General Support forums (a quick way to check the remaining quota is sketched after this list).
  20. @mgutt how did you order them?
  21. Unraid 6.11.0 stable was released yesterday and the LibreELEC drivers should work fine now for your card. Have you upgraded yet and checked whether it's working?
  22. TBS-OS is currently not compiling against Kernel 5.19; I would recommend that you stick to Unraid 6.10.3 for now until Linux Media fixes it and compilation works again.
  23. I use a ZigStar LAN Gateway in combination with HA and can also confirm that the Sonoff Zigbee stick supports them as well. Conbee2 is, I believe, also fully Zigbee 3.0 compatible. I still see the offer and it is still available, but that might be down to Banggood; they sometimes have strange geo rules and you only find some items with a VPN (I'm in AT, by the way). How did you order them, @mgutt?
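
Sketch for reply 4: a rough idea of what an ARK: Survival Evolved launch string with the MultiHome option appended could look like. The map name, session name and the way the container passes extra parameters are placeholder assumptions, not taken from the post; only the appended ?Multihome= part is the point.

    # Placeholder launch line; only the appended ?Multihome=... part matters here
    ./ShooterGameServer "TheIsland?listen?SessionName=MyServer?Multihome=YOU.RIP.ADD.RES" -server -log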
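
Sketch for reply 5: the ports that typically need to be forwarded for an ARK server. These are the common defaults and are an assumption; your container template may use different values.

    # Typical ARK: Survival Evolved ports (defaults, may differ in your template)
    7777/udp     # game port
    7778/udp     # raw UDP socket port
    27015/udp    # Steam query port (server browser)
    27020/tcp    # RCON (optional)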
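
Example for reply 6: what mode 777 looks like on a file.

    chmod 777 somefile     # owner, group and others all get read/write/execute
    ls -l somefile
    # -rwxrwxrwx 1 nobody users ... somefile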
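
Sketch for reply 11: starting a container with the user, group and umask mentioned there. The variable names PUID/PGID/UMASK follow a common Unraid template convention and are an assumption; check the actual template for the names it uses.

    # PUID=99 is user "nobody", PGID=100 is group "users" on Unraid;
    # UMASK=000 lets newly created files be read and written by everyone.
    docker run -d -e PUID=99 -e PGID=100 -e UMASK=000 <image-name>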
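
Sketch for replies 13 and 14: blacklisting an iGPU driver via modprobe.d. The driver name i915 (Intel) is an assumption, adjust it for your hardware; on Unraid the file is typically placed on the flash drive.

    # /boot/config/modprobe.d/i915.conf
    blacklist i915
    # Reboot afterwards so the driver stays unloaded.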
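
Sketch for reply 15: the go file on Unraid lives at /boot/config/go. If nvidia-persistenced was added there by an older guide, the exact wording may differ, but the line to remove or comment out would look roughly like this (reboot afterwards).

    # /boot/config/go (Unraid startup script)
    # remove or comment out a line like this one:
    # nvidia-persistenced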
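
Sketch for reply 19: GitHub applies its unauthenticated API limit per source IP address, so everyone sharing a CG-NAT address also shares the quota. The current usage can be checked with the public rate_limit endpoint.

    # Shows the remaining unauthenticated API calls for your public IP
    curl -s https://api.github.com/rate_limit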