ich777

Community Developer

Everything posted by ich777

  1. You have to use the admin user for Pushbits that you've specified in the config.yaml, not the Matrix user, to create an application for Pushbits.
  2. Please use the URL as written above: http://127.0.0.1:8080 However, if you get a 403 error, it means the password is wrong.
  3. Did you set up a reverse proxy for that? If you are doing it from inside the container, the URL will be http://127.0.0.1:8080, since you are inside the container and can use localhost. The container port needs to match the container port in the Docker template, so I would recommend that you leave it at 8080 so that it is reachable from outside the container on port 8050, as in your configuration. Hope that makes sense.
  4. What URL are you using? Can you please post the full command? Yes, you'll do that from the Docker terminal of the container.
  5. Please read the description from the container and you will see how you can create a token. 😉
  6. It seems that the container fails to download the game server files, do you have any AdBlocking software or a custom DNS server set up?
  7. What path is set in the container template?
  8. What are the permissions in that folder? Have you done anything custom? I can only tell you that over here it is working perfectly fine.
  9. Did you put the config.yaml in the main directory of the container? The container should create it on first start.
  10. For now, yes. This is also only a combination of commands executed in the background. I may add a backup script that you can fire from a User Script, maybe with xz compression, and another script that will allow you to take snapshots from a User Script. As you can read in the first post, it's still in development and I currently don't have much spare time, but these are all things on the "features to do" list. But you can also put LXC on a mirrored pool.
      I don't fully understand… You can create a "new" container from a snapshot. Just for clarification: the snapshot function saves the whole container, including the configuration.
      EDIT: With the next plugin release I will integrate a system-wide command that allows you to take snapshots of containers, where you can specify how many old snapshots you want to keep; this can then easily be integrated into a User Script. Please update the plugin to the latest version 2023.07.29 and you now have a new command: lxc-autosnapshot. This command lets you snapshot a container and also specify how many snapshots you want to keep. For an LXC container named DebianLXC where you want to keep the last 3 snapshots, the usage would be:
      lxc-autosnapshot DebianLXC 3
      This can easily be integrated into a User Script if you want scheduled snapshots (the output will be written to the syslog).
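A scheduled-snapshot User Script built around that command could look like the following sketch. The container name DebianLXC and keeping 3 snapshots are just example values, and it assumes the plugin's lxc-autosnapshot command is on the PATH:

```shell
#!/bin/bash
# Sketch of a User Script for scheduled LXC snapshots.
# Assumptions: lxc-autosnapshot is available (plugin 2023.07.29+),
# the container is named "DebianLXC", and 3 snapshots are kept.
snapshot_cmd() {
  # Build the command line: lxc-autosnapshot <container> <snapshots-to-keep>
  printf 'lxc-autosnapshot %s %s\n' "$1" "$2"
}
# Print the command the scheduled run would execute; on a real Unraid
# box you would run it directly instead of printing it:
snapshot_cmd DebianLXC 3
```

Schedule the script via the User Scripts plugin (e.g. daily) and the command's output lands in the syslog, as noted above.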
  11. I'm using OPNsense and I have no issues whatsoever. I've stopped using PFsense a while back because I've also had some trouble back then.
  12. I just tried it and it is working as expected. Did you make sure that the share where the game files are stays on the same disk as the path that you've specified in the container template?
  13. I'm talking about the case where you are using a VPN container and routing your Qbit and luckyBackup containers through the OpenVPN container's network, so to speak tunneling all traffic through the OpenVPN container; the three containers then share an internal network, as if all three applications were running on one machine. The variable from above is not necessary if the containers are on the same bridge/IP, because then they each have their own internal network, and if you set the host port to something else you are basically doing something like NAT, mapping the container's internal port 8080 to, for example, 5353.
      Do the following:
      Delete the NOVNC_PORT variable that you've created
      Delete the port mapping that you've created (5353)
      Create a new port mapping with container port 8080 and host port 5353
      After that the container will be reachable on port 5353. Each container usually has its own internal network (that's the IP that you see on the left side), but you can combine containers by sharing the network (which you haven't done, from what I see in your screenshot), as I've explained above.
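The NAT-style mapping described above is what Docker's publish flag does. A sketch, where the image name is only a placeholder (your Docker template supplies the real one):

```shell
#!/bin/bash
# Sketch: -p HOST:CONTAINER publishes the container's internal port on a
# different host port (NAT-style), e.g. 8080 inside -> 5353 outside.
port_flag() {
  printf -- '-p %s:%s' "$1" "$2"
}
# "some/novnc-image" is a placeholder, not the actual template image.
echo "docker run -d $(port_flag 5353 8080) some/novnc-image"
```

This is exactly what the template does when you create a port mapping with container port 8080 and host port 5353.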
  14. What type of database are you using? If you are using the default, you shouldn't have to do anything, since it will only store the applications that you create and the tokens for the applications.
  15. Please post your Diagnostics. Do you have any kind of scripts or patches installed on your system for the Nvidia driver? EDIT: I have now tried this on my test machine and can't reproduce that.
  16. What firewall are you using? I have no issue with OPNsense; I just tried it with 2 friends on my server and everything is working as expected.
  17. The errors/warnings in the template are normal because these are mostly caused by WINE. Are you sure that you first stopped the container -> edited the files -> started the container again? Is validation disabled? Is the path that you've set for the game files set so that it stays on this disk?
  18. This is only necessary if you use the network from another container, which is not the case here, or am I wrong? If you don't share another container's network with luckyBackup, simply delete the variable that you've created, along with the custom port that you've created, and create a new port in the template with container port 8080 and host port 8081 or whatever you want. Again, changing the NOVNC_PORT variable is only necessary if you set the container network to None in the template and use another container's network in the Extra Parameters, which is most likely not the case here.
  19. People who are currently testing it have no issues whatsoever, or am I wrong? I'm not quite following here, I think... I could of course build the RC3 too, but I'm really not sure if I want to do that since it's an RC release; another possibility would be to build from the latest stable, but that is something I consider a nightly build. I really don't get your concerns here, don't get me wrong, but you could always use another build, or the one that you are comfortable with, since you are capable of modifying it.
      I'm really not following anymore; this build is not experimental. Why do you think it's experimental? 😕 Please search the source: https://slackbuilds.org/repository/15.0/system/nut/ Did you have any confirmation that the package from the other source is stable and works for everyone? Just saying.
      Completely agreed on that point... Sure, but Unraid is also not fully Slackware, since Tom also builds many packages for Unraid from source; I hope that makes more sense to you now.
      See, this is also one of the issues of Open Source development, and I fear of every project, since you cannot make everybody happy: some people will walk away, rant about certain things, and so on, and in some cases some Open Source projects will go down <- but this is a whole other story. But at least we can work together, fix things, and help each other if an issue appears. This will be my last post about the NUT package, since I really don't know what else to say.
  20. But this package is still built wrongly in terms of the drivers, since these belong in the /usr/libexec/nut/ directory. Also, the argument that this package is tested is not really accurate, since you don't know if it's really well tested; usually such packages are just built. <- no, and there is nothing more...
      Why is this a big step? Updates are good in general. I really don't like the idea of always staying on an old version to avoid issues for users who have no issues, while maybe making it harder for new users with newer UPS units which may only be supported by the newer version. May I ask why it should be more unstable if I build it with the same libraries that are used on Unraid? This is a bit confusing to me; there are only two things that could happen: it works, or it doesn't.
      I would also recommend that you take a look at the old 2.7.4 package and where things are located. That package was basically built by Slackware, and I stick to the Slackware conventions, at least in terms of how things are built; I can customize them for Unraid, but in this case I didn't, because it is a default package. But keep in mind that this older package may use older shared libraries which are not fully compatible (though that may not be the case) compared to a newer package built against the libraries that Unraid is using. Just saying, to keep that in consideration.
  21. Have you also tried the official build?
  22. Create the modprobe.d directory on your USB boot device to make the changes persistent:
      mkdir -p /boot/config/modprobe.d
      Create the file zfs.conf in the directory created above and fill it with the text "options zfs zfs_arc_max=8589934592" <- this will limit the ARC cache to 8GB (you have to specify the size in bytes):
      echo "options zfs zfs_arc_max=8589934592" > /boot/config/modprobe.d/zfs.conf
      After that, reboot, and the ARC size should be changed to the value that you've specified.
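The byte value above is just 8 GiB written out. A quick sketch of how to compute it for another size (assuming GiB, i.e. powers of 1024):

```shell
#!/bin/bash
# Compute zfs_arc_max in bytes from a size in GiB (1 GiB = 1024^3 bytes).
arc_bytes() {
  echo $(( $1 * 1024 * 1024 * 1024 ))
}
# 8 GiB produces the value used in zfs.conf above (8589934592):
echo "options zfs zfs_arc_max=$(arc_bytes 8)"
```

Swap the 8 for whatever ARC limit you want, then write the printed line into /boot/config/modprobe.d/zfs.conf as shown above.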
  23. I would recommend that you first look in the linked repository to see if you have to do anything else to get those readings. Do you see the values when you issue sensors from an Unraid terminal? If you see those values, you have to extract them yourself; I have no plans to add a Dashboard card to the plugin.
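If sensors does show the values, extracting them yourself could look like this sketch; the sample line below is made up, and real sensors output varies by chip:

```shell
#!/bin/bash
# Sketch: filter temperature lines out of `sensors`-style output.
# On a real box you would pipe `sensors` into this instead of the sample.
extract_temps() {
  grep -E '\+[0-9.]+°C'
}
# Made-up sample lines standing in for real `sensors` output:
printf 'temp1:  +42.0°C\nfan1:   1200 RPM\n' | extract_temps
```

From there you can awk/cut the numeric value out for whatever monitoring you have in mind.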
  24. Are you already on Unraid 6.12.3? According to the GitHub repository which my plugin is based on your device should be supported since Kernel 6.1.x (which Unraid 6.12.x is based on).
  25. You have to understand that you have to pass the two GPUs through to the VM separately and then create an SLI, but that is not possible, because the emulated motherboard doesn't support SLI; even if your real motherboard supports it, the VM doesn't see your real motherboard. So to speak, you would need some kind of workaround to create the SLI inside the VM, otherwise the option won't even show up in the driver. Hope that makes more sense to you.