
ich777

Community Developer
  • Posts

    15,758
  • Joined

  • Days Won

    202

Everything posted by ich777

  1. You are talking about the Server Manager, not Assetto Corsa itself, right? I can't help with that because its developers told me that I'm not allowed to bundle it into my container and that they want no third party involved, so I keep my hands off of it. 😉 Maybe @Vitor Ventura can help you in that case.
  2. No issue over here: Are you sure that transcoding is even working?
  3. I will look into that and report back. First post of this thread, or in the CA App on every app that you have installed from my repository.
  4. TBH I blame the built-in updater and really can't tell why this happens. I will look into that, but it could take some time because real life…
  5. That's not how my container works. My container pulls the initial version from Mozilla itself, extracts it, and then uses the built-in updater to update itself; it doesn't pull anything from the Debian repo in terms of Thunderbird. No plans yet, and not for the foreseeable future for my containers. EDIT: Maybe I can look into that when real life calms down a bit, but I assume that won't be the case this year…
  6. I'm closing this thread; please use the support thread for Tdarr:
  7. Did you stop the container in the first place before editing the config file? Or do you have Validate Installation enabled? If yes, disable it; the validation will always overwrite the file named serverconfig.xml <- validation is only meant for when you have issues and the container won't pull the update. I would also recommend that you copy the default serverconfig.xml, rename it to something different and then specify that name in the template (see the sketch at the end of this list). EDIT: Please also consider joining @Spectral Force's Discord since he helps me out with 7DtD support because I'm not that familiar with that game: https://discord.gg/VwwYA5h
  8. RadeonTOP is working, but if you get no reading from it, it seems that nothing is using your AMD GPU; RadeonTOP is just a diagnostics tool. To which containers (from which repository) were you passing the GPU, and how did you pass it through? Do you want to use it for transcoding? If yes: because this is a Hawaii-based GPU, I don't think it supports many formats (IIRC only h264 and everything below). You would need something more recent for h265.
  9. It's the same for i-Core series CPUs. There are some i-Core series CPUs out there which are also not supported, but I can't put a list together because it's all over the place; you can't find a list of which are really supported or not, and this sentence is actually from Intel's documentation for GVT-g... I can only tell you that all i-Core series chips which don't have a letter at the end are supported, except for 'K' models, which are also supported.
  10. I think this would be better suited to the support thread of the maintainer who releases the Minecraft modpack containers, not here in my support thread. However, you should be able to set up nearly every modded Minecraft server with my MinecraftBasicServer, but I can't help with how since I'm not familiar with Forge…
  11. Was it the MACVLAN bug? I have no issue at all, but I have to say that I'm using IPVLAN. I really don't think so... But not on 6.11.5; only Bullseye was working. LXC also now has a backup function that I've written, which can even be set up as a User Script; see the last few posts on page 12 where I explain it in detail.
  12. Which containers are we talking about? Have you read the note about cgroupv2 on the first page? Why did you even downgrade to 6.11.5? This won't solve the issues, and I have to mark some of my plugins as incompatible with Unraid versions below 6.12.x.
  13. Yes and no; if you are willing to write user scripts that run on Array startup, or on startup in general, look at this: there is the command that needs to be executed to run it in the background.
  14. Hmmm, this is really strange. I would suggest that you try it with the existing container; you can even replace the container if you specify the exact same name.

      A word of warning: if you are using a Chromium-based browser (Edge, Chrome, ...) and you create a backup from the GUI, please leave the tab in the foreground. If you switch to another tab, Chrome will pause it, and if the backup finishes in the meantime and you come back to the tab after some time, it will never display the DONE button <- this will not happen on Firefox, and there is nothing I can do about that.

      Please also use the settings with caution: if you use compression ratio 9, the backups will of course be well compressed and small in terms of size, but it will take a huge amount of RAM to create the backup, about 12GB to be precise. If you configure it to use all cores on your server, the WebGUI can get quite slow and unresponsive because it is then using all of your cores at full blast.

      You can also set Use Snapshot to Yes; this will take a temporary snapshot of the container, start the container right after the snapshot has finished, create the backup from the snapshot and finally delete the snapshot <- this is handy if you have containers that need to be back up and running quickly while you still want to use a high compression ratio.

      If you want to take a backup from the command line, when global configuration is enabled do this:
      lxc-autobackup --name=<CONTAINERNAME>
      when global configuration is disabled:
      lxc-autobackup --name=<CONTAINERNAME> --path=<PATHTOBACKUPLOCATION> --compression=9 --threads=all
      when global configuration is disabled, with a temporary snapshot:
      lxc-autobackup --from-snapshot --name=<CONTAINERNAME> --path=<PATHTOBACKUPLOCATION> --compression=9 --threads=all

      All of the above commands can also be used in User Scripts to schedule backups (or even snapshots with lxc-autosnapshot); see the sketch at the end of this list.

      To restore a container from the command line, when global configuration is enabled do this:
      lxc-autobackup --restore --name=<CONTAINERNAME> --newname=<NEWCONTAINERNAME>
      when global configuration is disabled:
      lxc-autobackup --restore --name=<CONTAINERNAME> --path=<PATHTOBACKUPLOCATION> --newname=<NEWCONTAINERNAME>

      In the case of a restore you can also use the existing container name to overwrite it <- this is specific to this script that I've written.

      Please feel free to try it with an existing container and let me know how it goes. Again, if you have configured the global backup settings, you can do everything from the GUI too. Maybe try it with your Nextcloud container and restore it to a different name (this can also be done from the GUI if the global backup settings are enabled). It should even be possible to specify a remote share mounted through Unassigned Devices for your backups. If you find a bug or anything that doesn't work, please let me know. Hope that helps.
  15. Have you tried the following yet: https://github.com/containers/podman/issues/6961#issuecomment-657929781 Please note that you may have to create /dev/fuse inside the container (see the sketch at the end of this list). EDIT: Are you on the latest LXC plugin version yet? I've implemented a few nifty scripts (lxc-autobackup & lxc-autosnapshot) which can easily be run from the command line. If you configure the Global backup config in the plugin settings itself, you will even see the backups on the LXC page. If you have any further questions, let me know.
  16. Was this always the case, or did you change anything recently? I would recommend that you put it on a spare SSD or a single device, since it can limit your download speeds significantly if it unpacks and downloads at the same time.
  17. Yes, because if you are using a block device, the host is not really interested in which filesystem you are using on it; it just writes its 0s and 1s to it, or, better said, it doesn't have to go through the filesystem of the host.
  18. Where is the _NAS share located: on a Cache pool or on the Array?
  19. This is usually mostly the overhead from the filesystem(s).
  20. Are you sure that you are not limited by your provider? Maybe they currently have bandwidth issues; of course this is just a guess... Do you have another provider that you can try, to see if it's the same? Does your downloads path in the Docker template point to the real path (/mnt/DISKNAME/...) or the FUSE path (/mnt/user/...)? See the example at the end of this list.
  21. Please post your Diagnostics. Are you sure that you haven't also installed it through NerdTools?
  22. Do you also have issues with Plex and transcoding? I can't reproduce this over here with the latest driver.
  23. This is due to GPU Statistics; it polls nvidia-smi to get the readings, and this was always the case. I would recommend that you set a lower polling rate in the GPU Statistics plugin, or simply don't visit the Dashboard; this will stop the polling and also the spikes. EDIT: I just tested it, and transcoding is working just fine with Plex and Jellyfin with the latest Nvidia driver on unRAID 6.12.3.
  24. You have to reboot; the GPU is now in a state where it is not available to the system. You can look at your syslog: it says that it failed to initialize the GPU. Did you reboot already? You have to reboot to get your GPU back into a working state. I will try this later too when I'm home, because I've got the same GPU as you. I would report that in the NerdTools support thread.
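
Example sketch for post 7: copying the default config and pointing the template at the copy. The appdata path and the -configfile game parameter below are assumptions for illustration only, not taken from the post:

    # Stop the container first, then copy the default config under a new name
    # (the appdata path is a hypothetical example, adjust it to your setup).
    cp /mnt/user/appdata/7dtd/serverconfig.xml /mnt/user/appdata/7dtd/myserverconfig.xml
    # Then reference the new file name in the container template's game
    # parameters, e.g. via the dedicated server's config file switch (assumed):
    #   -configfile=myserverconfig.xml

Since validation only overwrites the file named serverconfig.xml, keeping your settings in a renamed copy avoids losing them.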
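A minimal User Scripts sketch for post 14, scheduling backups with the lxc-autobackup command quoted there (global configuration disabled); the container names and backup path are placeholders:

    #!/bin/bash
    # Back up a few LXC containers using a temporary snapshot so they are only
    # briefly stopped; intended to be scheduled via the User Scripts plugin.
    BACKUP_PATH="/mnt/user/backups/lxc"           # placeholder backup location
    for name in Nextcloud Debian-Bullseye; do     # placeholder container names
        lxc-autobackup --from-snapshot --name="$name" --path="$BACKUP_PATH" --compression=9 --threads=all
    done

Lower the compression value if the roughly 12GB of RAM that level 9 needs (as noted in the post) is too much on your server.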
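For post 15, a sketch of one way to make /dev/fuse available inside an LXC container, assuming the standard fuse character device numbers (10/229) and stock LXC configuration keys; this is not the plugin's official procedure:

    # On the host, in the container's LXC config file (path is a placeholder):
    #   lxc.cgroup2.devices.allow = c 10:229 rwm
    #   lxc.mount.entry = /dev/fuse dev/fuse none bind,create=file
    # Or create the device node manually inside the container, as root:
    mknod -m 0666 /dev/fuse c 10 229

This only covers getting the device node in place; for the podman-specific settings, follow the linked GitHub comment.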
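To illustrate the path question from post 20, assuming a share named "downloads" that lives on a pool called "cache" (both names are placeholders):

    # FUSE path, goes through the /mnt/user layer:
    /mnt/user/downloads
    # Real path, addresses the pool/disk directly:
    /mnt/cache/downloads

Pointing the Docker template at the real path only makes sense if the data actually stays on that single pool or disk.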