Everything posted by Joly0

  1. Hey, it's me again with the DNS issue. I was able to set up the container so it can reach the lancache, which runs in a container using br0 network mode. The trick was to set the --dns entry first in the Extra Parameters field (I had tried it last, which somehow didn't work?). Now I am facing another issue: for controller support, if I understood correctly, I need the uinput plugin, and the steam-headless container has to be in host network mode. Therein lies the problem: a container in host mode can't contact containers running in br0 on the same Docker host, so currently I can use either controller support or lancache support, not both. Is it possible to alter this container so it can use the uinput plugin even in br0 network mode? Or any idea how to make it communicate with containers in br0 while it is in host network mode?
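     For anyone hitting the same thing, here is the ordering that worked for me, expanded to a plain docker run (a sketch; 10.0.0.2 stands in for the lancache IP and dns-test for the container name):

     ```
     # Sketch: Unraid appends "Extra Parameters" to the docker run line.
     # Putting --dns first in Extra Parameters is what worked for me.
     # 10.0.0.2 is a placeholder for the lancache container's br0 IP.
     docker run -d \
       --name=dns-test \
       --dns=10.0.0.2 \
       --network=br0 \
       alpine sleep infinity

     # Confirm the resolver landed inside the container:
     docker exec dns-test cat /etc/resolv.conf
     ```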
  2. Hey, I tried to add a custom DNS through the --dns="xyz...." option in the Extra Parameters field, which in theory should work, but somehow the DNS server is not used. I am trying to use my lancache as a DNS server so game downloads come down much faster if I have downloaded them at least once in the past year(s); unfortunately, this doesn't work. Any idea why, or maybe an option to set a custom DNS through an environment variable?
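     What I checked to confirm the flag is not being picked up (a sketch; steam-headless is a placeholder container name):

     ```
     # See which resolver the container actually received:
     docker exec steam-headless cat /etc/resolv.conf

     # Test resolution of a Steam content hostname against it
     # (assumes nslookup is available in the image):
     docker exec steam-headless nslookup steamcontent.com
     ```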
  3. Btw, I have two bugs. Streaming Steam to my Nvidia Shield, I can use my DualSense controller in the Steam menus, but as soon as I get into a game I can't do anything anymore. Same for audio: it works in the Steam menu, but as soon as I start a game I can't hear anything while streaming to my Shield.
  4. Yeah, I know. Just an idea for something that might be useful to add to the base image. Nonetheless, great project; it works well for me so far.
  5. Hey, great project. Maybe it would be a good idea to integrate protonup/Proton-GE: https://github.com/AUNaseef/protonup
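     The basic usage, going by the project's README (paths would need adapting to the container layout):

     ```
     # Install the protonup CLI (it is Python-based)
     pip3 install protonup

     # Point it at Steam's compatibility tools directory
     protonup -d "~/.steam/root/compatibilitytools.d/"

     # Download and install the latest Proton-GE build
     protonup
     ```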
  6. I am available now for the rest of the day
  7. Any specific date? I am available now and for the next ~4 hours
  8. Can't do the second part; there is no "Download logs" option for me.
  9. Any logs or anything I could give you to help solve this issue?
  10. This does not work; I have the same issue, and I pinged you on the Unraid Discord in a thread regarding it. Manually installing the previous version from this URL is the only fix I found working: https://raw.githubusercontent.com/Squidly271/community.applications/1cab4cf730fcd3d5b8d22227953ff7ac347ca0b6/plugins/community.applications.plg
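      In case it saves someone a few clicks: the same .plg can be installed from the Unraid shell, assuming the stock plugin helper:

      ```
      # Install the pinned community.applications version directly
      plugin install https://raw.githubusercontent.com/Squidly271/community.applications/1cab4cf730fcd3d5b8d22227953ff7ac347ca0b6/plugins/community.applications.plg
      ```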
  11. Hm, I couldn't find jq before; I added a custom script to install it, and afterwards I was able to use it.
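      Roughly what my script does (a sketch; I grabbed the static jq-1.6 release binary, adjust the version as needed):

      ```
      #!/bin/bash
      # Fetch the static jq binary and make it executable.
      # Note: /usr/local/bin does not persist across Unraid reboots,
      # so this runs as a User Scripts entry at array start.
      curl -L -o /usr/local/bin/jq \
        https://github.com/stedolan/jq/releases/download/jq-1.6/jq-linux64
      chmod +x /usr/local/bin/jq
      jq --version
      ```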
  12. Hey, has jq (https://stedolan.github.io/jq/) been removed? I can't find it in Nerd Pack.
  13. Alright, I moved all the files to a dataset, but the legacy datasets remained, even after moving everything off the ZFS pool completely. Any idea or command on how to quickly delete/destroy all those legacy datasets?
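      Something like this is what I have in mind (a sketch; tank is a placeholder pool name, and I would review the list carefully before destroying anything):

      ```
      # List datasets whose mountpoint is "legacy":
      zfs list -H -o name,mountpoint -r tank | awk '$2 == "legacy" {print $1}'

      # Once the list looks right, destroy them (destructive!):
      zfs list -H -o name,mountpoint -r tank | awk '$2 == "legacy" {print $1}' \
        | xargs -n1 zfs destroy -r
      ```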
  14. Hey, I am trying to hide the auto-generated Docker datasets, but their mountpoint is "legacy" and I don't know exactly what to write into the exclusion. Any instructions you could give for that? Btw, great update; works nicely so far, and the dark-theme compatibility is nice.
  15. Regarding the datasets: that has nothing to do with Unraid; it's Docker's storage driver. Usually Docker uses overlayfs/overlay2 (however it's called), but as soon as the Docker directory is on a ZFS array, Docker uses the zfs driver. The problem is that, afaik, you currently can't use another driver there. There is work underway to make overlayfs compatible with ZFS, but that is a long-running problem and it might take a while to see it fixed. Other than that, creating tons of datasets is normal ZFS+Docker behavior.
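      You can see this directly (a sketch; tank/docker stands in for wherever the Docker directory lives):

      ```
      # Shows "zfs" when /var/lib/docker sits on a ZFS filesystem:
      docker info --format '{{.Driver}}'

      # The per-layer datasets Docker creates are then visible via:
      zfs list -r tank/docker
      ```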
  16. Hey, great plugin, though I have one issue and one inconvenience. The issue first: when using the dark theme in Unraid, every second entry in the dataset list is unreadable. The inconvenience is that I would like to filter out the auto-created datasets Docker generates when the Docker directory is placed on the ZFS array (visible in the screenshot); it would be great if that were possible. They all have the "legacy" mountpoint rather than an actual path, which I guess could make them easier to filter out. Other than that, great plugin, keep up the work.
  17. Also, by the looks of it, what you originally asked is not possible; at least that is the feedback from the Unraid Discord.
  18. Ok, I can't help you with that either, but it would be interesting to know. Something like that would more likely have to go through Unraid's Docker implementation, i.e. deeper in the system, so it might make sense to ask Limetech whether it is possible. You could also ask on the Unraid Discord; several people from Limetech are there, often online, and provide support.
  19. @ich777 I have not been able to test it yet, because the whole system is my production server, which is needed more than usual, so it's hard for me to just shut it down for a few hours to test. If I could get Unraid running in VirtualBox or VMware or the like, I could test in that environment, but unfortunately I don't have a second system I can use for this, so I have to wait until my server is not in use for a few hours and test then.
  20. Hey @JoergHH, could you maybe explain a bit more explicitly what you want to do? Maybe I can help you then. That said, I am no ZFS pro either, just learning by doing as far as I need it for myself. For snapshots I personally like ZnapZend; I had Sanoid briefly, but I didn't particularly like it, and once you have the commands and parameters figured out, ZnapZend is a powerful and good tool (I use the plugin from steini84).
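      As an example of why I like it, a snapshot plan is a one-liner (a sketch based on the ZnapZend docs; tank/data and the retention plan are placeholders):

      ```
      # Keep hourly snapshots for 7 days and daily ones for 30 days on tank/data:
      znapzendzetup create SRC '7d=>1h,30d=>1d' tank/data

      # Show the configured plans:
      znapzendzetup list
      ```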
  21. Thanks for the information. In the next few days I am going to run some more precise tests to see exactly which containers are affected by this issue. I think nothing from zpool get all causes it: we have mostly the same settings, and the ones we don't share cancel each other out or are irrelevant, such as createtxg or the snapdir option. It also seems to make no difference whether it's Z1 or Z2, but we will see; let's just try to pin down this issue as precisely as possible.
  22. Ok, I can only spot a few differences between the settings of my dataset and yours: snapdir is visible on my end, createtxg is 1 instead of 14165, and xattr is set to on instead of sa. Other than that, it's basically the same. But there are still a few differences on your end compared to mine: I am running RAID-Z2 instead of Z1, and my docker.img is not on a separate dataset but simply on one combined dataset for basically everything. And aside from jdownloader, nextcloud and mariadb, I have no containers in common with you. @Marshalleq Could you tell us a bit about your configuration and the settings of your dataset? Maybe that could help.
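      For comparing, this is roughly how I diffed the property dumps (a sketch; the dataset name is a placeholder):

      ```
      # Dump all properties of the dataset, one per line:
      zfs get -H -o property,value all tank/data > mine.txt

      # Collect the same dump from the other system as theirs.txt, then:
      diff mine.txt theirs.txt
      ```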
  23. Nope, for me it's still a whole-system lockup when the docker.img is on my ZFS array, on 2.0.4/6.9.1.
  24. @steini84 @ich777 I just tested the problem with the latest ZFS version (2.0.2) and it still persists, but as I already pointed out, it only occurs when the docker.img file is on the ZFS volume. If it's on a default Unraid XFS or BTRFS array, everything works perfectly fine. So in my opinion, you could make 2.0.2 a stable release, with a caution to check the storage path of docker.img and the suggestion to put it on an array other than the ZFS one.
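      The check I have in mind would be something like this (a sketch; the path is a placeholder for wherever the Docker vDisk is configured):

      ```
      # Warn if docker.img sits on a ZFS filesystem:
      if [ "$(stat -f -c %T /mnt/user/system/docker/docker.img)" = "zfs" ]; then
          echo "WARNING: docker.img is on ZFS - known to cause lockups here"
      fi
      ```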