
ich777

Community Developer
  • Posts

    15,753
  • Joined

  • Days Won

    202

Everything posted by ich777

  1. It seems that you closed the plugin installation window before it actually finished. Please uninstall the plugin once, go to the CA App, download a fresh copy of the plugin, and wait for the Done button to appear.
  2. Necessary is a "strong" word... Let's put it this way: I would recommend always having some kind of display, or at least an HDMI dummy plug with a valid EDID, connected to the iGPU if you want to use it for transcoding. In your case it should be enough to have the KVM connected.
  3. I would recommend that you always plug an HDMI dummy plug with a valid EDID into your motherboard if you want to use the iGPU for transcoding. Don't forget that most of this stuff is consumer-grade hardware, not business hardware, and such hardware was mostly never designed to run headless like in our use case with Unraid.
  4. Very good, glad to hear it. But tell me, yours also don't remember their state when you unplug them, right? Meaning that when you plug them back into the socket, they are only on again if they were on before? Because in general they could do that, but for me it isn't reflected in HomeAssistant, at least the option is missing, and I haven't gotten around to opening a GitHub issue. I use them everywhere myself (washing machine, PC, ...).
  5. Why don't you open a PR in the ZHA GitHub so it gets integrated upstream? I just did that for my Brennenstuhl radiator thermostat; it will be integrated in the next version and then I can throw out the quirk.
  6. First of all I have to say that I just wanted to understand your use case, that this should be a friendly conversation, and that I never said I'm completely against it... I will look into this and try it on my own, but I have to say that bcache wasn't designed for either of your described use cases by default, which is also mentioned in the FAQ over here: Click. That's certainly something you can do, but then you basically cache every file on the Cache and you will reduce the lifespan of your SSD significantly, since by default only files, or better put, I/O smaller than 4MB, is cached. The next thing is that bcache needs to modify the superblock of the backing device, and this can also lead to issues on the Array. But as said, I'm not completely against it; I will look into it, try it on my test machine, and report back. Give me a few days, it will be somewhere next week.
  7. If the kernel supports it, they will be supported; strictly speaking this has nothing to do with Unraid and should be the same. But I have to say that time will tell, and if I can get my hands on one I will try to test it, though the signs are not good that I'll get one in the near future.
  8. Please also post your Diagnostics.
  9. I thought this was an issue with the variable itself, that it was defined wrongly with a space at the end? Is this not solved now?
  10. I get the point of your request, but the Array was designed to be a slower, archival type of storage, and the Cache is actually where the data that needs to be accessed/changed quickly every day is located. PrimoCache is actually a similar solution, but also not quite, since PrimoCache has algorithms in place that also write data (actually blocks) back to the fast storage tier if you access it often, and that delete old blocks based on how often they were accessed. AFAIK only random I/O benefits from bcache, because it was intentionally optimized for SSDs, and sequential I/O isn't even cached by bcache. Please correct me if I'm wrong about that... Of course this can make sense in certain use cases, but IMHO for Unraid it doesn't make a lot of sense.
  11. I don't understand, what companion app? Modding is usually up to the user, and the companion app doesn't have anything to do with the container, strictly speaking...
  12. Jup, I try to put as much as possible in the description. Hope everything is working for you now.
  13. EDAC should usually load automatically if your CPU/motherboard/memory controller supports it. However, you can load it manually with modprobe amd64_edac for AMD CPUs, or with modprobe igen6_edac for Intel 10th/11th/12th gen processors. If you get an error like "modprobe: ERROR: could not insert 'MODULNAME': No such device", then it's most likely the wrong module, since this means the module doesn't find a compatible hardware device. BTW, you can get all available modules with this command: ls -la /lib/modules/*-Unraid/kernel/drivers/edac/ The main issue with EDAC is that it is really noisy at times, can give you a lot of false positives, and can ultimately drive you crazy... EDAC also reports PCIe errors from what I know, and since not all PCIe devices follow the entire PCIe standard, there can be many, many, many issues at certain times and with certain hardware combinations.
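The steps above can be sketched as a small script. The module names and the ls path are from the post; the vendor-to-module mapping via /proc/cpuinfo is my assumption, and the script only prints the modprobe command rather than running it:

```shell
# Hedged sketch: pick and list EDAC modules on an Unraid box.

# Map the CPU vendor string from /proc/cpuinfo to a likely EDAC module
# (assumption: amd64_edac for AMD, igen6_edac for Intel 10th-12th gen only;
# older Intel generations use different modules, e.g. skx_edac).
edac_module_for_vendor() {
  case "$1" in
    AuthenticAMD) echo "amd64_edac" ;;
    GenuineIntel) echo "igen6_edac" ;;
    *)            echo "" ;;
  esac
}

# List every EDAC module shipped with the running Unraid kernel:
ls -la /lib/modules/*-Unraid/kernel/drivers/edac/ 2>/dev/null

vendor=$(awk -F': *' '/vendor_id/{print $2; exit}' /proc/cpuinfo 2>/dev/null)
mod=$(edac_module_for_vendor "$vendor")

# A "No such device" error from modprobe means the module found no
# matching hardware, i.e. it is the wrong module for this platform.
if [ -n "$mod" ]; then
  echo "try: modprobe $mod"
fi
```

If the guessed module errors out, pick another name from the ls listing and try that instead.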
  14. I don't know if bcache makes a lot of sense nowadays, because NAND flash is getting cheaper and cheaper and most people put their vdisks on an SSD or NVMe anyway. Also a thing to consider is that you have to recreate the bcache device every time Unraid starts; of course this can be automated, but what is your exact use case for it? From what I know, bcache by default acts only as a read cache, not a write cache, and you have to explicitly enable it to act as a write cache.
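For reference, the setup described above would look roughly like this. This is a sketch, not a recommendation: the device names /dev/sdX and /dev/nvme0n1p1 are placeholders, the destructive commands are left commented out, and only the sysfs path helper actually runs:

```shell
# Hedged sketch: assembling a bcache device on Unraid.
BACKING=/dev/sdX        # placeholder: the slow backing disk
CACHE=/dev/nvme0n1p1    # placeholder: the fast cache partition

# Helper: sysfs file that controls the cache mode of a bcache device.
cache_mode_path() { echo "/sys/block/$1/bcache/cache_mode"; }

# One-time formatting (WIPES the superblocks of both devices!):
#   make-bcache -B "$BACKING"     # backing device
#   make-bcache -C "$CACHE"       # cache device
#
# Because Unraid boots stateless, the devices must be re-registered on
# every boot (e.g. from the 'go' file):
#   echo "$BACKING" > /sys/fs/bcache/register
#   echo "$CACHE"   > /sys/fs/bcache/register
#
# bcache defaults to writethrough (read caching); write caching must be
# enabled explicitly, e.g.:
#   echo writeback > "$(cache_mode_path bcache0)"
```

Writeback mode trades safety for speed: dirty data lives only on the cache device until it is flushed, which is part of why the posts above hesitate to recommend it on the Array.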
  15. Go to your Docker tab in Unraid, scroll down to the bottom, click on Add Container, select your Plex container from the drop-down below the [User Templates] section, check that everything is filled in correctly, and click Apply. This should bring back your container. Of course, make sure that you install the Nvidia driver first, and restart the Docker service once (or your entire server) after installing it.
  16. @ptbsare & @JorgeB & @limetech with a custom 6.11.0 build with Kernel 5.19.12 everything is working as usual:
  17. I will create a custom build with Kernel 5.19.12 (latest stable) later today and see if it fixes the issue on my Cherry Trail machine too.
  18. Since you're already on 6.11.0, please also enable cgroup v2 so that LXC really works flawlessly (important for distributions that use systemd). I need testers, and I also want cgroup v2 to become the default in Unraid. As far as I and a handful of users have tested it, everything works flawlessly so far (VMs, Docker, passthrough, ...). To enable it, see:
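As far as I know, the switch is a kernel parameter added to the append line in /boot/syslinux/syslinux.cfg; the parameter name unraidcgroup2 is an assumption taken from the Unraid 6.11 cgroup v2 test thread, so verify it there before editing your boot config. A sketch of the edit:

```shell
# Hedged sketch: build the modified syslinux append line for cgroup v2.
# (Assumption: the parameter is called 'unraidcgroup2' on Unraid 6.11.)
add_cgroup2_param() {
  echo "$1" | sed 's|initrd=/bzroot|unraidcgroup2 initrd=/bzroot|'
}

# The stock boot entry's append line would become:
add_cgroup2_param "append initrd=/bzroot"
# -> append unraidcgroup2 initrd=/bzroot
# Apply the same change to /boot/syslinux/syslinux.cfg and reboot.
```

Removing the parameter and rebooting reverts to cgroup v1 if anything misbehaves.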
  19. No, GVT-g is only a virtual GPU and is only meant for accelerating video inside the VM, without any physical output to a display; so to speak, for use with RDP, Parsec, or your favorite remote desktop software. However, you can use a DisplayLink-capable USB dongle in combination with it to mirror the display accelerated through GVT-g inside the VM and show the output through it. Just to make you aware: I'm not sure GVT-g works properly on "T" series processors, because IIRC they gave some users issues. Passthrough of the iGPU without a dedicated GPU installed can be a bit tricky, and I would always recommend using a dGPU for a VM, or at least a USB DisplayLink device; a DisplayLink device should also work for such a use case even if you don't have a GPU assigned to the VM, if I'm not mistaken, but I've never tried that.
  20. You can do that too, but I wouldn't recommend it to less experienced users... If you do something wrong there, something might be gone that shouldn't be. Especially since Unraid has its "own" overview for Docker, problems can quickly arise. Exactly, I've been building with Jenkins and a VM for ages, or rather, since I released the LXC plugin, in an LXC container; it works well, and with a prune cron schedule for the images in the LXC container it runs great. LXC is simply great because you don't waste resources: unlike a VM you don't have fixed allocated resources, they are shared similarly to Docker.
  21. Because you clutter up your Docker page in Unraid with the images. Unraid is not really a general-purpose server, Unraid is really an appliance... Just create an LXC container, install Docker in it, and build there. Or, if you want, do it in a VM. Sure, it's possible, but I don't recommend it.
  22. Execute this command from an Unraid terminal and reboot afterwards: sed -i "/disable_xconfig=/c\disable_xconfig=true" /boot/config/plugins/nvidia-driver/settings.cfg
  23. Not from my side, but keep in mind that it will also draw a bit of power at idle in this configuration... I would definitely stick with it: lots of NVMe slots, PCIe lanes, etc. The feature set of the Z boards is a bit larger compared to the others, and for your plans I would rather go for something expandable.
  24. Why? Or rather, with which devices do you have concerns?
  25. Well, the question is what exactly is installed in there; normally I would worry more about the CPU and whether everything works correctly there. The question is simply whether you need it. I also only have "normal" RAM in my server. Of course it would always be better to have ECC, but you have to answer for yourself whether you really need it...