Leaderboard

Popular Content

Showing content with the highest reputation on 06/23/21 in all areas

  1. Does the user have an Unraid user account or SMB access to an Unraid user share? If so, this would be really simple: 1.) Two scripts on the user's desktop for starting/stopping. They do nothing more than 'echo "Start" > \\server\share\datei' or 'echo "Stop" > \\server\share\datei'. With some 'intelligence' if needed, i.e. with a return value. 2.) A user script, like yours, which checks the file every minute and runs docker start|stop as needed (a minimal sketch follows below).
    2 points
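     A minimal sketch of the watcher script, assuming the SMB share maps to /mnt/user/share on the host and the container is named "minecraft" (the container name is a placeholder; adjust both to your setup):

       #!/bin/bash
       # Runs every minute via the User Scripts plugin.
       FILE="/mnt/user/share/datei"          # the file the desktop scripts write to
       [ -f "$FILE" ] || exit 0              # nothing to do if no command was written
       case "$(tr -d '[:space:]' < "$FILE")" in   # tr also strips Windows line endings
         Start) docker start minecraft ;;
         Stop)  docker stop  minecraft ;;
       esac
       rm -f "$FILE"                         # consume the command so it runs only once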
  2. The two NVMe drives ... as a classic write cache: no. As a pool for Docker containers/VMs: yes.
    2 points
  3. Should be able to do it through diskspeed.
    2 points
  4. The specific reason for your error appears to be that "/mnt/borg-repository" doesn't exist in the environment where you're running the borgmatic command. Everything should be done from within the borgmatic container... in fact, you don't even need to install the Nerd Pack borg tools in Unraid at all. This Docker container is the only thing you need (see the example below). The whole point of this CA package was to let you keep a "stock" Unraid installation, which does not include borg or borgmatic.
    1 point
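     For example, a hedged sketch of running a command inside the container rather than on the host (assuming the container is named "borgmatic", as in the CA template; adjust to your container name):

       # List archives from inside the container, where /mnt/borg-repository is mounted:
       docker exec borgmatic borgmatic list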
  5. That's my secret... Jokes aside, I use the latest Media Driver and new Mesa drivers; maybe that's the special sauce... My container is not special at all, since I'm not doing anything special or really different from the official one and the others (mine is one of the "others").
    1 point
  6. Spot on - deleted and rebuilt docker.img, reinstalled the apps from Previous Apps, and all working again.
    1 point
  7. Ohhh man, I can be so dense sometimes. The whole time I was trying to use DVB-T - but with DVB-C it works. Many thanks for your time and effort.
    1 point
  9. No. Simply leave them all selected. The VM will then show all cores as well.
    1 point
  10. Do you have some sort of "special sauce" in your Jellyfin build? It's the only one that worked for me.
    1 point
  11. This only affects spawns; any that were already on the map will still be on the map. You need to do a wild dino wipe to get rid of any currently on the map. Simply restarting the server will not do it.
    1 point
  12. If you want to use the Aspeed GPU then simply follow the instructions above and it should also just work fine. If you experience any problems please feel free to contact me again.
    1 point
  13. +1 - especially in this combination of sizes and interfaces this makes sense. NVMe offers significantly higher performance than an SSD attached via SATA. It would be different if the NVMe drive(s) were running as parity... but parity has to be at least as large as the largest disk in the array... so that doesn't work here.
    1 point
  14. Limetech naturally keeps one account per customer (Registered to). That is the small, theoretical safeguard against anyone reselling a personalized license against the rules. If you want to bring a new stick into play because of a defective stick (Registered for), the new license file goes to the e-mail address stored in your Limetech account. Until now the two accounts have had only a marginal connection to each other. With the new MyServer plugin that apparently seems to be changing...
    1 point
  15. ...how about providing a Portainer instance and configuring the user there so that they can only access that one Docker container? I haven't done this myself, but it might be possible: https://documentation.portainer.io/v2.0/users/create/ and https://documentation.portainer.io/v2.0/users/promoting/ Edit: it looks even simpler... user or team: ...if you expose Portainer externally via Nginx/Authelia, no direct access to the Unraid host is needed.
    1 point
  16. That is exactly what I don't want, of course. I have now written a script that halfway achieves my goal: https://forums.unraid.net/topic/110483-startstop-container-by-non-root-user/ The Nginx Proxy Manager GUI is always reachable for me at http://192.168.178.8:81/login. The script monitors whether the URL http://192.168.178.8:81/login?dynamite gets called repeatedly; the Minecraft container is then started or stopped, depending on its current state (a rough sketch of the approach follows below). This is not really user-friendly, of course, since the user gets no feedback at all and has to check for himself whether the container is still running or not.
    1 point
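     A rough sketch of that polling idea, not the author's exact script. Assumptions to adjust: the Nginx Proxy Manager access log path on the host, and a container named "minecraft":

       #!/bin/bash
       # Count hits on the trigger URL and toggle the container when new ones appear.
       LOG="/mnt/user/appdata/npm/logs/proxy-host-1_access.log"   # assumed log path
       TRIGGER="/login?dynamite"
       STATE="/tmp/dynamite.count"

       hits=$(grep -cF "$TRIGGER" "$LOG")
       last=$(cat "$STATE" 2>/dev/null || echo 0)
       echo "$hits" > "$STATE"

       if [ "$hits" -gt "$last" ]; then
         # Toggle: stop the container if it is running, start it otherwise
         if docker ps --format '{{.Names}}' | grep -qx minecraft; then
           docker stop minecraft
         else
           docker start minecraft
         fi
       fi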
  17. ...what exactly is this about? I haven't read through the whole thread... ...is this the problem? @Tiras Which console do you mean? On the Unraid host and in the template you use the host's nomenclature eth0, eth1... inside the container a separate mini-system is running... interfaces get renumbered there (unless the device is mapped 1:1 into the container - I don't know this Docker container... though that's not really done for a network interface, more for an (i)GPU, I would think).
    1 point
  18. The easiest solution would be to disable the onboard GPU in the motherboard BIOS and use the output from the Nvidia GPU. The second solution, if you want to use the Aspeed GPU from the motherboard, is to create the file '/etc/modprobe.d/ast.conf' and run this from the terminal, then reboot:

       sed -i '/disable_xconfig=/c\disable_xconfig=true' "/boot/config/plugins/nvidia-driver/settings.cfg"

     A user had that problem once, but I can't remember whether he needed to create the file on the USB boot device; I think so. What you have to do for sure is issue the command above. I recommend trying the second solution, with the file and the command, if you want to use the GPU from your motherboard. Please let me know if it works.
    1 point
  19. Are you using the Unassigned Devices plugin? There is a per-disk setting, "Passed Through", that you need to enable to prevent the system from mounting the disk.
    1 point
  20. Hi guys, big thanks to everyone! No more XFS corruption errors! How do I mark this topic as solved?
    1 point
  21. Some Zen3 owners have shown me that CoreFreq is working fine. Which issue do you have?
    1 point
  22. Pic of the 40mm fan addition, if anyone is interested. I just used the rubber mounts that Noctua includes instead of going out and buying screws. It works perfectly but takes a bit of force.
    1 point
  23. I see no point in that. If you only have SSDs anyway, you put everything in one pool or everything in one array.
    1 point
  24. The driver is already auto-compiled, but it isn't listed on their download site, so you actually can't install it - I grab the driver versions from there, otherwise this would become a complete mess. Or you switch to latest; then it should be listed, if I'm not mistaken... As for the driver MD5: I would first try removing your "script" - possibly that's the problem.
    1 point
  25. Change the slider from Basic to Advanced, then a Delete Tunnel button will appear.
    1 point
  26. That is how I use it. I am honestly not certain of the advantages of deleting it on every stop; this plugin is a fork of an existing swapfile plugin, and that setting was carried over. Unfortunately, that likely means something went wrong when attempting to start swap - check your logs for messages logged by the swapfile plugin.
    1 point
  27. I assume you do not have federated access to the company Nextcloud. Roughly outlined:
     - Create and configure a new share in Unraid.
     - Map this Unraid share to a Nextcloud folder in the Nextcloud container definition (keyword: Add Path), e.g. host: /mnt/user/MeinShare --> container: /mnt/MeinShare.
     - Set up external storage in the Nextcloud settings, pointing to /mnt/MeinShare inside the container.
     That completes the setup on the Nextcloud side. Now the content "only" has to be synced. I have never done that myself, but in my view it should work with rsync/rclone (a sketch follows below).
    1 point
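     A hedged sketch of the one-time sync with rsync; the source path is an example for wherever the company data currently lives locally:

       # Copy everything into the new share; -a preserves attributes, -v verbose, -h human-readable
       rsync -avh /path/to/local/copy/ /mnt/user/MeinShare/

     Nextcloud should then pick the files up through the external-storage mount.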
  28. You can use the User Scripts plugin and then add the commands to run there. For example, I run my Nextcloud cron every 5 minutes with this as the script:

       docker exec -u www-data Nextcloud php -f /var/www/html/cron.php

     So: docker exec, then (optionally) the user, then the name of the container, and finally the command you want to run.
    1 point
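     The same pattern works for other one-off commands; a hedged example, assuming a container named "Nextcloud" with the standard /var/www/html layout:

       # Trigger a manual file scan inside the container
       docker exec -u www-data Nextcloud php /var/www/html/occ files:scan --all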
  29. Just to point out that you can get scenarios where all sticks test fine individually, but you still get failures when all sticks are installed at the same time.
    1 point
  30. Yes, only 0 errors are acceptable, and even that is not a guarantee there aren't issues - but any errors are a guarantee there are. Looks like you need an updated GLIBC; download this one, also to the extra folder, then reboot (see the commands below): http://ftp.riken.jp/Linux/slackware/slackware64-current/slackware64/a/aaa_glibc-solibs-2.33-x86_64-2.txz
    1 point
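     A sketch of those two steps from the Unraid terminal, assuming the "extra" folder is /boot/extra (the standard location for packages installed at boot):

       # Download the package so it is installed on the next boot, then reboot
       wget -P /boot/extra http://ftp.riken.jp/Linux/slackware/slackware64-current/slackware64/a/aaa_glibc-solibs-2.33-x86_64-2.txz
       reboot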
  31. Realized you can list IP addresses that are exceptions from the proxy. Thanks for the guide anyway!
    1 point
  32. Hey, just used this guide to set up Transmission and I really appreciate it - worked great. Haven't gotten around to setting up the remote GUI for Transmission yet, so I'm going to try that. I use Firefox as my default browser, however, and using the proxy as you described makes it so I can't access my Unraid GUI (I am assuming this is intended behavior). Is the remote GUI the best way to go, or do you have any idea how to solve this differently?
    1 point
  33. Can we please, please, please get the ability to create user accounts, disable root logon, and enable MFA?
    1 point
  34. Haha, no worries - as far as it looks, the card is resetting correctly. Did you notice if one core gets stuck at 100% when this happens? The easiest way to see this is probably from the Unraid dash, but you can see it via Task Manager or by running top as well. That'd be a sure sign the vBIOS you're using is causing problems, and you'd want to dump it yourself. I hate to be the link guy, and I'm a little stumped, but I think you're on the right track already with the changes you made to your XML. I came across a comment from someone having the same issue with a 6800XT on Reddit, and the OP said they resolved it. I'm guessing you've tried this in your XML - does setting a vendor ID with a valid format like 0x0438 make a difference?

       <hyperv>
         <relaxed state='on'/>
         <vapic state='on'/>
         <spinlocks state='on' retries='8191'/>
         <vendor_id state='on' value='0x0438'/>
       </hyperv>
       <kvm>
         <hidden state='on'/>
       </kvm>

     Also, this slightly less relevant but potentially very useful post on Level1techs could provide some possible solutions, ranging from confirming that your GPU is configured correctly in your BIOS to some potential XML tweaks. 😅 It'll work out-of-the-box in Windows; when I was trying to get it working in Linux, there were no compatible drivers.
    1 point
  35. Thanks, that was quite insightful. I understood the theory of the parity drive, but I wasn't aware of the technical aspect of reading and writing the data back. Out of curiosity: are the speeds I am hitting with SMB shares without cache considered normal?
    1 point