Leaderboard

Popular Content

Showing content with the highest reputation on 12/07/23 in all areas

  1. Here is the official page dedicated to support for this advanced version of the stable-diffusion template. You can post your requests/comments regarding the template or the container here. The goal of this docker container is to provide an easy way to run different WebUIs for stable-diffusion. You can choose between the following:
     01 - Easy Diffusion: the easiest way to install and use Stable Diffusion on your computer. https://github.com/easydiffusion/easydiffusion
     02 - Automatic1111: a browser interface based on the Gradio library for Stable Diffusion. https://github.com/AUTOMATIC1111/stable-diffusion-webui
     03 - InvokeAI: a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry-leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products. https://github.com/invoke-ai
     04 - SD.Next: this project started as a fork of the Automatic1111 WebUI and has grown significantly since then; although it has diverged considerably, any substantial features from the original work are ported as well. https://github.com/vladmandic/automatic
     05 - ComfyUI: a powerful and modular stable diffusion GUI and backend. https://github.com/comfyanonymous/ComfyUI
     Docker Hub: https://hub.docker.com/r/holaflenain/stable-diffusion
     GitHub: https://github.com/superboki/UNRAID-FR/tree/main/stable-diffusion-advanced
     Documentation: https://hub.docker.com/r/holaflenain/stable-diffusion
     Donation: https://fr.tipeee.com/superboki
     (A rough run sketch follows below.)
    1 point
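     For orientation only, a docker run for this kind of template generally looks like the sketch below. This is a hypothetical sketch: the variable that selects the WebUI, the port, the /config path, and any GPU parameters are placeholders and must be taken from the Docker Hub / GitHub pages linked above.
        # hypothetical sketch -- check the Docker Hub page for the real variable names,
        # port, volume mappings and GPU parameters (omitted here)
        docker run -d --name=stable-diffusion-advanced \
          -e WEBUI_VERSION=02 \
          -p 9000:9000 \
          -v /mnt/user/appdata/stable-diffusion:/config \
          holaflenain/stable-diffusion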
  2. Hope the lounge is the appropriate place for this post; if not, just let me know, and apologies... I've just set up Unraid and am exploring all it can do. I'm interested in learning about and trying a self-hosted game server and see a few options in CA apps. I'm wondering whether the docker approach is best for my setup, given it's a mini PC, vs. a VM?
     My setup: an AMD Ryzen 7 5800H mini PC (8 cores, 16 threads, 3.2 to 4.4GHz) with integrated Radeon Vega 8 graphics (2GB memory I think, 8 graphics cores?); 64GB DDR4 RAM; ZFS array with 1 parity drive; cache pool with one 2TB NVMe cache and one 2TB Samsung SSD.
     I want to try to self-host a gaming server for solo play, with maybe infrequent friends joining sometimes; then maybe they don't for a while, but the game still keeps moving forward smoothly playing solo. Ideally, they can rejoin the game later when they want, still be part of my group, and hop in where I'm at. I like puzzle games mostly (explore, then solve), and also casual tower defense and maybe some building/exploring/surviving, but not just an intense shoot-em-up weapons fest. Not sure what games like this come to anyone's mind. I'm not a big gamer, so I don't have much recent background; I really liked the Myst series back in the day. Not into virtual RPG board games. Great options would have easy install/setup and some control to make things easier at the beginning (enemies, attacks, resources) so solo play isn't a slaughter or grind while I figure it out. Looking for your success stories on Unraid and good matches based on the above. Thanks in advance! I really appreciate the community here.
    1 point
  3. Hi, I think it would be really great to be able to set individual mover schedules specific to each cache pool. I would find it useful to have some pools move files more often than others.
    1 point
  4. Well, then it might as well stay on the Raspberry Pi if the performance is sufficient... Please also keep in mind: TRIM does not work in the array, which can certainly be a disadvantage. And never forget: appetite comes with eating 😉 Once you notice after a while how much more you can do with UNRAID and keep trying out more and more, you'll end up annoyed by the brake you built in yourself.
    1 point
  5. Post new diagnostics
    1 point
  6. I agree; for my specific use case I think the overlap of people running Unraid with a 3D printer that runs Klipper in CAN Bridge mode might be very small, but I'll create the feature request.
    1 point
  7. So why Krusader? I'm a creature of habit. For me it makes no difference whether I install one app or the other, so I stick with what has worked well so far. Does the Dynamix File Manager have any advantages that make switching worthwhile? I tried it once when it was introduced and stuck with Krusader at the time.
     Back to the topic: I was sure I had set the right things in the right places and looked in the right places. In the settings of the Krusader docker I deleted the mapping of the /mnt folder and entered it again (exactly as before), and now the UD-mounted disk shows the 4TB. I don't know why 1 MiB was shown before and 3.6 TiB now; the only change was deleting the reference (I'll call this mount point in the docker settings that) and entering it again exactly the same way. The docker container then restarted. Many thanks for the help so far. The copy is running. On the weekend I'll try to dissolve the array and then put it back together with the swapped disk. I will certainly have desperate questions.
     PS: I think it's great that you get so much help in this forum and that people stay with you until the end. Many thanks.
    1 point
  8. Hi, thanks for the help. I don't know why, but the Docker containers could be restarted after a few hours. I recreated the docker.img today anyway, according to the instructions on the website that you linked. Everything works now. I hope it keeps working.
    1 point
  9. Added option to override defaults for timers
    1 point
  10. Not much consolation, but I'm having pretty much the same experience running Windows 11. My hardware:
     Intel Arc A750 Limited Edition
     Asus Strix X570-E Gaming
     AMD Ryzen 9 3900X
     Kingston NV2 2TB (passthrough)
     Unraid version: 6.12
     I am booting Unraid with CSM in the BIOS. Because the NVMe is passed through, I've booted the system directly to Windows (switching to UEFI) to make sure everything works fine: GPU-Z recognizes the card, sees that ReBAR is set, all looks good. I've also tried virtually mapping the sound card onto the same virtual slot with no luck. I think there may also be an Intel Bluetooth device that comes with the card, so I've begun experimenting with passing that through as well (going back to the @SpaceInvaderOne tutorial about Nvidia's split sound cards causing problems). The current test I'm running is a VM with Ubuntu 23.04: HardInfo sees an Intel i915 card, but testing with something like Blender does not see the Xe-HPG or intel-compute-runtime. Dave
    1 point
  11. Don't be, those messages are not related to the disk; they appear because they contain "sdd" in the log, which is the same string as the disk identifier.
    1 point
  12. Thanks for sharing, updated. Fingers crossed.
    1 point
  13. A new beta is out and I want as many users as possible to test it. It has the container grouping in it and many other changes. I think some bugs are in as well 🤣 The new beta offers to copy the prod config over, so you don't have to reconfigure anything EXCEPT FOR THE DESTINATION PATH!!!
    1 point
  14. Thanks. It worked well, except for my VM, which wouldn't start because of a USB error. That gave me a brief panic, but after 20 minutes I found the cause: at the very bottom of the VM's properties a USB device was selected that doesn't exist. Months ago I had experimented with an RS485 USB dongle, which didn't work out, and I had removed the USB entry back then. After the update it apparently reappeared and kept the VM from starting. The package C9 state is also reached just as before.
    1 point
  15. All good, and no... NC and Jellyfin via reverse proxy make sense if you don't necessarily want to give the "participants" VPN access, which I would not forward under any circumstances... so use a reverse proxy like NPM (or SWAG) and share the HTTP services through it... as mentioned, your Minecraft project will most certainly not run through it; a game server usually needs separate ports (TCP or UDP) and is not HTTP-based... so neither NPM nor SWAG will help you there... that really only works with open ports... but I believe most game servers at least have a password... which you then give to your fellow players. Or you let them connect to you via VPN... which I personally would not do... I'd rather open the ports to the game server and be done with it.
    1 point
  16. So: I removed the M.2 NVMe once and reinstalled it. In the storage management I assigned it as cache again. Rebooted and, lo and behold, the dockers were all back and ready to start. Why that was the case, I don't know; I think a contact problem was the cause. Apparently I also still have a problem with the parity check, which I still have to track down. The folder shares also still show three orange triangles with exclamation marks, which apparently indicate unprotected files. Does anyone know whether that is normal? Otherwise I'll now go through everything bit by bit and try to get my system back into shape. If anyone has another tip for me I would be very grateful. Many thanks for your support.
    1 point
  17. Hi all, I picked up an Arc A770 during Black Friday as an upgrade to my 1060. I was kind of hesitant because I couldn't find definitive answers on whether the card would work for my use case, but I couldn't resist the bang for buck 😊 Maybe I shouldn't be as cheap in the future, because now I run into some deal-breaking issues. Searching these forums, Reddit, etc. gave me a lot of info, but it unfortunately did not clarify everything. So I'll try my luck here.
     First my relevant hardware and use case:
     ASRock Intel Arc A770 Challenger 16GB OC
     Gigabyte Aorus X570 Ultra
     AMD Ryzen 9 5900X
     Samsung 990 Pro 1TB
     Unraid version: 6.12.5
     The SSD is passed through directly to a (newly created) Windows 11 Pro VM. My goal is to use this VM for gaming. It worked flawlessly with my previous setup (Win10 & Nvidia 1060). But now with the A770 I run into the following issues:
     - The VM boots up just fine with the A770 passed through (after setting Hyper-V to No). But when shutting down the VM, the Unraid host locks up and can only be reset the hard way... With a monitor connected to a second GPU, the last messages are something along the lines of: vfio-pci 000:0e:00.0: not ready 1234ms after FLR: waiting. This happens ONLY when the sound card of the GPU is also passed through.
     - ReBAR is reported as disabled in Intel software and performance is sub-par. It is enabled in the system BIOS though (together with Above 4G something..).
     Googling these symptoms gives a lot of hits, but not a definitive answer. I saw some tweaks etc. in this forum, but I'm not sure whether they are still relevant to the current Unraid version. So my main question is: will the above issues be resolved if I use a custom kernel with Linux 6.2+? Or am I better off returning the card and spending a few hundred bucks more if I want a seamless experience?
     EDIT: is this also your use case @SimonF? And did you get it working?
    1 point
  18. This was the problem. Had no crash since I changed it. Thx
    1 point
  19. Not sure if this helps, but when I bind the WX4100 to a VM, the Intel GPU stats spring to life.
    1 point
  20. I looked, and you have one Unraid share that is Public. How are you logging onto Unraid? (It appears that you have an Apple PC.) With Windows, you are basically allowed only one login to each server. If you establish a login as a 'guest' by accessing a Public share, you can never gain access to a Private share (without going to the Windows command line), and the attempt to access a Private share generally presents an error message that is completely cryptic... Are you mapping that share as a drive on your PC? Double-check that you have not changed the permissions on that drive on the PC side of the setup. (A command-line sketch follows below.) There is also a sub-forum for MacOS/SMB issues; for some reason, MacOS often requires some special Samba settings. You can find it here: https://forums.unraid.net/forum/103-macossmb/
    1 point
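     For completeness, on the Windows side the cached guest session can be dropped and the private share re-mapped with explicit credentials from the command line. A minimal sketch; TOWER, Z: and the user name are examples, not taken from the post:
        rem drop all existing SMB sessions so Windows forgets the guest login
        net use * /delete
        rem re-map the private share with explicit Unraid credentials
        net use Z: \\TOWER\private /user:youruser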
  21. Sorry but I don't maintain that template anymore, please go to the support thread over here:
    1 point
  22. Server load is very high, reboot and post new diags after array start.
    1 point
  23. Don't see anything out of the ordinary logged so far.
    1 point
  24. During the update process, the required data is downloaded and unpacked directly onto the connected USB stick. The OS-specific data previously stored there remains on the stick, so in an emergency you can go straight back. Once the unpacking is finished (you can see this in the displayed window), the server has to be rebooted. Stopping VMs and/or containers is not necessary. If you have configured the containers to start automatically (this can be set separately for each container), they will be running again right after the reboot.
    1 point
  25. I think I figured it out, turns out I just had to remap my media libraries. I don't think it was an issue with the plug-in after all.
    1 point
  26. 1 point
  27. I actually resolved this. I deleted the ENTIRE "metadata" directory, re-ran Plex, and let it rebuild the data. After some time I re-ran the app backup plugin and the issue was resolved. I did notice what the issue was, though: it turned out to be permission problems. Some files in that directory had their permissions rewritten to allow ALL, whereas others had permissions for no one. APP BACKUP could back up the ones with permissions for ALL but not the ones with NONE. (A permission-reset sketch follows below.)
    1 point
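     If anyone hits the same thing, normalizing the permissions instead of deleting the metadata might look like the sketch below, assuming Unraid's usual nobody:users ownership; the path is only an example and depends on where your Plex appdata lives:
        # example path -- adjust to your own Plex metadata location
        DIR="/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Metadata"
        chown -R nobody:users "$DIR"    # Unraid's default owner/group for shares
        chmod -R u+rwX,g+rX "$DIR"      # make every file readable for the backup run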
  28. This is not about hiding a bug, but about starting from scratch and adding things back to determine when and where it goes wrong.
    1 point
  29. https://forums.unraid.net/topic/149033-atx-messungen-gigabyte-z790-d-ddr4/ that would be the one. At some point the old disks will come out of the backup pool, and with the next sale on Amazon or wherever a 2.5Gbit / 10Gbit switch will be added as well.
    1 point
  30. You are having issues with SSL and DNS. After disabling SSL like JorgeB recommended you should be able to access the server via: http://10.10.10.10 (note: http not https) Then follow the instructions here: https://docs.unraid.net/unraid-os/manual/security/secure-webgui-ssl/ to enable SSL with a custom certificate. Note that servername + localTLD has to be listed in the custom SSL cert. And the network has to provide a DNS entry that resolves to the server's IP.
    1 point
  31. Hey @ZappyZap, wanted to ask where you prefer feature suggestions: here or GitHub? I was going to suggest: under Results, have a filter for "Doesn't meet threshold"; highlight any line that doesn't meet the threshold; export as .CSV. Thank you for such a great app!
    1 point
  32. Try disabling SSL (use_ssl no); a note on where that setting lives follows below. If that doesn't help, boot with a different flash drive using a stock Unraid install (no key needed) to confirm whether it's a config problem.
    1 point
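     As a pointer (from memory, so treat the file name as an assumption): the webGUI SSL setting is stored on the flash device in ident.cfg and can be changed there when the GUI itself is unreachable:
        # assumption: the GUI SSL setting lives in ident.cfg on the flash drive
        nano /boot/config/ident.cfg
        # set the line to USE_SSL="no", save, then reboot the server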
  33. Hi. Yes, when I said "by default" I meant that you have to know the path location. Clicking in the web GUI will take you to the folder, but you must keep the file name and path. The container path must be /docker-entrypoint-initdb.d/init-mongo.js and the matching 1-for-1 host path must be /mnt/user/appdata/unifi-controler/mongodb/init-mongo.js, except it should point to wherever your init-mongo.js file is located. By default, if I click through the Unraid prompts it will only select folders, so I would get /mnt/user/appdata/unifi-controler/mongodb/ when I actually need to specify the file name to complete the 1-for-1 file passing. (A volume-mapping sketch follows below.)
    1 point
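     In docker run terms, the 1-for-1 file mapping described above is simply a file-level volume mount; a minimal sketch based on the mapping shown in the migration guide further down (adjust the host path to wherever your init-mongo.js lives):
        # mount a single host file onto the exact container path the mongo entrypoint expects
        docker run -d --name='MongoDB' \
          -v '/mnt/user/appdata/unifi-controler/mongodb/init-mongo.js':'/docker-entrypoint-initdb.d/init-mongo.js':'ro' \
          mongo:4.4.25
        # in the Unraid template this is a Path entry whose container path includes the file name, not just the folder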
  34. @JorgeB @emotion_chip @cardo @ich777 It looks like the cdc_ncm driver hijacked the USB driver. Add a blacklist for this driver and reboot, then it will work fine:
        echo "blacklist cdc_ncm" > /boot/config/modprobe.d/cdc_ncm.conf
     Please check whether it works (a quick verification sketch follows below). It seems CONFIG_USB_NET_CDC_NCM was added in 6.12.5-rc; I have no idea why it was added, and I don't know whether it conflicts with... If it is a permanent change, then we might need to add the blacklist in the plugin code.
    1 point
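     A quick way to confirm the blacklist took effect after the reboot (standard commands, nothing Unraid-specific assumed):
        # the module should no longer be listed once the blacklist is active
        lsmod | grep cdc_ncm || echo "cdc_ncm not loaded"
        # confirm the blacklist file is actually on the flash device
        cat /boot/config/modprobe.d/cdc_ncm.conf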
  35. I was in the same boat and thought I'd finally upgrade! Unfortunately my 13th-gen Intel iGPU was not properly detected and modprobe i915 froze every time (same with powertop --auto-tune when it tries to change power management for the iGPU); the whole system hangs on reboots and stuff like that... I downgraded to 6.11.5 again and everything still works flawlessly. I don't know if this is an issue with my specific configuration, but I guess I'll wait for 6.13... Since diagnostics are requested on downgrade from you guys, I guess I'll add them here. tower-diagnostics-20231202-1418.zip
    1 point
  36. Finally got this sorted. Changed all the keys in the template to match the repo and then added DB01_AUTHDB key for the postgresql backup to work. The postgresql dump wants there to be a db with the same name as the user, so the extra key is needed.
    1 point
  37. I recently discovered Cryptpad but struggled to get it running. I had some permission issues reported for the logs, preventing it from starting:
        [Error: EACCES: permission denied, mkdir '/cryptpad/data/logs'] { errno: -13, code: 'EACCES', syscall: 'mkdir', path: '/cryptpad/data/logs' }
        /cryptpad/lib/log.js:93
        throw err;
        ^
     I managed to fix it with
        sudo chown -R 4001:4001 data
     and by customizing the config as documented on https://github.com/cryptpad/docker The docker container IP is apparently 172.17.0.3, so I had to set in the configuration:
        httpUnsafeOrigin: 'http://172.17.0.3:3000',
     Now I will try to get it running through HTTPS as example.com ... (a config sketch follows below)
    1 point
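     For the HTTPS step, the same origin setting just changes to the public domain. A rough sketch, assuming the config file is the one mapped per the cryptpad/docker README (the exact host path depends on your volume mapping, and example.com stands in for the real domain):
        # locate the origin setting in the mounted cryptpad config (path is an example)
        grep -n "httpUnsafeOrigin" /mnt/user/appdata/cryptpad/config/config.js
        # it should end up reading:  httpUnsafeOrigin: 'https://example.com',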
  38. This is a summary of the steps I took to migrate to LSIO's unifi-network-application docker, now available in CA. The only guarantee here is that this worked for me, so USE IT AT YOUR OWN RISK. If you decide to do this and have problems I'll be glad to help solve them, but I am far from a docker expert.
     NOTES
     - This guide is based on the readme on the GitHub project page. I recommend reading that first before proceeding.
     - I use tags for my docker versions for the Unifi dockers rather than running latest. You will see later in the summary that the tag for unifi controller was 7.3.83. This tag isn't available for unifi-network-application, so I went ahead and installed the latest version currently available, which is 7.5.187. Luckily it worked.
     - I also found that the docker template is missing an entry for the webUI address, so I copied/pasted the entry from the unifi controller docker.
     - Before installing the unifi-network-application docker you should stop the unifi controller docker you already have installed. PLEASE DO NOT DELETE IT until you are sure that there aren't any issues with unifi-network-application. There's always a possibility that you may need to switch back.
     PROCEDURE
     1. Create a backup in the unifi controller application and download it to your local computer. This will be needed after the docker install is complete in order to restore the app. Quote from the GitHub project page:
     2. In my case I installed the official MongoDB docker available in CA and used these contents in the init-mongo.js file:
        db.getSiblingDB("unifi").createUser({user: "unifi", pwd: "password", roles: [{role: "dbOwner", db: "unifi"}, {role: "dbOwner", db: "unifi_stat"}]});
        where MONGO_DBNAME=unifi, MONGO_USER=unifi and MONGO_PASS=password. I created a new folder named mongo_init in my appdata share and saved the init-mongo.js file there. You can store this file anywhere you prefer, though.
     3. Open a terminal window and create a custom docker network with the following command:
        docker network create <network name>
        Replace <network name> with the name you would like to use; I used unifi for simplicity. You will need to use this network for MongoDB and unifi-network-application so that they can communicate with each other.
     4. Go to the CA tab and search for MongoDB. You will see the official docker available in the results. Install it with the following configuration:
        docker run -d --name='MongoDB' --net='unifi' -e TZ="America/New_York" -e HOST_OS="Unraid" -e HOST_HOSTNAME="Brunnhilde" -e HOST_CONTAINERNAME="MongoDB" -l net.unraid.docker.managed=dockerman -l net.unraid.docker.icon='https://raw.githubusercontent.com/jason-bean/docker-templates/master/jasonbean-repo/mongo.sh-600x600.png' -p '27017:27017/tcp' -v '/mnt/user/appdata/mongodb/':'/data/db':'rw' -v '/mnt/user/appdata/mongo_init/init-mongo.js':'/docker-entrypoint-initdb.d/init-mongo.js':'ro' 'mongo:4.4.25'
        628975ea8111d6f57c24177ed25533fde878d3acaf0af5f2559d173d5f6c4428
        The command finished successfully!
        I had to change the network to my custom unifi network and add the path for the init-mongo.js file. You will need to modify these to fit your system. You can see that I used the tag 4.4.25 for this docker. According to the UNA project page:
     5. Go to the CA tab and search for LSIO's unifi-network-application docker. Install this docker with the following configuration:
        docker run -d --name='unifi-network-application' --net='unifi' -e TZ="America/New_York" -e HOST_OS="Unraid" -e HOST_HOSTNAME="Brunnhilde" -e HOST_CONTAINERNAME="unifi-network-application" -e 'MONGO_USER'='unifi' -e 'MONGO_PASS'='password' -e 'MONGO_HOST'='unifi-db' -e 'MONGO_PORT'='27017' -e 'MONGO_DBNAME'='unifi' -e 'MEM_LIMIT'='1024' -e 'MEM_STARTUP'='1024' -e 'MONGO_TLS'='' -e 'MONGO_AUTHSOURCE'='' -e 'PUID'='99' -e 'PGID'='100' -e 'UMASK'='022' -l net.unraid.docker.managed=dockerman -l net.unraid.docker.webui='https://[IP]:[PORT:8443]' -l net.unraid.docker.icon='https://raw.githubusercontent.com/linuxserver/docker-templates/master/linuxserver.io/img/unifi-network-application-icon.png' -p '8443:8443/tcp' -p '3478:3478/udp' -p '10001:10001/udp' -p '8080:8080/tcp' -p '1900:1900/udp' -p '8843:8843/tcp' -p '8880:8880/tcp' -p '6789:6789/tcp' -p '5514:5514/udp' -v '/mnt/cache/appdata/unifi-network-application':'/config':'rw' 'lscr.io/linuxserver/unifi-network-application'
        30a1a947ab3c10273d89c661359d6ad9ab415bcecb9371e2d0c6f339e03ed262
        The command finished successfully!
        Again, I had to add the webUI address (https://[IP]:[PORT:8443]) and change the network to use my custom unifi network. You will also need to enter the values you used for MONGO_PASS and MONGO_USER in the init-mongo.js file earlier. I would also suggest using tagged versions of this docker, since running latest has been rather risky in the past with unifi controller.
     6. Navigate to the unifi webUI on port 8443 and run the new setup. You will need to log in to the Ubiquiti website as part of this process.
     7. Once the new setup is complete you can restore the backup file that you downloaded at the beginning of this process. This may take a few minutes, but once it's complete the app should have all the same devices and configurations.
     Hopefully I haven't missed anything; I'm sure there are probably a few typos. If anyone has any suggestions for changes please let me know. (A quick network-verification sketch follows below.)
    1 point
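     One quick sanity check after creating the custom network and installing both containers is to confirm they actually joined it; these are standard docker commands, with unifi being the network name used in the guide above:
        # both MongoDB and unifi-network-application should appear under "Containers"
        docker network inspect unifi
        # or list the attached containers by name
        docker ps --filter network=unifi --format '{{.Names}}'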
  39. As far as I know this works fine. Each user share can specify which pool is used for caching purposes. What is not supported at the moment is moving files directly between pools, or one pool acting as a cache for another pool. I think both of these are on the roadmap, although I have no idea of the ETA.
    1 point
  40. Hi team, I figured out a workaround to get the poller up and running within this docker container. I edited the file /etc/cont-init.d/07-svc-cron.sh and added the lines below under echo "Creating LibreNMS cron artisan schedule:run":
        echo "Creating LibreNMS poller scheduler"
        echo "*/5 * * * * /opt/librenms/cronic /opt/librenms/poller-wrapper.py 16" >>${CRONTAB_PATH}/librenms
     Hope this helps people get this one up and running.
    1 point
  41. With this amount of VRAM it will be hard to produce images larger than 512x512 (and even at that resolution I'm not sure it will work). Some interfaces (Easy Diffusion, for instance) have an option for low-memory GPUs; you could try that. Note that it will also use a lot of RAM (20GB to 25GB).
    1 point
  42. 1. Add support for 2FA on the GUI login page.
      2. Allow setting up additional users (other than root) to access the GUI, and enable setting custom permissions.
    1 point
  43. A built-in backup system that supports local targets, external servers, and S3 providers.
    1 point
  44. Registered my option as “other”. Ability to start virtualisation services (Docker and KVM) irrespective of array status.
    1 point
  45. Native Docker Compose with it properly recognizing when a stack/container has been updated manually or via watchtower.
    1 point
  46. No native Docker-Compose option 🥲
    1 point
  47. Docker host and docker bridge networks can only operate on IPv4; to use IPv6 you need to configure a custom network. The example below uses br0 as the custom docker network with both IPv4 and IPv6 (you can disable the bonding part when a single interface is used).
     First: make sure that under network settings both IPv4 and IPv6 are enabled. The preference is to use automatic IPv6 assignment, using either DHCP or SLAAC. Your router must be properly configured to hand out an IPv6 address and the associated DNS server address(es).
     Second: under Docker settings enable the necessary br0 networks for both IPv4 and IPv6.
     Note:
     - Configure a DHCP pool for IPv4 which hands out IPv4 addresses to containers and does NOT clash with the DHCP range of your router.
     - Leave the DHCP pool for IPv6 disabled; docker uses an assignment which won't clash with your router.
     Then configure a docker container to use the custom network br0; a command-line sketch follows below. The example shows the Firefox application. Since Firefox is a browser, it can be used to access the Internet over both IPv4 and IPv6 so we can test connectivity... use ipv6-test.com.
     Note: certain traffic in my network is inhibited by the firewall rules I have created on my router (out of scope for this tutorial).
    1 point
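     From the command line, attaching a container to the custom br0 network and checking that it received both address families looks roughly like this; the Firefox image name is only an example, use whatever template you prefer:
        # run a container on the custom br0 network (image name is an example)
        docker run -d --name=firefox --net=br0 lscr.io/linuxserver/firefox
        # print the container's IPv4 and IPv6 addresses
        docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{.GlobalIPv6Address}}{{end}}' firefox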
  48. You can change which device is eth0: scroll down on your network settings page to "interface rule" and change the MAC address for interface eth0 to that of the PCIe card. (A sketch of where these rules end up follows below.)
    1 point
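     Behind the scenes the "interface rule" section stores persistent-naming rules keyed on MAC address. The file location and rule syntax below are given from memory, so treat them as assumptions and prefer the GUI:
        # view the MAC-to-ethX assignments Unraid keeps on the flash drive (assumed location)
        cat /boot/config/network-rules.cfg
        # each entry is a udev-style rule roughly like:
        # SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:ff", NAME="eth0"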