Leaderboard

Popular Content

Showing content with the highest reputation on 05/02/21 in all areas

  1. Only 8GB of RAM in the server, and Unifi sometimes floods the entire RAM. I've run into that before, too.
    2 points
  2. @n3cron That just means Nvidia now officially allows, after 7 or 8 years, passing their graphics cards through from a host operating system to a VM without workarounds such as having to specify the BIOS file in the VM template, and Windows finally no longer reports error Code 43. It does not mean you can use the graphics card for Unraid and for the VM at the same time. As far as I know, it is already possible, with workarounds, to pass an Nvidia graphics card that Unraid is using through to a VM, but keep in mind that once the VM has been started you get no more display output from Unraid, even after you shut the VM down.
    2 points
  3. Hi All, I'd like to report that my server has now been up for nearly 6 days and the last parity check finished. Right now I'm only running 5 Docker containers, all of which use the bridge network. As for the other Docker containers I was running prior to the OS upgrade to 6.9.2, I'm in the process of migrating them to a Docker for Windows environment. That server has an Intel(R) Core(TM) i9-9900K CPU @ 3.60GHz, 8 cores / 16 logical processors, with 64.0 GB of RAM - hardware much better suited than my main storage array. I just wanted to thank everyone for all their help with this issue. Thanks again. Gary
    2 points
  4. So I've been reading for ages, but there are so many pages here. My issue is that when the array is stopped, the mergerfs mount is not unmounted, causing Unraid to keep retrying indefinitely to unmount shares; I have to manually kill the PIDs for the rclone mounts to get the array to stop. Is there a fix for this? I use ps -ef | grep /mnt/user to find the PIDs, then kill PID for each one. I tried adding fusermount -uz /mnt/user to the cleanup script set to run at array stop, and that kills all the mounts, but I'm not sure that's the best way to do it (and it didn't actually work on a reboot). I found that if you set the cleanup script to umount -l /mnt/user/mount_mergerfs/gdrive_vfs (my mount), then umount -l /mnt/user/mount_rclone/gdrive_vfs, it finishes without error. Seems a bit convoluted just to get a clean shutdown.
    2 points
  5. I've had this issue as long as I can remember, literally years. I'm on 6.3.5. When I try to either stop my array or reboot/shut down, it ALWAYS gets stuck with "Unmounting disks...Retry unmounting disk share(s)..." and I am forced to use IPMI to force a reboot. I want to fix this once and for all. I'm trying to add a new disk but cannot because of this. I stop ALL of my dockers and VMs before hitting Stop in the webui. How can I troubleshoot and resolve this? I'm sick and tired of this issue rearing its head every few years. I really appreciate any help; I'll provide whatever is needed - I'm extremely frustrated right now.
    1 point
  6. Hi, I also wanted to briefly introduce my Unraid server / rack. Installed hardware:
    - AMD Ryzen 3800x
    - ASRock Rack X570D4U-2L2T
    - 64GB DDR4-3200 ECC
    - Alpenföhn Brocken Eco
    - Fractal Define R5 -> IPC 4U-40248
    Installed drives (not all of them are visible in the photo, since it's an older picture):
    - 1x 16 TB as parity
    - 1x 14 TB
    - 1x 12 TB
    - 2x 4 TB
    Docker:
    - Nextcloud
    - NginxProxyManager
    - database
    - PiHole
    - Plex
    - paperless-ng
    - makeMKV
    - grocy
    - BarcodeBuddy
    - 2x Minecraft servers via MineOS
    - Ark server
    - a few other small Dockers
    VMs:
    - 1x Win for a DayZ server
    - 1x Win for various things
    Rack:
    - Dell R210II for pfSense (Noctua fans installed) - standard firewall settings - OpenVPN server, so I can reach my home network from my mobile devices while on the road - VPN provider, which some Dockers are routed through - pfBlockerNG, though it's not running perfectly yet
    - Fritzbox 7490 for WLAN
    - Smart-UPS 1500
    - Mikrotik CSS326-24G-2S+
    - QNAP TS-253Be-4G -> serves as the backup target; it boots every two days and Unraid pushes the data to it via rsync --> new backup server
    - a small shelf for external drives and other stuff
    Planned:
    - a second Mikrotik for 10Gbit - CRS305-1G-4S+IN or CRS309-1G-8S+IN, I haven't decided yet
    - an external backup solution; I'm still undecided whether that should be a second server at someone else's place or the cloud
    The whole rack draws about 100 watts according to the APC and 110 watts according to the TP-Link smart plug. I can't say what the server alone consumes, since I haven't measured it. The Dell R210II is getting a different CPU (E3-1220L v2), since the currently installed E3-1220 (V1) is far too overdimensioned; that will save a few more watts as well.
EDIT 03.07.2021: This is what it currently looks like: -------------------------------------OLD------------------------------------------------------------------------------ Thanks to everyone for the support and advice! Great support here. Regards
    1 point
  7. Original comment thread where the idea was suggested by reddit user /u/neoKushan: https://old.reddit.com/r/unRAID/comments/mlcbk5/would_anyone_be_interested_in_a_detailed_guide_on/gtl8cbl/ The ultimate goal of this feature would be to create a 1:1 map between Unraid docker templates and docker-compose files. This would allow users to edit a container as either a compose file or a template, and backing up and keeping revision control of the template would be simpler, as it would simply be a docker-compose file. I believe the first step is changing the Unraid template structure to use docker-compose labels for all the template metadata that doesn't already have a 1:1 map to docker-compose - items such as WebUI, Icon URL, Support Thread, Project Page, CPU Pinning, etc. Most of the meat of these templates is a more or less direct transcription of docker-compose put into a GUI format. I don't see why we couldn't take advantage of this by allowing users to edit and back up the compose file directly.
    1 point
  8. Gonna be exploring Chia farming, which I believe Unraid-dabbling folks are extremely well primed for! Several thoughts: 1) Storage (farm) locations: Should the Chia farm plots be stored on the Unraid array, or on unassigned devices? Benefits of unassigned devices: reduced spin-up and wear on the array drives, and these farm plots aren't exactly critical data - if the drives are lost, just build the plots again. Edit - Chia-only shares can be set to include specific disks and exclude non-desired drives. This makes the spin-up point above moot, though I'm still undecided between Chia storage on the array or on unassigned disks. 2) Plotting locations: Chia plotting should be done on fast SSDs with high endurance. What about plotting on Unraid BTRFS pools? E.g. a second, speedier, non-redundant cache pool. 3) I'll probably plot on my desktop PC and store farm plots on unassigned devices. I have 2 high-endurance SM863/SM963A SSDs as my cache pool, so I hope to start farming on the Unraid system as well. Waiting for a proper Docker for Unraid...!
    1 point
  9. That 'asshole' would be me. Please keep in mind that, outside of what I do on Ombi, I have a demanding full-time job, a 10-month-old child, and a wife. All of my free time is pretty much dedicated to working on the product. V4 was released because it's more stable than v3 and I needed to release it at some point or I never would have. If you are not happy with it, then I suggest you stick to v3, and if you want the voting feature ported faster, submit a PR or contribute in some other way.
    1 point
  10. Looks quite good. Why are you limiting it to 2GB? Just assign a fixed IP right away if you're using the container on the custom network. Is something not working? Can you link the Docker Hub repo?
    1 point
  11. 1024 is the highest priority. By default, every container has a priority of 1024. https://docs.docker.com/engine/reference/run/#cpu-share-constraint
    1 point
  12. Thanks for all the help! I just did two things: updated the IPMI (I had already updated the BIOS) and removed the vBIOS passthrough. Seriously, I appreciate the help.
    1 point
  13. You can't change the UUID because both devices are still part of the same pool. Do this:
    - Stop the array; if Docker/VM services are using the cache pool, disable them.
    - Unassign all pool devices (from both pools).
    - Start the array to make Unraid "forget" the current pool config.
    - Stop the array.
    - Reassign both devices to the same pool (there can't be an "All existing data on this device will be OVERWRITTEN when array is Started" warning for any pool device).
    - Start the array; the pool should mount normally, and you can start the Docker/VM services if you stopped them before.
    - Now see here to remove one of the devices from the pool; when that's done you can re-add it to a different pool (it will need to be formatted).
    1 point
  14. Hi @SpaceInvaderOne, and thanks a lot for your reply. I'm very happy that you took time for me. I will try what you explained and give you feedback soon. I wasn't aware that unRAID was capable of using the vDisks of other hypervisors, so I will try that too. Thanks for all of your tips and the links to your videos (I hadn't watched them). Have a nice day and see you soon. Thanks, C.
    1 point
  15. Click the 'Default Script' button and the basic script will load and show the events that occur. Add your code where appropriate.
    1 point
  16. Things have improved. After a good CPU repasting using the last of my NT-H1, I used a 1200W PSU that I know to be good and reconnected everything. First boot seemed normal enough, and once I got to the UI things seemed stable. No disappearing drives or other weirdness. Starting the array left me with disk 5 showing as good, but disk 1 missing/emulated. Given that no other disks are showing up as unmountable, I guess I can rebuild that disk? The other issue is that both drives in my 2nd pool (2x 500GB SSDs) are showing as having no filesystem. I had previously moved appdata and VMs off my main cache drive to this second cache pool. Well, copied, I guess, because all the data is still on the other cache drive. So it seems I haven't lost anything I had put on this pool, if I'm understanding things correctly. The other missing drive (download) was never actually formatted to begin with, so its warning can be ignored. I took diags with the array stopped, then again after it started, in case that could be useful in some way. As of now, it would seem it was the PSU all along. I'm just going to let it sit for a bit and watch it. I stopped all Dockers to be safe. If all seems well and stable after a bit, I'll rebuild the missing disk, unless anyone has a better suggestion. Unfortunately I did have to talk to them two days in a row. And for once it wasn't tech support, it was the billing dept. Better or worse, I don't really know. That entire company can burn to the ground for all I care. Thanks for all the help and suggestions though, I appreciate it. Huge thanks to you all. This community is why I use Unraid and not something else. You're all wonderful. Accept my humble apologies for being "that guy". Cheers
    1 point
  17. I have the exact same question as your first one, but I can reply to the other 2, since I made the switch following a recommendation in one of SpaceInvader's videos (he doesn't say why, however). It doesn't really use less space, at least for me. I'm not sure what's using 17GB of space, honestly; I'm running 4 dockers: qbittorentvpn, plex, youtube-dl and duplicati.
    root@server:~# du -h /mnt/cache/system/docker/docker/ | sort -hr | head
    17G  /mnt/cache/system/docker/docker/docker/btrfs/subvolumes
    17G  /mnt/cache/system/docker/docker/docker/btrfs
    17G  /mnt/cache/system/docker/docker/docker
    17G  /mnt/cache/system/docker/docker/
    986M /mnt/cache/system/docker/docker/docker/btrfs/subvolumes/f241e33ad5c10ab40fdc2f4c33e5a3cc6b8a0b7dffa7cad22188a348a9de96c3-init
    986M /mnt/cache/system/docker/docker/docker/btrfs/subvolumes/f241e33ad5c10ab40fdc2f4c33e5a3cc6b8a0b7dffa7cad22188a348a9de96c3
    986M /mnt/cache/system/docker/docker/docker/btrfs/subvolumes/681dbb488afc61adc745b592f266b76ac9cd52b6689fe7a7bb9acd3eb117ad4f
    964M /mnt/cache/system/docker/docker/docker/btrfs/subvolumes/f241e33ad5c10ab40fdc2f4c33e5a3cc6b8a0b7dffa7cad22188a348a9de96c3/usr
    964M /mnt/cache/system/docker/docker/docker/btrfs/subvolumes/f241e33ad5c10ab40fdc2f4c33e5a3cc6b8a0b7dffa7cad22188a348a9de96c3-init/usr
    964M /mnt/cache/system/docker/docker/docker/btrfs/subvolumes/681dbb488afc61adc745b592f266b76ac9cd52b6689fe7a7bb9acd3eb117ad4f/usr
    As for whether you can transfer all the data... well, technically there is no data in your docker.img file; all you need are your customized user templates, located in /boot/config/plugins/dockerMan/templates-user/. Then you just add your containers again; they should remember their configuration and boot normally.
    1 point
  18. ^ This. The difference in price between 2x4GB and 2x8GB kits is minimal. But when you decide to upgrade to more memory, it is better to be at 2x8GB than 4x4GB for performance reasons, and a 2x8GB kit can be more easily repurposed if needed. 2x4GB will end up in a spare-parts bin.
    1 point
  19. Are both problem disks NVMe? If changing the UUID would solve the problem, please make a backup first, then change the UUID of either one device: btrfstune -u /dev/nvme0n1p1 (or nvme1n1p1), then reboot. And yes, you should delete the partition when the disk is released from the pool - clear it before reuse.
    1 point
  20. The note about mcelog not supporting the processor is misleading. It's telling you (in very bad English) that it's using the edac_mce_amd module instead. You shouldn't. Yes.
    1 point
  21. Reboot, and if the cache mounts, run a scrub. If uncorrectable errors are found, check the syslog for the affected files; those need to be deleted or replaced. If it doesn't mount, there are some recovery options here.
    1 point
  22. Yes, you can use a switch to create the isolated LAN. The DAC cable will just connect from the NIC to the switch. You may need to adjust the IP subnet to allow more machines to connect. https://mikrotik.com/product/crs305_1g_4s_in is a nice little fanless switch, or this one if you need more than 4 ports: https://mikrotik.com/product/crs309_1g_8s_in
    1 point
  23. Is GPU hardware encoding supported on the newest 6.9.x? I'm getting ffmpeg errors like "Cannot load libcuda.so.1"
    1 point
  24. I added the following to my reverse proxy for the admin panel: location /admin { return 404; } I only access the panel locally using the direct IP.
    1 point
  25. Since this is the first thread that comes up on google and isn't very detailed, I just wanted to link the guide I just wrote. It shows you how to create a docker container, add it to your own private docker registry (or you can use dockerhub), and then add it to the private apps section of Community Applications.
    1 point
  26. Curious - when you're on 6.8.x, what is the output of this command? ls /var/lib/docker/unraid/images Is it showing an icon for every container you have installed?
    1 point
  27. Thank you! I have attached the diagnostics file. I must clarify that the server does turn on and I am able to access the web GUI. It's just that when I click Start array, it just shows Array Starting - Starting services... (but does not actually start). Please take a look at the diagnostics file and see if it can tell you something. Thanks! lknserver-diagnostics-20190125-2045.zip
    1 point
  28. I sometimes see the same, after the server has been up for days and all is working normally.
    1 point
  29. Open Device Manager and you will see your unknown devices. Right-click the unknown device and select "Update driver". Select "Browse my computer for driver software", click Browse, select the CD-ROM drive virtio-win-x.x, then click Next. Windows will scan the entire device for the location of the best-suited driver. It should find a Red Hat network adapter driver; follow the prompts and you're in business. ** I never bothered to locate the actual subfolder of the driver on the virtio-win image; I just let Windows do it for me. ** Hope this helps.
    1 point
  30. I was only able to get this working by changing the vDisk bus to SATA (from VirtIO). This option is available under the advanced settings when creating a VM; see the attached image.
    1 point
  31. I need this too! Sometimes I must work directly on the server with a keyboard/monitor rather than over SSH or telnet. For that I need to change the keyboard layout to German - only US is available. This is very frustrating.
    1 point